
Draping Imagery on 3D Tiles #12567


Merged 82 commits into main from draping-3d-tiles on May 30, 2025

Conversation

Contributor

javagl commented Apr 11, 2025

Description

Addresses #7591

The goal of this PR is to allow draping imagery over arbitrary 3D Tiles.

Currently, tiled image data (like Bing Maps and others) can be draped over the globe in CesiumJS. This is accomplished by defining an ImageryLayerCollection for the viewer, causing the "globe (geometry) tiles" to then be textured with the image data. The same should be possible for arbitrary 3D Tiles: Given a (textured or untextured!) 3D Tiles data set, it should be possible to assign imagery layers to that, and drape that imagery over the 3D Tiles geometry.

Description of the current state

The following attempts to summarize the current state at the time of writing. Several parts are incomplete or preliminary. Updates will be tracked in comments on this PR.

  • There now is a Cesium3DTileset.imageryLayers that can be used to assign an ImageryLayerCollection to a tileset
  • The Model class has a private getter for imageryLayers that returns the imagery layers of the tileset, iff the model is part of a Model3DTileContent that is part of a tileset that has imagery layers
  • The model now contains an instance of the new ModelImagery class, which in turn, contains one ModelPrimitiveImagery for each primitive of the model
  • These classes both offer update functions and ready getters, following the pattern that is used for other parts of the model:
    • The update functions are called in Model.update, checking for changes that may affect the internal state/data structures, and updating them accordingly
    • The ready flags (getters) indicate whether all required resources have been loaded and all computations have been performed
  • The ModelImagery does not do very much: It mainly delegates the work for the update/ready handling to the respective ModelPrimitiveImagery instances
  • The ModelPrimitiveImagery does most of the work:
    • In its update function, it obtains the vertex positions of the primitive from the "POSITION" attribute data
    • These positions are converted into cartographic positions for the respective ellipsoid, and stored in a MappedPositions object. This contains the mapped positions and the cartographic bounding rectangle, and serves as the basis for most further imagery-related computations.
    • It computes ImageryCoverage objects from these cartographic positions. These describe the (x,y,level)-coordinates of the imagery tiles that are covered by the primitive, and that later have to be draped on the geometry data
    • It currently also processes the state machine of the corresponding Imagery objects, to make sure that they eventually reach the ImageryState.READY state, so that the whole ModelPrimitiveImagery can count as ready
    • It creates additional texture coordinate attributes for that primitive - namely, the texture coordinates that will be used for draping the imagery textures on that geometry
    • When all these computations have been performed, the ModelPrimitiveImagery counts as ready, and can be used as the input for the draping process
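As a rough sketch of what the MappedPositions computation captures (simplified here to a spherical globe; the actual implementation works with the Ellipsoid, and all names below are illustrative, not the actual CesiumJS API):

```javascript
// Convert Cartesian vertex positions to cartographic (longitude, latitude)
// and compute their bounding rectangle. Simplifying assumptions: a unit
// sphere instead of the WGS84 ellipsoid, and no antimeridian handling.
function cartesianToCartographicSphere(p) {
  const longitude = Math.atan2(p.y, p.x);
  const latitude = Math.asin(p.z / Math.hypot(p.x, p.y, p.z));
  return { longitude, latitude };
}

function computeCartographicBounds(positions) {
  const cartographics = positions.map(cartesianToCartographicSphere);
  let west = Infinity;
  let south = Infinity;
  let east = -Infinity;
  let north = -Infinity;
  for (const c of cartographics) {
    west = Math.min(west, c.longitude);
    east = Math.max(east, c.longitude);
    south = Math.min(south, c.latitude);
    north = Math.max(north, c.latitude);
  }
  return { cartographics, rectangle: { west, south, east, north } };
}
```

The resulting cartographic rectangle is the input that the ImageryCoverage computation can then intersect with the imagery tiling scheme.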

The draping itself is implemented as a pipeline stage. Summarizing a few things (that should probably be documented at a higher level somewhere):

  • When a Model is created, it creates a ModelSceneGraph internally
  • For each 'primitive' of the model, a ModelRuntimePrimitive is created
  • The ModelRuntimePrimitive contains a list of "pipeline stages"
  • When the draw commands are built (skipping a lot here...), these pipeline stages are executed, and fill PrimitiveRenderResources with the data from the primitive, which essentially means "copying the data to the GPU, so that it can be used for rendering"

So now there is an ImageryPipelineStage that is inserted when the model is part of a tileset that contains imagery data.

  • It prepares the shader that will be used for the draping computations: The shader receives the additional texture coordinate attributes, and a bunch of uniforms that define things like the actual textures as sampler2D, and things like the alpha value that should be used. All this is then combined in a sampleAndBlend function in the fragment shader that takes the previous pixel color, and mixes it with the pixel color that was obtained from the imagery texture
  • The actual input for these uniforms is (not yet a proper class, but just) a structure that is referred to as imageryInput for now: It contains the Texture itself, a rectangle that indicates which part of the imagery texture has to be mapped to that primitive, as well as translation/scale values that are applied to the imagery data so that this rectangle is filled with the right part of the imagery texture
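The translation/scale part of that imageryInput can be illustrated with a small sketch. It assumes that the primitive's draping texture coordinates are normalized to its cartographic bounding rectangle; the names are made up for illustration, not the actual CesiumJS ones:

```javascript
// Compute the transform that maps texture coordinates that are normalized
// to the primitive's cartographic rectangle into the rectangle of one
// imagery tile, so that the right part of the imagery texture is sampled.
function computeImageryTexCoordTransform(primitiveRect, imageryRect) {
  const imageryWidth = imageryRect.east - imageryRect.west;
  const imageryHeight = imageryRect.north - imageryRect.south;
  return {
    scaleX: (primitiveRect.east - primitiveRect.west) / imageryWidth,
    scaleY: (primitiveRect.north - primitiveRect.south) / imageryHeight,
    translateX: (primitiveRect.west - imageryRect.west) / imageryWidth,
    translateY: (primitiveRect.south - imageryRect.south) / imageryHeight,
  };
}

// Applying the transform to one primitive-local texture coordinate:
function transformTexCoord(t, u, v) {
  return { u: u * t.scaleX + t.translateX, v: v * t.scaleY + t.translateY };
}
```

For a primitive covering the quarter [0.25, 0.5] x [0.25, 0.5] of an imagery tile that spans [0, 1] x [0, 1], this maps the primitive's (0, 0)..(1, 1) texture coordinates into (0.25, 0.25)..(0.5, 0.5) of the imagery texture.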

Issue number and link


#7591

Testing plan


The specs currently only cover the basic aspects of the most important classes and functions. Ideally, testing software should not mean "checking whether it works", but rather "trying to break it". There are certainly many corner cases where one or the other aspect of the current implementation may not work. But due to time constraints, the tests may not be as thorough as one might wish.

Some sandcastles for further tests have been posted in #12567 (comment)

The following is a test that was created initially, when this PR was opened:

testLevels2.zip

It is a simple tileset with a unit square, with two levels of detail. The first is a plane with 10x10 vertices. The second consists of 4 planes, with 10x10 vertices each.
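For reference, geometry of this kind (a unit square grid with n x n vertices) can be generated with a few lines. This is just an illustration of the shape of the test data, not code from the PR:

```javascript
// Create a unit square in the x/y plane with n x n vertices, triangulated
// into 2 * (n - 1)^2 triangles (given as indexed TRIANGLES).
function createUnitSquareGrid(n) {
  const positions = [];
  for (let iy = 0; iy < n; iy++) {
    for (let ix = 0; ix < n; ix++) {
      positions.push([ix / (n - 1), iy / (n - 1), 0]);
    }
  }
  const indices = [];
  for (let iy = 0; iy < n - 1; iy++) {
    for (let ix = 0; ix < n - 1; ix++) {
      const i0 = iy * n + ix; // lower-left vertex of this grid cell
      const i1 = i0 + 1;
      const i2 = i0 + n;
      const i3 = i2 + 1;
      indices.push(i0, i1, i2, i1, i3, i2);
    }
  }
  return { positions, indices };
}
```

With n = 10, this yields 100 vertices and 162 triangles per plane.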

And this is a Sandcastle for testing this on this branch, on localhost (updated for the "Ready for Review" state):

http://localhost:8080/Apps/Sandcastle/#c=vVhbU9s6EP4rGs5DzUxwwq2XNDCHSznNTCkdAm3p5EW2lUQHW/JIMpCe4b+f1c2XYKDQhocMlrTaXX37aXdFzJlU6IqSayLQDmLkGh0QSYss/GrmgvFKbMYHnClMGRHjldX3YzZm3S46EAQrgtQMfjQlkihEmRlahWMWG/V+ccfNhzImjIS5oBlV9IrIECdJMGYI4WtMlffA/tk8PLPbw4ng2blIwaWZUnm/2015jNMZl6r/ttfb7Doz4b+Ss/FKB/2nNSKUkKiYjmb8ep8XLKFs+pWnRUb6SImCdKxMhm9oVmSjWBDCRjmOyQchuOij9ddG4nZ1zMpjn+SKcoZTOKzMSay4QBP4GTtT0F8BosXdkcmNIiwJ3NHsZOOAQ6/smN5QY6zbtfCVZnRIjnlC0grJuFWF0W8kvc9HdQf7SMNh3KQZnhIxR5HGBgtKJOI2hNOUR8RHsMRwaOW1uf1qy47BEmzRCQoell21YXHuO/Of8JwIwwK354vgVzTR9DPhqdFSazvgXEAggXtyUd7FHGn84aSjeEZ0qGsKvpHomIgYA0xnNZlg1VFBhxohjf/tXZoHhYGYTihJVkvwUuM+CGClN8gZL9IERQThPE9BEilevyRjhuWcxWhSsFgzCcXGgKP5sA5JYODSDoEfh2SCi1ShYF+T7Bjn0rjqaf+LWDaumLX8jYvUS+9p1yoYpZqngJ8TH3JWlx3pxXDvw+lw79Md9JzTTydeiRJCC+Q7FDgnyX0UnOBUag4i1GDhA5tWfYp4IoTPIuRvUNLDaoGFUcXNPSnplDUQ9Wxc5JyJ+0NU87lihFkSY6lSgs6HCOSK3OeBq1oGMifDbJqSQwKR7RlnJWRk8h3yZm14UQ5xms8wjMwgEnQ6U4xI6WfAhhJg2I9nBSn1YlUIrG+LX5ziLHO6br3n53nib2qmvTzGStAbxCd1KDoowhIuJbd79Io/Rde6b/9c+H360FbfmJV3tjCmHJDHlbHAkcri5RUDXJ+LLAI+lAiGfs3GtCZ/ihOQd9wArbNQcZijmMmgbZN1us2EXbkje3Gv7IW/uVZaA3UmwC7Ut6zyqZySIYFwfeZCzc7zM35Eb0hyJDBQ19LVF3Is4GaA/5umhoP7UGZlsPZmO1zf3tjqve2gzXfhu63X795sd9B6I4XU3HbRrEMD4y2jc6QF7laLynJgseg4BMBM2PP3q7bB6QxafBBcGQJakc1FP+zZTp3QD59tXUDvs7T5uKX2A3tDzs6Cc086WP2q3LEVp5yRYGFyePjh89nw7MIya2ExgxpF83Qe1PR2mlTq1G0+QUnzlM/VUiPTHRX1gtrEpTaq9Qa1jANblC7MXEKeS9uaA4QFQYmpR2Zzs+BVeemxrOPLrzUofcrRZa+9kqWETdUM7ezsoF5Z9AQkdsF8QfFkMO7qhq5V0ZSooOdxMqKhyemt+U0v2LBYySrht4lXq/U9viS07fBrdXkoGW2iMF2XqqpJayYsV+t7TL1pEzcLjX4Rwg9xvJ6RxUiieAYJQb8LFiJqNB3BtdYtvo/n/SXmfbV8hws1R0ZFJGNBoRF1zUCdU+6qXDIeX/JChQBlfFkdS+txrzfO0whrTiQ8hocTU5oGH1KiP/fnQ3i5rTgZ+zZc1Ky74Dn0rIlxsLTQ8Zr1Hv2ACqw9BvVDvyMrVyp61yKK5ck1gzYrJ0LNA71ptaT2gge+i9J+n0SSiCscQbWoeWK2l2LS4xa0RKfZgj0YIxOEH5xnC81YB11TuI4YyUxnCj6Z6Nex5OULAkGnRiU8ENJ0DjhICu76cDjpxlv9I8Ea3C9UxbNTzbCgmQzrHcTaxka4bSvfowIbvV7Ysy9f92D7Cac540F5EOuNFljprAzMa2FX7/ybZjm0BaiA93oYdhXJ8lS3x92oiC8hrcT+mv/l6eVCFwELp0K36H0kphEOtjY6yP964Vu7C6EcunPzotjKb9x
UBE04EWsCzlHIauG2YYayvFDe2BVwh0I1WMMpdNF9lNEkSUnTwprieR9tlFb8dMSV4lm1AmYGXQ/AIKFXiCY7Lf89QXGKpYQVHd0R/UnGK7uDLsg3tqXcRPQEPIT0o0Vm67uf7GQYhoMuDO/uKq+hicFAaZrvWrcHKuLJfNdzfKDEbvUyGahkd891loMuDJpL1QjGFkA1zwkYFJpscKaMMhj19Be+ga/N1/pbKpLDYB0+4ZbgtQhyAIyvcKobe9/KdpCZsJW0j14ZA6/cGdrNKnKjtH5AD0bbD+tvaGqcDgbiAURMT/n9N/BYL/FY7z2Oh29Ql4OG1f57WFy8LBYXS8Xi4tlY7OnG5o9ck/UKCEi0G/fwWJtb2iXRyp+NxH7Zs/2ZrPErcFR94rIwqSw8G5gD15q+HCy+GV4WKF7/syH5WJCXQwP6/WUBAaqfn0XLl8XLQVG9ZpaWTksLzwbmH/2GejlMzJNtWXAY5b+EBHyWTRl8u27Nt3X/Aw

It offers some basic functions for manipulating the geometry and the imagery settings, targeted at basic tests.

Notes:

The commit history may look odd: At some point, I started doing the development on a local branch, with fine(r)-grained commits, but for the first time in my life, used the --squash option. There had just been too many unknowns and experimental, intermediate steps, and it wouldn't make sense to track them all in detail here...

Author checklist

  • I have submitted a Contributor License Agreement
  • I have added my name to CONTRIBUTORS.md
  • I have updated CHANGES.md with a short summary of my change
  • I have added or updated unit tests to ensure consistent code coverage
  • I have updated the inline documentation, and included code examples where relevant
  • I have performed a self-review of my code

Thank you for the pull request, @javagl!

✅ We can confirm we have a CLA on file for you.

Contributor Author

javagl commented Apr 13, 2025

The last commits

  • The computation of the textures that are required and their corresponding texture coordinate rectangle has been fixed. (In fact, the "missing textures" had been there, but ... their texture coordinate rectangle was "empty"...). It may still need review and further testing, though.
  • The transform of the primitives within the model had not been taken into account. So the ModelRuntimeNode is now passed to the ModelPrimitiveImagery as well, so that it can integrate this transform into the computation of the primitive positions that are supposed to be mapped to the ellipsoid
  • The function that obtains the primitive positions did not handle quantized attributes. Basic support for quantized positions is implemented now (and I tested this with a draco-compressed/quantized tileset), but it might have to be reviewed in terms of possible ~"other forms of quantization"
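For illustration, the basic dequantization is just an offset/step-size mapping. The names here are illustrative (in CesiumJS, the metadata lives in quantizedVolume...-style properties, and AttributeCompression provides the actual implementation):

```javascript
// Map a quantized POSITION back into the quantization volume: each stored
// integer component is scaled by a per-component step size and shifted by
// the volume offset.
function dequantizePosition(quantized, offset, stepSize) {
  return [
    offset[0] + quantized[0] * stepSize[0],
    offset[1] + quantized[1] * stepSize[1],
    offset[2] + quantized[2] * stepSize[2],
  ];
}

// For an attribute quantized to `bits` bits over the range [min, max],
// the step size per component is (max - min) / (2^bits - 1):
function computeStepSize(min, max, bits) {
  const divisor = (1 << bits) - 1;
  return min.map((m, i) => (max[i] - m) / divisor);
}
```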

There seems to be an issue related to the computation of the offset/scale for the imagery that is mapped to the primitives. It does work for the test tileset, but not for another ("real-world") one. This might somehow be connected to that quantization, but is hopefully only a detail 🤞 - to be investigated.

Contributor

ggetz commented Apr 18, 2025

Thanks for opening this @javagl! I'm looking through the code, but I also have a few higher-level thoughts as I do so:

The decision about how to pick the right level (of detail) for the imagery is currently still open. There's that XXX_DRAPING_LEVEL slider in the test, which is a hack for testing different levels.

If I understand correctly, this is driven by a maximumScreenSpaceError tolerance in the existing terrain and imagery engine. Can we do something similar here? Something along the lines of refining to the level such that:

minTileDimensionInMeters / minTileImageDimensionInPixels < tileset.maximumScreenSpaceError

?

The function that obtains the vertex positions from the primitive currently copies the attribute data to the CPU. And this does not properly handle quantized data. This might be easy to generalize (if things like Draco or Meshopt do not have to be handled manually). But it might involve shuffling some larger code blocks from existing classes around.

Certainly open to a small refactor to make this easier. If needed, maybe consider a separate PR to keep the scope of this one as constrained as possible.

How to handle the case that the draping involves more than 16 texture units. This is largely open. It may involve that ominous "upsampling", or maybe could be handled by combining several imagery parts to reduce the necessary number of texture units...?

Am I correct in my understanding that this can be addressed either by:

  1. Implementing upsampling: Loading higher LODs of geometry such that we don't have multiple tiles of one imagery layer covering a single geometry tile.

or

  2. Merging multiple textures for a geometry tile into one single texture

?

Contributor Author

javagl commented Apr 18, 2025

If I understand correctly, this is driven by a maximumScreenSpaceError tolerance in the existing terrain and imagery engine. Can we do something similar here? Something along the lines of refining to the level such that: ...

There are two different possible approaches. In CesiumJS, that level is computed here (and I did not yet go through the math and/or justifications there). In cesium-native, it is computed with computeDesiredScreenPixels (implementation), which carries a few implementation notes (and is invoked in some non-trivial contexts, including the translation of these "screen pixels" to the "level").

How all this could be translated to the draping is not yet clear. But when you mention the tileset maximumScreenSpaceError: Note that in cesium-native, the decision is based on the RasterOverlayOptions.maximumScreenSpaceError, which apparently can be configured independently. Whether or not this can simply be "replaced" with the tileset maximumScreenSpaceError in CesiumJS (and maybe throwing in some / 8.0 at the right place) has to be examined...

The function that obtains the vertex positions [...] does not properly handle quantized data.

Certainly open to a small refactor to make this easier.

This turned out to not be such a huge code block, and has already been implemented by now. The AttributeCompression.dequantize is 'static' and could just be called. What's left is the offset/scale handling. It would be nice to have that encapsulated similarly, because my initial implementation was wrong, and was fixed in a subsequent commit: There are quite a few quantizedVolume...-variables floating around, and reading what they are takes orders of magnitude more time than just calling some nice, opaque, convenient nowReallyDequantizeThatStuff-function...

Am I correct in my understanding that this can be addressed either by:

  • Implementing upsampling: Loading higher LODs of geometry such that we don't have multiple tiles of one imagery layer covering a single geometry tile.
  • Merging multiple textures for a geometry tile into one single texture

These options had been my initial thought. But based on the recent discussion and comments by Kevin, the second one would not be viable. So apparently, there has to be some sort of upsampling. Unfortunately, this is not only about "loading higher LODs", but about creating higher LODs (i.e. slicing-and-dicing the actual geometry). This raises many questions, but I'll pragmatically start with trying to pick some code fragments from cesium-native (which had been ported from CesiumJS) and port them back to CesiumJS ... 😬, and see whether I can use that, for example, to split the unit square into 4 parts.

Contributor

ggetz commented Apr 22, 2025

How all this could be translated to the draping is not yet clear. But when you mention the tileset maximumScreenSpaceError: Note that in cesium-native, ... [this] can be configured independently. Whether or not this can simply be "replaced" with the tileset maximumScreenSpaceError in CesiumJS has to be examined...

Gotcha. From native, it looks like the tileset's geometric error and maximum SSE are factors here, but as you said, ultimately there is a separate SSE option used to drive the imagery refinement (see comments below). So that boils down to

desiredScreenPixels = projectedRectangleSizeInMeters * tilesetMaximumScreenSpaceError / tileGeometricError

and we are then looking for the imagery level such that

desiredScreenPixels <= imageryMaximumScreenSpaceError

I am open to having a separate SSE factor for the raster imagery layer, or a modifier that can be applied to the tileset's maximumScreenSpaceError, but if there's already a precedent with the former, that might be best for consistency's sake.


From native:

   * In 3D Tiles, we can only get so close to a model before it switches to the
   * next higher level-of-detail by showing its children instead. The switch
   * distance is controlled by the `geometric error` of the tile, as well as by
   * the `maximum screen space error` of the tileset. So this method requires
   * both of those parameters.
  // It works like this:
  // * Estimate the size of the projected rectangle in world coordinates.
  // * Compute the distance at which tile will switch to its children, based on
  // its geometric error and the tileset SSE.
  // * Compute the on-screen size of the projected rectangle at that distance.
  //
  // For the two compute steps, we use the usual perspective projection SSE
  // equation:
  // screenSize = (realSize * viewportHeight) / (distance * 2 * tan(0.5 * fovY))
  //
  // Conveniently a bunch of terms cancel out, so the screen pixel size at the
  // switch distance is not actually dependent on the screen dimensions or
  // field-of-view angle.
   * The `target screen pixels` returned here may be further modified by the
   * raster overlay's {@link RasterOverlayTileProvider::getTile} method. In particular, it
   * will usually be divided by the raster overlay's `maximum screen space
   * error` of the raster overlay (not to be confused with the `maximum screen
   * space error` of the tileset, mentioned above).
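Putting these relations together, a very rough sketch of the level selection could look like this. All names, and in particular the simplistic texel model (one imagery tile covering the rectangle at level 0, doubling the resolution per level), are illustrative assumptions, not the actual cesium-native or CesiumJS API:

```javascript
// desiredScreenPixels as in the relation above: the on-screen size of the
// projected rectangle at the distance where the tile switches to its
// children.
function computeDesiredScreenPixels(
  projectedRectangleSizeInMeters,
  tilesetMaximumScreenSpaceError,
  tileGeometricError
) {
  return (
    (projectedRectangleSizeInMeters * tilesetMaximumScreenSpaceError) /
    tileGeometricError
  );
}

// Divide by the imagery layer's own SSE, and pick the first imagery level
// whose texel count across the rectangle reaches the target.
function pickImageryLevel(desiredScreenPixels, imageryMaximumScreenSpaceError, options) {
  const targetPixels = desiredScreenPixels / imageryMaximumScreenSpaceError;
  for (let level = 0; level <= options.maximumLevel; level++) {
    const texelsAcrossRectangle =
      options.tileImageSizeInPixels * Math.pow(2, level);
    if (texelsAcrossRectangle >= targetPixels) {
      return level;
    }
  }
  return options.maximumLevel;
}
```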
Contributor

ggetz left a comment


A few comments on the code so far...

Contributor Author

javagl commented Apr 30, 2025

The last commit (which is, again, squashed from a local state) aims at working towards a state that allows for some basic tests. Some details are summarized below.


About "upsampling"

There has been some discussion internally, about the process of "upsampling" the geometry data. I did start some experiments and created some building blocks. But the upsampling is not enabled in the current state. There are still too many building blocks missing.

This is roughly related to some of the inlined review comments:

javagl: "I should not have removed that "Some generic reader would be nice" comment that I originally added there."
ggetz: "I generally agree about a generic reader. Though this may be something else out of scope for the PR. I don't think we have an issue open. Would you mind creating one?"
ggetz: "Ah, I meant things like interleaved buffers, quantized vs un-quantized, etc..."

The point of such a "generic reader" would exactly be to provide some form of abstraction from all these variants. Certain tasks that are required for the upsampling are pretty simple, like "Give me the position/normal/texCoord of a vertex". But there is no infrastructure for that. It will have to fetch the data from the GPU, and take into account whether the data is normalized, quantized, oct-encoded, draco-compressed, or stored in separate or interleaved buffers (and for indices, whether they are TRIANGLES or TRIANGLE_STRIP), and eventually, all (float) attributes of the data (regardless of their representation) will have to be read, subdivided, interpolated, and assembled into "new geometry data". There are many questions related to that. So while there is a draft for some ModelAttributeReader in the current state, this is not fully wired in. I locally started creating test data for that, such as these files:

unitSquare11x11_draco.glb
unitSquare11x11_draco_interleaved.glb
unitSquare11x11_meshopt.glb
unitSquare11x11_meshopt_interleaved.glb
unitSquare11x11_plain.glb
unitSquare11x11_plain_interleaved.glb
unitSquare11x11_unsignedShortTexCoords.glb
unitSquare11x11_unsignedShortTexCoords_interleaved.glb
unitSquare11x2_triangleStrip_plain.glb
...

But even if this was already tested and implemented properly, it would only be a tiny part of the upsampling. So for now, upsampling is omitted.
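To illustrate what even the simplest case of such a reader involves, here is a sketch for reading a float vec3 attribute through a byteOffset/byteStride pair, which already covers packed vs. interleaved layouts, but none of the quantized or compressed (Draco/meshopt) variants mentioned above. The names are made up for illustration:

```javascript
// Read one vec3 float attribute value from a binary buffer. For tightly
// packed data the stride is the element size (12 bytes); for interleaved
// data it is the combined vertex size. glTF buffers are little-endian.
function readVec3Attribute(arrayBuffer, byteOffset, byteStride, vertexIndex) {
  const view = new DataView(arrayBuffer);
  const base = byteOffset + vertexIndex * byteStride;
  return [
    view.getFloat32(base, true),
    view.getFloat32(base + 4, true),
    view.getFloat32(base + 8, true),
  ];
}
```

For example, for positions interleaved with normals (24 bytes per vertex), reading vertex 1 with byteStride 24 skips over the normal of vertex 0.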


About the current state:

Instead of "upsampling", the process currently computes a "matching" imagery level for the geometry at hand, and drapes this over the geometry. So it is essentially trying to always map one imagery texture to a certain primitive. This does not work for all possible cases. Specifically, it assumes that the geometry data itself provides a sufficient level-of-detail structure. (So when the tileset contains a single LOD with a 1000 km x 1000 km square consisting of 4 vertices, this won't work well...) But it allows for some basic tests, e.g. with Google Photorealistic 3D Tiles.

The following is a sandcastle for such basic tests when running the current state on localhost:

http://localhost:8080/Apps/Sandcastle/#c=vVZbb9s2FP4rRPZQGbDluxV7TrDEXdsA7RrU7friF0qiJSIUKZCUE6/If98hJUqy6wVFsOTBhshz5XeukeBKox0l90SiC8TJPVoRRYvM/9veeZuzyJ5XgmtMOZGbsy76seEIJUyEZIG2mCnS3fDHzu8bHll1mjKiiAZ9+B5T7TRGkmBN3guRMHKbCi3gzKjSNBq//VqKeFaz4Gz/TVGefKc6LfnfExGJmMgF0rKozZV++yoinPi5pBnVdEeUj+PYq5wwbBve76OVtY50SpBXcJWTiG4piTuIZjghco8Y3hOpgAFrK6BSUbAYhQThPGfAirSw4pXmDcdqzyO0LXikqeCofF/1kptS60er1OuUkB3Ac8BxiH2btBKMEWvAK9+CEDj3lmxxwTTyrgEm9AnnqmMo1go6acGAUl3cSrGjsQmu5UanwvRdSOb4r8w7vR+OGyGl9wxC77wVvM29NkT/6s8vN1cfu07msVN+mSfAqX7HOyFRTMIiSeAdC7ROxb3F2AUlFAWPsaREQVYcou/gtOJG8K3EOXFemDBcN8IXZZ5a63SLvF8R6jwXzlYkjcaVEDKmHEBVxxItTMEIQLCOUpIBtC0V30n4icgIQ718bfF4nafBlUQXkp/0Hdgeq6q4Uoom/AByVwdHyV6CTY+StsycpzIfjFU6/GNheuSU9WiNeRxhpRlB324QiBW5M27K/RN0AQayFjnM8hQv0NAiEUqapJoTpdwNSGkJqtw5LQDZgf1UGNDBpqocMcFZVul6dL58y2PXM8ARDdhDGm7BLDvVNRCWBMU2n6z0YcJ2UYgVNJHq1rwFZeYxG153kMLac3VUWXTNw6TtAWI+IzzRKbq4uECDOlnLuLtMcEVi3TyG3E+g4w5cV7EsvoUUGP8qshAStIa8JJQZVnI2eJ9ib6htGReRUxKO1uaHiJ1ihes2VxPMU8wNtS1jw32K3RI6rRJZQbghbvcpOY4cilLME6J+iqDV9E6KzAxRF7//CG7L0roIVSQpTJyq+NpJUnWDOy6iO1FoH7CK7hq/W9NXCBZiE+xYREVGuDZx/pMR83m9v4lhplc8mzMjdqzZjLs9zJXYOlhb6DrNRmYLfdsr7XGcQfPgTXW28rUVMqw+33NofDmReu8ZoU6ds0ceuL5m/P4cKiJ3OGSk7YkVr9mUw807AX/dFB3Kt4JybeGNQIvECJenctHIGXk4WixKNh9q2IbT+hwT2Ft41UBazXqFJbR5ivm4mga90Ww+DuaBP50OB7PBJAiqtt2bjObBcDz1p8NgMJ8Eo2lFGJ9PJ8Fg4AfwPxlOxrOJuS+7vYC5BHvYT2Y/EGyidUt1lH6BhaEyPvFn0+kA7M5H0+FwPq9t9Ab+6Hw2Pp9P4H84Pj8fzyrK0B9PZ8MAPB1Ng2A8DMakFzgH7NJ11j1b2gXg0lz/QbNcSI0KyTzf72uS5cxMun5YRHfQ8yPXAH5zeVnFPIT0TaQZtgskkxB7k1EXud/APy+lEMph0NrdYJI/VFchzFMiexLeXKiG8HhghvK80M7YDpKORpj1YOFMALqMxjEjhxZ6WuQLNKqtuOtQaC2yhgJmln0HwDKmO0TjixNrMooYVgoo24KxNf2HbM4ul33gPxBjwkbuM3gIjcmwpMPLj+Wl7/vLPhx/lqrr18ZgqU19XJZuL3Uo4v2lK46llpfNkrHU8eWVaeTLPnwd3jcnOJfo6X1OwJo0XQ4elFEOp4H5wg/wNYQvpUluLv3BCE5QfbgXQuuAqx1mZt7audFF9lTO0wV6Y9W/qdw/bVSTB230A3Bwmj6h/EDNwcPgIJ9A4rqeUf8LHONfgaOZiy+FSWPh2cCsqlH8erC44f9SoDj9z4bkQ0FeDw3Yb14KCFD9bAzW9Sb1elA029tLIdJYeDYw783O+HqY2BX1peCwyn8JCfisRw18VzPIDat/AQ

It is using Google Photorealistic tiles, draping Bing Aerial maps over them, and offering some basic imagery layer manipulation UI for testing. A short screencap:


Given that there are many commented-out blocks related to ~"upsampling experiments" in the current state, it does not make sense to review this state on a code level. But maybe someone wants to try it out.

Contributor Author

javagl commented May 8, 2025

The last commit added a few cleanups (removing obsolete comments, omitting (hopefully) obsolete debug logs, and omitting the part that is related to the upsampling experiments).

The upsampling might be added in a PR that is built on top of this one, to make the process of reviewing and testing easier.

Contributor

ggetz commented May 9, 2025

Sounds like a plan @javagl!

What's left in this base PR? Is the TODO list in the PR description up to date?

Contributor Author

javagl commented May 9, 2025

There are still many TODO_DRAPING comments in the current state. Some of them are rather "notes" for things that may require additional attention during a review. Others are "trivial" cleanups, with minor degrees of freedom (e.g. some of the ones that are related to the inlined comments from your initial review that are still open).

The "larger" ones refer to some of the points that had been listed as "To do" in the initial comment, often somehow related to upsampling. I just removed that list from the initial comment, here is one with some updates:

  • The decision about how to pick the right level (of detail) for the imagery is currently still open. There are some inlined comments about how this is done in CesiumJS/native for quantized meshes, sometimes depending on the SSE for the tileset or the imagery. "The right" solution still has to be assembled. Some of this may be obsolete without upsampling: Even if we compute some level, we can not necessarily apply that level. So I think that the current approach of trying to ~"map roughly one imagery texture to each primitive" is the safest bet.
  • Obtaining the vertex attributes from the primitive: For the current state, this "only" needs the positions. But for upsampling, it will be necessary to read all attributes. And I will not try to implement this by copy-and-pasting getVertexPosition and tweaking it for normalized, ZXY-encoded normals or so. There has to be some abstraction for that. Ideally, this would be an AccessorView (as in cesium-native), preferably something that also handles further corner cases. For now, I started a ModelReader (and actually have some specs for that locally), but some work has to go into that.
  • The processing of the "state machine" of the Imagery has to be reviewed. Part of that is that there's that Imagery.addReference function, which is called at a somewhat random place right now. In a similar vein, some aspects of memory management in general have to be reviewed (and maybe I'd add some comment to things like buffer.vertexArrayDestroyable once I figure out what this is about...).

If desired, I can try to remove "as many TODOs as possible" (i.e. the ones that are rather "notes" or indicate "small fixes").

Contributor

ggetz commented May 14, 2025

@javagl It looks like there is an error in built-ts which is preventing this branch from being deployed in CI. Having this branch deployed and available for testing would be helpful. Would you mind taking a look at the error?

Contributor Author

javagl commented May 27, 2025

I added the first pass of specs. Until now, they mainly cover the "leaves of call trees", i.e. static functions, but also a few that rely on actual state and model/tileset loading. There are several dimensions along which the coverage should be more thorough. On the one hand, I'm tempted to create more spec data files (more unit squares!!!) and think that they could be useful elsewhere. On the other hand, there are many paths related to the actual "lifecycle of the draping itself" (down to the level of checking reference counters for specific Imagery objects). Let's see how far we can get here.

And regarding the previous build failures: No matter how pedantic you are, there's always a computer that's more pedantic than you 😆

Contributor Author

javagl commented May 29, 2025

The last commits mainly consisted of adding specs. Unfortunately, one of the specs uncovered an open question (that had been in the code as TODO_DRAPING for a while now), namely that of the reference counting for imageries: The Imagery has a referenceCount property. And this was not maintained properly (skipping some details). The solution is to store the Imagery object directly in the ImageryCoverage (which I originally tried to avoid, for various reasons - again, skipping some details). It's certainly not ideal to do such a change so late in the process, but given that there now is a (very basic) spec for this, and the previous test sandcastles still seem to work, I hope that this turns out to be OK.
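For context, the underlying pattern here is plain reference counting. A minimal, illustrative sketch (not the actual Imagery API):

```javascript
// Each holder of the resource calls addReference; the resource is only
// released when the last holder calls releaseReference. The bug class
// described above is exactly a missed or unbalanced add/release pair.
class RefCounted {
  constructor() {
    this.referenceCount = 0;
    this.destroyed = false;
  }
  addReference() {
    this.referenceCount++;
  }
  releaseReference() {
    if (--this.referenceCount === 0) {
      this.destroyed = true; // e.g. free GPU resources here
    }
  }
}
```

Storing the Imagery object directly in the ImageryCoverage makes the owner of each add/release pair explicit, which is what makes the counting maintainable.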

javagl marked this pull request as ready for review May 29, 2025 17:12
Contributor

ggetz commented May 29, 2025

@javagl I hope you don't mind: I pushed some minor inline documentation updates, mostly to cross-link and add more examples.

Contributor

ggetz commented May 30, 2025

This is looking fantastic @javagl! I'm going to go ahead and merge. I'll follow up with a PR adding a Sandcastle example that we'll use for announcements.

Could you please open issues for the well-defined future improvements we know of? The two that are top of mind are 1. upsampling, for loading high-resolution imagery on low-resolution geometry, and 2. seams or other undefined behavior over the IDL.

ggetz added this pull request to the merge queue May 30, 2025
Merged via the queue into main with commit 68314bd May 30, 2025
9 checks passed
ggetz deleted the draping-3d-tiles branch May 30, 2025 19:25
Contributor Author

javagl commented May 30, 2025

@ggetz These are the most important ones. I just opened #12643 with these (and some details). Further points can be added to this issue, or dedicated new issues, as we see fit.
