
Conversation

@gizatt (Collaborator) commented Apr 21, 2022

At present, even if a geometry or material is sent twice with the same UUID, there is no caching of that declared-to-be-duplicated data (at the parsing level, anyway -- maybe there's something deeper). This PR enables UUID-based caching by creating a persistent ExtensibleObjectLoader that keeps a cache of UUIDs across all parsing calls (which is more than the vanilla ObjectLoader does). Clients (e.g. meshcat-python PR #114) can take advantage of this change to save a great deal of bandwidth.
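The caching idea described above can be sketched roughly as follows. This is a minimal illustration, not meshcat's actual ExtensibleObjectLoader: the `CachedLoader` class and the `buildFn` callback are hypothetical stand-ins for the real parsing path.

```javascript
// Hedged sketch: a persistent loader that caches parsed objects by UUID
// across parse calls, so a geometry/material sent twice with the same
// UUID is only built once. CachedLoader and buildFn are hypothetical.
class CachedLoader {
  constructor() {
    this.cache = new Map(); // uuid -> previously parsed object
  }

  // json is the serialized description; buildFn does the expensive parse.
  parse(json, buildFn) {
    if (this.cache.has(json.uuid)) {
      return this.cache.get(json.uuid); // reuse the earlier result
    }
    const obj = buildFn(json);
    this.cache.set(json.uuid, obj);
    return obj;
  }
}
```

Because the loader instance persists across messages, a second `parse` call with an already-seen UUID returns the cached object instead of re-parsing the payload.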

@jwnimmer-tri (Contributor) commented

I've been starting to consider some of these same improvements myself.

I'm curious whether you've thought about how to keep the cache from growing without bound? If I'm reading this code correctly, repeatedly calling set_object on the same path with a new object each time would eventually make the cache unreasonably large and run out of memory, filled with cached-but-never-reused objects.
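One standard way to address the unbounded-growth concern is to cap the cache with an LRU eviction policy. A minimal sketch (the `LruCache` class is illustrative, not part of meshcat; it relies on the fact that JavaScript `Map` iterates in insertion order):

```javascript
// Hedged sketch: an LRU-bounded cache so repeated set_object calls with
// fresh UUIDs cannot exhaust memory. Map iteration order in JavaScript is
// insertion order, which makes the least-recently-used entry the first key.
class LruCache {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest); // evict the least recently used entry
    }
  }
}
```

With a fixed `maxEntries`, cached-but-never-reused objects are eventually evicted instead of accumulating forever; the trade-off is that a genuinely reused UUID may occasionally need re-parsing after eviction.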

There's also the twin side of this -- the code in dispose_recursive() seems like it would dispose any loaded textures on the first copy of the object removed from the scene, leaving the remaining copies of that object without their textures.

@jwnimmer-tri (Contributor) commented

Closing this as incomplete / abandoned.
