demo: Implement various metrics for hierarchical simplification #836
Merged
Conversation
This metric is more invariant across different meshes; note, however, that we still compute it on the original topology, so the "ideal" value of 1.0 is only reachable on fully connected meshes without any normal/UV splits.
For hierarchical clusterization, disconnected clusters that appear early in the pipeline are problematic: unless they get merged with other clusters to fill the gap, they create an increasing number of locked edges downstream. This makes it a good metric for comparing different DAG algorithms, so we now compute it for every level separately. Unlike the main demo, for now we compute this based on position-only connectivity, as that is what is used for cluster partitioning as well.
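Position-only connectivity means vertices that share a position are treated as one, ignoring normal/UV splits. A minimal sketch of such a remap (illustrative only; the demo can rely on `meshopt_generateShadowIndexBuffer` for this):

```cpp
#include <array>
#include <cstddef>
#include <map>
#include <vector>

// Build a position-only vertex remap: every vertex is mapped to the first
// vertex that has the same position, so attribute splits (normal/UV seams)
// collapse and connectivity can be measured on positions alone.
std::vector<unsigned int> positionRemap(const std::vector<std::array<float, 3>>& positions)
{
    std::map<std::array<float, 3>, unsigned int> seen;
    std::vector<unsigned int> remap(positions.size());

    for (size_t i = 0; i < positions.size(); ++i)
    {
        auto it = seen.find(positions[i]);
        if (it == seen.end())
            it = seen.emplace(positions[i], (unsigned int)i).first;
        remap[i] = it->second;
    }

    return remap;
}
```

Applying this remap to an index buffer before counting shared edges makes the connectivity metric independent of attribute seams.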
Since every iteration computes the next DAG level and outputs statistics just for that level, we never end up computing these for LOD0 - but they are valuable for understanding the clusterization efficiency in isolation.
In addition to counting triangles per cluster, we now count the average vertex load. This is relevant for the mesh shader execution model, as higher vertex reuse makes meshlets more efficient to transform and output, and it also serves as a proxy for cluster boundary size.
All other cluster metrics we use are relative to the cluster count; the absolute number of full clusters is not meaningfully comparable between levels, so this change makes it relative as well.
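As a sketch of the relative form of this metric (an illustrative helper; names and the exact triangle budget are assumptions, not the demo's code):

```cpp
#include <cstddef>
#include <vector>

// Fraction of clusters filled to the maximum triangle budget. Reporting a
// ratio instead of an absolute count keeps levels with very different
// cluster counts comparable.
double fullClusterRatio(const std::vector<size_t>& clusterTriangles, size_t maxTriangles)
{
    if (clusterTriangles.empty())
        return 0.0;

    size_t full = 0;
    for (size_t n : clusterTriangles)
        full += (n == maxTriangles);

    return double(full) / double(clusterTriangles.size());
}
```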
With the added metrics it is easy to compare recursive and non-recursive clusterization: recursive clusterization accumulates more cluster splits over time and yields slightly more triangles per cluster on average, but from a global perspective the results are similar. In the interest of keeping a single implementation, we keep the non-recursive variant, although it is not fully clear which of the two is better overall.
These names make it clearer that the functions are only needed to collect statistics.
When vertex locks are used, we can count the number of locked vertices in each cluster, which represents the boundary that restricts the simplifier. This differs slightly from the regular meshlet boundary: the boundary is computed for groups of clusters, but evaluated on the simplified subset so that it lines up with the other metrics of the same LOD. To reduce clutter, we also now output stuck statistics only if any clusters are stuck in a given LOD.
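Counting locked vertices per cluster can be sketched as follows (a hypothetical helper; the demo's actual lock representation may differ, here assumed to be a per-vertex byte array):

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Count locked vertices referenced by a cluster. locks[v] != 0 marks a vertex
// the simplifier must keep in place (shared across simplification groups);
// the total approximates how constrained the cluster's boundary is.
size_t countLockedVertices(const std::vector<unsigned int>& indices,
                           const std::vector<unsigned char>& locks)
{
    std::set<unsigned int> unique(indices.begin(), indices.end());

    size_t locked = 0;
    for (unsigned int v : unique)
        locked += locks[v] != 0;

    return locked;
}
```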
This helps to see at a glance whether the DAG is truncated early or is too deep compared to the "optimal" depth. The optimal depth is not necessarily a good target to chase by itself; as long as the real depth is within a couple of levels of it, the DAG should be fine.
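One simple way to estimate this "optimal" depth, assuming each level roughly halves the cluster count until a single cluster remains (an assumption for illustration, not necessarily the demo's formula):

```cpp
#include <cstddef>

// Rough "optimal" DAG depth: if each level halves the cluster count (rounding
// up), the DAG bottoms out once a single cluster remains, so the depth is
// approximately log2 of the LOD0 cluster count.
int optimalDepth(size_t lod0Clusters)
{
    int depth = 0;
    while (lod0Clusters > 1)
    {
        lod0Clusters = (lod0Clusters + 1) / 2;
        depth++;
    }
    return depth;
}
```

Comparing the real DAG depth against this estimate flags early truncation (depth much smaller) or inefficient merging (depth much larger).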
We now analyze connectivity, vertex transform and boundary size for all clusters and output summary statistics per level of detail (DAG depth). This allows more granular analysis and comparison of various algorithms and tweaks.
With these metrics it is clearer that Metis non-recursive clusterization is generally better than the recursive variant, so this change also removes the recursive variant for simplicity.
Note that for now, the code that computes metrics (and is thus not needed for actual operation) is intermixed with the code that computes the actual DAG. Since this demo is not intended as a production-ready example for now, this is fine, but the demo will need to be reworked in the future to serve as a good example.
This contribution is sponsored by Valve.