This PR fixes a memory usage regression reported in https://jira.unity3d.com/browse/UUM-60476.
The regression was introduced by RG Compilation Caching (https://github.cds.internal.unity3d.com/unity/unity/pull/40721). For the NRP RG compiler, it creates 20 instances of `CompilerContextData` ([see code](https://github.cds.internal.unity3d.com/unity/unity/blob/trunk/Packages/com.unity.render-pipelines.core/Runtime/RenderGraph/RenderGraphCompilationCache.cs#L36-L43)), and the `CompilerContextData` constructor reserves native list capacity according to the worst case, amounting to several megabytes of reserved memory per instance. Multiplied by 20, this causes tens of MBs of upfront memory allocation whenever RG Compilation Caching is enabled (which it is by default).
The fix removes the upfront allocation from the `ResourcesData` constructor, based on a few observations:
- I only touched `ResourcesData` because, by my count, `ResourcesData.readerData` accounts for ~90% of all memory allocated in the `CompilerContextData` constructor.
- This change should be safe and correct, because `ResourcesData.Initialize()` is called as the first step of NRP compilation (by `NativePassCompiler.SetupContextData`), and that function resizes each array to the size it actually needs to be.
- As a side effect of the above, there is no need to estimate the number of resources, since it is known by the time `Initialize` is called. This avoids allocating memory that is never used.
- These are native memory allocations and do not generate garbage. When a new graph hash is encountered (meaning we must compile a graph), we take a new `CompilerContextData` from the pool and the resourceData arrays are allocated at that point. This is expected to have some small overhead, but likely negligible compared to the overhead already incurred by having to compile the graph.
- In addition, it's good to be aware that `NativeList` always rounds the requested capacity up to the next power of two (`newCapacity = math.ceilpow2(newCapacity)`). So if you resize to 100,000 elements, it will actually reserve memory for 131,072 elements.
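To make the power-of-two rounding in the last bullet concrete, here is a small standalone sketch of the rounding `math.ceilpow2` performs (`CeilPow2` is my own illustrative helper, not the Unity API; the real implementation lives in `Unity.Mathematics`):

```csharp
// Illustration only: equivalent of Unity.Mathematics math.ceilpow2 for
// positive ints - returns the smallest power of two >= x.
static int CeilPow2(int x)
{
    x -= 1;          // handle exact powers of two
    x |= x >> 1;     // smear the highest set bit downwards...
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    return x + 1;    // ...then round up to the next power of two
}

// CeilPow2(100000) == 131072, so a NativeList resized to 100,000
// elements reserves capacity for 131,072 elements (~31% overhead here).
```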
Note that memory usage can clearly be improved further - I want to make the low-risk simple fix first and improve upon it afterwards.
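The overall shape of the fix can be sketched as follows. This is a simplified illustration of the pattern, not the actual Unity code; the class layout and `Initialize` parameter are assumptions for the example:

```csharp
// Sketch: allocate with zero capacity in the constructor, defer real
// sizing to Initialize(), which runs first during NRP compilation.
class ResourcesDataSketch
{
    NativeList<ResourceHandle> createData;

    public ResourcesDataSketch()
    {
        // Zero capacity: no upfront native allocation when the pooled
        // CompilerContextData instances are constructed.
        createData = new NativeList<ResourceHandle>(0, AllocatorManager.Persistent);
    }

    public void Initialize(int numPasses)
    {
        // By now the real pass/resource counts are known, so no
        // worst-case estimate is needed.
        createData.Resize(numPasses * 2, NativeArrayOptions.ClearMemory);
    }
}
```

The allocation cost moves from cache-pool construction to the first compilation of each graph, which is the trade-off described in the bullets above.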
Excerpts from the diff (spacing restored):

```csharp
nativeSubPassData = new NativeList<SubPassDescriptor>(estimatedNumPasses, AllocatorManager.Persistent); // there should "never" be more subpasses than graph passes

createData = new NativeList<ResourceHandle>(estimatedNumPasses * 2, AllocatorManager.Persistent); // assume every pass creates two resources

resourceNames[t] = new DynamicArray<Name>(estimatedNumResourcesPerType); // T in NativeList<T> cannot contain managed types, so the names are stored separately

// Note: All these lists are allocated with zero capacity, they will be resized in Initialize when
```