Before descriptor sets can be bound to a pipeline, they must first be referenced by a _Pipeline Layout_. Pipeline layouts essentially tell the API which descriptor set layouts the shader can expect.
While the per-set-layout bindings are statically defined, standardizing them across the engine frees us to re-use both the descriptor set layouts and the pipeline layout that references them for most of our common shader collections. Doing this drastically simplifies the engine design. In OpenGL-like implementations, allocating tons of descriptor sets and copying and rewriting the bindings for every draw call is common practice. The elegant persistent-binding method highlighted in the NVIDIA article was far preferable to us, which makes maximally flexible descriptor set layouts a crucial design goal.
Note that the usage of "UBO" in various code examples here is a relic of the old OpenGL nomenclature for "Uniform Buffer Object". Essentially "UBO" and "Uniform Buffer" refer to the same concept.
## Descriptor Indexing to the rescue!
Before Vulkan Descriptor Indexing, one had to pre-specify exactly how many texture bindings there could be in a descriptor set. Additionally, each binding had to reference a valid image resource regardless of whether or not the shader actually used it. This made writing the code for re-usable, flexible pipeline layouts both tedious and ugly. Luckily, the Descriptor Indexing feature is core as of Vulkan 1.2. All we have to do is enable the feature during device creation, and then use it in our code!
Once our descriptor set layouts, pipeline layouts, and pipelines were set up, we needed to actually allocate our resources on the GPU. For Vulkan, that boils down to whether or not they should live in device-local memory. For textures, the question is a no-brainer: device-local memory is more efficient for the GPU to access during sampling. For buffers, the answer is less cut-and-dried. Device-local memory requires a copy from host-visible (CPU) memory to update, and this update usually requires recording a command buffer to execute the transfer. Host-visible memory, on the other hand, can be updated directly from the host (with some coherency and caching caveats that may need to be considered). The general rule is: if the resource is updated infrequently and accessed repeatedly from a shader, device-local is the way to go; otherwise, host-visible is usually the better option.
For our engine, we decided to go with device-local scene uniform buffers (updated once per frame), device-local material uniform buffers (updated during scene initialization), and host-visible per-draw uniform buffers.
The last consideration we made was to employ an old solution to an old problem. Updating draw UBOs in Vulkan is quite different from higher-level APIs such as OpenGL: in Vulkan, you cannot modify a resource that is currently being used by an in-flight command buffer. This means that if we draw 100 objects in a single frame, we would need to maintain memory for, and perform updates on, 100 separate drawUbo buffers. This quickly becomes unwieldy as scene complexity grows. Thus, we opted for dynamic uniform buffers to solve this problem. Dynamic UBOs essentially let us map one large host-visible buffer, copy each draw's uniform data into it at an aligned offset as we record the frame, and bind the same descriptor with a different dynamic offset per draw. This method works well for most dynamic scenes with lots of draws.
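The one subtlety is that each per-draw offset must be a multiple of the device's `minUniformBufferOffsetAlignment` limit. A minimal sketch of that offset arithmetic (256 is used as an assumed alignment for illustration; real code queries the device limits):

```cpp
#include <cstdint>

// Round `value` up to the next multiple of `alignment`.
// Vulkan guarantees minUniformBufferOffsetAlignment is a power of two.
inline uint64_t alignUp(uint64_t value, uint64_t alignment) {
    return (value + alignment - 1) & ~(alignment - 1);
}

// Byte offset of one draw's UBO inside the large mapped buffer; this is the
// value passed via pDynamicOffsets to vkCmdBindDescriptorSets.
inline uint32_t dynamicOffsetForDraw(uint32_t drawIndex,
                                     uint64_t uboSize,
                                     uint64_t minAlignment) {
    const uint64_t stride = alignUp(uboSize, minAlignment);
    return static_cast<uint32_t>(drawIndex * stride);
}
```

With a 200-byte per-draw UBO and a 256-byte alignment, each draw occupies a 256-byte slot, so draw N writes its data at offset N × 256 and binds with that dynamic offset.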