Advanced instrumentation support: vsg::Instrumentation, vsg::GpuAnnotation and vsg::TracyInstrumentation #1065
Replies: 4 comments
-
vsg::GpuAnnotation - annotating GPU work for use with the Vulkan API layer and debug layer

Vulkan has the VK_EXT_debug_utils extension that can be used to record coloured labels into the vkCommandBuffer. With the merge of the Instrumentation branch the VulkanSceneGraph now has support for this, with the required function pointers provided by the new InstanceExtensions structure. The new vsg::GpuAnnotation class provides a concrete vsg::Instrumentation that utilizes the VK_EXT_debug_utils extension to put coloured labels into the vkCommandBuffer during the RecordTraversal, CompileTraversal and TransferTask operations:

/// GpuAnnotation is a vsg::Instrumentation subclass that uses VK_EXT_debug_utils to emit annotations of the scene graph traversal.
/// Provides tools like RenderDoc a way to report the source location associated with Vulkan calls.
class VSG_DECLSPEC GpuAnnotation : public Inherit<Instrumentation, GpuAnnotation>
{
public:
GpuAnnotation();
enum LabelType
{
SourceLocation_name,
SourceLocation_function,
Object_className,
};
LabelType labelType = SourceLocation_name;
void enterCommandBuffer(const SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer) const override;
void leaveCommandBuffer(const SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer) const override;
void enter(const vsg::SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer, const Object* object) const override;
void leave(const vsg::SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer, const Object* object) const override;
protected:
virtual ~GpuAnnotation();
};
VSG_type_name(vsg::GpuAnnotation);

The vsgviewer example now has code to check for extension support and to set up and configure a GpuAnnotation object:

vsg::ref_ptr<vsg::Instrumentation> instrumentation;
if (arguments.read({"--gpu-annotation", "--ga"}) && vsg::isExtensionSupported(VK_EXT_DEBUG_UTILS_EXTENSION_NAME))
{
windowTraits->debugUtils = true;
auto gpu_instrumentation = vsg::GpuAnnotation::create();
if (arguments.read("--name")) gpu_instrumentation->labelType = vsg::GpuAnnotation::SourceLocation_name;
else if (arguments.read("--className")) gpu_instrumentation->labelType = vsg::GpuAnnotation::Object_className;
else if (arguments.read("--func")) gpu_instrumentation->labelType = vsg::GpuAnnotation::SourceLocation_function;
instrumentation = gpu_instrumentation;
}
...
if (instrumentation) viewer->assignInstrumentation(instrumentation);

The Vulkan API layer reports all the debug utils label messages, so if you run vsgviewer to render a single frame with GpuAnnotation and the API debug layer enabled you'll see entries like those below included in the console output:

vsgviewer models/teapot.vsgt --ga -a -f 1

The section of the output where vsg::TransferTask is recording looks like the following; note the vkCmdBeginDebugUtilsLabelEXT and vkCmdEndDebugUtilsLabelEXT calls that are generated by GpuAnnotation::enter(..) and leave(..) respectively:

Thread 0, Frame 0:
vkCmdBeginDebugUtilsLabelEXT(commandBuffer, pLabelInfo) returns void:
commandBuffer: VkCommandBuffer = 0x563a9c076620
pLabelInfo: const VkDebugUtilsLabelEXT* = 0x7ffe914800b0:
sType: VkStructureType = VK_STRUCTURE_TYPE_DEBUG_UTILS_LABEL_EXT (1000128002)
pNext: const void* = NULL
pLabelName: const char* = "transferDynamicData"
color: float[4] = 0x7ffe914800c8
color[0]: float = 1
color[1]: float = 0.498039
color[2]: float = 0
color[3]: float = 1
Thread 0, Frame 0:
vkCmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions) returns void:
commandBuffer: VkCommandBuffer = 0x563a9c076620
srcBuffer: VkBuffer = 0x563a9b8f73d0
dstBuffer: VkBuffer = 0x563a9b8f7700
regionCount: uint32_t = 2
pRegions: const VkBufferCopy* = 0x563a9c07ba70
pRegions[0]: const VkBufferCopy = 0x563a9c07ba70:
srcOffset: VkDeviceSize = 0
dstOffset: VkDeviceSize = 0
size: VkDeviceSize = 2112
pRegions[1]: const VkBufferCopy = 0x563a9c07ba88:
srcOffset: VkDeviceSize = 2112
dstOffset: VkDeviceSize = 2112
size: VkDeviceSize = 16
Thread 0, Frame 0:
vkCmdEndDebugUtilsLabelEXT(commandBuffer) returns void:
commandBuffer: VkCommandBuffer = 0x563a9c076620
You can also grep for just the labels to see all the instrumentation. For teapot.vsgt the list is pretty brief as it's a very simple scene graph; even with the VSG compiled with VSG_MAX_INSTRUMENTATION_LEVEL set to 3 these are the labels to expect:

$ vsgviewer models/teapot.vsgt --ga -a -f 1 | grep pLabelName
pLabelName: const char* = "CommandGraph record"
pLabelName: const char* = "RenderGraph"
pLabelName: const char* = "View"
pLabelName: const char* = "Transform"
pLabelName: const char* = "MatrixTransform"
pLabelName: const char* = "MatrixTransform"
pLabelName: const char* = "StateGroup"
pLabelName: const char* = "VertexIndexDraw"
pLabelName: const char* = "transferDynamicData"

If the instrumentation level is backed off to 0 you won't see any of these labels. With 1 you'll see the CommandGraph, RenderGraph, View and transferDynamicData ones; with 2 you'll also see the internal scene graph nodes - Transform, MatrixTransform and StateGroup; and with 3 you'll also get to see the VertexIndexDraw leaf node.

When debugging rendering issues, if you see a Vulkan debug error and you've enabled GpuAnnotation along with the API and Vulkan debug layers, you can locate the ERROR message and then look back up the recorded API layer output to find the vkCmdBeginDebugUtilsLabelEXT entries that enclose it. This can help with figuring out what part of the scene graph led to the error, making it easier to pinpoint the problem code that created it - I even used this trick to help fix a VSG issue I came across back in December.
-
vsg::GpuAnnotation for use with RenderDoc

RenderDoc is a really powerful graphics debugging tool; see the RenderDoc website for an overview.
RenderDoc already works well with VulkanSceneGraph applications: you just run RenderDoc, set up the application you want to launch, and then run it and collect all the Vulkan data and operations performed by your application. However, out of the box you only get to see the Vulkan calls and data - you don't have information about what parts of your application and scene graph relate directly to those Vulkan calls.

There is a way of enriching the information that RenderDoc has access to - RenderDoc supports the VK_EXT_debug_utils extension and logs all the vkCmdBeginDebugUtilsLabelEXT/vkCmdEndDebugUtilsLabelEXT entries in the Vulkan command buffers passed to Vulkan. This is where vsg::GpuAnnotation steps in: it provides all the VK_EXT_debug_utils labels with colours so that RenderDoc can annotate all the Vulkan operations, letting you step through a frame to see exactly which parts of the scene graph are associated with which calls and which visuals on screen.

To illustrate how you use vsg::GpuAnnotation with RenderDoc we'll use the new vsgviewer --ga command line option to enable instrumentation, passing this as Command-line Arguments via the Launch Application widget. After pressing Launch, vsgviewer is run by RenderDoc, and when you signal the capture of a frame (pressing PrtScreen for me) it captures all the Vulkan calls and the labels generated by vsg::GpuAnnotation, so you can then browse all the Vulkan commands nested within the labels.

Note the orange transferDynamicData and green CommandGraph record labels - these are generated by GpuAnnotation; you can click on them to view all the commands that they encapsulate. Here we can now see that the vkCmdCopyBuffer is associated with the transferDynamicData task, and how the commands are associated with different parts of the scene graph, including the hierarchy right down to the final vkCmdDrawIndexed called from within the VertexIndexDraw node.
RenderDoc allows you to click on each command and see what the frame buffer looks like at that point, so you can see how the scene graph is recorded to the command buffer and the impact this has on rendering. It might be that particular calls are a problem - you can see this happen in RenderDoc, and the GpuAnnotation labels can then tell you which nodes of the scene graph are related to it.

One aspect of the labeling that might not be obvious is that GpuAnnotation only generates labels during the RecordTraversal, which traverses the scene graph top down and may never traverse some subgraphs due to view frustum or LOD culling, but labels will still be generated for the internal scene graph nodes above where the culling happened. This leads to some labels appearing that have no Vulkan commands associated with them - when you see this it simply means that part of the scene graph didn't lead directly to commands being recorded. This also happens when encountering vsg::DepthSorted nodes such as those used for the trees in lz.vsgt: the RecordTraversal encounters the DepthSorted node and places it in the associated vsg::Bin, which is then traversed after the main RecordTraversal. If you look at the above screenshot you'll see a Bin entry that I haven't opened up; if I did you'd see all the subgraph parts associated with the trees.

vsg::GpuAnnotation provides options for generating labels with the SourceLocation.name as shown above, the SourceLocation.function name, or the object's className.
To view the function name vsgviewer now has a --func command line option; adding this to the RenderDoc Launch Application command line changes the labels accordingly, and the --className option does likewise. I have only touched the surface of all the capabilities of RenderDoc - in this post I'm focused on how the new vsg::GpuAnnotation functionality can be leveraged in RenderDoc, so I recommend having a look at 3rd party articles and videos on the tool.
-
vsg::TracyInstrumentation and Tracy Profiler

The Tracy Profiler is another powerful developer tool that is directly supported by the new vsg::TracyInstrumentation, which seamlessly adapts the inbuilt vsg::Instrumentation calls to Tracy's instrumentation. See the Tracy project's GitHub page for a summary of what the profiler provides.
To use the Tracy Profiler you link your application with the TracyClient library; when you run your application it starts broadcasting Tracy instrumentation data via UDP packets, and you then run the Profiler application, which listens for and tracks any applications that are broadcasting across your network. This allows you to do things like run your application on an embedded device like a Raspberry Pi and run the Profiler on your laptop or desktop machine.

Normally using Tracy requires library and application developers to directly add Tracy* macro calls through their code base and to link against the Tracy library. Rather than force this dependency on all VulkanSceneGraph users, I designed the instrumentation support to be extensible and to keep these dependencies decoupled from the build of the VSG itself. This decoupling also applies to add-on libraries to the VSG as well as users' applications - you can integrate with vsg::Instrumentation throughout your code base and have it work with whatever concrete instrumentation you find most useful at that point.

A big part of the challenge of integrating with Tracy in such a decoupled way is that Tracy is designed to be directly integrated into code bases, with its Tracy::SourceLocationData struct providing hooks that help Tracy reference the original source code. To provide this rich level of instrumentation the vsg::SourceLocation struct is specifically written to use the same data types, naming and ordering, so that a vsg::SourceLocation can be passed to Tracy without needing to be adapted, and without Tracy ever knowing that it's not actually a Tracy::SourceLocationData struct. Another issue I had to resolve is that while the CPU instrumentation could work with Tracy's public CPU-related API, parts of the Tracy Vulkan API were declared private so that only Tracy classes could invoke them.
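The layout-compatibility trick can be illustrated with a self-contained sketch. The two structs below are simplified stand-ins - not the real VSG or Tracy definitions - defined independently but with identical member types and ordering, which can be verified at compile time with static_asserts:

```cpp
#include <cstddef>
#include <cstdint>

// Stand-in for Tracy's SourceLocationData (simplified sketch).
namespace tracy_like
{
    struct SourceLocationData
    {
        const char* name;
        const char* function;
        const char* file;
        uint32_t line;
        uint32_t color;
    };
}

// Stand-in for vsg::SourceLocation, deliberately using the same member
// types and ordering so a pointer to it can be handed to the Tracy-like
// API without any adaptation.
namespace vsg_like
{
    struct SourceLocation
    {
        const char* name;
        const char* function;
        const char* file;
        uint32_t line;
        uint32_t color;
    };
}

// Compile-time checks that the two layouts really do line up.
static_assert(sizeof(vsg_like::SourceLocation) == sizeof(tracy_like::SourceLocationData), "sizes must match");
static_assert(offsetof(vsg_like::SourceLocation, line) == offsetof(tracy_like::SourceLocationData, line), "line offsets must match");
static_assert(offsetof(vsg_like::SourceLocation, color) == offsetof(tracy_like::SourceLocationData, color), "color offsets must match");
```

With the layouts pinned down like this, a pointer to one struct can be reinterpreted as a pointer to the other, which is the essence of how vsg::SourceLocation can be handed to Tracy unmodified.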
It took quite a bit of experimentation to figure out how to integrate with the Vulkan side, and in the end it only needed a couple of small tweaks to Tracy that, thankfully, the author of Tracy has now merged - so you'll need to use Tracy master in order to use the vsg::TracyInstrumentation class that provides the glue between the VSG's instrumentation and Tracy's instrumentation.

When you look at how the vsg::Instrumentation base class is written, and how it's invoked from macros and helper classes, you'll see parallels with what is required for integrating with Tracy directly. This is a natural consequence of providing instrumentation that is cohesive with Tracy, as well as adopting lessons from Tracy that can help with more general instrumentation tasks. Essentially I've learned what is required to work with Tracy in a loosely coupled but cohesive way, and kept things general enough to also work with tools like vsg::GpuAnnotation and offer the possibility of users writing their own vsg::Instrumentation classes.

While vsg::GpuAnnotation just requires a Vulkan extension to work, Tracy integration requires linking to the TracyClient library. To avoid forcing this dependency on the build of the VSG, vsg::TracyInstrumentation is a header-only class, so the dependency is only added to your application when you use this integration class. Tracy has good CMake config so it's easy to find the library with find_package(Tracy); the CMakeLists.txt files found in vsgtracyinstrumentation and vsgshadow illustrate non-optional and optional inclusion of TracyClient. The vsgshadow example CMakeLists.txt looks like:

set(SOURCES
vsgshadow.cpp
)
add_executable(vsgshadow ${SOURCES})
target_link_libraries(vsgshadow vsg::vsg)
if (vsgXchange_FOUND)
target_compile_definitions(vsgshadow PRIVATE vsgXchange_FOUND)
target_link_libraries(vsgshadow vsgXchange::vsgXchange)
endif()
if (Tracy_FOUND)
target_compile_definitions(vsgshadow PRIVATE Tracy_FOUND)
target_link_libraries(vsgshadow Tracy::TracyClient)
endif()
install(TARGETS vsgshadow RUNTIME DESTINATION bin)

And vsgshadow.cpp optionally includes TracyInstrumentation.h:

#ifdef Tracy_FOUND
# include <vsg/utils/TracyInstrumentation.h>
#endif

And note in the code that sets up the vsg::TracyInstrumentation the extra controls for setting the instrumentation level that the Tracy profiling should be activated for.
The TracyClient library can also be built with TRACY_ON_DEMAND, which enables the library to only start instrumenting when the Tracy Profiler is running. This is a little more complex, so I've confined an example of it to the vsgtracyinstrumentation example. The on-demand code is done in the main frame loop and only assigns the TracyInstrumentation to the viewer once the Tracy Profiler has signaled that it wants to track the application. The Viewer::assignInstrumentation(..) method that is used will automatically stop any viewer threading, assign the instrumentation and duplicate it where required for threading, then restart the threads for you:

// rendering main loop
while (viewer->advanceToNextFrame() && (numFrames < 0 || (numFrames--) > 0))
{
#if defined(TRACY_ENABLE) && defined(TRACY_ON_DEMAND)
if (!viewer->instrumentation && GetProfiler().IsConnected())
{
vsg::info("Tracy profile is now connected, assigning TracyInstrumentation.");
viewer->assignInstrumentation(instrumentation);
}
#endif
// pass any events into EventHandlers assigned to the Viewer
viewer->handleEvents();
viewer->update();
viewer->recordAndSubmit();
viewer->present();
}

Here's an example of running vsgtracyinstrumentation on my laptop and the Tracy Profiler application on my desktop system with its 3440x1440 display (you can make use of every pixel of such a display with Tracy :-)

On the laptop:

vsgtracyinstrumentation models/openstreetmap.vsgt

And this is what it looks like with both running together; I've stopped Tracy at a point where the 4 database pager threads and associated compile contexts are all running as the view zooms in.

As part of Tracy's Vulkan integration it has support for calibrating the Vulkan timestamps to CPU timestamps via the VK_EXT_calibrated_timestamps extension:

if (calibrated)
{
auto physicalDevice = window->getOrCreatePhysicalDevice();
if (physicalDevice->supportsDeviceExtension(VK_EXT_CALIBRATED_TIMESTAMPS_EXTENSION_NAME))
{
windowTraits->deviceExtensionNames.push_back(VK_EXT_CALIBRATED_TIMESTAMPS_EXTENSION_NAME);
}
}
-
To further illustrate the steps required to add instrumentation to an application I have modified the vsgdynamicload example so that you can enable GpuAnnotation with the --ga or TracyInstrumentation with the --tracy command line parameters. This vsgExamples commit shows all the changes.

As a quick test I have used the find and xargs utilities to pass all the .gltf files in glTF-Sample-Models to vsgdynamicload, and enabled decoration of the loaded models with InstrumentationNode using the filenames so we can see which subgraphs are which:

find . -name "*.gltf" | xargs vsgdynamicload --tracy --decorate -a --ga -f 10 | grep LabelName

On the console the last frame of output (I ran the app for 10 frames) looks like:

pLabelName: const char* = "transferDynamicData"
pLabelName: const char* = "CommandGraph record"
pLabelName: const char* = "RenderGraph"
pLabelName: const char* = "View"
pLabelName: const char* = "./TwoSidedPlane/glTF/TwoSidedPlane.gltf"
pLabelName: const char* = "./TextureTransformTest/glTF/TextureTransformTest.gltf"
pLabelName: const char* = "./BoxTextured/glTF-Embedded/BoxTextured.gltf"
pLabelName: const char* = "./BoxTextured/glTF/BoxTextured.gltf"
pLabelName: const char* = "./SimpleMeshes/glTF-Embedded/SimpleMeshes.gltf"
pLabelName: const char* = "./SimpleMeshes/glTF/SimpleMeshes.gltf"
pLabelName: const char* = "./Box/glTF-Embedded/Box.gltf"
pLabelName: const char* = "./Box/glTF/Box.gltf"
pLabelName: const char* = "transferDynamicData"
pLabelName: const char* = "Context record"

This is the output with the VSG compiled with the default instrumentation level of 1; if we build with the higher instrumentation levels we'll see a lot more output.

Now test the Tracy Profiler with --tracy:

find . -name "*.gltf" | xargs vsgdynamicload --decorate --tracy

And then, after attaching the Tracy Profiler to vsgdynamicload, zooming into a frame and clicking on the View zone, we can see the instrumented zones.

A word of warning about the Tracy Vulkan instrumentation - if you have a large scene graph and use a high instrumentation level like 2 or 3 you can overwhelm Tracy's hardwired Vulkan query pools, and the VSG application can crash or the Tracy Profiler can stop viewing the results. There is also a bug in the Tracy Profiler where, if you zoom into Vulkan zones too much, their display can disappear. I, or another volunteer, will need to come up with a reliable way to illustrate/reproduce this for the Tracy community to figure out how to fix these problems. CPU profiling looks to be more solid, though, and it's very impressive just how much information can be gleaned about the running of the application.

There is so much information that can be seen with Tracy and RenderDoc that I can't come close to presenting it all in this thread - I don't even know most of what they are capable of yet, so this has been a big learning experience for me and I expect to continue to learn. I expect members of the community will also have knowledge of, or be starting out on their journey learning, how to make the best use of these tools. As we get more familiar I expect we'll want to further evolve vsg::Instrumentation, both to support RenderDoc and Tracy better and to start developing new vsg::Instrumentation implementations.
-
Hi All,
Through most of December I was working on implementing extensible profiling support in the VulkanSceneGraph; the first phase of this work is now complete and ready to be used by VulkanSceneGraph users. All the required changes are now merged into VSG master, with new examples and use of profiling merged into vsgExamples master. This is a significant chunk of work, but most users will be able to just update the VSG and everything should build and run as before.
Changes to VulkanSceneGraph : 78 commits and 58 files changed in the VSG itself.
Changes to vsgExamples : 17 commits and 7 files changed.
To make it possible for users to do compile/runtime tests against the new instrumentation functionality I have updated the VSG version to 1.1.1 and vsgExamples to 1.1.1.
VSG_MAX_INSTRUMENTATION_LEVEL : Developer defined levels of instrumentation
The approach taken is very different to the high level and fixed scheme found in the OpenSceneGraph with osg::Stats and the on-screen stats that could be added to the view window. Instead the VulkanSceneGraph approach offers different levels of instrumentation, from coarse grained/high level instrumentation down to fine grained instrumentation. You can select the level of instrumentation compiled into the VulkanSceneGraph library by setting the new VSG_MAX_INSTRUMENTATION_LEVEL CMake variable via the command line or ccmake/CMakeSetup; the value of this can also be found in the automatically generated include/vsg/Version.h file.
There are 4 levels that are currently used:
Selecting 0 is what you might use when building releases of end user applications, as this ensures there is no memory or CPU/GPU overhead associated with instrumentation.
Selecting 1 has low overhead, with just a handful of instrumentation points invoked per frame; for most applications it may well be a low enough overhead that it's fine to ship in this state. This level of instrumentation might also be useful for things like reporting FPS or using frame time to manage LOD scaling for load balancing.
Selecting 2 or 3 provides progressively more instrumentation points compiled directly into the code; the overhead is higher but the granularity is also much finer, so for development purposes it can be really informative. With fine grained instrumentation you have to be mindful that the cost of the instrumentation can affect the results you get back.
Extensible vsg::Instrumentation class
Developers will want to profile their applications in different ways when trying to answer different questions about their application and datasets - broadly, one might be trying to debug a rendering problem, trying to optimize performance, or just implementing some form of unit testing - and the tools one can use for these are very different. Rather than integrate multiple different schemes for different tools I have taken the approach of having a vsg::Instrumentation base class that enables tool-specific implementations.
The base class has a series of virtual enter/leave methods to tell the instrumentation implementation when the CPU/GPU are entering and leaving different sections of the code. The VSG provides two concrete Instrumentation classes, so most users probably won't need to worry too much about the fine details. The base class looks like:
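As a sketch of the shape of this interface - with simplified stand-in types, and the method set reduced to the enter/leave pairs discussed in this thread; see include/vsg/utils/Instrumentation.h for the real declaration:

```cpp
#include <cstdint>

// Simplified stand-ins for the real vsg types - for illustration only.
struct SourceLocation;
struct CommandBuffer {};
class Object {};

// Sketch of the vsg::Instrumentation base class: virtual enter/leave
// methods with default no-op bodies that concrete implementations override.
class Instrumentation
{
public:
    virtual ~Instrumentation() = default;

    // CPU-side scopes
    virtual void enter(const SourceLocation* sl, uint64_t& reference) const {}
    virtual void leave(const SourceLocation* sl, uint64_t& reference) const {}

    // GPU-side scopes, recorded into a Vulkan command buffer
    virtual void enterCommandBuffer(const SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer) const {}
    virtual void leaveCommandBuffer(const SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer) const {}
    virtual void enter(const SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer, const Object* object = nullptr) const {}
    virtual void leave(const SourceLocation* sl, uint64_t& reference, CommandBuffer& commandBuffer, const Object* object = nullptr) const {}
};
```

A concrete implementation such as GpuAnnotation then only needs to override the methods it cares about, as in the class declaration shown earlier in this thread.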
So that 3rd party tools can figure out where in the code the instrumentation was invoked from, a SourceLocation struct is provided by the instrumentation code sprinkled through the scene graph, or your application if you choose to add it. This SourceLocation struct is specifically written to be compatible with the Tracy profiler's SourceLocationData struct; we'll see later in this post how useful this is. The struct is defined as:
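The original listing isn't reproduced here, so the following is only a sketch of the general shape - the member names and ordering are assumptions based on the description above, chosen to mirror Tracy's SourceLocationData:

```cpp
#include <cstdint>

// Sketch of a 4-byte r,g,b,a colour; occupies the same storage as a
// packed uint32_t colour value.
struct ubvec4
{
    uint8_t r = 0, g = 0, b = 0, a = 0;
};

// Sketch of the general shape of vsg::SourceLocation (assumed layout,
// see the VSG headers for the real definition).
struct SourceLocation
{
    const char* name;     // label name, e.g. "transferDynamicData"
    const char* function; // function the instrumentation macro was placed in
    const char* file;     // source file
    uint32_t line;        // source line
    ubvec4 color;         // colour used for labels and profiler zones
    uint32_t level;       // instrumentation level (1, 2 or 3)
};
```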
The uint_color member is a uint32_t that packs the colour in a BGRA ordering that is compatible with Tracy, while the r,g,b,a members make it easy to use when passing data to Vulkan etc.
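As a sketch of that packing - an assumption based on the description above, not the library's exact code - with blue in the lowest byte and alpha in the highest:

```cpp
#include <cstdint>

// Pack 8-bit r,g,b,a channels into one uint32_t in BGRA byte order:
// blue in the lowest byte, then green, red, and alpha in the highest.
// (Illustrative helper - the real packing lives inside the VSG headers.)
constexpr uint32_t pack_bgra(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return uint32_t(b) | (uint32_t(g) << 8) | (uint32_t(r) << 16) | (uint32_t(a) << 24);
}
```

Under this assumed scheme the transferDynamicData orange from the API dump earlier in the thread, (1, 0.498, 0, 1), i.e. roughly (255, 127, 0, 255), packs to 0xFFFF7F00.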
Instrumentation of code
The include/vsg/utils/Instrumentation.h header provides CpuInstrumentation, GpuInstrumentation and CommandBufferInstrumentation helper classes and an extensive set of #defines that help developers add instrumentation to their methods.
Typical usage for instrumenting code to collect just CPU profiling is to add a CPU_INSTRUMENTATION_* macro, as is done in RecordTraversal::clearBins().
This creates instrumentation for the method that automatically calls instrumentation->enter(..) when execution passes through the macro point and then instrumentation->leave(..) on exiting the scope where the macro was placed. This is all handled automatically by the construction and destruction of the vsg::CpuInstrumentation helper object that the macro creates in the local scope. The macro also creates a static constexpr vsg::SourceLocation object that is passed to the enter(..) and leave(..) methods so that the concrete Instrumentation class can refer back to the exact line of code that invoked it.
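The mechanics can be sketched with simplified stand-in types (the names here are illustrative, not VSG's exact classes):

```cpp
#include <cstdint>

// Simplified source location record for the sketch.
struct SourceLocation { const char* name; const char* file; uint32_t line; };

// Stand-in instrumentation sink that just counts enter/leave calls.
struct Instrumentation
{
    mutable int entered = 0;
    mutable int depth = 0;
    void enter(const SourceLocation*, uint64_t&) const { ++entered; ++depth; }
    void leave(const SourceLocation*, uint64_t&) const { --depth; }
};

// RAII helper in the spirit of vsg::CpuInstrumentation: enter() on
// construction, leave() on destruction, with a null check so unassigned
// instrumentation costs no more than a pointer test.
struct CpuInstrumentation
{
    const Instrumentation* instr;
    const SourceLocation* sl;
    uint64_t reference = 0;

    CpuInstrumentation(const Instrumentation* in, const SourceLocation* s) : instr(in), sl(s)
    {
        if (instr) instr->enter(sl, reference);
    }
    ~CpuInstrumentation()
    {
        if (instr) instr->leave(sl, reference);
    }
};

// What a CPU_INSTRUMENTATION_* macro expands to in spirit inside a method:
void clearBins(const Instrumentation* instrumentation)
{
    static constexpr SourceLocation s_clearBins{"clearBins", __FILE__, __LINE__};
    CpuInstrumentation cpu_scope(instrumentation, &s_clearBins);

    // ... method body executes with the scope instrumented ...
}
```

When the scope exits, leave() fires automatically, even on early returns or exceptions, which is exactly what makes the RAII pattern a good fit for enter/leave instrumentation.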
For GPU instrumentation you also need to pass in the vsg::CommandBuffer so that timing stats or debug utils annotations can be added to the command buffer; to help with this the GPU_INSTRUMENTATION_* macros can be used in the same way.
Again these macros provide both a static constexpr SourceLocation and a GpuInstrumentation object to make the required enter(..) and leave(..) calls.
The macros are all defined so that when the VSG_MAX_INSTRUMENTATION_LEVEL value is less than the L1/L2/L3 level used in the macro it compiles to a no-op.
When the VSG_MAX_INSTRUMENTATION_LEVEL is greater than or equal to the L1/L2/L3 level, the required static constexpr SourceLocation and Cpu/GpuInstrumentation are created in the local scope, but instrumentation->enter() and instrumentation->leave() are only invoked when a non-null instrumentation pointer is provided - so if you haven't assigned Instrumentation the only extra cost will be checking the instrumentation pointer on construction and destruction of the Cpu/GpuInstrumentation object.
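The compile-time gating can be sketched like this (illustrative macro names; in the VSG the maximum level comes from the generated Version.h):

```cpp
#include <cstdint>

// Stands in for the CMake-configured VSG_MAX_INSTRUMENTATION_LEVEL;
// hard-wired to 1 for this sketch.
#define MAX_INSTRUMENTATION_LEVEL 1

int g_enter_count = 0; // stands in for instrumentation->enter()

// Each level's macro compiles away entirely when its level exceeds the maximum.
#if MAX_INSTRUMENTATION_LEVEL >= 1
#    define INSTRUMENTATION_L1() ++g_enter_count
#else
#    define INSTRUMENTATION_L1()
#endif

#if MAX_INSTRUMENTATION_LEVEL >= 2
#    define INSTRUMENTATION_L2() ++g_enter_count
#else
#    define INSTRUMENTATION_L2()
#endif

void record_frame()
{
    INSTRUMENTATION_L1(); // compiled in: level 1 <= maximum of 1
    INSTRUMENTATION_L2(); // expands to nothing at maximum level 1
}
```

Because the gating happens in the preprocessor, disabled levels add zero code and zero runtime cost, which is the property the VSG_MAX_INSTRUMENTATION_LEVEL scheme relies on.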
Assigning Instrumentation to Viewer and associated classes
In order to use instrumentation at runtime you assign an instance of one of the sub-classes of vsg::Instrumentation to the Viewer, or directly to one of the classes that supports instrumentation. The two sub-classes of Instrumentation that are currently provided are vsg::GpuAnnotation and vsg::TracyInstrumentation, which integrate with the VK_EXT_debug_utils extension and the Tracy profiler respectively; we outline them below and then go into the capabilities of each in dedicated follow-up posts.
Assigning instrumentation simply requires you to create an Instrumentation object and then assign it to the viewer with Viewer::assignInstrumentation(..).
vsg::InstrumentationNode
In addition to the instrumentation that can be set up via the CPU_INSTRUMENTATION_* and GPU_INSTRUMENTATION_* macros added to the VSG source code or your own application, you can also decorate subgraphs in your scene with the new vsg::InstrumentationNode.
The vsg::InstrumentationNode enables you to set the name, colour and instrumentation level that the node will invoke instrumentation at.
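As a hedged sketch of the decorator idea - with stand-in types and names, not InstrumentationNode's real interface - a node that brackets the traversal of its child subgraph with enter/leave calls might look like:

```cpp
#include <memory>
#include <string>
#include <vector>

// Stand-in instrumentation sink that records enter/leave events by name.
struct Instrumentation
{
    std::vector<std::string> events;
    void enter(const std::string& name) { events.push_back("enter:" + name); }
    void leave(const std::string& name) { events.push_back("leave:" + name); }
};

// Minimal scene graph node for the sketch.
struct Node
{
    virtual void traverse(Instrumentation& instr) {}
    virtual ~Node() = default;
};

// Decorator that brackets the traversal of its child subgraph with
// enter/leave calls carrying a user-set name (the real class also
// carries a colour and an instrumentation level).
struct InstrumentationNode : Node
{
    std::string name;
    std::shared_ptr<Node> child;

    void traverse(Instrumentation& instr) override
    {
        instr.enter(name);
        if (child) child->traverse(instr);
        instr.leave(name);
    }
};
```

Decorating a loaded model with such a node is what makes its filename show up as a label around all of that subgraph's work, as in the vsgdynamicload output shown earlier in the thread.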
The vsgviewer example now has code for setting up an InstrumentationNode by decorating subgraphs and setting the name and colour; a simple helper function sets up the decorated subgraph.
And in main(..), when requested via the new --decorate command line option, it decorates each loaded model, setting the name to the filename of the loaded file.
Then, when instrumentation is assigned to the Viewer, the InstrumentationNode will be encountered during traversals and will automatically dispatch the required Instrumentation::enter/leave() methods. This enables application developers to decide which parts of the scene to instrument and what labels to use for them.
The InstrumentationNode supports reading/writing so that it can be saved to .vsgt and .vsgb files if required.
The vsgshadow and vsgtracyinstrumentation examples also demonstrate vsg::TracyInstrumentation, which I'll detail tomorrow.