diff --git a/configuration/structure.json b/configuration/structure.json index d18060d28..e32f420dd 100644 --- a/configuration/structure.json +++ b/configuration/structure.json @@ -659,8 +659,24 @@ }, "content": "features/featuresDeepDive/frameGraph/frameGraphBasicConcepts" }, + "frameGraphClassFramework": { + "friendlyName": "Frame Graph Framework Description", + "children": { + "frameGraphClassOverview": { + "friendlyName": "Introduction to Frame Graph classes", + "children": {}, + "content": "features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphClassOverview" + }, + "frameGraphTaskList": { + "friendlyName": "Frame Graph Task List", + "children": {}, + "content": "features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList" + } + }, + "content": "features/featuresDeepDive/frameGraph/frameGraphClassFramework" + }, "frameGraphBlocks": { - "friendlyName": "Render Graph Blocks", + "friendlyName": "Node Render Graph Blocks", "children": { "frameGraphBlocksGeneralNotes": { "friendlyName": "General notes about Render Graph Blocks", @@ -668,7 +684,7 @@ "content": "features/featuresDeepDive/frameGraph/frameGraphBlocks/frameGraphBlocksGeneralNotes" }, "frameGraphBlocksDescription": { - "friendlyName": "Description of the Render Graph Blocks", + "friendlyName": "Render Graph Blocks Description", "children": {}, "content": "features/featuresDeepDive/frameGraph/frameGraphBlocks/frameGraphBlocksDescription" } diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts.md b/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts.md index 1adca78df..7cb7b499c 100644 --- a/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts.md +++ b/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts.md @@ -23,7 +23,7 @@ Each task declares its input and output resources. The resources can be textures, a list of renderable meshes, cameras, lights. As for the textures, they are allocated and managed by the frame graph subsystem and not directly by the tasks. This allows us to optimize the allocation of textures and reuse them during the execution of a graph, thus saving GPU memory. -By default, there is no persistence of resources between each execution of a rendering graph, unless a resource is specifically labeled as “persistent” (think of a texture that must be reused from one frame to the next). In our implementation, persistent textures are used to implement “ping-pong” textures, where we change the read and write textures with each frame (used to implement the temporal antialiasing task, for example). +By default, there is no persistence of resources between each execution of a render graph, unless a resource is specifically labeled as “persistent” (think of a texture that must be reused from one frame to the next). In our implementation, persistent textures are used to implement “ping-pong” textures, where we change the read and write textures with each frame (used to implement the temporal antialiasing task, for example). To clarify the ideas, here is a simple graph: @@ -31,6 +31,14 @@ To clarify the ideas, here is a simple graph: The “Color Texture”, “Depth Texture”, “Camera” and “Object List” nodes are input resources (respectively, of the texture, depth texture, camera and object list type). “Clear” and “Main Rendering” are two tasks, the first clears a texture/depth texture and the second renders objects in a texture. “Output” is the output buffer (think of it as the screen output). 
+As a user, the process of creating and using a frame graph is as follows: +* Create a frame graph, either by using the `FrameGraphXXX` classes (see [Frame Graph Framework Description](/features/featuresDeepDive/frameGraph/frameGraphClassFramework)), or by loading a node render graph (see [Node Render Graph Blocks](/features/featuresDeepDive/frameGraph/frameGraphBlocks)). +* Build the frame graph (`await FrameGraph.buildAsync()` or `await NodeRenderGraph.buildAsync()`). Calling these functions will also wait until the graph is ready to be displayed (it waits until all tasks + all internal states are ready). +
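+
+As a minimal sketch of these first two steps using the `FrameGraphXXX` classes (the clear task is borrowed from the framework pages linked above; `colorTexture` is assumed to be a texture handle created through `frameGraph.textureManager`):
+```typescript
+const frameGraph = new BABYLON.FrameGraph(scene);
+
+// Create a task, set its inputs and add it to the graph
+const clearTask = new BABYLON.FrameGraphClearTextureTask("clear", frameGraph);
+clearTask.clearColor = true;
+clearTask.targetTexture = colorTexture; // a texture handle created beforehand
+frameGraph.addTask(clearTask);
+
+// ...add the other tasks...
+
+// Build the graph and wait until it is ready to be executed
+await frameGraph.buildAsync();
+```
+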
+At this point, the frame graph can be safely executed: call `FrameGraph.execute()`, or simply set `scene.frameGraph = myFrameGraph`, in which case the call to `execute` will be performed by the scene's rendering loop. + +Note that in this scenario, you never have to manage passes: you will only need to create passes when you create your own tasks. As a user, you simply use the existing tasks (see [Frame Graph Task List](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList) for the list of existing tasks in the framework), creating an instance of the task, setting its input parameters to reasonable values, and adding the task to the frame graph. See [Introduction to Frame Graph classes](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphClassOverview) for more information. + ### Benefits A frame graph allows a high-level knowledge of the whole frame to be acquired, which: @@ -39,7 +47,7 @@ A frame graph allows a high-level knowledge of the whole frame to be acquired, w * simplifies asynchronous computation and resource barriers. This is an advantage for native implementations, but for the web, we don't (yet?) have asynchronous computation and we don't have to manage resource barriers ourselves. * allows for autonomous and efficient rendering modules * overcomes some of the limitations of our hard-coded pipeline. - +
The last advantage is particularly important, as it allows things that are not possible in our current fixed pipeline, which can be described as follows (this is what the existing `Scene.render` method does): 1. Animate 1. Update cameras @@ -50,7 +58,7 @@ The last advantage is particularly important, as it allows things that are not p 1. Rendering of RTT declared at camera level 1. Rendering of active meshes 1. Apply post-processing to the camera - +
We have a number of observables that allow you to inject code between these stages, and internal components that add functionality at key points (such as shadows, layers, effect layers, etc.), but the order of tasks is always fixed and strongly centered on the camera, as you can see. With a frame graph, nothing is defined in advance; you simply create tasks and their interconnections. The camera has no particular status; it is a resource that you can use to construct the graph, in the same way that you can use textures or lists of objects. @@ -74,7 +82,7 @@ In this mode, the execution of a frame graph largely replaces the flow of operat * You must define `Scene.cameraToUseForPointers` for the camera to be used for pointer operations. By default, if nothing is defined in this property, `Scene.activeCamera` is used. But since this last property is now `null` most of the time, pointer operations will not work as expected if you do not define `Scene.cameraToUseForPointers`. * You will not be able to define the parameters of a certain number of components via the inspector (such as rendering pipelines, effect layers, post-processing) because these are now simple tasks within a frame graph. As a workaround, if you use a node render graph to generate the frame graph, you will be able to set parameters in the node render graph editor. We will also look at how to update the inspector (when possible), so that we can adjust the settings from this tool. * Most of the existing observables notified by `Scene.render` are no longer notified. As explained above, the execution of the frame graph replaces much of `Scene.render` (see below), so a lot of code is no longer executed. - +
Regarding the last point, this is what `Scene.render` does when `Scene.frameGraph` is defined: 1. Notifies `Scene.onBeforeAnimationsObservable` 1. Calls `Scene.animate` @@ -84,7 +92,7 @@ Regarding the last point, this is what `Scene.render` does when `Scene.frameGrap 1. Animates the particle systems 1. Executes the frame graph 1. Notifies `Scene.onAfterRenderObservable` - +
As you can see, only 3 observables are notified in this case. ### Use a frame graph in addition to the existing scene render loop diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphInAdditionToRenderLoop.md b/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphInAdditionToRenderLoop.md index 0d0cfa0bd..20c7625e3 100644 --- a/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphInAdditionToRenderLoop.md +++ b/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphInAdditionToRenderLoop.md @@ -22,17 +22,13 @@ We now create the frame graph and import the post-process texture into it, so th ```javascript const frameGraph = new BABYLON.FrameGraph(scene, true); -engine.onResizeObservable.add(() => { - frameGraph.build(); -}); - const passPostProcessHandle = frameGraph.textureManager.importTexture("pass post-process", passPostProcess.inputTexture.texture); passPostProcess.onSizeChangedObservable.add(() => { frameGraph.textureManager.importTexture("pass post-process", passPostProcess.inputTexture.texture, passPostProcessHandle); }); ``` -Note that when the size of the post-process changes (due to a resizing of the window, for example), the texture will be recreated, and we will therefore have to re-import it into the frame graph. We can pass a texture handle to `importTexture` so that the texture passed as the first parameter replaces the texture associated with this handle. This way, we won't have to update the frame graph, the post-process pass texture always remains associated with `passPostProcessHandle`. +Note that when the size of the post-process changes (due to a resizing of the window, for example), the texture will be recreated, and we will therefore have to re-import it into the frame graph. We can pass a texture handle to `importTexture()` so that the texture passed as the second parameter replaces the texture associated with this handle. This way, we won't have to update the frame graph, the post-process pass texture always remains associated with `passPostProcessHandle`. The next step is to create the bloom, black and white and copy to back buffer tasks, add them to the frame graph and build the graph: ```javascript @@ -48,12 +44,16 @@ const copyToBackbufferTask = new BABYLON.FrameGraphCopyToBackbufferColorTask("co copyToBackbufferTask.sourceTexture = bnwTask.outputTexture; frameGraph.addTask(copyToBackbufferTask); -frameGraph.build(); +engine.onResizeObservable.add(async () => { + await frameGraph.buildAsync(); +}); -await frameGraph.whenReadyAsync(); +await frameGraph.buildAsync(); ``` There is not much to say here, the code should be self-explanatory. +Refer to [Frame Graph Task List](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList) for detailed explanations of the various frame graph tasks used in the code snippet above. + Finally, we need to execute the frame graph at each frame. 
As we use the regular rendering output of the scene, the best place is inside a `Scene.onAfterRenderObservable` observer: ```javascript scene.onAfterRenderObservable.add(() => { @@ -61,7 +61,7 @@ scene.onAfterRenderObservable.add(() => { }); ``` -The full PG: +The full PG: ## Using a node render graph @@ -85,16 +85,16 @@ const nrg = await BABYLON.NodeRenderGraph.ParseFromSnippetAsync("#FAPQIH#1", sce const frameGraph = nrg.frameGraph; -passPostProcess.onSizeChangedObservable.add(() => { +const setExternalTexture = async () => { nrg.getBlockByName("Texture").value = passPostProcess.inputTexture.texture; - nrg.build(); -}); + await nrg.buildAsync(false, true, false); +}; -nrg.getBlockByName("Texture").value = passPostProcess.inputTexture.texture; - -nrg.build(); +passPostProcess.onSizeChangedObservable.add(async () => { + await setExternalTexture(); +}); -await nrg.whenReadyAsync(); +await setExternalTexture(); scene.onAfterRenderObservable.add(() => { frameGraph.execute(); @@ -102,8 +102,8 @@ scene.onAfterRenderObservable.add(() => { ``` As above, we create a "pass" post-process, so that the scene is rendered in a texture. This texture is set as the value of the block named “Texture”, which is our input texture in the graph. -Note that we have deactivated the automatic building of the graph when resizing the engine, because when the screen is resized, we must first update the texture of the “Texture” block before rebuilding the graph. +Note that we have deactivated the automatic building of the graph when resizing the engine (parameter **rebuildGraphOnEngineResize** in the call to `ParseFromSnippetAsync()`), because when the screen is resized, we must first update the texture of the “Texture” block before rebuilding the graph: `passPostProcess.onSizeChangedObservable` replaces `engine.onResizeObservable`. -The rest of the code should be simple to understand. +Also note the parameters passed to `nrg.buildAsync(dontBuildFrameGraph = false, waitForReadiness = true, setAsSceneFrameGraph = false)`: the first two parameters have their default values, but the third is set to *false* so that the frame graph is not defined at the scene level! -The full PG: +The full PG: diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphReplaceRenderLoop.md b/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphReplaceRenderLoop.md index d154a85cc..2b20e9d46 100644 --- a/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphReplaceRenderLoop.md +++ b/content/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts/frameGraphReplaceRenderLoop.md @@ -16,6 +16,8 @@ scene.cameraToUseForPointers = camera; ``` The second parameter of the `FrameGraph` constructor instructs the framework to create debugging textures for the textures created by the frame graph, which you can browse in the inspector. This can help debug / understand what's going on in your code! +We set `scene.frameGraph = frameGraph` so that the frame graph is used instead of the usual scene rendering loop. 
+ Let's create a color and depth texture, which will be used to render the meshes: ```javascript const colorTexture = frameGraph.textureManager.createRenderTargetTexture("color", { @@ -50,6 +52,8 @@ You can see that we create the textures through the `frameGraph.textureManager` This is why `frameGraph.textureManager.createRenderTargetTexture` returns a texture handle (a number) and not a real texture object: most of the frame graph framework methods that deal with textures use texture handles! +Refer to [Introduction to Frame Graph classes](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphClassOverview#texture-handles) for more information about texture handles. + Note that you can import an existing texture into a frame graph by calling `frameGraph.textureManager.importTexture()`, and this method will return a texture handle that you can use as input for frame graph tasks. There is also the inverse method `frameGraph.textureManager.getTextureFromHandle()` which allows you to obtain the real texture object from a texture handle (useful when you use a frame graph at the same time as the scene render loop - see the following section for more information). An important property of the object passed to the `createRenderTargetTexture` method is `sizeIsPercentage`: if it is `true`, it means that the size values are percentages instead of fixed pixel sizes. These percentages are related to the size of the output screen. If you set `width=height=100`, this means that the texture will be created with the same size as the output screen. If you set these values to `50`, the texture will be created with half the size of the screen. Most of the time, you will want to set `sizeIsPercentage=true` to keep your frame graph independent of the output size. @@ -67,6 +71,8 @@ frameGraph.addTask(clearTask); ``` This code creates a "clear" task, configures it and adds it to the frame graph. +Refer to [Frame Graph Task List](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList) for detailed explanations of the various frame graph tasks used in the code snippets on this page. + The main task is the rendering of the meshes: ```javascript const rlist = { @@ -94,22 +100,18 @@ copyToBackbufferTask.sourceTexture = renderTask.outputTexture; frameGraph.addTask(copyToBackbufferTask); ``` -Once all the tasks have been added to the frame graph, you must build the graph by calling `FrameGraph.build()`. This ensures that everything is ready before the graph can be executed (among other things, it allocates the textures). - -You can also call `await FrameGraph.whenReadyAsync()` to make sure that all the resources are ready and that the next call to `FrameGraph.execute()` (which is done automatically at the appropriate moment by the framework when `Scene.frameGraph` is defined) will render something and will not be delayed. +Once all tasks have been added to the frame graph, you must build the graph by calling `await FrameGraph.buildAsync()`. This creates the various passes that will be executed when `FrameGraph.execute()` is called and ensures that everything is ready before the graph can be executed (among other things, it allocates textures). 
-Finally, you must manage the resizing of the screen, so simply call `frameGraph.build()` when the engine resizes: +Finally, don't forget to handle screen resizing: ```javascript -engine.onResizeObservable.add(() => { - frameGraph.build(); +engine.onResizeObservable.add(async () => { + await frameGraph.buildAsync(); }); -frameGraph.build(); - -await frameGraph.whenReadyAsync(); +await frameGraph.buildAsync(); ``` -Here's the PG corresponding to this example: +Here's the PG corresponding to this example: ## Using a node render graph @@ -122,15 +124,13 @@ The javascript code: ```javascript const nrg = await BABYLON.NodeRenderGraph.ParseFromSnippetAsync("#CCDXLX", scene); -nrg.build(); - -await nrg.whenReadyAsync(); - -scene.frameGraph = nrg.frameGraph; +await nrg.buildAsync(); ``` That's all you need to make it work with a node render graph! -The full PG: +The full PG: + +By default, calling `nrg.buildAsync()` will also assign the frame graph to `scene.frameGraph`. For more complicated examples, you may need to pass a third parameter to `NodeRenderGraph.ParseFromSnippetAsync()` to configure the node render graph: ```javascript diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphBlocks/frameGraphBlocksDescription.md b/content/features/featuresDeepDive/frameGraph/frameGraphBlocks/frameGraphBlocksDescription.md index a9142fc3c..c3f7a5d01 100644 --- a/content/features/featuresDeepDive/frameGraph/frameGraphBlocks/frameGraphBlocksDescription.md +++ b/content/features/featuresDeepDive/frameGraph/frameGraphBlocks/frameGraphBlocksDescription.md @@ -228,6 +228,10 @@ Once again, the inputs and outputs are self-explanatory: **target** is the textu ## Misc blocks + + +This block allows you to execute a compute shader (WebGPU only - doesn't do anything in WebGL). The compute shader must be configured programmatically. See the NRG playground in [FrameGraphComputeShaderTask](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList#framegraphcomputeshadertask) for an example of how to use this block. + This block allows you to cull a list of objects against a camera frustum: you must provide the camera via the **camera** input, and the list of objects via the **objects** input. The result of this block is provided by the **output**: the list of objects that are (at least partially) inside the camera frustum. @@ -244,8 +248,6 @@ executeBlock.task.func = (_context) => { }; ``` -In WebGPU, you can use this block to execute a compute shader at a specific moment in the execution of the frame graph, for example. - This block allows you to apply a full-screen GUI over a frame graph texture. diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework.md b/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework.md new file mode 100644 index 000000000..4930a7ba6 --- /dev/null +++ b/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework.md @@ -0,0 +1,15 @@ +--- +title: Frame Graph Framework Description +image: +description: Learn all about the Babylon.js Frame Graph system. +keywords: diving deeper, frame graph, rendering, node editor, framework +--- + +See the [Basic Concepts and Getting Started](/features/featuresDeepDive/frameGraph/frameGraphBasicConcepts) section for general information on nomenclature and an overview of the frame graph concept. 
+ +This section describes the frame graph class framework: +* [Overview of the frame graph framework](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphClassOverview) describes the main classes that are part of the framework +* [List of task classes in the frame graph framework](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList) describes all the tasks that can be used with a frame graph. +
+Note that the implementation of the framework is inspired by Unity's [Render Graph System](https://docs.unity3d.com/6000.2/Documentation/Manual/urp/render-graph-introduction.html). + diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphClassOverview.md b/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphClassOverview.md new file mode 100644 index 000000000..cf5c0c008 --- /dev/null +++ b/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphClassOverview.md @@ -0,0 +1,312 @@ +--- +title: Introduction to Frame Graph classes +image: +description: Learn all about the Babylon.js Frame Graph system. +keywords: diving deeper, frame graph, rendering, node editor, overview +--- + +The frame graph class framework consists of several main classes, described below. + +## [FrameGraph](/typedoc/classes/babylon.framegraph) +This is the main class, whose purpose is to allow you to build and execute a frame graph. + +### Main methods and properties +* `addTask(task)`. Adds a task to the graph. +* `addPass(name, whenTaskDisabled)`, `addRenderPass(name, whenTaskDisabled)`, `addObjectListPass(name, whenTaskDisabled)`. Methods that create a new pass for the currently processed task and return that pass. These methods can only be called from a `FrameGraphTask.record()` method, which is the method responsible for creating the passes of a task. +* `buildAsync(waitForReadiness = true)`. Traverses all tasks in the graph and calls their `record()` method, which in turn will create the task's passes. This is also when the actual textures are allocated and linked to the texture handles created in the frame graph. +* `whenReadyAsync(timeStep = 16, maxTimeout = 10000)`. Returns a promise that resolves when the frame graph is ready to be executed. In general, calling `await buildAsync()` should suffice, as this function also waits for readiness by default. +* `execute()`. Traverses all tasks in the graph and executes the passes for each of them. +* **textureManager**. This property gives you access to the frame graph's [Texture manager](#framegraphtexturemanager). +* **optimizeTextureAllocation**. Boolean that determines whether texture allocation should be optimized (i.e., reuse existing textures when possible to limit GPU memory usage). +* **pausedExecution**. Indicates whether the execution of the frame graph is paused (default is false). +
+
+You should not have to call `whenReadyAsync()`, as `buildAsync()` already calls this function by default. However, if you disable this feature, your code should look like this:
+```typescript
+await frameGraph.buildAsync(false);
+
+frameGraph.pausedExecution = true;
+
+await frameGraph.whenReadyAsync();
+
+frameGraph.pausedExecution = false;
+```
+You should generally disable frame graph execution before calling `await FrameGraph.whenReadyAsync()`, so that the frame graph is not executed by the main rendering loop before everything is ready, which could cause errors.
+
+### List of tasks
+The graph itself is stored as a list (array) of tasks: explicit connections between task inputs and outputs are not stored in this class; it is up to the user to add tasks to the graph in the correct order, so that a task T2 that requires the result of another task T1 is added after T1.
+
+Tasks are (implicitly) connected together through their input and output properties: these are simple properties that are declared at the class level. For existing tasks in the framework, output properties are generally prefixed with **output** to differentiate them from inputs.
+
+### Code example
+Outline of operations for creating and using a frame graph:
+```typescript
+const frameGraph = new BABYLON.FrameGraph(scene);
+
+// Clear task
+const clearTask = new BABYLON.FrameGraphClearTextureTask("clear", frameGraph);
+
+// Defines the task inputs
+clearTask.targetTexture = colorTexture;
+clearTask.depthTexture = depthTexture;
+
+// Adds the task to the graph
+frameGraph.addTask(clearTask);
+
+// Render task
+const renderTask = new BABYLON.FrameGraphObjectRendererTask("renderObjects", frameGraph, scene);
+
+// Connects certain inputs of the class to the outputs of the Clear task
+renderTask.targetTexture = clearTask.outputTexture;
+renderTask.depthTexture = clearTask.outputDepthTexture;
+
+// Defines other inputs
+renderTask.objectList = rlist;
+renderTask.camera = camera;
+
+// Adds the task to the graph
+frameGraph.addTask(renderTask);
+
+...adds other tasks...
+
+// Builds the graph when all tasks have been added and waits until everything is ready to be run / executed
+await frameGraph.buildAsync();
+
+// When you want to render the graph, call:
+frameGraph.execute();
+
+// Alternatively, if you set the graph at scene level, execute() will be called automatically every frame
+scene.frameGraph = frameGraph;
+```
+This code corresponds to this graph:
+![Basic graph](/img/frameGraph/graph_skeleton.jpg)
+
+
+## [FrameGraphTask](/typedoc/classes/babylon.framegraphtask)
+This is the base class for a task in a frame graph. A task is usually a rendering process, but it may also be unrelated to rendering (for example, we have a culling task in the frame graph to cull objects relative to a camera).
+
+### Main methods and properties
+* `initAsync()`. This function is called once after the task has been added to the frame graph and before the frame graph is built for the first time. This allows you to initialize asynchronous resources, which is not possible in the constructor.
+* `record()`. Called by `FrameGraph` at build time, it is responsible for creating passes for the task.
+* `isReady()`. Checks if the task is ready to be executed.
+* **onBeforeTaskExecute**. An observable that is triggered before the task is executed.
+* **onAfterTaskExecute**. An observable that is triggered after the task is executed.
+* **disabled**. Property that allows you to enable/disable a task.
+* **dependencies**. 
Property that allows you to define the texture dependencies of the task.
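+
+For example, a small sketch of using these properties from user code (assuming `bloomTask` is a task that has already been added to the graph; the observable and property names are the ones listed above):
+```typescript
+// React to the task execution
+bloomTask.onAfterTaskExecute.add(() => {
+    console.log("bloom has been executed for this frame");
+});
+
+// Temporarily bypass the task: its "disabled" passes will be executed instead of the regular ones
+bloomTask.disabled = true;
+```
+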
+ +Note that as a user of a frame graph (as opposed to creating new tasks), you do not need to concern yourself with the methods outlined above: these methods are automatically called by the framework when you use it. + +### Passes +The main purpose of a task is to manage a list of passes: these passes are executed when the `execute()` method of the frame graph is called (which in turn calls the `_execute()` method of each task). + +Passes are created by the `record()` method, which is the main method of a task, and must be implemented by classes extending `FrameGraphTask` (`record()` is abstract in `FrameGraphTask`). Different types of passes can be created (normal passes, rendering passes, and object list passes). The type of pass you create (by calling the appropriate method of `FrameGraph`) determines the type of context that the callback passed to `Pass.setExecuteFunc(func: (context: T) => void)` receives, which is the function executed when the pass is executed: +* [FrameGraphContext](/typedoc/classes/babylon.framegraphcontext) for normal passes +* [FrameGraphRenderContext](/typedoc/classes/babylon.framegraphrendercontext) for render passes +* [FrameGraphContext](/typedoc/classes/babylon.framegraphcontext) for object list passes (there is no special context class for object list passes yet, but this may change in the future) + +In turn, the context determines what you can do during the execution of the pass (the `FrameGraphContext` class does not have any rendering-related methods, for example, contrary to `FrameGraphRenderContext`). + +Compared to other frame graph implementations, Babylon.js is a bit unusual when it comes to passes, as there are two sets of passes: regular passes and disabled passes. The latter are executed instead of the former if the task is disabled (**disabled** property set to *true*). + +This allows you to easily “remove” a task from the graph without actually deleting it. Removing a task from a graph can be a bit complicated, as the inputs/outputs of other tasks must be reconnected to account for the removal, and the graph must be rebuilt to regenerate the passes. In comparison, disabling a task is very easy and does not require any of these operations. However, this simplicity comes at a price: performance may be (slightly) worse when a task is disabled than when it is removed from the graph. This is because some tasks still need to perform some processing when they are disabled. For example, a post-processing task must copy the contents of its source texture to the target texture when it is disabled. If a post-processing task is completely removed from the graph, you will save this copy. As is often the case, this is a trade-off between ease of use and performance. + +### Task dependencies +Another property worth mentioning is that of **dependencies**. + +You can add a texture to a task's dependencies to indicate that a texture must retain its contents at least until that task before the texture optimizer can reuse it for subsequent tasks. If you do not do this, in some cases, a texture may be reused too early and the graph result may not match your expectations. + +Note that this can only happen if you retrieve a texture from the frame graph to use it in external code. 
For example, if you generate a texture in the frame graph and then later use that texture as a diffuse texture for a material that will be used to render a mesh in another task in the graph, the texture optimizer could potentially reuse this texture for other purposes, as it does not know that the texture must retain its content even after it has been generated (we are assuming that this texture is not explicitly reused by another task in the graph afterwards, which would extend its lifetime). + +Babylon.js already implements a number of tasks (see [Frame Graph Task List](/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList)), but you can also implement your own tasks (see TODOLINK). + +### Code example +As an illustration, here is the complete implementation of a simple task, the “copy to back buffer color” task (the class [FrameGraphCopyToBackbufferColorTask](/typedoc/classes/babylon.framegraphcopytobackbuffercolortask)): +```typescript +export class FrameGraphCopyToBackbufferColorTask extends FrameGraphTask { + /** + * The source texture to copy to the backbuffer color texture. + */ + public sourceTexture: FrameGraphTextureHandle; + + public record() { + if (this.sourceTexture === undefined) { + throw new Error(`FrameGraphCopyToBackbufferColorTask "${this.name}": sourceTexture is required`); + } + + const pass = this._frameGraph.addRenderPass(this.name); + + pass.addDependencies(this.sourceTexture); + + pass.setRenderTarget(backbufferColorTextureHandle); + pass.setExecuteFunc((context) => { + if (!context.isBackbuffer(this.sourceTexture)) { + context.copyTexture(this.sourceTexture); + } + }); + + const passDisabled = this._frameGraph.addRenderPass(this.name + "_disabled", true); + + passDisabled.setRenderTarget(backbufferColorTextureHandle); + passDisabled.setExecuteFunc((_context) => {}); + } +} +``` +Notes: +* For a render pass, you must specify which texture the pass should be rendered to by calling the `RenderPass.setRenderTarget(textureHandle)` method. +* To create a render pass that is used when the task is disabled, simply pass *true* as the second parameter to `FrameGraph.addXXXPass(name, whenTaskDisabled)`. + +## [FrameGraphTextureManager](/typedoc/classes/babylon.framegraphtexturemanager) +This class is responsible for managing textures (rendering target textures, i.e., the textures we render) in a frame graph. + +### Main methods and properties +* `getTextureFromHandle(handle)`. Retrieves the actual texture (an `InternalTexture` instance) from a texture handle. +* `importTexture(name, internalTexture)`. Imports an `InternalTexture` instance into the texture manager. This method returns a handle so that the texture can be used in the frame graph (see explanations below regarding handles). +* `createRenderTargetTexture(name, creationOptions)`. Creates a texture to be used in the render graph. Returns a handle. +* `createDanglingHandle()`. Creates an empty handle. You must call `resolveDanglingHandle()` later, before building the graph, to associate a texture with this handle. +* `resolveDanglingHandle(danglingHandle, handle?, newTextureName?, creationOptions?)`. Associates a texture with a dangling handle. If you do not provide a handle for a texture (second parameter), the third and fourth parameters are used to create a new texture handle by calling `createRenderTargetTexture(newTextureName, creationOptions)`. 
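+
+As a quick illustration of the first two methods, here is a sketch of going back and forth between handles and real textures (assuming `existingTexture` is an `InternalTexture` you created outside the frame graph):
+```typescript
+// Import an existing texture so that frame graph tasks can use it (returns a handle)
+const importedHandle = frameGraph.textureManager.importTexture("my imported texture", existingTexture);
+
+// Retrieve the actual texture behind any handle when external code needs it
+const internalTexture = frameGraph.textureManager.getTextureFromHandle(importedHandle);
+```
+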
+
+### Texture Handles
+To achieve texture reuse (enabled via [FrameGraph.optimizeTextureAllocation](/typedoc/classes/babylon.framegraph#optimizetextureallocation)), which is one of the main goals of frame graphs, textures must be manipulated via handles (which are just numbers under the hood) and not directly using a reference (pointer) to the actual texture. This way, the system is free to link the same texture to different handles without the user knowing. Whenever you need an actual texture pointer, use the `getTextureFromHandle(handle)` method.
+
+When you want to create a texture (render target) to use in a frame graph, call the `FrameGraphTextureManager.createRenderTargetTexture(name, creationOptions)` method. This method will return a handle to the texture, which you can then use as input for certain tasks, for example:
+```typescript
+const colorTexture = frameGraph.textureManager.createRenderTargetTexture("color", {
+    size: { width: 100, height: 100 },
+    options: {
+        createMipMaps: false,
+        types: [BABYLON.Constants.TEXTURETYPE_FLOAT],
+        formats: [BABYLON.Constants.TEXTUREFORMAT_RGBA],
+        samples: 1,
+        useSRGBBuffers: [false],
+        labels: ["color"],
+    },
+    sizeIsPercentage: true,
+});
+
+const clearTask = new BABYLON.FrameGraphClearTextureTask("clear", frameGraph);
+
+clearTask.clearColor = true;
+clearTask.targetTexture = colorTexture;
+
+frameGraph.addTask(clearTask);
+```
+
+### Dangling texture handles
+Sometimes you want to create a texture handle, but you don't yet know what the texture actually is. This happens for all tasks that have an output texture. For these tasks, we want to create the output handle as soon as possible, so that this handle can be used when we build and connect the tasks of a frame graph.
+
+For example:
+```typescript
+const clearTask = new BABYLON.FrameGraphClearTextureTask("clear", frameGraph);
+
+...define the properties of clearTask...
+
+const renderTask = new BABYLON.FrameGraphObjectRendererTask("renderObjects", frameGraph, scene);
+
+renderTask.targetTexture = clearTask.outputTexture;
+```
+**clearTask.outputTexture** must have a value right after the task is created, because we use it immediately to set the value of **renderTask.targetTexture**. This means that the handle must be created in the `FrameGraphClearTextureTask` constructor, but at this point we don't have enough information to associate a texture with this handle. That's why we create a dangling handle in the constructor, which we assign to **FrameGraphClearTextureTask.outputTexture** (the same for the depth texture):
+```typescript
+constructor(name: string, frameGraph: FrameGraph) {
+    super(name, frameGraph);
+
+    this.outputTexture = this._frameGraph.textureManager.createDanglingHandle();
+    this.outputDepthTexture = this._frameGraph.textureManager.createDanglingHandle();
+}
+```
+Later (in `FrameGraphClearTextureTask.record()`), we will resolve the dangling handle (by calling `FrameGraphTextureManager.resolveDanglingHandle()`) with a real texture handle.
+
+### History textures
+Babylon supports a simplified version of history textures: ping-pong textures.
+
+A ping-pong texture is a set of two textures, where one texture is used during frame X, the other during frame X + 1, and then we return to the first texture in frame X + 2, etc.
+ +This means that writes and reads to a ping-pong texture always use the same texture in a given frame, **except** when a rendering operation renders to the ping-pong texture and the shader used for that rendering also uses the ping-pong texture as input: in this case, the shader uses the other texture. + +This can be useful for implementing certain effects. For example, the TAA (Temporal Anti-Aliasing) post-process task uses a ping-pong texture to accumulate different, slightly offset renderings for each frame. + +It is very easy to use (extract from the `record()` method of [FrameGraphTAATask](/typedoc/classes/babylon.framegraphtaatask)): +* Create the ping-pong texture +```typescript +const pingPongTextureCreationOptions: FrameGraphTextureCreationOptions = { + ...creation options here... + + isHistoryTexture: true, +}; + +const pingPongHandle = textureManager.createRenderTargetTexture(`${this.name} history`, pingPongTextureCreationOptions); + +textureManager.resolveDanglingHandle(this.outputTexture, pingPongHandle); + +pass.setRenderTarget(this.outputTexture); +``` +The important parameter is `isHistoryTexture: true,` which tells the system that this texture is actually a history texture (ping-pong). Note that we define this texture as the output texture of the task/render pass. +* Apply the post-process shader when the pass is executed +```typescript +pass.setExecuteFunc((context) => { + context.applyFullScreenEffect( + this._postProcessDrawWrapper, + () => { + context.bindTextureHandle(this._postProcessDrawWrapper.effect!, "textureSampler", this.sourceTexture!); + context.bindTextureHandle(this._postProcessDrawWrapper.effect!, "historySampler", pingPongHandle); + this.postProcess.bind(); + } + ); +}); +``` +You can see that the ping-pong texture is used in read mode by the TAA shader and is bound under the name “historySampler”. This means that it is the other texture of the ping-pong texture that will be bound to the shader during this frame (i.e., the texture we wrote to in the previous frame), not the texture we are writing to. + +## [Pass](/typedoc/classes/babylon.framegraphpass), [RenderPass](/typedoc/classes/babylon.framegraphrenderpass), [ObjectListPass](/typedoc/classes/babylon.framegraphobjectlistpass) +These classes are used to create different types of passes within a task. + +### Main methods and properties +All types of passes: +* `setExecuteFunc(func)`. Sets the function that will be executed when the pass is run. + +Render passes: +* `setRenderTarget(renderTargetHandle?)`. Sets the texture to render to during the pass execution. +* `setRenderTargetDepth(renderTargetHandle?)`. Sets the depth attachment texture to use during pass execution. +* `addDependencies(dependencies)`. Adds a texture dependency to this pass. + +Object list passes: +* `setObjectList(objectList)`. Sets the list of objects output by the pass. + +### Working with passes +Passes must not be created directly (their constructor is marked as “internal”), but by calling `FrameGraph.addPass(name, whenTaskDisabled)`, `FrameGraph.addRenderPass(name, whenTaskDisabled)`, or `FrameGraph.addObjectListPass(name, whenTaskDisabled)`. Furthermore, these methods will check that they are called from a `FrameGraphTask.record()` method, as you are not allowed to create passes from any other location. This is necessary for the frame graph class to manage the graph correctly. 
+
+Depending on the type of pass you are creating, the callback method you provide when calling `setExecuteFunc(callback)` will be called by the system with an appropriate context: `FrameGraphContext` for normal and object list passes, and `FrameGraphRenderContext` for a render pass. You can use this context in your callback function to implement the runtime behavior of the pass.
+
+As explained above, you can also create passes to be executed when the task is disabled: simply pass **true** as the second parameter for the various methods for creating passes. If no passes have been created for the disabled state of a task, the normal passes will be used when the task is disabled.
+
+### Code example
+Here's how the class [FrameGraphCopyToBackbufferColorTask](/typedoc/classes/babylon.framegraphcopytobackbuffercolortask) implements the `record()` method:
+```typescript
+public record() {
+    if (this.sourceTexture === undefined) {
+        throw new Error(`FrameGraphCopyToBackbufferColorTask "${this.name}": sourceTexture is required`);
+    }
+
+    const pass = this._frameGraph.addRenderPass(this.name);
+
+    pass.addDependencies(this.sourceTexture);
+
+    pass.setRenderTarget(backbufferColorTextureHandle);
+    pass.setExecuteFunc((context) => {
+        if (!context.isBackbuffer(this.sourceTexture)) {
+            context.copyTexture(this.sourceTexture);
+        }
+    });
+
+    const passDisabled = this._frameGraph.addRenderPass(this.name + "_disabled", true);
+
+    passDisabled.setRenderTarget(backbufferColorTextureHandle);
+    passDisabled.setExecuteFunc((_context) => {});
+}
+```
+A render pass is created, and **sourceTexture** is added to the pass dependencies. The output texture of the pass is set to `backbufferColorTextureHandle`, which is a special handle referring to the back buffer (the screen).
+
+The callback passed to `setExecuteFunc()` is a simple function that just copies the source texture to the back buffer (we first check if the source texture is not already the back buffer, in which case we don't have to do the copy!).
+
+We also create a specific pass for when the task is disabled: this pass is quite simple, as the callback simply does nothing!
+
+Of course, you are not limited to a single pass per task; you can create multiple passes if your task is more complicated than the example above.
diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList.md b/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList.md
new file mode 100644
index 000000000..c6cb3dde6
--- /dev/null
+++ b/content/features/featuresDeepDive/frameGraph/frameGraphClassFramework/frameGraphTaskList.md
@@ -0,0 +1,805 @@
+---
+title: Frame Graph Task List
+image:
+description: Learn all about the tasks implemented in the Frame Graph system.
+keywords: diving deeper, frame graph, rendering, task
+---
+
+This page describes all the tasks that are implemented in the framework and are available for your own use when creating frame graphs.
+
+Note that most of these tasks are wrappers around existing processes in Babylon.js: whenever applicable, we will provide a link to the original documentation to avoid redundant explanations.
+
+To illustrate the use of each task, two PGs are provided:
+* one using frame graph classes
+* the other using a node render graph
+
+This way, you can choose what best suits your needs.
+
+## Layer tasks
+
+
+
+Provides the same functionalities as the [glow layer](/features/featuresDeepDive/mesh/glowLayer) class.
+ +[Link to the class](/typedoc/classes/babylon.framegraphglowlayertask) + + + + +Inputs: +* **targetTexture**. The target texture to apply the effect layer to. The effect will be blended with the contents of this texture. +* **objectRendererTask**. The object renderer task used to render the objects in the texture to which the layer will be applied. This is needed because the layer may have to inject code in the rendering manager used by object renderer task. +* **layerTexture** (optional). The layer texture to render the effect into. If not provided, a default texture will be created, based on **targetTexture** size, type and format. +
+Properties:
+* **layer**. Lets you access the configuration of the glow layer itself.
+Outputs: +* **outputTexture**. The output texture of the task (same underlying texture as **targetTexture**, but the handle will be different). + + + +Provides the same functionalities as the [highlight layer](/features/featuresDeepDive/mesh/highlightLayer) class. + +[Link to the class](/typedoc/classes/babylon.framegraphhighlightlayertask) + + + + +Inputs: +* **targetTexture**. The target texture to apply the effect layer to. The effect will be blended with the contents of this texture. +* **objectRendererTask**. The object renderer task used to render the objects in the texture to which the layer will be applied. This is needed because the layer may have to inject code in the rendering manager used by object renderer task. +* **layerTexture** (optional). The layer texture to render the effect into. If not provided, a default texture will be created, based on **targetTexture** size, type and format. +
+Properties:
+* **layer**. Lets you access the configuration of the highlight layer itself.
+Outputs: +* **outputTexture**. The output texture of the task (same underlying texture as **targetTexture**, but the handle will be different). +
+Note that the **objectRendererTask** you define for the corresponding property must use a depth texture with a stencil aspect. An exception will be thrown if this is not the case. + +## Miscellaneous tasks + + + +Task used to execute a compute shader (WebGPU only). + +[Link to the class](/typedoc/classes/babylon.framegraphcomputeshadertask) + + + + +Inputs: +* **dispatchSize**. Defines the dispatch size for the compute shader. +* **indirectDispatch** (optional). Defines an indirect dispatch buffer and offset. If set, this will be used instead of the **dispatchSize** property and an indirect dispatch will be performed. +* **execute** (optional). An optional execute function that will be called at the beginning of the task execution. +
+You can use the **execute** function to apply additional settings (uniform parameters, textures, etc.) before executing the compute shader: see the example above. + +The compute shader created by the class is accessible via the **computeShader** getter. For ease of use, all methods of the original [ComputeShader](/typedoc/classes/babylon.computeshader) class (such as `setTexture`, `setInternalTexture`, etc.) are available directly in the `FrameGraphComputeShaderTask` class. + +You can create uniform buffers directly via the `FrameGraphComputeShaderTask` class by calling the `createUniformBuffer(name, description, autoUpdate)` method. The advantage over creating the buffer by calling `new UniformBuffer()` is that, by default (see the **autoUpdate** parameter in `createUniformBuffer`), the `UniformBuffer.update()` method will be called automatically before the compute shader is run. Also, the buffer will be automatically disposed when the task is disposed. + + + +Task used to cull objects that are not visible. + +[Link to the class](/typedoc/classes/babylon.framegraphcullobjectstask) + + + + +Inputs: +* **objectList**. The object list to cull. +* **camera**. The camera to use for culling. +
+Outputs: +* **outputObjectList**. The output object list containing the culled objects. +
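+
+A sketch of wiring the culling task between a camera and an object renderer task (the constructor arguments and the object list shape are assumed to mirror the object renderer example from the class overview page; `renderObjectsTask` is a hypothetical task created earlier):
+```typescript
+const cullTask = new BABYLON.FrameGraphCullObjectsTask("cull", frameGraph, scene);
+
+cullTask.objectList = { meshes: scene.meshes, particleSystems: null };
+cullTask.camera = camera;
+frameGraph.addTask(cullTask);
+
+// The object renderer now only receives the objects that passed the frustum culling
+renderObjectsTask.objectList = cullTask.outputObjectList;
+```
+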
+Notes: +* Only meshes are culled, not particle systems +* If meshes are frozen, culling is not performed and the last culling result before switching to frozen mode is reused +* If the task is disabled, the list of output objects (**outputObjectList**) is identical to the list of input objects (**objectList**). + + + +Task used to execute a custom function. + +[Link to the class](/typedoc/classes/babylon.framegraphexecutetask) + + + + +Inputs: +* **func**. The function to execute when the task is enabled. +* **funcDisabled** (optional). The function to execute when the task is disabled. If not provided, **func** is also used when the task is disabled. +* **customIsReady** (optional). Custom readiness check. +
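+
+For example, a minimal sketch that increments a counter each time the graph is executed (the constructor arguments are assumed to follow the same `(name, frameGraph)` pattern as the other tasks on this page):
+```typescript
+let executionCount = 0;
+
+const counterTask = new BABYLON.FrameGraphExecuteTask("counter", frameGraph);
+
+counterTask.func = (_context) => {
+    executionCount++; // runs once per FrameGraph.execute() call
+};
+
+frameGraph.addTask(counterTask);
+```
+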
+You can use this task for any custom process you want to run during the execution of the frame graph. In the PG example above, we use it to increment a counter each time the task is executed. + + + +Task that renders a GUI texture. + +[Link to the class](/typedoc/classes/babylon.framegraphguitask) + + + + +Inputs: +* **targetTexture**. The target texture to render the GUI to. +
+Properties: +* **gui**. Gets the underlying advanced dynamic texture. +
+Outputs: +* **outputTexture**. The output texture of the task. This is the same texture as the target texture, but the handles are different! +
+Note that you can provide an existing instance of `AdvancedDynamicTexture` at construction time (third parameter of the constructor), but you must ensure that the **useStandalone** property is set to *true*, as this is required for correct use in frame graphs! + +## Post-process tasks + +Unless otherwise specified, all post-process tasks share certain common properties. + +Inputs: +* **sourceTexture**. The source texture to apply the post process on. It's allowed to be `undefined` if the post process does not require a source texture. In that case, `targetTexture` must be provided. +* **sourceSamplingMode**. The sampling mode to use for the source texture. +* **targetTexture** (optional). The target texture to render the post process to. If not supplied, a texture with the same configuration as the source texture will be created. +* **stencilState** (optional). The stencil state to use for the post process. +* **depthAttachmentTexture** (optional). The depth attachment texture to use for the post process. Note that a post-process task never writes to the depth buffer: attaching a depth texture is only useful if you want to test against the depth/stencil aspect or write to the stencil buffer. +
+Properties: +* **depthReadOnly**. If *true*, the depth attachment will be read-only. This means that the post process will not write to the depth/stencil buffer. Setting **depthReadOnly** and **stencilReadOnly** to *true* is useful when you want to also be able to bind this same depth/stencil attachment to a shader. Note that it will only work in WebGPU, as WebGL does not support read-only depth/stencil attachments. +* **stencilReadOnly**. If *true*, the stencil attachment will be read-only. This means that the post process will not write to the stencil buffer. See above for further explanation. +* **disableColorWrite**. If *true*, color write will be disabled when applying the post process. This means that the post process will not write to the color buffer. +* **drawBackFace**. If *true*, the post process will be generated by a back face full-screen quad (CW order). +* **depthTest**. If depth testing should be enabled. +* **viewport** (optional). The viewport to use when applying the post process. If set to *null*, the currently active viewport is used. If *undefined* (default), the viewport is reset to a full screen viewport before applying the post process. +
+Outputs:
+* **outputTexture**. The output texture of the post process. Same texture as **targetTexture**, but with a different handle.
+* **outputDepthAttachmentTexture**. The output depth attachment texture. This texture will point to the same texture as the **depthAttachmentTexture** property if it is set. Note, however, that the handle itself will be different!
+Since these properties are common to all post-process tasks, we will not repeat their description in the following sections. + +In addition, all these tasks are wrappers around existing post-process classes. You should therefore refer to the documentation for these classes (for each task in the following sections, we provide a link to the corresponding post-process class) to find out which specific properties are available for each task: you can access these properties via **postProcessTask.postProcess**. + + + +Task which applies an anaglyph post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphanaglyphtask) + + + + +Inputs: +* **leftTexture**. The texture to use as the left texture. + + + +Task which applies a black and white post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphblackandwhitetask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thinblackandwhitepostprocess). The properties of the post-process. +
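+
+As an illustration of these shared inputs and outputs, here is a sketch that chains a black and white post-process into the back buffer (`sceneColorTexture` is assumed to be a texture handle produced by an earlier task in the graph):
+```typescript
+const bnwTask = new BABYLON.FrameGraphBlackAndWhiteTask("black and white", frameGraph);
+
+// Common post-process wiring: read from a source texture, let the task create its own target
+bnwTask.sourceTexture = sceneColorTexture;
+frameGraph.addTask(bnwTask);
+
+// Copy the result to the screen
+const copyToBackbufferTask = new BABYLON.FrameGraphCopyToBackbufferColorTask("copy to back buffer", frameGraph);
+copyToBackbufferTask.sourceTexture = bnwTask.outputTexture;
+frameGraph.addTask(copyToBackbufferTask);
+```
+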
+ + + +Task which applies a bloom post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphbloomtask) + + + + +This post-process **doesn't share** the common properties of post-processes! + +Inputs: +* **sourceTexture**. The source texture to apply the bloom effect on. +* **sourceSamplingMode**. The sampling mode to use for the source texture. +* **targetTexture** (optional). The target texture to render the bloom effect to. If not supplied, a texture with the same configuration as the source texture will be created. +
+Properties: +* **hdr** (read only). Whether the bloom effect is HDR. When *true*, the bloom effect will use a higher precision texture format (half float or float). Else, it will use unsigned byte. +* [bloom](/typedoc/classes/babylon.thinbloomeffect). The properties of the post-process. +
+Outputs: +* **outputTexture**. The output texture of the bloom effect. +
+Note that you can set **weight**, **kernel**, **threshold**, and **hdr** at construction time (these are constructor parameters). You can change these values later via the **bloom** property, except for **hdr**, which is only a construction parameter. + + + +Task which applies a blur post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphblurtask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thinblurpostprocess). The properties of the post-process. +
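+
+For example, a sketch of tuning the blur through its **postProcess** property (constructor arguments assumed to follow the common `(name, frameGraph)` pattern, and the thin blur post-process assumed to expose the same `kernel` and `direction` settings as its full counterpart):
+```typescript
+const blurTask = new BABYLON.FrameGraphBlurTask("blur", frameGraph);
+
+blurTask.sourceTexture = sceneColorTexture; // a texture handle produced earlier in the graph
+blurTask.postProcess.kernel = 32; // size of the blur kernel
+blurTask.postProcess.direction = new BABYLON.Vector2(1, 0); // horizontal blur
+frameGraph.addTask(blurTask);
+```
+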
+ + + +Task which applies a chromatic aberration post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphchromaticaberrationtask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thinchromaticaberrationpostprocess). The properties of the post-process. +
+ + + +Task which applies a circle of confusion post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphcircleofconfusiontask) + + + + +Inputs: +* **depthTexture**. The depth texture to use for the circle of confusion effect. It must store camera space depth (Z coordinate). +* **depthSamplingMode**. The sampling mode to use for the depth texture. +* **camera**. The camera to use for the circle of confusion effect. +
+Properties: +* [postProcess](/typedoc/classes/babylon.thincircleofconfusionpostprocess). The properties of the post-process. +
+ + + +Task which applies a color correction post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphcolorcorrectiontask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thincolorcorrectionpostprocess). The properties of the post-process. +
+ + + +Task which applies a convolution post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphconvolutiontask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thinconvolutionpostprocess). The properties of the post-process. +
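+
+For example, a sketch that applies a sharpen kernel (the thin convolution post-process is assumed to expose the usual `kernel` array of the convolution effect):
+```typescript
+const convolutionTask = new BABYLON.FrameGraphConvolutionTask("convolution", frameGraph);
+
+convolutionTask.sourceTexture = sceneColorTexture; // a texture handle produced earlier in the graph
+convolutionTask.postProcess.kernel = [0, -1, 0, -1, 5, -1, 0, -1, 0]; // 3x3 sharpen kernel
+frameGraph.addTask(convolutionTask);
+```
+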
+ + + +Task which applies a custom post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphcustompostprocesstask) + + + +Inputs: +* **onApplyObservable**. Observable triggered when bind is called for the post process. Use this to set custom uniforms (see example playground). +
+ + + +Task which applies a depth of field post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphdepthoffieldtask) + + + + +This post-process **doesn't share** the common properties of post-processes! + +Inputs: +* **sourceTexture**. The source texture to apply the depth of field effect on. +* **sourceSamplingMode**. The sampling mode to use for the source texture. +* **depthTexture**. The depth texture to use for the depth of field effect. Should store camera space depth (Z coordinate). +* **depthSamplingMode**. The sampling mode to use for the depth texture. +* **camera**. The camera used to render the scene. +* **targetTexture** (optional). The target texture to render the depth of field effect to. If not supplied, a texture with the same configuration as the source texture will be created. +
+Properties: +* **hdr** (read only). Whether the depth of field effect is applied on HDR textures. When true, the depth of field effect will use a higher precision texture format (half float or float). Else, it will use unsigned byte. +* [depthOfField](/typedoc/classes/babylon.thindepthoffieldeffect). The properties of the post-process. +
+Outputs: +* **outputTexture**. The output texture of the depth of field effect. +
+Note that **hdr** can only be set at construction time (it is a constructor parameter).
+
+
+
+Task which applies an extract highlights post-process.
+
+[Link to the class](/typedoc/classes/babylon.framegraphextracthighlightstask)
+
+
+
+
+
+Properties:
+* [postProcess](/typedoc/classes/babylon.thinextracthighlightspostprocess). The properties of the post-process.
+
+ + + +Task which applies a filter post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphfiltertask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thinfilterpostprocess). The properties of the post-process. +
+ + + +Task which applies a FXAA post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphfxaatask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thinfxaapostprocess). The properties of the post-process. +
+ + + +Task which applies a grain post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphgraintask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thingrainpostprocess). The properties of the post-process. +
+
+
+
+Task which applies an image processing post-process.
+
+[Link to the class](/typedoc/classes/babylon.framegraphimageprocessingtask)
+
+
+
+
+
+Properties:
+* [postProcess](/typedoc/classes/babylon.thinimageprocessingpostprocess). The properties of the post-process.
+
+If you use this post-process, you will probably want to set **disableImageProcessing = true** on the object renderer that renders the texture to which the image processing is applied.
+Alternatively, you can also set `FrameGraphImageProcessingTask.postProcess.fromLinearSpace = false` to indicate that the source texture is in gamma space.
+
+
+
+Task which applies a motion blur post-process.
+
+[Link to the class](/typedoc/classes/babylon.framegraphmotionblurtask)
+
+
+
+
+
+
+Inputs:
+* **velocityTexture** (optional). The velocity texture to use for the motion blur effect. Needed for object-based motion blur.
+* **depthTexture** (optional). The (view) depth texture to use for the motion blur effect. Needed for screen-based motion blur.
+
+Properties: +* [postProcess](/typedoc/classes/babylon.thinmotionblurpostprocess). The properties of the post-process. +
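+As explained in the note just below, the task must be fed a geometry texture matching the motion blur mode. A minimal wiring sketch (task names assumed):
+
+```javascript
+// Object-based motion blur: connect the velocity texture produced by a geometry renderer task
+motionBlurTask.velocityTexture = geometryRendererTask.geometryVelocityTexture;
+
+// ...or, for screen-based motion blur, connect a view depth texture instead:
+// motionBlurTask.depthTexture = geometryRendererTask.geometryViewDepthTexture;
+```
+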
+This task requires a geometry texture as input, which depends on the type of motion blur you want to use: +* for object-based motion blur, you need to connect a **geometry velocity** texture +* for screen-based motion blur, you need to connect a **geometry view depth** texture + +If the appropriate texture is not connected according to the current motion blur type, you will get an error in the console log. + +Both of these textures can be generated by the [FrameGraphGeometryRendererTask](#framegraphgeometryrenderertask) task (see below). + + + +Task which applies a screen space curvature post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphscreenspacecurvaturetask) + + + + +Inputs: +* **normalTexture**. The normal texture to use for the screen space curvature effect. It must store normals in camera view space. +
+Properties: +* [postProcess](/typedoc/classes/babylon.thinscreenspacecurvaturepostprocess). The properties of the post-process. +
+ + + +Task which applies a sharpen post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphsharpentask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thinsharpenpostprocess). The properties of the post-process. +
+ + + +Task which applies a screen-space ambient occlusion post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphssao2renderingpipelinetask) + + + + +This post-process **doesn't share** the common properties of post-processes! + +Inputs: +* **sourceTexture**. The source texture to apply the SSAO2 effect on. +* **sourceSamplingMode**. The sampling mode to use for the source texture. +* **depthTexture**. The depth texture used by the SSAO2 effect (Z coordinate in camera view space). +* **normalTexture**. The normal texture used by the SSAO2 effect (normal vector in camera view space). +* **camera**. The camera used to render the scene. +* **targetTexture** (optional). The target texture to render the SSAO2 effect to. If not supplied, a texture with the same configuration as the source texture will be created. +
+Properties:
+* **ratioSSAO** (read only). The ratio between the SSAO texture size and the source texture size.
+* **ratioBlur** (read only). The ratio between the SSAO blur texture size and the source texture size.
+* **textureType** (read only). The texture type used by the different post processes created by SSAO2. If you want to change it, you must recreate the task and pass the appropriate texture type to the constructor.
+* [ssao](/typedoc/classes/babylon.thinssao2renderingpipeline). The properties of the post-process.
+
+Outputs: +* **outputTexture**. The output texture of the ssao effect. +
+Note that you can set **ratioSSAO**, **ratioBlur** and **textureType** only at construction time (these are constructor parameters). + + + +Task which applies a screen-space reflection post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphssrrenderingpipelinetask) + + + + +This post-process **doesn't share** the common properties of post-processes! + +Inputs: +* **sourceTexture**. The source texture to apply the SSR effect on. +* **sourceSamplingMode**. The sampling mode to use for the source texture. +* **depthTexture**. The depth texture used by the SSR effect. Can be a view or screen space depth texture. +* **normalTexture**. The normal texture used by the SSR effect. Can be a view or world space normal texture. +* **backDepthTexture** (optional). The back depth texture used by the SSR effect. Can be a view or screen space depth texture. This is used when automatic thickness computation is enabled. The back depth texture is the depth texture of the scene rendered for the back side of the objects (that is, front faces are culled). +* **reflectivityTexture**. The reflectivity texture used by the SSR effect. +* **camera**. The camera used to render the scene. +* **targetTexture** (optional). The target texture to render the SSR effect to. If not supplied, a texture with the same configuration as the source texture will be created. +
+Properties:
+* **textureType** (read only). The texture type used by the different post processes created by SSR. If you want to change it, you must recreate the task and pass the appropriate texture type to the constructor.
+* [ssr](/typedoc/classes/babylon.thinssrrenderingpipeline). The properties of the post-process.
+
+Outputs: +* **outputTexture**. The output texture of the ssr effect. +
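+A minimal wiring sketch of the main inputs, taking the depth, normal and reflectivity textures from a geometry renderer task (task names are assumptions; the notes below cover the construction-only parameters, the rebuild requirement and the back depth texture):
+
+```javascript
+ssrTask.sourceTexture = mainRenderTask.outputTexture;                           // scene color
+ssrTask.depthTexture = geometryRendererTask.geometryViewDepthTexture;           // view space depth
+ssrTask.normalTexture = geometryRendererTask.geometryWorldNormalTexture;        // world space normals
+ssrTask.reflectivityTexture = geometryRendererTask.geometryReflectivityTexture; // reflectivity
+ssrTask.camera = camera;
+```
+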
+Note that you can set **textureType** only at construction time (this is a constructor parameter). + +If you programmatically change the blur strength value (the `FrameGraphSSRRenderingPipelineTask.ssr.blurDispersionStrength` property) from 0 to a non-zero value, or vice versa, you must call the `FrameGraph.build()` function again to rebuild the graph! This is because the frame graph task of the SSR rendering pipeline generates blur tasks when this parameter is non-zero, and does not generate them when it is zero (for performance reasons). This means that the rendering passes generated are not the same depending on this parameter, which is why you need to call `build()` again when you change it. + +**backDepthTexture** is optional, and you should only connect a texture to this input if you set **FrameGraphSSRRenderingPipelineTask.ssr.enableAutomaticThicknessComputation = true**. Do not connect a texture if you do not use this option, otherwise you will be penalized in terms of performance for having generated the texture when it is not used by the SSR block. If you connect something to **backDepthTexture**, you must connect the same type of texture as the one you connect to **depthTexture**: a view or a screen space depth texture. You will get an error message in the console if you mix the types. Also, don't forget to set **reverseCulling = true** in the geometry renderer task that generates the back depth texture, as this texture must store the depth of the back faces! + +Refer to [Screen Space Reflections (SSR) Rendering Pipeline](/features/featuresDeepDive/postProcesses/SSRRenderingPipeline) for more information on SSR and the parameters you can use to adjust the effect. + + + +Task which applies a temporal anti-aliasing post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphtaatask) + + + +Inputs: +* **objectRendererTask**. The object renderer task used to render the scene objects. +* **velocityTexture**. The handle to the velocity texture. Only needed if **postProcess.reprojectHistory** is enabled. Note that you must use the linear velocity texture! +
+Properties: +* [postProcess](/typedoc/classes/babylon.thintaapostprocess). The properties of the post-process. +
+This task is slightly different from other post-process tasks, as the **source** input is mandatory and the **target** input is not (yet) used. +In addition, this task must instrument the rendering of objects, which is why you must connect a [FrameGraphObjectRendererTask](#framegraphobjectrenderertask) instance to the **objectRendererTask** input. + +If you are using the [reprojectHistory](/features/featuresDeepDive/postProcesses/TAARenderingPipeline#reproject-history) option, you must provide a **velocityTexture** texture. You can generate this texture using the [FrameGraphGeometryRendererTask](#framegraphgeometryrenderertask) task. Note that you should not connect anything to the **velocityTexture** input if you are not using the reprojection history option, otherwise you may get rendering artifacts (and affect performance). + +In addition, when using TAA post-processing, you probably don't want to use MSAA for the color/depth texture. So set their samples to 1. + +Refer to [Temporal Anti-Aliasing (TAA) Rendering Pipeline](/features/featuresDeepDive/postProcesses/TAARenderingPipeline) for more information on TAA and the parameters you can use to adjust the effect. + + + +Task which applies a tonemap post-process. + +[Link to the class](/typedoc/classes/babylon.framegraphtonemaptask) + + + + +Properties: +* [postProcess](/typedoc/classes/babylon.thintonemappostprocess). The properties of the post-process. +
+ +## Rendering tasks + + + +Task used to render objects to a texture. + +[Link to the class](/typedoc/classes/babylon.framegraphobjectrenderertask) + + + + +Inputs: +* **targetTexture**. The target texture where the objects will be rendered. +* **depthTexture** (optional). The depth attachment texture where the objects will be rendered. +* **shadowGenerators** (optional). The shadow generators used to render the objects. +* **camera**. Camera used to render the objects. +* **objectList**. The list of objects to render. +
+Properties:
+* **depthTest**. If depth testing should be enabled (default is true).
+* **depthWrite**. If depth writing should be enabled (default is true).
+* **disableShadows**. If shadows should be disabled (default is false).
+* **disableImageProcessing**. If image processing should be disabled (default is false). *false* means that the default image processing configuration will be applied (the one from the scene).
+* **isMainObjectRenderer**. Set this property to *true* if this task is the main object renderer of the frame graph. It helps locate the main object renderer when multiple object renderers are used. This is useful for the inspector to know which object renderer to use for additional rendering features like wireframe rendering or frustum light debugging. It is also used to determine the main camera used by the frame graph: this is the camera used by the main object renderer.
+* **renderParticles**. Defines if particles should be rendered (default is true).
+* **renderSprites**. Defines if sprites should be rendered (default is true).
+* **forceLayerMaskCheck**. Forces checking the layerMask property even if a custom list of meshes is provided (i.e. if renderList is not undefined). Default is true.
+* **enableBoundingBoxRendering**. Enables the rendering of bounding boxes for meshes (still subject to `Mesh.showBoundingBox` or `scene.forceShowBoundingBoxes`). Default is true.
+* **enableOutlineRendering**. Enables the rendering of outlines/overlays for meshes (still subject to `Mesh.renderOutline` / `Mesh.renderOverlay`). Default is true.
+* **resolveMSAAColors**. If true, **targetTexture** will be resolved at the end of the render pass, if the texture(s) are MSAA (default: true).
+* **resolveMSAADepth**. If true, **depthTexture** will be resolved at the end of the render pass, if this texture is provided and is MSAA (default: false).
+* **objectRenderer**. The object renderer used to render the objects.
+
+Outputs:
+* **outputTexture**. The output texture. This texture will point to the same texture as the **targetTexture** property. Note, however, that the handle itself will be different!
+* **outputDepthTexture**. The output depth attachment texture. This texture will point to the same texture as the **depthTexture** property if it is set. Note, however, that the handle itself will be different!
+
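+A minimal creation sketch (constructor arguments and helper names are assumptions; see the class link above for the exact signature):
+
+```javascript
+const mainRenderTask = new BABYLON.FrameGraphObjectRendererTask("mainRendering", frameGraph, scene);
+mainRenderTask.targetTexture = clearTask.outputTexture;      // render into the cleared color texture
+mainRenderTask.depthTexture = clearTask.outputDepthTexture;  // and its depth attachment
+mainRenderTask.camera = camera;
+mainRenderTask.objectList = objectList;                      // a FrameGraphObjectList built elsewhere
+frameGraph.addTask(mainRenderTask);
+```
+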
+This is the main task used to render objects on a texture. You can consider it as the equivalent of [RenderTargetTexture](/typedoc/classes/babylon.rendertargettexture) when you want to create a render pass programmatically. + +The **shadowGenerators** input is optional and can be used if you want to generate shadows at the same time as you render objects. This input expects an array of [FrameGraphShadowGeneratorTask](#framegraphshadowgeneratortask) instances. This way, you can generate shadows from multiple lights at once. + + + +Task used to render geometry to a set of textures. + +[Link to the class](/typedoc/classes/babylon.framegraphgeometryrenderertask) + + + + +Inputs: +* **depthTexture** (optional). The depth texture attachment to use for rendering. +* **camera**. Camera used to render the objects. +* **objectList**. The object list used for rendering. +
+Properties:
+* **depthTest**. Whether depth testing is enabled (default is true).
+* **depthWrite**. Whether depth writing is enabled (default is true).
+* **size**. The size of the output textures (default is 100% of the back buffer texture size).
+* **sizeIsPercentage**. Whether the size is a percentage of the back buffer size (default is true).
+* **samples**. The number of samples to use for the output textures (default is 1).
+* **reverseCulling**. Whether to reverse culling (default is false).
+* **dontRenderWhenMaterialDepthWriteIsDisabled**. Indicates if a mesh shouldn't be rendered when its material has depth write disabled (default is true).
+* **textureDescriptions**. The list of texture descriptions used by the geometry renderer task. Only geometry textures described in this array will be generated. See below for more information.
+* **objectRenderer**. The object renderer used by the geometry renderer task.
+* **forceLayerMaskCheck**. Forces checking the layerMask property even if a custom list of meshes is provided (i.e. if renderList is not undefined). Default is true.
+* **resolveMSAAColors**. If true, the output geometry texture(s) will be resolved at the end of the render pass, if **samples** is greater than 1 (default: true).
+* **resolveMSAADepth**. If true, **depthTexture** will be resolved at the end of the render pass, if this texture is provided and **samples** is greater than 1 (default: true).
+
+Methods: +* `excludeSkinnedMeshFromVelocityTexture(skinnedMesh: AbstractMesh)`. Excludes the given skinned mesh from computing bones velocities. Computing bones velocities can have a cost. The cost can be saved by calling this function and by passing the skinned mesh to ignore. +* `removeExcludedSkinnedMeshFromVelocityTexture(skinnedMesh: AbstractMesh)`. Removes the given skinned mesh from the excluded meshes list. +
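+For example (mesh and task names assumed):
+
+```javascript
+// Skip the bones-velocity computation for a skinned mesh that does not need
+// to contribute to the velocity textures
+geometryRendererTask.excludeSkinnedMeshFromVelocityTexture(characterMesh);
+
+// Later, make the mesh contribute to velocities again
+geometryRendererTask.removeExcludedSkinnedMeshFromVelocityTexture(characterMesh);
+```
+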
+Outputs:
+* **outputDepthTexture**. The output depth attachment texture. This texture will point to the same texture as the **depthTexture** property if it is set. Note, however, that the handle itself will be different!
+* **geometryViewDepthTexture**. The depth (in view space) output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryNormViewDepthTexture**. The normalized depth (in view space) output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions! The normalization is `(d - near) / (far - near)`, where **d** is the depth value in view space and **near** and **far** are the near and far planes of the camera.
+* **geometryScreenDepthTexture**. The depth (in screen space) output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryViewNormalTexture**. The normal (in view space) output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryWorldNormalTexture**. The normal (in world space) output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryLocalPositionTexture**. The position (in local space) output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryWorldPositionTexture**. The position (in world space) output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryAlbedoTexture**. The albedo output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryReflectivityTexture**. The reflectivity output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryVelocityTexture**. The velocity output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+* **geometryLinearVelocityTexture**. The linear velocity output texture. Will point to a valid texture only if that texture has been requested in textureDescriptions!
+
+ +This block is used to generate geometry textures, i.e. textures containing special data such as depths in view/screen space, normals in view/world space, reflectivity, etc. Here is a list of all outputs that this block can generate: +* **geometryViewDepthTexture**: depth in camera view space. This is the Z component of the vertex coordinate in the camera's view space and is a value between **near** and **far**, the camera's near and far clipping planes. +* **geometryNormViewDepthTexture**: normalized depth in camera view space. Identical to the value above, but with values between 0 and 1, calculated using the formula `normViewDepth = (viewDepth - near) / (far - near)`. +* **geometryScreenDepthTexture**: depth in screen space. This is the depth written to the depth buffer (`gl_FragCoord.z`) and is a value between 0 and 1. +* **geometryViewNormalTexture**: normal in camera view space. This is the normal to the vertex in camera view space. The vector is normalized before being written to the texture. +* **geometryWorldNormalTexture**: normal in world space. This is the normal at the vertex in world space. The vector is normalized before being written to the texture. It is also scaled and offset by 0.5 to generate components between 0 and 1. +* **geometryLocalPositionTexture**: position in local space. This is the position of the vertex in the object model space, i.e., before any camera or world transformations. +* **geometryWorldPositionTexture**: position in world space. This is the position of the vertex in world space. +* **geometryAlbedoTexture**: albedo color. This is the albedo/diffuse color of the vertex. +* **geometryReflectivityTexture**: reflectivity color. This is the reflectivity color of the vertex (used by SSR, for example). +* **geometryVelocityTexture**: velocity vector in screen space. See [Motion blur by object](https://john-chapman-graphics.blogspot.com/2013/01/per-object-motion-blur.html) for more details on what a velocity texture is. **geometryVelocityTexture** is a texture constructed with the optimization described in the “Format and precision” section to improve accuracy when using an unsigned byte texture type. +* **geometryLinearVelocityTexture**: linear velocity vector in screen space. It is identical to the one above, but without the optimization, so without the `pow()` call. The coordinates are multiplied by 0.5, so that they are between [-0.5, 0.5] instead of [-1, 1]. + + + +Task used to generate shadows from a list of objects. + +[Link to the class](/typedoc/classes/babylon.framegraphshadowgeneratortask) + +[Link to the class (CSM)](/typedoc/classes/babylon.framegraphcascadedshadowgeneratortask) + + + + + +Inputs: +* **objectList**. The object list that generates shadows. +* **light**. The light to generate shadows from. +* **camera**. The camera used to generate the shadows. +
+Inputs (specific to cascaded shadow maps):
+* **depthTexture** (optional). The depth texture used by the **autoCalcDepthBounds** feature (optional if **autoCalcDepthBounds** is set to *false*). This texture is used to compute the min/max depth bounds of the scene to set up the cascaded shadow generator. The texture should contain either “view,” “normalized view,” or “screen” depth values - if possible, connect “normalized view” or “screen” for best performance. **Warning**: Do not set a texture if you are not using the **autoCalcDepthBounds** feature, to avoid generating a depth texture that will not be used.
+
+Properties: +* **mapSize**. The size of the shadow map. +* **useFloat32TextureType**. If true, the shadow map will use a 32 bits float texture type (else, 16 bits float is used if supported). +* **useRedTextureFormat**. If true, the shadow map will use a red texture format (else, a RGBA format is used). +* **bias**. The bias to apply to the shadow map. +* **normalBias**. The normal bias to apply to the shadow map. +* **darkness**. The darkness of the shadows. +* **transparencyShadow**. Gets or sets the ability to have transparent shadows. +* **enableSoftTransparentShadow**. Enables or disables shadows with varying strength based on the transparency. +* **useOpacityTextureForTransparentShadow**. If this is true, use the opacity texture's alpha channel for transparent shadows instead of the diffuse one. +* **filter**. The filter to apply to the shadow map. +* **filteringQuality**. The filtering quality to apply to the filter. +* **shadowGenerator** (read only). The shadow generator. +
+Properties (specific to cascaded shadow maps): +* **numCascades**. The number of cascades. +* **debug**. Indicates whether the shadow generator should display the cascades. +* **stabilizeCascades**. Indicates whether the shadow generator should stabilize the cascades. +* **lambda**. Lambda parameter of the shadow generator. +* **cascadeBlendPercentage**. Cascade blend percentage. +* **depthClamp**. Indicates whether the shadow generator should use depth clamping. +* **autoCalcDepthBounds**. Indicates whether the shadow generator should automatically calculate the depth bounds. +* **autoCalcDepthBoundsRefreshRate**. Defines the refresh rate of the min/max computation used when **autoCalcDepthBounds** is set to true. Use 0 to compute just once, 1 to compute on every frame, 2 to compute every two frames and so on... +* **shadowMaxZ**. Maximum shadow Z value. If 0, will use **camera.maxZ**. +
+Outputs: +* **outputTexture**. The shadow map texture. +
+Use this task when you want to generate shadows from a light. The light must be a “shadow light”, i.e. any light except area lights and hemispherical lights. + +You may be surprised to see a **camera** input, because the scene is rendered from the point of view of the light to generate the shadow map, not from the point of view of a camera. This camera is necessary to: +* split the camera frustum when using a `FrameGraphCascadedShadowGeneratorTask` block +* calculate the LOD of the meshes. The LOD of the meshes is defined according to the distance from a camera, not from a light. +
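+A minimal creation sketch, connected to an object renderer through its **shadowGenerators** input (constructor arguments and helper names are assumptions):
+
+```javascript
+const shadowTask = new BABYLON.FrameGraphShadowGeneratorTask("shadows", frameGraph, scene);
+shadowTask.light = directionalLight;   // the shadow-casting light
+shadowTask.camera = camera;            // used for CSM frustum splitting and mesh LOD selection
+shadowTask.objectList = objectList;    // a FrameGraphObjectList of the shadow casters
+shadowTask.mapSize = 2048;
+frameGraph.addTask(shadowTask);
+
+// The object renderer consumes the shadow maps:
+mainRenderTask.shadowGenerators = [shadowTask];
+```
+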
+Refer to [Shadows](/features/featuresDeepDive/lights/shadows) for general information about shadows, and [Cascaded Shadow Maps](/features/featuresDeepDive/lights/shadows_csm) for specific information about cascaded shadow maps.
+
+
+
+Task used to render a utility layer.
+
+[Link to the class](/typedoc/classes/babylon.framegraphutilitylayerrenderertask)
+
+
+
+
+
+Inputs:
+* **targetTexture**. The target texture of the task.
+* **camera**. The camera used to render the utility layer.
+
+Properties: +* **layer**. The utility layer renderer. +
+Outputs: +* **outputTexture**. The output texture of the task. This is the same texture as the target texture, but the handles are different! +
+This class is a wrapper around [UtilityLayerRenderer](/typedoc/classes/babylon.utilitylayerrenderer) and allows you to use gizmos with frame graphs: see the PGs above for some examples.
+
+## Texture tasks
+
+### FrameGraphClearTextureTask
+
+Task used to clear a texture.
+
+[Link to the class](/typedoc/classes/babylon.framegraphcleartexturetask)
+
+No specific example: the clear texture task is used in all the PGs on this page!
+
+Inputs:
+* **targetTexture** (optional). The color texture to clear.
+* **depthTexture** (optional). The depth attachment texture to clear.
+
+Properties: +* **color**. The color to clear the texture with. +* **clearColor**. If the color should be cleared. +* **convertColorToLinearSpace**. If the color should be converted to linear space (default: false). +* **clearDepth**. If the depth should be cleared. +* **clearStencil**. If the stencil should be cleared. +* **stencilValue**. The value to use to clear the stencil buffer (default: 0). +
+Outputs: +* **outputTexture**. The output texture (same as **targetTexture**, but the handle will be different). +* **outputDepthTexture**. The output depth texture (same as **depthTexture**, but the handle will be different). +
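+A minimal creation sketch (constructor arguments and texture handle names are assumptions; see the class link above):
+
+```javascript
+const clearTask = new BABYLON.FrameGraphClearTextureTask("clear", frameGraph);
+clearTask.targetTexture = colorTextureHandle;   // texture handles created elsewhere in the graph
+clearTask.depthTexture = depthTextureHandle;
+clearTask.color = new BABYLON.Color4(0.1, 0.1, 0.15, 1);
+clearTask.clearColor = true;
+clearTask.clearDepth = true;
+frameGraph.addTask(clearTask);
+```
+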
+The inputs **targetTexture** and **depthTexture** are optional, but you must provide at least one of them (otherwise there is no reason to use a clear task!).
+
+### FrameGraphCopyToBackbufferColorTask
+
+Task which copies a texture to the backbuffer color texture (the screen).
+
+[Link to the class](/typedoc/classes/babylon.framegraphcopytobackbuffercolortask)
+
+No specific example: the copy to backbuffer color task is used in all the PGs on this page!
+
+Inputs:
+* **sourceTexture**. The source texture to copy to the backbuffer color texture.
+
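+This is typically the last task of a graph. A minimal sketch (constructor arguments and handle names are assumptions):
+
+```javascript
+const copyToBackbuffer = new BABYLON.FrameGraphCopyToBackbufferColorTask("toScreen", frameGraph);
+copyToBackbuffer.sourceTexture = finalTextureHandle; // handle of the last texture produced by the graph
+frameGraph.addTask(copyToBackbuffer);
+```
+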
+ +### FrameGraphCopyToTextureTask + +Task used to copy a texture to another texture. + +[Link to the class](/typedoc/classes/babylon.framegraphcopytotexturetask) + +Playground: see the PG for the [Geometry renderer task](#framegraphgeometryrenderertask) + +Inputs: +* **sourceTexture**. The source texture to copy from. +* **targetTexture**. The target texture to copy to. +
+Properties:
+* **viewport** (optional). The viewport to use when doing the copy. If set to *null*, the currently active viewport is used. If *undefined* (default), the viewport is reset to a full screen viewport before performing the copy.
+* **lodLevel**. The LOD level to copy from the source texture (default: 0).
+
+Outputs: +* **outputTexture**. The output texture (same as **targetTexture**, but the handle may be different). +
+ + + +Task which generates mipmaps for a texture. + +[Link to the class](/typedoc/classes/babylon.framegraphgeneratemipmapstask) + + + + +Inputs: +* **targetTexture**. The texture to generate mipmaps for. +
+Outputs: +* **outputTexture**. The output texture (same as **targetTexture**, but the handle may be different). +
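+A minimal usage sketch (constructor arguments and task names are assumptions):
+
+```javascript
+const mipmapsTask = new BABYLON.FrameGraphGenerateMipMapsTask("genMipmaps", frameGraph);
+mipmapsTask.targetTexture = mainRenderTask.outputTexture; // texture that needs its mip chain updated
+frameGraph.addTask(mipmapsTask);
+```
+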
+Note that in a frame graph, mipmaps are not generated automatically; you must use a `FrameGraphGenerateMipMapsTask` task to do so. diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleCustomPostProcess.md b/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleCustomPostProcess.md index d2f45d04e..c8781c3f4 100644 --- a/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleCustomPostProcess.md +++ b/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleCustomPostProcess.md @@ -36,7 +36,7 @@ const task = new BABYLON.FrameGraphCustomPostProcessTask("edgeDetection", frameG uniforms: ["escale", "threshold"], }); -this.task.onApplyObservable.add((effect) => { +task.onApplyObservable.add((effect) => { effect.setFloat("escale", 3); effect.setFloat("threshold", 0.1); }); @@ -54,7 +54,7 @@ const task = new BABYLON.FrameGraphCustomPostProcessTask("edgeDetection", frameG Here is a simple PG that illustrates the two code paths (update line 21 to use the standard or frame graph path): - + ## Making the task available in NRGE @@ -87,10 +87,10 @@ Note: **EdgeDetectionFragment** is the shader code of the post-process. This is the same code as above, but it allows you to easily modify the **escale** and **threshold** properties. -As for the implementation of the node graph rendering block `NodeRenderGraphEdgePostProcessBlock`, it is quite simple thanks to the use of `NodeRenderGraphBasePostProcessBlock`: +As for the implementation of the node graph rendering block `NodeRenderGraphEdgePostProcessBlock`, it is quite simple thanks to the use of `NodeRenderGraphBaseWithPropertiesPostProcessBlock`: * First, we declare a private variable to contain the wrapper class and initialize it in the constructor: ```typescript -export class NodeRenderGraphEdgePostProcessBlock extends BABYLON.NodeRenderGraphBasePostProcessBlock { +export class NodeRenderGraphEdgePostProcessBlock extends BABYLON.NodeRenderGraphBaseWithPropertiesPostProcessBlock { protected override _frameGraphTask: BABYLON.FrameGraphCustomPostProcessTask; public override get task() { @@ -178,7 +178,7 @@ Note: Since a playground can be launched multiple times, we first check whether You can use this PG to test it: - + The scene is displayed normally. To use a frame graph with the edge detection post-process, proceed as follows: * Open the inspector @@ -202,7 +202,7 @@ scene.frameGraph = nrg.frameGraph; await nrg.whenReadyAsync(); ``` - + **Important**: editing the node render graph with the standalone NRGE (https://nrge.babylonjs.com) will not work! 
@@ -212,7 +212,7 @@ This is because the implementation of the block node for the edge detection post Note that you can also open NRGE using a code: ```typescript -const nrg = await BABYLON.NodeRenderGraph.ParseFromSnippetAsync("XQF0ML#1", scene); +const nrg = await BABYLON.NodeRenderGraph.ParseFromSnippetAsync("XQF0ML", scene); nrg.build(); nrg.edit(); diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleGeometryVAT.md b/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleGeometryVAT.md index be81c9697..a93d3c9ae 100644 --- a/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleGeometryVAT.md +++ b/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleGeometryVAT.md @@ -92,19 +92,17 @@ The texture can only be retrieved after the frame graph has been built, so `fram All that remains is to write the code to build the frame graph and execute it: ```javascript -frameGraph.build(); +await frameGraph.buildAsync(); -await frameGraph.whenReadyAsync(); - -engine.onResizeObservable.add(() => { - frameGraph.build(); +engine.onResizeObservable.add(async () => { + await frameGraph.buildAsync(); }); scene.onBeforeRenderObservable.add(() => { frameGraph.execute(); }); ``` -The full PG: +The full PG: You can see that the spiders are now animated in the normal texture. @@ -114,7 +112,7 @@ For completeness, here is the same thing using a node render graph: Node Render Graph: -PG: +PG: Notes: * We disable `autoFillExternalInputs` when loading the node's render graph, because the list of meshes used by the geometry renderer task must not contain the plane on which we display the geometry texture (otherwise, you will get an error such as “GL_INVALID_OPERATION: glDrawElements: feedback loop formed between the frame buffer and the active texture"). diff --git a/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleTransmission.md b/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleTransmission.md index 76d32f769..6cfd11fa6 100644 --- a/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleTransmission.md +++ b/content/features/featuresDeepDive/frameGraph/frameGraphExamples/frameGraphExampleTransmission.md @@ -85,7 +85,7 @@ Key points to highlight from this graph: Here is a playground that uses the node render graph described above to render a scene with transmissive materials: - + We have created a separate class `RenderWithTransmission`, which takes care of loading and configuring the graph for you. This should make it easier to reuse the code in your own projects, but feel free to use this code as a starting point for your own experiments! 
diff --git a/public/img/frameGraph/block_computeshader.jpg b/public/img/frameGraph/block_computeshader.jpg new file mode 100644 index 000000000..4ff9dce46 Binary files /dev/null and b/public/img/frameGraph/block_computeshader.jpg differ diff --git a/public/img/frameGraph/empty.jpg b/public/img/frameGraph/empty.jpg new file mode 100644 index 000000000..ec165f680 Binary files /dev/null and b/public/img/frameGraph/empty.jpg differ diff --git a/public/img/frameGraph/graph_skeleton.jpg b/public/img/frameGraph/graph_skeleton.jpg new file mode 100644 index 000000000..cbfaa3eaa Binary files /dev/null and b/public/img/frameGraph/graph_skeleton.jpg differ diff --git a/public/img/frameGraph/task_anaglyph.jpg b/public/img/frameGraph/task_anaglyph.jpg new file mode 100644 index 000000000..249a4cc5b Binary files /dev/null and b/public/img/frameGraph/task_anaglyph.jpg differ diff --git a/public/img/frameGraph/task_blackandwhite.jpg b/public/img/frameGraph/task_blackandwhite.jpg new file mode 100644 index 000000000..ae7ab160d Binary files /dev/null and b/public/img/frameGraph/task_blackandwhite.jpg differ diff --git a/public/img/frameGraph/task_bloom.jpg b/public/img/frameGraph/task_bloom.jpg new file mode 100644 index 000000000..1cda71c65 Binary files /dev/null and b/public/img/frameGraph/task_bloom.jpg differ diff --git a/public/img/frameGraph/task_blur.jpg b/public/img/frameGraph/task_blur.jpg new file mode 100644 index 000000000..f18a40989 Binary files /dev/null and b/public/img/frameGraph/task_blur.jpg differ diff --git a/public/img/frameGraph/task_chromaticaberration.jpg b/public/img/frameGraph/task_chromaticaberration.jpg new file mode 100644 index 000000000..d7ca8eda3 Binary files /dev/null and b/public/img/frameGraph/task_chromaticaberration.jpg differ diff --git a/public/img/frameGraph/task_circleofconfusion.jpg b/public/img/frameGraph/task_circleofconfusion.jpg new file mode 100644 index 000000000..aa79abc89 Binary files /dev/null and b/public/img/frameGraph/task_circleofconfusion.jpg differ diff --git a/public/img/frameGraph/task_colorcorrection.jpg b/public/img/frameGraph/task_colorcorrection.jpg new file mode 100644 index 000000000..86fde97a8 Binary files /dev/null and b/public/img/frameGraph/task_colorcorrection.jpg differ diff --git a/public/img/frameGraph/task_computeshader.jpg b/public/img/frameGraph/task_computeshader.jpg new file mode 100644 index 000000000..ee267d31f Binary files /dev/null and b/public/img/frameGraph/task_computeshader.jpg differ diff --git a/public/img/frameGraph/task_convolution.jpg b/public/img/frameGraph/task_convolution.jpg new file mode 100644 index 000000000..1980ed160 Binary files /dev/null and b/public/img/frameGraph/task_convolution.jpg differ diff --git a/public/img/frameGraph/task_cull.jpg b/public/img/frameGraph/task_cull.jpg new file mode 100644 index 000000000..c5f1ffb29 Binary files /dev/null and b/public/img/frameGraph/task_cull.jpg differ diff --git a/public/img/frameGraph/task_custompostprocess.jpg b/public/img/frameGraph/task_custompostprocess.jpg new file mode 100644 index 000000000..5c3666ed6 Binary files /dev/null and b/public/img/frameGraph/task_custompostprocess.jpg differ diff --git a/public/img/frameGraph/task_depthoffield.jpg b/public/img/frameGraph/task_depthoffield.jpg new file mode 100644 index 000000000..163926bd8 Binary files /dev/null and b/public/img/frameGraph/task_depthoffield.jpg differ diff --git a/public/img/frameGraph/task_execute.jpg b/public/img/frameGraph/task_execute.jpg new file mode 100644 index 000000000..049c89ec2 
Binary files /dev/null and b/public/img/frameGraph/task_execute.jpg differ diff --git a/public/img/frameGraph/task_extracthighlights.jpg b/public/img/frameGraph/task_extracthighlights.jpg new file mode 100644 index 000000000..543c11f49 Binary files /dev/null and b/public/img/frameGraph/task_extracthighlights.jpg differ diff --git a/public/img/frameGraph/task_filter.jpg b/public/img/frameGraph/task_filter.jpg new file mode 100644 index 000000000..6e1fc8451 Binary files /dev/null and b/public/img/frameGraph/task_filter.jpg differ diff --git a/public/img/frameGraph/task_fxaa.jpg b/public/img/frameGraph/task_fxaa.jpg new file mode 100644 index 000000000..7f598005d Binary files /dev/null and b/public/img/frameGraph/task_fxaa.jpg differ diff --git a/public/img/frameGraph/task_generatemipmaps.jpg b/public/img/frameGraph/task_generatemipmaps.jpg new file mode 100644 index 000000000..d4e763c28 Binary files /dev/null and b/public/img/frameGraph/task_generatemipmaps.jpg differ diff --git a/public/img/frameGraph/task_geometryrenderer.jpg b/public/img/frameGraph/task_geometryrenderer.jpg new file mode 100644 index 000000000..3b1357a46 Binary files /dev/null and b/public/img/frameGraph/task_geometryrenderer.jpg differ diff --git a/public/img/frameGraph/task_glow.jpg b/public/img/frameGraph/task_glow.jpg new file mode 100644 index 000000000..93a86e496 Binary files /dev/null and b/public/img/frameGraph/task_glow.jpg differ diff --git a/public/img/frameGraph/task_grain.jpg b/public/img/frameGraph/task_grain.jpg new file mode 100644 index 000000000..b8eddfccf Binary files /dev/null and b/public/img/frameGraph/task_grain.jpg differ diff --git a/public/img/frameGraph/task_gui.jpg b/public/img/frameGraph/task_gui.jpg new file mode 100644 index 000000000..ad635406f Binary files /dev/null and b/public/img/frameGraph/task_gui.jpg differ diff --git a/public/img/frameGraph/task_highlight.jpg b/public/img/frameGraph/task_highlight.jpg new file mode 100644 index 000000000..b4b0603e9 Binary files /dev/null and b/public/img/frameGraph/task_highlight.jpg differ diff --git a/public/img/frameGraph/task_imageprocessing.jpg b/public/img/frameGraph/task_imageprocessing.jpg new file mode 100644 index 000000000..e5fd38266 Binary files /dev/null and b/public/img/frameGraph/task_imageprocessing.jpg differ diff --git a/public/img/frameGraph/task_motionblur.jpg b/public/img/frameGraph/task_motionblur.jpg new file mode 100644 index 000000000..6ac3376e8 Binary files /dev/null and b/public/img/frameGraph/task_motionblur.jpg differ diff --git a/public/img/frameGraph/task_objectrenderer.jpg b/public/img/frameGraph/task_objectrenderer.jpg new file mode 100644 index 000000000..ff8780d76 Binary files /dev/null and b/public/img/frameGraph/task_objectrenderer.jpg differ diff --git a/public/img/frameGraph/task_screenspacecurvature.jpg b/public/img/frameGraph/task_screenspacecurvature.jpg new file mode 100644 index 000000000..ee2e3a49c Binary files /dev/null and b/public/img/frameGraph/task_screenspacecurvature.jpg differ diff --git a/public/img/frameGraph/task_shadowgenerator.jpg b/public/img/frameGraph/task_shadowgenerator.jpg new file mode 100644 index 000000000..d8b73610f Binary files /dev/null and b/public/img/frameGraph/task_shadowgenerator.jpg differ diff --git a/public/img/frameGraph/task_sharpen.jpg b/public/img/frameGraph/task_sharpen.jpg new file mode 100644 index 000000000..b8552d439 Binary files /dev/null and b/public/img/frameGraph/task_sharpen.jpg differ diff --git a/public/img/frameGraph/task_ssao2.jpg 
b/public/img/frameGraph/task_ssao2.jpg new file mode 100644 index 000000000..2860e664b Binary files /dev/null and b/public/img/frameGraph/task_ssao2.jpg differ diff --git a/public/img/frameGraph/task_ssr.jpg b/public/img/frameGraph/task_ssr.jpg new file mode 100644 index 000000000..ba582b049 Binary files /dev/null and b/public/img/frameGraph/task_ssr.jpg differ diff --git a/public/img/frameGraph/task_taa.jpg b/public/img/frameGraph/task_taa.jpg new file mode 100644 index 000000000..10fe4b242 Binary files /dev/null and b/public/img/frameGraph/task_taa.jpg differ diff --git a/public/img/frameGraph/task_tonemap.jpg b/public/img/frameGraph/task_tonemap.jpg new file mode 100644 index 000000000..cc23e4d42 Binary files /dev/null and b/public/img/frameGraph/task_tonemap.jpg differ diff --git a/public/img/frameGraph/task_utilitylayerrenderer.jpg b/public/img/frameGraph/task_utilitylayerrenderer.jpg new file mode 100644 index 000000000..9b7acacb1 Binary files /dev/null and b/public/img/frameGraph/task_utilitylayerrenderer.jpg differ diff --git a/public/img/playgroundsAndNMEs/pg-4QES4Q-1.png b/public/img/playgroundsAndNMEs/pg-4QES4Q-1.png new file mode 100644 index 000000000..88d9acf9a Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-4QES4Q-1.png differ diff --git a/public/img/playgroundsAndNMEs/pg-4QES4Q-2.png b/public/img/playgroundsAndNMEs/pg-4QES4Q-2.png new file mode 100644 index 000000000..88d9acf9a Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-4QES4Q-2.png differ diff --git a/public/img/playgroundsAndNMEs/pg-539X0P-52.png b/public/img/playgroundsAndNMEs/pg-539X0P-52.png new file mode 100644 index 000000000..4184d6738 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-539X0P-52.png differ diff --git a/public/img/playgroundsAndNMEs/pg-ARI9J5-1.png b/public/img/playgroundsAndNMEs/pg-ARI9J5-1.png new file mode 100644 index 000000000..1ec17b2c5 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-ARI9J5-1.png differ diff --git a/public/img/playgroundsAndNMEs/pg-GCG2Z7-2.png b/public/img/playgroundsAndNMEs/pg-GCG2Z7-2.png new file mode 100644 index 000000000..8cf2c7d6d Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-GCG2Z7-2.png differ diff --git a/public/img/playgroundsAndNMEs/pg-GCG2Z7-3.png b/public/img/playgroundsAndNMEs/pg-GCG2Z7-3.png new file mode 100644 index 000000000..8cf2c7d6d Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-GCG2Z7-3.png differ diff --git a/public/img/playgroundsAndNMEs/pg-GCG2Z7.png b/public/img/playgroundsAndNMEs/pg-GCG2Z7.png new file mode 100644 index 000000000..8cf2c7d6d Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-GCG2Z7.png differ diff --git a/public/img/playgroundsAndNMEs/pg-IG8NRC-81.png b/public/img/playgroundsAndNMEs/pg-IG8NRC-81.png new file mode 100644 index 000000000..83b2999ba Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-IG8NRC-81.png differ diff --git a/public/img/playgroundsAndNMEs/pg-IG8NRC-82.png b/public/img/playgroundsAndNMEs/pg-IG8NRC-82.png new file mode 100644 index 000000000..a6dd09634 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-IG8NRC-82.png differ diff --git a/public/img/playgroundsAndNMEs/pg-JWKDME-174.png b/public/img/playgroundsAndNMEs/pg-JWKDME-174.png new file mode 100644 index 000000000..5d1b2f616 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-JWKDME-174.png differ diff --git a/public/img/playgroundsAndNMEs/pg-JWKDME-175.png b/public/img/playgroundsAndNMEs/pg-JWKDME-175.png new file mode 100644 
index 000000000..a55f0695c Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-JWKDME-175.png differ diff --git a/public/img/playgroundsAndNMEs/pg-JWKDME-176.png b/public/img/playgroundsAndNMEs/pg-JWKDME-176.png new file mode 100644 index 000000000..dab28e137 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-JWKDME-176.png differ diff --git a/public/img/playgroundsAndNMEs/pg-KOBPUW-11.png b/public/img/playgroundsAndNMEs/pg-KOBPUW-11.png new file mode 100644 index 000000000..a5c1ec923 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-KOBPUW-11.png differ diff --git a/public/img/playgroundsAndNMEs/pg-KOBPUW-14.png b/public/img/playgroundsAndNMEs/pg-KOBPUW-14.png new file mode 100644 index 000000000..169405ce4 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-KOBPUW-14.png differ diff --git a/public/img/playgroundsAndNMEs/pg-KOBPUW-9.png b/public/img/playgroundsAndNMEs/pg-KOBPUW-9.png new file mode 100644 index 000000000..b303486dd Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-KOBPUW-9.png differ diff --git a/public/img/playgroundsAndNMEs/pg-MJDYB1-3.png b/public/img/playgroundsAndNMEs/pg-MJDYB1-3.png new file mode 100644 index 000000000..681d9eb9c Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-MJDYB1-3.png differ diff --git a/public/img/playgroundsAndNMEs/pg-MJDYB1-6.png b/public/img/playgroundsAndNMEs/pg-MJDYB1-6.png new file mode 100644 index 000000000..681d9eb9c Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-MJDYB1-6.png differ diff --git a/public/img/playgroundsAndNMEs/pg-MJDYB1-8.png b/public/img/playgroundsAndNMEs/pg-MJDYB1-8.png new file mode 100644 index 000000000..681d9eb9c Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-MJDYB1-8.png differ diff --git a/public/img/playgroundsAndNMEs/pg-OWGOUN-15.png b/public/img/playgroundsAndNMEs/pg-OWGOUN-15.png new file mode 100644 index 000000000..92f7cd513 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-OWGOUN-15.png differ diff --git a/public/img/playgroundsAndNMEs/pg-PIZ1GK-2373.png b/public/img/playgroundsAndNMEs/pg-PIZ1GK-2373.png new file mode 100644 index 000000000..cf014ea54 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-PIZ1GK-2373.png differ diff --git a/public/img/playgroundsAndNMEs/pg-PV8OLY-26.png b/public/img/playgroundsAndNMEs/pg-PV8OLY-26.png new file mode 100644 index 000000000..24cbdd519 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-PV8OLY-26.png differ diff --git a/public/img/playgroundsAndNMEs/pg-PV8OLY-28.png b/public/img/playgroundsAndNMEs/pg-PV8OLY-28.png new file mode 100644 index 000000000..24cbdd519 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-PV8OLY-28.png differ diff --git a/public/img/playgroundsAndNMEs/pg-PV8OLY-29.png b/public/img/playgroundsAndNMEs/pg-PV8OLY-29.png new file mode 100644 index 000000000..24cbdd519 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-PV8OLY-29.png differ diff --git a/public/img/playgroundsAndNMEs/pg-R33LVG-2.png b/public/img/playgroundsAndNMEs/pg-R33LVG-2.png new file mode 100644 index 000000000..99fe89bbb Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-R33LVG-2.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-10.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-10.png new file mode 100644 index 000000000..89674d9f5 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-10.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-11.png 
b/public/img/playgroundsAndNMEs/pg-SUEU9U-11.png new file mode 100644 index 000000000..3e6c3b977 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-11.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-12.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-12.png new file mode 100644 index 000000000..53a8705ea Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-12.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-13.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-13.png new file mode 100644 index 000000000..53a8705ea Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-13.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-14.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-14.png new file mode 100644 index 000000000..cf1a18e19 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-14.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-15.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-15.png new file mode 100644 index 000000000..cf1a18e19 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-15.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-16.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-16.png new file mode 100644 index 000000000..fd303477f Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-16.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-17.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-17.png new file mode 100644 index 000000000..fd303477f Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-17.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-18.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-18.png new file mode 100644 index 000000000..a92342027 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-18.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-19.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-19.png new file mode 100644 index 000000000..a92342027 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-19.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-21.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-21.png new file mode 100644 index 000000000..a58c58a71 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-21.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-22.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-22.png new file mode 100644 index 000000000..a58c58a71 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-22.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-24.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-24.png new file mode 100644 index 000000000..752293dce Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-24.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-25.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-25.png new file mode 100644 index 000000000..da32b8ddd Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-25.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-26.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-26.png new file mode 100644 index 000000000..b2eb01f0f Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-26.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-27.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-27.png new file mode 100644 index 000000000..b2eb01f0f Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-27.png differ diff --git 
a/public/img/playgroundsAndNMEs/pg-SUEU9U-28.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-28.png new file mode 100644 index 000000000..ea1a87e63 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-28.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-29.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-29.png new file mode 100644 index 000000000..ea1a87e63 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-29.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-31.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-31.png new file mode 100644 index 000000000..ef4f97514 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-31.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-32.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-32.png new file mode 100644 index 000000000..ef4f97514 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-32.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-33.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-33.png new file mode 100644 index 000000000..6a2a52e4d Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-33.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-34.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-34.png new file mode 100644 index 000000000..6a2a52e4d Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-34.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-35.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-35.png new file mode 100644 index 000000000..171f5e9ed Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-35.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-36.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-36.png new file mode 100644 index 000000000..171f5e9ed Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-36.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-37.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-37.png new file mode 100644 index 000000000..59758aa76 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-37.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-38.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-38.png new file mode 100644 index 000000000..59758aa76 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-38.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-39.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-39.png new file mode 100644 index 000000000..d643097b3 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-39.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-4.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-4.png new file mode 100644 index 000000000..78336fe27 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-4.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-40.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-40.png new file mode 100644 index 000000000..ab5a01a61 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-40.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-45.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-45.png new file mode 100644 index 000000000..c6158cac0 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-45.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-46.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-46.png new file mode 100644 index 000000000..ac5894ef3 Binary files /dev/null and 
b/public/img/playgroundsAndNMEs/pg-SUEU9U-46.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-47.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-47.png new file mode 100644 index 000000000..b80eeb1ef Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-47.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-48.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-48.png new file mode 100644 index 000000000..b80eeb1ef Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-48.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-49.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-49.png new file mode 100644 index 000000000..5485d99b4 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-49.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-50.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-50.png new file mode 100644 index 000000000..fc04d0120 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-50.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-51.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-51.png new file mode 100644 index 000000000..2b4a48b95 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-51.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-52.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-52.png new file mode 100644 index 000000000..2b4a48b95 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-52.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-54.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-54.png new file mode 100644 index 000000000..cf014ea54 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-54.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-55.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-55.png new file mode 100644 index 000000000..41a50a7c9 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-55.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-57.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-57.png new file mode 100644 index 000000000..41a50a7c9 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-57.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-58.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-58.png new file mode 100644 index 000000000..5485d99b4 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-58.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-74.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-74.png new file mode 100644 index 000000000..367860971 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-74.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-77.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-77.png new file mode 100644 index 000000000..6d2d467ca Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-77.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-78.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-78.png new file mode 100644 index 000000000..754396444 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-78.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-84.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-84.png new file mode 100644 index 000000000..0c3c40d00 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-84.png differ diff --git a/public/img/playgroundsAndNMEs/pg-SUEU9U-85.png b/public/img/playgroundsAndNMEs/pg-SUEU9U-85.png new file mode 100644 index 
000000000..29257399e Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-SUEU9U-85.png differ diff --git a/public/img/playgroundsAndNMEs/pg-YB006J-746.png b/public/img/playgroundsAndNMEs/pg-YB006J-746.png new file mode 100644 index 000000000..e0fbf96d8 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-YB006J-746.png differ diff --git a/public/img/playgroundsAndNMEs/pg-YB006J-749.png b/public/img/playgroundsAndNMEs/pg-YB006J-749.png new file mode 100644 index 000000000..aad87e250 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-YB006J-749.png differ diff --git a/public/img/playgroundsAndNMEs/pg-YB006J-753.png b/public/img/playgroundsAndNMEs/pg-YB006J-753.png new file mode 100644 index 000000000..baa838f86 Binary files /dev/null and b/public/img/playgroundsAndNMEs/pg-YB006J-753.png differ