`src/content/docs/tutorial/02 Drawing a triangle/06 Buffers.md`
---
description: Buffers
slug: "tutorial/drawing-a-triangle/buffers"
---
## General
When uploading data to the GPU, OpenGL uses target-specific buffer bindings (vertex/array buffers, etc.). Daxa uses bindless buffers instead. This means a buffer isn't bound to one target only: one buffer can be used in all of these different bind targets, and there is therefore only one buffer 'type'.
```cpp
auto buffer_id = device.create_buffer({
    // the size here is illustrative; use the byte size of your vertex data
    .size = sizeof(MyVertex) * 3,
    .name = "my vertex data",
});
```
## Uploading to Buffers
To upload to a buffer in daxa, you query the buffer's host pointer. Not all buffers have a host pointer; make sure to set either `MemoryFlagBits::HOST_ACCESS_SEQUENTIAL_WRITE` or `MemoryFlagBits::HOST_ACCESS_RANDOM` as `.allocate_info` when creating the buffer (a sketch follows the list below):
* Use `MemoryFlagBits::HOST_ACCESS_SEQUENTIAL_WRITE` for buffers that need fast GPU reads and host writes. This type is suboptimal for host readback. It typically lives in device VRAM.
* Use `MemoryFlagBits::HOST_ACCESS_RANDOM` for buffers that do not need fast GPU access but require random CPU reads and writes. This type is optimal for readback. It typically lives in host RAM.
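For example, a readback buffer using the second flag might be created like this (a minimal sketch; `.size` and `.name` are illustrative, and the flag is passed as `.allocate_info` as described above):

```cpp
auto readback_buffer_id = device.create_buffer({
    .size = sizeof(MyVertex) * 3,                              // assumed payload size
    .allocate_info = daxa::MemoryFlagBits::HOST_ACCESS_RANDOM, // random host reads/writes, optimal for readback
    .name = "my readback buffer",
});
```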
Uploading the data itself is then done via direct writes or a `memcpy`, like so:
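A minimal sketch, assuming the buffer was created with one of the host-access flags above; the exact name of the host-pointer accessor is an assumption and may differ between daxa versions:

```cpp
#include <cstring> // for std::memcpy

// query the buffer's host pointer (only valid for host-accessible buffers);
// the accessor name is an assumption and may differ between daxa versions
auto * ptr = device.get_host_address_as<MyVertex>(buffer_id).value();

// direct writes ...
ptr[0] = MyVertex{.position = {-0.5f, +0.5f, 0.0f}, .color = {1.0f, 0.0f, 0.0f}};

// ... or a memcpy, assuming 'vertices' is a std::array<MyVertex, 3> of triangle data
std::memcpy(ptr, vertices.data(), vertices.size() * sizeof(MyVertex));
```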
While not strictly necessary, we're going to use TaskGraph, which allows us to compile a list of GPU tasks and their dependencies into a synchronized set of commands. This simplifies your code by making different tasks completely self-contained, while also generating optimal synchronization for the tasks you describe. Since TaskGraph is an optional feature, add the include `<daxa/utils/task_graph.hpp>` at the top of the main file.
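For reference (the build-option name in the comment is an assumption based on daxa's feature-flag naming):

```cpp
// requires daxa to be built with its task graph utility enabled
// (e.g. a DAXA_ENABLE_UTILS_TASK_GRAPH-style CMake option / vcpkg feature)
#include <daxa/utils/task_graph.hpp>
```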
## Creating a Rendering Task
Before we can use a task graph, we first need to create actual tasks that can be executed. The first task we are going to create will upload vertex data to the GPU.
Each task struct must consist of a child struct `Uses` that will store all shared resources used by the task.
For our task, this base task structure will look like this (a sketch: the struct name and exact member layout are assumptions, and the elided parts are marked with `....`):

```cpp
struct RenderToSwapchainTask
{
    struct Uses
    {
        // the shared resources this task reads/writes (e.g. the render target image)
        ....
    } uses = {};

    void callback(daxa::TaskInterface ti)
    {
        // this callback is executed later when executing the graph after completing recording.
        ....
    }
};
```
In the `task` callback function, for the sake of brevity, we will create the data we will upload inline. In this sample, we will use the standard triangle vertices.
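A sketch of that data; the `MyVertex` field names are assumptions (they follow the vertex layout used earlier in this tutorial):

```cpp
#include <array>

auto vertices = std::array{
    MyVertex{.position = {-0.5f, +0.5f, 0.0f}, .color = {1.0f, 0.0f, 0.0f}},
    MyVertex{.position = {+0.5f, +0.5f, 0.0f}, .color = {0.0f, 1.0f, 0.0f}},
    MyVertex{.position = { 0.0f, -0.5f, 0.0f}, .color = {0.0f, 0.0f, 1.0f}},
};
```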
To send the data to the GPU, we can create a staging buffer, which has host access, and then issue a command to copy from this staging buffer into dedicated GPU memory.
```cpp
auto staging_buffer_id = ti.device.create_buffer({
    .size = sizeof(MyVertex) * 3,                                        // illustrative: byte size of the triangle data
    .allocate_info = daxa::MemoryFlagBits::HOST_ACCESS_SEQUENTIAL_WRITE, // gives the staging buffer a host pointer
    .name = "my staging buffer",
});
```
We can also ask the command recorder to destroy this temporary buffer, since we don't need it afterwards. We DO, however, need it to survive its usage on the GPU (which won't happen until after these commands are submitted), so we tell the command recorder to destroy it in a deferred fashion.
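That deferral might look like this (a sketch; the `destroy_buffer_deferred` name is an assumption about daxa's recorder API):

```cpp
// queue destruction; the buffer stays alive until the GPU has finished
// executing the commands recorded so far
ti.recorder.destroy_buffer_deferred(staging_buffer_id);
```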
We first need to get the screen width and height in the callback function. We can do this by querying the dimensions of the target image.
```cpp
auto const size = ti.device.info(ti.get(render_target).ids[0]).value().size;
```
Next, we need to record an actual renderpass, which contains the actual rendering logic. The values are pretty self-explanatory if you have used OpenGL before.
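A sketch of that recording, based on daxa's sample style; the `RenderAttachmentInfo` fields, the renderpass recorder hand-off, and the `pipeline` variable are all assumptions here:

```cpp
// move the recorder into a renderpass-scoped recorder
auto render_recorder = std::move(ti.recorder).begin_renderpass({
    .color_attachments = std::array{
        daxa::RenderAttachmentInfo{
            .image_view = ti.get(render_target).view_ids[0],
            .load_op = daxa::AttachmentLoadOp::CLEAR,
            .clear_value = std::array<daxa::f32, 4>{0.1f, 0.0f, 0.5f, 1.0f},
        },
    },
    .render_area = {.width = size.x, .height = size.y},
});
render_recorder.set_pipeline(*pipeline); // 'pipeline' assumed created earlier in the tutorial
render_recorder.draw({.vertex_count = 3});
// end the renderpass to recover the plain recorder
ti.recorder = std::move(render_recorder).end_renderpass();
```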
Back in our main method, the first thing we'll make is the swapchain image task resource:
```cpp
auto task_swapchain_image = daxa::TaskImage{{.swapchain_image = true, .name = "swapchain image"}};
```
We need to create the actual task graph itself:
```cpp
auto loop_task_graph = daxa::TaskGraph({
    // the fields below are assumptions following daxa's tutorial setup;
    // the swapchain is needed so the graph can present
    .device = device,
    .swapchain = swapchain,
    .name = "loop",
});
```
We need to explicitly declare all uses of persistent task resources, because manually marking used resources makes it possible to detect errors in your graph recording.
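For our swapchain image, that declaration might look like this (a sketch; `use_persistent_image` follows the task graph API as used in daxa's own tutorial):

```cpp
loop_task_graph.use_persistent_image(task_swapchain_image);
```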
The vertex buffer is read-only after initialization, and therefore needs no runtime synchronization; it should be ignored by the task graph and get no attachment in tasks. Instead, it should be passed directly via push constants.
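A sketch of that hand-off; the push-constant struct, the `push_constant` call, and the device-address query are assumptions mirroring daxa's buffer-device-address style:

```cpp
// hypothetical push constant struct (would live in the shared shader header)
struct MyPushConstant
{
    daxa::DeviceAddress vertex_ptr; // raw GPU address of the vertex buffer
};

// inside the render task's callback, after setting the pipeline:
render_recorder.push_constant(MyPushConstant{
    .vertex_ptr = ti.device.device_address(buffer_id).value(),
});
```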