Commit b3e253a

Updated tutorial to latest task graph
1 parent 543650e commit b3e253a

2 files changed (+88 / -104 lines)


src/content/docs/tutorial/02 Drawing a triangle/06 Buffers.md

Lines changed: 27 additions & 1 deletion
@@ -4,7 +4,7 @@ description: Buffers
 slug: "tutorial/drawing-a-triangle/buffers"
 ---

-## Description
+## General

 When uploading data to the GPU, OpenGL uses target-specific buffer bindings (vertex buffer objects, array buffers, and so on). Daxa instead uses bindless buffers: a buffer is not bound to any one target, the same buffer can be used in all of these roles, and there is therefore only one buffer type.
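
To make the bindless point concrete, here is a rough sketch of one buffer being used both as a copy destination and as shader-visible vertex data. It is only an illustration: `staging_buffer`, `recorder` (a command recorder), and `push` (a push-constant struct) are assumed to exist and are not part of this tutorial step.

```cpp
// One buffer handle can serve any purpose; there is no per-target buffer type.
daxa::BufferId my_buffer = device.create_buffer({
    .size = sizeof(MyVertex) * 3,
    .name = "bindless example buffer",
});

// Used as the destination of a transfer (assuming `recorder` is a command recorder
// and `staging_buffer` is another buffer holding the source data):
recorder.copy_buffer_to_buffer({
    .src_buffer = staging_buffer,
    .dst_buffer = my_buffer,
    .size = sizeof(MyVertex) * 3,
});

// Used as vertex data in a shader by passing its device address in a push constant
// (assuming `push` is a push-constant struct with a device-address field):
push.vertices = device.device_address(my_buffer).value();
```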

@@ -18,3 +18,29 @@ auto buffer_id = device.create_buffer({
     .name = "my vertex data",
 });
 ```
+
+## Uploading to Buffers
+
+To upload to a buffer in Daxa, you query the buffer's host pointer. Not all buffers have a host pointer; make sure to set either `MemoryFlagBits::HOST_ACCESS_SEQUENTIAL_WRITE` or `MemoryFlagBits::HOST_ACCESS_RANDOM` as the `.allocate_info` when creating the buffer:
+
+```cpp
+auto buffer_id = device.create_buffer({
+    .size = sizeof(MyVertex) * 3,
+    .allocate_info = MemoryFlagBits::HOST_ACCESS_SEQUENTIAL_WRITE,
+    .name = "my vertex data",
+});
+```
+
+* Use `MemoryFlagBits::HOST_ACCESS_SEQUENTIAL_WRITE` for buffers that require fast reads on the GPU and are written from the host. This type is suboptimal for host readback; it typically lives in device VRAM.
+* Use `MemoryFlagBits::HOST_ACCESS_RANDOM` for buffers that do not need fast GPU access but do need random CPU reads and writes. This type is optimal for readback (see the readback sketch below); it typically lives in host RAM.
+
+Uploading the data itself is then done via direct writes or a memcpy, like so:
+
+```cpp
+std::array<MyVertex, 3> * vert_buf_ptr = device.buffer_host_address_as<std::array<MyVertex, 3>>(buffer_id).value();
+*vert_buf_ptr = std::array{
+    MyVertex{.position = {-0.5f, +0.5f, 0.0f}, .color = {1.0f, 0.0f, 0.0f}},
+    MyVertex{.position = {+0.5f, +0.5f, 0.0f}, .color = {0.0f, 1.0f, 0.0f}},
+    MyVertex{.position = {+0.0f, -0.5f, 0.0f}, .color = {0.0f, 0.0f, 1.0f}},
+};
+```
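
As a brief aside on the readback case from the list above, a minimal sketch, assuming a hypothetical `readback_buffer_id` created with `MemoryFlagBits::HOST_ACCESS_RANDOM` and assuming the GPU work writing to it has already finished:

```cpp
// Hypothetical buffer created with MemoryFlagBits::HOST_ACCESS_RANDOM.
// All GPU writes to it must be complete before reading on the CPU.
auto * readback_ptr = device.buffer_host_address_as<std::array<MyVertex, 3>>(readback_buffer_id).value();
std::array<MyVertex, 3> host_copy = *readback_ptr; // plain CPU read of the buffer contents
```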

src/content/docs/tutorial/02 Drawing a triangle/07 Task graph.md

Lines changed: 61 additions & 103 deletions
@@ -8,7 +8,7 @@ slug: "tutorial/drawing-a-triangle/task-graph"

 While not entirely necessary, we're going to use TaskGraph, which allows us to compile a list of GPU tasks and their dependencies into a synchronized set of commands. This simplifies your code by making the different tasks completely self-contained, while also generating optimal synchronization for the tasks you describe. TaskGraph is an optional feature, so to use it, add the include `<daxa/utils/task_graph.hpp>` at the top of our main file.

-## Creating a vertex uploading task
+## Creating a Rendering task

 Before we can use a task graph, we first need to create actual tasks that can be executed. The task we are going to create will draw our triangle to a render target.
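
For reference, a minimal sketch of what this implies at the top of the main file, assuming the core Daxa header is already included as in the earlier chapters:

```cpp
#include <daxa/daxa.hpp>
// TaskGraph is an optional utility, so it lives in its own header:
#include <daxa/utils/task_graph.hpp>
```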

@@ -17,105 +17,65 @@ Each task struct must consist of a child struct 'Uses' that will store all share
 For our task, this base task structure will look like this:

 ```cpp
-void upload_vertex_data_task(daxa::TaskGraph & tg, daxa::TaskBufferView vertices)
-{
-    tg.add_task({
-        .attachments = {
-            daxa::inl_attachment(daxa::TaskBufferAccess::TRANSFER_WRITE, vertices),
-        },
-        .task = [=](daxa::TaskInterface ti)
-        {
-            // [...]
-        },
+daxa::Task task = daxa::RasterTask("draw task")
+    // adds an attachment:
+    //   * stage = color_attachment,
+    //   * access = reads_writes,
+    //   * view_type = REGULAR_2D,
+    //   * task_image_view = render_target
+    .color_attachments.reads_writes(daxa::ImageViewType::REGULAR_2D, render_target)
+    .executes([=](daxa::TaskInterface ti){
+        // this callback is executed later when executing the graph after completing recording.
+        // ...
     });
-}
-```
-
-In the `task` callback function, for the sake of brevity, we will create the data we will upload. In this sample, we will use the standard triangle vertices.
-
-```cpp
-auto data = std::array{
-    MyVertex{.position = {-0.5f, +0.5f, 0.0f}, .color = {1.0f, 0.0f, 0.0f}},
-    MyVertex{.position = {+0.5f, +0.5f, 0.0f}, .color = {0.0f, 1.0f, 0.0f}},
-    MyVertex{.position = {+0.0f, -0.5f, 0.0f}, .color = {0.0f, 0.0f, 1.0f}},
-};
-```
-
-To send the data to the GPU, we can create a staging buffer, which has host access, so that we can then issue a command to copy from this buffer to the dedicated GPU memory.
-
-```cpp
-auto staging_buffer_id = ti.device.create_buffer({
-    .size = sizeof(data),
-    .allocate_info = daxa::MemoryFlagBits::HOST_ACCESS_RANDOM,
-    .name = "my staging buffer",
-});
-```
-
-We can also ask the command recorder to destroy this temporary buffer since we don't care about it living, but we DO need it to survive through its usage on the GPU (which won't happen until after these commands are submitted), so we tell the command recorder to destroy it in a deferred fashion.
-
-```cpp
-ti.recorder.destroy_buffer_deferred(staging_buffer_id);
-```
-
-We then get the memory-mapped pointer of the staging buffer, and write the data directly to it.
-
-```cpp
-auto * buffer_ptr = ti.device.buffer_host_address_as<std::array<MyVertex, 3>>(staging_buffer_id).value();
-*buffer_ptr = data;
-ti.recorder.copy_buffer_to_buffer({
-    .src_buffer = staging_buffer_id,
-    .dst_buffer = ti.get(vertices).ids[0],
-    .size = sizeof(data),
-});
+tg.add_task(task);
 ```

-## Creating a Rendering task
+For the drawing, we need the following within the execute callback.

-We will again create a simple task:
+Within the task callback we have access to the device, a fast transient allocator, a command recorder, and accessor functions for the data around attachments:

 ```cpp
-void draw_vertices_task(daxa::TaskGraph & tg, std::shared_ptr<daxa::RasterPipeline> pipeline, daxa::TaskBufferView vertices, daxa::TaskImageView render_target)
+void draw_swapchain_task_callback(daxa::TaskInterface ti, daxa::RasterPipeline * pipeline, daxa::TaskImageView color_target, daxa::BufferId vertex_buffer)
 {
-    tg.add_task({
-        .attachments = {
-            daxa::inl_attachment(daxa::TaskBufferAccess::VERTEX_SHADER_READ, vertices),
-            daxa::inl_attachment(daxa::TaskImageAccess::COLOR_ATTACHMENT, daxa::ImageViewType::REGULAR_2D, render_target),
-        },
-        .task = [=](daxa::TaskInterface ti)
-        {
-            // [...]
+    // The task interface provides a way to get the attachment info:
+    auto image_info = ti.info(color_target).value();
+    auto image_id = ti.id(color_target);
+    auto image_view_id = ti.view(color_target);
+    auto image_layout = ti.layout(color_target);
+
+    // When starting a render pass via a rasterization pipeline, daxa "eats" a generic command recorder
+    // and turns it into a RenderCommandRecorder.
+    // Only the RenderCommandRecorder can record raster commands.
+    // The RenderCommandRecorder can only record commands that are valid within a render pass.
+    // This way daxa ensures type safety for command recording.
+    daxa::RenderCommandRecorder render_recorder = std::move(ti.recorder).begin_renderpass({
+        .color_attachments = std::array{
+            daxa::RenderAttachmentInfo{
+                .image_view = ti.view(color_target),
+                .load_op = daxa::AttachmentLoadOp::CLEAR,
+                .clear_value = std::array<daxa::f32, 4>{0.1f, 0.0f, 0.5f, 1.0f},
+            },
         },
-        .name = "draw vertices",
+        .render_area = {.width = image_info.size.x, .height = image_info.size.y},
     });
-}
-```
-
-We first need to get the screen width and height in the callback function. We can do this by getting the target image dimensions.
-
-```cpp
-auto const size = ti.device.info(ti.get(render_target).ids[0]).value().size;
-```
-
-Next, we need to record an actual renderpass. The values are pretty self-explanatory if you have used OpenGL before. This contains the actual rendering logic.
-
-```cpp
-daxa::RenderCommandRecorder render_recorder = std::move(ti.recorder).begin_renderpass({
-    .color_attachments = std::array{
-        daxa::RenderAttachmentInfo{
-            .image_view = ti.get(render_target).view_ids[0],
-            .load_op = daxa::AttachmentLoadOp::CLEAR,
-            .clear_value = std::array<daxa::f32, 4>{0.1f, 0.0f, 0.5f, 1.0f},
-        },
-    },
-    .render_area = {.width = size.x, .height = size.y},
-});
+    // Here, we'll bind the pipeline to be used in the draw call below.
+    render_recorder.set_pipeline(*pipeline);
+
+    // Very importantly, task graph packs up our attachment shader data into a byte blob.
+    // We need to pass this blob to our shader somehow.
+    // The typical way to do this is to assign the blob to the push constant.
+    render_recorder.push_constant(MyPushConstant{
+        .vertices = ti.device.device_address(vertex_buffer).value(),
+    });
+    // and issue the draw call with the desired number of vertices.
+    render_recorder.draw({.vertex_count = 3});

-render_recorder.set_pipeline(*pipeline);
-render_recorder.push_constant(MyPushConstant{
-    .my_vertex_ptr = ti.device.device_address(ti.get(vertices).ids[0]).value(),
-});
-render_recorder.draw({.vertex_count = 3});
-ti.recorder = std::move(render_recorder).end_renderpass();
+    // VERY IMPORTANT! A render pass must be ended once you are done recording into it!
+    // Ending the render pass returns the original command recorder.
+    // Assign it back to the task interface's command recorder.
+    ti.recorder = std::move(render_recorder).end_renderpass();
+}
 ```
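
The callback above assigns the vertex buffer's device address to a `MyPushConstant`. As a reminder, a minimal sketch of the C++-side shape this assumes; the authoritative definition lives in the tutorial's shared file, and the field name `vertices` is assumed here to match the assignment above:

```cpp
// Assumed C++-side shape of the push constant used in the callback above.
// daxa::DeviceAddress matches the value returned by device.device_address(...).value().
struct MyPushConstant
{
    daxa::DeviceAddress vertices; // address of the vertex buffer read in the vertex shader
};
```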

 ## Creating a Rendering TaskGraph
@@ -128,16 +88,7 @@ Back in our main method, the first we'll make is the swap chain image task resou
 auto task_swapchain_image = daxa::TaskImage{{.swapchain_image = true, .name = "swapchain image"}};
 ```

-We will also create a buffer task resource, for our MyVertex buffer buffer_id. We do something a little special here, which is that we set the initial access of the buffer to be vertex shader read, and that's because we'll create a task list that will upload the buffer.
-
-```cpp
-auto task_vertex_buffer = daxa::TaskBuffer({
-    .initial_buffers = {.buffers = std::span{&buffer_id, 1}},
-    .name = "task vertex buffer",
-});
-```
-
-Next, we need to create the actual task graph itself:
+We need to create the actual task graph itself:

 ```cpp
 auto loop_task_graph = daxa::TaskGraph({
@@ -149,15 +100,22 @@ auto loop_task_graph = daxa::TaskGraph({

 We need to explicitly declare all uses of persistent task resources because manually marking used resources makes it possible to detect errors in your graph recording.

+The vertex buffer is only read after its initialization, so it needs no runtime synchronization: it should be ignored by the task graph, get no attachment in any task, and instead be passed directly via the push constant.
+
 ```cpp
-loop_task_graph.use_persistent_buffer(task_vertex_buffer);
 loop_task_graph.use_persistent_image(task_swapchain_image);
 ```

 Since we need the task graph to do something, we add the task that draws to the screen:

-```cpp
-draw_vertices_task(loop_task_graph, pipeline, task_vertex_buffer, task_swapchain_image);
+```cpp
+auto draw_swapchain_task =
+    daxa::RasterTask("draw triangle")
+    .color_attachment.reads_writes(daxa::ImageViewType::REGULAR_2D, task_swapchain_image.view())
+    .executes(draw_swapchain_task_callback, pipeline.get(), buffer_id);
+
+// Insert the task into the graph:
+loop_task_graph.add_task(draw_swapchain_task);
 ```

 Once we have added all the tasks we want, we have to tell the task graph we are done.
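
A rough sketch of how "telling the task graph we are done" typically looks, assuming we also want to present the swapchain image each frame:

```cpp
// Submit the recorded work, present the swapchain image, and finalize the graph.
// After complete(), the graph can be executed every frame without re-recording.
loop_task_graph.submit({});
loop_task_graph.present({});
loop_task_graph.complete({});
```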
