for more than just Graphics in your application. Our goal with this tutorial is to equip you
with the tools necessary to tackle any Vulkan application development and to
think critically about how your applications can benefit from the GPU.

=== Practical considerations: Don't offload everything to the GPU

While Vulkan compute can deliver impressive speedups, it's not always advisable to offload every subsystem to the GPU, especially on mobile:

* Mobile power and thermals: Many phones and tablets use big.LITTLE CPU clusters and mobile GPUs that are power- and thermally constrained. Sustained heavy GPU compute can quickly lead to thermal throttling, causing frame rate drops and inconsistent latency.
* Scheduling and latency: GPUs excel at throughput, but some tasks (tight control loops, small pointer-heavy updates) are better left on the CPU because of dispatch overhead and scheduling latency.
* Determinism and debugging: For gameplay-critical physics, determinism and step-by-step debugging on the CPU can be decisive advantages. Consider keeping the broad phase, or the whole simulation, on the CPU on mobile, or use a hybrid approach (e.g., CPU broad phase + GPU narrow phase).
* Memory bandwidth: On integrated architectures, the CPU and GPU share memory bandwidth, so aggressively moving everything to the GPU can contend with graphics work and hurt both frame time and battery life (see the device-class sketch after this list).
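
As a starting point for these decisions, the device class reported by Vulkan can inform a sensible default policy. The helper below is a minimal sketch under our own naming (it is not part of the tutorial's engine code) and uses the plain C API for brevity: it leans toward keeping heavy subsystems on the CPU when the GPU is integrated and therefore shares bandwidth and a thermal budget with the CPU.

[source,cpp]
----
#include <vulkan/vulkan.h>

// Hypothetical helper: should GPU offload be the default for a heavy
// subsystem on this device? Integrated GPUs (typical on mobile) share
// memory bandwidth and a thermal budget with the CPU, so we default to a
// conservative, CPU-leaning policy there.
bool preferGpuOffloadByDefault(VkPhysicalDevice physicalDevice) {
    VkPhysicalDeviceProperties properties{};
    vkGetPhysicalDeviceProperties(physicalDevice, &properties);

    switch (properties.deviceType) {
    case VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU:
        return true;   // dedicated memory and power budget: offload freely
    case VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU:
    default:
        return false;  // shared bandwidth and thermals: keep more work on the CPU
    }
}
----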

Audio-specific guidance:

* Prefer platform audio APIs and any available dedicated audio hardware/DSP when feasible (for mixing, resampling, and effects). This path often provides lower latency, better power characteristics, and more predictable behavior across devices (see the sketch after this list).
* HRTF support in dedicated hardware is not widespread in practice; most consumer devices rely on software HRTF in the OS or the application. Evaluate your needs: a software HRTF pipeline may be perfectly adequate, and dedicating GPU compute to spatial audio is rarely necessary unless you have many sources or complex effects.
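
As one concrete illustration of the platform-audio path, the sketch below opens a low-latency output stream with Android's AAudio API (available since Android 8.0) and leaves mixing to a CPU callback. The function and callback names are ours, the callback body is a placeholder, and equivalent routes exist on other platforms (Core Audio, WASAPI, and so on).

[source,cpp]
----
#include <aaudio/AAudio.h>
#include <cstring>

// Placeholder mix callback: AAudio calls this on a high-priority audio
// thread; fill audioData with numFrames frames of interleaved float PCM.
static aaudio_data_callback_result_t mixCallback(AAudioStream* /*stream*/,
                                                 void* /*userData*/,
                                                 void* audioData,
                                                 int32_t numFrames) {
    // Run the CPU (or software-HRTF) mixer here; silence for now.
    std::memset(audioData, 0, sizeof(float) * 2 * numFrames);
    return AAUDIO_CALLBACK_RESULT_CONTINUE;
}

// Opens and starts a low-latency stereo float stream, or returns nullptr on failure.
AAudioStream* openLowLatencyOutput() {
    AAudioStreamBuilder* builder = nullptr;
    if (AAudio_createStreamBuilder(&builder) != AAUDIO_OK) return nullptr;

    AAudioStreamBuilder_setPerformanceMode(builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);
    AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT);
    AAudioStreamBuilder_setChannelCount(builder, 2);
    AAudioStreamBuilder_setDataCallback(builder, mixCallback, nullptr);

    AAudioStream* stream = nullptr;
    const aaudio_result_t result = AAudioStreamBuilder_openStream(builder, &stream);
    AAudioStreamBuilder_delete(builder);
    if (result != AAUDIO_OK) return nullptr;

    AAudioStream_requestStart(stream);
    return stream;
}
----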

Practical recommendations:

* Profile first: Establish CPU and GPU baselines before moving any work; a timestamp-query sketch for the GPU side follows this list.
* Favor hybrid designs: Offload the clearly parallel, heavy kernels (e.g., batched constraint solves, FFT/IR convolution) while keeping control and coordination on the CPU.
* Plan for mobility: Provide runtime toggles that switch between the CPU and GPU paths based on device class, thermal state, and power mode.
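
For the GPU half of that baseline, Vulkan timestamp queries measure how long a candidate compute workload actually takes on a given device. The sketch below uses the plain C API and assumes a query pool created with VK_QUERY_TYPE_TIMESTAMP and queryCount = 2, a queue whose timestampValidBits is non-zero, and a caller-supplied recordCandidateDispatch function standing in for the workload under test; timestampPeriod comes from VkPhysicalDeviceLimits and is expressed in nanoseconds per tick. Compare the result against a simple std::chrono timing of the existing CPU path before committing to a move, and keep both paths behind the runtime toggle described above.

[source,cpp]
----
#include <vulkan/vulkan.h>
#include <cstdint>

// Record two timestamps around a candidate compute workload.
void recordTimedDispatch(VkCommandBuffer cmd, VkQueryPool pool,
                         void (*recordCandidateDispatch)(VkCommandBuffer)) {
    vkCmdResetQueryPool(cmd, pool, 0, 2);
    vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, pool, 0);
    recordCandidateDispatch(cmd);  // e.g. bind a compute pipeline and vkCmdDispatch
    vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, pool, 1);
}

// After the submission has completed, convert the tick delta to milliseconds.
double gpuDispatchMillis(VkDevice device, VkQueryPool pool, float timestampPeriod) {
    uint64_t ticks[2] = {};
    vkGetQueryPoolResults(device, pool, 0, 2, sizeof(ticks), ticks,
                          sizeof(uint64_t),
                          VK_QUERY_RESULT_64_BIT | VK_QUERY_RESULT_WAIT_BIT);
    return double(ticks[1] - ticks[0]) * timestampPeriod * 1e-6;  // ns -> ms
}
----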

=== Prerequisites

This chapter builds extensively on the engine architecture and Vulkan foundations established in previous chapters. The modular design patterns we've implemented become crucial when adding subsystems that need to integrate cleanly with existing rendering, camera, and resource management systems.