Writing custom shaders
This lesson covers the basics of writing your own HLSL shaders and using them with DirectX Tool Kit, in particular to customize SpriteBatch.
The general approach is to author your own shaders in HLSL and compile them. For this lesson, we'll focus on writing a custom pixel shader and rely on the built-in vertex shader for SpriteBatch, but the same basic principles apply to all HLSL shaders: vertex shaders, pixel shaders, geometry shaders, hull shaders, domain shaders, and even compute shaders.
UNDER CONSTRUCTION
For this tutorial, we make use of the built-in Visual Studio HLSL build rules, which handle building our shaders automatically. If you are using CMake instead, you need to build the shaders with a custom target:
# Build HLSL shaders
add_custom_target(shaders)
set(HLSL_SHADER_FILES BloomCombine.hlsl BloomExtract.hlsl GaussianBlur.hlsl)
set_source_files_properties(${HLSL_SHADER_FILES} PROPERTIES ShaderType "ps")
set(HLSL_SHADER_FILES ${HLSL_SHADER_FILES} SpriteVertexShader.hlsl)
set_source_files_properties(SpriteVertexShader.hlsl PROPERTIES ShaderType "vs")
foreach(FILE ${HLSL_SHADER_FILES})
  get_filename_component(FILE_WE ${FILE} NAME_WE)
  get_source_file_property(shadertype ${FILE} ShaderType)
  add_custom_command(TARGET shaders
                     COMMAND dxc.exe /nologo /Emain /T${shadertype}_6_0 $<$<CONFIG:DEBUG>:/Od> /Zi /Fo ${CMAKE_BINARY_DIR}/${FILE_WE}.cso /Fd ${CMAKE_BINARY_DIR}/${FILE_WE}.pdb ${FILE}
                     MAIN_DEPENDENCY ${FILE}
                     COMMENT "HLSL ${FILE}"
                     WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
                     VERBATIM)
endforeach(FILE)
add_dependencies(${PROJECT_NAME} shaders)
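At runtime, each compiled .cso blob has to be loaded and turned into a shader object before it can be used. Here is a minimal sketch, assuming the Direct3D 11 CreatePixelShader API and a hypothetical file-reading helper; for the Direct3D 12 version of the toolkit (which is what Shader Model 6.0 blobs from dxc target), the same bytes would instead go into a D3D12_SHADER_BYTECODE in a pipeline state description.

#include <d3d11.h>
#include <wrl/client.h>
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <vector>

// Hypothetical helper: read a compiled shader blob (.cso) into memory.
std::vector<uint8_t> ReadBlob(const wchar_t* filename)
{
    std::ifstream file(filename, std::ios::binary | std::ios::ate);
    if (!file)
        throw std::runtime_error("failed to open shader blob");

    std::vector<uint8_t> blob(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(reinterpret_cast<char*>(blob.data()), static_cast<std::streamsize>(blob.size()));
    return blob;
}

// Create a pixel shader object from the compiled bytecode (Direct3D 11 shown).
Microsoft::WRL::ComPtr<ID3D11PixelShader> LoadPixelShader(ID3D11Device* device, const wchar_t* csoFile)
{
    auto blob = ReadBlob(csoFile);

    Microsoft::WRL::ComPtr<ID3D11PixelShader> ps;
    if (FAILED(device->CreatePixelShader(blob.data(), blob.size(), nullptr,
                                         ps.ReleaseAndGetAddressOf())))
        throw std::runtime_error("CreatePixelShader failed");

    return ps;
}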
First, the original scene is rendered to a hidden render target, m_offscreenTexture, as normal. The only change here is that Clear uses m_offscreenTexture's render target view rather than the DeviceResources backbuffer render target view.
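In code, this just means clearing and binding the offscreen render target view before drawing the scene. A minimal sketch, assuming Direct3D 11; the helper name and the views it takes are hypothetical and stand in for whatever the application owns:

#include <d3d11.h>
#include <DirectXColors.h>

// Sketch: direct the scene into the hidden render target instead of the
// swapchain backbuffer (names and ownership are up to the application).
void BeginOffscreenScene(ID3D11DeviceContext* context,
                         ID3D11RenderTargetView* offscreenRTV,
                         ID3D11DepthStencilView* depthStencilView)
{
    // Clear m_offscreenTexture's render target view rather than the backbuffer's.
    context->ClearRenderTargetView(offscreenRTV, DirectX::Colors::CornflowerBlue);
    context->ClearDepthStencilView(depthStencilView,
        D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    // Bind it as the current render target; the scene is then drawn as normal,
    // and the post-process passes read it back through its shader resource view.
    context->OMSetRenderTargets(1, &offscreenRTV, depthStencilView);
}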
Our first pass of post-processing renders the original scene texture as a 'full-screen quad' onto our first half-sized render target, m_renderTarget1, using the custom shader in "BloomExtract.hlsl".
float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_Target0
{
    float4 c = Texture.Sample(TextureSampler, texCoord);
    return saturate((c - BloomThreshold) / (1 - BloomThreshold));
}
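Feeding this shader into SpriteBatch is just a matter of supplying a custom-shader callback. A rough sketch of the extract pass, assuming the Direct3D 11 version of the toolkit (whose SpriteBatch::Begin takes a setCustomShaders callback); the m_* names, the constant buffer slot, and the destination rectangle are placeholders for the application's own members:

// Inside the app's post-process method. Placeholder m_* members:
// m_renderTarget1RTV, m_sceneSRV, m_bloomExtractPS, m_bloomParams, m_bloomRect.
ID3D11RenderTargetView* rt1 = m_renderTarget1RTV.Get();
context->OMSetRenderTargets(1, &rt1, nullptr);

m_spriteBatch->Begin(DirectX::SpriteSortMode_Immediate,
    nullptr, nullptr, nullptr, nullptr,
    [=]()
    {
        // Bind the constant buffer holding BloomThreshold (the slot must match
        // the shader) and swap in our pixel shader; SpriteBatch still supplies
        // the vertex shader and the texture/sampler bindings at t0/s0.
        context->PSSetConstantBuffers(0, 1, m_bloomParams.GetAddressOf());
        context->PSSetShader(m_bloomExtractPS.Get(), nullptr, 0);
    });

// The 'full-screen quad' is simply the scene texture drawn to a rectangle
// covering the half-sized render target.
m_spriteBatch->Draw(m_sceneSRV.Get(), m_bloomRect);
m_spriteBatch->End();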
We take the result of the extract & downsize pass and blur it horizontally using "GaussianBlur.hlsl", reading from m_renderTarget1 and writing to m_renderTarget2.
float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_Target0
{
    float4 c = 0;

    // Combine a number of weighted image filter taps.
    for (int i = 0; i < SAMPLE_COUNT; i++)
    {
        c += Texture.Sample(TextureSampler, texCoord + SampleOffsets[i]) * SampleWeights[i];
    }

    return c;
}
We then take that result in m_renderTarget2 and blur it vertically with the same shader, writing back into m_renderTarget1. A Gaussian blur is a separable filter, which lets us perform the blur as two simple render passes, one for each dimension.
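The shader itself is identical for both directions; only the offsets and weights passed to it change. Here is a sketch of how those parameters might be computed on the CPU, loosely following the XNA bloom sample this lesson is based on; the function and variable names are illustrative, and the offsets are padded to float4 on the assumption that the constant buffer packs them that way.

#include <cmath>
#include <vector>
#include <DirectXMath.h>

namespace
{
    constexpr int SAMPLE_COUNT = 15; // must match the shader's loop count

    float GaussianWeight(float n, float theta)
    {
        return (1.0f / sqrtf(2.0f * DirectX::XM_PI * theta)) *
               expf(-(n * n) / (2.0f * theta * theta));
    }
}

// Compute one set of blur parameters. For the horizontal pass use dx = 1/width,
// dy = 0; for the vertical pass use dx = 0, dy = 1/height. (Illustrative sketch.)
void ComputeBlurParameters(float dx, float dy, float blurAmount,
                           std::vector<DirectX::XMFLOAT4>& offsets,
                           std::vector<float>& weights)
{
    offsets.resize(SAMPLE_COUNT);
    weights.resize(SAMPLE_COUNT);

    // Center tap.
    weights[0] = GaussianWeight(0.0f, blurAmount);
    offsets[0] = DirectX::XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);

    float total = weights[0];

    // Pairs of taps on either side of the center.
    for (int i = 0; i < SAMPLE_COUNT / 2; ++i)
    {
        float weight = GaussianWeight(float(i + 1), blurAmount);
        weights[i * 2 + 1] = weight;
        weights[i * 2 + 2] = weight;
        total += weight * 2.0f;

        // Offset by 1.5, 3.5, 5.5... texels so that bilinear filtering averages
        // two texels per tap, halving the number of samples needed.
        float offset = float(i) * 2.0f + 1.5f;
        offsets[i * 2 + 1] = DirectX::XMFLOAT4(dx * offset, dy * offset, 0.0f, 0.0f);
        offsets[i * 2 + 2] = DirectX::XMFLOAT4(-dx * offset, -dy * offset, 0.0f, 0.0f);
    }

    // Normalize so the weights sum to one (keeps overall brightness constant).
    for (auto& w : weights)
        w /= total;
}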

Finally, we take the result of both blur passes in m_renderTarget1 and combine it with the original scene texture in m_offscreenTexture using the "BloomCombine.hlsl" shader to produce the final image in the presentation swapchain.
// Helper for modifying the saturation of a color.
float4 AdjustSaturation(float4 color, float saturation)
{
    // The constants 0.3, 0.59, and 0.11 are chosen because the
    // human eye is more sensitive to green light, and less to blue.
    float grey = dot(color.rgb, float3(0.3, 0.59, 0.11));
    return lerp(grey, color, saturation);
}

float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_Target0
{
    float4 base = BaseTexture.Sample(TextureSampler, texCoord);
    float4 bloom = BloomTexture.Sample(TextureSampler, texCoord);

    // Adjust color saturation and intensity.
    bloom = AdjustSaturation(bloom, BloomSaturation) * BloomIntensity;
    base = AdjustSaturation(base, BaseSaturation) * BaseIntensity;

    // Darken down the base image in areas where there is a lot of bloom,
    // to prevent things looking excessively burned-out.
    base *= (1 - saturate(bloom));

    // Combine the two images.
    return base + bloom;
}

We use render targets that are half-sized in each dimension for the blur because they take a quarter of the memory/bandwidth to render, and since we are blurring the image heavily there is no need for 'full' resolution. Other kinds of post-processing effects may require more fidelity in the temporary buffers.
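Driving the combine pass through SpriteBatch follows the same pattern as the earlier passes; the one new wrinkle is that the pixel shader samples two textures. SpriteBatch binds the texture passed to Draw at pixel-shader slot t0, so the other texture has to be bound by hand in the callback. A rough sketch, again assuming the Direct3D 11 toolkit, placeholder member names, and that BaseTexture/BloomTexture are declared at t0/t1 in the shader:

// Final pass: original scene (m_offscreenTexture) + blurred bloom (m_renderTarget1)
// composited into the swapchain backbuffer. Placeholder m_* members as before.
ID3D11RenderTargetView* backBuffer = m_deviceResources->GetRenderTargetView();
context->OMSetRenderTargets(1, &backBuffer, nullptr);

m_spriteBatch->Begin(DirectX::SpriteSortMode_Immediate,
    nullptr, nullptr, nullptr, nullptr,
    [=]()
    {
        context->PSSetConstantBuffers(0, 1, m_bloomParams.GetAddressOf());
        context->PSSetShader(m_bloomCombinePS.Get(), nullptr, 0);

        // SpriteBatch binds the Draw() texture at t0 (assumed to be BaseTexture);
        // bind the blurred bloom result ourselves at t1 (assumed BloomTexture).
        context->PSSetShaderResources(1, 1, m_renderTarget1SRV.GetAddressOf());
    });

m_spriteBatch->Draw(m_offscreenSRV.Get(), m_fullscreenRect);
m_spriteBatch->End();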
See PostProcess for a bloom example using the built-in post-processing shaders. If you are targeting Direct3D hardware feature level 9.x, the PostProcess class is not supported, so you should use the SpriteBatch solution above.
See BasicPostProcess and DualPostProcess for additional built-in post-processing effects.
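For comparison, the same bloom chain built from those classes looks roughly like the sketch below. This assumes the Direct3D 11 API of BasicPostProcess/DualPostProcess; the m_* names, parameter values, and render-target plumbing between passes are placeholders, so check the PostProcess documentation for the exact methods and parameters.

#include <PostProcess.h>   // DirectX Tool Kit
#include <memory>

// Created once, e.g. when device resources are created:
m_basicPostProcess = std::make_unique<DirectX::BasicPostProcess>(device);
m_dualPostProcess = std::make_unique<DirectX::DualPostProcess>(device);

// Each frame (render-target/viewport changes between passes omitted):
m_basicPostProcess->SetEffect(DirectX::BasicPostProcess::BloomExtract);
m_basicPostProcess->SetBloomExtractParameter(0.25f);
m_basicPostProcess->SetSourceTexture(m_offscreenSRV.Get());
m_basicPostProcess->Process(context);                         // -> m_renderTarget1

m_basicPostProcess->SetEffect(DirectX::BasicPostProcess::BloomBlur);
m_basicPostProcess->SetBloomBlurParameters(true, 4.f, 1.f);   // horizontal
m_basicPostProcess->SetSourceTexture(m_renderTarget1SRV.Get());
m_basicPostProcess->Process(context);                         // -> m_renderTarget2

m_basicPostProcess->SetBloomBlurParameters(false, 4.f, 1.f);  // vertical
m_basicPostProcess->SetSourceTexture(m_renderTarget2SRV.Get());
m_basicPostProcess->Process(context);                         // -> m_renderTarget1

m_dualPostProcess->SetEffect(DirectX::DualPostProcess::BloomCombine);
m_dualPostProcess->SetSourceTexture(m_renderTarget1SRV.Get());  // bloom
m_dualPostProcess->SetSourceTexture2(m_offscreenSRV.Get());     // original scene
m_dualPostProcess->Process(context);                          // -> backbuffer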
Next lesson: Using HDR rendering
DirectX Tool Kit docs:
- SpriteBatch
- Using dxc.exe and dxcompiler.dll
- Compiling Shaders
I borrowed heavily from the XNA Game Studio Bloom Postprocess sample for this lesson.