
Commit a9de355

Be consistent with backticks.
A lot of names are used sometimes with backticks, sometimes without. It's not 100% clear, but I think "without" is better for the following:

- rustc_codegen_*
- cuda_builder
- rustc
- cuda_std
- rust-gpu (which will become "Rust GPU" in a subsequent commit)

In contrast, `lib.rs` is a file name and should use backticks.
1 parent 45d560c commit a9de355


7 files changed: +19 −19 lines changed


guide/src/cuda/gpu_computing.md

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ Most of the reasons for using rust on the CPU apply to using Rust for the GPU, t
 I will not repeat them here.
 
 A couple of particular rust features make writing CUDA code much easier: RAII and Results.
-In `cust` everything uses RAII (through `Drop` impls) to manage freeing memory and returning handles, which
+In cust everything uses RAII (through `Drop` impls) to manage freeing memory and returning handles, which
 frees users from having to think about that, which yields safer, more reliable code.
 
 Results are particularly helpful, almost every single call in every CUDA library returns a status code in the form of a CUDA result.
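The hunk above describes the two features cust leans on: RAII through `Drop` impls and `Result`-based error handling. As a self-contained illustration of that pattern, here is a sketch with a hypothetical `DeviceBuffer` type and a fake allocation counter standing in for device memory; this is not cust's actual API:

```rust
use std::cell::RefCell;

// Hypothetical error type standing in for a CUDA status code.
#[derive(Debug, PartialEq)]
enum CudaError {
    OutOfMemory,
}

thread_local! {
    // Tracks how many "device" allocations are live, so we can
    // observe that Drop released them. A stand-in for real device state.
    static LIVE_ALLOCS: RefCell<usize> = RefCell::new(0);
}

// Hypothetical RAII buffer: allocation can fail (returning a Result),
// and the memory is released automatically in Drop, mirroring the
// pattern cust uses for device memory and handles.
struct DeviceBuffer {
    len: usize,
}

impl DeviceBuffer {
    fn alloc(len: usize) -> Result<Self, CudaError> {
        // Treat a zero-length request as a failed allocation,
        // purely so the error path is exercised in this sketch.
        if len == 0 {
            return Err(CudaError::OutOfMemory);
        }
        LIVE_ALLOCS.with(|n| *n.borrow_mut() += 1);
        Ok(DeviceBuffer { len })
    }
}

impl Drop for DeviceBuffer {
    fn drop(&mut self) {
        // Freeing happens here, without the caller having to remember it.
        LIVE_ALLOCS.with(|n| *n.borrow_mut() -= 1);
    }
}

fn live_allocs() -> usize {
    LIVE_ALLOCS.with(|n| *n.borrow())
}
```

When the buffer goes out of scope the allocation count drops back to zero on its own, which is the "frees users from having to think about that" point the guide text makes.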

guide/src/faq.md

Lines changed: 1 addition & 1 deletion
@@ -252,7 +252,7 @@ one GPU may be lacking a feature.
 - CUDA is usually 10-30% faster than OpenCL overall, this is likely due to subpar OpenCL drivers by NVIDIA,
 but it is unlikely this performance gap will change in the near future.
 - CUDA has a much richer set of libraries and tools than OpenCL, such as cuFFT, cuBLAS, cuRand, cuDNN, OptiX, NSight Compute, cuFile, etc.
-- You can seamlessly use existing CUDA C/C++ code with `cust` or `rustc_codegen_nvvm`-generated PTX by
+- You can seamlessly use existing CUDA C/C++ code with `cust` or rustc_codegen_nvvm-generated PTX by
 using the CUDA linker APIs which are exposed in `cust`. Allowing for incremental switching to Rust.
 - There is a generally larger set of code samples in CUDA C/C++ over OpenCL.
 - Documentation is __far__ better, there are (mostly) complete API docs for every single CUDA library and function out there.

guide/src/guide/compute_capabilities.md

Lines changed: 3 additions & 3 deletions
@@ -142,9 +142,9 @@ Note: While the 'a' variant enables all these features during compilation (allow
 
 For more details on suffixes, see [NVIDIA's blog post on family-specific architecture features](https://developer.nvidia.com/blog/nvidia-blackwell-and-nvidia-cuda-12-9-introduce-family-specific-architecture-features/).
 
-### Manual Compilation (Without `cuda_builder`)
+### Manual Compilation (Without cuda_builder)
 
-If you're invoking `rustc` directly instead of using `cuda_builder`, you only need to specify the architecture through LLVM args:
+If you're invoking rustc directly instead of using cuda_builder, you only need to specify the architecture through LLVM args:
 
 ```bash
 rustc --target nvptx64-nvidia-cuda \
@@ -210,7 +210,7 @@ These patterns work when using base architectures (no suffix), which enable all
 
 If you encounter errors about missing functions or features:
 
-1. Check the compute capability you're targeting in `cuda_builder`
+1. Check the compute capability you're targeting in cuda_builder
 2. Verify your GPU supports the features you're using
 3. Use `nvidia-smi` to check your GPU's compute capability
 4. Add appropriate `#[cfg]` guards or increase the target architecture

guide/src/guide/getting_started.md

Lines changed: 6 additions & 6 deletions
@@ -1,6 +1,6 @@
 # Getting Started
 
-This section covers how to get started writing GPU crates with `cuda_std` and `cuda_builder`.
+This section covers how to get started writing GPU crates with cuda_std and cuda_builder.
 
 ## Required Libraries
 
@@ -53,12 +53,12 @@ edition = "2021"
 +cuda_std = "XX"
 ```
 
-Where `XX` is the latest version of `cuda_std`.
+Where `XX` is the latest version of cuda_std.
 
 We changed our crate's crate types to `cdylib` and `rlib`. We specified `cdylib` because the nvptx targets do not support binary crate types.
 `rlib` is so that we will be able to use the crate as a dependency, such as if we would like to use it on the CPU.
 
-## lib.rs
+## `lib.rs`
 
 Before we can write any GPU kernels, we must add a few directives to our `lib.rs` which are required by the codegen:
 
@@ -86,7 +86,7 @@ If you would like to use `alloc` or things like printing from GPU kernels (which
 extern crate alloc;
 ```
 
-Finally, if you would like to use types such as slices or arrays inside of GPU kernels you must allow `improper_cytypes_definitions` either on the whole crate or the individual GPU kernels. This is because on the CPU, such types are not guaranteed to be passed a certain way, so they should not be used in `extern "C"` functions (which is what kernels are implicitly declared as). However, `rustc_codegen_nvvm` guarantees the way in which things like structs, slices, and arrays are passed. See [Kernel ABI](./kernel_abi.md).
+Finally, if you would like to use types such as slices or arrays inside of GPU kernels you must allow `improper_cytypes_definitions` either on the whole crate or the individual GPU kernels. This is because on the CPU, such types are not guaranteed to be passed a certain way, so they should not be used in `extern "C"` functions (which is what kernels are implicitly declared as). However, rustc_codegen_nvvm guarantees the way in which things like structs, slices, and arrays are passed. See [Kernel ABI](./kernel_abi.md).
 
 ```rs
 #![allow(improper_ctypes_definitions)]
@@ -161,7 +161,7 @@ It also applies `#[no_mangle]` so the name of the kernel is the same as it is de
 
 ## Building the GPU crate
 
-Now that you have some kernels defined in a crate, you can build them easily using `cuda_builder`.
+Now that you have some kernels defined in a crate, you can build them easily using cuda_builder.
 which builds GPU crates while passing everything needed by rustc.
 
 To use it you can simply add it as a build dependency in your CPU crate (the crate running the GPU kernels):
@@ -173,7 +173,7 @@ To use it you can simply add it as a build dependency in your CPU crate (the cra
 
 Where `XX` is the current version of cuda_builder.
 
-Then, you can simply invoke it in the build.rs of your CPU crate:
+Then, you can simply invoke it in the `build.rs` of your CPU crate:
 
 ```rs
 use cuda_builder::CudaBuilder;
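The last hunk's context ends just as the guide's `build.rs` example begins. For orientation, a sketch of what that build script typically looks like with cuda_builder's builder-style API; the GPU crate path and PTX output path below are placeholders, not values from this diff:

```rust
// build.rs of the CPU crate (sketch; paths are placeholders).
use cuda_builder::CudaBuilder;

fn main() {
    CudaBuilder::new("../gpu_kernels")        // path to the GPU crate
        .copy_to("../resources/kernels.ptx")  // where to place the built PTX
        .build()
        .unwrap();
}
```

The CPU crate can then load the emitted PTX at runtime through cust's module APIs.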

guide/src/nvvm/backends.md

Lines changed: 5 additions & 5 deletions
@@ -21,18 +21,18 @@ five publicly known codegens that exist:
 - rustc_codegen_spirv
 - rustc_codegen_nvvm, obviously the best codegen ;)
 
-`rustc_codegen_cranelift` targets the cranelift backend, which is a codegen backend written in rust that is faster than LLVM but does not have many optimizations
-compared to LLVM. `rustc_codegen_llvm` is obvious, it is the backend almost everybody uses which targets LLVM. `rustc_codegen_gcc` targets GCC (GNU Compiler Collection)
-which is able to target more exotic targets than LLVM, especially for embedded. `rustc_codegen_spirv` targets the SPIR-V (Standard Portable Intermediate Representation 5)
+rustc_codegen_cranelift targets the cranelift backend, which is a codegen backend written in rust that is faster than LLVM but does not have many optimizations
+compared to LLVM. rustc_codegen_llvm is obvious, it is the backend almost everybody uses which targets LLVM. rustc_codegen_gcc targets GCC (GNU Compiler Collection)
+which is able to target more exotic targets than LLVM, especially for embedded. rustc_codegen_spirv targets the SPIR-V (Standard Portable Intermediate Representation 5)
 format, which is a format mostly used for compiling shader languages such as GLSL or WGSL to a standard representation that Vulkan/OpenGL can use, the reasons
 why SPIR-V is not an alternative to CUDA/rustc_codegen_nvvm have been covered in the [FAQ](../../faq.md).
 
-Finally, we come to the star of the show, `rustc_codegen_nvvm`. This backend targets NVVM IR for compiling rust to GPU kernels that can be run by CUDA.
+Finally, we come to the star of the show, rustc_codegen_nvvm. This backend targets NVVM IR for compiling rust to GPU kernels that can be run by CUDA.
 What NVVM IR/libNVVM are has been covered in the [CUDA section](../../cuda/pipeline.md).
 
 # rustc_codegen_ssa
 
-`rustc_codegen_ssa` is the central crate behind every single codegen and does much of the hard work.
+rustc_codegen_ssa is the central crate behind every single codegen and does much of the hard work.
 It abstracts away the MIR lowering logic so that custom codegens only have to implement some
 traits and the SSA codegen does everything else. For example:
 - A trait for getting a type like an integer type.

guide/src/nvvm/ptxgen.md

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ libdevice is also lazy loaded so we do not import useless intrinsics.
 # libintrinsics
 
 This is the last special module we load, it is simple, it is just a dumping ground for random wrapper functions
-we need to define that `cuda_std` or the codegen needs. You can find the LLVM IR definition for it in the codegen directory
+we need to define that cuda_std or the codegen needs. You can find the LLVM IR definition for it in the codegen directory
 called `libintrinsics.ll`. All of its functions should be declared with the `__nvvm_` prefix.
 
 # Compilation

guide/src/nvvm/types.md

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@
 Types! who doesn't love types, especially those that cause libNVVM to randomly segfault or loop forever!
 Anyways, types are an integral part of the codegen and everything revolves around them and you will see them everywhere.
 
-`rustc_codegen_ssa` does not actually tell you what your type representation should be, it allows you to decide. For
-example, `rust-gpu` represents it as a `SpirvType` enum, while both `rustc_codegen_llvm` and our codegen represent it as
+rustc_codegen_ssa does not actually tell you what your type representation should be, it allows you to decide. For
+example, rust-gpu represents it as a `SpirvType` enum, while both rustc_codegen_llvm and our codegen represent it as
 opaque LLVM types:
 
 ```rs
