| parent | Extended API |
|---|---|
| nav_order | 0 |
Standard C++ presents a view that the cost to synchronize threads is uniform and low.
CUDA C++ is different: the cost to synchronize threads grows as threads are further apart. It is low across threads within a block, but high across arbitrary threads in the system running on multiple GPUs and CPUs.
To account for these non-uniform synchronization costs, CUDA C++ extends the standard C++ memory model and concurrency facilities in the cuda:: namespace with thread scopes, while retaining the syntax and semantics of standard C++ by default.
Asynchronous operations, like the copy operations performed by `memcpy_async`, are performed as-if by new asynchronous threads.
A thread scope specifies the kind of threads that can synchronize with each other using a synchronization primitive such as an atomic or a barrier.
```cuda
namespace cuda {

enum thread_scope {
  thread_scope_system,
  thread_scope_device,
  thread_scope_block,
  thread_scope_thread
};

} // namespace cuda
```

Each program thread is related to each other program thread by one or more thread scope relations:
- Each thread in the system is related to each other thread in the system by the *system* thread scope: `thread_scope_system`.
- Each GPU thread is related to each other GPU thread in the same CUDA device by the *device* thread scope: `thread_scope_device`.
- Each GPU thread is related to each other GPU thread in the same CUDA thread block by the *block* thread scope: `thread_scope_block`.
- Each thread is related to itself by the *thread* thread scope: `thread_scope_thread`.
- Each thread is related to each asynchronous thread that it creates by all scopes.
Types in namespaces `std::` and `cuda::std::` have the same behavior as corresponding types in namespace `cuda::` when instantiated with a scope of `cuda::thread_scope_system`.
An atomic operation is atomic at the scope it specifies if:
- it specifies a scope other than `thread_scope_system`, or
- the scope is `thread_scope_system` and:
  - it affects an object in unified memory and `concurrentManagedAccess` is `1`, or
  - it affects an object in CPU memory and `hostNativeAtomicSupported` is `1`, or
  - it is a load or store that affects a naturally-aligned object of size `1`, `2`, `4`, or `8` bytes on mapped memory, or
  - it affects an object in GPU memory and only GPU threads access it.
Refer to the CUDA programming guide for more information on unified memory, mapped memory, CPU memory, and GPU peer memory.
Modify intro.races paragraph 21 of ISO/IEC IS 14882 (the C++ Standard) as follows:
The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic at a scope that includes the thread that performed the other operation, and neither happens before the other, except for the special case for signal handlers described below. Any such data race results in undefined behavior. [...]
Modify thread.barrier.class paragraph 4 of ISO/IEC IS 14882 (the C++ Standard) as follows:
- Concurrent invocations of the member functions of `barrier`, other than its destructor, do not introduce data races as if they were atomic operations. [...]
Modify thread.latch.class paragraph 2 of ISO/IEC IS 14882 (the C++ Standard) as follows:
- Concurrent invocations of the member functions of `latch`, other than its destructor, do not introduce data races as if they were atomic operations.
Modify thread.sema.cnt paragraph 3 of ISO/IEC IS 14882 (the C++ Standard) as follows:
- Concurrent invocations of the member functions of `counting_semaphore`, other than its destructor, do not introduce data races as if they were atomic operations.
Modify thread.stoptoken.intro paragraph 5 of ISO/IEC IS 14882 (the C++ Standard) as follows:
Calls to the functions `request_stop`, `stop_requested`, and `stop_possible` do not introduce data races as if they were atomic operations. [...]
Modify atomics.fences paragraph 2 through 4 of ISO/IEC IS 14882 (the C++ Standard) as follows:
A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation, and each operation (A, B, X, and Y) specifies a scope that includes the thread that performed each other operation.
A release fence A synchronizes with an atomic operation B that performs an acquire operation on an atomic object M if there exists an atomic operation X such that A is sequenced before X, X modifies M, and B reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation, and each operation (A, B, and X) specifies a scope that includes the thread that performed each other operation.
An atomic operation A that is a release operation on an atomic object M synchronizes with an acquire fence B if there exists some atomic operation X on M such that X is sequenced before B and reads the value written by A or a value written by any side effect in the release sequence headed by A, and each operation (A, B, and X) specifies a scope that includes the thread that performed each other operation.
The following example passes a message stored to the `x` variable by a thread in block 0 to a thread in block 1 via the flag `f` (the scope template argument shown here is assumed; it must be a scope that includes both blocks, such as the device scope):

```cuda
int x = 0;
int f = 0;
```

| **Thread 0 Block 0** | **Thread 0 Block 1** |
|---|---|
| `x = 42;`<br>`cuda::atomic_ref<int, cuda::thread_scope_device> flag(f);`<br>`flag.store(1, memory_order_release);` | `cuda::atomic_ref<int, cuda::thread_scope_device> flag(f);`<br>`while (flag.load(memory_order_acquire) != 1);`<br>`assert(x == 42);` |
In the following variation of the previous example, the two threads concurrently access the `f` object without sufficient synchronization, which leads to a data race and exhibits undefined behavior:

```cuda
int x = 0;
int f = 0;
```

| **Thread 0 Block 0** | **Thread 0 Block 1** |
|---|---|
| `x = 42;`<br>`cuda::atomic_ref<int, cuda::thread_scope_block> flag(f);`<br>`flag.store(1, memory_order_release); // UB: data race` | `cuda::atomic_ref<int, cuda::thread_scope_block> flag(f);`<br>`while (flag.load(memory_order_acquire) != 1); // UB: data race`<br>`assert(x == 42);` |
While the memory operations on `f` (the store and the loads) are atomic, the store specifies block scope. Since the store is performed by thread 0 of block 0, its scope includes only the other threads of block 0. The thread performing the loads is in block 1, so it is not included in the scope of the store performed in block 0. The store and the loads are therefore not atomic with respect to each other, which introduces a data race.
For more examples, see the PTX memory consistency model litmus tests.