@@ -43,8 +43,8 @@ address, the first store can be erased. This transformation is not allowed for a
pair of volatile stores. On the other hand, a non-volatile non-atomic load can
be moved across a volatile load freely, but not an Acquire load.
- This document is intended to provide a guide to anyone either writing a frontend
- for LLVM or working on optimization passes for LLVM with a guide for how to deal
+ This document is intended to guide anyone writing a frontend
+ for LLVM or working on optimization passes for LLVM on how to deal
with instructions with special semantics in the presence of concurrency. This
is not intended to be a precise guide to the semantics; the details can get
extremely complicated and unreadable, and are not usually necessary.
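As a side note to the hunk above, the distinction it draws (a non-atomic load may move across a volatile load, but not across an Acquire load) can be sketched in C11. This is an illustrative sketch only; the function names `producer` and `consumer` are invented here and are not from the document:

```c
#include <stdatomic.h>

int data;          /* plain, non-atomic shared variable */
atomic_int ready;  /* guard flag */

void producer(void) {
    data = 42;
    /* release store: earlier writes may not sink below it */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void) {
    /* acquire load: the non-atomic read of `data` below may not be
       hoisted above it -- the reordering the text says is forbidden.
       A volatile load would impose no such constraint on `data`. */
    if (atomic_load_explicit(&ready, memory_order_acquire))
        return data;
    return -1;
}
```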
@@ -94,7 +94,7 @@ The following is equivalent in non-concurrent situations:
However, LLVM is not allowed to transform the former to the latter: it could
indirectly introduce undefined behavior if another thread can access ``x`` at
- the same time. That thread would read `undef` instead of the value it was
+ the same time. That thread would read ``undef`` instead of the value it was
expecting, which can lead to undefined behavior down the line. (This example is
particularly of interest because before the concurrency model was implemented,
LLVM would perform this transformation.)
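The code blocks this hunk refers to are elided in the diff context, but the shape of the forbidden transformation (a store hoisted out of a conditional) can be sketched in C. This is an assumption-laden paraphrase, not the document's actual example; both function names are invented:

```c
int x;  /* shared; another thread may read it concurrently */

/* Before: x is written only when the condition actually fires. */
void count_nonzero(const int *a, int n) {
    for (int i = 0; i < n; i++)
        if (a[i]) x += 1;
}

/* After the (forbidden) rewrite: x is read and written unconditionally,
   so a racing reader can observe a value the original never stored. */
void count_nonzero_hoisted(const int *a, int n) {
    int xtemp = x;
    for (int i = 0; i < n; i++)
        if (a[i]) xtemp += 1;
    x = xtemp;  /* unconditional store introduces the race */
}
```

Single-threaded, the two functions compute identical results, which is why the rewrite looks safe without a concurrency model.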
@@ -149,7 +149,7 @@ NotAtomic
NotAtomic is the obvious, a load or store which is not atomic. (This isn't
really a level of atomicity, but is listed here for comparison.) This is
essentially a regular load or store. If there is a race on a given memory
- location, loads from that location return undef.
+ location, loads from that location return ``undef``.
Relevant standard
This is intended to match shared variables in C/C++, and to be used in any
@@ -429,7 +429,7 @@ support *ALL* operations of that size in a lock-free manner.
When the target implements atomic ``cmpxchg`` or LL/SC instructions (as most do)
this is trivial: all the other operations can be implemented on top of those
- primitives. However, on many older CPUs (e.g. ARMv5, SparcV8, Intel 80386) there
+ primitives. However, on many older CPUs (e.g. ARMv5, Sparc V8, Intel 80386) there
are atomic load and store instructions, but no ``cmpxchg`` or LL/SC. As it is
invalid to implement ``atomic load`` using the native instruction, but
``cmpxchg`` using a library call to a function that uses a mutex, ``atomic
@@ -475,7 +475,7 @@ atomic constructs. Here are some lowerings it can do:
``shouldExpandAtomicRMWInIR``, ``emitMaskedAtomicRMWIntrinsic``,
``shouldExpandAtomicCmpXchgInIR``, and ``emitMaskedAtomicCmpXchgIntrinsic``.
- For an example of these look at the ARM (first five lowerings) or RISC-V (last
+ For an example of these, look at the ARM (first five lowerings) or RISC-V (last
lowering) backend.
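To make the earlier hunk's point concrete -- that once a target has ``cmpxchg``, "all the other operations can be implemented on top of those primitives" -- here is a minimal C11 sketch of an atomic fetch-add built from compare-and-swap alone, the way an expansion pass could lower it. The helper name is invented for illustration; it is not an LLVM API:

```c
#include <stdatomic.h>

/* Sketch: lower an atomic fetch-add to a cmpxchg retry loop, as a
   target without a native atomic-add instruction could do. */
int fetch_add_via_cas(atomic_int *p, int v) {
    int old = atomic_load_explicit(p, memory_order_relaxed);
    /* On failure, `old` is refreshed with the current value; retry
       until our read-modify-write lands without interference. */
    while (!atomic_compare_exchange_weak_explicit(
               p, &old, old + v,
               memory_order_seq_cst, memory_order_relaxed))
        ;
    return old;  /* like atomic_fetch_add, returns the prior value */
}
```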
AtomicExpandPass supports two strategies for lowering atomicrmw/cmpxchg to
@@ -542,7 +542,7 @@ to take note of:
- They support all sizes and alignments -- including those which cannot be
implemented natively on any existing hardware. Therefore, they will certainly
- use mutexes in for some sizes/alignments.
+ use mutexes for some sizes/alignments.
- As a consequence, they cannot be shipped in a statically linked
compiler-support library, as they have state which must be shared amongst all
@@ -568,7 +568,7 @@ Libcalls: __sync_*
Some targets or OS/target combinations can support lock-free atomics, but for
various reasons, it is not practical to emit the instructions inline.
- There's two typical examples of this.
+ There are two typical examples of this.
Some CPUs support multiple instruction sets which can be switched back and forth
on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which
@@ -589,7 +589,7 @@ case. The only common architecture without that property is SPARC -- SPARCV8 SMP
systems were common, yet it doesn't support any sort of compare-and-swap
operation.
- Some targets (like RISCV) support a ``+forced-atomics`` target feature, which
+ Some targets (like RISC-V) support a ``+forced-atomics`` target feature, which
enables the use of lock-free atomics even if LLVM is not aware of any specific
OS support for them. In this case, the user is responsible for ensuring that
necessary ``__sync_*`` implementations are available. Code using
@@ -653,6 +653,6 @@ implemented in both ``compiler-rt`` and ``libgcc`` libraries
iN __aarch64_ldeorN_ORDER(iN val, iN *ptr)
iN __aarch64_ldsetN_ORDER(iN val, iN *ptr)
- Please note, if LSE instruction set is specified for AArch64 target then
+ Please note, if LSE instruction set is specified for AArch64 target, then
out-of-line atomics calls are not generated and single-instruction atomic
operations are used in place.