This repository was archived by the owner on Mar 15, 2022. It is now read-only.

Commit dd97283

Improvements for documentation layout
1 parent 4a1c0b9 commit dd97283

9 files changed: +240 −242 lines

lib/thread_safe/atomic_reference_cache_backend.rb

Lines changed: 154 additions & 168 deletions
Large diffs are not rendered by default.

lib/thread_safe/mri_cache_backend.rb

Lines changed: 14 additions & 7 deletions
@@ -1,14 +1,21 @@
 module ThreadSafe
   class MriCacheBackend < NonConcurrentCacheBackend
-    # We can get away with a single global write lock (instead of a per-instance one) because of the GVL/green threads.
+    # We can get away with a single global write lock (instead of a per-instance
+    # one) because of the GVL/green threads.
     #
-    # The previous implementation used `Thread.critical` on 1.8 MRI to implement the 4 composed atomic operations (`put_if_absent`, `replace_pair`,
-    # `replace_if_exists`, `delete_pair`) this however doesn't work for `compute_if_absent` because on 1.8 the Mutex class is itself implemented
-    # via `Thread.critical` and a call to `Mutex#lock` does not restore the previous `Thread.critical` value (thus any synchronisation clears the
-    # `Thread.critical` flag and we loose control). This poses a problem as the provided block might use synchronisation on its own.
+    # The previous implementation used `Thread.critical` on 1.8 MRI to implement
+    # the 4 composed atomic operations (`put_if_absent`, `replace_pair`,
+    # `replace_if_exists`, `delete_pair`); this however doesn't work for
+    # `compute_if_absent` because on 1.8 the Mutex class is itself implemented
+    # via `Thread.critical` and a call to `Mutex#lock` does not restore the
+    # previous `Thread.critical` value (thus any synchronisation clears the
+    # `Thread.critical` flag and we lose control). This poses a problem as the
+    # provided block might use synchronisation on its own.
     #
-    # NOTE: a neat idea of writing a c-ext to manually perform atomic put_if_absent, while relying on Ruby not releasing a GVL while calling
-    # a c-ext will not work because of the potentially Ruby implemented `#hash` and `#eql?` key methods.
+    # NOTE: a neat idea of writing a c-ext to manually perform atomic
+    # put_if_absent, while relying on Ruby not releasing the GVL while calling a
+    # c-ext will not work because of the potentially Ruby-implemented `#hash`
+    # and `#eql?` key methods.
     WRITE_LOCK = Mutex.new

     def []=(key, value)
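The composed check-then-act operations the comment above describes all follow the same shape; a standalone sketch of the single-global-lock idea (class and method names here are illustrative, not the gem's actual code):

```ruby
class GlobalLockCache
  WRITE_LOCK = Mutex.new # one lock shared by every instance, as the comment describes

  def initialize
    @backend = {}
  end

  # Plain reads need no lock: a single Hash#[] call is not interleaved on MRI.
  def [](key)
    @backend[key]
  end

  # Composed check-then-act must hold the lock so no writer interleaves
  # between the key check and the store.
  def put_if_absent(key, value)
    WRITE_LOCK.synchronize do
      stored = @backend[key]
      @backend[key] = value unless @backend.key?(key)
      stored
    end
  end
end

cache = GlobalLockCache.new
cache.put_if_absent(:a, 1) # stores 1, returns nil
cache.put_if_absent(:a, 2) # returns 1, leaves the stored value untouched
cache[:a]                  # => 1
```

Note the trade-off: a single process-wide lock is only acceptable because MRI's GVL already serializes most work; a per-instance lock would be the natural choice elsewhere.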

lib/thread_safe/non_concurrent_cache_backend.rb

Lines changed: 4 additions & 2 deletions
@@ -1,7 +1,9 @@
 module ThreadSafe
   class NonConcurrentCacheBackend
-    # WARNING: all public methods of the class must operate on the @backend directly without calling each other. This is important
-    # because of the SynchronizedCacheBackend which uses a non-reentrant mutex for perfomance reasons.
+    # WARNING: all public methods of the class must operate on the @backend
+    # directly without calling each other. This is important because of the
+    # SynchronizedCacheBackend, which uses a non-reentrant mutex for
+    # performance reasons.
     def initialize(options = nil)
       @backend = {}
     end
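The non-reentrancy constraint behind that warning is easy to trip over. A tiny standalone demonstration with the stdlib Mutex (also non-reentrant on MRI):

```ruby
m = Mutex.new
m.lock
begin
  # If a public method took the lock and then called a sibling public method
  # that also takes it, this is the failure mode: MRI raises ThreadError
  # rather than blocking forever.
  m.lock
rescue ThreadError => e
  puts "re-entry refused: #{e.message}"
ensure
  m.unlock
end
```

This is exactly why every public method must go straight to `@backend`: any method-to-method call under the synchronized subclass would be a second `lock` from the same thread.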

lib/thread_safe/synchronized_cache_backend.rb

Lines changed: 2 additions & 1 deletion
@@ -2,7 +2,8 @@ module ThreadSafe
   class SynchronizedCacheBackend < NonConcurrentCacheBackend
     require 'mutex_m'
     include Mutex_m
-    # WARNING: Mutex_m is a non-reentrant lock, so the synchronized methods are not allowed to call each other.
+    # WARNING: Mutex_m is a non-reentrant lock, so the synchronized methods are
+    # not allowed to call each other.

     def [](key)
       synchronize { super }
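The synchronize-then-super pattern shown in this diff can be sketched with a plain Mutex standing in for Mutex_m (class names below are illustrative, not the gem's):

```ruby
class PlainBackend
  def initialize
    @backend = {}
  end

  def [](key)
    @backend[key]
  end

  def []=(key, value)
    @backend[key] = value
  end
end

class LockedBackend < PlainBackend
  def initialize
    @lock = Mutex.new # non-reentrant, just like Mutex_m
    super
  end

  # Each public method takes the lock exactly once and goes straight to the
  # unsynchronized superclass. Calling a sibling public method from inside
  # the block would try to re-acquire @lock and raise ThreadError.
  def [](key)
    @lock.synchronize { super }
  end

  def []=(key, value)
    @lock.synchronize { super }
  end
end
```

The superclass keeps all the logic; the subclass only adds locking, which is what makes the "operate on @backend directly" rule in NonConcurrentCacheBackend load-bearing.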

lib/thread_safe/synchronized_delegator.rb

Lines changed: 3 additions & 3 deletions
@@ -9,9 +9,9 @@
 #   array = SynchronizedDelegator.new([]) # thread-safe
 #
 # A simple `Monitor` provides a very coarse-grained way to synchronize a given
-# object, in that it will cause synchronization for methods that have no
-# need for it, but this is a trivial way to get thread-safety where none may
-# exist currently on some implementations.
+# object, in that it will cause synchronization for methods that have no need
+# for it, but this is a trivial way to get thread-safety where none may exist
+# currently on some implementations.
 #
 # This class is currently being considered for inclusion into stdlib, via
 # https://bugs.ruby-lang.org/issues/8556
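A rough sketch of the delegator idea (not the gem's implementation): forward every call under a Monitor. Unlike Mutex_m, Monitor is reentrant, so nested delegated calls are safe, at the cost of synchronizing every method whether it needs it or not:

```ruby
require 'monitor'

class MiniSyncDelegator
  def initialize(obj)
    @obj = obj
    @monitor = Monitor.new # reentrant, unlike Mutex
  end

  # Every method call on the delegator runs under the monitor.
  def method_missing(name, *args, &block)
    @monitor.synchronize { @obj.public_send(name, *args, &block) }
  end

  def respond_to_missing?(name, include_private = false)
    @obj.respond_to?(name, include_private)
  end
end

array = MiniSyncDelegator.new([])
array.push(1)
array.push(2)
array.size # => 2
```

This is the coarse-grained trade the comment describes: trivially correct, but it serializes even read-only methods.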

lib/thread_safe/util/adder.rb

Lines changed: 5 additions & 2 deletions
@@ -1,7 +1,10 @@
 module ThreadSafe
   module Util
-    # A Ruby port of the Doug Lea's jsr166e.LondAdder class version 1.8 available in public domain.
-    # Original source code available here: http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/LongAdder.java?revision=1.8
+    # A Ruby port of Doug Lea's jsr166e.LongAdder class version 1.8
+    # available in public domain.
+    #
+    # Original source code available here:
+    # http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/LongAdder.java?revision=1.8
     #
     # One or more variables that together maintain an initially zero
     # sum. When updates (method +add+) are contended across threads,

lib/thread_safe/util/cheap_lockable.rb

Lines changed: 2 additions & 1 deletion
@@ -1,6 +1,7 @@
 module ThreadSafe
   module Util
-    # Provides a cheapest possible (mainly in terms of memory usage) +Mutex+ with the +ConditionVariable+ bundled in.
+    # Provides the cheapest possible (mainly in terms of memory usage) +Mutex+
+    # with the +ConditionVariable+ bundled in.
     #
     # Usage:
     #   class A
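What CheapLockable bundles can be spelled out with the two separate stdlib objects it replaces; a condensed wait/signal sketch (plain Mutex plus ConditionVariable, not the gem's API):

```ruby
lock = Mutex.new
cond = ConditionVariable.new
ready = false

waiter = Thread.new do
  lock.synchronize do
    cond.wait(lock) until ready # releases lock while sleeping, reacquires on wake
  end
  :woken
end

Thread.pass until waiter.status == 'sleep' # let the waiter park first
lock.synchronize do
  ready = true
  cond.signal
end
waiter.value # => :woken
```

CheapLockable's point is that this pair costs two objects per instance; bundling them into one shaves memory when many small objects each need to block and wake.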

lib/thread_safe/util/striped64.rb

Lines changed: 52 additions & 56 deletions
@@ -1,69 +1,65 @@
 module ThreadSafe
   module Util
-    # A Ruby port of the Doug Lea's jsr166e.Striped64 class version 1.6 available in public domain.
-    # Original source code available here: http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/Striped64.java?revision=1.6
+    # A Ruby port of Doug Lea's jsr166e.Striped64 class version 1.6
+    # available in public domain.
     #
-    # Class holding common representation and mechanics for classes supporting dynamic striping on 64bit values.
+    # Original source code available here:
+    # http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/Striped64.java?revision=1.6
     #
-    # This class maintains a lazily-initialized table of atomically
-    # updated variables, plus an extra +base+ field. The table size
-    # is a power of two. Indexing uses masked per-thread hash codes.
-    # Nearly all methods on this class are private, accessed directly
-    # by subclasses.
+    # Class holding common representation and mechanics for classes supporting
+    # dynamic striping on 64bit values.
     #
-    # Table entries are of class +Cell+; a variant of AtomicLong padded
-    # to reduce cache contention on most processors. Padding is
-    # overkill for most Atomics because they are usually irregularly
-    # scattered in memory and thus don't interfere much with each
-    # other. But Atomic objects residing in arrays will tend to be
-    # placed adjacent to each other, and so will most often share
-    # cache lines (with a huge negative performance impact) without
+    # This class maintains a lazily-initialized table of atomically updated
+    # variables, plus an extra +base+ field. The table size is a power of two.
+    # Indexing uses masked per-thread hash codes. Nearly all methods on this
+    # class are private, accessed directly by subclasses.
+    #
+    # Table entries are of class +Cell+; a variant of AtomicLong padded to
+    # reduce cache contention on most processors. Padding is overkill for most
+    # Atomics because they are usually irregularly scattered in memory and thus
+    # don't interfere much with each other. But Atomic objects residing in
+    # arrays will tend to be placed adjacent to each other, and so will most
+    # often share cache lines (with a huge negative performance impact) without
     # this precaution.
     #
-    # In part because +Cell+s are relatively large, we avoid creating
-    # them until they are needed. When there is no contention, all
-    # updates are made to the +base+ field. Upon first contention (a
-    # failed CAS on +base+ update), the table is initialized to size 2.
-    # The table size is doubled upon further contention until
-    # reaching the nearest power of two greater than or equal to the
-    # number of CPUS. Table slots remain empty (+nil+) until they are
+    # In part because +Cell+s are relatively large, we avoid creating them until
+    # they are needed. When there is no contention, all updates are made to the
+    # +base+ field. Upon first contention (a failed CAS on +base+ update), the
+    # table is initialized to size 2. The table size is doubled upon further
+    # contention until reaching the nearest power of two greater than or equal
+    # to the number of CPUs. Table slots remain empty (+nil+) until they are
     # needed.
     #
-    # A single spinlock (+busy+) is used for initializing and
-    # resizing the table, as well as populating slots with new +Cell+s.
-    # There is no need for a blocking lock: When the lock is not
-    # available, threads try other slots (or the base). During these
-    # retries, there is increased contention and reduced locality,
-    # which is still better than alternatives.
+    # A single spinlock (+busy+) is used for initializing and resizing the
+    # table, as well as populating slots with new +Cell+s. There is no need for
+    # a blocking lock: When the lock is not available, threads try other slots
+    # (or the base). During these retries, there is increased contention and
+    # reduced locality, which is still better than alternatives.
     #
-    # Per-thread hash codes are initialized to random values.
-    # Contention and/or table collisions are indicated by failed
-    # CASes when performing an update operation (see method
-    # +retry_update+). Upon a collision, if the table size is less than
-    # the capacity, it is doubled in size unless some other thread
-    # holds the lock. If a hashed slot is empty, and lock is
-    # available, a new +Cell+ is created. Otherwise, if the slot
-    # exists, a CAS is tried. Retries proceed by "double hashing",
-    # using a secondary hash (XorShift) to try to find a
-    # free slot.
+    # Per-thread hash codes are initialized to random values. Contention and/or
+    # table collisions are indicated by failed CASes when performing an update
+    # operation (see method +retry_update+). Upon a collision, if the table size
+    # is less than the capacity, it is doubled in size unless some other thread
+    # holds the lock. If a hashed slot is empty, and the lock is available, a new
+    # +Cell+ is created. Otherwise, if the slot exists, a CAS is tried. Retries
+    # proceed by "double hashing", using a secondary hash (XorShift) to try to
+    # find a free slot.
     #
-    # The table size is capped because, when there are more threads
-    # than CPUs, supposing that each thread were bound to a CPU,
-    # there would exist a perfect hash function mapping threads to
-    # slots that eliminates collisions. When we reach capacity, we
-    # search for this mapping by randomly varying the hash codes of
-    # colliding threads. Because search is random, and collisions
-    # only become known via CAS failures, convergence can be slow,
-    # and because threads are typically not bound to CPUS forever,
-    # may not occur at all. However, despite these limitations,
-    # observed contention rates are typically low in these cases.
+    # The table size is capped because, when there are more threads than CPUs,
+    # supposing that each thread were bound to a CPU, there would exist a
+    # perfect hash function mapping threads to slots that eliminates collisions.
+    # When we reach capacity, we search for this mapping by randomly varying the
+    # hash codes of colliding threads. Because search is random, and collisions
+    # only become known via CAS failures, convergence can be slow, and because
+    # threads are typically not bound to CPUs forever, may not occur at all.
+    # However, despite these limitations, observed contention rates are
+    # typically low in these cases.
     #
-    # It is possible for a +Cell+ to become unused when threads that
-    # once hashed to it terminate, as well as in the case where
-    # doubling the table causes no thread to hash to it under
-    # expanded mask. We do not try to detect or remove such cells,
-    # under the assumption that for long-running instances, observed
-    # contention levels will recur, so the cells will eventually be
+    # It is possible for a +Cell+ to become unused when threads that once hashed
+    # to it terminate, as well as in the case where doubling the table causes no
+    # thread to hash to it under the expanded mask. We do not try to detect or
+    # remove such cells, under the assumption that for long-running instances,
+    # observed contention levels will recur, so the cells will eventually be
     # needed again; and for short-lived ones, it does not matter.
     class Striped64
       # Padded variant of AtomicLong supporting only raw accesses plus CAS.
@@ -85,8 +81,8 @@ def cas_computed

       extend Volatile
       attr_volatile :cells, # Table of cells. When non-null, size is a power of 2.
-                    :base, # Base value, used mainly when there is no contention, but also as a fallback during table initialization races. Updated via CAS.
-                    :busy # Spinlock (locked via CAS) used when resizing and/or creating Cells.
+        :base, # Base value, used mainly when there is no contention, but also as a fallback during table initialization races. Updated via CAS.
+        :busy  # Spinlock (locked via CAS) used when resizing and/or creating Cells.

       alias_method :busy?, :busy
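The striping scheme documented above can be miniaturized in plain Ruby. This sketch substitutes a fixed-size table and a per-stripe Mutex for the CAS-based +Cell+s (plain MRI has no AtomicLong), and hashes threads by object_id; all names are illustrative:

```ruby
class StripedCounter
  STRIPES = 8 # Striped64 would size this to a power of two >= CPU count

  def initialize
    @cells = Array.new(STRIPES) { { lock: Mutex.new, value: 0 } }
  end

  # Each thread hashes to its own stripe, so concurrent writers rarely
  # contend on the same lock -- the core idea behind dynamic striping.
  def add(x)
    cell = @cells[Thread.current.object_id % STRIPES]
    cell[:lock].synchronize { cell[:value] += x }
  end

  # Folds all stripes; like LongAdder#sum this is a moment-in-time total,
  # not an atomic snapshot across stripes.
  def sum
    @cells.sum { |cell| cell[:lock].synchronize { cell[:value] } }
  end
end

counter = StripedCounter.new
threads = 4.times.map { Thread.new { 1_000.times { counter.add(1) } } }
threads.each(&:join)
counter.sum # => 4000
```

The real class adds what this sketch omits: lazy table creation, growth on contention, cache-line padding, and lock-free CAS updates.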

lib/thread_safe/util/xor_shift_random.rb

Lines changed: 4 additions & 2 deletions
@@ -1,7 +1,9 @@
 module ThreadSafe
   module Util
-    # A xorshift random number (positive +Fixnum+s) generator, provides reasonably cheap way to generate thread local random numbers without contending for
-    # the global +Kernel.rand+.
+    # A xorshift random number (positive +Fixnum+s) generator; provides a
+    # reasonably cheap way to generate thread-local random numbers without
+    # contending for the global +Kernel.rand+.
+    #
     # Usage:
     #   x = XorShiftRandom.get # uses Kernel.rand to generate an initial seed
     #   while true
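For reference, a xorshift step is just three shift-and-XOR operations. The sketch below uses Marsaglia's classic 32-bit (13, 17, 5) triple, which may differ from the constants the gem actually uses; the module name is illustrative:

```ruby
module MiniXorShift
  MAX = 2**32

  # Mirrors the documented usage: seed once from Kernel.rand, then iterate
  # the pure xorshift step without touching the global RNG again.
  def self.get
    xorshift(Kernel.rand(MAX - 1) + 1) # nonzero seed
  end

  # For a valid triple this is a bijection on nonzero 32-bit states, so a
  # nonzero seed never collapses to 0 and the sequence cannot get stuck.
  def self.xorshift(x)
    x ^= (x << 13) & (MAX - 1)
    x ^= x >> 17
    x ^= (x << 5) & (MAX - 1)
    x
  end
end

MiniXorShift.xorshift(1) # => 270369
```

Because the state is a single integer kept by the caller (here, per thread), there is no shared RNG state to lock, which is the whole point versus `Kernel.rand`.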
