Commit d6d1dd2

convert rest of the examples to evaluated md
1 parent 07510fc commit d6d1dd2

4 files changed: +158 additions, -192 deletions

.yardopts

Lines changed: 1 addition & 0 deletions
@@ -5,6 +5,7 @@
 --title=Concurrent Ruby
 --template default
 --template-path ./yard-template
+--default-return undocumented
 
 ./lib/**/*.rb
 ./ext/concurrent_ruby_ext/**/*.c

Gemfile

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@ group :development do
   # TODO (pitr-ch 15-Oct-2016): does not work on 1.9.3 anymore
   gem 'inch', '~> 0.6.3', :platforms => :mri, :require => false
   gem 'redcarpet', '~> 3.3.2', platforms: :mri # understands github markdown
+  gem 'md-ruby-eval'
 end
 
 group :testing do
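
Note: the snippets in the `doc/promises.in.md` diff below call the factory methods (`future`, `delay`, `zip`, `fulfilled_future`, ...) unqualified. As a hedged sketch of the setup they assume, and not something this commit adds, they can be run against the concurrent-ruby-edge gem of that era roughly like this:

```ruby
# Assumed setup, not part of the commit: load the edge gem and bring the
# Promises factory methods into scope, as the doc's FactoryMethods note implies.
require 'concurrent-edge'
include Concurrent::Promises::FactoryMethods

f = future_on(:io) { 1 + 1 } # run the task on the global :io executor pool
f.value!                     # blocks until the future resolves, then returns 2
```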

doc/promises.in.md

Lines changed: 65 additions & 61 deletions
@@ -3,8 +3,8 @@
 Promises is a new framework unifying former `Concurrent::Future`,
 `Concurrent::Promise`, `Concurrent::IVar`, `Concurrent::Event`,
 `Concurrent.dataflow`, `Delay`, and `TimerTask`. It extensively uses the new
-synchronization layer to make all the features *non-blocking* and
-*lock-free*, with the exception of obviously blocking operations like
+synchronization layer to make all the features *lock-free*,
+with the exception of obviously blocking operations like
 `#wait`, `#value`, etc. As a result it lowers a danger of deadlocking and offers
 better performance.
 
@@ -15,11 +15,11 @@ better performance.
 - What is it?
 - What is it for?
 - Main classes {Future}, {Event}
-- Explain `_on` `_using` suffixes.
+- Explain pool usage :io vs :fast, and `_on` `_using` suffixes.
 
 ## Old examples follow
 
-*TODO rewrite into md with examples*
+*TODO review pending*
 
 Constructors are not accessible, instead there are many constructor methods in
 FactoryMethods.
@@ -65,7 +65,7 @@ future.value
 future.resolved?
 ```
 
-Rejecting asynchronous task
+Rejecting asynchronous task:
 
 ```ruby
 future = future { raise 'Boom' }
@@ -76,14 +76,16 @@ future.reason
 raise future rescue $!
 ```
 
-Direct creation of resolved futures
+Direct creation of resolved futures:
 
 ```ruby
 fulfilled_future(Object.new)
 rejected_future(StandardError.new("boom"))
+```
 
-### Chaining of futures
+Chaining of futures:
 
+```ruby
 head = fulfilled_future 1 #
 branch1 = head.then(&:succ) #
 branch2 = head.then(&:succ).then(&:succ) #
@@ -96,19 +98,19 @@ zip(branch1, branch2, branch1).then { |*values| values.reduce &:+ }.value!
 # pick only first resolved
 any(branch1, branch2).value!
 (branch1 | branch2).value!
+```
 
+Any supplied arguments are passed to the block, promises ensure that they are visible to the block:
 
-### Arguments
-
-# any supplied arguments are passed to the block, promises ensure that they are visible to the block
-
+```ruby
 future('3') { |s| s.to_i }.then(2) { |a, b| a + b }.value
 fulfilled_future(1).then(2, &:+).value
 fulfilled_future(1).chain(2) { |fulfilled, value, reason, arg| value + arg }.value
+```
 
+Error handling:
 
-### Error handling
-
+```ruby
 fulfilled_future(Object.new).then(&:succ).then(&:succ).rescue { |e| e.class }.value # error propagates
 fulfilled_future(Object.new).then(&:succ).rescue { 1 }.then(&:succ).value # rescued and replaced with 1
 fulfilled_future(1).then(&:succ).rescue { |e| e.message }.then(&:succ).value # no error, rescue not applied
@@ -118,18 +120,18 @@ rejected_zip.result
 rejected_zip.then { |v| 'never happens' }.result
 rejected_zip.rescue { |a, b| (a || b).message }.value
 rejected_zip.chain { |fulfilled, values, reasons| [fulfilled, values.compact, reasons.compact] }.value
+```
 
+Delay will not evaluate until asked by #value or other method requiring resolution.
 
-### Delay
-
-# will not evaluate until asked by #value or other method requiring resolution
+``` ruby
 future = delay { 'lazy' }
 sleep 0.1 #
 future.resolved?
 future.value
-
-# propagates trough chain allowing whole or partial lazy chains
-
+```
+It propagates trough chain allowing whole or partial lazy chains.
+```ruby
 head = delay { 1 }
 branch1 = head.then(&:succ)
 branch2 = head.delay.then(&:succ)
@@ -144,21 +146,23 @@ sleep 0.1 # forces only head to resolve, branch 2 stays pending
 
 join.value
 [head, branch1, branch2, join].map(&:resolved?)
+```
 
+When flatting, it waits for inner future. Only the last call to value blocks thread.
 
-### Flatting
-
-# waits for inner future, only the last call to value blocks thread
+```ruby
 future { future { 1+1 } }.flat.value
 
 # more complicated example
 future { future { future { 1 + 1 } } }.
 flat(1).
 then { |f| f.then(&:succ) }.
 flat(1).value
+```
 
+Scheduling of asynchronous tasks:
 
-### Schedule
+```ruby
 
 # it'll be executed after 0.1 seconds
 scheduled = schedule(0.1) { 1 }
@@ -171,9 +175,11 @@ scheduled = delay { 1 }.schedule(0.1).then(&:succ)
 # will not be scheduled until value is requested
 sleep 0.1 #
 scheduled.value # returns after another 0.1sec
+```
 
+Resolvable Future and Event:
 
-### Resolvable Future and Event
+```ruby
 
 future = resolvable_future
 event = resolvable_event()
@@ -189,10 +195,11 @@ event.resolve
 
 # The threads can be joined now
 [t1, t2].each &:join #
+```
 
+Callbacks:
 
-### Callbacks
-
+```ruby
 queue = Queue.new
 future = delay { 1 + 1 }
 
@@ -203,22 +210,22 @@ queue.empty?
 future.value
 queue.pop
 queue.pop
+```
 
+Factory methods are taking names of the global executors
+(or instances of custom executors).
 
-### Thread-pools
-
-# Factory methods are taking names of the global executors
-# (ot instances of custom executors)
-
+```ruby
 # executed on :fast executor, only short and non-blocking tasks can go there
 future_on(:fast) { 2 }.
 # executed on executor for blocking and long operations
 then_using(:io) { File.read __FILE__ }.
 wait
+```
 
+Interoperability with actors:
 
-### Interoperability with actors
-
+```ruby
 actor = Concurrent::Actor::Utils::AdHoc.spawn :square do
 -> v { v ** 2 }
 end
@@ -230,37 +237,26 @@ future { 2 }.
 value
 
 actor.ask(2).then(&:succ).value
-
-
-### Interoperability with channels
-
-ch1 = Concurrent::Channel.new
-ch2 = Concurrent::Channel.new
-
-result = select(ch1, ch2)
-ch1.put 1
-result.value!
-
-
-future { 1+1 }.
-then_put(ch1)
-result = future { '%02d' }.
-then_select(ch1, ch2).
-then { |format, (value, channel)| format format, value }
-result.value!
-
+```
 
 ### Common use-cases Examples
 
-# simple background processing
+#### simple background processing
+
+```ruby
 future { do_stuff }
+```
 
-# parallel background processing
+#### parallel background processing
+
+```ruby
 jobs = 10.times.map { |i| future { i } } #
 zip(*jobs).value
+```
 
+#### periodic task
 
-# periodic task
+```ruby
 def schedule_job(interval, &job)
 # schedule the first execution and chain restart og the job
 Concurrent.schedule(interval, &job).chain do |fulfilled, continue, reason|
@@ -269,23 +265,25 @@ def schedule_job(interval, &job)
 else
 # handle error
 p reason
-# retry
-schedule_job(interval, &job)
+# retry sooner
+schedule_job(interval / 10, &job)
 end
 end
 end
 
 queue = Queue.new
 count = 0
+interval = 0.05 # small just not to delay execution of this example
 
-schedule_job 0.05 do
+schedule_job interval do
 queue.push count
 count += 1
 # to continue scheduling return true, false will end the task
 if count < 4
 # to continue scheduling return true
 true
 else
+# close the queue with nil to simplify reading it
 queue.push nil
 # to end the task return false
 false
@@ -294,10 +292,14 @@ end
 
 # read the queue
 arr, v = [], nil; arr << v while (v = queue.pop) #
+# arr has the results from the executed scheduled tasks
 arr
+```
+#### How to limit processing where there are limited resources?
+
+By creating an actor managing the resource
 
-# How to limit processing where there are limited resources?
-# By creating an actor managing the resource
+```ruby
 DB = Concurrent::Actor::Utils::AdHoc.spawn :db do
 data = Array.new(10) { |i| '*' * i }
 lambda do |message|
@@ -317,9 +319,11 @@ concurrent_jobs = 11.times.map do |v|
 end #
 
 zip(*concurrent_jobs).value!
+```
 
+In reality there is often a pool though:
 
-# In reality there is often a pool though:
+```ruby
 data = Array.new(10) { |i| '*' * i }
 pool_size = 5
 
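The diff is truncated here, just after `pool_size = 5`; the rest of the pool-based example is not shown and is left as-is. Purely as a hypothetical illustration (not the file's actual continuation), limiting concurrency with a fixed pool of workers could be sketched using only the `AdHoc` actor and `zip` constructs already used above:

```ruby
# Hypothetical sketch only; names like `workers` are illustrative and this is
# not the continuation of promises.in.md.
require 'concurrent-edge'
include Concurrent::Promises::FactoryMethods

data      = Array.new(10) { |i| '*' * i }
pool_size = 5

# One AdHoc actor per pool slot; each actor processes its messages one at a
# time, so at most pool_size lookups run concurrently.
workers = Array.new(pool_size) do |i|
  Concurrent::Actor::Utils::AdHoc.spawn(:"worker_#{i}") do
    lambda { |message| data.fetch(message) }
  end
end

# Spread the jobs over the pool round-robin; ask returns a future, so the jobs
# still compose with then/rescue/zip exactly as in the examples above.
concurrent_jobs = 11.times.map do |v|
  workers[v % pool_size].
    ask(v).
    then(&:size).
    rescue { |reason| reason.message } # the 11th job is out of range and gets rejected
end

zip(*concurrent_jobs).value!
```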