@@ -200,7 +200,7 @@ block, but there is no simple way that a client can enforce atomicity
across nodes on a distributed system.
The compromise of limiting the transaction pipeline to same-slot keys
- is exactly that: a compromise. While this behavior is differnet from
+ is exactly that: a compromise. While this behavior is different from
non-transactional cluster pipelines, it simplifies migration of clients
from standalone to cluster under some circumstances. Note that application
code that issues multi/exec commands on a standalone client without
@@ -215,6 +215,70 @@ An alternative is some kind of two-step commit solution, where a slot
validation is run before the actual commands are run. This could work
with controlled node maintenance but does not cover single node failures.
+ Given the cluster limitations for transactions, the pipeline is not in
+ transactional mode by default. To enable a transactional context, set:
+
+ .. code:: python
+
+     >>> p = r.pipeline(transaction=True)
+
+ After entering the transactional context, you can add commands to the
+ transaction in one of the following ways:
+
+ .. code:: python
+
+     >>> p = r.pipeline(transaction=True)  # Chaining commands
+     >>> p.set("key", "value")
+     >>> p.get("key")
+     >>> response = p.execute()
+
+ Or:
+
+ .. code:: python
+
+     >>> with r.pipeline(transaction=True) as pipe:  # Using a context manager
+     ...     pipe.set("key", "value")
+     ...     pipe.get("key")
+     ...     response = pipe.execute()
+
+ As you can see, there is no need to send MULTI/EXEC commands explicitly to mark the
+ start and end of the transaction context; ClusterPipeline takes care of it.
+
+ To ensure that different keys are mapped to the same hash slot on the server side,
+ prefix your keys with the same hash tag, a technique that lets you control key
+ distribution. More information is available `here <https://redis.io/docs/latest/operate/oss_and_stack/reference/cluster-spec/#hash-tags>`_.
+
+ .. code:: python
+
+     >>> with r.pipeline(transaction=True) as pipe:
+     ...     pipe.set("{tag}foo", "bar")
+     ...     pipe.set("{tag}bar", "foo")
+     ...     pipe.get("{tag}foo")
+     ...     pipe.get("{tag}bar")
+     ...     response = pipe.execute()
+
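+ Conversely, keys that do not share a hash tag usually map to different slots and
+ cannot be combined in a single transactional pipeline. The snippet below is a rough
+ sketch of what to expect; the exact exception type and the point at which it is
+ raised depend on the client version, so the error is caught broadly here:
+
+ .. code:: python
+
+     >>> try:
+     ...     pipe = r.pipeline(transaction=True)
+     ...     pipe.set("foo", "1")  # "foo" and "bar" hash to different slots
+     ...     pipe.set("bar", "2")
+     ...     pipe.execute()
+     ... except Exception as exc:  # broad catch for illustration only
+     ...     print("cross-slot transaction rejected:", exc)
+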
+ CAS Transactions
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+
+ If you want to apply optimistic locking to certain keys, you have to execute the
+ WATCH command in a transactional context. The WATCH command is subject to the same
+ limitation as any other multi-key command: all keys must map to the same hash slot.
+
+ However, the difference between a CAS transaction and a regular one is that you have
+ to call the MULTI command explicitly to indicate the start of the transactional
+ context. The WATCH command itself, and any commands issued before MULTI, are executed
+ immediately on the server side, so you can apply optimistic locking and fetch the
+ data you need before the transaction executes.
+
+ .. code:: python
+
+     >>> with r.pipeline(transaction=True) as pipe:
+     ...     pipe.watch("mykey")        # Apply locking by executing the command immediately
+     ...     val = pipe.get("mykey")    # Immediately retrieves the value
+     ...     val = int(val) + 1         # Increment value (GET returns a string/bytes)
+     ...     pipe.multi()               # Start the transaction context
+     ...     pipe.set("mykey", val)     # Command will be pipelined
+     ...     response = pipe.execute()  # Returns OK, or None if the key was modified in the meantime
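+
+ A common follow-up is to retry when the watched key changes between the read and the
+ EXEC. The sketch below builds on the behaviour described above (``execute()``
+ returning ``None`` when the watched key was modified); the function name, the retry
+ count and the defensive ``WatchError`` handling are illustrative assumptions rather
+ than part of the API shown above.
+
+ .. code:: python
+
+     >>> from redis.exceptions import WatchError
+     >>> def increment_with_retry(client, key, attempts=5):
+     ...     for _ in range(attempts):
+     ...         with client.pipeline(transaction=True) as pipe:
+     ...             try:
+     ...                 pipe.watch(key)            # lock the key optimistically
+     ...                 val = int(pipe.get(key))   # read the current value immediately
+     ...                 pipe.multi()               # queue the transactional part
+     ...                 pipe.set(key, val + 1)
+     ...                 if pipe.execute() is not None:
+     ...                     return True            # transaction applied
+     ...             except WatchError:             # some client versions signal conflicts this way
+     ...                 pass                       # key changed, try again
+     ...     return False
+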
Publish / Subscribe