SELECT graphile_worker.add_job('send_email', '{"count": 2}', job_key := 'abc');
COMMIT;
```

If both the previous and new versions of the job use array payloads, they will
be merged; for example, the `process_invoices` job below will run once with the
payload `[{"id": 42}, {"id": 67}]`:

```sql
BEGIN;
SELECT graphile_worker.add_job('process_invoices', '[{"id": 42}]', job_key := 'inv');
SELECT graphile_worker.add_job('process_invoices', '[{"id": 67}]', job_key := 'inv');
COMMIT;
```

In all cases, if no match is found, a new job will be created.
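As a standalone illustration of the merge semantics (not worker code): concatenating the two JSON array payloads from the calls above yields the payload the single `process_invoices` run receives.

```python
import json

# Payloads from the two add_job calls with the same job_key, both arrays.
first = json.loads('[{"id": 42}]')
second = json.loads('[{"id": 67}]')

# The worker concatenates array payloads rather than overwriting them,
# so the single queued job runs with the combined list.
merged = first + second
print(merged)  # [{'id': 42}, {'id': 67}]
```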

### `job_key_mode`

Behavior when an existing job with the same job key is found is controlled by
the `job_key_mode` setting:

- `replace` (default) - overwrites the unlocked job with the new values (merging
  array payloads). This is primarily useful for rescheduling, updating, or
  **debouncing** (delaying execution until there have been no events for at
  least a certain time period). Locked jobs will cause a new job to be scheduled
  instead.
- `preserve_run_at` - overwrites the unlocked job with the new values (merging
  array payloads), but preserves `run_at`. This is primarily useful for
  **throttling** (executing at most once over a given time period). Locked jobs
  will cause a new job to be scheduled instead.
- `unsafe_dedupe` - if an existing job is found, even if it is locked or
  permanently failed, then it won't be updated. This is very dangerous as it
  means that the event that triggered this `add_job` call may not result in any
  action.

The full `job_key_mode` algorithm is roughly as follows:

- Otherwise:
  - the job will have all its attributes updated to their new values.

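The behavior described above can be approximated by a small in-memory model. This is an illustrative sketch with hypothetical names (`Job`, `jobs`, a Python `add_job`), not the worker's actual SQL implementation, and it ignores details such as permanently-failed jobs:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Job:
    task: str
    payload: Any
    run_at: float
    job_key: Optional[str] = None
    locked: bool = False

jobs: list[Job] = []  # stand-in for the jobs table

def add_job(task: str, payload: Any, run_at: float,
            job_key: Optional[str] = None,
            job_key_mode: str = "replace") -> Job:
    existing = None
    if job_key is not None:
        existing = next((j for j in jobs if j.job_key == job_key), None)
    if existing is not None:
        if job_key_mode == "unsafe_dedupe":
            return existing  # never updated, even if locked
        if existing.locked:
            existing.job_key = None  # locked: free the key, schedule a new job
        else:
            if isinstance(existing.payload, list) and isinstance(payload, list):
                payload = existing.payload + payload  # merge array payloads
            existing.task = task
            existing.payload = payload
            if job_key_mode == "replace":
                existing.run_at = run_at  # preserve_run_at keeps the old time
            return existing
    job = Job(task, payload, run_at, job_key)
    jobs.append(job)
    return job
```

For example, two calls with the same key, array payloads, and `preserve_run_at` end up as a single job holding the concatenated payload and the original `run_at`.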
### Array payload merging

When updating an existing job via `job_key` (except in `unsafe_dedupe` mode), if
both the existing job's payload and the new payload are JSON arrays, they will
be concatenated rather than overwritten. This enables a batching pattern where
multiple events can be accumulated into a single job for more efficient
execution; see [Handling batch jobs](./tasks.md#handling-batch-jobs) for more
info.

```sql
-- First call creates job with payload: [{"id": 1}]
SELECT graphile_worker.add_job(
  'process_events',
  '[{"id": 1}]'::json,
  job_key := 'my_batch',
  job_key_mode := 'preserve_run_at',
  run_at := NOW() + INTERVAL '10 seconds'
);

-- Second call (before job runs) merges to: [{"id": 1}, {"id": 2}]
SELECT graphile_worker.add_job(
  'process_events',
  '[{"id": 2}]'::json,
  job_key := 'my_batch',
  job_key_mode := 'preserve_run_at',
  run_at := NOW() + INTERVAL '10 seconds'
);
```

Combined with the `preserve_run_at` `job_key_mode`, this creates a fixed
batching window: the job runs at the originally scheduled time with all
accumulated payloads merged together. With the default `replace` mode, each new
event would push `run_at` forward, creating a rolling/debounce window instead.
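The difference between the two windows can be sketched numerically (illustrative only; times are in seconds, and each event schedules the job for 10 seconds after it arrives):

```python
events = [0, 3, 6]  # arrival times of events, all before the job runs
DELAY = 10

# replace: every event overwrites run_at -> rolling/debounce window
run_at = None
for t in events:
    run_at = t + DELAY
print(run_at)  # 16: the job runs 10s after the *last* event

# preserve_run_at: only the first event sets run_at -> fixed batching window
run_at = None
for t in events:
    if run_at is None:
        run_at = t + DELAY
print(run_at)  # 10: the job runs 10s after the *first* event
```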

:::caution Both payloads must be arrays for merging to occur

If **either** payload is not an array (e.g., one is an object, as is the default
if no payload is specified), the standard replace behavior applies and the old
payload will be lost.

:::

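A minimal sketch of this rule (not the actual SQL): merging only happens when both sides are arrays; otherwise the new payload simply replaces the old one.

```python
def next_payload(old, new):
    """Merge only when BOTH payloads are JSON arrays; otherwise replace."""
    if isinstance(old, list) and isinstance(new, list):
        return old + new
    return new

# Both arrays: concatenated.
print(next_payload([{"id": 1}], [{"id": 2}]))  # [{'id': 1}, {'id': 2}]
# Either side is an object: the old payload is simply lost.
print(next_payload([{"id": 1}], {"id": 2}))    # {'id': 2}
print(next_payload({"id": 1}, [{"id": 2}]))    # [{'id': 2}]
```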
## Removing jobs

Pending jobs may also be removed using `job_key`:
prevented from running again, and will have the `job_key` removed from it.)

Calling `remove_job` for a locked (i.e. running) job will not actually remove
it, but will prevent it from running again on failure.

There's currently a race condition in adding jobs with a job key, which means
that under very high contention on a specific key an `add_job` call may fail and
return `null`. You should check for this `null` and handle it appropriately:
retrying, throwing an error, or whatever else makes sense for your code. See
https://github.com/graphile/worker/issues/580 for more details.
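One way to handle this from application code is a small retry wrapper. This is a hedged sketch: `add_job` below is a stand-in for whatever function issues the `SELECT graphile_worker.add_job(...)` query in your codebase and returns `None` when the race is hit; the wrapper name and parameters are hypothetical.

```python
import time

def add_job_with_retry(add_job, *args, attempts=3, backoff=0.05, **kwargs):
    """Retry the rare race where add_job returns null/None under job_key
    contention; re-raise as an error if it never succeeds."""
    for attempt in range(attempts):
        job = add_job(*args, **kwargs)
        if job is not None:
            return job
        time.sleep(backoff * (2 ** attempt))  # brief exponential backoff
    raise RuntimeError("add_job kept returning null under job_key contention")
```

For instance, wrapping a flaky `add_job` that fails twice before succeeding returns the job on the third attempt instead of surfacing `None` to the caller.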