- Add capability to discard duplicate jobs with concurrency configuration
- Remove 'duplicate' verbiage and use concurrency limits instead, simplify control flow
- Fix race condition vulnerability by changing logic to enqueue
- Add assertions when bulk enqueuing jobs with concurrency controls
- Dispatch jobs in the order they were enqueued
- Set ActiveJob successfully_enqueued for both enqueued/blocked and discarded jobs
- Change concurrency 'at_limit' -> 'on_conflict'
- Update discard logic to trigger an ActiveRecord rollback when attempting dispatch to prevent discarded job creation
- Change default on_conflict concurrency option to old behaviour (blocking execution)
- Add concurrent on_conflict documentation to README
- Add test for discarding grouped concurrent jobs
- Fix tests which expect raising enqueue errors
- Add test to confirm scheduled jobs are also discarded
README.md (34 additions, 2 deletions)
## Concurrency controls

Solid Queue extends Active Job with concurrency controls, allowing you to limit how many jobs of a certain type or with certain arguments can run at the same time. When limited in this way, jobs will be blocked from running, and they'll stay blocked until another job finishes and unblocks them, or after the set expiry time (the concurrency limit's _duration_) elapses. Jobs can be configured to either be blocked or discarded.

- `key` is the only required parameter, and it can be a symbol, a string or a proc that receives the job arguments as parameters and will be used to identify the jobs that need to be limited together. If the proc returns an Active Record record, the key will be built from its class name and `id`.
- `to` is `1` by default.
- `duration` is set to `SolidQueue.default_concurrency_control_period` by default, which itself defaults to `3 minutes`, but that you can configure as well.
- `group` is used to control the concurrency of different job classes together. It defaults to the job class name.
- `on_conflict` controls what happens when a job is enqueued over the configured limit of concurrent executions:
  - `:block` (default): the job is blocked and isn't dispatched until another job completes and unblocks it.
  - `:discard`: the job is discarded.

When a job includes these controls, we'll ensure that, at most, the number of jobs (indicated as `to`) that yield the same `key` will be performed concurrently, and this guarantee will last for `duration` for each job enqueued. Note that there's no guarantee about _the order of execution_, only about jobs being performed at the same time (overlapping).
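For instance, the options above can be combined in a single `limits_concurrency` declaration. The job class, key, and values below are illustrative, not taken from the PR:

```ruby
class DeliverNotificationJob < ApplicationJob
  # At most 2 jobs sharing the same account key may run concurrently;
  # each concurrency claim expires after 5 minutes
  limits_concurrency to: 2, key: ->(account) { account }, duration: 5.minutes

  def perform(account)
    # ...
  end
end
```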

Jobs are unblocked in order of priority but queue order is not taken into account.

Finally, failed jobs that are automatically or manually retried work in the same way as new jobs that get enqueued: they get in the queue for getting an open semaphore, and whenever they get it, they'll be run. It doesn't matter if they had already gotten an open semaphore in the past.
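The semaphore behaviour described above can be sketched in plain Ruby. This is an illustrative model of the block/discard decision, not Solid Queue's actual implementation:

```ruby
# Minimal model of a concurrency semaphore: each key allows up to
# `limit` concurrent claims; a conflicting claim is either queued
# for later (blocked) or dropped (discarded), mirroring on_conflict.
class SemaphoreModel
  def initialize(limit:, on_conflict: :block)
    @limit = limit
    @on_conflict = on_conflict
    @claims = Hash.new(0)
    @blocked = Hash.new { |h, k| h[k] = [] }
  end

  # Try to claim the semaphore for `key`; returns :ready, :blocked or :discarded
  def enqueue(key, job)
    if @claims[key] < @limit
      @claims[key] += 1
      :ready
    elsif @on_conflict == :discard
      :discarded
    else
      @blocked[key] << job
      :blocked
    end
  end

  # Releasing a claim promotes the oldest blocked job, if any
  def finish(key)
    @claims[key] -= 1
    if (job = @blocked[key].shift)
      @claims[key] += 1
      job
    end
  end
end
```

With `limit: 1`, a second `enqueue` for the same key returns `:blocked` (and is later promoted by `finish`) or `:discarded`, depending on `on_conflict`.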

### Discarding conflicting jobs

When `on_conflict` is set to `:discard`, jobs enqueued above the concurrent execution limit are discarded rather than enqueued.

```ruby
class ConcurrentJob < ApplicationJob
  limits_concurrency key: ->(record) { record }, on_conflict: :discard

  def perform(record)
    # ...
  end
end

# The first job claims the semaphore and is enqueued normally
ConcurrentJob.perform_later(record)

# A second job for the same record is over the limit and gets discarded
second_enqueued_job = ConcurrentJob.perform_later(record) do |job|
  job.successfully_enqueued?
  # => false
end

second_enqueued_job
# => false
```
### Performance considerations
Concurrency controls introduce significant overhead (blocked executions need to be created and promoted to ready, semaphores need to be created and updated) so you should consider carefully whether you need them. For throttling purposes, where you plan to have `limit` significantly larger than 1, I'd encourage relying on a limited number of workers per queue instead. For example:
Or something similar to that depending on your setup. You can also assign a different queue to a job on the moment of enqueuing so you can decide whether to enqueue a job in the throttled queue or another queue depending on the arguments, or pass a block to `queue_as` as explained [here](https://guides.rubyonrails.org/active_job_basics.html#queues).
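As a sketch of that last option, Active Job's `queue_as` accepts a block evaluated per job; the job class and routing condition below are hypothetical:

```ruby
class BackfillJob < ApplicationJob
  # Route large backfills to the throttled queue, everything else to default
  queue_as do
    arguments.first > 1_000 ? :throttled : :default
  end
end
```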
## Failed jobs and retries
Solid Queue doesn't include any automatic retry mechanism; it [relies on Active Job for this](https://edgeguides.rubyonrails.org/active_job_basics.html#retrying-or-discarding-failed-jobs). Jobs that fail will be kept in the system, and a _failed execution_ (a record in the `solid_queue_failed_executions` table) will be created for these. The job will stay there until manually discarded or re-enqueued. You can do this in a console as: