- fix timestamp issue in `_defer_until` without timezone offset, #182
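The fix above concerns computing a job's delay from a timezone-aware `defer_until` datetime. A minimal sketch of the corrected arithmetic, assuming both datetimes are timezone-aware (the `defer_delay_ms` helper is hypothetical, not arq's API):

```python
from datetime import datetime, timedelta, timezone

def defer_delay_ms(defer_until: datetime, now: datetime) -> int:
    """Milliseconds until defer_until; both datetimes must be tz-aware
    so an offset like +02:00 is honoured rather than silently dropped."""
    return int((defer_until - now).total_seconds() * 1000)
```

Subtracting two aware datetimes normalises both to UTC first, which is what makes the offset-bearing case come out right.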
- add option to disable signal handler registration for running inside other frameworks, #183
- add `default_queue_name` to `create_redis_pool` and `ArqRedis`, #191
- `Worker` can retrieve the `queue_name` from the connection pool, if present
- fix potential race condition when starting jobs, #194
- support python 3.9 and pydantic 1.7, #214
- Python 3.8 support, #178
- fix concurrency with multiple workers, #180
- full mypy coverage, #181
- Add `py.typed` file to tell mypy the package has type hints, #163
- Added `ssl` option to `RedisSettings`, #165
- Include `queue_name` in the job object returned by `enqueue_job`, #160
- Fix cron scheduling on a specific queue, by @dmvass and @Tinche
- add support for Redis Sentinel fix #132
- fix `Worker.abort_job` invalid expire time error, by @dmvass
- fix usage of `max_burst_jobs`, improve coverage, fix #152
- stop lots of `WatchVariableError` errors in log, #153
- deal better with failed job deserialization, #149 by @samuelcolvin
- fix `run_check(max_burst_jobs=...)` when a job fails, #150 by @samuelcolvin
- add `worker.queue_read_limit`, fix #141, by @rubik
- custom serializers, e.g. to use msgpack rather than pickle, #143 by @rubik
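The custom-serializer change lets jobs be encoded with something other than pickle. A rough sketch of the idea, with JSON standing in for msgpack; the `JobCodec` class and its method names are illustrative only (arq itself takes serializer/deserializer callables rather than a codec object):

```python
import json
import pickle

class JobCodec:
    """Pluggable job (de)serialization, defaulting to pickle.
    Illustrative sketch, not arq's actual interface."""

    def __init__(self, dumps=pickle.dumps, loads=pickle.loads):
        self.dumps = dumps  # object -> bytes
        self.loads = loads  # bytes -> object

    def encode(self, job) -> bytes:
        return self.dumps(job)

    def decode(self, raw: bytes):
        return self.loads(raw)

# JSON stand-in for msgpack: any pair of object->bytes / bytes->object
# callables can be dropped in
json_codec = JobCodec(
    dumps=lambda obj: json.dumps(obj).encode(),
    loads=lambda raw: json.loads(raw.decode()),
)
```

Swapping the codec avoids pickle's arbitrary-code-execution risk when the queue is shared with less trusted producers.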
- add `ArqRedis.queued_jobs` utility method for getting queued jobs while testing, fix #145 by @samuelcolvin
- prevent duplicate `job_id` when job result exists, fix #137
- add "don't retry mode" via `worker.retry_jobs = False`, fix #139
- add `worker.max_burst_jobs`
- improved error when a job is aborted (e.g. function not found)
- fix semaphore on worker with many expired jobs
- add support for different queues, #127 thanks @tsutsarin
- use dicts for pickling not tuples, better handling of pickling errors, #123
- use `pipeline` in `enqueue_job`
- catch any error when pickling job result
- add support for python 3.6
- add `Worker.run_check`, fix #115
- fix `Worker` with custom redis settings
- add `job_try` argument to `enqueue_job`, #113
- adding `--watch` mode to the worker (requires `watchgod`), #114
- allow `ctx` when creating Worker
- add `all_job_results` to `ArqRedis`
- fix python path when starting worker
- Breaking Change: COMPLETE REWRITE!!! see docs for details, #110
- update dependencies
- reconfigure `Job`, return a job instance when enqueuing tasks #93
- tweaks to docs #106
- package updates, particularly compatibility for `msgpack 0.5.6`
- Breaking Change: integration with aioredis >= 1.0, basic usage hasn't changed but look at aioredis's migration docs for changes in redis API #76
- better signal handling, support `uvloop` #73
- drain pending tasks and drain task cancellation #74
- add aiohttp and docker demo `/demo` #75
- extract `create_pool_lenient` from `RedisMixin`
- improve redis connection traceback
- `RedisSettings` repr method
- add `create_connection_timeout` to connection pool
- fix bug with `RedisMixin.get_redis_pool` creating multiple queues
- tweak drain logs
- only save job on task in drain if re-enqueuing
- add semaphore timeout to drains
- add key count to `log_redis_info`
- correct format of `log_redis_info`
- log redis version when starting worker, fix #64
- log "connection success" when connecting to redis after connection failures, fix #67
- add job ids, for now they're just used in logging, fix #53
- allow set encoding in msgpack for jobs #49
- cron tasks allowing scheduling of functions in the future #50
- Breaking change: switch `to_unix_ms` to just return the timestamp int, add `to_unix_ms_tz` to return the tz offset too
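A sketch of the split-out behaviour described above, assuming timezone-aware input; these are simplified stand-ins for arq's utilities, not its exact implementation:

```python
from datetime import datetime, timedelta, timezone

def to_unix_ms(dt: datetime) -> int:
    # just the epoch timestamp in milliseconds, as an int
    return int(dt.timestamp() * 1000)

def to_unix_ms_tz(dt: datetime):
    # timestamp plus the tz offset in seconds (None for naive datetimes)
    offset = dt.utcoffset()
    return to_unix_ms(dt), None if offset is None else int(offset.total_seconds())
```

Keeping the plain variant returning a bare int simplifies callers that never care about the offset.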
- uprev setup requires
- correct setup arguments
- add `async-timeout` dependency
- use async-timeout around `shadow_factory`
- change logger name for control process log messages
- use `Semaphore` rather than `asyncio.wait(..., return_when=asyncio.FIRST_COMPLETED)` for improved performance
- improve log display
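The general pattern behind that change is bounding concurrency with a semaphore plus `gather`, instead of manually juggling `asyncio.wait(..., return_when=FIRST_COMPLETED)`. A generic sketch, not arq's internals:

```python
import asyncio

async def run_bounded(coros, limit: int = 10):
    """Run coroutines with at most `limit` in flight at once."""
    sem = asyncio.Semaphore(limit)

    async def guarded(coro):
        async with sem:  # blocks while `limit` tasks are already running
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))
```

`gather` also preserves input order in its results, which `wait`'s done/pending sets do not.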
- add timeout and retry logic to `RedisMixin.create_redis_pool`
- implement reusable `Drain` which takes tasks from a redis list and allows them to be executed asynchronously.
- `Drain` uses python 3.6 `async yield`, therefore python 3.5 is no longer supported.
- prevent repeated identical health check log messages
- mypy at last passing, #30
- adding trove classifiers, #29
- add `StopJob` exception for cleanly ending jobs, #21
- add `flushdb` to `MockRedis`, #23
- allow configurable length job logging via `log_curtail` on `Worker`, #28
- add `shadow_kwargs` method to `BaseWorker` to make customising actors easier.
- reimplement worker reuse as it turned out to be useful in tests.
- use `gather` rather than `wait` for startup and shutdown so exceptions propagate.
- add `--check` option to confirm arq worker is running.
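The `gather` versus `wait` point matters because `asyncio.wait` parks exceptions on the returned task objects, while `gather` re-raises them at the call site. A minimal demonstration with plain asyncio (not arq code):

```python
import asyncio

async def failing_startup():
    raise RuntimeError("boom")

async def main() -> str:
    # with asyncio.wait the exception would sit unnoticed on the task;
    # gather propagates it straight to the caller
    try:
        await asyncio.gather(failing_startup())
    except RuntimeError as exc:
        return f"caught: {exc}"
    return "not caught"
```

This is why a failure during worker startup or shutdown surfaces immediately instead of being silently swallowed.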
- fix issue with `Concurrent` class binding with multiple actor instances.
- improve naming of log handlers and formatters
- upgrade numerous packages, nothing significant
- add `startup` and `shutdown` methods to actors
- switch `@concurrent` to return a `Concurrent` instance so the direct method is accessible via `<func>.direct`
- improved solution for preventing new jobs starting when the worker is about to stop
- switch from `SIGRTMIN` to `SIGUSR1` to work with mac
- fix main process signal handling so the worker shuts down when just the main process receives a signal
- re-enqueue un-started jobs popped from the queue if the worker is about to exit
- rename settings class to `RedisSettings` and simplify significantly
- add `concurrency_enabled` argument to aid in testing
- fix conflict with `unittest.mock`
- prevent logs disabling other logs
- first proper release