Hello!
We have run into some difficulties with an async Sentinel connection.
In one module we create a global Redis connection like this:
```python
from redis.retry import Retry
from redis.backoff import ExponentialBackoff
from redis.asyncio.sentinel import Sentinel
from redis.asyncio import Redis, BlockingConnectionPool
import redis.exceptions as redis_exceptions

from config.vars import app_vars

# NB: constructed here but never passed to master_for() below
retry = Retry(ExponentialBackoff(), 3)

sentinel = Sentinel(
    sentinels=[
        (
            host[0],
            int(app_vars['REDIS_SENTINEL_PORT']) if app_vars.get('REDIS_SENTINEL_PORT') else 26379
        )
        for host in app_vars['REDIS_HOST']
    ],
    # NB: REDIS_SOCKET_TIMEOUT is assumed to be defined elsewhere in this module
    socket_timeout=REDIS_SOCKET_TIMEOUT
)

redis_client = sentinel.master_for(
    app_vars['REDIS_SERVICE_NAME'],
    socket_timeout=REDIS_SOCKET_TIMEOUT,
    username=app_vars['REDIS_USERNAME'],
    password=app_vars['REDIS_PASSWORD'],
    db=app_vars['REDIS_DB'],
    retry_on_error=[
        redis_exceptions.BusyLoadingError,
        redis_exceptions.ConnectionError,
        redis_exceptions.TimeoutError,
        redis_exceptions.ReadOnlyError,
        redis_exceptions.MasterDownError
    ],
    redis_class=Redis,
    retry_on_timeout=True,
    socket_keepalive=True
)
```
After importing this connection into other modules and simply calling `await redis_client.set(...)`, we exceed Redis's maximum number of clients (too many open connections).
If we instead use `async with redis_client`, we quite often get an error saying the client is already closed.
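To illustrate the second symptom, here is a minimal self-contained sketch using a stub in place of the real client (the names are illustrative, not redis-py's API). The stub's async context manager closes the client on exit, which reproduces the "already closed" behavior when the same module-level client is reused:

```python
import asyncio

class StubClient:
    """Illustrative stand-in for the shared client: its async context
    manager closes the connection on exit."""
    def __init__(self):
        self.closed = False

    async def __aenter__(self):
        if self.closed:
            raise RuntimeError("client is already closed")
        return self

    async def __aexit__(self, *exc):
        self.closed = True  # exiting the context closes the client

    async def set(self, key, value):
        if self.closed:
            raise RuntimeError("client is already closed")

async def main():
    client = StubClient()  # imagine this is the module-level redis_client
    async with client as c:
        await c.set("k", "v")  # first use works
    try:
        async with client as c:  # second use of the same global client
            await c.set("k", "v")
    except RuntimeError as e:
        return str(e)

print(asyncio.run(main()))  # prints: client is already closed
```

If real `async with` on the shared client behaves this way, any later use of the same global object would fail exactly as described.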
What are we doing wrong, and how can we fix this?
It is worth adding that one of the modules spawns many asynchronous tasks, each of which accesses Redis, and we wait for all of them via asyncio.gather.
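To make that failure mode concrete, here is a minimal sketch (with a stub in place of the real connection pool; all names are illustrative). When N tasks each issue a command and are awaited together with asyncio.gather, all N can hold a connection at the same moment, so the connection count can grow to N:

```python
import asyncio

class StubPool:
    """Stand-in for a connection pool (illustrative, not redis-py's API):
    tracks how many "connections" are checked out at once."""
    def __init__(self):
        self.in_use = 0
        self.peak = 0

    async def execute_command(self):
        # each concurrent caller holds a connection for the duration
        self.in_use += 1
        self.peak = max(self.peak, self.in_use)
        await asyncio.sleep(0.1)  # pretend the Redis round-trip takes a moment
        self.in_use -= 1

async def main():
    pool = StubPool()
    # the pattern from the description: many tasks, awaited together
    await asyncio.gather(*(pool.execute_command() for _ in range(100)))
    return pool.peak

print(asyncio.run(main()))  # prints 100: every task held a connection at once
```

If the real pool is unbounded, this would be consistent with hitting the server's maxclients limit once the number of concurrent tasks exceeds it.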