How should I handle aiobotocore S3 client instances when I have to read from multiple buckets from an opt-in region? #1215
Unanswered
adarmiento
asked this question in Q&A
Replies: 1 comment 2 replies
-
I'm imagining something like this:

```python
import contextlib

import aiobotocore.session


class S3ClientCache:
    """Lazily creates one S3 client per region; all are closed together on exit."""

    def __init__(self, session: aiobotocore.session.AioSession):
        self._session = session
        self._cache = {}
        self._exit_stack = contextlib.AsyncExitStack()

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self._exit_stack.__aexit__(exc_type, exc_val, exc_tb)

    async def get_client(self, region_name: str):
        if client := self._cache.get(region_name):
            return client
        # enter_async_context keeps the client open until the stack exits
        self._cache[region_name] = await self._exit_stack.enter_async_context(
            self._session.create_client('s3', region_name=region_name)
        )
        return self._cache[region_name]
```
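A possible usage sketch (not from the thread): the cache above exercised end to end, with a stand-in session so it runs without AWS credentials. `FakeSession`, `FakeClientContext`, and the region names are all hypothetical stand-ins for the real aiobotocore objects.

```python
import asyncio
import contextlib


class S3ClientCache:
    """Same cache as above: one client per region, closed together on exit."""

    def __init__(self, session):
        self._session = session
        self._cache = {}
        self._exit_stack = contextlib.AsyncExitStack()

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self._exit_stack.__aexit__(exc_type, exc_val, exc_tb)

    async def get_client(self, region_name: str):
        if client := self._cache.get(region_name):
            return client
        self._cache[region_name] = await self._exit_stack.enter_async_context(
            self._session.create_client('s3', region_name=region_name)
        )
        return self._cache[region_name]


# --- Fake stand-ins so the sketch runs without AWS ---
class FakeClient:
    def __init__(self, region):
        self.region = region
        self.closed = False


class FakeClientContext:
    """Mimics the async context manager returned by create_client()."""

    def __init__(self, region):
        self._client = FakeClient(region)

    async def __aenter__(self):
        return self._client

    async def __aexit__(self, *exc):
        self._client.closed = True


class FakeSession:
    def create_client(self, service, region_name):
        return FakeClientContext(region_name)


async def main():
    async with S3ClientCache(FakeSession()) as cache:
        a = await cache.get_client('eu-south-1')
        b = await cache.get_client('eu-south-1')
        c = await cache.get_client('af-south-1')
        assert a is b          # same region -> cached client is reused
        assert a is not c      # different region -> distinct client
    assert a.closed and c.closed  # exit stack closed every cached client
    return a, c


clients = asyncio.run(main())
```

One design note: because every client lives on the same `AsyncExitStack`, clients stay open for the cache's whole lifetime; an LRU with per-client close would be needed if the region set is unbounded.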
-
Hello
My service reads files from S3 via an aiobotocore S3 client.
This works most of the time; however, if I deploy my service in an opt-in region, the internal cross-region routing will not work, and I instead have to create my client with the explicit region of the bucket I want to read from.
For example:
```python
async with session.create_client('s3', region_name=region) as s3_client:
    ...
```
My problem is that my application does not know in advance where it is reading from: I receive requests containing a bucket name and an object key, and I do the rest. I may have to read from N buckets in potentially N different regions.
I wanted to create a simple cache of S3Client instances and use the right one when I need it, but S3Client is a short-lived object that I can only create in an `async with` block, so my idea of keeping a long-lived cache of clients cannot work.
Is there a proper pattern or strategy to follow in this case? Thank you!
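One common approach (my assumption, not confirmed in this thread) is to first resolve each bucket to its region, then pick the matching client: S3's `HeadBucket` response carries the bucket's region in its `x-amz-bucket-region` HTTP header. Sketched below against a fake client so it runs offline; `resolve_bucket_region`, `FakeS3Client`, and the bucket names are hypothetical.

```python
import asyncio


# Hypothetical helper: ask S3 where a bucket lives via HeadBucket.
# Real S3 returns the region in the 'x-amz-bucket-region' response header.
async def resolve_bucket_region(s3_client, bucket: str) -> str:
    resp = await s3_client.head_bucket(Bucket=bucket)
    return resp['ResponseMetadata']['HTTPHeaders']['x-amz-bucket-region']


# Fake stand-in for an aiobotocore S3 client so the sketch runs without AWS.
# The bucket -> region mapping here is invented.
class FakeS3Client:
    _regions = {'logs-bucket': 'eu-south-1', 'data-bucket': 'us-east-1'}

    async def head_bucket(self, Bucket):
        return {'ResponseMetadata': {'HTTPHeaders': {
            'x-amz-bucket-region': self._regions[Bucket]}}}


async def main():
    default_client = FakeS3Client()  # in practice: a client in your home region
    return await resolve_bucket_region(default_client, 'logs-bucket')


region = asyncio.run(main())
print(region)  # eu-south-1
```

Caveat: against real S3, `head_bucket` from the wrong region can raise a `ClientError`, but the `x-amz-bucket-region` header is still present in the error's response metadata, so the lookup remains possible with a try/except.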