Inconsistent / Delayed Responses from Amazon SP-API Batch Endpoints at Scale #5122
Unanswered
ihtishamtanveer asked this question in Q&A
Replies: 1 comment
Are you seeing delayed responses from all three endpoints or from a specific one? Please note that the Catalog Items API has multiple rate limits: one on the selling partner account and application pair, and one on the application that calls the Selling Partner API on behalf of the selling partner account. Requests are rate-limited by whichever threshold you reach first, so make sure you are accounting for both rate limits. Thanks.
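A minimal sketch of what accounting for both limits can look like, assuming one token bucket per limit (Python; the rates shown are placeholders rather than the documented values, and throttled_call / call_fn are illustrative names):

```python
import threading
import time

class TokenBucket:
    """Refill at `rate` tokens per second, up to `burst` tokens."""
    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self) -> None:
        """Block until one token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)

# Placeholder rates -- substitute the documented limits for the operation you call.
per_account_bucket = TokenBucket(rate=2.0, burst=2)  # selling partner account / application pair
per_app_bucket = TokenBucket(rate=5.0, burst=5)      # application-wide, across all accounts

def throttled_call(call_fn, *args, **kwargs):
    # A request must clear both buckets; whichever one empties first delays the call.
    per_account_bucket.acquire()
    per_app_bucket.acquire()
    return call_fn(*args, **kwargs)
```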
-
I’m experiencing inconsistent and significantly delayed responses when using Amazon SP-API batch endpoints at scale. The issue becomes noticeable when processing large datasets (40–50k items) across multiple users/accounts.
Use Case
I’ve built a catalog ingestion and enrichment service where:
Vendor catalog files (CSV/XLSX) are processed
Items are enriched using Amazon SP-API
The system runs concurrently for multiple users
Data volume per run is ~40–50k items
The system works fine for small datasets and single users, but once concurrency and volume increase, API behavior becomes unstable.
APIs Involved
searchCatalogItems
getItemOffersBatch
getFeesEstimateBatch / feesListingBatch
Observed Issues
Batch calls start taking significantly longer than expected
Responses are inconsistent (some batches succeed, others are delayed or fail)
Latency increases progressively during long-running jobs
Even when staying within documented rate limits, behavior degrades at scale
Architecture Notes
Calls are distributed via SQS queues to manage throughput
Rate limits and backoff strategies are implemented (a simplified retry sketch follows this list)
Requests are spread across multiple seller accounts / users
Batch sizes are kept within documented limits
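To make the backoff note concrete, the retry logic is roughly along these lines (simplified sketch; send_batch stands in for a single SP-API batch request that returns an object with status_code and headers, and the constants are illustrative):

```python
import random
import time

MAX_ATTEMPTS = 6  # illustrative cap

def call_with_backoff(send_batch, payload):
    for attempt in range(MAX_ATTEMPTS):
        response = send_batch(payload)

        if response.status_code == 429:
            # SP-API signals throttling with HTTP 429 (QuotaExceeded). When present,
            # the x-amzn-RateLimit-Limit response header reports the limit that was
            # applied, which helps identify which of the two limits was hit.
            applied_limit = response.headers.get("x-amzn-RateLimit-Limit")
            delay = random.uniform(0, min(60, 2 ** attempt))  # full-jitter backoff, capped at 60s
            print(f"Throttled (applied limit: {applied_limit}); retrying in {delay:.1f}s")
            time.sleep(delay)
            continue

        if response.status_code >= 500:
            # Retry transient server-side errors with plain exponential backoff.
            time.sleep(min(60, 2 ** attempt))
            continue

        return response

    raise RuntimeError("Batch request still failing after retries")
```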
Questions
Are there undocumented throttling rules or internal limits for batch endpoints when used at high volume or concurrency?
Is there a known issue or best practice for running batch APIs across multiple sellers simultaneously?
Are batch APIs internally queued or deprioritized compared to single-item endpoints?
Has anyone observed similar latency or inconsistency when processing tens of thousands of items?
Any recommendations on:
Optimal batch sizes (a sketch of the current chunking approach follows this list)
Preferred retry strategy
Whether single-item calls are more reliable at scale than batch APIs
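For reference on batch sizing, items are currently chunked roughly like this (simplified; BATCH_SIZE = 20 reflects my reading of the documented cap for getItemOffersBatch, and send_batch is a stand-in for the real calls, wrapped in the rate limiting and retries described above):

```python
from typing import Iterator, List

BATCH_SIZE = 20  # assumed documented maximum for getItemOffersBatch; verify per operation

def chunked(items: List[dict], size: int = BATCH_SIZE) -> Iterator[List[dict]]:
    """Yield fixed-size slices of `items`, one per batch request."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def enrich_all(items: List[dict], send_batch) -> List[dict]:
    results = []
    for batch in chunked(items):
        results.extend(send_batch(batch))  # e.g. getItemOffersBatch or a fees-estimate batch
    return results
```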
Goal
Looking to understand whether this is expected SP-API behavior at this scale, a known limitation of the batch endpoints, or something I should fix in my own architecture.
Any insights, official guidance, or real-world experience would be greatly appreciated.