🎨 web-server: accelerate input:match via caching rest client call
#7802
Conversation
matusdrobuliak66
left a comment
Thanks 👍
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@ Coverage Diff @@
## master #7802 +/- ##
==========================================
- Coverage 86.72% 82.56% -4.16%
==========================================
Files 1850 703 -1147
Lines 71853 33555 -38298
Branches 1215 176 -1039
==========================================
- Hits 62314 27706 -34608
+ Misses 9198 5791 -3407
+ Partials 341 58 -283
Continue to review full report in Codecov by Sentry.
services/web/server/src/simcore_service_webserver/catalog/_catalog_rest_client_service.py
mrnicegyu11
left a comment
Very good optimization. I left a pedantic comment, feel free to ignore it.
sanderegg
left a comment
Just do not rely on sticky connections. I plan to remove as much of that as possible, since it goes against scaling practices.
Thanks anyway!!
Force-pushed from 930861d to 3c8d05e
@mergify queue
🟠 Waiting for conditions to match
@sanderegg I will leave a note.
wvangeit
left a comment
Great, thanks.



What do these changes do?
Problem
The frontend is currently making frequent calls to the endpoint service/{key}/version/{version}/input:match to validate input matches. This results in a high number of repeated requests, which seem unnecessary and introduce extra load and latency. @odeimaiz is investigating ways to reduce the number of these calls on the frontend.
Implemented Solution 🎨
To mitigate the backend impact in the meantime, we propose adding an in-memory TTL cache to the catalog's REST client. This cache will store recent results from input:match lookups, significantly improving response times for repeated requests. Given that connections are sticky, this approach is sufficient to handle short-term bursts of traffic, especially those occurring within a ~1-minute window.
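A minimal sketch of the idea, assuming the catalog REST client exposes an async call behind input:match. The function name, the aiocache-based decorator, and the 60-second TTL shown here are illustrative assumptions, not necessarily what this PR ships:

```python
# Sketch only: an in-memory TTL cache in front of the catalog REST client call
# that backs input:match. Names and the TTL value are assumptions.
from aiocache import cached


def _inputs_match_cache_key(fct, app, service_key: str, service_version: str, **query) -> str:
    # Build the key from the request parameters only; the aiohttp `app` object
    # must not end up in the cache key.
    return f"{fct.__name__}/{service_key}/{service_version}/{sorted(query.items())}"


@cached(ttl=60, key_builder=_inputs_match_cache_key)  # ~1-minute window
async def get_service_inputs_match(app, service_key: str, service_version: str, **query):
    # Forward the request to the catalog service as before; identical requests
    # arriving within the TTL are answered from memory instead.
    ...
```

Because the cache lives in the web-server process, it only helps when repeated requests land on the same replica, which is why the description leans on sticky connections (and why the review above asks not to rely on them long term).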
Related issue/s
jsonifier service which has 12+ inputs
How to test
Use jsonifier 1.1.0 (with 10+ inputs) and connect it with a number parameter (single output).
POST service/{key}/version/{version}/input:match
Before
After
Dev-ops