🎨 Make POSTGRES_MINSIZE and POSTGRES_MAXSIZE configurable #8199
Conversation
Codecov Report
❌ Patch coverage is …
Additional details and impacted files:
@@ Coverage Diff @@
## master #8199 +/- ##
==========================================
+ Coverage 79.26% 87.85% +8.59%
==========================================
Files 1855 1910 +55
Lines 70991 73418 +2427
Branches 1301 1301
==========================================
+ Hits 56272 64504 +8232
+ Misses 14332 8527 -5805
Partials 387 387
Continue to review full report in Codecov by Sentry.
thanks
Are there any sources beyond AI that support these changes? Could you add them to the description?
It would also be very helpful to have a brief description of the changed parameters (what exactly these settings mean).
Thanks
This was validated "experimentally", by checking whether raising the minimum number of connections helps. Apart from this, an AI investigation actually pointed me towards this finding. I was initially thinking that we hit the max limit of pooled connections, but we have Prometheus metrics that prove this isn't the case. By removing CPU limitations I checked the effect of CPU throttling on Postgres, but it did not lead to a speedup. At some point I elevated the min-connections and this helped.
🧪 CI Insights
Here's what we observed from your CI run for 01cf113.
❌ Failed Jobs
✅ Passed Jobs With Interesting Signals
What is the Y axis on the image? The integer numbers do not look like seconds or anything similar.
@YuryHrytsuk it is seconds
Thanks 🙏
Multiple things here:
- This PR changes the minimum number of connections to Postgres from 1 to 5 in all the services that use Postgres. For example the webserver uses many connections, and probably the director-v2 does too, but most of the other services (such as every dynamic sidecar) use maybe 1, 2 or 3. Was this thought about?
- Why change the setting's default value instead of using the ENV variable to fix only master for testing?
- Using the ENVs we can even change only the minimum per service (see the sketch after this list).
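As a purely illustrative sketch (assumed class, defaults, and bounds; not the actual simcore settings code), env-driven pool-size settings along these lines would let each service override only its own minimum:

```python
from pydantic import Field, model_validator
from pydantic_settings import BaseSettings


class PostgresPoolSettings(BaseSettings):
    # Names match the env-vars discussed in this PR; defaults and bounds
    # here are assumptions for illustration only.
    POSTGRES_MINSIZE: int = Field(default=2, ge=1, description="minimum pooled connections")
    POSTGRES_MAXSIZE: int = Field(default=50, ge=1, description="maximum pooled connections")

    @model_validator(mode="after")
    def _min_not_above_max(self) -> "PostgresPoolSettings":
        if self.POSTGRES_MINSIZE > self.POSTGRES_MAXSIZE:
            raise ValueError("POSTGRES_MINSIZE must be <= POSTGRES_MAXSIZE")
        return self


# A dynamic sidecar could run with POSTGRES_MINSIZE=1 in its environment,
# while the webserver keeps a larger pool, without touching code defaults.
settings = PostgresPoolSettings()
```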
.../resource-usage-tracker/tests/unit/with_dbs/test_background_task_periodic_heartbeat_check.py (outdated review thread, resolved)
@mrnicegyu11 just click the re-review button next time.
Thank you very much for the extensive testing, this is very interesting.
Here is the link to the official SQLAlchemy docs: https://docs.sqlalchemy.org/en/20/dialects/postgresql.html#disabling-the-postgresql-jit-to-improve-enum-datatype-handling (a minimal sketch of that pattern follows right after this comment).
Nevertheless:
- From your graph I see that setting the min size to 2 already brings all the performance up. I don't really see the difference then between `jit` `on` or `off`? Maybe you can explain tomorrow?
- I would suggest that you add a `NOTE` next to the postgres settings change that links to at least the SQLAlchemy page I posted above, to document why it is now hard-coded to a minimum of 2.
- Also you should probably change the `ge` on MIN_SIZE and MAX_SIZE to 2 then.
- So, when you increased the number of connections, I guess that since this JIT thing happens on a new connection, it was already set up and that is why it is faster?
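For reference, the SQLAlchemy page linked above describes turning the PostgreSQL JIT off through the asyncpg dialect's server settings. A minimal sketch of that pattern (the DSN is a placeholder, not a value from this PR):

```python
from sqlalchemy.ext.asyncio import create_async_engine

# Disable the PostgreSQL JIT on every connection opened through the asyncpg
# dialect, following the SQLAlchemy documentation linked above.
engine = create_async_engine(
    "postgresql+asyncpg://user:password@postgres:5432/simcoredb",  # placeholder DSN
    connect_args={"server_settings": {"jit": "off"}},
)
```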
Very interesting stuff! However, generally it is best practice to use …
Thanks for the great effort! Nice findings!
Thanks, interesting






What do these changes do?
- Exposes the env-vars `POSTGRES_MINSIZE` and `POSTGRES_MAXSIZE`. This can be used to elevate e.g. `POSTGRES_MINSIZE`, which leads to drastic performance increases on osparc-master.speag.com (a sketch of how these values typically reach the connection pool follows below).
- ~~Elevates~~ Reverts back the default value of `POSTGRES_MINSIZE` from `1` to `2` (see long comment below). `5` is the actual default in asyncpg. It was overwritten to `1` in the past. Now the value can be changed via env-vars.
- `jit` compilation is disabled (see below for why).
- ~~BONUS: Hostnames now clearly identify services (also in postgres/adminer, per client)~~ removed

Related issue/s
- https://www.perplexity.ai/page/postgres-query-slowness-invest-7bOGFHGrTEWSoOYv4FcS_g
- Ops-PR: https://git.speag.com/oSparc/osparc-ops-deployment-configuration/-/merge_requests/1531
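For context, a hedged sketch of how such env-vars are typically consumed (an assumption about usage, not code from this PR): the values usually end up as the `min_size` / `max_size` arguments of the underlying asyncpg pool.

```python
import os

import asyncpg


async def make_pool() -> asyncpg.Pool:
    # Illustrative only: the DSN is a placeholder and the fallback defaults
    # are assumptions; POSTGRES_MINSIZE / POSTGRES_MAXSIZE are the env-vars
    # exposed by this PR.
    return await asyncpg.create_pool(
        dsn="postgresql://user:password@postgres:5432/simcoredb",
        min_size=int(os.environ.get("POSTGRES_MINSIZE", "2")),
        max_size=int(os.environ.get("POSTGRES_MAXSIZE", "50")),
    )
```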
How to test
Dev-ops