Wakapi database performance benchmark #832
muety started this conversation in Show and tell
Replies: 1 comment
Did a few quick tests with ClickHouse and its performance is impressive. The following query took ~23 s on MySQL and ~0.5 s on ClickHouse:

```sql
select user_id, language, (sum(duration) / 1000 / 1000 / 1000 / 60) as total
from durations
where time between '2025-01-01 00:00:00' and '2025-04-30 00:00:00'
group by user_id, language
order by total desc
```

Would love to implement ClickHouse support and potentially even outsource heartbeat and duration aggregation entirely to the database, instead of computing those programmatically on the Go side. However, the driver for GORM still appears to be somewhat flaky, e.g. see go-gorm/clickhouse#13 (comment).
Setup
I ran some quick load tests against Wakapi with different databases, namely MySQL, Postgres and SQLite, using `loadtest.sh`. These experiments are not "scientifically" accurate, but only serve to give a rough indication of how well Wakapi runs with different db backends.

The load tests are designed to simulate a number of concurrent clients (`c = 10`) firing a number of HTTP `GET` requests (`m = n / c = 500`) against the `/summaries` API endpoint to retrieve non-cached (`recompute=true`) summaries for random time intervals.

I used an anonymized version of the production database for the tests, which contains 48m rows in the `heartbeats` table and is around 30 GB in size.

Tests were executed locally on the same machine (Ryzen 5 3600, 32 GB RAM) with the latest build of Wakapi and, in the case of MySQL and Postgres, against a freshly started database server (in Docker) with default config.
Results
Summary (response time)

[Response-time charts for MySQL, Postgres and SQLite omitted.]
Discussion
Judging from these very ad-hoc tests, Wakapi runs pretty much equally well on MySQL and Postgres, which might suggest that the database is not the bottleneck in the test scenarios. On SQLite, performance is much worse, probably mainly because queries can't really run concurrently (the CPU is barely utilized at all).
However, please note that the test simulates quite extreme conditions (very large database and large number of requests per second). For real-world scenarios and moderately sized databases, SQLite is still a totally suitable choice.
Please also note that the test conditions can definitely be tweaked to provide a more representative and fair test environment than my current quick-and-dirty setup.