Quarkus Community Call - 2026-02-17 #52678
cescoffier started this conversation in Design Discussions
-
Small clarification. The Jenkins and Horreum automation has been in place for a while, and it's very valuable – it's the mechanism by which we run all our performance benchmarks in the lab and capture the results forever. What Eric and I presented is a new layer of automation on top: it uses GitHub scripts to export perf lab run results to the public internet and then auto-generate charts in the Quarkus house style.
-
Hello,
Last Tuesday (17th of February, 2026), we had our public Quarkus community call.
Here is a short summary. You can find the complete minutes (including the recording) on https://docs.google.com/document/d/1TgFZsuOQo9qZ4CnQII5LHhQVgMC6YsVMs1UJIAJooyM/edit?usp=sharing.
Recap:
Guillaume and Georgios kicked things off with an impressive update on JVM Ahead-of-Time (AOT) support, which requires Java 25 and uses an AOT cache file. This reduces application startup time to 80-90ms for simple REST applications, effectively closing the startup-time gap between JVM and Native modes. The feature is targeted for Quarkus 3.32, though Guillaume stressed that users must weigh the trade-offs (image size and potential architectural constraints).
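For readers unfamiliar with the underlying JVM mechanism, the AOT cache is the one introduced by JEP 483 in the JDK. The commands below are a rough sketch of the manual three-step flow at the `java` level; the jar path and file names (`app.aotconf`, `app.aot`) are placeholders, and the actual Quarkus 3.32 integration may wrap or simplify these steps:

```shell
# 1. Training run: record which classes the app loads and links
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -jar target/quarkus-app/quarkus-run.jar

# 2. Create the AOT cache from the recorded configuration
#    (the application itself is not executed in this mode)
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -jar target/quarkus-app/quarkus-run.jar

# 3. Production run: start with the cache for faster startup
java -XX:AOTCache=app.aot -jar target/quarkus-app/quarkus-run.jar
```

The cache is CPU-architecture and JDK-version specific, which is one source of the "architectural constraints" mentioned above.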
Holly and Eric presented their major initiative to overhaul the Quarkus performance benchmarks. Recognizing that our website data was outdated and notably missing throughput metrics, the team open-sourced the spring-quarkus-perf-comparison repository. Initial lab results are incredibly strong: Quarkus boot times are roughly half of Spring Boot 4's, and our throughput is approximately 2.5x better. This process is now fully automated via Jenkins and Horreum, capturing deep system metrics and auto-generating charts.
The discussion closed on how to position our performance story. While Native compilation remains a powerful tool, it does impose throughput and build-time penalties. The consensus was that our new benchmarking data will enable us to tell richer, more nuanced stories about cost and performance, helping users make informed decisions among the various modes supported by Quarkus: JVM, AOT, and Native.