Commit d782226

[doc] Add a pitch on the measurement of the performance of Sirius Web based applications
Signed-off-by: Stéphane Bégaudeau <stephane.begaudeau@obeo.fr>
1 parent: 71aeff1

File tree: 2 files changed (+68, -1 lines)


CHANGELOG.adoc

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 
 === Pitches
 
-
+- Measure the performance of Sirius Web based applications
 
 
 === Architectural decision records
Lines changed: 67 additions & 0 deletions
@@ -0,0 +1,67 @@
:author: Stéphane Bégaudeau
:date: 2026-03-25
:status: proposed
:consulted: Pierre-Charles David
:informed: Florian Rouëné
:deciders: Stéphane Bégaudeau
:issue: https://github.com/eclipse-sirius/sirius-web/issues/6326

= (XS) Measure the performance of Sirius Web based applications
== Problem

We want to be able to measure the performance of Sirius Web based applications.
== Key Result

We should have a tool and a set of patterns for performance testing.
It should allow us to measure the performance of the GraphQL HTTP API, the REST API and the WebSocket API, simultaneously in a single test if necessary.
It should provide us with clear reports allowing us to track the performance of various use cases over time.
We should be able to keep some history of those performance reports.
This history could be maintained manually in another document for now.

The solution must be applicable to Sirius Web and downstream projects.
It should be possible to reuse fragments of performance tests from Sirius Web in downstream applications.
Those fragments should share the same philosophy as our reusable test services, such as the query runners, with the minimal amount of coupling necessary.
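
As an illustration of the kind of report line we have in mind, here is a minimal, hypothetical Java sketch (none of these names exist in Sirius Web; the class and its output format are purely illustrative) that times an arbitrary operation and produces a one-line summary that could be copied into a manually maintained history document:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch, not part of Sirius Web: times an operation and
// emits a one-line "label: min/avg/max ms" summary suitable for a
// manually maintained performance history document.
public class LatencyReport {

    /** Runs the operation {@code iterations} times and returns "label: min/avg/max ms". */
    public static String summarize(String label, Runnable operation, int iterations) {
        List<Long> samples = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            operation.run();
            samples.add((System.nanoTime() - start) / 1_000_000);
        }
        long min = Collections.min(samples);
        long max = Collections.max(samples);
        long avg = samples.stream().mapToLong(Long::longValue).sum() / iterations;
        return label + ": " + min + "/" + avg + "/" + max + " ms";
    }

    public static void main(String[] args) {
        // In a real test the operation would be a GraphQL, REST or WebSocket call.
        System.out.println(summarize("noop", () -> {}, 5));
    }
}
```

A real tool would add percentiles and export; the point here is only the shape of a report that stays readable when tracked over time.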
=== Acceptance Criteria

The performance test suites should be executable locally against a development instance.
Such an instance should be able to have the frontend and backend running on two different ports (e.g. http://localhost:5173 and http://localhost:8080).
They should also be able to run against a production instance where everything is packed into one executable.

The performance test suites should also run during our continuous integration process.
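
To make the same suites runnable against both kinds of instances, the target URLs could simply be parameterized. A minimal sketch, assuming made-up system property names (`sirius.perf.frontend` and `sirius.perf.backend` are not existing Sirius Web properties):

```java
// Hypothetical sketch: resolve the target URLs so one suite can run against
// a development instance (frontend and backend on separate ports) or a
// production instance (single executable). The property names are made up.
public class TargetUrls {

    public static String frontendUrl() {
        return System.getProperty("sirius.perf.frontend", "http://localhost:5173");
    }

    public static String backendUrl() {
        return System.getProperty("sirius.perf.backend", "http://localhost:8080");
    }

    public static void main(String[] args) {
        System.out.println("frontend: " + frontendUrl());
        System.out.println("backend: " + backendUrl());
    }
}
```

In CI, the same suite would be pointed at the packed instance by setting both properties to the single base URL.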
== Solution

We will create a set of performance test suites based on reusable fragments that can be composed to write complex tests.
This solution should also help initialize the state of the application with data that matches real usage patterns of Sirius Web.
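
The fragment idea above can be sketched as follows; this is a hypothetical illustration (the `PerfScenario` and `Fragment` names do not exist in Sirius Web), showing how named steps could be composed into a scenario with one timing line per step:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "reusable fragment" idea: each fragment wraps
// one named step (e.g. create a project, run a GraphQL query) and complex
// scenarios are built by composing fragments. Illustrative names only.
public class PerfScenario {

    public record Fragment(String name, Runnable step) {}

    private final List<Fragment> fragments = new ArrayList<>();

    public PerfScenario then(Fragment fragment) {
        this.fragments.add(fragment);
        return this;
    }

    /** Executes every fragment in order and returns one timing line per fragment. */
    public List<String> run() {
        List<String> report = new ArrayList<>();
        for (Fragment fragment : this.fragments) {
            long start = System.nanoTime();
            fragment.step().run();
            report.add(fragment.name() + ": " + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
        return report;
    }

    public static void main(String[] args) {
        List<String> report = new PerfScenario()
                .then(new Fragment("createProject", () -> {}))
                .then(new Fragment("openDiagram", () -> {}))
                .run();
        report.forEach(System.out::println);
    }
}
```

A downstream application would reuse fragments contributed by Sirius Web (e.g. project creation) and compose them with its own domain-specific steps, mirroring the low-coupling approach of our query runners.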
=== Scenario

Sirius Web contributors should be able to write new performance tests and run existing ones.
Working with those performance tests should be straightforward and documented.
=== Breadboarding

No user interface needed.

=== Cutting backs

We should be able to see a code coverage report of the application after the execution of the performance test suites, to identify the parts of the application that have not been covered by the tests.
== Rabbit holes

We may identify some performance issues during our tests; we may not solve them, but we will track them using GitHub issues.

== No-gos

None
