:author: Stéphane Bégaudeau
:date: 2026-03-25
:status: proposed
:consulted: Pierre-Charles David
:informed: Florian Rouëné
:deciders: Stéphane Bégaudeau
:issue: https://github.com/eclipse-sirius/sirius-web/issues/6326

= (XS) Measure the performance of Sirius Web based applications

== Problem

We want to be able to measure the performance of Sirius Web based applications.
We need to measure and track changes in the performance of our application over time in order to detect potential regressions, identify major performance issues, and track the progress of any future performance improvements.

We don't know yet which kinds of performance issues are the most important or which ones we want to focus on.
As such, the performance test suites should help us evaluate various aspects of the performance of the application, such as:

- latency
- throughput
- network usage
- CPU usage
- memory usage
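
As an illustration of how latency samples could be aggregated into a report, here is a minimal sketch; the `LatencyRecorder` class is hypothetical and not an existing Sirius Web API, it only shows the kind of percentile summary such a suite could produce:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Hypothetical sketch: collects per-request durations and reports
 * percentiles for a performance report (nearest-rank method).
 */
public class LatencyRecorder {
    private final List<Long> samplesMs = new ArrayList<>();

    public void record(long durationMs) {
        this.samplesMs.add(durationMs);
    }

    /** Returns the given percentile (0-100) over the recorded samples. */
    public long percentile(double p) {
        List<Long> sorted = new ArrayList<>(this.samplesMs);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.size());
        return sorted.get(Math.max(rank - 1, 0));
    }

    public static void main(String[] args) {
        LatencyRecorder recorder = new LatencyRecorder();
        // Simulated samples: 1ms to 100ms
        for (long ms = 1; ms <= 100; ms++) {
            recorder.record(ms);
        }
        System.out.println("p50=" + recorder.percentile(50)); // p50=50
        System.out.println("p95=" + recorder.percentile(95)); // p95=95
        System.out.println("p99=" + recorder.percentile(99)); // p99=99
    }
}
```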


== Key Result

We should have a tool and some patterns to perform performance testing.
It should allow us to measure the performance of the GraphQL HTTP API, the REST API and the WebSocket API simultaneously in a single test if necessary.
It should provide us with clear reports allowing us to track the performance of various use cases over time.
We should keep a history of those performance reports to track the evolution of the performance of our application over time and to identify potential regressions.
This history could be maintained manually in another document for now.
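
To give an idea of how such measurements could be captured, here is a minimal timing harness sketch; the `measureMs` helper is hypothetical, and a real suite would more likely rely on a dedicated load-testing tool:

```java
import java.util.function.Supplier;

/**
 * Hypothetical sketch: time a single API call in milliseconds.
 * A real test would pass a lambda performing a GraphQL, REST,
 * or WebSocket interaction instead of the simulated work below.
 */
public class Timing {

    static <T> long measureMs(Supplier<T> call) {
        long start = System.nanoTime();
        call.get();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = measureMs(() -> {
            try {
                Thread.sleep(50); // stand-in for a real request
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return null;
        });
        System.out.println("elapsed: " + elapsed + "ms");
    }
}
```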
The solution must be applicable to Sirius Web and to downstream projects.
It should be possible to reuse fragments of performance tests from Sirius Web in downstream applications.
Those fragments should follow the same philosophy as our reusable test services, such as the query runners, with the minimal amount of coupling necessary.
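
The kind of composition this implies could look like the following sketch; the `Step` interface, the shared context, and the fragment names are all hypothetical, chosen only to illustrate the pattern:

```java
import java.util.List;

/**
 * Hypothetical sketch of composable performance test fragments,
 * following a philosophy similar to the reusable query runners.
 */
public class ScenarioSketch {

    /** One reusable step: receives a shared context, performs one interaction. */
    @FunctionalInterface
    interface Step {
        void run(StringBuilder context);
    }

    /** A scenario is just an ordered composition of steps. */
    static void runScenario(List<Step> steps, StringBuilder context) {
        for (Step step : steps) {
            step.run(context);
        }
    }

    public static void main(String[] args) {
        // Downstream applications could reuse createProject/openEditor-style
        // fragments from Sirius Web and append their own domain-specific steps.
        Step createProject = ctx -> ctx.append("createProject;");
        Step openEditor = ctx -> ctx.append("openEditor;");

        StringBuilder context = new StringBuilder();
        runScenario(List.of(createProject, openEditor), context);
        System.out.println(context); // createProject;openEditor;
    }
}
```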


=== Acceptance Criteria

The performance test suites should be executable locally against a development instance.
Such an instance should be able to have the frontend and the backend running on two different ports (e.g. http://localhost:5173 and http://localhost:8080).
The suites should also be able to run against a production instance where everything is packaged into a single executable.
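
One way to satisfy both setups would be to resolve the target URLs from configuration, as in this sketch; the property names are hypothetical:

```java
/**
 * Hypothetical sketch: resolve frontend/backend base URLs from system
 * properties so the same suite can target a dev instance (two ports)
 * or a packaged production instance (one port).
 */
public class TargetConfiguration {

    public static String frontendUrl() {
        // Dev default; a production run would override both properties
        // with the single URL of the packaged executable.
        return System.getProperty("sirius.perf.frontend", "http://localhost:5173");
    }

    public static String backendUrl() {
        return System.getProperty("sirius.perf.backend", "http://localhost:8080");
    }

    public static void main(String[] args) {
        System.out.println(frontendUrl());
        System.out.println(backendUrl());
    }
}
```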

The performance test suites should also run during our continuous integration process.



== Solution

We will create a set of performance test suites based on reusable fragments that can be composed to write complex tests.
The solution should also help initialize the state of the application with data that matches real usage patterns of Sirius Web.


=== Scenario

Sirius Web contributors should be able to write new performance tests and run existing ones.
Writing and running those performance tests should be straightforward and documented.


=== Breadboarding

No user interface is needed.


=== Cutting backs

We should be able to see a code coverage report of the application after the execution of the performance test suites in order to evaluate the parts of the application that have not been covered by the tests.


== Rabbit holes

We may identify some performance issues during our tests.
We may not solve them immediately, but we will track them using GitHub issues.


== No-gos

None