* Update K6 scripts to call all APIs
* Remove SQL scripts and add new K6 scripts to populate the database (VehicleInventoryService was overwriting SQL scripts on startup)
* Update READMEs
* Init and populate DB with some basic data
* Update copyrights
Testing:
```
OverheadTests > runAllTestConfigurations() > all STANDARD_OUT
----------------------------------------------------------
Run at Thu Feb 29 23:45:51 UTC 2024
all : Compares all DistroConfigs
5 users, 5000 iterations
----------------------------------------------------------
DistroConfig : none  app_signals_disabled  app_signals_no_traces  app_signals_traces
Run duration : 00:10:04 00:10:02 00:10:02 00:10:02
Avg. CPU (user) % : 0.0 0.0 0.0 0.0
Max. CPU (user) % : 0.0 0.0 0.0 0.0
Avg. mch tot cpu % : 0.0 0.0 0.0 0.0
Startup time (ms) : 4011 5014 5016 5013
Total allocated MB : 0.00 0.00 0.00 0.00
Thread switch rate : 0.0 0.0 0.0 0.0
GC time (ms) : 0 0 0 0
GC pause time (ms) : 0 0 0 0
Req. mean (ms) : 104.11 105.04 106.39 104.71
Req. p95 (ms) : 389.89 389.92 389.93 389.90
Iter. mean (ms) : 1458.85 1471.86 1490.91 1467.22
Iter. p95 (ms) : 1520.01 1540.09 1629.93 1540.02
Net read avg (bps) : 0.00 0.00 0.00 0.00
Net write avg (bps) : 0.00 0.00 0.00 0.00
Peak threads : 0 0 0 0
Gradle Test Executor 41 finished executing tests.
> Task :test
Finished generating test XML results (0.696 secs) into: /workplace/thp/python-sdk/aws-otel-python-instrumentation/performance-tests/build/test-results/test
Generating HTML test report...
Finished generating test html results (0.73 secs) into: /workplace/thp/python-sdk/aws-otel-python-instrumentation/performance-tests/build/reports/tests/test
Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
For more on this, please refer to https://docs.gradle.org/8.6/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.
BUILD SUCCESSFUL in 41m 9s
3 actionable tasks: 1 executed, 2 up-to-date
```
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
Changes to `performance-tests/README.md` (+28 −50):
```diff
@@ -8,41 +8,35 @@
-[Setup and Usage](#setup-and-usage)
-[Visualization](#visualization)
 
-This directory will contain tools and utilities
-that help us to measure the performance overhead introduced by
-the distro and to measure how this overhead changes over time.
+This directory will contain tools and utilities that help us to measure the performance overhead introduced by the distro and to measure how this overhead changes over time.
 
-The overhead tests here should be considered a "macro" benchmark. They serve to measure high-level
-overhead as perceived by the operator of a "typical" application. Tests are performed on a Java 11
-distribution from [Eclipse Temurin](https://projects.eclipse.org/projects/adoptium.temurin).
+The overhead tests here should be considered a "macro" benchmark. They serve to measure high-level overhead as perceived by the operator of a "typical" application. Tests are performed on Python 3.10.
 
 ## Process
 
-There is one dynamic test here called [OverheadTests](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/benchmark-overhead/src/test/java/io/opentelemetry/OverheadTests.java).
-The `@TestFactory` method creates a test pass for each of the [defined configurations](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/benchmark-overhead/src/test/java/io/opentelemetry/config/Configs.java).
-Before the tests run, a single collector instance is started. Each test pass has one or more distroConfigs and those are tested in series.
-For each distro defined in a configuration, the test runner (using [testcontainers](https://www.testcontainers.org/)) will:
+There is one dynamic test here called OverheadTests. The `@TestFactory` method creates a test pass for each of the defined configurations. Before the tests run, a single collector instance is started. Each test pass has one or more distroConfigs and those are tested in series. For each distro defined in a configuration, the test runner (using [testcontainers](https://www.testcontainers.org/)) will:
 
 1. create a fresh postgres instance and populate it with initial data.
-2. create a fresh instance of [spring-petclinic-rest](https://github.com/spring-petclinic/spring-petclinic-rest) instrumented with the specified distroConfig
-3. measure the time until the petclinic app is marked "healthy" and then write it to a file.
-4. if configured, perform a warmup phase. During the warmup phase, a bit of traffic is generated in order to get the application into a steady state (primarily helping facilitate jit compilations). Currently, we use a 30 second warmup time.
-5. start a JFR recording by running `jcmd` inside the petclinic container
-6. run the [k6 test script](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/benchmark-overhead/k6/basic.js) with the configured number of iterations through the file and the configured number of concurrent virtual users (VUs).
-7. after k6 completes, petclinic is shut down
-8. after petclinic is shut down, postgres is shut down
+2. create a fresh instance of the vehicle inventory service instrumented with the specified distroConfig, and the image service (not currently instrumented)
+3. measure the time until the app is marked "healthy" and then write it to a file.
+4. if configured, perform a warmup phase. During the warmup phase, a bit of traffic is generated in order to get the application into a steady state (primarily helping facilitate jit compilations).
+5. start a profiling recording by running a script that relies on psutil inside the application container
+6. run a k6 test script with the configured number of iterations through the file and the configured number of concurrent virtual users (VUs).
+7. after k6 completes, the application is shut down
+8. after the application is shut down, postgres is shut down
```
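Step 3 of the new process list is a plain wall-clock measurement of time-to-healthy. The actual runner is the Java `OverheadTests` harness, so the following is only a rough Python sketch of the idea; the endpoint, timeout, and output file name are all invented for illustration:

```python
# Hypothetical sketch of the "wait until healthy" startup measurement.
# The real runner is the Java test harness; all names here are invented.
import time
import urllib.request


def measure_startup_ms(health_url: str, out_file: str, timeout_s: float = 120.0) -> float:
    """Poll a health endpoint until it returns 200, then record elapsed ms."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        try:
            with urllib.request.urlopen(health_url, timeout=2) as resp:
                if resp.status == 200:
                    elapsed_ms = (time.monotonic() - start) * 1000.0
                    with open(out_file, "w") as f:
                        f.write(f"{elapsed_ms:.0f}\n")
                    return elapsed_ms
        except OSError:
            time.sleep(0.5)  # app not up yet; retry shortly
    raise TimeoutError(f"{health_url} not healthy after {timeout_s}s")


# e.g. measure_startup_ms("http://localhost:8001/healthcheck", "startup_time_ms.txt")
```

The `Startup time (ms)` row in the test output above reports this per-distroConfig measurement.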
```diff
 And this repeats for every distro configured in each test configuration.
 
 After all the tests are complete, the results are collected and committed back to the `/results` subdirectory as csv and summary text files.
 
 ## What do we measure?
 
-For each test pass, we record the following metrics in order to compare distroConfigs and determine
-relative overhead.
+For each test pass, we record the following metrics in order to compare distroConfigs and determine relative overhead.
+
+// WIP: This list will change once we finalize the profiling script.
@@ … @@
 | Peak threads | # | Highest number of running threads in the VM, including distroConfig threads |
 | Network read mean | bits/s | Average network read rate |
 | Network write mean | bits/s | Average network write rate |
-| Average JVM user CPU | % | Average observed user CPU (range 0.0-1.0) |
-| Max JVM user CPU | % | Max observed user CPU used (range 0.0-1.0) |
+| Average user CPU | % | Average observed user CPU (range 0.0-1.0) |
+| Max user CPU | % | Max observed user CPU used (range 0.0-1.0) |
 | Average machine tot. CPU | % | Average percentage of machine CPU used (range 0.0-1.0) |
-| Total GC pause nanos | ns | JVM time spent paused due to GC |
+| Total GC pause nanos | ns | time spent paused due to GC |
 | Run duration ms | ms | Duration of the test run, in ms |
 
 ## Config
```
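The profiling script referenced in step 5 is not included in this diff, and the metrics list above is marked WIP, so none of the following is the actual implementation. As a sketch of the psutil-based approach the README describes, a sampler feeding the CPU, thread, and network rows might look roughly like this (all names are assumptions):

```python
# Hypothetical psutil-based sampler; NOT the actual profiling script from
# this PR, only a sketch of the approach described in step 5.
import csv
import time

import psutil


def sample(pid: int, out_path: str, duration_s: float, interval_s: float = 1.0) -> None:
    """Sample process and machine stats at a fixed interval and write CSV rows."""
    proc = psutil.Process(pid)  # the application process under test
    proc.cpu_percent(None)      # prime the per-process CPU counter
    psutil.cpu_percent(None)    # prime the machine-wide CPU counter
    end = time.monotonic() + duration_s
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ts", "proc_cpu_pct", "machine_cpu_pct",
                         "threads", "net_bytes_recv", "net_bytes_sent"])
        while time.monotonic() < end:
            time.sleep(interval_s)
            net = psutil.net_io_counters()
            writer.writerow([
                time.time(),
                proc.cpu_percent(None),  # % since last call (user+system; user-only
                                         # CPU could use proc.cpu_times().user deltas)
                psutil.cpu_percent(None),
                proc.num_threads(),
                net.bytes_recv,
                net.bytes_sent,
            ])
```

Averages, maxima, peak threads, and the network read/write rates in the summary table would then be computed over these per-interval samples.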
```diff
@@ -74,41 +68,25 @@ Each config contains the following:
 - totalIterations - the number of passes to make through the k6 test script
 - warmupSeconds - how long to wait before starting conducting measurements
 
-Currently, we test:
-
-- no distro versus latest released distro
-- no distro versus latest snapshot
-- latest release vs. latest snapshot
-
 Additional configurations can be created by submitting a PR against the `Configs` class.
 
 ### DistroConfigs
 
-An distroConfig is defined in code as a name, description, optional URL, and optional additional
-arguments to be passed to the JVM (not including `-javaagent:`). New distroConfigs may be defined
-by creating new instances of the `Distro` class. The `AgentResolver` is used to download
-the relevant distroConfig jar for an `Distro` definition.
-
-## Automation
-
-The tests are run nightly via github actions. The results are collected and appended to
-a csv file, which is committed back to the repo in the `/results` subdirectory.
+A distroConfig is defined in code as a name, a description, a flag for instrumentation, and optional additional arguments to be passed to the application container. New distroConfigs may be defined by creating new instances of the `Distro` class. The `AgentResolver` is used to download the relevant distroConfig jar for a `Distro` definition.
 
 ## Setup and Usage
 
-The tests require docker to be running. Simply run `OverheadTests` in your IDE.
+Pre-requirements:
+* Have `docker` installed and running - verify by running the `docker` command.
+* Export the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, and S3_BUCKET environment variables.
 
-Alternatively, you can run the tests from
-the command line with gradle:
-
-```
-cd benchmark-overhead
+Steps:
+* From the `aws-otel-python-instrumentation` dir, execute:
+```sh
+./scripts/build_and_install_distro.sh
+./scripts/set-up-performance-tests.sh
+cd performance-tests
 ./gradlew test
-
 ```
 
-## Visualization
-
-None yet. Help wanted! Our goal is to have the results and a rich UI running in the
-`gh-pages` branch similar to [earlier tools](https://breedx-splk.github.io/iguanodon/web/).
-Please help us make this happen.
+The last step can be run from the command line as shown, or from your IDE (after setting the environment variables appropriately).
```
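The DistroConfigs section above lists what a distroConfig carries: a name, a description, an instrumentation flag, and optional container arguments. The real definitions are Java classes (`Distro`, `Configs`) in the test harness; purely for orientation, a Python analogue might look like this (every name below is hypothetical except the distroConfig names, which appear in the test output above):

```python
# Illustrative-only Python analogue of the Java `Distro` definition described
# above; the real harness defines these in Java, not Python.
from dataclasses import dataclass


@dataclass(frozen=True)
class DistroConfig:
    name: str                  # e.g. "none", "app_signals_traces" (see results above)
    description: str
    instrumented: bool         # whether the distro is attached to the app container
    container_args: tuple[str, ...] = ()  # optional extra args for the app container


# A test configuration then compares several distroConfigs in series:
ALL_CONFIGS = (
    DistroConfig("none", "no distro at all", instrumented=False),
    DistroConfig("app_signals_disabled", "distro attached, AppSignals disabled", instrumented=True),
    DistroConfig("app_signals_no_traces", "AppSignals enabled, traces off", instrumented=True),
    DistroConfig("app_signals_traces", "AppSignals enabled with traces", instrumented=True),
)
```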