* chore(build): move ci scripts to .gitlab
* chore(build): no need to ignore circle ci files anymore
* chore(build): make .gitlab files global effect
* chore(build): Not possible to choose system tests commit anymore with gitlab
CircleCI system-tests ran through GitHub Actions. You could change the version checked out
in the action, but that change didn't run until it was merged into master, since GHA runs
master-only for security reasons.
GitLab system-tests are now pinned instead of tracking latest, but the pin lives in one-pipeline.
* chore(ci): Typo and PR suggestions
* chore(ci): spotless
---------
Co-authored-by: Sarah Chen <[email protected]>
- if [ "$PROFILE_TESTS" == "true" ]; then .circleci/collect_profiles.sh; fi
491
-
- .circleci/collect_results.sh
492
-
- .circleci/upload_ciapp.sh $CACHE_TYPE $testJvm
489
+
- .gitlab/collect_reports.sh
490
+
- if [ "$PROFILE_TESTS" == "true" ]; then .gitlab/collect_profiles.sh; fi
491
+
- .gitlab/collect_results.sh
492
+
- .gitlab/upload_ciapp.sh $CACHE_TYPE $testJvm
493
493
- gitlab_section_end "collect-reports"
494
494
- URL_ENCODED_JOB_NAME=$(jq -rn --arg x "$CI_JOB_NAME" '$x|@uri')
495
495
- echo -e "${TEXT_BOLD}${TEXT_YELLOW}See test results in Datadog:${TEXT_CLEAR} https://app.datadoghq.com/ci/test/runs?query=test_level%3Atest%20%40test.service%3Add-trace-java%20%40ci.pipeline.id%3A${CI_PIPELINE_ID}%20%40ci.job.name%3A%22${URL_ENCODED_JOB_NAME}%22"
docs/how_to_test.md: 27 additions & 26 deletions
@@ -5,25 +5,25 @@
 The project leverages different types of test:
 
 1. The most common ones are **unit tests**.
-   They are intended to test a single isolated feature, and rely on [JUnit 5 framework](https://junit.org/junit5/docs/current/user-guide/) or [Spock 2 framework](https://spockframework.org/spock/docs/).
-   JUnit framework is recommended for most unit tests for its simplicity and performance reasons.
-   Spock framework provides an alternative for more complex test scenarios, or tests that requires Groovy Script to access data outside their scope limitation (eg private fields).
+   They are intended to test a single isolated feature, and rely on the [JUnit 5 framework](https://junit.org/junit5/docs/current/user-guide/) or the [Spock 2 framework](https://spockframework.org/spock/docs/).
+   * The JUnit framework is recommended for most unit tests for its simplicity and performance.
+   * The Spock framework provides an alternative for more complex test scenarios, or tests that require Groovy scripting to access data outside their scope limitation (e.g. private fields).
 
-2. A variant of unit tests are **instrumented tests**.
-   Their purpose is similar to the unit tests but the tested code is instrumented by the java agent (`:dd-trace-java:java-agent`) while running. They extend the Spock specification `datadog.trace.agent.test.AgentTestRunner` which allows to test produced traces and metrics.
+2. A variant of unit tests is **instrumented tests**.
+   Their purpose is similar to unit tests, but the tested code is instrumented by the java agent (`:dd-trace-java:java-agent`) while running. They extend the Spock specification `datadog.trace.agent.test.AgentTestRunner`, which allows testing the produced traces and metrics.
 
-3. The third type of tests are **Muzzle checks**.
-   Their goal is to check the [Muzzle directives](./how_instrumentations_work.md#muzzle), making sure instrumentations are safe to load against specific library versions.
+3. The third type of tests is **Muzzle checks**.
+   Their goal is to check the [Muzzle directives](./how_instrumentations_work.md#muzzle), making sure instrumentations are safe to load against specific library versions.
 
-3. The fourth type of tests are **integration tests**.
-   They test features that requires a more complex environment setup.
-   In order to build such enviroments, integration tests use Testcontainers to setup the services needed to run the tests.
+4. The fourth type of tests is **integration tests**.
+   They test features that require a more complex environment setup.
+   In order to build such environments, integration tests use Testcontainers to set up the services needed to run the tests.
 
-4. The fifth type of test are **smoke tests**.
-   They are dedicated to test the java agent (`:dd-java-agent`) behavior against demo applications to prevent any regression. All smoke tests are located into the `:dd-smoke-tests` module.
+5. The fifth type of test is **smoke tests**.
+   They are dedicated to testing the java agent (`:dd-java-agent`) behavior against demo applications to prevent any regression. All smoke tests are located in the `:dd-smoke-tests` module.
 
-5. The last type of test are **system tests**.
-   They are intended to test behavior consistency between all the client libraries, and relies on [their on GitHub repository](https://github.com/DataDog/system-tests).
+6. The last type of test is **system tests**.
+   They are intended to test behavior consistency between all the client libraries, and rely on [their own GitHub repository](https://github.com/DataDog/system-tests).
 
 > [!TIP]
 > Most of the instrumented tests and integration tests are instrumentation tests.
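For readers skimming this doc change, a minimal sketch of the first category (a plain JUnit 5 unit test) is shown below; the class under test and its behavior are invented for illustration and are not part of this repository or this commit.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical unit test: exercises one isolated feature with plain JUnit 5,
// with no agent instrumentation and no external services involved.
class TagNormalizerTest {

  // Illustrative helper standing in for the production class under test.
  private static String normalize(String tag) {
    return tag == null ? "" : tag.trim().toLowerCase();
  }

  @Test
  void normalizesCaseAndWhitespace() {
    assertEquals("http.request", normalize("  HTTP.Request "));
  }
}
```

A Spock specification would cover the same ground in Groovy when a test needs the extra flexibility described in the list above.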
@@ -40,13 +40,16 @@ This mechanism exists to make sure either java agent state or static data are re
 
 ### Flaky Tests
 
-If a test runs unreliably, or doen't have a fully deterministic behavior, this will lead into recurrent unexpected errors in continuous integration.
+If a test runs unreliably, or doesn't have fully deterministic behavior, it will lead to recurrent unexpected errors in continuous integration.
 In order to identify such tests and avoid the continuous integration to fail, they are marked as _flaky_ and must be annotated with the `@Flaky` annotation.
 
 > [!TIP]
-> In case your pull request checks failed due to some unexpected flaky tests, you can retry the continous integration pilepeline on CircleCI using the `Rerun workflow from failed` button:
->
-> 
+> In case your pull request checks fail due to some unexpected flaky tests, you can retry the continuous
+> integration pipeline on GitLab:
+> * using the `Run again` button from the pipeline view:
+> 
+> * using the `Retry` button from the job view:
+> 
 
 ## Running Tests
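As a concrete illustration of the flaky-test convention mentioned in the hunk above, the sketch below marks a timing-sensitive JUnit test. The `@Flaky` annotation is declared locally here as a placeholder because its actual package is not shown in this diff; real tests should import the repository's own annotation instead.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.jupiter.api.Test;

// Placeholder standing in for the project's real @Flaky annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface Flaky {
  String value() default "";
}

class ReconnectTest {

  // The annotation documents why CI may see sporadic failures from this test.
  @Flaky("timing-sensitive: occasionally exceeds the budget on loaded CI runners")
  @Test
  void reconnectsWithinTimeBudget() throws Exception {
    long start = System.nanoTime();
    Thread.sleep(10); // stand-in for the timing-sensitive operation under test
    assertTrue(System.nanoTime() - start < 1_000_000_000L);
  }
}
```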
@@ -71,25 +74,23 @@ To run tests on a different JVM than the one used for doing the build, you need
 
 ### Running System Tests
 
-The system tests are setup to run on continous integration as pull request check.
+The system tests are set up to run in continuous integration as a pull request check.
 
 If you would like to run them locally, you would have to grab [a local copy of the system tests](https://github.com/DataDog/system-tests), and run them from there.
 You can make them use your development version of `dd-trace-java` by [dropping the built artifacts to the `/binaries` folder](https://github.com/DataDog/system-tests/blob/main/docs/execute/binaries.md#java-library) of your local copy of the system tests.
 
-If you would like to run another version of the system tests on continuous integration, or update them to the latest version, you would need to use [the update pinned system tests script](../.circleci/update_pinned_system_tests.sh) as your pull request won't use the latest `main` version from the system test repository, but a pinned version.
-
-> [!NOTE]
-> The system tests version used for continous integration is defined using `default_system_tests_commit` in [CircleCI configuration](../.circleci/config.continue.yml.j2).
+In CI, system tests are run with the pipeline defined in [`DataDog/system-tests/blob/main/.github/workflows/system-tests.yml`](https://github.com/DataDog/system-tests/blob/main/.github/workflows/system-tests.yml).
 
 ### The APM test agent
 
 The APM test agent emulates the APM endpoints of the Datadog Agent.
 The APM Test Agent container runs alongside Java tracer Instrumentation Tests in CI,
 handling all traces during test runs and performing a number of `Trace Checks`.
 Trace Check results are returned within the `Get APM Test Agent Trace Check Results` step for all instrumentation test jobs.
-Check [trace invariant checks](https://github.com/DataDog/dd-apm-test-agent#trace-invariant-checks) for more informations.
+Check [trace invariant checks](https://github.com/DataDog/dd-apm-test-agent#trace-invariant-checks) for more information.
 
 The APM Test Agent also emits helpful logging, including logging received traces' headers, spans, errors encountered,
-ands information on trace checks being performed.
-Logs can be viewed in CircleCI within the Test-Agent container step for all instrumentation test suites, ie: `z_test_8_inst` job.
+and information on trace checks being performed.
+
+Logs can be viewed in GitLab within the Test-Agent container step for all instrumentation test suites, e.g. the `test_inst` jobs.
 
 Read more about [the APM Test Agent](https://github.com/datadog/dd-apm-test-agent#readme).
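To make the APM test agent description above more concrete, here is a hedged Testcontainers sketch that starts the test agent locally so a tracer (or an HTTP client) can be pointed at it. The image name and the 8126 port are assumptions taken from the dd-apm-test-agent project, not something defined by this commit.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

class ApmTestAgentContainerTest {

  @Test
  void testAgentStartsAndExposesTracePort() {
    // Assumed image coordinates for the APM test agent; 8126 is the usual trace intake port.
    try (GenericContainer<?> testAgent =
        new GenericContainer<>(
                DockerImageName.parse("ghcr.io/datadog/dd-apm-test-agent/ddapm-test-agent:latest"))
            .withExposedPorts(8126)) {
      testAgent.start();
      assertTrue(testAgent.isRunning());

      // A locally built tracer could now be pointed at the mapped port, for example by
      // setting DD_TRACE_AGENT_PORT to testAgent.getMappedPort(8126), to have its traces checked.
      System.out.println("test agent listening on port " + testAgent.getMappedPort(8126));
    }
  }
}
```

In CI the container is managed by the pipeline itself; a local setup along these lines is only useful when debugging instrumentation tests outside GitLab.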