Commit 2e43648: TT-1955 Create page for flaky e2e tests tips tricks (#1639)

# Finding the Root Cause of E2E Test Flakes
## Introduction

When end-to-end tests fail intermittently, the underlying issues can stem from resource constraints, environment setup, or test design, among other possibilities. This guide helps engineers systematically diagnose and address E2E test flakiness, reducing the time spent on guesswork and repeated failures.

---

## 1. GitHub Runners' Hardware
GitHub provides **hosted runners** with specific CPU, memory, and disk allocations. If your tests require more resources than these runners can provide, you may encounter intermittent failures.

By default, we run tests on **`ubuntu-latest`**, as it is **free for public repositories** and the **most cost-effective option for private repositories**. However, this runner has limited resources, which can lead to intermittent failures in resource-intensive tests.

> **Note:** `ubuntu-latest` for **private repositories** has weaker hardware compared to `ubuntu-latest` for **public repositories**. You can learn more about this distinction in [GitHub's documentation](https://docs.github.com/en/actions/using-github-hosted-runners/using-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories).

### 1.1 Available GitHub Runners

Below are some of the GitHub-hosted runners available in our organization:

| Runner Name | CPU | Memory | Disk |
|------------|-----|--------|------|
| `ubuntu-22.04-4cores-16GB` | 4 cores | 16 GB RAM | 150 GB SSD |
| `ubuntu-latest-4cores-16GB` | 4 cores | 16 GB RAM | 150 GB SSD |
| `ubuntu-22.04-8cores-32GB` | 8 cores | 32 GB RAM | 300 GB SSD |
| `ubuntu-latest-8cores-32GB` | 8 cores | 32 GB RAM | 300 GB SSD |
| `ubuntu-22.04-8cores-32GB-ARM` | 8 cores | 32 GB RAM | 300 GB SSD |

### 1.2 Tips for Low-Resource Environments

- **Profile your tests** to understand their CPU and memory usage.
- **Optimize**: Only spin up what you need.
- **If resources are insufficient**, consider redesigning your tests to run in smaller, independent chunks.
- **If needed**, you can configure CI workflows to use a higher-tier runner, but this comes at an additional cost.
- **Run with debug logs** or the Delve debugger. For more details, check out the [CTF Debug Docs](https://smartcontractkit.github.io/chainlink-testing-framework/framework/components/debug.html).

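To act on the higher-tier runner tip, a job can switch runners via `runs-on`. This workflow fragment is only a sketch: the job name and steps are illustrative, while the runner label comes from the table above.

```yaml
# Hypothetical workflow fragment: move a resource-hungry E2E job
# from ubuntu-latest to one of the larger runners listed above.
jobs:
  e2e-heavy:
    runs-on: ubuntu-22.04-8cores-32GB  # larger hosted runner, billed at a higher rate
    steps:
      - uses: actions/checkout@v4
      - name: Run E2E tests
        run: make run_flakeguard_validate_e2e_tests
```

Prefer fixing the test first; moving to a bigger runner hides resource problems at a recurring cost.
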
---
## 2. Reproducing Flakes

Flaky tests don't fail on every run, so you need to execute them multiple times to isolate problems.

### 2.1 Repeat Runs

For E2E tests, run them 5–10 times consecutively to expose intermittent issues. To run the tests with flakeguard validation, execute the following command from the `chainlink-core/` directory:

```sh
cd chainlink-core/
make run_flakeguard_validate_e2e_tests
```

You'll be prompted to provide:

- **Test IDs** (e.g., `smoke/forwarders_ocr2_test.go:*,smoke/vrf_test.go:*`)

  *Note: Test IDs can be taken from the `e2e-tests.yml` file.*

- **Number of runs** (default: 5)
- **Chainlink version** (default: develop)
- **Branch name** (default: develop)

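Outside of flakeguard, a quick local signal is simply to loop the test command and count failures. A minimal sketch (the `rerun` helper below is not part of the repo):

```sh
# rerun: execute a command N times and report how many runs failed.
rerun() {
  n="$1"; shift
  fails=0
  for i in $(seq 1 "$n"); do
    "$@" > /dev/null 2>&1 || fails=$((fails + 1))
  done
  echo "failed $fails/$n runs"
}

# Example with a command that always fails:
rerun 5 false   # prints "failed 5/5 runs"
```

Any nonzero failure count on a test that should be deterministic is worth investigating before blaming infrastructure.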
### 2.2 Flaky Unit Tests in the Core Repository
For unit tests in the core repository, you can use a dedicated command to detect flakiness in updated tests:

```sh
cd chainlink-core/
make run_flakeguard_validate_unit_tests
```

## 3. Testing Locally Under CPU and Memory Constraints

If CPU throttling or resource contention is suspected, here's how you can approach testing under constrained resources:

1. **Spin up Docker containers locally with limited CPU or memory.**
2. **Mimic GitHub's environment** (use the same OS and similar resource limits).
3. **Run E2E tests** repeatedly to see if flakiness correlates with resource usage.
4. **Review logs and metrics** for signs of CPU or memory starvation.

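Step 1 needs only Docker's standard resource flags. A sketch, where the image and the exact limits are arbitrary examples:

```sh
# Constrain CPU and memory for the container that runs your tests.
# --cpus enforces a CFS quota (fractional values allowed);
# --memory hard-caps RAM, so exceeding it gets the process OOM-killed.
docker run --rm --cpus="2" --memory="4g" \
  ubuntu:22.04 bash -c "echo 'your test command here'"
```

If a test only fails under these caps, the flake is resource-driven rather than logic-driven.
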
### Setting Global Limits (Docker Desktop)
If you are using **Docker Desktop** on **macOS or Windows**, you can globally limit Docker's resource usage:

1. Open **Docker Desktop**.
2. Navigate to **Settings** → **Resources**.
3. Adjust the sliders for **CPUs** and **Memory**.
4. Click **Apply & Restart** to enforce the new limits.

This setting caps the **total** resources Docker can use on your machine, ensuring all containers run within the specified constraints.

### Observing Test Behavior Under Constraints
- **Run your E2E tests repeatedly** with different global resource settings.
- **Watch for flakiness**: if tests start failing more under tighter limits, suspect CPU throttling or memory starvation.
- **Examine logs/metrics** to pinpoint whether insufficient resources are causing sporadic failures.

By setting global limits, you can simulate resource-constrained environments similar to CI/CD pipelines and detect potential performance bottlenecks in your tests.

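For the logs/metrics step, one low-effort signal (assuming the tests run in local Docker containers) is a one-shot usage snapshot taken while the suite is executing:

```sh
# Snapshot per-container CPU %, memory usage/limit, and I/O.
docker stats --no-stream
```

A container pinned near 100% CPU or its memory limit during the flaky phase is a strong hint that the failure is resource starvation.
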
## 4. Common Pitfalls and “Gotchas”
1. **Resource Starvation**: Heavy tests on minimal hardware lead to timeouts or slow responses.
2. **External Dependencies**: Network latency, rate limits, or third-party service issues can cause sporadic failures.
3. **Shared State**: Race conditions arise if tests share databases or global variables in parallel runs.
4. **Timeouts**: Overly tight time limits can fail tests in slower environments.

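The timeout pitfall is easy to demonstrate with `timeout` from coreutils: identical work passes or fails depending only on how much headroom the limit leaves.

```sh
# Generous limit: the work finishes and prints "done".
timeout 5 sh -c 'sleep 1; echo done'

# Tight limit: the same work is killed mid-run, which is exactly
# how an overloaded runner turns a passing test into a flake.
timeout 1 sh -c 'sleep 2; echo done' || echo "timed out"
```

When tuning test timeouts, budget for the slowest environment the test runs in, not your local machine.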
## 5. Key Takeaways
Tackle flakiness systematically:

1. **Attempt local reproduction** (e.g., Docker with limited resources).
2. **Run multiple iterations** on GitHub runners.
3. **Analyze logs and metrics** to see whether resource or concurrency issues exist.
4. **Escalate** to the infra team only after confirming the issue isn't in your own test code or setup.
