docs/book/src/developer/testing.md: 81 additions & 38 deletions
@@ -11,29 +11,69 @@ Unit tests focus on individual pieces of logic - a single func - and don't requi
be fast and great for getting the first signal on the current implementation, but unit tests have the risk of
allowing integration bugs to slip through.

In Cluster API most of the unit tests are developed using [go test], [gomega] and the [fakeclient]; however, using
[fakeclient] is not suitable for all use cases due to some limitations in how it is implemented. In some cases
contributors will be required to use [envtest]. See the [quick reference](#quick-reference) below for more details.

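As an illustration of the typical shape of such a test, here is a minimal sketch of a [fakeclient]-based unit test; the `TestGetNode` name and the `node-1` object are made up for the example. The test seeds the fake client with objects, calls the code under test, and asserts on the result with [gomega]:

```go
package controllers

import (
	"context"
	"testing"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// TestGetNode is a hypothetical example of a fakeclient-based unit test:
// seed the fake client, exercise the code under test, assert with gomega.
func TestGetNode(t *testing.T) {
	g := NewWithT(t)

	scheme := runtime.NewScheme()
	g.Expect(clientgoscheme.AddToScheme(scheme)).To(Succeed())

	// Objects the test expects to exist are pre-loaded into the fake client.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-1"}}
	fakeClient := fake.NewClientBuilder().
		WithScheme(scheme).
		WithObjects(node).
		Build()

	// Exercise the client as the code under test would, and assert on the result.
	got := &corev1.Node{}
	g.Expect(fakeClient.Get(context.Background(), client.ObjectKey{Name: "node-1"}, got)).To(Succeed())
	g.Expect(got.Name).To(Equal("node-1"))
}
```
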
### Mocking external APIs

In some cases when writing tests it is required to mock an external API, e.g. the etcd client API or the AWS SDK API.

This problem is usually well scoped in core Cluster API, and in most cases it is already solved by using fake
implementations of the target API to be injected during tests.

Instead, mocking is much more relevant for infrastructure providers; in order to address the issue
some providers can use simulators reproducing the behaviour of a real infrastructure provider (e.g. CAPV);
if this is not possible, a viable solution is to use mocks (e.g. CAPA).

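A minimal sketch of the fake-implementation approach follows; the `EtcdMembership` interface, the `fakeEtcdMembership` type and the `hasMember` function are illustrative stand-ins, not the actual Cluster API etcd client abstraction. The code under test depends on a small interface, and tests inject a hand-written fake:

```go
package etcdutil

import "context"

// EtcdMembership abstracts the subset of the external API used by the code
// under test; the production implementation wraps the real etcd client.
type EtcdMembership interface {
	MemberNames(ctx context.Context) ([]string, error)
}

// fakeEtcdMembership is a hand-written fake injected during tests; it returns
// canned members/errors so that behaviour can be exercised without etcd.
type fakeEtcdMembership struct {
	members []string
	err     error
}

func (f *fakeEtcdMembership) MemberNames(_ context.Context) ([]string, error) {
	return f.members, f.err
}

// hasMember is an example of code under test that only sees the interface.
func hasMember(ctx context.Context, c EtcdMembership, name string) (bool, error) {
	members, err := c.MemberNames(ctx)
	if err != nil {
		return false, err
	}
	for _, m := range members {
		if m == name {
			return true, nil
		}
	}
	return false, nil
}
```

A test can then exercise the error path by injecting `&fakeEtcdMembership{err: errors.New("boom")}` without a running etcd.
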
### Generic providers

When writing tests, core Cluster API contributors should ensure that the code works with any provider, and thus it is required
not to use any specific provider implementation. Instead, the so-called generic providers, e.g. "GenericInfrastructureCluster",
should be used because they implement the plain Cluster API contract. This prevents tests from relying on assumptions that
may not hold true in all cases.

Please note that in the long term we would like to improve the implementation of generic providers, centralizing
the existing set of utilities scattered across the codebase, but while the details of this work are being defined do not
hesitate to reach out to reviewers and maintainers for guidance.

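For illustration only, a generic infrastructure cluster can be represented as an `Unstructured` object carrying just the fields required by the contract; the `newGenericInfraCluster` helper and the group/version/kind used below are assumptions for this sketch, while in the code base the generic types are provided by `sigs.k8s.io/cluster-api/internal/test/builder`:

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// newGenericInfraCluster is a hypothetical helper returning an object that
// satisfies the Cluster API infrastructure contract without depending on any
// concrete provider; the GVK below is illustrative.
func newGenericInfraCluster(namespace, name string) *unstructured.Unstructured {
	obj := &unstructured.Unstructured{}
	obj.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "infrastructure.cluster.x-k8s.io",
		Version: "v1beta1",
		Kind:    "GenericInfrastructureCluster",
	})
	obj.SetNamespace(namespace)
	obj.SetName(name)
	// Fields defined by the contract, e.g. status.ready, can be set via the
	// unstructured helpers.
	_ = unstructured.SetNestedField(obj.Object, true, "status", "ready")
	return obj
}
```
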
## Integration tests

Integration tests are focused on testing the behavior of an entire controller or the interactions between two or
more Cluster API controllers.

In Cluster API, integration tests are based on [envtest] and one or more controllers configured to run against
the test cluster.

With this approach it is possible to interact with Cluster API almost like in a real environment, by creating/updating
Kubernetes objects and waiting for the controllers to take action. See the [quick reference](#quick-reference) below for more details.

Also in the case of integration tests, the considerations about [mocking external APIs](#mocking-external-apis) and the usage of [generic providers](#generic-providers) apply.

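Purely as a hedged sketch of what this looks like at the controller-runtime level, the snippet below boots [envtest] directly; the `setupEnv` helper and the CRD path are made up for the example, and Cluster API's own integration tests go through the wrappers in `sigs.k8s.io/cluster-api/internal/test/envtest` described in the [quick reference](#quick-reference), which also install webhooks and indexes:

```go
package controllers_test

import (
	"path/filepath"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// setupEnv is a hypothetical, minimal helper starting a real kube-apiserver
// and etcd via envtest and returning a client pointed at them.
func setupEnv(t *testing.T) client.Client {
	t.Helper()

	testEnv := &envtest.Environment{
		// Path to the CRDs of the API types under test; illustrative only.
		CRDDirectoryPaths: []string{filepath.Join("..", "..", "config", "crd", "bases")},
	}
	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("failed to start envtest: %v", err)
	}
	t.Cleanup(func() {
		if err := testEnv.Stop(); err != nil {
			t.Logf("failed to stop envtest: %v", err)
		}
	})

	c, err := client.New(cfg, client.Options{})
	if err != nil {
		t.Fatalf("failed to create client: %v", err)
	}
	return c
}
```
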
## Test maintainability

Tests are an integral part of the project codebase.

Cluster API maintainers and all contributors should be committed to helping ensure that tests are easily maintainable,
easily readable, well documented and consistent across the code base.

To keep improving our practice around this ambitious goal, we are starting to introduce a shared set of:

- Builders (`sigs.k8s.io/cluster-api/internal/test/builder`), allowing test objects to be created in a simple and consistent way (see the sketch below).
- Matchers (`sigs.k8s.io/cluster-api/internal/test/matchers`), improving how we write test assertions.

Each contribution to growing this set of utilities, or to their adoption across the codebase, is more than welcome!

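As a hedged sketch of how the builders are meant to be used, tests assemble objects through chained builders instead of hand-written structs; the builder calls below are illustrative and should be checked against the current `internal/test/builder` API:

```go
package controllers_test

import (
	"testing"

	. "github.com/onsi/gomega"

	"sigs.k8s.io/cluster-api/internal/test/builder"
)

// TestWithBuilders is illustrative only: the builder calls below sketch the
// intended usage and should be verified against the actual API exposed by
// sigs.k8s.io/cluster-api/internal/test/builder (the package is internal,
// so it is only importable from within the Cluster API repository).
func TestWithBuilders(t *testing.T) {
	g := NewWithT(t)

	// Assemble a test Cluster through a chained builder instead of
	// hand-writing the full struct in every test.
	cluster := builder.Cluster("ns1", "cluster-1").Build()

	g.Expect(cluster.Name).To(Equal("cluster-1"))
	g.Expect(cluster.Namespace).To(Equal("ns1"))
}
```
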
Another consideration that can help in improving test maintainability is the idea of testing "by layers"; this idea could
apply whenever we are testing "higher-level" functions that internally use one or more "lower-level" functions;
in order to avoid writing/maintaining redundant tests, whenever possible contributors should take care of testing
_only_ the logic that is implemented in the "higher-level" function, delegating the testing of the functions called internally
to a "lower-level" set of unit tests.

A similar concern could be raised whenever there is overlap between unit tests and integration tests,
but in this case the distinctive value of the two layers of testing is determined by how tests are designed:

- unit tests are focused on code structure: func(input) = output, including edge case values, asserting error conditions, etc.
  (see the table-driven sketch below);
- integration tests are user story driven: as a user, I want to express some desired state using API objects, wait for the
  reconcilers to take action, check the new system state.

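For instance, the `func(input) = output` focus of a unit test often translates into a table-driven test; the `sum` function below is a stand-in for whatever function is actually under test:

```go
package util

import "testing"

// sum is a stand-in for the function under test.
func sum(a, b int) int { return a + b }

// TestSum shows the func(input) = output focus of a unit test, including
// edge case values, expressed as a table-driven test.
func TestSum(t *testing.T) {
	tests := []struct {
		name     string
		a, b     int
		expected int
	}{
		{name: "positive values", a: 1, b: 2, expected: 3},
		{name: "zero values", a: 0, b: 0, expected: 0},
		{name: "negative values", a: -1, b: -2, expected: -3},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := sum(tt.a, tt.b); got != tt.expected {
				t.Errorf("sum(%d, %d) = %d, expected %d", tt.a, tt.b, got, tt.expected)
			}
		})
	}
}
```
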
```go
	// ...
		panic(fmt.Sprintf("unable to setup index: %v", err))
	}

	// Run tests
	...
}
```

By combining pre-configured validation and mutating webhooks and reconcilers/indexes it is possible
to use [envtest] for developing Cluster API integration tests that can mimic how the system
behaves in a real cluster.

Please note that, because [envtest] uses a real kube-apiserver that is shared across many test cases, the developer
should take care in ensuring each test runs in isolation from the others, by:

- Creating objects in separated namespaces.
- Avoiding object name conflicts.

Developers should also be aware of the fact that the informers cache used to access the [envtest] cluster
depends on actual etcd watches/API calls for updates, and thus it could happen that after creating
or deleting objects the cache takes a few milliseconds to get updated. This can lead to test flakes,
and thus it is always recommended to use patterns like create and wait or delete and wait; the Cluster API
envtest helpers provide a set of utils for this scope, as sketched below.

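Here is a hedged sketch of the create and wait pattern; the `createAndWait` and `createIsolatedCluster` helpers, as well as the timeout/interval values, are made up for the example, and Cluster API's `internal/test/envtest` package ships ready-made helpers for this:

```go
package controllers_test

import (
	"context"
	"testing"
	"time"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// createAndWait is a hypothetical helper showing the "create and wait"
// pattern: create the object, then block until the (possibly cache-backed)
// client can read it back, so that subsequent reads in the test do not flake.
func createAndWait(ctx context.Context, t *testing.T, c client.Client, obj client.Object) {
	t.Helper()
	g := NewWithT(t)

	g.Expect(c.Create(ctx, obj)).To(Succeed())
	g.Eventually(func() error {
		// Read into a fresh copy so the check does not mutate obj.
		return c.Get(ctx, client.ObjectKeyFromObject(obj), obj.DeepCopyObject().(client.Object))
	}, 10*time.Second, 100*time.Millisecond).Should(Succeed())
}

// createIsolatedCluster shows example usage, assuming c is backed by the
// envtest cluster and its scheme has the Cluster API types registered.
func createIsolatedCluster(ctx context.Context, t *testing.T, c client.Client) *clusterv1.Cluster {
	t.Helper()

	// Each test creates its own namespace to stay isolated from other tests.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "integration-test-"}}
	createAndWait(ctx, t, c, ns)

	cluster := &clusterv1.Cluster{ObjectMeta: metav1.ObjectMeta{Namespace: ns.Name, Name: "cluster-1"}}
	createAndWait(ctx, t, c, cluster)
	return cluster
}
```
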
However, developers should be aware that in some ways, the test control plane will behave differently from “real”
clusters, and that might have an impact on how you write tests.

@@ -374,13 +415,15 @@ comes with a set of limitations that could hamper the validity of a test, most n

- it does not properly handle a set of fields which are common in the Kubernetes API objects (and Cluster API objects as well)
  like e.g. `creationTimestamp`, `resourceVersion`, `generation`, `uid`
- [fakeclient] operations do not trigger defaulting or validation webhooks, so there are no enforced guarantees about the semantic accuracy
  of the test objects.
- the [fakeclient] does not use a cache based on informers/API calls/etcd watches, so tests written in this way
  can't help in surfacing race conditions related to how those components behave in a real cluster.
- there is no support for cache indexes/operations using cache indexes.

Accordingly, using [fakeclient] is not suitable for all use cases, so in some cases contributors will be required
to use [envtest] instead. In case of doubt about which one to use when writing tests, don't hesitate to ask for
guidance from project maintainers.

### `ginkgo`
[Ginkgo] is a Go testing framework built to help you efficiently write expressive and comprehensive tests using Behavior-Driven Development (“BDD”) style.