docs/book/src/developer/testing.md
Unit tests focus on individual pieces of logic - a single func - and don't require […]
be fast and great for getting the first signal on the current implementation, but unit tests have the risk of
allowing integration bugs to slip through.

In Cluster API most of the unit tests are developed using [go test], [gomega] and the [fakeclient]; however, using
[fakeclient] is not suitable for all the use cases due to some limitations derived from how it is implemented,
so in some cases contributors will be required to use [envtest]. See the [quick reference](#quick-reference) below for more details.

### Mocking external APIs

In some cases when writing tests it is required to mock an external API, e.g. the etcd client API or the AWS SDK API.

This problem is usually well scoped in core Cluster API, and in most cases it is already solved by using fake
implementations of the target API to be injected during tests.

Instead, mocking is much more relevant for infrastructure providers; in order to address the issue
some providers use simulators reproducing the behaviour of a real infrastructure provider (e.g. CAPV);
if this is not possible, a viable solution is to use mocks (e.g. CAPA).
### Generic providers

When testing core Cluster API, contributors should ensure that the code works with any provider, and thus it is required
to not use any specific provider implementation. Instead, the so-called generic providers should be used because they
implement the plain Cluster API contract, thus preventing developers from making assumptions we cannot rely on.

Please note that in the long term we would like to improve the current implementation of generic providers, centralizing
the existing set of utilities scattered across the codebase; while the details of this work are being defined, do not
hesitate to reach out to reviewers and maintainers for guidance.
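
The idea can be illustrated with a minimal sketch, assuming a hypothetical slice of the infrastructure provider contract (the struct and field names below are illustrative, not the real contract types): core logic must only read contract fields, and a generic test object fulfils the contract without any provider-specific behaviour.

```go
package main

import "fmt"

// infraMachine models the slice of the infrastructure provider contract that
// the core code under test relies on. Hypothetical fields for illustration.
type infraMachine struct {
	Kind       string
	ProviderID string
	Ready      bool
}

// genericInfraMachine builds a provider-agnostic test object: it satisfies
// the contract while encoding no provider-specific assumptions.
func genericInfraMachine(name string) *infraMachine {
	return &infraMachine{
		Kind:       "GenericInfrastructureMachine",
		ProviderID: "generic://" + name,
		Ready:      true,
	}
}

// machineIsProvisioned is an example of core logic that inspects only
// contract fields, so it works with any provider.
func machineIsProvisioned(m *infraMachine) bool {
	return m.Ready && m.ProviderID != ""
}

func main() {
	m := genericInfraMachine("machine-0")
	fmt.Println(machineIsProvisioned(m)) // true
}
```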
## Integration tests
Integration tests are focused on testing the behavior of an entire controller or the interactions between two or
more Cluster API controllers.

In Cluster API, integration tests are based on [envtest] and one or more controllers configured to run against
the test cluster.

With this approach it is possible to interact with Cluster API almost like in a real environment, by creating/updating
Kubernetes objects and waiting for the controllers to take action. See the [quick reference](#quick-reference) below for more details.

Also in the case of integration tests, considerations about [mocking external APIs](#mocking-external-apis) and usage of [generic providers](#generic-providers) apply.

## Test maintainability

Tests are an integral part of the project codebase.

Cluster API maintainers and all the contributors should be committed to helping ensure that tests are easily maintainable,
easily readable, well documented and consistent across the code base.

To keep improving our practice around this ambitious goal, we are starting to introduce a shared set of:
- Builders (`sigs.k8s.io/cluster-api/internal/test/builder`), allowing to create test objects in a simple and consistent way.
- Matchers (`sigs.k8s.io/cluster-api/internal/test/matchers`), improving how we write test assertions.
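
The builder style can be sketched as below; this is a minimal stdlib illustration of the fluent pattern, and the type, method, and field names are illustrative, not the real API of `sigs.k8s.io/cluster-api/internal/test/builder`.

```go
package main

import "fmt"

// cluster is a stand-in for a test object.
type cluster struct {
	Namespace, Name string
	Paused          bool
}

// clusterBuilder accumulates fields before producing the object.
type clusterBuilder struct{ c cluster }

// Cluster starts a builder with the required identifying fields.
func Cluster(namespace, name string) *clusterBuilder {
	return &clusterBuilder{c: cluster{Namespace: namespace, Name: name}}
}

// WithPaused sets an optional field and returns the builder for chaining.
func (b *clusterBuilder) WithPaused(paused bool) *clusterBuilder {
	b.c.Paused = paused
	return b
}

// Build returns a copy of the finished test object.
func (b *clusterBuilder) Build() *cluster {
	out := b.c
	return &out
}

func main() {
	c := Cluster("ns1", "cluster-1").WithPaused(true).Build()
	fmt.Printf("%s/%s paused=%v\n", c.Namespace, c.Name, c.Paused)
}
```

The payoff is consistency: every test constructs objects the same way, and adding a field to the builder updates all call sites uniformly.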

Each contribution to growing this set of utilities or their adoption across the codebase is more than welcome!

Another consideration that can help in improving test maintainability is the idea of testing "by layers"; this idea applies
whenever we are testing "higher-level" functions that internally use one or more "lower-level" functions;
in order to avoid writing/maintaining redundant tests, whenever possible contributors should take care of testing
_only_ the logic that is implemented in the "higher-level" function, delegating the testing of the functions called internally
to a "lower-level" set of unit tests.
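
A small sketch of the idea, using hypothetical helpers (`sanitizeName`, `machineName` are made up for illustration): the lower-level function gets its own exhaustive unit tests, and tests of the higher-level function only cover the composition logic it adds.

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeName is the "lower-level" function. Its edge cases (whitespace,
// casing, ...) are covered by its own unit tests and not re-tested above it.
func sanitizeName(name string) string {
	return strings.ToLower(strings.TrimSpace(name))
}

// machineName is the "higher-level" function. Its tests should assert only
// on the composition it adds (prefix and index), trusting sanitizeName.
func machineName(clusterName string, index int) string {
	return fmt.Sprintf("%s-md-%d", sanitizeName(clusterName), index)
}

func main() {
	// A higher-level test checks composition only.
	fmt.Println(machineName(" MyCluster ", 0)) // mycluster-md-0
}
```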

A similar concern could also be raised whenever there is overlap between unit tests and integration tests,
but in this case the distinctive value of the two layers of testing is determined by how the tests are designed:

- unit tests are focused on code structure: func(input) = output, including edge case values, asserting error conditions etc.
- integration tests are user story driven: as a user, I want to express some desired state using API objects, wait for the
  reconcilers to take action, check the new system state.
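
The structure-focused unit style can be sketched with a table-driven test over a pure function (`desiredReplicas` is a hypothetical example, not a Cluster API function), exercising the edge case alongside the happy path:

```go
package main

import "fmt"

// desiredReplicas is a hypothetical pure function with the
// unit-test-friendly shape func(input) = output.
func desiredReplicas(specReplicas *int32) int32 {
	if specReplicas == nil {
		return 1 // default when the field is unset
	}
	return *specReplicas
}

func main() {
	three := int32(3)
	// Table-driven cases, including the nil edge case.
	cases := []struct {
		in   *int32
		want int32
	}{
		{nil, 1},
		{&three, 3},
	}
	for _, c := range cases {
		if got := desiredReplicas(c.in); got != c.want {
			panic(fmt.Sprintf("got %d, want %d", got, c.want))
		}
	}
	fmt.Println("ok")
}
```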
		panic(fmt.Sprintf("unable to setup index: %v", err))
	}

	// Run tests
	...
}
```

By combining pre-configured validating and mutating webhooks with reconcilers/indexes, it is possible
to use [envtest] for developing Cluster API integration tests that can mimic very closely how the system
behaves in a real cluster.

Please note that, because [envtest] uses a real kube-apiserver that is shared across many test cases, the developer
should take care in ensuring each test runs in isolation from the others, by:

- Creating objects in separated namespaces.
- Avoiding object name conflicts.

Another thing that the developer should usually take care of when using [envtest] is the fact that the informers cache
used internally by the controller runtime client depends on actual etcd watches/API calls for updates, and thus it can
happen that after creating or deleting objects the cache takes a few milliseconds to get updated. This can lead to
test flakes, and thus it is always recommended to use patterns like create and wait or delete and wait; the Cluster API
envtest package provides a set of utils for this scope.
335
+
296
336
However, developers should be aware that in some ways, the test control plane will behave differently from “real”
297
337
clusters, and that might have an impact on how you write tests.
298
338
comes with a set of limitations that could hamper the validity of a test, most notably:

- it does not properly handle a set of fields which are common in the Kubernetes API objects (and Cluster API objects as well)
  like e.g. `creationTimestamp`, `resourceVersion`, `generation`, `uid`
- API calls do not execute defaulting or validating webhooks, so there are no enforced guarantees about the semantic accuracy
  of the test objects.
- the [fakeclient] does not use a cache based on informers/API calls/etcd watches, so tests written in this way
  can't help in surfacing race conditions related to how those components behave in a real cluster.
- there is no support for cache indexes/operations using cache indexes.

Accordingly, using [fakeclient] is not suitable for all the use cases, so in some cases contributors will be required
to use [envtest] instead. In case of doubt about which one to use when writing tests, don't hesitate to ask for
guidance from project maintainers.
### `ginkgo`
[Ginkgo] is a Go testing framework built to help you efficiently write expressive and comprehensive tests using Behavior-Driven Development (“BDD”) style.