# Add compatibility suite with selected OpenAPI documents (#267)
### Motivation
To help shape the 1.0 release, we'd like to maintain a public
compatibility test suite containing a selection of real-world OpenAPI
documents. The goal is to check that (a) we can run code generation for
each document, and (b) the generated code compiles. We can then use this
test as part of CI. However, we may want to gate PRs only on the code
generation phase and run the build on main, to keep PR feedback timely.
### Modifications
- Adds a new test suite `CompatibilityTest` to the
`OpenAPIGeneratorReferenceTests` test target.
- Adds Docker support (see the sketch below).
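
For local runs on Linux, something along the following lines should work
(the image tag and mount paths here are illustrative assumptions, not the
exact CI setup):

```bash
# Run the compatibility suite in a Swift container (illustrative invocation).
docker run --rm -v "$PWD:/code" -w /code swift:5.9 \
  swift test --filter CompatibilityTest
```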
### Result
CI runs can check that we can still handle selected real-world OpenAPI
documents.
### Test Plan
I have run this many times locally with varying parameters, and intend
to tune the parameters used in CI as part of this PR. The draft PR
hijacks an existing CI pipeline (`docc-test`), which will be reverted and
replaced with a new pipeline before merging.
### TODO
- [x] Determine appropriate params for CI.
- [x] Remove commit that hijacks existing CI pipeline for testing.
- [x] Enable new pipeline with desired parameters.
### Notes
There are a number of decisions to make about how we run this suite:
- Do we build the generated code, or just run the code generation?
- Do we build the generator and test suite in debug or release?
- Do we run the XCTest suite in parallel?
- Do we generate the code in parallel?
- Do we build the generated code in parallel?
The sections below contain numbers from some local experiments.
#### Building the generator and compatibility test
| Build configuration | `-j $(($(nproc)-1))` | Time |
|---------------------|----------------------|------|
| debug | ❌ | 70s |
| debug | ✅ | 68s |
| release | ❌ | 252s |
| release | ✅ | 256s |
Conclusion: `-j` has no impact, presumably because it defaults to the
core count.

All following experiments omit the `-j` argument when building the
generator and the compatibility test suite.
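
For reference, the build invocation behind these measurements looks
roughly like the following (a sketch; only the `-c` configuration and the
`-j` flag were varied):

```bash
# Build the generator and its test targets; drop -j to use the default
# (which SwiftPM derives from the core count, hence the identical times).
swift build --build-tests -c debug -j "$(($(nproc)-1))"
```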
#### LLVM flag when building the generator and compatibility test
| Build configuration | `-Xllvm -vectorize-slp=false` | Time |
|---------------------|-------------------------------|------|
| debug | ❌ | 70s |
| debug | ✅ | 71s |
| release | ❌ | 252s |
| release | ✅ | 249s |
Conclusion: As expected, this flag has no impact in debug builds; it also
made no measurable difference in release builds.

All following experiments do not use this flag when building the
generator and the compatibility test suite.
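
With SwiftPM, each compiler-frontend flag has to be forwarded via
`-Xswiftc`, so the release-mode invocation with this flag would look
roughly like this (a sketch of the forwarding syntax, not necessarily the
exact command used):

```bash
# Forward "-Xllvm -vectorize-slp=false" to swiftc through SwiftPM.
swift build --build-tests -c release \
  -Xswiftc -Xllvm -Xswiftc -vectorize-slp=false
```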
#### Test times (only code generation)
The following table shows the result of running the compatibility test
suite in
a mode that only performs code generation, i.e. it does not build the
generated
code.
| Build configuration | Parallel test[^1] | Time[^2] |
|---------------------|-------------------|----------|
| debug | no | 157s |
| debug | yes | 108s |
| release | no | 53s |
| release | yes | 38s |
[^1]: _Parallel test_ implies the test was run with: `--parallel
--num-workers $(($(nproc)-1))`.
[^2]: This does not include build time; tests are run with
`--skip-build`.
Conclusion: Parallelizing the test run is probably worth it, with workers
up to the core count.
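
Concretely, the parallel runs correspond to an invocation along these
lines (the `--filter` pattern assumes the suite name from the
Modifications section):

```bash
# Re-run only the compatibility tests without rebuilding, one worker per
# core minus one.
swift test --skip-build --filter CompatibilityTest \
  --parallel --num-workers "$(($(nproc)-1))"
```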
#### Building the compatibility test and running codegen
The following table combines the results from the previous sections.
| Build configuration | Parallel test | Build time | Test time | Total time |
|---------------------|---------------|------------|-----------|------------|
| debug | no | 70s | 157s | 227s |
| debug | yes | 70s | 108s | 178s |
| release | no | 252s | 53s | 305s |
| release | yes | 252s | 38s | 270s |
Conclusion: Given that building the generated code (next section) is
unaffected by whether the generator and test suite are compiled in
release mode, the fastest pipeline that runs the codegen is a debug
build with parallel tests.

All further experiments use a debug build of the generator and tests.
#### Test times (including building generated code)
| `--parallel` | Test time[^3] |
|--------------|---------------|
| ❌ | 21m34s |
| ✅ | 29m45s |
[^3]: Note, this does _not_ include build time of the generator or the
compatibility test suite.
Conclusion: Using `--parallel` at the XCTest level doesn't play well with
the `swift build` invocations within the tests. Each `swift build`
command in each parallel test parallelizes according to the number of
cores, so looking at `top` in the container we can see an explosion of
build processes that compete with each other and with the
`swift-openapi-generator` invocations that are still in flight.
#### Disable parallel build of generated code
This can be resolved by using `swift build -j 1` within the test, which
does allow `--parallel` to provide some speedup:

| Parallel test | Parallel build | Time |
|---------------|----------------|--------|
| ❌ | ✅ | 21m34s |
| ✅ | ✅ | 29m45s |
| ✅ | ❌ | 15m56s |
| ❌ | ❌ | ? |
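
In other words, each test builds its generated package single-threaded,
along these lines (`$GENERATED_PACKAGE_DIR` is a hypothetical placeholder
for wherever the test writes the generated package):

```bash
# Build the generated package with a single job, so concurrently running
# tests don't multiply the number of compiler processes.
swift build -j 1 --package-path "$GENERATED_PACKAGE_DIR"
```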
#### Conclusions
1. If we want a pipeline that just runs code generation, it will take
   around 3 minutes, and should:
   a. Build the generator and compatibility test suite in debug.
   b. Use `--parallel` when running the test.
2. If we want a pipeline that also builds the generated code, it will
   take around 17 minutes, and should, additionally:
   c. Use `swift build -j 1` when building the generated code.
#### Potential other directions
It might be worth _not_ using XCTest as the harness for the compatibility
suite, so that we can control the parallelism from within the
compatibility test harness itself.

That said, this is probably good enough for now.
---------
Signed-off-by: Si Beaumont <[email protected]>