This document summarizes the content, structure, maintenance, and TODOs of this Spinnaker monorepo project.
Original RFC: spinnaker/governance#336
See also: ADOPTION.md
If you do not have access to any of these things, please DM me!
Repo: https://github.com/jcavanagh/spinnaker-monorepo-public
Nexus: Coming Soon (TM)
GCP Project: https://console.cloud.google.com/home/dashboard?project=spinnaker-monorepo-test
- Gradle 7 upgrade
  - Gradle 8 was attempted, but too much was broken and Kotlin plugins don't support it well (or at all)
  - Gradle 7.6.1 brings us important composite build configuration options, notably the ability to prevent something from being substituted by Gradle and allowing us to override it
  - Old `enableFeaturePreview` declarations removed, as the feature was out of preview
  - Several plugins used by `spinnaker-gradle-project` were upgraded for Gradle 7 compatibility - everything that publishes, mainly
  - Some code was added to specify `duplicatesStrategy` on `Copy` tasks, as Gradle 7 now validates to prevent duplicate resource files on the classpath
- Consolidation and rework of all Github Actions
  - All workflows were consolidated and refactored for reusability
- Reworking of all versioning and publishing
  - In general, all things now have an associated build number
    - This is implemented by the `.github/actions/build-number-tag` action
    - Each project + ref combination has its own counter for build artifacts
    - A special `spinnaker` scope exists to coordinate Java library versions
    - The `deck` scope applies a consistent build counter to all Deck packages
  - `main` versioning
    - The `main` branch publishes on an ever-increasing build number
  - `release-*` versioning
    - Release branches will build according to their ref
      - e.g. `release-2023.1.x` generates build versions `2023.1.1` and so forth
    - This is distinct from a BOM release name `2023.1.1`, which may reference many container images, e.g. `clouddriver:2023.1.5` and `orca:2023.1.4`, depending on what changes have been made
      - We might consider adjusting this if tagging is confusing
    - Releases (GH/boms/etc) are auto-manual - there is a workflow button to press, but they do not happen on commits to release branches
  - All Java libraries must be published on each push, so that `-bom` packages have coherent references to their internal dependencies when published
  - Containers and debs only publish if themselves or a direct dependency changed (e.g. `kork` changes publish everything)
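As an illustration of the `release-*` scheme above, a branch name can be mapped to its build-version prefix with simple string handling. This is a sketch only - the real logic lives in the repo's versioning workflow, and the `branch` value here is just an example:

```shell
# Illustrative sketch only: map a release branch name to a build version,
# per the release-2023.1.x -> 2023.1.<build> pattern described above.
branch="release-2023.1.x"
train="${branch#release-}"     # strip the branch prefix -> "2023.1.x"
prefix="${train%.x}"           # drop the ".x" placeholder -> "2023.1"
build_number=1                 # per-project, per-ref counter (see build-number-tag)
echo "${prefix}.${build_number}"
```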
- Liquibase upgrade to 4.3.5 (the version that ships with current Boot)
- This was a happy accident, as the liquibase duplicate-files-on-classpath issue is actually the same problem as the Gradle 7 upgrade note above
- This upgrade was required to fix tests failing on MySQL with a
ClassCastExceptionwhen running migrations - I chose not to adopt the 4.13.0 upgrade PR, as I was still able to replicate that issue
- Various small tweaks to the SQL tests were required after this, but they now all run successfully
- Mass-deletion of unused Gradle wrappers, property pins, .github, .idea, and etc as detailed below
- Java 11 and 17 container publishing for applicable projects
  - The `publish-docker` composite action will auto-detect the presence of Java 11 Dockerfiles and publish as needed
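The auto-detection could look something like the following hypothetical sketch - the real `publish-docker` action's file layout, names, and logic may differ:

```shell
# Hypothetical sketch of Java 11 Dockerfile auto-detection; the actual
# publish-docker composite action's naming and layout may differ.
docker_targets() {
  project="$1"
  targets="java17"                       # the default image is always built
  if [ -f "${project}/Dockerfile.java11" ]; then
    targets="java11 ${targets}"          # add Java 11 only when a Dockerfile exists
  fi
  echo "$targets"
}

# Example: only "clouddriver" has a Java 11 Dockerfile in this sketch
mkdir -p clouddriver && touch clouddriver/Dockerfile.java11
docker_targets clouddriver   # -> "java11 java17"
docker_targets orca          # -> "java17"
```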
- Most GHA consolidated into custom composite actions over reusable workflows
  - Most of the limitations of composite actions have been fixed by Github over the past several months
    - The current main drawback to composite actions is a lack of direct access to repository `secrets` - they must be passed in as inputs like any non-composite action (issue link: actions/toolkit#1168)
  - However, the main limitation of reusable workflows is much harder to deal with - each reusable workflow is a separate job, and additional steps cannot easily be added around it
- Deck publishing flow is now fully integrated
  - No more version bump PRs - prerelease NPM versions are published on every `deck` or `deck-kayenta` publish (see `npm` repo above for examples)
  - Versions are somewhat synthetic - rewritten dynamically and committed during the build, pre-publish
    - We can change the in-repo committed version to something like `0.0.0` for all packages
    - This may not be ideal - it is likely better to use Lerna versioning and incremental publishing, though its assumptions around tagging and committed values aren't exactly aligned with ours
  - Deck package versions are also aligned - all published under the same version, regardless of which packages changed
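The synthetic-version rewrite can be pictured roughly as below. This is a hedged sketch: the package name, placeholder value, and sed-based rewrite are illustrative only, and the real build uses its own tooling:

```shell
# Hedged sketch of rewriting a committed placeholder version pre-publish.
# The 0.0.0 placeholder is the proposal described above; the rewrite
# mechanism here (sed on package.json) is illustrative, not the real tooling.
computed_version="2023.1.0-rc.5"
printf '{"name": "@spinnaker/core", "version": "0.0.0"}\n' > package.json

# Swap the placeholder for the computed build version before publishing:
sed "s/\"version\": \"0.0.0\"/\"version\": \"${computed_version}\"/" package.json \
  > package.json.tmp && mv package.json.tmp package.json
cat package.json
```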
- Tooling to allow pulling and integrating changes from individual repos via automated or on-demand pull requests to the monorepo
- Tooling to allow a user to port existing individual-repo PRs in a guided fashion
- Fully automated release tooling via the `spinnaker-release` custom action
- May need to go back to incremental Deck package publishing
- General feedback on versions
- Maven library versions
- Deb versions, including new `<project>-dev` main-branch deb builds
- Container tagging - I may have missed some variants here
  - Elimination of sha + timestamp tags, replaced with build number tags with additional label metadata
- General feedback on GHA workflow structure and implementation
- Deck versions and internal version bumps
- Just using a single Lerna version for all packages - no bump PRs, everything gets shipped just like Java libraries
- Halyard will now publish on the same release train as Spinnaker
- Halyard compatible versions are no longer referenced in the BOM
- Users are meant to just use the same Halyard version as the release train
- Default run tasks in the root `build.gradle` have combined output
- Not really a great way to fix this in Gradle, but still wanted a "just start it" button
- Still have an initial configuration problem
- May be better as a docker-compose setup or something
- Cost design of GHA workflows - better to have more complex conditions in steps and fewer jobs?
Workflows are designed with the following goals:
- Everything should be able to be run manually in a normal fashion if needed
- Workflows should be specifically scoped when possible, to allow expansion of the monorepo without disruption to existing builds
- Common functionality should be reused via `workflow_call` or composite actions wherever possible
Three major components are `spinnaker-libraries.yml`, `generic-build-publish.yml`, and `version.yml`.
`spinnaker-libraries.yml` is a reusable workflow for publishing all Spinnaker libraries with one coherent version. This allows `-bom` packages to function, as they all pin versions internally equal to the Gradle version set during the composite build.
`generic-build-publish.yml` is a reusable workflow for publishing all artifacts required by JVM service projects.
version.yml encapsulates all versioning information, and provides outputs to be referenced by downstream jobs. Running this more than once in a workflow can be detrimental (double bumping a build number, for instance), so it is important the information is captured once and plumbed through.
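The capture-once-and-plumb-through pattern can be sketched in shell terms. This is illustrative only - the real implementation is a GHA workflow whose outputs downstream jobs reference - with the counter here standing in for the side-effecting bump in `version.yml`:

```shell
# Illustrative only: why versioning info must be captured once.
# next_build_number bumps a persistent counter, standing in for the
# side-effecting version bump performed by version.yml.
counter_file=$(mktemp)
echo 0 > "$counter_file"
next_build_number() {
  n=$(( $(cat "$counter_file") + 1 ))
  echo "$n" > "$counter_file"
  echo "$n"
}

build=$(next_build_number)        # capture the result exactly once...
echo "tag=v${build}"              # ...and reuse the captured value everywhere.
echo "image=clouddriver:${build}" # Calling next_build_number again here
                                  # would double-bump the counter.
```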
- Removed all old Gradle wrappers
- Removed all nested `.github` and `.idea` folders
- Removed all project version pin properties throughout
- Removed old partial-composite build code from build scripts
- Removed all `mavenLocal` enablement code from projects now composite
- Added a root-level `versions.gradle` file to deliver buildscript dependency versions across projects
- Consolidated all `kotlin.gradle` and `kotlin-test.gradle` files
- Split off detekt configuration from `kotlin.gradle` in Kork/Orca, moved the remainder to root
- Removed all `defaultTasks` declarations from composites - those have new entry points defined in the root `build.gradle`
- Removed all `deck` scripting around version bumps and bump PRs
This repository publishes all artifacts to a parallel set of GCP buckets/GAR repos.
I would recommend it continue to do so until the next release cut, before which time we can validate the produced artifacts independently. After it has been determined that the produced artifacts are correct, we can cut over development - the first cut branch would be `release-2023.1`, with builds produced from the monorepo from that point on.
There are two ways this repository can be maintained before it is actively producing public artifacts:
- Check out a clean branch with an initial empty commit
- Run each `git subtree add ...` command from `init.sh` manually
- Cherry-pick the single monorepo-conversion commit on top of that
- Force-push the branch
- Check out a new branch based on the ref you'd like to catch up
- Run `./pull.sh <project> -r <ref>` and resolve any conflicts
  - The `subtree_pull_editor.sh` script writes a nicer and more helpful commit message, detailing what was merged
  - After resolving conflicts, be sure to use the `git commit -a -F SUBTREE_MERGE_MSG` command to keep the nice commit message, as the script will recommend
- Repeat for each project
- Create a pull request
- Merge the PR with a MERGE COMMIT
- I cannot stress this enough - it must be a regular merge commit
- Squashing/rebase-merging is destructive to Git history that we need to preserve while transitioning
- History destruction via squash/rebase will make applying patches from individual repositories during transition much more difficult than it needs to be, and will obscure commits/PRs made in individual repos during that time that may need to be referenced
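A throwaway demo of why the merge commit matters: a regular merge records two parents, which is exactly the link to the merged-in history that squashing or rebase-merging discards. Everything below runs in a scratch repo with invented names:

```shell
# Scratch-repo demo: a regular merge commit keeps two parents,
# preserving the merged-in history that a squash would flatten away.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main .
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add . && git commit -qm "base"
git checkout -qb feature
echo change >> file.txt && git commit -qam "feature work"
git checkout -q main
git merge -q --no-ff feature -m "merge feature"   # a regular merge commit

# The merge commit lists two parent shas (three fields total):
git rev-list --parents -n 1 HEAD | wc -w
```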
While we are in a transitional state, there will be backports to both the individual repositories and the monorepo. We have two choices of how to resolve the transition:
- Maintain the separate repos until the previous release is no longer supported, then deprecate the individual repositories
- After the monorepo is stably producing artifacts, import the old ref as a new release branch and re-monorepo-ify it
  - This may require some special GHA workflows to accommodate the old-style "semver" release versioning, but probably not
Option two is likely the least disruptive.
This section details how to move code bidirectionally between monorepo and individual repos. This is mostly just lifted from the original RFC for visibility.
This is basically the above transition process in more detail. This technique can also be useful for building your own bespoke monorepo of Spinnaker, and perhaps other Spinnaker-related projects or plugins that might benefit from composite builds or colocated code.
- Add the OSS monorepo to your private individual fork as a remote: `git remote add oss git@github.com:spinnaker/spinnaker.git`, then `git fetch oss`
- Merge the OSS monorepo branch into your fork with the `subtree` strategy. DO NOT SQUASH OR REBASE THESE CHANGES - MERGE COMMIT ONLY! `git merge -X subtree=<subtree> oss/<branch>`
  - For example, if you have a `clouddriver` fork, and you want to integrate changes from `main`: `git merge -X subtree=clouddriver oss/main`
- A (mostly) equivalent command is `git subtree pull` or `git subtree merge`, but using that will create multiple "split" points in the tree, and make the history harder to traverse. It is generally preferable to use `git merge -X subtree=<subtree> ...` instead.
- Keep in mind that the merged code won't necessarily actually run - the changes could depend on additional changes in other projects, like `kork`. Repeat the above for all Spinnaker projects that require re-integration.
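The subtree merge above can be exercised end to end with throwaway repos. This is a self-contained demo - the `mono`/`fork` layout and the `clouddriver` project name are invented, and `--allow-unrelated-histories` is only needed because these scratch repos share no history (a real fork does share history with OSS):

```shell
# Scratch demo of the git merge -X subtree=<subtree> strategy.
# Repo names and contents are invented for illustration.
set -e
work=$(mktemp -d)

git init -q -b main "$work/mono"
cd "$work/mono"
git config user.email demo@example.com && git config user.name demo
mkdir clouddriver && echo v1 > clouddriver/app.txt
git add . && git commit -qm "initial monorepo layout"
echo feature > clouddriver/feature.txt
git add . && git commit -qm "add clouddriver feature"

git init -q -b main "$work/fork"
cd "$work/fork"
git config user.email demo@example.com && git config user.name demo
echo v1 > app.txt && git add . && git commit -qm "initial fork"
git remote add oss "$work/mono" && git fetch -q oss

# Map the monorepo's clouddriver/ subtree onto the fork's root and merge:
git merge -q --allow-unrelated-histories -X subtree=clouddriver oss/main \
  -m "merge oss clouddriver"
ls   # feature.txt now sits alongside app.txt at the fork root
```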
Some fork maintainers may wish to pick and choose which code they take, rather than integrating the OSS branch wholesale. This process looks a bit roundabout, but works similarly to how git cherry-pick operates under the hood.
- Add the OSS monorepo to your private individual fork as a remote: `git remote add oss git@github.com:spinnaker/spinnaker.git`, then `git fetch oss`
- Retrieve and apply the diff to your tree with the following command: `git show <commit_sha_to_pick> --no-color -- "<subtree>/*" | git apply -p2 -3 --index`
  - The pattern matching string in this example is a subtree's folder, but it can be any path(s) if additional filtering is desired
- This will pipe the diff of your desired change to `git apply`, filtering the files to one project subtree only (`-- "<subtree>/*"`), trimming the file paths in the diff by one additional directory (`-p2`), using three-way merge (`-3`), and updating the index (`--index`). Modify other options to `git apply` as desired to suit your preferences and workflow.
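The pick-one-subtree command can likewise be tried with throwaway repos. Again a self-contained sketch - the `mono`/`fork` layout, `clouddriver` name, and file contents are invented:

```shell
# Scratch demo of picking one subtree's portion of a monorepo commit
# into a single-project tree, per the git show | git apply flow above.
set -e
work=$(mktemp -d)

git init -q -b main "$work/mono"
cd "$work/mono"
git config user.email demo@example.com && git config user.name demo
mkdir clouddriver && echo v1 > clouddriver/app.txt
git add . && git commit -qm "initial"
echo v2 > clouddriver/app.txt && git commit -qam "clouddriver fix"
sha=$(git rev-parse HEAD)

git init -q -b main "$work/fork"
cd "$work/fork"
git config user.email demo@example.com && git config user.name demo
echo v1 > app.txt && git add . && git commit -qm "initial"
git remote add oss "$work/mono" && git fetch -q oss

# Apply only the clouddriver/ part of the monorepo commit, trimming one
# extra path component (-p2), with three-way merge and index update:
git show "$sha" --no-color -- "clouddriver/*" | git apply -p2 -3 --index
cat app.txt   # app.txt now contains the picked "v2" change
```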
Individual project forks can still contribute changes back to the OSS Monorepo, using a similar diff/apply flow as above, but in the reverse direction:
- Fork the OSS monorepo on Github, and clone that monorepo fork locally - as one would any project.
- Add your private individual fork as a remote to the cloned monorepo fork: `git remote add <remote_name> <url_to_private_individual_fork>`, then `git fetch <remote_name>`
- Choose a branch name, and check it out using the appropriate remote base branch: `git checkout -b <name> <remote_name>/<remote_ref>`
- Retrieve and apply the diff to your tree with the following command: `git show <commit_sha_to_pick> | git apply --directory <subtree> -3 --index`
- This will pipe the diff of your desired change to `git apply`, adding the destination subtree directory prefix (`--directory <subtree>`), using three-way merge (`-3`), and updating the index (`--index`). Modify other options to `git apply` as desired to suit your preferences and workflow.
- Push your branch to your Github organization, and open a PR from there as usual
Once the OSS monorepo is the source of truth, private individual project forks will still exist and need to be maintained.
Unfortunately, this process is the most difficult. There is not a way to cleanly merge from an OSS monorepo to a private individual fork, as changes to the OSS monorepo will include files from other services that do not exist in the destination single-service tree.
My recommendation here is that all private individual forks monorepo-ify themselves, using the creation and import process described elsewhere in this document, and leveraging the OSS composite build process. Once all of your forks are combined into a private monorepo, the file paths will align and OSS changes can be integrated cleanly.
If you are already wholesale-integrating OSS changes into your forks, you can just import your forks as they are into your new monorepo, then pull from the OSS monorepo directly.
If you are not wholesale-integrating OSS changes from your forks, you can still pick changes from the OSS monorepo to your private monorepo using the standard cherry-pick process.
- Rework plugin version compatibility checking against the new versioning system