# How OpenVox builds work
Everything used in the build process, except for the Vox Pupuli OpenVox private signing key, is available in public repos in the OpenVoxProject GitHub org.
Building off of the work Jeff Clark did to improve Docker support in Vanagon, we added a number of improvements (and more since!) to make building these packages with containers work better. The container images used for each platform can be found in the platform default files. We also tried to standardize the platform defaults a bit, since they had grown rather divergent over the years. Containerized builds also allow us to build for different architectures without having to build on a system of that particular architecture.
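For example, with QEMU emulation registered via binfmt, an x86_64 host can run arm64 build containers directly. This is just an illustrative check of the general idea, not a command from the build tooling:

```shell
# Illustrative only: with QEMU binfmt emulation set up, an arm64 image
# runs on an x86_64 host; this prints "aarch64"
docker run --rm --platform linux/arm64 almalinux:9 uname -m
```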
Generally, you will be able to build all packages locally yourself by using Rake tasks, or a GitHub action set up for this purpose.
The S3 buckets where files get uploaded are generously run by the OSU Open Source Lab (OSL) and live at `https://s3.osuosl.org/<bucket-name>`. The `openvox-artifacts` bucket is where intermediate build files are uploaded (e.g. `puppet-runtime`, unsigned `openvox-agent` packages, etc.). The `openvox-yum` and `openvox-apt` buckets contain the actual repos that package managers point to, as well as the rpm/deb files that set up those repos with the public key. CNAMEs `{apt,yum,artifacts}.overlookinfratech.com` are set up to point to these OSL URLs.
Originally, the repo files pointed to the overlookinfratech.com addresses. As of 2025-05-06, these now point to `{apt,yum}.voxpupuli.org`. This is a mirror of the OSL buckets, and is not updated immediately upon uploading something to the bucket. Additionally, https://downloads.voxpupuli.org points to the `downloads` folder at https://artifacts.voxpupuli.org, and is where MacOS and Windows agents are stored. The mirrors sync every hour. If you need the sync to happen immediately (e.g. on a new release), contact NickB to do this.
In some cases, Perforce uses their own internal version of build tools (pl-build-tools) to build for older platforms that ship with build tools that are too old. Rather than do this, we've moved to utilizing publicly available updated tools instead. These days, that seems to be mostly for el7.
There are two component repos that are required for building the agent.
- `puppet-runtime` - vanagon repo containing components packaged in the All-In-One (AIO) agent package
- `openvox` - the core OpenVox product, including packaging into openvox-agent AIO packages
We no longer ship pxp-agent with openvox-agent, as this is for Puppet Enterprise. However, a lightweight execution_wrapper script is included that is used by Choria.
Additionally, the `openvox-agent` repo used to be the vanagon repo used to create the openvox-agent packages. As of version 8.23.0, this is now contained in the `packaging` directory of the openvox repo.
Within these repos, you'll find the following rake tasks:

- `vox:tag['<tag>']` - This tags the repo and pushes the tag to origin. Generally not used in favor of release automation.
- `vox:build['<project>','<platform>']` - This takes a project name (found in `configs/projects`) and a platform to build for (found in `configs/platforms`) and performs the build using vanagon's `docker` engine. The component will be built inside the container, and files will end up in the `output` directory of your repo clone.
- `vox:upload['<tag>','<platform>']` - This uploads the artifacts generated by the build to the OSL openvox-artifacts S3 bucket, or potentially a different S3 bucket if desired. You won't be able to use this without the AWS CLI set up with appropriate secrets.
- `vox:promote['<component>','<tag>']` - This task is found in the openvox repo and can be used for promoting any of the components in `packaging`.
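Put together, a local build-and-upload might look like the following sketch. The project name, platform string, and tag are illustrative; check `configs/projects` and `configs/platforms` in the repo for the real values.

```shell
# Illustrative local flow; project/platform/tag values are examples only
bundle exec rake 'vox:build[openvox-agent,el-9-x86_64]'   # builds in a container; files land in ./output
bundle exec rake 'vox:upload[8.19.0,el-9-x86_64]'         # requires AWS CLI credentials for the OSL bucket
```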
The `puppet-runtime` repo should be tagged like `YYYY.MM.DD.R`, where R is the particular release of it you are doing that day. For example, if this is the second build I am tagging on February 20th, 2025, the tag would be `2025.02.20.2`.
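As a small sketch, you could generate today's first tag with a date command (the rake quoting follows the task list above):

```shell
# Illustrative: tag today's first puppet-runtime release (R = 1)
tag="$(date +%Y.%m.%d).1"
bundle exec rake "vox:tag[${tag}]"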
First, `puppet-runtime` is built and uploaded to the puppet-runtime artifacts directory. Then `openvox-agent` is built and uploaded to the openvox-agent artifacts directory. In this last directory, the agent packages are stored, but these are unsigned.
Note that when creating a new release of the agent, you must use the prepare release action. This will handle bumping the version in `lib/puppet/version.rb` and running the appropriate machinery.
The process for building the agent is now mostly in GitHub Actions. The repos share a `build_vanagon.yml` workflow, which contains the full list of platforms that OpenVox currently supports for the agent. An example of how this shared workflow is used can be found in puppet-runtime. The shared workflow is able to upload these artifacts to the appropriate S3 bucket locations.
As of 8.23.0, MacOS agents are built entirely within GitHub Actions rather than on a local VM. However, you may still follow the instructions below to build it yourself.
!!! Important !!! Each time you build, you should use a fresh VM that hasn't previously built any of the components. Because we are using `--engine local` and files are installed locally before being packaged up, vanagon seems to only package up the changes between this build and the last one. This will result in bad output.
In order to build the MacOS (macos-all-{arm64,x86_64}) puppet-runtime and openvox-agent, you will need to set up a MacOS VM on a MacOS host and run the build process inside it. UTM is recommended. You should create a fresh copy every time you do a build. All build commands need to be run from a root shell (`sudo su - root`). Then, you will need to install the Xcode command line tools, build libyaml from source, and install Ruby (rbenv recommended).
```shell
xcode-select --install
curl -o yaml-0.2.5.tar.gz https://pyyaml.org/download/libyaml/yaml-0.2.5.tar.gz
tar xf yaml-0.2.5.tar.gz
cd yaml-0.2.5
./configure
make
make install
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
~/.rbenv/bin/rbenv init
# Open a new terminal or run /bin/bash so rbenv is initialized
git clone https://github.com/rbenv/ruby-build.git "$(rbenv root)"/plugins/ruby-build
rbenv install 3.2.7
```
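Assuming rbenv is now initialized in your shell, something like the following (illustrative; the clone path is a placeholder) makes the new Ruby active and prepares the repo for building:

```shell
# Illustrative follow-up; run from a fresh shell so rbenv shims are active
rbenv global 3.2.7          # make the freshly built Ruby the default
gem install bundler
cd /path/to/puppet-runtime  # hypothetical clone location
bundle install              # install vanagon and friends
```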
If you are planning on uploading the artifacts after the build and have the appropriate AWS credentials in `~/.aws`, install the AWS CLI.
```shell
softwareupdate --install-rosetta
curl -o AWSCLIV2.pkg https://awscli.amazonaws.com/AWSCLIV2.pkg
installer -pkg ./AWSCLIV2.pkg -target /
```
To set up a VM with the appropriate certs in the right place for the tasks to sign the right things, you will need the actual Apple Developer account Application signing identity (cert + private key, which can be exported together as a .p12 file), the Installer signing identity, and the Apple Developer intermediate cert. The Apple Developer account signing identity will have a description like `Developer ID Application: <Company name> (<10 character team ID>)`. The installer identity will be similar, but say `Installer` instead of `Application`.

Note the 10-character team ID. You will also need an application token generated from your personal Apple ID that is part of the organization in the Apple Developer account.
You will need to have the Xcode app installed (not just the Xcode CLI tools).
```shell
security create-keychain signing
security default-keychain -s signing
security unlock-keychain signing
security import /path/to/DeveloperIDG2CA.cer -k /Library/Keychains/System.keychain
security import /path/to/application_identity.p12 -k signing -P <password> -T /usr/bin/codesign
security import /path/to/installer_identity.p12 -k signing -P <password> -T /usr/bin/productsign
security set-key-partition-list -S "apple-tool:,apple:" -D <description of application identity> signing
security set-key-partition-list -S "apple-tool:,apple:" -D <description of installer identity> signing
xcrun notarytool store-credentials "OpenVoxNotaryProfile" --apple-id "[email protected]" --password "<app token>" --team-id <10 character team ID>
```
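Optionally, you can sanity-check that the identities imported correctly before building. This is standard `security` usage, not a step from the process above:

```shell
# Both identities should show up as valid in the signing keychain
security find-identity -v -p codesigning signing   # application identity
security find-identity -v signing                  # all identities, including installer
```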
In the MacOS agent package, all binary files, including `dylib` and `bundle` files, are signed with the application key. The pkg file inside the dmg is signed with the installer key. Lastly, the dmg itself is signed with the application key, and then notarized. All of this is required in order for Gatekeeper to not complain on MacOS 15+. In order for the automation to work correctly, you need to set the following environment variables:

- `SIGNING_KEYCHAIN_PW` - The password to unlock the keychain (yes, this isn't great, we'll make this more secure before we put it in GitHub Actions)
- `SIGNING_KEYCHAIN` - The path to the keychain where the certs/keys are stored. This should be `/Library/Keychains/System.keychain` so that `root` can access it.
- `APPLICATION_SIGNING_CERT` - The description of the application identity described above
- `INSTALLER_SIGNING_CERT` - The description of the installer identity described above
- `NOTARY_PROFILE` - The name of the notary profile in the keychain described above
- `VANAGON_FORCE_SIGNING` - Unless you are doing a dev build and don't need signing, set this to `true` (or anything, I think). Otherwise, make sure it is unset.
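Put together, a signed local build environment might be set up like this. All values are placeholders; the identity descriptions must match what `security find-identity` reports:

```shell
# Placeholder values only; substitute your own identities and password
export SIGNING_KEYCHAIN="/Library/Keychains/System.keychain"
export SIGNING_KEYCHAIN_PW='keychain-password-here'
export APPLICATION_SIGNING_CERT='Developer ID Application: Example Corp (ABCDE12345)'
export INSTALLER_SIGNING_CERT='Developer ID Installer: Example Corp (ABCDE12345)'
export NOTARY_PROFILE='OpenVoxNotaryProfile'
export VANAGON_FORCE_SIGNING=true   # leave entirely unset for unsigned dev builds
```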
The way we do it with GitHub Actions-built .dmg files is to unpack the unsigned dmg, sign all the pieces inside it that need signing, and then rebuild it. You can see this process here.
As of 8.23.0, Windows agents are built entirely within GitHub Actions rather than on a local VM. However, you may still follow the instructions below to build it yourself.
To build for Windows, you should use a relatively modern OS (probably anything Server 2016+/10+ will work, but I've used 11 and Server 2022). You will need to install Cygwin, and optionally WiX to create the installer. At the base of the repos, there is a setup.ps1 that will download and install Cygwin and the base packages needed to do a successful `bundle install` (e.g. https://github.com/OpenVoxProject/puppet-runtime/blob/main/setup.ps1). This is a bit of a chicken-and-egg problem, since you need Cygwin to install git in order to clone the repo, so just take a look at that file and run it manually in PowerShell.
From the Cygwin terminal, clone the repo and build as described above. The platform string we are using for Windows builds is `windows-all-x64`, and it should work for any Windows version newer than Server 2016 or Windows 10.
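So a Windows build from the Cygwin shell would look something like the following (the project name is illustrative, per the rake tasks described earlier):

```shell
# Illustrative: run from the Cygwin terminal inside the repo clone
bundle exec rake 'vox:build[openvox-agent,windows-all-x64]'
```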
Signing involves using the Overlook InfraTech code signing certificate. Talk to NickB if you need details here.
openvox-server and openvoxdb are built using our modified version of ezbake, which allows us to change the name of the packages. These repos contain similar rake tasks to the ones above, but they are used slightly differently:
- `vox:tag['<tag>']` - First, this changes the version found in `project.clj` to the tag and commits that change. Then it tags the repo. Then it creates a new commit after the tag that increments the Z part of the version with `-SNAPSHOT`, following the current convention for these repos. Finally, it pushes the branch and the tag to origin.
- `vox:build['<tag>']` - Because the `vox:tag` task ends up creating a commit after the tag, this checks out the tag you want to build first. Then, it creates a container to do the ezbake build and saves the artifacts to the `output` directory in your repo clone. Note that since these projects are fairly platform-agnostic, all of the packages can be built inside a single container. This container must be rpm-based, as `rpmbuild` is needed by `fpm` to create the rpms, but no special packages are needed to build the debs. The tasks have a default list of platforms to build for, but you can define `DEB_PLATFORMS` and `RPM_PLATFORMS` environment variables. These are comma-separated lists of platforms with the architecture excluded (e.g. ubuntu-18.04,debian-12 or el-9,amazon-2023). These are used by the GitHub build action.
- `vox:upload['<tag>','<optional platform>']` - This uploads the artifacts generated by the build to the OSL openvox-artifacts S3 bucket or, potentially, a different S3 bucket if desired. You won't be able to use this without the AWS CLI set up with appropriate secrets.
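As an illustration, a full local ezbake flow might look like the following. The tag and platform lists are examples, not canonical values:

```shell
# Illustrative ezbake flow for openvox-server; tag/platforms are examples
bundle exec rake 'vox:tag[8.8.0]'
DEB_PLATFORMS='debian-12,ubuntu-22.04' RPM_PLATFORMS='el-9,amazon-2023' \
  bundle exec rake 'vox:build[8.8.0]'
bundle exec rake 'vox:upload[8.8.0]'
```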
The process for building openvox-server and openvoxdb is now mostly in GitHub Actions. They share a `build_ezbake.yml` workflow. The defaults for the aforementioned environment variables listing the platforms to build for are defined there. An example of how this shared workflow is used can be found in openvox-server. The shared workflow is able to upload these artifacts to the appropriate S3 bucket locations.
To create the repository packages (i.e. the rpm files at https://yum.voxpupuli.org/ to set up the repo on your machine), openvox-release is used. The packages this generates will place the public key in the right place and import it, and set up the appropriate apt/yum repo on your machine. There is a build action to build and upload these files automatically. Note that the voxpupuli.org mirror will not sync immediately (unless you contact Nick and ask him to do so).
Signing is performed using the `sign_from_s3.rb` script run on an Overlook InfraTech GCP instance. You won't be able to use this yourself without the private signing key, but you can see the code used. It downloads the unsigned packages from the OSL openvox-artifacts S3 bucket, signs them, then incorporates them into yum and apt repos, which are then later synced to the S3 buckets. The apt repo is currently maintained with the reprepro tool. Additionally, MacOS agents must be signed on a MacOS host with the appropriate credentials set up. Windows agents can be signed on Linux, but you must have the appropriate software installed and artifacts in place. At some point, we'll move this to a more automated and sustainable workflow.
Warning: This process is destructive for the apt repo. While you can add packages to the Yum repository, the Apt repository is replaced with an updated version each time we publish. That means that you must start from an existing repo, such as the one stored on the GCP `signer` instance.
Until this is further automated, this can only be performed by @nmburgan or @binford2k.
- Log into the `signer-new` GCP instance and switch to the `signer` user. Make sure you use a login shell so that profile scripts run: `sudo su --login signer`
- If you need to forward your SSH key in order to update the `misc` repo and already have it forwarded to your user, use the following bash function. If someone has a better way, please put it here, because this is not great.

  ```shell
  function signer() {
    dir=$(dirname "${SSH_AUTH_SOCK}")
    echo "Chowning ${dir} to signer"
    sudo chown -R signer:signer "${dir}"
    sudo --preserve-env=SSH_AUTH_SOCK -u signer -i
    echo "Chowning ${dir} to ${USER}"
    sudo chown -R ${USER}:${USER} "${dir}"
  }
  ```

- The `signer` user should have all the relevant env vars needed for signing deb and rpm packages and updating the repos with them.
- Take a current backup of the repo by running `~/backup2gcp`. This will back up the current state of the repos to a GCP bucket in case this process goes badly. Also, you may note that the `NOBACKUP` env var is set. This tells the signing script not to make a local backup, since we now back up to this GCP bucket.
- Sign the new package(s): `~/misc/signing/sign_from_s3.rb <component> <version> <repo>`
  - Example: `~/misc/signing/sign_from_s3.rb openvox-agent 8.19.0 openvox8`
- Sync the repos. First, run these commands with `DRYRUN=1` and inspect the files they are going to update (see the illustrative run below). Then run without the env var to do the full sync.

  ```shell
  ~/misc/signing/sync.rb apt
  ~/misc/signing/sync.rb yum
  ```
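Concretely, the dry-run pass restated as a copy-pasteable sequence:

```shell
# First pass: inspect what would be updated without touching the buckets
DRYRUN=1 ~/misc/signing/sync.rb apt
DRYRUN=1 ~/misc/signing/sync.rb yum
# then re-run both commands without DRYRUN to do the real sync
```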