- Golang - 1.22.x
- Operator SDK version - 1.39.1
- podman, podman-docker or docker
- Access to OpenShift cluster (4.12+)
- Container registry to store images
Our builder and base images are curated images from OpenShift. They are pulled from registry.ci.openshift.org, which requires authentication. To get access to these images, you have to log in and retrieve a token, following these steps
In summary:
- log in to one of the clusters' consoles
- use the console's shortcut to get the command-line login command
- log in from the command line with the provided command
- use "oc registry login" to save the token locally
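The steps above look roughly like the following; the server URL and token are placeholders you get from the console's "Copy login command" shortcut:

```shell
# Log in with the command copied from the cluster console
oc login --token=<token> --server=https://api.<cluster-domain>:6443

# Save the registry token locally so podman/docker can pull the
# curated builder and base images
oc registry login
```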
Set your quay.io user ID
export QUAY_USERID=<user>
export IMAGE_TAG_BASE=quay.io/${QUAY_USERID}/openshift-sandboxed-containers-operator
export IMG=quay.io/${QUAY_USERID}/openshift-sandboxed-containers-operator
make help
make docker-build
make docker-push
make bundle CHANNELS=candidate
make bundle-build
make bundle-push
make catalog-build
make catalog-push
Create a new CatalogSource yaml. Replace version with the operator version
(QUAY_USERID is expanded from the environment variable set above).
cat > my_catalog.yaml <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: My Operator Catalog
  sourceType: grpc
  image: quay.io/${QUAY_USERID}/openshift-sandboxed-containers-operator-catalog:version
  updateStrategy:
    registryPoll:
      interval: 5m
EOF
Deploy the catalog
oc create -f my_catalog.yaml
The new operator should now be available for installation from the OpenShift web console.
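You can verify that the catalog was registered before heading to the console; this is only a sanity-check sketch, using the CatalogSource name from the yaml above:

```shell
# The catalog source should report a READY connection state
oc get catalogsource my-operator-catalog -n openshift-marketplace

# Its index pod should be running in the marketplace namespace
oc get pods -n openshift-marketplace | grep my-operator-catalog
```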
When deploying the Operator using the CLI, cert-manager needs to be installed, otherwise
the webhook will not start. cert-manager is not required when deploying via the web console, as OLM
takes care of webhook certificate management. You can read more on this here
oc apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
Uncomment all entries marked with [CERTMANAGER] in the manifest files under config/*
make install && make deploy
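After the deploy, a quick way to check that both cert-manager and the operator came up; the operator namespace below is an assumption based on the default deploy target, so adjust it if your manifests use a different one:

```shell
# cert-manager pods must be running, or the webhook will not start
oc get pods -n cert-manager

# Operator controller pod (namespace is an assumption; check
# config/default/kustomization.yaml for the actual value)
oc get pods -n openshift-sandboxed-containers-operator
```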
When adding a new container definition in a pod yaml, make sure to tag the image
field with ## OSC_VERSION, e.g.
image: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel9:1.11.2 ## OSC_VERSION
Do the same when adding new RELATED_IMAGE entries to the controller's environment
in config/manager/manager.yaml, e.g.
- name: RELATED_IMAGE_KATA_MONITOR
value: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel9:1.11.2 ## OSC_VERSION
This is a best effort to track locations where OSC version bumps should happen.
When starting a new version, several locations should be updated with the new version number:
- all the locations tagged with ## OSC_VERSION
- version labels in Dockerfile, config/peerpods/podvm/Dockerfile.podvm-builder and must-gather/Dockerfile
- the olm.skipRange annotation in config/manifests/bases/sandboxed-containers-operator.clusterserviceversion.yaml

The spec.replaces field in config/manifests/bases/sandboxed-containers-operator.clusterserviceversion.yaml should
be updated with the number of the latest officially released version.
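Since the tagging is best effort, a quick grep from the repository root can help catch every location before a bump; this is only a helper sketch:

```shell
# All image fields and env entries tagged for a version bump
grep -rn "## OSC_VERSION" .

# The OLM upgrade-path fields that also carry the version
grep -n "olm.skipRange" config/manifests/bases/sandboxed-containers-operator.clusterserviceversion.yaml
grep -n "replaces" config/manifests/bases/sandboxed-containers-operator.clusterserviceversion.yaml
```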
You can use this script to bump the version and to regenerate the bundle (check usage
with -h).