51 changes: 51 additions & 0 deletions .github/workflows/install.yaml
@@ -0,0 +1,51 @@
name: Install
Contributor
I don't think we need a dedicated action to run the install script. All e2e runs in CI will use the install script, so I would expect install-script issues to surface in the e2e runs.

@camilamacedo86 (Contributor Author) Nov 13, 2024

Unfortunately, that doesn't seem accurate. We've been addressing issues with the script for exactly this reason. For example, the errors in PR #1429 or #1411 would have prevented CI from passing. Could you please check and let me know where this test might be duplicated elsewhere?

Contributor

The Makefile as it is on 'main' right now works fine for me on Linux. The test you linked above is failing because, as far as I can tell, it ran before you changed the docker-build line to target the ./bin directory rather than ./bin/linux.

I don't have access to a Mac, but I don't really see how the changes in this PR differ from how it worked before: it seems to just change where we build the binaries, from ./bin/linux to ./bin. It also effectively removes the functionality of the existing 'go-build-local' target and replaces it with the current 'go-build-linux', just renaming them and dropping the former, since you're still setting GOOS=linux.

It is interesting that the existing go-build-local on main isn't directly used anywhere; I'm assuming it's there for cases where somebody wants to compile locally and would just edit the docker-build target to use 'build' for one-off testing. Maybe we should just add another docker-build target that uses the -local variant, i.e. docker-build-local, if that's a workflow we see people needing or wanting; a rough sketch follows below.
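
A minimal sketch of what such a target could look like, reusing the existing go-build-local target; note that $(IMAGE) and the docker build invocation here are illustrative placeholders, not the repository's actual recipe:

# Hypothetical convenience target: build the image from the host-native
# binaries produced by go-build-local instead of the cross-compiled Linux ones.
# $(IMAGE) and the build context are placeholders for the repo's real values.
.PHONY: docker-build-local
docker-build-local: go-build-local #EXHELP Build the container image from locally built binaries.
	docker build -t $(IMAGE) .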

Member

I agree with @everettraven. We are (in theory) automatically generating a manifest and rendering an install script as part of the e2e.

If there are gaps in what happens during installation in the e2e, let's fix them in the e2e instead of introducing a separate test.

Contributor

@camilamacedo86 Part of running our e2e tests involves doing, more or less, what you are doing here.

If you look at the test-e2e make target:

# When running the e2e suite, you can set the ARTIFACT_PATH variable to the absolute path
# of the directory for the operator-controller e2e tests to store the artifacts, which
# may be helpful for debugging purposes after a test run.
#
# for example: ARTIFACT_PATH=/tmp/artifacts make test-e2e
.PHONY: test-e2e
test-e2e: KIND_CLUSTER_NAME := operator-controller-e2e
test-e2e: KUSTOMIZE_BUILD_DIR := config/overlays/e2e
test-e2e: GO_BUILD_FLAGS := -cover
test-e2e: run image-registry e2e e2e-coverage kind-clean #HELP Run e2e test suite on local kind cluster
and dig even further into the run target:
.PHONY: run
run: docker-build kind-cluster kind-load kind-deploy #HELP Build the operator-controller then deploy it into a new kind cluster.
you can see that it runs the make targets docker-build, kind-cluster, kind-load, and kind-deploy. The kind-deploy target:
.PHONY: kind-deploy
kind-deploy: export MANIFEST="./operator-controller.yaml"
kind-deploy: manifests $(KUSTOMIZE) #EXHELP Install controller and dependencies onto the kind cluster.
	$(KUSTOMIZE) build $(KUSTOMIZE_BUILD_DIR) > operator-controller.yaml
	envsubst '$$CATALOGD_VERSION,$$CERT_MGR_VERSION,$$INSTALL_DEFAULT_CATALOGS,$$MANIFEST' < scripts/install.tpl.sh | bash -s
uses the same installation script that we generate for releases.
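
For reference, the envsubst-into-bash pattern in that recipe substitutes only the listed variables into the template and pipes the expanded script into a fresh shell (the doubled $$ in the Makefile collapses to a single $ before the shell sees it). A toy illustration of the mechanics, where greet.tpl.sh is a made-up template rather than the real scripts/install.tpl.sh:

# greet.tpl.sh (hypothetical) contains one line:
#   echo "deploying $MANIFEST"
export MANIFEST="./operator-controller.yaml"
# envsubst substitutes only the variables named in its argument; 'bash -s'
# then executes the expanded script, reading it from stdin.
envsubst '$MANIFEST' < greet.tpl.sh | bash -s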

Essentially, everything you are doing in this CI action is already done as part of all of our e2e runs in CI.

I'm not quite sure why CI didn't catch the issues you linked, but I also never ended up running into those problems. Is it possible that there may be some difference in tooling versions that resulted in you experiencing those issues?

Contributor Author

Hi @everettraven,

docker-build kind-cluster kind-load kind-deploy

I see that it calls those targets, so that is true. Thank you for pointing it out.

Is it possible that there may be some difference in tooling versions that resulted in you experiencing those issues?

No, that is not really possible: if you revert #1429 or #1411, you will see that the error always occurs.

So we can probably move forward despite the e2e failures.


on:
  workflow_dispatch:
  pull_request:
  merge_group:
  push:
    branches:
      - main

jobs:
  kind-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod

      - name: Install the latest version of kind
        run: |
          curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
          chmod +x ./kind
          sudo mv ./kind /usr/local/bin/kind

      - name: Create Kind cluster for operator-controller
        run: |
          kind create cluster --name operator-controller

      - name: Build the project
        run: |
          make docker-build

      - name: Load image into Kind cluster and deploy
        run: |
          make kind-load kind-deploy

      - name: Logs and Details
        if: always()
        run: |
          # Capture high-level information
          echo "Gathering details for all resources in namespace olmv1-system..."
          kubectl get all -n olmv1-system

          # Describe each pod in the namespace for more details
          for pod in $(kubectl get pods -n olmv1-system -o jsonpath='{.items[*].metadata.name}'); do
            echo "Describing pod $pod..."
            kubectl describe pod $pod -n olmv1-system
            echo "Logs for pod $pod:"
            kubectl logs $pod -n olmv1-system --all-containers=true
          done
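
For what it's worth, reproducing this job locally should be roughly equivalent to the following, assuming Docker, kind, and kubectl are already installed:

kind create cluster --name operator-controller
make docker-build kind-load kind-deploy
kubectl get all -n olmv1-system   # the namespace the debug step inspects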