test: e2e test for ArgoCD Application in Any Namespace#1538
chengfang merged 5 commits into argoproj-labs:master from
Conversation
Signed-off-by: dkarpele <karpelevich@gmail.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: Added a new sequential Ginkgo end-to-end test for Image Updater (cross-namespace), expanded CI/test workflow namespaces and execution order, and updated indirect dependencies in go.mod.
🚥 Pre-merge checks: ✅ 3 passed checks (3 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/ci-tests.yaml (1)
199-206: ⚠️ Potential issue | 🟡 Minor — Namespace configuration inconsistency with Makefile.
The `ARGOCD_CLUSTER_CONFIG_NAMESPACES` environment variable is set to `argocd-e2e-cluster-config, argocd-operator-system` here, but the Makefile's `PATCH_TEMPLATE` (lines 37-38) hardcodes only `argocd-operator-system`. Since the Makefile directly patches the Kubernetes Deployment manifest, this shell environment variable will be ignored, and only `argocd-operator-system` will be applied to the operator. Either:
- Remove the CI env var if it's not needed, or
- Update the Makefile to use `$(ARGOCD_CLUSTER_CONFIG_NAMESPACES)` from the environment instead of hardcoding the value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-tests.yaml around lines 199 - 206, The CI sets ARGOCD_CLUSTER_CONFIG_NAMESPACES to "argocd-e2e-cluster-config, argocd-operator-system" but the Makefile's PATCH_TEMPLATE hardcodes "argocd-operator-system", causing the env var to be ignored; update the Makefile so PATCH_TEMPLATE (or the rule that builds the Deployment patch) reads and uses $(ARGOCD_CLUSTER_CONFIG_NAMESPACES) from the environment instead of the hardcoded namespace, or alternatively remove the ARGOCD_CLUSTER_CONFIG_NAMESPACES entry from the CI if multiple namespaces are not intended—modify either the Makefile reference to use the ARGOCD_CLUSTER_CONFIG_NAMESPACES variable or remove the CI env var to resolve the mismatch.
🧹 Nitpick comments (4)
test/ginkgo/Makefile (1)
37-38: Hardcoded namespace value should use environment variable.
The `ARGOCD_CLUSTER_CONFIG_NAMESPACES` value is hardcoded to `argocd-operator-system`, but the CI workflow sets this environment variable to `argocd-e2e-cluster-config, argocd-operator-system`. This hardcoded value will override the CI configuration, potentially breaking tests that depend on `argocd-e2e-cluster-config`. Consider using the environment variable with a default fallback:
♻️ Proposed fix to use environment variable
```diff
+# Default cluster config namespaces, can be overridden by environment
+ARGOCD_CLUSTER_CONFIG_NAMESPACES ?= argocd-operator-system
+
 # Define the patch template
 define PATCH_TEMPLATE
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: controller-manager
   namespace: system
 spec:
   template:
     spec:
       containers:
       - name: manager
         env:
         - name: ARGOCD_IMAGE_UPDATER_IMAGE
           value: $(ARGOCD_IMAGE_UPDATER_IMAGE)
         - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
-          value: argocd-operator-system
+          value: $(ARGOCD_CLUSTER_CONFIG_NAMESPACES)
 endef
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/Makefile` around lines 37 - 38, The Makefile currently hardcodes ARGOCD_CLUSTER_CONFIG_NAMESPACES to "argocd-operator-system"; change the value to read from the environment with a safe default so CI-provided values are not overridden — use the ARGOCD_CLUSTER_CONFIG_NAMESPACES env var if set, otherwise fallback to "argocd-operator-system". Locate the assignment for ARGOCD_CLUSTER_CONFIG_NAMESPACES in the test/ginkgo/Makefile and replace the literal value with the environment-expansion form (e.g., using shell parameter expansion or Make's override mechanism) so the CI list "argocd-e2e-cluster-config, argocd-operator-system" is respected. Ensure quoting/whitespace is preserved to handle comma-separated lists.

test/ginkgo/parallel/1-008-app-in-any-ns_test.go (3)
66-86: Cleanup doesn't wait for resource deletion or clean up all created resources.
The cleanup fires delete requests but doesn't wait for completion. Compare with `1-001-argocd-write-back-target_test.go` (context snippet 1), which uses `Eventually(imageUpdater, "2m", "3s").Should(k8sFixture.NotExistByName())` to ensure the finalizer is processed. Additionally, `appProject` and `app` (Application) created in the test are not deleted in `AfterEach`.
♻️ Proposed fix to add proper cleanup
Add `appProject` and `app` to the `var` block and update cleanup:

```diff
 AfterEach(func() {
+	if app != nil {
+		By("deleting Application")
+		_ = k8sClient.Delete(ctx, app)
+		Eventually(app, "2m", "3s").Should(k8sFixture.NotExistByName())
+	}
+
+	if appProject != nil {
+		By("deleting AppProject")
+		_ = k8sClient.Delete(ctx, appProject)
+	}
+
 	if imageUpdater != nil {
 		By("deleting ImageUpdater CR")
 		_ = k8sClient.Delete(ctx, imageUpdater)
+		Eventually(imageUpdater, "2m", "3s").Should(k8sFixture.NotExistByName())
 	}

 	if argoCD != nil {
 		By("deleting ArgoCD CR")
 		_ = k8sClient.Delete(ctx, argoCD)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/parallel/1-008-app-in-any-ns_test.go` around lines 66 - 86, The AfterEach currently issues deletes but doesn’t wait for finalizers or remove all created resources; add appProject and app to the test-scoped var block and delete them in AfterEach, then replace/augment the raw k8sClient.Delete calls (for ImageUpdater, argoCD, appProject, app) with a deletion + wait pattern: call k8sClient.Delete(ctx, <resource>) and then use Eventually(<resource>, "2m", "3s").Should(k8sFixture.NotExistByName()) (same approach as in 1-001-argocd-write-back-target_test.go) to ensure finalizers are processed and resources are really gone; keep calling cleanupFunc() and fixture.OutputDebugOnFail(ns) afterwards.
159-165: Variable shadowing: `depl` used for both loop variable and Deployment.
The loop variable `depl` (string) is shadowed by `depl := &appsv1.Deployment{...}` on line 161. This compiles but is confusing and error-prone.
♻️ Proposed fix to use distinct variable names

```diff
 By("verifying all workloads are started")
 deploymentsShouldExist := []string{"argocd-redis", "argocd-server", "argocd-repo-server", "argocd-argocd-image-updater-controller"}
-for _, depl := range deploymentsShouldExist {
-	depl := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: depl, Namespace: ns.Name}}
-	Eventually(depl).Should(k8sFixture.ExistByName())
-	Eventually(depl).Should(deplFixture.HaveReplicas(1))
-	Eventually(depl, "3m", "3s").Should(deplFixture.HaveReadyReplicas(1), depl.Name+" was not ready")
+for _, deplName := range deploymentsShouldExist {
+	depl := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: deplName, Namespace: ns.Name}}
+	Eventually(depl).Should(k8sFixture.ExistByName())
+	Eventually(depl).Should(deplFixture.HaveReplicas(1))
+	Eventually(depl, "3m", "3s").Should(deplFixture.HaveReadyReplicas(1), deplName+" was not ready")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/parallel/1-008-app-in-any-ns_test.go` around lines 159 - 165, The loop shadows the string loop variable `depl` with the Deployment pointer `depl := &appsv1.Deployment{...}`; rename the inner Deployment variable (e.g., `deplObj`, `dep`, or `deployment`) so the loop variable `depl` (from `deploymentsShouldExist := []string{...}`) is not shadowed and update its usages in the assertions (calls to `k8sFixture.ExistByName()`, `deplFixture.HaveReplicas(1)`, and `deplFixture.HaveReadyReplicas(1)` accordingly) to reference the new Deployment variable name.
233-233: Consider using a dedicated test image or documenting the registry dependency.
The test references `quay.io/dkarpele/my-guestbook`, a personal registry image. While currently accessible, relying on personal registries in CI can become fragile if the image is removed, the registry access changes, or the owner stops maintaining it. For long-term maintainability, consider either:
- Using an official image from the project's registry, or
- Documenting this external dependency and its stability guarantees.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/parallel/1-008-app-in-any-ns_test.go` at line 233, The test uses a personal registry image via the ImageName field ("quay.io/dkarpele/my-guestbook:~29437546.0"), which is fragile for CI; update the test to reference a stable test image (e.g., a maintained official or project-hosted image) or add documentation and comments next to the ImageName entry explaining the external dependency, its owner, and stability/availability guarantees so future maintainers know why this image is used and how to replace it if it disappears.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/ginkgo/parallel/1-008-app-in-any-ns_test.go`:
- Around line 94-127: The test is mutating the shared
argocd-operator-controller-manager Deployment (setting
ARGOCD_CLUSTER_CONFIG_NAMESPACES and annotating restart) which can cause flakes
in parallel runs; fix by isolating this change: either move this spec from the
parallel suite into a sequential/serial suite, or create/use a dedicated
operator Deployment/namespace for the test (create a new Deployment resource
rather than modifying argocd-operator-controller-manager), or guard the test
with a Ginkgo serial-only flag so it never runs concurrently with other
operator-dependent tests; update the code paths that touch operatorDeploy,
k8sClient.Update, and the Eventually readiness check accordingly so they operate
on the isolated operator instance or run only in serial.
- Line 92: The test creates a fixed namespace with nsDev :=
fixture.CreateNamespace("dev"), which can collide in parallel runs and is not
cleaned up; change to create a uniquely named namespace (e.g., use
fixture.CreateNamespace with a random/suffixed name) and capture its cleanup
function into cleanupFuncDev (declare cleanupFuncDev in the existing var block
with the other cleanup variables), then ensure AfterEach calls cleanupFuncDev()
along with existing cleanup calls so the namespace is removed after each test;
update references to nsDev where needed to use the new variable and preserved
cleanup function.
---
Outside diff comments:
In @.github/workflows/ci-tests.yaml:
- Around line 199-206: The CI sets ARGOCD_CLUSTER_CONFIG_NAMESPACES to
"argocd-e2e-cluster-config, argocd-operator-system" but the Makefile's
PATCH_TEMPLATE hardcodes "argocd-operator-system", causing the env var to be
ignored; update the Makefile so PATCH_TEMPLATE (or the rule that builds the
Deployment patch) reads and uses $(ARGOCD_CLUSTER_CONFIG_NAMESPACES) from the
environment instead of the hardcoded namespace, or alternatively remove the
ARGOCD_CLUSTER_CONFIG_NAMESPACES entry from the CI if multiple namespaces are
not intended—modify either the Makefile reference to use the
ARGOCD_CLUSTER_CONFIG_NAMESPACES variable or remove the CI env var to resolve
the mismatch.
---
Nitpick comments:
In `@test/ginkgo/Makefile`:
- Around line 37-38: The Makefile currently hardcodes
ARGOCD_CLUSTER_CONFIG_NAMESPACES to "argocd-operator-system"; change the value
to read from the environment with a safe default so CI-provided values are not
overridden — use the ARGOCD_CLUSTER_CONFIG_NAMESPACES env var if set, otherwise
fallback to "argocd-operator-system". Locate the assignment for
ARGOCD_CLUSTER_CONFIG_NAMESPACES in the test/ginkgo/Makefile and replace the
literal value with the environment-expansion form (e.g., using shell parameter
expansion or Make's override mechanism) so the CI list
"argocd-e2e-cluster-config, argocd-operator-system" is respected. Ensure
quoting/whitespace is preserved to handle comma-separated lists.
In `@test/ginkgo/parallel/1-008-app-in-any-ns_test.go`:
- Around line 66-86: The AfterEach currently issues deletes but doesn’t wait for
finalizers or remove all created resources; add appProject and app to the
test-scoped var block and delete them in AfterEach, then replace/augment the raw
k8sClient.Delete calls (for ImageUpdater, argoCD, appProject, app) with a
deletion + wait pattern: call k8sClient.Delete(ctx, <resource>) and then use
Eventually(<resource>, "2m", "3s").Should(k8sFixture.NotExistByName()) (same
approach as in 1-001-argocd-write-back-target_test.go) to ensure finalizers are
processed and resources are really gone; keep calling cleanupFunc() and
fixture.OutputDebugOnFail(ns) afterwards.
- Around line 159-165: The loop shadows the string loop variable `depl` with the
Deployment pointer `depl := &appsv1.Deployment{...}`; rename the inner
Deployment variable (e.g., `deplObj`, `dep`, or `deployment`) so the loop
variable `depl` (from `deploymentsShouldExist := []string{...}`) is not shadowed
and update its usages in the assertions (calls to `k8sFixture.ExistByName()`,
`deplFixture.HaveReplicas(1)`, and `deplFixture.HaveReadyReplicas(1)`
accordingly) to reference the new Deployment variable name.
- Line 233: The test uses a personal registry image via the ImageName field
("quay.io/dkarpele/my-guestbook:~29437546.0"), which is fragile for CI; update
the test to reference a stable test image (e.g., a maintained official or
project-hosted image) or add documentation and comments next to the ImageName
entry explaining the external dependency, its owner, and stability/availability
guarantees so future maintainers know why this image is used and how to replace
it if it disappears.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: af7cc1f4-e378-409e-b0ae-1214b5d3e98b
⛔ Files ignored due to path filters (2)
`go.sum` is excluded by `!**/*.sum`; `test/ginkgo/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (4)
.github/workflows/ci-tests.yaml, go.mod, test/ginkgo/Makefile, test/ginkgo/parallel/1-008-app-in-any-ns_test.go
Signed-off-by: dkarpele <karpelevich@gmail.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/ginkgo/sequential/1-008-app-in-any-ns_test.go (1)
100-124: Restore the operator namespace allowlist in cleanup.
Line 103 overwrites `ARGOCD_CLUSTER_CONFIG_NAMESPACES` on the shared `argocd-operator-controller-manager` Deployment, but `AfterEach` never rolls it back. That leaves test-specific cluster state behind and makes later sequential specs order-dependent. Capture the original env var before the update and restore it during cleanup, then wait for that rollout as well.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go` around lines 100 - 124, The test mutates the shared argocd-operator-controller-manager Deployment by changing the ARGOCD_CLUSTER_CONFIG_NAMESPACES env var (in operatorDeploy) but never restores it; capture the original value (and a flag if it was absent) before changing it, store them in test-scoped variables, and in AfterEach fetch operatorDeploy, restore the env var to its original value (or remove it if it didn't exist), update the Deployment via k8sClient.Update and wait for the rollout to complete (same rollout-wait logic you use elsewhere) so the cluster-wide allowlist is returned to its prior state and later specs remain order-independent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 126-133: The current Eventually uses an inline function that only
checks ReadyReplicas and can pass before the Deployment's ObservedGeneration
updates; replace that inline check with the existing fixture helper by calling
deplFixture.HaveReadyReplicas(1) (keeping the same timeout "2m" and poll
interval "3s") so the wait also verifies ObservedGeneration for the
"argocd-operator-controller-manager" Deployment in the "argocd-operator-system"
namespace.
---
Nitpick comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 100-124: The test mutates the shared
argocd-operator-controller-manager Deployment by changing the
ARGOCD_CLUSTER_CONFIG_NAMESPACES env var (in operatorDeploy) but never restores
it; capture the original value (and a flag if it was absent) before changing it,
store them in test-scoped variables, and in AfterEach fetch operatorDeploy,
restore the env var to its original value (or remove it if it didn't exist),
update the Deployment via k8sClient.Update and wait for the rollout to complete
(same rollout-wait logic you use elsewhere) so the cluster-wide allowlist is
returned to its prior state and later specs remain order-independent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 154bc12a-53b6-4138-a4b8-45764cd5c7d6
📒 Files selected for processing (2)
test/ginkgo/Makefile, test/ginkgo/sequential/1-008-app-in-any-ns_test.go
Signed-off-by: dkarpele <karpelevich@gmail.com>
Actionable comments posted: 3
🧹 Nitpick comments (1)
test/ginkgo/sequential/1-008-app-in-any-ns_test.go (1)
165-172: Variable shadowing: `depl` used for both string and Deployment.
The loop variable `depl` (string) is shadowed by the inner `depl` (*Deployment). While valid Go, this can be confusing.
Suggested clarity improvement

```diff
 deploymentsShouldExist := []string{"argocd-redis", "argocd-server", "argocd-repo-server", "argocd-argocd-image-updater-controller"}
-for _, depl := range deploymentsShouldExist {
-	depl := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: depl, Namespace: ns.Name}}
-	Eventually(depl).Should(k8sFixture.ExistByName())
-	Eventually(depl).Should(deplFixture.HaveReplicas(1))
-	Eventually(depl, "3m", "3s").Should(deplFixture.HaveReadyReplicas(1), depl.Name+" was not ready")
+for _, deplName := range deploymentsShouldExist {
+	depl := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: deplName, Namespace: ns.Name}}
+	Eventually(depl).Should(k8sFixture.ExistByName())
+	Eventually(depl).Should(deplFixture.HaveReplicas(1))
+	Eventually(depl, "3m", "3s").Should(deplFixture.HaveReadyReplicas(1), deplName+" was not ready")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go` around lines 165 - 172, The loop shadows the string loop variable `depl` with a Deployment variable created as `&appsv1.Deployment{...}`, which is confusing; rename the inner Deployment variable (e.g., to `deployment` or `deplObj`) and update its uses in the Eventually checks so the string loop variable and the *appsv1.Deployment variable are distinct; ensure calls to k8sFixture.ExistByName(), deplFixture.HaveReplicas(1) and deplFixture.HaveReadyReplicas(1) reference the renamed Deployment variable and any diagnostic text (e.g., `depl.Name`) is updated accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 61-66: Replace the call to EnsureParallelCleanSlate() in the
BeforeEach block with EnsureSequentialCleanSlate() so the sequential test uses
the proper cleanup routine; update the call where
fixture.EnsureParallelCleanSlate() is invoked (in the BeforeEach that also sets
k8sClient and ctx) to fixture.EnsureSequentialCleanSlate() to ensure correct
sequential test isolation.
- Around line 68-92: AfterEach currently omits restoring the operator
Deployment's ARGOCD_CLUSTER_CONFIG_NAMESPACES env var changed earlier; update
AfterEach to patch the operator Deployment back to the original value
("argocd-operator-system") (or restore a captured original value if you saved it
when modifying), using the k8s client to update the Deployment's container env
var ARGOCD_CLUSTER_CONFIG_NAMESPACES and handle nils/errors gracefully so
subsequent tests are not polluted; place this restoration alongside the existing
cleanup steps in AfterEach.
In `@test/ginkgo/sequential/suite_test.go`:
- Around line 44-49: Replace the incorrect cleanup call in the BeforeSuite hook:
the sequential test suite currently invokes fixture.EnsureParallelCleanSlate(),
but it must call fixture.EnsureSequentialCleanSlate() to perform the full
namespace/cluster cleanup described in the comment; update the BeforeSuite
function to call EnsureSequentialCleanSlate() (keeping the existing
logf.SetLogger(...) line) so the sequential suite uses the aggressive cleanup
routine.
---
Nitpick comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 165-172: The loop shadows the string loop variable `depl` with a
Deployment variable created as `&appsv1.Deployment{...}`, which is confusing;
rename the inner Deployment variable (e.g., to `deployment` or `deplObj`) and
update its uses in the Eventually checks so the string loop variable and the
*appsv1.Deployment variable are distinct; ensure calls to
k8sFixture.ExistByName(), deplFixture.HaveReplicas(1) and
deplFixture.HaveReadyReplicas(1) reference the renamed Deployment variable and
any diagnostic text (e.g., `depl.Name`) is updated accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b36623b7-1577-4a9d-b7b9-e041332dd2d0
📒 Files selected for processing (2)
test/ginkgo/sequential/1-008-app-in-any-ns_test.gotest/ginkgo/sequential/suite_test.go
… to default Signed-off-by: dkarpele <karpelevich@gmail.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
test/ginkgo/sequential/suite_test.go (1)
42-45: ⚠️ Potential issue | 🟠 Major — Sequential suite `BeforeSuite` is missing `EnsureSequentialCleanSlate()` call.
The `BeforeSuite` only initializes the logger but skips critical global setup. Per `test/ginkgo/parallel/suite_test.go` (context snippet 1), the `BeforeSuite` should call `fixture.EnsureSequentialCleanSlate()`, which:
- Configures Ginkgo default timeouts and polling intervals
- Validates minimum disk space
- Cleans up test namespaces from previous runs
- Removes cluster-scoped test resources
Without this call, sequential tests may fail due to leftover resources from parallel tests or missing timeout configuration.
Proposed fix
```diff
 var _ = BeforeSuite(func() {
 	logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
+	// Before the test suite starts, ensure that operator is restored to default state, and all previous parallel test resources are deleted
+	fixture.EnsureSequentialCleanSlate()
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/suite_test.go` around lines 42 - 45, BeforeSuite currently only sets up logging (logf.SetLogger) but omits the required global test initialization; call fixture.EnsureSequentialCleanSlate() inside the BeforeSuite function (before or after the logger setup) to configure Ginkgo timeouts/polling, validate disk space, and clean leftover namespaces and cluster-scoped resources; locate the BeforeSuite block in suite_test.go and add a call to fixture.EnsureSequentialCleanSlate() (import fixture if missing).
🧹 Nitpick comments (3)
test/ginkgo/sequential/suite_test.go (1)
47-48: Consider removing empty `AfterSuite` or adding a placeholder comment.
The empty `AfterSuite` block serves no purpose. Either remove it or add a comment explaining it's a placeholder for future cleanup logic.
Option 1: Remove empty block

```diff
-var _ = AfterSuite(func() {
-})
```

Option 2: Add placeholder comment

```diff
 var _ = AfterSuite(func() {
+	// Placeholder for any suite-level cleanup if needed in the future
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/suite_test.go` around lines 47 - 48, The AfterSuite(func() { }) block in the test/suite (symbol: AfterSuite) is empty and should be removed or clarified; either delete the empty AfterSuite declaration to avoid dead code, or keep it but add a short placeholder comment inside (e.g., "// placeholder for global teardown" or "// TODO: add cleanup") so intent is clear to future readers.

test/ginkgo/sequential/1-008-app-in-any-ns_test.go (2)
183-188: Minor: Loop variable shadowing reduces readability.
The inner `depl` variable shadows the outer loop variable (which is a `string`). While this works correctly, it can be confusing. Consider renaming the inner variable.
Proposed fix

```diff
 for _, depl := range deploymentsShouldExist {
-	depl := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: depl, Namespace: ns.Name}}
-	Eventually(depl).Should(k8sFixture.ExistByName())
-	Eventually(depl).Should(deplFixture.HaveReplicas(1))
-	Eventually(depl, "3m", "3s").Should(deplFixture.HaveReadyReplicas(1), depl.Name+" was not ready")
+	d := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: depl, Namespace: ns.Name}}
+	Eventually(d).Should(k8sFixture.ExistByName())
+	Eventually(d).Should(deplFixture.HaveReplicas(1))
+	Eventually(d, "3m", "3s").Should(deplFixture.HaveReadyReplicas(1), d.Name+" was not ready")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go` around lines 183 - 188, The loop shadows the string loop variable depl with a new variable of the same name; rename the inner variable to avoid shadowing (e.g., change the inner "depl := &appsv1.Deployment{...}" to "deplName := depl" and then create "deployment := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: deplName, Namespace: ns.Name}}" or simply name the object "deployment") and update calls to ExistByName(), HaveReplicas(1), HaveReadyReplicas(1) and the failure message to use the new deployment/object variable to preserve behavior while improving readability.
269-289: Consider adding a descriptive assertion message for the image verification.
The final assertion checks for a specific image version but lacks a failure message to help diagnose issues when the assertion fails.
Proposed fix
```diff
-}, "5m", "3s").Should(Equal("quay.io/dkarpele/my-guestbook:29437546.0"))
+}, "5m", "3s").Should(Equal("quay.io/dkarpele/my-guestbook:29437546.0"), "Image Updater should have updated the Application's Kustomize image to version 29437546.0")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go` around lines 269 - 289, Add a descriptive failure message to the final Gomega assertion so test failures show context: modify the Eventually(...).Should(Equal("quay.io/dkarpele/my-guestbook:29437546.0")) call to include a message (e.g. Should(Equal(...), "expected Application Kustomize image to be updated to 29437546.0 for app %s", app.Name)) so that when the image check in the Eventually closure (which calls k8sClient.Get, uses iuFixture.TriggerArgoCDRefresh and inspects app.Spec.Source.Kustomize.Images) fails, the output provides the application name and what was expected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 68-108: AfterEach restores ARGOCD_CLUSTER_CONFIG_NAMESPACES but
may panic if operatorDeploy.Spec.Template.ObjectMeta.Annotations is nil; before
setting
operatorDeploy.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"]
ensure the Annotations map is non-nil and initialize it if needed (e.g. if
operatorDeploy.Spec.Template.ObjectMeta.Annotations == nil {
operatorDeploy.Spec.Template.ObjectMeta.Annotations = map[string]string{} }) so
the assignment and subsequent k8sClient.Update call are safe.
In `@test/ginkgo/sequential/suite_test.go`:
- Around line 19-31: The import block is missing the fixture package needed by
EnsureSequentialCleanSlate(); add the fixture package import (imported as
fixture) into the existing imports so the call to
fixture.EnsureSequentialCleanSlate() can compile and run; locate the import
block in suite_test.go (where EnsureSequentialCleanSlate is referenced) and add
the fixture package entry alongside the other imports.
---
Duplicate comments:
In `@test/ginkgo/sequential/suite_test.go`:
- Around line 42-45: BeforeSuite currently only sets up logging (logf.SetLogger)
but omits the required global test initialization; call
fixture.EnsureSequentialCleanSlate() inside the BeforeSuite function (before or
after the logger setup) to configure Ginkgo timeouts/polling, validate disk
space, and clean leftover namespaces and cluster-scoped resources; locate the
BeforeSuite block in suite_test.go and add a call to
fixture.EnsureSequentialCleanSlate() (import fixture if missing).
---
Nitpick comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 183-188: The loop shadows the string loop variable depl with a new
variable of the same name; rename the inner variable to avoid shadowing (e.g.,
change the inner "depl := &appsv1.Deployment{...}" to "deplName := depl" and
then create "deployment := &appsv1.Deployment{ObjectMeta:
metav1.ObjectMeta{Name: deplName, Namespace: ns.Name}}" or simply name the
object "deployment") and update calls to ExistByName(), HaveReplicas(1),
HaveReadyReplicas(1) and the failure message to use the new deployment/object
variable to preserve behavior while improving readability.
- Around line 269-289: Add a descriptive failure message to the final Gomega
assertion so test failures show context: modify the
Eventually(...).Should(Equal("quay.io/dkarpele/my-guestbook:29437546.0")) call
to include a message (e.g. Should(Equal(...), "expected Application Kustomize
image to be updated to 29437546.0 for app %s", app.Name)) so that when the image
check in the Eventually closure (which calls k8sClient.Get, uses
iuFixture.TriggerArgoCDRefresh and inspects app.Spec.Source.Kustomize.Images)
fails, the output provides the application name and what was expected.
In `@test/ginkgo/sequential/suite_test.go`:
- Around line 47-48: The AfterSuite(func() { }) block in the test/suite (symbol:
AfterSuite) is empty and should be removed or clarified; either delete the empty
AfterSuite declaration to avoid dead code, or keep it but add a short
placeholder comment inside (e.g., "// placeholder for global teardown" or "//
TODO: add cleanup") so intent is clear to future readers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 650250b4-0bd6-4a24-b561-165705cf5371
📒 Files selected for processing (2)
test/ginkgo/sequential/1-008-app-in-any-ns_test.gotest/ginkgo/sequential/suite_test.go
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:

```
@@             Coverage Diff             @@
##           master    #1538      +/-   ##
==========================================
+ Coverage   71.48%   73.51%   +2.03%
==========================================
  Files          50       53       +3
  Lines        4667     5098     +431
==========================================
+ Hits         3336     3748     +412
- Misses       1133     1143      +10
- Partials      198      207       +9
```

☔ View full report in Codecov by Sentry.
Comment on the snippet:

```go
	cleanupFuncDev()
}

fixture.OutputDebugOnFail(ns)
```
shall we also include nsDev?
Co-authored-by: Claude <noreply@anthropic.com>
Signed-off-by: dkarpele <karpelevich@gmail.com>
♻️ Duplicate comments (1)
test/ginkgo/sequential/1-008-app-in-any-ns_test.go (1)
83-86: ⚠️ Potential issue | 🟡 Minor

Add a nil check for the annotations map in AfterEach to prevent a potential panic.

If the test fails before lines 136-138 (where annotations are initialized in the test body), the `Annotations` map may be nil when accessed here, causing a runtime panic.

🛡️ Proposed fix

```diff
 			break
 		}
 	}
+	if operatorDeploy.Spec.Template.ObjectMeta.Annotations == nil {
+		operatorDeploy.Spec.Template.ObjectMeta.Annotations = map[string]string{}
+	}
 	operatorDeploy.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)
 	_ = k8sClient.Update(ctx, operatorDeploy)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go` around lines 83 - 86, The AfterEach accesses operatorDeploy.Spec.Template.ObjectMeta.Annotations without checking for nil, which can panic if the map was never initialized; update the teardown logic that touches operatorDeploy.Spec.Template.ObjectMeta.Annotations (the block setting "kubectl.kubernetes.io/restartedAt" and calling k8sClient.Update) to first verify operatorDeploy and operatorDeploy.Spec.Template.ObjectMeta are non-nil and to initialize Annotations if nil (e.g., set Annotations = map[string]string{} before assigning the timestamp) so the assignment and k8sClient.Update call are safe.
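The guard recommended above follows a general Go rule: assigning into a nil map panics, so the map must be initialized first. A minimal stdlib sketch of the same pattern (the struct is a hypothetical stand-in for the Deployment pod template's metadata; the annotation key mirrors the review comment):

```go
package main

import (
	"fmt"
	"time"
)

// podTemplateMeta stands in for the Deployment pod template's ObjectMeta.
type podTemplateMeta struct {
	Annotations map[string]string
}

// setRestartedAt safely stamps the restart annotation: a freshly decoded
// object may carry a nil Annotations map, and writing to a nil map panics.
func setRestartedAt(meta *podTemplateMeta) {
	if meta.Annotations == nil {
		meta.Annotations = map[string]string{}
	}
	meta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)
}

func main() {
	meta := &podTemplateMeta{} // Annotations is nil, as after a failed test setup
	setRestartedAt(meta)       // safe: the guard initializes the map first
	fmt.Println(len(meta.Annotations)) // → 1
}
```

Without the `nil` check, the assignment line would panic with "assignment to entry in nil map" whenever the test body never reached the initialization step.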
🧹 Nitpick comments (1)
test/ginkgo/sequential/1-008-app-in-any-ns_test.go (1)
182-188: Consider renaming the inner variable to avoid shadowing.

The inner `depl` variable (a pointer to Deployment) shadows the outer loop variable `depl` (a string). While this works correctly, using a distinct name improves readability.

♻️ Proposed fix

```diff
 deploymentsShouldExist := []string{"argocd-redis", "argocd-server", "argocd-repo-server", "argocd-argocd-image-updater-controller"}
-for _, depl := range deploymentsShouldExist {
-	depl := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: depl, Namespace: ns.Name}}
-	Eventually(depl).Should(k8sFixture.ExistByName())
-	Eventually(depl).Should(deplFixture.HaveReplicas(1))
-	Eventually(depl, "3m", "3s").Should(deplFixture.HaveeReadyReplicas(1), depl.Name+" was not ready")
+for _, deplName := range deploymentsShouldExist {
+	depl := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: deplName, Namespace: ns.Name}}
+	Eventually(depl).Should(k8sFixture.ExistByName())
+	Eventually(depl).Should(deplFixture.HaveReplicas(1))
+	Eventually(depl, "3m", "3s").Should(deplFixture.HaveReadyReplicas(1), deplName+" was not ready")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go` around lines 182 - 188, The loop shadowing issue: the loop variable deploymentsShouldExist uses "depl" (string) and then reuses "depl" as a pointer to appsv1.Deployment, which is confusing; rename the inner variable (e.g., deploymentObj or deplObj) used for &appsv1.Deployment{...} and update its usages in Eventually calls (deplFixture.HaveReplicas, deplFixture.HaveReadyReplicas) so the loop string variable remains distinct from the Deployment pointer and improves readability.
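The renaming suggested above can be shown in a self-contained sketch (the struct is a hypothetical replacement for `appsv1.Deployment` so the example runs without Kubernetes dependencies):

```go
package main

import "fmt"

// deployment is a hypothetical stand-in for appsv1.Deployment.
type deployment struct {
	Name, Namespace string
}

func main() {
	deploymentsShouldExist := []string{"argocd-redis", "argocd-server"}
	for _, deplName := range deploymentsShouldExist { // the string keeps its own name...
		depl := &deployment{Name: deplName, Namespace: "argocd-e2e"} // ...so the object no longer shadows it
		fmt.Println(depl.Name, deplName == depl.Name) // both names stay usable in the loop body
	}
}
```

With distinct names, later lines in the loop (such as a failure message built from the deployment name) can refer to either the string or the object without ambiguity.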
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 83-86: The AfterEach accesses
operatorDeploy.Spec.Template.ObjectMeta.Annotations without checking for nil,
which can panic if the map was never initialized; update the teardown logic that
touches operatorDeploy.Spec.Template.ObjectMeta.Annotations (the block setting
"kubectl.kubernetes.io/restartedAt" and calling k8sClient.Update) to first
verify operatorDeploy and operatorDeploy.Spec.Template.ObjectMeta are non-nil
and to initialize Annotations if nil (e.g., set Annotations =
map[string]string{} before assigning the timestamp) so the assignment and
k8sClient.Update call are safe.
---
Nitpick comments:
In `@test/ginkgo/sequential/1-008-app-in-any-ns_test.go`:
- Around line 182-188: The loop shadowing issue: the loop variable
deploymentsShouldExist uses "depl" (string) and then reuses "depl" as a pointer
to appsv1.Deployment, which is confusing; rename the inner variable (e.g.,
deploymentObj or deplObj) used for &appsv1.Deployment{...} and update its usages
in Eventually calls (deplFixture.HaveReplicas, deplFixture.HaveReadyReplicas) so
the loop string variable remains distinct from the Deployment pointer and
improves readability.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 875ae0bc-2e8f-4e14-bb13-a180a37d7c3f
📒 Files selected for processing (1)
test/ginkgo/sequential/1-008-app-in-any-ns_test.go
Summary by CodeRabbit
Tests
Chores