Added tests #18
Conversation
Walkthrough

This pull request updates variable handling in a Terraform provider file, bumps module versions in a remote state configuration, and introduces several new configuration files and tests. Notably, it alters the determination of a Kubernetes auth role, adds new YAML files for various Terraform components (catalog, orgs, vendor), and incorporates a new Go module alongside test suites for AWS Load Balancer Controller deployment. Additionally, a simple test script was removed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant T as Test Runner
    participant CS as ComponentSuite
    participant DA as DeployAtmosComponent
    participant EK as EKS Cluster
    participant KC as Kubernetes Client
    participant IR as Ingress Resource
    T->>CS: Invoke TestBasic
    CS->>DA: Deploy Load Balancer Controller in unique namespace
    DA->>EK: Provision and initialize deployment
    EK->>KC: Return cluster details and namespace info
    KC->>IR: Monitor ingress updates
    IR-->>KC: Provide load balancer metadata
    KC->>CS: Validate deployment and ingress status
```
Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Require terratest

Wonderful, this rule succeeded. This rule requires terratest status.
/terratest
/terratest
Actionable comments posted: 6
🧹 Nitpick comments (5)
test/fixtures/stacks/catalog/usecase/disabled.yaml (1)
1-38: Disabled Configuration and Newline Reminder.

The configuration for the disabled use case is correctly set with `enabled: false`, and the related parameters are appropriately defined. However, YAMLlint has flagged that the file is missing a newline at the end. Please add a newline character at the end of the file to adhere to YAML best practices.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 38-38: no new line character at the end of file
(new-line-at-end-of-file)
test/fixtures/stacks/catalog/eks-cluster.yaml (1)
1-94: EKS Cluster YAML Configuration Review.

The file comprehensively outlines the EKS cluster configuration, including node groups, add-on versions, and network/security settings. To conform with YAML best practices and address the YAMLlint warning, please add a newline character at the end of the file.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 94-94: no new line character at the end of file
(new-line-at-end-of-file)
test/go.mod (1)
17-17: Uncomment or remove the commented replace directive.

The replace directive is commented out but seems to be important for redirecting from an old dependency path to a new one.

```diff
-// replace github.com/rebuy-de/aws-nuke/v2/mocks => github.com/ekristen/aws-nuke/v3/ v3.29.0
+replace github.com/rebuy-de/aws-nuke/v2/mocks => github.com/ekristen/aws-nuke/v3/ v3.29.0
```

test/component_test.go (2)
73-77: Typo in variable name and error handling improvement.

There's a typo in the variable name `clusrerId`, which should be `clusterId`. Also, the clientset creation could benefit from immediate error checking.

```diff
- clusrerId := atmos.Output(s.T(), clusterOptions, "eks_cluster_id")
- cluster := awsHelper.GetEksCluster(s.T(), context.Background(), awsRegion, clusrerId)
- clientset, err := awsHelper.NewK8SClientset(cluster)
- assert.NoError(s.T(), err)
- assert.NotNil(s.T(), clientset)
+ clusterId := atmos.Output(s.T(), clusterOptions, "eks_cluster_id")
+ cluster := awsHelper.GetEksCluster(s.T(), context.Background(), awsRegion, clusterId)
+ clientset, err := awsHelper.NewK8SClientset(cluster)
+ if err != nil {
+     s.T().Fatalf("Failed to create Kubernetes clientset: %v", err)
+ }
+ assert.NotNil(s.T(), clientset)
```
179-186: Incorrect error message.

The timeout message incorrectly refers to worker nodes when it should be referring to the ingress controller.

```diff
 select {
 case <-stopChannel:
     msg := "All ingress have joined the EKS cluster"
     fmt.Println(msg)
 case <-time.After(1 * time.Minute):
-    msg := "Not all worker nodes have joined the EKS cluster"
+    msg := "Timeout waiting for ingress to be properly configured"
     fmt.Println(msg)
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`test/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (15)
- `src/provider-helm.tf` (1 hunks)
- `src/remote-state.tf` (2 hunks)
- `test/.gitignore` (1 hunks)
- `test/component_test.go` (1 hunks)
- `test/fixtures/atmos.yaml` (1 hunks)
- `test/fixtures/stacks/catalog/account-map.yaml` (1 hunks)
- `test/fixtures/stacks/catalog/eks-cluster.yaml` (1 hunks)
- `test/fixtures/stacks/catalog/usecase/basic.yaml` (1 hunks)
- `test/fixtures/stacks/catalog/usecase/disabled.yaml` (1 hunks)
- `test/fixtures/stacks/catalog/vpc.yaml` (1 hunks)
- `test/fixtures/stacks/orgs/default/test/_defaults.yaml` (1 hunks)
- `test/fixtures/stacks/orgs/default/test/tests.yaml` (1 hunks)
- `test/fixtures/vendor.yaml` (1 hunks)
- `test/go.mod` (1 hunks)
- `test/run.sh` (0 hunks)
💤 Files with no reviewable changes (1)
- test/run.sh
🧰 Additional context used
🪛 YAMLlint (1.35.1)
test/fixtures/stacks/catalog/usecase/disabled.yaml
[error] 38-38: no new line character at the end of file
(new-line-at-end-of-file)
test/fixtures/stacks/catalog/eks-cluster.yaml
[error] 94-94: no new line character at the end of file
(new-line-at-end-of-file)
🪛 golangci-lint (1.64.8)
test/component_test.go
1-1: : # test [test.test]
./component_test.go:29:14: undefined: strings
./component_test.go:29:30: undefined: random
./component_test.go:33:2: declared and not used: input
(typecheck)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Mergify Merge Protections
- GitHub Check: Summary
🔇 Additional comments (15)
test/.gitignore (1)
1-5: Accept .gitignore rules.

The new `.gitignore` entries properly exclude temporary and generated files (such as state directories, cache files, and test suite outputs) from being tracked by Git. This is a good practice to keep the repository clean.

src/remote-state.tf (2)
3-3: Update EKS module version.

The "eks" module now uses version `"1.8.0"`, which appears to be aligned with the updated configuration files in the test fixtures.

12-12: Update VPC module version.

Similarly, updating the "vpc" module version to `"1.8.0"` ensures compatibility with the new configurations. Verify that all references consuming these modules are compatible with this version bump.

test/fixtures/stacks/catalog/vpc.yaml (1)
1-20: New VPC configuration file addition.
This YAML file defines the VPC component with the necessary metadata and variables (e.g., availability zones, subnet settings, NAT configurations). Ensure that these settings match your deployment requirements and that the related tests correctly reference these configurations.

test/fixtures/stacks/orgs/default/test/tests.yaml (1)
1-7: Test suite configuration file for component tests.
The file imports essential dependencies, including defaults, VPC, EKS cluster, and both basic and disabled use cases. This configuration supports the PR objectives of validating fundamental functionalities and ensuring that no resources are created when disabled.

test/fixtures/vendor.yaml (1)
1-43: New vendor configuration file addition.

The `AtmosVendorConfig` is well-defined with sources for `account-map`, `eks/cluster`, and `vpc`. It specifies version details, target paths, and filters for included and excluded file types. Verify that the version numbers and repository sources are in sync with the current infrastructure setup.

test/fixtures/stacks/catalog/usecase/basic.yaml (1)
1-37: Configuration File Structure Verified.

The YAML file for the basic use case is well-structured and clearly defines all the necessary metadata and variable settings for the `eks/alb-controller` component. The comments provide helpful context, and the indentation and keys appear correct.

test/fixtures/stacks/catalog/account-map.yaml (1)
1-47: Account Map Configuration File Review.

The account map configuration is well-organized with clear metadata and variable definitions, which will assist in managing Terraform workspaces and role assumptions. For consistency with YAML file conventions, consider ensuring that a newline exists at the end of the file if it is currently missing.
test/fixtures/stacks/orgs/default/test/_defaults.yaml (2)
4-35: Looks good - standard terraform backend and label configuration.

The terraform backend configuration and label ordering setup follows best practices for infrastructure organization.

36-67: Avoid hardcoded placeholders in production environments.

The configuration uses placeholder values like `<TEST_ACCOUNT_ID>` that should be replaced with actual values in production. This is fine for testing purposes but should be documented. Are there any values that need to be configured by the developer before running the tests? Consider creating a documentation file or adding comments indicating which environment variables must be set.
test/go.mod (2)
19-181: Large dependency list looks good, but consider version management.

The extensive list of dependencies is well organized with direct and indirect requirements properly specified. Consider using a dependency management tool to keep these versions updated and scan for security vulnerabilities regularly.

3-5: Details
❌ Incorrect review comment
Verify Go version compatibility.
The test module is configured to use Go 1.23.4 with toolchain 1.23.7. These versions appear to be future dates compared to the current stable Go releases.
🌐 Web query:
What is the latest stable version of Go?

💡 Result:

The latest stable version of Go is 1.24.1, released on March 4, 2025 [1][4][6][8]. This release includes security fixes to the `net/http` package and various bug fixes across the toolchain, compiler, and runtime [2][6].

Key details:
- Go maintains backward compatibility under its Go 1 promise, ensuring nearly all programs continue working across versions[1].
- Version numbering follows `go1.[major].[patch]`, with 1.24.1 indicating:
- Major version 24
- First minor release of version 24
- First patch update[1][4]
- Supported platforms include Windows, Linux, macOS, BSD variants, Solaris, and experimental OpenBSD/RISC-V support[4][6].
Security-focused teams should prioritize upgrading to 1.24.1 due to its HTTP package fixes [2][6]. The official Docker image `golang:1.24.1` provides a containerized version [8].

Citations:
- 1: https://tip.golang.org/doc/go1.21
- 2: https://go.dev/doc/devel/release
- 3: https://github.com/golang/tools/releases
- 4: https://en.wikipedia.org/wiki/Go_(programming_language)
- 5: https://tip.golang.org/doc/go1.23
- 6: https://go.dev/dl/
- 7: https://groups.google.com/g/golang-nuts/c/_ROx_UGCCCg
- 8: https://hub.docker.com/_/golang
Action Required: Verify Your Go Version Configuration
It appears that the comment's assumption about the Go versions being "future dates" is incorrect. The latest stable Go release (as of March 2025) is 1.24.1, while your test module in `test/go.mod` specifies:

```
go 1.23.4
toolchain go1.23.7
```

These versions are older than the current stable release. Please confirm whether this older configuration is intentional for compatibility or whether you'd prefer to update the module to align with Go 1.24.1.
Likely an incorrect or invalid review comment.
test/fixtures/atmos.yaml (3)
1-10: Good CLI configuration with clear documentation.

The configuration header properly documents the loading order and POSIX-style Glob support, which will be helpful for developers using this configuration.

11-18: Base path configuration is well documented.

The base path configuration and its interaction with component paths is clearly explained, including environment variable and command-line alternatives.

20-78: Configuration settings are well structured and documented.

The components, stacks, workflows, logs, settings, and templates sections are properly organized with appropriate configuration options. Each setting includes documentation about alternative ways to specify it.
/terratest
Actionable comments posted: 0
♻️ Duplicate comments (4)
test/component_test.go (4)
31-37: ⚠️ Potential issue: Address unused variable and add namespace creation.

The `input` variable contains `kubernetes_namespace` but isn't used to create the actual namespace in Kubernetes. This will likely cause the test to fail when trying to deploy to a non-existent namespace.

```diff
 controllerNamespace := fmt.Sprintf("alb-controller-%s", randomID)
 input := map[string]interface{}{
     "kubernetes_namespace": controllerNamespace,
 }
+// Create namespace before deployment
+ns := &corev1.Namespace{
+    ObjectMeta: metav1.ObjectMeta{
+        Name: controllerNamespace,
+    },
+}
+_, err := clientset.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{})
+assert.NoError(s.T(), err)
```

Note: You'll need to add `corev1 "k8s.io/api/core/v1"` to your imports.
93-95: ⚠️ Potential issue: Missing namespace creation for ingress.

The test creates an ingress in the `ingressNamespace` but never creates this namespace first.

```diff
 ingressNamespace := fmt.Sprintf("example-%s", randomID)
 ingressName := fmt.Sprintf("example-ingress-%s", randomID)
+// Create ingress namespace
+ingressNs := &corev1.Namespace{
+    ObjectMeta: metav1.ObjectMeta{
+        Name: ingressNamespace,
+    },
+}
+_, err = clientset.CoreV1().Namespaces().Create(context.Background(), ingressNs, metav1.CreateOptions{})
+assert.NoError(s.T(), err)
```
158-178: ⚠️ Potential issue: Fix potential panic in event handler.

The code assumes that the Ingress status has at least one LoadBalancer entry, which could cause a panic if the array is empty.

```diff
 informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
     UpdateFunc: func(oldObj, newObj interface{}) {
         _, ok := oldObj.(*networkingv1.Ingress)
         if !ok {
             return
         }
         newIngress, ok := newObj.(*networkingv1.Ingress)
         if !ok {
             return
         }
         // Check if the Ingress's LoadBalancer status has been populated
-        if len(newIngress.Status.LoadBalancer.Ingress[0].Hostname) > 0 {
+        if len(newIngress.Status.LoadBalancer.Ingress) > 0 && len(newIngress.Status.LoadBalancer.Ingress[0].Hostname) > 0 {
             fmt.Printf("Ingress %s is ready\n", newIngress.Name)
             close(stopChannel)
         } else {
             fmt.Printf("Ingress %s is not ready yet\n", newIngress.Name)
         }
     },
 })
```

🧰 Tools
🪛 golangci-lint (1.64.8)
158-158: Error return value of `informer.AddEventHandler` is not checked (errcheck)
190-194: ⚠️ Potential issue: Fix potential panic in ingress status check.

The code assumes that the Ingress status has at least one LoadBalancer entry, which could cause a panic if the array is empty.

```diff
 ingressStatus, err := clientset.NetworkingV1().Ingresses(ingressNamespace).Get(context.Background(), ingressName, metav1.GetOptions{})
 assert.NoError(s.T(), err)
 assert.Equal(s.T(), ingressStatus.Name, ingressName)
-assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname)
+assert.True(s.T(), len(ingressStatus.Status.LoadBalancer.Ingress) > 0, "No LoadBalancer entries found in Ingress status")
+if len(ingressStatus.Status.LoadBalancer.Ingress) > 0 {
+    assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname, "LoadBalancer hostname is empty")
+}
```
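Both panic fixes above rely on the same guard: check the slice length before indexing into `Status.LoadBalancer.Ingress`. A standalone sketch of that pattern follows, using a stand-in struct rather than the real `networkingv1` types, so it can be reasoned about in isolation:

```go
package main

import "fmt"

// lbEntry stands in for the k8s ingress LoadBalancer status entry type;
// only the Hostname field matters for this sketch.
type lbEntry struct{ Hostname string }

// firstHostname safely returns the first LoadBalancer hostname.
// ok is false when the slice is empty or the hostname is unset,
// so callers never index into an empty slice.
func firstHostname(entries []lbEntry) (string, bool) {
	if len(entries) == 0 {
		return "", false
	}
	return entries[0].Hostname, entries[0].Hostname != ""
}

func main() {
	// Empty status: no panic, the ingress is simply not ready yet.
	_, ok := firstHostname(nil)
	fmt.Println("ready:", ok)

	host, ok := firstHostname([]lbEntry{{Hostname: "k8s-example.elb.amazonaws.com"}})
	fmt.Println("ready:", ok, "host:", host)
}
```

Centralizing the guard in one helper keeps the event handler and the final assertion from duplicating (and drifting apart on) the same length check.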
🧹 Nitpick comments (5)
test/component_test.go (5)
74-76: Fix typo in variable name.

There's a typo in the `clusrerId` variable name (should be `clusterId`).

```diff
- clusrerId := atmos.Output(s.T(), clusterOptions, "eks_cluster_id")
- cluster := awsHelper.GetEksCluster(s.T(), context.Background(), awsRegion, clusrerId)
+ clusterId := atmos.Output(s.T(), clusterOptions, "eks_cluster_id")
+ cluster := awsHelper.GetEksCluster(s.T(), context.Background(), awsRegion, clusterId)
```
83-83: Avoid hardcoding namespace count.

Asserting a specific number of namespaces (5) is fragile, as it depends on the cluster's state. Consider using a "greater than" check instead.

```diff
- assert.Equal(s.T(), len(namespaces.Items), 5)
+ assert.True(s.T(), len(namespaces.Items) >= 5, "Expected at least 5 namespaces")
```
147-147: Add error checking for the defer Delete call.

The error from the `Delete` operation isn't being checked, which could hide cleanup failures.

```diff
- defer clientset.NetworkingV1().Ingresses(ingressNamespace).Delete(context.Background(), ingressName, metav1.DeleteOptions{})
+ defer func() {
+     deleteErr := clientset.NetworkingV1().Ingresses(ingressNamespace).Delete(context.Background(), ingressName, metav1.DeleteOptions{})
+     if deleteErr != nil {
+         fmt.Printf("Failed to delete ingress %s: %v\n", ingressName, deleteErr)
+     }
+ }()
```

🧰 Tools
🪛 golangci-lint (1.64.8)
147-147: Error return value of `(k8s.io/client-go/kubernetes/typed/networking/v1.IngressInterface).Delete` is not checked (errcheck)
186-187: Correct misleading error message.

The error message refers to worker nodes, but the context is about waiting for an ingress to be ready.

```diff
- msg := "Not all worker nodes have joined the EKS cluster"
+ msg := "Timed out waiting for ingress to be ready"
```
198-202: Add more detailed assertions for TestEnabledFlag.

The `TestEnabledFlag` method only calls `VerifyEnabledFlag` without any additional assertions. Consider adding specific assertions to verify no resources are created when disabled.

```diff
 func (s *ComponentSuite) TestEnabledFlag() {
     const component = "eks/alb-controller/disabled"
     const stack = "default-test"
-    s.VerifyEnabledFlag(component, stack, nil)
+
+    // Verify the enabled flag
+    result := s.VerifyEnabledFlag(component, stack, nil)
+    assert.True(s.T(), result, "VerifyEnabledFlag should return true")
+
+    // Additional verification that no resources were created
+    clusterOptions := s.GetAtmosOptions("eks/cluster", stack, nil)
+    clusterId := atmos.Output(s.T(), clusterOptions, "eks_cluster_id")
+    cluster := awsHelper.GetEksCluster(s.T(), context.Background(), "us-east-2", clusterId)
+    clientset, err := awsHelper.NewK8SClientset(cluster)
+    assert.NoError(s.T(), err)
+
+    // Verify the controller namespace doesn't exist
+    _, err = clientset.CoreV1().Namespaces().Get(context.Background(), "aws-load-balancer-controller", metav1.GetOptions{})
+    assert.Error(s.T(), err, "Namespace should not exist when component is disabled")
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`test/component_test.go` (1 hunks)
🧰 Additional context used
🪛 golangci-lint (1.64.8)
test/component_test.go
147-147: Error return value of (k8s.io/client-go/kubernetes/typed/networking/v1.IngressInterface).Delete is not checked
(errcheck)
158-158: Error return value of informer.AddEventHandler is not checked
(errcheck)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Mergify Merge Protections
- GitHub Check: Mergify Merge Protections
- GitHub Check: Summary
/terratest
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 2
♻️ Duplicate comments (2)
test/component_test.go (2)
35-37: ⚠️ Potential issue: Unused variable and missing namespace creation.

The `input` variable is declared but not used effectively. Also, there's no code that creates the `controllerNamespace` in Kubernetes before deploying the component.

This issue was identified in a previous review comment but hasn't been addressed in this code:

```diff
- input := map[string]interface{}{
-     "kubernetes_namespace": controllerNamespace,
- }
+ // Create namespace before deployment
+ ns := &corev1.Namespace{
+     ObjectMeta: metav1.ObjectMeta{
+         Name: controllerNamespace,
+     },
+ }
+ _, err := clientset.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{})
+ assert.NoError(s.T(), err)
```
195-196: ⚠️ Potential issue: Potential panic in ingress status check.

The code directly accesses the first element of the LoadBalancer.Ingress array without checking if it exists, which could cause a panic if the array is empty.

This issue was identified in a previous review comment but hasn't been addressed in this code:

```diff
 assert.Equal(s.T(), ingressStatus.Name, ingressName)
-assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname)
+assert.True(s.T(), len(ingressStatus.Status.LoadBalancer.Ingress) > 0, "No LoadBalancer entries found in Ingress status")
+if len(ingressStatus.Status.LoadBalancer.Ingress) > 0 {
+    assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname, "LoadBalancer hostname is empty")
+}
```
🧹 Nitpick comments (1)
test/component_test.go (1)
189-190: Incorrect timeout message.

The timeout message references worker nodes joining the EKS cluster, but this test is actually waiting for the Ingress LoadBalancer status to be populated. Update the timeout message to accurately reflect what the test is waiting for:

```diff
- msg := "Not all worker nodes have joined the EKS cluster"
+ msg := "Timeout waiting for Ingress LoadBalancer status to be populated"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `src/provider-helm.tf` (1 hunks)
- `test/component_test.go` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/provider-helm.tf
🧰 Additional context used
🪛 golangci-lint (1.64.8)
test/component_test.go
1-1: : # test [test.test]
./component_test.go:92:9: no new variables on left side of :=
./component_test.go:92:74: undefined: corev1
(typecheck)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Mergify Merge Protections
- GitHub Check: Mergify Merge Protections
- GitHub Check: Summary
🔇 Additional comments (3)
test/component_test.go (3)
173-180: Well-implemented condition check for LoadBalancer ingress.

Good job on implementing the check for LoadBalancer entries before accessing their properties. This prevents potential panics that could occur if no LoadBalancer entries exist.
201-205: Good implementation of disabled component test.

The TestEnabledFlag method correctly tests the "disabled" component configuration, which aligns with the PR objectives of adding a "disabled component test" to ensure no resources are created when the input parameter `enabled` is set to false.
198-198: Good implementation of drift testing.

The test correctly implements a drift test to verify that the component does not experience any drift during a subsequent run with identical inputs, which was one of the PR objectives.
/terratest
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
/terratest
Actionable comments posted: 5
♻️ Duplicate comments (1)
test/component_test.go (1)
31-33: ⚠️ Potential issue: Namespace created but not explicitly applied to Kubernetes.

The `controllerNamespace` is defined but there's no code that creates this namespace in Kubernetes before deploying the controller. Add namespace creation before deployment:

```diff
 controllerNamespace := fmt.Sprintf("alb-controller-%s", randomID)
+// Create controller namespace in Kubernetes
+_, err := clientset.CoreV1().Namespaces().Create(context.Background(), &corev1.Namespace{
+    ObjectMeta: metav1.ObjectMeta{
+        Name: controllerNamespace,
+    },
+}, metav1.CreateOptions{})
+assert.NoError(s.T(), err)
```

Note: This code needs to be moved after the clientset is created (after line 78).
🧹 Nitpick comments (4)
test/component_test.go (4)
76-78: Consider adding a retry mechanism for Kubernetes client initialization.

Creating a clientset is a critical operation that might temporarily fail due to network issues or cluster state. Implement a retry mechanism:

```diff
- clientset, err := awsHelper.NewK8SClientset(cluster)
- assert.NoError(s.T(), err)
- assert.NotNil(s.T(), clientset)
+ var clientset *kubernetes.Clientset
+ var err error
+ for retries := 0; retries < 3; retries++ {
+     clientset, err = awsHelper.NewK8SClientset(cluster)
+     if err == nil && clientset != nil {
+         break
+     }
+     time.Sleep(5 * time.Second)
+ }
+ assert.NoError(s.T(), err)
+ assert.NotNil(s.T(), clientset, "Failed to create Kubernetes clientset after retries")
```

This requires adding `"k8s.io/client-go/kubernetes"` to your imports.
189-190: Incorrect timeout message.

The timeout message indicates waiting for worker nodes, but the code is actually waiting for Ingress readiness.

```diff
- msg := "Not all worker nodes have joined the EKS cluster"
+ msg := "Timeout waiting for Ingress to receive LoadBalancer hostname"
```
183-191: Timeout handling could be improved.

The current timeout handling prints a message but doesn't fail the test when the timeout occurs, which might hide issues.

```diff
 select {
 case <-stopChannel:
     msg := "All ingress have joined the EKS cluster"
     fmt.Println(msg)
 case <-time.After(1 * time.Minute):
     msg := "Timeout waiting for Ingress to receive LoadBalancer hostname"
     fmt.Println(msg)
+    s.T().Error(msg)
 }
```
156-161: Consider adding a WaitGroup for better control of background goroutines.

The current implementation uses a channel to signal completion, but adding a WaitGroup would ensure proper cleanup of background goroutines.

```diff
+import "sync"

 // Wait for the Ingress to be updated with the LoadBalancer metadata
 factory := informers.NewSharedInformerFactory(clientset, 0)
 informer := factory.Networking().V1().Ingresses().Informer()
+var wg sync.WaitGroup
+wg.Add(1)
 stopChannel := make(chan struct{})
+defer func() {
+    close(stopChannel) // Ensure channel is closed
+    wg.Wait()          // Wait for goroutine to exit
+}()
```

And in the goroutine:

```diff
- go informer.Run(stopChannel)
+ go func() {
+     defer wg.Done()
+     informer.Run(stopChannel)
+ }()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`test/component_test.go` (1 hunks)
🧰 Additional context used
🪛 golangci-lint (1.64.8)
test/component_test.go
91-91: Error return value of (k8s.io/client-go/kubernetes/typed/core/v1.NamespaceInterface).Delete is not checked
(errcheck)
151-151: Error return value of (k8s.io/client-go/kubernetes/typed/networking/v1.IngressInterface).Delete is not checked
(errcheck)
162-162: Error return value of informer.AddEventHandler is not checked
(errcheck)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Mergify Merge Protections
- GitHub Check: Mergify Merge Protections
- GitHub Check: Summary
🔇 Additional comments (1)
test/component_test.go (1)
193-196: Potential panic if LoadBalancer status is empty.

Even though there's a check in the event handler, there's still a risk of accessing an empty array directly in the assertion. Apply this diff to safely check the LoadBalancer status:

```diff
 ingressStatus, err := clientset.NetworkingV1().Ingresses(ingressNamespace).Get(context.Background(), ingressName, metav1.GetOptions{})
 assert.NoError(s.T(), err)
 assert.Equal(s.T(), ingressStatus.Name, ingressName)
-assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname)
+assert.True(s.T(), len(ingressStatus.Status.LoadBalancer.Ingress) > 0, "No LoadBalancer entries found in Ingress status")
+if len(ingressStatus.Status.LoadBalancer.Ingress) > 0 {
+    assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname, "LoadBalancer hostname is empty")
+}
```
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
♻️ Duplicate comments (3)
test/component_test.go (3)
201-204: ⚠️ Potential issue: Potential panic in ingress status check.

The code assumes that the Ingress status has at least one LoadBalancer entry, which could cause a panic if the array is empty.

```diff
 ingressStatus, err := clientset.NetworkingV1().Ingresses(ingressNamespace).Get(context.Background(), ingressName, metav1.GetOptions{})
 assert.NoError(s.T(), err)
 assert.Equal(s.T(), ingressStatus.Name, ingressName)
-assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname)
+assert.True(s.T(), len(ingressStatus.Status.LoadBalancer.Ingress) > 0, "No LoadBalancer entries found in Ingress status")
+if len(ingressStatus.Status.LoadBalancer.Ingress) > 0 {
+    assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname, "LoadBalancer hostname is empty")
+}
```
170-189: ⚠️ Potential issue: Error return value from AddEventHandler is not checked.

The error from the `AddEventHandler` operation isn't checked, which could hide failures during event handler registration.

```diff
-informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
+if err := informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
     UpdateFunc: func(oldObj, newObj interface{}) {
         _, ok := oldObj.(*networkingv1.Ingress)
         if !ok {
             return
         }
         newIngress, ok := newObj.(*networkingv1.Ingress)
         if !ok {
             return
         }
         // Check if the Ingress's LoadBalancer status has been populated
         if len(newIngress.Status.LoadBalancer.Ingress) > 0 && len(newIngress.Status.LoadBalancer.Ingress[0].Hostname) > 0 {
             fmt.Printf("Ingress %s is ready\n", newIngress.Name)
             close(stopChannel)
         } else {
             fmt.Printf("Ingress %s is not ready yet\n", newIngress.Name)
         }
     },
-})
+}); err != nil {
+    fmt.Printf("Error adding event handler: %v\n", err)
+}
```

🧰 Tools
🪛 golangci-lint (1.64.8)
170-170: Error return value of `informer.AddEventHandler` is not checked (errcheck)
40-40: ⚠️ Potential issue: Error from DeployAtmosComponent is ignored.

The error returned by `DeployAtmosComponent` is discarded, which could hide deployment failures.

```diff
-options, _ := s.DeployAtmosComponent(s.T(), component, stack, &input)
+options, err := s.DeployAtmosComponent(s.T(), component, stack, &input)
+assert.NoError(s.T(), err, "Failed to deploy component")
```
🧹 Nitpick comments (3)
test/component_test.go (3)
74-74: Fix typo in variable name.

There's a typo in the variable name `clusrerId` instead of `clusterId`.

```diff
-clusrerId := atmos.Output(s.T(), clusterOptions, "eks_cluster_id")
-cluster := awsHelper.GetEksCluster(s.T(), context.Background(), awsRegion, clusrerId)
+clusterId := atmos.Output(s.T(), clusterOptions, "eks_cluster_id")
+cluster := awsHelper.GetEksCluster(s.T(), context.Background(), awsRegion, clusterId)
```
196-198: Incorrect error message in timeout case.

The error message in case of timeout is misleading. The code is waiting for the Ingress to be ready, not for worker nodes to join the cluster.

```diff
 case <-time.After(1 * time.Minute):
-    msg := "Not all worker nodes have joined the EKS cluster"
+    msg := "Timeout waiting for Ingress to be ready"
     fmt.Println(msg)
```
35-37: The `input` variable is declared but used inconsistently.

The input map is correctly defined, but the address operator is used when passing it to the methods. If the methods expect a pointer, this is correct, but you should be consistent across all usages.

```diff
-defer s.DestroyAtmosComponent(s.T(), component, stack, &input)
-options, _ := s.DeployAtmosComponent(s.T(), component, stack, &input)
+defer s.DestroyAtmosComponent(s.T(), component, stack, input)
+options, err := s.DeployAtmosComponent(s.T(), component, stack, input)
```

Alternatively, if the methods do require a pointer, update line 206 to use the pointer as well:

```diff
-s.DriftTest(component, stack, nil)
+s.DriftTest(component, stack, &input)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/component_test.go (1 hunks)
🧰 Additional context used
🪛 golangci-lint (1.64.8)
test/component_test.go
170-170: Error return value of informer.AddEventHandler is not checked
(errcheck)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Mergify Merge Protections
- GitHub Check: Mergify Merge Protections
- GitHub Check: Summary
/terratest
Actionable comments posted: 0
♻️ Duplicate comments (4)
test/component_test.go (4)
39-40: ⚠️ Potential issue: Error from `DeployAtmosComponent` is ignored.

The error returned by `DeployAtmosComponent` is discarded, which could hide deployment failures.

```diff
-defer s.DestroyAtmosComponent(s.T(), component, stack, &input)
-options, _ := s.DeployAtmosComponent(s.T(), component, stack, &input)
+defer s.DestroyAtmosComponent(s.T(), component, stack, &input)
+options, err := s.DeployAtmosComponent(s.T(), component, stack, &input)
+assert.NoError(s.T(), err, "Failed to deploy component")
```
170-189: ⚠️ Potential issue: Error return value not checked for `AddEventHandler`.

The error from `AddEventHandler` is not checked, which could lead to silent failures.

```diff
-informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
+if err := informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
 	UpdateFunc: func(oldObj, newObj interface{}) {
 		_, ok := oldObj.(*networkingv1.Ingress)
 		if !ok {
 			return
 		}
 		newIngress, ok := newObj.(*networkingv1.Ingress)
 		if !ok {
 			return
 		}
 		// Check if the Ingress's LoadBalancer status has been populated
 		if len(newIngress.Status.LoadBalancer.Ingress) > 0 && len(newIngress.Status.LoadBalancer.Ingress[0].Hostname) > 0 {
 			fmt.Printf("Ingress %s is ready\n", newIngress.Name)
 			close(stopChannel)
 		} else {
 			fmt.Printf("Ingress %s is not ready yet\n", newIngress.Name)
 		}
 	},
-})
+}); err != nil {
+	fmt.Printf("Error adding event handler: %v\n", err)
+}
```

🧰 Tools
🪛 golangci-lint (1.64.8)
170-170: Error return value of `informer.AddEventHandler` is not checked (errcheck)
201-204: ⚠️ Potential issue: Potential panic in ingress status check.

The assertion assumes that the Ingress status has at least one LoadBalancer entry, which could cause a panic if the slice is empty.

```diff
 ingressStatus, err := clientset.NetworkingV1().Ingresses(ingressNamespace).Get(context.Background(), ingressName, metav1.GetOptions{})
 assert.NoError(s.T(), err)
 assert.Equal(s.T(), ingressStatus.Name, ingressName)
-assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname)
+assert.True(s.T(), len(ingressStatus.Status.LoadBalancer.Ingress) > 0, "No LoadBalancer entries found in Ingress status")
+if len(ingressStatus.Status.LoadBalancer.Ingress) > 0 {
+	assert.NotEmpty(s.T(), ingressStatus.Status.LoadBalancer.Ingress[0].Hostname, "LoadBalancer hostname is empty")
+}
```
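The guard can also be factored into a helper so callers never index an empty slice. A runnable sketch, using simplified structs in place of the Kubernetes status types (the real fields live on `networkingv1.IngressStatus`):

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes load-balancer status types.
type lbIngress struct{ Hostname string }
type lbStatus struct{ Ingress []lbIngress }

// firstHostname returns the first load-balancer hostname, with ok=false when
// the status has not been populated yet — no indexing into an empty slice.
func firstHostname(s lbStatus) (string, bool) {
	if len(s.Ingress) == 0 {
		return "", false
	}
	return s.Ingress[0].Hostname, true
}

func main() {
	empty := lbStatus{}
	if _, ok := firstHostname(empty); !ok {
		fmt.Println("no LoadBalancer entries found in Ingress status")
	}

	ready := lbStatus{Ingress: []lbIngress{{Hostname: "abc.elb.amazonaws.com"}}}
	if h, ok := firstHostname(ready); ok {
		fmt.Println("hostname:", h)
	}
}
```

The comma-ok return turns a would-be runtime panic into an ordinary branch the test can assert on.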
35-40: ⚠️ Potential issue: The `input` variable is declared but not used properly.

The issue is with passing the input variable to the methods: you're passing a pointer to a map (`&input`) instead of the map directly.

```diff
 input := map[string]interface{}{
 	"kubernetes_namespace": controllerNamespace,
 }
-defer s.DestroyAtmosComponent(s.T(), component, stack, &input)
-options, _ := s.DeployAtmosComponent(s.T(), component, stack, &input)
+defer s.DestroyAtmosComponent(s.T(), component, stack, input)
+options, _ := s.DeployAtmosComponent(s.T(), component, stack, input)
```
🧹 Nitpick comments (2)
test/component_test.go (2)
193-199: Misleading error message in timeout case. The error message says that worker nodes haven't joined the cluster, but the code is actually waiting for the ingress to be ready.

```diff
 select {
 case <-stopChannel:
 	msg := "All ingress have joined the EKS cluster"
 	fmt.Println(msg)
 case <-time.After(1 * time.Minute):
-	msg := "Not all worker nodes have joined the EKS cluster"
+	msg := "Timeout waiting for ingress to be ready"
 	fmt.Println(msg)
 }
```
196-199: Consider failing the test when the timeout expires. The current code continues execution after the timeout without failing the test, which might lead to false-positive test results.

```diff
 case <-time.After(1 * time.Minute):
 	msg := "Timeout waiting for ingress to be ready"
 	fmt.Println(msg)
+	assert.Fail(s.T(), "Timeout waiting for ingress to be ready")
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/component_test.go (1 hunks)
🧰 Additional context used
🪛 golangci-lint (1.64.8)
test/component_test.go
170-170: Error return value of informer.AddEventHandler is not checked
(errcheck)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Mergify Merge Protections
- GitHub Check: Mergify Merge Protections
- GitHub Check: Summary
🔇 Additional comments (4)
test/component_test.go (4)
206-206: Good addition of drift testing. Including drift testing ensures that no unexpected changes occur when running the component multiple times with identical inputs.

209-213: Well-structured test for disabled flag. This test correctly verifies that when the component is disabled, no resources are created. This is an important test case for Terraform components.

215-219: Well-structured SetupSuite method. The method correctly initializes the test configuration and sets the component destination directory.

221-226: Well-structured test runner. The test runner correctly sets up dependencies before running the tests, ensuring that the VPC and EKS cluster are available.
These changes were released in v1.535.1.
What
- `basic` component test
- `disabled` component test

Why

- Validate component behavior with `enabled: false` set

References
Summary by CodeRabbit
Bug Fixes
Chores
New Features
Tests