Conversation
…ariables and subroutines On-behalf-of: @SAP angel.kafazov@sap.com Signed-off-by: Angel Kafazov <akafazov@cst-bg.net>
… operations
…tform-mesh.yaml
…skfile.yml
…nal connections
…ientMock directly
…ests
…arts with gotemplates and infra and components profiles.
…ve test cases for provider secret handling
…ce test structure
- Removed unnecessary client Get call in updateHelmReleaseWithImageTag function.
- Improved test case structure by directly using clientMock in subroutine tests.
- Updated config struct definition for clarity.

…g in operator
- Added gotemplates directory to the Dockerfile for inclusion in the build.
- Refactored operator to replace FluxCD client with a more generic Kubernetes client for improved flexibility.
- Updated merge logic in merge.go to enhance object merging behavior and logging.

- Added error handling for Infra client creation in RunController function to log the error and exit if the client cannot be created.
- Updated merge logic comments in merge.go for clarity.
- Adjusted test case formatting in subroutine_test.go for consistency.
…plate rendering
- Added `StatusFieldPath` and `StatusValue` to `ResourceType` for improved status checking of resources.
- Modified CRD definitions to reflect the new structure and descriptions.
- Adjusted wait logic to utilize the new status checking mechanism for ArgoCD Applications.
- Updated tests to cover new functionality related to status field checks.

…t-#2-argocd
…t-#2-argocd
@coderabbitai review
✅ Actions performed: Review triggered.
@coderabbitai full review
✅ Actions performed: Full review triggered.
Walkthrough
The change implements profile-driven deployments via a ConfigMap-based profile system and adds support for Argo CD alongside FluxCD. It introduces dual Kubernetes clients (infra/runtime) for optional remote runtime and remote infra reconciliation, switches many apply operations to server-side apply, and expands template-driven manifest generation with numerous Go templates for Applications, HelmReleases, and OCM Resources. API and config updates include a new ConfigMapReference type, modified ResourceType/FeatureToggle shapes, and new OperatorConfig fields for remote behavior and subroutine controls. Tests and e2e assets were updated accordingly.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
pkg/subroutines/resource/subroutine_test.go (1)
404-452: ⚠️ Potential issue | 🟡 Minor
Patch count assertion may be incorrect.
The `patchCallCount` variable is only incremented within the first `Patch` expectation (lines 405-429). The second `Patch` expectation (lines 433-444) is a separate mock setup that won't increment `patchCallCount`. However, the assertion on line 452 expects `patchCallCount` to be 2.
Additionally, line 432 sets up a `Get` mock, but the comment on line 431 states "HelmRelease now uses Patch (Server-Side Apply) instead of Update", which implies no Get is needed. This mock may be unnecessary.
Proposed fix
- // HelmRelease now uses Patch (Server-Side Apply) instead of Update
- clientMock.EXPECT().Get(mock.Anything, mock.Anything, mock.Anything).Return(nil)
- clientMock.EXPECT().Patch(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).RunAndReturn(
-     func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error {
-         unstr := obj.(*unstructured.Unstructured)
-         if unstr.GetKind() == "HelmRelease" {
+ clientMock.EXPECT().Patch(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).RunAndReturn(
+     func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error {
+         patchCallCount++
+         unstr := obj.(*unstructured.Unstructured)
+         if unstr.GetKind() == "HelmRelease" {
              version, found, err := unstructured.NestedString(unstr.Object, "spec", "chart", "spec", "version")

test/e2e/kind/helpers.go (1)
7-7: ⚠️ Potential issue | 🟠 Major
Consider using `text/template` instead of `html/template`.
The `html/template` package performs HTML escaping which could corrupt YAML content containing characters like `<`, `>`, or `&`. For manifest processing, `text/template` is more appropriate as it performs no escaping.
Proposed fix
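For context, the escaping difference is easy to demonstrate with the standard library; the template string and data below are made-up values, not from this repository:

```go
package main

import (
	"bytes"
	"fmt"
	htmltmpl "html/template"
	texttmpl "text/template"
)

// renderHTML renders tpl with html/template, which HTML-escapes data.
func renderHTML(tpl string, data any) string {
	var b bytes.Buffer
	if err := htmltmpl.Must(htmltmpl.New("h").Parse(tpl)).Execute(&b, data); err != nil {
		panic(err)
	}
	return b.String()
}

// renderText renders tpl with text/template, which emits data verbatim.
func renderText(tpl string, data any) string {
	var b bytes.Buffer
	if err := texttmpl.Must(texttmpl.New("t").Parse(tpl)).Execute(&b, data); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	data := "<none> & done"
	fmt.Println(renderHTML("value: {{.}}", data)) // value: &lt;none&gt; &amp; done
	fmt.Println(renderText("value: {{.}}", data)) // value: <none> & done
}
```

The escaped form would be copied verbatim into rendered manifests, which is why YAML processing should use `text/template`.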
- "html/template"
+ "text/template"

pkg/subroutines/featuretoggles_test.go (1)
82-90: ⚠️ Potential issue | 🟡 Minor
Reorder mock expectations and add call counts to prevent ambiguous matching.
The broad `mock.Anything, mock.Anything` matcher can satisfy calls intended for the specific `"root:orgs:default"` expectation, making behavior non-deterministic. Testify/mock provides no precedence rule for overlapping matchers; apply the specific expectation first and constrain both with `Once()` to make matching deterministic.
Proposed fix
- s.helperMock.EXPECT().
-     NewKcpClient(mock.Anything, mock.Anything).
-     Return(mockKcpClient, nil)
-
- s.helperMock.EXPECT().
-     NewKcpClient(mock.Anything, "root:orgs:default").
-     Return(mockKcpClient, nil)
+ s.helperMock.EXPECT().
+     NewKcpClient(mock.Anything, "root:orgs:default").
+     Return(mockKcpClient, nil).
+     Once()
+
+ s.helperMock.EXPECT().
+     NewKcpClient(mock.Anything, mock.Anything).
+     Return(mockKcpClient, nil).
+     Once()

pkg/subroutines/providersecret.go (1)
197-210: ⚠️ Potential issue | 🟠 Major
External host selection is overwritten.
`newConfig.Host` is set based on `pc.External`, but then it's immediately replaced with the front-proxy URL, so external connections never use the external host.
🔧 Suggested fix
  newConfig := rest.CopyConfig(cfg)
- hostPort := fmt.Sprintf("https://%s-front-proxy.%s:%s", operatorCfg.KCP.FrontProxyName, operatorCfg.KCP.Namespace, operatorCfg.KCP.FrontProxyPort)
- if pc.External {
-     newConfig.Host = getExternalKcpHost(instance, &operatorCfg) + address.Path
- } else {
-     newConfig.Host = getInternalKcpHost(&operatorCfg) + address.Path
- }
- host, err := url.JoinPath(hostPort, address.Path)
+ baseHost := getInternalKcpHost(&operatorCfg)
+ if pc.External {
+     baseHost = getExternalKcpHost(instance, &operatorCfg)
+ }
+ host, err := url.JoinPath(baseHost, address.Path)
  if err != nil {
      log.Error().Err(err).Msg("Failed to join path for provider connection")
      return ctrl.Result{}, errors.NewOperatorError(err, false, false)
  }
  newConfig.Host = host

test/e2e/kind/suite_kind_test.go (1)
167-172: ⚠️ Potential issue | 🟡 Minor
Duplicate scheme registration for fluxcdv2.
`fluxcdv2.AddToScheme(s.scheme)` is registered twice (Lines 167 and 172). Remove the duplicate registration.
Proposed fix
  utilruntime.Must(fluxcdv2.AddToScheme(s.scheme))
  utilruntime.Must(corev1.AddToScheme(s.scheme))
  utilruntime.Must(appsv1.AddToScheme(s.scheme))
  utilruntime.Must(certmanager.AddToScheme(s.scheme))
  utilruntime.Must(fluxcdv1.AddToScheme(s.scheme))
- utilruntime.Must(fluxcdv2.AddToScheme(s.scheme))
  utilruntime.Must(apiextensionsv1.AddToScheme(s.scheme))
🤖 Fix all issues with AI agents
In `@cmd/operator.go`:
- Around line 143-146: The error log for the Resource reconciler is using the
wrong controller name ("PlatformMesh"); update the setup failure log in the
resourceReconciler.SetupWithManager error branch so the setupLog.Error call uses
the correct controller identifier (e.g., "Resource" or "ResourceReconciler")
instead of "PlatformMesh"—locate the controller.NewResourceReconciler
instantiation and the subsequent SetupWithManager(error) block and replace the
"controller", "PlatformMesh" pair with the appropriate name to accurately
reflect the Resource controller.
In `@Dockerfile`:
- Line 37: The COPY instruction using "COPY --chown=${USER_UID}:${GROUP_UID}
gotemplates/ /operator/gotemplates" will fail in the final "FROM scratch" stage
because gotemplates/ isn't present there; add gotemplates/ to the builder stage
(ensure it's copied into the builder image during the builder stage) and then
update the final-stage COPY to pull from the builder using the --from=builder
flag (e.g., COPY --from=builder --chown=${USER_UID}:${GROUP_UID} gotemplates/
/operator/gotemplates) so the files and ownership are available in the final
image.
In `@gotemplates/components/infra/helmreleases.yaml`:
- Around line 18-41: The template can emit duplicate chart keys when both
$config.helmRepo and $config.gitRepo are true; change the separate conditionals
to a single mutually-exclusive branch by replacing the second {{- if
$config.gitRepo }} with {{- else if $config.gitRepo }} (and keep the final {{-
else }} for the OCIRepository chartRef case), or alternatively add an explicit
guard that errors when both $config.helmRepo and $config.gitRepo are set; update
the chart/chartRef blocks (including reconcileStrategy and sourceRef usage) so
only one branch produces output.
In `@gotemplates/infra/runtime/traefik/resource.yaml`:
- Around line 21-25: The Go template is trimming the newline after the
`referencePath:` key because `{{- range .ocm.referencePath }}` and `{{- end }}`
remove preceding whitespace; change those delimiters to `{{ range
.ocm.referencePath }}` and `{{ end }}` so the list renders on subsequent lines,
keeping the sequence items (`- name: {{ .name }}` and `- name: {{ .traefik.name
}}`) properly indented under `referencePath:` to produce a valid YAML list.
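The trim behavior described in the prompt above can be reproduced with a small `text/template` example; the key and item names are generic stand-ins, not the actual traefik template:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes tpl against data and returns the output.
func render(tpl string, data any) string {
	var b bytes.Buffer
	if err := template.Must(template.New("t").Parse(tpl)).Execute(&b, data); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	items := []string{"a", "b"}
	// {{- range }} eats the newline after "referencePath:", gluing the
	// list items onto the key's line (invalid as a YAML block sequence).
	trimmed := "referencePath:\n{{- range . }} - name: {{ . }}{{- end }}\n"
	// {{ range }} leaves the newline alone, so each item starts a new line.
	kept := "referencePath:\n{{ range . }}- name: {{ . }}\n{{ end }}"
	fmt.Printf("%q\n", render(trimmed, items)) // "referencePath: - name: a - name: b\n"
	fmt.Printf("%q\n", render(kept, items))   // "referencePath:\n- name: a\n- name: b\n"
}
```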
In `@pkg/subroutines/deployment_helpers.go`:
- Around line 35-67: renderAndApplyTemplates currently calls renderTemplateFile
and only applies the first resource because yaml.Unmarshal drops subsequent
documents; change the post-render handling in renderAndApplyTemplates (after
calling r.renderTemplateFile) to decode the rendered YAML as a multi-document
stream using k8s.io/apimachinery/pkg/util/yaml.NewYAMLOrJSONDecoder (or
equivalent) and iterate decoding each document into an unstructured/untyped
runtime.Object (matching how renderTemplateFile previously returned obj),
skipping empty docs and applying each decoded object with k8sClient.Patch(ctx,
obj, client.Apply, client.FieldOwner(fieldManagerDeployment),
client.ForceOwnership), and propagate errors via errors.Wrap (include
path/Kind/Name) so all rendered resources from templates like applications.yaml
and ocm-resources.yaml are applied instead of only the first.
In `@pkg/subroutines/deployment.go`:
- Around line 261-287: The istio-istiod block dereferences rel without checking
for nil; after calling getDeploymentResource (in the same block handling
deploymentTech) add a nil check for rel (and treat err==nil && rel==nil as
resource-not-found), log an appropriate message, and return the same pattern of
errors.NewOperatorError used elsewhere so you don't access rel.Object; ensure
this check is before any use of unstructured.NestedString or
matchesConditionWithStatus and reference the symbols getDeploymentResource, rel,
deploymentTech, unstructured.NestedString, and matchesConditionWithStatus when
making the change.
In `@README.md`:
- Around line 146-149: The README still mentions DEPLOYMENT_KUBECONFIG; update
those sentences to reference the actual env vars used in the
codebase—REMOTE_INFRA_KUBECONFIG and REMOTE_RUNTIME_KUBECONFIG—so docs match
configuration; specifically change the “In-cluster behavior” and “Remote
deployment” bullets to explain that when those REMOTE_* vars are unset the
operator uses in-cluster credentials and that setting REMOTE_INFRA_KUBECONFIG /
REMOTE_RUNTIME_KUBECONFIG enables deploying to a remote cluster.
🧹 Nitpick comments (13)
Taskfile.yml (1)
86-88: Make the `/etc/hosts` update idempotent.
Appending unconditionally can create duplicate entries across reruns. Consider a guard before appending.
Proposed change
- echo "127.0.0.1 portal.dev.local kcp.api.portal.dev.local" | sudo tee -a /etc/hosts
+ if ! grep -q 'portal.dev.local' /etc/hosts; then
+   echo "127.0.0.1 portal.dev.local kcp.api.portal.dev.local" | sudo tee -a /etc/hosts
+ fi

gotemplates/infra/runtime/cert-manager/resource-cainjector.yaml (1)
9-10: Consider adding a similar annotation to resource-controller.yaml for consistency.
This template includes an `annotations.path` field (`cainjector.image.tag`), but the corresponding `resource-controller.yaml` template lacks this annotation. If the path annotation is required for image tag resolution, verify whether `resource-controller.yaml` should include a similar annotation (e.g., `controller.image.tag` or equivalent).

test/e2e/kind/helpers.go (1)
71-122: Code duplication between template and non-template parsing.
`unstructuredsFromFile` and `unstructuredsFromFileNoTemplate` share significant logic for reading files, splitting YAML documents, unmarshalling, and logging. Consider extracting the shared parsing logic to reduce duplication.
Proposed refactor
+func parseYAMLDocuments(content []byte, path string, log *logger.Logger) ([]unstructured.Unstructured, error) {
+	objects := strings.Split(string(content), "---\n")
+	var unstructuredObjs []unstructured.Unstructured
+	for _, obj := range objects {
+		var objMap map[string]interface{}
+		if err := yaml.Unmarshal([]byte(obj), &objMap); err != nil {
+			return nil, errors.Wrap(err, "Failed to unmarshal YAML from %s", path)
+		}
+		u := unstructured.Unstructured{Object: objMap}
+		log.Debug().Str("file", path).Str("kind", u.GetKind()).Str("name", u.GetName()).Str("namespace", u.GetNamespace()).Msg("Preparing manifest")
+		unstructuredObjs = append(unstructuredObjs, u)
+	}
+	return unstructuredObjs, nil
+}
+
 func unstructuredsFromFile(path string, templateData map[string]string, log *logger.Logger) ([]unstructured.Unstructured, error) {
 	manifestBytes, err := os.ReadFile(path)
 	if err != nil {
 		return []unstructured.Unstructured{}, errors.Wrap(err, "Failed to read file, pwd: %s", path)
 	}
-	log.Debug().Str("file", path).Str("template", string(manifestBytes)).Str("templateData", fmt.Sprintf("%+v", templateData)).Msg("Replacing template")
-
 	res, err := ReplaceTemplate(templateData, manifestBytes)
 	if err != nil {
 		return []unstructured.Unstructured{}, errors.Wrap(err, "Failed to replace template with path: %s", path)
 	}
-
-	// split the result into multiple YAML objects
-	objects := strings.Split(string(res), "---\n")
-	var unstructuredObjs []unstructured.Unstructured
-	for _, obj := range objects {
-		var objMap map[string]interface{}
-		if err := yaml.Unmarshal([]byte(obj), &objMap); err != nil {
-			return []unstructured.Unstructured{}, errors.Wrap(err, "Failed to unmarshal YAML from template %s.\nOutput:\n%s", path, string(res))
-		}
-
-		obj := unstructured.Unstructured{Object: objMap}
-
-		log.Debug().Str("file", path).Str("kind", obj.GetKind()).Str("name", obj.GetName()).Str("namespace", obj.GetNamespace()).Msg("Applying manifest")
-		unstructuredObjs = append(unstructuredObjs, obj)
-	}
-	return unstructuredObjs, nil
+	return parseYAMLDocuments(res, path, log)
 }

gotemplates/components/runtime/ocm-image-resources.yaml (1)
15-22: Inconsistent YAML indentation between labels and annotations.
Line 17 uses `{{- toYaml . | nindent 4 }}` (with dash for whitespace trimming), while line 21 uses `{{ toYaml . | nindent 4 }}` (without dash). This inconsistency may produce unexpected whitespace in the rendered output.
Suggested fix for consistent whitespace handling
  {{- with .annotations }}
  annotations:
-{{ toYaml . | nindent 4 }}
+  {{- toYaml . | nindent 4 }}
  {{- end }}

pkg/subroutines/deployment.go (4)
380-383: Creating a new logger instead of using the context logger.
This function creates a new logger with `logger.DefaultConfig()` instead of using `logger.LoadLoggerFromContext(ctx)`, which is inconsistent with other methods in this file and may result in different log formatting or output destinations.
Proposed fix to use context logger
- // Merge infra profile (base) with templateVars (overrides)
- // templateVars take precedence over profile values
- log, err := logger.New(logger.DefaultConfig())
- if err != nil {
-     return nil, errors.Wrap(err, "Failed to create logger")
- }
- tmplVars, err := merge.MergeMaps(infraProfileMap, templateVarsMap, log)
+ log := logger.LoadLoggerFromContext(ctx)
+ tmplVars, err := merge.MergeMaps(infraProfileMap, templateVarsMap, log)

Note: This requires adding `ctx context.Context` as a parameter if not already present, which it is in this case.
526-529: Same logger creation inconsistency in buildComponentsTemplateVars.
Creates a new logger instead of using the context logger passed to the function.
Proposed fix
  func (r *DeploymentSubroutine) buildComponentsTemplateVars(ctx context.Context, inst *v1alpha1.PlatformMesh, templateVars apiextensionsv1.JSON) (map[string]interface{}, error) {
-     log, err := logger.New(logger.DefaultConfig())
-     if err != nil {
-         return nil, errors.Wrap(err, "Failed to create logger")
-     }
+     log := logger.LoadLoggerFromContext(ctx)
969-1012: Duplicated ArgoCD Application preservation logic.
The logic to preserve existing `repoURL` and `targetRevision` in ArgoCD Applications (lines 969-1012) is duplicated in `renderAndApplyComponentsInfraTemplates` (lines 1116-1163). Consider extracting this into a helper function.
Suggested helper function
// preserveArgoCDApplicationFields removes repoURL and targetRevision from the new object
// if the existing Application has non-placeholder values, to preserve ResourceSubroutine updates.
func (r *DeploymentSubroutine) preserveArgoCDApplicationFields(ctx context.Context, obj *unstructured.Unstructured) error {
	if obj.GetKind() != "Application" || obj.GetAPIVersion() != "argoproj.io/v1alpha1" {
		return nil
	}
	existingApp := &unstructured.Unstructured{}
	existingApp.SetGroupVersionKind(obj.GroupVersionKind())
	if err := r.clientInfra.Get(ctx, client.ObjectKey{Name: obj.GetName(), Namespace: obj.GetNamespace()}, existingApp); err != nil {
		return nil // Not found is OK
	}
	// ... preservation logic ...
	return nil
}

Also applies to: 1116-1163
1417-1439: getDeploymentResource returns nil without error on NotFound.
Returning `(nil, nil)` when the resource is not found makes it easy for callers to forget the nil check. Consider returning a sentinel error or changing the return signature to indicate "not found" explicitly.
Proposed fix
  func getDeploymentResource(ctx context.Context, client client.Client, resourceName string, resourceNamespace string, deploymentTech string) (*unstructured.Unstructured, error) {
  	deploymentTech = strings.ToLower(deploymentTech)
  	obj := &unstructured.Unstructured{}
  	if deploymentTech == "argocd" {
  		obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "argoproj.io", Version: "v1alpha1", Kind: "Application"})
  	} else {
  		// Default to FluxCD
  		obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "helm.toolkit.fluxcd.io", Version: "v2", Kind: "HelmRelease"})
  	}
  	err := client.Get(ctx, types.NamespacedName{Name: resourceName, Namespace: resourceNamespace}, obj)
  	if err != nil {
  		if kerrors.IsNotFound(err) {
  			log.Info().Str("deploymentTechnology", deploymentTech).Msgf("%s/%s resource not found, waiting for it to be created", resourceName, resourceNamespace)
-			return nil, nil
+			return nil, err // Return the NotFound error so callers can handle it
  		}
  		log.Error().Err(err).Str("deploymentTechnology", deploymentTech).Msgf("Failed to get %s/%s resource", resourceName, resourceNamespace)
-		return nil, nil
+		return nil, err
  	}
  	return obj, nil
  }

pkg/subroutines/resource/subroutine.go (2)
519-546: Unused loop variable workaround.
Line 535 uses `_ = i` to suppress the unused variable warning. Consider restructuring the loop to avoid this pattern.
Cleaner loop structure
  func getNestedString(m map[string]interface{}, path ...string) (string, bool) {
  	if len(path) == 0 {
  		return "", false
  	}
  	current := m
- 	for i, key := range path[:len(path)-1] {
+ 	for _, key := range path[:len(path)-1] {
  		val, ok := current[key]
  		if !ok {
  			return "", false
  		}
  		if valMap, ok := val.(map[string]interface{}); ok {
  			current = valMap
  		} else {
  			return "", false
  		}
- 		_ = i // avoid unused variable
  	}
750-751: Hardcoded ConfigMap names may reduce flexibility.
The fallback ConfigMap names `["platform-mesh-profile", "platform-mesh-system-profile"]` are hardcoded. If the naming convention changes, this code will need to be updated.
Consider extracting these to constants or deriving them from the namespace name for consistency with the pattern used elsewhere (e.g., `inst.Name + "-profile"`).

pkg/subroutines/wait.go (1)
72-74: Make status-check precedence explicit when both configs are set.
When `StatusFieldPath` is present, condition-based checks are ignored; a small warning helps catch misconfiguration.
Proposed tweak
- useStatusFieldPath := len(resourceType.StatusFieldPath) > 0
+ useStatusFieldPath := len(resourceType.StatusFieldPath) > 0
+ if useStatusFieldPath && resourceType.ConditionType != "" {
+     log.Warn().Msg("Both StatusFieldPath and ConditionType are set; using StatusFieldPath")
+ }

test/e2e/kind/suite_kind_test.go (1)
510-533: Redundant CRD availability check.
This `Eventually` loop checking for `repositories.delivery.ocm.software` CRD establishment duplicates the logic in `waitForCRDEstablished` called at Line 417. The `time.Sleep(15 * time.Second)` at Line 511 followed by the `Eventually` block is redundant since `waitForCRDEstablished` already handles polling with timeout.
Consider removing the redundant wait logic in `applyKustomize`:
Proposed simplification
  err = kapply.ApplyDir(ctx, "../../../test/e2e/kind/kustomize/base", clients)
  if err != nil {
  	s.logger.Error().Err(err).Msg("Failed to apply base kustomize manifests")
  	return err
  }
- // wait for Repository CRD to be available
- time.Sleep(15 * time.Second)
- avail := s.Eventually(func() bool {
- 	crd := &apiextensionsv1.CustomResourceDefinition{}
- 	err := s.client.Get(ctx, client.ObjectKey{
- 		Name: "repositories.delivery.ocm.software",
- 	}, crd)
- 	if err != nil {
- 		s.logger.Warn().Err(err).Msg("Repository CRD not found")
- 		return false
- 	}
- 	// Check if the CRD is established (ready to use)
- 	for _, condition := range crd.Status.Conditions {
- 		if condition.Type == apiextensionsv1.Established && condition.Status == apiextensionsv1.ConditionTrue {
- 			return true
- 		}
- 	}
- 	s.logger.Warn().Msg("Repository CRD not yet established")
- 	return false
- }, 120*time.Second, 5*time.Second, "Repository CRD did not become available")
-
- if !avail {
- 	return fmt.Errorf("Repository CRD is not available")
- }
-
  s.logger.Info().Msg("kapply finished successfully")
  return nil
  }

internal/controller/platformmesh_controller.go (1)
88-130: Consider logging errors from List operation.
The `List` call at Line 100 silently returns an empty request slice on error. This could mask issues during reconciliation. Consider logging the error for observability.
Proposed improvement
  func (r *PlatformMeshReconciler) mapConfigMapToPlatformMesh(ctx context.Context, obj client.Object) []reconcile.Request {
  	var requests []reconcile.Request
  	configMap, ok := obj.(*corev1.ConfigMap)
  	if !ok {
  		return requests
  	}
  	// List all PlatformMesh resources
  	platformMeshList := &corev1alpha1.PlatformMeshList{}
  	// We use the context from the handler since it's provided by controller-runtime
  	if err := r.client.List(ctx, platformMeshList); err != nil {
+ 		// Log error but return empty requests to avoid blocking reconciliation
+ 		ctrl.Log.Error(err, "failed to list PlatformMesh resources for ConfigMap mapping")
  		return requests
  	}
cmd/operator.go
Outdated
resourceReconciler := controller.NewResourceReconciler(log, mgr, &operatorCfg, clientInfra)
if err := resourceReconciler.SetupWithManager(mgr, defaultCfg, log.ChildLogger("type", "Resource")); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "PlatformMesh")
	os.Exit(1)
Fix controller name in Resource reconciler error log.
The log currently labels the Resource controller as “PlatformMesh,” which is misleading for debugging.
Proposed fix
- setupLog.Error(err, "unable to create controller", "controller", "PlatformMesh")
+ setupLog.Error(err, "unable to create controller", "controller", "Resource")

🤖 Prompt for AI Agents
In `@cmd/operator.go` around lines 143 - 146, The error log for the Resource
reconciler is using the wrong controller name ("PlatformMesh"); update the setup
failure log in the resourceReconciler.SetupWithManager error branch so the
setupLog.Error call uses the correct controller identifier (e.g., "Resource" or
"ResourceReconciler") instead of "PlatformMesh"—locate the
controller.NewResourceReconciler instantiation and the subsequent
SetupWithManager(error) block and replace the "controller", "PlatformMesh" pair
with the appropriate name to accurately reflect the Resource controller.
COPY --from=builder --chown=${USER_UID}:${GROUP_UID} /workspace/manager /operator/manager
COPY --from=builder --chown=${USER_UID}:${GROUP_UID} /workspace/manifests /operator/manifests

COPY --chown=${USER_UID}:${GROUP_UID} gotemplates/ /operator/gotemplates
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify build context includes gotemplates and check for .dockerignore
echo "=== Checking for .dockerignore ==="
find . -name ".dockerignore" -type f
echo "=== Checking .dockerignore content for gotemplates ==="
if [ -f .dockerignore ]; then
cat -n .dockerignore
fi
echo "=== Checking if gotemplates directory exists ==="
ls -la gotemplates/ 2>&1 || echo "gotemplates/ not found"
echo "=== Checking Dockerfile content ==="
cat -n Dockerfile

Repository: platform-mesh/platform-mesh-operator
Length of output: 2145
🌐 Web query:
Docker FROM scratch COPY build context
💡 Result:
- `FROM scratch` means your final stage starts from an empty image (no shell, no package manager). You typically only use `COPY`/`ADD` plus metadata instructions (like `CMD`, `ENTRYPOINT`, `ENV`). (docs.docker.com)
- `COPY <src> <dest>` can only copy files that are in the build context you pass to `docker build` (e.g., `docker build .` makes `.` the context). `COPY`/`ADD` source paths are resolved relative to the context root, not relative to where the Dockerfile lives. (docs.docker.com)
- You can't do `COPY ../something ...` to pull files from outside the build context (it's blocked for security). (docker.com)
Common pattern (multi-stage build): build in one stage, copy into scratch:
FROM golang:1.25 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app ./...
FROM scratch
COPY --from=build /bin/app /bin/app
CMD ["/bin/app"]

This copies from a previous stage (`--from=build`), not from your host filesystem. (docs.docker.com)
If you truly need files from outside the main context: with BuildKit/buildx you can pass additional named build contexts and then COPY --from=<name> .... (docker.com)
Citations:
- 1: https://docs.docker.com/build/building/multi-stage/?utm_source=openai
- 2: https://docs.docker.com/build/building/context/?utm_source=openai
- 3: https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/?utm_source=openai
- 4: https://docs.docker.com/build/building/multi-stage/?utm_source=openai
- 5: https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/?utm_source=openai
Fix COPY instruction: line 37 will fail in FROM scratch stage without --from=builder.
The COPY --chown=${USER_UID}:${GROUP_UID} gotemplates/ /operator/gotemplates instruction cannot access the build context in a FROM scratch stage. Additionally, gotemplates/ is not copied into the builder stage, so it is unavailable even with --from=builder.
Add gotemplates/ to the builder stage and use --from=builder to copy it into the final image:
Required fix
COPY manifests/ manifests/
+COPY gotemplates/ gotemplates/
# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=${TARGETARCH} go build -ldflags '-w -s' -o manager main.go
FROM scratch
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ENV USER_UID=1001
ENV GROUP_UID=1001
COPY --from=builder --chown=${USER_UID}:${GROUP_UID} /workspace/manager /operator/manager
COPY --from=builder --chown=${USER_UID}:${GROUP_UID} /workspace/manifests /operator/manifests
-COPY --chown=${USER_UID}:${GROUP_UID} gotemplates/ /operator/gotemplates
+COPY --from=builder --chown=${USER_UID}:${GROUP_UID} /workspace/gotemplates /operator/gotemplates

🤖 Prompt for AI Agents
In `@Dockerfile` at line 37, The COPY instruction using "COPY
--chown=${USER_UID}:${GROUP_UID} gotemplates/ /operator/gotemplates" will fail
in the final "FROM scratch" stage because gotemplates/ isn't present there; add
gotemplates/ to the builder stage (ensure it's copied into the builder image
during the builder stage) and then update the final-stage COPY to pull from the
builder using the --from=builder flag (e.g., COPY --from=builder
--chown=${USER_UID}:${GROUP_UID} gotemplates/ /operator/gotemplates) so the
files and ownership are available in the final image.
```diff
 rel, err := getDeploymentResource(ctx, r.clientInfra, "cert-manager", inst.Namespace, deploymentTech)
 if err != nil {
-	log.Error().Err(err).Msg("Failed to get cert-manager Release")
+	log.Error().Err(err).Msg("Failed to get cert-manager resource")
 	return ctrl.Result{}, errors.NewOperatorError(err, false, false)
 }
-if !matchesConditionWithStatus(rel, "Ready", "True") {
-	log.Info().Msg("cert-manager Release is not ready..")
-	return ctrl.Result{}, errors.NewOperatorError(errors.New("cert-manager Release is not ready"), true, false)
-}
+if deploymentTech == "argocd" {
+	// For ArgoCD Applications, check status.sync.status and status.health.status directly
+	// ArgoCD Applications may not have conditions initially, so check status fields directly
+	syncStatus, found, _ := unstructured.NestedString(rel.Object, "status", "sync", "status")
+	healthStatus, healthFound, _ := unstructured.NestedString(rel.Object, "status", "health", "status")
+	if !found || syncStatus != "Synced" {
+		log.Info().Str("deploymentTechnology", deploymentTech).
+			Str("syncStatus", syncStatus).Msg("cert-manager Application is not synced..")
+		return ctrl.Result{}, errors.NewOperatorError(errors.New("cert-manager Application is not synced"), true, false)
+	}
+	if !healthFound || healthStatus != "Healthy" {
+		log.Info().Str("deploymentTechnology", deploymentTech).
+			Str("healthStatus", healthStatus).Msg("cert-manager Application is not healthy..")
+		return ctrl.Result{}, errors.NewOperatorError(errors.New("cert-manager Application is not healthy"), true, false)
+	}
+} else {
+	// For FluxCD HelmReleases, check Ready condition
+	if !matchesConditionWithStatus(rel, "Ready", "True") {
+		log.Info().Str("deploymentTechnology", deploymentTech).Msg("cert-manager Release is not ready..")
+		return ctrl.Result{}, errors.NewOperatorError(errors.New("cert-manager Release is not ready"), true, false)
+	}
+}

 mergedValues, err := MergeValuesAndServices(inst, templateVars)
 if err != nil {
 	log.Error().Err(err).Msg("Failed to merge templateVars and services")
 	return ctrl.Result{}, errors.NewOperatorError(err, true, true)
```
Potential nil pointer dereference when deployment resource is not found.
`getDeploymentResource` returns `(nil, nil)` when the resource is not found (lines 1431-1436). The caller at line 204 does not check for nil before accessing `rel.Object` at line 212, which will cause a panic if the resource does not exist.
Proposed fix to handle nil resource

```diff
 rel, err := getDeploymentResource(ctx, r.clientInfra, "cert-manager", inst.Namespace, deploymentTech)
 if err != nil {
 	log.Error().Err(err).Msg("Failed to get cert-manager resource")
 	return ctrl.Result{}, errors.NewOperatorError(err, false, false)
 }
+if rel == nil {
+	log.Info().Str("deploymentTechnology", deploymentTech).Msg("cert-manager resource not found, waiting for it to be created")
+	return ctrl.Result{}, errors.NewOperatorError(errors.New("cert-manager resource not found"), true, false)
+}
 if deploymentTech == "argocd" {
```

```diff
 rel, err := getDeploymentResource(ctx, r.clientInfra, "istio-istiod", inst.Namespace, deploymentTech)
 if err != nil {
-	log.Error().Err(err).Msg("Failed to get istio-istiod Release")
+	log.Error().Err(err).Msg("Failed to get istio-istiod resource")
 	return ctrl.Result{}, errors.NewOperatorError(err, false, false)
 }
-if !matchesConditionWithStatus(rel, "Ready", "True") {
-	log.Info().Msg("istio-istiod Release is not ready..")
-	return ctrl.Result{}, errors.NewOperatorError(errors.New("istio-istiod Release is not ready"), true, false)
-}
+if deploymentTech == "argocd" {
+	// For ArgoCD Applications, check status.sync.status and status.health.status directly
+	syncStatus, found, _ := unstructured.NestedString(rel.Object, "status", "sync", "status")
+	healthStatus, healthFound, _ := unstructured.NestedString(rel.Object, "status", "health", "status")
+	if !found || syncStatus != "Synced" {
+		log.Info().Str("deploymentTechnology", deploymentTech).
+			Str("syncStatus", syncStatus).Msg("istio-istiod Application is not synced..")
+		return ctrl.Result{}, errors.NewOperatorError(errors.New("istio-istiod Application is not synced"), true, false)
+	}
+	if !healthFound || healthStatus != "Healthy" {
+		log.Info().Str("deploymentTechnology", deploymentTech).
+			Str("healthStatus", healthStatus).Msg("istio-istiod Application is not healthy..")
+		return ctrl.Result{}, errors.NewOperatorError(errors.New("istio-istiod Application is not healthy"), true, false)
+	}
+} else {
+	// For FluxCD HelmReleases, check Ready condition
+	if !matchesConditionWithStatus(rel, "Ready", "True") {
+		log.Info().Str("deploymentTechnology", deploymentTech).Msg("istio-istiod Release is not ready..")
+		return ctrl.Result{}, errors.NewOperatorError(errors.New("istio-istiod Release is not ready"), true, false)
+	}
+}
```
Same nil pointer dereference risk for istio-istiod resource.
Similar to the cert-manager check, `rel` may be nil when the resource is not found, but the code proceeds to access `rel.Object` without a nil check.
Proposed fix

```diff
 rel, err := getDeploymentResource(ctx, r.clientInfra, "istio-istiod", inst.Namespace, deploymentTech)
 if err != nil {
 	log.Error().Err(err).Msg("Failed to get istio-istiod resource")
 	return ctrl.Result{}, errors.NewOperatorError(err, false, false)
 }
+if rel == nil {
+	log.Info().Str("deploymentTechnology", deploymentTech).Msg("istio-istiod resource not found, waiting for it to be created")
+	return ctrl.Result{}, errors.NewOperatorError(errors.New("istio-istiod resource not found"), true, false)
+}
 if deploymentTech == "argocd" {
```
🤖 Prompt for AI Agents
In `@pkg/subroutines/deployment.go` around lines 261 - 287, The istio-istiod block
dereferences rel without checking for nil; after calling getDeploymentResource
(in the same block handling deploymentTech) add a nil check for rel (and treat
err==nil && rel==nil as resource-not-found), log an appropriate message, and
return the same pattern of errors.NewOperatorError used elsewhere so you don't
access rel.Object; ensure this check is before any use of
unstructured.NestedString or matchesConditionWithStatus and reference the
symbols getDeploymentResource, rel, deploymentTech, unstructured.NestedString,
and matchesConditionWithStatus when making the change.
```markdown
- **Configuration methods**: All parameters can be set via environment variables (using underscore-separated uppercase names) or command-line flags (using kebab-case format, e.g., `--kcp-url`, `--workspace-dir`).
- **In-cluster behavior**: When running the operator inside a Kubernetes cluster without `KUBECONFIG` or `DEPLOYMENT_KUBECONFIG` set, it will use the in-cluster service account credentials.
- **Remote deployment**: Setting `DEPLOYMENT_KUBECONFIG` enables scenarios where the control plane (operator) runs in one cluster while deploying components to another cluster.
- **Subroutine control**: Individual subroutines can be disabled by setting their respective `_ENABLED` variables to `false`, allowing fine-grained control over operator behavior.
```
Docs still reference `DEPLOYMENT_KUBECONFIG`.
The configuration section defines `REMOTE_INFRA_KUBECONFIG` / `REMOTE_RUNTIME_KUBECONFIG`, so the notes should be aligned to avoid confusion.
📝 Suggested update

```diff
-- **In-cluster behavior**: When running the operator inside a Kubernetes cluster without `KUBECONFIG` or `DEPLOYMENT_KUBECONFIG` set, it will use the in-cluster service account credentials.
-- **Remote deployment**: Setting `DEPLOYMENT_KUBECONFIG` enables scenarios where the control plane (operator) runs in one cluster while deploying components to another cluster.
+- **In-cluster behavior**: When running the operator inside a Kubernetes cluster without `KUBECONFIG` set, it will use the in-cluster service account credentials.
+- **Remote deployment**: Setting `REMOTE_INFRA_KUBECONFIG` (and/or `REMOTE_RUNTIME_KUBECONFIG`) enables scenarios where the control plane (operator) runs in one cluster while deploying components to another cluster.
```
🤖 Prompt for AI Agents
In `@README.md` around lines 146 - 149, The README still mentions
DEPLOYMENT_KUBECONFIG; update those sentences to reference the actual env vars
used in the codebase—REMOTE_INFRA_KUBECONFIG and REMOTE_RUNTIME_KUBECONFIG—so
docs match configuration; specifically change the “In-cluster behavior” and
“Remote deployment” bullets to explain that when those REMOTE_* vars are unset
the operator uses in-cluster credentials and that setting
REMOTE_INFRA_KUBECONFIG / REMOTE_RUNTIME_KUBECONFIG enables deploying to a
remote cluster.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/subroutines/providersecret.go (1)
197-210: ⚠️ Potential issue | 🟠 Major

External provider host is overwritten.

`newConfig.Host` is set based on `pc.External`, then unconditionally overwritten by `hostPort`, so external connections are never honored.

Proposed fix

```diff
-	hostPort := fmt.Sprintf("https://%s-front-proxy.%s:%s", operatorCfg.KCP.FrontProxyName, operatorCfg.KCP.Namespace, operatorCfg.KCP.FrontProxyPort)
-	if pc.External {
-		newConfig.Host = getExternalKcpHost(instance, &operatorCfg) + address.Path
-	} else {
-		newConfig.Host = getInternalKcpHost(&operatorCfg) + address.Path
-	}
-	host, err := url.JoinPath(hostPort, address.Path)
+	baseHost := getInternalKcpHost(&operatorCfg)
+	if pc.External {
+		baseHost = getExternalKcpHost(instance, &operatorCfg)
+	}
+	host, err := url.JoinPath(baseHost, address.Path)
 	if err != nil {
 		log.Error().Err(err).Msg("Failed to join path for provider connection")
 		return ctrl.Result{}, errors.NewOperatorError(err, false, false)
 	}
 	newConfig.Host = host
```
🤖 Fix all issues with AI agents
In `@go.mod`:
- Around line 132-135: The dependency golang.org/x/net v0.49.0 requires Go
1.24.0, so either update the module's Go toolchain version or pin a compatible
x/net version: open go.mod and (1) if you support Go 1.24+, change the module's
"go" directive to 1.24 to match golang.org/x/net v0.49.0; or (2) if you must
stay on an older toolchain, downgrade golang.org/x/net to a version that
supports your current "go" directive. Refer to the golang.org/x/net v0.49.0
entry and the "go" directive in go.mod when making the change and run go mod
tidy and go test afterwards.
In `@gotemplates/components/infra/applications.yaml`:
- Around line 22-55: The template currently allows both $config.helmRepo and
$config.gitRepo blocks to render which creates overlapping source entries;
update the conditional structure so they are mutually exclusive (e.g., replace
the separate {{- if $config.helmRepo }} ... {{- end }} and {{- if
$config.gitRepo }} ... {{- end }} blocks with an if/else-if chain such as {{- if
$config.helmRepo }} ... {{- else if $config.gitRepo }} ... {{- end }}), ensuring
only the helm block (chart/helm/targetRevision) or the git block
(path/targetRevision) is emitted, and keep existing defaults/values logic inside
their respective branches.
In `@gotemplates/infra/infra/gateway-api/application.yaml`:
- Around line 25-30: The template currently accesses
.gatewayApi.kustomization.path unguarded which can fail when .gatewayApi.enabled
is true but kustomization is missing; update the template to guard access to
.gatewayApi.kustomization.path (e.g., wrap the reference in an if checking
.gatewayApi.enabled and that .gatewayApi.kustomization is defined) or
alternatively ensure schema validation requires .gatewayApi.kustomization when
.gatewayApi.enabled is true; specifically modify the block that references
.gatewayApi.kustomization.path to mirror the guarded pattern used with
.gatewayApi.kustomization.install.crds so rendering cannot access
.gatewayApi.kustomization.path unless the kustomization object exists.
In `@gotemplates/infra/infra/traefik/application.yaml`:
- Around line 1-2: The template currently only checks .traefik.enabled and emits
an ArgoCD Application (apiVersion: argoproj.io/v1alpha1) even when the selected
deploymentTechnology isn't ArgoCD; update the top-level conditional to also
require deploymentTechnology == "argocd" (e.g. change the existing {{- if
.traefik.enabled }} to something like {{- if and .traefik.enabled (eq
.deploymentTechnology "argocd") }} or the equivalent accessor used in this
chart) so the Application manifest is only rendered when Traefik is enabled and
the deploymentTechnology is argocd.
In `@gotemplates/infra/runtime/cert-manager/resource-webhook.yaml`:
- Around line 16-31: The template uses .ocm.interval and .ocm.skipVerify
directly which can render invalid YAML if unset; update the template in
resource-webhook.yaml to wrap these with the default helper (e.g., use {{
default "5m" .ocm.interval }} for .ocm.interval and {{ default false
.ocm.skipVerify }} for .ocm.skipVerify) so the fields always emit valid YAML
when the profile ConfigMap omits them; ensure you update both occurrences of
these symbols in the file and keep the surrounding indentation/format unchanged.
In `@gotemplates/infra/runtime/gateway-api/resource.yaml`:
- Around line 1-29: YAML lint fails because Go template directives (e.g., {{- if
.gatewayApi.enabled }}) in gotemplates/resource.yaml are being parsed by
yamllint; either add this template path to the yamllint exclude list (so files
like gotemplates/infra/runtime/gateway-api/resource.yaml are ignored) or rename
template files to a non-.yaml extension (e.g., resource.yaml.tmpl) and update
the CI lint target configuration to only include real YAML files so that
constructs like .gatewayApi.enabled, .ocm.referencePath, and
.gatewayApi.ocmResourceName are not linted by yamllint.
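For the first option, the exclusion can be expressed in a yamllint config; this is a sketch under the assumption that the repo lints with a `.yamllint` file at the root (the existing config contents are not known from this diff):

```yaml
# .yamllint (sketch): skip Go-template files that are not valid YAML
extends: default
ignore: |
  gotemplates/
```

yamllint's `ignore` key takes newline-separated, gitignore-style patterns, so the whole template tree can be excluded without touching file extensions.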
In `@gotemplates/infra/runtime/traefik/resource-crds.yaml`:
- Around line 21-25: The Go-template trims the newline after the
`referencePath:` key because `{{- range .ocm.referencePath }}` and `{{- end }}`
remove preceding whitespace, causing the list items and the key to be on the
same line; update the template to preserve the newline by using non-trimming
delimiters (e.g., `{{ range .ocm.referencePath }}` and `{{ end }}`) or otherwise
ensure a newline after `referencePath:` so each `- name: ...` (including the `-
name: {{ .traefikCRDs.ocmComponentName }}` entry) renders as its own YAML list
item.
In `@pkg/subroutines/deployment_test.go`:
- Around line 48-53: The test enables operatorCfg.RemoteInfra but constructs the
DeploymentSubroutine with a nil infra client (second argument), which can cause
panics when clientInfra methods are invoked; update the setup to create and pass
a mock infra client that implements the expected interface (e.g.,
Get/Patch/Delete/List) instead of nil and use that mock when calling
subroutines.NewDeploymentSubroutine(s.clientMock, <mockInfraClient>, &cfg,
&operatorCfg) so RemoteInfra paths execute against the mock; ensure the mock is
initialized on the test struct (e.g., s.infraMock) and wired into any other test
helpers that expect an infra client.
In `@pkg/subroutines/deployment.go`:
- Around line 1417-1439: getDeploymentResource currently returns (nil, nil) when
the resource is not found which causes nil dereferences in callers; change it to
return an explicit error instead and update callers to handle that error.
Specifically, add a sentinel error (e.g., ErrResourceNotFound) in the same
package and have getDeploymentResource return that error when
kerrors.IsNotFound(err) (or wrap the original error with context), and ensure
all call sites that call getDeploymentResource check for ErrResourceNotFound (or
use errors.Is) and implement the intended retry/wait logic rather than assuming
a non-nil object. Use the function name getDeploymentResource and the sentinel
ErrResourceNotFound to find and update code paths that currently assume a nil
error implies a valid object.
In `@pkg/subroutines/resource/subroutine.go`:
- Around line 748-799: The fallback in
getDeploymentTechnologyFromConfigMapDirect uses hardcoded configMapNames which
mismatches the primary lookup that builds names from the PlatformMesh resource
(pm.Name+"-profile"), causing misses for custom-named PlatformMesh resources;
update getDeploymentTechnologyFromConfigMapDirect to compute the same name
pattern(s) used by the primary lookup (e.g., include pm.Name+"-profile" and
pm.Name+"-system-profile" or accept the PlatformMesh name as a parameter) or
make the ConfigMap name list configurable so the fallback mirrors the primary
lookup behavior.
In `@test/e2e/kind/helpers.go`:
- Around line 20-37: In applyUnstructureds, the current Patch error stored in
err is being lost because errors.Wrap is wrapping errRet instead of the current
err; change the error accumulation so that when err != nil you create a wrapped
error that includes the path/kind/name and the current err (e.g. using
fmt.Errorf("Failed to apply manifest file: %s (%s/%s): %w", path, obj.GetKind(),
obj.GetName(), err)) and then set errRet = errors.Join(errRet, wrapped) so the
current error is preserved and combined with previous errors.
In `@test/e2e/kind/yaml/platform-mesh-resource/default-profile.yaml`:
- Around line 337-341: The OTLP endpoint namespace in the telemetry block is
inconsistent; update the telemetry.trace.otlp.endpoint value to use the same
namespace as the other services (replace
"observability-opentelemetry-collector.openmfp-observability.svc.cluster.local:4317"
with the correct namespace used elsewhere, e.g.
"observability-opentelemetry-collector.observability.svc.cluster.local:4317") so
traces are sent to the same collector; locate the endpoint via the YAML key path
telemetry.trace.otlp.endpoint and make the namespace change accordingly.
In `@test/e2e/kind/yaml/virtual-workspaces/vws-cert.yaml`:
- Around line 7-25: The certificate manifest currently sets spec.usages to only
"server auth", which replaces cert-manager's defaults and is likely
insufficient; edit the Certificate resource (the block containing dnsNames and
secretName: virtual-workspaces-tls) to either remove the usages field entirely
or add the standard TLS usages "digital signature" and "key encipherment"
alongside "server auth" under spec.usages so the certificate includes the
expected key usages.
🧹 Nitpick comments (20)
gotemplates/components/runtime/ocm-image-resources.yaml (1)
42-42: Hardcoded interval differs from sibling template.

This template uses a hardcoded `1m` interval, whereas `ocm-resources.yaml` uses `{{ $values.ocm.interval }}`. If consistent configurability is intended, consider parameterizing this value.

Suggested change

```diff
-  interval: 1m
+  interval: {{ $values.ocm.interval }}
```

test/e2e/kind/yaml/virtual-workspaces/vws-cert.yaml (1)
27-32: Prefer a namespaced Issuer unless cross-namespace issuance is required.

cert-manager documents that `Issuer` is namespace-scoped while `ClusterIssuer` is cluster-scoped for use across namespaces. The Certificate in `platform-mesh-system` references only this `selfsigned-issuer` with no cross-namespace usage detected. Converting to a namespaced `Issuer` would be sufficient and aligns with the principle of minimal cluster-wide scope.

Proposed refactor (namespaced Issuer)

```diff
   issuerRef:
     group: cert-manager.io
-    kind: ClusterIssuer
+    kind: Issuer
     name: selfsigned-issuer
 ---
 apiVersion: cert-manager.io/v1
-kind: ClusterIssuer
+kind: Issuer
 metadata:
   name: selfsigned-issuer
+  namespace: platform-mesh-system
 spec:
   selfSigned: {}
```

gotemplates/infra/infra/appproject.yaml (1)
24-24: Remove trailing blank line.

Static analysis flagged an extra blank line at the end of the file.

Proposed fix

```diff
 {{- end }}
-
```

gotemplates/components/infra/helmreleases.yaml (1)
1-3: Unused variable `$values`.

Line 1 assigns `$values := .values` but this variable is never referenced in the template. Either remove it or use it consistently instead of `.values`.

Proposed fix

```diff
-{{ $values := .values }}
 {{- range $service, $config := .values.services }}
```

gotemplates/components/infra/applications.yaml (1)
1-1: Unused variable `$values`.

The variable is assigned but never referenced.

Proposed fix

```diff
-{{ $values := .values }}
 {{- range $service, $config := .values.services }}
```

gotemplates/infra/infra/traefik/helmrelease.yaml (1)
30-31: Consider guarding the `values` block with a conditional.

If `.traefik.values` is nil or empty, this renders an empty `values:` key, which may cause validation issues. Other templates in this PR wrap similar blocks in conditionals.

Proposed fix

```diff
+{{- if .traefik.values }}
   values:
 {{ toYaml .traefik.values | nindent 4 }}
+{{- end }}
```

gotemplates/infra/infra/cert-manager/application.yaml (2)
25-26: Using wildcard `targetRevision` may cause unexpected upgrades.

`targetRevision: "*"` will always sync to the latest chart version. This could introduce breaking changes unexpectedly. Consider allowing this to be configurable via `.certManager.targetRevision` with a sensible default, similar to the pattern used in `application-crds.yaml` for traefik.

Proposed fix

```diff
       chart: cert-manager
+      {{- if .certManager.targetRevision }}
+      targetRevision: "{{ .certManager.targetRevision }}"
+      {{- else }}
       targetRevision: "*"
+      {{- end }}
```
55-55: Trailing blank line. Minor style issue flagged by YAMLlint. Remove the trailing blank line for consistency.
gotemplates/infra/infra/traefik/application-crds.yaml (1)
54-54: Trailing blank line. Minor style issue flagged by YAMLlint. Remove the trailing blank line for consistency.
gotemplates/infra/infra/gateway-api/application.yaml (1)
38-47: Merge custom syncOptions with defaults instead of replacing.

Providing any custom syncOptions currently drops the CreateNamespace/ServerSideApply defaults, which can cause unexpected behavior.

Proposed adjustment

```diff
-  {{- if .gatewayApi.syncOptions }}
-  syncOptions:
-{{ toYaml .gatewayApi.syncOptions | nindent 4 }}
-  {{- else }}
-  syncOptions:
-  - CreateNamespace=true
-  - ServerSideApply=true
-  {{- if .gatewayApi.ignoreDifferences }}
-  - RespectIgnoreDifferences=true
-  {{- end }}
-  {{- if and .gatewayApi.kustomization.install.crds (eq .gatewayApi.kustomization.install.crds "Skip") }}
-  - SkipDryRunOnMissingResource=true
-  {{- end }}
-  {{- end }}
+  syncOptions:
+  - CreateNamespace=true
+  - ServerSideApply=true
+  {{- if .gatewayApi.ignoreDifferences }}
+  - RespectIgnoreDifferences=true
+  {{- end }}
+  {{- if and .gatewayApi.kustomization.install.crds (eq .gatewayApi.kustomization.install.crds "Skip") }}
+  - SkipDryRunOnMissingResource=true
+  {{- end }}
+  {{- range .gatewayApi.syncOptions }}
+  - {{ . }}
+  {{- end }}
```

gotemplates/infra/infra/traefik/application.yaml (1)
44-52: Preserve default syncOptions when custom options are supplied.

Custom syncOptions currently replace defaults; consider appending to keep CreateNamespace and RespectIgnoreDifferences intact.

Proposed adjustment

```diff
-  {{- if .traefik.syncOptions }}
-  syncOptions:
-{{ toYaml .traefik.syncOptions | nindent 4 }}
-  {{- else }}
-  syncOptions:
-  - CreateNamespace=true
-  {{- if .traefik.ignoreDifferences }}
-  - RespectIgnoreDifferences=true
-  {{- end }}
-  {{- end }}
+  syncOptions:
+  - CreateNamespace=true
+  {{- if .traefik.ignoreDifferences }}
+  - RespectIgnoreDifferences=true
+  {{- end }}
+  {{- range .traefik.syncOptions }}
+  - {{ . }}
+  {{- end }}
```

pkg/subroutines/resource/subroutine_test.go (1)
36-48: Consider consolidating deployment-technology mock helpers.

Both helpers differ mainly by the desired deploymentTechnology; a single parameterized helper would reduce duplication.

Also applies to: 1249-1273
pkg/subroutines/wait_test.go (1)
401-413: Remove commented-out code.

Line 403 contains a commented-out `List` expectation. Remove dead code to maintain clarity.

```diff
 s.clientMock.EXPECT().
 	Get(mock.Anything, types.NamespacedName{Namespace: "default", Name: "test-helmrelease"}, mock.Anything).
-	// List(mock.Anything, mock.AnythingOfType("*unstructured.UnstructuredList"), mock.Anything).
 	RunAndReturn(func(ctx context.Context, nn types.NamespacedName, obj client.Object, opts ...client.GetOption) error {
```

test/e2e/kind/suite_kind_test.go (1)
510-533: Redundant CRD availability check and unnecessary sleep.

This inline CRD check duplicates the logic in `waitForCRDEstablished` (lines 417-424). Additionally, the `time.Sleep(15 * time.Second)` at line 511 adds unnecessary delay since the `Eventually` loop already handles waiting.

Consider reusing `waitForCRDEstablished` here:

```diff
-	// wait for Repository CRD to be available
-	time.Sleep(15 * time.Second)
-	avail := s.Eventually(func() bool {
-		crd := &apiextensionsv1.CustomResourceDefinition{}
-		err := s.client.Get(ctx, client.ObjectKey{
-			Name: "repositories.delivery.ocm.software",
-		}, crd)
-		if err != nil {
-			s.logger.Warn().Err(err).Msg("Repository CRD not found")
-			return false
-		}
-		// Check if the CRD is established (ready to use)
-		for _, condition := range crd.Status.Conditions {
-			if condition.Type == apiextensionsv1.Established && condition.Status == apiextensionsv1.ConditionTrue {
-				return true
-			}
-		}
-		s.logger.Warn().Msg("Repository CRD not yet established")
-		return false
-	}, 120*time.Second, 5*time.Second, "Repository CRD did not become available")
-
-	if !avail {
-		return fmt.Errorf("Repository CRD is not available")
+	if err := s.waitForCRDEstablished(ctx, "repositories.delivery.ocm.software", 2*time.Minute); err != nil {
+		return fmt.Errorf("Repository CRD is not available: %w", err)
 	}
```

pkg/subroutines/resource/subroutine.go (1)
519-546: Remove unnecessary loop variable.

The `i` variable in the loop at line 525 is unused. Line 535 suppresses the linter warning with `_ = i`, but the variable serves no purpose.

Suggested fix

```diff
 func getNestedString(m map[string]interface{}, path ...string) (string, bool) {
 	if len(path) == 0 {
 		return "", false
 	}
 	current := m
-	for i, key := range path[:len(path)-1] {
+	for _, key := range path[:len(path)-1] {
 		val, ok := current[key]
 		if !ok {
 			return "", false
 		}
 		if valMap, ok := val.(map[string]interface{}); ok {
 			current = valMap
 		} else {
 			return "", false
 		}
-		_ = i // avoid unused variable
 	}
```

api/v1alpha1/platformmesh_types.go (1)
51-78: ResourceType expansion for ArgoCD status checking is well-designed. The addition of StatusFieldPath and StatusValue fields enables flexible nested status field checking for ArgoCD Applications. The documentation clearly explains that when StatusFieldPath is set, ConditionType/ConditionStatus are ignored.

Consider adding validation to ensure StatusValue is required when StatusFieldPath is set (currently documented but not enforced via kubebuilder markers).

Optional: Add validation marker

  // StatusFieldPath specifies a path to a nested status field to check instead of conditions.
  // When set, StatusValue is required and ConditionType/ConditionStatus are ignored.
  // Example: ["status", "sync", "status"] for ArgoCD Application sync status.
  // +optional
  StatusFieldPath []string `json:"statusFieldPath,omitempty"`

  // StatusValue is the expected value at the StatusFieldPath.
  // Required when StatusFieldPath is set.
  // Example: "Synced" for ArgoCD Application sync status.
  // +optional
+ // +kubebuilder:validation:XValidation:rule="size(self.statusFieldPath) == 0 || self.statusValue != ''",message="statusValue is required when statusFieldPath is set"
  StatusValue string `json:"statusValue,omitempty"`

pkg/subroutines/deployment.go (3)
328-395: templateVarsFromProfileInfra creates a new logger unnecessarily. At line 380, a new logger is created with logger.New(logger.DefaultConfig()) instead of using the context logger. This is inconsistent with other methods and creates unnecessary overhead.

Use context logger instead:

- // Merge infra profile (base) with templateVars (overrides)
- // templateVars take precedence over profile values
- log, err := logger.New(logger.DefaultConfig())
- if err != nil {
-     return nil, errors.Wrap(err, "Failed to create logger")
- }
+ log := logger.LoadLoggerFromContext(ctx)
  tmplVars, err := merge.MergeMaps(infraProfileMap, templateVarsMap, log)

Note: This requires adding ctx context.Context as a parameter if not already available, or passing the logger from the caller.
523-529: buildComponentsTemplateVars also creates a new logger unnecessarily. Same issue as templateVarsFromProfileInfra: a new logger is created instead of using the context logger available via logger.LoadLoggerFromContext(ctx).

Use context logger:

  func (r *DeploymentSubroutine) buildComponentsTemplateVars(ctx context.Context, inst *v1alpha1.PlatformMesh, templateVars apiextensionsv1.JSON) (map[string]interface{}, error) {
-     log, err := logger.New(logger.DefaultConfig())
-     if err != nil {
-         return nil, errors.Wrap(err, "Failed to create logger")
-     }
+     log := logger.LoadLoggerFromContext(ctx)
896-916: Silent error swallowing in renderTemplatesInValue may hide issues. Template parsing and execution errors are silently ignored (lines 902-908), returning the original value. While this prevents crashes from invalid templates, it could mask configuration errors that users should be aware of.

Consider adding debug logging when template rendering fails:

  parsed, err := template.New("value").Funcs(templateFuncMap()).Parse(val)
  if err != nil {
-     // If parsing fails, it might not be a valid template, so return the original value
+     // If parsing fails, it might not be a valid template, log and return the original value
+     // Note: Using log from context would require passing it through
      return val, nil
  }

internal/controller/platformmesh_controller.go (1)
88-130: Silent error handling in mapConfigMapToPlatformMesh. When r.client.List fails at line 100, the error is silently ignored and an empty slice is returned. This could hide connectivity or permission issues.

Add error logging:

  func (r *PlatformMeshReconciler) mapConfigMapToPlatformMesh(ctx context.Context, obj client.Object) []reconcile.Request {
      var requests []reconcile.Request
      configMap, ok := obj.(*corev1.ConfigMap)
      if !ok {
          return requests
      }
      // List all PlatformMesh resources
      platformMeshList := &corev1alpha1.PlatformMeshList{}
      // We use the context from the handler since it's provided by controller-runtime
      if err := r.client.List(ctx, platformMeshList); err != nil {
+         // Log at debug level since this is a watch handler and transient errors are common
+         ctrl.Log.WithName("platformmesh-controller").V(1).Error(err, "Failed to list PlatformMesh resources in ConfigMap mapper")
          return requests
      }
go.mod
Outdated
golang.org/x/net v0.49.0 // indirect
golang.org/x/oauth2 v0.34.0 // indirect
golang.org/x/sync v0.19.0 // indirect
- golang.org/x/term v0.38.0 // indirect
+ golang.org/x/term v0.39.0 // indirect
🧩 Analysis chain
🌐 Web query:
golang.org/x/net v0.49.0 release notes breaking changes security
💡 Result:
golang.org/x/net v0.49.0 (tagged Jan 12, 2026) — “release notes”
There aren’t curated human-written release notes for this tag; v0.49.0 is essentially a dependency-refresh tag: the tag commit message is “go.mod: update golang.org/x dependencies”. [1]
What changed in v0.49.0
The only functional change in the tag commit is bumping the module’s direct dependencies in go.mod / go.sum (no x/net package code changes in the tag commit itself): [2]
- golang.org/x/crypto v0.46.0 → v0.47.0
- golang.org/x/sys v0.39.0 → v0.40.0
- golang.org/x/term v0.38.0 → v0.39.0
- golang.org/x/text v0.32.0 → v0.33.0
Breaking changes (practical upgrade risks)
- Go toolchain requirement: go.mod specifies go 1.24.0, so builds with older Go versions may fail. [2]
- Otherwise, no explicit breaking API changes are indicated by the v0.49.0 tag commit, but the dependency bumps above can introduce behavior changes indirectly. [2]
Security / vulnerabilities relevant to v0.49.0
v0.49.0 doesn’t announce a new CVE fix itself, but it is newer than prior security-fix tags, so it includes those fixes:
- CVE-2025-22870 (proxy bypass via IPv6 zone IDs) was fixed by tagging x/net v0.36.0. Any version ≥ v0.36.0 (including v0.49.0) contains the fix. [3]
- CVE-2025-47911 and CVE-2025-58190 (x/net/html DoS: excessive slowness / infinite loop) were fixed by tagging x/net v0.45.0. Any version ≥ v0.45.0 (including v0.49.0) contains the fix. [4]
- CVE-2025-22872 (x/net/html tokenizer parsing issue) is listed as fixed in x/net v0.38.0, so v0.49.0 also includes it. [5]
Sources: [1] v0.49.0 tag commit page, [2] v0.49.0 tag diff, [3] Go security announcement for v0.36.0 / CVE-2025-22870, [4] Go security announcement for v0.45.0 / CVE-2025-47911 & CVE-2025-58190, [5] advisory for CVE-2025-22872.
🌐 Web query:
golang.org/x/term v0.39.0 release notes breaking changes
💡 Result:
For golang.org/x/term v0.39.0 there aren’t separate “release notes” beyond the tagged commit history; the v0.39.0 tag is a dependency bump commit. [1]
What changed in/landed by v0.39.0 (vs v0.38.0)
- Updated dependency: golang.org/x/sys v0.39.0 → v0.40.0 (changes only in go.mod/go.sum). [2]
- Terminal line-editing behavior fixes:
  - “handle delete key” (fix for Go issue #76826). [3]
  - “handle transpose” (behavior matching readline/libedit; also tied to Go issue #76826). [4]
Breaking changes
- No explicit breaking API changes are documented for v0.39.0; the tag itself only updates x/sys. [1] [2]
- The only potentially user-visible “break” would be behavioral: if your program depended on the previous line-editing behavior, the updated handling of Delete and Transpose could change interactive input behavior. [3] [4]
Sources:
[1] v0.39.0 tag (commit message / contents) (go.googlesource.com)
[2] v0.39.0 diff (shows x/sys version bump) (go.googlesource.com)
[3] “handle delete key” commit (go.googlesource.com)
[4] “handle transpose” commit (go.googlesource.com)
Citations:
- 1: https://go.googlesource.com/term/%2B/refs/tags/v0.39.0
- 2: https://go.googlesource.com/term/%2B/refs/tags/v0.39.0%5E%21/
- 3: https://go.googlesource.com/term/%2B/9b991dd831b8a478f9fc99a0b39b492b4e25a3c0
- 4: https://go.googlesource.com/term/%2B/943f25d3595f79ce29c4175d889758d38b375688
Verify Go version requirement: golang.org/x/net v0.49.0 requires Go 1.24.0.
The version bump includes security fixes (CVE-2025-22870, CVE-2025-47911, CVE-2025-58190, CVE-2025-22872), but golang.org/x/net v0.49.0 specifies a minimum Go toolchain version of 1.24.0. Confirm your project's minimum supported Go version is compatible. golang.org/x/term v0.39.0 contains only minor behavioral improvements for terminal line-editing and introduces no breaking changes.
🤖 Prompt for AI Agents
In `@go.mod` around lines 132 - 135, The dependency golang.org/x/net v0.49.0
requires Go 1.24.0, so either update the module's Go toolchain version or pin a
compatible x/net version: open go.mod and (1) if you support Go 1.24+, change
the module's "go" directive to 1.24 to match golang.org/x/net v0.49.0; or (2) if
you must stay on an older toolchain, downgrade golang.org/x/net to a version
that supports your current "go" directive. Refer to the golang.org/x/net v0.49.0
entry and the "go" directive in go.mod when making the change and run go mod
tidy and go test afterwards.
{{- if .traefik.enabled }}
apiVersion: argoproj.io/v1alpha1
🧩 Analysis chain
🏁 Script executed:
# First, find the file and understand the repository structure
find . -type f -name "application.yaml" | head -20

Repository: platform-mesh/platform-mesh-operator
🏁 Script executed:
# Search for deploymentTechnology usage to understand how it's used
rg -n "deploymentTechnology" --type-list | head -5
rg -n "deploymentTechnology" -B2 -A2 | head -50
🏁 Script executed:
# Look for traefik related files
fd -t f "traefik" | grep -E "\.(yaml|go|yml)$"
🏁 Script executed:
# Search for the specific file mentioned in the review
find . -path "*traefik/application.yaml" -o -path "*traefik*application*"
🏁 Script executed:
# Read the traefik application.yaml file
cat -n ./gotemplates/infra/infra/traefik/application.yaml | head -50
🏁 Script executed:
# Read gateway-api application.yaml to see the pattern
cat -n ./gotemplates/infra/infra/gateway-api/application.yaml | head -50
🏁 Script executed:
# Read cert-manager application.yaml for comparison
cat -n ./gotemplates/infra/infra/cert-manager/application.yaml | head -50
Gate this Application on deploymentTechnology=argocd.
Without the deploymentTechnology check, this template renders regardless of the selected deployment technology, potentially creating conflicting installs when FluxCD is active. The gateway-api template demonstrates the correct pattern.
Suggested fix
-{{- if .traefik.enabled }}
+{{- if and .traefik.enabled (eq .deploymentTechnology "argocd") }}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{{- if and .traefik.enabled (eq .deploymentTechnology "argocd") }}
apiVersion: argoproj.io/v1alpha1
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🤖 Prompt for AI Agents
In `@gotemplates/infra/infra/traefik/application.yaml` around lines 1 - 2, The
template currently only checks .traefik.enabled and emits an ArgoCD Application
(apiVersion: argoproj.io/v1alpha1) even when the selected deploymentTechnology
isn't ArgoCD; update the top-level conditional to also require
deploymentTechnology == "argocd" (e.g. change the existing {{- if
.traefik.enabled }} to something like {{- if and .traefik.enabled (eq
.deploymentTechnology "argocd") }} or the equivalent accessor used in this
chart) so the Application manifest is only rendered when Traefik is enabled and
the deploymentTechnology is argocd.
// getDeploymentResource gets either a FluxCD HelmRelease or ArgoCD Application based on deploymentTechnology
func getDeploymentResource(ctx context.Context, client client.Client, resourceName string, resourceNamespace string, deploymentTech string) (*unstructured.Unstructured, error) {
    deploymentTech = strings.ToLower(deploymentTech)
    obj := &unstructured.Unstructured{}

    if deploymentTech == "argocd" {
        obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "argoproj.io", Version: "v1alpha1", Kind: "Application"})
    } else {
        // Default to FluxCD
        obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "helm.toolkit.fluxcd.io", Version: "v2", Kind: "HelmRelease"})
    }

    err := client.Get(ctx, types.NamespacedName{Name: resourceName, Namespace: resourceNamespace}, obj)
    if err != nil {
        if kerrors.IsNotFound(err) {
            log.Info().Str("deploymentTechnology", deploymentTech).Msgf("%s/%s resource not found, waiting for it to be created", resourceName, resourceNamespace)
            return nil, nil
        }
        log.Error().Err(err).Str("deploymentTechnology", deploymentTech).Msgf("Failed to get %s/%s resource", resourceName, resourceNamespace)
        return nil, nil
    }
    return obj, nil
}
getDeploymentResource returns (nil, nil) on NotFound, causing nil dereference in callers.
The function returns (nil, nil) when a resource is not found (line 1433), but callers expect a non-nil object on success. This design leads to the nil dereference issues flagged above.
Proposed fix: Return error on NotFound or change caller expectations
Option 1 - Return a sentinel error:
+var ErrResourceNotFound = errors.New("resource not found")
+
func getDeploymentResource(ctx context.Context, client client.Client, resourceName string, resourceNamespace string, deploymentTech string) (*unstructured.Unstructured, error) {
deploymentTech = strings.ToLower(deploymentTech)
obj := &unstructured.Unstructured{}
if deploymentTech == "argocd" {
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "argoproj.io", Version: "v1alpha1", Kind: "Application"})
} else {
// Default to FluxCD
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "helm.toolkit.fluxcd.io", Version: "v2", Kind: "HelmRelease"})
}
err := client.Get(ctx, types.NamespacedName{Name: resourceName, Namespace: resourceNamespace}, obj)
if err != nil {
if kerrors.IsNotFound(err) {
log.Info().Str("deploymentTechnology", deploymentTech).Msgf("%s/%s resource not found, waiting for it to be created", resourceName, resourceNamespace)
- return nil, nil
+ return nil, ErrResourceNotFound
}
log.Error().Err(err).Str("deploymentTechnology", deploymentTech).Msgf("Failed to get %s/%s resource", resourceName, resourceNamespace)
- return nil, nil
+ return nil, err
}
return obj, nil
}
🤖 Prompt for AI Agents
In `@pkg/subroutines/deployment.go` around lines 1417 - 1439,
getDeploymentResource currently returns (nil, nil) when the resource is not
found which causes nil dereferences in callers; change it to return an explicit
error instead and update callers to handle that error. Specifically, add a
sentinel error (e.g., ErrResourceNotFound) in the same package and have
getDeploymentResource return that error when kerrors.IsNotFound(err) (or wrap
the original error with context), and ensure all call sites that call
getDeploymentResource check for ErrResourceNotFound (or use errors.Is) and
implement the intended retry/wait logic rather than assuming a non-nil object.
Use the function name getDeploymentResource and the sentinel ErrResourceNotFound
to find and update code paths that currently assume a nil error implies a valid
object.
func (r *ResourceSubroutine) getDeploymentTechnologyFromConfigMapDirect(ctx context.Context, namespace string, log *logger.Logger) string {
    // Try common ConfigMap name patterns
    configMapNames := []string{"platform-mesh-profile", "platform-mesh-system-profile"}

    for _, cmName := range configMapNames {
        configMap := &corev1.ConfigMap{}
        // Use runtime client for reading ConfigMaps (they're in the runtime cluster)
        if err := r.clientRuntime.Get(ctx, types.NamespacedName{Name: cmName, Namespace: namespace}, configMap); err != nil {
            log.Debug().Err(err).Str("configMap", cmName).Str("namespace", namespace).Msg("ConfigMap not found, trying next")
            continue
        }

        log.Info().Str("configMap", cmName).Str("namespace", namespace).Msg("Found ConfigMap, reading profile.yaml")

        profileYAML, ok := configMap.Data["profile.yaml"]
        if !ok {
            log.Warn().Str("configMap", cmName).Msg("ConfigMap found but profile.yaml key missing")
            continue
        }

        var profile map[string]interface{}
        if err := yaml.Unmarshal([]byte(profileYAML), &profile); err != nil {
            log.Warn().Err(err).Str("configMap", cmName).Msg("Failed to parse profile YAML")
            continue
        }

        log.Debug().Str("configMap", cmName).Msg("Successfully parsed profile YAML, checking for deploymentTechnology")

        // Check infra section first
        if infra, ok := profile["infra"].(map[string]interface{}); ok {
            log.Debug().Str("configMap", cmName).Msg("Found infra section in profile")
            if dt, ok := infra["deploymentTechnology"].(string); ok && dt != "" {
                log.Info().Str("deploymentTechnology", strings.ToLower(dt)).Str("configMap", cmName).Str("section", "infra").Msg("Read deploymentTechnology from profile ConfigMap (direct lookup)")
                return strings.ToLower(dt)
            }
            log.Debug().Str("configMap", cmName).Msg("infra section found but deploymentTechnology not found or empty")
        } else {
            log.Debug().Str("configMap", cmName).Msg("infra section not found in profile")
        }

        // Check components section
        if components, ok := profile["components"].(map[string]interface{}); ok {
            log.Debug().Str("configMap", cmName).Msg("Found components section in profile")
            if dt, ok := components["deploymentTechnology"].(string); ok && dt != "" {
                log.Info().Str("deploymentTechnology", strings.ToLower(dt)).Str("configMap", cmName).Str("section", "components").Msg("Read deploymentTechnology from profile ConfigMap (direct lookup)")
                return strings.ToLower(dt)
            }
            log.Debug().Str("configMap", cmName).Msg("components section found but deploymentTechnology not found or empty")
        } else {
            log.Debug().Str("configMap", cmName).Msg("components section not found in profile")
        }
    }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for ConfigMap references with "profile" in the name
echo "=== Searching for profile-related ConfigMap references ==="
rg -i "platform-mesh.*profile" --type go --type yaml -n | head -40
🏁 Script executed:
#!/bin/bash
# Check if there are test files that reference these ConfigMap names
echo "=== Searching in test files ==="
fd -e "_test.go" -o -e "test.go" | xargs rg "platform-mesh-profile\|platform-mesh-system-profile" -n 2>/dev/null | head -20
🏁 Script executed:
#!/bin/bash
# Search for ConfigMap definitions in manifests
echo "=== Searching for ConfigMap kind definitions ==="
fd -e ".yaml" -e ".yml" | xargs rg -B2 -A5 "kind: ConfigMap" | rg -A5 "platform-mesh" | head -40
🏁 Script executed:
#!/bin/bash
# Check if ConfigMap names are configurable via environment or configuration
echo "=== Searching for ConfigMap name constants or configuration ==="
rg "platform-mesh-profile\|platform-mesh-system-profile" --type go -B3 -A3 | head -50
🏁 Script executed:
#!/bin/bash
# Look at the function and surrounding context for comments
sed -n '740,810p' pkg/subroutines/resource/subroutine.go
🏁 Script executed:
#!/bin/bash
# Check if there are any README or documentation files explaining ConfigMap naming
fd -e ".md" -e ".txt" | xargs rg -l "ConfigMap\|profile" 2>/dev/null
🏁 Script executed:
#!/bin/bash
# Count how many places these specific ConfigMap names appear
echo "=== Occurrences of platform-mesh-profile ==="
rg "platform-mesh-profile" --type go --type yaml | wc -l
echo "=== Occurrences of platform-mesh-system-profile ==="
rg "platform-mesh-system-profile" --type go --type yaml | wc -l
ConfigMap naming strategy mismatch between primary and fallback logic.
The fallback function uses hardcoded ConfigMap names ["platform-mesh-profile", "platform-mesh-system-profile"], but the primary lookup (line 738-743) uses a dynamic naming pattern based on the PlatformMesh resource name (pm.Name+"-profile"). This inconsistency means a PlatformMesh resource created with a custom name will succeed in the primary lookup but fail in the fallback, which then returns the "fluxcd" default. While logging prevents truly silent failures, this design inconsistency reduces maintainability. Consider making the fallback follow the same dynamic naming pattern or make ConfigMap names configurable.
🤖 Prompt for AI Agents
In `@pkg/subroutines/resource/subroutine.go` around lines 748 - 799, The fallback
in getDeploymentTechnologyFromConfigMapDirect uses hardcoded configMapNames
which mismatches the primary lookup that builds names from the PlatformMesh
resource (pm.Name+"-profile"), causing misses for custom-named PlatformMesh
resources; update getDeploymentTechnologyFromConfigMapDirect to compute the same
name pattern(s) used by the primary lookup (e.g., include pm.Name+"-profile" and
pm.Name+"-system-profile" or accept the PlatformMesh name as a parameter) or
make the ConfigMap name list configurable so the fallback mirrors the primary
lookup behavior.
  func applyUnstructureds(
      ctx context.Context,
-     path string, k8sClient client.Client, templateData map[string]string,
+     k8sClient client.Client,
+     objs []unstructured.Unstructured,
+     path string,
+     log *logger.Logger,
  ) error {
-     log := logger.LoadLoggerFromContext(ctx)
-
-     objs, err := unstructuredsFromFile(path, templateData, log)
-     if err != nil {
-         return err
-     }

      var errRet error = nil
      for _, obj := range objs {
          if obj.Object == nil {
              continue
          }
-         err = k8sClient.Patch(ctx, &obj, client.Apply, client.FieldOwner("platform-mesh-operator"))
+         err := k8sClient.Patch(ctx, &obj, client.Apply, client.FieldOwner("platform-mesh-operator"))
          if err != nil {
              errRet = errors.Wrap(errRet, "Failed to apply manifest file: %s (%s/%s)", path, obj.GetKind(), obj.GetName())
          }
      }
      return errRet
🧩 Analysis chain
🏁 Script executed:
cd /repo && find . -name "helpers.go" -path "*/kind/*" 2>/dev/null | head -20
🏁 Script executed:
cd /repo && cat -n test/e2e/kind/helpers.go | head -60
🏁 Script executed:
cd /repo && rg "errors\." test/e2e/kind/helpers.go | head -20
🏁 Script executed:
cd /repo && rg "^import|errors|stderrors" test/e2e/kind/helpers.go | head -30
🏁 Script executed:
find . -name "helpers.go" -path "*/kind/*" 2>/dev/null | head -20
🏁 Script executed:
cat -n test/e2e/kind/helpers.go | head -80
🏁 Script executed:
rg "^import|import \(" test/e2e/kind/helpers.go -A 15
🏁 Script executed:
rg "package errors" . 2>/dev/null | head -5
🏁 Script executed:
rg "func Wrap" . --type go | grep -E "errors|Wrap"
🏁 Script executed:
find . -path "*/golang-commons/errors*" -type f 2>/dev/null | head -10
🏁 Script executed:
rg "golang-commons/errors" . --type go -l | head -5
🏁 Script executed:
git log --oneline --all | head -20
🏁 Script executed:
rg "github.com/platform-mesh/golang-commons" . --type go | grep "errors" | head -10
🏁 Script executed:
rg "Wrap.*func" . --type go -A 3 | head -30
🏁 Script executed:
rg "errors.Wrap\(" test/e2e/kind/helpers.go -B 2 -A 2
🏁 Script executed:
rg "errors.Wrap\(" . --type go | head -20
🏁 Script executed:
web_search
golang-commons errors.Wrap behavior nil first argument
🏁 Script executed:
rg "errors.Join" . --type go
🏁 Script executed:
rg "stderrors\|std.*errors" . --type go | head -20
🌐 Web query:
platform-mesh golang-commons errors.Wrap function behavior
💡 Result:
In github.com/platform-mesh/golang-commons/errors, errors.Wrap(cause, msg, args...):
- Creates a new error that prefixes the original error message with msg (optionally formatted with args).
- Ensures the returned error has an attached stack trace, but does so idempotently: unlike github.com/pkg/errors.Wrap/Wrapf, it won’t create an additional stack trace if the error already carries a stack trace that matches the current program position (so repeated wrapping at the same spot doesn’t keep adding duplicate stacks). [1]
Related helpers in the same package:
- EnsureStack(err) only adds a stack trace if one is missing. [1]
- WithStack(err) (re-exported from pkg/errors) can be used to add the current stack trace. [1]
Source: package docs on pkg.go.dev. [1]
References:
[1] (pkg.go.dev)
🏁 Script executed:
rg "errors.Join" test/e2e/kind/suite_kind_test.go -B 3 -A 3
🏁 Script executed:
rg "func.*Wrap" . --type go -A 5 | head -50
🏁 Script executed:
rg "errRet.*errors.Wrap|errors.Wrap.*err" . --type go -B 2 -A 2 | head -40
Current error is lost during accumulation; fix error wrapping logic.
The err variable from k8sClient.Patch() is discarded. errors.Wrap(errRet, ...) wraps only the accumulated error, not the current failure. When errRet is nil (first iteration), the wrapped result is also nil, silently swallowing the error. Each subsequent error only adds to the message without preserving the actual error.
Use errors.Join() to combine the accumulated error with the current error:
Proposed fix
err := k8sClient.Patch(ctx, &obj, client.Apply, client.FieldOwner("platform-mesh-operator"))
if err != nil {
- errRet = errors.Wrap(errRet, "Failed to apply manifest file: %s (%s/%s)", path, obj.GetKind(), obj.GetName())
+ errRet = errors.Join(errRet, errors.Wrap(err, "Failed to apply manifest file: %s (%s/%s)", path, obj.GetKind(), obj.GetName()))
}
📝 Committable suggestion
func applyUnstructureds(
    ctx context.Context,
    k8sClient client.Client,
    objs []unstructured.Unstructured,
    path string,
    log *logger.Logger,
) error {
    var errRet error = nil
    for _, obj := range objs {
        if obj.Object == nil {
            continue
        }
        err := k8sClient.Patch(ctx, &obj, client.Apply, client.FieldOwner("platform-mesh-operator"))
        if err != nil {
            errRet = errors.Join(errRet, errors.Wrap(err, "Failed to apply manifest file: %s (%s/%s)", path, obj.GetKind(), obj.GetName()))
        }
    }
    return errRet
}
🤖 Prompt for AI Agents
In `@test/e2e/kind/helpers.go` around lines 20 - 37, In applyUnstructureds, the
current Patch error stored in err is being lost because errors.Wrap is wrapping
errRet instead of the current err; change the error accumulation so that when
err != nil you create a wrapped error that includes the path/kind/name and the
current err (e.g. using fmt.Errorf("Failed to apply manifest file: %s (%s/%s):
%w", path, obj.GetKind(), obj.GetName(), err)) and then set errRet =
errors.Join(errRet, wrapped) so the current error is preserved and combined with
previous errors.
telemetry:
  trace:
    enabled: true
    otlp:
      endpoint: observability-opentelemetry-collector.openmfp-observability.svc.cluster.local:4317
OpenFGA OTLP endpoint namespace looks inconsistent.
Other services in this profile target observability while this endpoint uses openmfp-observability. If that’s a typo, traces won’t reach the collector. Consider aligning to the same namespace.
🔧 Proposed fix
- endpoint: observability-opentelemetry-collector.openmfp-observability.svc.cluster.local:4317
+ endpoint: observability-opentelemetry-collector.observability.svc.cluster.local:4317
📝 Committable suggestion
```diff
 telemetry:
   trace:
     enabled: true
     otlp:
-      endpoint: observability-opentelemetry-collector.openmfp-observability.svc.cluster.local:4317
+      endpoint: observability-opentelemetry-collector.observability.svc.cluster.local:4317
```
🤖 Prompt for AI Agents
In `@test/e2e/kind/yaml/platform-mesh-resource/default-profile.yaml` around lines
337 - 341, The OTLP endpoint namespace in the telemetry block is inconsistent;
update the telemetry.trace.otlp.endpoint value to use the same namespace as the
other services (replace
"observability-opentelemetry-collector.openmfp-observability.svc.cluster.local:4317"
with the correct namespace used elsewhere, e.g.
"observability-opentelemetry-collector.observability.svc.cluster.local:4317") so
traces are sent to the same collector; locate the endpoint via the YAML key path
telemetry.trace.otlp.endpoint and make the namespace change accordingly.
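Mismatches like this are easy to lint for mechanically, since in-cluster DNS names follow the fixed `<service>.<namespace>.svc.cluster.local` shape. The sketch below is a hypothetical helper (not part of the operator) that extracts the namespace segment from such an endpoint so a test could assert all endpoints in a profile agree:

```go
package main

import (
	"fmt"
	"strings"
)

// namespaceOf extracts the namespace segment from an in-cluster DNS
// name of the form <service>.<namespace>.svc.cluster.local[:port].
// It reports false for names that do not match that shape.
func namespaceOf(endpoint string) (string, bool) {
	host := endpoint
	// Drop an optional trailing :port.
	if i := strings.LastIndex(host, ":"); i != -1 {
		host = host[:i]
	}
	parts := strings.Split(host, ".")
	if len(parts) >= 5 && parts[2] == "svc" {
		return parts[1], true
	}
	return "", false
}

func main() {
	ns, _ := namespaceOf("observability-opentelemetry-collector.openmfp-observability.svc.cluster.local:4317")
	fmt.Println(ns) // openmfp-observability
}
```

A profile-level test could collect `namespaceOf` for every endpoint and fail when more than one distinct namespace appears.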
```yaml
dnsNames:
- frontproxy-front-proxy
- frontproxy-front-proxy.platform-mesh-system
- frontproxy-front-proxy.platform-mesh-system.svc.cluster.local
- virtual-workspaces
- virtual-workspaces.platform-mesh-system
- virtual-workspaces.platform-mesh-system.svc.cluster.local
duration: 8760h0m0s
issuerRef:
  group: cert-manager.io
  kind: ClusterIssuer
  name: selfsigned-issuer
privateKey:
  algorithm: RSA
  size: 2048
renewBefore: 360h0m0s
secretName: virtual-workspaces-tls
usages:
- server auth
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check if the file exists and read its contents
if [ -f "test/e2e/kind/yaml/virtual-workspaces/vws-cert.yaml" ]; then
  echo "=== File exists, showing full content ==="
  cat -n "test/e2e/kind/yaml/virtual-workspaces/vws-cert.yaml"
else
  echo "File not found at test/e2e/kind/yaml/virtual-workspaces/vws-cert.yaml"
  # Try to find similar files
  echo "=== Searching for similar files ==="
  find . -name "*vws-cert*" -o -name "*virtual-workspaces*cert*" 2>/dev/null | head -20
fi
```

Repository: platform-mesh/platform-mesh-operator
Length of output: 1130
🌐 Web query:
cert-manager Certificate default usages digital signature key encipherment SelfSigned issuer
💡 Result:
- If you don’t set `spec.usages` on a `cert-manager.io/v1` Certificate, cert-manager applies default requested usages. In the Certificate usage docs, that default is `digital signature`, `key encipherment`, and `server auth` (EKU). [1]
- In the API reference, `CertificateSpec.usages` is documented as defaulting to `digital signature` and `key encipherment` when unset. [2]
  - Practical takeaway: the “defaults” have varied across cert-manager doc versions/releases; if you need deterministic output, explicitly set `spec.usages`.
- For a `SelfSigned` Issuer, cert-manager will issue a certificate that matches the usages you requested (it won’t silently change them), same as the `CA` issuer. [1]
- If you set `spec.isCA: true`, cert-manager will automatically add `cert sign` to the usages list. [2]

Example (explicitly request the common TLS server defaults):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example
spec:
  secretName: example-tls
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
  dnsNames: ["example.internal"]
  usages:
  - digital signature
  - key encipherment
  - server auth
```

Sources:
[1] cert-manager docs – Certificate “X.509 key usages and extended key usages” (defaults + SelfSigned/CA honor usages)
[2] cert-manager API reference – CertificateSpec.usages defaults; isCA adds cert sign
Add standard TLS key usages or remove the override.
Setting spec.usages explicitly causes cert-manager to apply only those usages; the default values (digital signature, key encipherment, server auth) are not merged in. This certificate includes only server auth, which may be insufficient for typical TLS certificate expectations. Add digital signature and key encipherment to the usages list, or remove the usages field to rely on defaults.
Proposed fix (add standard usages)

```diff
 usages:
+- digital signature
+- key encipherment
 - server auth
```

📝 Committable suggestion
```diff
 dnsNames:
 - frontproxy-front-proxy
 - frontproxy-front-proxy.platform-mesh-system
 - frontproxy-front-proxy.platform-mesh-system.svc.cluster.local
 - virtual-workspaces
 - virtual-workspaces.platform-mesh-system
 - virtual-workspaces.platform-mesh-system.svc.cluster.local
 duration: 8760h0m0s
 issuerRef:
   group: cert-manager.io
   kind: ClusterIssuer
   name: selfsigned-issuer
 privateKey:
   algorithm: RSA
   size: 2048
 renewBefore: 360h0m0s
 secretName: virtual-workspaces-tls
 usages:
+- digital signature
+- key encipherment
 - server auth
```
🤖 Prompt for AI Agents
In `@test/e2e/kind/yaml/virtual-workspaces/vws-cert.yaml` around lines 7 - 25, The
certificate manifest currently sets spec.usages to only "server auth", which
replaces cert-manager's defaults and is likely insufficient; edit the
Certificate resource (the block containing dnsNames and secretName:
virtual-workspaces-tls) to either remove the usages field entirely or add the
standard TLS usages "digital signature" and "key encipherment" alongside "server
auth" under spec.usages so the certificate includes the expected key usages.
Summary
This PR adds remote deployment capabilities and ArgoCD support to the platform-mesh-operator.
Major Changes