feat: added multi-cluster runtime #90
Conversation
Walkthrough: Replace the kcp cluster-aware manager with a multicluster-runtime provider/manager and refactor controllers, subroutines, tests, and mocks to use mcmanager, mccontext, and builder-based multicluster lifecycles; add RemoveInitializer and mock support; update go.mod, Dockerfile, and mockery config.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
```go
log.Error().Err(err).Msg("unable to construct cluster provider")
os.Exit(1)
```
🟡 Let's replace it with `log.Fatal()`, which does basically the same (plus some additional flushing), but logs at a more severe level and is more appropriate for this situation.
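A minimal sketch of the suggested change, assuming zerolog (whose `Fatal()` logs at fatal level and then calls `os.Exit(1)` itself, so the explicit exit goes away):

```go
// Before: log.Error(...) followed by an explicit os.Exit(1).
// After: Fatal() logs at fatal severity and exits on its own.
if err != nil {
	log.Fatal().Err(err).Msg("unable to construct cluster provider")
}
```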
cmd/operator.go (Outdated)
```go
	scheme = runtime.NewScheme()
)

// TODO try to use multi-cluster runtime for client creation
```
🟢 Is it a leftover?
Actionable comments posted: 5
🧹 Nitpick comments (1)
go.mod (1)
9-9: Confirm intentional pinning of multicluster-provider commit
The dependency uses commit `2b5ca378b7b4` instead of a semver-tagged release; ensure this pin is documented in your dependency management process and consider switching to a tagged version when available.
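For reference, an untagged commit shows up in go.mod as a pseudo-version. A hypothetical sketch — the module path and timestamp are placeholders, only the short hash `2b5ca378b7b4` comes from this PR:

```go
// go.mod (illustrative): running `go get <module>@2b5ca378b7b4` resolves the
// real pseudo-version; a tagged release would replace this line entirely.
require sigs.k8s.io/multicluster-provider v0.0.0-20250101000000-2b5ca378b7b4
```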
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (20)
- `cmd/initializer.go` (5 hunks)
- `cmd/model_generator.go` (3 hunks)
- `cmd/operator.go` (5 hunks)
- `go.mod` (3 hunks)
- `internal/config/config.go` (1 hunks)
- `internal/controller/apibinding_controller.go` (1 hunks)
- `internal/controller/authorization_model_controller.go` (2 hunks)
- `internal/controller/initializer_controller.go` (1 hunks)
- `internal/controller/runnable.go` (1 hunks)
- `internal/controller/store_controller.go` (3 hunks)
- `internal/subroutine/authorization_model.go` (3 hunks)
- `internal/subroutine/authorization_model_generation_test.go` (1 hunks)
- `internal/subroutine/authorization_model_test.go` (2 hunks)
- `internal/subroutine/mocks/mock_Client.go` (1 hunks)
- `internal/subroutine/remove_initializer.go` (1 hunks)
- `internal/subroutine/store_test.go` (2 hunks)
- `internal/subroutine/tuples.go` (3 hunks)
- `internal/subroutine/tuples_test.go` (3 hunks)
- `internal/subroutine/worksapce_authorization.go` (2 hunks)
- `internal/subroutine/workspace_initializer.go` (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
internal/subroutine/authorization_model_generation_test.go (1)
- internal/subroutine/mocks/mock_Client.go (1): `MockClient` (22-24)

internal/subroutine/workspace_initializer.go (1)
- internal/config/config.go (1): `Config` (4-16)

cmd/initializer.go (1)
- internal/controller/initializer_controller.go (1): `NewLogicalClusterReconciler` (26-38)

internal/controller/initializer_controller.go (5)
- internal/config/config.go (1): `Config` (4-16)
- internal/subroutine/workspace_initializer.go (1): `NewWorkspaceInitializer` (25-40)
- internal/subroutine/worksapce_authorization.go (1): `NewWorkspaceAuthConfigurationSubroutine` (28-34)
- internal/subroutine/realm.go (1): `NewRealmSubroutine` (38-43)
- internal/subroutine/remove_initializer.go (1): `NewRemoveInitializer` (63-68)

internal/controller/apibinding_controller.go (2)
- internal/subroutine/authorization_model.go (1): `NewLogicalClusterClientFunc` (49-49)
- internal/subroutine/authorization_model_generation.go (1): `NewAuthorizationModelGenerationSubroutine` (28-33)

internal/controller/authorization_model_controller.go (2)
- internal/subroutine/authorization_model.go (1): `NewLogicalClusterClientFunc` (49-49)
- internal/subroutine/tuples.go (1): `NewTupleSubroutine` (216-222)

internal/subroutine/authorization_model.go (1)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModelList` (54-58)

internal/controller/store_controller.go (4)
- internal/subroutine/authorization_model.go (2): `NewLogicalClusterClientFunc` (49-49), `NewAuthorizationModelSubroutine` (31-37)
- internal/subroutine/store.go (1): `NewStoreSubroutine` (27-33)
- internal/subroutine/tuples.go (1): `NewTupleSubroutine` (216-222)
- api/v1alpha1/store_types.go (1): `Store` (44-50)

cmd/operator.go (2)
- internal/controller/store_controller.go (1): `NewStoreReconciler` (35-42)
- internal/controller/authorization_model_controller.go (1): `NewAuthorizationModelReconciler` (29-36)

cmd/model_generator.go (2)
- api/v1alpha1/groupversion_info.go (1): `AddToScheme` (19-19)
- internal/controller/apibinding_controller.go (1): `NewAPIBindingReconciler` (21-29)
🔇 Additional comments (15)
internal/subroutine/mocks/mock_Client.go (2)
64-87: LGTM! Auto-generated mock follows established patterns.
The `Apply` method implementation correctly follows the same pattern as other mocked client methods (Create, Delete, Update, etc.), with proper variadic argument handling and error return logic.

89-124: LGTM! Helper methods are correctly implemented.
The `MockClient_Apply_Call` struct and associated helper methods (`Run`, `Return`, `RunAndReturn`) follow the exact pattern used by other mock methods in this file. The variadic argument extraction in the `Run` method (lines 103-114) correctly handles nil values and performs safe type assertions.

internal/subroutine/worksapce_authorization.go (2)

56-59: Verify context deadline propagation in multicluster builder
Confirm that `builder.NewBuilder(...).BuildMultiCluster` injects a per-reconcile deadline into the `ctx` passed to `Process()`. If it does not, wrap the external `runtimeClient.Get` call in an explicit `context.WithTimeout` to avoid potential indefinite blocking.

63-94: Confirm explicit timeout on CreateOrUpdate call.
I couldn't find any `context.WithTimeout` usage; ensure the `Reconcile` context passed to `controllerutil.CreateOrUpdate` has an appropriate deadline to avoid potential hangs during slow API-server operations.
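A minimal sketch of the suggested guard, assuming a 10-second budget is acceptable; the names (`runtimeClient`, the `"cluster"` object key, the error wrapping) follow the surrounding comments and are illustrative, not the exact file contents:

```go
// Bound the external call so a slow API server cannot block the reconcile
// indefinitely; cancel() releases the timer as soon as the call returns.
getCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()

var lc kcpcorev1alpha1.LogicalCluster
if err := runtimeClient.Get(getCtx, client.ObjectKey{Name: "cluster"}, &lc); err != nil {
	return ctrl.Result{}, errors.NewOperatorError(err, true, false)
}
```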
internal/subroutine/workspace_initializer.go (1)

103-103: LGTM! Per-cluster client usage aligns with multicluster patterns.
Using `cluster.GetClient()` to obtain a cluster-scoped client is the correct approach for the multicluster-runtime architecture.

internal/subroutine/tuples.go (2)

19-19: LGTM! Migration from kontext to mccontext.
The import change from `kontext` to `mccontext` aligns with the multicluster-runtime migration objectives.

55-55: Correct string conversion for mccontext.WithCluster.
Converting `logicalcluster.Name` to `string` when calling `mccontext.WithCluster` is correct, as the multicluster-runtime API expects string cluster keys.
Also applies to: 139-139
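Illustrative usage of that conversion (the workspace path is made up):

```go
// mccontext.WithCluster takes a plain string key, so the kcp
// logicalcluster.Name must be converted explicitly.
name := logicalcluster.Name("root:orgs:acme") // hypothetical path
ctx = mccontext.WithCluster(ctx, string(name))
```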
internal/subroutine/tuples_test.go (1)

17-17: LGTM! Test updates align with production code changes.
The migration from `kontext` to `mccontext` in tests is consistent with the production code changes in tuples.go.
Also applies to: 312-312, 387-387
internal/config/config.go (1)

14-14: LGTM! Configurable initializer name added.
The new `InitializerName` field provides flexibility for configuring the initializer identifier, with a sensible default value.

cmd/model_generator.go (2)

70-82: LGTM! Provider-based multicluster manager initialization.
The initialization sequence (provider creation → manager creation with provider) follows the multicluster-runtime pattern correctly.

84-85: Constructor signature matches definition.

internal/controller/authorization_model_controller.go (2)
38-57: LGTM! Reconcile refactored for multicluster-runtime (see the sketch after this list). The refactored reconcile logic correctly:
- Extracts the cluster from the multicluster manager
- Obtains a cluster-scoped client
- Creates a lifecycle manager bound to the cluster client
- Delegates to the lifecycle manager for reconciliation
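A condensed sketch of that flow, assuming a lifecycle manager with a `Reconcile(ctx, req)` method as used elsewhere in this PR; names are illustrative, not the exact file contents:

```go
func (r *AuthorizationModelReconciler) Reconcile(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
	// Bind the context to the originating cluster so subroutines can later
	// resolve the per-cluster client via mgr.ClusterFromContext(ctx).
	ctx = mccontext.WithCluster(ctx, req.ClusterName)

	// Delegate to the lifecycle manager; req.Request is the embedded
	// controller-runtime reconcile.Request.
	return r.lifecycle.Reconcile(ctx, req.Request)
}
```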
69-76: No action needed: providerRunnable implements manager.Runnable
providerRunnable is defined in internal/controller/runnable.go and provides a `Start(context.Context) error` method, satisfying the `manager.Runnable` interface.
go.mod (2)
18-24: Verify Kubernetes modules v0.34.x compatibility and security advisories
Confirm that k8s.io/api, k8s.io/apimachinery, k8s.io/client-go and k8s.io/apiextensions-apiserver at v0.34.x align with your cluster version and review known CVEs in NVD/OSV for these modules.
24-24: Alpha dependency detected (sigs.k8s.io/multicluster-runtime v0.21.0-alpha.9)
No v1+ stable release is available; account for potential API instability and include thorough testing before production deployment.
```go
runtimeScheme := runtime.NewScheme()
utilruntime.Must(appsv1.AddToScheme(runtimeScheme))
utilruntime.Must(securityv1alpha1.AddToScheme(runtimeScheme))

mgr, err := kcp.NewClusterAwareManager(cfg, mgrOpts)
if mgrOpts.Scheme == nil {
	log.Error().Err(fmt.Errorf("scheme should not be nil")).Msg("scheme should not be nil")
	return fmt.Errorf("scheme should not be nil")
}
```
Redundant scheme initialization and check.
A new runtimeScheme is created and populated at lines 61-63, but it's never assigned to mgrOpts.Scheme. Line 38 already sets mgrOpts.Scheme = scheme (presumably a package-level variable). The nil-check at lines 65-68 validates mgrOpts.Scheme, not runtimeScheme.
Apply this diff to fix the scheme initialization:
```diff
- runtimeScheme := runtime.NewScheme()
- utilruntime.Must(appsv1.AddToScheme(runtimeScheme))
- utilruntime.Must(securityv1alpha1.AddToScheme(runtimeScheme))
-
  if mgrOpts.Scheme == nil {
  	log.Error().Err(fmt.Errorf("scheme should not be nil")).Msg("scheme should not be nil")
  	return fmt.Errorf("scheme should not be nil")
  }
```

Or, if you intended to use the new scheme:

```diff
  runtimeScheme := runtime.NewScheme()
  utilruntime.Must(appsv1.AddToScheme(runtimeScheme))
  utilruntime.Must(securityv1alpha1.AddToScheme(runtimeScheme))
+ mgrOpts.Scheme = runtimeScheme

  if mgrOpts.Scheme == nil {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
mgr, err := kcp.NewClusterAwareManager(cfg, mgrOpts)
if mgrOpts.Scheme == nil {
	log.Error().Err(fmt.Errorf("scheme should not be nil")).Msg("scheme should not be nil")
	return fmt.Errorf("scheme should not be nil")
}
```
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/subroutine/authorization_model_generation.go (1)
113-116: Critical: `lcClientFunc` is never initialized
`NewAuthorizationModelGenerationSubroutine` only sets `mgr` and leaves `lcClientFunc` nil, so calling `a.lcClientFunc(…)` in Finalize (line 113) will panic at runtime. Fix by either:
- extending the constructor to accept and assign a `NewLogicalClusterClientFunc` (and update its callers), or
- dropping `lcClientFunc` and using `a.mgr.ClusterFromContext(ctx).GetClient()` directly.
♻️ Duplicate comments (1)
cmd/model_generator.go (1)
61-68: Fix Scheme wiring: AddToScheme runs on an unused Scheme
You build `runtimeScheme` and register types, but never assign it to the manager. `mgrOpts.Scheme` remains whatever was set earlier, so the registrations here are ignored.
Apply this diff to use the populated Scheme:

```diff
  runtimeScheme := runtime.NewScheme()
  utilruntime.Must(appsv1.AddToScheme(runtimeScheme))
  utilruntime.Must(securityv1alpha1.AddToScheme(runtimeScheme))
+ mgrOpts.Scheme = runtimeScheme

  if mgrOpts.Scheme == nil {
  	log.Error().Err(fmt.Errorf("scheme should not be nil")).Msg("scheme should not be nil")
  	return fmt.Errorf("scheme should not be nil")
  }
```

Also ensure all required APIs are added to this Scheme (e.g., kcp apis/core v1alpha1) before manager/provider creation.

```bash
#!/bin/bash
# Check for AddToScheme registrations for kcp types and security/operator types.
rg -n -C2 -g '!**/vendor/**' -P 'AddToScheme\s*\('
rg -n -C2 -P 'github\.com/kcp-dev/kcp/sdk/apis/(apis|core)/v1alpha1'
```
🧹 Nitpick comments (1)
cmd/model_generator.go (1)
99-104: Bind provider lifecycle to the manager instead of a raw goroutine
Use a manager Runnable so startup/shutdown is coordinated with `mgr.Start`.

```diff
-go func() {
-	if err := provider.Run(ctx, mgr); err != nil {
-		log.Fatal().Err(err).Msg("unable to run provider")
-	}
-}()
+if err := mgr.Add(manager.RunnableFunc(func(ctx context.Context) error {
+	return provider.Run(ctx, mgr)
+})); err != nil {
+	log.Error().Err(err).Msg("unable to add provider runnable")
+	return err
+}
```

Optional: start the manager with the same ctx you derive earlier to keep lifecycles consistent.
📒 Files selected for processing (4)
- `cmd/model_generator.go` (4 hunks)
- `internal/controller/apibinding_controller.go` (1 hunks)
- `internal/controller/initializer_controller.go` (1 hunks)
- `internal/subroutine/authorization_model_generation.go` (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
cmd/model_generator.go (2)
- api/v1alpha1/groupversion_info.go (1): `AddToScheme` (19-19)
- internal/controller/apibinding_controller.go (1): `NewAPIBindingReconciler` (21-29)

internal/subroutine/authorization_model_generation.go (2)
- internal/subroutine/authorization_model.go (1): `NewLogicalClusterClientFunc` (49-49)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModel` (31-37)

internal/controller/apibinding_controller.go (1)
- internal/subroutine/authorization_model_generation.go (1): `NewAuthorizationModelGenerationSubroutine` (28-32)

internal/controller/initializer_controller.go (5)
- internal/config/config.go (1): `Config` (4-16)
- internal/subroutine/workspace_initializer.go (1): `NewWorkspaceInitializer` (25-40)
- internal/subroutine/worksapce_authorization.go (1): `NewWorkspaceAuthConfigurationSubroutine` (28-34)
- internal/subroutine/realm.go (1): `NewRealmSubroutine` (38-43)
- internal/subroutine/remove_initializer.go (1): `NewRemoveInitializer` (63-68)
🔇 Additional comments (9)
internal/subroutine/authorization_model_generation.go (3)
28-32: LGTM! Clean constructor migration.
The constructor has been properly simplified to accept the multicluster manager directly, aligning with the new architecture.

95-157: Client usage properly migrated (except line 113).
The client retrieval and usage throughout this section has been correctly migrated to use `cluster.GetClient()` for List, Get, and Delete operations. The logic for counting bindings and conditionally deleting the authorization model is sound.
Note: this approval excludes line 113, which has a critical bug commented separately.

172-247: LGTM! Process method fully migrated.
The `Process` method has been successfully migrated to the multicluster-runtime manager approach. All client operations now properly use `cluster.GetClient()` after retrieving the cluster from context, and the business logic remains intact.

internal/controller/initializer_controller.go (3)
32-40: Confirm WithReadOnly is intended (initializer likely writes)
Workspace initializer/auth/realm subroutines commonly create/update resources. WithReadOnly() may prevent writes if enforced by the lifecycle manager.
Would you like me to scan subroutines for mutations and adjust the builder options accordingly?
43-46: Cluster scoping looks correct
`mccontext.WithCluster(ctx, req.ClusterName)` before delegating to lifecycle is the expected pattern.

48-50: Setup wiring LGTM
Manager-based wiring with predicates and max concurrency is consistent with the multicluster lifecycle.
internal/controller/apibinding_controller.go (3)
21-29: Constructor/lifecycle wiring looks good
Clean migration to multicluster builder and manager.

41-43: SetupWithManager LGTM
Controller wiring via lifecycle with predicates and concurrency is consistent.
36-39: APIBinding already registered via apisv1alpha1.AddToScheme(scheme) in cmd/operator.go; no further action needed.
```go
	return ctrl.Result{}, errors.NewOperatorError(fmt.Errorf("unable to get cluster from context: %w", err), true, false)
}

log.Info().Msg(fmt.Sprint("HOST -- %s", cluster.GetConfig().Host))
```
Fix formatting function call.
fmt.Sprint does not support format strings. The %s placeholder will be printed literally instead of being replaced with the host value.
Apply this diff:
```diff
-log.Info().Msg(fmt.Sprint("HOST -- %s", cluster.GetConfig().Host))
+log.Info().Msg(fmt.Sprintf("HOST -- %s", cluster.GetConfig().Host))
```
+ log.Info().Msg(fmt.Sprintf("HOST -- %s", cluster.GetConfig().Host))🤖 Prompt for AI Agents
In internal/subroutine/authorization_model_generation.go around line 92, the log
call uses fmt.Sprint with a format string so the %s is not replaced; change the
call to use a formatting function (e.g., fmt.Sprintf with the format and host
value) or use the logger's formatted message helper (e.g., Msgf) so the host is
interpolated correctly in the log output.
Actionable comments posted: 1
🧹 Nitpick comments (1)
internal/subroutine/remove_initializer.go (1)
58-58: Prefer structured logging.
Using `fmt.Sprintf` inside `Msg()` is less efficient and less queryable than structured fields. Consider using zerolog's structured field methods. Apply this diff for better performance and queryability:

```diff
-log.Info().Msg(fmt.Sprintf("Removed initializer from LogicalCluster status, name %s,uuid %s", lc.Name, lc.UID))
+log.Info().
+	Str("name", lc.Name).
+	Str("uid", string(lc.UID)).
+	Msg("Removed initializer from LogicalCluster status")
```
📒 Files selected for processing (1)
- `internal/subroutine/remove_initializer.go` (1 hunks)
🔇 Additional comments (1)
internal/subroutine/remove_initializer.go (1)
24-27: Past issue resolved.
The panic from the previous review has been correctly fixed. Returning a benign result prevents controller crashes during finalization.
```go
// Process implements subroutine.Subroutine.
func (r *removeInitializer) Process(ctx context.Context, instance runtimeobject.RuntimeObject) (ctrl.Result, errors.OperatorError) {
	lc := instance.(*kcpv1alpha1.LogicalCluster)
```
Add type assertion safety check.
The unchecked type assertion will panic if the controller framework passes an unexpected type. Use the two-value form to handle this gracefully.
Apply this diff to add a safety check:
```diff
-lc := instance.(*kcpv1alpha1.LogicalCluster)
+lc, ok := instance.(*kcpv1alpha1.LogicalCluster)
+if !ok {
+	return ctrl.Result{}, errors.NewOperatorError(fmt.Errorf("expected *kcpv1alpha1.LogicalCluster, got %T", instance), false, false)
+}
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In internal/subroutine/remove_initializer.go around line 37, the current
unchecked type assertion "lc := instance.(*kcpv1alpha1.LogicalCluster)" can
panic if instance is not the expected type; replace it with the two-value form
(lc, ok := instance.(*kcpv1alpha1.LogicalCluster)) and handle the failure path
by logging an explanatory error (including the unexpected type) and returning
early (or returning nil/error as appropriate for this function) to avoid a
panic.
go.mod (Outdated)

```diff
@@ -1,94 +1,66 @@
 module github.com/platform-mesh/security-operator

 go 1.24.5
```
Maybe let's use the opportunity and upgrade to the latest Go version in this PR; as usual, a Go upgrade should not be an issue.
Sure, I've done it.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
Dockerfile (3)
10-12: Avoid persisting credentials in image layers (secret leakage).
The current flow writes the token to disk in a separate RUN and sets `credential.helper=store`, leaving secrets in builder layers/cache. Do creds + download in a single RUN with cleanup, and rely on BuildKit secrets.
Apply this diff to remove the persistent credential steps here:

```diff
-RUN git config --global credential.helper store
-RUN --mount=type=secret,id=org_token echo "https://gha:$(cat /run/secrets/org_token)@github.com" > /root/.git-credentials
+# credentials are handled ephemerally during `go mod download`
```

Add this at the `go mod download` step (see the comment on line 19) and also add a Dockerfile syntax directive at the top so `--mount` works reliably:

```dockerfile
# syntax=docker/dockerfile:1.7
```
19-19: Use ephemeral credentials + cache mounts for `go mod download`.
Scope the token to a single layer and remove it before commit; also speed up builds via cache mounts.

```diff
-RUN go mod download
+RUN --mount=type=secret,id=org_token \
+    --mount=type=cache,target=/go/pkg/mod \
+    --mount=type=cache,target=/root/.cache/go-build \
+    bash -eu -c '\
+      trap "rm -f /root/.git-credentials" EXIT; \
+      git config --global credential.helper store; \
+      echo "https://gha:$(cat /run/secrets/org_token)@github.com" > /root/.git-credentials; \
+      go mod download \
+    '
```
7-8: Don't disable the checksum database globally.
Turning off GOSUMDB reduces supply-chain integrity for public deps. Scope no-sum to your private host instead and match all submodules.

```diff
-ENV GOPRIVATE="github.com/platform-mesh"
-ENV GOSUMDB=off
+ENV GOPRIVATE="github.com/platform-mesh/*"
+ENV GONOSUMDB="github.com/platform-mesh/*"
```
🧹 Nitpick comments (2)
Dockerfile (2)
31-31: Faster, smaller, more reproducible builds (cache + flags).
Add build cache mounts and deterministic flags; avoid setting GOARCH when unset.

```diff
-RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager main.go
+RUN --mount=type=cache,target=/root/.cache/go-build \
+    --mount=type=cache,target=/go/pkg/mod \
+    sh -c 'CGO_ENABLED=0 GOOS=${TARGETOS:-linux} ${TARGETARCH:+GOARCH=${TARGETARCH}} \
+    go build -trimpath -ldflags="-s -w -buildid=" -buildvcs=false -o manager main.go'
```
33-36: Optional: pin runtime base image by digest.
Improves supply-chain integrity of the final artifact.

```diff
-FROM gcr.io/distroless/static:nonroot
+# Example (replace with current digest you trust)
+FROM gcr.io/distroless/static:nonroot@sha256:<digest>
```
📒 Files selected for processing (1)
- `Dockerfile` (1 hunks)
🔇 Additional comments (1)
Dockerfile (1)
2-2: Verify CI build workflows and args: go.mod's `go` directive is 1.25.2, matching your Dockerfile, and GOPRIVATE/GOSUMDB are set; please confirm CI buildx secrets/args and workflows are updated.
Actionable comments posted: 2
♻️ Duplicate comments (1)
cmd/operator.go (1)
61-62: Drop the leftover host logging.
Line 61 still emits an info log with the full host for every logical-cluster client. This looks like debugging output, pollutes logs, and can leak internal endpoints. Please remove or downgrade it before merging.
🧹 Nitpick comments (2)
internal/controller/store_controller.go (2)
16-16: Clean up commented-out import.
The commented-out `logicalcluster` import suggests incomplete cleanup from the migration. If this import is no longer needed, it should be removed entirely. Apply this diff to remove the commented import:

```diff
-	//"github.com/kcp-dev/logicalcluster/v3"
```
67-78: Use structured logger instead of global logger.
The watch handler uses the global `log.Error()` (lines 69, 76) instead of the reconciler's structured logger (`r.log`). This is inconsistent with the rest of the codebase and loses context. Apply this diff to use the structured logger:

```diff
 cluster, err := mgr.GetCluster(ctx, model.Spec.StoreRef.Path)
 if err != nil {
-	log.Error().Err(err).Msg("failed to get cluster from manager (store watcher)")
+	r.log.Error().Err(err).Msg("failed to get cluster from manager (store watcher)")
 	return nil
 }

 var lc kcpcorev1alpha1.LogicalCluster
 err = cluster.GetClient().Get(ctx, client.ObjectKey{Name: "cluster"}, &lc)
 if err != nil {
-	log.Error().Err(err).Msg("failed to get logical cluster")
+	r.log.Error().Err(err).Msg("failed to get logical cluster")
 	return nil
 }
```
📒 Files selected for processing (5)
- `cmd/operator.go` (7 hunks)
- `internal/controller/authorization_model_controller.go` (1 hunks)
- `internal/controller/store_controller.go` (2 hunks)
- `internal/subroutine/authorization_model.go` (4 hunks)
- `internal/subroutine/store.go` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
cmd/operator.go (2)
- internal/controller/store_controller.go (1): `NewStoreReconciler` (37-48)
- internal/controller/authorization_model_controller.go (1): `NewAuthorizationModelReconciler` (26-34)

internal/subroutine/authorization_model.go (2)
- api/v1alpha1/store_types.go (1): `Store` (44-50)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModelList` (54-58)

internal/controller/store_controller.go (3)
- internal/subroutine/store.go (1): `NewStoreSubroutine` (28-33)
- internal/subroutine/authorization_model.go (1): `NewAuthorizationModelSubroutine` (30-35)
- internal/subroutine/tuples.go (1): `NewTupleSubroutine` (217-222)

internal/controller/authorization_model_controller.go (2)
- internal/subroutine/tuples.go (1): `NewTupleSubroutine` (217-222)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModel` (31-37)
🔇 Additional comments (17)
internal/subroutine/store.go (3)

5-5: LGTM: Import added for error wrapping.
The `fmt` import is correctly added to support the error wrapping in the `Finalize` method (line 51).

28-32: LGTM: Constructor signature correctly updated.
The constructor signature has been properly updated to accept `mcmanager.Manager` instead of the previous k8s client and `lcClientFunc` parameters, aligning with the multi-cluster runtime migration.

49-54: LGTM: Cluster context retrieval with proper error handling.
The cluster retrieval from context is correctly implemented:
- uses `mgr.ClusterFromContext(ctx)` to obtain the cluster
- properly wraps the error with context
- obtains the per-cluster client via `cluster.GetClient()`

internal/controller/authorization_model_controller.go (4)

22-24: LGTM: Struct updated for multi-cluster lifecycle.
The reconciler struct has been correctly updated to include:
- a `log` field for structured logging
- a `lifecycle` field using the multi-cluster lifecycle manager

26-34: LGTM: Constructor correctly implements builder pattern.
The constructor has been properly migrated to:
- accept `mcmanager.Manager` for multi-cluster support
- use the builder pattern for lifecycle construction
- initialize subroutines with the multi-cluster manager
- call `BuildMultiCluster(mcMgr)` to wire the multi-cluster lifecycle

36-39: LGTM: Per-cluster reconciliation correctly implemented.
The `Reconcile` method properly:
- accepts `mcreconcile.Request` with cluster information
- binds the context to the target cluster via `mccontext.WithCluster`
- delegates to the lifecycle manager with the enriched context

41-43: LGTM: Setup method signature updated for multi-cluster support.
The `SetupWithManager` method correctly:
- accepts `mcmanager.Manager` for multi-cluster wiring
- passes predicates for event filtering
- delegates to the lifecycle's multi-cluster setup

internal/controller/store_controller.go (4)

32-35: LGTM: Struct updated for multi-cluster lifecycle.
The reconciler struct has been correctly updated to include the `fga` client and `log` fields for proper operation in the multi-cluster context.

37-48: LGTM: Constructor correctly implements builder pattern.
The constructor has been properly migrated to:
- accept `mcmanager.Manager` for multi-cluster support
- use the builder pattern with `WithConditionManagement()`
- initialize all required subroutines with the multi-cluster manager
- call `BuildMultiCluster(mcMgr)` to wire the multi-cluster lifecycle

50-53: LGTM: Per-cluster reconciliation correctly implemented.
The `Reconcile` method properly implements per-cluster reconciliation by binding the context to the target cluster before delegating to the lifecycle manager.

56-90: LGTM: Setup method correctly implements multi-cluster watches.
The `SetupWithManager` method properly:
- accepts `mcmanager.Manager` for multi-cluster support
- uses the lifecycle's builder-based setup
- configures watches on `AuthorizationModel` with proper cluster resolution
- applies predicates for event filtering

internal/subroutine/authorization_model.go (6)

27-27: LGTM: Struct field updated for multi-cluster support.
The struct correctly replaces the k8s client and `lcClientFunc` with `mgr mcmanager.Manager`, aligning with the multi-cluster runtime migration.

30-35: LGTM: Constructor signature correctly updated.
The constructor has been properly updated to accept `mcmanager.Manager` and initialize the struct with the multi-cluster manager.

49-73: LGTM: Function correctly refactored for multi-cluster context.
The `getRelatedAuthorizationModels` function has been properly updated to:
- accept only the necessary parameters (removed `lcClientFunc`)
- extract the cluster key from context via `mccontext.ClusterFrom`
- use `mccontext.WithCluster(ctx, "")` for all-context queries
- compare paths using the cluster key directly

79-84: LGTM: Cluster retrieval with proper error handling.
The cluster retrieval from context is correctly implemented with proper error wrapping, matching the pattern used in other files.

84-84: LGTM: Client retrieval updated to use cluster client.
The call to `getRelatedAuthorizationModels` correctly passes `cluster.GetClient()` to obtain the per-cluster client.

47-47: Do not remove the `NewLogicalClusterClientFunc` type
It's still referenced in internal/subroutine/store.go (field `lcClientFunc`), internal/subroutine/authorization_model_generation.go (field `lcClientFunc`), and cmd/operator.go (return type of `logicalClusterClientFromKey`). Likely an incorrect or invalid review comment.
```go
go func() {
	if err := provider.Run(ctx, mgr); err != nil {
		log.Fatal().Err(err).Msg("unable to run provider")
	}
}()
```
Avoid fatal exit when the provider stops.
Line 169 calls log.Fatal, which terminates the process. provider.Run returns context.Canceled on graceful shutdown, so this turns every normal exit into a fatal crash with a non-zero status. Switch to a non-fatal log (and ignore context.Canceled) so shutdown remains clean.
```diff
-go func() {
-	if err := provider.Run(ctx, mgr); err != nil {
-		log.Fatal().Err(err).Msg("unable to run provider")
-	}
-}()
+go func() {
+	if err := provider.Run(ctx, mgr); err != nil && !errors.Is(err, context.Canceled) {
+		log.Error().Err(err).Msg("provider exited with error")
+	}
+}()
```

(Remember to import `errors`.)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
go func() {
	if err := provider.Run(ctx, mgr); err != nil && !errors.Is(err, context.Canceled) {
		log.Error().Err(err).Msg("provider exited with error")
	}
}()
```
🤖 Prompt for AI Agents
In cmd/operator.go around lines 167 to 171, replace the log.Fatal call inside
the goroutine so that when provider.Run returns nil or context.Canceled the
process does not exit with a fatal status: check the error returned, if
errors.Is(err, context.Canceled) (import "errors" and keep context imported) or
err == nil then log an Info/Debug message and return; otherwise log an Error
(non-fatal) with Err(err). Ensure the goroutine does not call log.Fatal so
graceful shutdowns are treated as normal exits.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
internal/subroutine/authorization_model_generation.go (3)
116-121: Don't return early on missing AccountInfo; continue the loop
Returning exits Finalize prematurely and skips checking other bindings. Use `continue`.

```diff
-if kerrors.IsNotFound(err) || meta.IsNoMatchError(err) {
-	// If the accountinfo does not exist, we can skip the model generation.
-	return ctrl.Result{}, nil
-}
+if kerrors.IsNotFound(err) || meta.IsNoMatchError(err) {
+	// Skip this binding; continue scanning others.
+	continue
+}
```
206-208: Fix out-of-bounds risk when truncating group
Indexing `group` by `(len(longestRelationName)-50)` can exceed `len(group)` and panic. Compute a safe `maxGroupLen` instead.

```diff
-if len(longestRelationName) > 50 {
-	group = resourceSchema.Spec.Group[len(longestRelationName)-50:]
-}
+if len(longestRelationName) > 50 {
+	// Ensure len("create_"+group+"_"+plural) <= 50
+	maxGroupLen := 50 - (len("create_") + 1 + len(resourceSchema.Spec.Names.Plural))
+	if maxGroupLen < 0 {
+		maxGroupLen = 0
+	}
+	if len(group) > maxGroupLen {
+		group = group[len(group)-maxGroupLen:]
+	}
+}
```
140-145: Align deletion logic with creation naming or use labels
Creation names models as `<plural>-<org>`, but Finalize deletes `<exportName>-<exportPath>`, orphaning models. Either delete using the plural-org scheme, or add labels on create and list-delete by those labels, as in the sketch below.
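A rough sketch of the label-based option, assuming controller-runtime's `DeleteAllOf`; the label key and the `binding` variable are hypothetical, not taken from this PR:

```go
// Hypothetical label stamped onto each generated AuthorizationModel at
// creation time, so Finalize can find them without recomputing names.
const generatedForLabel = "security-operator.platform-mesh.io/generated-for"

// In Finalize: delete everything created for this binding in one call.
err := cluster.GetClient().DeleteAllOf(ctx, &corev1alpha1.AuthorizationModel{},
	client.MatchingLabels{generatedForLabel: binding.Name})
```

cmd/operator.go (1)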
137-144: grpc.NewClient does not exist; use grpc.DialContext/Dial
This won't compile. Create a client connection with DialContext.

```diff
-conn, err := grpc.NewClient(appCfg.FGA.Target, grpc.WithTransportCredentials(insecure.NewCredentials()))
+conn, err := grpc.DialContext(ctx, appCfg.FGA.Target, grpc.WithTransportCredentials(insecure.NewCredentials()))
 if err != nil {
 	log.Error().Err(err).Msg("unable to create grpc client")
 	return err
 }
```

internal/controller/store_controller.go (1)
64-89: Use multicluster reconcile.Request and set ClusterName; drop unnecessary cluster lookups
The handler from multicluster-runtime expects `[]mcreconcile.Request`. You also need ClusterName in the request. The LogicalCluster Get is unused; remove it.

```diff
-handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {
+handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []mcreconcile.Request {
 	model := obj.(*corev1alpha1.AuthorizationModel)
-
-	cluster, err := mgr.GetCluster(ctx, model.Spec.StoreRef.Path)
-	if err != nil {
-		log.Error().Err(err).Msg("failed to get cluster from manager (store watcher)")
-		return nil
-	}
-
-	var lc kcpcorev1alpha1.LogicalCluster
-	err = cluster.GetClient().Get(ctx, client.ObjectKey{Name: "cluster"}, &lc)
-	if err != nil {
-		log.Error().Err(err).Msg("failed to get logical cluster")
-		return nil
-	}
-
-	return []reconcile.Request{
+	return []mcreconcile.Request{
 		{
+			ClusterName: model.Spec.StoreRef.Path,
 			NamespacedName: types.NamespacedName{
 				Name: model.Spec.StoreRef.Name,
 			},
 		},
 	}
 }),
```

Additionally, remove the now-unused imports:
- `kcpcorev1alpha1`
- `"sigs.k8s.io/controller-runtime/pkg/reconcile"`
- potentially `"sigs.k8s.io/controller-runtime/pkg/client"` if not used elsewhere in this file.
internal/subroutine/remove_initializer.go (1)
37-41: Guard the type assertion to avoid panicsUse the two-value assertion and return a typed error if instance is unexpected.
- lc := instance.(*kcpv1alpha1.LogicalCluster) + lc, ok := instance.(*kcpv1alpha1.LogicalCluster) + if !ok { + return ctrl.Result{}, errors.NewOperatorError(fmt.Errorf("expected *kcpv1alpha1.LogicalCluster, got %T", instance), false, false) + }cmd/operator.go (1)
167-171: Don’t fatal-exit on normal provider shutdownprovider.Run returns context.Canceled on graceful shutdown; log.Fatal turns this into a crash. Log non-fatal and ignore context.Canceled.
- go func() { - if err := provider.Run(ctx, mgr); err != nil { - log.Fatal().Err(err).Msg("unable to run provider") - } - }() + go func() { + if err := provider.Run(ctx, mgr); err != nil && !errors.Is(err, context.Canceled) { + log.Error().Err(err).Msg("provider exited with error") + } + }()Also add:
import ( + "errors" )
🧹 Nitpick comments (7)
internal/controller/store_controller.go (1)
56-62: Minor: avoid shadowing the builder import with a local varRename the local builder variable to bldr for clarity.
- builder, err := r.lifecycle.SetupWithManagerBuilder(...) + bldr, err := r.lifecycle.SetupWithManagerBuilder(...) ... - return builder. + return bldr.internal/subroutine/authorization_model.go (1)
110-110: Consider tracking the TODO comment.The TODO comment suggests handling not-found authorization model errors by continuing with writing the model. This is a pre-existing item but could improve resilience.
Would you like me to open a new issue to track this TODO item?
internal/subroutine/store_test.go (5)
23-24: Avoid shadowing the imported package namesubroutineRename the local variable to reduce confusion.
Apply:
- subroutine := subroutine.NewStoreSubroutine(nil, nil) - assert.Equal(t, "Store", subroutine.GetName()) + sr := subroutine.NewStoreSubroutine(nil, nil) + assert.Equal(t, "Store", sr.GetName())- subroutine := subroutine.NewStoreSubroutine(nil, nil) - assert.Equal(t, []string{"core.platform-mesh.io/fga-store"}, subroutine.Finalizers()) + sr := subroutine.NewStoreSubroutine(nil, nil) + assert.Equal(t, []string{"core.platform-mesh.io/fga-store"}, sr.Finalizers())Also applies to: 28-30
38-39:mgrMocksis defined but never used in TestProcessEither remove it or invoke it after creating the manager. Suggest invoking to keep flexibility.
Apply:
manager := mocks.NewMockManager(t) subroutine := subroutine.NewStoreSubroutine(fga, manager) + if test.mgrMocks != nil { + test.mgrMocks(manager) + }Also applies to: 133-135
153-154:mgrMocksis also unused in TestFinalizeWire it similarly after creating the manager.
Apply:
manager := mocks.NewMockManager(t) subroutine := subroutine.NewStoreSubroutine(fga, manager) + if test.mgrMocks != nil { + test.mgrMocks(manager) + }Also applies to: 269-271
272-281: Good gating of cluster/client wiring; rename localclientto avoid shadowing the imported packageShadowing
client(package) with a local variable reduces readability.Apply:
- client := mocks.NewMockClient(t) + k8sClient := mocks.NewMockClient(t) if test.k8sMocks != nil { - test.k8sMocks(client) + test.k8sMocks(k8sClient) } manager.EXPECT().ClusterFromContext(mock.Anything).Return(cluster, nil) - cluster.EXPECT().GetClient().Return(client) + cluster.EXPECT().GetClient().Return(k8sClient)Also consider adding a test case where
ClusterFromContextreturns an error to cover that failure path.
283-283: Pass a plain string to WithCluster; drop the logicalcluster conversion/importThe conversion is unnecessary; using a string simplifies the test and allows removing the logicalcluster import.
Apply:
- ctx := mccontext.WithCluster(context.Background(), string(logicalcluster.Name("path"))) + ctx := mccontext.WithCluster(context.Background(), "path")And remove the now-unused import:
// delete from imports: // "github.com/kcp-dev/logicalcluster/v3"Confirm that your manager’s ClusterFromContext uses the same key format you set here (plain path string). If not, adjust accordingly.
📒 Files selected for processing (17)
- `.mockery.yaml` (1 hunks)
- `cmd/operator.go` (7 hunks)
- `internal/controller/authorization_model_controller.go` (1 hunks)
- `internal/controller/store_controller.go` (1 hunks)
- `internal/subroutine/authorization_model.go` (4 hunks)
- `internal/subroutine/authorization_model_generation.go` (7 hunks)
- `internal/subroutine/authorization_model_generation_test.go` (5 hunks)
- `internal/subroutine/authorization_model_test.go` (10 hunks)
- `internal/subroutine/mocks/mock_Client.go` (5 hunks)
- `internal/subroutine/mocks/mock_Cluster.go` (1 hunks)
- `internal/subroutine/mocks/mock_Manager.go` (1 hunks)
- `internal/subroutine/mocks/mock_OpenFGAServiceClient.go` (1 hunks)
- `internal/subroutine/remove_initializer.go` (1 hunks)
- `internal/subroutine/store.go` (3 hunks)
- `internal/subroutine/store_test.go` (10 hunks)
- `internal/subroutine/tuples_test.go` (9 hunks)
- `internal/subroutine/worksapce_authorization.go` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/subroutine/worksapce_authorization.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-07T11:39:49.273Z
Learnt from: kekcleader
PR: platform-mesh/account-operator#43
File: pkg/subroutines/mocks/mock_K8Service.go:533-605
Timestamp: 2025-10-07T11:39:49.273Z
Learning: In controller-runtime v0.22.x, the Writer.Apply method signature uses runtime.ApplyConfiguration as the second parameter, not client.Object: Apply(ctx context.Context, obj runtime.ApplyConfiguration, opts ...ApplyOption) error. This is the correct signature for Server-Side Apply support.
Applied to files:
internal/subroutine/mocks/mock_Client.go
🧬 Code graph analysis (9)
internal/subroutine/authorization_model_generation.go (1)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModel` (31-37)

cmd/operator.go (2)
- internal/controller/store_controller.go (1): `NewStoreReconciler` (37-48)
- internal/controller/authorization_model_controller.go (1): `NewAuthorizationModelReconciler` (26-34)

internal/subroutine/authorization_model_generation_test.go (4)
- internal/subroutine/mocks/mock_Client.go (2): `MockClient` (22-24), `NewMockClient` (835-845)
- internal/subroutine/mocks/mock_Manager.go (1): `NewMockManager` (827-837)
- internal/subroutine/mocks/mock_Cluster.go (1): `NewMockCluster` (509-519)
- internal/subroutine/authorization_model_generation.go (1): `NewAuthorizationModelGenerationSubroutine` (28-32)

internal/subroutine/tuples_test.go (4)
- internal/subroutine/tuples.go (1): `NewTupleSubroutine` (217-222)
- internal/subroutine/mocks/mock_Manager.go (2): `MockManager` (31-33), `NewMockManager` (827-837)
- internal/subroutine/mocks/mock_Cluster.go (1): `NewMockCluster` (509-519)
- internal/subroutine/mocks/mock_Client.go (1): `NewMockClient` (835-845)

internal/subroutine/authorization_model.go (2)
- api/v1alpha1/store_types.go (1): `Store` (44-50)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModelList` (54-58)

internal/subroutine/authorization_model_test.go (2)
- internal/subroutine/authorization_model.go (1): `NewAuthorizationModelSubroutine` (29-34)
- internal/subroutine/mocks/mock_Manager.go (2): `MockManager` (31-33), `NewMockManager` (827-837)

internal/subroutine/store_test.go (4)
- internal/subroutine/store.go (1): `NewStoreSubroutine` (27-32)
- internal/subroutine/mocks/mock_Manager.go (2): `MockManager` (31-33), `NewMockManager` (827-837)
- internal/subroutine/mocks/mock_Cluster.go (1): `NewMockCluster` (509-519)
- internal/subroutine/mocks/mock_Client.go (1): `NewMockClient` (835-845)

internal/controller/authorization_model_controller.go (2)
- internal/subroutine/tuples.go (1): `NewTupleSubroutine` (217-222)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModel` (31-37)

internal/controller/store_controller.go (5)
- internal/subroutine/store.go (1): `NewStoreSubroutine` (27-32)
- internal/subroutine/authorization_model.go (1): `NewAuthorizationModelSubroutine` (29-34)
- internal/subroutine/tuples.go (1): `NewTupleSubroutine` (217-222)
- api/v1alpha1/store_types.go (1): `Store` (44-50)
- api/v1alpha1/authorizationmodel_types.go (1): `AuthorizationModel` (31-37)
🔇 Additional comments (13)
internal/subroutine/authorization_model_generation.go (2)

28-38: Constructor migration to mcmanager.Manager looks good
State is reduced to `mgr`; aligns with multicluster-runtime patterns.

227-237: CreateOrUpdate via per-cluster client is correct
Idempotent update with `cluster.GetClient()` fits the multicluster flow.

internal/subroutine/mocks/mock_Manager.go (1)

1-837: Generated mock looks good
Interfaces and methods mirror `mcmanager.Manager`; safe to proceed.

cmd/operator.go (2)

127-134: No issue: `setupLog` is defined in cmd/root.go
Declared at line 60 (`setupLog = ctrl.Log.WithName("setup")`), so references in operator.go compile.

47-69: Don't remove — function is in use
`logicalClusterClientFromKey` and `NewLogicalClusterClientFunc` are referenced in cmd/initializer.go:78; they aren't dead code. Likely an incorrect or invalid review comment.

internal/subroutine/authorization_model.go (6)

24-27: LGTM! Clean refactoring to manager-based pattern.
The struct now stores a `mcmanager.Manager` instead of separate k8s client and LC client function, which is a more cohesive design aligned with multicluster-runtime patterns.

46-46: LGTM! Simplified function signature.
Removing the `lcClientFunc` parameter simplifies the signature. The cluster key is now extracted directly via `mccontext.ClusterFrom(ctx)`.

29-34: All `NewAuthorizationModelSubroutine` calls updated
Verified that each invocation now passes an `mcmanager.Manager` (the store controller uses `mcMgr`; tests use `nil` or a mock manager).

76-81: Verify multicluster client behavior
Ensure `cluster.GetClient()` provides the same namespace scoping, list filtering, and RBAC semantics as the previous Kubernetes client.

48-53: Verify cluster key vs StoreRef.Path format
Ensure that the string returned by `mccontext.ClusterFrom(ctx)` matches the logical cluster path format expected by `model.Spec.StoreRef.Path` (line 62).

18-19: Confirm acceptance of alpha dependency on multicluster-runtime v0.21.0-alpha.9
This import points to an experimental alpha release (no stable/GA version exists; known instability and breaking-change risk). Ensure you're comfortable using this in production, have a migration plan for a stable release, and validate that the APIs you rely on won't change.

internal/subroutine/store_test.go (2)

19-19: mccontext import switch looks good
Using sigs.k8s.io/multicluster-runtime/pkg/context aligns with the new manager wiring.

175-176: List() mock signature updates look correct
Using two args with the variadic List signature and the RunAndReturn helper is consistent with controller-runtime's client.Client.
Also applies to: 204-204, 219-219, 236-236, 253-253
Actionable comments posted: 1
🧹 Nitpick comments (3)
internal/subroutine/remove_initializer_test.go (2)
30-39: Consider adding a defensive type assertion check.
The type assertion on line 31 assumes `obj` is always a `*kcpv1alpha1.LogicalCluster`. While this is safe in the current test context, adding a check would make the helper more robust and prevent potential panics if reused elsewhere. Apply this diff to add a defensive check:

```diff
 func (f *fakeStatusWriter) Patch(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.SubResourcePatchOption) error {
-	lc := obj.(*kcpv1alpha1.LogicalCluster)
+	lc, ok := obj.(*kcpv1alpha1.LogicalCluster)
+	if !ok {
+		f.t.Fatalf("expected *LogicalCluster, got %T", obj)
+	}
 	// Ensure initializer was removed before patch
 	for _, init := range lc.Status.Initializers {
 		if init == f.expectClear {
 			f.t.Fatalf("initializer %q should have been removed prior to Patch", string(init))
 		}
 	}
 	return f.err
 }
```
41-99: Consider adding test coverage for ClusterFromContext error propagation.
The current test suite comprehensively covers the core RemoveInitializer logic. However, adding a test case that verifies error propagation when `Manager.ClusterFromContext` fails would improve completeness. Example test case to add:

```go
t.Run("returns error when ClusterFromContext fails", func(t *testing.T) {
	mgr := mocks.NewMockManager(t)
	mgr.EXPECT().ClusterFromContext(mock.Anything).Return(nil, assert.AnError)
	lc := &kcpv1alpha1.LogicalCluster{}
	lc.Status.Initializers = []kcpv1alpha1.LogicalClusterInitializer{
		kcpv1alpha1.LogicalClusterInitializer(initializerName),
	}
	r := subroutine.NewRemoveInitializer(mgr, initializerName)
	_, err := r.Process(context.Background(), lc)
	assert.NotNil(t, err)
})
```

internal/subroutine/realm_test.go (1)
187-206: Verify that SMTP configuration is correctly applied.
The test validates that no error occurs when SMTP config is present, but it doesn't verify that the SMTP configuration values are actually used or passed correctly to the patched objects. Consider inspecting the patched object's spec/values to confirm the SMTP settings are properly applied, similar to how the test on lines 166-174 validates the HelmRelease values. Consider enhancing the test to inspect the patched objects:

```diff
 t.Run("success create with SMTP config", func(t *testing.T) {
 	t.Parallel()
 	clientMock := newClientMock(t, func(m *mocks.MockClient) {
-		m.EXPECT().Patch(mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Twice()
+		m.EXPECT().Patch(mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
+		m.EXPECT().Patch(mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+			RunAndReturn(func(_ context.Context, obj client.Object, _ client.Patch, _ ...client.PatchOption) error {
+				hr := obj.(*unstructured.Unstructured)
+				spec, _, _ := unstructured.NestedFieldNoCopy(hr.Object, "spec")
+				specMap := spec.(map[string]interface{})
+				values, _, _ := unstructured.NestedFieldNoCopy(specMap, "values")
+				valuesMap := values.(map[string]interface{})
+				// Verify SMTP config is present in values
+				require.Contains(t, valuesMap, "smtp")
+				return nil
+			}).Once()
 	})
```
📒 Files selected for processing (2)
- `internal/subroutine/realm_test.go` (2 hunks)
- `internal/subroutine/remove_initializer_test.go` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
internal/subroutine/realm_test.go (3)
- internal/subroutine/mocks/mock_Client.go (1): `MockClient` (22-24)
- internal/config/config.go (1): `Config` (4-31)
- internal/subroutine/realm.go (1): `NewRealmSubroutine` (40-46)

internal/subroutine/remove_initializer_test.go (4)
- internal/subroutine/mocks/mock_Manager.go (1): `NewMockManager` (827-837)
- internal/subroutine/mocks/mock_Cluster.go (1): `NewMockCluster` (509-519)
- internal/subroutine/remove_initializer.go (1): `NewRemoveInitializer` (63-68)
- internal/subroutine/mocks/mock_Client.go (1): `NewMockClient` (835-845)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: pipe / dockerBuild / docker
🔇 Additional comments (3)
internal/subroutine/remove_initializer_test.go (3)
44-55: LGTM!
This test correctly validates the skip behavior when the initializer is absent. The minimal mock setup (only ClusterFromContext) is appropriate since Process should return early without calling GetClient.

57-79: LGTM!
This test thoroughly validates the removal and patching behavior:
- mock expectations correctly chain Manager → Cluster → Client → StatusWriter
- the `fakeStatusWriter` validates that the initializer was removed before the Patch call
- lines 76-78 verify the in-memory object is also updated
- the test uses multiple initializers to ensure only the target is removed

81-98: LGTM!
This test correctly validates error propagation when the status patch fails. The use of `assert.AnError` is appropriate for testing error handling without coupling to specific error types.
assert.AnErroris appropriate for testing error handling without coupling to specific error types.
| // New: Finalize missing workspace annotation | ||
| t.Run("missing workspace annotation in Finalize", func(t *testing.T) { | ||
| clientMock := newClientMock(t, nil) | ||
| rs := NewRealmSubroutine(clientMock, &config.Config{}, baseDomain) | ||
| lc := &kcpv1alpha1.LogicalCluster{} | ||
| res, opErr := rs.Finalize(context.Background(), lc) | ||
| require.NotNil(t, opErr) | ||
| require.Equal(t, ctrl.Result{}, res) | ||
| }) |
🛠️ Refactor suggestion | 🟠 Major
Move Finalize test outside the Process test block.
This test validates the Finalize method but is nested inside the "Process" test run block (line 154), which is confusing. Move it to the "Finalize - delete scenarios" block (starting at line 245) or create a separate top-level Finalize test group for better organization.
Apply this diff to move the test to the appropriate location:
```diff
 })

-// New: Finalize missing workspace annotation
-t.Run("missing workspace annotation in Finalize", func(t *testing.T) {
-	clientMock := newClientMock(t, nil)
-	rs := NewRealmSubroutine(clientMock, &config.Config{}, baseDomain)
-	lc := &kcpv1alpha1.LogicalCluster{}
-	res, opErr := rs.Finalize(context.Background(), lc)
-	require.NotNil(t, opErr)
-	require.Equal(t, ctrl.Result{}, res)
-})
-
 t.Run("missing workspace annotation", func(t *testing.T) {
```

Then add it to the Finalize test block after line 244:

```diff
 })
 t.Run("Finalize - delete scenarios", func(t *testing.T) {
 	t.Parallel()
+
+	t.Run("missing workspace annotation", func(t *testing.T) {
+		t.Parallel()
+		clientMock := newClientMock(t, nil)
+		rs := NewRealmSubroutine(clientMock, &config.Config{}, baseDomain)
+		lc := &kcpv1alpha1.LogicalCluster{}
+		res, opErr := rs.Finalize(context.Background(), lc)
+		require.NotNil(t, opErr)
+		require.Equal(t, ctrl.Result{}, res)
+	})
+
 	cases := []struct {
```
Actionable comments posted: 0
🧹 Nitpick comments (1)
internal/subroutine/remove_initializer_test.go (1)
76-78: Consider using `assert.NotContains` for a more idiomatic assertion. The loop-based assertion works correctly, but you could simplify it using testify's built-in helper.
Apply this diff to make the assertion more concise:
```diff
- // ensure it's removed in in-memory object as well
- for _, init := range lc.Status.Initializers {
- 	assert.NotEqual(t, initializerName, string(init))
- }
+ // ensure it's removed in in-memory object as well
+ assert.NotContains(t, lc.Status.Initializers, kcpv1alpha1.LogicalClusterInitializer(initializerName))
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/subroutine/remove_initializer_test.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/subroutine/remove_initializer_test.go (4)
internal/subroutine/mocks/mock_Manager.go (1)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/remove_initializer.go (1)
NewRemoveInitializer(63-68)
internal/subroutine/mocks/mock_Client.go (1)
NewMockClient(835-845)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: pipe / dockerBuild / docker
- GitHub Check: pipe / testSource / test
- GitHub Check: pipe / lint / lint
🔇 Additional comments (4)
internal/subroutine/remove_initializer_test.go (4)
15-39: LGTM! Test helper is well-designed. The `fakeStatusWriter` effectively validates that the initializer is removed before the patch is applied. The implementation correctly intercepts the Status().Patch call and asserts the expected behavior.
41-99: LGTM! Comprehensive test coverage. The test cases thoroughly cover the RemoveInitializer behavior:
- Skipping when initializer is absent
- Successfully removing and patching
- Error propagation when patch fails
All scenarios are well-tested with appropriate mocking and assertions.
101-110: LGTM! Metadata methods are properly tested. The test correctly verifies that GetName returns "RemoveInitializer", Finalizers returns an empty slice, and Finalize returns no error.
112-120: LGTM! Error propagation is properly tested. The test correctly verifies that when `ClusterFromContext` fails, the error is propagated to the caller. This ensures robust error handling in the subroutine.
Actionable comments posted: 0
🧹 Nitpick comments (1)
internal/controller/store_controller.go (1)
62-62: Consider a defensive type assertion. The type assertion `model := obj.(*corev1alpha1.AuthorizationModel)` could panic if an unexpected type is passed, though the watch type should guarantee correctness. For defensive coding, you could add a check:

```diff
- model := obj.(*corev1alpha1.AuthorizationModel)
+ model, ok := obj.(*corev1alpha1.AuthorizationModel)
+ if !ok {
+ 	return nil
+ }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
internal/controller/store_controller.go (1 hunks)
internal/subroutine/authorization_model_generation.go (7 hunks)
internal/subroutine/tuples.go (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
internal/subroutine/tuples.go (1)
api/v1alpha1/store_types.go (1)
Store(44-50)
internal/subroutine/authorization_model_generation.go (1)
api/v1alpha1/authorizationmodel_types.go (1)
AuthorizationModel(31-37)
internal/controller/store_controller.go (5)
internal/subroutine/store.go (1)
NewStoreSubroutine(27-32)
internal/subroutine/authorization_model.go (1)
NewAuthorizationModelSubroutine(29-34)
internal/subroutine/tuples.go (1)
NewTupleSubroutine(222-227)
api/v1alpha1/store_types.go (1)
Store(44-50)
api/v1alpha1/authorizationmodel_types.go (1)
AuthorizationModel(31-37)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: pipe / dockerBuild / docker
- GitHub Check: pipe / lint / lint
🔇 Additional comments (14)
internal/controller/store_controller.go (5)
3-25: LGTM! Imports correctly updated for multi-cluster support. The imports properly include the necessary multicluster-runtime packages (manager, reconcile, context, builder, handler) with clear aliasing, while retaining required controller-runtime packages.
28-32: LGTM! Struct properly refactored for multi-cluster lifecycle. The StoreReconciler now correctly holds the fga client, logger, and multi-cluster lifecycle manager as private fields.
34-45: LGTM! Constructor correctly refactored for multi-cluster lifecycle. The constructor properly:
- Accepts `mcmanager.Manager` instead of single-cluster dependencies
- Wires all three subroutines with the multi-cluster manager
- Builds the lifecycle with `BuildMultiCluster(mcMgr)`

The previously flagged unused parameter issue has been addressed.
47-50: LGTM! Reconcile correctly implements the multi-cluster pattern. The method properly:
- Accepts `mcreconcile.Request` with cluster information
- Establishes per-cluster context via `mccontext.WithCluster`
- Delegates to the lifecycle manager for reconciliation
This aligns with multicluster-runtime best practices.
Based on learnings
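As a reference point, the whole pattern fits in a few lines. A minimal sketch, assuming the lifecycle manager exposes a `Reconcile(ctx, req, obj)` method (the exact signature is an assumption, not taken from the diff):

```go
// Reconcile resolves the originating cluster from the multicluster request
// and delegates the actual work to the shared lifecycle manager.
func (r *StoreReconciler) Reconcile(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
	// Tag the context so subroutines can resolve the right cluster client.
	ctx = mccontext.WithCluster(ctx, req.ClusterName)
	return r.lifecycle.Reconcile(ctx, req.Request, &corev1alpha1.Store{})
}
```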
58-73: Verify multi-cluster request handling in Watches
- Confirm that `handler.EnqueueRequestsFromMapFunc` correctly converts your returned `[]reconcile.Request` into the expected `[]mcreconcile.Request` for the reconciler.
- Confirm that the handler injects the appropriate `ClusterName` into each request (or explicitly include it) so reconciliation runs in the right cluster.

internal/subroutine/tuples.go (4)
43-46: LGTM! Proper error handling for cluster context. The error handling correctly wraps the cluster retrieval error and returns it as an OperatorError with appropriate retry flags.
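For readers following along, the shape of that error handling is roughly (a sketch; the `NewOperatorError(err, retry, sentry)` flags are assumptions based on how the helper is referenced elsewhere in this thread):

```go
cluster, err := t.mgr.ClusterFromContext(ctx)
if err != nil {
	// Retry: the cluster may simply not be engaged with the provider yet.
	return ctrl.Result{}, errors.NewOperatorError(err, true, true)
}
cl := cluster.GetClient() // cluster-scoped client for the reads/writes below
```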
222-227: LGTM! Constructor migration complete. The constructor signature correctly reflects the migration to multicluster-runtime by accepting an `mcmanager.Manager` instead of the previous client-based parameters.
142-145: Verify cross-cluster store access
Confirm that:
- `obj.Spec.StoreRef.Path` is a valid, resolvable cluster identifier for `t.mgr.GetCluster`
- the target store lives in a different cluster than the AuthorizationModel
- `GetCluster` errors are handled correctly (wrapped/preserved via NewOperatorError)
56-56: Verify mccontext.WithCluster signature
internal/subroutine/tuples.go:56
Confirm that `mccontext.WithCluster(ctx, string(logicalcluster.Name(...)))` matches the expected parameter type and semantics in the multicluster-runtime context package.

internal/subroutine/authorization_model_generation.go (5)
28-32: LGTM! Constructor migration complete. The constructor correctly accepts `mcmanager.Manager` and stores it for use throughout the subroutine, aligning with the multicluster-runtime migration.
86-89: LGTM! Proper error handling for cluster context. The error handling correctly retrieves the cluster from context and wraps any errors appropriately.
232-232: LGTM! Correct cluster client usage. The `CreateOrUpdate` call correctly uses `apiExportCluster.GetClient()` to perform the operation in the API export cluster context.
110-113: Confirm correct cluster identifier extraction. Verify that `string(logicalcluster.From(&binding))` returns the exact cluster name/ID `mgr.GetCluster` expects and accurately represents the binding's cluster.
189-192: Verify APIExportClusterName before calling GetCluster
- Ensure `binding.Status.APIExportClusterName` is non-empty and matches a valid cluster identifier.
- Confirm the referenced cluster is created and accessible at runtime.
- Handle `GetCluster` errors to distinguish missing-cluster vs transient failures.
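One way to make those distinctions explicit, sketched under the assumption that the subroutine keeps the manager in an `mgr` field (the requeue interval and error flags are illustrative, not prescribed):

```go
clusterName := binding.Status.APIExportClusterName
if clusterName == "" {
	// Not a hard failure: the APIExport's cluster may not be published yet.
	return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}

apiExportCluster, err := a.mgr.GetCluster(ctx, clusterName)
if err != nil {
	// Missing cluster and transient provider failures both land here; retry.
	return ctrl.Result{}, errors.NewOperatorError(err, true, true)
}
exportClient := apiExportCluster.GetClient() // used for the CreateOrUpdate later on
```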
On-behalf-of: SAP [email protected]
On-behalf-of: SAP [email protected]
Actionable comments posted: 0
🧹 Nitpick comments (4)
internal/subroutine/authorization_model_generation_test.go (2)
31-286: Update test name to reflect new implementation. Line 39: The test name "error on lcClientFunc for binding workspace client" refers to the old implementation detail (`lcClientFunc`). Now that the implementation uses `manager.ClusterFromContext`, consider renaming this test to something like "error on ClusterFromContext for binding workspace" to accurately reflect what's being tested. Apply this diff to update the test name:
```diff
 {
-	name: "error on lcClientFunc for binding workspace client",
+	name: "error on ClusterFromContext for binding workspace",
 	binding: &kcpv1alpha1.APIBinding{
```

And update the corresponding check on line 266:
```diff
-	if test.name == "error on lcClientFunc for binding workspace client" {
+	if test.name == "error on ClusterFromContext for binding workspace" {
 		manager.EXPECT().ClusterFromContext(mock.Anything).Return(nil, assert.AnError)
```
288-550: Update test name to reflect new implementation. Line 318: The test name "error on lcClientFunc for binding in Finalize" refers to the old implementation detail (`lcClientFunc`). Consider renaming to "error on ClusterFromContext for binding in Finalize" to match the current implementation. Apply this diff to update the test name:
```diff
 {
-	name: "error on lcClientFunc for binding in Finalize",
+	name: "error on ClusterFromContext for binding in Finalize",
 	binding: &kcpv1alpha1.APIBinding{Spec: kcpv1alpha1.APIBindingSpec{Reference: kcpv1alpha1.BindingReference{Export: &kcpv1alpha1.ExportBindingReference{Name: "foo", Path: "bar"}}}},
```

And update the corresponding check on line 530:
```diff
-	if test.name == "error on lcClientFunc for binding in Finalize" {
+	if test.name == "error on ClusterFromContext for binding in Finalize" {
 		manager.EXPECT().ClusterFromContext(mock.Anything).Return(nil, assert.AnError)
```

internal/subroutine/tuples_test.go (1)
35-38: Remove or wire up the new `mgrMocks` hook. `mgrMocks` was added to the table struct but never invoked, so callers cannot inject manager expectations and we silently ignore setup logic. Either call `test.mgrMocks(manager)` (mirroring the old `k8sMocks` usage) or drop the field to avoid dead code.

internal/controller/store_controller.go (1)
61-65: Consider logging when the type assertion fails. While returning `nil` on type assertion failure is a common pattern, adding a debug log statement when `ok` is false would aid troubleshooting if unexpected object types are passed to the handler. Example:
```diff
 model, ok := obj.(*corev1alpha1.AuthorizationModel)
 if !ok {
+	// Use r.log.Debug or similar if logger is accessible in closure
 	return nil
 }
```

Note: This requires the logger to be accessible in the closure, which may need refactoring the handler into a method on `StoreReconciler`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
internal/controller/store_controller.go (2 hunks)
internal/subroutine/authorization_model_generation_test.go (5 hunks)
internal/subroutine/tuples_test.go (9 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
internal/subroutine/tuples_test.go (4)
internal/subroutine/tuples.go (1)
NewTupleSubroutine(222-227)
internal/subroutine/mocks/mock_Manager.go (2)
MockManager(31-33)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/mocks/mock_Client.go (1)
NewMockClient(835-845)
internal/controller/store_controller.go (5)
internal/subroutine/store.go (1)
NewStoreSubroutine(27-32)
internal/subroutine/authorization_model.go (1)
NewAuthorizationModelSubroutine(29-34)
internal/subroutine/tuples.go (1)
NewTupleSubroutine(222-227)
api/v1alpha1/store_types.go (1)
Store(44-50)
api/v1alpha1/authorizationmodel_types.go (1)
AuthorizationModel(31-37)
internal/subroutine/authorization_model_generation_test.go (4)
internal/subroutine/mocks/mock_Client.go (2)
MockClient(22-24)
NewMockClient(835-845)
internal/subroutine/mocks/mock_Manager.go (1)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/authorization_model_generation.go (1)
NewAuthorizationModelGenerationSubroutine(28-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: pipe / lint / lint
- GitHub Check: pipe / dockerBuild / docker
🔇 Additional comments (7)
internal/subroutine/authorization_model_generation_test.go (1)
552-555: Finalizers() handles nil receiver safely. The method returns an empty slice without using the receiver, so calling it on a nil pointer is safe.

internal/controller/store_controller.go (6)
3-25: LGTM! Import changes align with the multicluster migration. The imports correctly reflect the transition from single-cluster to multi-cluster runtime, bringing in the necessary multicluster-runtime packages (mcbuilder, mccontext, mcmanager, mcreconcile) and updating the lifecycle import to the multicluster variant.
28-32: LGTM! Struct fields properly support the multicluster lifecycle. The addition of `fga` and `log` fields enables the constructor to build a complete multicluster lifecycle, and the `lifecycle` field correctly uses the multicluster LifecycleManager type.
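Presumably the struct now reads roughly as follows (a sketch; the concrete fga client and logger types are assumptions, not taken from the diff):

```go
type StoreReconciler struct {
	fga       openfgav1.OpenFGAServiceClient // assumed OpenFGA gRPC client type
	log       *logger.Logger                 // assumed golang-commons logger
	lifecycle *multicluster.LifecycleManager // multicluster lifecycle manager
}
```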
34-45: LGTM! Well-structured multicluster lifecycle construction. The constructor follows best practices by:
- Using the builder pattern to compose three subroutines (Store, AuthorizationModel, Tuple)
- Properly injecting dependencies (fga, mcMgr) to each subroutine
- Enabling condition management via `WithConditionManagement()`
- Building the multicluster lifecycle with `BuildMultiCluster(mcMgr)`

The removal of the unused `clt` parameter (addressed in commit 88c5326) is correctly reflected here.
47-50: LGTM! Correct multicluster reconciliation pattern. The Reconcile method correctly:
- Accepts `mcreconcile.Request` which contains cluster information
- Enriches the context with the cluster name via `mccontext.WithCluster`
- Delegates to the lifecycle manager with the cluster-aware context
This follows the standard multicluster-runtime reconciliation pattern.
53-77: LGTM! Proper multicluster manager setup with watch relationship. The SetupWithManager correctly:
- Accepts `mcmanager.Manager` for multicluster support
- Uses `lifecycle.SetupWithManagerBuilder` to integrate with the lifecycle manager
- Establishes a watch on `AuthorizationModel` objects that triggers Store reconciliation when the referenced store changes
- Applies generation-change predicates to avoid unnecessary reconciliations
The watch logic correctly extracts the `StoreRef.Name` from the AuthorizationModel to enqueue the appropriate Store for reconciliation.
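The mapping itself is small. A sketch of what such a map function could look like (the `mcreconcile.Request` field layout is an assumption, and note the open question above about where `ClusterName` gets populated):

```go
// mapModelToStore enqueues the Store referenced by a changed AuthorizationModel.
func mapModelToStore(_ context.Context, obj client.Object) []mcreconcile.Request {
	model, ok := obj.(*corev1alpha1.AuthorizationModel)
	if !ok {
		return nil // unexpected type: nothing to enqueue
	}
	return []mcreconcile.Request{{
		Request: reconcile.Request{
			NamespacedName: types.NamespacedName{Name: model.Spec.StoreRef.Name},
		},
		// ClusterName must also be set for multicluster routing; see the
		// verification point raised above.
	}}
}
```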
9-12: Confirm multicluster-runtime v0.21.0-alpha.8 exists
Verified tag v0.21.0-alpha.8 (Jun 6, 2025) in kubernetes-sigs/multicluster-runtime; note v0.21.0-alpha.9 (Aug 29, 2025) is also available—consider upgrading to the latest alpha or a stable release when possible.
* feat: support additional redirectURLs for the clients
* chore: linter errors
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
internal/subroutine/authorization_model_test.go (1)
11-11: Use the standard library errors package. This file calls errors.New(...). Importing github.com/platform-mesh/golang-commons/errors shadows the stdlib and likely breaks compilation here. Switch to the standard library errors.
- "github.com/platform-mesh/golang-commons/errors" + "errors"internal/subroutine/authorization_model_generation.go (1)
209-218: Fix out-of-bounds panic in group truncation logic. Using len(longestRelationName)-50 as an index into resourceSchema.Spec.Group is incorrect and can panic. Compute the allowed group length and slice safely.
```diff
- group := resourceSchema.Spec.Group
-
- if len(longestRelationName) > 50 {
- 	group = resourceSchema.Spec.Group[len(longestRelationName)-50:]
- }
+ const maxNameLen = 50
+ group := resourceSchema.Spec.Group
+ if l := len(longestRelationName); l > maxNameLen {
+ 	drop := l - maxNameLen      // chars to remove from group
+ 	newLen := len(group) - drop // desired group length
+ 	if newLen <= 0 {
+ 		// keep at least 1 char to avoid empty group
+ 		group = group[len(group)-1:]
+ 	} else {
+ 		group = group[len(group)-newLen:]
+ 	}
+ }
```

internal/subroutine/authorization_model.go (1)
101-104: Avoid mutating the config redirect URLs slice. Explanation...
diff...
internal/subroutine/tuples.go (1)
43-67: Use cross-cluster store retrieval in Finalize. Finalize (tuples.go lines 57–63) uses the context's default cluster when fetching the Store, ignoring `obj.Spec.StoreRef.Path`. Process instead calls `storeCluster, err := t.mgr.GetCluster(ctx, obj.Spec.StoreRef.Path)` followed by `storeCluster.GetClient().Get(…)`. Update Finalize to obtain `storeCluster` via `t.mgr.GetCluster(ctx, obj.Spec.StoreRef.Path)` before fetching the Store.
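A hedged sketch of the suggested fix, mirroring the Process snippet quoted above (the surrounding Finalize code and error flags are assumptions; StoreRef is assumed to carry only Name and Path):

```go
// Resolve the Store in its owning cluster instead of the request's default.
storeCluster, err := t.mgr.GetCluster(ctx, obj.Spec.StoreRef.Path)
if err != nil {
	return ctrl.Result{}, errors.NewOperatorError(err, true, true)
}

store := &corev1alpha1.Store{}
if err := storeCluster.GetClient().Get(ctx, client.ObjectKey{Name: obj.Spec.StoreRef.Name}, store); err != nil {
	return ctrl.Result{}, errors.NewOperatorError(err, true, true)
}
```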
♻️ Duplicate comments (1)
cmd/initializer.go (1)
64-65: Replace `os.Exit(1)` with `log.Fatal()`. Inconsistent error handling: provider construction failure uses `os.Exit(1)` (line 65), while provider runtime failure uses `log.Fatal()` (line 117). As noted in the past review comment, `log.Fatal()` is more appropriate here: it provides proper log flushing and a more severe log level.
```diff
 if err != nil {
-	log.Error().Err(err).Msg("unable to construct cluster provider")
-	os.Exit(1)
+	log.Fatal().Err(err).Msg("unable to construct cluster provider")
 }
```
🧹 Nitpick comments (8)
internal/config/config.go (1)
14-14: Add validation for InitializerName in config loading
InitializerName is used across subroutines and controllers, and defaulted via the tag, but there's no explicit check against empty or malformed values. Introduce a validation step after config unmarshal to ensure cfg.InitializerName is non-empty and matches the expected namespace:name pattern.

cmd/initializer.go (1)
96-98: Verify if the defensive nil check is necessary. The code defensively initializes `AdditionalRedirectURLs` to an empty slice if nil. This is safe, but may indicate that config initialization does not guarantee non-nil slices. Consider:
- If the config loader always initializes slices to non-nil (e.g., empty slice), this check is redundant.
- If downstream code relies on non-nil slices, ensure the config initialization handles this consistently.
If you confirm the config loader guarantees non-nil slices, remove this check. Otherwise, document the rationale or move this validation closer to config loading.
internal/subroutine/remove_initializer_test.go (2)
15-39: Strengthen the assertion: verify that Status().Patch is actually called. Currently, the test will pass even if Patch is never invoked. Track a `patched` flag in `fakeStatusWriter` and assert it.
```diff
 type fakeStatusWriter struct {
 	t           *testing.T
 	expectClear kcpv1alpha1.LogicalClusterInitializer
 	err         error
+	patched     bool
 }

 func (f *fakeStatusWriter) Create(ctx context.Context, obj client.Object, subResource client.Object, opts ...client.SubResourceCreateOption) error {
 	return nil
 }

 func (f *fakeStatusWriter) Update(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
 	return nil
 }

 func (f *fakeStatusWriter) Patch(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.SubResourcePatchOption) error {
+	f.patched = true
 	lc := obj.(*kcpv1alpha1.LogicalCluster)
 	// Ensure initializer was removed before patch
 	for _, init := range lc.Status.Initializers {
 		if init == f.expectClear {
 			f.t.Fatalf("initializer %q should have been removed prior to Patch", string(init))
 		}
 	}
 	return f.err
 }
```
64-79: Assert Patch invocation in the success scenario. Capture the fake writer instance and assert that Patch was invoked.
```diff
-	k8s.EXPECT().Status().Return(&fakeStatusWriter{t: t, expectClear: kcpv1alpha1.LogicalClusterInitializer(initializerName), err: nil})
+	fsw := &fakeStatusWriter{t: t, expectClear: kcpv1alpha1.LogicalClusterInitializer(initializerName), err: nil}
+	k8s.EXPECT().Status().Return(fsw)
@@
 	_, err := r.Process(context.Background(), lc)
 	assert.Nil(t, err)
 	// ensure it's removed in in-memory object as well
 	for _, init := range lc.Status.Initializers {
 		assert.NotEqual(t, initializerName, string(init))
 	}
+	assert.True(t, fsw.patched, "expected status Patch to be invoked")
```
21-28: Avoid shadowing the imported package name with a local variable. Using `subroutine` as a local var shadows the imported package and hurts readability. Prefer s, sut, or tuples.
internal/subroutine/store_test.go (1)
22-30: Avoid shadowing the imported package name with a local variable. `subroutine := subroutine.NewStoreSubroutine(...)` shadows the package. Rename the variable (e.g., s or sut) for clarity.
internal/subroutine/authorization_model_test.go (1)
55-62: Optional: avoid shadowing the package name. `subroutine := subroutine.NewAuthorizationModelSubroutine(...)` shadows the import. Rename the local var for readability.
internal/subroutine/tuples.go (1)
144-144: Fix formatting: add a space after the comma. Apply this diff:
```diff
- storeCluster, err := t.mgr.GetCluster(ctx,obj.Spec.StoreRef.Path)
+ storeCluster, err := t.mgr.GetCluster(ctx, obj.Spec.StoreRef.Path)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sumis excluded by!**/*.sum
📒 Files selected for processing (16)
cmd/initializer.go (5 hunks)
go.mod (3 hunks)
internal/config/config.go (2 hunks)
internal/subroutine/authorization_model.go (3 hunks)
internal/subroutine/authorization_model_generation.go (8 hunks)
internal/subroutine/authorization_model_generation_test.go (5 hunks)
internal/subroutine/authorization_model_test.go (10 hunks)
internal/subroutine/realm.go (4 hunks)
internal/subroutine/remove_initializer.go (1 hunks)
internal/subroutine/remove_initializer_test.go (1 hunks)
internal/subroutine/store.go (2 hunks)
internal/subroutine/store_test.go (10 hunks)
internal/subroutine/tuples.go (6 hunks)
internal/subroutine/tuples_test.go (9 hunks)
internal/subroutine/worksapce_authorization.go (2 hunks)
internal/subroutine/workspace_initializer.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- internal/subroutine/remove_initializer.go
- go.mod
🧰 Additional context used
🧬 Code graph analysis (11)
internal/subroutine/authorization_model_generation_test.go (4)
internal/subroutine/mocks/mock_Client.go (2)
MockClient(22-24)
NewMockClient(835-845)
internal/subroutine/mocks/mock_Manager.go (1)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/authorization_model_generation.go (1)
NewAuthorizationModelGenerationSubroutine(28-32)
internal/subroutine/tuples_test.go (4)
internal/subroutine/tuples.go (1)
NewTupleSubroutine(224-229)
internal/subroutine/mocks/mock_Manager.go (2)
MockManager(31-33)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/mocks/mock_Client.go (1)
NewMockClient(835-845)
internal/subroutine/authorization_model_generation.go (1)
api/v1alpha1/authorizationmodel_types.go (1)
AuthorizationModel(31-37)
internal/subroutine/authorization_model.go (2)
api/v1alpha1/store_types.go (1)
Store(44-50)
api/v1alpha1/authorizationmodel_types.go (1)
AuthorizationModelList(54-58)
internal/subroutine/authorization_model_test.go (4)
internal/subroutine/authorization_model.go (1)
NewAuthorizationModelSubroutine(29-34)
internal/subroutine/mocks/mock_Manager.go (2)
MockManager(31-33)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/mocks/mock_Client.go (1)
NewMockClient(835-845)
internal/subroutine/store_test.go (4)
internal/subroutine/store.go (1)
NewStoreSubroutine(27-32)
internal/subroutine/mocks/mock_Manager.go (2)
MockManager(31-33)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/mocks/mock_Client.go (1)
NewMockClient(835-845)
cmd/initializer.go (1)
internal/controller/initializer_controller.go (1)
NewLogicalClusterReconciler(29-41)
internal/subroutine/tuples.go (1)
api/v1alpha1/store_types.go (1)
Store(44-50)
internal/subroutine/remove_initializer_test.go (4)
internal/subroutine/mocks/mock_Manager.go (1)
NewMockManager(827-837)
internal/subroutine/mocks/mock_Cluster.go (1)
NewMockCluster(509-519)
internal/subroutine/remove_initializer.go (1)
NewRemoveInitializer(63-68)
internal/subroutine/mocks/mock_Client.go (1)
NewMockClient(835-845)
internal/subroutine/workspace_initializer.go (2)
internal/config/config.go (1)
Config(4-33)
api/v1alpha1/store_types.go (1)
Store(44-50)
internal/subroutine/store.go (1)
api/v1alpha1/store_types.go (1)
Store(44-50)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: pipe / dockerBuild / docker
- GitHub Check: pipe / testSource / test
- GitHub Check: pipe / lint / lint
🔇 Additional comments (9)
internal/subroutine/worksapce_authorization.go (3)
9-9: Import path updated correctly. The import path change to `runtimeobject` aligns with the multicluster-runtime refactor.
40-42: LGTM! The method signature correctly reflects the interface change, and the unused parameter is appropriately marked with `_`.
44-46: LGTM! The signature update is correct, and the no-op implementation is appropriate for this subroutine.
internal/subroutine/authorization_model_generation.go (1)
171-178: LGTM on multicluster client acquisition. Cluster resolution via mgr.ClusterFromContext/GetCluster and subsequent client usage looks correct and aligns with multicluster-runtime patterns.
Also applies to: 191-205, 234-235
internal/subroutine/tuples_test.go (1)
305-313: No invalid t.Cleanup syntax detected
All `t.Cleanup` calls in internal/subroutine/mocks use the correct `t.Cleanup(func() { … })` form; no compilation error or changes needed.

Likely an incorrect or invalid review comment.
internal/subroutine/tuples.go (4)
3-22: LGTM! Imports align with the multicluster-runtime refactoring. The import changes properly support the migration from the kcp cluster-aware manager to multicluster-runtime patterns, with mccontext and mcmanager replacing the previous kontext approach.
24-27: LGTM! Type definition properly simplified. The struct consolidation from multiple fields (k8s, lcClientFunc) to a single mgr field aligns with the manager-based architecture and improves maintainability.
224-229: LGTM! Constructor properly simplified. The constructor signature aligns with the new manager-based architecture, reducing complexity by consolidating multiple client parameters into a single manager instance.
104-106: No action required: the `Finalizers` signature correctly matches `lifecycle.Subroutine`. All existing subroutines implement `Finalizers(_ runtimeobject.RuntimeObject) []string`, aligning with the external `lifecycle.Subroutine` interface and intentionally ignoring the parameter.
On-behalf-of: SAP [email protected]
related to #47
Summary by CodeRabbit
New Features
Improvements
Configuration
Tests
Chores