diff --git a/.cursor/rules/api-codegen.mdc b/.cursor/rules/api-codegen.mdc
new file mode 100644
index 000000000..d4f160abf
--- /dev/null
+++ b/.cursor/rules/api-codegen.mdc
@@ -0,0 +1,17 @@
+---
+description: API codegen rules for kubebuilder/controller-gen and generated files hygiene. Apply when adding/modifying API types or kubebuilder markers under api/v*/, and when deciding whether regeneration is required. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: api/v*/**/*.go
+alwaysApply: false
+---
+
+- Kubebuilder markers & API changes (MUST):
+  - If I add a new API object/type or modify an existing one in `api/` (especially changes to `// +kubebuilder:*` markers, validation markers, printcolumns, subresources, etc.), I MUST run code generation and include the regenerated outputs in the same change.
+  - In this repo, run generation from the repository root:
+    - `bash hack/generate_code.sh`
+  - If I am intentionally doing an **API-only refactor stage** where changes outside `api/` are temporarily forbidden/undesired (e.g. the rest of the repo is not yet refactored and will not compile), then:
+    - It is acceptable to **defer CRD regeneration** (outputs under `crds/`) until the stage when cross-repo refactoring is allowed.
+    - I MUST still keep `api/v1alpha1` internally consistent and compilable; prefer running **object/deepcopy generation only** when possible, instead of editing generated files by hand.
+
+- Generated files (MUST NOT edit by hand):
+  - Do NOT edit `zz_generated*` files (e.g. `api/v1alpha1/zz_generated.deepcopy.go`) manually.
+  - If a generated file needs to change, update the source types/markers and re-run generation instead.
diff --git a/.cursor/rules/api-conditions.mdc b/.cursor/rules/api-conditions.mdc
new file mode 100644
index 000000000..7bea7b9d2
--- /dev/null
+++ b/.cursor/rules/api-conditions.mdc
@@ -0,0 +1,56 @@
+---
+description: API condition Type/Reason constants naming, ordering, comments, and stability rules. Apply when editing api/v*/**/*_conditions.go, and when deciding how to name/add conditions for API objects. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: api/v*/**/*_conditions.go
+alwaysApply: false
+---
+
+- Condition constants naming:
+  - Every API object `Status` MUST expose `.status.conditions` (`[]metav1.Condition`) (see `types_rules.mdc`).
+  - Any API object that has at least one standardized/used condition MUST have its own condition type/reason constants scoped by object name.
+  - If the API type exposes `.status.conditions` but there are **no** standardized/used conditions yet:
+    - The `Conditions` field MUST remain in the API (it is part of the contract).
+    - The `<prefix>_conditions.go` file MAY be absent.
+    - Do NOT create placeholder/empty condition constants “just in case”.
+  - Current API types that expose `.status.conditions` in this repo:
+    - `ReplicatedVolume`
+    - `ReplicatedVolumeReplica`
+    - `ReplicatedVolumeAttachment`
+    - `ReplicatedStorageClass`
+    - `ReplicatedStoragePool`
+
+- Condition Type constants MUST be named:
+  - `<Object>Cond<TypeName>Type`
+  - `<TypeName>` MUST match the string value of `.Type`.
+  - Examples:
+    - `ReplicatedVolumeCondIOReadyType = "IOReady"`
+    - `ReplicatedVolumeReplicaCondDataInitializedType = "DataInitialized"`
+    - `ReplicatedVolumeAttachmentCondReplicaIOReadyType = "ReplicaIOReady"`
+
+- Condition Reason constants MUST be named:
+  - `<Object>Cond<TypeName>Reason<ReasonName>`
+  - `<TypeName>` MUST match the string value of the condition type (the `.Type` string).
+  - `<ReasonName>` MUST match the string value of `.Reason`.
+  - Examples:
+    - `ReplicatedVolumeReplicaCondScheduledReasonReplicaScheduled = "ReplicaScheduled"`
+    - `ReplicatedVolumeCondQuorumReasonQuorumLost = "QuorumLost"`
+    - `ReplicatedVolumeAttachmentCondAttachedReasonSettingPrimary = "SettingPrimary"`
+
+- Conditions grouping (MUST):
+  - Keep each condition type and **all of its reasons in a single `const (...)` block**.
+  - Conditions MUST be ordered alphabetically by condition type name within the file/package.
+  - Reasons within a condition MUST be ordered alphabetically by reason constant name.
+
+- Conditions comments (MUST):
+  - Avoid controller-specific comments like “managed by X” in API packages.
+  - Add short English docs: what the condition represents and what the reasons mean.
+
+- Value stability (MUST):
+  - Do NOT change string values of `.Type` and `.Reason` constants.
+  - Only rename Go identifiers when reorganizing/clarifying.
+
+- Scoping & duplication (MUST):
+  - Do NOT use generic `ConditionType*` / `Reason*` constants.
+  - If the same reason string is used by multiple conditions, create separate constants per condition type, even if the string is identical.
+  - Example: `"NodeNotReady"`:
+    - `ReplicatedVolumeReplicaCondOnlineReasonNodeNotReady = "NodeNotReady"`
+    - `ReplicatedVolumeReplicaCondIOReadyReasonNodeNotReady = "NodeNotReady"`
diff --git a/.cursor/rules/api-file-structure.mdc b/.cursor/rules/api-file-structure.mdc
new file mode 100644
index 000000000..991a1c4f8
--- /dev/null
+++ b/.cursor/rules/api-file-structure.mdc
@@ -0,0 +1,21 @@
+---
+description: API package conventions: object prefixes and per-object/common file naming rules under api/. Apply when creating/renaming/editing Go files under api/v*/, and when deciding where API code should live. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: api/v*/**/*.go
+alwaysApply: false
+---
+
+- Object prefixes (MUST):
+  - Use short prefixes: `rv`, `rvr`, `rva`, `rsc`, `rsp`.
+
+- File naming per object (MUST):
+  - `<prefix>_types.go`: API types (kubebuilder tags), object/spec/status structs, adapters for interfaces (e.g. GetConditions/SetConditions), tightly coupled constants/types, and pure set/get/has helpers (no I/O, no external context).
+  - `<prefix>_conditions.go`: condition Type/Reason constants for the object.
+    - MAY be absent if the API object exposes `.status.conditions` but there are no standardized/used conditions yet (do not create empty placeholder constants).
+  - `<prefix>_custom_logic_that_should_not_be_here.go`: non-trivial/domain logic helpers (everything that does not fit `*_types.go`).
+
+- Common file naming (MUST):
+  - `common_types.go`: shared types/enums/constants for the API package.
+  - `common_helpers.go`: shared pure helpers used across API types.
+  - `labels.go`: well-known label keys (constants).
+  - `finalizers.go`: module finalizer constants.
+  - `register.go`: scheme registration.
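+
+- Illustrative sketch (not normative): for `ReplicatedVolume` the prefix is `rv`, so the per-object files would be `rv_types.go`, `rv_conditions.go`, and (if needed) `rv_custom_logic_that_should_not_be_here.go`. A minimal `rv_conditions.go` following the condition naming rules might look like the block below; the `NodeNotReady` reason for this object is hypothetical and only shows the shape:
+
+  ```go
+  // rv_conditions.go (illustrative sketch)
+  package v1alpha1
+
+  // IOReady reports whether the volume is currently able to serve I/O.
+  const (
+      ReplicatedVolumeCondIOReadyType = "IOReady"
+
+      // NodeNotReady: I/O is unavailable because the backing node is not ready (hypothetical reason).
+      ReplicatedVolumeCondIOReadyReasonNodeNotReady = "NodeNotReady"
+  )
+  ```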
diff --git a/.cursor/rules/api-kubebuilder-markers.mdc b/.cursor/rules/api-kubebuilder-markers.mdc new file mode 100644 index 000000000..ca605448f --- /dev/null +++ b/.cursor/rules/api-kubebuilder-markers.mdc @@ -0,0 +1,108 @@ +--- +description: Kubebuilder marker hygiene rules for CEL expressions, gofmt smart quote avoidance, and typographic character detection. Apply when writing or editing kubebuilder markers (especially XValidation with CEL), when reviewing API types under api/v*/, and when debugging unexpected character transformations in comments. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: api/v*/**/*.go +alwaysApply: true +--- + +Normative keywords used in this document are defined in `rfc-like-mdc.mdc`. + +# Kubebuilder markers and gofmt smart quote issue + +## Background + +Since Go 1.19, `gofmt` reformats doc comments (comments immediately preceding `type`, `func`, `var`, `const` declarations) for improved documentation rendering. One side effect is that `gofmt` converts two consecutive single quotes (`''`) into a typographic RIGHT DOUBLE QUOTATION MARK (`"`, U+201D) in doc comments. + +This behavior breaks kubebuilder `XValidation` markers that contain CEL expressions, because CEL uses single quotes for string literals (`'hello'`), and an empty string in CEL is `''`. + +**References:** +- Stack Overflow: https://stackoverflow.com/questions/79734115/why-does-gofmt-replace-two-single-quotes-with-a-single-double-quote-in-my-go-com +- Go 1.19 release notes (comment formatting): https://go.dev/doc/go1.19#go-doc + +## Empty string comparison in CEL (MUST) + +When writing CEL expressions in `// +kubebuilder:validation:XValidation` markers, you MUST NOT use `''` (two single quotes) for empty string comparison. + +**Bad (will be corrupted by gofmt):** +```go +// +kubebuilder:validation:XValidation:rule="self.field != ''",message="field must not be empty" +``` + +**After gofmt, this becomes (broken):** +```go +// +kubebuilder:validation:XValidation:rule="self.field != "",message="field must not be empty" +``` + +### Solution 1: Use `size()` function (RECOMMENDED) + +Use the CEL `size()` function instead of comparing to an empty string: + +| Instead of | Use | +|------------|-----| +| `field != ''` | `size(field) > 0` | +| `field == ''` | `size(field) == 0` | + +**Good:** +```go +// +kubebuilder:validation:XValidation:rule="size(self.field) > 0",message="field must not be empty" +``` + +### Solution 2: Tab indentation (alternative) + +Adding a tab character before the `+` in the marker causes `gofmt` to treat the line as a preformatted code block, which disables smart quote conversion: + +```go +// +kubebuilder:validation:XValidation:rule="self.field != ''",message="field must not be empty" +``` + +Note: The tab character is between `//` and `+`. This approach is less preferred because it is subtle and easy to lose during edits. 
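+
+Putting Solution 1 in context, a hedged sketch of how the `size()`-based rule might look on an actual API type; the type, field, and message below are illustrative, not taken from this repository:
+
+```go
+// ExampleSpec is an illustrative spec used only to show marker placement.
+// +kubebuilder:validation:XValidation:rule="size(self.nodeName) > 0",message="nodeName must not be empty"
+type ExampleSpec struct {
+	NodeName string `json:"nodeName"`
+}
+```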
+
+## Detecting typographic characters (MUST)
+
+When reviewing or debugging kubebuilder markers, you MUST check for typographic/Unicode characters that should not be present in code:
+
+| Bad character | Unicode | Hex bytes (UTF-8) | Should be |
+|---------------|---------|-------------------|-----------|
+| `“` (left double) | U+201C | `e2 80 9c` | `"` (0x22) |
+| `”` (right double) | U+201D | `e2 80 9d` | `"` (0x22) |
+| `‘` (left single) | U+2018 | `e2 80 98` | `'` (0x27) |
+| `’` (right single) | U+2019 | `e2 80 99` | `'` (0x27) |
+| `–` (en dash) | U+2013 | `e2 80 93` | `-` (0x2d) |
+| `—` (em dash) | U+2014 | `e2 80 94` | `--` or `-` |
+
+### How to detect
+
+Use `xxd` or `hexdump` to inspect suspicious lines:
+
+```bash
+sed -n '<line-number>p' <file> | xxd -g 1 | grep -E "e2 80"
+```
+
+If you see `e2 80 9c`, `e2 80 9d`, `e2 80 98`, or `e2 80 99` in the output, the file contains typographic quotes that will cause problems.
+
+### How to fix
+
+Replace typographic characters with their ASCII equivalents. For CEL empty string comparisons, use `size()` as described above.
+
+## Sources of typographic characters
+
+Typographic quotes are often introduced when:
+
+1. **Copying from documents** (Word, Google Docs, PDF, Notion)
+2. **Copying from web pages** with "smart quotes" enabled
+3. **macOS keyboard** with "Use smart quotes and dashes" enabled
+4. **AI assistants** (ChatGPT, Claude, etc.) that sometimes generate typographic quotes
+5. **Running gofmt** on code that contains `''` in doc comments
+
+## Validation timing
+
+CEL expression syntax is NOT validated by:
+- Go compiler (`go build`)
+- `gofmt` / `gopls`
+- `controller-gen`
+- `golangci-lint`
+
+CEL syntax errors are only detected at runtime when:
+1. The CRD is applied to a Kubernetes cluster
+2. A resource is created/updated and Kubernetes validates it against the CEL rule
+
+Therefore, you SHOULD manually verify CEL expressions before committing.
diff --git a/.cursor/rules/api-labels-and-finalizers.mdc b/.cursor/rules/api-labels-and-finalizers.mdc
new file mode 100644
index 000000000..08e0da668
--- /dev/null
+++ b/.cursor/rules/api-labels-and-finalizers.mdc
@@ -0,0 +1,42 @@
+---
+description: API naming rules for label keys (labels.go) and finalizer constants (finalizers.go): naming, value formats, and stability. Apply when editing api/v*/**/labels.go or api/v*/**/finalizers.go, and when deciding label/finalizer names/values. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: api/v*/**/finalizers.go,api/v*/**/labels.go
+alwaysApply: false
+---
+
+## Label keys (`labels.go`)
+
+- **Constant naming (MUST)**:
+  - Label key constants MUST end with `LabelKey`.
+  - Good: `ReplicatedVolumeLabelKey`, `NodeNameLabelKey`
+  - Bad: `LabelReplicatedVolume`, `NodeLabel`, `LabelNodeName`
+
+- **Prefix constant (MUST)**:
+  - The label prefix constant MUST be private and named `labelPrefix` (unless there is a proven need to export it).
+  - The prefix value MUST be the module-scoped prefix:
+    - `sds-replicated-volume.deckhouse.io/`
+
+- **Value format (MUST)**:
+  - Label key values MUST be built as `labelPrefix + "<suffix>"`.
+  - The `<suffix>` part MUST be lowercase-kebab-case, without repeating the module name.
+    - Good: `labelPrefix + "replicated-volume"`
+    - Bad: `labelPrefix + "sds-replicated-volume-replicated-volume"`
+
+- **Layout (MUST)**:
+  - Keep all exported `...LabelKey` constants in a single `const (...)` block.
+  - Avoid commented-out placeholder constants; prefer adding constants only when actually needed.
+
+## Finalizers (`finalizers.go`)
+
+- **Constant naming (MUST)**:
+  - Finalizer constants MUST end with `Finalizer`.
+  - Good: `ControllerFinalizer`, `AgentFinalizer`
+  - Bad: `FinalizerController`, `ControllerFinalizerName`
+
+- **Value format (MUST)**:
+  - Finalizer values MUST be module-scoped and stable:
+    - `sds-replicated-volume.deckhouse.io/<name>`
+    - `<name>` MUST be lowercase and short (e.g. `controller`, `agent`).
+
+- **Stability (MUST)**:
+  - Do NOT change existing finalizer string values (this would break cleanup semantics).
diff --git a/.cursor/rules/api-types.mdc b/.cursor/rules/api-types.mdc
new file mode 100644
index 000000000..73a8d2fb5
--- /dev/null
+++ b/.cursor/rules/api-types.mdc
@@ -0,0 +1,157 @@
+---
+description: API type rules: type-centric layout, enums/constants, status/conditions requirements, naming, and what helpers may live in *_types.go vs custom-logic files. Apply when editing api/v*/**/*_types.go or api/v*/**/common_types.go, and when deciding API type layout or helper placement. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: api/v*/**/*_types.go,api/v*/**/common_types.go
+alwaysApply: false
+---
+
+## Code layout: type-centric blocks (MUST)
+
+- **Type-centric blocks** MUST be used to organize code:
+  - Each type MUST be readable without scrolling across the file (keep related declarations together).
+  - Code from different types MUST NOT be interleaved.
+
+- **API object file layout** (MUST):
+  - This applies to typical files containing one API root object (`type <Object> struct`) plus its `Spec`/`Status`/`List`.
+  - The main flow MUST read top-to-bottom without jumping:
+    - Root object: `type <Object> struct { ... }`
+    - `type <Object>List struct { ... }` (see List rule below)
+    - `type <Object>Spec struct { ... }`
+    - Spec-local types/enums/constants/interfaces/helpers used by `Spec`
+    - `type <Object>Status struct { ... }`
+    - Status-local types/enums/constants/interfaces/helpers used by `Status`
+    - Secondary/helper types referenced by the above (pseudo-DFS), keeping each type block contiguous
+    - Shared helpers (if any) at the very end
+
+- **Block structure** for each type MUST follow this strict order:
+  - `type <Name> struct { ... }`
+  - Enums and constants belonging to this type (incl. tightly-coupled sub-enums)
+  - Interfaces tightly coupled to the type
+  - Public methods of the type
+  - Private helpers of the type
+
+- **Block ordering in a file** MUST be a human-oriented dependency order (pseudo-DFS), not alphabetical:
+  - Main (primary) type of the file
+  - Types directly referenced by the main type
+  - Secondary/helper types
+
+- **List types** (MUST):
+  - `<Object>List` SHOULD be placed immediately after `<Object>` (right under the root object), to make navigation consistent and fast.
+  - `<Object>List` MUST NOT split the `Spec`/`Status` flow (i.e. do not put it between `Spec` and spec-local enums/helpers, or between `Status` and status-local enums/helpers).
+  - If there is a strong reason (rare), `<Object>List` MAY be placed after `Status`/secondary types, but keep it as a single contiguous block (no interleaving).
+
+- **Locality rule for enums/constants/helpers** (MUST):
+  - If an enum/const/helper is primarily used by `Spec`, it MUST be placed in the Spec-local section (right after `type Spec ...` and its methods).
+ - If an enum/const/helper is primarily used by `Status`, it MUST be placed in the Status-local section (right after `type Status ...` and its methods). + - If an enum/const/helper is used by both `Spec` and `Status`, it SHOULD be placed with the `Spec` section (earlier) unless that hurts readability; do NOT duplicate it. + +- **Shared helpers**: + - Avoid generic helpers without a clear owning type. + - If a helper is used by multiple types, it MUST be placed after all type blocks (or moved to `common_helpers.go` if shared broadly). + +- Enums (MUST): + - If a field has a finite set of constant values, model it as an enum: + - `type EnumType string` + - `const ( EnumTypeValue1 EnumType = "Value1" ... )` + - Enum declaration order MUST be contiguous: + - `type EnumType string` + - `const (...)` with all enum values + - enum helpers (if any) — right after the const block + - Enums MUST provide `String()` method: + - `func (e EnumType) String() string { return string(e) }` + - Keep enum values documented (short English comment per value or a short block comment). + - Do NOT create separate wrapper types for arbitrary string/number/bool fields unless there is a strong, confirmed need. + - Common enums (MUST): + - If the same enum is used by multiple API objects, it MUST be moved to `common_types.go`. + - Do NOT move enums to `common_types.go` if they are only used by a single API object. + +- Status (MUST): + - `Spec` and `Status` structs MUST be embedded as values on the root object (e.g. `Spec TSpec`, `Status TStatus`), not `*TStatus`. + - Every API object `Status` MUST expose `.conditions` as `[]metav1.Condition`: + - Field name MUST be `Conditions []metav1.Condition`. + - Use the standard kubebuilder/patch markers for mergeable conditions list: + - `// +patchMergeKey=type` + - `// +patchStrategy=merge` + - `// +listType=map` + - `// +listMapKey=type` + - `// +optional` + - JSON tag: ``json:"conditions,omitempty"`` and patch tags consistent with the above. + - Condition Type/Reason constants are defined in `_conditions.go` only when they become standardized/used (see `conditions_rules.mdc`). + - Every API **root object** that exposes `.status.conditions` MUST provide adapter methods to satisfy `api/objutilv1.StatusConditionObject`: + - `GetStatusConditions() []metav1.Condition` (returns `o.Status.Conditions`) + - `SetStatusConditions([]metav1.Condition)` (sets `o.Status.Conditions`) + +- Type naming (MUST): + - This section applies to ALL API types (including enums). + - Names MUST be unique within the API package. + - Names MUST NOT start with short object prefixes like `RV`, `RVR`, `RVA`, `RSC`, `RSP`. + - Usually, names MUST NOT start with the full object name if the type is not generic and is unlikely to clash: + - Good: `ReplicaType`, `DiskState` + - Prefer full object name only for generic/repeated concepts (below). + - If the type name is generic and likely to be repeated across objects (e.g. `Phase`, `Type`), it MUST start with the full object name: + - Examples: `ReplicatedStoragePoolPhase`, `ReplicatedStoragePoolType`, `ReplicatedVolumeAttachmentPhase` + - Structural type name (e.g. 
`Spec`, `Status`) MUST be prefixed by the full object name: + - Examples: `ReplicatedVolumeSpec`, `ReplicatedVolumeStatus`, `ReplicatedStorageClassSpec`, `ReplicatedStorageClassStatus` + +- Optional scalar fields (optional `*T`) (MUST): + - This section applies to Kubernetes API fields that are semantically optional (tagged `json:",omitempty"`), but whose underlying value type is a scalar `T` (non-nil-able, e.g. `bool`, numbers, `string`, structs). + - To preserve the distinction between "unset" and "set to the zero value", such API fields MUST be represented as pointers (`*T`) in Go API types. + - Example (illustrative): if `TimeoutSeconds` is optional, use `*int32` (not `int32`) and tag it with `json:"timeoutSeconds,omitempty"`. + +## Helpers vs custom_logic_that_should_not_be_here (MUST) + +Write helpers in `*_types.go`. If a function does **not** fit the rules below, it MUST go to `*_custom_logic_that_should_not_be_here.go`. + +## What belongs in `*_types.go` (MUST) + +Helpers are **pure**, **local**, **context-free** building blocks. + +- **Pure / deterministic**: + - Same input → same output. + - No reads of current time, random, env vars, filesystem, network, Kubernetes API, shell commands. + - No goroutines, channels, retries, backoff, sleeping, polling. + +- **No external context**: + - Do not require `context.Context`, `*runtime.Scheme`, `client.Client`, informers, listers, recorders, loggers. + - Do not require controller-runtime utilities (e.g. `controllerutil.*`). + +- **Allowed operations**: + - Field reads/writes on in-memory structs and maps/slices. + - Simple validation and parsing/formatting that is deterministic. + - Nil-guards and trivial branching. + - Returning `(value, ok)` / `(changed bool)` patterns. + +- **Typical helper shapes (examples)**: + - `HasX() bool`, `GetX() (T, bool)`, `SetX(v T) (changed bool)`, `ClearX() (changed bool)` + - `IsXEqual(...) bool`, `XEquals(...) bool` + - `ParseX(string) X` / `FormatX(...) string` (no I/O, no time, no external lookups) + +## What MUST NOT be in `*_types.go` (MUST NOT) + +If any of these are present, the code belongs in `*_custom_logic_that_should_not_be_here.go`. + +- **Business / orchestration logic**: + - Decisions that interpret cluster state, desired/actual reconciliation, phase machines, progress tracking. + - Anything that “synchronizes” different parts of an object (spec ↔ status, spec ↔ labels/annotations, cross-object references). + +- **Conditions/status mutation logic**: + - Creating/updating `metav1.Condition` / using `meta.SetStatusCondition` / computing reason/message based on multi-step state. + - Anything that sets `.status.phase`, `.status.reason`, counters, aggregates, etc. based on logic. + +- **Controller/Kubernetes integration**: + - `controllerutil.SetControllerReference`, finalizer management with external expectations, scheme usage. + - Any reads/writes via API clients (even if “simple”). + +- **I/O and side effects**: + - File/network access, exec/shell, OS calls, time-based logic (`time.Now`, `time.Since`), randomness. + +- **Non-trivial control flow**: + - Complex `if/switch` trees, multi-branch logic tied to domain semantics. + - Loops that encode placement/selection/scheduling decisions. + +## Kubebuilder generation hygiene for non-API types (MUST) + +- **Non-Kubernetes API types** (MUST): + - Avoid placing non-API types (e.g. `error` implementations, internal helper structs) in `api/` packages. 
+  - If a non-API type must live in `api/` (for locality/type-safety), it MUST be explicitly excluded from kubebuilder object/deepcopy generation:
+    - Add `// +kubebuilder:object:generate=false` on the type.
+    - Rationale: `controller-gen object` may generate DeepCopy methods for any struct type in the API package, which pollutes `zz_generated.deepcopy.go` with irrelevant helpers/errors.
diff --git a/.cursor/rules/controller-controller.mdc b/.cursor/rules/controller-controller.mdc
new file mode 100644
index 000000000..6777748bf
--- /dev/null
+++ b/.cursor/rules/controller-controller.mdc
@@ -0,0 +1,167 @@
+---
+description: Rules for controller package entrypoint wiring in controller.go (builder chain, options, predicates wiring) and strict separation from reconciliation business logic. Apply when editing images/controller/internal/controllers/**/controller*.go, and when deciding what belongs in controller.go vs reconciler.go. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: images/controller/internal/controllers/**/controller*.go
+alwaysApply: false
+---
+
+See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions.
+
+- TL;DR:
+  - **`controller.go`** = **Wiring-only** **Entrypoint**.
+  - **Entrypoint** = `BuildController(mgr manager.Manager) error`.
+  - **builder chain** = single fluent chain, ends with `.Complete(rec)`.
+  - **predicates**/**filters**:
+    - are **mechanical** change detection (no **I/O**, no **domain/business** decisions),
+    - live in **`predicates.go`**,
+    - MUST NOT be implemented in **`controller.go`**.
+  - All **Reconciliation business logic** = **`reconciler.go`**.
+  - **controller name** string values are `kebab-case` (see **Controller terminology**).
+
+- **`controller.go`** purpose (MUST):
+  - **`controller.go`** is the **Wiring-only** **Entrypoint** of a **controller package**.
+  - It owns controller-runtime **builder chain** configuration, **watch** registration, and reconciler construction.
+  - If the controller needs event filtering, **`controller.go`** wires predicates by calling
+    `builder.WithPredicates(<Kind>Predicates()...)` at the `.For(...)`/`.Owns(...)`/`.Watches(...)` call site.
+    When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` MUST use the **short kind name** (see `controller-terminology.mdc`).
+  - It MUST NOT contain **Reconciliation business logic** (that belongs to **`reconciler.go`**).
+
+- ALLOW (in **`controller.go`**):
+  - controller-runtime builder wiring:
+    - `.ControllerManagedBy(mgr).Named(...)`
+    - `.For(...)`, `.Owns(...)`, `.Watches(...)`
+    - `.WithOptions(...)`, `.Complete(...)`
+  - wiring **predicates**/**filters** by calling `builder.WithPredicates(<Kind>Predicates()...)`
+    (where `<Kind>Predicates()` is implemented in **`predicates.go`**).
+  - **Manager-owned dependencies** (wiring-only) from the **manager**:
+    - `mgr.GetClient()`, `mgr.GetScheme()`, `mgr.GetCache()`, `mgr.GetEventRecorderFor(...)`
+  - registering **runnables**/**sources** on the **manager** (wiring-only), e.g. `mgr.Add(...)`, indexes, **sources**.
+
+- DENY (in **`controller.go`**):
+  - any functions that **compute/ensure/apply/reconcile** domain logic (must live in `reconciler.go`).
+  - implementing controller-runtime **predicates**/**filters**:
+    - **`controller.go`** MUST NOT define `predicate.Funcs{...}` (or any other predicate implementation) inline.
+    - All predicate implementations MUST live in **`predicates.go`** (see: `controller-predicate.mdc`).
+  - reading/modifying `.Spec` / `.Status` (except **mechanical** access in wiring callbacks):
+    - **`controller.go`** MUST NOT read or write `.Spec` / `.Status` as part of business logic.
+    - **mechanical** reads are allowed only inside **watch** mapping functions whose only job is pure request mapping (`obj -> []reconcile.Request`).
+    - **`controller.go`** MUST NOT write `.Spec` / `.Status` anywhere.
+  - any multi-step decisions (state machines, placement, scheduling, condition computation).
+  - any **Kubernetes API I/O** beyond **manager** wiring (`Get/List/Create/Update/Patch/Delete`).
+
+- **`controller.go`** layout (MUST):
+  - `const <Kind>ControllerName = "<controller-name>"` (stable **controller name**).
+    - The `<controller-name>` value MUST follow **Controller terminology** (**controller name** conventions): `kebab-case`, no `.`, no `_`, stable, unique.
+    - The suffix "-controller" MAY be appended; it SHOULD be appended only when needed to avoid ambiguity/collisions (see **Controller terminology**).
+  - **Entrypoint**: `BuildController(mgr manager.Manager) error`.
+  - **predicates**/**filters** are optional.
+    - If the controller uses any **predicates**/**filters**, the **controller package** MUST include **`predicates.go`**.
+    - Predicate implementation is done in **`predicates.go`**; **`controller.go`** wires it via `builder.WithPredicates(...)`.
+
+- What belongs in `BuildController` (MUST):
+  - Take **Manager-owned dependencies** from the **manager**:
+    - `cl := mgr.GetClient()`
+    - other manager-owned deps when needed (scheme, cache, recorder, etc.).
+  - Register required **runnables**/**sources** on the **manager** (if any):
+    - example: cache initializers added via `mgr.Add(...)` (often after leader election).
+  - Construct the reconciler (composition root for the package):
+    - `rec := NewReconciler(cl, <deps>)`
+  - Wire controller-runtime builder in a single fluent chain:
+    - `.ControllerManagedBy(mgr).Named(<Kind>ControllerName)`
+    - `.For(&<Kind>{} /*, ... */)`
+    - `.Watches(...)` when the controller reacts to additional objects/events
+    - `.WithOptions(controller.Options{MaxConcurrentReconciles: 10})` by default
+    - `.Complete(rec)`
+
+  Example: minimal `BuildController` skeleton (illustrative)
+
+  ```go
+  package examplecontroller
+
+  import (
+      "sigs.k8s.io/controller-runtime/pkg/builder"
+      "sigs.k8s.io/controller-runtime/pkg/controller"
+      "sigs.k8s.io/controller-runtime/pkg/manager"
+
+      "example.com/api/v1alpha1"
+  )
+
+  const ExampleControllerName = "example-controller"
+
+  func BuildController(mgr manager.Manager) error {
+      cl := mgr.GetClient()
+
+      // Optional wiring-only dependencies/runnables:
+      // src := NewSomethingInitializer(mgr)
+      // if err := mgr.Add(src); err != nil { return fmt.Errorf("adding initializer: %w", err) }
+
+      rec := NewReconciler(cl /*, src */)
+
+      return builder.ControllerManagedBy(mgr).
+          Named(ExampleControllerName).
+          For(&v1alpha1.Example{}, builder.WithPredicates(
+              examplePredicates()...,
+          )).
+          WithOptions(controller.Options{MaxConcurrentReconciles: 10}).
+          Complete(rec)
+  }
+  ```
+
+- Predicate implementation rules:
+  - **predicates**/**filters** MUST be implemented in **`predicates.go`**.
+  - **`controller.go`** MUST NOT contain predicate implementation code.
+  - **`controller.go`** wires predicates by calling `builder.WithPredicates(<Kind>Predicates()...)`.
+    When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` MUST use the **short kind name** (see `controller-terminology.mdc`).
+ - See: `controller-predicate.mdc`. + +- MaxConcurrentReconciles (MUST): + - Configure `.WithOptions(controller.Options{MaxConcurrentReconciles: 10})` unless there is a strong, explicit reason not to. + - If deviating from 10, document the reason near the options. + +- Watching child resources (MUST): + - Watch **child resources** either: + - by owner reference (when this controller is the owner/controller of the child objects), or + - by an explicit field/index (when children may be created by others: another controller or a user). + - If it is not obvious which model applies for a given child object: + - default to the safest *correctness* choice (prefer being conservative and reconciling more over missing important events), and + - add a short comment explaining why this watch strategy was chosen (and what would justify switching to the alternative). + + Example: watch child objects by owner reference (controller is the owner) + ```go + builder.ControllerManagedBy(mgr). + Named(ExampleControllerName). + For(&v1alpha1.Example{}, builder.WithPredicates(examplePredicates()...)). + Owns( + &v1alpha1.ExampleChild{}, + builder.WithPredicates(exampleChildPredicates()...), + ). // ownerRef-based mapping + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). + Complete(rec) + ``` + + Example: watch child objects by explicit field/index (children may be created by others) + ```go + builder.ControllerManagedBy(mgr). + Named(ExampleControllerName). + For(&v1alpha1.Example{}, builder.WithPredicates(examplePredicates()...)). + Watches( + &v1alpha1.ExampleChild{}, + handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request { + ch, ok := obj.(*v1alpha1.ExampleChild) + if !ok || ch == nil { + return nil + } + return []reconcile.Request{{NamespacedName: types.NamespacedName{ + Namespace: ch.Namespace, + Name: ch.Spec.ParentName, + }}} + }), + builder.WithPredicates(exampleChildPredicates()...), + ). + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). + Complete(rec) + ``` + +- What MUST NOT be in `controller.go`: + - any `Reconcile(...)` implementation; + - any Kubernetes API I/O beyond manager wiring (`Get/List/Create/Update/Patch/Delete`); + - any non-trivial domain/business decisions (placement/scheduling/state machines/condition computation). diff --git a/.cursor/rules/controller-file-structure.mdc b/.cursor/rules/controller-file-structure.mdc new file mode 100644 index 000000000..6dd6c615e --- /dev/null +++ b/.cursor/rules/controller-file-structure.mdc @@ -0,0 +1,60 @@ +--- +description: Rules for controller package file structure (controller.go/predicates.go/reconciler.go/tests) and what belongs in each file. Apply when creating or editing controller packages under images/controller/internal/controllers/, and when deciding where to place controller logic. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. 
+ +- **controller package** structure (MUST): + - Each **controller package** MUST have these files: + - **`controller.go`** + - **`predicates.go`** (required only when the controller uses controller-runtime **predicate**/**filter**s) + - **`reconciler.go`** + - **`reconciler_test.go`** + +- **`controller.go`** (MUST): **Wiring-only** **Entrypoint** (**builder chain**/**options**/**predicates**/**runnables**), no **Reconciliation business logic**. + - See: `controller-controller.mdc`. + +- **`reconciler.go`** (MUST): all **Reconciliation business logic** for this controller. + - This includes the Controller POV pipeline: compute **intended**, observe **actual**, decide/enforce **target**, and compute/publish **report** (including persisting **controller-owned state** and **report** into Kubernetes POV **observed state** (`.status`) via the appropriate **patch domain**). + - Detailed rules for **phase** usage, **I/O** boundaries, **patch domains** and patterns: `controller-reconciliation.mdc`. + - **`reconciler.go`** MUST contain these categories of code: + - 1. **Reconcile method** functions/methods. + - MUST comply with: `controller-reconcile.mdc`. + - Definition (MUST): + - the controller-runtime `Reconcile(...)` method, and + - any other function/method whose name starts with `reconcile*` / `Reconcile*`. + - 2. **ReconcileHelper** functions/methods: helpers used by **Reconcile method** functions/methods. + - MUST comply with: `controller-reconcile-helper.mdc`. + - Definition (MUST): any function/method whose name matches one of these helper naming categories/patterns: + - **ComputeReconcileHelper**: `compute*` / `Compute*` (see `controller-reconcile-helper-compute.mdc`) + - Common sub-families: `computeIntended*`, `computeActual*`, `computeTarget*`, `compute*Report`. + - **ConstructionReconcileHelper**: `new*` / `build*` / `make*` / `compose*` (see `controller-reconcile-helper-construction.mdc`) + - **IsInSyncReconcileHelper**: `is*InSync*` / `Is*InSync*` (starts with `is`/`Is` and contains `InSync`) (see `controller-reconcile-helper-is-in-sync.mdc`) + - **ApplyReconcileHelper**: `apply*` / `Apply*` (see `controller-reconcile-helper-apply.mdc`) + - **EnsureReconcileHelper**: `ensure*` / `Ensure*` (see `controller-reconcile-helper-ensure.mdc`) + - **GetReconcileHelper**: `get*` / `Get*` (see `controller-reconcile-helper-get.mdc`) + - **CreateReconcileHelper**: `create*` / `Create*` (see `controller-reconcile-helper-create.mdc`) + - **DeleteReconcileHelper**: `delete*` / `Delete*` (see `controller-reconcile-helper-delete.mdc`) + - **PatchReconcileHelper**: `patch*` / `Patch*` (see `controller-reconcile-helper-patch.mdc`) + - 3. **Other supporting code**: auxiliary functions/methods/types that do not fit either category above. + - SHOULD be rare; if a helper matches the **ReconcileHelper** naming or contracts, prefer making it a **ReconcileHelper**. + +- **`reconciler_test.go`** (MUST): tests for reconciliation behavior and edge cases. + +- Additional **Wiring-only** / infra components (MAY): **manager** **runnables**/**sources** (not reconcilers, not pure helpers). + - Allowed example: + - `manager.Runnable`/`manager.LeaderElectionRunnable` initializers/sources that prepare or maintain in-memory state and expose it via a small interface (blocking + non-blocking access). + - Notes: + - These components MAY perform **Kubernetes API I/O** as part of initialization/maintenance. 
+    - Their registration/wiring belongs to **`controller.go`** (`mgr.Add(...)`, indexes, sources, etc.); **Reconciliation business logic** still belongs to **`reconciler.go`**.
+
+- Additional components (MAY): extracted helpers for heavy computations or caching.
+  - Allowed examples:
+    - “world view” / “planner” / “topology scorer” components that build an in-memory model for convenient calculations (often used to shape **actual**, decide **target**, and produce **report** artifacts).
+    - stateful allocators / ID pools (e.g., device minor / ordinal allocation) used for deterministic assignments (often producing **controller-owned state** that is persisted across reconciliations).
+    - caching components to avoid repeated expensive computation (explicitly owned by the reconciler and easy to invalidate).
+  - Constraints (MUST):
+    - computation components MUST be pure: no **Kubernetes API I/O**, no patches, no **DeepCopy**, no time/random/env **I/O**.
+    - caching components MUST NOT hide **Kubernetes API I/O** inside themselves; **I/O** stays in **`reconciler.go`** or other **runnables**/**sources**.
diff --git a/.cursor/rules/controller-predicate.mdc b/.cursor/rules/controller-predicate.mdc
new file mode 100644
index 000000000..b0b91c439
--- /dev/null
+++ b/.cursor/rules/controller-predicate.mdc
@@ -0,0 +1,197 @@
+---
+description: Rules for controller-runtime predicates/filters in predicates*.go: mechanical change detection only, no I/O, no domain logic, no mutations. Apply when editing images/controller/internal/controllers/**/predicates*.go, and when deciding whether logic belongs in predicates vs reconciliation. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: images/controller/internal/controllers/**/predicates*.go
+alwaysApply: false
+---
+
+See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions.
+
+- TL;DR:
+  - **`predicates.go`** contains controller-runtime **predicate**/**filter** implementations for a **controller package**.
+  - **predicates**/**filters** are **mechanical** change detection only:
+    - no **I/O**,
+    - no **domain/business** decisions,
+    - no mutation of observed objects.
+  - **`controller.go`** wires predicates into the **builder chain**:
+    - by calling `builder.WithPredicates(<Kind>Predicates()...)` at the `.For(...)`/`.Owns(...)`/`.Watches(...)` call site.
+    - Predicate implementation still lives in **`predicates.go`**.
+    - When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` MUST use the **short kind name** (see `controller-terminology.mdc`).
+  - **`reconciler.go`** MUST NOT contain **predicates**/**filters**.
+
+- Scope (MUST):
+  - This document applies only to **`predicates.go`**.
+  - It defines what is allowed inside controller-runtime **predicates**/**filters** and how to structure them.
+
+- What is allowed in **`predicates.go`** (MUST):
+  - Definitions of predicate sets as **functions** (no package-level `var` predicates).
+    Predicate-set function naming (MUST) follows this convention:
+    - `func <Kind>Predicates() []predicate.Predicate { ... }`
+    - `<Kind>` MUST either correspond to the Kubernetes object **Kind** being filtered, or be a short kind name that is already established in this codebase (do not invent new abbreviations ad-hoc).
+    - When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` MUST use the **short kind name** (see `controller-terminology.mdc`).
+    - Each such function returns **all** predicates needed for that `<Kind>` at the watch site where it is used.
+  - Pure, **mechanical** comparisons of object fields to decide whether to enqueue a **reconcile request**.
+  - Typed events (preferred): `event.TypedUpdateEvent[client.Object]`, etc.
+  - **`predicates.go`** MUST NOT define controller-runtime builder wiring helpers:
+    - no `*ForOptions` / `*OwnsOptions` / `*WatchesOptions` functions,
+    - no `builder.*` imports.
+
+- What is forbidden in **`predicates.go`** (MUST NOT):
+  - any **Kubernetes API I/O** (`Get/List/Create/Update/Patch/Delete`) or controller-runtime client usage;
+  - any multi-step **domain/business** logic (validation rules, placement/scheduling decisions, state machines);
+  - any mutation of the event objects (no writes to `.Spec`, `.Status`, metadata, conditions, maps/slices);
+  - any “hidden I/O” (time/random/env/network);
+  - direct `.Status.Conditions` access (use **`obju`** for condition comparisons).
+
+- Naming and shape (SHOULD):
+  - Predicate symbols SHOULD be unexported unless another package must reuse them.
+  - Use names that reflect the filtered object kind:
+    - `<Kind>Predicates` (returns `[]predicate.Predicate`)
+    - When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` SHOULD use the **short kind name**.
+  - Avoid generic prefixes like `primary*` in concrete controllers; prefer naming by the actual watched kind.
+
+- Multiple predicate sets for the same kind (MAY):
+  - If you need distinct predicate sets for the same `<Kind>` (for example, different watches), you MAY add a short suffix **before** `Predicates`:
+    - `<Kind><Scope>Predicates`
+    - `<Scope>` MUST be a short, stable identifier in `PascalCase` and MUST NOT repeat `<Kind>`.
+    - Typical scopes (illustrative): `Status`, `Spec`, `Child`, `Owner`, `Cast`.
+  - Prefer one canonical set per kind; introduce multiple sets only when it improves clarity at the watch site.
+
+- Rules for predicate behavior (MUST):
+  - Keep predicates lightweight and **mechanical** (no multi-step reasoning).
+  - If a handler would only `return true`, omit it (do not generate noop handlers).
+  - Performance matters: predicates are hot-path; avoid allocations, reflection, and heavy comparisons.
+  - Be conservative on uncertainty:
+    - if a type assertion fails or the event is not classifiable, return `true` (allow reconcile).
+
+- Change detection guidance (MUST):
+  - If **Reconciliation business logic** uses `.status.conditions` (or any condition-driven logic),
+    **predicate** MUST react to **`metadata.generation`** (**Generation**) changes.
+    - For CRDs, **Generation** usually bumps on spec changes.
+  - **Metadata-only changes** (labels/annotations/finalizers/ownerRefs) may not bump **Generation**.
+    If the controller must react to them, compare them explicitly via `client.Object` getters.
+
+- **object** access in **predicates** (MUST):
+  - Priority order:
+    - `client.Object` getters
+    - **`obju`** for conditions
+    - API **mechanical** helper methods
+    - direct field reads (last resort)
+  - If a field is available via `client.Object` methods, you MUST use those methods:
+    - `GetGeneration()`, `GetLabels()`, `GetAnnotations()`, `GetFinalizers()`, `GetOwnerReferences()`, etc.
+ + Example: functions returning predicate sets (predicates.go style) + (requires Go 1.21+ for `maps`/`slices`; and `k8s.io/apimachinery/pkg/api/equality` for `apiequality`) + + ```go + package examplecontroller + + import ( + "maps" + "slices" + + apiequality "k8s.io/apimachinery/pkg/api/equality" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + ) + + func examplePredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.Funcs{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + // React to spec-driven updates. + if e.ObjectNew.GetGeneration() != e.ObjectOld.GetGeneration() { + return true + } + + // React to metadata-only changes only when reconciliation depends on them. + if !maps.Equal(e.ObjectNew.GetLabels(), e.ObjectOld.GetLabels()) { + return true + } + if !slices.Equal(e.ObjectNew.GetFinalizers(), e.ObjectOld.GetFinalizers()) { + return true + } + if !apiequality.Semantic.DeepEqual(e.ObjectNew.GetOwnerReferences(), e.ObjectOld.GetOwnerReferences()) { + return true + } + + // Ignore pure status updates to avoid reconcile loops. + return false + }, + }, + } + } + ``` + +- Condition comparisons (MUST): + - If you need to compare **condition**(s) in **predicates**, you MUST use **`obju`** (do not open-code `.status.conditions` access). + - Prefer: + - `obju.AreConditionsSemanticallyEqual(...)` when you need Type/Status/Reason/Message/ObservedGeneration semantics. + - `obju.AreConditionsEqualByStatus(...)` when only Type+Status matter. + + Example: compare condition(s) via **`obju`** (predicates.go style) + + ```go + package examplecontroller + + import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + ) + + func exampleStatusPredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.Funcs{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + newObj, okNew := e.ObjectNew.(obju.StatusConditionObject) + oldObj, okOld := e.ObjectOld.(obju.StatusConditionObject) + if !okNew || !okOld || newObj == nil || oldObj == nil { + // Be conservative if we cannot type-assert. + return true + } + + return !obju.AreConditionsSemanticallyEqual(newObj, oldObj /* condition types... */) + }, + }, + } + } + ``` + +- Type assertions/casts (MUST): + - Cast to a concrete API type only when `client.Object` methods are not enough. + - If you cast and the assertion fails / is nil, return `true` (allow reconcile). + + Example: safe cast (predicates.go style) + + ```go + package examplecontroller + + import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + "example.com/api/v1alpha1" + ) + + func exampleCastPredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.Funcs{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + oldObj, okOld := e.ObjectOld.(*v1alpha1.Example) + newObj, okNew := e.ObjectNew.(*v1alpha1.Example) + if !okOld || !okNew || oldObj == nil || newObj == nil { + return true + } + + // Field-level mechanical comparison (keep it small and explicit). 
+ return newObj.Spec.Replicas != oldObj.Spec.Replicas + }, + }, + } + } + ``` + diff --git a/.cursor/rules/controller-reconcile-helper-apply.mdc b/.cursor/rules/controller-reconcile-helper-apply.mdc new file mode 100644 index 000000000..5ff881bf1 --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-apply.mdc @@ -0,0 +1,299 @@ +--- +description: Contracts for ApplyReconcileHelper (apply*) functions: pure/deterministic non-I/O in-memory mutations for exactly one patch domain. Apply when writing apply* helpers in reconciler*.go, and when deciding how to apply target/report artifacts to objects. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# ApplyReconcileHelper + +This document defines naming and contracts for **ApplyReconcileHelper** functions/methods. + +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **ApplyReconcileHelpers** (`apply*`) are **pure**, **deterministic**, strictly **non-I/O** “in-memory write” steps. +- They take a previously computed **target** (and/or **report**) and mutate `obj` in place for **exactly one** **patch domain** (**main patch domain** or **status patch domain**). +- A status **report** MAY directly reuse selected **actual** observations (including being the same value/type as an **actual** snapshot); persisting such observations into `.status` is OK and they remain **report/observations** (output-only). +- They MUST NOT perform **Kubernetes API I/O**, use the controller-runtime client, call **DeepCopy**, or execute patches / make **patch ordering** or **patch type decision** decisions. +- They MUST treat `target` / `report` (and any other inputs) as **read-only inputs** and MUST NOT mutate them (including via **Aliasing**); when copying maps/slices from `target` / `report` into `obj`, **Clone** to avoid sharing. +- If both **main patch domain** and **status patch domain** need changes, use two **ApplyReconcileHelpers** (one per **patch domain**) and compose them in **Reconcile methods**. + +--- + +## Definition + +An **ApplyReconcileHelper** (“apply helper”) is a **ReconcileHelper** that is: + +- **strictly non-I/O**, and +- applies a previously computed **target** (and/or **report**) to the in-memory object, and +- mutates **exactly one patch domain** in place (**main resource** or **status subresource**), without executing any **patch request**. + +Typical apply helpers perform the “mechanical write” step right after **Reconcile methods** create a **patch base** and right before they patch that domain. + +Notes on **status patch domain**: +- Values in `.status` may include both **controller-owned state** (persisted decisions/memory) and **report/observations** (the published **report**). +- The published **report** MAY include a direct projection of **actual** observations. In some cases the same value/type may be used for both **actual** and published output; once written to `.status` it is still **report/observations** (output-only). 
+- Apply helpers that mutate `.status` MUST keep this distinction clear in naming and data flow: + - applying persisted decisions should be driven by **target** (often “**target status**” / controller-owned fields), + - applying published status output should be driven by **report** (often from a dedicated `compute*Report` helper, or returned alongside **target** from `computeTarget*` as a separate output). + +--- + +## Naming + +- An **ApplyReconcileHelper** name MUST start with `apply` / `Apply`. +- **ApplyReconcileHelpers** MUST be domain-explicit in the name when ambiguity is possible (ambiguity is possible when the applied artifact name refers to a field/group that exists in both `.spec` (**main patch domain**) and `.status` (**status patch domain**) of the same **object**): + - `applyMain*` / `ApplyMain*` (**main patch domain**) + - `applyStatus*` / `ApplyStatus*` (**status patch domain**) +- **ApplyReconcileHelpers** SHOULD NOT include `Main` / `Status` in the name when there is no such ambiguity. +- For main-domain **ApplyReconcileHelpers**, the name MUST also include the concrete artifact being applied (e.g. labels, annotations, or a specific spec field/group) — avoid names that imply “the whole main”. +- **ApplyReconcileHelpers** names MUST NOT sound like persistence (`applyPatch`, `applyUpdate`, `applyToAPI`) — apply helpers only mutate in-memory state. +- **ApplyReconcileHelpers** names MUST NOT include `Desired` / `Actual` / `Intended` / `Target` / `Report` unless the applied “thing” name in the **object** API includes those words. + - Exception: helpers that apply published status artifacts MAY end with `Report` (e.g., `applyStatusReport`, `applyConditionsReport`) to make the `report`-driven write explicit. +- **ApplyReconcileHelpers** names SHOULD name the “thing” being applied: + - `applyLabels(obj, targetLabels)` + - `applySpecFoo(obj, targetFoo)` + - `applyStatus(obj, targetStatus)` (when applying controller-owned state) + - `applyStatusReport(obj, report)` / `applyConditionsReport(obj, reportConditions)` (when applying published **report**) + +--- + +## Preferred signatures + +- For **ApplyReconcileHelpers** (`apply*`), the simplest signature from the variants below that preserves explicit dependencies and purity SHOULD be chosen. +- If additional signature variants are explicitly permitted elsewhere in this document, they MAY also be used. + +### Simple apply +```go +func applyFoo(obj *v1alpha1.Foo, target TargetFoo) +``` + +Or, if an error is realistically possible: +```go +func applyFoo(obj *v1alpha1.Foo, target TargetFoo) error +``` + +--- + +## Receivers + +- **ApplyReconcileHelpers** MUST be plain functions (no `Reconciler` receiver). + +--- + +## I/O boundaries + +**ApplyReconcileHelpers** MUST NOT do any of the following: + +- controller-runtime client usage (`client.Client`, `r.client`, etc.); +- Kubernetes API calls (`Get/List/Create/Update/Patch/Delete`); +- `DeepCopy` (including `obj.DeepCopy()`, `runtime.Object.DeepCopyObject()`, etc.); +- executing patches (`Patch` / `Status().Patch`) or making any patch ordering / patch type decisions; +- creating/updating Kubernetes objects in the API server in any form. 
+ +**ApplyReconcileHelpers** MUST NOT do “hidden I/O” either: + +- `time.Now()` / `time.Since(...)` (nondeterministic wall-clock reads) (except setting `metav1.Condition.LastTransitionTime`, typically indirectly via `obju.SetStatusCondition`); +- random number generation (`rand.*`); +- environment reads (`os.Getenv`, reading files); +- network calls of any kind. + +> Rationale: apply helpers should be **deterministic** “in-memory write” steps; all API interactions and patch execution belong to **Reconcile methods**. + +--- + +## Determinism contract + +An **ApplyReconcileHelper** MUST be **deterministic** given its explicit inputs and intended mutation domain. + +See the common determinism contract in `controller-reconcile-helper.mdc`. + +> Practical reason: nondeterminism creates patch churn and flaky tests. + +--- + +## Read-only contract + +`apply*` / `Apply*` MUST treat all inputs except the target mutation on `obj` as read-only: + +- it MUST NOT mutate inputs other than `obj` (e.g., `target`, `report`, templates, computed structs); +- it MUST mutate only the intended **patch domain** on `obj` (**main resource** **or** **status subresource**), treating the other domain as read-only; +- it MUST NOT perform in-place modifications through aliases to non-`obj` data. + +See the common read-only contract in `controller-reconcile-helper.mdc` (especially the Go aliasing rule for `map` / `[]T`). + +--- + +## Patch-domain separation + +- `apply*` / `Apply*` MUST mutate `obj` in-place for **exactly one** patch domain: + - **main resource** (**metadata + spec + non-status fields**), **or** + - **status subresource** (`.status`). +- An **ApplyReconcileHelper** MUST NOT mutate both domains in the same function. +- If you need to apply **target**/**report** values to both domains, you MUST implement **two** apply helpers and call them separately from **Reconcile methods**. + +✅ Separate apply helpers (GOOD) +```go +func applyFoo(obj *v1alpha1.Foo, target TargetFooMain) +func applyFooStatusReport(obj *v1alpha1.Foo, report FooReport) +``` + +❌ Mixed apply (BAD) +```go +func applyFoo( + obj *v1alpha1.Foo, + targetMain TargetFooMain, + report FooReport, +) { + // mutates both spec/metadata and status in one helper +} +``` + +--- + +## Composition + +- An **ApplyReconcileHelper** MAY apply multiple related fields in one pass **within a single** **patch domain**. +- If applied fields represent one conceptual **target** (or one conceptual **report** artifact), they SHOULD be passed as one value (small struct) rather than a long parameter list. +- If applied changes are distinguishable and used independently, they SHOULD be split into separate `apply*` helpers and composed in **Reconcile methods** (not by making apply helpers depend on each other). +- An **ApplyReconcileHelper** MAY call **ConstructionReconcileHelpers** (`make*`, `compose*`, `new*`, `build*`) as pure in-memory building blocks, as long as it stays **non-I/O** and **deterministic**. + +--- + +## Flow phase scopes and outcomes + +- **ApplyReconcileHelpers** MUST NOT create a `reconcile/flow` **phase scope** (they do not accept `ctx context.Context`; see `controller-reconcile-helper.mdc`). +- **ApplyReconcileHelpers** MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`) (they are “in-memory write” steps). 
+ - If a failure is possible, return `error` and let the caller convert it into a flow result in its own scope + (for example, `rf.Fail(err)` in a reconcile scope or `ef.Err(err)` in an ensure scope). + +--- + +## Error handling + +- See the common error handling rules in `controller-reconcile-helper.mdc`. +- ApplyReconcileHelpers (`apply*`) SHOULD be non-failing. + - If an **ApplyReconcileHelper** returns `error`, it MUST be only for **local validation** failures (e.g., nil pointers, impossible desired shape). + - It MUST NOT wrap/enrich errors (external errors should not exist in `apply*`), and MUST NOT include **object identity** (e.g. `namespace/name`, UID, object key). + - Any action/**object identity** context belongs to the calling function. + +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing any Kubernetes API I/O (client usage / API calls in apply): +```go +// forbidden: apply helpers MUST NOT accept ctx (they are non-I/O) +// forbidden: apply helpers MUST NOT accept client.Client +func applyFoo(ctx context.Context, c client.Client, obj *v1alpha1.Foo, target TargetFoo) error { + // forbidden: apply helpers are non-I/O + return c.Update(ctx, obj) +} +``` + +❌ Executing patches or making patch decisions inside apply: +```go +// forbidden: apply helpers MUST NOT accept ctx or client.Client +func applyFoo(ctx context.Context, c client.Client, obj, base *v1alpha1.Foo, target TargetFoo) error { + // forbidden: patch execution belongs to Reconcile methods / PatchReconcileHelpers + obj.Spec = target.Spec + return c.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Calling `DeepCopy` inside apply: +```go +func applyFoo(obj *v1alpha1.Foo, target TargetFoo) { + _ = obj.DeepCopy() // forbidden: DeepCopy belongs to Reconcile methods + obj.Spec = target.Spec +} +``` + +❌ Returning a reconcile/ensure outcome / doing flow control inside apply: +```go +func applyFoo(obj *v1alpha1.Foo, target TargetFoo) flow.ReconcileOutcome { + var rf flow.ReconcileFlow + obj.Spec = target.Spec + return rf.Continue() // forbidden: apply helpers do not return flow control +} +``` + +❌ Adding logging/phases to apply helpers (they must stay tiny and have no `ctx`): +```go +// forbidden: apply helpers MUST NOT accept ctx +func applyFoo(ctx context.Context, obj *v1alpha1.Foo, target TargetFoo) error { + l := log.FromContext(ctx) + l.Info("applying target foo") // forbidden: apply helpers do not log + obj.Spec = target.Spec + return nil +} +``` + +❌ Mutating both patch domains in one apply helper: +```go +func applyFoo(obj *v1alpha1.Foo, targetMain TargetFooMain, report FooReport) { + obj.Spec = targetMain.Spec // main domain + // publishing report belongs to status domain + obj.Status = report.Status + // forbidden: apply must touch exactly one patch domain +} +``` + +❌ Implementing business logic inside apply (deciding desired state while applying it): +```go +func applyFoo(obj *v1alpha1.Foo, target TargetFoo) { + // forbidden: decisions belong to compute/ensure; apply is mechanical + if obj.Spec.Mode == "special" { + target.Replicas = 5 // also mutates target (see below) + } + obj.Spec.Replicas = target.Replicas +} +``` + +❌ Mutating `target` / `report` (or any other non-`obj` input): +```go +func applyLabels(obj *v1alpha1.Foo, target TargetLabels) { + target.Labels["x"] = "y" // forbidden: target is read-only + obju.SetLabels(obj, target.Labels) +} +``` + +❌ Sharing maps/slices from `target` / `report` into `obj` (aliasing): +```go +func applyLabels(obj *v1alpha1.Foo, target TargetLabels) { + 
obj.SetLabels(target.Labels) // forbidden: shares map backing storage + + // later mutation now also mutates `target.Labels` through aliasing + obj.GetLabels()["owned"] = "true" +} +``` + +❌ Writing nondeterministic ordered fields (map iteration order leaks into slices): +```go +func applyFinalizers(obj *v1alpha1.Foo, target TargetFinalizers) { + finals := make([]string, 0, len(target.Set)) + for f := range target.Set { // map iteration order is random + finals = append(finals, f) + } + // missing sort => nondeterministic object state => patch churn + obj.SetFinalizers(finals) +} +``` + +❌ Manual metadata/conditions manipulation when `objutilv1` must be used: +```go +func applyLabels(obj *v1alpha1.Foo, target TargetLabels) { + // forbidden in this codebase: do not open-code label map edits + if obj.Labels == nil { + obj.Labels = map[string]string{} + } + obj.Labels["a"] = "b" +} +``` diff --git a/.cursor/rules/controller-reconcile-helper-compute.mdc b/.cursor/rules/controller-reconcile-helper-compute.mdc new file mode 100644 index 000000000..129549e58 --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-compute.mdc @@ -0,0 +1,490 @@ +--- +description: Contracts for ComputeReconcileHelper (compute*) functions: pure/deterministic non-I/O computations producing intended/actual/target/report artifacts. Apply when writing compute* helpers in reconciler*.go, and when deciding what should be computed vs observed vs reported. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# ComputeReconcileHelper + +This document defines naming and contracts for **ComputeReconcileHelper** functions/methods. + +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **ComputeReconcileHelpers** (`compute*`) are **pure**, **deterministic**, strictly **non-I/O** computations (no **Hidden I/O**: no time/random/env/network). +- They compute **intended** (`computeIntended*`), **actual** (`computeActual*`), **target** (`computeTarget*`), and/or **report** (`compute*Report`) (and/or intermediate **computed value**s), and return them (or write into explicit `out` args). +- They MAY use **ConstructionReconcileHelpers** (`new*`, `build*`, `make*`, `compose*`) for internal in-memory construction, as long as the compute helper’s purity/determinism/non-I/O contract remains satisfied. +- They treat `obj` and all caller-provided inputs as **read-only inputs** and MUST NOT mutate them (including via **Aliasing** of maps/slices; **Clone** before modifying derived maps/slices). +- They MUST NOT perform **Kubernetes API I/O**, call **DeepCopy**, execute patches, or make any **patch ordering** / **patch type decision** decisions. +- A **ComputeReconcileHelper** MUST return computed values (and optionally `error`) and MUST NOT report object mutations or optimistic-lock intent. + In particular, a **ComputeReconcileHelper** MUST NOT return `flow.EnsureOutcome` and MUST NOT call `ReportChanged*` / `RequireOptimisticLock`. 
+- If `computeTarget*` derives **target** values for **both** **patch domains** (**main patch domain** + **status patch domain**) that will later be used by **IsInSyncReconcileHelper** and/or **ApplyReconcileHelper**, it MUST return **two separate** values (**target main** + **target status**), not a mixed struct. +- New code MUST NOT introduce `computeDesired*` helpers. Replace legacy “desired” helpers with **intended**/**target**/**report** helpers. +- If a **ComputeReconcileHelper** depends on previous compute output, the dependency MUST be explicit in the signature as args **after `obj`**. + +--- + +## Definition + +A **ComputeReconcileHelper** (“compute helper”) is a **ReconcileHelper** that is: + +- **strictly non-I/O**, and +- performs **computations** from inputs and the current object state, and +- returns computed results (and optionally an error). + +Typical compute helpers compute: +- **intended** (`computeIntended*`) and/or +- **actual** (`computeActual*`) and/or +- **target** (`computeTarget*`) and/or +- **report** (`compute*Report`) and/or +- intermediate derived values used by later steps. + +--- + +## Naming + +- A **ComputeReconcileHelper** name MUST start with `compute` / `Compute`. +- **ComputeReconcileHelpers** for **intended** computations MUST use the form: + - `computeIntended*` / `ComputeIntended*`. +- **ComputeReconcileHelpers** for **actual** computations MUST use the form: + - `computeActual*` / `ComputeActual*`. +- **ComputeReconcileHelpers** for **target** computations MUST use the form: + - `computeTarget*` / `ComputeTarget*`. +- **ComputeReconcileHelpers** for **report** computations MUST use the form: + - `compute*Report` / `Compute*Report` (i.e., the helper name MUST end with `Report`). + - Exception: a `computeTarget*` helper MAY also compute and return one or more **report** artifacts as additional outputs, as long as: + - the **report** output(s) are returned via separate return values / `out` args, and + - **report** data is not mixed into **target status**. +- **ComputeReconcileHelpers** that compute values for exactly one **patch domain** MUST be domain-explicit in the name when ambiguity is possible (ambiguity is possible when the computed “thing” name refers to a field/group that exists in both `.spec` (**main patch domain**) and `.status` (**status patch domain**) of the same **object**). +- If a **ComputeReconcileHelper** computes values spanning both **patch domain**s, it MAY omit `Main` / `Status`. +- **ComputeReconcileHelpers** names SHOULD name the computed “thing”: +- `computeActualStatus(...)` (ok when **actual** status snapshot is small; otherwise prefer artifact-specific) + - `computeActualLabels(...)` + - `computeActualSpecFoo(...)` +- `computeIntendedStatus(...)` (when computing **intended** status-shaped intent inputs / normalization artifacts) +- `computeIntendedLabels(...)` +- `computeIntendedSpecFoo(...)` +- `computeTargetLabels(...)` +- `computeTargetSpecFoo(...)` +- `computeTargetChildObjects(...)` +- `computeStatusReport(...)` +- `computeConditionsReport(...)` +- **ComputeReconcileHelpers** names SHOULD NOT be “vague” (`computeStuff`, `computeAll`, `computeData`) — the intent should be obvious from the name. + +Naming guidance (avoid overlap with **ConstructionReconcileHelpers**): +- Use `computeIntended*` / `computeActual*` / `computeTarget*` / `compute*Report` when the output is conceptually **intended**/**actual**/**target**/**report** in the reconciliation pipeline. 
+- Use **ConstructionReconcileHelpers** (`new*`, `build*`, `make*`, `compose*`) to construct helper inputs and intermediate values (including whole objects) that support compute helpers, without implying **intended**/**actual**/**target**/**report** pipeline semantics. + +--- + +## Preferred signatures + +- For **ComputeReconcileHelpers** (`compute*`), the simplest signature from the variants below that preserves explicit dependencies and purity SHOULD be chosen. +- If additional signature variants are explicitly permitted elsewhere in this document, they MAY also be used. + +### Simple computation (no flow, no logging) +```go +func computeIntendedFoo(obj *v1alpha1.Foo) (IntendedFoo, error) +``` + +Or, if no error is realistically possible: +```go +func computeIntendedFoo(obj *v1alpha1.Foo) IntendedFoo +``` + +Or, for **actual** computations: +```go +func computeActualFoo(obj *v1alpha1.Foo) (ActualFoo, error) +``` + +Or, if no error is realistically possible: +```go +func computeActualFoo(obj *v1alpha1.Foo) ActualFoo +``` + +Or, for **target** computations: +```go +func computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) (TargetFoo, error) +``` + +Or, if no error is realistically possible: +```go +func computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) TargetFoo +``` + +Or, for **target** computations that also emit a **report** in one pass: +```go +func computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) (TargetFoo, FooReport, error) +``` + +Or, for **report** computations: +```go +func computeFooReport(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo, targetFoo TargetFoo) (FooReport, error) +``` + +Or, if a compute helper needs data from `Reconciler`: +```go +func (r *Reconciler) computeIntendedFoo(obj *v1alpha1.Foo) (IntendedFoo, error) +``` + +Or, if no error is realistically possible: +```go +func (r *Reconciler) computeIntendedFoo(obj *v1alpha1.Foo) IntendedFoo +``` + +Or, for **actual** computations when the helper needs data from `Reconciler`: +```go +func (r *Reconciler) computeActualFoo(obj *v1alpha1.Foo) (ActualFoo, error) +``` + +Or, if no error is realistically possible: +```go +func (r *Reconciler) computeActualFoo(obj *v1alpha1.Foo) ActualFoo +``` + +Or, for **target** computations when the helper needs data from `Reconciler`: +```go +func (r *Reconciler) computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) (TargetFoo, error) +``` + +Or, for **report** computations when the helper needs data from `Reconciler`: +```go +func (r *Reconciler) computeFooReport(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo, targetFoo TargetFoo) (FooReport, error) +``` + +### Complex compute with structured logging + +When a compute helper is large and benefits from phase-scoped logging/panic logging, it SHOULD: +- accept `ctx context.Context`, +- compute into explicit `out` args, +- return `error`, +- and use a step scope (`flow.BeginStep`) for standardized `phase start/end` logs. 
+ +Preferred signature: +```go +func computeIntendedFoo(ctx context.Context, obj *v1alpha1.Foo, out *IntendedFoo) error +``` + +Or, if a compute helper needs data from `Reconciler`: +```go +func (r *Reconciler) computeIntendedFoo(ctx context.Context, obj *v1alpha1.Foo, out *IntendedFoo) error +``` + +Or, for **target** computations that also emit a **report** in one pass: +```go +func computeTargetFoo( + ctx context.Context, + obj *v1alpha1.Foo, + intendedFoo IntendedFoo, + actualFoo ActualFoo, + outTarget *TargetFoo, + outReport *FooReport, +) error +``` + +### Dependent compute +If a compute helper depends on previous compute output, the dependency MUST be explicit and come **after `obj`**: +```go +func computeTargetBar(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo, targetFoo TargetFoo) (TargetBar, error) +``` + +Or, for **actual** computations: +```go +func computeActualBar(obj *v1alpha1.Foo, actualFoo ActualFoo) (ActualBar, error) +``` + +Or, if a compute helper needs data from `Reconciler`: +```go +func (r *Reconciler) computeTargetBar(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo, targetFoo TargetFoo) (TargetBar, error) +``` + +Or, for **actual** computations when the helper needs data from `Reconciler`: +```go +func (r *Reconciler) computeActualBar(obj *v1alpha1.Foo, actualFoo ActualFoo) (ActualBar, error) +``` + +--- + +## Receivers + +- **ComputeReconcileHelpers** SHOULD be plain functions when they do not need any data from `Reconciler`. +- If a **ComputeReconcileHelper** needs data from `Reconciler`, it MUST be a method on `Reconciler`. + +--- + +## I/O boundaries + +**ComputeReconcileHelpers** MUST NOT do any of the following: + +- controller-runtime client usage (`client.Client`, `r.client`, etc.); +- Kubernetes API calls (`Get/List/Create/Update/Patch/Delete`); +- `DeepCopy` (including `obj.DeepCopy()`, `runtime.Object.DeepCopyObject()`, etc.); +- executing patches (`Patch` / `Status().Patch`) or making any patch ordering / patch type decisions; +- creating/updating Kubernetes objects in the API server in any form. + +**ComputeReconcileHelpers** MUST NOT do “hidden I/O” either: + +- `time.Now()` / `time.Since(...)` (nondeterministic wall-clock reads); +- random number generation (`rand.*`); +- environment reads (`os.Getenv`, reading files); +- network calls of any kind. + +> Rationale: compute helpers should be **deterministic** and unit-testable; all observable side effects belong to **ApplyReconcileHelpers** / **PatchReconcileHelpers** / **EnsureReconcileHelpers** / etc. + +--- + +## Determinism contract + +A **ComputeReconcileHelper** MUST be **deterministic** given its explicit inputs and read-only dependencies. + +See the common determinism contract in `controller-reconcile-helper.mdc`. + +In particular, avoid producing “equivalent but different” outputs across runs (e.g., unstable ordering). +- **ComputeReconcileHelpers** MAY use extracted computation/caching components owned by the reconciler (e.g. “world view” / “planner” / “topology scorer”, caches), as described in `controller-file-structure.mdc` (“Additional components”), as long as they do not violate the I/O boundaries above. + - Note: cache population is a side effect and an additional source of state; therefore, the helper is deterministic only relative to that state. For the same explicit inputs and the same state of these components, the result MUST be the same. 
+- Errors (when returned) MUST be stable for the same inputs and object state (no nondeterministic branching / hidden I/O). + +> Practical reason: nondeterminism creates patch churn and flaky tests. + +--- + +## Read-only contract + +`computeIntended*` / `ComputeIntended*`, `computeActual*` / `ComputeActual*`, `computeTarget*` / `ComputeTarget*`, and `compute*Report` / `Compute*Report` MUST treat all inputs as read-only: + +- it MUST NOT mutate any input values (including `obj` and any computed dependencies passed after `obj`); +- it MUST NOT perform in-place modifications through aliases. + +Note: reconciler-owned deterministic components (e.g. caches) are allowed mutation targets in `compute*` helpers **only** under the constraints defined above (non-I/O, explicit dependency, deterministic relative to the component state). +If a `compute*` helper mutates such a component, its GoDoc comment MUST explicitly state that this helper mutates reconciler-owned deterministic state and why this is acceptable (rare-case exception). + +See the common read-only contract in `controller-reconcile-helper.mdc` (especially the Go aliasing rule for `map` / `[]T`). + +--- + +## Phase scopes (optional) + +- A **ComputeReconcileHelper** MUST NOT create a phase scope by default. +- A **large** **ComputeReconcileHelper** MAY create a step phase scope (`flow.BeginStep` + deferred `sf.OnEnd(&err)`) + **only when it improves structure or diagnostics**. + - Otherwise (small/straightforward compute), it MUST NOT create a phase scope. + - If it creates a step phase scope (or writes logs), it MUST accept `ctx context.Context` (see `controller-reconcile-helper.mdc`). + - Step scope placement rules are defined in `controller-reconciliation-flow.mdc`. + +### Change reporting and optimistic-lock signaling + +**ComputeReconcileHelpers** MUST NOT report object changes or optimistic-lock requirements: +- MUST NOT return `flow.EnsureOutcome` +- MUST NOT call `ReportChanged` / `ReportChangedIf` +- MUST NOT call `RequireOptimisticLock` + +Rationale: change reporting / optimistic-lock intent semantically mean +“this helper already mutated the target object and the subsequent save of that mutation must use **Optimistic locking** semantics”. +**ComputeReconcileHelpers** do not mutate `obj` by contract. + +### Step scope pattern (illustrative) + +```go +func computeIntendedFoo(ctx context.Context, obj *v1alpha1.Foo, out *IntendedFoo) (err error) { + sf := flow.BeginStep(ctx, "compute-intended-foo") + defer sf.OnEnd(&err) + + if out == nil { + return sf.Errf("out is nil") + } + + // compute into *out (pure) + // use sf.Ctx() for context if needed + *out = IntendedFoo{ /* ... */ } + + return nil +} +``` + +--- + +## Patch-domain separation + +- `computeIntended*` / `ComputeIntended*`, `computeActual*` / `ComputeActual*`, `computeTarget*` / `ComputeTarget*`, and `compute*Report` / `Compute*Report` MAY analyze **both** **patch domains** (**main patch domain** and **status patch domain**) as inputs. +- If a `computeTarget*` helper derives **target** values for **both** **patch domains** (**main patch domain** + **status patch domain**), and those **target** values will later be used by `IsInSync` and/or `Apply`, it MUST return **two separate** values (**target main** + **target status**), not a single “mixed” struct. +- **target status** (for `computeTarget*`) is reserved for status-shaped values that represent **controller-owned state** to persist. + - It MUST NOT include **report** data (conditions/messages/progress). 
+ - A `computeTarget*` helper MAY also compute **report** output, but it MUST return that **report** as a separate output (not embedded into **target status**). +- **report** data is written under the **status patch domain**. + - It is typically computed by `compute*Report` helpers, but a `computeTarget*` helper MAY also return **report** output alongside **target** (separate outputs). + - **report** MAY include published observations derived from **actual**. + - In some cases, a published observation is exactly the same value as an **actual** snapshot (or a subset). Reusing the same value/type is acceptable; once written to `.status` it is still **report/observations** (output-only). +- If a `computeActual*` helper derives **actual** snapshot values that are used only as intermediate inputs for other compute helpers, it MAY return them in any shape that is convenient for that internal composition (including a single struct). + +✅ Separate **target** values (GOOD) +```go +func (r *Reconciler) computeTargetX(obj *v1alpha1.X, intended IntendedX, actual ActualX) (targetMain TargetLabels, targetStatus TargetEKStatus, err error) +``` + +❌ Mixed **target** main+status (BAD) +```go +func (r *Reconciler) computeTargetX(obj *v1alpha1.X, intended IntendedX, actual ActualX) (target MixedTargetX, err error) // main+status intermingled +``` + +Notes (SHOULD): +- “Main” typically includes metadata/spec of the root object and/or child objects (intended/target or actual, depending on the helper). +- “Status” typically includes conditions, observed generation, and other status-only values (intended/target or actual, depending on the helper). + +--- + +## Composition + +- A **ComputeReconcileHelper** MAY compute multiple related outputs (intended/target and/or actual) in one pass. + - If these outputs are **not distinguishable for external code** (they represent one conceptual “state”), it SHOULD return them as **one object** (small struct, anonymous struct, slice/map). + - If these outputs **are distinguishable for external code** (they are meaningfully different and will be used independently), it SHOULD return them as **separate objects**. +- A `computeIntended*` / `ComputeIntended*` helper MAY call other `computeIntended*` helpers (pure composition). +- A `computeActual*` / `ComputeActual*` helper MAY call other `computeActual*` helpers only (pure composition). +- A `computeTarget*` / `ComputeTarget*` helper MAY call `computeIntended*`, `computeActual*`, `computeTarget*`, and/or `compute*Report` helpers (pure composition) — especially when it returns **target** and **report** outputs in the same pass. +- A `compute*Report` / `Compute*Report` helper MAY call `computeActual*` helpers and/or other `compute*Report` helpers (pure composition). +- Any `compute*` helper MAY call **ConstructionReconcileHelpers** (`new*`, `build*`, `make*`, `compose*`) as pure building blocks. +- A **ComputeReconcileHelper** MAY depend on outputs of previous compute helpers: + - the dependency MUST be explicit in the signature as additional args **after `obj`**. + +--- + +## Error handling + +- See the common error handling rules in `controller-reconcile-helper.mdc`. +- **ComputeReconcileHelpers** SHOULD generally return errors as-is. 
+ + **Allowed (rare)**: when propagating a **non-local** error (e.g., from parsing/validation libs or injected pure components) and additional context is necessary to **disambiguate multiple different error sources** within the same calling **Reconcile method**, a **ComputeReconcileHelper** MAY wrap with small, local context: + - prefer `fmt.Errorf(": %w", err)` + - keep `` specific to the helper responsibility (e.g., `parseDesiredTopology`, `computeDesiredLabels`, `normalizeReplicaSet`) + + **Forbidden (MUST NOT)**: + - do not add **object identity** (e.g. `namespace/name`, UID, object key) + - do not add generic “outside world” context (that belongs to the **Reconcile method**) + +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing any Kubernetes API I/O (directly or indirectly): +```go +func (r *Reconciler) computeActualFoo(ctx context.Context, obj *v1alpha1.Foo) (ActualFoo, error) { + var cm corev1.ConfigMap + key := client.ObjectKey{Namespace: obj.Namespace, Name: "some-cm"} + if err := r.client.Get(ctx, key, &cm); err != nil { // forbidden: I/O in compute + return ActualFoo{}, err + } + return ActualFoo{}, nil +} +``` + +❌ Executing a patch / update / delete (or hiding it behind helpers): +```go +func computeTargetFoo(ctx context.Context, obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) (TargetFoo, error) { + _ = patchFoo(ctx, obj) // forbidden: patch execution in compute + return TargetFoo{}, nil +} +``` + +❌ Calling `DeepCopy` as a shortcut (or to “avoid aliasing”): +```go +func computeIntendedFoo(obj *v1alpha1.Foo) IntendedFoo { + _ = obj.DeepCopy() // forbidden in compute helpers + return IntendedFoo{} +} +``` + +❌ Mutating `obj` (including “harmless” metadata/spec/status writes): +```go +func computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) TargetFoo { + obj.Spec.Replicas = 3 // forbidden: compute must not mutate obj + return TargetFoo{} +} +``` + +❌ Mutating `obj` through aliasing of maps/slices: +```go +func computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) TargetFoo { + labels := obj.GetLabels() + labels["my-controller/owned"] = "true" // forbidden: mutates obj via alias + return TargetFoo{} +} +``` + +❌ Returning references that alias `obj` internals (callers may mutate later): +```go +func computeActualFoo(obj *v1alpha1.Foo) ActualFoo { + return ActualFoo{ + Labels: obj.GetLabels(), // forbidden: exposes obj map alias + } +} +``` + +❌ Hidden I/O / nondeterminism (time, random, env, filesystem, extra network): +```go +func computeIntendedFoo(obj *v1alpha1.Foo) IntendedFoo { + _ = time.Now() // forbidden + _ = rand.Int() // forbidden + _ = os.Getenv("X") // forbidden + // net/http calls, reading files, etc. 
are also forbidden + return IntendedFoo{} +} +``` + +❌ Depending on map iteration order (unstable output → patch churn): +```go +func computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) TargetFoo { + out := make([]string, 0, len(obj.Spec.Flags)) + for k := range obj.Spec.Flags { // map iteration order is random + out = append(out, k) + } + // missing sort => nondeterministic output + return TargetFoo{Keys: out} +} +``` + +❌ Mixing **target main** + **target status** into one “mixed” **target** value used by Apply/IsInSync: +```go +type MixedTargetFoo struct { + Labels map[string]string + Status v1alpha1.FooStatus +} + +func computeTargetFoo(obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo) (MixedTargetFoo, error) { // forbidden shape + return MixedTargetFoo{}, nil +} +``` + +❌ Smuggling implicit dependencies instead of explicit arguments: +```go +var globalDefault IntendedFoo // forbidden: implicit dependency + +func computeIntendedFoo(obj *v1alpha1.Foo) IntendedFoo { + return globalDefault // hidden dependency: not explicit in signature +} +``` + +❌ Writing results into `obj` instead of returning them / writing into an explicit `out` arg: +```go +func computeActualFoo(obj *v1alpha1.Foo) ActualFoo { + obj.Status.ObservedGeneration = obj.Generation // forbidden: compute writes into obj + return ActualFoo{} +} +``` + +❌ Using change reporting / optimistic-lock signaling in compute: +```go +func computeTargetFoo(ctx context.Context, obj *v1alpha1.Foo, intendedFoo IntendedFoo, actualFoo ActualFoo, out *TargetFoo) error { + *out = TargetFoo{ /* ... */ } + + // forbidden: compute helpers do not mutate obj and must not signal persistence semantics + _ = flow.EnsureOutcome{}.ReportChanged() // (illustrative) forbidden category mixing + + return nil +} +``` diff --git a/.cursor/rules/controller-reconcile-helper-construction.mdc b/.cursor/rules/controller-reconcile-helper-construction.mdc new file mode 100644 index 000000000..865809a27 --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-construction.mdc @@ -0,0 +1,352 @@ +--- +description: Contracts for ConstructionReconcileHelper (new*/build*/make*/compose*) functions: pure/deterministic non-I/O in-memory construction helpers and naming family selection. Apply when writing construction helpers used by compute helpers in reconciler*.go, and when deciding naming/shape for in-memory builders. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# ConstructionReconcileHelper + +This document defines naming and contracts for **ConstructionReconcileHelper** functions/methods: +`new*`, `build*`, `make*`, `compose*`. + +Common controller terminology lives in `controller-terminology.mdc`. +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **ConstructionReconcileHelpers** (`new*`/`build*`/`make*`/`compose*`) are **pure**, **deterministic**, strictly **non-I/O** helpers that construct in-memory values/objects (or groups of them) from **explicit inputs**. 
+- Inputs are **read-only**: + - MUST NOT mutate inputs (including via Go **aliasing** of maps/slices). + - Clone maps/slices before editing; avoid returning references that alias caller-owned storage unless explicitly documented and safe. +- MUST NOT: + - do Kubernetes API I/O, filesystem/network/env reads, or use time/random sources, + - log/print, accept `context.Context`, start `reconcile/flow` phase scopes (`flow.BeginReconcile` / `flow.BeginEnsure` / `flow.BeginStep`), or call `DeepCopy`, + - return `flow.ReconcileOutcome` / `flow.EnsureOutcome` or make flow/patch orchestration decisions (patch ordering/strategy/execution). +- MUST be plain functions (no `Reconciler` receiver) and may only call other **construction** helpers. +- If the primary goal is a reconciliation pipeline artifact (**intended/actual/target/report**) or domain decision-making, prefer **ComputeReconcileHelper** (`compute*`) and use construction helpers only as sub-steps. + +Naming family selection (pick exactly one, by return shape + meaning): + +1) Returns **one logical domain whole** (root value) and owns invariants → `new*` +2) Returns a **set of independently meaningful results** (`[]T`, `map[...]T`, tuples) → `build*` +3) Returns **mechanical glue** (packing/formatting, minimal semantics) → `make*` +4) Only **binds already-built parts** (no construction/invariants) → `compose*` + +--- + +## Definition + +A **ConstructionReconcileHelper** (“construction helper”) is a **ReconcileHelper** that is: + +- **strictly non-I/O**, and +- **deterministic**, and +- constructs new in-memory values/objects (or groups of values/objects), and +- treats all inputs as **read-only inputs** (no mutation, including via **Aliasing**), and +- returns the constructed result(s) (and optionally an error). + +Typical construction helpers are pure “building blocks” used by reconciliation code and other helpers to assemble in-memory values/objects, without implying **intended**/**actual**/**target**/**report** pipeline semantics. + +Key distinction vs **ComputeReconcileHelper**: +- **ConstructionReconcileHelper** focuses on *constructing* one object/value (or a set of objects/values) from explicitly provided inputs. +- **ComputeReconcileHelper** focuses on *computing state* (typically one state artifact from another) in the reconciliation pipeline (for example, deriving **intended** from inputs, **target** from **intended** + **actual**, or building a **report**). + Computing state often includes construction steps, but construction is then a sub-step of the computation rather than the main purpose. + +Rule of thumb: +- If the primary purpose is deterministic construction from clearly defined inputs → use **ConstructionReconcileHelper** (`new*`/`build*`/`make*`/`compose*`). +- If the primary purpose is computing state (usually “state from state”) → use **ComputeReconcileHelper** (`compute*`). + +IMPORTANT!!! MUST NOT be used for domain decisions or pipeline artifacts (use **ComputeReconcileHelper** for **intended**/**actual**/**target**/**report** computations). + +In this codebase, **ConstructionReconcileHelper** uses four naming families (`new*`, `build*`, `make*`, `compose*`), described below as separate sections. + +> Naming intent: `new*/build*/make*/compose*` communicates *what kind of thing was constructed*, +> while `compute*` / `apply*` / `ensure*` communicate *reconciliation role and allowed side effects*. 
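+
+For illustration, a minimal sketch of this division of labor (all names here are hypothetical): the `compute*` helper owns the pipeline semantics, while the construction helper is a pure sub-step that assembles one in-memory value from explicit inputs.
+
+```go
+// newChildConfigMap is a ConstructionReconcileHelper: it constructs one in-memory
+// object from explicit inputs and owns no intended/actual/target/report semantics.
+func newChildConfigMap(name, namespace string, data map[string]string) corev1.ConfigMap {
+	cloned := make(map[string]string, len(data)) // clone: do not alias the caller-owned map
+	for k, v := range data {
+		cloned[k] = v
+	}
+	return corev1.ConfigMap{
+		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
+		Data:       cloned,
+	}
+}
+
+// computeTargetConfigMap is a ComputeReconcileHelper: it derives the target child
+// object in the pipeline and uses the construction helper as a pure sub-step.
+// IntendedFoo and its ConfigData field are illustrative only.
+func computeTargetConfigMap(obj *v1alpha1.Foo, intended IntendedFoo) corev1.ConfigMap {
+	return newChildConfigMap(obj.Name+"-config", obj.Namespace, intended.ConfigData)
+}
+```
+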
+ +### `new*` — **Single domain whole** + +Choose `new*` when the result is **one logical domain whole**, even if multiple internal objects are created. + +When: +- Result is a single logical unit; internal parts have no meaning independently. +- The function owns the invariants of that composition. + +Signals: +- One “root” return value (single type). +- Callers treat the result as a whole. +- If construction fails, domain meaning breaks (so returning `error` may be appropriate). + +Examples: +```go +func newVolumeLayout(cfg Config) VolumeLayout +func newPodTemplate(cr *MyCR) (corev1.PodTemplateSpec, error) +func newChildService(cr *MyCR) corev1.Service +``` + +> Note: do not use `new*` solely to allocate memory (`&T{}`); the name is about a **domain whole**, not a pointer. + +### `build*` — **Set of independent results** + +Choose `build*` when the function returns a **set of independently meaningful results**: `[]T`, `map[...]T`, `(A, B, C)`, etc. + +When: +- Each result has its own lifecycle (no wrapper domain type). +- The function aggregates steps/sources and returns multiple independent outputs. +- Often used near reconciliation orchestration to prepare multiple objects. + +Examples: +```go +func buildStatusConditions(state State) []metav1.Condition +func buildOwnedResources(cr *MyCR) []client.Object +func buildLabelsAndAnnotations(cr *MyCR) (map[string]string, map[string]string) +``` + +### `make*` — **Mechanical glue** + +Choose `make*` for **mechanical glue**: simple assembling/packing/formatting of inputs with minimal logic and no domain semantics. + +Examples: +```go +func makeConditionSet(conds ...metav1.Condition) []metav1.Condition +func makeOwnerRefs(owner metav1.Object) []metav1.OwnerReference +func makeLabels(kv ...string) map[string]string +``` + +### `compose*` — **Bind already-built parts** + +Choose `compose*` when you want to make it explicit that the function does **not create** new meaning; it only **binds** already computed values. + +When: +- Inputs are already computed “ready” objects/values. +- No heavy computation or invariant ownership. +- Only grouping/tying together. + +Examples: +```go +func composeOwnerRefsAndLabels(ownerRefs []metav1.OwnerReference, labels map[string]string) metav1.ObjectMeta +func composeStatusWithConditions(base FooStatus, conds []metav1.Condition) FooStatus +``` + +--- + +## Naming + +- A **ConstructionReconcileHelper** name MUST start with one of: + `new` / `New` / `build` / `Build` / `make` / `Make` / `compose` / `Compose`. +- A **ConstructionReconcileHelper** MUST choose exactly one naming family by the *shape and meaning* of the return value: + - **`new*`**: + - MUST be used when the result is **one logical domain whole** (even if built from many internal parts). + - MUST NOT be used when the function returns a set of independently meaningful results (use `build*` instead). + - **`build*`**: + - MUST be used when the function returns a **set of independently meaningful results** (`[]T`, `map[...]T`, tuples). + - MUST NOT be used when the function returns one domain whole (use `new*` instead). + - **`make*`**: + - MUST be used for **mechanical glue**: simple assembly/packing/formatting of inputs with minimal/no domain semantics. + - **`compose*`**: + - MUST be used when the function only **binds already-built parts** (grouping/tying together) and does not create new meaning/invariants. 
+ - MUST NOT be used for domain decisions or pipeline artifacts (use **ComputeReconcileHelper** for **intended**/**actual**/**target**/**report** computations). + +--- + +## Preferred signatures + +- For **ConstructionReconcileHelpers**, the simplest signature that preserves determinism and read-only inputs SHOULD be chosen. +- **ConstructionReconcileHelpers** MUST NOT accept `ctx context.Context`. + - If you need logging/phases/flow control, use **ComputeReconcileHelpers** / **EnsureReconcileHelpers** or keep it in the caller. +- `new*` / `build*` helpers MAY return `(T, error)` when construction can fail. +- `make*` / `compose*` helpers SHOULD be non-failing (prefer returning a value only). + +Examples: + +```go +func newPodTemplate(cr *v1alpha1.Foo) (corev1.PodTemplateSpec, error) +``` + +```go +func buildOwnedResources(cr *v1alpha1.Foo) []client.Object +``` + +```go +func makeOwnerRefs(owner metav1.Object) []metav1.OwnerReference +``` + +```go +func composeServiceSpecWithPorts(spec corev1.ServiceSpec, ports []corev1.ServicePort) corev1.ServiceSpec +``` + +--- + +## Receivers + +- **ConstructionReconcileHelpers** MUST be plain functions (no `Reconciler` receiver). + +--- + +## I/O boundaries + +**ConstructionReconcileHelpers** MUST NOT perform **I/O** of any kind: + +- **Kubernetes API I/O** (no client usage), +- filesystem/network/env reads, +- time/random sources, +- logging/printing, +- and MUST NOT call **DeepCopy**. + +--- + +## Determinism contract + +**ConstructionReconcileHelpers** MUST be **deterministic** for the same explicit inputs: + +- stable ordering (sort when building ordered slices from maps/sets), +- no map-iteration-order leakage. + +See the common determinism contract in `controller-reconcile-helper.mdc`. + +--- + +## Read-only contract + +**ConstructionReconcileHelpers** MUST treat all inputs as **read-only inputs**: + +- no mutation of inputs (including through **Aliasing**), +- clone maps/slices before editing, +- avoid returning references that alias internal storage of inputs (unless explicitly documented and safe). + +See the common read-only contract in `controller-reconcile-helper.mdc`. + +--- + +## Patch-domain separation + +- **ConstructionReconcileHelpers** MUST NOT execute patches, make **Patch ordering** decisions, or mutate a Kubernetes **patch domain** as part of their work. + +--- + +## Composition + +- **ConstructionReconcileHelpers** are “building blocks”. +- **ConstructionReconcileHelpers** are typically used inside **ComputeReconcileHelpers** and **EnsureReconcileHelpers**. +- A **ConstructionReconcileHelper** MAY call other **ConstructionReconcileHelpers** (`new*`, `build*`, `make*`, `compose*`) as pure sub-steps. +- A **ConstructionReconcileHelper** MUST NOT call **ReconcileHelpers** from other helper categories (`compute*`, `apply*`, `ensure*`, `patch*`, `create*`, `delete*`, `is*InSync`). + - If you need those semantics, move the orchestration to the caller (typically a compute/ensure helper or a Reconcile method). +- If a function’s primary purpose is to produce **intended**/**actual**/**target**/**report** as part of reconciliation, you SHOULD prefer `compute*` naming and use **ConstructionReconcileHelpers** internally for sub-steps. + +Important distinctions: + +- `new*` constructs an in-memory object/value. + `create*` (**CreateReconcileHelper**) persists an object via Kubernetes API **I/O**. 
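+
+For example (a hypothetical sketch: `newChildService` is the illustrative constructor from the `new*` examples above, `createSVC` is a CreateReconcileHelper as defined in `controller-reconcile-helper-create.mdc`, and `rf` is assumed to be a `flow.ReconcileFlow` in the calling **Reconcile method**):
+
+```go
+// inside a Reconcile method: construct in memory first, then persist
+// cr is the reconciled *MyCR owned by the surrounding Reconcile method
+svc := newChildService(cr) // ConstructionReconcileHelper: in-memory only, no I/O
+if err := r.createSVC(ctx, &svc); err != nil { // CreateReconcileHelper: exactly one Create(...) call
+	return rf.Fail(err) // converting the error into a flow result stays in the Reconcile method
+}
+```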
+ +--- + +## Flow phase scopes and outcomes + +- **ConstructionReconcileHelpers** MUST NOT create a `reconcile/flow` **phase scope** (`flow.BeginReconcile` / `flow.BeginEnsure` / `flow.BeginStep`). +- **ConstructionReconcileHelpers** MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`). +- **ConstructionReconcileHelpers** MUST NOT log (they do not accept `ctx context.Context`). + +--- + +## Error handling + +- Like any **ReconcileHelper**, an error from a **ConstructionReconcileHelper** MUST NOT include **object identity** (see `controller-reconcile-helper.mdc`). +- Construction helpers SHOULD be non-failing where possible. +- If a **ConstructionReconcileHelper** returns an `error`, it: + - MUST NOT include **object identity** (e.g. `namespace/name`, UID, object key), + - MUST NOT wrap/enrich errors with “outside world” context (that belongs to the caller), + - SHOULD be used only for local validation / impossible-shape failures / pure parsing failures. +- **Allowed (rare):** when propagating a non-local pure error and additional context is necessary to disambiguate multiple error sources in the same caller, a helper MAY wrap with small, local action context: + - prefer `fmt.Errorf(": %w", err)` + - keep `` specific to the helper responsibility. + +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing any Kubernetes API I/O: + +```go +// forbidden: construction helpers MUST NOT accept ctx +// forbidden: construction helpers MUST NOT accept client.Client +func newFoo(ctx context.Context, c client.Client, obj *v1alpha1.Foo) (FooOut, error) { + // forbidden: I/O in ConstructionReconcileHelper + _ = c.Get(ctx, client.ObjectKeyFromObject(obj), &corev1.ConfigMap{}) + return FooOut{}, nil +} +``` + +❌ Accepting `ctx` / logging / creating phases: + +```go +func buildFoo(ctx context.Context, obj *v1alpha1.Foo) FooOut { + l := log.FromContext(ctx) + l.Info("building foo") // forbidden: no logging/phases in construction helpers + rf := flow.BeginReconcile(ctx, "build-foo") // forbidden: no flow phase scopes in construction helpers + _ = rf + return FooOut{} +} +``` + +❌ Returning `flow.Outcome` / doing flow control: + +```go +func makeFoo(obj *v1alpha1.Foo) flow.ReconcileOutcome { + var rf flow.ReconcileFlow + return rf.Continue() // forbidden: construction helpers do not return reconcile outcomes +} +``` + +❌ Hidden I/O / nondeterminism: + +```go +func makeNonce() string { + // forbidden: time/random sources + return time.Now().Format(time.RFC3339) +} +``` + +❌ Depending on map iteration order: + +```go +func buildKeys(m map[string]struct{}) []string { + out := make([]string, 0, len(m)) + for k := range m { // random order + out = append(out, k) + } + // missing sort => nondeterministic output + return out +} +``` + +❌ Mutating inputs through aliasing: + +```go +func makeLabels(in map[string]string) map[string]string { + // forbidden: mutates caller-owned map + in["x"] = "y" + return in +} +``` + +❌ Calling other helper categories from construction helpers: + +```go +func newFoo(obj *v1alpha1.Foo) (FooOut, error) { + _ = computeTargetFoo(obj) // forbidden: construction helpers only call other construction helpers + return FooOut{}, nil +} +``` + +❌ Calling `DeepCopy` as a shortcut: + +```go +func newFoo(obj *v1alpha1.Foo) FooOut { + _ = obj.DeepCopy() // forbidden: DeepCopy belongs to Reconcile methods + return FooOut{} +} +``` diff --git a/.cursor/rules/controller-reconcile-helper-create.mdc b/.cursor/rules/controller-reconcile-helper-create.mdc 
new file mode 100644
index 000000000..15d2fbcba
--- /dev/null
+++ b/.cursor/rules/controller-reconcile-helper-create.mdc
@@ -0,0 +1,282 @@
+---
+description: Contracts for CreateReconcileHelper (create) functions: exactly one Kubernetes API Create call for one object, deterministic payload, and no status writes. Apply when writing create* helpers in reconciler*.go, and when deciding how to create child resources safely. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open).
+globs: images/controller/internal/controllers/**/reconciler*.go
+alwaysApply: false
+---
+
+See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions.
+
+# CreateReconcileHelper
+
+This document defines naming and contracts for **CreateReconcileHelper** functions/methods.
+
+Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`.
+
+---
+
+## TL;DR
+
+Summary only; if anything differs, follow normative sections below.
+
+- **CreateReconcileHelpers** (`create*`) are **single-call I/O helpers**: they perform exactly **one** **Kubernetes API I/O** write — `Create(...)` — for exactly one **object**.
+- They MUST create using the **caller-owned object instance** (`obj`) and, on success, the same instance MUST be updated with **API-server-assigned fields/defaults** (e.g. `uid`, `resourceVersion`, defaulted fields).
+- They MUST NOT perform any other **Kubernetes API I/O** calls (`Get/List/Update/Patch/Delete`), MUST NOT call **DeepCopy**, and MUST NOT execute patches or make **patch ordering** / **patch type decision** decisions.
+- They MUST NOT write the **status subresource** as part of create (no `Status().Patch/Update`); any status write (publishing **report** and/or persisting **controller-owned state**) is a **separate request** done by **Reconcile methods**.
+- Everything they control (the create request payload) MUST be deterministic (no time/random/env-driven values; stable ordering where relevant).
+
+---
+
+## Definition
+
+A **CreateReconcileHelper** (“create helper”) is a **ReconcileHelper** that is:
+
+- **allowed to perform I/O**, and
+- creates exactly **one** Kubernetes object via the API, and
+- returns the created object in its final state (and optionally an error).
+
+Typical create helpers are used for child resources to encapsulate the mechanical create call and ensure the caller-visible object instance reflects server-assigned fields (e.g., `resourceVersion`, defaults).
+
+---
+
+## Naming
+
+- A **CreateReconcileHelper** name MUST start with `create` / `Create`.
+- **CreateReconcileHelpers** for Kubernetes **objects** MUST use the form: `create<Kind>` / `Create<Kind>`. `<Kind>` MUST either correspond to the Kubernetes **object** kind being created or be a short kind name that is already established in the codebase.
+  When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` MUST use the **short kind name** (see `controller-terminology.mdc`).
+  Examples:
+  - `createCM(...)` (or `createConfigMap(...)`)
+  - `createSVC(...)` (or `createService(...)`)
+  - `createEK(...)` (or `createExampleKind(...)`)
+- **CreateReconcileHelpers** names MUST NOT imply orchestration or existence checks (`ensureCreated`, `reconcileCreate`, `createIfNeeded`) — branching and policy belong to **Reconcile methods**.
+ +--- + +## Preferred signatures + +- For **CreateReconcileHelpers** (`create*`), the simplest signature from the variants below that preserves explicit dependencies and a single-API-call scope SHOULD be chosen. +- If additional signature variants are explicitly permitted elsewhere in this document, they MAY also be used. + +### Simple create +```go +func (r *Reconciler) createEK( + ctx context.Context, + obj *v1alpha1.ExampleKind, +) error +``` + +--- + +## Receivers + +- **CreateReconcileHelpers** MUST be methods on `Reconciler` (they perform I/O via controller-runtime client owned by `Reconciler`). + +--- + +## I/O boundaries + +**CreateReconcileHelpers** MAY do the following: + +- controller-runtime client usage to execute exactly **one** Kubernetes API call: `Create(...)`. + +**CreateReconcileHelpers** MUST NOT do any of the following: + +- Kubernetes API calls other than that single `Create(...)` (no `Get/List/Update/Patch/Delete`); +- `DeepCopy` (including `obj.DeepCopy()`, `runtime.Object.DeepCopyObject()`, etc.); +- executing patches (`Patch` / `Status().Patch`) or making any patch ordering / patch type decisions; +- performing any other I/O besides the single Kubernetes API request they own. + +**CreateReconcileHelpers** MUST NOT do “hidden I/O” either: + +- `time.Now()` / `time.Since(...)` (nondeterministic wall-clock reads); +- random number generation (`rand.*`); +- environment reads (`os.Getenv`, reading files); +- network calls of any kind **other than** the single Kubernetes API request they own. + +> Rationale: create helpers are mechanical wrappers around exactly one create operation; ordering, retries, and higher-level policy remain explicit in **Reconcile methods**. + +--- + +## Determinism contract + +A **CreateReconcileHelper** MUST be **deterministic** in everything it controls. + +In particular: +- The request payload it sends MUST be deterministic given explicit inputs (no random names, UUIDs, timestamps, or unstable ordering). +- See the common determinism contract in `controller-reconcile-helper.mdc` (ordering stability, no map iteration order reliance). +- **CreateReconcileHelpers** MUST NOT introduce “hidden I/O” (time, random, env, extra network calls) beyond the single Kubernetes API `Create(...)` request they own. + +> Practical reason: nondeterminism creates hard-to-debug drift and flaky tests; create should be a mechanical operation. + +--- + +## Read-only contract + +`create` / `Create` MUST treat all inputs except the created object as read-only: + +- it MUST NOT mutate any input objects other than the object being created; +- it MUST NOT mutate shared templates/defaults through aliasing (clone before editing); +- it MUST NOT perform in-place modifications through aliases to non-created-object data. + +See the common read-only contract in `controller-reconcile-helper.mdc` (especially the Go aliasing rule for `map` / `[]T`). + +--- + +## Patch-domain separation + +- A **CreateReconcileHelper** MUST perform exactly one API write: `Create(...)` for the **main resource**. +- It MUST NOT write the status subresource as part of creation: + - it MUST NOT issue `Status().Patch(...)` / `Status().Update(...)`; + - it MUST NOT rely on setting `.status` in the create request. +- If initial `.status` must be set (e.g., persisting **controller-owned state** and/or publishing an initial **report**), it MUST be done by **Reconcile methods** as a **separate** status write (separate request). 
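+
+A minimal sketch of the intended sequencing (hypothetical names; assuming a `report` computed earlier and a `flow.ReconcileFlow` `rf` in scope; in this codebase the status write may go through a dedicated PatchReconcileHelper instead of a raw `Status().Patch`):
+
+```go
+// inside a Reconcile method
+if err := r.createEK(ctx, obj); err != nil { // single Create(...); obj now carries uid/resourceVersion/defaults
+	return rf.Fail(err)
+}
+
+// separate request: persist initial controller-owned state / publish the initial report
+base := obj.DeepCopy() // DeepCopy is allowed here, in the Reconcile method
+applyStatusReport(obj, report) // ApplyReconcileHelper: in-memory status mutation only
+if err := r.client.Status().Patch(ctx, obj, client.MergeFrom(base)); err != nil {
+	return rf.Fail(err)
+}
+```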
+ +--- + +## Composition + +- A **CreateReconcileHelper** MUST perform exactly one API write (`Create(...)`) for exactly one object. +- A **CreateReconcileHelper** MAY rely on pure helpers (**ComputeReconcileHelpers** / **ApplyReconcileHelpers** / **EnsureReconcileHelpers**) and/or **ConstructionReconcileHelpers** to prepare the object **in-memory** before calling `Create(...)`, but it MUST NOT perform any additional API calls. +- If creating an object requires multiple API writes (e.g., create main resource and then write status), those writes MUST be composed in **Reconcile methods** as separate operations, not hidden inside the create helper. +- If multiple objects must be created (loops, groups, fan-out), that orchestration MUST live in **Reconcile methods**; create helpers must remain single-object. + +--- + +## Flow phase scopes and outcomes + +- **CreateReconcileHelpers** MUST NOT create a `reconcile/flow` **phase scope** — they should stay mechanical and short. +- **CreateReconcileHelpers** MUST return `error` and MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`). + - Any retry/requeue policy belongs to the calling **Reconcile method** (use `ReconcileFlow` there). + +--- + +## Error handling + +- See the common error handling rules in `controller-reconcile-helper.mdc`. +- A **CreateReconcileHelper** SHOULD be mechanically thin: if the single `Create(...)` call fails, return the error **without wrapping**. +- A **CreateReconcileHelper** MUST NOT enrich errors with additional context (including **object identity** such as `namespace/name`, UID, object key). + - Error enrichment (action + **object identity** + **phase**) is the calling **Reconcile method**’s responsibility. + +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing existence checks (`Get/List`) or any extra Kubernetes API calls: +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK) error { + // forbidden: extra API call + var existing v1alpha1.EK + if err := r.client.Get(ctx, client.ObjectKeyFromObject(obj), &existing); err == nil { + return nil // "already exists" decision belongs to Reconcile methods + } + + // forbidden: second API call in the same helper if create proceeds + if err := r.client.Create(ctx, obj); err != nil { + return err + } + return nil +} +``` + +❌ Performing more than one write (`Create` + `Update/Patch/Delete`, retries-as-extra-calls, fallback logic): +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK) error { + if err := r.client.Create(ctx, obj); err != nil { + // forbidden: "fallback" write makes it >1 API call + return r.client.Update(ctx, obj) + } + return nil +} +``` + +❌ Creating on a temporary object and dropping it (caller-owned `obj` is not updated with UID/RV/defaults): +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK) error { + tmp := &v1alpha1.EK{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: obj.Namespace, + Name: obj.Name, + }, + Spec: obj.Spec, + } + if err := r.client.Create(ctx, tmp); err != nil { + return err + } + + // obj is still stale: uid/resourceVersion/defaults are on tmp, not on obj + return nil +} +``` + +❌ Using `DeepCopy` in create helpers: +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK) error { + base := obj.DeepCopy() // forbidden: DeepCopy belongs to Reconcile methods, not create helpers + _ = base + return r.client.Create(ctx, obj) +} +``` + +❌ Writing status as part of create (or “relying on status in the 
create request”): +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK) error { + obj.Status.Phase = "Ready" // forbidden: status writes (report/controller-owned state) are a separate request + if err := r.client.Create(ctx, obj); err != nil { + return err + } + // forbidden: second write and status subresource write inside create helper + return r.client.Status().Update(ctx, obj) +} +``` + +❌ Executing patches inside create helpers: +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK, base *v1alpha1.EK) error { + // forbidden: patch execution belongs to PatchReconcileHelpers / Reconcile methods + if err := r.client.Create(ctx, obj); err != nil { + return err + } + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Creating multiple objects in a single create helper: +```go +func (r *Reconciler) createEKs(ctx context.Context, objs []*v1alpha1.EK) error { + for _, obj := range objs { + if err := r.client.Create(ctx, obj); err != nil { // forbidden: multiple API calls + return err + } + } + return nil +} +``` + +❌ Hidden I/O / nondeterministic request payload (time/random/env, nondeterministic ordering): +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK) error { + obj.Annotations["createdAt"] = time.Now().Format(time.RFC3339) // forbidden + obj.Labels["nonce"] = uuid.NewString() // forbidden + obj.Spec.Seed = rand.Int() // forbidden + return r.client.Create(ctx, obj) +} +``` + +❌ Using `GenerateName` / random naming for resources that must be stable in reconciliation: +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK) error { + obj.Name = "" + obj.GenerateName = "eon-" // anti-pattern: server adds a random suffix => nondeterministic identity + return r.client.Create(ctx, obj) +} +``` + +❌ Mutating shared templates/defaults through aliasing while preparing `obj`: +```go +func (r *Reconciler) createEK(ctx context.Context, obj *v1alpha1.EK, template *v1alpha1.EK) error { + // forbidden: template labels map is shared; mutating it mutates the template + labels := template.GetLabels() + labels["app"] = "eon" + obj.SetLabels(labels) + + return r.client.Create(ctx, obj) +} +``` diff --git a/.cursor/rules/controller-reconcile-helper-delete.mdc b/.cursor/rules/controller-reconcile-helper-delete.mdc new file mode 100644 index 000000000..df30c3b7c --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-delete.mdc @@ -0,0 +1,271 @@ +--- +description: Contracts for DeleteReconcileHelper (delete) functions: exactly one Kubernetes API Delete call for one object, deterministic handling, and no object/status mutation. Apply when writing delete* helpers in reconciler*.go, and when deciding deletion semantics and ordering. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# DeleteReconcileHelper + +This document defines naming and contracts for **DeleteReconcileHelper** functions/methods. + +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. 
+ +- **DeleteReconcileHelpers** (`delete`) are **single-call I/O helpers**: they perform exactly **one** **Kubernetes API I/O** write — `Delete(...)` — for exactly one **object** (or treat NotFound as “already absent”, depending on policy). +- They MUST NOT perform any other **Kubernetes API I/O** calls (`Get/List/Create/Update/Patch`), MUST NOT call **DeepCopy**, and MUST NOT execute patches or make **patch ordering** / **patch type decision** decisions. +- They MUST NOT mutate the **object** as part of deletion (no “marking”, no finalizer edits, no status writes — no publishing **report** and no persisting **controller-owned state**); any prerequisite mutations (e.g., finalizer removal) are done by **Reconcile methods** via a **separate** ensure/apply + patch step **before** calling delete. +- Everything they control MUST be deterministic (no time/random/env-driven behavior; consistent NotFound handling). + +--- + +## Definition + +A **DeleteReconcileHelper** (“delete helper”) is a **ReconcileHelper** that is: + +- **allowed to perform I/O**, and +- deletes exactly **one** Kubernetes object via the API (or ensures it is absent), and +- returns the delete outcome (and optionally an error). + +Typical delete helpers encapsulate the mechanical delete call (including “already gone” handling) for child resources, while **Reconcile methods** decide ordering relative to other actions. + +--- + +## Naming + +- A **DeleteReconcileHelper** name MUST start with `delete` / `Delete`. +- **DeleteReconcileHelpers** for Kubernetes **objects** MUST use the form: `delete` / `Delete`. `` MUST either correspond to the Kubernetes **object** kind being deleted or be a short kind name that is already established in the codebase. + When `` refers to a kind defined in this repository’s API (types under `api/v*/`), `` MUST use the **short kind name** (see `controller-terminology.mdc`). + Examples: + - `deleteCM(...)` (or `deleteConfigMap(...)`) + - `deleteSVC(...)` (or `deleteService(...)`) + - `deleteEK(...)` (or `deleteExampleKind(...)`) +- **DeleteReconcileHelpers** names MUST NOT imply orchestration or multi-step cleanup (`reconcileDelete`, `deleteAll`, `deleteAndWait`) — ordering and lifecycle policy belong to **Reconcile methods**. + +--- + +## Preferred signatures + +- For **DeleteReconcileHelpers** (`delete*`), the simplest signature from the variants below that preserves explicit dependencies and a single-API-call scope SHOULD be chosen. +- If additional signature variants are explicitly permitted elsewhere in this document, they MAY also be used. + +### Simple delete +```go +func (r *Reconciler) deleteEK( + ctx context.Context, + obj *v1alpha1.ExampleKind, +) error +``` + +--- + +## Receivers + +- **DeleteReconcileHelpers** MUST be methods on `Reconciler` (they perform I/O via controller-runtime client owned by `Reconciler`). + +--- + +## I/O boundaries + +**DeleteReconcileHelpers** MAY do the following: + +- controller-runtime client usage to execute exactly **one** Kubernetes API call: `Delete(...)`. + +**DeleteReconcileHelpers** MUST NOT do any of the following: + +- Kubernetes API calls other than that single `Delete(...)` (no `Get/List/Create/Update/Patch`); +- `DeepCopy` (including `obj.DeepCopy()`, `runtime.Object.DeepCopyObject()`, etc.); +- executing patches (`Patch` / `Status().Patch`) or making any patch ordering / patch type decisions; +- performing any other I/O besides the single Kubernetes API request they own. 
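+
+Illustrative only — a minimal sketch of a delete helper that stays within these boundaries, reusing the placeholder names from the preferred signatures above and treating NotFound as "already absent" (one of the allowed deterministic policies):
+```go
+// deleteEK issues the single Delete request for the given ExampleKind.
+// NotFound is converted to "already absent" and is not reported as an error.
+func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.ExampleKind) error {
+    if err := r.client.Delete(ctx, obj); err != nil && !apierrors.IsNotFound(err) {
+        return err
+    }
+    return nil
+}
+```
+
+If the calling **Reconcile method** needs to distinguish "was present" from "already absent", propagate NotFound as-is instead; either way the chosen policy should be deterministic and stated in the helper's GoDoc.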
+ +**DeleteReconcileHelpers** MUST NOT do “hidden I/O” either: + +- `time.Now()` / `time.Since(...)` (nondeterministic wall-clock reads); +- random number generation (`rand.*`); +- environment reads (`os.Getenv`, reading files); +- network calls of any kind **other than** the single Kubernetes API request they own. + +> Rationale: delete helpers are mechanical wrappers around exactly one delete operation; ordering and lifecycle policy remain explicit in **Reconcile methods**. + +--- + +## Determinism contract + +A **DeleteReconcileHelper** MUST be **deterministic** in everything it controls. + +In particular: +- It MUST issue a single, mechanical delete operation with behavior determined only by explicit inputs. +- It MUST NOT introduce “hidden I/O” (time, random, env, extra network calls) beyond the single Kubernetes API `Delete(...)` request they own. +- It MUST NOT contain business-logic branching that depends on nondeterministic inputs. +- See the common determinism contract in `controller-reconcile-helper.mdc` (ordering stability, no map iteration order reliance). + +> Practical reason: delete should be a predictable mechanical operation; nondeterminism leads to flaky cleanup paths. + +--- + +## Read-only contract + +`delete` / `Delete` MUST treat inputs as read-only: + +- it MUST NOT mutate input objects (including the object being deleted); +- it MUST NOT perform in-place modifications through aliases. + +See the common read-only contract in `controller-reconcile-helper.mdc` (especially the Go aliasing rule for `map` / `[]T`). + +--- + +## Patch-domain separation + +- A **DeleteReconcileHelper** MUST perform exactly one API write: `Delete(...)`. +- It MUST NOT modify either patch domain (main or status) as part of deletion: + - no “prepare for delete” patches (e.g., finalizer removal); + - no status updates/patches. +- If deletion requires preliminary changes (e.g., removing a finalizer), those changes MUST be performed by **Reconcile methods** via separate ensure/apply + patch steps **before** calling the delete helper. + +--- + +## Composition + +- A **DeleteReconcileHelper** MUST perform exactly one API write (`Delete(...)`) for exactly one object. +- Any prerequisite mutations (e.g., removing finalizers) MUST be composed in **Reconcile methods** (ensure/apply + patch) and MUST NOT be hidden inside the delete helper. +- If multiple objects must be deleted (loops, groups, fan-out), that orchestration MUST live in **Reconcile methods**; delete helpers must remain single-object. +- A **DeleteReconcileHelper** MUST NOT call other **ReconcileHelpers**. + +--- + +## Flow phase scopes and outcomes + +- **DeleteReconcileHelpers** MUST NOT create a `reconcile/flow` **phase scope** — they should stay mechanical and short. +- **DeleteReconcileHelpers** MUST return `error` and MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`). + - Any retry/requeue policy belongs to the calling **Reconcile method** (use `ReconcileFlow` there). + +--- + +## Error handling + +- See the common error handling rules in `controller-reconcile-helper.mdc`. +- A **DeleteReconcileHelper** SHOULD be mechanically thin: if the single `Delete(...)` call fails, return the error **without wrapping** (or treat NotFound per the chosen deterministic policy). +- A **DeleteReconcileHelper** MUST NOT enrich errors with additional context (including **object identity** such as `namespace/name`, UID, object key). 
+ - Error enrichment (action + **object identity** + **phase**) is the calling **Reconcile method**’s responsibility. + +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing existence checks (`Get/List`) or any extra Kubernetes API calls: +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + // forbidden: extra API call + var existing v1alpha1.EK + if err := r.client.Get(ctx, client.ObjectKeyFromObject(obj), &existing); err != nil { + return err + } + + // forbidden: second API call in the same helper + if err := r.client.Delete(ctx, &existing); err != nil { + return err + } + return nil +} +``` + +❌ Performing more than one write (`Delete` + `Patch/Update/Create`, retries-as-extra-calls, fallback logic): +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + if err := r.client.Delete(ctx, obj); err != nil { + // forbidden: "fallback" write makes it >1 API call + // also forbidden: DeepCopy in delete helper (see I/O boundaries) + return r.client.Patch(ctx, obj, client.MergeFrom(obj.DeepCopy())) + } + return nil +} +``` + +❌ Mutating the object as part of deletion (“marking”, finalizer edits, status writes): +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + obj.Finalizers = nil // forbidden: mutation belongs to ensure/apply + patch + obj.Status.Phase = "Deleting" // forbidden: status writes (report/controller-owned state) belong elsewhere + return r.client.Delete(ctx, obj) +} +``` + +❌ Trying to “prepare for delete” inside the delete helper (remove finalizer + delete): +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + // forbidden: any patch/update belongs to Reconcile methods and is a separate patch domain write + base := obj.DeepCopy() // also forbidden: DeepCopy in delete helper + obj.Finalizers = []string{} // forbidden: mutation + if err := r.client.Patch(ctx, obj, client.MergeFrom(base)); err != nil { // forbidden: extra write + return err + } + return r.client.Delete(ctx, obj) +} +``` + +❌ Calling `DeepCopy` inside delete helpers: +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + _ = obj.DeepCopy() // forbidden: DeepCopy belongs to Reconcile methods + return r.client.Delete(ctx, obj) +} +``` + +❌ Deleting multiple objects in a single delete helper: +```go +func (r *Reconciler) deleteEKs(ctx context.Context, objs []*v1alpha1.EK) error { + for _, obj := range objs { + if err := r.client.Delete(ctx, obj); err != nil { // forbidden: multiple API calls + return err + } + } + return nil +} +``` + +❌ Hidden I/O / nondeterminism (time/random/env/extra network calls): +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + if os.Getenv("DELETE_FAST") == "1" { // forbidden: env read in helper + // ... 
+ } + _ = time.Now() // forbidden + return r.client.Delete(ctx, obj) +} +``` + +❌ Using `DeleteAllOf` or broad deletes from a delete helper: +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + // forbidden: not “exactly one object delete” + return r.client.DeleteAllOf(ctx, &v1alpha1.EK{}, client.InNamespace(obj.Namespace)) +} +``` + +❌ Doing “wait until gone” polling inside the delete helper: +```go +func (r *Reconciler) deleteEK(ctx context.Context, obj *v1alpha1.EK) error { + if err := r.client.Delete(ctx, obj); err != nil { + return err + } + + // forbidden: extra API calls / orchestration belongs to Reconcile methods + for { + var cur v1alpha1.EK + err := r.client.Get(ctx, client.ObjectKeyFromObject(obj), &cur) + if apierrors.IsNotFound(err) { + return nil + } + if err != nil { + return err + } + time.Sleep(100 * time.Millisecond) // forbidden: time-based hidden I/O + } +} +``` diff --git a/.cursor/rules/controller-reconcile-helper-ensure.mdc b/.cursor/rules/controller-reconcile-helper-ensure.mdc new file mode 100644 index 000000000..6a1248711 --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-ensure.mdc @@ -0,0 +1,436 @@ +--- +description: Contracts for EnsureReconcileHelper (ensure*) functions: pure/deterministic non-I/O in-place reconciliation for one patch domain with Outcome change/optimistic-lock reporting. Apply when writing ensure* helpers in reconciler*.go, and when deciding how to structure imperative in-place reconciliation steps. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# EnsureReconcileHelper + +This document defines naming and contracts for **EnsureReconcileHelper** functions/methods. + +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **EnsureReconcileHelpers** (`ensure*`) are **pure**, **deterministic**, strictly **non-I/O** in-place steps for **exactly one** **patch domain** (**main patch domain** or **status patch domain**) that compute/enforce the per-step **target** (and/or status **report**) and immediately bring `obj` to it. +- They mutate the caller-owned `obj` to the computed **target** / **report** and return **EnsureOutcome** (in code: `flow.EnsureOutcome`) that encodes: + - whether `obj` was changed, + - whether the subsequent save requires **Optimistic locking**, + - and whether an error occurred. +- **EnsureReconcileHelpers MUST always start an ensure phase scope** (`ef := flow.BeginEnsure(...)` + `defer ef.OnEnd(&outcome)`). + - Therefore, every ensure helper MUST accept `ctx context.Context` and MUST use a named return `outcome flow.EnsureOutcome`. +- **EnsureReconcileHelpers** are the **single source of truth** for **Change reporting** and **optimistic lock requirement** for their **patch domain**. +- **Reconcile methods** MUST implement patch execution according to **EnsureOutcome** (in code: `flow.EnsureOutcome`) (`DidChange` / `OptimisticLockRequired`) and MUST NOT override these decisions with ad-hoc logic. 
+- They MUST NOT perform **Kubernetes API I/O**, call **DeepCopy**, or execute patches / make **patch ordering** decisions. +- If both **main patch domain** and **status patch domain** need changes, split into **two** **EnsureReconcileHelpers** (one per **patch domain**) and patch them separately in **Reconcile methods**. + +--- + +## Definition + +An **EnsureReconcileHelper** (“ensure helper”) is a **ReconcileHelper** that is: + +- **strictly non-I/O**, and +- computes/enforces the per-step **target** (and/or status **report**) and immediately performs in-place mutations on the object to bring it to that state for **exactly one patch domain** (**main resource** or **status subresource**), and + returns a `flow.EnsureOutcome` that reports whether it changed the object, whether optimistic locking is required for the save operation (if any), and whether an error occurred. + +Typical ensure helpers implement step-by-step in-place reconciliation and return `flow.EnsureOutcome` (e.g., via `ef.Ok().ReportChangedIf(...)`, `ef.Err(err)`, `flow.MergeEnsures(...)`, or chainable `outcome.Merge(other)`) to drive patching decisions in **Reconcile methods**. + +Notes on `.status` (role vs location): +- A status-domain ensure helper may write both: + - **controller-owned state** (persisted decisions/memory derived from **target**), and/or + - the published **report** (conditions/progress/selected observations). +- The published **report** MAY directly reuse selected **actual** observations (including being the same value/type as an **actual** snapshot). Persisting such observations into `.status` is OK and they remain **report/observations** (output-only). +- Status-domain ensure helpers MUST NOT treat existing **report/observations** as “intent/config inputs” for new **target** decisions. + - However, they MAY use existing **report/observations** (including previously published report fields in `.status`) as observation/constraint inputs (i.e., as a cached/stale form of **actual**) when deriving a new **target**. + - If prior decisions must be stable across reconciles, that input MUST come from explicit **controller-owned state** fields (by design), not from arbitrary report fields. + +--- + +## Naming + +- An **EnsureReconcileHelper** name MUST start with `ensure` / `Ensure`. +- **EnsureReconcileHelpers** MUST be domain-explicit in the name when ambiguity is possible (ambiguity is possible when the ensured invariant/property name refers to a field/group that exists in both `.spec` (**main patch domain**) and `.status` (**status patch domain**) of the same **object**): + - `ensureMain*` / `EnsureMain*` (**main patch domain**) + - `ensureStatus*` / `EnsureStatus*` (**status patch domain**) +- **EnsureReconcileHelpers** SHOULD NOT include `Main` / `Status` in the name when there is no such ambiguity. +- **EnsureReconcileHelpers** names SHOULD name the invariant or property being ensured: + - `ensureFinalizer(...)` + - `ensureOwnerRefs(...)` + - `ensureLabels(...)` + - `ensureStatusConditions(...)` (conditions are typically part of the published **report**) +- **EnsureReconcileHelpers** names MUST NOT include `Desired` / `Actual` / `Intended` / `Target` / `Report` unless the applied “thing” name in the **object** API includes those words. + - Exception: helpers that explicitly build/publish a status **report** artifact MAY end with `Report` when it improves clarity (e.g., `ensureStatusReport`, `ensureConditionsReport`). 
+- **EnsureReconcileHelpers** names MUST NOT sound like orchestration (`ensureAll`, `ensureEverything`, `ensureAndPatch`) — ensure helpers do not execute **I/O**; they only mutate and return **EnsureOutcome** (in code: `flow.EnsureOutcome`). + +--- + +## Preferred signatures + +- For **EnsureReconcileHelpers** (`ensure*`), the simplest signature from the variants below that preserves explicit dependencies and flow semantics SHOULD be chosen. +- If additional signature variants are explicitly permitted elsewhere in this document, they MAY also be used. + +### Ensure (always scoped) +```go +func ensureFoo( + ctx context.Context, + obj *v1alpha1.Foo, +) (outcome flow.EnsureOutcome) +``` + +Or, if an ensure helper needs data from `Reconciler`: +```go +func (r *Reconciler) ensureFoo( + ctx context.Context, + obj *v1alpha1.Foo, +) (outcome flow.EnsureOutcome) +``` + +### Dependent ensure +Dependencies MUST be explicit and come **after `obj`**: +```go +func ensureBar( + ctx context.Context, + obj *v1alpha1.Foo, + targetFoo TargetFoo, +) (outcome flow.EnsureOutcome) +``` + +Or, if an ensure helper needs data from `Reconciler`: +```go +func (r *Reconciler) ensureBar( + ctx context.Context, + obj *v1alpha1.Foo, + targetFoo TargetFoo, +) (outcome flow.EnsureOutcome) +``` + +--- + +## Receivers + +- **EnsureReconcileHelpers** SHOULD be plain functions when they do not need any data from `Reconciler`. +- If an **EnsureReconcileHelper** needs data from `Reconciler`, it MUST be a method on `Reconciler`. + +--- + +## I/O boundaries + +**EnsureReconcileHelpers** MUST NOT do any of the following: + +- controller-runtime client usage (`client.Client`, `r.client`, etc.); +- Kubernetes API calls (`Get/List/Create/Update/Patch/Delete`); +- `DeepCopy` (including `obj.DeepCopy()`, `runtime.Object.DeepCopyObject()`, etc.); +- executing patches (`Patch` / `Status().Patch`) or making any patch ordering decisions; +- creating/updating/deleting Kubernetes objects in the API server in any form. + +**EnsureReconcileHelpers** MUST NOT do “hidden I/O” either: + +- `time.Now()` / `time.Since(...)` (nondeterministic wall-clock reads) (except setting `metav1.Condition.LastTransitionTime`, typically indirectly via `obju.SetStatusCondition`); +- random number generation (`rand.*`); +- environment reads (`os.Getenv`, reading files); +- network calls of any kind. + +**EnsureReconcileHelpers** MAY request **Optimistic locking** by encoding it in the returned `flow.EnsureOutcome`, but they MUST NOT perform the save operation themselves. + +> Rationale: ensure helpers should be **deterministic** and unit-testable; they describe the in-memory mutations required to reach the chosen **target** and/or publish the status **report** (and any save-mode requirements), while the actual persistence belongs to **Reconcile methods**. + +--- + +## Determinism contract + +An **EnsureReconcileHelper** MUST be **deterministic** given its explicit inputs and allowed in-place mutations. + +See the common determinism contract in `controller-reconcile-helper.mdc`. + +In particular: +- **EnsureReconcileHelpers** MAY use extracted computation/caching components owned by the reconciler (e.g. “world view” / “planner” / “topology scorer”, caches), as described in `controller-file-structure.mdc` (“Additional components”), as long as they do not violate the I/O boundaries above. + - Note: cache population is a side effect and an additional source of state; therefore, the helper is deterministic only relative to that state. 
For the same explicit inputs and the same state of these components, the result MUST be the same. +- Returned `flow.EnsureOutcome` flags (changed / optimisticLock / error) MUST be stable for the same inputs and object state. + +> Practical reason: nondeterminism creates patch churn and flaky tests. + +--- + +## Read-only contract + +`ensure*` / `Ensure*` MUST treat all inputs except the intended in-place mutation on `obj` as read-only: + +- it MUST NOT mutate any input other than `obj` (including computed dependencies passed after `obj`, templates, shared defaults, global variables); +- it MUST mutate only the intended patch domain on `obj` (main resource **or** status subresource), treating the other domain as read-only; +- it MUST NOT perform in-place modifications through aliases to non-`obj` data. + +Note: reconciler-owned deterministic components (e.g. caches) are allowed mutation targets in `ensure*` helpers **only** under the constraints defined above (non-I/O, explicit dependency, deterministic relative to the component state). +If an `ensure*` helper mutates such a component, its GoDoc comment MUST explicitly state that this helper mutates reconciler-owned deterministic state and why this is acceptable (rare-case exception). + +See the common read-only contract in `controller-reconcile-helper.mdc` (especially the Go aliasing rule for `map` / `[]T`). + +--- + +## Patch-domain separation + +- `ensure*` / `Ensure*` MUST mutate `obj` in-place for **exactly one** patch domain: + - main resource (**metadata + spec + non-status fields**), **or** + - status subresource (`.status`). +- An **EnsureReconcileHelper** MUST NOT mutate both domains in the same function. +- If you need “ensure” logic for both domains, you MUST split it into **two** ensure helpers and call them separately from **Reconcile methods** (with separate patch requests). + +✅ Separate ensure helpers (GOOD) +```go +func ensureMainFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) +func ensureStatusFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) +``` + +❌ Mixed ensure (BAD) +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + obj.Spec.Replicas = 3 // main domain + obj.Status.Phase = "Reconciling" // status domain + // forbidden: ensure must touch exactly one patch domain + return ef.Ok().ReportChanged() +} +``` + +--- + +## Composition + +- An **EnsureReconcileHelper** MAY implement multiple related “ensure” steps in one pass **within a single** **patch domain**. + - If these steps represent one conceptual invariant set, they SHOULD remain in one ensure helper. + - If steps are distinguishable and reused independently, they SHOULD be extracted into smaller ensure helpers. +- An **EnsureReconcileHelper** MAY call other ensure helpers (compose “sub-ensures”). +- An **EnsureReconcileHelper** MAY call **ConstructionReconcileHelpers** (`new*`, `build*`, `make*`, `compose*`) as pure building blocks, as long as it stays strictly **non-I/O** and **deterministic**. +- An **EnsureReconcileHelper** MAY depend on outputs of previous compute helpers: + - the dependency MUST be explicit in the signature as additional args **after `obj`**. 
+- If an **EnsureReconcileHelper** composes multiple sub-ensures, it MUST combine their results deterministically: + - “changed” information MUST be preserved (no dropping); + - optimistic-locking requirement MUST be preserved; + - errors MUST be preserved (no dropping), using a deterministic aggregation strategy (e.g., `flow.MergeEnsures(...)` or chainable `outcome.Merge(other)`). + +--- + +## Ensure phases and **EnsureOutcome** + +- **Every** **EnsureReconcileHelper** MUST create an ensure phase scope (`flow.BeginEnsure` + deferred `ef.OnEnd(&outcome)`). + - The phase scope MUST cover the whole function (exactly one scope per function). + - Phase scopes MUST NOT be started inside loops. + - Scope placement rules are defined in `controller-reconciliation-flow.mdc`. +- Therefore, **EnsureReconcileHelpers** MUST accept `ctx context.Context` and MUST use a named return `outcome flow.EnsureOutcome`. +- **EnsureReconcileHelpers** MUST return **EnsureOutcome** (in code: `flow.EnsureOutcome`) using: + - `EnsureFlow` constructors (`Ok`, `Err`, `Errf`) and `EnsureFlow.Merge(...)` (when aggregating), + - and `EnsureOutcome` helpers (`ReportChanged*`, `RequireOptimisticLock`, `Enrichf`). + +### Recommended pattern: change + optimistic-lock reporting (SHOULD) + +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + changed := false + needLock := false + + // ... deterministically mutate obj ... + // use ef.Ctx() for context if needed + + outcome = ef.Ok().ReportChangedIf(changed) + if needLock { + outcome = outcome.RequireOptimisticLock() + } + return outcome +} +``` + +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + changed := false + + // ... deterministically mutate obj ... + // use ef.Ctx() for context if needed + + return ef.Ok(). + ReportChangedIf(changed). + RequireOptimisticLock() +} +``` + +--- + +## Error handling + +- See the common error handling rules in `controller-reconcile-helper.mdc`. +- **EnsureReconcileHelpers** SHOULD generally return errors as-is (e.g., via `ef.Err(err)`). + + **Allowed (rare)**: when propagating a **non-local** error (e.g., from validation utilities or injected pure components) and additional context is necessary to **disambiguate multiple different error sources** within the same calling **Reconcile method**, an **EnsureReconcileHelper** MAY wrap with small, local context: + - prefer `ef.Err(err).Enrichf("")` (or `ef.Errf(...)` for local validation errors) + - keep `` specific to the helper responsibility (e.g., `ensureOwnerRefs`, `ensureStatusConditions`, `normalizeSpec`) + + **Forbidden (MUST NOT)**: + - do not add **object identity** (e.g. 
`namespace/name`, UID, object key) + - do not add generic “outside world” context (that belongs to the **Reconcile method**) + +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing any Kubernetes API I/O (directly or indirectly): +```go +func (r *Reconciler) ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + // forbidden: I/O in ensure + var cm corev1.ConfigMap + key := client.ObjectKey{Namespace: obj.Namespace, Name: "some-cm"} + if err := r.client.Get(ctx, key, &cm); err != nil { + return ef.Err(err) + } + return ef.Ok() +} +``` + +❌ Executing patches / updates / deletes (or hiding them behind helpers): +```go +func (r *Reconciler) ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + // forbidden: patch execution belongs to Reconcile methods / PatchReconcileHelpers + base := obj.DeepCopy() // also forbidden: DeepCopy in ensure + obj.Spec.Replicas = 3 + _ = r.client.Patch(ctx, obj, client.MergeFrom(base)) + return ef.Ok().ReportChanged() +} +``` + +❌ Calling `DeepCopy` inside ensure helpers: +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + _ = obj.DeepCopy() // forbidden: DeepCopy belongs to Reconcile methods + return ef.Ok() +} +``` + +❌ Mutating both patch domains (main + status) in one ensure helper: +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + obj.Spec.Replicas = 3 // main domain + obj.Status.Phase = "Reconciling" // status domain (typically published **report**) + // forbidden: ensure must touch exactly one patch domain + return ef.Ok().ReportChanged() +} +``` + +❌ Returning "changed" inconsistently (mutated object but outcome does not report it): +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + obj.Spec.Replicas = 3 + // forbidden: mutation happened, but outcome does not report change + return ef.Ok() +} +``` + +❌ Reporting "changed" without actually changing the object: +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + // forbidden: reports change but did not mutate anything + return ef.Ok().ReportChanged() +} +``` + +❌ Requesting optimistic locking "sometimes" without determinism (same inputs -> different outcome): +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + if rand.Int()%2 == 0 { // forbidden: nondeterministic + obj.Spec.Replicas = 3 + return ef.Ok().ReportChanged().RequireOptimisticLock() + } + obj.Spec.Replicas = 3 + return ef.Ok().ReportChanged() +} +``` + +❌ Hidden I/O / nondeterminism (time/random/env/network): +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + _ = time.Now() // forbidden (except condition timestamps via obju) + _ = rand.Int() // forbidden + _ = os.Getenv("FLAG") // forbidden + return ef.Ok() +} +``` + +❌ Depending on map iteration order when building ordered slices (patch churn): +```go +func 
ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + out := make([]string, 0, len(obj.Spec.Flags)) + for k := range obj.Spec.Flags { // map iteration order is random + out = append(out, k) + } + // missing sort => nondeterministic object state + obj.Spec.FlagKeys = out + return ef.Ok().ReportChanged() +} +``` + +❌ Mutating shared templates/defaults through aliasing: +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo, template *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + // forbidden: template labels map is shared; mutating it mutates the template + labels := template.GetLabels() + labels["owned"] = "true" + obj.SetLabels(labels) + return ef.Ok().ReportChanged() +} +``` + +❌ Manual metadata/conditions manipulation when `objutilv1` (`obju`) must be used: +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + // forbidden in this codebase: do not open-code label/finalizer/condition edits + if obj.Labels == nil { + obj.Labels = map[string]string{} + } + obj.Labels["a"] = "b" + return ef.Ok().ReportChanged() +} +``` diff --git a/.cursor/rules/controller-reconcile-helper-get.mdc b/.cursor/rules/controller-reconcile-helper-get.mdc new file mode 100644 index 000000000..59ed31565 --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-get.mdc @@ -0,0 +1,302 @@ +--- +description: Contracts for GetReconcileHelper (get*) functions: at most one Kubernetes API read (Get or List), deterministic ordering, and no Outcome/phases. Apply when writing get* helpers in reconciler*.go, and when deciding what logic is allowed in read helpers. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# GetReconcileHelper + +This document defines naming and contracts for **GetReconcileHelper** functions/methods. + +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **GetReconcileHelpers** (`get*`) are **single-call I/O helper categories** for reads: they perform **at most one** **Kubernetes API I/O** read call (`Get(...)` **or** `List(...)`) via the controller-runtime client. +- They are **mechanical** read wrappers: + - MUST NOT perform any **Kubernetes API I/O** writes (`Create/Update/Patch/Delete`, including `Status().Patch/Update`), + - MUST NOT call **DeepCopy**, + - MUST NOT execute patches or make **Patch ordering** decisions. +- They MAY implement deterministic, clearly documented “optional” semantics (for example, returning `(nil, nil)` when the object is not found). +- If they return an ordered slice and the order is meaningful to callers, it MUST be **deterministic** (explicit sort with a tie-breaker). +- They MUST NOT create a **phase scope** and MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`). 
+ - Any reconcile control flow decisions (done/requeue/error) belong to the calling **Reconcile method**. + +--- + +## Definition + +A **GetReconcileHelper** (“get helper”) is a **ReconcileHelper** that is: + +- **allowed to perform I/O**, and +- performs **at most one** controller-runtime client read call: + - `Get(ctx, key, obj)` **or** + - `List(ctx, list, opts...)`, +- and returns the fetched object(s) (or an empty/absent result) plus an optional error. + +Typical get helpers: +- fetch an object by identity (name/namespace) for use as **intent inputs** or **observations/constraints**, +- list objects relevant to the current **Reconcile method** step (often via an index), +- optionally post-process results in-memory (filter/sort) in a **deterministic** way. + +--- + +## Naming + +- A **GetReconcileHelper** name MUST start with `get` / `Get`. +- Get helpers SHOULD communicate which read call they wrap via the name: + - Single object fetch (`Get(...)`): `get` / `get`. + - Multi-object fetch (`List(...)`): `get` / `getList` / `get`. +- When the `` part refers to a kind defined in this repository’s API (types under `api/v*/`), `` MUST use the **short kind name** (see `controller-terminology.mdc`). +- If the helper guarantees ordering, the name MUST include an ordering signal: + - `getSorted*`, `getOrdered*`, `getFIFO*`, or an equivalent explicit term. +- If ordering is **not** guaranteed, the helper MUST NOT imply ordering in its name. + - If callers must not rely on order, the helper’s GoDoc MUST state that the returned slice is unordered. +- A get helper that treats “not found” as a non-error MUST document that behavior in GoDoc. + - If the output shape does not make the “not found” case obvious, the name SHOULD include an explicit signal (for example, `Optional`, `Maybe`, `OrNil`). + +Get helpers MUST NOT imply orchestration or policy: +- MUST NOT use names like `ensure*`, `reconcile*`, `getOrCreate*`, `getAndPatch*`, `getWithRetry*`. +- Any higher-level sequencing belongs to **Reconcile method** code. + +--- + +## Preferred signatures + +- For **GetReconcileHelpers** (`get*`), choose the simplest signature that keeps dependencies explicit and makes “optional” semantics unambiguous. + +### Single object (optional by NotFound) + +```go +func (r *Reconciler) getEK( + ctx context.Context, + key client.ObjectKey, +) (*v1alpha1.ExampleKind, error) +``` + +Recommended “optional by NotFound” rule for this shape: +- if `Get(...)` returns NotFound, return `(nil, nil)`. + +### Single object (required by NotFound) + +If NotFound is an error at this call site, either: +- handle NotFound in the **Reconcile method**, or +- use an explicit required variant name: + +```go +func (r *Reconciler) getRequiredEK( + ctx context.Context, + key client.ObjectKey, +) (*v1alpha1.ExampleKind, error) +``` + +### List (unordered) + +```go +func (r *Reconciler) getEKs( + ctx context.Context, + opts ...client.ListOption, +) ([]v1alpha1.ExampleKind, error) +``` + +If no objects match, return `([]v1alpha1.ExampleKind{}, nil)` (empty slice, not `nil`) SHOULD be preferred for ergonomics. + +### List (ordered) + +```go +func (r *Reconciler) getSortedEKs( + ctx context.Context, + opts ...client.ListOption, +) ([]v1alpha1.ExampleKind, error) +``` + +--- + +## Receivers + +- **GetReconcileHelpers** MUST be methods on `Reconciler` (they perform **Kubernetes API I/O** via the controller-runtime client owned by `Reconciler`). 
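+
+Illustrative only — a minimal sketch of the "optional by NotFound" shape from the preferred signatures above (placeholder names; exactly one `Get(...)`, NotFound converted to absence, behavior stated in GoDoc):
+```go
+// getEK fetches the ExampleKind identified by key.
+// If the object does not exist, it returns (nil, nil) ("optional by NotFound").
+func (r *Reconciler) getEK(ctx context.Context, key client.ObjectKey) (*v1alpha1.ExampleKind, error) {
+    var obj v1alpha1.ExampleKind
+    if err := r.client.Get(ctx, key, &obj); err != nil {
+        if apierrors.IsNotFound(err) {
+            return nil, nil
+        }
+        return nil, err
+    }
+    return &obj, nil
+}
+```
+
+A `getRequiredEK` variant would differ only in propagating the NotFound error unchanged.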
+ +--- + +## I/O boundaries + +**GetReconcileHelpers** MAY do the following: + +- controller-runtime client usage to execute **at most one** **Kubernetes API I/O** read call: + - `Get(...)`, or + - `List(...)`. + +**GetReconcileHelpers** MUST NOT do any of the following: + +- any **Kubernetes API I/O** writes: + - `Create/Update/Patch/Delete`, + - `Status().Patch(...)` / `Status().Update(...)`, + - `DeleteAllOf(...)`, + - watches/sources registration (that belongs to **`controller.go`**); +- any additional **Kubernetes API I/O** read calls beyond the single read they own (no second `Get`/`List`); +- **DeepCopy** (including `obj.DeepCopy()` or `runtime.Object.DeepCopyObject()`), because **DeepCopy** for **`base`** is owned by **Reconcile method** code; +- executing patches or making **Patch ordering** / **patch type decision** decisions; +- any other external **I/O**. + +**GetReconcileHelpers** MUST NOT do **Hidden I/O** either: + +- `time.Now()` / `time.Since(...)`, +- random number generation (`rand.*`), +- environment reads (`os.Getenv`, reading files), +- network calls of any kind other than the single Kubernetes read they own. + +--- + +## Determinism contract + +A **GetReconcileHelper** MUST be **deterministic** in everything it controls. + +In particular: + +- Inputs to the read call (key / list options) MUST be derived only from explicit inputs (no **Hidden I/O**). +- If the helper returns a slice whose order is meaningful, it MUST enforce **stable ordering**: + - sort explicitly, and + - include a deterministic tie-breaker when the primary sort key may collide. + +Recommended tie-breakers: +- for namespaced objects: `(namespace, name)`, +- for cluster-scoped objects: `name`. + +If the helper returns an unordered slice: +- its GoDoc MUST state the order is unspecified, and +- callers MUST treat the result as a set (do not rely on ordering). + +--- + +## Read-only contract + +`get*` / `Get*` MUST treat all inputs as **read-only inputs**: + +- it MUST NOT mutate input values (including filters/options passed in, or caller-owned templates); +- it MUST NOT perform in-place modifications through **Aliasing**. + +If a helper needs to normalize/transform a `map` / `[]T` derived from an input option structure, it MUST **Clone** first. + +--- + +## Composition + +- A **GetReconcileHelper** MUST perform **at most one** controller-runtime client read call (`Get` **or** `List`). +- A **GetReconcileHelper** MUST NOT call any other **ReconcileHelper** methods/functions (from any **Helper categories**), + because that would hide additional logic and policy behind a read wrapper. +- A **GetReconcileHelper** MAY do small, local, **deterministic** in-memory post-processing of the fetched result + (for example, filtering and/or sorting), but that post-processing MUST be implemented inline in the get helper + (no calls to other **ReconcileHelper** helpers). + +If multiple reads are needed: +- they MUST be expressed explicitly in the calling **Reconcile method** as multiple separate steps, or +- split into multiple **GetReconcileHelper** calls from the **Reconcile method** (one call per helper). + +--- + +## Flow phase scopes and outcomes + +- **GetReconcileHelpers** MUST NOT create a **phase scope**. +- **GetReconcileHelpers** MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`). + +> Rationale: get helpers do not mutate a **patch domain**; they only read. 
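+
+Illustrative only — a sketch that puts the contracts above together for a list-returning get helper (placeholder names, including the list type; one `List(...)` call, inline deterministic post-processing with a `(namespace, name)` tie-breaker, no outcomes or phase scopes):
+```go
+// getSortedEKs lists ExampleKind objects and returns them in a stable order:
+// CreationTimestamp first, then (namespace, name) as the tie-breaker.
+func (r *Reconciler) getSortedEKs(ctx context.Context, opts ...client.ListOption) ([]v1alpha1.ExampleKind, error) {
+    var list v1alpha1.ExampleKindList
+    if err := r.client.List(ctx, &list, opts...); err != nil {
+        return nil, err
+    }
+
+    items := list.Items
+    sort.SliceStable(items, func(i, j int) bool {
+        ti, tj := items[i].CreationTimestamp.Time, items[j].CreationTimestamp.Time
+        if !ti.Equal(tj) {
+            return ti.Before(tj)
+        }
+        if items[i].Namespace != items[j].Namespace {
+            return items[i].Namespace < items[j].Namespace
+        }
+        return items[i].Name < items[j].Name
+    })
+    return items, nil
+}
+```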
+ +--- + +## Error handling + +- A **GetReconcileHelper** SHOULD be mechanically thin: + - return read errors as-is (no wrapping), + - apply a deterministic NotFound policy (either propagate it, or convert it to “absent”). +- A **GetReconcileHelper** error MUST NOT include **object identity** (for example, `namespace/name`, UID, object key). + - Error enrichment (action + **object identity** + **phase**) is owned by the calling **Reconcile method**. + +--- + +## Common anti-patterns (MUST NOT) + +❌ Returning **Outcome** from a get helper: +```go +func (r *Reconciler) getEK(ctx context.Context, key client.ObjectKey) flow.ReconcileOutcome { + var rf flow.ReconcileFlow + return rf.Continue() // forbidden: get helpers must not return reconcile outcomes +} +``` + +❌ Doing multiple reads (more than one `Get`/`List`) in the same helper: +```go +func (r *Reconciler) getEKAndFriends(ctx context.Context, key client.ObjectKey) (*v1alpha1.EK, error) { + var a v1alpha1.EK + _ = r.client.Get(ctx, key, &a) // first read + + var b v1alpha1.Other + _ = r.client.Get(ctx, key, &b) // second read (forbidden) + + return &a, nil +} +``` + +❌ Doing any write (`Patch/Create/Delete/Status().Patch`) from a get helper: +```go +func (r *Reconciler) getEK(ctx context.Context, key client.ObjectKey) (*v1alpha1.EK, error) { + var obj v1alpha1.EK + if err := r.client.Get(ctx, key, &obj); err != nil { + return nil, err + } + obj.Labels["fetched"] = "true" + // forbidden: any write operation in a get helper + _ = r.client.Update(ctx, &obj) + return &obj, nil +} +``` + +❌ Calling **DeepCopy** inside a get helper: +```go +func (r *Reconciler) getEK(ctx context.Context, key client.ObjectKey) (*v1alpha1.EK, error) { + var obj v1alpha1.EK + _ = obj.DeepCopy() // forbidden + if err := r.client.Get(ctx, key, &obj); err != nil { + return nil, err + } + return &obj, nil +} +``` + +❌ Returning “sorted” results without deterministic tie-breakers: +```go +func (r *Reconciler) getSortedEKs(ctx context.Context) ([]v1alpha1.EK, error) { + var list v1alpha1.EKList + if err := r.client.List(ctx, &list); err != nil { + return nil, err + } + + // forbidden: ties produce unstable ordering across calls + sort.Slice(list.Items, func(i, j int) bool { + return list.Items[i].CreationTimestamp.Before(&list.Items[j].CreationTimestamp) + }) + + return list.Items, nil +} +``` + +✅ Preferred deterministic tie-breaker (illustrative): +```go +sort.SliceStable(items, func(i, j int) bool { + ti := items[i].CreationTimestamp.Time + tj := items[j].CreationTimestamp.Time + if !ti.Equal(tj) { + return ti.Before(tj) + } + // Tie-breaker for determinism: + if items[i].Namespace != items[j].Namespace { + return items[i].Namespace < items[j].Namespace + } + return items[i].Name < items[j].Name +}) +``` diff --git a/.cursor/rules/controller-reconcile-helper-is-in-sync.mdc b/.cursor/rules/controller-reconcile-helper-is-in-sync.mdc new file mode 100644 index 000000000..b914b6d94 --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-is-in-sync.mdc @@ -0,0 +1,268 @@ +--- +description: Contracts for IsInSyncReconcileHelper (is*InSync*) functions: tiny pure/deterministic non-I/O equality checks per patch domain. Apply when writing is*InSync* helpers in reconciler*.go, and when deciding how to gate patches deterministically. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). 
+globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# IsInSyncReconcileHelper + +This document defines naming and contracts for **IsInSyncReconcileHelper** functions/methods. + +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **IsInSyncReconcileHelpers** (`is*InSync`) are tiny, **pure**, **deterministic**, strictly **non-I/O** boolean checks. +- They compare the current `obj` state to a single **target** (and/or **report**) value for **exactly one** **patch domain** (**main patch domain** or **status patch domain**) and return `true/false`. +- For status **report/observations**, the compared “**report**” value MAY be directly reused from selected **actual** observations (including being the same value/type as an **actual** snapshot) when publishing observations verbatim to `.status`. +- They SHOULD NOT return errors, MUST NOT do reconcile flow control (**ReconcileOutcome**), and MUST NOT log. +- They treat `obj` and `target` / `report` as **read-only inputs** (no mutations, including via map/slice **Aliasing**; **Clone** before any normalization). + +--- + +## Definition + +An **IsInSyncReconcileHelper** (“in-sync helper”) is a **ReconcileHelper** that is: + +- **strictly non-I/O**, and +- checks whether the current object state is already equal to the intended **target** (and/or published **report**) for **exactly one patch domain** (**main resource** or **status subresource**), and +- returns a boolean result. + +Typical in-sync helpers gate patch execution by answering “do we need to patch this domain?” for a single **target**/**report** input. + +--- + +## Naming + +- An **IsInSyncReconcileHelper** name MUST start with `is` / `Is` and MUST contain `InSync`. +- **IsInSyncReconcileHelpers** MUST be domain-explicit in the name when ambiguity is possible (ambiguity is possible when the checked “thing” name refers to a field/group that exists in both `.spec` (**main patch domain**) and `.status` (**status patch domain**) of the same **object**): + - `isMain*InSync` / `IsMain*InSync` / `is*MainInSync` / `Is*MainInSync` + - `isStatus*InSync` / `IsStatus*InSync` / `is*StatusInSync` / `Is*StatusInSync` +- **IsInSyncReconcileHelpers** SHOULD NOT include `Main` / `Status` in the name when there is no such ambiguity. +- **IsInSyncReconcileHelpers** names MUST NOT include `Desired` / `Actual` / `Intended` / `Target` / `Report` unless the checked “thing” name in the **object** API includes those words. +- **IsInSyncReconcileHelpers** names SHOULD name the “thing” being checked for drift: + - `isLabelsInSync(obj, targetLabels)` + - `isSpecFooInSync(obj, targetFoo)` + - `isStatusInSync(obj, targetStatus)` (ok when status is small; otherwise prefer artifact-specific checks) + - `isConditionsInSync(obj, reportConditions)` (when checking published **report** conditions) +- **IsInSyncReconcileHelpers** names SHOULD NOT be generic (`isInSync`, `isEverythingInSync`) — the name should communicate the **patch domain** + artifact being compared. + +--- + +## Preferred signatures + +- For **IsInSyncReconcileHelpers** (`is*InSync`), the simplest signature from the variants below that preserves explicit dependencies and purity SHOULD be chosen. 
+- If additional signature variants are explicitly permitted elsewhere in this document, they MAY also be used. + +### Simple check (no flow, no logging) +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) bool +``` + +--- + +## Receivers + +- **IsInSyncReconcileHelpers** MUST be plain functions (no `Reconciler` receiver). + +--- + +## I/O boundaries + +**IsInSyncReconcileHelpers** MUST NOT do any of the following: + +- controller-runtime client usage (`client.Client`, `r.client`, etc.); +- Kubernetes API calls (`Get/List/Create/Update/Patch/Delete`); +- `DeepCopy` (including `obj.DeepCopy()`, `runtime.Object.DeepCopyObject()`, etc.); +- executing patches (`Patch` / `Status().Patch`) or making any patch ordering / patch type decisions; +- creating/updating Kubernetes objects in the API server in any form. + +**IsInSyncReconcileHelpers** MUST NOT do “hidden I/O” either: + +- `time.Now()` / `time.Since(...)` (nondeterministic wall-clock reads); +- random number generation (`rand.*`); +- environment reads (`os.Getenv`, reading files); +- network calls of any kind. + +> Rationale: in-sync helpers should be **deterministic** and unit-testable; all observable side effects belong to **Reconcile methods**. + +--- + +## Determinism contract + +An **IsInSyncReconcileHelper** MUST be **deterministic** given its explicit inputs and read-only dependencies. + +See the common determinism contract in `controller-reconcile-helper.mdc`. + +In particular, avoid producing “equivalent but different” intermediate representations across runs (e.g., unstable ordering that flips the boolean result depending on traversal). + +> Practical reason: nondeterminism creates patch churn and flaky tests. + +--- + +## Read-only contract + +`is*InSync` / `Is*InSync` MUST treat all inputs as read-only: + +- it MUST NOT mutate any input values (including `obj`, `target` / `report`, and any other args); +- it MUST NOT perform in-place modifications through aliases. + +See the common read-only contract in `controller-reconcile-helper.mdc` (especially the Go aliasing rule for `map` / `[]T`). + +--- + +## Patch-domain separation + +- `is*InSync` / `Is*InSync` MUST check **exactly one** patch domain: + - **main resource** (**metadata + spec + non-status fields**), **or** + - **status subresource** (`.status`). +- If you need to check both domains, you MUST use **two** separate helpers (one per **patch domain**), and combine the results in **Reconcile methods**. + +✅ Main-only / status-only (GOOD) +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFooMain) bool +func isFooStatusInSync(obj *v1alpha1.Foo, report FooReport) bool +``` + +❌ Mixed domains in one helper (BAD) +```go +func isFooInSync( + obj *v1alpha1.Foo, + targetMain TargetFooMain, + report FooReport, +) bool +``` + +--- + +## Composition + +- An **IsInSyncReconcileHelper** MUST stay a single, simple check: it returns exactly one boolean for one **target**/**report** input. +- If multiple “pieces” must be checked together for the same domain, they SHOULD be bundled into a single `target` / `report` value (small struct) and checked in one helper. +- An **IsInSyncReconcileHelper** MAY call other `is*InSync` helpers for reuse (pure composition). + - It SHOULD NOT use such calls to compose independent checks; independent checks should be composed in Reconcile methods. +- If checks are meaningfully independent and will be used separately, they SHOULD be split into separate `is*InSync` helpers and composed in Reconcile methods (not inside the helper). 
+- An **IsInSyncReconcileHelper** MUST NOT call **ReconcileHelpers** from other **Helper categories**. + +--- + +## Flow phase scopes and outcomes + +- **IsInSyncReconcileHelpers** MUST NOT create a `reconcile/flow` **phase scope** (they do not accept `ctx context.Context`; see `controller-reconcile-helper.mdc`). +- **IsInSyncReconcileHelpers** MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`) (they are pure checks). + - If you need flow control (requeue, done, fail), keep it in the caller and/or use other helper categories (e.g., compute/ensure/patch). +- **IsInSyncReconcileHelpers** MUST NOT log. + +--- + +## Error handling + +- See the common error handling rules in `controller-reconcile-helper.mdc`. +- **IsInSyncReconcileHelpers** SHOULD be designed to be non-failing (pure checks). + - If an error is realistically possible, prefer handling it in a **ComputeReconcileHelper** (or in the caller) and pass only validated/normalized inputs to `is*InSync`. +- **IsInSyncReconcileHelpers** MUST NOT create/wrap/enrich errors, and MUST NOT include **object identity** (e.g. `namespace/name`, UID, object key). +- Do **not** log and also return a “failure signal” for the same condition unless the surrounding reconcile style explicitly requires it (avoid duplicate logs). + +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing any Kubernetes API I/O (directly or indirectly): +```go +// forbidden shape: IsInSync helpers MUST NOT accept ctx (no I/O allowed) +// forbidden: IsInSync helpers MUST be plain functions (no Reconciler receiver) +func (r *Reconciler) isFooInSync(ctx context.Context, obj *v1alpha1.Foo, target TargetFoo) bool { + // forbidden: I/O in IsInSync helper + var cm corev1.ConfigMap + key := client.ObjectKey{Namespace: obj.Namespace, Name: "some-cm"} + _ = r.client.Get(ctx, key, &cm) + return true +} +``` + +❌ Returning `error` as part of the signature when it is avoidable: +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) (bool, error) { // avoid + return true, nil +} +``` + +❌ Doing flow control / returning `flow.Outcome`: +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) flow.ReconcileOutcome { // forbidden + var rf flow.ReconcileFlow + return rf.Continue() +} +``` + +❌ Logging or creating phases (no `ctx`, no logs): +```go +// forbidden: IsInSync helpers MUST NOT accept ctx (they must stay pure and non-logging) +func isFooInSync(ctx context.Context, obj *v1alpha1.Foo, target TargetFoo) bool { + l := log.FromContext(ctx) + l.Info("checking in-sync") // forbidden: no logging in IsInSync helpers + return true +} +``` + +❌ Calling `DeepCopy`: +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) bool { + _ = obj.DeepCopy() // forbidden + return true +} +``` + +❌ Mutating `obj` (even “harmless” changes): +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) bool { + obj.Spec.Replicas = target.Replicas // forbidden: IsInSync is read-only + return false +} +``` + +❌ Mutating `target` / `report`: +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) bool { + target.Replicas = 3 // forbidden: target is read-only + return obj.Spec.Replicas == target.Replicas +} +``` + +❌ Mutating through aliasing (maps/slices from inputs): +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) bool { + ids := obj.Spec.IDs + slices.Sort(ids) // forbidden: sorts in place and mutates obj + return true +} +``` + +❌ Depending on map iteration order (nondeterministic boolean): +```go +func isFooInSync(obj 
*v1alpha1.Foo, target TargetFoo) bool { + // obj.Spec.Flags is a map[string]bool + got := make([]string, 0, len(obj.Spec.Flags)) + for k := range obj.Spec.Flags { // map iteration order is random + got = append(got, k) + } + // comparing to target.Keys without sorting => nondeterministic result + return reflect.DeepEqual(got, target.Keys) +} +``` + +❌ Checking both patch domains in one helper: +```go +func isFooInSync(obj *v1alpha1.Foo, target TargetFoo) bool { + // forbidden: mixes main + status checks + mainOK := obj.Spec.Replicas == target.Replicas + statusOK := obj.Status.Phase == target.Phase + return mainOK && statusOK +} +``` diff --git a/.cursor/rules/controller-reconcile-helper-patch.mdc b/.cursor/rules/controller-reconcile-helper-patch.mdc new file mode 100644 index 000000000..1a62c1754 --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper-patch.mdc @@ -0,0 +1,333 @@ +--- +description: Contracts for PatchReconcileHelper (patch) functions: exactly one patch request for one patch domain (main or status), explicit base + optimistic-lock flag, and no other I/O. Apply when writing patch* helpers in reconciler*.go, and when deciding patch mechanics for main vs status. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# PatchReconcileHelper + +This document defines naming and contracts for **PatchReconcileHelper** functions/methods. + +Common terminology and rules for any **ReconcileHelper** live in `controller-reconcile-helper.mdc`. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **PatchReconcileHelpers** (`patch`) are **single-call I/O helpers**: they execute exactly one **patch request** for exactly one **patch domain** (`Patch(...)` (**main patch domain**) or `Status().Patch(...)` (**status patch domain**)). +- They take `base` explicitly (created by **Reconcile methods** immediately before the patch) and an explicit `optimisticLock` flag, and MUST NOT decide **patch ordering** or **patch strategy** beyond that flag. +- They MUST patch using the **caller-owned object instance** (`obj`) and, on success, the same instance MUST be updated with **API-server-updated fields** (e.g., `resourceVersion`, managed fields, defaults). +- They MUST NOT perform any other **Kubernetes API I/O** calls (`Get/List/Create/Update/Delete`), MUST NOT call **DeepCopy**, and MUST NOT patch both **patch domains** in one helper. +- They MUST treat `base` as **read-only inputs** and stay **deterministic** in everything they control (no **Hidden I/O**: no time/random/env/network beyond the single **patch request**). + +Notes: +- A status-domain patch (`Status().Patch(...)`) persists Kubernetes POV **observed state** (`.status`), which may include both: + - **controller-owned state** (persisted decisions/memory), and + - the published **report** (conditions/progress/selected observations). + Patch helpers stay agnostic; deciding *what* should be in `.status` (and keeping roles distinct) belongs to **Reconcile methods** + compute/apply helpers. 
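+
+Illustrative only — a caller-side sketch (inside a **Reconcile method**, not inside the patch helper) of how the `base` snapshot and the optimistic-lock flag are expected to reach a status patch helper; all names are placeholders, and ensure-error / flow-phase handling is elided (see `controller-reconciliation-flow.mdc`):
+```go
+func (r *Reconciler) reconcileEKStatus(ctx context.Context, obj *v1alpha1.ExampleKind) error {
+    base := obj.DeepCopy() // base snapshot is owned by the Reconcile method (DeepCopy is forbidden inside helpers)
+
+    outcome := ensureStatusEK(ctx, obj) // status-domain EnsureReconcileHelper (placeholder name)
+    // NOTE: ensure-error propagation and flow outcome handling are elided here;
+    // they belong to the Reconcile method (see the ensure / flow rules).
+
+    if !outcome.DidChange() {
+        return nil // nothing to persist for this patch domain
+    }
+    // exactly one Status().Patch request, executed by the patch helper
+    return r.patchEKStatus(ctx, obj, base, outcome.OptimisticLockRequired())
+}
+```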
+
+---
+
+## Definition
+
+A **PatchReconcileHelper** (“patch helper”) is a **ReconcileHelper** that is:
+
+- **allowed to perform I/O**, and
+- executes exactly **one** **Kubernetes patch request** for exactly **one patch domain** (**main resource patch** or **status subresource patch**), and
+- returns the patch outcome (and optionally an error).
+
+Typical patch helpers encapsulate the mechanical “patch this domain now” operation (including optimistic-lock semantics) and ensure the caller-visible in-memory object reflects server-assigned fields after the patch (e.g., `resourceVersion`, defaults), while **Reconcile methods** still own **patch ordering** decisions across multiple patches.
+
+---
+
+## Naming
+
+- A **PatchReconcileHelper** name MUST start with `patch` / `Patch`.
+- **PatchReconcileHelpers** MUST use the form:
+  - `patch<Kind>` / `Patch<Kind>` (**main patch domain**)
+  - `patch<Kind>Status` / `Patch<Kind>Status` (**status patch domain**)
+  `<Kind>` MUST either correspond to the Kubernetes **object** kind being patched or be a short kind name that is already established in the codebase.
+  When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` MUST use the **short kind name** (see `controller-terminology.mdc`).
+  Examples:
+  - `patchCM(...)` (or `patchConfigMap(...)`)
+  - `patchCMStatus(...)` (or `patchConfigMapStatus(...)`)
+  - `patchSVC(...)` (or `patchService(...)`)
+  - `patchSVCStatus(...)` (or `patchServiceStatus(...)`)
+  - `patchEK(...)` (or `patchExampleKind(...)`)
+  - `patchEKStatus(...)` (or `patchExampleKindStatus(...)`)
+- **PatchReconcileHelpers** names MUST NOT hide strategy or ordering (`patchOptimistically`, `patchAll`, `patchWithOrdering`) — patch helpers execute exactly one patch; ordering and strategy decisions live in **Reconcile methods**.
+
+---
+
+## Preferred signatures
+
+- For **PatchReconcileHelpers** (`patch*`), the simplest signature from the variants below that preserves explicit dependencies and a single-patch scope SHOULD be chosen.
+- If additional signature variants are explicitly permitted elsewhere in this document, they MAY also be used.
+ +### Simple patch +Pass `base` explicitly (created in the **Reconcile methods** immediately before the patch) +and an explicit optimistic-lock flag: +```go +func (r *Reconciler) patchEK( + ctx context.Context, + obj *v1alpha1.ExampleKind, + base *v1alpha1.ExampleKind, + optimisticLock bool, +) error +``` + +### Status-subresource patch variant +```go +func (r *Reconciler) patchEKStatus( + ctx context.Context, + obj *v1alpha1.ExampleKind, + base *v1alpha1.ExampleKind, + optimisticLock bool, +) error +``` + +### Implementation example (illustrative) + +```go +func (r *Reconciler) patchEK( + ctx context.Context, + obj *v1alpha1.ExampleKind, + base *v1alpha1.ExampleKind, + optimisticLock bool, +) error { + var patch client.Patch + if optimisticLock { + patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}) + } else { + patch = client.MergeFrom(base) + } + return r.client.Patch(ctx, obj, patch) +} + +func (r *Reconciler) patchEKStatus( + ctx context.Context, + obj *v1alpha1.ExampleKind, + base *v1alpha1.ExampleKind, + optimisticLock bool, +) error { + var patch client.Patch + if optimisticLock { + patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}) + } else { + patch = client.MergeFrom(base) + } + return r.client.Status().Patch(ctx, obj, patch) +} +``` + +--- + +## Receivers + +- **PatchReconcileHelpers** MUST be methods on `Reconciler` (they perform I/O via controller-runtime client owned by `Reconciler`). + +--- + +## I/O boundaries + +**PatchReconcileHelpers** MAY do the following: + +- controller-runtime client usage to execute exactly **one** Kubernetes patch call for exactly **one** patch domain: + - `Patch(...)` (main resource), or + - `Status().Patch(...)` (status subresource), + using the **Optimistic locking** mode provided by the caller (typically derived from `EnsureOutcome.OptimisticLockRequired()`). + +**PatchReconcileHelpers** MUST NOT do any of the following: + +- Kubernetes API calls other than that single patch call (no `Get/List/Create/Update/Delete`, no second patch); +- `DeepCopy` (including `obj.DeepCopy()`, `runtime.Object.DeepCopyObject()`, etc.); +- making any patch ordering decisions across multiple patch requests; +- performing any other I/O besides the single Kubernetes API request they own. + +**PatchReconcileHelpers** MUST NOT do “hidden I/O” either: + +- `time.Now()` / `time.Since(...)` (nondeterministic wall-clock reads); +- random number generation (`rand.*`); +- environment reads (`os.Getenv`, reading files); +- network calls of any kind **other than** the single Kubernetes API request they own. + +> Rationale: patch helpers are mechanical “execute exactly one patch” operations; ordering and multi-step reconciliation policy remain explicit and reviewable in **Reconcile methods**. + +--- + +## Determinism contract + +A **PatchReconcileHelper** MUST be **deterministic** in everything it controls. + +In particular: +- It MUST execute a single patch request whose parameters are determined only by explicit inputs (`obj`, `base`, `optimisticLock`, domain). +- See the common determinism contract in `controller-reconcile-helper.mdc` (ordering stability, no map iteration order reliance). +- It MUST NOT introduce “hidden I/O” (time, random, env, extra network calls) beyond the single patch request they own. + +> Practical reason: nondeterminism produces patch churn and makes conflicts hard to reason about. + +--- + +## Read-only contract + +`patch` / `Patch` MUST treat inputs as read-only. 
+ +In particular, it MUST treat `base` as read-only (it is the patch base / diff reference): + +- it MUST NOT mutate `base` (it is the patch base / diff reference); +- it MUST NOT mutate any other inputs; +- it MAY observe `obj` being updated as a result of the patch call (e.g., `resourceVersion`, defaults), but MUST NOT perform additional in-memory business mutations inside the patch helper. + +See the common read-only contract in `controller-reconcile-helper.mdc` (especially the Go aliasing rule for `map` / `[]T`). + +--- + +## Patch-domain separation + +- A **PatchReconcileHelper** MUST execute exactly **one** patch request for exactly **one** patch domain: + - **main resource** patch domain: `Patch(...)`, **or** + - **status subresource** patch domain: `Status().Patch(...)`. +- A **PatchReconcileHelper** MUST NOT patch both domains in one helper. +- If both domains need patching, **Reconcile methods** MUST issue two separate patch operations (typically via two patch helpers), each with its own `base` and request. + +--- + +## Composition + +- A **PatchReconcileHelper** MUST execute exactly one patch request for exactly one patch domain. +- A **PatchReconcileHelper** MAY be preceded by pure helpers that prepared the in-memory `obj` (compute/apply/ensure), but the patch helper itself MUST NOT perform any business-logic composition beyond executing the single patch request. +- If multiple patch requests are needed (multiple domains or multiple sequential patches), they MUST be composed in **Reconcile methods** as multiple explicit patch operations (each with its own `base` taken immediately before that patch). +- A **PatchReconcileHelper** MUST NOT call other **ReconcileHelpers**. + +--- + +## Flow phase scopes and outcomes + +- **PatchReconcileHelpers** MUST NOT create a `reconcile/flow` **phase scope** — they should stay mechanical and short. +- **PatchReconcileHelpers** MUST return `error` and MUST NOT return **ReconcileOutcome** (`flow.ReconcileOutcome`) or **EnsureOutcome** (`flow.EnsureOutcome`). + - Any retry/requeue policy belongs to the calling **Reconcile method** (use `ReconcileFlow` there). + +--- + +## Error handling + +- See the common error handling rules in `controller-reconcile-helper.mdc`. +- A **PatchReconcileHelper** SHOULD be mechanically thin: if the single patch call fails, return the error **without wrapping**. +- A **PatchReconcileHelper** MUST NOT enrich errors with additional context (including **object identity** such as `namespace/name`, UID, object key). + - Error enrichment (action + **object identity** + **phase**) is the calling **Reconcile method**’s responsibility. 
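+
+Illustrative caller-side sketch (hypothetical `reconcileEKStatus` **Reconcile method**; assumes the `flow` helpers from `controller-reconciliation-flow.mdc`): the patch helper returns the raw error as-is, and the calling **Reconcile method** adds the action-level context:
+
+```go
+func (r *Reconciler) reconcileEKStatus(ctx context.Context, obj *v1alpha1.ExampleKind) (outcome flow.ReconcileOutcome) {
+	rf := flow.BeginReconcile(ctx, "ek-status")
+	defer rf.OnEnd(&outcome)
+
+	base := obj.DeepCopy()
+	// apply/ensure helpers (not shown) bring obj.Status to the target in-memory state here
+
+	if err := r.patchEKStatus(rf.Ctx(), obj, base, false); err != nil {
+		// action-level enrichment lives in the Reconcile method, not in the patch helper
+		return rf.Failf(err, "patch status")
+	}
+	return rf.Continue()
+}
+```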
+ +--- + +## Common anti-patterns (MUST NOT) + +❌ Doing any Kubernetes API calls other than the single patch request (`Get/List/Create/Update/Delete`, or a second patch): +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + // forbidden: extra API call + var cur v1alpha1.EK + if err := r.client.Get(ctx, client.ObjectKeyFromObject(obj), &cur); err != nil { + return err + } + + // forbidden: patch after an extra call (still >1 API call in helper) + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Calling `DeepCopy` inside patch helpers (the caller creates `base`): +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + _ = obj.DeepCopy() // forbidden: DeepCopy belongs to Reconcile methods + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Patching a temporary copy and dropping it (caller-owned `obj` stays stale): +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + tmp := obj.DeepCopy() // also forbidden: DeepCopy in patch helper + if err := r.client.Patch(ctx, tmp, client.MergeFrom(base)); err != nil { + return err + } + // forbidden: obj is not updated with new resourceVersion/defaults + return nil +} +``` + +❌ Patching both patch domains in one helper: +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + // forbidden: two requests / two domains + if err := r.client.Patch(ctx, obj, client.MergeFrom(base)); err != nil { // main + return err + } + return r.client.Status().Patch(ctx, obj, client.MergeFrom(base)) // status +} +``` + +❌ Making patch ordering decisions (patch helpers execute exactly one patch, ordering lives in Reconcile methods): +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + // forbidden: deciding to patch status first / mixing ordering policy into the helper + if needsStatus(obj) { + if err := r.client.Status().Patch(ctx, obj, client.MergeFrom(base)); err != nil { + return err + } + } + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Overriding the caller’s optimistic-locking decision: +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + optimisticLock = true // forbidden: helper must not change the decision + // ... + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Performing business-logic mutations inside the patch helper (beyond the patch call itself): +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + // forbidden: business mutations belong to compute/apply/ensure before calling patch + obj.Spec.Replicas = 3 + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Mutating `base` (it is read-only diff reference): +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + labels := base.GetLabels() + labels["x"] = "y" // forbidden: mutates base via alias + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Hidden I/O / nondeterminism (time/random/env/extra network calls): +```go +func (r *Reconciler) patchEK(ctx context.Context, obj, base *v1alpha1.EK, optimisticLock bool) error { + if os.Getenv("PATCH_FAST") == "1" { // forbidden: env read in helper + // ... 
+ } + _ = time.Now() // forbidden + return r.client.Patch(ctx, obj, client.MergeFrom(base)) +} +``` + +❌ Using broad patch helpers that patch multiple objects (must patch exactly one object instance): +```go +func (r *Reconciler) patchEKs(ctx context.Context, objs []*v1alpha1.EK, base *v1alpha1.EK, optimisticLock bool) error { + for _, obj := range objs { + if err := r.client.Patch(ctx, obj, client.MergeFrom(base)); err != nil { // forbidden: multiple API calls + return err + } + } + return nil +} +``` diff --git a/.cursor/rules/controller-reconcile-helper.mdc b/.cursor/rules/controller-reconcile-helper.mdc new file mode 100644 index 000000000..13b9c401c --- /dev/null +++ b/.cursor/rules/controller-reconcile-helper.mdc @@ -0,0 +1,218 @@ +--- +description: Common rules for ReconcileHelper functions/methods in reconciler.go: naming-by-category, signatures, determinism, aliasing, and I/O boundaries. Apply when implementing or reviewing reconcile helper functions in reconciler*.go, and when deciding helper categories or allowed side effects. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# ReconcileHelper functions/methods + +This document defines naming and contracts for **ReconcileHelper** functions/methods. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **Reconcile methods** (`Reconcile*` / `reconcile*`) own reconciliation orchestration and I/O sequencing; **ReconcileHelpers** are category-named helpers used by them. +- All **ReconcileHelpers** follow strict **naming-by-category** (some categories have multiple allowed prefixes, e.g. **ConstructionReconcileHelper** uses `new*`/`build*`/`make*`/`compose*`): `compute*`, `new*`/`build*`/`make*`/`compose*`, `is*InSync*`, `apply*`, `ensure*`, `get*`, `create*`, `delete*`, `patch*` — to make intent and allowed behavior reviewable. +- Every ReconcileHelper has explicit dependencies: if it takes `ctx`, it is first; if it operates on a Kubernetes object, `obj` is the first arg after `ctx`; all other inputs come **after `obj`**. +- ReconcileHelpers are **deterministic**: never rely on map iteration order; sort when order matters; avoid “equivalent but different” outputs/states that cause patch churn. +- ReconcileHelpers treat inputs as **read-only** except for the explicitly allowed mutation target(s); never mutate through map/slice aliasing — **clone before editing**. +- **I/O** is **explicitly bounded by category**: + - **Compute / Construction / IsInSync / Apply / Ensure**: strictly **non-I/O**. + - **Get**: allowed **I/O**, but **at most one API read** per helper (`Get` or `List`). + - **Create / Delete / Patch**: allowed **I/O**, but **exactly one API write** per helper (`Create` / `Delete` / `Patch` or `Status().Patch`). + +--- + +## Terminology + +- **Reconcile methods**: the controller-runtime `Reconcile(...)` method and any other function/method whose name matches `reconcile*` / `Reconcile*` (see `controller-file-structure.mdc`). +- **ReconcileHelper functions/methods**: any helper function/method used by **Reconcile methods**, implemented in `reconciler.go`, whose name matches one of the **ReconcileHelper categories** below. 
+ - When referring to *any* helper from these categories, use **ReconcileHelper**. + - When referring to a *specific kind* of helper, use the corresponding category name below. + +### ReconcileHelper categories + +These categories are naming categories/patterns (see also `controller-file-structure.mdc`): + +- **ComputeReconcileHelper**: `compute*` / `Compute*` (see `controller-reconcile-helper-compute.mdc`). +- **ConstructionReconcileHelper**: `new*` / `build*` / `make*` / `compose*` (see `controller-reconcile-helper-construction.mdc`). +- **IsInSyncReconcileHelper**: `is*InSync*` / `Is*InSync*` (starts with `is`/`Is` and contains `InSync`) (see `controller-reconcile-helper-is-in-sync.mdc`). +- **ApplyReconcileHelper**: `apply*` / `Apply*` (see `controller-reconcile-helper-apply.mdc`). +- **EnsureReconcileHelper**: `ensure*` / `Ensure*` (see `controller-reconcile-helper-ensure.mdc`). +- **GetReconcileHelper**: `get*` / `Get*` (see `controller-reconcile-helper-get.mdc`). +- **CreateReconcileHelper**: `create*` / `Create*` (see `controller-reconcile-helper-create.mdc`). +- **DeleteReconcileHelper**: `delete*` / `Delete*` (see `controller-reconcile-helper-delete.mdc`). +- **PatchReconcileHelper**: `patch*` / `Patch*` (see `controller-reconcile-helper-patch.mdc`). + +--- + +## Scope + +This document defines **common** conventions for all **ReconcileHelper categories**. + +Category-specific conventions are defined in dedicated documents referenced in **“ReconcileHelper categories”** above. + +--- + +## Any ReconcileHelper + +### Signatures + +- If a **ReconcileHelper** creates a **phase** or writes logs, it MUST accept `ctx context.Context`. +- A function operating on an **object** MUST take a pointer to the root object as: + - the **first argument** if the function does not accept `ctx`; + - the **first argument after `ctx`** if the function accepts `ctx`. + (root object = the full API object (`*`), not `Spec`/`Status` or other sub-structs) +- Additional inputs (computed flags, outputs of previous compute steps) MUST appear **after `obj`** to keep dependencies explicit. +- If a **ReconcileHelper** returns **EnsureOutcome** (in code: `flow.EnsureOutcome`), it MUST be the **first return value**. + - It SHOULD be the only return value for convenience, unless additional return values are clearly justified. + +### Flow phase scopes + +- **Phase scope** usage (`flow.BeginEnsure` / `flow.BeginStep`) is **strictly limited**: + - **All `ensure*`**: MUST create an **ensure phase scope**. + - **Large `compute*`**: MAY create a **step phase scope** **only when it improves structure or diagnostics**. + - **All other Helper categories** (`apply*`, `is*InSync*`, `get*`, `create*`, `delete*`, `patch*`) MUST NOT create **phase scopes**. +- If a helper uses a **phase scope**, it MUST follow `controller-reconciliation-flow.mdc` (one scope per function; scope on first line; no scopes inside loops). + +### Visibility and receivers + + - **ReconcileHelpers** SHOULD be unexported (private) by default. Export a **ReconcileHelper** only with an explicit, documented reason. + - **ReconcileHelpers** SHOULD be plain functions when they do not need any data from `Reconciler`. + - If a **ReconcileHelper** needs data from `Reconciler`, it SHOULD be a method on `Reconciler`. + +### Naming + +- If a **ReconcileHelper** name includes a Kubernetes object kind (e.g. 
`create`, `delete`, `patch`): + - when `` refers to a kind defined in this repository’s API (types under `api/v*/`), `` MUST use the **short kind name** (see `controller-terminology.mdc`); + - otherwise, `` MAY be either: + - a short, codebase-established name (preferred in examples), or + - the full kind name. +- If a short kind name is used, it MUST be an established name in this codebase (do not invent new abbreviations ad-hoc). + - Examples (illustrative): `createSKN(...)` (or `createSomeKindName(...)`), `patchSKN(...)` (or `patchSomeKindName(...)`). + +### Determinism contract + +Any **ReconcileHelper** MUST be **deterministic** given its explicit inputs and allowed **mutation target**s / **I/O** boundaries. + +In particular: +- Never rely on map iteration order: if output order matters, MUST sort it. +- If you build ordered slices from maps/sets (finalizers/ownerRefs/conditions/etc.), MUST make ordering stable (`slices.Sort`, sort by key, etc.). +- Avoid producing “equivalent but different” object states or intermediate representations across runs (e.g., writing the same elements in different order). + +> Practical reason: nondeterminism creates patch churn and flaky tests. + +### Read-only contract + +Any **ReconcileHelper** MUST treat all **read-only inputs** except explicitly allowed **mutation target**s as read-only. + +In particular: +- It MUST NOT mutate inputs other than the allowed **mutation target**(s). +- It MUST NOT perform in-place modifications through aliases to **read-only inputs**. + +**Important Go aliasing rule (MUST):** +- `map` / `[]T` values are reference-like. If you copy them from a read-only input and then mutate them, you may be mutating the original input through aliasing. +- Therefore, if you need to modify a map/slice derived from a read-only input, you MUST clone/copy it first. + +Examples (illustrative): + +✅ GOOD: clone before normalizing/editing (derived from `obj`) +```go +labels := maps.Clone(obj.GetLabels()) +labels["some/ephemeral"] = "" // edit on a clone +``` + +❌ BAD: mutates input through alias +```go +labels := obj.GetLabels() +labels["some/ephemeral"] = "" // mutates obj +``` + +✅ GOOD: clone slice before editing +```go +in := obj.Spec.SomeSlice +out := slices.Clone(in) // or append([]T(nil), in...) +out = append(out, "new") +``` + +✅ GOOD: clone desired map before setting on `obj` +```go +labels := maps.Clone(desired.Labels) +obj.SetLabels(labels) +``` + +❌ BAD: shares map with desired (and future edits may mutate desired) +```go +obj.SetLabels(desired.Labels) // aliasing +``` + +Note: the same cloning rule applies to any other read-only inputs (e.g., shared templates/dependencies or patch bases). + +### Error handling + +- **ReconcileHelpers** SHOULD generally return errors as-is. Do not enrich errors “for the outside world” in helpers. +- **Hard ban (MUST NOT)**: a **ReconcileHelper** error MUST NOT include **object identity** (e.g. `namespace/name`, UID, object key). + - Rationale: **object identity** and action-level context belong to the calling **Reconcile method**, which owns orchestration and **phases**. +- If a **ReconcileHelper** creates its own local validation error, it MAY include the **problematic field/constraint** (purely local, non-identity) to keep the error actionable. 
+- If additional context is needed to disambiguate multiple *different* error sources within the same **Reconcile method**, this is allowed only where the category doc explicitly permits it (notably `compute*` / `ensure*`), and the added context MUST remain local and non-identifying. +- Do **not** log and also return an error for the same condition unless the surrounding reconcile style explicitly requires it (avoid duplicate logs). + +--- + +## When to create helpers (SHOULD/MUST NOT) + +This section is **not** about what helpers are *allowed* to do (see the category docs). This section is about **when extracting a helper is worth it** vs when it is unnecessary indirection. + +### General guidance + +- Prefer **locality**: keep logic close to its only call site unless extraction clearly improves reuse or readability. +- Prefer **category purity** over “nice structure”: do not create helpers that *almost* fit a category. If it needs orchestration or mixes domains, keep it in a Reconcile method (or split into multiple helpers in correct categories). +- Extract when it helps you enforce **determinism** or **domain separation** (main vs status), especially when doing it inline would be error-prone. + +### CreateReconcileHelper (`create*`) / PatchReconcileHelper (`patch*`) / DeleteReconcileHelper (`delete*`) (I/O helpers) + +- SHOULD create these helpers **only when they have 2+ call sites** (within the same controller package). +- SHOULD NOT create them “for symmetry” if the helper would only hide a one-off, standard I/O action (even when that action is usually written as a small boilerplate block in Reconcile methods). + +### ApplyReconcileHelper (`apply*`) / IsInSyncReconcileHelper (`is*InSync*`) (small pure helpers) + +- SHOULD create these helpers only when the logic cannot be expressed as **one obvious action** at the call site. + - Examples of “one obvious action” (inline instead of helper): a single `obju.*` call; a single simple assignment; a single `meta` / `metav1` helper call. +- SHOULD create these helpers when: + - the call site would otherwise contain multiple coordinated field writes/comparisons for the same patch domain; + - the logic requires deterministic normalization (sorting/canonicalization) that you want to keep consistent between “compute“, “check” and “apply”. + +### ComputeReconcileHelper (`compute*`) / EnsureReconcileHelper (`ensure*`) (core of reconciliation logic) + +- If reconciliation needs to compute **intended**, observe **actual**, decide **target**, and/or publish a **report**, there SHOULD be at least one explicit step that performs this work as either: + - a ComputeReconcileHelper (`computeIntended*`, `computeActual*`, `computeTarget*`, and/or `compute*Report`), or + - an EnsureReconcileHelper (`ensure*`) that derives and applies corrections in-place (for a single **patch domain**). + The intent is to keep **Reconcile methods** focused on orchestration and to make “where decisions live” reviewable. + - Cache-like deterministic components (memoization of derived values) MAY be used inside **ComputeReconcileHelper** / **EnsureReconcileHelper**, but stateful allocators / ID pools (e.g., device minor / ordinal allocation) MUST NOT be hidden inside them (keep the allocation decision explicit in **Reconcile methods** together with persistence as **controller-owned state**). + +#### Splitting / nesting guidelines + +- SHOULD NOT split trivial logic into **ComputeReconcileHelper** (`compute*`) + **EnsureReconcileHelper** (`ensure*`) just to “follow patterns”. 
If one small helper can do it clearly (and within category rules), keep it in one place. +- MAY create an **EnsureReconcileHelper** (`ensure*`) that is only an orchestrator for **ComputeReconcileHelper** (`compute*`) → **IsInSyncReconcileHelper** (`is*InSync*`) → **ApplyReconcileHelper** (`apply*`) **only** when it significantly improves readability at the call site and does not hide orchestration decisions (ordering/retries/patch policy) that must remain explicit in a **Reconcile method**. + - In general, the purpose of **EnsureReconcileHelper** (`ensure*`) is to perform in-place, step-by-step corrections on `obj` (for a single **patch domain**), not to wrap a **target**/**report**-driven pipeline. +- If an **EnsureReconcileHelper** (`ensure*`) is small and readable, keep it monolithic: + - SHOULD NOT extract a separate **ComputeReconcileHelper** (`compute*`) just to compute a couple of booleans or a tiny struct. +- If an **EnsureReconcileHelper** (`ensure*`) becomes complex: + - MAY split it into multiple sub-**EnsureReconcileHelper** (`ensure*`) helpers (same domain; explicit dependencies after `obj`). + - MAY extract sub-**ComputeReconcileHelper** (`compute*`) helpers for non-trivial derived values used by **EnsureReconcileHelper**, keeping them pure and **deterministic**. +- If a **ComputeReconcileHelper** (`compute*`) becomes complex: + - MAY split it into smaller **ComputeReconcileHelper** (`compute*`) helpers (pure composition) with explicit data flow via parameters/return values. + - SHOULD keep each compute focused on a single artifact (e.g., **intended** normalization, **actual** snapshot shaping, **target** decisions for one domain/artifact, **report** artifacts), rather than a “compute everything” blob. + +### ConstructionReconcileHelper (`new*` / `build*` / `make*` / `compose*`) + +- SHOULD use **ConstructionReconcileHelpers** to extract pure object/value construction that is: + - reused across multiple compute/apply/ensure steps, or + - non-trivial enough that inline construction would be error-prone (ordering/canonicalization/aliasing). +- SHOULD NOT use **ConstructionReconcileHelpers** as a substitute for **ComputeReconcileHelpers** when the output is conceptually **intended**/**actual**/**target**/**report**. + Use the `compute*` family for reconciliation pipeline artifacts; use **ConstructionReconcileHelpers** for sub-artifacts and building blocks. diff --git a/.cursor/rules/controller-reconciliation-flow.mdc b/.cursor/rules/controller-reconciliation-flow.mdc new file mode 100644 index 000000000..980532ea6 --- /dev/null +++ b/.cursor/rules/controller-reconciliation-flow.mdc @@ -0,0 +1,344 @@ +--- +description: Rules for using lib/go/common/reconciliation/flow in controller reconciliation code: phases (BeginPhase/EndPhase) and Outcome composition/propagation. Apply when writing reconciliation code that uses flow.* in reconciler*.go, and when reasoning about reconciliation control flow and error handling. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. 
+ +# Using flow (`lib/go/common/reconciliation/flow`) + +This document defines the **usage contract** for `lib/go/common/reconciliation/flow` in controller reconciliation code: +how to structure work into **phase scopes** and how to compose/propagate/enrich reconciliation results. + +Scope: any function that calls `flow.BeginRootReconcile`, `flow.BeginReconcile`, `flow.BeginEnsure`, `flow.BeginStep`, +and/or returns/handles `flow.ReconcileOutcome` / `flow.EnsureOutcome` / `error`, MUST follow this document. + +--- + +## TL;DR + +Summary only; if anything differs, follow normative sections below. + +- **Root `Reconcile(...)`** MUST start with `rf := flow.BeginRootReconcile(ctx)` and return via `outcome.ToCtrl()`. + Root reconcile MUST NOT use `OnEnd` (root has no phase-scope end handler). +- Any **non-root Reconcile method** that uses phase logging MUST: + - call `rf := flow.BeginReconcile(ctx, "", )` on the **first executable line**, + - `defer rf.OnEnd(&outcome)` on the **second executable line**, + - declare a named return `outcome flow.ReconcileOutcome`, + - use `rf.Ctx()` for context and (if logging) `rf.Log()` after that. +- Any ensure helper MUST: + - call `ef := flow.BeginEnsure(ctx, "", )` on the **first executable line**, + - `defer ef.OnEnd(&outcome)` on the **second executable line**, + - declare a named return `outcome flow.EnsureOutcome`, + - use `ef.Ctx()` for context and (if logging) `ef.Log()` after that. +- Any step function that returns plain `error` and uses phase logging MUST: + - call `sf := flow.BeginStep(ctx, "", )` on the **first executable line**, + - `defer sf.OnEnd(&err)` on the **second executable line**, + - declare a named return `err error`, + - use `sf.Ctx()` for context and (if logging) `sf.Log()` after that. +- **Phase names** MUST be stable identifiers (no dynamic values). Variable identity MUST go into `` key/value pairs. +- **Error logging**: errors are logged by the deferred `OnEnd` of the corresponding scope (or by controller-runtime for the root `Reconcile`). + Code MUST NOT log the same error again. If you intentionally drop an error/stop signal (best-effort override), you MUST log it. + +--- + +## Phase scope rules + +A **phase scope** is created by one of: +- `BeginReconcile` (non-root reconcile phases; returns `ReconcileFlow`) +- `BeginEnsure` (ensure phases; returns `EnsureFlow`) +- `BeginStep` (step phases; returns `StepFlow`) + +A function is either: +- **scoped** (exactly one `Begin*` + exactly one deferred `OnEnd`), or +- **unscoped** (no `Begin*` / `OnEnd` in that function). + +There is no mixed mode. + +### Single-scope rule + +- A scoped function MUST create exactly one phase scope. +- A function MUST NOT create multiple phase scopes (no sequential scopes). +- A function MUST NOT create a phase scope inside a loop. + +### Scope placement + +If a function is scoped: +- `BeginReconcile` / `BeginEnsure` / `BeginStep` MUST be called on the **first executable line** of the function. +- The corresponding `defer .OnEnd(&...)` MUST be the **second executable line** of the function. +- The function MUST NOT have any other statements (including variable declarations, logging, or conditionals) + before `Begin*` or between `Begin*` and the `defer`. + +This guarantees that: +- the entire function body is covered by the scope, +- all early returns are finalized consistently, +- panics are logged consistently. 
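+
+Forbidden placement (illustrative; hypothetical names): any statement before `Begin*`, or between `Begin*` and the deferred `OnEnd`, leaves part of the function outside the scope:
+
+```go
+func (r *Reconciler) reconcileFoo(ctx context.Context) (outcome flow.ReconcileOutcome) {
+	l := log.FromContext(ctx) // BAD: statement before Begin*
+	l.Info("starting")
+
+	rf := flow.BeginReconcile(ctx, "foo")
+	var requeue bool // BAD: statement between Begin* and the deferred OnEnd
+	defer rf.OnEnd(&outcome)
+
+	_ = requeue
+	return rf.Continue()
+}
+```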
+ +### Required named return variables + +To standardize deferred end handlers: + +- Any function scoped with `BeginReconcile` MUST use a named return value named `outcome` of type `flow.ReconcileOutcome`. +- Any function scoped with `BeginEnsure` MUST use a named return value named `outcome` of type `flow.EnsureOutcome`. +- Any function scoped with `BeginStep` MUST use a named return value named `err` of type `error`. + +The deferred end handler MUST receive a pointer to that named return variable. + +### No bare return + +Any scoped function MUST NOT use bare `return` (empty return). It MUST return explicitly. + +--- + +## Phase name and metadata + +### Phase name rules + +The phase name is used as a logger name segment (`logr.WithName`). + +- The phase name MUST NOT be empty. +- The phase name MUST NOT contain whitespace or control characters. +- The phase name MUST be a stable identifier and MUST NOT include dynamic values (resource names, UIDs, loop indices, etc.). +- The phase name MUST NOT include redundant prefixes like `reconcile-` or `ensure-` (the scope type is already known from the `Begin*` call). + +Recommended style: +- lowercase ASCII, +- `kebab-case` segments, +- optional hierarchical segments separated by `/` when it improves structure. + +### Metadata (key/value pairs) + +Variable or contextual information MUST NOT be encoded in the phase name. +It MUST be passed as key/value pairs (`"k1","v1","k2","v2",...`) to `BeginReconcile` / `BeginEnsure` / `BeginStep`. + +Rules: +- Metadata MUST be passed as key/value pairs (even number of strings). +- Metadata SHOULD be minimal and SHOULD NOT duplicate: + - reconcile-request fields already present on the controller-runtime logger (`namespace`, `name`, `reconcileID`, `controller*`, ...), + - metadata already present in a parent phase scope. + +--- + +## Context and logger handling + +Each `Begin*` attaches a phase-scoped logger to the returned context. + +- After `Begin*`, a scoped function MUST use the flow context (`.Ctx()`) as the base context for all subsequent work. + It MAY derive child contexts (timeouts/cancel), but MUST NOT use the incoming context again. +- A scoped function SHOULD call `.Ctx()` directly at each call site instead of storing it in a variable. +- If a scoped function logs, it SHOULD use the flow logger (`.Log()`), or `log.FromContext(.Ctx())`. + It MUST NOT use a logger derived from the pre-scope context. +- Helpers called from a scoped function MUST receive the flow context so logs are attributed correctly. + +--- + +## Root Reconcile + +Scope: the controller-runtime method: + +```go +func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) +``` + +Rules: +- The root Reconcile MUST start with `rf := flow.BeginRootReconcile(ctx)`. +- The root Reconcile MUST NOT use `BeginReconcile` / `OnEnd` inside itself. + If phase logging is needed, split work into non-root reconcile methods and scope those. +- The root Reconcile MUST return via `ToCtrl()` on a `flow.ReconcileOutcome`. +- The root Reconcile MUST NOT log errors returned via `ToCtrl()`. + (controller-runtime logs returned errors; scoped phases log their own errors.) + +Example (illustrative): + +```go +func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { + rf := flow.BeginRootReconcile(ctx) + + // ... orchestration ... 
+ + return rf.Done().ToCtrl() +} +``` + +--- + +## Non-root Reconcile methods + +Any non-root **Reconcile method** that uses flow phase logging MUST: +- use `BeginReconcile` + deferred `OnEnd` per the placement rules, +- return `flow.ReconcileOutcome`. + +Example (illustrative): + +```go +func (r *Reconciler) reconcileFoo(ctx context.Context) (outcome flow.ReconcileOutcome) { + rf := flow.BeginReconcile(ctx, "foo") + defer rf.OnEnd(&outcome) + + // use rf.Ctx() for context in all calls + return rf.Continue() +} +``` + +--- + +## Working with ReconcileOutcome + +`flow.ReconcileOutcome` is the reconciliation control-flow value (continue/done/requeue/error). + +### Constructing outcomes + +A function that returns `flow.ReconcileOutcome` MUST construct it only via `ReconcileFlow` methods: +- `rf.Continue()` +- `rf.Done()` +- `rf.Requeue()` +- `rf.RequeueAfter(d)` +- `rf.Fail(err)` +- `rf.Failf(err, "...")` + +Code MUST NOT: +- construct `flow.ReconcileOutcome{...}` directly, +- create a new failure outcome from an existing outcome’s `Error()`. + +### Handling outcomes + +At each call site that receives a `flow.ReconcileOutcome` that can influence control flow, the receiver MUST do one of: + +- **Immediate check**: check `ShouldReturn()` immediately and return on true. +- **Immediate return**: return the outcome upward without checking locally. +- **Accumulate then handle**: merge multiple outcomes and then check/return. +- **Intentional override (best-effort; RARE)**: intentionally drop the merged outcome. + - This MUST be explicitly justified with a comment. + - If the override drops an error/stop signal, it MUST be made visible (log it). + +### Merging outcomes + +Use `flow.MergeReconciles(...)` or the chainable `outcome.Merge(other)` to combine outcomes when multiple independent steps must all run. + +Reviewability: +- Single-shot merge SHOULD NOT be used (harder to review/extend). + Prefer incremental merging or collect+merge. + +Incremental merge (illustrative): + +```go +outcome = stepA(...) +outcome = outcome.Merge(stepB(...)) +outcome = outcome.Merge(stepC(...)) + +if outcome.ShouldReturn() { + return outcome +} +``` + +Loop + collect + merge (illustrative): + +```go +outcomes := make([]flow.ReconcileOutcome, 0, len(items)) +for i := range items { + item := items[i] + o := r.reconcileOne(ctx, item) + outcomes = append(outcomes, o) +} + +outcome = flow.MergeReconciles(outcomes...) +return outcome +``` + +--- + +## Error enrichment and logging + +### No manual error logging + +- Errors carried via `ReconcileOutcome` in scoped functions are logged by the deferred `rf.OnEnd(&outcome)`. +- Errors returned from the root `Reconcile` are logged by controller-runtime. +- Therefore, reconciliation code MUST NOT log an error and also return it through `ReconcileOutcome`. + +Exception: +- If you intentionally drop an error/stop signal (best-effort override), you MUST log it + (typically at Info level, because the failure is intentionally ignored). + +### Error enrichment + +Enrich errors only via: +- `rf.Failf(err, "")` when creating a failure outcome, and/or +- `outcome.Enrichf("")` before returning an outcome received from a callee. + +A receiver MUST NOT rebuild an outcome from `outcome.Error()` (this breaks error de-duplication across nested phases). + +Forbidden (illustrative): + +```go +// BAD: reconstructs a new failure outcome; breaks single-error logging. 
+return rf.Failf(childOutcome.Error(), "foo") +``` + +Allowed (illustrative): + +```go +// GOOD: preserves the original outcome and enriches its error. +return childOutcome.Enrichf("foo") +``` + +--- + +## EnsureFlow and EnsureOutcome + +`flow.EnsureOutcome` is used by ensure helpers to report: +- an error (if any), +- whether the helper mutated its object, +- and whether the subsequent save must use optimistic locking. + +Any ensure helper MUST follow the scope placement rules with `BeginEnsure`. + +Example (illustrative): + +```go +func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "foo") + defer ef.OnEnd(&outcome) + + changed := false + // mutate obj; set changed=true if needed + // use ef.Ctx() for context if needed + + return ef.Ok().ReportChangedIf(changed) +} +``` + +Rules: +- Ensure helpers MUST NOT log and also return the same error via `EnsureOutcome`. + The deferred `ef.OnEnd(&outcome)` logs errors. +- Code MUST call `RequireOptimisticLock()` only after `ReportChanged()` / `ReportChangedIf(...)` + (calling it earlier is a contract violation and panics). +- To merge multiple sub-ensure results, use `flow.MergeEnsures(...)` or the chainable `outcome.Merge(other)`. + +--- + +## StepFlow + +`StepFlow` is used by steps that return plain `error` but still want standardized phase logging and panic handling. + +Any step function that uses phase logging MUST follow the scope placement rules with `BeginStep`. + +Example (illustrative): + +```go +func computeBar(ctx context.Context, input string) (err error) { + sf := flow.BeginStep(ctx, "compute-bar") + defer sf.OnEnd(&err) + + if input == "" { + return sf.Errf("bad input: %s", input) + } + // use sf.Ctx() for context if needed + return nil +} +``` + +Rules: +- Step functions MUST NOT log and also return the same error. + The deferred `sf.OnEnd(&err)` logs errors. +- To join multiple independent errors, use `flow.MergeSteps(errA, errB, ...)`. diff --git a/.cursor/rules/controller-reconciliation.mdc b/.cursor/rules/controller-reconciliation.mdc new file mode 100644 index 000000000..bc22159a2 --- /dev/null +++ b/.cursor/rules/controller-reconciliation.mdc @@ -0,0 +1,415 @@ +--- +description: Rules for Reconcile method orchestration in reconciler.go: file layout, call-graph ordering, patch sequencing, determinism, and reconciliation patterns. Apply when editing reconciler*.go Reconcile/reconcile* methods, and when planning reconciliation structure or patch ordering. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/reconciler*.go +alwaysApply: false +--- + +# Controller reconciliation orchestration (Reconcile methods) + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +This document complements `controller-reconcile-helper*.mdc` and defines rules that are **owned by Reconcile methods** +(the orchestration layer), not by helper categories and not by `lib/go/common/reconciliation/flow` usage. + +--- + +## Terminology + +> Terms like “main resource”, “status subresource”, and patch-domain boundaries are defined in +> `controller-reconcile-helper*.mdc`. +> This document defines only orchestration-specific terminology. 
+ +- **Reconcile method**: any function/method named `Reconcile*` / `reconcile*` that orchestrates reconciliation + (root entrypoint, group reconciler, per-object reconciler, etc.). +- **Patch request**: one API write that persists drift for a single patch domain + (typically executed by a `patch*` / `patch*Status` helper). +- **Patch base (`base`)**: the `DeepCopy()` snapshot used as a diff reference for **one** patch request. + +## `reconciler.go` layout and sorting (MUST) + +This section defines the canonical ordering inside **`reconciler.go`** to keep the file readable and reviewable. +It is a *layout* convention (not a behavioral contract), but it MUST be followed for consistency. + +### High-level rule (MUST) + +- **`reconciler.go`** MUST be organized top-to-bottom in **call-graph order**, keeping helpers from **Non-I/O helper categories** close to + the **Reconcile method** that primarily uses/owns them. +- The file SHOULD use explicit section comments to make boundaries obvious, e.g.: + - `// --- Reconcile: ` + - `// --- Helpers: (Non-I/O helper categories)` + - `// --- Single-call I/O helper categories` + +### 1. Wiring / construction (MUST) + +- `type Reconciler { ... }` MUST be first (top of file). +- `NewReconciler(...)` MUST be immediately after `type Reconciler { ... }`. +- `NewReconciler` MUST remain wiring/DI only (no Kubernetes API I/O). + +### 2. Reconcile methods in call-graph order (MUST) + +- The controller-runtime `Reconcile(ctx, req)` MUST appear before any other `reconcile*` / `Reconcile*` methods. +- Other **Reconcile methods** MUST be declared in the order they are called. + - If `Reconcile` calls `reconcileA(...)` and then `reconcileB(...)`, `reconcileA` MUST appear before `reconcileB`. + - Sibling reconciles (called from the same parent) SHOULD appear in the same order as they appear at the call site. + +### 3. Per-reconcile helper blocks (non-I/O) (MUST) + +Immediately after each **Reconcile method**, **`reconciler.go`** MUST place the helpers from **Non-I/O helper categories** that are used by +that method (excluding helpers from **Single-call I/O helper categories**), in this order: + +1) **EnsureReconcileHelper** helpers (`ensure*`) +2) **ComputeReconcileHelper** + **IsInSyncReconcileHelper** + **ApplyReconcileHelper** (grouped per entity/artifact; see below) +3) **ConstructionReconcileHelper** helpers (`new*` / `build*` / `make*` / `compose*`) + +#### ComputeReconcileHelper + IsInSyncReconcileHelper + ApplyReconcileHelper grouping rule (MUST) + +- For one logical entity/artifact, `compute*`, `is*InSync*`, and `apply*` helpers MUST be kept adjacent as a group. +- Inside such a group, the order MUST be: + 1. `computeIntended*` (if any) + 2. `computeActual*` (if any) + 3. `computeTarget*` and/or `compute*Report` + 4. `is*InSync*` + 5. `apply*` + +Notes: +- Use `is*InSync*` naming (not “up-to-date”) per `controller-reconcile-helper-is-in-sync.mdc`. +- “Construction” helpers in this per-reconcile block MUST be local to the same reconcile step; if they become shared, + move them to the nearest owning reconcile step (see next section) or a shared block. + +#### Shared helper placement (SHOULD) + +- If a non-I/O helper is used by more than one **Reconcile method**, it SHOULD be placed under the nearest + **owning** reconcile step (the closest common parent in the call graph that conceptually owns the helper). +- If there is no clear owner, it MAY be placed into a small `// Shared non-I/O helpers` block immediately above + the I/O helpers section. 
+- Helpers MUST NOT be duplicated to satisfy locality. + +### 4. I/O helpers at the end (MUST) + +- All helpers from **Single-call I/O helper categories** (**GetReconcileHelper**, **CreateReconcileHelper**, **PatchReconcileHelper**, **DeleteReconcileHelper**) MUST be the last section in **`reconciler.go`**. +- These helpers MUST be grouped and sorted as follows: + 1) **Group by object kind/type** (one group per kind). + 2) Inside a kind group, order MUST be: **GetReconcileHelper → CreateReconcileHelper → PatchReconcileHelper → DeleteReconcileHelper**. + 3) Kind-group ordering MUST be: + 1. the **primary reconciled kind** first, + 2. then other kinds from this repository API (types under `api/v*/`) in **alphabetical order** by established kind name (short kind name when applicable), + 3. then kinds from other APIs in **alphabetical order** by kind name. + +## Optional scalar fields (optional `*T`) (MUST) + +Kubernetes APIs sometimes encode optionality via `json:",omitempty"` for fields whose underlying value is a scalar `T` +(non-nil-able, e.g. `bool`, numbers, `string`, small structs). + +If the API represents such a field as `*T` to preserve the distinction between "unset" and "set to the zero value", +controller code MUST keep the same representation across the reconciliation pipeline: + - store it as `*T` in controller POV state artifacts (**intended**, **actual**, **target**, **report**, and derived structs), + - pass it between functions as `*T`, + - return it from functions as `*T`. +- Comparisons SHOULD use `ptr.Equal(a, b)`. +- Writes SHOULD assign the pointer directly; assigning `nil` MUST represent "unset". +- Exception: if a function parameter is an explicitly required input (the function cannot be called without a value), that parameter MAY be `T` by value. + +Definitions: +- **non-nil-able** scalar types include: `bool`, numeric types, `string`, structs, arrays, `time.Duration`, `metav1.Duration`, `resource.Quantity`, etc. +- **nil-able** types include: pointers, maps, slices, interfaces, channels, functions. + +Example (illustrative): +```go +import "k8s.io/utils/ptr" + +// Foo.Spec.TimeoutSeconds is `*int32` with `json:",omitempty"`. +type TargetFooSpec struct { + TimeoutSeconds *int32 +} + +func applyFooSpec(obj *v1alpha1.Foo, target *int32) { + if !ptr.Equal(obj.Spec.TimeoutSeconds, target) { + obj.Spec.TimeoutSeconds = target + } +} +``` + +## State variable naming conventions (MUST) + +Variables that hold controller POV state artifacts MUST be named after the state they contain +(`intended`, `actual`, `target`, `report`, etc.). + +### Canonical convention (MUST) + +The canonical naming style for state variables is **state-prefix**: + +- `` (lowerCamelCase) + - examples: `intendedLabels`, `actualPods`, `targetMain`, `targetStatus`, `reportConditions` + +This rule applies even when the variable type already contains the state word (types like `ActualFoo`, `TargetBar`, etc.): +the *variable* name MUST still carry the state word for readability. + +### Alternative convention (MAY) + +The **state-suffix** style MAY be used in legacy code or when it reads strictly better in a very small scope: + +- `` + - examples: `labelsIntended`, `podsActual`, `conditionsReport` + +Constraints (MUST): +- A function MUST NOT mix state-prefix and state-suffix styles in the same scope. +- Regardless of style, a state variable name MUST contain the state word (`intended` / `actual` / `target` / `report`). 
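+
+Illustrative fragment (hypothetical compute helpers and inputs), using the canonical state-prefix style:
+
+```go
+intendedLabels := computeIntendedLabels(obj)                    // intended
+actualReplicas := computeActualReplicas(children)               // actual
+targetMain := computeTargetMain(intendedLabels, actualReplicas) // target (main patch domain)
+reportConditions := computeConditionsReport(actualReplicas)     // report (published status)
+```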
+ +### Terminology guardrails (MUST NOT) + +- New code MUST NOT use `desired` as a controller POV state name (use `intended` / `target` / `report`). + +### Default shortening (MAY) + +When there is exactly one artifact of a given state in a tight scope, the artifact part MAY be omitted: +`intended`, `actual`, `target`, `report`. + +### Target split naming (SHOULD) + +This rule applies only when the same target artifact (same conceptual value, same name) exists in both patch domains. + +When such a **target** artifact is split by patch domain, variables SHOULD be named: +- `targetMain` for the **main patch domain**, +- `targetStatus` for the **status patch domain** (**controller-owned state** to persist). + +Published status output SHOULD be named as `report...` (it SHOULD NOT be stored in a `targetStatus...` variable). + +### Suggested declaration order (SHOULD) + +When declaring several pipeline variables together, the order SHOULD follow the pipeline: +`intended` → `actual` → `target` → `report`. + +## controller-runtime split client & determinism (MUST) + +### Background + +For performance, controllers SHOULD use the default `client.Client` provided by controller-runtime. +That default client behaves like a **split client**: + +- reads (`Get`/`List`) are served from a local cache; +- writes (`Create`/`Patch`/`Update`/`Delete`) go directly to the API server; +- the cache is **eventually consistent** and is not guaranteed to be invalidated immediately after a write. + +You can mentally model this as having a local, slightly delayed copy of the cluster state. + +### Consequences for reconciliation code (MUST) + +- Reconcile code MUST assume cache reads can be stale relative to our own recent writes. +- Reconcile code MUST NOT rely on read-after-write consistency through the cached client for correctness. +- Reconcile code MUST be deterministic: + - if the same Reconcile method re-runs before the cache catches up, it MUST compute the same intended result and produce + the same idempotent writes (or harmless repeats). + +### Non-determinism hazards & required protections (MUST) + +If you need to perform something non-deterministic (random IDs, timestamps, unstable naming, etc.), you MUST introduce a +stabilizing mechanism so retries do not diverge. + +Examples of required protections: + +1. **Updating an object with a deterministic identity** + - Prefer a patch strategy with an **optimistic lock** and re-run reconciliation on conflict. +2. **Creating an object** + - If the name is deterministic, repeated `Create` is safe: retries converge via `AlreadyExists`. + - If the name is not deterministic, retries can create duplicates (BAD). + - You MUST make naming deterministic, or + - persist the chosen name (or parameters required to compute it deterministically) in a stable place + (commonly: parent status) before creating. + +## Core invariants for Reconcile methods (MUST) + +### Phases for Reconcile methods (MUST) + +- Any **non-root Reconcile method** MUST start a **reconcile phase scope** (`flow.BeginReconcile`) and return **ReconcileOutcome** (in code: `flow.ReconcileOutcome`). +- The **root Reconcile** MUST use `flow.BeginRootReconcile(ctx)` (no phase scope) and return via `outcome.ToCtrl()`. +- See: `controller-reconciliation-flow.mdc`. 
+ +### One Reconcile method = one reconciliation pattern (MUST) + +- A single Reconcile method MUST choose exactly **one** pattern from **“Reconciliation patterns”** below + and apply it consistently for all changes it performs (across any domains it touches). +- A single Reconcile method MUST NOT mix patterns within itself. +- If different parts of reconciliation naturally need different patterns, split the logic into **multiple** + Reconcile methods (e.g., `reconcileMain(...)` and `reconcileStatus(...)`), each with its own pattern. + +### Pattern documentation is mandatory (MUST) + +- The selected pattern MUST be documented in the GoDoc comment of the Reconcile method entrypoint using + a single stable style with exact key and order: + + - `Reconcile pattern:` `` + + Example (required format): + - `// Reconcile pattern: Conditional desired evaluation` + +--- + +## Patch sequencing policy + +Reconcile methods MUST be the only place that decides: +- whether a patch request is needed; +- the order of multiple patch requests (including main vs status sequencing); +- how outcomes/errors from multiple sub-steps are aggregated; +- where child reconciliation calls are placed relative to patching. + +Single-call API writes may be delegated to helpers, but **the sequencing policy lives here**. + +--- + +## DeepCopy & patch-base rules + +### DeepCopy is per patch request + +- For every patch request, the Reconcile method MUST create **exactly one** + patch base via `obj.DeepCopy()` **immediately before** the object is mutated in that **patch domain** (and then used for the subsequent patch request). +- The patch base variable name MUST be `base`. + +If a Reconcile method performs multiple patch requests: + +- it MUST create multiple `base` objects (one per patch request); +- each `base` MUST be taken from the object state **immediately before** that **patch domain** is mutated for that specific patch request; +- after patch #1 updates the object, patch #2 MUST take `base` from the updated object + to preserve correct diff and `resourceVersion`. + +Go note (no extra lexical scopes required): + +```go +var base *ObjT + + +base = obj.DeepCopy() + +// ApplyReconcileHelpers (or EnsureReconcileHelpers) bring `obj` to the intended in-memory state for this patch domain. +applyMainLabels(obj, targetLabels) +applyMainSpec(obj, targetSpec) + +if err := patchObj(ctx, obj, base); err != nil { + return err +} + +base = obj.DeepCopy() + +// ApplyReconcileHelpers (or EnsureReconcileHelpers) bring `obj` to the intended in-memory state for this patch domain. +applyStatusReport(obj, report) +applyStatusConditions(obj, reportConditions) + +if err := patchObjStatus(ctx, obj, base); err != nil { + return err +} +``` + +### `base` is a read-only diff reference (MUST) + +- Reconcile methods MUST NOT mutate `base` + (directly or through map/slice aliasing). + +--- + +## Object identity & list reconciliation (MUST) + +### Lists MUST be reconciled via pointers to list items (MUST) + +When reconciling objects from a `List`, you MUST take pointers to the actual list elements. + +GOOD: +```go +for i := range list.Items { + obj := &list.Items[i] +} +``` + +BAD: +```go +for _, obj := range list.Items { +} +``` + +### Local slices after Create/Patch (MUST) + +If a Reconcile method creates objects and keeps a local slice/list for subsequent logic, +it MUST append/insert the created objects in their final in-memory state +(including updated `resourceVersion`, defaults, and generated fields). 
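+
+A minimal sketch (hypothetical construction helper and child kind): the instance passed to `Create` is updated by the API server, and that same instance is appended to the local slice:
+
+```go
+children := make([]*v1alpha1.Foo, 0, len(list.Items)+1)
+for i := range list.Items {
+	children = append(children, &list.Items[i]) // pointers into the listed items
+}
+
+child := buildFoo(parent) // hypothetical ConstructionReconcileHelper
+if err := r.client.Create(ctx, child); err != nil {
+	return err
+}
+// child now carries server-assigned fields (resourceVersion, UID, defaults);
+// append this exact instance, not a pre-Create copy.
+children = append(children, child)
+```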
+ +--- + +## Reconciliation patterns (MUST) + +### Pattern selection rule (MUST) + +- Each **Reconcile method** MUST choose exactly one pattern. +- The choice MUST be documented in GoDoc. + +### Pattern 1: In-place reconciliation + +ObjCopy → Ensure #1 → Ensure #2 → ... → if changed → Patch + +Use when reconciliation is naturally step-by-step and imperative. + +### Pattern 2 (default): Target-state driven + +ComputeTarget #1 → ComputeTarget #2 → ... → if !all isInSync → ObjCopy → Apply those not InSync → Patch + +Use when computing the target is cheap/necessary and the up-to-date check naturally depends on the computed target. + +### Pattern 3: Conditional desired evaluation + +if ! isInSync → ObjCopy → Ensure OR (ComputeTarget + Apply) → Patch + +Use when it is easy to check up-to-date equality without computing state. + +### Pattern 4: Pure orchestration + +reconcile #1 → reconcile #2 → ... → reconcile #N + +Use when the **Reconcile method** is a thin orchestrator that delegates all work to other **Reconcile methods**, and does not implement **domain/business** logic itself (except basic object loading and delegation). + +--- + +## Mixing patterns (FORBIDDEN) (MUST) + +Forbidden within one Reconcile method: + +- main uses Pattern 3, status uses Pattern 1; +- main uses Pattern 2, status uses Pattern 3. + +Allowed: + +- same pattern for all domains; +- split into multiple Reconcile methods, each with its own pattern. + +--- + +## Child resources and decomposition (MUST) + +- Child resources SHOULD be reconciled in separate Reconcile methods: + - group reconciler (list + ordering); + - per-object reconciler. +- Prefer passing already loaded objects. +- Always pass pointers from list iteration. +- Caller owns ordering relative to patching. + +--- + +## Business logic failures & requeue policy (MUST) + +- Business-logic blocking conditions MUST return an error. +- Exception: if unblocked by watched resources, returning “done / no-op” is acceptable. +- If unblocked by **unwatched** events: + - return an error, or + - requeue only with clear justification and visible status signal. + +--- + +## objutilv1 usage (MUST) + +All work with: + +- labels, +- annotations, +- finalizers, +- owner references, +- conditions + +MUST go through `objutilv1`, imported as `obju`. + +Manual manipulation is forbidden unless `objutilv1` is extended. diff --git a/.cursor/rules/controller-terminology.mdc b/.cursor/rules/controller-terminology.mdc new file mode 100644 index 000000000..dfd22822b --- /dev/null +++ b/.cursor/rules/controller-terminology.mdc @@ -0,0 +1,748 @@ +--- +description: Shared controller terminology and definitions used across controller rule files. Apply when editing controller code under images/controller/internal/controllers/, and when reasoning/planning/answering questions that use these terms (controller.go/predicates.go/reconciler.go, patch domains, intended/actual/target/report). Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: images/controller/internal/controllers/**/*.go,.cursor/rules/controller*.mdc +alwaysApply: false +--- + +See `rfc-like-mdc.mdc` for normative keywords (BCP 14 / RFC 2119 / RFC 8174) and general .mdc writing conventions. + +# Controller terminology + +This document defines shared terminology used across controller rule files in this repository. 
+All other controller `.mdc` documents SHOULD reference this file instead of re-defining the same terms.
+
+---
+
+## Codebase structure terms
+
+### **controller package**
+A **controller package** is a Go package under `images/controller/internal/controllers/<name>/...` that defines one controller-runtime controller, and contains:
+
+- **`controller.go`** (**Wiring-only** setup)
+- **`predicates.go`** (predicate/**filter** implementations; required only when the package uses controller-runtime **predicates**/**filters**)
+- **`reconciler.go`** (**Reconciliation business logic**)
+- **`reconciler_test.go`** (tests) and/or other `*_test.go` files
+
+### **`controller.go`**
+**`controller.go`** is the **Wiring-only** setup file of a **controller package**.
+
+- It owns controller-runtime builder configuration (**watches**, options, **predicates**).
+- It constructs the reconciler, registers **runnables** on the **manager** (`mgr.Add(...)`),
+  configures **watches** via the **builder chain**, and registers field indexes via the **manager**’s field indexer.
+
+### **`predicates.go`**
+**`predicates.go`** is the file that owns controller-runtime **predicate**/**filter** implementations for a **controller package**.
+
+- It contains only **mechanical** change detection (no **I/O**, no **domain/business** decisions).
+- It is referenced from the **builder chain** in **`controller.go`** via `builder.WithPredicates(Predicates()...)`.
+- It is optional, but when the controller uses any controller-runtime **predicate**/**filter**, **`predicates.go`** is required.
+
+### **`reconciler.go`**
+**`reconciler.go`** is the file that owns all **Reconciliation business logic** for the **controller package**, including:
+
+- the controller-runtime `Reconcile(...)` method, and
+- other internal **Reconcile methods** and **ReconcileHelpers**.
+
+### **`reconciler_test.go`**
+**`reconciler_test.go`** contains tests for reconciliation behavior and edge cases.
+
+---
+
+## controller-runtime wiring terms
+
+### **Entrypoint**
+The **Entrypoint** of a **controller package** is the function:
+
+- **`BuildController(mgr manager.Manager) error`**
+
+It is the only wiring entrypoint that registers the controller with the **manager**.
+
+### **controller name**
+A **controller name** is the stable string used in `.Named(...)` on the controller-runtime builder.
+In this codebase it is defined as a package-level `const <Name> = "<controller-name>"`.
+
+**controller name** conventions (this repository) (MUST):
+
+- The `<controller-name>` value MUST be `kebab-case` and MUST match:
+  - `^[a-z0-9]+(-[a-z0-9]+)*$`
+- The `<controller-name>` value MUST NOT contain `.` (dot), `_` (underscore), or whitespace.
+- The `<controller-name>` value MUST be stable over time (treat it as a public identifier used in logs/metrics).
+- The `<controller-name>` value MUST be unique among all controllers registered on the same **manager**.
+- The suffix "-controller" MAY be appended.
+  - It SHOULD be appended when omitting it would create ambiguity (e.g., name collision risk with another **controller name**, or confusion with a non-controller component).
+  - It SHOULD NOT be appended when the shorter name is already unambiguous and collision-free in the same binary.
+
+### **short kind name**
+A **short kind name** is a stable, codebase-established abbreviation used in Go symbol names that refer to a Kubernetes **object** kind
+(for example, controller helper names like `get<Kind>` and `patch<Kind>Status`, and predicate-set functions like `Predicates<Kind>`).
+
+In this repository:
+
+- When `<Kind>` refers to a kind defined in this repository’s API (types under `api/v*/`), `<Kind>` MUST use the **short kind name** (not the full kind name).
+- Short kind names MUST be stable and MUST NOT be invented ad-hoc.
+
+Canonical short kind names for this repository’s API kinds:
+
+- `ReplicatedVolume` → `RV`
+- `ReplicatedVolumeReplica` → `RVR`
+- `ReplicatedVolumeAttachment` → `RVA`
+- `ReplicatedStorageClass` → `RSC`
+- `ReplicatedStoragePool` → `RSP`
+
+### **manager**
+The **manager** is the controller-runtime **`manager.Manager`** instance.
+
+**Manager-owned dependencies** are things obtained from the **manager** for wiring and dependency injection, e.g.:
+
+- `mgr.GetClient()`
+- `mgr.GetScheme()`
+- `mgr.GetCache()`
+- `mgr.GetEventRecorderFor(...)`
+
+### **builder chain**
+A **builder chain** is the fluent controller-runtime builder sequence that starts with:
+
+- `builder.ControllerManagedBy(mgr)`
+
+and ends with:
+
+- `.Complete(rec)`
+
+In this codebase, the **builder chain** implies a single fluent chain (not multiple partial builders).
+
+### **runnable**
+A **runnable** is a component registered on the **manager** via `mgr.Add(...)` that runs in the **manager** lifecycle.
+Common interfaces:
+
+- **`manager.Runnable`**
+- **`manager.LeaderElectionRunnable`**
+
+**runnables** and **sources** are wiring/infra components; they are not reconcilers and not **ReconcileHelpers**.
+
+### **source** / **watch**
+A **watch** is a controller-runtime configuration that causes **reconcile requests** to be enqueued on events.
+
+A **source** is the event source feeding a **watch** (e.g., a **`source.Source`** or **`source.Kind(...)`**), but in this codebase “**watch**” is the preferred term at the builder configuration level.
+
+Common **watch** styles:
+
+- **OwnerRef-based watch**: **watch** **child resources** owned by the **primary resource** (`Owns(...)`).
+- **Index/field-based watch**: **watch** **objects** and map them to **reconcile requests** via a mapping function (`Watches(..., handler.EnqueueRequestsFromMapFunc(...))`), often supported by a field index.
+
+### **predicate** / **filter**
+A **predicate** (a.k.a. **filter**) is a controller-runtime **predicate** used to decide whether an event should enqueue a **reconcile request**.
+
+In this codebase, **predicates** are intended for **mechanical** change detection (see: “Kubernetes metadata terminology used by **predicates**”).
+
+---
+
+## Reconciliation layering terms
+
+### **Wiring-only** vs **Reconciliation business logic**
+- **Wiring-only**: configuration/registration code (builder/**watches**/options/**runnables**/**predicates** construction). No Kubernetes API reads/writes beyond **manager** wiring.
+- **Reconciliation business logic** (a.k.a. **domain/business** logic): any logic that computes **intended**, observes **actual**, decides/applies **target**, computes **report** (and persists **controller-owned state** when needed), performs orchestration, decides patch sequencing, or writes to the API server. Lives in **`reconciler.go`**.
+
+### **mechanical** vs **domain/business**
+A step is **mechanical** when it is a straightforward technical operation that does not encode **domain/business** policy (e.g., “compare **Generation**”, “copy desired labels into obj”, “execute one Patch call”).
+
+A step is **domain/business** when it contains policy decisions (state machines, placement/scheduling decisions, validation of domain rules, **condition** reasoning beyond simple comparisons).
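+
+Non-normative example (mechanical change detection). A minimal sketch of a purely **mechanical** **predicate** that only compares **Generation**; it performs no **I/O** and makes no **domain/business** decisions. The function name follows the `builder.WithPredicates(Predicates()...)` wiring mentioned above; everything else is illustrative:
+
+```go
+import (
+	"sigs.k8s.io/controller-runtime/pkg/event"
+	"sigs.k8s.io/controller-runtime/pkg/predicate"
+)
+
+// Predicates returns the predicate set referenced from the builder chain in controller.go.
+// Mechanical only: enqueue an update when metadata.generation changed.
+func Predicates() []predicate.Predicate {
+	return []predicate.Predicate{
+		predicate.Funcs{
+			UpdateFunc: func(e event.UpdateEvent) bool {
+				return e.ObjectOld.GetGeneration() != e.ObjectNew.GetGeneration()
+			},
+		},
+	}
+}
+```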
+ +### **reconcile loop** +The **reconcile loop** is the overall process where events cause controller-runtime to call `Reconcile(ctx, req)` for a reconcile request. + +A **reconcile request** is `ctrl.Request` (or `reconcile.Request`) carrying `NamespacedName`. + +--- + +## **Reconcile method** terms + +### **Reconcile method** +A **Reconcile method** is any function/method whose name matches: + +- `Reconcile(...)` (controller-runtime interface method), or +- `reconcile*` / `Reconcile*` (internal orchestration methods) + +**Reconcile methods** own orchestration: sequencing, retries/requeues, **Patch ordering**, error context, and **child resource** ordering. + +### **root Reconcile** +The **root Reconcile** is the controller-runtime method: + +- `func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error)` + +### **non-root Reconcile method** +Any other `reconcile*` / `Reconcile*` method called by the root Reconcile. +These are used to split orchestration into readable sub-steps (root, main, status, child groups, per-child, etc.). + +--- + +## **ReconcileHelper** terms + +### **ReconcileHelper** +A **ReconcileHelper** is a helper function/method used by **Reconcile methods** whose name matches one of the **Helper categories** (below). + +**ReconcileHelpers** exist to make behavior reviewable by name: the prefix implies allowed **I/O** and mutation. + +### **Helper categories** +**Helper categories** are defined by name prefix/pattern: + +- **ComputeReconcileHelper**: `compute*` / `Compute*` +- **ConstructionReconcileHelper**: `new*` / `build*` / `make*` / `compose*` +- **IsInSyncReconcileHelper**: `is*InSync*` / `Is*InSync*` +- **ApplyReconcileHelper**: `apply*` / `Apply*` +- **EnsureReconcileHelper**: `ensure*` / `Ensure*` +- **GetReconcileHelper**: `get*` / `Get*` +- **CreateReconcileHelper**: `create*` / `Create*` +- **DeleteReconcileHelper**: `delete*` / `Delete*` +- **PatchReconcileHelper**: `patch*` / `Patch*` (including `patch*Status` variants) + +### **Non-I/O helper categories** +A helper is **non-I/O** when it performs no Kubernetes API calls and no other external I/O. +Note: **non-I/O** helpers may still mutate their allowed **mutation target** (e.g., **ApplyReconcileHelpers** / **EnsureReconcileHelpers**). +In this codebase, these **Helper categories** are **non-I/O** by definition: + +- **ComputeReconcileHelper** +- **ConstructionReconcileHelper** +- **IsInSyncReconcileHelper** +- **ApplyReconcileHelper** +- **EnsureReconcileHelper** + +### **Single-call I/O helper categories** +A helper is a **single-call I/O helper** when it performs **at most one** **Kubernetes API I/O** request (read or write). +In this codebase, these **Helper categories** are single-call **I/O** helpers by definition: + +- **GetReconcileHelper** → at most one `Get(...)` or `List(...)` +- **CreateReconcileHelper** → exactly one `Create(...)` +- **DeleteReconcileHelper** → exactly one `Delete(...)` +- **PatchReconcileHelper** → exactly one **patch request** (`Patch(...)` OR `Status().Patch(...)`) + +--- + +## State terminology: Kubernetes POV vs Controller POV + +This section defines **state concepts** used in controller code and helper semantics. +It separates terms by two perspectives: **Kubernetes POV** (standard API model) and **Controller POV** (internal reconciliation model). + +### Kubernetes POV (standard API model) + +**Kubernetes POV** describes state **as represented by a Kubernetes API object** and its subresources. 
+This is the commonly accepted model used across built-in APIs and documentation. + +- **desired state** (Kubernetes POV) + The declared intent stored in the object, conventionally in **`spec`**. + +- **observed state** (Kubernetes POV) + The most recently observed / reported state, conventionally in **`status`** (typically written by controllers, not by users). + +- **current state** (Kubernetes POV) + The real state of the cluster/world “right now” that controllers observe and try to move closer to the **desired state**. + (This is not a field; it’s the external reality that **observed state** reports about.) + +> **Important:** Kubernetes **observed state** means the object’s **`status`** field. +> In **Controller POV** below, the controller also “observes” reality at runtime, but that runtime snapshot is called **actual**. + +--- + +### Controller POV (internal reconciliation model) + +**Controller POV** describes state as data flowing through the controller while it reads inputs, observes reality, +decides actions, and publishes results. These terms apply to variables, helper inputs/outputs, and helper semantics. + +#### Status roles (location vs role) + +Although Kubernetes stores controller output in **observed state** (`status`), **not everything in `status` plays the same role** for the controller. + +Within this codebase we distinguish two roles that may both live under `.status`: + +- **controller-owned state** (persisted decisions / memory) + Values **chosen by the controller** that must remain stable across reconciliations (e.g., allocated IDs, selected bindings, + chosen placements, step/phase markers, “locked-in” decisions). + These fields may be read back by the controller as *inputs* to keep behavior stable. + +- **report/observations** (published report) + Progress, conditions, messages, timestamps, and selected observed facts intended for users/other controllers. + These fields are **output-only** and should **not** be used as “intent inputs”. + +> Rule of thumb: **Only controller-owned state may be fed back** as commitment/intent inputs into **intended**/**target**. +> **report/observations** MAY be read as observations/constraints (i.e., as **actual**) when deciding **target**, but they MUST NOT silently become a source of **desired state**. + +#### Terms + +- **intended** (effective desired / effective goal state) + The controller’s computed effective goal state to converge to (“where we need to end up”), + after interpreting inputs and applying stabilization (defaults, normalization, canonicalization). + + **intended** is built from read inputs, which may include: + - the reconciled object’s **desired state** (`spec`), + - other Kubernetes objects the controller treats as **intent inputs**, + - controller-owned persisted decisions/memory (i.e., **controller-owned state**) stored in **observed state** (`status`) for stability/coordination. + + **intended** answers: “What is our effective goal, given inputs + what we already committed to?” + + **Do not confuse:** pulling arbitrary **observed state** (`status`) into **intended** is discouraged. + Only **controller-owned state** qualifies as a feedback input. + +- **actual** (controller observations snapshot) + What the controller observes/reads at runtime from Kubernetes and/or external systems “right now” + for decision making. This is a snapshot and may be partial/stale. + + **actual** is **not** Kubernetes POV **observed state** (`status`). 
+ **actual** answers: “What do we currently see?” + +- **target** (decision to enforce in this reconciliation step) + The controller’s chosen enforceable goal/decision for this reconciliation step: + what it will try to make true by performing actions (Kubernetes changes and/or external side effects). + Derived from **intended** + **actual**, possibly constrained by reality/capabilities/progress. + + Some **target** decisions may be persisted as **controller-owned state** in **observed state** (`status`) for stability/coordination. + +- **report** (published controller report) + What the controller intends to publish back into Kubernetes as its latest progress snapshot, + typically written to the reconciled object’s **observed state** (`status`). + + **report** is **not** the same as **actual**: + **actual** is what the controller reads; **report** is what the controller writes. + +- **computed value** (auxiliary derived value) + Any additional derived/intermediate value used by the controller that is not itself **intended**, **actual**, **target**, or **report** + (e.g., diffs/patches, hashes, intermediate graphs, scoring results, debug/trace data). + +#### Other objects: **intent inputs** vs **observations/constraints** + +Controllers often depend on multiple Kubernetes objects and/or external systems: + +- **intent inputs** → contribute to **intended** + Objects that represent desired configuration/policy (e.g., policy/config resources, “profile” objects, templates). + +- **observations/constraints** → contribute to **actual** + Existing resources, external system state, and other controllers’ **report** / **observed state**. + +> If an object is used “because it describes what the user wants” → treat it as an **intent input**. +> If it is used “because it reflects what exists or what is allowed right now” → treat it as an **observation/constraint**. + +#### Reconciliation data flow (reference pipeline) + +A typical reconciliation step follows this conceptual flow: + +1) Read inputs (including the reconciled object **desired state** (`spec`), relevant **intent inputs**, and persisted **controller-owned state** in **observed state** (`status`)). +2) Compute **intended**. +3) Observe **actual**. +4) Decide **target**. +5) Execute actions / side effects (apply toward **target**). +6) Compute **report**. +7) Write **observed state** (`status`) (publish **report** and persist **controller-owned state** when needed). + +--- + +### **target main** vs **target status** + +When **target** values are used for later `is*InSync` and/or `apply*`, **target** MUST be separated by **patch domain**: + +- **target main**: **target** values for the **main patch domain** (metadata/spec/non-status) +- **target status**: **target** values for the **status patch domain** that represent **controller-owned state** to persist + +A “mixed target” that intermingles **main patch domain** + **status patch domain** into one value is considered an invalid shape for target-driven apply/isInSync flows in this codebase. + +**report** is computed separately (often also written under the **status patch domain**) and should not be mixed into **target status**. + +--- + +## Patch and persistence terminology + +### **patch domain** +A **patch domain** is the part of a Kubernetes **resource**/**object** that is persisted by one **patch request**. + +In this codebase, patching is treated as two **patch domains** for a **resource**/**object** (when both are applicable). +Other subresources are out of scope for these rules. + +1. 
**main patch domain**: + - metadata (except status-only fields), + - spec, + - any non-status fields of the **resource**/**object** + +2. **status patch domain**: + - `.status` (including **`.status.conditions`**, etc.) + +### **patch request** +A **patch request** is a single Kubernetes API write that persists drift for one **patch domain**, typically: + +- **main patch domain**: `client.Patch(ctx, obj, ...)` +- **status patch domain**: `client.Status().Patch(ctx, obj, ...)` + +### **patch base** (**`base`**) +A **patch base** (variable name: **`base`**) is the `DeepCopy()` snapshot used as the diff reference for one **patch request**. + +Properties: +- **`base`** is taken immediately before mutating the **resource**/**object** for that **patch domain**, + and used as the diff reference for the subsequent **patch request**. +- **`base`** is treated as a read-only diff reference. + +### **DeepCopy** +**DeepCopy** refers to calling the generated Kubernetes API `DeepCopy()` (or equivalent deep clone) on an API **resource**/**object**. + +In this codebase: +- **DeepCopy** is used primarily to produce **`base`** for patch diffing. +- **DeepCopy** is forbidden inside most non-orchestration **ReconcileHelpers** (category-specific rules apply). + +### **Patch ordering** +**Patch ordering** is the decision of: +- whether to patch at all, +- and if multiple **patch requests** exist, in what sequence they are executed (**main patch domain** vs **status patch domain**, **child resources** ordering, etc.). + +**Patch ordering** is owned by **Reconcile methods**, not **ReconcileHelpers**. + +### **patch strategy** / **patch type decision** +A **patch strategy** (or **patch type decision**) is a choice about how the patch should be executed (e.g., “plain merge patch” vs “merge patch with optimistic lock”). + +In this codebase: +- **PatchReconcileHelpers** do not decide the **patch strategy**; they accept an explicit `optimisticLock` input and execute accordingly. + +### **Optimistic locking** (**optimistic lock**) +**Optimistic locking** is the patch mode that causes the API write to fail on concurrent modification conflicts (i.e., it requires the object’s version to match). + +### **optimistic lock requirement** +An **optimistic lock requirement** is a decision that the subsequent save of a changed **resource**/**object** MUST use **Optimistic locking** semantics. + +In this codebase: +- **EnsureReconcileHelpers** are the primary source of “optimistic lock required” signaling via **`flow.EnsureOutcome`**. + +--- + +## Determinism / purity terminology + +### **deterministic** +A function/step is **deterministic** when, for the same explicit inputs (and same allowed internal deterministic state), it produces: + +- the same outputs, and/or +- the same in-memory mutations, and/or +- the same patch payload (for **I/O** helpers) + +Determinism requires **stable ordering** when order affects the serialized **resource**/**object** state. + +### **stable ordering** / **canonical form** +- **stable ordering**: any ordered output derived from an unordered source (maps/sets) must be sorted. +- **canonical form**: a normalized representation (sorted slices, normalized strings, consistent defaults) that avoids “equivalent but different” states. + +### **patch churn** +**patch churn** is repeated, unnecessary patching caused by: +- **nondeterminism** in ordering, +- equivalent-but-different representations, +- or avoidable drift that flips back and forth. 
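+
+Non-normative example (stable ordering / canonical form). A sketch of how an ordered output derived from an unordered source is sorted to avoid **patch churn**; `formatSelectedNodes` is a hypothetical helper used only for illustration:
+
+```go
+import (
+	"fmt"
+	"maps"
+	"slices"
+	"strings"
+)
+
+// formatSelectedNodes renders an unordered set of node names in canonical form
+// (e.g., for a status message or an annotation value). Sorting keeps equivalent
+// inputs from serializing differently between reconciliations, which would
+// otherwise flip the persisted object back and forth.
+func formatSelectedNodes(selected map[string]struct{}) string {
+	nodes := slices.Sorted(maps.Keys(selected)) // stable ordering from an unordered source
+	return fmt.Sprintf("nodes=%s", strings.Join(nodes, ","))
+}
+```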
+ +### **I/O** +**I/O** is any interaction with systems outside of pure in-memory computation, including (but not limited to): +- Kubernetes API calls via controller-runtime client, +- filesystem, +- network, +- environment reads, +- time/random sources. + +### **Kubernetes API I/O** +**Kubernetes API I/O** is any call made through controller-runtime client that interacts with Kubernetes state +(cache and/or API server), e.g.: +`Get/List/Create/Update/Patch/Delete`, `Status().Patch/Update`, `DeleteAllOf`. + +### **Hidden I/O** / **nondeterminism** +**Hidden I/O** is any **I/O** that is not explicit in the **Helper categories** contract (e.g., `time.Now()`, `rand.*`, `os.Getenv`, extra network calls). +**Hidden I/O** is treated as a determinism violation for categories that require purity. + +--- + +## Read-only / mutation terminology + +### **mutation target** +A helper’s **mutation target** is the only value it is allowed to mutate (if any), based on its **Helper categories** contract. + +Examples: +- **ApplyReconcileHelpers** / **EnsureReconcileHelpers**: mutate `obj` in place (one **patch domain**). +- **CreateReconcileHelpers** / **PatchReconcileHelpers**: mutate `obj` only as a result of API server updates from the call (resourceVersion/defaults). +- **patch base** (**`base`**): never a **mutation target** (**read-only inputs**). + +### **read-only inputs** +All inputs other than the **mutation target** are **read-only inputs** and MUST NOT be mutated. + +### **Aliasing** (Go maps/slices) +**Aliasing** is accidental sharing of reference-like backing storage (especially `map` and `[]T`) between: +- `obj` and `desired`, +- `obj` and shared templates/defaults, +- **`base`** and anything else. + +**Aliasing** is dangerous because mutating the “copy” mutates the original. + +### **Clone** / **Copy** +- **Clone**: create a new map/slice with its own backing storage (`maps.Clone`, `slices.Clone`, `append([]T(nil), ...)`, manual copy). +- **Copy**: general term for producing an independent value; for maps/slices it implies cloning. + +--- + +## **flow** terminology + +### **flow** +**`flow`** refers to `lib/go/common/reconciliation/flow`, the internal package used to structure reconciliation and standardize phase-scoped logging. + +### **phase** +A **phase** is a structured execution scope created by one of: + +- `flow.BeginReconcile(ctx, "", ...)` (for **non-root Reconcile method** functions) +- `flow.BeginEnsure(ctx, "", ...)` (for **EnsureReconcileHelper** functions) +- `flow.BeginStep(ctx, "", ...)` (for step-style helpers that return `error`) + +A phase is always finalized by deferring the corresponding `OnEnd` method: + +- `defer rf.OnEnd(&outcome)` (where `outcome` is `flow.ReconcileOutcome`) +- `defer ef.OnEnd(&outcome)` (where `outcome` is `flow.EnsureOutcome`) +- `defer sf.OnEnd(&err)` (where `err` is `error`) + +**phases** are used to structure logs, attribute errors, and standardize panic logging + re-panicking. + +### **ReconcileOutcome** +A **ReconcileOutcome** is a value of type **`flow.ReconcileOutcome`** that represents the decision of a **Reconcile method**: +continue/done/requeue/requeueAfter/fail + error. + +Naming conventions: +- single **ReconcileOutcome** variable: `outcome` +- slice of **ReconcileOutcome** values: `outcomes` + +### **EnsureOutcome** +An **EnsureOutcome** is a value of type **`flow.EnsureOutcome`** that represents the result of an **EnsureReconcileHelper**: +error + **Change reporting** + **optimistic lock requirement**. 
+
+Naming conventions:
+- single **EnsureOutcome** variable: `outcome`
+- slice of **EnsureOutcome** values: `outcomes`
+
+### **Change reporting**
+**Change reporting** means signaling that an in-memory **resource**/**object** was mutated and needs persistence, typically via:
+
+- `EnsureOutcome.ReportChanged()` / `EnsureOutcome.ReportChangedIf(...)`
+
+The canonical “was changed?” flag is read via `EnsureOutcome.DidChange()`.
+
+### **Optimistic-lock signaling**
+**Optimistic-lock signaling** means encoding that the save MUST use **Optimistic locking** semantics, typically via:
+
+- `EnsureOutcome.RequireOptimisticLock()`
+
+The canonical flag is read via `EnsureOutcome.OptimisticLockRequired()`.
+
+### **ReconcileOutcome control flow**
+- `ReconcileOutcome.ShouldReturn()` indicates the caller should stop and return (done/requeue/error).
+- `ReconcileOutcome.ToCtrl()` converts an outcome into `(ctrl.Result, error)` for controller-runtime.
+
+### **Merging outcomes**
+**Merging outcomes** means combining multiple independent results deterministically:
+
+- `ReconcileFlow.Merge(...)` merges **ReconcileOutcome** values.
+- `EnsureFlow.Merge(...)` merges **EnsureOutcome** values.
+- `StepFlow.Merge(...)` merges multiple `error` values via `errors.Join`.
+
+---
+
+## **object identity** terminology
+
+### **resource** (**object**)
+A **resource** (or **object**) is any Kubernetes API **object** that participates in reconciliation.
+It may be read as input, computed against, and/or persisted as part of the controller’s behavior.
+
+A resource is either:
+- the **primary resource**, or
+- a **secondary resource** (child resource).
+
+### **primary resource**
+The **primary resource** is the **resource**/**object** named by the **reconcile request** (`req.NamespacedName`) that the controller is responsible for.
+
+### **secondary resource** (**child resource**)
+A **secondary resource** (or **child resource**) is any Kubernetes **resource**/**object** other than the **primary resource** that is created/managed/reconciled as part of the controller’s behavior.
+
+Examples: owned **child resources**, referenced **objects**, dependent **objects**.
+
+### **object identity** in error strings
+In this codebase, **object identity** means:
+- **namespaced identity**: `<namespace>/<name>` for namespaced resources
+- **cluster identity**: `<name>` for cluster-scoped resources
+
+---
+
+## **conditions** and **objutilv1** terminology
+
+### **objutilv1** (**`obju`**)
+**objutilv1** (import alias: **`obju`**) is the project’s **object** utility package.
+
+In this codebase, all manipulations of:
+- labels,
+- annotations,
+- finalizers,
+- owner references,
+- **conditions**
+
+are expected to go through **`obju`** rather than open-coded field edits.
+
+### **condition**
+A **condition** is a `metav1.Condition` stored on **`.status.conditions`**.
+
+Key fields commonly referenced:
+- `Type`
+- `Status`
+- `Reason`
+- `Message`
+- `ObservedGeneration`
+- `LastTransitionTime`
+
+### **StatusConditionObject**
+A **StatusConditionObject** is an **object** that exposes **conditions** in the shape expected by **`obju`** **condition** helpers (e.g., an interface used for **condition** comparisons/updates).
+
+### **Condition semantic equality**
+**Condition semantic equality** means equality by meaning (Type/Status/Reason/Message/ObservedGeneration), as defined by the **`obju`** comparison helpers.
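+
+Non-normative example (Condition semantic equality). A sketch of what “equality by meaning” amounts to for `metav1.Condition`; this illustrates the concept only and is not the actual **`obju`** comparison helper (whose real name and signature are defined by **objutilv1**):
+
+```go
+import (
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// conditionsSemanticallyEqual compares two conditions by meaning:
+// Type, Status, Reason, Message, and ObservedGeneration.
+// LastTransitionTime is deliberately ignored, so rewriting an otherwise
+// unchanged condition does not look like a new transition.
+func conditionsSemanticallyEqual(a, b metav1.Condition) bool {
+	return a.Type == b.Type &&
+		a.Status == b.Status &&
+		a.Reason == b.Reason &&
+		a.Message == b.Message &&
+		a.ObservedGeneration == b.ObservedGeneration
+}
+```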
+ +### **Condition equality by status** +**Condition equality by status** means equality only by `Type` + `Status`, ignoring other fields, as defined by **`obju`** helpers. + +--- + +## Kubernetes metadata terminology used by **predicates** + +### **`metadata.generation`** (**Generation**) +**Generation** (**`metadata.generation`**) is the Kubernetes counter typically incremented by the API server on spec changes for custom resources. + +### **Metadata-only changes** +**Metadata-only changes** are changes that may not bump **Generation**, such as: +- labels, +- annotations, +- finalizers, +- owner references. + +**predicates** sometimes compare these fields directly because **Generation** may not change. + +--- + +## Rules for controller rules + +### Scope + +This section applies to **.mdc** rules that describe how to write controllers in this repository. + +### Common requirements + +- All other controller `.mdc` documents SHOULD reference this file instead of re-defining the same terms. + +### Writing conventions + +- Formatting conventions for controller rule files (including normative keywords and term emphasis) are defined in `rfc-like-mdc.mdc`. + +### Term usage + +- Terms defined in **Controller terminology** MUST be used consistently with their definitions. +- Terms defined in the current rules document (within a specific **.mdc** file) MUST be used consistently with their definitions. +- If a concept matches an existing term from **Controller terminology**, you SHOULD reuse the existing term (and spelling) instead of introducing a new synonym. + +### Canonical term list + +Below is the list of terms (without definitions) that are defined in **Controller terminology**. Use these spellings consistently across controller rules: +Terms MUST be written in italics on every mention (see `rfc-like-mdc.mdc`). 
+ +- **controller package** +- **`controller.go`** +- **`predicates.go`** +- **`reconciler.go`** +- **`reconciler_test.go`** +- **Entrypoint** +- **controller name** +- **short kind name** +- **manager** +- **Manager-owned dependencies** +- **builder chain** +- **runnable** +- **source** +- **watch** +- **OwnerRef-based watch** +- **Index/field-based watch** +- **predicate** +- **filter** +- **Wiring-only** +- **Reconciliation business logic** +- **mechanical** +- **domain/business** +- **reconcile loop** +- **reconcile request** +- **Reconcile method** +- **root Reconcile** +- **non-root Reconcile method** +- **ReconcileHelper** +- **Helper categories** +- **ComputeReconcileHelper** +- **ConstructionReconcileHelper** +- **IsInSyncReconcileHelper** +- **ApplyReconcileHelper** +- **EnsureReconcileHelper** +- **GetReconcileHelper** +- **CreateReconcileHelper** +- **DeleteReconcileHelper** +- **PatchReconcileHelper** +- **Non-I/O helper categories** +- **Single-call I/O helper categories** +- **non-I/O** +- **single-call I/O helper** +- **desired state** +- **observed state** +- **current state** +- **controller-owned state** +- **report/observations** +- **intended** +- **actual** +- **target** +- **report** +- **computed value** +- **intent inputs** +- **observations/constraints** +- **target main** +- **target status** +- **patch domain** +- **main patch domain** +- **status patch domain** +- **patch request** +- **patch base** +- **`base`** +- **DeepCopy** +- **Patch ordering** +- **patch strategy** +- **patch type decision** +- **Optimistic locking** +- **optimistic lock** +- **optimistic lock requirement** +- **deterministic** +- **stable ordering** +- **canonical form** +- **patch churn** +- **I/O** +- **Kubernetes API I/O** +- **Hidden I/O** +- **nondeterminism** +- **mutation target** +- **read-only inputs** +- **Aliasing** +- **Clone** +- **Copy** +- **flow** +- **phase** +- **ReconcileOutcome** +- **EnsureOutcome** +- **Change reporting** +- **Optimistic-lock signaling** +- **ReconcileOutcome control flow** +- **Merging outcomes** +- **resource** +- **object** +- **primary resource** +- **secondary resource** +- **child resource** +- **object identity** +- **namespaced identity** +- **cluster identity** +- **objutilv1** +- **`obju`** +- **condition** +- **StatusConditionObject** +- **Condition semantic equality** +- **Condition equality by status** +- **Generation** +- **`metadata.generation`** +- **Metadata-only changes** diff --git a/.cursor/rules/go-tests.mdc b/.cursor/rules/go-tests.mdc new file mode 100644 index 000000000..28c2bacaa --- /dev/null +++ b/.cursor/rules/go-tests.mdc @@ -0,0 +1,22 @@ +--- +description: Rules for writing Go tests (fixtures via go:embed, minimal payloads, tags, topology/YAML specifics). Apply when creating/editing/reviewing *_test.go files, and when planning test structure or fixtures. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +globs: **/*_test.go +alwaysApply: false +--- + +- Test fixtures & I/O (MUST): + - Prefer embedding static fixtures with `//go:embed` into a `[]byte`. + - Do NOT read fixtures from disk at runtime unless embedding is impossible. + +- Test payload minimalism (MUST): + - Only include fields that are asserted in the test. + - Prefer small, explicit test bodies over helpers until a helper is reused in 3+ places. 
+ +- Struct tags in tests (MUST): + - Include only the codec actually used by the test. + - Do NOT duplicate `json` and `yaml` tags unless both are parsed in the same code path. + - Prefer relying on field names; add a `yaml` tag only when the YAML key differs and renaming the field would hurt clarity. + +- Topology tests specifics (MUST): + - Parse YAML fixtures into existing structs without adding extra tags. + - Embed testdata (e.g., `testdata/tests.yaml`) and unmarshal directly; avoid runtime I/O. diff --git a/.cursor/rules/go.mdc b/.cursor/rules/go.mdc new file mode 100644 index 000000000..cfe5b2bc5 --- /dev/null +++ b/.cursor/rules/go.mdc @@ -0,0 +1,7 @@ +--- +description: Go formatting requirement: run gofmt/go fmt on modified Go files. Apply when editing Go code, and when deciding how to format/structure Go changes. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +alwaysApply: true +--- + +- Formatting (MUST): + - After making changes to Go code, run `gofmt` (or `go fmt`) on the modified files before finalizing the change. diff --git a/.cursor/rules/repo-wide.mdc b/.cursor/rules/repo-wide.mdc new file mode 100644 index 000000000..44150dd6d --- /dev/null +++ b/.cursor/rules/repo-wide.mdc @@ -0,0 +1,55 @@ +--- +description: Repository-wide Cursor/agent rules for this repo (formatting, change hygiene, git hygiene, and commit-message conventions). Apply when working in this repository (any task), especially when making or explaining changes, planning a sequence of edits, or preparing commits. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +alwaysApply: true +--- + +- Formatting & style (MUST): + - Match existing formatting and indentation exactly. + +- Change hygiene / cleanup (MUST): + - If I create a file and later replace it with a correct alternative, I MUST remove the now-invalid file(s) in the same change. + +- Dialogue adherence (MUST): + - User answers are authoritative context. + - If I ask a question and receive an answer, subsequent actions MUST align with that answer and MUST NOT contradict or ignore it. + +- File moves/renames (MUST): + - When moving or renaming files, preserve Git history by using `git mv` (or an equivalent Git-aware rename). + - Do NOT implement a move as "create new file + delete old file". + +- Git commit messages (MUST): + - When the user asks you to generate a commit message, ALWAYS output the commit message in a copy-friendly code block. + - When the user asks you to generate a commit message, ALWAYS remind about sign-off as plain text AFTER the commit message (do NOT put the reminder inside the commit message body). Prefer: `Don't forget to sign off` and suggest `git commit -s`. + - Use English for commit messages. + - Prefer prefixing the subject with a component in square brackets, e.g. `[controller] Fix ...`, `[api] Add ...`. + - If the change is non-trivial, add a short body listing the key changes; for small changes, the subject alone is enough. + - When generating a commit message, consider the full diff (don’t skimp on context), including: + - Staged/cached changes (index) + - Contents of deleted files + +- When making a commit (MUST): + - ALWAYS sign off (prefer `git commit -s`). 
+ +- API development / design / review (MUST): + - When developing, designing, or reviewing API types under `api/v*/`, you MUST consult and follow the API-specific `.mdc` rules in `.cursor/rules/`: + - `api-file-structure.mdc` — API package conventions: object prefixes and per-object/common file naming rules. + - `api-types.mdc` — API type rules: type-centric layout, enums/constants, status/conditions requirements, naming. + - `api-conditions.mdc` — API condition Type/Reason constants naming, ordering, comments, and stability. + - `api-labels-and-finalizers.mdc` — API naming rules for label keys and finalizer constants. + - `api-codegen.mdc` — API codegen rules for kubebuilder/controller-gen and generated files hygiene. + - You SHOULD also consult `controller-terminology.mdc` for shared terminology (intended/actual/target/report, patch domains, etc.) that applies to API design. + - Each rule file has a `description` in its frontmatter indicating when to apply it. + - When in doubt, start with `api-file-structure.mdc` for where code belongs and `api-types.mdc` for type layout conventions. + +- Controller development / design / review (MUST): + - When developing, designing, or reviewing controllers under `images/controller/internal/controllers/`, you MUST consult and follow the controller-specific `.mdc` rules in `.cursor/rules/`: + - `controller-terminology.mdc` — shared terminology and definitions used across all controller rule files. + - `controller-file-structure.mdc` — controller package file structure (`controller.go`/`predicates.go`/`reconciler.go`/tests). + - `controller-controller.mdc` — rules for `controller.go` (wiring, builder chain, predicates wiring). + - `controller-predicate.mdc` — rules for `predicates.go` (mechanical change detection, no I/O, no domain logic). + - `controller-reconciliation.mdc` — rules for `Reconcile` method orchestration in `reconciler.go`. + - `controller-reconciliation-flow.mdc` — rules for using `lib/go/common/reconciliation/flow` (phases, outcomes). + - `controller-reconcile-helper.mdc` — common rules for ReconcileHelper functions/methods. + - `controller-reconcile-helper-*.mdc` — category-specific contracts (compute, apply, ensure, get, create, delete, patch, is-in-sync, construction). + - Each rule file has a `description` and optional `globs` in its frontmatter indicating when to apply it. + - When in doubt, start with `controller-terminology.mdc` for definitions and `controller-file-structure.mdc` for where code belongs. diff --git a/.cursor/rules/rfc-like-mdc.mdc b/.cursor/rules/rfc-like-mdc.mdc new file mode 100644 index 000000000..c14a32582 --- /dev/null +++ b/.cursor/rules/rfc-like-mdc.mdc @@ -0,0 +1,163 @@ +--- +description: RFC-style writing conventions for .mdc rule files: normative keywords (MUST/SHOULD/MAY per BCP 14), term emphasis, Cursor frontmatter requirements (description required; include when-to-apply incl. decision-making; globs discouraged), language and style guidelines (CMOS-based), literals, examples, and section drafting checklist. Apply when writing, editing, or reviewing .cursor/rules/*.mdc files, and when deciding how to phrase/structure rule requirements. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). +alwaysApply: false +--- + +# RFC-style English and structure for .mdc + +This section defines repository-wide constraints that every Cursor `.mdc` rule file must follow. 
+ +Write clear, reviewable technical prose. + + +## 1. Normative keywords + +Use BCP 14 key words (RFC 2119 / RFC 8174) only when you intend normative meaning. Per BCP 14 (RFC 2119 / RFC 8174), these key words have the meanings specified below only when they appear in all capitals. + +1. MUST + This word, or the terms "REQUIRED" or "SHALL", mean that the definition is an absolute requirement of the specification. + +2. MUST NOT + This phrase, or the phrase "SHALL NOT", mean that the definition is an absolute prohibition of the specification. + +3. SHOULD + This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course. + +4. SHOULD NOT + This phrase, or the phrase "NOT RECOMMENDED", mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label. + +5. MAY + This word, or the adjective "OPTIONAL", mean that an item is truly optional. + +- Use these keywords only when you intend normative meaning. +- Use them only when they appear in all capitals, as shown above. +- Do not apply emphasis to normative keywords: no bold, no italics, and no inline code. + +### 1.1. Centralized normative keyword declaration + +- This file is the only place where normative keywords are declared. +- All other `.mdc` files MUST NOT repeat the keyword list, synonyms, or keyword definitions. +- All other `.mdc` files MUST start with a link to this file, and the link text MUST say that normative keywords are defined here. + + +## 2. Terms and emphasis + +- Terms (words/phrases with a defined meaning in a Terminology / Definitions / Glossary section) MUST be written in italics on every mention. +- Terms MUST NOT be bolded. +- Terms MUST NOT be left unformatted when used with their defined meaning. +- If a term contains a literal token, keep the literal in inline code and italicize the term as a whole (for example: *`controller.go`*). + +## 3. Cursor frontmatter + +Frontmatter MUST be present, valid YAML, and start at the first line of the file (delimited by `---`). + +### 3.1. `description` (required, always) + +Every `.mdc` file MUST set a non-empty `description`. This repository relies on `description` for rule discovery and relevance. + +`description` MUST be detailed enough for the model to decide, without guesswork, that the rules in this file should be loaded for the current task. Vague or generic descriptions (for example, "Go rules", "Controller guidelines") SHOULD be treated as incorrect. + +`description` MUST explicitly state *when to apply* the rule (i.e., what tasks/files/areas trigger it). You SHOULD include a literal phrase like "Apply when ..." (or equivalent wording) so the model can reliably match it to the current work. + +`description` MUST cover both: +- direct work on matching files (editing/creating/reviewing), and +- cases where the assistant is reasoning, planning, or answering questions in a way that could be influenced by this rule (even if the matching files are not currently open). + +When you change the contents of an `.mdc` file, you MUST update its `description` to reflect the full, updated contents of the file, so that the requirement above remains true. + +### 3.2. 
`globs` (discouraged; prefer `description`) + +`globs` MAY be set, but `globs` matching is unreliable in practice (it can work poorly or not work at all depending on Cursor version and context-loading behavior). Therefore: + +- You SHOULD prefer `description`-driven loading over `globs`. +- You MUST NOT rely on `globs` as the only way to make a rule apply. +- If a rule is critical for a task, you SHOULD reference it explicitly via `@` to force-load it. +- If `globs` is already set in an existing `.mdc` file, you MUST NOT remove it automatically as a "cleanup"; keep it unless you intentionally change the rule attachment scope. + +### 3.3. `globs` format (when used) + +If you set `globs`, it MUST follow the Cursor `.mdc` frontmatter format documented by Cursor: + +- `globs` MUST be a single scalar string value (not a YAML array). +- Multiple patterns MUST be written as a single comma-separated string with no whitespace around commas (for example, `**/*.go,**/*.yaml`). +- Each pattern SHOULD be a workspace-relative glob (for example, `api/**/*.go`, `images/controller/**/controller*.go`). +- `globs` SHOULD NOT be surrounded by quotes (prefer a plain scalar like `globs: **/*.go,**/*.yaml`), because quoted values have been reported to break matching in some Cursor versions. + +Examples: + +```yaml +--- +description: Go controller rules for reconciler helpers and predicates; apply when editing any controller implementation files under images/controller/internal/controllers/. +globs: images/controller/internal/controllers/**/*.go +alwaysApply: false +--- +``` + +```yaml +--- +description: API coding conventions and Kubebuilder marker requirements; apply when editing versioned API types and related helpers under api/. +globs: api/v*/**/*.go,api/v*/**/*_types.go,api/v*/**/common_types.go +alwaysApply: false +--- +``` + +## 4. Language and Style +### 4.1. Authority and precedence + +- Follow The Chicago Manual of Style (CMOS) for English grammar, punctuation, capitalization, and general editorial decisions, unless overridden by an explicit requirement in the current document. +- If a stylistic convention conflicts with an explicit requirement, the requirement takes precedence. +- Do not make "stylistic" edits that change technical meaning, scope, or applicability. + +### 4.2. Literals and exactness + +- Preserve the spelling and casing of: identifiers, commands/flags, file and package names, stable constants, and any other literal tokens where a change would alter meaning. +- Preserve the original spelling of proper names and quoted material. +- Put literal tokens in inline code (backticks) and keep surrounding punctuation outside the literal. +- When quoting literal text (exact strings to match, exact tokens), punctuation MUST be outside quotation marks so the quoted literal remains exact. +- When quoting general prose (not a literal), punctuation SHOULD follow normal CMOS conventions. Prefer block quotes for longer quotations. + +### 4.3. Editing discipline + +- Aim for clarity, consistency, and readability; fix internal inconsistencies (terminology, capitalization, duplicated text). +- If a passage is unclear in a way that could affect interpretation, flag it explicitly rather than guessing. +- Treat editing as distinct from technical review: suggest rewrites for clarity, but never optimize typography over correctness. +- If you cannot confidently choose the CMOS-preferred option for a purely stylistic change, and the change is not required for correctness or clarity, avoid making the change. 
+ +### 4.4. Style conventions + +- Use American English spelling by default; keep spelling consistent within the document. +- Use the serial (Oxford) comma where it improves clarity. +- Avoid ambiguous pronouns ("it/this/that") when the referent could be unclear; prefer explicit subjects. +- Prefer short, declarative sentences for requirements; make conditions explicit (split sentences or use structured lists). +- Use parallel structure in lists and sublists; avoid burying critical conditions in parenthetical asides. +- Keep capitalization consistent within the document and, when applicable, across closely related documents. +- For section titles, prefer CMOS title case unless a full-sentence title is clearer; be consistent. + +#### 4.4.1 Citations, references, and cross-references + +- Ensure every citation has a corresponding reference entry, and every reference entry is cited. +- Do not rely on page numbers; prefer stable locations (section titles/numbers, anchors, or explicit URLs). +- When citing RFCs/BCPs or other specs, use a stable label scheme (e.g., [RFC2119], [RFC8174]) and define labels in a References section. + +#### 4.4.2 Examples and placeholder safety + +- Prefer fenced code blocks for multi-line literals and examples. Do not "pretty up" examples if that risks breaking reproducibility. +- Use reserved example domains (e.g., example.com / example.net / example.org) for generic DNS/URI examples; avoid real production domains as "generic examples". +- Clearly distinguish placeholders (e.g., ) from literal values. +- Keep examples minimal, accurate, and resilient to staleness. + +#### 4.4.3 Abbreviations + +- Expand abbreviations in titles and on first use: "full expansion (ABBR)". +- Use one expansion consistently when multiple expansions are possible. + +## 5. Section drafting checklist (apply to any heading level) + +- Does this section need a short intro/abstract? +- Does it need background/context, or can it stand alone? +- Are there any terms that must be defined to remove ambiguity? +- Where are the requirements, and can a reviewer find them quickly? +- Do we need rationale (why) to explain trade-offs or non-obvious choices? +- Do we need examples, and are they clearly marked as examples (not requirements)? +- Mixing requirements/rationale/definitions/examples is allowed, but requirements must remain easy to locate. +- For short, obvious sections, one tight paragraph may be enough; do not create subsections just to satisfy structure. diff --git a/.cursor/rules/tooling.mdc b/.cursor/rules/tooling.mdc new file mode 100644 index 000000000..a6c2d31af --- /dev/null +++ b/.cursor/rules/tooling.mdc @@ -0,0 +1,43 @@ +--- +description: Project tooling commands and safety policy: what the agent may run automatically vs what requires confirmation (lint/tests/build/codegen, git, werf/kubectl). Apply when asking the agent to run commands or when deciding which checks to run for a change. Apply when editing relevant files, and when reasoning/planning/answering questions where this rule could influence code decisions (even if matching files are not currently open). 
+alwaysApply: true +--- + +## Canonical development commands (prefer these) + +- **Lint**: `bash hack/run-linter.sh` + - Default tags: `ee fe` + - Options: + - `--tags "ee"` (or `"fe"`) + - `--new-from-base ` (incremental) + - `--fix` (modifies files) +- **Tests**: `bash hack/run-tests.sh` +- **Build sanity (linux/amd64, CGO=0)**: `bash hack/build_prototype.sh` +- **Codegen / CRDs / go:generate**: `bash hack/generate_code.sh` + - Runs `controller-gen` and updates `crds/`, generated Go files, and may change `go.mod/go.sum` + +## Safe to run WITHOUT asking (agent may run automatically) + +Only local/read-only checks: + +- `bash hack/run-linter.sh` **without** `--fix` +- `bash hack/run-tests.sh` +- `bash hack/build_prototype.sh` +- Plain `go test ...` and `go tool golangci-lint run ...` **when they do not modify repo files** + +## MUST ask for confirmation BEFORE running + +Anything that may change the working tree, git history, or touches external systems: + +- `bash hack/generate_code.sh` (writes CRDs/generated files; may edit `go.mod/go.sum`) +- `bash hack/go-mod-tidy` / `bash hack/go-mod-upgrade` (edits `go.mod/go.sum`) +- `bash hack/run-linter.sh --fix` (edits source files) +- Any `git` operations that change history or remote state: + - `git commit`, `git tag`, `git push`, `git checkout/switch`, `git reset`, `git fetch` (including scripts that do them) +- `hack/local_build.sh` and any commands using **werf / kubectl / curl to internal services / registry login** + +## When making changes + +- If edits touch CRD types (`api/v1alpha1`), run `bash hack/generate_code.sh` **only after confirmation**. +- Prefer tasks/commands above over ad-hoc custom pipelines to keep CI parity. + diff --git a/.dmtlint.yaml b/.dmtlint.yaml index 4670c1e60..413192674 100644 --- a/.dmtlint.yaml +++ b/.dmtlint.yaml @@ -1,6 +1,16 @@ linters-settings: container: exclude-rules: + # TODO: fix and remove - disable seccomp-profile check for agent and controller + seccomp-profile: + # TODO: fix and remove + - kind: DaemonSet + name: agent + container: agent + # TODO: fix and remove + - kind: Deployment + name: controller + container: controller no-new-privileges: - kind: DaemonSet name: csi-node @@ -14,6 +24,14 @@ linters-settings: - kind: Deployment name: sds-replicated-volume-controller container: sds-replicated-volume-controller + # TODO: fix and remove + - kind: DaemonSet + name: agent + container: agent + # TODO: fix and remove + - kind: Deployment + name: controller + container: controller liveness-probe: - kind: Deployment name: csi-controller @@ -85,11 +103,43 @@ linters-settings: container: linstor-satellite rbac: exclude-rules: + # TODO: fix and remove - disable placement (#rbac) check for CSI driver RBAC objects + placement: + # TODO: fix and remove + - kind: RoleBinding + name: csi:controller:external-attacher + # TODO: fix and remove + - kind: RoleBinding + name: csi:controller:external-provisioner + # TODO: fix and remove + - kind: RoleBinding + name: csi:controller:external-resizer + # TODO: fix and remove + - kind: RoleBinding + name: csi:controller:external-snapshotter + # TODO: fix and remove + - kind: Role + name: csi:controller:external-attacher + # TODO: fix and remove + - kind: Role + name: csi:controller:external-provisioner + # TODO: fix and remove + - kind: Role + name: csi:controller:external-resizer + # TODO: fix and remove + - kind: Role + name: csi:controller:external-snapshotter + # TODO: fix and remove + - kind: ServiceAccount + name: csi wildcards: - kind: ClusterRole name: 
d8:sds-replicated-volume:metadata-backup - kind: ClusterRole name: d8:sds-replicated-volume:linstor-controller + # TODO: fix and remove + - kind: ClusterRole + name: d8:sds-replicated-volume:controller images: patches: disable: true diff --git a/.envrc b/.envrc new file mode 100644 index 000000000..33924b9d3 --- /dev/null +++ b/.envrc @@ -0,0 +1,22 @@ +#!/usr/bin/env bash +set -euo pipefail + +# Keep project caches in-repo to avoid polluting global $HOME caches. +mkdir -p \ + "$PWD/.cache" \ + "$PWD/.cache/go-build" \ + "$PWD/.cache/go-mod" \ + "$PWD/.cache/go-tmp" \ + "$PWD/.cache/golangci-lint" + +export XDG_CACHE_HOME="$PWD/.cache" + +# Go caches +export GOCACHE="$PWD/.cache/go-build" +export GOMODCACHE="$PWD/.cache/go-mod" +export GOTMPDIR="$PWD/.cache/go-tmp" + +# golangci-lint cache (used by `go tool golangci-lint ...`) +export GOLANGCI_LINT_CACHE="$PWD/.cache/golangci-lint" + + diff --git a/.gitignore b/.gitignore index a7fe4bfa3..9b7be9bdc 100644 --- a/.gitignore +++ b/.gitignore @@ -34,6 +34,14 @@ __pycache__/ .pytest_cache/ # dev -images/sds-replicated-volume-controller/dev/Dockerfile-dev -images/sds-replicated-volume-controller/src/Makefile hack.sh +**/Dockerfile-dev +.secret +images/**/Makefile + +# local caches (Cursor/Go/direnv, etc.) +.cache/ +.direnv/ + +# test data +images/agent/pkg/drbdconf/testdata/out/ diff --git a/.werf/consts.yaml b/.werf/consts.yaml index dad020414..d3c71498a 100644 --- a/.werf/consts.yaml +++ b/.werf/consts.yaml @@ -17,7 +17,7 @@ {{- $_ := set $versions "UTIL_LINUX" "v2.39.3" }} {{- $_ := set $versions "DRBD" "9.2.13" }} {{- $_ := set $versions "DRBD_REACTOR" "1.8.0" }} -{{- $_ := set $versions "DRBD_UTILS" "9.30.0" }} +{{- $_ := set $versions "DRBD_UTILS" "9.31.0" }} {{- $_ := set $versions "LINSTOR_AFFINITY_CONTROLLER" "0.3.0" }} {{- $_ := set $versions "LINSTOR_API_PY" "1.19.0" }} {{- $_ := set $versions "LINSTOR_CLIENT" "1.19.0" }} @@ -37,6 +37,7 @@ {{- $_ := set $versions "SEMVER_TOOL" "3.4.0" }} {{- $_ := set $versions "SPAAS" "v0.1.5" }} {{- $_ := set $versions "THIN_SEND_RECV" "1.1.3" }} +{{- $_ := set $versions "LVM2" "2_03_38" }} {{- $_ := set $ "VERSIONS" $versions }} @@ -45,4 +46,4 @@ {{- $_ := set $ "BUILD_PACKAGES" "build-essential rpm-build rpm-macros-intro-conflicts sudo git jq" }} {{- $_ := set $ "DECKHOUSE_UID_GID" "64535" }} {{- $_ := set $ "ALT_CLEANUP_CMD" "rm -rf /var/lib/apt/lists/* /var/cache/apt/* && mkdir -p /var/lib/apt/lists/partial /var/cache/apt/archives/partial" }} -{{- $_ := set $ "ALT_BASE_PACKAGES" "openssl libtirpc tzdata" }} \ No newline at end of file +{{- $_ := set $ "ALT_BASE_PACKAGES" "openssl libtirpc tzdata" }} diff --git a/api/go.mod b/api/go.mod index d5754a5e5..363d8c81b 100644 --- a/api/go.mod +++ b/api/go.mod @@ -2,23 +2,214 @@ module github.com/deckhouse/sds-replicated-volume/api go 1.24.11 -require k8s.io/apimachinery v0.32.3 +require k8s.io/apimachinery v0.34.3 require ( - github.com/fxamacker/cbor/v2 v2.7.0 // indirect - github.com/go-logr/logr v1.4.2 // indirect + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + 
github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect github.com/gogo/protobuf v1.3.2 // indirect - github.com/google/gofuzz v1.2.0 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + 
github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect github.com/json-iterator/go v1.1.12 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect - github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/onsi/ginkgo/v2 v2.27.2 // indirect + github.com/onsi/gomega v1.38.3 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect 
+ github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/tidwall/match v1.2.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect github.com/x448/float16 v0.8.4 // indirect - golang.org/x/net v0.40.0 // indirect - golang.org/x/text v0.25.0 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/tools v0.38.0 // indirect + golang.org/x/tools/go/expect v0.1.1-deprecated // indirect + golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated // indirect + google.golang.org/protobuf v1.36.9 // indirect gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + honnef.co/go/tools v0.6.1 // indirect k8s.io/klog/v2 v2.130.1 // indirect - k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect - sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect - sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect - sigs.k8s.io/yaml v1.4.0 // indirect + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect +) + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + 
github.com/onsi/ginkgo/v2/ginkgo ) diff --git a/api/go.sum b/api/go.sum index 279dde56b..e4a1d5cb6 100644 --- a/api/go.sum +++ b/api/go.sum @@ -1,85 +1,625 @@ +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 
h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci 
v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= -github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= -github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= -github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 
h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= -github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= -github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= 
+github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= -github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 
h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kkHAIKE/contextcheck v1.1.6 
h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod 
h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod 
h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= 
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra 
v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= -github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 
h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= 
+gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod 
v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY= -golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys 
v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4= -golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod 
h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod 
h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -k8s.io/apimachinery v0.32.3 h1:JmDuDarhDmA/Li7j3aPrwhpNBA94Nvk5zLeOge9HH1U= -k8s.io/apimachinery v0.32.3/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= -k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro= -k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8= -sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo= -sigs.k8s.io/structured-merge-diff/v4 v4.4.2 h1:MdmvkGuXi/8io6ixD5wud3vOLwc1rj0aNqRlpuvjmwA= -sigs.k8s.io/structured-merge-diff/v4 v4.4.2/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4= -sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= -sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod 
h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/api/objutilv1/conditions.go b/api/objutilv1/conditions.go new file mode 100644 index 000000000..75e9d9eb0 --- /dev/null +++ b/api/objutilv1/conditions.go @@ -0,0 +1,211 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package objutilv1 + +import ( + "slices" + + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// ConditionSemanticallyEqual compares conditions ignoring LastTransitionTime. +// +// This is used to avoid bumping LastTransitionTime when only ObservedGeneration changes. +func ConditionSemanticallyEqual(a, b *metav1.Condition) bool { + if a == nil || b == nil { + return a == b + } + + return a.Type == b.Type && + a.Status == b.Status && + a.Reason == b.Reason && + a.Message == b.Message && + a.ObservedGeneration == b.ObservedGeneration +} + +// ConditionEqualByStatus compares conditions by Type and Status only. +func ConditionEqualByStatus(a, b *metav1.Condition) bool { + if a == nil || b == nil { + return a == b + } + + return a.Type == b.Type && + a.Status == b.Status +} + +func areConditionsEqual(a, b StatusConditionObject, condTypes []string, cmp func(a, b *metav1.Condition) bool) bool { + if a == nil || b == nil { + return a == b + } + + aConds := a.GetStatusConditions() + bConds := b.GetStatusConditions() + + var types []string + if len(condTypes) > 0 { + // Keep caller order; ignore duplicates without sorting. + types = make([]string, 0, len(condTypes)) + for _, t := range condTypes { + if slices.Contains(types, t) { + continue + } + types = append(types, t) + } + } else { + types = make([]string, 0, len(aConds)+len(bConds)) + for i := range aConds { + types = append(types, aConds[i].Type) + } + for i := range bConds { + types = append(types, bConds[i].Type) + } + + // Deduplicate for the "all types" mode; order doesn't matter here. + slices.Sort(types) + types = slices.Compact(types) + } + + for i := range types { + condType := types[i] + ac := meta.FindStatusCondition(aConds, condType) + bc := meta.FindStatusCondition(bConds, condType) + if ac == nil || bc == nil { + if ac == bc { + continue + } + return false + } + if !cmp(ac, bc) { + return false + } + } + + return true +} + +// AreConditionsSemanticallyEqual compares `.status.conditions` between two objects. +// +// If condTypes are provided, it compares only those condition types (duplicates are ignored). +// If condTypes is empty, it compares all condition types present in either object. +// +// Missing conditions: +// - if a condition type is missing on both objects, it is considered equal; +// - if it is missing on exactly one object, it is not equal. +// +// Semantic equality ignores LastTransitionTime (see ConditionSemanticallyEqual). 
+func AreConditionsSemanticallyEqual(a, b StatusConditionObject, condTypes ...string) bool { + return areConditionsEqual(a, b, condTypes, ConditionSemanticallyEqual) +} + +// AreConditionsEqualByStatus compares `.status.conditions` between two objects by Type and Status only. +// +// If condTypes are provided, it compares only those condition types (duplicates are ignored). +// If condTypes is empty, it compares all condition types present in either object. +// +// Missing conditions: +// - if a condition type is missing on both objects, it is considered equal; +// - if it is missing on exactly one object, it is not equal. +func AreConditionsEqualByStatus(a, b StatusConditionObject, condTypes ...string) bool { + return areConditionsEqual(a, b, condTypes, ConditionEqualByStatus) +} + +// IsStatusConditionPresentAndEqual reports whether `.status.conditions` contains the condition type with the given status. +func IsStatusConditionPresentAndEqual(obj StatusConditionObject, condType string, condStatus metav1.ConditionStatus) bool { + actual := meta.FindStatusCondition(obj.GetStatusConditions(), condType) + return actual != nil && actual.Status == condStatus +} + +// IsStatusConditionPresentAndTrue is a convenience wrapper for IsStatusConditionPresentAndEqual(..., ConditionTrue). +func IsStatusConditionPresentAndTrue(obj StatusConditionObject, condType string) bool { + return IsStatusConditionPresentAndEqual(obj, condType, metav1.ConditionTrue) +} + +// IsStatusConditionPresentAndFalse is a convenience wrapper for IsStatusConditionPresentAndEqual(..., ConditionFalse). +func IsStatusConditionPresentAndFalse(obj StatusConditionObject, condType string) bool { + return IsStatusConditionPresentAndEqual(obj, condType, metav1.ConditionFalse) +} + +// IsStatusConditionPresentAndSemanticallyEqual reports whether the condition with the same Type is present and semantically equal. +func IsStatusConditionPresentAndSemanticallyEqual(obj StatusConditionObject, expected metav1.Condition) bool { + // This is consistent with SetStatusCondition, so we can use Generation from the object. + if expected.ObservedGeneration == 0 { + expected.ObservedGeneration = obj.GetGeneration() + } + + actual := meta.FindStatusCondition(obj.GetStatusConditions(), expected.Type) + return actual != nil && ConditionSemanticallyEqual(actual, &expected) +} + +// HasStatusCondition reports whether `.status.conditions` contains the given condition type. +func HasStatusCondition(obj StatusConditionObject, condType string) bool { + return meta.FindStatusCondition(obj.GetStatusConditions(), condType) != nil +} + +// GetStatusCondition returns the condition with the given type from `.status.conditions`, or nil if it is not present. +func GetStatusCondition(obj StatusConditionObject, condType string) *metav1.Condition { + return meta.FindStatusCondition(obj.GetStatusConditions(), condType) +} + +// SetStatusCondition upserts a condition into `.status.conditions`. +// +// It always sets ObservedGeneration to obj.Generation and returns whether the +// stored conditions have changed. 
+// +// LastTransitionTime behavior: +// - MUST be updated when the condition's Status changes +// - SHOULD NOT be updated when only Reason or Message changes +// - for ObservedGeneration-only changes, it preserves the previous LastTransitionTime +func SetStatusCondition(obj StatusConditionObject, cond metav1.Condition) (changed bool) { + cond.ObservedGeneration = obj.GetGeneration() + + conds := obj.GetStatusConditions() + old := meta.FindStatusCondition(conds, cond.Type) + + // Per Kubernetes conditions guidance: + // - MUST bump LastTransitionTime on Status changes + // - SHOULD NOT bump it on Reason/Message-only changes + // + // meta.SetStatusCondition implements the same semantics, but: + // - for a new condition, it sets LastTransitionTime to now() only if it's zero + // - for status changes, it uses the provided LastTransitionTime if non-zero + // + // We explicitly set LastTransitionTime for new conditions and status changes, + // and leave it zero for non-status updates so meta keeps the existing value. + if old == nil || old.Status != cond.Status { + cond.LastTransitionTime = metav1.Now() + } else { + cond.LastTransitionTime = metav1.Time{} + } + + changed = meta.SetStatusCondition(&conds, cond) + if changed { + obj.SetStatusConditions(conds) + } + return changed +} + +// RemoveStatusCondition removes the condition with the given type from `.status.conditions`. +// It returns whether the stored conditions changed. +func RemoveStatusCondition(obj StatusConditionObject, condType string) (changed bool) { + conds := obj.GetStatusConditions() + changed = meta.RemoveStatusCondition(&conds, condType) + if changed { + obj.SetStatusConditions(conds) + } + return changed +} diff --git a/api/objutilv1/conditions_test.go b/api/objutilv1/conditions_test.go new file mode 100644 index 000000000..ab0330f41 --- /dev/null +++ b/api/objutilv1/conditions_test.go @@ -0,0 +1,293 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package objutilv1_test + +import ( + "testing" + "time" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/deckhouse/sds-replicated-volume/api/objutilv1" +) + +type testConditionedObject struct { + metav1.PartialObjectMetadata + conds []metav1.Condition +} + +func (o *testConditionedObject) GetStatusConditions() []metav1.Condition { + return o.conds +} + +func (o *testConditionedObject) SetStatusConditions(conditions []metav1.Condition) { + o.conds = conditions +} + +func TestSetStatusCondition_ObservedGenerationAndLastTransitionTime(t *testing.T) { + obj := &testConditionedObject{} + obj.SetGeneration(1) + + in := metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "OK", Message: "ok"} + + if changed := objutilv1.SetStatusCondition(obj, in); !changed { + t.Fatalf("expected changed=true on first set") + } + + got := objutilv1.GetStatusCondition(obj, "Ready") + if got == nil { + t.Fatalf("expected condition to be present") + } + + if got.ObservedGeneration != 1 { + t.Fatalf("expected ObservedGeneration=1, got %d", got.ObservedGeneration) + } + if got.LastTransitionTime.IsZero() { + t.Fatalf("expected LastTransitionTime to be set") + } + ltt1 := got.LastTransitionTime + + // Same input, same generation -> no change. + if changed := objutilv1.SetStatusCondition(obj, in); changed { + t.Fatalf("expected changed=false on idempotent set") + } + + got = objutilv1.GetStatusCondition(obj, "Ready") + if got == nil { + t.Fatalf("expected condition to be present") + } + if got.LastTransitionTime != ltt1 { + t.Fatalf("expected LastTransitionTime to be preserved on idempotent set") + } + + // Only generation changes -> ObservedGeneration changes, but LastTransitionTime is preserved. + obj.SetGeneration(2) + if changed := objutilv1.SetStatusCondition(obj, in); !changed { + t.Fatalf("expected changed=true when only ObservedGeneration changes") + } + + got = objutilv1.GetStatusCondition(obj, "Ready") + if got == nil { + t.Fatalf("expected condition to be present") + } + if got.ObservedGeneration != 2 { + t.Fatalf("expected ObservedGeneration=2, got %d", got.ObservedGeneration) + } + if got.LastTransitionTime != ltt1 { + t.Fatalf("expected LastTransitionTime to be preserved when only ObservedGeneration changes") + } + + // Message changes -> LastTransitionTime is preserved. + obj.conds[0].LastTransitionTime = metav1.NewTime(time.Unix(2, 0).UTC()) + ltt2 := obj.conds[0].LastTransitionTime + + in.Message = "new-message" + obj.SetGeneration(3) + if changed := objutilv1.SetStatusCondition(obj, in); !changed { + t.Fatalf("expected changed=true when message changes") + } + got = objutilv1.GetStatusCondition(obj, "Ready") + if got == nil { + t.Fatalf("expected condition to be present") + } + if got.LastTransitionTime != ltt2 { + t.Fatalf("expected LastTransitionTime to be preserved when only message changes") + } + + // Reason changes -> LastTransitionTime is preserved. + obj.conds[0].LastTransitionTime = metav1.NewTime(time.Unix(3, 0).UTC()) + ltt3 := obj.conds[0].LastTransitionTime + + in.Reason = "Other" + obj.SetGeneration(4) + if changed := objutilv1.SetStatusCondition(obj, in); !changed { + t.Fatalf("expected changed=true when reason changes") + } + got = objutilv1.GetStatusCondition(obj, "Ready") + if got == nil { + t.Fatalf("expected condition to be present") + } + if got.LastTransitionTime != ltt3 { + t.Fatalf("expected LastTransitionTime to be preserved when only reason changes") + } + + // Actual transition -> LastTransitionTime updated. 
+ // Make old LTT distinguishable. + obj.conds[0].LastTransitionTime = metav1.NewTime(time.Unix(1, 0).UTC()) + oldLTT := obj.conds[0].LastTransitionTime + + in.Status = metav1.ConditionFalse + obj.SetGeneration(5) + if changed := objutilv1.SetStatusCondition(obj, in); !changed { + t.Fatalf("expected changed=true when meaning changes") + } + + got = objutilv1.GetStatusCondition(obj, "Ready") + if got == nil { + t.Fatalf("expected condition to be present") + } + if got.LastTransitionTime == oldLTT { + t.Fatalf("expected LastTransitionTime to change when meaning changes") + } +} + +func TestRemoveStatusCondition(t *testing.T) { + obj := &testConditionedObject{} + + if changed := objutilv1.RemoveStatusCondition(obj, "Ready"); changed { + t.Fatalf("expected changed=false when condition not present") + } + + obj.SetGeneration(1) + _ = objutilv1.SetStatusCondition(obj, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue}) + + if changed := objutilv1.RemoveStatusCondition(obj, "Ready"); !changed { + t.Fatalf("expected changed=true when condition present") + } + if objutilv1.HasStatusCondition(obj, "Ready") { + t.Fatalf("expected condition to be removed") + } +} + +func TestConditionEqualByStatus(t *testing.T) { + a := &metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "A", Message: "a"} + b := &metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "B", Message: "b"} + + if !objutilv1.ConditionEqualByStatus(a, b) { + t.Fatalf("expected equal when Type and Status match") + } + + b.Type = "Other" + if objutilv1.ConditionEqualByStatus(a, b) { + t.Fatalf("expected not equal when Type differs") + } + + b.Type = "Ready" + b.Status = metav1.ConditionFalse + if objutilv1.ConditionEqualByStatus(a, b) { + t.Fatalf("expected not equal when Status differs") + } + + if !objutilv1.ConditionEqualByStatus((*metav1.Condition)(nil), (*metav1.Condition)(nil)) { + t.Fatalf("expected nil==nil to be equal") + } + if objutilv1.ConditionEqualByStatus(a, (*metav1.Condition)(nil)) { + t.Fatalf("expected non-nil != nil") + } +} + +func TestAreConditionsSemanticallyEqual_SelectedTypes(t *testing.T) { + a := &testConditionedObject{} + b := &testConditionedObject{} + + a.SetGeneration(1) + b.SetGeneration(1) + + _ = objutilv1.SetStatusCondition(a, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "OK"}) + _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "OK"}) + + // Both missing -> equal for that type. + if !objutilv1.AreConditionsSemanticallyEqual(a, b, "Missing") { + t.Fatalf("expected equal when selected condition type is missing on both objects") + } + + // Present on both and semantically equal -> equal. + if !objutilv1.AreConditionsSemanticallyEqual(a, b, "Ready") { + t.Fatalf("expected equal for semantically equal condition on both objects") + } + + // Missing on one -> not equal. 
+ _ = objutilv1.RemoveStatusCondition(b, "Ready") + if objutilv1.AreConditionsSemanticallyEqual(a, b, "Ready") { + t.Fatalf("expected not equal when condition is missing on exactly one object") + } +} + +func TestAreConditionsSemanticallyEqual_AllTypesWhenEmpty(t *testing.T) { + a := &testConditionedObject{} + b := &testConditionedObject{} + + a.SetGeneration(1) + b.SetGeneration(1) + + _ = objutilv1.SetStatusCondition(a, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "OK"}) + _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "OK"}) + + _ = objutilv1.SetStatusCondition(a, metav1.Condition{Type: "Online", Status: metav1.ConditionTrue}) + _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Online", Status: metav1.ConditionTrue}) + + if !objutilv1.AreConditionsSemanticallyEqual(a, b) { + t.Fatalf("expected equal when all condition types are semantically equal") + } + + // Change meaning for one condition type. + _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Online", Status: metav1.ConditionFalse, Reason: "Down"}) + if objutilv1.AreConditionsSemanticallyEqual(a, b) { + t.Fatalf("expected not equal when any condition meaning differs") + } +} + +func TestAreConditionsEqualByStatus_SelectedTypes(t *testing.T) { + a := &testConditionedObject{} + b := &testConditionedObject{} + + a.SetGeneration(1) + b.SetGeneration(1) + + _ = objutilv1.SetStatusCondition(a, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "A"}) + _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "B"}) + + // StatusEqual ignores Reason/Message/ObservedGeneration differences. + if !objutilv1.AreConditionsEqualByStatus(a, b, "Ready") { + t.Fatalf("expected equal when Type and Status match") + } + + // Both missing -> equal for that type. + if !objutilv1.AreConditionsEqualByStatus(a, b, "Missing") { + t.Fatalf("expected equal when selected condition type is missing on both objects") + } + + // Missing on one -> not equal. + _ = objutilv1.RemoveStatusCondition(b, "Ready") + if objutilv1.AreConditionsEqualByStatus(a, b, "Ready") { + t.Fatalf("expected not equal when condition is missing on exactly one object") + } +} + +func TestAreConditionsEqualByStatus_AllTypesWhenEmpty(t *testing.T) { + a := &testConditionedObject{} + b := &testConditionedObject{} + + a.SetGeneration(1) + b.SetGeneration(1) + + _ = objutilv1.SetStatusCondition(a, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "A"}) + _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Ready", Status: metav1.ConditionTrue, Reason: "B"}) + + _ = objutilv1.SetStatusCondition(a, metav1.Condition{Type: "Online", Status: metav1.ConditionTrue}) + _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Online", Status: metav1.ConditionTrue}) + + if !objutilv1.AreConditionsEqualByStatus(a, b) { + t.Fatalf("expected equal when all condition types have equal Type+Status") + } + + // Status differs for one condition type -> not equal. 
+ _ = objutilv1.SetStatusCondition(b, metav1.Condition{Type: "Online", Status: metav1.ConditionFalse, Reason: "Down"}) + if objutilv1.AreConditionsEqualByStatus(a, b) { + t.Fatalf("expected not equal when any condition Status differs") + } +} diff --git a/api/objutilv1/finalizers.go b/api/objutilv1/finalizers.go new file mode 100644 index 000000000..5af54f517 --- /dev/null +++ b/api/objutilv1/finalizers.go @@ -0,0 +1,79 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package objutilv1 + +import ( + "slices" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// HasFinalizer reports whether the object has the given finalizer. +func HasFinalizer(obj metav1.Object, finalizer string) bool { + return slices.Contains(obj.GetFinalizers(), finalizer) +} + +// AddFinalizer ensures the given finalizer is present on the object. +// It returns whether the finalizers were changed. +func AddFinalizer(obj metav1.Object, finalizer string) (changed bool) { + finalizers := obj.GetFinalizers() + if slices.Contains(finalizers, finalizer) { + return false + } + + obj.SetFinalizers(append(finalizers, finalizer)) + return true +} + +// RemoveFinalizer removes the given finalizer from the object. +// It returns whether the finalizers were changed. +func RemoveFinalizer(obj metav1.Object, finalizer string) (changed bool) { + finalizers := obj.GetFinalizers() + + idx := slices.Index(finalizers, finalizer) + if idx < 0 { + return false + } + + obj.SetFinalizers(slices.Delete(finalizers, idx, idx+1)) + return true +} + +// HasFinalizersOtherThan reports whether the object has any finalizers not in the allowed list. +func HasFinalizersOtherThan(obj metav1.Object, allowedFinalizers ...string) bool { + finalizers := obj.GetFinalizers() + + switch len(allowedFinalizers) { + case 0: + return len(finalizers) > 0 + case 1: + allowed := allowedFinalizers[0] + for _, f := range finalizers { + if f != allowed { + return true + } + } + return false + default: + for _, f := range finalizers { + if !slices.Contains(allowedFinalizers, f) { + return true + } + } + return false + } +} diff --git a/api/objutilv1/finalizers_test.go b/api/objutilv1/finalizers_test.go new file mode 100644 index 000000000..683715cec --- /dev/null +++ b/api/objutilv1/finalizers_test.go @@ -0,0 +1,57 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package objutilv1_test + +import ( + "testing" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/deckhouse/sds-replicated-volume/api/objutilv1" +) + +func TestFinalizersHelpers(t *testing.T) { + obj := &metav1.PartialObjectMetadata{} + + if objutilv1.HasFinalizer(obj, "f") { + t.Fatalf("expected no finalizer") + } + + if changed := objutilv1.AddFinalizer(obj, "f"); !changed { + t.Fatalf("expected changed=true on first add") + } + if changed := objutilv1.AddFinalizer(obj, "f"); changed { + t.Fatalf("expected changed=false on idempotent add") + } + if !objutilv1.HasFinalizer(obj, "f") { + t.Fatalf("expected finalizer to be present") + } + + if !objutilv1.HasFinalizersOtherThan(obj, "other") { + t.Fatalf("expected to have finalizers other than allowed") + } + if objutilv1.HasFinalizersOtherThan(obj, "f") { + t.Fatalf("expected no finalizers other than allowed") + } + + if changed := objutilv1.RemoveFinalizer(obj, "f"); !changed { + t.Fatalf("expected changed=true on remove") + } + if changed := objutilv1.RemoveFinalizer(obj, "f"); changed { + t.Fatalf("expected changed=false on repeated remove") + } +} diff --git a/api/objutilv1/interfaces.go b/api/objutilv1/interfaces.go new file mode 100644 index 000000000..1d85a91db --- /dev/null +++ b/api/objutilv1/interfaces.go @@ -0,0 +1,43 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package objutilv1 + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" +) + +// StatusConditionObject is a root Kubernetes object that exposes status conditions. +// +// It is intentionally small: helpers in this package need only metadata access +// (for generation/labels/finalizers/ownerRefs) and the ability to read/write +// the `.status.conditions` slice. +type StatusConditionObject interface { + metav1.Object + + GetStatusConditions() []metav1.Condition + SetStatusConditions([]metav1.Condition) +} + +// MetaRuntimeObject is a Kubernetes object that provides both metadata (name/uid) +// and an explicit GroupVersionKind via runtime.Object. +// +// It is used for OwnerReference helpers. +type MetaRuntimeObject interface { + metav1.Object + runtime.Object +} diff --git a/api/objutilv1/labels.go b/api/objutilv1/labels.go new file mode 100644 index 000000000..5676adfdf --- /dev/null +++ b/api/objutilv1/labels.go @@ -0,0 +1,74 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package objutilv1 + +import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + +// HasLabel reports whether the object has the given label key. +func HasLabel(obj metav1.Object, key string) bool { + labels := obj.GetLabels() + if labels == nil { + return false + } + + _, ok := labels[key] + return ok +} + +// HasLabelValue reports whether the object has the given label key set to the provided value. +func HasLabelValue(obj metav1.Object, key, value string) bool { + labels := obj.GetLabels() + if labels == nil { + return false + } + + return labels[key] == value +} + +// SetLabel ensures the object has the given label key set to the provided value. +// It returns whether the labels were changed. +func SetLabel(obj metav1.Object, key, value string) (changed bool) { + labels := obj.GetLabels() + if labels == nil { + labels = make(map[string]string) + } + + if labels[key] == value { + return false + } + + labels[key] = value + obj.SetLabels(labels) + return true +} + +// RemoveLabel removes the given label key from the object. +// It returns whether the labels were changed. +func RemoveLabel(obj metav1.Object, key string) (changed bool) { + labels := obj.GetLabels() + if labels == nil { + return false + } + + if _, ok := labels[key]; !ok { + return false + } + + delete(labels, key) + obj.SetLabels(labels) + return true +} diff --git a/api/objutilv1/labels_test.go b/api/objutilv1/labels_test.go new file mode 100644 index 000000000..ac285b618 --- /dev/null +++ b/api/objutilv1/labels_test.go @@ -0,0 +1,57 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package objutilv1_test + +import ( + "testing" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/deckhouse/sds-replicated-volume/api/objutilv1" +) + +func TestLabelsHelpers(t *testing.T) { + obj := &metav1.PartialObjectMetadata{} + + if objutilv1.HasLabel(obj, "k") { + t.Fatalf("expected no label") + } + if objutilv1.HasLabelValue(obj, "k", "v") { + t.Fatalf("expected no label value") + } + + if changed := objutilv1.SetLabel(obj, "k", "v"); !changed { + t.Fatalf("expected changed=true on first set") + } + if !objutilv1.HasLabel(obj, "k") { + t.Fatalf("expected label to be present") + } + if !objutilv1.HasLabelValue(obj, "k", "v") { + t.Fatalf("expected label value to match") + } + + if changed := objutilv1.SetLabel(obj, "k", "v"); changed { + t.Fatalf("expected changed=false on idempotent set") + } + + if changed := objutilv1.RemoveLabel(obj, "k"); !changed { + t.Fatalf("expected changed=true on remove") + } + if changed := objutilv1.RemoveLabel(obj, "k"); changed { + t.Fatalf("expected changed=false on repeated remove") + } +} diff --git a/api/objutilv1/ownerrefs.go b/api/objutilv1/ownerrefs.go new file mode 100644 index 000000000..f3b2c4f86 --- /dev/null +++ b/api/objutilv1/ownerrefs.go @@ -0,0 +1,124 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package objutilv1 + +import ( + "slices" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// HasMatchingOwnerRef reports whether the object has an owner reference matching the given owner. +func HasMatchingOwnerRef(obj metav1.Object, owner MetaRuntimeObject, controller bool) bool { + desired := mustDesiredOwnerRef(owner, controller) + + for _, ref := range obj.GetOwnerReferences() { + if ownerRefsEqual(ref, desired) { + return true + } + } + + return false +} + +// SetOwnerRef ensures the object has an owner reference for the given owner. +// It returns whether the ownerReferences were changed. +func SetOwnerRef(obj metav1.Object, owner MetaRuntimeObject, controller bool) (changed bool) { + desired := mustDesiredOwnerRef(owner, controller) + + refs := obj.GetOwnerReferences() + + idx := indexOfOwnerRef(refs, desired) + if idx < 0 { + obj.SetOwnerReferences(append(refs, desired)) + return true + } + + if ownerRefsEqual(refs[idx], desired) { + return false + } + + newRefs := slices.Clone(refs) + newRefs[idx] = desired + obj.SetOwnerReferences(newRefs) + return true +} + +// mustDesiredOwnerRef builds an OwnerReference for the given owner. +// +// We expect owner objects passed to objutilv1 helpers to have a non-empty GVK, +// because OwnerReference requires APIVersion/Kind to be set. +// If GVK is empty, this function panics. +func mustDesiredOwnerRef(owner MetaRuntimeObject, controller bool) metav1.OwnerReference { + gvk := owner.GetObjectKind().GroupVersionKind() + if gvk.Empty() { + panic("objutilv1: owner object has empty GroupVersionKind; ensure APIVersion/Kind (GVK) is set on the owner runtime.Object") + } + + if owner.GetName() == "" { + panic("objutilv1: owner object has empty name; ensure metadata.name is set on the owner") + } + if owner.GetUID() == "" { + panic("objutilv1: owner object has empty uid; ensure metadata.uid is set on the owner") + } + + return metav1.OwnerReference{ + APIVersion: gvk.GroupVersion().String(), + Kind: gvk.Kind, + Name: owner.GetName(), + UID: owner.GetUID(), + Controller: boolPtr(controller), + } +} + +func boolPtr(v bool) *bool { return &v } + +func ownerRefsEqual(a, b metav1.OwnerReference) bool { + return a.APIVersion == b.APIVersion && + a.Kind == b.Kind && + a.Name == b.Name && + a.UID == b.UID && + boolPtrEqual(a.Controller, b.Controller) +} + +func boolPtrEqual(a, b *bool) bool { + if a == nil || b == nil { + return false + } + return *a == *b +} + +func indexOfOwnerRef(refs []metav1.OwnerReference, desired metav1.OwnerReference) int { + if desired.UID != "" { + for i := range refs { + if refs[i].UID == desired.UID { + return i + } + } + return -1 + } + + for i := range refs { + if refs[i].APIVersion == desired.APIVersion && + refs[i].Kind == desired.Kind && + refs[i].Name == desired.Name { + return i + } + } + + return -1 +} diff --git a/api/objutilv1/ownerrefs_test.go b/api/objutilv1/ownerrefs_test.go new file mode 100644 index 000000000..af7028f1e --- /dev/null +++ b/api/objutilv1/ownerrefs_test.go @@ -0,0 +1,110 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this 
file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package objutilv1_test + +import ( + "testing" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + + "github.com/deckhouse/sds-replicated-volume/api/objutilv1" +) + +func TestOwnerRefsHelpers(t *testing.T) { + obj := &metav1.PartialObjectMetadata{} + + ownerNoGVK := &metav1.PartialObjectMetadata{} + ownerNoGVK.SetName("owner") + ownerNoGVK.SetUID(types.UID("u1")) + + t.Run("empty_GVK_panics", func(t *testing.T) { + func() { + defer func() { + if r := recover(); r == nil { + t.Fatalf("expected panic when owner has empty GVK (SetOwnerRef)") + } + }() + _ = objutilv1.SetOwnerRef(obj, ownerNoGVK, true) + }() + + func() { + defer func() { + if r := recover(); r == nil { + t.Fatalf("expected panic when owner has empty GVK (HasMatchingOwnerRef)") + } + }() + _ = objutilv1.HasMatchingOwnerRef(obj, ownerNoGVK, true) + }() + }) + + t.Run("empty_name_panics", func(t *testing.T) { + owner := &metav1.PartialObjectMetadata{} + owner.TypeMeta.APIVersion = "test.io/v1" + owner.TypeMeta.Kind = "TestOwner" + owner.SetUID(types.UID("u1")) + + defer func() { + if r := recover(); r == nil { + t.Fatalf("expected panic when owner has empty name") + } + }() + _ = objutilv1.SetOwnerRef(obj, owner, true) + }) + + t.Run("empty_uid_panics", func(t *testing.T) { + owner := &metav1.PartialObjectMetadata{} + owner.TypeMeta.APIVersion = "test.io/v1" + owner.TypeMeta.Kind = "TestOwner" + owner.SetName("owner") + + defer func() { + if r := recover(); r == nil { + t.Fatalf("expected panic when owner has empty uid") + } + }() + _ = objutilv1.SetOwnerRef(obj, owner, true) + }) + + owner := &metav1.PartialObjectMetadata{} + owner.TypeMeta.APIVersion = "test.io/v1" + owner.TypeMeta.Kind = "TestOwner" + owner.SetName("owner") + owner.SetUID(types.UID("u1")) + + if changed := objutilv1.SetOwnerRef(obj, owner, true); !changed { + t.Fatalf("expected changed=true on first set") + } + if changed := objutilv1.SetOwnerRef(obj, owner, true); changed { + t.Fatalf("expected changed=false on idempotent set") + } + + if !objutilv1.HasMatchingOwnerRef(obj, owner, true) { + t.Fatalf("expected to match ownerRef") + } + if objutilv1.HasMatchingOwnerRef(obj, owner, false) { + t.Fatalf("expected not to match ownerRef with different controller flag") + } + + // Update controller flag for the same owner UID. + if changed := objutilv1.SetOwnerRef(obj, owner, false); !changed { + t.Fatalf("expected changed=true when updating controller flag") + } + if !objutilv1.HasMatchingOwnerRef(obj, owner, false) { + t.Fatalf("expected to match updated ownerRef") + } +} diff --git a/api/v1alpha1/annotations.go b/api/v1alpha1/annotations.go new file mode 100644 index 000000000..74f58371c --- /dev/null +++ b/api/v1alpha1/annotations.go @@ -0,0 +1,25 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const annotationPrefix = "sds-replicated-volume.deckhouse.io/" + +const ( + // LVMVolumeGroupUnschedulableAnnotationKey marks an LVMVolumeGroup as unschedulable + // for new ReplicatedVolumeReplicas. + LVMVolumeGroupUnschedulableAnnotationKey = annotationPrefix + "unschedulable" +) diff --git a/api/v1alpha1/common_helpers.go b/api/v1alpha1/common_helpers.go new file mode 100644 index 000000000..a216fd691 --- /dev/null +++ b/api/v1alpha1/common_helpers.go @@ -0,0 +1,20 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +// Place shared helpers here that do not belong to any single API object, +// but you are sure they must live in the API package. diff --git a/api/v1alpha1/common_types.go b/api/v1alpha1/common_types.go new file mode 100644 index 000000000..ac1c9dadb --- /dev/null +++ b/api/v1alpha1/common_types.go @@ -0,0 +1,17 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 diff --git a/api/v1alpha1/drbd_node_operation.go b/api/v1alpha1/drbd_node_operation.go new file mode 100644 index 000000000..e9b5f046a --- /dev/null +++ b/api/v1alpha1/drbd_node_operation.go @@ -0,0 +1,78 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package v1alpha1 + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +kubebuilder:resource:scope=Cluster,shortName=dno +// +kubebuilder:metadata:labels=module=sds-replicated-volume +// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=".spec.nodeName" +// +kubebuilder:printcolumn:name="Type",type=string,JSONPath=".spec.type" +// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=".status.phase" +// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=".metadata.creationTimestamp" +type DRBDNodeOperation struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata"` + + Spec DRBDNodeOperationSpec `json:"spec"` + // +patchStrategy=merge + Status *DRBDNodeOperationStatus `json:"status,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +type DRBDNodeOperationSpec struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=253 + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="nodeName is immutable" + NodeName string `json:"nodeName"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:Enum=UpdateDRBD + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="type is immutable" + Type DRBDNodeOperationType `json:"type"` +} + +// +kubebuilder:object:generate=true +type DRBDNodeOperationStatus struct { + // +optional + Phase DRBDOperationPhase `json:"phase,omitempty"` + + // +kubebuilder:validation:MaxLength=1024 + // +optional + Message string `json:"message,omitempty"` + + // +optional + StartedAt *metav1.Time `json:"startedAt,omitempty"` + + // +optional + CompletedAt *metav1.Time `json:"completedAt,omitempty"` +} + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:resource:scope=Cluster +type DRBDNodeOperationList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + Items []DRBDNodeOperation `json:"items"` +} diff --git a/api/v1alpha1/drbd_resource.go b/api/v1alpha1/drbd_resource.go new file mode 100644 index 000000000..12b230081 --- /dev/null +++ b/api/v1alpha1/drbd_resource.go @@ -0,0 +1,326 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package v1alpha1 + +import ( + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +kubebuilder:resource:scope=Cluster,shortName=dr +// +kubebuilder:metadata:labels=module=sds-replicated-volume +// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=".spec.nodeName" +// +kubebuilder:printcolumn:name="State",type=string,JSONPath=".spec.state" +// +kubebuilder:printcolumn:name="Role",type=string,JSONPath=".status.role" +// +kubebuilder:printcolumn:name="Type",type=string,JSONPath=".spec.type" +// +kubebuilder:printcolumn:name="DiskState",type=string,JSONPath=".status.diskState" +// +kubebuilder:printcolumn:name="Quorum",type=boolean,JSONPath=".status.quorum" +// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=".metadata.creationTimestamp" +// +kubebuilder:validation:XValidation:rule="self.spec.type == 'Diskful' ? has(self.spec.lvmLogicalVolumeName) && size(self.spec.lvmLogicalVolumeName) > 0 : !has(self.spec.lvmLogicalVolumeName) || size(self.spec.lvmLogicalVolumeName) == 0",message="lvmLogicalVolumeName is required when type is Diskful and must be empty when type is Diskless" +// +kubebuilder:validation:XValidation:rule="!has(oldSelf.spec.size) || self.spec.size >= oldSelf.spec.size",message="spec.size cannot be decreased" +type DRBDResource struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata"` + + Spec DRBDResourceSpec `json:"spec"` + // +patchStrategy=merge + Status *DRBDResourceStatus `json:"status,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +type DRBDResourceSpec struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=253 + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="nodeName is immutable" + NodeName string `json:"nodeName"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinItems=1 + // +kubebuilder:validation:MaxItems=16 + // +kubebuilder:validation:items:MaxLength=64 + SystemNetworks []string `json:"systemNetworks"` + + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=31 + // +optional + Quorum byte `json:"quorum,omitempty"` + + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=31 + // +optional + QuorumMinimumRedundancy byte `json:"quorumMinimumRedundancy,omitempty"` + + // +kubebuilder:validation:Enum=Up;Down + // +optional + State DRBDResourceState `json:"state,omitempty"` + + // +kubebuilder:validation:Required + Size resource.Quantity `json:"size"` + + // +kubebuilder:validation:Enum=Primary;Secondary + // +optional + Role DRBDRole `json:"role,omitempty"` + + // +kubebuilder:default=false + // +optional + AllowTwoPrimaries bool `json:"allowTwoPrimaries,omitempty"` + + // +kubebuilder:validation:Enum=Diskful;Diskless + // +kubebuilder:default=Diskful + // +optional + Type DRBDResourceType `json:"type,omitempty"` + + // Required when type is Diskful, must be empty when type is Diskless. 
+ // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=128 + // +optional + LVMLogicalVolumeName string `json:"lvmLogicalVolumeName,omitempty"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=31 + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="nodeID is immutable" + NodeID uint `json:"nodeID"` + + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +kubebuilder:validation:MaxItems=31 + // +optional + Peers []DRBDResourcePeer `json:"peers,omitempty" patchStrategy:"merge" patchMergeKey:"name"` + + // Maintenance mode - when set, reconciliation is paused but status is still updated + // +kubebuilder:validation:Enum=NoResourceReconciliation + // +optional + Maintenance MaintenanceMode `json:"maintenance,omitempty"` +} + +// MaintenanceMode represents the maintenance mode of a DRBD resource. +type MaintenanceMode string + +const ( + // MaintenanceModeNoResourceReconciliation pauses reconciliation but status is still updated. + MaintenanceModeNoResourceReconciliation MaintenanceMode = "NoResourceReconciliation" +) + +// +kubebuilder:object:generate=true +type DRBDResourcePeer struct { + // Peer node name. Immutable, used as list map key. + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=253 + // +kubebuilder:validation:Pattern=`^[0-9A-Za-z.+_-]*$` + Name string `json:"name"` + + // +kubebuilder:validation:Enum=Diskful;Diskless + // +kubebuilder:default=Diskful + // +optional + Type DRBDResourceType `json:"type,omitempty"` + + // +kubebuilder:default=true + // +optional + AllowRemoteRead bool `json:"allowRemoteRead,omitempty"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=31 + NodeID uint `json:"nodeID"` + + // +kubebuilder:validation:Enum=A;B;C + // +kubebuilder:default=C + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="protocol is immutable" + // +optional + Protocol DRBDProtocol `json:"protocol,omitempty"` + + // +kubebuilder:validation:MaxLength=256 + // +optional + SharedSecret string `json:"sharedSecret,omitempty"` + + // +kubebuilder:validation:Enum=SHA256;SHA1;DummyForTest + // +optional + SharedSecretAlg SharedSecretAlg `json:"sharedSecretAlg,omitempty"` + + // +kubebuilder:default=false + // +optional + PauseSync bool `json:"pauseSync,omitempty"` + + // +patchMergeKey=systemNetworkName + // +patchStrategy=merge + // +listType=map + // +listMapKey=systemNetworkName + // +kubebuilder:validation:MinItems=1 + // +kubebuilder:validation:MaxItems=16 + Paths []DRBDResourcePath `json:"paths" patchStrategy:"merge" patchMergeKey:"systemNetworkName"` +} + +// +kubebuilder:object:generate=true +type DRBDResourcePath struct { + // System network name. Immutable, used as list map key. 
+ // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=64 + SystemNetworkName string `json:"systemNetworkName"` + + // +kubebuilder:validation:Required + Address DRBDAddress `json:"address"` +} + +// +kubebuilder:object:generate=true +type DRBDAddress struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:Pattern=`^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$` + IPv4 string `json:"ipv4"` + + // +kubebuilder:validation:Minimum=1025 + // +kubebuilder:validation:Maximum=65535 + Port uint `json:"port"` +} + +// +kubebuilder:object:generate=true +type DRBDResourceStatus struct { + // Device path, e.g. /dev/drbd10012 or /dev/sds-replicated/ + // Only present on primary + // +kubebuilder:validation:MaxLength=256 + // +optional + Device string `json:"device,omitempty"` + + // +kubebuilder:validation:MaxItems=32 + // +optional + Addresses []DRBDResourceAddressStatus `json:"addresses,omitempty"` + + // +kubebuilder:validation:Enum=Primary;Secondary + // +optional + Role DRBDRole `json:"role,omitempty"` + + // +patchStrategy=merge + // +optional + ActiveConfiguration *DRBDResourceActiveConfiguration `json:"activeConfiguration,omitempty" patchStrategy:"merge"` + + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +kubebuilder:validation:MaxItems=31 + // +optional + Peers []DRBDResourcePeerStatus `json:"peers,omitempty" patchStrategy:"merge" patchMergeKey:"name"` + + // +optional + DiskState DiskState `json:"diskState,omitempty"` + + // +optional + Quorum *bool `json:"quorum,omitempty"` +} + +// +kubebuilder:object:generate=true +type DRBDResourceAddressStatus struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MaxLength=64 + SystemNetworkName string `json:"systemNetworkName"` + + // +kubebuilder:validation:Required + Address DRBDAddress `json:"address"` +} + +// +kubebuilder:object:generate=true +type DRBDResourceActiveConfiguration struct { + // +optional + Quorum *byte `json:"quorum,omitempty"` + + // +optional + QuorumMinimumRedundancy *byte `json:"quorumMinimumRedundancy,omitempty"` + + // +kubebuilder:validation:Enum=Up;Down + // +optional + State DRBDResourceState `json:"state,omitempty"` + + // +optional + Size *resource.Quantity `json:"size,omitempty"` + + // +kubebuilder:validation:Enum=Primary;Secondary + // +optional + Role DRBDRole `json:"role,omitempty"` + + // +optional + AllowTwoPrimaries *bool `json:"allowTwoPrimaries,omitempty"` + + // +kubebuilder:validation:Enum=Diskful;Diskless + // +optional + Type DRBDResourceType `json:"type,omitempty"` + + // Disk path, e.g. /dev/... 
+ // +kubebuilder:validation:MaxLength=256 + // +optional + Disk string `json:"disk,omitempty"` +} + +// +kubebuilder:object:generate=true +type DRBDResourcePeerStatus struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=253 + Name string `json:"name"` + + // +kubebuilder:validation:Enum=Diskful;Diskless + // +optional + Type DRBDResourceType `json:"type,omitempty"` + + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=31 + // +optional + NodeID *uint `json:"nodeID,omitempty"` + + // +patchMergeKey=systemNetworkName + // +patchStrategy=merge + // +listType=map + // +listMapKey=systemNetworkName + // +kubebuilder:validation:MaxItems=16 + // +optional + Paths []DRBDResourcePathStatus `json:"paths,omitempty" patchStrategy:"merge" patchMergeKey:"systemNetworkName"` + + // +optional + ConnectionState ConnectionState `json:"connectionState,omitempty"` + + // +optional + DiskState DiskState `json:"diskState,omitempty"` +} + +// +kubebuilder:object:generate=true +type DRBDResourcePathStatus struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MaxLength=64 + SystemNetworkName string `json:"systemNetworkName"` + + // +kubebuilder:validation:Required + Address DRBDAddress `json:"address"` + + // +optional + Established bool `json:"established,omitempty"` +} + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:resource:scope=Cluster +type DRBDResourceList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + Items []DRBDResource `json:"items"` +} diff --git a/api/v1alpha1/drbd_resource_consts.go b/api/v1alpha1/drbd_resource_consts.go new file mode 100644 index 000000000..792bd48ba --- /dev/null +++ b/api/v1alpha1/drbd_resource_consts.go @@ -0,0 +1,85 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +// DRBDResourceState represents the desired state of a DRBD resource. +type DRBDResourceState string + +const ( + // DRBDResourceStateUp indicates the resource should be up. + DRBDResourceStateUp DRBDResourceState = "Up" + // DRBDResourceStateDown indicates the resource should be down. + DRBDResourceStateDown DRBDResourceState = "Down" +) + +// DRBDRole represents the role of a DRBD resource. +type DRBDRole string + +const ( + // DRBDRolePrimary indicates the resource is primary. + DRBDRolePrimary DRBDRole = "Primary" + // DRBDRoleSecondary indicates the resource is secondary. + DRBDRoleSecondary DRBDRole = "Secondary" +) + +// DRBDResourceType represents the type of a DRBD resource. +type DRBDResourceType string + +const ( + // DRBDResourceTypeDiskful indicates a diskful resource that stores data. + DRBDResourceTypeDiskful DRBDResourceType = "Diskful" + // DRBDResourceTypeDiskless indicates a diskless resource. + DRBDResourceTypeDiskless DRBDResourceType = "Diskless" +) + +// DRBDProtocol represents the DRBD replication protocol. 
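For orientation, here is a minimal sketch (not part of the patch) of a Diskful DRBDResource built from the constants above; the node, object, and LV names are placeholders. It satisfies the CEL rules declared on the type: a Diskful resource must carry a non-empty `lvmLogicalVolumeName`, and `spec.size` may only grow.

```go
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleDiskfulResource is an illustrative in-memory object only;
// object, node, and logical volume names are hypothetical.
func exampleDiskfulResource() *DRBDResource {
	return &DRBDResource{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-0001-node-a"},
		Spec: DRBDResourceSpec{
			NodeName:             "node-a",
			SystemNetworks:       []string{"Internal"},
			State:                DRBDResourceStateUp,
			Size:                 resource.MustParse("10Gi"),
			Role:                 DRBDRoleSecondary,
			Type:                 DRBDResourceTypeDiskful,
			LVMLogicalVolumeName: "pvc-0001", // required because Type is Diskful
			NodeID:               0,
		},
	}
}
```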
+type DRBDProtocol string + +const ( + // DRBDProtocolA is asynchronous replication protocol. + DRBDProtocolA DRBDProtocol = "A" + // DRBDProtocolB is memory synchronous (semi-synchronous) replication protocol. + DRBDProtocolB DRBDProtocol = "B" + // DRBDProtocolC is synchronous replication protocol. + DRBDProtocolC DRBDProtocol = "C" +) + +// DRBDResourceOperationType represents the type of operation to perform on a DRBD resource. +type DRBDResourceOperationType string + +const ( + // DRBDResourceOperationCreateNewUUID creates a new UUID for the resource. + DRBDResourceOperationCreateNewUUID DRBDResourceOperationType = "CreateNewUUID" + // DRBDResourceOperationForcePrimary forces the resource to become primary. + DRBDResourceOperationForcePrimary DRBDResourceOperationType = "ForcePrimary" + // DRBDResourceOperationInvalidate invalidates the resource data. + DRBDResourceOperationInvalidate DRBDResourceOperationType = "Invalidate" + // DRBDResourceOperationOutdate marks the resource as outdated. + DRBDResourceOperationOutdate DRBDResourceOperationType = "Outdate" + // DRBDResourceOperationVerify verifies data consistency with peers. + DRBDResourceOperationVerify DRBDResourceOperationType = "Verify" + // DRBDResourceOperationCreateSnapshot creates a snapshot of the resource. + DRBDResourceOperationCreateSnapshot DRBDResourceOperationType = "CreateSnapshot" +) + +// DRBDNodeOperationType represents the type of operation to perform on a DRBD node. +type DRBDNodeOperationType string + +const ( + // DRBDNodeOperationUpdateDRBD updates DRBD on the node. + DRBDNodeOperationUpdateDRBD DRBDNodeOperationType = "UpdateDRBD" +) diff --git a/api/v1alpha1/drbd_resource_operation.go b/api/v1alpha1/drbd_resource_operation.go new file mode 100644 index 000000000..255d84282 --- /dev/null +++ b/api/v1alpha1/drbd_resource_operation.go @@ -0,0 +1,105 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package v1alpha1 + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +kubebuilder:resource:scope=Cluster,shortName=dro +// +kubebuilder:metadata:labels=module=sds-replicated-volume +// +kubebuilder:printcolumn:name="Resource",type=string,JSONPath=".spec.drbdResourceName" +// +kubebuilder:printcolumn:name="Type",type=string,JSONPath=".spec.type" +// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=".status.phase" +// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=".metadata.creationTimestamp" +type DRBDResourceOperation struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata"` + + Spec DRBDResourceOperationSpec `json:"spec"` + // +patchStrategy=merge + Status *DRBDResourceOperationStatus `json:"status,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +type DRBDResourceOperationSpec struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=253 + // +kubebuilder:validation:Pattern=`^[0-9A-Za-z.+_-]*$` + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="drbdResourceName is immutable" + DRBDResourceName string `json:"drbdResourceName"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:Enum=CreateNewUUID;ForcePrimary;Invalidate;Outdate;Verify;CreateSnapshot + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="type is immutable" + Type DRBDResourceOperationType `json:"type"` + + // Parameters for CreateNewUUID operation. Immutable once set. + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="createNewUUID is immutable" + // +optional + CreateNewUUID *CreateNewUUIDParams `json:"createNewUUID,omitempty"` +} + +// +kubebuilder:object:generate=true +type CreateNewUUIDParams struct { + // +kubebuilder:default=false + // +optional + ClearBitmap bool `json:"clearBitmap,omitempty"` +} + +// +kubebuilder:object:generate=true +type DRBDResourceOperationStatus struct { + // +optional + Phase DRBDOperationPhase `json:"phase,omitempty"` + + // +kubebuilder:validation:MaxLength=1024 + // +optional + Message string `json:"message,omitempty"` + + // +optional + StartedAt *metav1.Time `json:"startedAt,omitempty"` + + // +optional + CompletedAt *metav1.Time `json:"completedAt,omitempty"` +} + +// DRBDOperationPhase represents the phase of a DRBD operation. +type DRBDOperationPhase string + +const ( + // DRBDOperationPhasePending indicates the operation is pending. + DRBDOperationPhasePending DRBDOperationPhase = "Pending" + // DRBDOperationPhaseRunning indicates the operation is running. + DRBDOperationPhaseRunning DRBDOperationPhase = "Running" + // DRBDOperationPhaseSucceeded indicates the operation completed successfully. + DRBDOperationPhaseSucceeded DRBDOperationPhase = "Succeeded" + // DRBDOperationPhaseFailed indicates the operation failed. 
+ DRBDOperationPhaseFailed DRBDOperationPhase = "Failed" +) + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:resource:scope=Cluster +type DRBDResourceOperationList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + Items []DRBDResourceOperation `json:"items"` +} diff --git a/api/v1alpha1/finalizers.go b/api/v1alpha1/finalizers.go new file mode 100644 index 000000000..2fe604820 --- /dev/null +++ b/api/v1alpha1/finalizers.go @@ -0,0 +1,23 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const AgentFinalizer = "sds-replicated-volume.deckhouse.io/agent" + +const ControllerFinalizer = "sds-replicated-volume.deckhouse.io/controller" + +const RSCControllerFinalizer = "sds-replicated-volume.deckhouse.io/rsc-controller" diff --git a/api/v1alpha1/labels.go b/api/v1alpha1/labels.go new file mode 100644 index 000000000..a8121265e --- /dev/null +++ b/api/v1alpha1/labels.go @@ -0,0 +1,37 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const labelPrefix = "sds-replicated-volume.deckhouse.io/" + +const ( + // ReplicatedStorageClassLabelKey is the label key for ReplicatedStorageClass name on RV and RVR. + ReplicatedStorageClassLabelKey = labelPrefix + "replicated-storage-class" + + // ReplicatedVolumeLabelKey is the label key for ReplicatedVolume name on RVR. + ReplicatedVolumeLabelKey = labelPrefix + "replicated-volume" + + // LVMVolumeGroupLabelKey is the label key for LVMVolumeGroup name on RVR. + LVMVolumeGroupLabelKey = labelPrefix + "lvm-volume-group" + + // NodeNameLabelKey is the label key for the Kubernetes node name where the RVR is scheduled. + // Note: This stores node.metadata.name, not the OS hostname (kubernetes.io/hostname). + NodeNameLabelKey = labelPrefix + "node-name" + + // AgentNodeLabelKey is the label key for selecting nodes where the agent should run. + AgentNodeLabelKey = "storage.deckhouse.io/sds-replicated-volume-node" +) diff --git a/api/v1alpha1/register.go b/api/v1alpha1/register.go index a2ceb69e7..f33e47e8d 100644 --- a/api/v1alpha1/register.go +++ b/api/v1alpha1/register.go @@ -14,6 +14,8 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ +// +kubebuilder:object:generate=true +// +groupName=storage.deckhouse.io package v1alpha1 import ( @@ -44,6 +46,18 @@ func addKnownTypes(scheme *runtime.Scheme) error { &ReplicatedStorageClassList{}, &ReplicatedStoragePool{}, &ReplicatedStoragePoolList{}, + &ReplicatedVolume{}, + &ReplicatedVolumeList{}, + &ReplicatedVolumeAttachment{}, + &ReplicatedVolumeAttachmentList{}, + &ReplicatedVolumeReplica{}, + &ReplicatedVolumeReplicaList{}, + &DRBDResource{}, + &DRBDResourceList{}, + &DRBDResourceOperation{}, + &DRBDResourceOperationList{}, + &DRBDNodeOperation{}, + &DRBDNodeOperationList{}, ) metav1.AddToGroupVersion(scheme, SchemeGroupVersion) return nil diff --git a/api/v1alpha1/replicated_storage_class.go b/api/v1alpha1/replicated_storage_class.go deleted file mode 100644 index ac196c6e9..000000000 --- a/api/v1alpha1/replicated_storage_class.go +++ /dev/null @@ -1,47 +0,0 @@ -/* -Copyright 2025 Flant JSC - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1alpha1 - -import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - -type ReplicatedStorageClass struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata,omitempty"` - Spec ReplicatedStorageClassSpec `json:"spec"` - Status ReplicatedStorageClassStatus `json:"status,omitempty"` -} - -// ReplicatedStorageClassList contains a list of empty block device -type ReplicatedStorageClassList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata"` - Items []ReplicatedStorageClass `json:"items"` -} - -type ReplicatedStorageClassSpec struct { - StoragePool string `json:"storagePool"` - ReclaimPolicy string `json:"reclaimPolicy"` - Replication string `json:"replication"` - VolumeAccess string `json:"volumeAccess"` - Topology string `json:"topology"` - Zones []string `json:"zones"` -} - -type ReplicatedStorageClassStatus struct { - Phase string `json:"phase,omitempty"` - Reason string `json:"reason,omitempty"` -} diff --git a/api/v1alpha1/replicated_storage_pool.go b/api/v1alpha1/replicated_storage_pool.go deleted file mode 100644 index d43c5db13..000000000 --- a/api/v1alpha1/replicated_storage_pool.go +++ /dev/null @@ -1,48 +0,0 @@ -/* -Copyright 2025 Flant JSC - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package v1alpha1 - -import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - -type ReplicatedStoragePool struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata,omitempty"` - Spec ReplicatedStoragePoolSpec `json:"spec"` - Status ReplicatedStoragePoolStatus `json:"status,omitempty"` -} - -type ReplicatedStoragePoolSpec struct { - Type string `json:"type"` - LVMVolumeGroups []ReplicatedStoragePoolLVMVolumeGroups `json:"lvmVolumeGroups"` -} - -type ReplicatedStoragePoolLVMVolumeGroups struct { - Name string `json:"name"` - ThinPoolName string `json:"thinPoolName"` -} - -type ReplicatedStoragePoolStatus struct { - Phase string `json:"phase"` - Reason string `json:"reason"` -} - -// ReplicatedStoragePoolList contains a list of ReplicatedStoragePool -type ReplicatedStoragePoolList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata"` - Items []ReplicatedStoragePool `json:"items"` -} diff --git a/api/v1alpha1/rsc_conditions.go b/api/v1alpha1/rsc_conditions.go new file mode 100644 index 000000000..520627523 --- /dev/null +++ b/api/v1alpha1/rsc_conditions.go @@ -0,0 +1,62 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const ( + // ReplicatedStorageClassCondConfigurationRolledOutType indicates whether all volumes' + // configuration matches the storage class. + // + // Reasons describe configuration rollout state. + ReplicatedStorageClassCondConfigurationRolledOutType = "ConfigurationRolledOut" + ReplicatedStorageClassCondConfigurationRolledOutReasonConfigurationRolloutDisabled = "ConfigurationRolloutDisabled" // Configuration rollout strategy is NewVolumesOnly. + ReplicatedStorageClassCondConfigurationRolledOutReasonConfigurationRolloutInProgress = "ConfigurationRolloutInProgress" // Configuration rollout in progress. + ReplicatedStorageClassCondConfigurationRolledOutReasonNewConfigurationNotYetObserved = "NewConfigurationNotYetObserved" // Some volumes haven't observed the new configuration. + ReplicatedStorageClassCondConfigurationRolledOutReasonRolledOutToAllVolumes = "RolledOutToAllVolumes" // Configuration rolled out to all volumes. +) + +const ( + // ReplicatedStorageClassCondReadyType indicates overall readiness of the storage class. + // + // Reasons describe readiness or blocking conditions. + ReplicatedStorageClassCondReadyType = "Ready" + ReplicatedStorageClassCondReadyReasonInsufficientEligibleNodes = "InsufficientEligibleNodes" // Not enough eligible nodes. + ReplicatedStorageClassCondReadyReasonInvalidConfiguration = "InvalidConfiguration" // Configuration is invalid. + ReplicatedStorageClassCondReadyReasonReady = "Ready" // Storage class is ready. + ReplicatedStorageClassCondReadyReasonWaitingForStoragePool = "WaitingForStoragePool" // Waiting for referenced storage pool. +) + +const ( + // ReplicatedStorageClassCondStoragePoolReadyType indicates whether the referenced storage pool is ready. + // + // Reasons describe storage pool state. 
This condition may also use any reason + // from ReplicatedStoragePool Ready condition (see rsp_conditions.go). + ReplicatedStorageClassCondStoragePoolReadyType = "StoragePoolReady" + ReplicatedStorageClassCondStoragePoolReadyReasonPending = "Pending" // ReplicatedStoragePool has no Ready condition yet. + ReplicatedStorageClassCondStoragePoolReadyReasonStoragePoolNotFound = "StoragePoolNotFound" // Referenced storage pool not found; used only when migration from storagePool field failed because RSP does not exist. +) + +const ( + // ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType indicates whether all volumes' + // replicas are placed on eligible nodes. + // + // Reasons describe eligible nodes satisfaction state. + ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType = "VolumesSatisfyEligibleNodes" + ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonAllVolumesSatisfy = "AllVolumesSatisfy" // All volumes satisfy eligible nodes requirements. + ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonConflictResolutionInProgress = "ConflictResolutionInProgress" // Eligible nodes conflict resolution in progress. + ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonManualConflictResolution = "ManualConflictResolution" // Conflict resolution strategy is Manual. + ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonUpdatedEligibleNodesNotYetObserved = "UpdatedEligibleNodesNotYetObserved" // Some volumes haven't observed the updated eligible nodes. +) diff --git a/api/v1alpha1/rsc_types.go b/api/v1alpha1/rsc_types.go new file mode 100644 index 000000000..17bd64ce8 --- /dev/null +++ b/api/v1alpha1/rsc_types.go @@ -0,0 +1,408 @@ +/* +Copyright 2023 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + +// ReplicatedStorageClass is a Kubernetes Custom Resource that defines a configuration for a Kubernetes Storage class. 
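As a usage sketch (not part of the patch), a controller could set the scoped Ready condition defined above on the ReplicatedStorageClass type declared just below, using apimachinery's condition helpers; the message text is illustrative.

```go
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleMarkReady flips the Ready condition on a ReplicatedStorageClass.
// The message is a placeholder; LastTransitionTime is filled in by SetStatusCondition.
func exampleMarkReady(rsc *ReplicatedStorageClass) {
	meta.SetStatusCondition(&rsc.Status.Conditions, metav1.Condition{
		Type:               ReplicatedStorageClassCondReadyType,
		Status:             metav1.ConditionTrue,
		Reason:             ReplicatedStorageClassCondReadyReasonReady,
		Message:            "storage class is ready",
		ObservedGeneration: rsc.Generation,
	})
}
```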
+// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:resource:scope=Cluster,shortName=rsc +// +kubebuilder:metadata:labels=heritage=deckhouse +// +kubebuilder:metadata:labels=module=sds-replicated-volume +// +kubebuilder:metadata:labels=backup.deckhouse.io/cluster-config=true +// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=`.status.phase` +// +kubebuilder:printcolumn:name="Reason",type=string,priority=1,JSONPath=`.status.reason` +// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`,description="The age of this resource" +type ReplicatedStorageClass struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + Spec ReplicatedStorageClassSpec `json:"spec"` + Status ReplicatedStorageClassStatus `json:"status,omitempty"` +} + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +type ReplicatedStorageClassList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + Items []ReplicatedStorageClass `json:"items"` +} + +// GetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It returns the root object's `.status.conditions`. +func (o *ReplicatedStorageClass) GetStatusConditions() []metav1.Condition { return o.Status.Conditions } + +// SetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It sets the root object's `.status.conditions`. +func (o *ReplicatedStorageClass) SetStatusConditions(conditions []metav1.Condition) { + o.Status.Conditions = conditions +} + +// +kubebuilder:validation:XValidation:rule="!has(self.replication) || self.replication != 'None' || self.topology == 'Ignored'",message="Replication None requires topology Ignored (no replicas to distribute)." +// +kubebuilder:validation:XValidation:rule="self.topology != 'TransZonal' || !has(self.replication) || self.replication != 'Availability' || !has(self.zones) || size(self.zones) == 0 || size(self.zones) >= 3",message="TransZonal topology with Availability replication requires at least 3 zones (if specified)." +// +kubebuilder:validation:XValidation:rule="self.topology != 'TransZonal' || !has(self.replication) || self.replication != 'Consistency' || !has(self.zones) || size(self.zones) == 0 || size(self.zones) >= 2",message="TransZonal topology with Consistency replication requires at least 2 zones (if specified)." +// +kubebuilder:validation:XValidation:rule="self.topology != 'TransZonal' || (has(self.replication) && self.replication != 'ConsistencyAndAvailability') || !has(self.zones) || size(self.zones) == 0 || size(self.zones) >= 3",message="TransZonal topology with ConsistencyAndAvailability replication (default) requires at least 3 zones (if specified)." +// Defines a Kubernetes Storage class configuration. +// +// > Note that this field is in read-only mode. +// +kubebuilder:object:generate=true +type ReplicatedStorageClassSpec struct { + // StoragePool is the name of a ReplicatedStoragePool resource. + // Deprecated: Use Storage instead. This field cannot be added or changed, only removed. + // +kubebuilder:validation:XValidation:rule="!has(self) || (has(oldSelf) && self == oldSelf)",message="StoragePool cannot be added or changed, only removed" + // +optional + StoragePool string `json:"storagePool,omitempty"` + // Storage defines the storage backend configuration for this storage class. 
+	// Specifies the type of volumes (LVM or LVMThin) and which LVMVolumeGroups
+	// will be used to allocate space for volumes.
+	Storage ReplicatedStorageClassStorage `json:"storage"`
+	// The storage class's reclaim policy. Might be:
+	// - Delete (If the Persistent Volume Claim is deleted, the Persistent Volume and its associated storage are deleted as well)
+	// - Retain (If the Persistent Volume Claim is deleted, the Persistent Volume and its associated storage are retained)
+	// +kubebuilder:validation:Enum=Delete;Retain
+	ReclaimPolicy ReplicatedStorageClassReclaimPolicy `json:"reclaimPolicy"`
+	// The Storage class's replication mode. Might be:
+	// - None — In this mode the Storage class's 'placementCount' and 'AutoEvictMinReplicaCount' params equal '1'.
+	//   Requires topology to be 'Ignored' (no replicas to distribute across zones).
+	// - Availability — In this mode the volume remains readable and writable even if one of the replica nodes becomes unavailable. Data is stored in two copies on different nodes. This corresponds to `placementCount = 2` and `AutoEvictMinReplicaCount = 2`. **Important:** this mode does not guarantee data consistency and may lead to split brain and data loss in case of network connectivity issues between nodes. Recommended only for non-critical data and applications that do not require high reliability and data integrity.
+	// - Consistency — In this mode data is stored in two copies on different nodes and writes require quorum, which preserves data consistency.
+	// - ConsistencyAndAvailability — In this mode the volume remains readable and writable when one replica node fails. Data is stored in three copies on different nodes (`placementCount = 3`, `AutoEvictMinReplicaCount = 3`). This mode provides protection against data loss when two nodes containing volume replicas fail and guarantees data consistency. However, if two replicas are lost, the volume switches to suspend-io mode.
+	//
+	// > Note that default Replication mode is 'ConsistencyAndAvailability'.
+	// +kubebuilder:validation:Enum=None;Availability;Consistency;ConsistencyAndAvailability
+	// +kubebuilder:default:=ConsistencyAndAvailability
+	Replication ReplicatedStorageClassReplication `json:"replication,omitempty"`
+	// The Storage class's volume access mode. Defines how pods access the volume. Might be:
+	// - Local — volume is accessed only from the node where a replica resides. Pod scheduling waits for consumer.
+	// - EventuallyLocal — volume can be accessed remotely, but a local replica will be created on the accessing node
+	//   after some time. Pod scheduling waits for consumer.
+	// - PreferablyLocal — volume prefers local access but allows remote access if no local replica is available.
+	//   Scheduler tries to place pods on nodes with replicas. Pod scheduling waits for consumer.
+	// - Any — volume can be accessed from any node. Most flexible mode with immediate volume binding.
+	//
+	// > Note that the default Volume Access mode is 'PreferablyLocal'.
+	// +kubebuilder:validation:Enum=Local;EventuallyLocal;PreferablyLocal;Any
+	// +kubebuilder:default:=PreferablyLocal
+	VolumeAccess ReplicatedStorageClassVolumeAccess `json:"volumeAccess,omitempty"`
+	// The topology settings for the volumes in the created Storage class. Might be:
+	// - TransZonal — replicas of the volumes will be created in different zones (one replica per zone).
+	//   To use this topology, the available zones must be specified in the 'zones' param, and the cluster nodes must have the topology.kubernetes.io/zone= label.
+	// - Zonal — all replicas of the volumes are created in the same zone that the scheduler selected to place the pod using this volume. 
+ // - Ignored — the topology information will not be used to place replicas of the volumes. + // The replicas can be placed on any available nodes, with the restriction: no more than one replica of a given volume on one node. + // Required when replication is 'None'. + // + // > Note that the 'Ignored' value can be used only if there are no zones in the cluster (there are no nodes with the topology.kubernetes.io/zone label). + // + // > For the system to operate correctly, either every cluster node must be labeled with 'topology.kubernetes.io/zone', or none of them should have this label. + // +kubebuilder:validation:Enum=TransZonal;Zonal;Ignored + Topology ReplicatedStorageClassTopology `json:"topology"` + // Array of zones the Storage class's volumes should be replicated in. The controller will put a label with + // the Storage class's name on the nodes which be actual used by the Storage class. + // + // For TransZonal topology, the number of zones depends on replication mode: + // - Availability, ConsistencyAndAvailability: at least 3 zones required + // - Consistency: at least 2 zones required + // + // When replication is 'None' (topology 'Ignored'), zones act as a node constraint + // limiting where the single replica can be placed. + // +kubebuilder:validation:MaxItems=10 + // +kubebuilder:validation:items:MaxLength=63 + // +listType=set + // +optional + Zones []string `json:"zones,omitempty"` + // NodeLabelSelector filters nodes eligible for DRBD participation. + // Only nodes matching this selector can store data, provide access, or host tiebreaker. + // If not specified, all nodes are candidates. + // +optional + NodeLabelSelector *metav1.LabelSelector `json:"nodeLabelSelector,omitempty"` + // SystemNetworkNames specifies network names used for DRBD replication traffic. + // At least one network name must be specified. Each name is limited to 64 characters. + // + // TODO(systemnetwork): Currently only "Internal" (default node network) is supported. + // Custom network support requires NetworkNode watch implementation in the controller. + // +kubebuilder:validation:MinItems=1 + // +kubebuilder:validation:MaxItems=1 + // +kubebuilder:validation:Items={type=string,maxLength=64} + // +kubebuilder:validation:XValidation:rule="self.all(n, n == 'Internal')",message="Only 'Internal' network is currently supported" + // +kubebuilder:default:={"Internal"} + SystemNetworkNames []string `json:"systemNetworkNames"` + // ConfigurationRolloutStrategy defines how configuration changes are applied to existing volumes. + // Always present with defaults. + ConfigurationRolloutStrategy ReplicatedStorageClassConfigurationRolloutStrategy `json:"configurationRolloutStrategy"` + // EligibleNodesConflictResolutionStrategy defines how the controller handles volumes with eligible nodes conflicts. + // Always present with defaults. + EligibleNodesConflictResolutionStrategy ReplicatedStorageClassEligibleNodesConflictResolutionStrategy `json:"eligibleNodesConflictResolutionStrategy"` + // EligibleNodesPolicy defines policies for managing eligible nodes. + // Always present with defaults. + EligibleNodesPolicy ReplicatedStoragePoolEligibleNodesPolicy `json:"eligibleNodesPolicy"` +} + +// ReplicatedStorageClassStorage defines the storage backend configuration for RSC. 
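A hedged sketch (not part of the patch) of a spec that satisfies the zone rule above: TransZonal topology with the default ConsistencyAndAvailability replication needs at least three zones. Zone and LVMVolumeGroup names are placeholders, the rollout/conflict-resolution strategy fields are omitted for brevity, and the enum constants used here are defined further below.

```go
package v1alpha1

// exampleTransZonalSpec sketches a spec accepted by the TransZonal zone rule;
// all names are placeholders.
func exampleTransZonalSpec() ReplicatedStorageClassSpec {
	return ReplicatedStorageClassSpec{
		Storage: ReplicatedStorageClassStorage{
			Type:            ReplicatedStoragePoolTypeLVM,
			LVMVolumeGroups: []ReplicatedStoragePoolLVMVolumeGroups{{Name: "vg-data-0"}},
		},
		ReclaimPolicy:      RSCReclaimPolicyDelete,
		Replication:        ReplicationConsistencyAndAvailability,
		VolumeAccess:       VolumeAccessPreferablyLocal,
		Topology:           RSCTopologyTransZonal,
		Zones:              []string{"zone-a", "zone-b", "zone-c"}, // >= 3 zones for ConsistencyAndAvailability
		SystemNetworkNames: []string{"Internal"},
	}
}
```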
+// +kubebuilder:validation:XValidation:rule="self.type != 'LVMThin' || self.lvmVolumeGroups.all(g, g.thinPoolName != '')",message="thinPoolName is required for each lvmVolumeGroups entry when type is LVMThin"
+// +kubebuilder:validation:XValidation:rule="self.type != 'LVM' || self.lvmVolumeGroups.all(g, !has(g.thinPoolName) || g.thinPoolName == '')",message="thinPoolName must not be specified when type is LVM"
+// +kubebuilder:object:generate=true
+type ReplicatedStorageClassStorage struct {
+	// Type defines the volumes type. Might be:
+	// - LVM (for Thick)
+	// - LVMThin (for Thin)
+	// +kubebuilder:validation:Enum=LVM;LVMThin
+	Type ReplicatedStoragePoolType `json:"type"`
+	// LVMVolumeGroups is an array of LVMVolumeGroup resource names whose Volume Groups/Thin-pools
+	// will be used to allocate the required space.
+	//
+	// > Note that every LVMVolumeGroup resource must have the same type (Thin/Thick)
+	// as specified in the Type field.
+	// +kubebuilder:validation:MinItems=1
+	LVMVolumeGroups []ReplicatedStoragePoolLVMVolumeGroups `json:"lvmVolumeGroups"`
+}
+
+// ReplicatedStorageClassReclaimPolicy enumerates possible values for ReplicatedStorageClass spec.reclaimPolicy field.
+type ReplicatedStorageClassReclaimPolicy string
+
+// ReclaimPolicy values for [ReplicatedStorageClass] spec.reclaimPolicy field.
+const (
+	// RSCReclaimPolicyDelete means the PV is deleted when the PVC is deleted.
+	RSCReclaimPolicyDelete ReplicatedStorageClassReclaimPolicy = "Delete"
+	// RSCReclaimPolicyRetain means the PV is retained when the PVC is deleted.
+	RSCReclaimPolicyRetain ReplicatedStorageClassReclaimPolicy = "Retain"
+)
+
+func (p ReplicatedStorageClassReclaimPolicy) String() string {
+	return string(p)
+}
+
+// ReplicatedStorageClassReplication enumerates possible values for ReplicatedStorageClass spec.replication field.
+type ReplicatedStorageClassReplication string
+
+// Replication values for [ReplicatedStorageClass] spec.replication field.
+const (
+	// ReplicationNone means no replication (single replica).
+	ReplicationNone ReplicatedStorageClassReplication = "None"
+	// ReplicationAvailability means 2 replicas; can lose 1 node, but may lose consistency in network partitions.
+	ReplicationAvailability ReplicatedStorageClassReplication = "Availability"
+	// ReplicationConsistency means 2 replicas with consistency guarantees; requires quorum for writes.
+	ReplicationConsistency ReplicatedStorageClassReplication = "Consistency"
+	// ReplicationConsistencyAndAvailability means 3 replicas; can lose 1 node and keeps consistency.
+	ReplicationConsistencyAndAvailability ReplicatedStorageClassReplication = "ConsistencyAndAvailability"
+)
+
+func (r ReplicatedStorageClassReplication) String() string {
+	return string(r)
+}
+
+// ReplicatedStorageClassVolumeAccess enumerates possible values for ReplicatedStorageClass spec.volumeAccess field.
+type ReplicatedStorageClassVolumeAccess string
+
+// VolumeAccess values for [ReplicatedStorageClass] spec.volumeAccess field. 
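The replication modes above correspond to fixed replica counts in the field documentation (one for None, two for Availability and Consistency, three for ConsistencyAndAvailability). A hypothetical helper, shown only to illustrate the enum and not part of the API, could encode that mapping:

```go
package v1alpha1

// replicaCountFor is a hypothetical helper (not part of the API) that mirrors
// the replica counts described in the field documentation. Unknown values
// return 0 so callers can detect them.
func replicaCountFor(r ReplicatedStorageClassReplication) int {
	switch r {
	case ReplicationNone:
		return 1
	case ReplicationAvailability, ReplicationConsistency:
		return 2
	case ReplicationConsistencyAndAvailability:
		return 3
	default:
		return 0
	}
}
```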
+const ( + // VolumeAccessLocal requires data to be accessed only from nodes with Diskful replicas + VolumeAccessLocal ReplicatedStorageClassVolumeAccess = "Local" + // VolumeAccessPreferablyLocal prefers local access but allows remote if needed + VolumeAccessPreferablyLocal ReplicatedStorageClassVolumeAccess = "PreferablyLocal" + // VolumeAccessEventuallyLocal will eventually migrate to local access + VolumeAccessEventuallyLocal ReplicatedStorageClassVolumeAccess = "EventuallyLocal" + // VolumeAccessAny allows access from any node + VolumeAccessAny ReplicatedStorageClassVolumeAccess = "Any" +) + +func (a ReplicatedStorageClassVolumeAccess) String() string { + return string(a) +} + +// ReplicatedStorageClassTopology enumerates possible values for ReplicatedStorageClass spec.topology field. +type ReplicatedStorageClassTopology string + +// Topology values for [ReplicatedStorageClass] spec.topology field. +const ( + // RSCTopologyTransZonal means replicas should be placed across zones. + RSCTopologyTransZonal ReplicatedStorageClassTopology = "TransZonal" + // RSCTopologyZonal means replicas should be placed in a single zone. + RSCTopologyZonal ReplicatedStorageClassTopology = "Zonal" + // RSCTopologyIgnored means topology information is not used for placement. + RSCTopologyIgnored ReplicatedStorageClassTopology = "Ignored" +) + +func (t ReplicatedStorageClassTopology) String() string { + return string(t) +} + +// ReplicatedStorageClassConfigurationRolloutStrategy defines how configuration changes are rolled out to existing volumes. +// +kubebuilder:validation:XValidation:rule="self.type != 'RollingUpdate' || has(self.rollingUpdate)",message="rollingUpdate is required when type is RollingUpdate" +// +kubebuilder:validation:XValidation:rule="self.type == 'RollingUpdate' || !has(self.rollingUpdate)",message="rollingUpdate must not be set when type is not RollingUpdate" +// +kubebuilder:object:generate=true +type ReplicatedStorageClassConfigurationRolloutStrategy struct { + // Type specifies the rollout strategy type. + // +kubebuilder:validation:Enum=RollingUpdate;NewVolumesOnly + // +kubebuilder:default:=RollingUpdate + Type ReplicatedStorageClassConfigurationRolloutStrategyType `json:"type,omitempty"` + // RollingUpdate configures parameters for RollingUpdate strategy. + // Required when type is RollingUpdate. + // +optional + RollingUpdate *ReplicatedStorageClassConfigurationRollingUpdateStrategy `json:"rollingUpdate,omitempty"` +} + +// ReplicatedStorageClassConfigurationRolloutStrategyType enumerates possible values for configuration rollout strategy type. +type ReplicatedStorageClassConfigurationRolloutStrategyType string + +const ( + // ReplicatedStorageClassConfigurationRolloutStrategyTypeRollingUpdate means configuration changes are rolled out to existing volumes. + ReplicatedStorageClassConfigurationRolloutStrategyTypeRollingUpdate ReplicatedStorageClassConfigurationRolloutStrategyType = "RollingUpdate" + // ReplicatedStorageClassConfigurationRolloutStrategyTypeNewVolumesOnly means configuration changes only apply to newly created volumes. + ReplicatedStorageClassConfigurationRolloutStrategyTypeNewVolumesOnly ReplicatedStorageClassConfigurationRolloutStrategyType = "NewVolumesOnly" +) + +func (t ReplicatedStorageClassConfigurationRolloutStrategyType) String() string { return string(t) } + +// ReplicatedStorageClassConfigurationRollingUpdateStrategy configures parameters for rolling update configuration rollout strategy. 
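A sketch (not part of the patch) of the discriminated-union shape enforced by the XValidation rules above: `rollingUpdate` must be set exactly when the type is RollingUpdate, using the parameters struct defined just below. The `maxParallel` value is illustrative and simply mirrors the default.

```go
package v1alpha1

// exampleRolloutStrategy shows the shape accepted by the XValidation rules:
// RollingUpdate is set because Type is RollingUpdate; for NewVolumesOnly it
// would have to be nil. The MaxParallel value is illustrative.
func exampleRolloutStrategy() ReplicatedStorageClassConfigurationRolloutStrategy {
	return ReplicatedStorageClassConfigurationRolloutStrategy{
		Type: ReplicatedStorageClassConfigurationRolloutStrategyTypeRollingUpdate,
		RollingUpdate: &ReplicatedStorageClassConfigurationRollingUpdateStrategy{
			MaxParallel: 5,
		},
	}
}
```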
+// +kubebuilder:object:generate=true +type ReplicatedStorageClassConfigurationRollingUpdateStrategy struct { + // MaxParallel is the maximum number of volumes being rolled out simultaneously. + // +kubebuilder:validation:Minimum=1 + // +kubebuilder:validation:Maximum=200 + // +kubebuilder:default:=5 + MaxParallel int32 `json:"maxParallel"` +} + +// ReplicatedStorageClassEligibleNodesConflictResolutionStrategy defines how the controller resolves volumes with eligible nodes conflicts. +// +kubebuilder:validation:XValidation:rule="self.type != 'RollingRepair' || has(self.rollingRepair)",message="rollingRepair is required when type is RollingRepair" +// +kubebuilder:validation:XValidation:rule="self.type == 'RollingRepair' || !has(self.rollingRepair)",message="rollingRepair must not be set when type is not RollingRepair" +// +kubebuilder:object:generate=true +type ReplicatedStorageClassEligibleNodesConflictResolutionStrategy struct { + // Type specifies the conflict resolution strategy type. + // +kubebuilder:validation:Enum=Manual;RollingRepair + // +kubebuilder:default:=RollingRepair + Type ReplicatedStorageClassEligibleNodesConflictResolutionStrategyType `json:"type,omitempty"` + // RollingRepair configures parameters for RollingRepair conflict resolution strategy. + // Required when type is RollingRepair. + // +optional + RollingRepair *ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair `json:"rollingRepair,omitempty"` +} + +// ReplicatedStorageClassEligibleNodesConflictResolutionStrategyType enumerates possible values for eligible nodes conflict resolution strategy type. +type ReplicatedStorageClassEligibleNodesConflictResolutionStrategyType string + +const ( + // ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeManual means conflicts are resolved manually. + ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeManual ReplicatedStorageClassEligibleNodesConflictResolutionStrategyType = "Manual" + // ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeRollingRepair means replicas are moved automatically when eligible nodes change. + ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeRollingRepair ReplicatedStorageClassEligibleNodesConflictResolutionStrategyType = "RollingRepair" +) + +func (t ReplicatedStorageClassEligibleNodesConflictResolutionStrategyType) String() string { + return string(t) +} + +// ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair configures parameters for rolling repair conflict resolution strategy. +// +kubebuilder:object:generate=true +type ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair struct { + // MaxParallel is the maximum number of volumes being repaired simultaneously. + // +kubebuilder:validation:Minimum=1 + // +kubebuilder:validation:Maximum=200 + // +kubebuilder:default:=5 + MaxParallel int32 `json:"maxParallel"` +} + +// Displays current information about the Storage Class. +// +kubebuilder:object:generate=true +type ReplicatedStorageClassStatus struct { + // +patchMergeKey=type + // +patchStrategy=merge + // +listType=map + // +listMapKey=type + // +optional + Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` + + // The Storage class current state. 
Might be:
+	// - Failed (if the controller received incorrect resource configuration or some errors occurred during the operation)
+	// - Created (if everything went fine)
+	// +kubebuilder:validation:Enum=Failed;Created
+	Phase ReplicatedStorageClassPhase `json:"phase,omitempty"`
+	// Additional information about the current state of the Storage Class.
+	Reason string `json:"reason,omitempty"`
+	// ConfigurationGeneration is the RSC generation when configuration was accepted.
+	// +optional
+	ConfigurationGeneration int64 `json:"configurationGeneration,omitempty"`
+	// Configuration is the resolved configuration that volumes should align to.
+	// +optional
+	Configuration *ReplicatedStorageClassConfiguration `json:"configuration,omitempty"`
+	// StoragePoolEligibleNodesRevision tracks RSP's eligibleNodesRevision for change detection.
+	// +optional
+	StoragePoolEligibleNodesRevision int64 `json:"storagePoolEligibleNodesRevision,omitempty"`
+	// StoragePoolBasedOnGeneration is the RSC generation when storagePoolName was computed.
+	// +optional
+	StoragePoolBasedOnGeneration int64 `json:"storagePoolBasedOnGeneration,omitempty"`
+	// StoragePoolName is the computed name of the ReplicatedStoragePool for this RSC.
+	// Format: auto-rsp-. Multiple RSCs with identical storage parameters
+	// will share the same StoragePoolName.
+	// +optional
+	StoragePoolName string `json:"storagePoolName,omitempty"`
+	// Volumes provides aggregated volume statistics.
+	// Always present (may have total=0).
+	Volumes ReplicatedStorageClassVolumesSummary `json:"volumes"`
+}
+
+// ReplicatedStorageClassPhase enumerates possible values for ReplicatedStorageClass status.phase field.
+type ReplicatedStorageClassPhase string
+
+// Phase values for [ReplicatedStorageClass] status.phase field.
+const (
+	// RSCPhaseFailed means the controller detected an invalid configuration or an operation error.
+	RSCPhaseFailed ReplicatedStorageClassPhase = "Failed"
+	// RSCPhaseCreated means the replicated storage class has been reconciled successfully.
+	RSCPhaseCreated ReplicatedStorageClassPhase = "Created"
+)
+
+func (p ReplicatedStorageClassPhase) String() string {
+	return string(p)
+}
+
+// ReplicatedStorageClassConfiguration represents the resolved configuration that volumes should align to.
+// +kubebuilder:object:generate=true
+type ReplicatedStorageClassConfiguration struct {
+	// Topology is the resolved topology setting.
+	Topology ReplicatedStorageClassTopology `json:"topology"`
+	// Replication is the resolved replication mode.
+	Replication ReplicatedStorageClassReplication `json:"replication"`
+	// VolumeAccess is the resolved volume access mode.
+	VolumeAccess ReplicatedStorageClassVolumeAccess `json:"volumeAccess"`
+	// StoragePoolName is the name of the ReplicatedStoragePool used by this RSC.
+	StoragePoolName string `json:"storagePoolName"`
+}
+
+// ReplicatedStorageClassVolumesSummary provides aggregated information about volumes in this storage class.
+// +kubebuilder:object:generate=true
+type ReplicatedStorageClassVolumesSummary struct {
+	// Total is the total number of volumes.
+	// +optional
+	Total *int32 `json:"total,omitempty"`
+	// PendingObservation is the number of volumes that haven't observed current RSC configuration or eligible nodes.
+	// +optional
+	PendingObservation *int32 `json:"pendingObservation,omitempty"`
+	// Aligned is the number of volumes whose configuration matches the storage class. 
+ // +optional + Aligned *int32 `json:"aligned,omitempty"` + // InConflictWithEligibleNodes is the number of volumes with replicas on non-eligible nodes. + // +optional + InConflictWithEligibleNodes *int32 `json:"inConflictWithEligibleNodes,omitempty"` + // StaleConfiguration is the number of volumes with outdated configuration. + // +optional + StaleConfiguration *int32 `json:"staleConfiguration,omitempty"` + // UsedStoragePoolNames is a sorted list of storage pool names currently used by volumes. + // +optional + UsedStoragePoolNames []string `json:"usedStoragePoolNames,omitempty"` +} diff --git a/api/v1alpha1/rsp_conditions.go b/api/v1alpha1/rsp_conditions.go new file mode 100644 index 000000000..ecdc14e0d --- /dev/null +++ b/api/v1alpha1/rsp_conditions.go @@ -0,0 +1,28 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const ( + // ReplicatedStoragePoolCondReadyType indicates whether the storage pool is ready. + // + // Reasons describe readiness or failure conditions. + ReplicatedStoragePoolCondReadyType = "Ready" + ReplicatedStoragePoolCondReadyReasonInvalidLVMVolumeGroup = "InvalidLVMVolumeGroup" // LVMVolumeGroup is invalid. + ReplicatedStoragePoolCondReadyReasonInvalidNodeLabelSelector = "InvalidNodeLabelSelector" // NodeLabelSelector is invalid. + ReplicatedStoragePoolCondReadyReasonLVMVolumeGroupNotFound = "LVMVolumeGroupNotFound" // LVMVolumeGroup not found. + ReplicatedStoragePoolCondReadyReasonReady = "Ready" // Storage pool is ready. +) diff --git a/api/v1alpha1/rsp_types.go b/api/v1alpha1/rsp_types.go new file mode 100644 index 000000000..80340ccea --- /dev/null +++ b/api/v1alpha1/rsp_types.go @@ -0,0 +1,232 @@ +/* +Copyright 2023 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + +// ReplicatedStoragePool is a Kubernetes Custom Resource that defines a configuration for Linstor Storage-pools. 
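As a consumer-side sketch (not part of the patch), the Ready condition constant above can be checked on the ReplicatedStoragePool type defined just below with apimachinery's condition helpers:

```go
package v1alpha1

import "k8s.io/apimachinery/pkg/api/meta"

// exampleStoragePoolReady reports whether the pool's Ready condition is True,
// using the scoped condition type constant defined above.
func exampleStoragePoolReady(rsp *ReplicatedStoragePool) bool {
	return meta.IsStatusConditionTrue(rsp.Status.Conditions, ReplicatedStoragePoolCondReadyType)
}
```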
+// +kubebuilder:object:generate=true
+// +kubebuilder:object:root=true
+// +kubebuilder:resource:scope=Cluster,shortName=rsp
+// +kubebuilder:metadata:labels=heritage=deckhouse
+// +kubebuilder:metadata:labels=module=sds-replicated-volume
+// +kubebuilder:metadata:labels=backup.deckhouse.io/cluster-config=true
+// +kubebuilder:printcolumn:name="Type",type=string,JSONPath=`.spec.type`
+// +kubebuilder:printcolumn:name="Ready",type=string,JSONPath=`.status.conditions[?(@.type=="Ready")].status`
+// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`,description="The age of this resource"
+type ReplicatedStoragePool struct {
+	metav1.TypeMeta   `json:",inline"`
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+	Spec              ReplicatedStoragePoolSpec   `json:"spec"`
+	Status            ReplicatedStoragePoolStatus `json:"status,omitempty"`
+}
+
+// ReplicatedStoragePoolList contains a list of ReplicatedStoragePool
+// +kubebuilder:object:generate=true
+// +kubebuilder:object:root=true
+type ReplicatedStoragePoolList struct {
+	metav1.TypeMeta `json:",inline"`
+	metav1.ListMeta `json:"metadata"`
+	Items           []ReplicatedStoragePool `json:"items"`
+}
+
+// GetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject.
+// It returns the root object's `.status.conditions`.
+func (o *ReplicatedStoragePool) GetStatusConditions() []metav1.Condition { return o.Status.Conditions }
+
+// SetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject.
+// It sets the root object's `.status.conditions`.
+func (o *ReplicatedStoragePool) SetStatusConditions(conditions []metav1.Condition) {
+	o.Status.Conditions = conditions
+}
+
+// Defines desired rules for Linstor's Storage-pools.
+// +kubebuilder:object:generate=true
+// +kubebuilder:validation:XValidation:rule="self.type != 'LVMThin' || self.lvmVolumeGroups.all(g, g.thinPoolName != '')",message="thinPoolName is required for each lvmVolumeGroups entry when type is LVMThin"
+// +kubebuilder:validation:XValidation:rule="self.type != 'LVM' || self.lvmVolumeGroups.all(g, !has(g.thinPoolName) || g.thinPoolName == '')",message="thinPoolName must not be specified when type is LVM"
+type ReplicatedStoragePoolSpec struct {
+	// Defines the volumes type. Might be:
+	// - LVM (for Thick)
+	// - LVMThin (for Thin)
+	// +kubebuilder:validation:Enum=LVM;LVMThin
+	// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="Value is immutable."
+	Type ReplicatedStoragePoolType `json:"type"`
+	// An array of names of LVMVolumeGroup resources, whose Volume Groups/Thin-pools will be used to allocate
+	// the required space.
+	//
+	// > Note that every LVMVolumeGroup resource has to have the same type (Thin/Thick)
+	// as specified in this resource's 'Spec.Type' field.
+	// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="Value is immutable."
+	// +kubebuilder:validation:MinItems=1
+	LVMVolumeGroups []ReplicatedStoragePoolLVMVolumeGroups `json:"lvmVolumeGroups"`
+	// Array of zones the Storage pool's volumes should be replicated in.
+	// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="Value is immutable."
+	// +kubebuilder:validation:MaxItems=10
+	// +kubebuilder:validation:items:MaxLength=63
+	// +listType=set
+	// +optional
+	Zones []string `json:"zones,omitempty"`
+	// NodeLabelSelector filters nodes eligible for storage pool participation.
+	// Only nodes matching this selector can store data.
+	// If not specified, all nodes with matching LVMVolumeGroups are candidates. 
+ // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="Value is immutable." + // +kubebuilder:validation:XValidation:rule="!has(self.matchExpressions) || self.matchExpressions.all(e, e.operator in ['In', 'NotIn', 'Exists', 'DoesNotExist'])",message="matchExpressions[].operator must be one of: In, NotIn, Exists, DoesNotExist" + // +kubebuilder:validation:XValidation:rule="!has(self.matchExpressions) || self.matchExpressions.all(e, (e.operator in ['Exists', 'DoesNotExist']) ? (!has(e.values) || size(e.values) == 0) : (has(e.values) && size(e.values) > 0))",message="matchExpressions[].values must be empty for Exists/DoesNotExist operators, non-empty for In/NotIn" + // +optional + NodeLabelSelector *metav1.LabelSelector `json:"nodeLabelSelector,omitempty"` + // SystemNetworkNames specifies network names used for DRBD replication traffic. + // At least one network name must be specified. Each name is limited to 64 characters. + // + // TODO(systemnetwork): Currently only "Internal" (default node network) is supported. + // Custom network support requires NetworkNode watch implementation in the controller. + // +kubebuilder:validation:MinItems=1 + // +kubebuilder:validation:MaxItems=1 + // +kubebuilder:validation:Items={type=string,maxLength=64} + // +kubebuilder:validation:XValidation:rule="self.all(n, n == 'Internal')",message="Only 'Internal' network is currently supported" + // +kubebuilder:default:={"Internal"} + SystemNetworkNames []string `json:"systemNetworkNames"` + // EligibleNodesPolicy defines policies for managing eligible nodes. + // Always present with defaults. + EligibleNodesPolicy ReplicatedStoragePoolEligibleNodesPolicy `json:"eligibleNodesPolicy"` +} + +// EligibleNodesPolicy defines policies for managing eligible nodes. +// +kubebuilder:object:generate=true +type ReplicatedStoragePoolEligibleNodesPolicy struct { + // NotReadyGracePeriod specifies how long to wait before removing + // a not-ready node from the eligible nodes list. + // +kubebuilder:default="10m" + NotReadyGracePeriod metav1.Duration `json:"notReadyGracePeriod"` +} + +// ReplicatedStoragePoolType enumerates possible values for ReplicatedStoragePool spec.type field. +type ReplicatedStoragePoolType string + +// ReplicatedStoragePool spec.type possible values. +// Keep these in sync with `ReplicatedStoragePoolSpec.Type` validation enum. +const ( + // ReplicatedStoragePoolTypeLVM means Thick volumes backed by LVM. + ReplicatedStoragePoolTypeLVM ReplicatedStoragePoolType = "LVM" + // ReplicatedStoragePoolTypeLVMThin means Thin volumes backed by LVM Thin pools. + ReplicatedStoragePoolTypeLVMThin ReplicatedStoragePoolType = "LVMThin" +) + +func (t ReplicatedStoragePoolType) String() string { + return string(t) +} + +type ReplicatedStoragePoolLVMVolumeGroups struct { + // Selected LVMVolumeGroup resource's name. + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:Pattern=`^[a-z0-9]([a-z0-9-.]{0,251}[a-z0-9])?$` + Name string `json:"name"` + // Selected Thin-pool name. + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=128 + // +kubebuilder:validation:Pattern=`^[a-zA-Z0-9][a-zA-Z0-9_.+-]*$` + // +optional + ThinPoolName string `json:"thinPoolName,omitempty"` +} + +// Displays current information about the state of the LINSTOR storage pool. 
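A minimal sketch (not part of the patch) of an LVMThin storage pool spec that satisfies the thin-pool rules above; volume group and thin pool names are placeholders, and the grace period simply mirrors the documented default.

```go
package v1alpha1

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleThinPoolSpec sketches an LVMThin storage pool spec; every
// lvmVolumeGroups entry carries a thinPoolName, as the CEL rule requires.
// Volume group and thin pool names are placeholders.
func exampleThinPoolSpec() ReplicatedStoragePoolSpec {
	return ReplicatedStoragePoolSpec{
		Type: ReplicatedStoragePoolTypeLVMThin,
		LVMVolumeGroups: []ReplicatedStoragePoolLVMVolumeGroups{
			{Name: "vg-data-0", ThinPoolName: "thin-0"},
		},
		SystemNetworkNames: []string{"Internal"},
		EligibleNodesPolicy: ReplicatedStoragePoolEligibleNodesPolicy{
			NotReadyGracePeriod: metav1.Duration{Duration: 10 * time.Minute},
		},
	}
}
```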
+// +kubebuilder:object:generate=true +type ReplicatedStoragePoolStatus struct { + // +patchMergeKey=type + // +patchStrategy=merge + // +listType=map + // +listMapKey=type + // +optional + Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` + + // TODO: Remove Phase once the old controller (sds-replicated-volume-controller) is retired. + // Phase is used only by the old controller and will be removed in a future version. + // +optional + Phase ReplicatedStoragePoolPhase `json:"phase,omitempty"` + // TODO: Remove Reason once the old controller (sds-replicated-volume-controller) is retired. + // Reason is used only by the old controller and will be removed in a future version. + // +optional + Reason string `json:"reason,omitempty"` + + // EligibleNodesRevision is incremented when eligible nodes change. + // +optional + EligibleNodesRevision int64 `json:"eligibleNodesRevision,omitempty"` + // EligibleNodes lists nodes eligible for this storage pool. + // +optional + EligibleNodes []ReplicatedStoragePoolEligibleNode `json:"eligibleNodes,omitempty"` + + // UsedBy tracks which resources are using this storage pool. + // +optional + UsedBy ReplicatedStoragePoolUsedBy `json:"usedBy,omitempty"` +} + +// ReplicatedStoragePoolUsedBy tracks resources using this storage pool. +// +kubebuilder:object:generate=true +type ReplicatedStoragePoolUsedBy struct { + // ReplicatedStorageClassNames lists RSC names using this storage pool. + // +listType=set + // +optional + ReplicatedStorageClassNames []string `json:"replicatedStorageClassNames,omitempty"` +} + +// TODO: Remove ReplicatedStoragePoolPhase once the old controller (sds-replicated-volume-controller) is retired. +// ReplicatedStoragePoolPhase represents the phase of the ReplicatedStoragePool. +// Deprecated: Used only by the old controller. +type ReplicatedStoragePoolPhase string + +// ReplicatedStoragePool phase values. +// Deprecated: Used only by the old controller. +const ( + RSPPhaseCompleted ReplicatedStoragePoolPhase = "Completed" + RSPPhaseFailed ReplicatedStoragePoolPhase = "Failed" +) + +func (p ReplicatedStoragePoolPhase) String() string { + return string(p) +} + +// ReplicatedStoragePoolEligibleNode represents a node eligible for placing volumes of this storage pool. +// +kubebuilder:object:generate=true +type ReplicatedStoragePoolEligibleNode struct { + // NodeName is the Kubernetes node name. + NodeName string `json:"nodeName"` + // ZoneName is the zone this node belongs to. + // +optional + ZoneName string `json:"zoneName,omitempty"` + // LVMVolumeGroups lists LVM volume groups available on this node. + // +optional + LVMVolumeGroups []ReplicatedStoragePoolEligibleNodeLVMVolumeGroup `json:"lvmVolumeGroups,omitempty"` + // Unschedulable indicates whether new volumes should not be scheduled to this node. + Unschedulable bool `json:"unschedulable"` + // NodeReady indicates whether the Kubernetes node is ready. + NodeReady bool `json:"nodeReady"` + // AgentReady indicates whether the sds-replicated-volume agent on this node is ready. + AgentReady bool `json:"agentReady"` +} + +// ReplicatedStoragePoolEligibleNodeLVMVolumeGroup represents an LVM volume group on an eligible node. +// +kubebuilder:object:generate=true +type ReplicatedStoragePoolEligibleNodeLVMVolumeGroup struct { + // Name is the LVMVolumeGroup resource name. + Name string `json:"name"` + // ThinPoolName is the thin pool name (for LVMThin storage pools). 
+ // +optional + ThinPoolName string `json:"thinPoolName,omitempty"` + // Unschedulable indicates whether new volumes should not use this volume group. + Unschedulable bool `json:"unschedulable"` + // Ready indicates whether the LVMVolumeGroup (and its thin pool, if applicable) is ready. + Ready bool `json:"ready"` +} diff --git a/api/v1alpha1/rv_conditions.go b/api/v1alpha1/rv_conditions.go new file mode 100644 index 000000000..91ebf93f9 --- /dev/null +++ b/api/v1alpha1/rv_conditions.go @@ -0,0 +1,110 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const ( + // ReplicatedVolumeCondBackingVolumeCreatedType indicates whether backing volumes exist for all diskful replicas. + // + // Reasons describe readiness and waiting conditions for backing volumes. + ReplicatedVolumeCondBackingVolumeCreatedType = "BackingVolumeCreated" + ReplicatedVolumeCondBackingVolumeCreatedReasonAllBackingVolumesReady = "AllBackingVolumesReady" // All backing volumes are ready. + ReplicatedVolumeCondBackingVolumeCreatedReasonBackingVolumesNotReady = "BackingVolumesNotReady" // Some backing volumes are not ready. + ReplicatedVolumeCondBackingVolumeCreatedReasonWaitingForBackingVolumes = "WaitingForBackingVolumes" // Backing volumes are not yet observable/created. +) + +const ( + // ReplicatedVolumeCondConfigurationReadyType indicates whether the volume's configuration + // matches the storage class configuration. + // + // Reasons describe configuration readiness state. + ReplicatedVolumeCondConfigurationReadyType = "ConfigurationReady" + ReplicatedVolumeCondConfigurationReadyReasonConfigurationRolloutInProgress = "ConfigurationRolloutInProgress" // Configuration rollout is in progress. + ReplicatedVolumeCondConfigurationReadyReasonReady = "Ready" // Configuration matches storage class. + ReplicatedVolumeCondConfigurationReadyReasonStaleConfiguration = "StaleConfiguration" // Configuration does not match storage class (stale). + ReplicatedVolumeCondConfigurationReadyReasonStorageClassNotFound = "StorageClassNotFound" // Referenced storage class does not exist. +) + +const ( + // ReplicatedVolumeCondConfiguredType indicates whether all replicas are configured. + // + // Reasons describe configuration progress / mismatch. + ReplicatedVolumeCondConfiguredType = "Configured" + ReplicatedVolumeCondConfiguredReasonAllReplicasConfigured = "AllReplicasConfigured" // All replicas are configured. + ReplicatedVolumeCondConfiguredReasonConfigurationInProgress = "ConfigurationInProgress" // Configuration is still in progress. + ReplicatedVolumeCondConfiguredReasonReplicasNotConfigured = "ReplicasNotConfigured" // Some replicas are not configured yet. +) + +const ( + // ReplicatedVolumeCondDataQuorumType indicates whether the volume has data quorum (diskful replicas). + // + // Reasons describe data quorum state (reached/degraded/lost). 
+ ReplicatedVolumeCondDataQuorumType = "DataQuorum" + ReplicatedVolumeCondDataQuorumReasonDataQuorumDegraded = "DataQuorumDegraded" // Data quorum is reached but degraded. + ReplicatedVolumeCondDataQuorumReasonDataQuorumLost = "DataQuorumLost" // Data quorum is lost. + ReplicatedVolumeCondDataQuorumReasonDataQuorumReached = "DataQuorumReached" // Data quorum is reached. +) + +const ( + // ReplicatedVolumeCondInitializedType indicates whether enough replicas are initialized. + // + // Reasons describe initialization progress and waiting conditions. + ReplicatedVolumeCondInitializedType = "Initialized" + ReplicatedVolumeCondInitializedReasonInitializationInProgress = "InitializationInProgress" // Initialization is still in progress. + ReplicatedVolumeCondInitializedReasonInitialized = "Initialized" // Initialization requirements are met. + ReplicatedVolumeCondInitializedReasonWaitingForReplicas = "WaitingForReplicas" // Waiting for replicas to appear/initialize. +) + +const ( + // ReplicatedVolumeCondIOReadyType indicates whether the volume has enough IOReady replicas. + // + // Reasons describe why IO is ready or blocked due to replica readiness. + ReplicatedVolumeCondIOReadyType = "IOReady" + ReplicatedVolumeCondIOReadyReasonIOReady = "IOReady" // IO is ready. + ReplicatedVolumeCondIOReadyReasonInsufficientIOReadyReplicas = "InsufficientIOReadyReplicas" // Not enough IOReady replicas. + ReplicatedVolumeCondIOReadyReasonNoIOReadyReplicas = "NoIOReadyReplicas" // No replicas are IOReady. +) + +const ( + // ReplicatedVolumeCondQuorumType indicates whether the volume has quorum. + // + // Reasons describe quorum state (reached/degraded/lost). + ReplicatedVolumeCondQuorumType = "Quorum" + ReplicatedVolumeCondQuorumReasonQuorumDegraded = "QuorumDegraded" // Quorum is reached but degraded. + ReplicatedVolumeCondQuorumReasonQuorumLost = "QuorumLost" // Quorum is lost. + ReplicatedVolumeCondQuorumReasonQuorumReached = "QuorumReached" // Quorum is reached. +) + +const ( + // ReplicatedVolumeCondSatisfyEligibleNodesType indicates whether all replicas are placed + // on eligible nodes according to the storage class. + // + // Reasons describe eligible nodes satisfaction state. + ReplicatedVolumeCondSatisfyEligibleNodesType = "SatisfyEligibleNodes" + ReplicatedVolumeCondSatisfyEligibleNodesReasonConflictResolutionInProgress = "ConflictResolutionInProgress" // Eligible nodes conflict resolution is in progress. + ReplicatedVolumeCondSatisfyEligibleNodesReasonInConflictWithEligibleNodes = "InConflictWithEligibleNodes" // Some replicas are on non-eligible nodes. + ReplicatedVolumeCondSatisfyEligibleNodesReasonSatisfyEligibleNodes = "SatisfyEligibleNodes" // All replicas are on eligible nodes. +) + +const ( + // ReplicatedVolumeCondScheduledType indicates whether all replicas have been scheduled. + // + // Reasons describe scheduling progress / deficit. + ReplicatedVolumeCondScheduledType = "Scheduled" + ReplicatedVolumeCondScheduledReasonAllReplicasScheduled = "AllReplicasScheduled" // All replicas are scheduled. + ReplicatedVolumeCondScheduledReasonReplicasNotScheduled = "ReplicasNotScheduled" // Some replicas are not scheduled yet. + ReplicatedVolumeCondScheduledReasonSchedulingInProgress = "SchedulingInProgress" // Scheduling is still in progress. 
+) diff --git a/api/v1alpha1/rv_types.go b/api/v1alpha1/rv_types.go new file mode 100644 index 000000000..4c03a5d79 --- /dev/null +++ b/api/v1alpha1/rv_types.go @@ -0,0 +1,247 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +import ( + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +kubebuilder:resource:scope=Cluster,shortName=rv +// +kubebuilder:metadata:labels=module=sds-replicated-volume +// +kubebuilder:validation:XValidation:rule="size(self.metadata.name) <= 120",message="metadata.name must be at most 120 characters (to fit derived RVR/LLV names)" +// +kubebuilder:printcolumn:name="IOReady",type=string,JSONPath=".status.conditions[?(@.type=='IOReady')].status" +// +kubebuilder:printcolumn:name="Size",type=string,JSONPath=".spec.size" +// +kubebuilder:printcolumn:name="ActualSize",type=string,JSONPath=".status.actualSize" +// +kubebuilder:printcolumn:name="DiskfulReplicas",type=string,JSONPath=".status.diskfulReplicaCount" +// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=".status.phase" +type ReplicatedVolume struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata"` + + Spec ReplicatedVolumeSpec `json:"spec"` + // +patchStrategy=merge + Status ReplicatedVolumeStatus `json:"status,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:resource:scope=Cluster +type ReplicatedVolumeList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + Items []ReplicatedVolume `json:"items"` +} + +// GetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It returns the root object's `.status.conditions`. +func (o *ReplicatedVolume) GetStatusConditions() []metav1.Condition { return o.Status.Conditions } + +// SetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It sets the root object's `.status.conditions`. 
+func (o *ReplicatedVolume) SetStatusConditions(conditions []metav1.Condition) { + o.Status.Conditions = conditions +} + +// +kubebuilder:object:generate=true +type ReplicatedVolumeSpec struct { + // +kubebuilder:validation:Required + Size resource.Quantity `json:"size"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + ReplicatedStorageClassName string `json:"replicatedStorageClassName"` +} + +// +kubebuilder:object:generate=true +type ReplicatedVolumeStatus struct { + // +patchMergeKey=type + // +patchStrategy=merge + // +listType=map + // +listMapKey=type + // +optional + Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` + + // +patchStrategy=merge + // +optional + DRBD *DRBDResourceDetails `json:"drbd,omitempty" patchStrategy:"merge"` + + // +kubebuilder:validation:MaxItems=2 + // +kubebuilder:validation:Items={type=string,minLength=1,maxLength=253} + // +optional + ActuallyAttachedTo []string `json:"actuallyAttachedTo,omitempty"` + + // DesiredAttachTo is the desired set of nodes where the volume should be attached (up to 2 nodes). + // It is computed by controllers from ReplicatedVolumeAttachment (RVA) objects. + // +kubebuilder:validation:MaxItems=2 + // +kubebuilder:validation:Items={type=string,minLength=1,maxLength=253} + // +optional + DesiredAttachTo []string `json:"desiredAttachTo,omitempty"` + + // Configuration is the desired configuration snapshot for this volume. + // +optional + Configuration *ReplicatedStorageClassConfiguration `json:"configuration,omitempty"` + + // ConfigurationGeneration is the RSC generation from which configuration was taken. + // +optional + ConfigurationGeneration int64 `json:"configurationGeneration,omitempty"` + + // ConfigurationObservedGeneration is the RSC generation when configuration was last observed/acknowledged. + // +optional + ConfigurationObservedGeneration int64 `json:"configurationObservedGeneration,omitempty"` + + // EligibleNodesViolations lists replicas placed on non-eligible nodes. + // +optional + EligibleNodesViolations []ReplicatedVolumeEligibleNodesViolation `json:"eligibleNodesViolations,omitempty"` + + // DatameshRevision is a counter incremented when datamesh configuration changes. + DatameshRevision int64 `json:"datameshRevision"` + + // Datamesh is the computed datamesh configuration for the volume. + // +patchStrategy=merge + Datamesh ReplicatedVolumeDatamesh `json:"datamesh" patchStrategy:"merge"` +} + +// ReplicatedVolumeDatamesh holds datamesh configuration for the volume. +// +kubebuilder:object:generate=true +type ReplicatedVolumeDatamesh struct { + // SystemNetworkNames is the list of system network names for DRBD communication. + // +kubebuilder:validation:MaxItems=16 + // +kubebuilder:validation:items:MaxLength=64 + SystemNetworkNames []string `json:"systemNetworkNames"` + // AllowTwoPrimaries enables two primaries mode for the datamesh. + // +kubebuilder:default=false + AllowTwoPrimaries bool `json:"allowTwoPrimaries"` + // Size is the desired size of the volume. + // +kubebuilder:validation:Required + Size resource.Quantity `json:"size"` + // Members is the list of datamesh members. + // +kubebuilder:validation:MaxItems=24 + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + Members []ReplicatedVolumeDatameshMember `json:"members" patchStrategy:"merge" patchMergeKey:"name"` + // Quorum is the quorum value for the datamesh. 
+ // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=13 + // +kubebuilder:default=0 + Quorum byte `json:"quorum"` + // QuorumMinimumRedundancy is the minimum redundancy required for quorum. + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=8 + // +kubebuilder:default=0 + QuorumMinimumRedundancy byte `json:"quorumMinimumRedundancy"` +} + +// ReplicatedVolumeDatameshMember represents a member of the datamesh. +// +kubebuilder:object:generate=true +// +kubebuilder:validation:XValidation:rule="self.type == 'Diskful' ? (!has(self.typeTransition) || self.typeTransition == 'ToDiskless') : (!has(self.typeTransition) || self.typeTransition == 'ToDiskful')",message="typeTransition must be ToDiskless for Diskful type, or ToDiskful for Access/TieBreaker types" +type ReplicatedVolumeDatameshMember struct { + // Name is the member name (used as list map key). + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + Name string `json:"name"` + // Type is the member type (Diskful, Access, or TieBreaker). + // +kubebuilder:validation:Required + Type ReplicaType `json:"type"` + // TypeTransition indicates the desired type transition for this member. + // +kubebuilder:validation:Enum=ToDiskful;ToDiskless + // +optional + TypeTransition ReplicatedVolumeDatameshMemberTypeTransition `json:"typeTransition,omitempty"` + // Role is the DRBD role of this member. + // +optional + Role DRBDRole `json:"role,omitempty"` + // NodeName is the Kubernetes node name where the member is located. + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + NodeName string `json:"nodeName"` + // Zone is the zone where the member is located. + // +optional + Zone string `json:"zone,omitempty"` + // Addresses is the list of DRBD addresses for this member. + // +kubebuilder:validation:MaxItems=16 + Addresses []DRBDResourceAddressStatus `json:"addresses"` +} + +// ReplicatedVolumeDatameshMemberTypeTransition enumerates possible type transitions for datamesh members. +type ReplicatedVolumeDatameshMemberTypeTransition string + +const ( + // ReplicatedVolumeDatameshMemberTypeTransitionToDiskful indicates transition to Diskful type. + ReplicatedVolumeDatameshMemberTypeTransitionToDiskful ReplicatedVolumeDatameshMemberTypeTransition = "ToDiskful" + // ReplicatedVolumeDatameshMemberTypeTransitionToDiskless indicates transition to a diskless type (Access or TieBreaker). + ReplicatedVolumeDatameshMemberTypeTransitionToDiskless ReplicatedVolumeDatameshMemberTypeTransition = "ToDiskless" +) + +func (t ReplicatedVolumeDatameshMemberTypeTransition) String() string { + return string(t) +} + +// +kubebuilder:object:generate=true +type DRBDResourceDetails struct { + // +patchStrategy=merge + // +optional + Config *DRBDResourceConfig `json:"config,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +type DRBDResourceConfig struct { + // +kubebuilder:default=false + AllowTwoPrimaries bool `json:"allowTwoPrimaries,omitempty"` +} + +type SharedSecretAlg string + +// Shared secret hashing algorithms +const ( + // SharedSecretAlgSHA256 is the SHA256 hashing algorithm for shared secrets + SharedSecretAlgSHA256 = "SHA256" + // SharedSecretAlgSHA1 is the SHA1 hashing algorithm for shared secrets + SharedSecretAlgSHA1 = "SHA1" + SharedSecretAlgDummyForTest = "DummyForTest" +) + +func (a SharedSecretAlg) String() string { + return string(a) +} + +// ReplicatedVolumeEligibleNodesViolation describes a replica placed on a non-eligible node. 
+// +kubebuilder:object:generate=true +type ReplicatedVolumeEligibleNodesViolation struct { + // NodeName is the node where the replica is placed. + NodeName string `json:"nodeName"` + // ReplicaName is the ReplicatedVolumeReplica name. + ReplicaName string `json:"replicaName"` + // Reason describes why this placement violates eligible nodes constraints. + Reason ReplicatedVolumeEligibleNodesViolationReason `json:"reason"` +} + +// ReplicatedVolumeEligibleNodesViolationReason enumerates possible reasons for eligible nodes violation. +type ReplicatedVolumeEligibleNodesViolationReason string + +const ( + // ReplicatedVolumeEligibleNodesViolationReasonOutOfEligibleNodes means replica is on a node not in eligible nodes list. + ReplicatedVolumeEligibleNodesViolationReasonOutOfEligibleNodes ReplicatedVolumeEligibleNodesViolationReason = "OutOfEligibleNodes" + // ReplicatedVolumeEligibleNodesViolationReasonNodeTopologyMismatch means replica is on a node with wrong topology. + ReplicatedVolumeEligibleNodesViolationReasonNodeTopologyMismatch ReplicatedVolumeEligibleNodesViolationReason = "NodeTopologyMismatch" +) + +func (r ReplicatedVolumeEligibleNodesViolationReason) String() string { return string(r) } diff --git a/api/v1alpha1/rva_conditions.go b/api/v1alpha1/rva_conditions.go new file mode 100644 index 000000000..d4918e36e --- /dev/null +++ b/api/v1alpha1/rva_conditions.go @@ -0,0 +1,54 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const ( + // ReplicatedVolumeAttachmentCondAttachedType indicates whether the volume is attached to the requested node. + // + // Reasons describe attach/detach progress and blocking conditions. + ReplicatedVolumeAttachmentCondAttachedType = "Attached" + + ReplicatedVolumeAttachmentCondAttachedReasonAttached = "Attached" + ReplicatedVolumeAttachmentCondAttachedReasonConvertingTieBreakerToAccess = "ConvertingTieBreakerToAccess" + ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied = "LocalityNotSatisfied" + ReplicatedVolumeAttachmentCondAttachedReasonSettingPrimary = "SettingPrimary" + ReplicatedVolumeAttachmentCondAttachedReasonUnableToProvideLocalVolumeAccess = "UnableToProvideLocalVolumeAccess" + ReplicatedVolumeAttachmentCondAttachedReasonWaitingForActiveAttachmentsToDetach = "WaitingForActiveAttachmentsToDetach" + ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplica = "WaitingForReplica" + ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolume = "WaitingForReplicatedVolume" + ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolumeReady = "WaitingForReplicatedVolumeReady" +) + +const ( + // ReplicatedVolumeAttachmentCondReadyType indicates whether the attachment is ready for use. + // It is an aggregate condition: Attached=True AND ReplicaReady=True. + // + // Reasons describe which prerequisite is missing. + ReplicatedVolumeAttachmentCondReadyType = "Ready" + ReplicatedVolumeAttachmentCondReadyReasonNotAttached = "NotAttached" // Attached=False. 
+ ReplicatedVolumeAttachmentCondReadyReasonReady = "Ready" // Attached=True and ReplicaReady=True. + ReplicatedVolumeAttachmentCondReadyReasonReplicaNotReady = "ReplicaNotReady" // ReplicaReady=False. +) + +const ( + // ReplicatedVolumeAttachmentCondReplicaReadyType indicates whether the replica on the requested node is Ready. + // This condition mirrors RVR Ready (status/reason/message) for the replica on rva.spec.nodeName. + // + // Reasons typically mirror the replica's Ready reason; this one is used when it is not yet observable. + ReplicatedVolumeAttachmentCondReplicaReadyType = "ReplicaReady" + ReplicatedVolumeAttachmentCondReplicaReadyReasonWaitingForReplica = "WaitingForReplica" +) diff --git a/api/v1alpha1/rva_types.go b/api/v1alpha1/rva_types.go new file mode 100644 index 000000000..45012e0b7 --- /dev/null +++ b/api/v1alpha1/rva_types.go @@ -0,0 +1,118 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + +// ReplicatedVolumeAttachment is a Kubernetes Custom Resource that represents an attachment intent/state +// of a ReplicatedVolume to a specific node. +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +kubebuilder:resource:scope=Cluster,shortName=rva +// +kubebuilder:metadata:labels=module=sds-replicated-volume +// +kubebuilder:selectablefield:JSONPath=.spec.nodeName +// +kubebuilder:selectablefield:JSONPath=.spec.replicatedVolumeName +// +kubebuilder:printcolumn:name="Volume",type=string,JSONPath=".spec.replicatedVolumeName" +// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=".spec.nodeName" +// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=".status.phase" +// +kubebuilder:printcolumn:name="Attached",type=string,JSONPath=".status.conditions[?(@.type=='Attached')].status" +// +kubebuilder:printcolumn:name="ReplicaReady",type=string,JSONPath=".status.conditions[?(@.type=='ReplicaReady')].status" +// +kubebuilder:printcolumn:name="Ready",type=string,JSONPath=".status.conditions[?(@.type=='Ready')].status" +// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=".metadata.creationTimestamp" +type ReplicatedVolumeAttachment struct { + metav1.TypeMeta `json:",inline"` + + metav1.ObjectMeta `json:"metadata"` + + Spec ReplicatedVolumeAttachmentSpec `json:"spec"` + + // +patchStrategy=merge + Status ReplicatedVolumeAttachmentStatus `json:"status,omitempty" patchStrategy:"merge"` +} + +// ReplicatedVolumeAttachmentList contains a list of ReplicatedVolumeAttachment +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:resource:scope=Cluster +type ReplicatedVolumeAttachmentList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + Items []ReplicatedVolumeAttachment `json:"items"` +} + +// GetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It returns the root object's `.status.conditions`. 
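As an illustrative aside (a minimal sketch, not part of the change set): the condition Type/Reason constants above combine with the GetStatusConditions/SetStatusConditions adapters and the standard meta.SetStatusCondition helper from k8s.io/apimachinery. The package import path in this sketch is an assumption made only for the example.

package example

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" // import path assumed for illustration
)

// markAttached sketches how a controller could record the Attached condition
// on a ReplicatedVolumeAttachment using the constants defined above.
func markAttached(rva *v1alpha1.ReplicatedVolumeAttachment) {
	meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{
		Type:               v1alpha1.ReplicatedVolumeAttachmentCondAttachedType,
		Status:             metav1.ConditionTrue,
		Reason:             v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonAttached,
		Message:            "volume is attached to the requested node",
		ObservedGeneration: rva.Generation,
	})
}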
+func (o *ReplicatedVolumeAttachment) GetStatusConditions() []metav1.Condition { + return o.Status.Conditions +} + +// SetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It sets the root object's `.status.conditions`. +func (o *ReplicatedVolumeAttachment) SetStatusConditions(conditions []metav1.Condition) { + o.Status.Conditions = conditions +} + +// +kubebuilder:object:generate=true +type ReplicatedVolumeAttachmentSpec struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=127 + // +kubebuilder:validation:Pattern=`^[0-9A-Za-z.+_-]*$` + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="replicatedVolumeName is immutable" + ReplicatedVolumeName string `json:"replicatedVolumeName"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=253 + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="nodeName is immutable" + NodeName string `json:"nodeName"` +} + +// +kubebuilder:object:generate=true +type ReplicatedVolumeAttachmentStatus struct { + // +kubebuilder:validation:Enum=Pending;Attaching;Attached;Detaching + // +optional + Phase ReplicatedVolumeAttachmentPhase `json:"phase,omitempty"` + + // +patchMergeKey=type + // +patchStrategy=merge + // +listType=map + // +listMapKey=type + // +optional + Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` +} + +// ReplicatedVolumeAttachmentPhase enumerates possible values for ReplicatedVolumeAttachment status.phase field. +type ReplicatedVolumeAttachmentPhase string + +// ReplicatedVolumeAttachment status.phase possible values. +// Keep these in sync with `ReplicatedVolumeAttachmentStatus.Phase` validation enum. +const ( + // ReplicatedVolumeAttachmentPhasePending means the attachment is not started yet. + ReplicatedVolumeAttachmentPhasePending ReplicatedVolumeAttachmentPhase = "Pending" + // ReplicatedVolumeAttachmentPhaseAttaching means the system is attaching the volume. + ReplicatedVolumeAttachmentPhaseAttaching ReplicatedVolumeAttachmentPhase = "Attaching" + // ReplicatedVolumeAttachmentPhaseAttached means the volume is attached. + ReplicatedVolumeAttachmentPhaseAttached ReplicatedVolumeAttachmentPhase = "Attached" + // ReplicatedVolumeAttachmentPhaseDetaching means the system is detaching the volume. + ReplicatedVolumeAttachmentPhaseDetaching ReplicatedVolumeAttachmentPhase = "Detaching" +) + +func (p ReplicatedVolumeAttachmentPhase) String() string { + return string(p) +} diff --git a/api/v1alpha1/rvr_conditions.go b/api/v1alpha1/rvr_conditions.go new file mode 100644 index 000000000..36715e7ad --- /dev/null +++ b/api/v1alpha1/rvr_conditions.go @@ -0,0 +1,61 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +const ( + // ReplicatedVolumeReplicaCondAttachedType indicates whether the replica is attached. 
+ // + // Reasons describe attachment state, progress, or applicability. + ReplicatedVolumeReplicaCondAttachedType = "Attached" + ReplicatedVolumeReplicaCondAttachedReasonAttached = "Attached" // Attached (primary). + ReplicatedVolumeReplicaCondAttachedReasonPending = "Pending" // Waiting to become primary/attach. + ReplicatedVolumeReplicaCondAttachedReasonAttachingNotApplicable = "AttachingNotApplicable" // Not applicable for this replica type. + ReplicatedVolumeReplicaCondAttachedReasonAttachingNotInitialized = "AttachingNotInitialized" // Not enough status to decide. + ReplicatedVolumeReplicaCondAttachedReasonDetached = "Detached" // Detached (secondary). +) + +const ( + // ReplicatedVolumeReplicaCondBackingVolumeCreatedType indicates whether the backing volume has been created. + // + // Reasons describe applicability and create/delete outcomes. + ReplicatedVolumeReplicaCondBackingVolumeCreatedType = "BackingVolumeCreated" + ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeCreationFailed = "BackingVolumeCreationFailed" // Creation failed. + ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeDeletionFailed = "BackingVolumeDeletionFailed" // Deletion failed. + ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeNotReady = "BackingVolumeNotReady" // Backing volume is not ready. + ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeReady = "BackingVolumeReady" // Backing volume is ready. + ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonNotApplicable = "NotApplicable" // Not applicable for this replica type. +) + +const ( + // ReplicatedVolumeReplicaCondReadyType indicates whether the replica is ready for I/O. + // + // Reasons describe why it is not ready, or confirm it is ready. + ReplicatedVolumeReplicaCondReadyType = "Ready" + ReplicatedVolumeReplicaCondReadyReasonReady = "Ready" // Ready for I/O. +) + +const ( + // ReplicatedVolumeReplicaCondScheduledType indicates whether the replica has been scheduled to a node. + // + // Reasons describe scheduling outcome or failure. + ReplicatedVolumeReplicaCondScheduledType = "Scheduled" + ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes = "NoAvailableNodes" // No nodes are available. + ReplicatedVolumeReplicaCondScheduledReasonReplicaScheduled = "ReplicaScheduled" // Scheduled successfully. + ReplicatedVolumeReplicaCondScheduledReasonSchedulingFailed = "SchedulingFailed" // Scheduling failed. + ReplicatedVolumeReplicaCondScheduledReasonSchedulingPending = "SchedulingPending" // Scheduling is pending. + ReplicatedVolumeReplicaCondScheduledReasonTopologyConstraintsFailed = "TopologyConstraintsFailed" // Topology constraints prevent scheduling. +) diff --git a/api/v1alpha1/rvr_custom_logic_that_should_not_be_here.go b/api/v1alpha1/rvr_custom_logic_that_should_not_be_here.go new file mode 100644 index 000000000..0f939bb78 --- /dev/null +++ b/api/v1alpha1/rvr_custom_logic_that_should_not_be_here.go @@ -0,0 +1,67 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package v1alpha1 + +import ( + "fmt" + "slices" + "strconv" + "strings" +) + +func (rvr *ReplicatedVolumeReplica) NodeID() (uint, bool) { + idx := strings.LastIndex(rvr.Name, "-") + if idx < 0 { + return 0, false + } + + id, err := strconv.ParseUint(rvr.Name[idx+1:], 10, 0) + if err != nil { + return 0, false + } + return uint(id), true +} + +func (rvr *ReplicatedVolumeReplica) SetNameWithNodeID(nodeID uint) { + rvr.Name = fmt.Sprintf("%s-%d", rvr.Spec.ReplicatedVolumeName, nodeID) +} + +func (rvr *ReplicatedVolumeReplica) ChooseNewName(otherRVRs []ReplicatedVolumeReplica) bool { + reservedNodeIDs := make([]uint, 0, RVRMaxNodeID) + + for i := range otherRVRs { + otherRVR := &otherRVRs[i] + if otherRVR.Spec.ReplicatedVolumeName != rvr.Spec.ReplicatedVolumeName { + continue + } + + id, ok := otherRVR.NodeID() + if !ok { + continue + } + reservedNodeIDs = append(reservedNodeIDs, id) + } + + for i := RVRMinNodeID; i <= RVRMaxNodeID; i++ { + if !slices.Contains(reservedNodeIDs, i) { + rvr.SetNameWithNodeID(i) + return true + } + } + + return false +} diff --git a/api/v1alpha1/rvr_types.go b/api/v1alpha1/rvr_types.go new file mode 100644 index 000000000..805d68850 --- /dev/null +++ b/api/v1alpha1/rvr_types.go @@ -0,0 +1,459 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1alpha1 + +import ( + "fmt" + "strconv" + "strings" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// ReplicatedVolumeReplica is a Kubernetes Custom Resource that represents a replica of a ReplicatedVolume. 
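As an illustrative aside (a minimal sketch, not part of the change set): the NodeID/SetNameWithNodeID/ChooseNewName helpers above encode the DRBD node ID as a numeric name suffix, and ChooseNewName reserves the lowest free ID among replicas of the same volume. The package import path and the newReplicaName function are assumptions made only for this example.

package example

import (
	v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" // import path assumed for illustration
)

// newReplicaName sketches how a new replica could pick its "<volume>-<nodeID>" name:
// ChooseNewName scans the replicas that already belong to the same volume and
// assigns the lowest free node ID in [RVRMinNodeID; RVRMaxNodeID].
func newReplicaName(volumeName string, existing []v1alpha1.ReplicatedVolumeReplica) (string, bool) {
	rvr := &v1alpha1.ReplicatedVolumeReplica{}
	rvr.Spec.ReplicatedVolumeName = volumeName
	rvr.Spec.Type = v1alpha1.ReplicaTypeDiskful

	if !rvr.ChooseNewName(existing) {
		return "", false // all node IDs in the valid range are already taken
	}
	return rvr.Name, true
}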
+// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +kubebuilder:resource:scope=Cluster,shortName=rvr +// +kubebuilder:metadata:labels=module=sds-replicated-volume +// +kubebuilder:selectablefield:JSONPath=.spec.nodeName +// +kubebuilder:selectablefield:JSONPath=.spec.replicatedVolumeName +// +kubebuilder:printcolumn:name="Volume",type=string,JSONPath=".spec.replicatedVolumeName" +// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=".spec.nodeName" +// +kubebuilder:printcolumn:name="Type",type=string,JSONPath=".spec.type" +// +kubebuilder:printcolumn:name="Attached",type=string,JSONPath=".status.conditions[?(@.type=='Attached')].status" +// +kubebuilder:printcolumn:name="Online",type=string,JSONPath=".status.conditions[?(@.type=='Online')].status" +// +kubebuilder:printcolumn:name="Ready",type=string,JSONPath=".status.conditions[?(@.type=='Ready')].status" +// +kubebuilder:printcolumn:name="Configured",type=string,JSONPath=".status.conditions[?(@.type=='Configured')].status" +// +kubebuilder:printcolumn:name="DataInitialized",type=string,JSONPath=".status.conditions[?(@.type=='DataInitialized')].status" +// +kubebuilder:printcolumn:name="InQuorum",type=string,JSONPath=".status.conditions[?(@.type=='InQuorum')].status" +// +kubebuilder:printcolumn:name="InSync",type=string,JSONPath=".status.syncProgress" +// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=".metadata.creationTimestamp" +// +kubebuilder:validation:XValidation:rule="self.metadata.name.startsWith(self.spec.replicatedVolumeName + '-')",message="metadata.name must start with spec.replicatedVolumeName + '-'" +// +kubebuilder:validation:XValidation:rule="int(self.metadata.name.substring(self.metadata.name.lastIndexOf('-') + 1)) <= 31",message="numeric suffix must be between 0 and 31" +// +kubebuilder:validation:XValidation:rule="size(self.metadata.name) <= 123",message="metadata.name must be at most 123 characters (to fit derived LLV name with prefix)" +type ReplicatedVolumeReplica struct { + metav1.TypeMeta `json:",inline"` + + metav1.ObjectMeta `json:"metadata"` + + Spec ReplicatedVolumeReplicaSpec `json:"spec"` + + // +patchStrategy=merge + Status ReplicatedVolumeReplicaStatus `json:"status,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +// +kubebuilder:object:root=true +// +kubebuilder:resource:scope=Cluster +type ReplicatedVolumeReplicaList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + Items []ReplicatedVolumeReplica `json:"items"` +} + +// GetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It returns the root object's `.status.conditions`. +func (o *ReplicatedVolumeReplica) GetStatusConditions() []metav1.Condition { + return o.Status.Conditions +} + +// SetStatusConditions is an adapter method to satisfy objutilv1.StatusConditionObject. +// It sets the root object's `.status.conditions`. 
+func (o *ReplicatedVolumeReplica) SetStatusConditions(conditions []metav1.Condition) { + o.Status.Conditions = conditions +} + +// +kubebuilder:object:generate=true +type ReplicatedVolumeReplicaSpec struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=120 + // +kubebuilder:validation:Pattern=`^[0-9A-Za-z.+_-]*$` + // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="replicatedVolumeName is immutable" + ReplicatedVolumeName string `json:"replicatedVolumeName"` + + // +optional + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:MaxLength=253 + NodeName string `json:"nodeName,omitempty"` + + // +kubebuilder:validation:Required + // +kubebuilder:validation:Enum=Diskful;Access;TieBreaker + Type ReplicaType `json:"type"` +} + +// ReplicaType enumerates possible values for ReplicatedVolumeReplica spec.type and status.actualType fields. +type ReplicaType string + +// Replica type values for [ReplicatedVolumeReplica] spec.type field. +const ( + // ReplicaTypeDiskful represents a diskful replica that stores data on disk. + ReplicaTypeDiskful ReplicaType = "Diskful" + // ReplicaTypeAccess represents a diskless replica for data access. + ReplicaTypeAccess ReplicaType = "Access" + // ReplicaTypeTieBreaker represents a diskless replica for quorum. + ReplicaTypeTieBreaker ReplicaType = "TieBreaker" +) + +func (t ReplicaType) String() string { + return string(t) +} + +// +kubebuilder:object:generate=true +type ReplicatedVolumeReplicaStatus struct { + // +patchMergeKey=type + // +patchStrategy=merge + // +listType=map + // +listMapKey=type + // +optional + Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` + + // +kubebuilder:validation:Enum=Diskful;Access;TieBreaker + ActualType ReplicaType `json:"actualType,omitempty"` + + // +optional + // +kubebuilder:validation:MaxLength=256 + LVMLogicalVolumeName string `json:"lvmLogicalVolumeName,omitempty"` + + // +patchStrategy=merge + DRBD *DRBD `json:"drbd,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +type DRBD struct { + // +patchStrategy=merge + Config *DRBDConfig `json:"config,omitempty" patchStrategy:"merge"` + // +patchStrategy=merge + Actual *DRBDActual `json:"actual,omitempty" patchStrategy:"merge"` + // +patchStrategy=merge + Status *DRBDStatus `json:"status,omitempty" patchStrategy:"merge"` +} + +// +kubebuilder:object:generate=true +type DRBDConfig struct { + // +optional + Address *Address `json:"address,omitempty"` + + // Peers contains information about other replicas in the same ReplicatedVolume. + // The key in this map is the node name where the peer replica is located. + // +optional + Peers map[string]Peer `json:"peers,omitempty"` + + // PeersInitialized indicates that Peers has been calculated. + // This field is used to distinguish between no peers and not yet calculated. 
+ // +optional + PeersInitialized bool `json:"peersInitialized,omitempty"` + + // +optional + Primary *bool `json:"primary,omitempty"` +} + +// +kubebuilder:object:generate=true +type DRBDActual struct { + // +optional + // +kubebuilder:validation:Pattern=`^(/[a-zA-Z0-9/.+_-]+)?$` + // +kubebuilder:validation:MaxLength=256 + Disk string `json:"disk,omitempty"` + + // +optional + // +kubebuilder:default=false + AllowTwoPrimaries bool `json:"allowTwoPrimaries,omitempty"` + + // +optional + // +kubebuilder:default=false + InitialSyncCompleted bool `json:"initialSyncCompleted,omitempty"` +} + +// +kubebuilder:object:generate=true +type DRBDStatus struct { + Name string `json:"name"` + //nolint:revive // var-naming: NodeId kept for API compatibility with JSON tag + NodeId int `json:"nodeId"` + Role string `json:"role"` + Suspended bool `json:"suspended"` + SuspendedUser bool `json:"suspendedUser"` + SuspendedNoData bool `json:"suspendedNoData"` + SuspendedFencing bool `json:"suspendedFencing"` + SuspendedQuorum bool `json:"suspendedQuorum"` + ForceIOFailures bool `json:"forceIOFailures"` + WriteOrdering string `json:"writeOrdering"` + Devices []DeviceStatus `json:"devices"` + Connections []ConnectionStatus `json:"connections"` +} + +type DiskState string + +const ( + DiskStateDiskless DiskState = "Diskless" + DiskStateAttaching DiskState = "Attaching" + DiskStateDetaching DiskState = "Detaching" + DiskStateFailed DiskState = "Failed" + DiskStateNegotiating DiskState = "Negotiating" + DiskStateInconsistent DiskState = "Inconsistent" + DiskStateOutdated DiskState = "Outdated" + DiskStateUnknown DiskState = "DUnknown" + DiskStateConsistent DiskState = "Consistent" + DiskStateUpToDate DiskState = "UpToDate" +) + +func (s DiskState) String() string { + return string(s) +} + +func ParseDiskState(s string) DiskState { + switch DiskState(s) { + case DiskStateDiskless, + DiskStateAttaching, + DiskStateDetaching, + DiskStateFailed, + DiskStateNegotiating, + DiskStateInconsistent, + DiskStateOutdated, + DiskStateUnknown, + DiskStateConsistent, + DiskStateUpToDate: + return DiskState(s) + default: + return "" + } +} + +type ReplicationState string + +const ( + ReplicationStateOff ReplicationState = "Off" + ReplicationStateEstablished ReplicationState = "Established" + ReplicationStateStartingSyncSource ReplicationState = "StartingSyncS" + ReplicationStateStartingSyncTarget ReplicationState = "StartingSyncT" + ReplicationStateWFBitMapSource ReplicationState = "WFBitMapS" + ReplicationStateWFBitMapTarget ReplicationState = "WFBitMapT" + ReplicationStateWFSyncUUID ReplicationState = "WFSyncUUID" + ReplicationStateSyncSource ReplicationState = "SyncSource" + ReplicationStateSyncTarget ReplicationState = "SyncTarget" + ReplicationStatePausedSyncSource ReplicationState = "PausedSyncS" + ReplicationStatePausedSyncTarget ReplicationState = "PausedSyncT" + ReplicationStateVerifySource ReplicationState = "VerifyS" + ReplicationStateVerifyTarget ReplicationState = "VerifyT" + ReplicationStateAhead ReplicationState = "Ahead" + ReplicationStateBehind ReplicationState = "Behind" + ReplicationStateUnknown ReplicationState = "Unknown" +) + +func (s ReplicationState) String() string { + return string(s) +} + +func ParseReplicationState(s string) ReplicationState { + switch ReplicationState(s) { + case ReplicationStateOff, + ReplicationStateEstablished, + ReplicationStateStartingSyncSource, + ReplicationStateStartingSyncTarget, + ReplicationStateWFBitMapSource, + ReplicationStateWFBitMapTarget, + ReplicationStateWFSyncUUID, 
+ ReplicationStateSyncSource, + ReplicationStateSyncTarget, + ReplicationStatePausedSyncSource, + ReplicationStatePausedSyncTarget, + ReplicationStateVerifySource, + ReplicationStateVerifyTarget, + ReplicationStateAhead, + ReplicationStateBehind, + ReplicationStateUnknown: + return ReplicationState(s) + default: + return "" + } +} + +// IsSyncingState returns true if the replication state indicates active synchronization. +func (s ReplicationState) IsSyncingState() bool { + switch s { + case ReplicationStateSyncSource, + ReplicationStateSyncTarget, + ReplicationStateStartingSyncSource, + ReplicationStateStartingSyncTarget, + ReplicationStatePausedSyncSource, + ReplicationStatePausedSyncTarget, + ReplicationStateWFBitMapSource, + ReplicationStateWFBitMapTarget, + ReplicationStateWFSyncUUID: + return true + default: + return false + } +} + +type ConnectionState string + +const ( + ConnectionStateStandAlone ConnectionState = "StandAlone" + ConnectionStateDisconnecting ConnectionState = "Disconnecting" + ConnectionStateUnconnected ConnectionState = "Unconnected" + ConnectionStateTimeout ConnectionState = "Timeout" + ConnectionStateBrokenPipe ConnectionState = "BrokenPipe" + ConnectionStateNetworkFailure ConnectionState = "NetworkFailure" + ConnectionStateProtocolError ConnectionState = "ProtocolError" + ConnectionStateConnecting ConnectionState = "Connecting" + ConnectionStateTearDown ConnectionState = "TearDown" + ConnectionStateConnected ConnectionState = "Connected" + ConnectionStateUnknown ConnectionState = "Unknown" +) + +func (s ConnectionState) String() string { + return string(s) +} + +func ParseConnectionState(s string) ConnectionState { + switch ConnectionState(s) { + case ConnectionStateStandAlone, + ConnectionStateDisconnecting, + ConnectionStateUnconnected, + ConnectionStateTimeout, + ConnectionStateBrokenPipe, + ConnectionStateNetworkFailure, + ConnectionStateProtocolError, + ConnectionStateConnecting, + ConnectionStateTearDown, + ConnectionStateConnected, + ConnectionStateUnknown: + return ConnectionState(s) + default: + return "" + } +} + +// +kubebuilder:object:generate=true +type DeviceStatus struct { + Volume int `json:"volume"` + Minor int `json:"minor"` + DiskState DiskState `json:"diskState"` + Client bool `json:"client"` + Open bool `json:"open"` + Quorum bool `json:"quorum"` + Size int `json:"size"` +} + +// +kubebuilder:object:generate=true +type ConnectionStatus struct { + //nolint:revive // var-naming: PeerNodeId kept for API compatibility with JSON tag + PeerNodeId int `json:"peerNodeId"` + Name string `json:"name"` + ConnectionState ConnectionState `json:"connectionState"` + Congested bool `json:"congested"` + Peerrole string `json:"peerRole"` + TLS bool `json:"tls"` + Paths []PathStatus `json:"paths"` + PeerDevices []PeerDeviceStatus `json:"peerDevices"` +} + +// +kubebuilder:object:generate=true +type PathStatus struct { + ThisHost HostStatus `json:"thisHost"` + RemoteHost HostStatus `json:"remoteHost"` + Established bool `json:"established"` +} + +// +kubebuilder:object:generate=true +type HostStatus struct { + Address string `json:"address"` + Port int `json:"port"` + Family string `json:"family"` +} + +// +kubebuilder:object:generate=true +type PeerDeviceStatus struct { + Volume int `json:"volume"` + ReplicationState ReplicationState `json:"replicationState"` + PeerDiskState DiskState `json:"peerDiskState"` + PeerClient bool `json:"peerClient"` + ResyncSuspended string `json:"resyncSuspended"` + OutOfSync int `json:"outOfSync"` + HasSyncDetails bool 
`json:"hasSyncDetails"` + HasOnlineVerifyDetails bool `json:"hasOnlineVerifyDetails"` + PercentInSync string `json:"percentInSync"` +} + +// +kubebuilder:object:generate=true +type Peer struct { + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=7 + //nolint:revive // var-naming: NodeId kept for API compatibility with JSON tag + NodeId uint `json:"nodeId"` + + // +kubebuilder:validation:Required + Address Address `json:"address"` + + // +kubebuilder:default=false + Diskless bool `json:"diskless,omitempty"` +} + +// +kubebuilder:object:generate=true +type Address struct { + // +kubebuilder:validation:Required + // +kubebuilder:validation:Pattern=`^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$` + IPv4 string `json:"ipv4"` + + // +kubebuilder:validation:Minimum=1025 + // +kubebuilder:validation:Maximum=65535 + Port uint `json:"port"` +} + +// DRBD node ID constants for ReplicatedVolumeReplica +const ( + // RVRMinNodeID is the minimum valid node ID for DRBD configuration in ReplicatedVolumeReplica + RVRMinNodeID = uint(0) + // RVRMaxNodeID is the maximum valid node ID for DRBD configuration in ReplicatedVolumeReplica + RVRMaxNodeID = uint(31) +) + +// IsValidNodeID checks if nodeID is within valid range [RVRMinNodeID; RVRMaxNodeID]. +func IsValidNodeID(nodeID uint) bool { + return nodeID >= RVRMinNodeID && nodeID <= RVRMaxNodeID +} + +// FormatValidNodeIDRange returns a formatted string representing the valid nodeID range. +// faster than fmt.Sprintf("%d; %d", RVRMinNodeID, RVRMaxNodeID) because it avoids allocation and copying of the string. +func FormatValidNodeIDRange() string { + var b strings.Builder + b.Grow(10) // Pre-allocate: "[0; 31]" = 8 bytes, but allocate a bit more + b.WriteByte('[') + b.WriteString(strconv.FormatUint(uint64(RVRMinNodeID), 10)) + b.WriteString("; ") + b.WriteString(strconv.FormatUint(uint64(RVRMaxNodeID), 10)) + b.WriteByte(']') + return b.String() +} + +func SprintDRBDDisk(actualVGNameOnTheNode, actualLVNameOnTheNode string) string { + return fmt.Sprintf("/dev/%s/%s", actualVGNameOnTheNode, actualLVNameOnTheNode) +} + +func ParseDRBDDisk(disk string) (actualVGNameOnTheNode, actualLVNameOnTheNode string, err error) { + parts := strings.Split(disk, "/") + if len(parts) != 4 || parts[0] != "" || parts[1] != "dev" || + len(parts[2]) == 0 || len(parts[3]) == 0 { + return "", "", + fmt.Errorf( + "parsing DRBD Disk: expected format '/dev/{actualVGNameOnTheNode}/{actualLVNameOnTheNode}', got '%s'", + disk, + ) + } + return parts[2], parts[3], nil +} diff --git a/api/v1alpha1/zz_generated.deepcopy.go b/api/v1alpha1/zz_generated.deepcopy.go index 6aece3f00..707c1d756 100644 --- a/api/v1alpha1/zz_generated.deepcopy.go +++ b/api/v1alpha1/zz_generated.deepcopy.go @@ -1,5 +1,7 @@ +//go:build !ignore_autogenerated + /* -Copyright 2025 Flant JSC +Copyright Flant JSC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -14,31 +16,187 @@ See the License for the specific language governing permissions and limitations under the License. */ +// Code generated by controller-gen. DO NOT EDIT. + package v1alpha1 -import "k8s.io/apimachinery/pkg/runtime" +import ( + "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" +) + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *Address) DeepCopyInto(out *Address) { + *out = *in +} -// --------------- replicated storage class +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Address. +func (in *Address) DeepCopy() *Address { + if in == nil { + return nil + } + out := new(Address) + in.DeepCopyInto(out) + return out +} // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ReplicatedStorageClass) DeepCopyInto(out *ReplicatedStorageClass) { +func (in *ConnectionStatus) DeepCopyInto(out *ConnectionStatus) { + *out = *in + if in.Paths != nil { + in, out := &in.Paths, &out.Paths + *out = make([]PathStatus, len(*in)) + copy(*out, *in) + } + if in.PeerDevices != nil { + in, out := &in.PeerDevices, &out.PeerDevices + *out = make([]PeerDeviceStatus, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ConnectionStatus. +func (in *ConnectionStatus) DeepCopy() *ConnectionStatus { + if in == nil { + return nil + } + out := new(ConnectionStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *CreateNewUUIDParams) DeepCopyInto(out *CreateNewUUIDParams) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CreateNewUUIDParams. +func (in *CreateNewUUIDParams) DeepCopy() *CreateNewUUIDParams { + if in == nil { + return nil + } + out := new(CreateNewUUIDParams) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBD) DeepCopyInto(out *DRBD) { + *out = *in + if in.Config != nil { + in, out := &in.Config, &out.Config + *out = new(DRBDConfig) + (*in).DeepCopyInto(*out) + } + if in.Actual != nil { + in, out := &in.Actual, &out.Actual + *out = new(DRBDActual) + **out = **in + } + if in.Status != nil { + in, out := &in.Status, &out.Status + *out = new(DRBDStatus) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBD. +func (in *DRBD) DeepCopy() *DRBD { + if in == nil { + return nil + } + out := new(DRBD) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDActual) DeepCopyInto(out *DRBDActual) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDActual. +func (in *DRBDActual) DeepCopy() *DRBDActual { + if in == nil { + return nil + } + out := new(DRBDActual) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDAddress) DeepCopyInto(out *DRBDAddress) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDAddress. +func (in *DRBDAddress) DeepCopy() *DRBDAddress { + if in == nil { + return nil + } + out := new(DRBDAddress) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *DRBDConfig) DeepCopyInto(out *DRBDConfig) { + *out = *in + if in.Address != nil { + in, out := &in.Address, &out.Address + *out = new(Address) + **out = **in + } + if in.Peers != nil { + in, out := &in.Peers, &out.Peers + *out = make(map[string]Peer, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.Primary != nil { + in, out := &in.Primary, &out.Primary + *out = new(bool) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDConfig. +func (in *DRBDConfig) DeepCopy() *DRBDConfig { + if in == nil { + return nil + } + out := new(DRBDConfig) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDNodeOperation) DeepCopyInto(out *DRBDNodeOperation) { *out = *in out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + out.Spec = in.Spec + if in.Status != nil { + in, out := &in.Status, &out.Status + *out = new(DRBDNodeOperationStatus) + (*in).DeepCopyInto(*out) + } } -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EmptyBlockDevice. -func (in *ReplicatedStorageClass) DeepCopy() *ReplicatedStorageClass { +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDNodeOperation. +func (in *DRBDNodeOperation) DeepCopy() *DRBDNodeOperation { if in == nil { return nil } - out := new(ReplicatedStorageClass) + out := new(DRBDNodeOperation) in.DeepCopyInto(out) return out } // DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *ReplicatedStorageClass) DeepCopyObject() runtime.Object { +func (in *DRBDNodeOperation) DeepCopyObject() runtime.Object { if c := in.DeepCopy(); c != nil { return c } @@ -46,58 +204,100 @@ func (in *ReplicatedStorageClass) DeepCopyObject() runtime.Object { } // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ReplicatedStorageClassList) DeepCopyInto(out *ReplicatedStorageClassList) { +func (in *DRBDNodeOperationList) DeepCopyInto(out *DRBDNodeOperationList) { *out = *in out.TypeMeta = in.TypeMeta in.ListMeta.DeepCopyInto(&out.ListMeta) if in.Items != nil { in, out := &in.Items, &out.Items - *out = make([]ReplicatedStorageClass, len(*in)) + *out = make([]DRBDNodeOperation, len(*in)) for i := range *in { (*in)[i].DeepCopyInto(&(*out)[i]) } } } -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GuestbookList. -func (in *ReplicatedStorageClassList) DeepCopy() *ReplicatedStorageClassList { +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDNodeOperationList. +func (in *DRBDNodeOperationList) DeepCopy() *DRBDNodeOperationList { if in == nil { return nil } - out := new(ReplicatedStorageClassList) + out := new(DRBDNodeOperationList) in.DeepCopyInto(out) return out } // DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *ReplicatedStorageClassList) DeepCopyObject() runtime.Object { +func (in *DRBDNodeOperationList) DeepCopyObject() runtime.Object { if c := in.DeepCopy(); c != nil { return c } return nil } -// --------------- replicated storage pool +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *DRBDNodeOperationSpec) DeepCopyInto(out *DRBDNodeOperationSpec) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDNodeOperationSpec. +func (in *DRBDNodeOperationSpec) DeepCopy() *DRBDNodeOperationSpec { + if in == nil { + return nil + } + out := new(DRBDNodeOperationSpec) + in.DeepCopyInto(out) + return out +} // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ReplicatedStoragePool) DeepCopyInto(out *ReplicatedStoragePool) { +func (in *DRBDNodeOperationStatus) DeepCopyInto(out *DRBDNodeOperationStatus) { + *out = *in + if in.StartedAt != nil { + in, out := &in.StartedAt, &out.StartedAt + *out = (*in).DeepCopy() + } + if in.CompletedAt != nil { + in, out := &in.CompletedAt, &out.CompletedAt + *out = (*in).DeepCopy() + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDNodeOperationStatus. +func (in *DRBDNodeOperationStatus) DeepCopy() *DRBDNodeOperationStatus { + if in == nil { + return nil + } + out := new(DRBDNodeOperationStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResource) DeepCopyInto(out *DRBDResource) { *out = *in out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + if in.Status != nil { + in, out := &in.Status, &out.Status + *out = new(DRBDResourceStatus) + (*in).DeepCopyInto(*out) + } } -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EmptyBlockDevice. -func (in *ReplicatedStoragePool) DeepCopy() *ReplicatedStoragePool { +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResource. +func (in *DRBDResource) DeepCopy() *DRBDResource { if in == nil { return nil } - out := new(ReplicatedStoragePool) + out := new(DRBDResource) in.DeepCopyInto(out) return out } // DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *ReplicatedStoragePool) DeepCopyObject() runtime.Object { +func (in *DRBDResource) DeepCopyObject() runtime.Object { if c := in.DeepCopy(); c != nil { return c } @@ -105,33 +305,1337 @@ func (in *ReplicatedStoragePool) DeepCopyObject() runtime.Object { } // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ReplicatedStoragePoolList) DeepCopyInto(out *ReplicatedStoragePoolList) { +func (in *DRBDResourceActiveConfiguration) DeepCopyInto(out *DRBDResourceActiveConfiguration) { + *out = *in + if in.Quorum != nil { + in, out := &in.Quorum, &out.Quorum + *out = new(byte) + **out = **in + } + if in.QuorumMinimumRedundancy != nil { + in, out := &in.QuorumMinimumRedundancy, &out.QuorumMinimumRedundancy + *out = new(byte) + **out = **in + } + if in.Size != nil { + in, out := &in.Size, &out.Size + x := (*in).DeepCopy() + *out = &x + } + if in.AllowTwoPrimaries != nil { + in, out := &in.AllowTwoPrimaries, &out.AllowTwoPrimaries + *out = new(bool) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceActiveConfiguration. 
+func (in *DRBDResourceActiveConfiguration) DeepCopy() *DRBDResourceActiveConfiguration { + if in == nil { + return nil + } + out := new(DRBDResourceActiveConfiguration) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceAddressStatus) DeepCopyInto(out *DRBDResourceAddressStatus) { + *out = *in + out.Address = in.Address +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceAddressStatus. +func (in *DRBDResourceAddressStatus) DeepCopy() *DRBDResourceAddressStatus { + if in == nil { + return nil + } + out := new(DRBDResourceAddressStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceConfig) DeepCopyInto(out *DRBDResourceConfig) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceConfig. +func (in *DRBDResourceConfig) DeepCopy() *DRBDResourceConfig { + if in == nil { + return nil + } + out := new(DRBDResourceConfig) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceDetails) DeepCopyInto(out *DRBDResourceDetails) { + *out = *in + if in.Config != nil { + in, out := &in.Config, &out.Config + *out = new(DRBDResourceConfig) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceDetails. +func (in *DRBDResourceDetails) DeepCopy() *DRBDResourceDetails { + if in == nil { + return nil + } + out := new(DRBDResourceDetails) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceList) DeepCopyInto(out *DRBDResourceList) { *out = *in out.TypeMeta = in.TypeMeta in.ListMeta.DeepCopyInto(&out.ListMeta) if in.Items != nil { in, out := &in.Items, &out.Items - *out = make([]ReplicatedStoragePool, len(*in)) + *out = make([]DRBDResource, len(*in)) for i := range *in { (*in)[i].DeepCopyInto(&(*out)[i]) } } } -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GuestbookList. -func (in *ReplicatedStoragePoolList) DeepCopy() *ReplicatedStoragePoolList { +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceList. +func (in *DRBDResourceList) DeepCopy() *DRBDResourceList { if in == nil { return nil } - out := new(ReplicatedStoragePoolList) + out := new(DRBDResourceList) in.DeepCopyInto(out) return out } // DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *ReplicatedStoragePoolList) DeepCopyObject() runtime.Object { +func (in *DRBDResourceList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *DRBDResourceOperation) DeepCopyInto(out *DRBDResourceOperation) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + if in.Status != nil { + in, out := &in.Status, &out.Status + *out = new(DRBDResourceOperationStatus) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceOperation. +func (in *DRBDResourceOperation) DeepCopy() *DRBDResourceOperation { + if in == nil { + return nil + } + out := new(DRBDResourceOperation) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *DRBDResourceOperation) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceOperationList) DeepCopyInto(out *DRBDResourceOperationList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]DRBDResourceOperation, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceOperationList. +func (in *DRBDResourceOperationList) DeepCopy() *DRBDResourceOperationList { + if in == nil { + return nil + } + out := new(DRBDResourceOperationList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *DRBDResourceOperationList) DeepCopyObject() runtime.Object { if c := in.DeepCopy(); c != nil { return c } return nil } + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceOperationSpec) DeepCopyInto(out *DRBDResourceOperationSpec) { + *out = *in + if in.CreateNewUUID != nil { + in, out := &in.CreateNewUUID, &out.CreateNewUUID + *out = new(CreateNewUUIDParams) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceOperationSpec. +func (in *DRBDResourceOperationSpec) DeepCopy() *DRBDResourceOperationSpec { + if in == nil { + return nil + } + out := new(DRBDResourceOperationSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceOperationStatus) DeepCopyInto(out *DRBDResourceOperationStatus) { + *out = *in + if in.StartedAt != nil { + in, out := &in.StartedAt, &out.StartedAt + *out = (*in).DeepCopy() + } + if in.CompletedAt != nil { + in, out := &in.CompletedAt, &out.CompletedAt + *out = (*in).DeepCopy() + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceOperationStatus. +func (in *DRBDResourceOperationStatus) DeepCopy() *DRBDResourceOperationStatus { + if in == nil { + return nil + } + out := new(DRBDResourceOperationStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *DRBDResourcePath) DeepCopyInto(out *DRBDResourcePath) { + *out = *in + out.Address = in.Address +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourcePath. +func (in *DRBDResourcePath) DeepCopy() *DRBDResourcePath { + if in == nil { + return nil + } + out := new(DRBDResourcePath) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourcePathStatus) DeepCopyInto(out *DRBDResourcePathStatus) { + *out = *in + out.Address = in.Address +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourcePathStatus. +func (in *DRBDResourcePathStatus) DeepCopy() *DRBDResourcePathStatus { + if in == nil { + return nil + } + out := new(DRBDResourcePathStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourcePeer) DeepCopyInto(out *DRBDResourcePeer) { + *out = *in + if in.Paths != nil { + in, out := &in.Paths, &out.Paths + *out = make([]DRBDResourcePath, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourcePeer. +func (in *DRBDResourcePeer) DeepCopy() *DRBDResourcePeer { + if in == nil { + return nil + } + out := new(DRBDResourcePeer) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourcePeerStatus) DeepCopyInto(out *DRBDResourcePeerStatus) { + *out = *in + if in.NodeID != nil { + in, out := &in.NodeID, &out.NodeID + *out = new(uint) + **out = **in + } + if in.Paths != nil { + in, out := &in.Paths, &out.Paths + *out = make([]DRBDResourcePathStatus, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourcePeerStatus. +func (in *DRBDResourcePeerStatus) DeepCopy() *DRBDResourcePeerStatus { + if in == nil { + return nil + } + out := new(DRBDResourcePeerStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDResourceSpec) DeepCopyInto(out *DRBDResourceSpec) { + *out = *in + if in.SystemNetworks != nil { + in, out := &in.SystemNetworks, &out.SystemNetworks + *out = make([]string, len(*in)) + copy(*out, *in) + } + out.Size = in.Size.DeepCopy() + if in.Peers != nil { + in, out := &in.Peers, &out.Peers + *out = make([]DRBDResourcePeer, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceSpec. +func (in *DRBDResourceSpec) DeepCopy() *DRBDResourceSpec { + if in == nil { + return nil + } + out := new(DRBDResourceSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *DRBDResourceStatus) DeepCopyInto(out *DRBDResourceStatus) { + *out = *in + if in.Addresses != nil { + in, out := &in.Addresses, &out.Addresses + *out = make([]DRBDResourceAddressStatus, len(*in)) + copy(*out, *in) + } + if in.ActiveConfiguration != nil { + in, out := &in.ActiveConfiguration, &out.ActiveConfiguration + *out = new(DRBDResourceActiveConfiguration) + (*in).DeepCopyInto(*out) + } + if in.Peers != nil { + in, out := &in.Peers, &out.Peers + *out = make([]DRBDResourcePeerStatus, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Quorum != nil { + in, out := &in.Quorum, &out.Quorum + *out = new(bool) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDResourceStatus. +func (in *DRBDResourceStatus) DeepCopy() *DRBDResourceStatus { + if in == nil { + return nil + } + out := new(DRBDResourceStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DRBDStatus) DeepCopyInto(out *DRBDStatus) { + *out = *in + if in.Devices != nil { + in, out := &in.Devices, &out.Devices + *out = make([]DeviceStatus, len(*in)) + copy(*out, *in) + } + if in.Connections != nil { + in, out := &in.Connections, &out.Connections + *out = make([]ConnectionStatus, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DRBDStatus. +func (in *DRBDStatus) DeepCopy() *DRBDStatus { + if in == nil { + return nil + } + out := new(DRBDStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DeviceStatus) DeepCopyInto(out *DeviceStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeviceStatus. +func (in *DeviceStatus) DeepCopy() *DeviceStatus { + if in == nil { + return nil + } + out := new(DeviceStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *HostStatus) DeepCopyInto(out *HostStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HostStatus. +func (in *HostStatus) DeepCopy() *HostStatus { + if in == nil { + return nil + } + out := new(HostStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PathStatus) DeepCopyInto(out *PathStatus) { + *out = *in + out.ThisHost = in.ThisHost + out.RemoteHost = in.RemoteHost +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PathStatus. +func (in *PathStatus) DeepCopy() *PathStatus { + if in == nil { + return nil + } + out := new(PathStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Peer) DeepCopyInto(out *Peer) { + *out = *in + out.Address = in.Address +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Peer. 
+func (in *Peer) DeepCopy() *Peer { + if in == nil { + return nil + } + out := new(Peer) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PeerDeviceStatus) DeepCopyInto(out *PeerDeviceStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PeerDeviceStatus. +func (in *PeerDeviceStatus) DeepCopy() *PeerDeviceStatus { + if in == nil { + return nil + } + out := new(PeerDeviceStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClass) DeepCopyInto(out *ReplicatedStorageClass) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClass. +func (in *ReplicatedStorageClass) DeepCopy() *ReplicatedStorageClass { + if in == nil { + return nil + } + out := new(ReplicatedStorageClass) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedStorageClass) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassConfiguration) DeepCopyInto(out *ReplicatedStorageClassConfiguration) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassConfiguration. +func (in *ReplicatedStorageClassConfiguration) DeepCopy() *ReplicatedStorageClassConfiguration { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassConfiguration) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassConfigurationRollingUpdateStrategy) DeepCopyInto(out *ReplicatedStorageClassConfigurationRollingUpdateStrategy) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassConfigurationRollingUpdateStrategy. +func (in *ReplicatedStorageClassConfigurationRollingUpdateStrategy) DeepCopy() *ReplicatedStorageClassConfigurationRollingUpdateStrategy { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassConfigurationRollingUpdateStrategy) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassConfigurationRolloutStrategy) DeepCopyInto(out *ReplicatedStorageClassConfigurationRolloutStrategy) { + *out = *in + if in.RollingUpdate != nil { + in, out := &in.RollingUpdate, &out.RollingUpdate + *out = new(ReplicatedStorageClassConfigurationRollingUpdateStrategy) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassConfigurationRolloutStrategy. 
+func (in *ReplicatedStorageClassConfigurationRolloutStrategy) DeepCopy() *ReplicatedStorageClassConfigurationRolloutStrategy { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassConfigurationRolloutStrategy) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair) DeepCopyInto(out *ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair. +func (in *ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair) DeepCopy() *ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassEligibleNodesConflictResolutionStrategy) DeepCopyInto(out *ReplicatedStorageClassEligibleNodesConflictResolutionStrategy) { + *out = *in + if in.RollingRepair != nil { + in, out := &in.RollingRepair, &out.RollingRepair + *out = new(ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassEligibleNodesConflictResolutionStrategy. +func (in *ReplicatedStorageClassEligibleNodesConflictResolutionStrategy) DeepCopy() *ReplicatedStorageClassEligibleNodesConflictResolutionStrategy { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassEligibleNodesConflictResolutionStrategy) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassList) DeepCopyInto(out *ReplicatedStorageClassList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]ReplicatedStorageClass, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassList. +func (in *ReplicatedStorageClassList) DeepCopy() *ReplicatedStorageClassList { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedStorageClassList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ReplicatedStorageClassSpec) DeepCopyInto(out *ReplicatedStorageClassSpec) { + *out = *in + in.Storage.DeepCopyInto(&out.Storage) + if in.Zones != nil { + in, out := &in.Zones, &out.Zones + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.NodeLabelSelector != nil { + in, out := &in.NodeLabelSelector, &out.NodeLabelSelector + *out = new(v1.LabelSelector) + (*in).DeepCopyInto(*out) + } + if in.SystemNetworkNames != nil { + in, out := &in.SystemNetworkNames, &out.SystemNetworkNames + *out = make([]string, len(*in)) + copy(*out, *in) + } + in.ConfigurationRolloutStrategy.DeepCopyInto(&out.ConfigurationRolloutStrategy) + in.EligibleNodesConflictResolutionStrategy.DeepCopyInto(&out.EligibleNodesConflictResolutionStrategy) + out.EligibleNodesPolicy = in.EligibleNodesPolicy +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassSpec. +func (in *ReplicatedStorageClassSpec) DeepCopy() *ReplicatedStorageClassSpec { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassStatus) DeepCopyInto(out *ReplicatedStorageClassStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Configuration != nil { + in, out := &in.Configuration, &out.Configuration + *out = new(ReplicatedStorageClassConfiguration) + **out = **in + } + in.Volumes.DeepCopyInto(&out.Volumes) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassStatus. +func (in *ReplicatedStorageClassStatus) DeepCopy() *ReplicatedStorageClassStatus { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStorageClassStorage) DeepCopyInto(out *ReplicatedStorageClassStorage) { + *out = *in + if in.LVMVolumeGroups != nil { + in, out := &in.LVMVolumeGroups, &out.LVMVolumeGroups + *out = make([]ReplicatedStoragePoolLVMVolumeGroups, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassStorage. +func (in *ReplicatedStorageClassStorage) DeepCopy() *ReplicatedStorageClassStorage { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassStorage) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ReplicatedStorageClassVolumesSummary) DeepCopyInto(out *ReplicatedStorageClassVolumesSummary) { + *out = *in + if in.Total != nil { + in, out := &in.Total, &out.Total + *out = new(int32) + **out = **in + } + if in.PendingObservation != nil { + in, out := &in.PendingObservation, &out.PendingObservation + *out = new(int32) + **out = **in + } + if in.Aligned != nil { + in, out := &in.Aligned, &out.Aligned + *out = new(int32) + **out = **in + } + if in.InConflictWithEligibleNodes != nil { + in, out := &in.InConflictWithEligibleNodes, &out.InConflictWithEligibleNodes + *out = new(int32) + **out = **in + } + if in.StaleConfiguration != nil { + in, out := &in.StaleConfiguration, &out.StaleConfiguration + *out = new(int32) + **out = **in + } + if in.UsedStoragePoolNames != nil { + in, out := &in.UsedStoragePoolNames, &out.UsedStoragePoolNames + *out = make([]string, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStorageClassVolumesSummary. +func (in *ReplicatedStorageClassVolumesSummary) DeepCopy() *ReplicatedStorageClassVolumesSummary { + if in == nil { + return nil + } + out := new(ReplicatedStorageClassVolumesSummary) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePool) DeepCopyInto(out *ReplicatedStoragePool) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePool. +func (in *ReplicatedStoragePool) DeepCopy() *ReplicatedStoragePool { + if in == nil { + return nil + } + out := new(ReplicatedStoragePool) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedStoragePool) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolEligibleNode) DeepCopyInto(out *ReplicatedStoragePoolEligibleNode) { + *out = *in + if in.LVMVolumeGroups != nil { + in, out := &in.LVMVolumeGroups, &out.LVMVolumeGroups + *out = make([]ReplicatedStoragePoolEligibleNodeLVMVolumeGroup, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolEligibleNode. +func (in *ReplicatedStoragePoolEligibleNode) DeepCopy() *ReplicatedStoragePoolEligibleNode { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolEligibleNode) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolEligibleNodeLVMVolumeGroup) DeepCopyInto(out *ReplicatedStoragePoolEligibleNodeLVMVolumeGroup) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolEligibleNodeLVMVolumeGroup. 
+func (in *ReplicatedStoragePoolEligibleNodeLVMVolumeGroup) DeepCopy() *ReplicatedStoragePoolEligibleNodeLVMVolumeGroup { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolEligibleNodeLVMVolumeGroup) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolEligibleNodesPolicy) DeepCopyInto(out *ReplicatedStoragePoolEligibleNodesPolicy) { + *out = *in + out.NotReadyGracePeriod = in.NotReadyGracePeriod +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolEligibleNodesPolicy. +func (in *ReplicatedStoragePoolEligibleNodesPolicy) DeepCopy() *ReplicatedStoragePoolEligibleNodesPolicy { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolEligibleNodesPolicy) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolLVMVolumeGroups) DeepCopyInto(out *ReplicatedStoragePoolLVMVolumeGroups) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolLVMVolumeGroups. +func (in *ReplicatedStoragePoolLVMVolumeGroups) DeepCopy() *ReplicatedStoragePoolLVMVolumeGroups { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolLVMVolumeGroups) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolList) DeepCopyInto(out *ReplicatedStoragePoolList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]ReplicatedStoragePool, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolList. +func (in *ReplicatedStoragePoolList) DeepCopy() *ReplicatedStoragePoolList { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedStoragePoolList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolSpec) DeepCopyInto(out *ReplicatedStoragePoolSpec) { + *out = *in + if in.LVMVolumeGroups != nil { + in, out := &in.LVMVolumeGroups, &out.LVMVolumeGroups + *out = make([]ReplicatedStoragePoolLVMVolumeGroups, len(*in)) + copy(*out, *in) + } + if in.Zones != nil { + in, out := &in.Zones, &out.Zones + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.NodeLabelSelector != nil { + in, out := &in.NodeLabelSelector, &out.NodeLabelSelector + *out = new(v1.LabelSelector) + (*in).DeepCopyInto(*out) + } + if in.SystemNetworkNames != nil { + in, out := &in.SystemNetworkNames, &out.SystemNetworkNames + *out = make([]string, len(*in)) + copy(*out, *in) + } + out.EligibleNodesPolicy = in.EligibleNodesPolicy +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolSpec. 
+func (in *ReplicatedStoragePoolSpec) DeepCopy() *ReplicatedStoragePoolSpec { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolStatus) DeepCopyInto(out *ReplicatedStoragePoolStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.EligibleNodes != nil { + in, out := &in.EligibleNodes, &out.EligibleNodes + *out = make([]ReplicatedStoragePoolEligibleNode, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + in.UsedBy.DeepCopyInto(&out.UsedBy) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolStatus. +func (in *ReplicatedStoragePoolStatus) DeepCopy() *ReplicatedStoragePoolStatus { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedStoragePoolUsedBy) DeepCopyInto(out *ReplicatedStoragePoolUsedBy) { + *out = *in + if in.ReplicatedStorageClassNames != nil { + in, out := &in.ReplicatedStorageClassNames, &out.ReplicatedStorageClassNames + *out = make([]string, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedStoragePoolUsedBy. +func (in *ReplicatedStoragePoolUsedBy) DeepCopy() *ReplicatedStoragePoolUsedBy { + if in == nil { + return nil + } + out := new(ReplicatedStoragePoolUsedBy) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolume) DeepCopyInto(out *ReplicatedVolume) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolume. +func (in *ReplicatedVolume) DeepCopy() *ReplicatedVolume { + if in == nil { + return nil + } + out := new(ReplicatedVolume) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedVolume) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeAttachment) DeepCopyInto(out *ReplicatedVolumeAttachment) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + out.Spec = in.Spec + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeAttachment. +func (in *ReplicatedVolumeAttachment) DeepCopy() *ReplicatedVolumeAttachment { + if in == nil { + return nil + } + out := new(ReplicatedVolumeAttachment) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. 
+func (in *ReplicatedVolumeAttachment) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeAttachmentList) DeepCopyInto(out *ReplicatedVolumeAttachmentList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]ReplicatedVolumeAttachment, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeAttachmentList. +func (in *ReplicatedVolumeAttachmentList) DeepCopy() *ReplicatedVolumeAttachmentList { + if in == nil { + return nil + } + out := new(ReplicatedVolumeAttachmentList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedVolumeAttachmentList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeAttachmentSpec) DeepCopyInto(out *ReplicatedVolumeAttachmentSpec) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeAttachmentSpec. +func (in *ReplicatedVolumeAttachmentSpec) DeepCopy() *ReplicatedVolumeAttachmentSpec { + if in == nil { + return nil + } + out := new(ReplicatedVolumeAttachmentSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeAttachmentStatus) DeepCopyInto(out *ReplicatedVolumeAttachmentStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeAttachmentStatus. +func (in *ReplicatedVolumeAttachmentStatus) DeepCopy() *ReplicatedVolumeAttachmentStatus { + if in == nil { + return nil + } + out := new(ReplicatedVolumeAttachmentStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeDatamesh) DeepCopyInto(out *ReplicatedVolumeDatamesh) { + *out = *in + if in.SystemNetworkNames != nil { + in, out := &in.SystemNetworkNames, &out.SystemNetworkNames + *out = make([]string, len(*in)) + copy(*out, *in) + } + out.Size = in.Size.DeepCopy() + if in.Members != nil { + in, out := &in.Members, &out.Members + *out = make([]ReplicatedVolumeDatameshMember, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeDatamesh. +func (in *ReplicatedVolumeDatamesh) DeepCopy() *ReplicatedVolumeDatamesh { + if in == nil { + return nil + } + out := new(ReplicatedVolumeDatamesh) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ReplicatedVolumeDatameshMember) DeepCopyInto(out *ReplicatedVolumeDatameshMember) { + *out = *in + if in.Addresses != nil { + in, out := &in.Addresses, &out.Addresses + *out = make([]DRBDResourceAddressStatus, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeDatameshMember. +func (in *ReplicatedVolumeDatameshMember) DeepCopy() *ReplicatedVolumeDatameshMember { + if in == nil { + return nil + } + out := new(ReplicatedVolumeDatameshMember) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeEligibleNodesViolation) DeepCopyInto(out *ReplicatedVolumeEligibleNodesViolation) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeEligibleNodesViolation. +func (in *ReplicatedVolumeEligibleNodesViolation) DeepCopy() *ReplicatedVolumeEligibleNodesViolation { + if in == nil { + return nil + } + out := new(ReplicatedVolumeEligibleNodesViolation) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeList) DeepCopyInto(out *ReplicatedVolumeList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]ReplicatedVolume, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeList. +func (in *ReplicatedVolumeList) DeepCopy() *ReplicatedVolumeList { + if in == nil { + return nil + } + out := new(ReplicatedVolumeList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedVolumeList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeReplica) DeepCopyInto(out *ReplicatedVolumeReplica) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + out.Spec = in.Spec + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeReplica. +func (in *ReplicatedVolumeReplica) DeepCopy() *ReplicatedVolumeReplica { + if in == nil { + return nil + } + out := new(ReplicatedVolumeReplica) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedVolumeReplica) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ReplicatedVolumeReplicaList) DeepCopyInto(out *ReplicatedVolumeReplicaList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]ReplicatedVolumeReplica, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeReplicaList. +func (in *ReplicatedVolumeReplicaList) DeepCopy() *ReplicatedVolumeReplicaList { + if in == nil { + return nil + } + out := new(ReplicatedVolumeReplicaList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ReplicatedVolumeReplicaList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeReplicaSpec) DeepCopyInto(out *ReplicatedVolumeReplicaSpec) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeReplicaSpec. +func (in *ReplicatedVolumeReplicaSpec) DeepCopy() *ReplicatedVolumeReplicaSpec { + if in == nil { + return nil + } + out := new(ReplicatedVolumeReplicaSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeReplicaStatus) DeepCopyInto(out *ReplicatedVolumeReplicaStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.DRBD != nil { + in, out := &in.DRBD, &out.DRBD + *out = new(DRBD) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeReplicaStatus. +func (in *ReplicatedVolumeReplicaStatus) DeepCopy() *ReplicatedVolumeReplicaStatus { + if in == nil { + return nil + } + out := new(ReplicatedVolumeReplicaStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ReplicatedVolumeSpec) DeepCopyInto(out *ReplicatedVolumeSpec) { + *out = *in + out.Size = in.Size.DeepCopy() +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeSpec. +func (in *ReplicatedVolumeSpec) DeepCopy() *ReplicatedVolumeSpec { + if in == nil { + return nil + } + out := new(ReplicatedVolumeSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ReplicatedVolumeStatus) DeepCopyInto(out *ReplicatedVolumeStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.DRBD != nil { + in, out := &in.DRBD, &out.DRBD + *out = new(DRBDResourceDetails) + (*in).DeepCopyInto(*out) + } + if in.ActuallyAttachedTo != nil { + in, out := &in.ActuallyAttachedTo, &out.ActuallyAttachedTo + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.DesiredAttachTo != nil { + in, out := &in.DesiredAttachTo, &out.DesiredAttachTo + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.Configuration != nil { + in, out := &in.Configuration, &out.Configuration + *out = new(ReplicatedStorageClassConfiguration) + **out = **in + } + if in.EligibleNodesViolations != nil { + in, out := &in.EligibleNodesViolations, &out.EligibleNodesViolations + *out = make([]ReplicatedVolumeEligibleNodesViolation, len(*in)) + copy(*out, *in) + } + in.Datamesh.DeepCopyInto(&out.Datamesh) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicatedVolumeStatus. +func (in *ReplicatedVolumeStatus) DeepCopy() *ReplicatedVolumeStatus { + if in == nil { + return nil + } + out := new(ReplicatedVolumeStatus) + in.DeepCopyInto(out) + return out +} diff --git a/crds/replicatedstorageclass.yaml b/crds/replicatedstorageclass.yaml deleted file mode 100644 index 1fc99ced3..000000000 --- a/crds/replicatedstorageclass.yaml +++ /dev/null @@ -1,176 +0,0 @@ -apiVersion: apiextensions.k8s.io/v1 -kind: CustomResourceDefinition -metadata: - name: replicatedstorageclasses.storage.deckhouse.io - labels: - heritage: deckhouse - module: sds-replicated-volume - backup.deckhouse.io/cluster-config: "true" -spec: - group: storage.deckhouse.io - scope: Cluster - names: - plural: replicatedstorageclasses - singular: replicatedstorageclass - kind: ReplicatedStorageClass - shortNames: - - rsc - preserveUnknownFields: false - versions: - - name: v1alpha1 - served: true - storage: true - schema: - openAPIV3Schema: - type: object - description: | - ReplicatedStorageClass is a Kubernetes Custom Resource that defines a configuration for a Kubernetes Storage class. - required: - - spec - properties: - spec: - x-kubernetes-validations: - - rule: '(has(self.replication) && self.replication == "None") || ((!has(self.replication) || self.replication == "Availability" || self.replication == "ConsistencyAndAvailability") && (!has(self.zones) || size(self.zones) == 0 || size(self.zones) == 1 || size(self.zones) == 3))' - message: 'When "replication" is not set or is set to "Availability" or "ConsistencyAndAvailability" (default value), "zones" must be either not specified, or must contain exactly three zones.' - - message: zones field cannot be deleted or added - rule: (has(self.zones) && has(oldSelf.zones)) || (!has(self.zones) && !has(oldSelf.zones)) - - message: replication filed cannot be deleted or added - rule: (has(self.replication) && has(oldSelf.replication)) || (!has(self.replication) && !has(oldSelf.replication)) - - message: volumeAccess filed cannot be deleted or added - rule: (has(self.volumeAccess) && has(oldSelf.volumeAccess)) || (!has(self.volumeAccess) && !has(oldSelf.volumeAccess)) - type: object - description: | - Defines a Kubernetes Storage class configuration. - - > Note that this field is in read-only mode. 
- required: - - storagePool - - reclaimPolicy - - topology - properties: - storagePool: - type: string - x-kubernetes-validations: - - rule: self == oldSelf - message: Value is immutable. - description: | - Selected ReplicatedStoragePool resource's name. - reclaimPolicy: - type: string - x-kubernetes-validations: - - rule: self == oldSelf - message: Value is immutable. - description: | - The storage class's reclaim policy. Might be: - - Delete (If the Persistent Volume Claim is deleted, deletes the Persistent Volume and its associated storage as well) - - Retain (If the Persistent Volume Claim is deleted, remains the Persistent Volume and its associated storage) - enum: - - Delete - - Retain - replication: - type: string - x-kubernetes-validations: - - rule: self == oldSelf - message: Value is immutable. - description: | - The Storage class's replication mode. Might be: - - None — In this mode the Storage class's 'placementCount' and 'AutoEvictMinReplicaCount' params equal '1'. - - Availability — In this mode the volume remains readable and writable even if one of the replica nodes becomes unavailable. Data is stored in two copies on different nodes. This corresponds to `placementCount = 2` and `AutoEvictMinReplicaCount = 2`. **Important:** this mode does not guarantee data consistency and may lead to split brain and data loss in case of network connectivity issues between nodes. Recommended only for non-critical data and applications that do not require high reliability and data integrity. - - ConsistencyAndAvailability — In this mode the volume remains readable and writable when one replica node fails. Data is stored in three copies on different nodes (`placementCount = 3`, `AutoEvictMinReplicaCount = 3`). This mode provides protection against data loss when two nodes containing volume replicas fail and guarantees data consistency. However, if two replicas are lost, the volume switches to suspend-io mode. - - > Note that default Replication mode is 'ConsistencyAndAvailability'. - enum: - - None - - Availability - - ConsistencyAndAvailability - default: "ConsistencyAndAvailability" - volumeAccess: - type: string - x-kubernetes-validations: - - rule: self == oldSelf - message: Value is immutable. - description: | - The Storage class's access mode. Might be: - - Local (in this mode the Storage class's 'allowRemoteVolumeAccess' param equals 'false' - and Volume Binding mode equals 'WaitForFirstConsumer') - - EventuallyLocal (in this mode the Storage class's 'allowRemoteVolumeAccess' param - equals '- fromSame:\n - topology.kubernetes.io/zone', 'auto-diskful' param equals '30' minutes, - 'auto-diskful-allow-cleanup' param equals 'true', - and Volume Binding mode equals 'WaitForFirstConsumer') - - PreferablyLocal (in this mode the Storage class's 'allowRemoteVolumeAccess' param - equals '- fromSame:\n - topology.kubernetes.io/zone', - and Volume Binding mode equals 'WaitForFirstConsumer') - - Any (in this mode the Storage class's 'allowRemoteVolumeAccess' param - equals '- fromSame:\n - topology.kubernetes.io/zone', - and Volume Binding mode equals 'Immediate') - - > Note that the default Volume Access mode is 'PreferablyLocal'. - enum: - - Local - - EventuallyLocal - - PreferablyLocal - - Any - default: "PreferablyLocal" - topology: - type: string - x-kubernetes-validations: - - rule: self == oldSelf - message: Value is immutable. - description: | - The topology settings for the volumes in the created Storage class. 
Might be: - - TransZonal - replicas of the volumes will be created in different zones (one replica per zone). - To use this topology, the available zones must be specified in the 'zones' param, and the cluster nodes must have the topology.kubernetes.io/zone= label. - - Zonal - all replicas of the volumes are created in the same zone that the scheduler selected to place the pod using this volume. - - Ignored - the topology information will not be used to place replicas of the volumes. - The replicas can be placed on any available nodes, with the restriction: no more than one replica of a given volume on one node. - - > Note that the 'Ignored' value can be used only if there are no zones in the cluster (there are no nodes with the topology.kubernetes.io/zone label). - - > For the system to operate correctly, either every cluster node must be labeled with 'topology.kubernetes.io/zone', or none of them should have this label. - enum: - - TransZonal - - Zonal - - Ignored - zones: - type: array - x-kubernetes-validations: - - rule: self == oldSelf - message: Value is immutable. - description: | - Array of zones the Storage class's volumes should be replicated in. The controller will put a label with - the Storage class's name on the nodes which be actual used by the Storage class. - - > Note that for Replication mode 'Availability' and 'ConsistencyAndAvailability' you have to select - exactly 1 or 3 zones. - items: - type: string - status: - type: object - description: | - Displays current information about the Storage Class. - properties: - phase: - type: string - description: | - The Storage class current state. Might be: - - Failed (if the controller received incorrect resource configuration or some errors occurred during the operation) - - Create (if everything went fine) - enum: - - Failed - - Created - reason: - type: string - description: | - Additional information about the current state of the Storage Class. - additionalPrinterColumns: - - jsonPath: .status.phase - name: Phase - type: string - - jsonPath: .status.reason - name: Reason - type: string - priority: 1 - - jsonPath: .metadata.creationTimestamp - name: Age - type: date - description: The age of this resource diff --git a/crds/replicatedstoragepool.yaml b/crds/replicatedstoragepool.yaml deleted file mode 100644 index 8ee9b7ce1..000000000 --- a/crds/replicatedstoragepool.yaml +++ /dev/null @@ -1,107 +0,0 @@ -apiVersion: apiextensions.k8s.io/v1 -kind: CustomResourceDefinition -metadata: - name: replicatedstoragepools.storage.deckhouse.io - labels: - heritage: deckhouse - module: sds-replicated-volume - backup.deckhouse.io/cluster-config: "true" -spec: - group: storage.deckhouse.io - scope: Cluster - names: - plural: replicatedstoragepools - singular: replicatedstoragepool - kind: ReplicatedStoragePool - shortNames: - - rsp - versions: - - name: v1alpha1 - served: true - storage: true - schema: - openAPIV3Schema: - type: object - description: | - ReplicatedStoragePool is a Kubernetes Custom Resource that defines a configuration for Linstor Storage-pools. - required: - - spec - properties: - spec: - type: object - description: | - Defines desired rules for Linstor's Storage-pools. - required: - - type - - lvmVolumeGroups - properties: - type: - type: string - description: | - Defines the volumes type. Might be: - - LVM (for Thick) - - LVMThin (for Thin) - enum: - - LVM - - LVMThin - x-kubernetes-validations: - - rule: self == oldSelf - message: Value is immutable. 
- lvmVolumeGroups: - type: array - description: | - An array of names of LVMVolumeGroup resources, whose Volume Groups/Thin-pools will be used to allocate - the required space. - - > Note that every LVMVolumeGroup resource has to have the same type Thin/Thick - as it is in current resource's 'Spec.Type' field. - items: - type: object - required: - - name - properties: - name: - type: string - description: | - Selected LVMVolumeGroup resource's name. - minLength: 1 - pattern: '^[a-z0-9]([a-z0-9-.]{0,251}[a-z0-9])?$' - thinPoolName: - type: string - description: | - Selected Thin-pool name. - status: - type: object - description: | - Displays current information about the state of the LINSTOR storage pool. - properties: - phase: - type: string - description: | - The actual ReplicatedStoragePool resource's state. Might be: - - Completed (if the controller received correct resource configuration and Linstor Storage-pools configuration is up-to-date) - - Updating (if the controller received correct resource configuration and Linstor Storage-pools configuration needs to be updated) - - Failed (if the controller received incorrect resource configuration or an error occurs during the operation) - enum: - - Updating - - Failed - - Completed - reason: - type: string - description: | - The additional information about the resource's current state. - additionalPrinterColumns: - - jsonPath: .status.phase - name: Phase - type: string - - jsonPath: .spec.type - name: Type - type: string - - jsonPath: .status.reason - name: Reason - type: string - priority: 1 - - jsonPath: .metadata.creationTimestamp - name: Age - type: date - description: The age of this resource diff --git a/crds/storage.deckhouse.io_drbdnodeoperations.yaml b/crds/storage.deckhouse.io_drbdnodeoperations.yaml new file mode 100644 index 000000000..9345151be --- /dev/null +++ b/crds/storage.deckhouse.io_drbdnodeoperations.yaml @@ -0,0 +1,99 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + module: sds-replicated-volume + name: drbdnodeoperations.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: DRBDNodeOperation + listKind: DRBDNodeOperationList + plural: drbdnodeoperations + shortNames: + - dno + singular: drbdnodeoperation + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .spec.nodeName + name: Node + type: string + - jsonPath: .spec.type + name: Type + type: string + - jsonPath: .status.phase + name: Phase + type: string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + properties: + nodeName: + maxLength: 253 + minLength: 1 + type: string + x-kubernetes-validations: + - message: nodeName is immutable + rule: self == oldSelf + type: + description: DRBDNodeOperationType represents the type of operation + to perform on a DRBD node. + enum: + - UpdateDRBD + type: string + x-kubernetes-validations: + - message: type is immutable + rule: self == oldSelf + required: + - nodeName + - type + type: object + status: + properties: + completedAt: + format: date-time + type: string + message: + maxLength: 1024 + type: string + phase: + description: DRBDOperationPhase represents the phase of a DRBD operation. + type: string + startedAt: + format: date-time + type: string + type: object + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} diff --git a/crds/storage.deckhouse.io_drbdresourceoperations.yaml b/crds/storage.deckhouse.io_drbdresourceoperations.yaml new file mode 100644 index 000000000..618122301 --- /dev/null +++ b/crds/storage.deckhouse.io_drbdresourceoperations.yaml @@ -0,0 +1,116 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + module: sds-replicated-volume + name: drbdresourceoperations.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: DRBDResourceOperation + listKind: DRBDResourceOperationList + plural: drbdresourceoperations + shortNames: + - dro + singular: drbdresourceoperation + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .spec.drbdResourceName + name: Resource + type: string + - jsonPath: .spec.type + name: Type + type: string + - jsonPath: .status.phase + name: Phase + type: string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + properties: + createNewUUID: + description: Parameters for CreateNewUUID operation. Immutable once + set. + properties: + clearBitmap: + default: false + type: boolean + type: object + x-kubernetes-validations: + - message: createNewUUID is immutable + rule: self == oldSelf + drbdResourceName: + maxLength: 253 + minLength: 1 + pattern: ^[0-9A-Za-z.+_-]*$ + type: string + x-kubernetes-validations: + - message: drbdResourceName is immutable + rule: self == oldSelf + type: + description: DRBDResourceOperationType represents the type of operation + to perform on a DRBD resource. 
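For reference, the DRBDNodeOperation schema that closed just above accepts a manifest as small as the following sketch (the object and node names are illustrative, not taken from this change):

apiVersion: storage.deckhouse.io/v1alpha1
kind: DRBDNodeOperation
metadata:
  name: update-drbd-worker-0   # illustrative name
spec:
  nodeName: worker-0           # illustrative node name; immutable once set
  type: UpdateDRBD             # currently the only value allowed by the enum

Both spec fields are required and guarded by self == oldSelf rules, so repeating an operation would mean creating a new object rather than mutating this one.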
+ enum: + - CreateNewUUID + - ForcePrimary + - Invalidate + - Outdate + - Verify + - CreateSnapshot + type: string + x-kubernetes-validations: + - message: type is immutable + rule: self == oldSelf + required: + - drbdResourceName + - type + type: object + status: + properties: + completedAt: + format: date-time + type: string + message: + maxLength: 1024 + type: string + phase: + description: DRBDOperationPhase represents the phase of a DRBD operation. + type: string + startedAt: + format: date-time + type: string + type: object + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} diff --git a/crds/storage.deckhouse.io_drbdresources.yaml b/crds/storage.deckhouse.io_drbdresources.yaml new file mode 100644 index 000000000..2d2046ea5 --- /dev/null +++ b/crds/storage.deckhouse.io_drbdresources.yaml @@ -0,0 +1,388 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + module: sds-replicated-volume + name: drbdresources.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: DRBDResource + listKind: DRBDResourceList + plural: drbdresources + shortNames: + - dr + singular: drbdresource + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .spec.nodeName + name: Node + type: string + - jsonPath: .spec.state + name: State + type: string + - jsonPath: .status.role + name: Role + type: string + - jsonPath: .spec.type + name: Type + type: string + - jsonPath: .status.diskState + name: DiskState + type: string + - jsonPath: .status.quorum + name: Quorum + type: boolean + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + properties: + allowTwoPrimaries: + default: false + type: boolean + lvmLogicalVolumeName: + description: Required when type is Diskful, must be empty when type + is Diskless. + maxLength: 128 + minLength: 1 + type: string + maintenance: + description: Maintenance mode - when set, reconciliation is paused + but status is still updated + enum: + - NoResourceReconciliation + type: string + nodeID: + maximum: 31 + minimum: 0 + type: integer + x-kubernetes-validations: + - message: nodeID is immutable + rule: self == oldSelf + nodeName: + maxLength: 253 + minLength: 1 + type: string + x-kubernetes-validations: + - message: nodeName is immutable + rule: self == oldSelf + peers: + items: + properties: + allowRemoteRead: + default: true + type: boolean + name: + description: Peer node name. Immutable, used as list map key. 
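Similarly, a minimal DRBDResourceOperation for the operation schema completed above might look like this sketch (the resource name is illustrative):

apiVersion: storage.deckhouse.io/v1alpha1
kind: DRBDResourceOperation
metadata:
  name: verify-pvc-0000        # illustrative name
spec:
  drbdResourceName: pvc-0000   # illustrative DRBD resource name; immutable
  type: Verify                 # one of the operation types from the enum above

The optional createNewUUID block is, per its description, only meaningful for the CreateNewUUID operation type.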
+ maxLength: 253 + minLength: 1 + pattern: ^[0-9A-Za-z.+_-]*$ + type: string + nodeID: + maximum: 31 + minimum: 0 + type: integer + paths: + items: + properties: + address: + properties: + ipv4: + pattern: ^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$ + type: string + port: + maximum: 65535 + minimum: 1025 + type: integer + required: + - ipv4 + - port + type: object + systemNetworkName: + description: System network name. Immutable, used as list + map key. + maxLength: 64 + minLength: 1 + type: string + required: + - address + - systemNetworkName + type: object + maxItems: 16 + minItems: 1 + type: array + x-kubernetes-list-map-keys: + - systemNetworkName + x-kubernetes-list-type: map + pauseSync: + default: false + type: boolean + protocol: + default: C + description: DRBDProtocol represents the DRBD replication protocol. + enum: + - A + - B + - C + type: string + x-kubernetes-validations: + - message: protocol is immutable + rule: self == oldSelf + sharedSecret: + maxLength: 256 + type: string + sharedSecretAlg: + enum: + - SHA256 + - SHA1 + - DummyForTest + type: string + type: + default: Diskful + description: DRBDResourceType represents the type of a DRBD + resource. + enum: + - Diskful + - Diskless + type: string + required: + - name + - nodeID + - paths + type: object + maxItems: 31 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + quorum: + maximum: 31 + minimum: 0 + type: integer + quorumMinimumRedundancy: + maximum: 31 + minimum: 0 + type: integer + role: + description: DRBDRole represents the role of a DRBD resource. + enum: + - Primary + - Secondary + type: string + size: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + state: + description: DRBDResourceState represents the desired state of a DRBD + resource. + enum: + - Up + - Down + type: string + systemNetworks: + items: + maxLength: 64 + type: string + maxItems: 16 + minItems: 1 + type: array + type: + default: Diskful + description: DRBDResourceType represents the type of a DRBD resource. + enum: + - Diskful + - Diskless + type: string + required: + - nodeID + - nodeName + - size + - systemNetworks + type: object + status: + properties: + activeConfiguration: + properties: + allowTwoPrimaries: + type: boolean + disk: + description: Disk path, e.g. /dev/... + maxLength: 256 + type: string + quorum: + type: integer + quorumMinimumRedundancy: + type: integer + role: + description: DRBDRole represents the role of a DRBD resource. + enum: + - Primary + - Secondary + type: string + size: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + state: + description: DRBDResourceState represents the desired state of + a DRBD resource. + enum: + - Up + - Down + type: string + type: + description: DRBDResourceType represents the type of a DRBD resource. 
+ enum: + - Diskful + - Diskless + type: string + type: object + addresses: + items: + properties: + address: + properties: + ipv4: + pattern: ^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$ + type: string + port: + maximum: 65535 + minimum: 1025 + type: integer + required: + - ipv4 + - port + type: object + systemNetworkName: + maxLength: 64 + type: string + required: + - address + - systemNetworkName + type: object + maxItems: 32 + type: array + device: + description: |- + Device path, e.g. /dev/drbd10012 or /dev/sds-replicated/ + Only present on primary + maxLength: 256 + type: string + diskState: + type: string + peers: + items: + properties: + connectionState: + type: string + diskState: + type: string + name: + maxLength: 253 + minLength: 1 + type: string + nodeID: + maximum: 31 + minimum: 0 + type: integer + paths: + items: + properties: + address: + properties: + ipv4: + pattern: ^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$ + type: string + port: + maximum: 65535 + minimum: 1025 + type: integer + required: + - ipv4 + - port + type: object + established: + type: boolean + systemNetworkName: + maxLength: 64 + type: string + required: + - address + - systemNetworkName + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - systemNetworkName + x-kubernetes-list-type: map + type: + description: DRBDResourceType represents the type of a DRBD + resource. + enum: + - Diskful + - Diskless + type: string + required: + - name + type: object + maxItems: 31 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + quorum: + type: boolean + role: + description: DRBDRole represents the role of a DRBD resource. + enum: + - Primary + - Secondary + type: string + type: object + required: + - metadata + - spec + type: object + x-kubernetes-validations: + - message: lvmLogicalVolumeName is required when type is Diskful and must + be empty when type is Diskless + rule: 'self.spec.type == ''Diskful'' ? 
has(self.spec.lvmLogicalVolumeName) + && self.spec.lvmLogicalVolumeName != ” : !has(self.spec.lvmLogicalVolumeName) + || self.spec.lvmLogicalVolumeName == ”' + - message: spec.size cannot be decreased + rule: '!has(oldSelf.spec.size) || self.spec.size >= oldSelf.spec.size' + served: true + storage: true + subresources: + status: {} diff --git a/crds/storage.deckhouse.io_replicatedstorageclasses.yaml b/crds/storage.deckhouse.io_replicatedstorageclasses.yaml new file mode 100644 index 000000000..93951964f --- /dev/null +++ b/crds/storage.deckhouse.io_replicatedstorageclasses.yaml @@ -0,0 +1,541 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + backup.deckhouse.io/cluster-config: "true" + heritage: deckhouse + module: sds-replicated-volume + name: replicatedstorageclasses.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: ReplicatedStorageClass + listKind: ReplicatedStorageClassList + plural: replicatedstorageclasses + shortNames: + - rsc + singular: replicatedstorageclass + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + - jsonPath: .status.reason + name: Reason + priority: 1 + type: string + - description: The age of this resource + jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + description: ReplicatedStorageClass is a Kubernetes Custom Resource that defines + a configuration for a Kubernetes Storage class. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: |- + Defines a Kubernetes Storage class configuration. + + > Note that this field is in read-only mode. + properties: + configurationRolloutStrategy: + description: |- + ConfigurationRolloutStrategy defines how configuration changes are applied to existing volumes. + Always present with defaults. + properties: + rollingUpdate: + description: |- + RollingUpdate configures parameters for RollingUpdate strategy. + Required when type is RollingUpdate. + properties: + maxParallel: + default: 5 + description: MaxParallel is the maximum number of volumes + being rolled out simultaneously. + format: int32 + maximum: 200 + minimum: 1 + type: integer + required: + - maxParallel + type: object + type: + default: RollingUpdate + description: Type specifies the rollout strategy type. 
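Stepping back to the DRBDResource schema that closed earlier in this hunk: a minimal Diskful resource that appears to satisfy its cross-field rule could look like the following sketch (all names and the size are illustrative):

apiVersion: storage.deckhouse.io/v1alpha1
kind: DRBDResource
metadata:
  name: pvc-0000-worker-0          # illustrative name
spec:
  nodeName: worker-0               # illustrative node name; immutable
  nodeID: 0                        # 0..31; immutable
  type: Diskful                    # Diskless would instead forbid lvmLogicalVolumeName
  lvmLogicalVolumeName: pvc-0000   # required because type is Diskful
  size: 1Gi
  systemNetworks:
    - Internal

The transition rule on spec.size only permits growth, matching the "cannot be decreased" message above.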
+ enum: + - RollingUpdate + - NewVolumesOnly + type: string + type: object + x-kubernetes-validations: + - message: rollingUpdate is required when type is RollingUpdate + rule: self.type != 'RollingUpdate' || has(self.rollingUpdate) + - message: rollingUpdate must not be set when type is not RollingUpdate + rule: self.type == 'RollingUpdate' || !has(self.rollingUpdate) + eligibleNodesConflictResolutionStrategy: + description: |- + EligibleNodesConflictResolutionStrategy defines how the controller handles volumes with eligible nodes conflicts. + Always present with defaults. + properties: + rollingRepair: + description: |- + RollingRepair configures parameters for RollingRepair conflict resolution strategy. + Required when type is RollingRepair. + properties: + maxParallel: + default: 5 + description: MaxParallel is the maximum number of volumes + being repaired simultaneously. + format: int32 + maximum: 200 + minimum: 1 + type: integer + required: + - maxParallel + type: object + type: + default: RollingRepair + description: Type specifies the conflict resolution strategy type. + enum: + - Manual + - RollingRepair + type: string + type: object + x-kubernetes-validations: + - message: rollingRepair is required when type is RollingRepair + rule: self.type != 'RollingRepair' || has(self.rollingRepair) + - message: rollingRepair must not be set when type is not RollingRepair + rule: self.type == 'RollingRepair' || !has(self.rollingRepair) + eligibleNodesPolicy: + description: |- + EligibleNodesPolicy defines policies for managing eligible nodes. + Always present with defaults. + properties: + notReadyGracePeriod: + default: 10m + description: |- + NotReadyGracePeriod specifies how long to wait before removing + a not-ready node from the eligible nodes list. + type: string + required: + - notReadyGracePeriod + type: object + nodeLabelSelector: + description: |- + NodeLabelSelector filters nodes eligible for DRBD participation. + Only nodes matching this selector can store data, provide access, or host tiebreaker. + If not specified, all nodes are candidates. + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. 
+ type: object + type: object + x-kubernetes-map-type: atomic + reclaimPolicy: + description: |- + The storage class's reclaim policy. Might be: + - Delete (If the Persistent Volume Claim is deleted, deletes the Persistent Volume and its associated storage as well) + - Retain (If the Persistent Volume Claim is deleted, remains the Persistent Volume and its associated storage) + enum: + - Delete + - Retain + type: string + replication: + default: ConsistencyAndAvailability + description: |- + The Storage class's replication mode. Might be: + - None — In this mode the Storage class's 'placementCount' and 'AutoEvictMinReplicaCount' params equal '1'. + Requires topology to be 'Ignored' (no replicas to distribute across zones). + - Availability — In this mode the volume remains readable and writable even if one of the replica nodes becomes unavailable. Data is stored in two copies on different nodes. This corresponds to `placementCount = 2` and `AutoEvictMinReplicaCount = 2`. **Important:** this mode does not guarantee data consistency and may lead to split brain and data loss in case of network connectivity issues between nodes. Recommended only for non-critical data and applications that do not require high reliability and data integrity. + - ConsistencyAndAvailability — In this mode the volume remains readable and writable when one replica node fails. Data is stored in three copies on different nodes (`placementCount = 3`, `AutoEvictMinReplicaCount = 3`). This mode provides protection against data loss when two nodes containing volume replicas fail and guarantees data consistency. However, if two replicas are lost, the volume switches to suspend-io mode. + + > Note that default Replication mode is 'ConsistencyAndAvailability'. + enum: + - None + - Availability + - Consistency + - ConsistencyAndAvailability + type: string + storage: + description: |- + Storage defines the storage backend configuration for this storage class. + Specifies the type of volumes (LVM or LVMThin) and which LVMVolumeGroups + will be used to allocate space for volumes. + properties: + lvmVolumeGroups: + description: |- + LVMVolumeGroups is an array of LVMVolumeGroup resource names whose Volume Groups/Thin-pools + will be used to allocate the required space. + + > Note that every LVMVolumeGroup resource must have the same type (Thin/Thick) + as specified in the Type field. + items: + properties: + name: + description: Selected LVMVolumeGroup resource's name. + minLength: 1 + pattern: ^[a-z0-9]([a-z0-9-.]{0,251}[a-z0-9])?$ + type: string + thinPoolName: + description: Selected Thin-pool name. + maxLength: 128 + minLength: 1 + pattern: ^[a-zA-Z0-9][a-zA-Z0-9_.+-]*$ + type: string + required: + - name + type: object + minItems: 1 + type: array + type: + description: |- + Type defines the volumes type. Might be: + - LVM (for Thick) + - LVMThin (for Thin) + enum: + - LVM + - LVMThin + type: string + required: + - lvmVolumeGroups + - type + type: object + x-kubernetes-validations: + - message: thinPoolName is required for each lvmVolumeGroups entry + when type is LVMThin + rule: self.type != 'LVMThin' || self.lvmVolumeGroups.all(g, g.thinPoolName + != ”) + - message: thinPoolName must not be specified when type is LVM + rule: self.type != 'LVM' || self.lvmVolumeGroups.all(g, !has(g.thinPoolName) + || g.thinPoolName == ”) + storagePool: + description: |- + StoragePool is the name of a ReplicatedStoragePool resource. + Deprecated: Use Storage instead. This field cannot be added or changed, only removed. 
+ type: string + x-kubernetes-validations: + - message: StoragePool cannot be added or changed, only removed + rule: '!has(self) || (has(oldSelf) && self == oldSelf)' + systemNetworkNames: + default: + - Internal + description: |- + SystemNetworkNames specifies network names used for DRBD replication traffic. + At least one network name must be specified. Each name is limited to 64 characters. + + Custom network support requires NetworkNode watch implementation in the controller. + items: + type: string + maxItems: 1 + minItems: 1 + type: array + x-kubernetes-validations: + - message: Only 'Internal' network is currently supported + rule: self.all(n, n == 'Internal') + topology: + description: |- + The topology settings for the volumes in the created Storage class. Might be: + - TransZonal — replicas of the volumes will be created in different zones (one replica per zone). + To use this topology, the available zones must be specified in the 'zones' param, and the cluster nodes must have the topology.kubernetes.io/zone= label. + - Zonal — all replicas of the volumes are created in the same zone that the scheduler selected to place the pod using this volume. + - Ignored — the topology information will not be used to place replicas of the volumes. + The replicas can be placed on any available nodes, with the restriction: no more than one replica of a given volume on one node. + Required when replication is 'None'. + + > Note that the 'Ignored' value can be used only if there are no zones in the cluster (there are no nodes with the topology.kubernetes.io/zone label). + + > For the system to operate correctly, either every cluster node must be labeled with 'topology.kubernetes.io/zone', or none of them should have this label. + enum: + - TransZonal + - Zonal + - Ignored + type: string + volumeAccess: + default: PreferablyLocal + description: |- + The Storage class's volume access mode. Defines how pods access the volume. Might be: + - Local — volume is accessed only from the node where a replica resides. Pod scheduling waits for consumer. + - EventuallyLocal — volume can be accessed remotely, but a local replica will be created on the accessing node + after some time. Pod scheduling waits for consumer. + - PreferablyLocal — volume prefers local access but allows remote access if no local replica is available. + Scheduler tries to place pods on nodes with replicas. Pod scheduling waits for consumer. + - Any — volume can be accessed from any node. Most flexible mode with immediate volume binding. + + > Note that the default Volume Access mode is 'PreferablyLocal'. + enum: + - Local + - EventuallyLocal + - PreferablyLocal + - Any + type: string + zones: + description: |- + Array of zones the Storage class's volumes should be replicated in. The controller will put a label with + the Storage class's name on the nodes which will actually be used by the Storage class. + + For TransZonal topology, the number of zones depends on replication mode: + - Availability, ConsistencyAndAvailability: at least 3 zones required + - Consistency: at least 2 zones required + + When replication is 'None' (topology 'Ignored'), zones act as a node constraint + limiting where the single replica can be placed.
+ items: + maxLength: 63 + type: string + maxItems: 10 + type: array + x-kubernetes-list-type: set + required: + - configurationRolloutStrategy + - eligibleNodesConflictResolutionStrategy + - eligibleNodesPolicy + - reclaimPolicy + - storage + - systemNetworkNames + - topology + type: object + x-kubernetes-validations: + - message: Replication None requires topology Ignored (no replicas to + distribute). + rule: '!has(self.replication) || self.replication != ''None'' || self.topology + == ''Ignored''' + - message: TransZonal topology with Availability replication requires + at least 3 zones (if specified). + rule: self.topology != 'TransZonal' || !has(self.replication) || self.replication + != 'Availability' || !has(self.zones) || size(self.zones) == 0 || + size(self.zones) >= 3 + - message: TransZonal topology with Consistency replication requires at + least 2 zones (if specified). + rule: self.topology != 'TransZonal' || !has(self.replication) || self.replication + != 'Consistency' || !has(self.zones) || size(self.zones) == 0 || size(self.zones) + >= 2 + - message: TransZonal topology with ConsistencyAndAvailability replication + (default) requires at least 3 zones (if specified). + rule: self.topology != 'TransZonal' || (has(self.replication) && self.replication + != 'ConsistencyAndAvailability') || !has(self.zones) || size(self.zones) + == 0 || size(self.zones) >= 3 + status: + description: Displays current information about the Storage Class. + properties: + conditions: + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. 
+ maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + configuration: + description: Configuration is the resolved configuration that volumes + should align to. + properties: + replication: + description: Replication is the resolved replication mode. + type: string + storagePoolName: + description: StoragePoolName is the name of the ReplicatedStoragePool + used by this RSC. + type: string + topology: + description: Topology is the resolved topology setting. + type: string + volumeAccess: + description: VolumeAccess is the resolved volume access mode. + type: string + required: + - replication + - storagePoolName + - topology + - volumeAccess + type: object + configurationGeneration: + description: ConfigurationGeneration is the RSC generation when configuration + was accepted. + format: int64 + type: integer + phase: + description: |- + The Storage class current state. Might be: + - Failed (if the controller received incorrect resource configuration or some errors occurred during the operation) + - Create (if everything went fine) + enum: + - Failed + - Created + type: string + reason: + description: Additional information about the current state of the + Storage Class. + type: string + storagePoolBasedOnGeneration: + description: StoragePoolBasedOnGeneration is the RSC generation when + storagePoolName was computed. + format: int64 + type: integer + storagePoolEligibleNodesRevision: + description: StoragePoolEligibleNodesRevision tracks RSP's eligibleNodesRevision + for change detection. + format: int64 + type: integer + storagePoolName: + description: |- + StoragePoolName is the computed name of the ReplicatedStoragePool for this RSC. + Format: auto-rsp-. Multiple RSCs with identical storage parameters + will share the same StoragePoolName. + type: string + volumes: + description: |- + Volumes provides aggregated volume statistics. + Always present (may have total=0). + properties: + aligned: + description: Aligned is the number of volumes whose configuration + matches the storage class. + format: int32 + type: integer + inConflictWithEligibleNodes: + description: InConflictWithEligibleNodes is the number of volumes + with replicas on non-eligible nodes. + format: int32 + type: integer + pendingObservation: + description: PendingObservation is the number of volumes that + haven't observed current RSC configuration or eligible nodes. + format: int32 + type: integer + staleConfiguration: + description: StaleConfiguration is the number of volumes with + outdated configuration. + format: int32 + type: integer + total: + description: Total is the total number of volumes. + format: int32 + type: integer + usedStoragePoolNames: + description: UsedStoragePoolNames is a sorted list of storage + pool names currently used by volumes. 
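Taken as a whole, a minimal ReplicatedStorageClass that appears to satisfy the required fields and CEL rules of the spec above could look like this sketch (all names are illustrative, and the maxParallel and notReadyGracePeriod values simply repeat the schema defaults):

apiVersion: storage.deckhouse.io/v1alpha1
kind: ReplicatedStorageClass
metadata:
  name: example-rsc                  # illustrative name
spec:
  reclaimPolicy: Delete
  replication: ConsistencyAndAvailability
  topology: Zonal
  volumeAccess: PreferablyLocal
  systemNetworkNames:
    - Internal
  configurationRolloutStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxParallel: 5
  eligibleNodesConflictResolutionStrategy:
    type: RollingRepair
    rollingRepair:
      maxParallel: 5
  eligibleNodesPolicy:
    notReadyGracePeriod: 10m
  storage:
    type: LVM
    lvmVolumeGroups:
      - name: vg-data-0              # illustrative LVMVolumeGroup name

With type LVM no thinPoolName is set, which keeps the storage-level CEL rules above satisfied.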
+ items: + type: string + type: array + type: object + required: + - volumes + type: object + required: + - spec + type: object + served: true + storage: true + subresources: {} diff --git a/crds/storage.deckhouse.io_replicatedstoragepools.yaml b/crds/storage.deckhouse.io_replicatedstoragepools.yaml new file mode 100644 index 000000000..c3cfb61a6 --- /dev/null +++ b/crds/storage.deckhouse.io_replicatedstoragepools.yaml @@ -0,0 +1,370 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + backup.deckhouse.io/cluster-config: "true" + heritage: deckhouse + module: sds-replicated-volume + name: replicatedstoragepools.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: ReplicatedStoragePool + listKind: ReplicatedStoragePoolList + plural: replicatedstoragepools + shortNames: + - rsp + singular: replicatedstoragepool + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .spec.type + name: Type + type: string + - jsonPath: .status.conditions[?(@.type=="Ready")].status + name: Ready + type: string + - description: The age of this resource + jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + description: ReplicatedStoragePool is a Kubernetes Custom Resource that defines + a configuration for Linstor Storage-pools. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Defines desired rules for Linstor's Storage-pools. + properties: + eligibleNodesPolicy: + description: |- + EligibleNodesPolicy defines policies for managing eligible nodes. + Always present with defaults. + properties: + notReadyGracePeriod: + default: 10m + description: |- + NotReadyGracePeriod specifies how long to wait before removing + a not-ready node from the eligible nodes list. + type: string + required: + - notReadyGracePeriod + type: object + lvmVolumeGroups: + description: |- + An array of names of LVMVolumeGroup resources, whose Volume Groups/Thin-pools will be used to allocate + the required space. + + > Note that every LVMVolumeGroup resource has to have the same type Thin/Thick + as it is in current resource's 'Spec.Type' field. + items: + properties: + name: + description: Selected LVMVolumeGroup resource's name. + minLength: 1 + pattern: ^[a-z0-9]([a-z0-9-.]{0,251}[a-z0-9])?$ + type: string + thinPoolName: + description: Selected Thin-pool name. + maxLength: 128 + minLength: 1 + pattern: ^[a-zA-Z0-9][a-zA-Z0-9_.+-]*$ + type: string + required: + - name + type: object + minItems: 1 + type: array + x-kubernetes-validations: + - message: Value is immutable. + rule: self == oldSelf + nodeLabelSelector: + description: |- + NodeLabelSelector filters nodes eligible for storage pool participation. 
+ Only nodes matching this selector can store data. + If not specified, all nodes with matching LVMVolumeGroups are candidates. + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: Value is immutable. + rule: self == oldSelf + - message: 'matchExpressions[].operator must be one of: In, NotIn, + Exists, DoesNotExist' + rule: '!has(self.matchExpressions) || self.matchExpressions.all(e, + e.operator in [''In'', ''NotIn'', ''Exists'', ''DoesNotExist''])' + - message: matchExpressions[].values must be empty for Exists/DoesNotExist + operators, non-empty for In/NotIn + rule: '!has(self.matchExpressions) || self.matchExpressions.all(e, + (e.operator in [''Exists'', ''DoesNotExist'']) ? (!has(e.values) + || size(e.values) == 0) : (has(e.values) && size(e.values) > 0))' + systemNetworkNames: + default: + - Internal + description: |- + SystemNetworkNames specifies network names used for DRBD replication traffic. + At least one network name must be specified. Each name is limited to 64 characters. + + Custom network support requires NetworkNode watch implementation in the controller. + items: + type: string + maxItems: 1 + minItems: 1 + type: array + x-kubernetes-validations: + - message: Only 'Internal' network is currently supported + rule: self.all(n, n == 'Internal') + type: + description: |- + Defines the volumes type. Might be: + - LVM (for Thick) + - LVMThin (for Thin) + enum: + - LVM + - LVMThin + type: string + x-kubernetes-validations: + - message: Value is immutable. + rule: self == oldSelf + zones: + description: Array of zones the Storage pool's volumes should be replicated + in. + items: + maxLength: 63 + type: string + maxItems: 10 + type: array + x-kubernetes-list-type: set + x-kubernetes-validations: + - message: Value is immutable. 
+ rule: self == oldSelf + required: + - eligibleNodesPolicy + - lvmVolumeGroups + - systemNetworkNames + - type + type: object + x-kubernetes-validations: + - message: thinPoolName is required for each lvmVolumeGroups entry when + type is LVMThin + rule: self.type != 'LVMThin' || self.lvmVolumeGroups.all(g, g.thinPoolName + != ”) + - message: thinPoolName must not be specified when type is LVM + rule: self.type != 'LVM' || self.lvmVolumeGroups.all(g, !has(g.thinPoolName) + || g.thinPoolName == ”) + status: + description: Displays current information about the state of the LINSTOR + storage pool. + properties: + conditions: + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + eligibleNodes: + description: EligibleNodes lists nodes eligible for this storage pool. + items: + description: ReplicatedStoragePoolEligibleNode represents a node + eligible for placing volumes of this storage pool. + properties: + agentReady: + description: AgentReady indicates whether the sds-replicated-volume + agent on this node is ready. + type: boolean + lvmVolumeGroups: + description: LVMVolumeGroups lists LVM volume groups available + on this node. + items: + description: ReplicatedStoragePoolEligibleNodeLVMVolumeGroup + represents an LVM volume group on an eligible node. + properties: + name: + description: Name is the LVMVolumeGroup resource name. + type: string + ready: + description: Ready indicates whether the LVMVolumeGroup + (and its thin pool, if applicable) is ready. + type: boolean + thinPoolName: + description: ThinPoolName is the thin pool name (for LVMThin + storage pools). 
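For the ReplicatedStoragePool spec defined above, a minimal manifest might look like this sketch (the LVMVolumeGroup and thin-pool names are illustrative):

apiVersion: storage.deckhouse.io/v1alpha1
kind: ReplicatedStoragePool
metadata:
  name: example-rsp          # illustrative name
spec:
  type: LVMThin
  lvmVolumeGroups:
    - name: vg-data-0        # illustrative LVMVolumeGroup name
      thinPoolName: thin-0   # required by the CEL rule because type is LVMThin
  systemNetworkNames:
    - Internal
  eligibleNodesPolicy:
    notReadyGracePeriod: 10m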
+ type: string + unschedulable: + description: Unschedulable indicates whether new volumes + should not use this volume group. + type: boolean + required: + - name + - ready + - unschedulable + type: object + type: array + nodeName: + description: NodeName is the Kubernetes node name. + type: string + nodeReady: + description: NodeReady indicates whether the Kubernetes node + is ready. + type: boolean + unschedulable: + description: Unschedulable indicates whether new volumes should + not be scheduled to this node. + type: boolean + zoneName: + description: ZoneName is the zone this node belongs to. + type: string + required: + - agentReady + - nodeName + - nodeReady + - unschedulable + type: object + type: array + eligibleNodesRevision: + description: EligibleNodesRevision is incremented when eligible nodes + change. + format: int64 + type: integer + phase: + description: Phase is used only by the old controller and will be + removed in a future version. + type: string + reason: + description: Reason is used only by the old controller and will be + removed in a future version. + type: string + usedBy: + description: UsedBy tracks which resources are using this storage + pool. + properties: + replicatedStorageClassNames: + description: ReplicatedStorageClassNames lists RSC names using + this storage pool. + items: + type: string + type: array + x-kubernetes-list-type: set + type: object + type: object + required: + - spec + type: object + served: true + storage: true + subresources: {} diff --git a/crds/storage.deckhouse.io_replicatedvolumeattachments.yaml b/crds/storage.deckhouse.io_replicatedvolumeattachments.yaml new file mode 100644 index 000000000..71ca672ec --- /dev/null +++ b/crds/storage.deckhouse.io_replicatedvolumeattachments.yaml @@ -0,0 +1,169 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + module: sds-replicated-volume + name: replicatedvolumeattachments.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: ReplicatedVolumeAttachment + listKind: ReplicatedVolumeAttachmentList + plural: replicatedvolumeattachments + shortNames: + - rva + singular: replicatedvolumeattachment + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .spec.replicatedVolumeName + name: Volume + type: string + - jsonPath: .spec.nodeName + name: Node + type: string + - jsonPath: .status.phase + name: Phase + type: string + - jsonPath: .status.conditions[?(@.type=='Attached')].status + name: Attached + type: string + - jsonPath: .status.conditions[?(@.type=='ReplicaReady')].status + name: ReplicaReady + type: string + - jsonPath: .status.conditions[?(@.type=='Ready')].status + name: Ready + type: string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + description: |- + ReplicatedVolumeAttachment is a Kubernetes Custom Resource that represents an attachment intent/state + of a ReplicatedVolume to a specific node. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. 
+ Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + properties: + nodeName: + maxLength: 253 + minLength: 1 + type: string + x-kubernetes-validations: + - message: nodeName is immutable + rule: self == oldSelf + replicatedVolumeName: + maxLength: 127 + minLength: 1 + pattern: ^[0-9A-Za-z.+_-]*$ + type: string + x-kubernetes-validations: + - message: replicatedVolumeName is immutable + rule: self == oldSelf + required: + - nodeName + - replicatedVolumeName + type: object + status: + properties: + conditions: + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + phase: + description: ReplicatedVolumeAttachmentPhase enumerates possible values + for ReplicatedVolumeAttachment status.phase field. 
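The ReplicatedVolumeAttachment spec itself is small; a minimal manifest would be roughly the following sketch (names illustrative):

apiVersion: storage.deckhouse.io/v1alpha1
kind: ReplicatedVolumeAttachment
metadata:
  name: pvc-0000-worker-0          # illustrative name
spec:
  replicatedVolumeName: pvc-0000   # illustrative ReplicatedVolume name; immutable
  nodeName: worker-0               # illustrative node name; immutable

The selectableFields entries further below expose both spec fields to field selectors.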
+ enum: + - Pending + - Attaching + - Attached + - Detaching + type: string + type: object + required: + - metadata + - spec + type: object + selectableFields: + - jsonPath: .spec.nodeName + - jsonPath: .spec.replicatedVolumeName + served: true + storage: true + subresources: + status: {} diff --git a/crds/storage.deckhouse.io_replicatedvolumereplicas.yaml b/crds/storage.deckhouse.io_replicatedvolumereplicas.yaml new file mode 100644 index 000000000..898247f15 --- /dev/null +++ b/crds/storage.deckhouse.io_replicatedvolumereplicas.yaml @@ -0,0 +1,425 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + module: sds-replicated-volume + name: replicatedvolumereplicas.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: ReplicatedVolumeReplica + listKind: ReplicatedVolumeReplicaList + plural: replicatedvolumereplicas + shortNames: + - rvr + singular: replicatedvolumereplica + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .spec.replicatedVolumeName + name: Volume + type: string + - jsonPath: .spec.nodeName + name: Node + type: string + - jsonPath: .spec.type + name: Type + type: string + - jsonPath: .status.conditions[?(@.type=='Attached')].status + name: Attached + type: string + - jsonPath: .status.conditions[?(@.type=='Online')].status + name: Online + type: string + - jsonPath: .status.conditions[?(@.type=='Ready')].status + name: Ready + type: string + - jsonPath: .status.conditions[?(@.type=='Configured')].status + name: Configured + type: string + - jsonPath: .status.conditions[?(@.type=='DataInitialized')].status + name: DataInitialized + type: string + - jsonPath: .status.conditions[?(@.type=='InQuorum')].status + name: InQuorum + type: string + - jsonPath: .status.syncProgress + name: InSync + type: string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + description: ReplicatedVolumeReplica is a Kubernetes Custom Resource that + represents a replica of a ReplicatedVolume. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + properties: + nodeName: + maxLength: 253 + minLength: 1 + type: string + replicatedVolumeName: + maxLength: 120 + minLength: 1 + pattern: ^[0-9A-Za-z.+_-]*$ + type: string + x-kubernetes-validations: + - message: replicatedVolumeName is immutable + rule: self == oldSelf + type: + description: ReplicaType enumerates possible values for ReplicatedVolumeReplica + spec.type and status.actualType fields. 
+ enum: + - Diskful + - Access + - TieBreaker + type: string + required: + - replicatedVolumeName + - type + type: object + status: + properties: + actualType: + description: ReplicaType enumerates possible values for ReplicatedVolumeReplica + spec.type and status.actualType fields. + enum: + - Diskful + - Access + - TieBreaker + type: string + conditions: + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + drbd: + properties: + actual: + properties: + allowTwoPrimaries: + default: false + type: boolean + disk: + maxLength: 256 + pattern: ^(/[a-zA-Z0-9/.+_-]+)?$ + type: string + initialSyncCompleted: + default: false + type: boolean + type: object + config: + properties: + address: + properties: + ipv4: + pattern: ^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$ + type: string + port: + maximum: 65535 + minimum: 1025 + type: integer + required: + - ipv4 + - port + type: object + peers: + additionalProperties: + properties: + address: + properties: + ipv4: + pattern: ^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$ + type: string + port: + maximum: 65535 + minimum: 1025 + type: integer + required: + - ipv4 + - port + type: object + diskless: + default: false + type: boolean + nodeId: + maximum: 7 + minimum: 0 + type: integer + required: + - address + - nodeId + type: object + description: |- + Peers contains information about other replicas in the same ReplicatedVolume. + The key in this map is the node name where the peer replica is located. 
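For the ReplicatedVolumeReplica spec shown earlier in this schema, a minimal Diskful replica might look like this sketch (names illustrative):

apiVersion: storage.deckhouse.io/v1alpha1
kind: ReplicatedVolumeReplica
metadata:
  name: pvc-0000-0                 # illustrative; volume name plus a numeric suffix
spec:
  replicatedVolumeName: pvc-0000   # illustrative ReplicatedVolume name; immutable
  nodeName: worker-0               # illustrative node name
  type: Diskful                    # Diskful, Access, or TieBreaker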
+ type: object + peersInitialized: + description: |- + PeersInitialized indicates that Peers has been calculated. + This field is used to distinguish between no peers and not yet calculated. + type: boolean + primary: + type: boolean + type: object + status: + properties: + connections: + items: + properties: + congested: + type: boolean + connectionState: + type: string + name: + type: string + paths: + items: + properties: + established: + type: boolean + remoteHost: + properties: + address: + type: string + family: + type: string + port: + type: integer + required: + - address + - family + - port + type: object + thisHost: + properties: + address: + type: string + family: + type: string + port: + type: integer + required: + - address + - family + - port + type: object + required: + - established + - remoteHost + - thisHost + type: object + type: array + peerDevices: + items: + properties: + hasOnlineVerifyDetails: + type: boolean + hasSyncDetails: + type: boolean + outOfSync: + type: integer + peerClient: + type: boolean + peerDiskState: + type: string + percentInSync: + type: string + replicationState: + type: string + resyncSuspended: + type: string + volume: + type: integer + required: + - hasOnlineVerifyDetails + - hasSyncDetails + - outOfSync + - peerClient + - peerDiskState + - percentInSync + - replicationState + - resyncSuspended + - volume + type: object + type: array + peerNodeId: + type: integer + peerRole: + type: string + tls: + type: boolean + required: + - congested + - connectionState + - name + - paths + - peerDevices + - peerNodeId + - peerRole + - tls + type: object + type: array + devices: + items: + properties: + client: + type: boolean + diskState: + type: string + minor: + type: integer + open: + type: boolean + quorum: + type: boolean + size: + type: integer + volume: + type: integer + required: + - client + - diskState + - minor + - open + - quorum + - size + - volume + type: object + type: array + forceIOFailures: + type: boolean + name: + type: string + nodeId: + type: integer + role: + type: string + suspended: + type: boolean + suspendedFencing: + type: boolean + suspendedNoData: + type: boolean + suspendedQuorum: + type: boolean + suspendedUser: + type: boolean + writeOrdering: + type: string + required: + - connections + - devices + - forceIOFailures + - name + - nodeId + - role + - suspended + - suspendedFencing + - suspendedNoData + - suspendedQuorum + - suspendedUser + - writeOrdering + type: object + type: object + lvmLogicalVolumeName: + maxLength: 256 + type: string + type: object + required: + - metadata + - spec + type: object + x-kubernetes-validations: + - message: metadata.name must start with spec.replicatedVolumeName + '-' + rule: self.metadata.name.startsWith(self.spec.replicatedVolumeName + '-') + - message: numeric suffix must be between 0 and 31 + rule: int(self.metadata.name.substring(self.metadata.name.lastIndexOf('-') + + 1)) <= 31 + - message: metadata.name must be at most 123 characters (to fit derived LLV + name with prefix) + rule: size(self.metadata.name) <= 123 + selectableFields: + - jsonPath: .spec.nodeName + - jsonPath: .spec.replicatedVolumeName + served: true + storage: true + subresources: + status: {} diff --git a/crds/storage.deckhouse.io_replicatedvolumes.yaml b/crds/storage.deckhouse.io_replicatedvolumes.yaml new file mode 100644 index 000000000..3dc81e8e4 --- /dev/null +++ b/crds/storage.deckhouse.io_replicatedvolumes.yaml @@ -0,0 +1,353 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition 
+metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.19.0 + labels: + module: sds-replicated-volume + name: replicatedvolumes.storage.deckhouse.io +spec: + group: storage.deckhouse.io + names: + kind: ReplicatedVolume + listKind: ReplicatedVolumeList + plural: replicatedvolumes + shortNames: + - rv + singular: replicatedvolume + scope: Cluster + versions: + - additionalPrinterColumns: + - jsonPath: .status.conditions[?(@.type=='IOReady')].status + name: IOReady + type: string + - jsonPath: .spec.size + name: Size + type: string + - jsonPath: .status.actualSize + name: ActualSize + type: string + - jsonPath: .status.diskfulReplicaCount + name: DiskfulReplicas + type: string + - jsonPath: .status.phase + name: Phase + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + properties: + replicatedStorageClassName: + minLength: 1 + type: string + size: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + required: + - replicatedStorageClassName + - size + type: object + status: + properties: + actuallyAttachedTo: + items: + type: string + maxItems: 2 + type: array + conditions: + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. 
+ enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + configuration: + description: Configuration is the desired configuration snapshot for + this volume. + properties: + replication: + description: Replication is the resolved replication mode. + type: string + storagePoolName: + description: StoragePoolName is the name of the ReplicatedStoragePool + used by this RSC. + type: string + topology: + description: Topology is the resolved topology setting. + type: string + volumeAccess: + description: VolumeAccess is the resolved volume access mode. + type: string + required: + - replication + - storagePoolName + - topology + - volumeAccess + type: object + configurationGeneration: + description: ConfigurationGeneration is the RSC generation from which + configuration was taken. + format: int64 + type: integer + configurationObservedGeneration: + description: ConfigurationObservedGeneration is the RSC generation + when configuration was last observed/acknowledged. + format: int64 + type: integer + datamesh: + description: Datamesh is the computed datamesh configuration for the + volume. + properties: + allowTwoPrimaries: + default: false + description: AllowTwoPrimaries enables two primaries mode for + the datamesh. + type: boolean + members: + description: Members is the list of datamesh members. + items: + description: ReplicatedVolumeDatameshMember represents a member + of the datamesh. + properties: + addresses: + description: Addresses is the list of DRBD addresses for + this member. + items: + properties: + address: + properties: + ipv4: + pattern: ^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$ + type: string + port: + maximum: 65535 + minimum: 1025 + type: integer + required: + - ipv4 + - port + type: object + systemNetworkName: + maxLength: 64 + type: string + required: + - address + - systemNetworkName + type: object + maxItems: 16 + type: array + name: + description: Name is the member name (used as list map key). + minLength: 1 + type: string + nodeName: + description: NodeName is the Kubernetes node name where + the member is located. + minLength: 1 + type: string + role: + description: Role is the DRBD role of this member. + type: string + type: + description: Type is the member type (Diskful, Access, or + TieBreaker). + type: string + typeTransition: + description: TypeTransition indicates the desired type transition + for this member. + enum: + - ToDiskful + - ToDiskless + type: string + zone: + description: Zone is the zone where the member is located. + type: string + required: + - addresses + - name + - nodeName + - type + type: object + x-kubernetes-validations: + - message: typeTransition must be ToDiskless for Diskful type, + or ToDiskful for Access/TieBreaker types + rule: 'self.type == ''Diskful'' ? 
(!has(self.typeTransition) + || self.typeTransition == ''ToDiskless'') : (!has(self.typeTransition) + || self.typeTransition == ''ToDiskful'')' + maxItems: 24 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + quorum: + default: 0 + description: Quorum is the quorum value for the datamesh. + maximum: 13 + minimum: 0 + type: integer + quorumMinimumRedundancy: + default: 0 + description: QuorumMinimumRedundancy is the minimum redundancy + required for quorum. + maximum: 8 + minimum: 0 + type: integer + size: + anyOf: + - type: integer + - type: string + description: Size is the desired size of the volume. + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + systemNetworkNames: + description: SystemNetworkNames is the list of system network + names for DRBD communication. + items: + maxLength: 64 + type: string + maxItems: 16 + type: array + required: + - allowTwoPrimaries + - members + - quorum + - quorumMinimumRedundancy + - size + - systemNetworkNames + type: object + datameshRevision: + description: DatameshRevision is a counter incremented when datamesh + configuration changes. + format: int64 + type: integer + desiredAttachTo: + description: |- + DesiredAttachTo is the desired set of nodes where the volume should be attached (up to 2 nodes). + It is computed by controllers from ReplicatedVolumeAttachment (RVA) objects. + items: + type: string + maxItems: 2 + type: array + drbd: + properties: + config: + properties: + allowTwoPrimaries: + default: false + type: boolean + type: object + type: object + eligibleNodesViolations: + description: EligibleNodesViolations lists replicas placed on non-eligible + nodes. + items: + description: ReplicatedVolumeEligibleNodesViolation describes a + replica placed on a non-eligible node. + properties: + nodeName: + description: NodeName is the node where the replica is placed. + type: string + reason: + description: Reason describes why this placement violates eligible + nodes constraints. + type: string + replicaName: + description: ReplicaName is the ReplicatedVolumeReplica name. + type: string + required: + - nodeName + - reason + - replicaName + type: object + type: array + required: + - datamesh + - datameshRevision + type: object + required: + - metadata + - spec + type: object + x-kubernetes-validations: + - message: metadata.name must be at most 120 characters (to fit derived RVR/LLV + names) + rule: size(self.metadata.name) <= 120 + served: true + storage: true + subresources: + status: {} diff --git a/docs/dev/megatest.md b/docs/dev/megatest.md new file mode 100644 index 000000000..d1496f891 --- /dev/null +++ b/docs/dev/megatest.md @@ -0,0 +1,166 @@ +# Каждая горутина пишет в лог: +- при начале действия: + - имя rv + - название действия и параметры + - при окончании действия + - имя rv + - название действия и параметры + - результат + - сколько заняло времени + - если она следит, то при смене состояния + - имя rv + - ожидаемое состояние + - наблюдаемое состояние + +# Пачка горутин +## volume-checker(rv) +Собирает статистику по переходам состояния rv ориентируясь на condition. + - следит (Watch вместо Get каждые N секунд), что с rv все ок. + - condition остается RV.ioReady==True + - condition остается RV.Quorum==True + - при переключении состояния - писать в лог с Reason и Message если Condition status меняется. 
Записывать в структуру rvName и количество переходов для каждого condition; в начале все condition должны быть в true. Если condition rvr == false - также писать об этом в лог.
+  Таким образом четное количество переходов указывает на то, что rv поддерживает нужное состояние несмотря на попытки её развалить, а нечетное - что попытки удались. В идеале счетчики переходов должны оставаться по нулям.
+  - когда получает сигнал окончания — выходит
+## volume-attacher (rv, period_min, period_max)
+Эмулирует работу CSI, публикуя RV на разных нодах через ресурсы **RVA** (`ReplicatedVolumeAttachment`).
+  - в цикле:
+    - ждет рандом
+    - случайным образом выбирает одну ноду (wantedNodeName) с label sds-replicated-volume.
+    - в зависимости от количества активных **RVA** (т.е. желаемых прикреплений):
+      - 0:
+        - rand(100) > 10 - обычный цикл (добавим одну и уберем одну) (0 нод на выходе)
+        - rand(100) < 10 - Attach цикл (только добавить 1 ноду) (1 нода на выходе)
+      - 1:
+        - wantedNodeName не находится среди RVA - тогда цикл эмуляции миграции (создаём новую RVA, удаляем старую RVA, затем удаляем новую) (0 нод на выходе)
+        - wantedNodeName уже находится среди RVA - тогда только Detach цикл (удалить RVA) (0 нод на выходе)
+      - 2: кейс, когда контроллер упал и поднялся
+        - wantedNodeName находится или не находится среди RVA - делаем Detach цикл, удаляем случайную RVA (1 нода на выходе).
+
+  Таким образом большая часть времени будет с 0 нод (вне цикла работы volume-attacher), а часть - с 1 нодой для эмуляции миграции.
+  Итого:
+  - из 0 нод с шансом 5% мы делаем 1 ноду (без этого у нас всегда будет оставаться 0, и мы спустя какое-то время после старта никогда не получим 2), а обычно не делаем (оставляем 0 на выходе)
+  - из 1 ноды мы делаем 0, но с разным подходом: либо сразу, либо с эмуляцией миграции (временно делаем 2, затем 0)
+  - из 2 нод мы делаем 1.
+
+  - **Обычный цикл** (добавим одну и уберем одну):
+    - делает действие паблиш: **создаёт RVA** для выбранной ноды (не затрагивая другие RVA).
+    - дожидается успеха: `rva.status.conditions[type=Ready].status=True` (агрегат: `Attached=True` и `ReplicaReady=True`) и/или `rv.status.actuallyAttachedTo` содержит выбранную ноду.
+    - ждет рандом
+    - делает действие анпаблиш **выбранной ноды**: удаляет соответствующую RVA (если она существует)
+    - дожидается успеха: `rv.status.actuallyAttachedTo` не содержит выбранную ноду (и/или RVA удалена).
+ - пишет в лог о любых действиях или бездействиях (когда ноды 2) + - **Detach цикл** (убрать одну ноду): + - действие анпаблиш **выбранной ноды**: удаляет RVA (если она существует) + - дожидается успеха: `rv.status.actuallyAttachedTo` не содержит выбранную ноду + - пишет в лог о любых действиях или бездействиях (когда ноды 2) + - **Attach цикл** (только добавить 1 ноду): + - делает действие паблиш: создаёт RVA для выбранной ноды + - дожидается успеха: RVA `Ready=True` и/или `rv.status.actuallyAttachedTo` содержит выбранную ноду + - пишет в лог + - **Цикл эмуляции миграции** (создаём новую RVA, удаляем старую RVA, затем удаляем новую) + - делает действие паблиш: создаёт RVA для выбранной новой ноды + - дожидается успеха: `rv.status.actuallyAttachedTo` содержит выбранную новую ноду (и при необходимости обе, если в итоге должно быть 2) + - действие анпаблиш **старой ноды**: удаляет RVA старой ноды + - дожидается успеха: `rv.status.actuallyAttachedTo` не содержит старую ноду + - пишет в лог о любых действиях или бездействиях (когда ноды 2) + - ждет рандом + - действие анпаблиш **выбранной новой ноды**: удаляет RVA выбранной новой ноды + - дожидается успеха: `rv.status.actuallyAttachedTo` не содержит выбранную новую ноду + - пишет в лог о любых действиях или бездействиях (когда ноды 2) + + - когда получает сигнал окончания + - делает действие анпаблиш + - удаляет все RVA для данного RV + - дожидается успеха + - выходит +## volume-resizer(rv, period_min, period_max, step_min, step_max) - ОТЛОЖЕНО! +Меняет размеры rv. +TODO: не увеличивать размер > maxRvSize + - в цикле + - ждет рандом + - делает действие ресайза + - увеличивает размер в rv на случайный размер в диапазоне + - дожидается успеха + - когда получает сигнал окончания + - выходит +## volume-replica-destroyer (rv, period_min, period_max) +Удаляет случайные rvr у rv. + - в цикле пока не выйдем, с случайным интервалом из (period_min+max) + - ждет рандом(в интервале выше) + - случайным образом выбирает rvr из тех которые у нас в данном rv. + - выполняет действие удаления: + - вызывает delete на rvr + - НЕ дожидается успеха + - пишет в лог , который уже структурирован, действие + - когда получает сигнал окончания + - выходит +## volume-replica-creator (rv, period_min, period_max) +Создает случайные rvr у rv. + - в цикле пока не выйдем, с случайным интервалом из (period_min+max) + - ждет рандом (в интервале выше) + - случайным образом выбирает тип rvr: + - Access или TieBreaker + - Diskful пока не создаем (у нас нет удалятора лишних diskful пока) + - выполняет действие создания rvr c выбранным типом. + - создает rvr + - НЕ дожидается успеха + - пишет в лог, который уже структурирован, тип и действие + - когда получает сигнал окончания + - выходит +## volume-main (rv, sc, lifetime_period) + - рандомом выбирает, сколько нод сразу в паблиш (это первоначальное состояние кластера при запуске megatest, далее поддерживать такое не надо) + - 0 — 30% + - 1 — 60% + - 2 — 10% + - рандомом выбирает, то количество нод, которое получили на предыдущем шаге + - выполняет действие создать rv + - создает rv + - запускает: + - volume-attacher(rv, 30, 60) - подумать над интервалами + - volume-attacher(rv, 100, 200) - РЕШИЛИ НЕ ДЕЛАТЬ! + - volume-resizer(rv, 50, 50, 4kb, 64kb) - ОТЛОЖЕНО! - контроллер ресайза может увеличить rv больше чем запрошено, если это требуется на более низком уровне, поэтому проверка должна это учитывать. Но нужно уточнить порог срабатывания sds-node-configurator - он может не увеличивать на малые значения. 
+ - volume-replica-destroyer (rv, 30, 300) + - volume-replica-creator (rv, 30, 300) + - дожидается, что станет ready + - запускает + - volume-checker(rv) + - когда ей посылают сигнал окончания или истекает lifetime_period + - останавливает: + - volume-checker + - volume-attacher’ы + - выполняет действие удаление rv + - дожидается успеха + - останавливает + - volume-resizer + - volume-replica-destroyer + - volume-replica-creator + - выходит +## pod-destroyer (ns, label_selector, pod_min, pod_max, period_min, period_max) +Удаляет поды control-plane по label_selector + - в цикле: + - ждет рандом rand(period_min, period_max) + - выбирает поды с заданным label_selector, перемешивает список (на статус не смотрит) + - выбирает случайное число из (rand(pod_min, pod_max)) + - делает delete выбранного числа pod'ов с начала списка + - не дожидается удаления + - когда ей посылают сигнал окончания + - выходит +## multivolume(list sc, max_vol, step_min, step_max, step_period_min, step_period_max, vol_period_min, vol_period_max) +Оркестратор горутин (он же main). + - запускает: + - pod-destroyer(agent, 1, 2, 30, 60) + - pod-destroyer(controller, 1, 3, 30, 60) + - pod-destroyer(kube-apiserver, 1, 3, 120, 240) - ПОКА НЕ ДЕЛАЕМ (т.е. kube-apiserver это статичный под)! + - в цикле + - если количество запущенных volume_main < max_vol + - выбирает случайным образом количество для запуска (step_min, step_max), может превышать max_vol + - в цикле для каждого N + - выбирает случайный scName + - выбирает случайный vol_period + - генерирует случайное имя rvName + - запускает volume-main(rvName, scName, vol_period) + - ждет рандом(step_period_min, step_period_max) + - когда ей посылают сигнал окончания + - останавливает всё запущенное + - выходит diff --git a/docs/dev/spec_ai_task_index_contracts.txt b/docs/dev/spec_ai_task_index_contracts.txt new file mode 100644 index 000000000..fb45b5a23 --- /dev/null +++ b/docs/dev/spec_ai_task_index_contracts.txt @@ -0,0 +1,9 @@ +Мы редактируем разделы "Контракт данных: `ReplicatedVolume`" и "Контракт данных: `ReplicatedVolumeReplica`". + +Там должны быть указаны те поля этих ресурсов, которые упомянуты в данной спецификации. Для каждого такого поля нужно указать важную информацию, например, о том, в каком контроллере она обновляется и/или используется, либо список возможных значений или формат данных, если это где-то явно указывается в спецификациях контроллеров. + +Также требуется сверить наличие полей в API контрактах. Если поле отсутствует, для него надо оставить пометку. + +В конце разделов предоставь список тех полей в API контрактах, колторые не упомянуты в спецификации. Отдельным списокм можно указать неупомянутые константы (см. v1alpha3/) + +Стиль оформления сейчас представлен в этих разделах. Создавай новый список в том же стиле. Текущий - удали, он только для примера. Для отступов всегда используй два пробела. 
\ No newline at end of file diff --git a/docs/dev/spec_v1alpha3.md b/docs/dev/spec_v1alpha3.md new file mode 100644 index 000000000..a70933c70 --- /dev/null +++ b/docs/dev/spec_v1alpha3.md @@ -0,0 +1,774 @@ +- [Основные положения](#основные-положения) + - [Схема именования акторов](#схема-именования-акторов) + - [Условное обозначение триггеров](#условное-обозначение-триггеров) + - [Алгоритмы](#алгоритмы) + - [Типы реплик и целевое количество реплик](#типы-реплик-и-целевое-количество-реплик) + - [Константы](#константы) + - [RVR Ready условия](#rvr-ready-условия) + - [RV Ready условия](#rv-ready-условия) + - [Алгоритмы хеширования shared secret](#алгоритмы-хеширования-shared-secret) + - [Порты DRBD](#порты-drbd) + - [Финализаторы ресурсов](#финализаторы-ресурсов) +- [Контракт данных: `ReplicatedVolume`](#контракт-данных-replicatedvolume) + - [`spec`](#spec) + - [`status`](#status) +- [Контракт данных: `ReplicatedVolumeReplica`](#контракт-данных-replicatedvolumereplica) + - [`spec`](#spec-1) + - [`status`](#status-1) +- [Акторы приложения: `agent`](#акторы-приложения-agent) + - [`drbd-config-controller`](#drbd-config-controller) + - [Статус: \[OK | priority: 5 | complexity: 5\]](#статус-ok--priority-5--complexity-5) + - [`drbd-primary-controller`](#drbd-primary-controller) + - [Статус: \[OK | priority: 5 | complexity: 2\]](#статус-ok--priority-5--complexity-2) + - [`rvr-status-config-address-controller`](#rvr-status-config-address-controller) + - [Статус: \[OK | priority: 5 | complexity: 3\]](#статус-ok--priority-5--complexity-3) +- [Акторы приложения: `controller`](#акторы-приложения-controller) + - [`rvr-diskful-count-controller`](#rvr-diskful-count-controller) + - [Статус: \[OK | priority: 5 | complexity: 4\]](#статус-ok--priority-5--complexity-4) + - [`rvr-scheduling-controller`](#rvr-scheduling-controller) + - [Статус: \[OK | priority: 5 | complexity: 5\]](#статус-ok--priority-5--complexity-5-1) + - [`rvr-status-config-node-id-controller`](#rvr-status-config-node-id-controller) + - [Статус: \[OK | priority: 5 | complexity: 2\]](#статус-ok--priority-5--complexity-2-1) + - [`rvr-status-config-peers-controller`](#rvr-status-config-peers-controller) + - [Статус: \[OK | priority: 5 | complexity: 3\]](#статус-ok--priority-5--complexity-3-1) + - [`rv-status-config-device-minor-controller`](#rv-status-config-device-minor-controller) + - [Статус: \[OK | priority: 5 | complexity: 2\]](#статус-ok--priority-5--complexity-2-2) + - [`rvr-tie-breaker-count-controller`](#rvr-tie-breaker-count-controller) + - [Статус: \[OK | priority: 5 | complexity: 4\]](#статус-ok--priority-5--complexity-4-1) + - [`rvr-access-count-controller`](#rvr-access-count-controller) + - [Статус: \[OK | priority: 5 | complexity: 3\]](#статус-ok--priority-5--complexity-3-2) + - [`rv-attach-controller`](#rv-attach-controller) + - [Статус: \[OK | priority: 5 | complexity: 4\]](#статус-ok--priority-5--complexity-4-2) + - [`rvr-volume-controller`](#rvr-volume-controller) + - [Статус: \[OK | priority: 5 | complexity: 3\]](#статус-ok--priority-5--complexity-3-3) + - [`rvr-quorum-and-attach-constrained-release-controller`](#rvr-quorum-and-attach-constrained-release-controller) + - [Статус: \[OK | priority: 5 | complexity: 2\]](#статус-ok--priority-5--complexity-2-3) + - [`rvr-owner-reference-controller`](#rvr-owner-reference-controller) + - [Статус: \[OK | priority: 5 | complexity: 1\]](#статус-ok--priority-5--complexity-1) + - [`rv-status-config-quorum-controller`](#rv-status-config-quorum-controller) + - [Статус: \[OK | priority: 5 | 
complexity: 4\]](#статус-ok--priority-5--complexity-4-3) + - [`rv-status-config-shared-secret-controller`](#rv-status-config-shared-secret-controller) + - [Статус: \[OK | priority: 3 | complexity: 3\]](#статус-ok--priority-3--complexity-3) + +# Основные положения + +## Схема именования акторов +`{controlledEntity}-{name}-{actorType}` +где + - `controlledEntity` - название сущности под контролем актора + - `name` - имя актора, указывающее на его основную цель + - `actorType` - тип актора (`controller`, `scanner`, `worker`) + +## Условное обозначение триггеров + - `CREATE` - событие создания и синхронизации ресурса; синхронизация происходит для каждого ресурса, при старте контроллера, а также на регулярной основе (раз в 20 часов) + - `UPDATE` - событие обновления ресурса (в т.ч. проставление `metadata.deletionTimestamp`) + - `DELETE` - событие окончательного удаления ресурса, происходит после снятия последнего финализатора (может быть потеряно в случае недоступности контроллера) + +## Алгоритмы + +### Типы реплик и целевое количество реплик + +Существуют три вида реплик по предназначению: + - \[DF\] diskful - чтобы воспользоваться диском + - \[DL-AP\] diskless (access point) - чтобы воспользоваться быстрым доступом к данным, не ожидая долгой синхронизации, либо при отсутствии диска + - \[DL-TB\] diskless (tie-breaker) - чтобы участвовать в кворуме при чётном количестве других реплик + +В зависимости от значения свойства `ReplicatedStorageClass` `spec.replication`, количество реплик разных типов следующее: + - `None` + - \[DF\]: 1 + - \[DL-AP\]: 0 / 1 / 2 + - `Availability` + - \[DF\]: 2 + - \[DL-TB\]: 1 / 0 / 1 + - \[DL-AP\]: 0 / 1 / 2 + - `ConsistencyAndAvailability` + - \[DF\]: 3 + - \[DL-AP\]: 1 + +Для миграции надо две primary. + +Виртуалка может подключится к TB либо запросить себе AP. + +В случае если `spec.volumeAcess!=Local` AP не может быть Primary. + +TB в любой ситуации поддерживает нечетное, и сама может превратится в AP. Превращение происходит с помощью удаления. + +## Константы +Константы - это значения, которые должны быть определены в коде во время компиляции программы. + +Ссылка на константы в данной спецификации означает необходимость явного определения, либо переиспользования данной константы в коде. + +### RVR Ready условия +Это список предикатов вида `rvr.status.conditions[type=].status=`, объединение которых является критерием +для выставления значения `rvr.status.conditions[type=Ready].status=True`. + - `InitialSync==True` + - `DevicesReady==True` + - `ConfigurationAdjusted==True` + - `Quorum==True` + - `DiskIOSuspended==False` + - `AddressConfigured==True` + +### RV Ready условия +Это список предикатов вида `rv.status.conditions[type=].status=`, объединение которых является критерием +для выставления значения `rv.status.conditions[type=Ready].status=True`. + - `QuorumConfigured==True` + - `DiskfulReplicaCountReached==True` + - `AllReplicasReady==True` + - `SharedSecretAlgorithmSelected==True` + +### Алгоритмы хеширования shared secret + - `sha256` + - `sha1` + +### Порты DRBD + - `drbdMinPort=7000` - минимальный порт для использования ресурсами + - `drbdMaxPort=7999` - максимальный порт для использования ресурсами + +### Финализаторы ресурсов +- `rv` + - `sds-replicated-volume.deckhouse.io/controller` +- `rvr` + - `sds-replicated-volume.deckhouse.io/controller` + - `sds-replicated-volume.deckhouse.io/agent` +- `llv` + - `sds-replicated-volume.deckhouse.io/controller` + +# Контракт данных: `ReplicatedVolume` +## `spec` +- `size` + - Тип: Kubernetes `resource.Quantity`. 
+ - Обязательное поле. +- `replicatedStorageClassName` + - Обязательное поле. + - Используется: + - **rvr-diskful-count-controller** — определяет целевое число реплик по `ReplicatedStorageClass`. + - **rv-attach-controller** — проверяет `rsc.spec.volumeAccess==Local` для возможности локального доступа. +> Примечание: запрос на публикацию (attach intent) задаётся не через `rv.spec`, а через ресурсы +> [`ReplicatedVolumeAttachment`](#контракт-данных-replicatedvolumeattachment-rva). Итоговый набор целевых нод +> публикуется в `rv.status.desiredAttachTo`. + +## `status` +- `conditions[]` + - `type=Ready` + - Обновляется: **rv-status-controller**. + - Критерий: все [RV Ready условия](#rv-ready-условия) достигнуты. + - `type=QuorumConfigured` + - Обновляется: **rv-status-config-quorum-controller**. + - `type=SharedSecretAlgorithmSelected` + - Обновляется: **rv-status-config-shared-secret-controller**. + - При исчерпании вариантов: `status=False`, `reason=UnableToSelectSharedSecretAlgorithm`, `message=`. + - `type=DiskfulReplicaCountReached` + - Обновляется: **rvr-diskful-count-controller**. +- `drbd.config` + - Путь в API: `status.drbd.config.*`. + - `sharedSecret` + - Инициализирует: **rv-status-config-controller**. + - Меняет при ошибке алгоритма на реплике: **rv-status-config-shared-secret-controller**. + - `sharedSecretAlg` + - Выбирает/обновляет: **rv-status-config-controller** / **rv-status-config-shared-secret-controller**. + - `quorum` + - Обновляет: **rv-status-config-quorum-controller** (см. формулу в описании контроллера). + - `quorumMinimumRedundancy` + - Обновляет: **rv-status-config-quorum-controller**. + - `allowTwoPrimaries` + - Обновляет: **rv-attach-controller** (включает при 2 узлах в `status.desiredAttachTo`, выключает иначе). + - `deviceMinor` + - Обновляет: **rv-status-config-device-minor-controller** (уникален среди всех RV). +- `actuallyAttachedTo[]` + - Обновляется: **rv-attach-controller**. + - Значение: список узлов, где `rvr.status.drbd.status.role==Primary`. +- `desiredAttachTo[]` + - Обновляется: **rv-attach-controller**. + - Значение: список узлов, где том **должен** быть опубликован (макс. 2). + - Источник: вычисляется из активных `ReplicatedVolumeAttachment` (RVA), с учётом ограничений локальности + (`rsc.spec.volumeAccess==Local`). +- `actualSize` + - Присутствует в API; источник обновления не описан в спецификации. +- `phase` + - Возможные значения: `Terminating`, `Synchronizing`, `Ready`. + - Обновляется: **rv-status-controller**. + +# Контракт данных: `ReplicatedVolumeAttachment` (RVA) +RVA — это ресурс «намерения публикации» тома на конкретной ноде. + +## `spec` +- `replicatedVolumeName` + - Обязательное; неизменяемое. + - Значение: имя `ReplicatedVolume`, который требуется опубликовать. +- `nodeName` + - Обязательное; неизменяемое. + - Значение: имя узла, на который требуется опубликовать том. + +## `status` +- `phase` (Enum: `Pending`, `Attaching`, `Attached`, `Detaching`) +- `conditions[]` + - `type=Attached` + - `status=True`, `reason=Attached` — том опубликован (replica Primary) на `spec.nodeName`. + - `status=False` — ожидание/ошибка публикации. 
Основные `reason`: + - `WaitingForActiveAttachmentsToDetach` + - `WaitingForReplicatedVolume` + - `WaitingForReplicatedVolumeReady` + - `WaitingForReplica` + - `ConvertingTieBreakerToAccess` + - `UnableToProvideLocalVolumeAccess` + - `LocalityNotSatisfied` + - `SettingPrimary` + - `type=ReplicaReady` + - Зеркалирует `rvr.status.conditions[type=IOReady]` для реплики на `spec.nodeName` + (копируются `status`, `reason`, `message`). + - `type=Ready` + - Агрегат: `Attached=True` **и** `ReplicaReady=True`. + - `status=True`, `reason=Ready`. + - `status=False`, `reason=NotAttached` или `ReplicaNotReady`. + +# Контракт данных: `ReplicatedVolumeReplica` +## `spec` +- `replicatedVolumeName` + - Обязательное; неизменяемое. + - Используется всеми контроллерами для привязки RVR к соответствующему RV. +- `nodeName` + - Обновляется/используется: **rvr-scheduling-controller**. + - Учитывается: **rvr-missing-node-controller**, **rvr-node-cordon-controller**. +- `type` (Enum: `Diskful` | `Access` | `TieBreaker`) + - Устанавливается/меняется: + - **rvr-diskful-count-controller** — создаёт `Diskful`. + - **rvr-access-count-controller** — создаёт/переводит `TieBreaker→Access`, удаляет лишние `Access`. + - **rvr-tie-breaker-count-controller** — создаёт/удаляет `TieBreaker`. + +## `status` +- `conditions[]` + - `type=Ready` + - `reason`: `WaitingForInitialSync`, `DevicesAreNotReady`, `AdjustmentFailed`, `NoQuorum`, `DiskIOSuspended`, `Ready`. + - `type=InitialSync` + - `reason`: `InitialSyncRequiredButNotReady`, `SafeForInitialSync`, `InitialDeviceReadinessReached`. + - `type=Primary` + - `reason`: `ResourceRoleIsPrimary`, `ResourceRoleIsNotPrimary`. + - `type=DevicesReady` + - `reason`: `DeviceIsNotReady`, `DeviceIsReady`. + - `type=ConfigurationAdjusted` + - `reason`: `ConfigurationFailed`, `MetadataCheckFailed`, `MetadataCreationFailed`, `StatusCheckFailed`, `ResourceUpFailed`, `ConfigurationAdjustFailed`, `ConfigurationAdjustmentPausedUntilInitialSync`, `PromotionDemotionFailed`, `ConfigurationAdjustmentSucceeded`. + - Примечание: `reason=UnsupportedAlgorithm` упомянут в спецификации, но отсутствует среди API-констант. + - `type=Quorum` + - `reason`: `NoQuorumStatus`, `QuorumStatus`. + - `type=DiskIOSuspended` + - `reason`: `DiskIONotSuspendedStatus`, `DiskIOSuspendedUnknownReason`, `DiskIOSuspendedByUser`, `DiskIOSuspendedNoData`, `DiskIOSuspendedFencing`, `DiskIOSuspendedQuorum`. +- `actualType` (Enum: `Diskful` | `Access` | `TieBreaker`) + - Обновляется контроллерами; используется **rvr-volume-controller** для удаления LLV при `spec.type!=Diskful` только когда `actualType==spec.type`. +- `drbd.config` + - `nodeId` (0..7) + - Обновляет: **rvr-status-config-node-id-controller** (уникален в пределах RV). + - `address.ipv4`, `address.port` + - Обновляет: **rvr-status-config-address-controller**; IPv4; порт ∈ [1025;65535]; выбирается свободный порт DRBD. + - `peers` + - Обновляет: **rvr-status-config-peers-controller** — у каждой готовой RVR перечислены все остальные готовые реплики того же RV. + - `disk` + - Обеспечивает: **rvr-volume-controller** при `spec.type==Diskful`; формат `/dev//`. + - `primary` + - Обновляет: **rv-attach-controller** (промоут/демоут). +- `drbd.actual` + - `allowTwoPrimaries` + - Используется: **rv-attach-controller** (ожидание применения настройки на каждой RVR). + - `disk` + - Поле присутствует в API; не используется в спецификации явно. +- `drbd.status` + - Публикуется: **rvr-drbd-status-controller**; далее используется другими контроллерами. 
+ - `name`, `nodeId`, `role`, `suspended`, `suspendedUser`, `suspendedNoData`, `suspendedFencing`, `suspendedQuorum`, `forceIOFailures`, `writeOrdering`. + - `devices[]`: `volume`, `minor`, `diskState`, `client`, `open`, `quorum`, `size`, `read`, `written`, `alWrites`, `bmWrites`, `upperPending`, `lowerPending`. + - `connections[]`: `peerNodeId`, `name`, `connectionState`, `congested`, `peerRole`, `tls`, `apInFlight`, `rsInFlight`, + - `paths[]`: `thisHost.address`, `thisHost.port`, `thisHost.family`, `remoteHost.address`, `remoteHost.port`, `remoteHost.family`, `established`, + - `peerDevices[]`: `volume`, `replicationState`, `peerDiskState`, `peerClient`, `resyncSuspended`, `outOfSync`, `pending`, `unacked`, `hasSyncDetails`, `hasOnlineVerifyDetails`, `percentInSync`. + +Поля API, не упомянутые в спецификации: +- `status.drbd.actual.disk`. + +# Акторы приложения: `agent` + +## `drbd-config-controller` + +### Статус: [OK | priority: 5 | complexity: 5] + +### Цель + +Согласовать желаемую конфигурацию в полях ресурсов и конфигурации DRBD, выполнять +первоначальную синхронизацию и настройку DRBD ресурсов на ноде. Название ноды +`rvr.spec.nodeName` должно соответствовать названию ноды контроллера +(переменная окружения `NODE_NAME`, см. `images/agent/cmd/env_config.go`) + +Обязательные поля. Нельзя приступать к конфигурации, пока значение поля не +проинициализировано: +- `rv.metadata.name` +- `rv.status.drbd.config.sharedSecret` +- `rv.status.drbd.config.sharedSecretAlg` +- `rv.status.deviceMinor` +- `rvr.status.drbd.config.nodeId` +- `rvr.status.drbd.config.address` +- `rvr.status.drbd.config.peers` + - признак инициализации: `rvr.status.drbd.config.peersInitialized` +- `rvr.status.lvmLogicalVolumeName` + - обязателен только для `rvr.spec.type=Diskful` + +Дополнительные поля. Можно приступать к конфигурации с любыми значениями в них: +- `rv.status.drbd.config.quorum` +- `rv.status.drbd.config.quorumMinimumRedundancy` +- `rv.status.drbd.config.allowTwoPrimaries` + +Список ошибок, которые требуется поддерживать (выставлять и снимать) после каждого +реконсайла: + - `rvr.status.drbd.errors.sharedSecretAlgSelectionError` - результат валидации алгоритма + - `rvr.status.drbd.errors.lastAdjustmentError` - вывод команды `drbdadm adjust` + - `rvr.status.drbd.errors.<...>Error` - вывод любой другой использованной команды `drbd` (требуется доработать API контракт) + +Список полей, которые требуется поддерживать (выставлять и снимать) как результат каждого +реконсайла: + - `rvr.status.drbd.actual.disk` - должно соответствовать пути к диску `rvr.status.lvmLogicalVolumeName` + - только для `rvr.spec.type==Diskful` + - формат `/dev/{actualVGNameOnTheNode}/{actualLVNameOnTheNode}` + - `rvr.status.drbd.actual.allowTwoPrimaries` - должно соответствовать `rv.status.drbd.config.allowTwoPrimaries` + - `rvr.status.drbd.actual.initialSyncCompleted` + +Для работы с форматом конфигурации DRBD предлагается воспользоваться существующими пакетами + - см. метод `writeResourceConfig` в `images/agent/internal/reconcile/rvr/reconcile_handler.go`. +Также требуется использовать те же самые параметры по умолчанию (`protocol`, `rr-conflict`, и т.д.) + +Существующая реализация поддерживает `Diskful` и `Access` типы реплик. Для +`TieBreaker` реплик требуется изменить параметры так, чтобы избежать +синхронизации метаданных на ноду (провести исследование самостоятельно). 
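+
+Для иллюстрации ниже приведён минимальный набросок на Go того, как агент мог бы вычислять
+значения `rvr.status.drbd.actual.*` по правилам выше. Имена пакета, типа и функции условные
+и не являются частью существующей реализации; реальные Go-структуры см. в `api/v1alpha1`.
+
+```go
+package agent
+
+import "fmt"
+
+// drbdActual - условная структура, повторяющая поля rvr.status.drbd.actual.
+type drbdActual struct {
+	Disk                 string // формат /dev/{actualVGNameOnTheNode}/{actualLVNameOnTheNode}, только для Diskful
+	AllowTwoPrimaries    bool   // должно соответствовать rv.status.drbd.config.allowTwoPrimaries
+	InitialSyncCompleted bool
+}
+
+// desiredActual вычисляет желаемое содержимое rvr.status.drbd.actual после реконсайла.
+func desiredActual(replicaType, vgName, lvName string, allowTwoPrimaries, initialSyncCompleted bool) drbdActual {
+	a := drbdActual{
+		AllowTwoPrimaries:    allowTwoPrimaries,
+		InitialSyncCompleted: initialSyncCompleted,
+	}
+	if replicaType == "Diskful" {
+		// путь к диску строится из полей соответствующей LLV
+		a.Disk = fmt.Sprintf("/dev/%s/%s", vgName, lvName)
+	}
+	return a
+}
+```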
+ +Последовательность реконсайла, если не заполнен `rvr.metadata.deletionTimestamp`: +- ставим финализаторы на rvr + - `sds-replicated-volume.deckhouse.io/agent` + - `sds-replicated-volume.deckhouse.io/controller` +- пишем конфиг во временный файл и проверяем валидность + - команда (новая, нужно реализовать аналогично другим): `drbdadm --config-to-test <...>.res_tmp --config-to-exclude <...>.res sh-nop` + - в случае невалидного конфига, нужно вывести ошибку в `rvr.status.drbd.errors.<...>` и прекратить реконсайл +- пишем конфиг в основной файл (можно переместить, либо пересоздать и удалить временный) +- если `rvr.spec.type==Diskful` + - проверяем наличие метаданных + - `drbdadm dump-md` + - см. существующую реализацию + - если метаданных нет - создаем их + - `drbdadm create-md` + - см. существующую реализацию + - проверяем необходимость первоначальной синхронизации (AND) + - `rvr.status.drbd.config.peersInitialized` + - `len(rvr.status.drbd.config.peers)==0` + - `rvr.status.drbd.status.devices[0].diskState != UpToDate` + - `rvr.status.drbd.actual.initialSyncCompleted!=true` + - если первоначальная синхронизация нужна + - выполняем `drdbadm primary --force` + - см. существующую реализацию + - выполняем `drdbadm secondary` + - см. существующую реализацию + - выставляем `rvr.status.drbd.actual.initialSyncCompleted=true` +- если `rvr.spec.type!=Diskful` + - выставляем `rvr.status.drbd.actual.initialSyncCompleted=true` +- выполнить `drbdadm status`, чтобы убедиться, не "поднят" ли ресурс + - см. существующую реализацию +- если ресурс "не поднят", выполнить `drbdadm up` + - см. существующую реализацию +- выполнить `drbdadm adjust` + - см. существующую реализацию + +Если заполнен `rvr.metadata.deletionTimestamp`: +- если есть другие финализаторы, кроме `sds-replicated-volume.deckhouse.io/agent`, +то прекращаем реконсайл, т.к. агент должен быть последним, кто удаляет свой финализатор +- выполнить `drbdadm down` + - см. существующую реализацию +- удалить конфиги ресурса (основной и временный), если они есть +- снять последний финализатор с rvr + +TODO: + - Агент (drbd-config) должен ставить финалайзер agent на llv перед тем, как начинает ее использовать и снимать после того, как перестал. + - У реплики добавить отдельный condition FullyConnected, который НЕ влияет на Ready. Он true, когда у реплики есть связь со всеми ее пирами. + +### Вывод + - `rvr.status.drbd.errors.*` + - `rvr.status.drbd.actual.*` + - *.res, *.res_tmp файлы на ноде + +## `drbd-primary-controller` + +### Статус: [OK | priority: 5 | complexity: 2] + +### Цель +Выполнить команду `drbdadm primary`/`drbdadm secondary`, когда желаемая роль ресурса не +соответствует фактической. + +Команда должна выполняться на `rvr.spec.nodeName` ноде. + +Cм. существующую реализацию `drbdadm primary` и `drbdadm secondary`. + +Предусловия для выполнения команды (AND): + - `rv.status.conditions[type=Ready].status=True` + - `rvr.status.drbd.initialSyncCompleted=true` + - OR + - выполняем `drbdadm primary` (AND) + - `rvr.status.drbd.config.primary==true` + - `rvr.status.drbd.status.role!=Primary` + - выполняем `drbdadm secondary` (AND) + - `rvr.status.drbd.config.primary==false` + - `rvr.status.drbd.status.role==Primary` + +Ошибки drbd команд требуется выводить в `rvr.status.drbd.errors.*`. + +### Вывод + - `rvr.status.drbd.errors.*` + +## `rvr-status-config-address-controller` + +### Статус: [OK | priority: 5 | complexity: 3] + +### Цель +Проставить значение свойству `rvr.status.drbd.config.address`. 
+ - `ipv4` - взять из `node.status.addresses[type=InternalIP]` + - `port` - найти наименьший свободный порт в диапазоне, задаваемом в [портах DRBD](#Порты-DRBD) `drbdMinPort`/`drbdMaxPort` + +В случае, если нет свободного порта, настроек порта, либо IP: повторять реконсайл с ошибкой. + +Процесс и результат работы контроллера должен быть отражён в `rvr.status.conditions[type=AddressConfigured]` + +### Триггер + - `CREATE/UPDATE(RVR, rvr.spec.nodeName, !rvr.status.drbd.config.address)` + +### Вывод + - `rvr.status.drbd.config.address` + - `rvr.status.conditions[type=AddressConfigured]` + +# Акторы приложения: `controller` + +## `rvr-diskful-count-controller` + +### Статус: [OK | priority: 5 | complexity: 4] + +### Цель +Добавлять привязанные diskful-реплики (RVR) для RV. + +Целевое количество реплик определяется в `ReplicatedStorageClass` (получать через `rv.spec.replicatedStorageClassName`). + +Первая реплика должна перейти в полностью работоспособное состояние, прежде чем +будет создана вторая реплика. Вторая и последующие реплики могут быть созданы +параллельно. + +Процесс и результат работы контроллера должен быть отражён в `rv.status.conditions[type=DiskfulReplicaCountReached]` + +### Триггер + - `CREATE(RV)`, `UPDATE(RVR[metadata.deletionTimestamp -> !null])` + - когда фактическое количество реплик (в том числе неработоспособных, но исключая удаляемые) меньше требуемого + - `UPDATE(RVR[status.conditions[type=Ready].status == True])` + - когда фактическое количество реплик равно 1 + +### Вывод + - создаёт diskful RVR (`rvr.spec.type==Diskful`) вплоть до RV-> +[RSC->`spec.replication`](https://deckhouse.io/modules/sds-replicated-volume/stable/cr.html#replicatedstorageclass-v1alpha1-spec-replication) + - `spec.replicatedVolumeName` имеет значение RV `metadata.name` + - `metadata.ownerReferences` указывает на RV по имени `metadata.name` + - `rv.status.conditions[type=DiskfulReplicaCountReached]` + +## `rvr-scheduling-controller` + +### Статус: [OK | priority: 5 | complexity: 5] + +### Цель + +Назначить всем rvr каждой rv уникальную ноду, задав поле `rvr.spec.nodeName`. +Список нод определяется пересечением нод из двух наборов: +- ноды находящиеся в зонах `rsc.spec.zones`. Если там ничего не указано - все ноды. Если тип `Access` - то все ноды. +- ноды, на которых размещены LVG `rsp.spec.lvmVolumeGroups` (применимо только для `Diskful` нод, иначе - все ноды) + +Три последовательные фазы: + +- Размещение `Diskful` + - исключаем из планирования узлы, на которых уже есть реплики этой RV (любого типа) + - учитываем topology: + - `Zonal` - все реплики должны быть в рамках одной зоны + - если уже есть Diskful реплики - используем их зону + - иначе если указан `rv.status.desiredAttachTo` - выбраем лучшую из зон desiredAttachTo узлов (даже если в `rv.status.desiredAttachTo` будут указаны узлы, зоны которых не указаны в `rsc.spec.zones`) + - иначе выбираем лучшую разрешённую зону (из `rsc.spec.zones` или все зоны кластера) + - `TransZonal` - реплики распределяются равномерно по зонам + - каждую реплику размещаем в зону с наименьшим количеством Diskful реплик + - если невозможно поддержать равномерное распределение - ошибка невозможности планирования + - `Ignored` - зоны не учитываются, реплики размещаются по произвольным нодам + - учитываем место + - делаем вызов в scheduler-extender (см. 
https://github.com/deckhouse/sds-node-configurator/pull/183) + - пытаемся учесть `rv.status.desiredAttachTo` - назначить `Diskful` реплики на эти ноды, если это возможно (увеличиваем приоритет таких нод) +- Размещение `Access` + - фаза работает только если: + - `rv.status.desiredAttachTo` задан и не на всех нодах из `rv.status.desiredAttachTo` есть реплики + - `rsc.spec.volumeAccess!=Local` + - исключаем из планирования узлы, на которых уже есть реплики этой RV (любого типа) + - не учитываем topology, место на диске + - допустимо иметь ноды в `rv.status.desiredAttachTo`, на которые не хватило реплик + - допустимо иметь реплики, которые никуда не запланировались (потому что на всех `rv.status.desiredAttachTo` и так есть + реплики какого-то типа) +- Размещение `TieBreaker` + - исключаем из планирования узлы, на которых уже есть реплики этой RV (любого типа) + - учитываем topology: + - `Zonal` - TieBreaker размещается в той же зоне, где уже есть Diskful реплики + - если Diskful реплик нет - ошибка невозможности планирования + - если не хватает свободных узлов - ошибка невозможности планирования + - `TransZonal` - каждый rvr планируем в зону с самым маленьким количеством реплик (всех типов) + - если зон с самым маленьким количеством несколько - выбираем любую из них + - если в зонах с самым маленьким количеством реплик нет свободного узла - + ошибка невозможности планирования (нельзя гарантировать равномерное распределение) + - `Ignored` - зоны не учитываются + - если не хватает свободных узлов - ошибка невозможности планирования + +Ошибка невозможности планирования: + - в каждой rvr протавляем + - `rvr.status.conditions[type=Scheduled].status=False` + - `rvr.status.conditions[type=Scheduled].reason=` + - `<исходя из сценария неудачи>` + - `WaitingForAnotherReplica` - для rvr, к размещению которых ещё не приступали + - `rvr.status.conditions[type=Scheduled].message=<для пользователя>` + +В случае успешного планирования: + - в rvr протавляем + - `rvr.status.conditions[type=Scheduled].status=True` + - `rvr.status.conditions[type=Scheduled].reason=ReplicaScheduled` + +### Вывод + - `rvr.spec.nodeName` + - `rvr.status.conditions[type=Scheduled]` + +## `rvr-status-config-node-id-controller` + +### Статус: [OK | priority: 5 | complexity: 2] + +### Цель +Проставить свойству `rvr.status.drbd.config.nodeId` уникальное значение среди всех реплик одной RV, в диапазоне [0; 7]. + +В случае превышения количества реплик, повторять реконсайл с ошибкой. + +### Триггер + - `CREATE(RVR, status.drbd.config.nodeId==nil)` + +### Вывод + - `rvr.status.drbd.config.nodeId` + +## `rvr-status-config-peers-controller` + +### Статус: [OK | priority: 5 | complexity: 3] + +### Цель +Поддерживать актуальное состояние пиров на каждой реплике. + +Для любого RV у всех готовых RVR в пирах прописаны все остальные готовые, кроме неё. + +Готовая RVR - та, у которой `spec.nodeName!="", status.nodeId !=nil, status.address != nil` + +После первой инициализации, даже в случае отсутствия пиров, требуется поставить +`rvr.status.drbd.config.peersInitialized=true` в том же патче. + +### Вывод + - `rvr.status.drbd.config.peers` + - `rvr.status.drbd.config.peersInitialized` + +## `rv-status-config-device-minor-controller` + +### Статус: [OK | priority: 5 | complexity: 2] + +### Цель + +Инициализировать свойство `rv.status.deviceMinor` минимальным свободным значением среди всех RV. + +По завершению работы контроллера у каждой RV должен быть свой уникальный `rv.status.deviceMinor`. 
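+
+Возможный набросок выбора минимального свободного значения (имя функции и сигнатура условные;
+занятые значения предварительно собираются по всем RV):
+
+```go
+package controller
+
+// nextFreeDeviceMinor возвращает минимальный deviceMinor, не встречающийся среди занятых.
+func nextFreeDeviceMinor(used map[uint32]struct{}) uint32 {
+	var minor uint32
+	for {
+		if _, taken := used[minor]; !taken {
+			return minor
+		}
+		minor++
+	}
+}
+```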
+ +### Триггер + - `CREATE/UPDATE(RV, rv.status.deviceMinor != nil)` + +### Вывод + - `rv.status.deviceMinor` + +## `rvr-tie-breaker-count-controller` + +### Статус: [OK | priority: 5 | complexity: 4] + +### Цель + +Failure domain (FD) - либо - нода, либо, в случае, если `rsc.spec.topology==TransZonal`, то - и нода, и зона. + +Создавать и удалять RVR с `rvr.spec.type==TieBreaker`, чтобы поддерживались требования: + +- отказ любого одного FD не должен приводить к потере кворума +- отказ большинства FD должен приводить к потере кворума +- поэтому надо тай-брейкерами доводить количество реплик на всех FD до минимального +числа, при котором будут соблюдаться условия: + - отличие в количестве реплик между FD не больше чем на 1 + - общее количество реплик - нечётное + +### Вывод + - Новая rvr с `rvr.spec.type==TieBreaker` + - `rvr.metadata.deletionTimestamp==true` + +## `rvr-access-count-controller` + +### Статус: [OK | priority: 5 | complexity: 3] + +### Цель +Поддерживать количество `rvr.spec.type==Access` реплик (для всех режимов +`rsc.spec.volumeAccess`, кроме `Local`) таким, чтобы их хватало для размещения на тех узлах, где это требуется: + - список запрашиваемых для доступа узлов — `rv.status.desiredAttachTo` (вычисляется из RVA) + - `Access` реплики требуются для доступа к данным на тех узлах, где нет других реплик +Когда узел больше не в `rv.status.desiredAttachTo`, а также не в `rv.status.actuallyAttachedTo`, +`Access` реплика на нём должна быть удалена. + +### Вывод + - создает, обновляет, удаляет `rvr` + +## `rv-attach-controller` + +### Статус: [OK | priority: 5 | complexity: 4] + +### Цель + +Обеспечить переход в primary (промоут) и обратно реплик. Для этого нужно следить за списком нод в +`rv.status.desiredAttachTo` (вычисляется из RVA) и приводить в соответствие реплики на этих нодах, +проставляя им `rvr.status.drbd.config.primary`. +Источник запроса на публикацию — активные ресурсы `ReplicatedVolumeAttachment` (RVA). Контроллер вычисляет +целевой набор нод как `rv.status.desiredAttachTo` и уже по нему промоут/демоут реплик. + +В случае, если `rsc.spec.volumeAccess==Local`, но реплика не `rvr.spec.type==Diskful`, +либо её нет вообще, промоут невозможен. В этом случае контроллер отражает проблему в статусе RVA: + - `rva.status.conditions[type=Attached].status=False` + - `rva.status.conditions[type=Attached].reason=UnableToProvideLocalVolumeAccess` или `LocalityNotSatisfied` + - `rva.status.conditions[type=Attached].message=<сообщение для пользователя>` +и не добавляет ноду в `rv.status.desiredAttachTo` (для Local access). + +Не все реплики могут быть primary. Для `rvr.spec.type=TieBreaker` требуется поменять тип на +`rvr.spec.type=Accees` (в одном патче вместе с `rvr.status.drbd.config.primary`). + +В `rv.status.desiredAttachTo` может быть указано 2 узла (что соответствует двум активным RVA). Однако, в кластере по умолчанию стоит запрет на 2 primary ноды. В таком случае, нужно временно выключить запрет: + - поменяв `rv.status.drbd.config.allowTwoPrimaries=true` + - дождаться фактического применения настройки на каждой rvr `rvr.status.drbd.actual.allowTwoPrimaries` + - и только потом обновлять `rvr.status.drbd.config.primary` + +В случае, когда в `rv.status.desiredAttachTo` менее двух нод, нужно убедиться, что настройка `rv.status.drbd.config.allowTwoPrimaries=false`. + +Также требуется поддерживать свойство `rv.status.actuallyAttachedTo`, указывая там список нод, на которых +фактически произошёл переход реплики в состояние Primary. 
Это состояние публикуется в `rvr.status.drbd.status.role` (значение `Primary`). + +Контроллер работает только когда RV имеет `status.condition[type=Ready].status=True` + +### Вывод + - `rvr.status.drbd.config.primary` + - `rv.status.drbd.config.allowTwoPrimaries` + - `rv.status.actuallyAttachedTo` + +## `rvr-volume-controller` + +### Статус: [OK | priority: 5 | complexity: 3] + +### Цель +1. Обеспечить наличие LLV для каждой реплики, у которой + - `rvr.spec.type==Diskful` + - `rvr.metadata.deletionTimestamp==nil` + Всем LLV под управлением проставляется `metadata.ownerReference`, указывающий на RVR. +2. Обеспечить проставление значения в свойства `rvr.status.lvmLogicalVolumeName`, указывающее на соответствующую LLV, готовую к использованию. +3. Обеспечить отсутствие LLV диска у RVR с `rvr.spec.type!=Diskful`, но только когда +фактический тип (`rvr.status.actualType`) соответствует целевому `rvr.spec.type`. +4. Обеспечить сброс свойства `rvr.status.lvmLogicalVolumeName` после удаления LLV. + +### Вывод + - Новое `llv` + - Обновление для уже существующих: `llv.metadata.ownerReference` - вынесли в отдельный контроллер [`llv-owner-reference-controller`](#llv-owner-reference-controller) + - `rvr.status.lvmLogicalVolumeName` (задание и сброс) + +## `rvr-quorum-and-attach-constrained-release-controller` + +### Статус: [OK | priority: 5 | complexity: 2] + +### Контекст + +Приложение agent ставит 2 финализатора на все RVR до того, как сконфигурирует DRBD. + - `sds-replicated-volume.deckhouse.io/agent` (далее - `F/agent`) + - `sds-replicated-volume.deckhouse.io/controller` (далее - `F/controller`) + +При удалении RVR, agent не удаляет ресурс из DRBD, и не снимает финализаторы, +пока стоит `F/controller`. + +### Цель + +Цель `rvr-quorum-and-attach-constrained-release-controller` - снимать финализатор `F/controller` с удаляемых rvr, когда +кластер к этому готов. Условия готовности: + +- количество rvr `rvr.status.conditions[type=Ready].status == rvr.status.conditions[type=FullyConnected].status == True` +(исключая ту, которую собираются удалить) больше, либо равно `rv.status.drbd.config.quorum` +- присутствует необходимое количество `rvr.status.actualType==Diskful && rvr.status.conditions[type=Ready].status==True && rvr.metadata.deletionTimestamp==nil` реплик, в +соответствии с `rsc.spec.replication` +- удаляемая реплика не является фактически опубликованной, т.е. её нода не в `rv.status.actuallyAttachedTo` + + +### Вывод + - удалить `rvr.metadata.finalizers[sds-replicated-volume.deckhouse.io/controller]` + +## `rvr-owner-reference-controller` + +### Статус: [OK | priority: 5 | complexity: 1] + +### Цель + +Поддерживать `rvr.metada.ownerReference`, указывающий на `rv` по имени +`rvr.spec.replicatedVolumeName`. + +Чтобы выставить правильные настройки, требуется использовать функцию `SetControllerReference` из пакета +`sigs.k8s.io/controller-runtime/pkg/controller/controllerutil`. + +### Вывод + - `rvr.metada.ownerReference` + +## `rv-status-config-quorum-controller` + +### Статус: [OK | priority: 5 | complexity: 4] + +### Цель + +Поднять значение кворума до необходимого, после того как кластер станет работоспособным. + +Работоспособный кластер - это RV, у которого все [RV Ready условия](#rv-ready-условия) достигнуты, без учёта условия `QuorumConfigured`. + +До поднятия кворума нужно поставить финализатор на каждую RVR. Также необходимо обработать проставление rvr.`metadata.deletionTimestamp` таким образом, чтобы финализатор с RVR был снят после фактического уменьшения кворума. 
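+
+Возможный набросок работы с финализатором (используются хелперы `controllerutil`; обёртка и её
+сигнатура условные):
+
+```go
+package controller
+
+import (
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
+)
+
+// Имя финализатора взято из раздела "Финализаторы ресурсов".
+const controllerFinalizer = "sds-replicated-volume.deckhouse.io/controller"
+
+// ensureRVRFinalizer возвращает true, если список финализаторов RVR изменился и объект нужно обновить.
+func ensureRVRFinalizer(rvr client.Object, quorumLowered bool) bool {
+	if rvr.GetDeletionTimestamp() == nil {
+		// до поднятия кворума финализатор должен стоять на каждой RVR
+		return controllerutil.AddFinalizer(rvr, controllerFinalizer)
+	}
+	if quorumLowered {
+		// кворум фактически уменьшен - RVR можно отпустить
+		return controllerutil.RemoveFinalizer(rvr, controllerFinalizer)
+	}
+	// удаление начато, но кворум ещё не уменьшен - финализатор не снимаем
+	return false
+}
+```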
+ +Процесс и результат работы контроллера должен быть отражён в `rv.status.conditions[type=QuorumConfigured]` + +### Триггер + - `CREATE/UPDATE(RV, rv.status.conditions[type=Ready].status==True)` + +### Вывод + - `rv.status.drbd.config.quorum` + - `rv.status.drbd.config.quorumMinimumRedundancy` + - `rv.status.conditions[type=QuorumConfigured]` + +Правильные значения: + +N - все реплики +M - diskful реплики + +``` +if M > 1 { + var quorum byte = max(2, N/2 + 1) + var qmr byte = max(2, M/2 +1) +} else { + var quorum byte = 0 + var qmr byte = 0 +} +``` + +## `rv-status-config-shared-secret-controller` + +### Статус: [OK | priority: 3 | complexity: 3] + +### Цель +Проставить первоначальное значения для `rv.status.drbd.config.sharedSecret` и `rv.status.drbd.config.sharedSecretAlg`, +а также обработать ошибку применения алгоритма на любой из реплик из `rvr.status.drbd.errors.sharedSecretAlgSelectionError`, и поменять его на следующий по [списку алгоритмов хеширования](Алгоритмы хеширования shared secret). Последний проверенный алгоритм должен быть указан в `rvr.status.drbd.errors.sharedSecretAlgSelectionError.unsupportedAlg`. + +В случае, если список закончился - прекратить попытки. + +### Триггер + - `CREATE(RV)` + - `CREATE/UPDATE(RVR)` + +### Вывод + - `rv.status.drbd.config.sharedSecret` + - генерируется новый + - `rv.status.drbd.config.sharedSecretAlg` + - выбирается из захардкоженного списка по порядку diff --git a/docs/dev/spec_v1alpha3_wave2.md b/docs/dev/spec_v1alpha3_wave2.md new file mode 100644 index 000000000..d541bdf02 --- /dev/null +++ b/docs/dev/spec_v1alpha3_wave2.md @@ -0,0 +1,318 @@ +- [Акторы приложения: `agent`](#акторы-приложения-agent) + - [`drbd-config-controller`](#drbd-config-controller) + - [`drbd-resize-controller`](#drbd-resize-controller) + - [Статус: \[OK | priority: 5 | complexity: 2\]](#статус-ok--priority-5--complexity-2) + - [`drbd-primary-controller`](#drbd-primary-controller) + - [`rvr-status-config-address-controller`](#rvr-status-config-address-controller) +- [Акторы приложения: `controller`](#акторы-приложения-controller) + - [`rvr-diskful-count-controller`](#rvr-diskful-count-controller) + - [`rvr-scheduling-controller`](#rvr-scheduling-controller) + - [`rvr-status-config-node-id-controller`](#rvr-status-config-node-id-controller) + - [`rvr-status-config-peers-controller`](#rvr-status-config-peers-controller) + - [`rv-status-config-device-minor-controller`](#rv-status-config-device-minor-controller) + - [`rvr-tie-breaker-count-controller`](#rvr-tie-breaker-count-controller) + - [`rvr-access-count-controller`](#rvr-access-count-controller) + - [`rv-attach-controller`](#rv-attach-controller) + - [`rvr-volume-controller`](#rvr-volume-controller) + - [`rvr-quorum-and-attach-constrained-release-controller`](#rvr-quorum-and-attach-constrained-release-controller) + - [`rvr-owner-reference-controller`](#rvr-owner-reference-controller) + - [`rv-status-config-quorum-controller`](#rv-status-config-quorum-controller) + - [`rv-status-config-shared-secret-controller`](#rv-status-config-shared-secret-controller) + - [`rvr-missing-node-controller`](#rvr-missing-node-controller) + - [`rvr-node-cordon-controller`](#rvr-node-cordon-controller) + - [`rvr-status-conditions-controller`](#rvr-status-conditions-controller) + - [`llv-owner-reference-controller`](#llv-owner-reference-controller) + - [`rv-status-conditions-controller`](#rv-status-conditions-controller) + - [`rv-gc-controller`](#rv-gc-controller) + - 
[`tie-breaker-removal-controller`](#tie-breaker-removal-controller) + - [`rvr-finalizer-release-controller`](#rvr-finalizer-release-controller) + - [Статус: \[OK | priority: 5 | complexity: 3\]](#статус-ok--priority-5--complexity-3) + - [`rv-finalizer-controller`](#rv-finalizer-controller) + - [Статус: \[OK | priority: 5 | complexity: 1\]](#статус-ok--priority-5--complexity-1) + - [`rv-delete-propagation-controller`](#rv-delete-propagation-controller) + - [Статус: \[OK | priority: 5 | complexity: 1\]](#статус-ok--priority-5--complexity-1-1) + - [`rvr-missing-node-controller`](#rvr-missing-node-controller-1) + - [Статус: \[TBD | priority: 3 | complexity: 3\]](#статус-tbd--priority-3--complexity-3) + - [`llv-owner-reference-controller`](#llv-owner-reference-controller-1) + - [Статус: \[TBD | priority: 5 | complexity: 1\]](#статус-tbd--priority-5--complexity-1) + - [`rv-status-conditions-controller`](#rv-status-conditions-controller-1) + - [`rv-gc-controller`](#rv-gc-controller-1) + - [`tie-breaker-removal-controller`](#tie-breaker-removal-controller-1) + +# Акторы приложения: `agent` + +## `drbd-config-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +Если на rvr/rv есть `metadata.deletionTimestamp` и не наш финализатор (не `sds-replicated-volume.deckhouse.io/*`), +то объект не должен считаться удалённым. Любая логика, связанная с обработкой удалённых rv/rvr должна +быть обновлена, чтобы включать это условие. + +## `drbd-resize-controller` + +### Статус: [OK | priority: 5 | complexity: 2] + +### Цель +Выполнить команду `drbdadm resize`, когда желаемый размер диска больше +фактического. + +Команда должна выполняться на `rvr.spec.type=Diskful` ноде с наименьшим +`rvr.status.drbd.config.nodeId` для ресурса. + +Cм. существующую реализацию `drbdadm resize`. + +Предусловия для выполнения команды (AND): + - `rv.status.conditions[type=Ready].status=True` + - `rvr.status.drbd.initialSyncCompleted=true` + - `rv.status.actualSize != nil` + - `rv.size - rv.status.actualSize > 0` + +Поле `rv.status.actualSize` должно поддерживаться актуальным размером. Когда оно +незадано - его требуется задать. После успешного изменения размера тома - его +требуется обновить. + +Ошибки drbd команд требуется выводить в `rvr.status.drbd.errors.*`. + +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +### Вывод + - `rvr.status.drbd.errors.*` + - `rv.status.actualSize` + +## `drbd-primary-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +Если на rvr/rv есть `metadata.deletionTimestamp` и не наш финализатор (не `sds-replicated-volume.deckhouse.io/*`), +то объект не должен считаться удалённым. Любая логика, связанная с обработкой удалённых rv/rvr должна +быть обновлена, чтобы включать это условие. + +## `rvr-status-config-address-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +Если на rvr/rv есть `metadata.deletionTimestamp` и не наш финализатор (не `sds-replicated-volume.deckhouse.io/*`), +то объект не должен считаться удалённым. Любая логика, связанная с обработкой удалённых rv/rvr должна +быть обновлена, чтобы включать это условие. 
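+
+Во всех разделах выше повторяются две проверки: «на rv стоит наш финализатор `sds-replicated-volume.deckhouse.io/controller`» и «объект с `metadata.deletionTimestamp`, но с чужим (не `sds-replicated-volume.deckhouse.io/*`) финализатором не считается удалённым» (это же правило ниже формулируется как «только наши финализаторы, нет чужих»). Ниже минимальный набросок таких хелперов; код условный, имена функций выдуманы для иллюстрации и не являются существующей реализацией:
+
+```go
+import (
+	"strings"
+
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
+)
+
+const (
+	controllerFinalizer = "sds-replicated-volume.deckhouse.io/controller"
+	finalizerPrefix     = "sds-replicated-volume.deckhouse.io/"
+)
+
+// Пока на rv нет нашего финализатора controller, rv не обрабатываем.
+func hasControllerFinalizer(obj client.Object) bool {
+	return controllerutil.ContainsFinalizer(obj, controllerFinalizer)
+}
+
+// Объект считается удаляемым, только если стоит deletionTimestamp
+// и на нём нет чужих (не sds-replicated-volume.deckhouse.io/*) финализаторов.
+func isConsideredDeleted(obj client.Object) bool {
+	if obj.GetDeletionTimestamp() == nil {
+		return false
+	}
+	for _, f := range obj.GetFinalizers() {
+		if !strings.HasPrefix(f, finalizerPrefix) {
+			return false
+		}
+	}
+	return true
+}
+```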
+ +# Акторы приложения: `controller` + +## `rvr-diskful-count-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +В случае, если в rv стоит `metadata.deletionTimestamp` и только наши финализаторы +`sds-replicated-volume.deckhouse.io/*` (нет чужих), новые реплики не создаются. + +## `rvr-scheduling-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-status-config-node-id-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-status-config-peers-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rv-status-config-device-minor-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-tie-breaker-count-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +В случае, если в rv стоит `metadata.deletionTimestamp` и только наши финализаторы +`sds-replicated-volume.deckhouse.io/*` (нет чужих), новые реплики не создаются. + +## `rvr-access-count-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +В случае, если в rv стоит `metadata.deletionTimestamp` и только наши финализаторы +`sds-replicated-volume.deckhouse.io/*` (нет чужих), новые реплики не создаются. + +### Добавление +- начинать работу только если у RV status.condition[type=IOReady].status=True + +## `rv-attach-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +В случае, если в rv стоит `metadata.deletionTimestamp` и только наши финализаторы +`sds-replicated-volume.deckhouse.io/*` (нет чужих) - убираем публикацию со всех rvr данного rv и +не публикуем новые rvr для данного rv. + +## `rvr-volume-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-quorum-and-attach-constrained-release-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-owner-reference-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rv-status-config-quorum-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rv-status-config-shared-secret-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. 
+ +## `rvr-missing-node-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-node-cordon-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-status-conditions-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `llv-owner-reference-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rv-status-conditions-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rv-gc-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `tie-breaker-removal-controller` + +### Уточнение +Пока на rv нет нашего финализатора "[sds-replicated-volume.deckhouse.io/controller](spec_v1alpha3.md#финализаторы-ресурсов)", rv не обрабатываем. + +## `rvr-finalizer-release-controller` + +### Статус: [OK | priority: 5 | complexity: 3] + +### Обновление + +Контроллер заменяет `rvr-quorum-and-attach-constrained-release-controller` + +### Контекст + +Приложение agent ставит 2 финализатора на все RVR до того, как сконфигурирует DRBD. + - `sds-replicated-volume.deckhouse.io/agent` (далее - `F/agent`) + - `sds-replicated-volume.deckhouse.io/controller` (далее - `F/controller`) + +При удалении RVR, agent не удаляет ресурс из DRBD, и не снимает финализаторы, +пока есть хотя бы один финализатор, кроме `F/agent`. + +### Цель + +Цель `rvr-finalizer-release-controller` - снимать финализатор `F/controller` с удаляемых rvr, когда +кластер к этому готов. + +Условие готовности (даже если `rv.metadata.deletionTimestamp!=nil`): +- удаляемые реплики не опубликованы (`rv.status.actuallyAttachedTo`), при этом при удалении RV, удаляемыми +считаются все реплики (`len(rv.status.actuallyAttachedTo)==0`) + +В случае, когда RV не удаляется (`rv.metadata.deletionTimestamp==nil`), требуется +проверить дополнительные условия: +- количество rvr `rvr.status.conditions[type=Online].status == True` +(исключая ту, которую собираются удалить) больше, либо равно `rv.status.drbd.config.quorum` +- присутствует необходимое количество `rvr.spec.Type==Diskful && rvr.status.actualType==Diskful && rvr.status.conditions[type=IOReady].status==True && rvr.metadata.deletionTimestamp==nil` реплик, в +соответствии с `rsc.spec.replication` + +### Вывод + - удалить `rvr.metadata.finalizers[sds-replicated-volume.deckhouse.io/controller]` + +## `rv-finalizer-controller` + +### Статус: [OK | priority: 5 | complexity: 1] + +### Цель + +Добавлять финализатор `sds-replicated-volume.deckhouse.io/controller` на rv. + +Снимать финализатор с rv, когда на нем есть `metadata.deletionTimestamp` и в +кластере нет rvr, привязанных к данному rv по `rvr.spec.replicatedVolumeName`. 
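+
+Примерная логика целиком показана в наброске ниже; код условный: имена `Reconciler`, `reconcileFinalizer` и точные имена полей API - предположение, реальная реализация, вероятно, использует индекс по `rvr.spec.replicatedVolumeName` вместо полного List:
+
+```go
+// Набросок (условный код, не существующая реализация) логики rv-finalizer-controller.
+func (r *Reconciler) reconcileFinalizer(ctx context.Context, rv *v1alpha3.ReplicatedVolume) error {
+	const fin = "sds-replicated-volume.deckhouse.io/controller"
+
+	if rv.DeletionTimestamp == nil {
+		// Живой RV: гарантируем наличие финализатора.
+		if controllerutil.AddFinalizer(rv, fin) {
+			return r.Client.Update(ctx, rv)
+		}
+		return nil
+	}
+
+	// RV удаляется: снимаем финализатор, только когда не осталось RVR,
+	// ссылающихся на этот RV по spec.replicatedVolumeName.
+	var rvrs v1alpha3.ReplicatedVolumeReplicaList
+	if err := r.Client.List(ctx, &rvrs); err != nil {
+		return err
+	}
+	for _, rvr := range rvrs.Items {
+		if rvr.Spec.ReplicatedVolumeName == rv.Name {
+			return nil // ещё есть привязанные rvr, ждём
+		}
+	}
+	if controllerutil.RemoveFinalizer(rv, fin) {
+		return r.Client.Update(ctx, rv)
+	}
+	return nil
+}
+```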
+
+### Вывод
+- добавляет и снимает финализатор `sds-replicated-volume.deckhouse.io/controller` на rv
+
+## `rv-delete-propagation-controller`
+
+### Статус: [OK | priority: 5 | complexity: 1]
+
+### Цель
+Вызвать delete для всех rvr, принадлежащих RV, у которого стоит `metadata.deletionTimestamp`.
+
+### Вывод
+ - удаляет `rvr`
+
+## `rvr-missing-node-controller`
+
+### Статус: [TBD | priority: 3 | complexity: 3]
+
+### Цель
+Удаляет (без снятия финализатора) RVR с тех нод, которых больше нет в кластере.
+
+### Триггер
+ - во время INIT/DELETE `corev1.Node`
+ - когда Node больше нет в кластере
+
+### Вывод
+ - delete rvr
+
+## `llv-owner-reference-controller`
+
+### Статус: [TBD | priority: 5 | complexity: 1]
+
+### Цель
+
+Поддерживать `llv.metadata.ownerReference`, указывающий на `rvr`.
+
+Чтобы выставить правильные настройки, требуется использовать функцию `SetControllerReference` из пакета
+`sigs.k8s.io/controller-runtime/pkg/controller/controllerutil`.
+
+### Вывод
+ - `llv.metadata.ownerReference`
+
+## `rv-status-conditions-controller`
+### Цель
+### Вывод
+
+## `rv-gc-controller`
+### Цель
+### Вывод
+
+## `tie-breaker-removal-controller`
+### Цель
+### Вывод
diff --git a/docs/dev/spec_v1alpha3_wave2_conditions_rv_rvr_spec.md b/docs/dev/spec_v1alpha3_wave2_conditions_rv_rvr_spec.md
new file mode 100644
index 000000000..35c1583b7
--- /dev/null
+++ b/docs/dev/spec_v1alpha3_wave2_conditions_rv_rvr_spec.md
@@ -0,0 +1,696 @@
+# Спецификация изменений Conditions (v1alpha3)
+
+## Терминология
+
+| Аббревиатура | Полное название | Описание |
+|--------------|-----------------|----------|
+| **RV** | ReplicatedVolume | Реплицируемый том |
+| **RVR** | ReplicatedVolumeReplica | Реплика тома (одна копия на одной ноде) |
+| **RSC** | ReplicatedStorageClass | Класс хранения для реплицируемых томов |
+| **LLV** | LvmLogicalVolume | Реализация BackingVolume через LVM |
+
+**Соглашения:**
+- `rv.field` / `rvr.field` — ссылка на поле объекта (lowercase)
+- `RV.Condition` / `RVR.Condition` — название условия (uppercase)
+
+---
+
+## Обзор: RVR Conditions
+
+| Condition | Описание | Устанавливает | Reasons |
+|-----------|----------|---------------|---------|
+| `Scheduled` | Нода выбрана | rvr-scheduling-controller | `ReplicaScheduled`, `WaitingForAnotherReplica`, `NoAvailableNodes`, `TopologyConstraintsFailed`, `InsufficientStorage` |
+| `BackingVolumeCreated` | BackingVolume создан и ready | rvr-volume-controller | `BackingVolumeReady`, `BackingVolumeNotReady`, `WaitingForBackingVolume`, `BackingVolumeCreationFailed`, `NotApplicable` |
+| `Initialized` | Инициализация (не снимается) | drbd-config-controller (agent) | `Initialized`, `WaitingForInitialSync`, `InitialSyncInProgress` |
+| `InQuorum` | Реплика в кворуме | drbd-status-controller (agent) | `InQuorum`, `QuorumLost` |
+| `InSync` | Данные синхронизированы | drbd-status-controller (agent) | `InSync`, `Synchronizing`, `OutOfSync`, `Inconsistent`, `Diskless`, `DiskAttaching` |
+| `Configured` | Конфигурация применена | drbd-config-controller (agent) | `Configured`, `ConfigurationPending`, `ConfigurationFailed`, ...errors... 
| +| `Online` | Scheduled + Initialized + InQuorum | rvr-status-conditions-controller | `Online`, `Unscheduled`, `Uninitialized`, `QuorumLost`, `NodeNotReady`, `AgentNotReady` | +| `IOReady` | Online + InSync (safe) | rvr-status-conditions-controller | `IOReady`, `Offline`, `OutOfSync`, `Synchronizing`, `NodeNotReady`, `AgentNotReady` | +| `Attached` | Реплика Primary | rv-attach-controller | `Attached`, `Detached`, `AttachPending` | +| `AddressConfigured` | Адрес DRBD настроен | rvr-status-config-address-controller (agent) | `AddressConfigured`, `WaitingForAddress` | + +### Удаляемые + +| Condition | Причина удаления | +|-----------|------------------| +| ~~`Ready`~~ | Неоднозначная семантика "готова к чему?". Заменён на `Online` + `IOReady`. | + +--- + +## Обзор: RV Conditions + +| Condition | Описание | Устанавливает | Reasons | +|-----------|----------|---------------|---------| +| ~~`QuorumConfigured`~~ | ~~Конфигурация кворума~~ | ❌ убрать | Дублирует `rv.status.drbd.config.quorum != nil` | +| ~~`DiskfulReplicaCountReached`~~ | ~~Кол-во Diskful достигнуто~~ | ❌ убрать | Дублирует счётчик `diskfulReplicaCount` | +| `Scheduled` | Все RVR Scheduled | rv-status-conditions-controller | `AllReplicasScheduled`, `ReplicasNotScheduled`, `SchedulingInProgress` | +| `BackingVolumeCreated` | Все Diskful BackingVolume ready | rv-status-conditions-controller | `AllBackingVolumesReady`, `BackingVolumesNotReady`, `WaitingForBackingVolumes` | +| `Configured` | Все RVR Configured | rv-status-conditions-controller | `AllReplicasConfigured`, `ReplicasNotConfigured`, `ConfigurationInProgress` | +| `Initialized` | Достаточно RVR Initialized | rv-status-conditions-controller | `Initialized`, `WaitingForReplicas`, `InitializationInProgress` | +| `Quorum` | Кворум достигнут | rv-status-conditions-controller | `QuorumReached`, `QuorumLost`, `QuorumDegraded` | +| `DataQuorum` | Кворум данных Diskful | rv-status-conditions-controller | `DataQuorumReached`, `DataQuorumLost`, `DataQuorumDegraded` | +| `IOReady` | Quorum=True+DataQuorum=True+DesiredAttachTo=IOReady | rv-status-conditions-controller | `IOReady`, `InsufficientIOReadyReplicas`, `NoIOReadyReplicas` | + +### Удаляемые + +| Condition | Причина удаления | +|-----------|------------------| +| ~~`Ready`~~ | Неоднозначная семантика "готова к чему?". Заменён на `IOReady`. | +| ~~`AllReplicasReady`~~ | Зависел от удалённого `RVR.Ready`. | +| ~~`QuorumConfigured`~~ | Дублирует проверку `rv.status.drbd.config.quorum != nil`. Потребители могут проверять поле напрямую. | +| ~~`DiskfulReplicaCountReached`~~ | Дублирует информацию из счётчика `diskfulReplicaCount`. Заменён проверкой `current >= desired` из счётчика. | + +--- + +# RVR Conditions (`ReplicatedVolumeReplica.status.conditions[]`) + +### `type=Scheduled` + +- Обновляется: **rvr-scheduling-controller**. +- `status`: + - `True` — нода выбрана + - `rvr.spec.nodeName != ""` + - `False` — нода не выбрана +- `reason`: + - `ReplicaScheduled` — реплика успешно назначена на ноду + - `WaitingForAnotherReplica` — ожидание готовности другой реплики перед планированием + - `NoAvailableNodes` — нет доступных нод для размещения + - `TopologyConstraintsFailed` — не удалось выполнить ограничения топологии (Zonal/TransZonal) + - `InsufficientStorage` — недостаточно места на доступных нодах +- Без изменений относительно текущей реализации. + +### `type=BackingVolumeCreated` + +- Обновляется: **rvr-volume-controller**. 
+- `status`: + - `True` — BackingVolume создан и готов (AND) + - `rvr.status.lvmLogicalVolumeName != ""` + - соответствующий LLV (реализация BackingVolume) имеет `status.phase=Created` + - `False` — BackingVolume не создан или не ready +- `reason`: + - `BackingVolumeReady` — BackingVolume (LLV) создан и имеет `phase=Created` + - `BackingVolumeNotReady` — BackingVolume создан, но ещё не ready + - `WaitingForBackingVolume` — ожидание создания BackingVolume + - `BackingVolumeCreationFailed` — ошибка создания BackingVolume + - `NotApplicable` — для `rvr.spec.type != Diskful` (diskless реплики) +- Используется: **rvr-diskful-count-controller** — для определения готовности первой реплики. + +### `type=DataInitialized` + +- Обновляется: на агенте (предположительно **drbd-config-controller**). +- `status`: + - `True` — реплика `rvr.spec.type==Diskful` и прошла инициализацию (не снимается!) + - DRBD ресурс создан и поднят + - Начальная синхронизация завершена (если требовалась) + - `False` — инициализация не завершена, либо реплика `rvr.spec.type!=Diskful` +- `reason`: + - `Initialized` — реплика успешно инициализирована + - `WaitingForInitialSync` — ожидание завершения начальной синхронизации + - `InitialSyncInProgress` — начальная синхронизация в процессе +- Примечание: **не снимается** после установки в True — используется для определения "реплика работала". +- Используется: **rvr-diskful-count-controller** — создание следующих реплик только после инициализации первой. + +### `type=InQuorum` + +- Обновляется: на агенте (предположительно **drbd-status-controller**). +- Ранее: `Quorum`. +- `status`: + - `True` — реплика в кворуме + - `rvr.status.drbd.status.devices[0].quorum=true` + - `False` — реплика вне кворума + - `Unknown` — нода или agent недоступны +- `reason`: + - `InQuorum` — реплика участвует в кворуме + - `QuorumLost` — реплика потеряла кворум (недостаточно подключений) + - `NodeNotReady` — нода недоступна, статус неизвестен + - `AgentNotReady` — agent pod не работает, статус неизвестен +- Примечание: `devices[0]` — в текущей версии RVR всегда использует один DRBD volume (индекс 0). +- Примечание: для TieBreaker реплик логика может отличаться. + +### `type=InSync` + +- Обновляется: на агенте (предположительно **drbd-status-controller**). +- Ранее: `DevicesReady`. +- **Назначение:** Показывает состояние синхронизации данных реплики. +- `status`: + - `True` — данные синхронизированы + - Diskful: `rvr.status.drbd.status.devices[0].diskState = UpToDate` + - Access/TieBreaker: `diskState = Diskless` (всегда True с reason `Diskless`) + - `False` — данные не синхронизированы + - `Unknown` — нода или agent недоступны +- `reason`: + - `InSync` — данные полностью синхронизированы (Diskful, diskState=UpToDate) + - `Diskless` — diskless реплика (Access/TieBreaker), нет локальных данных, I/O через сеть + - `Synchronizing` — синхронизация в процессе (diskState=SyncSource/SyncTarget) + - `OutOfSync` — данные устарели (diskState=Outdated), ожидание resync + - `Inconsistent` — данные повреждены (diskState=Inconsistent), требуется восстановление + - `DiskAttaching` — подключение к диску (diskState=Attaching/Negotiating) + - `NodeNotReady` — нода недоступна, статус неизвестен + - `AgentNotReady` — agent pod не работает (crash, OOM, evicted), статус неизвестен +- Применимость: все типы реплик. 
+- **DRBD diskState mapping:** + - `UpToDate` → reason=`InSync` + - `SyncSource`, `SyncTarget` → reason=`Synchronizing` + - `Outdated` → reason=`OutOfSync` + - `Inconsistent` → reason=`Inconsistent` + - `Attaching`, `Negotiating`, `DUnknown` → reason=`DiskAttaching` + - `Diskless` → reason=`Diskless` + +### `type=Online` + +- Обновляется: **rvr-status-conditions-controller**. +- `status`: + - `True` — реплика онлайн (AND) + - `Scheduled=True` + - `Initialized=True` + - `InQuorum=True` + - `False` — реплика не онлайн +- `reason`: + - `Online` — реплика полностью онлайн + - `Unscheduled` — реплика не назначена на ноду + - `Uninitialized` — реплика не прошла инициализацию + - `QuorumLost` — реплика вне кворума + - `NodeNotReady` — нода недоступна + - `AgentNotReady` — agent pod не работает +- Примечание: `Configured` НЕ учитывается — реплика может быть online с устаревшей конфигурацией. + +### `type=IOReady` + +- Обновляется: **rvr-status-conditions-controller**. +- **Назначение:** Строгая проверка готовности к критическим операциям (resize, promote, snapshot). +- `status`: + - `True` — реплика **безопасно** готова к I/O (AND) + - `Online=True` + - `InSync=True` (diskState=UpToDate) + - `False` — реплика не готова к безопасным I/O операциям +- `reason`: + - `IOReady` — реплика полностью готова к I/O операциям + - `Offline` — реплика не онлайн (смотри `Online` condition) + - `OutOfSync` — данные не синхронизированы (diskState != UpToDate) + - `Synchronizing` — идёт синхронизация (SyncSource/SyncTarget) + - `NodeNotReady` — нода недоступна + - `AgentNotReady` — agent pod не работает +- Используется: RV.IOReady вычисляется из RVR.IOReady. +- **Примечание:** Гарантирует что данные полностью синхронизированы (diskState=UpToDate). +- **Promote:** Переключение реплики Secondary→Primary. Требует `IOReady=True` чтобы гарантировать актуальность данных и избежать split-brain. + + +### `type=Configured` + +- Обновляется: на агенте (предположительно **drbd-config-controller**). +- Ранее: `ConfigurationAdjusted`. +- `status`: + - `True` — конфигурация полностью применена (AND) + - все поля `rvr.status.drbd.actual.*` == соответствующим в `rv.status.drbd.config` или `rvr.status.drbd.config` + - `rvr.status.drbd.errors.lastAdjustmentError == nil` + - `rvr.status.drbd.errors.<...>Error == nil` + - `False` — есть расхождения или ошибки + - `Unknown` — нода или agent недоступны +- `reason`: + - `Configured` — конфигурация успешно применена + - `ConfigurationPending` — ожидание применения конфигурации + - `ConfigurationFailed` — общая ошибка конфигурации + - `MetadataCheckFailed` — ошибка проверки DRBD метаданных (`drbdadm dump-md`) + - `MetadataCreationFailed` — ошибка создания DRBD метаданных (`drbdadm create-md`) + - `StatusCheckFailed` — не удалось получить статус DRBD (`drbdadm status`) + - `ResourceUpFailed` — ошибка поднятия ресурса (`drbdadm up`) + - `AdjustmentFailed` — ошибка применения конфигурации (`drbdadm adjust`) + - `WaitingForInitialSync` — ожидание начальной синхронизации перед продолжением + - `PromotionDemotionFailed` — ошибка переключения primary/secondary + - `NodeNotReady` — нода недоступна, статус неизвестен + - `AgentNotReady` — agent pod не работает, статус неизвестен +- `message`: детали ошибки из `rvr.status.drbd.errors.*` +- Примечание: может "мигать" при изменении параметров — это нормально. +- Примечание: НЕ включает attach и resize — они отделены. + +### `type=Attached` + +- Обновляется: **rv-attach-controller**. +- Ранее: `Primary`. 
+- `status`: + - `True` — реплика опубликована (primary) + - `rvr.status.drbd.status.role=Primary` + - `False` — реплика не опубликована +- `reason`: + - `Attached` — реплика является Primary + - `Detached` — реплика является Secondary + - `AttachPending` — ожидание перехода в Primary +- Применимость: только для `Access` и `Diskful` реплик. +- Примечание: `TieBreaker` не может быть Primary напрямую — требуется сначала изменить тип на `Access`. +- Примечание: НЕ учитывает состояние I/O — только факт публикации. + +### `type=AddressConfigured` + +- Обновляется: на агенте **rvr-status-config-address-controller**. +- Существующий condition (уже реализован). +- `status`: + - `True` — адрес DRBD настроен + - `rvr.status.drbd.config.address.ipv4 != ""` + - `rvr.status.drbd.config.address.port != 0` + - `False` — адрес не настроен +- `reason`: + - `AddressConfigured` — адрес успешно назначен + - `WaitingForAddress` — ожидание назначения адреса +- Применимость: для всех типов реплик. +- Примечание: контроллер выбирает свободный порт DRBD в диапазоне [1025; 65535]. + +### Удаляемые conditions + +- ~~`type=Ready`~~ + - ❌ Удалить. + - Причина: непонятная семантика "готова к чему?". + - Замена: использовать `Online` или `IOReady` в зависимости от контекста. + +--- + +# RV Conditions (`ReplicatedVolume.status.conditions[]`) + +### `type=QuorumConfigured` - убрать + +- Обновляется: **rv-status-config-quorum-controller**. +- `status`: + - `True` — конфигурация кворума применена + - `rv.status.drbd.config.quorum` установлен + - `rv.status.drbd.config.quorumMinimumRedundancy` установлен + - `False` — конфигурация кворума не применена +- `reason`: + - `QuorumConfigured` — конфигурация кворума успешно применена + - `WaitingForReplicas` — ожидание готовности реплик для расчёта кворума +- Примечание: показывает что **настройки** кворума применены, а не что кворум **достигнут** (для этого есть `Quorum`). + +### `type=DiskfulReplicaCountReached` - удалить - копирует частично `type=IOReady` + counter по diskfull репликам. + +- Обновляется: **rvr-diskful-count-controller**. +- `status`: + - `True` — достигнуто требуемое количество Diskful реплик + - количество RVR с `spec.type=Diskful` >= требуемое по `rsc.spec.replication` + - `False` — недостаточно Diskful реплик +- `reason`: + - `RequiredNumberOfReplicasIsAvailable` — все требуемые реплики созданы + - `FirstReplicaIsBeingCreated` — создаётся первая реплика + - `WaitingForFirstReplica` — ожидание готовности первой реплики +- Примечание: контролирует создание Diskful реплик, первая реплика должна быть Initialized перед созданием остальных. + +### `type=IOReady` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — достаточно реплик готовы к I/O + - достаточное количество RVR (согласно QMR + RSC) имеют `IOReady=True` + - QMR = quorumMinimumRedundancy (минимум Diskful реплик для кворума данных) + - RSC = ReplicatedStorageClass (определяет требования репликации) + - `False` — недостаточно готовых реплик +- `reason`: + - `IOReady` — volume готов к I/O операциям + - `InsufficientIOReadyReplicas` — недостаточно IOReady реплик + - `NoIOReadyReplicas` — нет ни одной IOReady реплики +- TODO: уточнить точную формулу threshold для IOReady (предположительно >= 1 реплика). +- Используется: **rv-attach-controller**, **drbd-resize-controller**, **drbd-primary-controller**. + +### `type=Scheduled` + +- Обновляется: **rv-status-conditions-controller**. 
+- `status`: + - `True` — все реплики назначены на ноды + - все RVR имеют `Scheduled=True` + - `False` — есть неназначенные реплики +- `reason`: + - `AllReplicasScheduled` — все реплики назначены + - `ReplicasNotScheduled` — есть реплики без назначенной ноды + - `SchedulingInProgress` — планирование в процессе + +### `type=BackingVolumeCreated` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — все BackingVolume созданы и готовы + - все Diskful RVR имеют `BackingVolumeCreated=True` + - `False` — есть неготовые BackingVolume +- `reason`: + - `AllBackingVolumesReady` — все BackingVolume готовы + - `BackingVolumesNotReady` — есть неготовые BackingVolume + - `WaitingForBackingVolumes` — ожидание создания BackingVolume + +### `type=Configured` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — все реплики сконфигурированы + - все RVR имеют `Configured=True` + - `False` — есть несконфигурированные реплики или Unknown +- `reason`: + - `AllReplicasConfigured` — все реплики сконфигурированы + - `ReplicasNotConfigured` — есть несконфигурированные реплики + - `ConfigurationInProgress` — конфигурация в процессе + +### `type=Initialized` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — достаточно реплик инициализировано (один раз, далее НЕ снимается) + - достаточное количество RVR (согласно `rsc.spec.replication`) имеют `Initialized=True` + - `False` — до достижения порога +- `reason`: + - `Initialized` — достаточное количество реплик инициализировано + - `WaitingForReplicas` — ожидание инициализации реплик + - `InitializationInProgress` — инициализация в процессе +- Порог "достаточного количества": + - `None`: 1 реплика + - `Availability`: 2 реплики + - `ConsistencyAndAvailability`: 3 реплики + +### `type=Quorum` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — есть кворум + - количество RVR с `InQuorum=True` >= `rv.status.drbd.config.quorum` + - `False` — кворума нет +- `reason`: + - `QuorumReached` — кворум достигнут + - `QuorumLost` — кворум потерян + - `QuorumDegraded` — кворум на грани (N+0) +- Формула расчёта `quorum`: + ``` + N = все реплики (Diskful + TieBreaker + Access) + M = только Diskful реплики + + if M > 1: + quorum = max(2, N/2 + 1) + else: + quorum = 0 // кворум отключён для single-replica + ``` +- Примечание: использует `InQuorum`, а не `InSync` — проверяет **подключение**, а не **синхронизацию**. + +### `type=DataQuorum` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — есть кворум данных (только Diskful реплики) + - количество Diskful RVR с `InQuorum=True` >= `rv.status.drbd.config.quorumMinimumRedundancy` + - `False` — кворума данных нет +- `reason`: + - `DataQuorumReached` — кворум данных достигнут + - `DataQuorumLost` — кворум данных потерян + - `DataQuorumDegraded` — кворум данных на грани +- Формула расчёта `quorumMinimumRedundancy` (QMR): + ``` + M = только Diskful реплики + + if M > 1: + qmr = max(2, M/2 + 1) + else: + qmr = 0 // QMR отключён для single-replica + ``` +- Примечание: учитывает только Diskful реплики — **носители данных**. +- Примечание: использует `InQuorum` (подключение), а не `InSync` (синхронизация). 
+- Связь с другими полями: + - `Quorum` — кворум по всем репликам (защита от split-brain) + - `DataQuorum` — кворум среди носителей данных (защита данных от split-brain) + - `diskfulReplicasInSync` counter — сколько реплик имеют **актуальные** данные + +--- + +## `status` (counters — не conditions) + +- `diskfulReplicaCount` + - Тип: string. + - Формат: `current/desired` (например, `3/3`). + - Обновляется: **rv-status-conditions-controller**. + - Описание: количество Diskful реплик / желаемое количество. + +- `diskfulReplicasInSync` + - Тип: string. + - Формат: `current/total` (например, `2/3`). + - Обновляется: **rv-status-conditions-controller**. + - Описание: количество синхронизированных Diskful реплик / всего Diskful реплик. + +- `attachedAndIOReadyCount` + - Тип: string. + - Формат: `current/requested` (например, `1/1`). + - Обновляется: **rv-status-conditions-controller**. + - Описание: количество опубликованных и IOReady реплик / запрошено для публикации. + +--- + +# Future Conditions in wave3 (следующий этап) + +## RV Future Conditions + +### `type=QuorumAtRisk` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — кворум есть, но на грани (AND) + - `Quorum=True` + - количество RVR с `InQuorum=True` == `rv.status.drbd.config.quorum` (ровно на границе) + - `False` — кворум с запасом или кворума нет +- `reason`: + - `QuorumAtRisk` — кворум на грани, нет запаса (N+0) + - `QuorumSafe` — кворум с запасом (N+1 или больше) + - `QuorumLost` — кворума нет +- Описание: кворум есть, но нет N+1. Потеря одной реплики приведёт к потере кворума. +- Применение: alerting, UI warning. + +### `type=DataQuorumAtRisk` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — кворум данных под угрозой (OR) + - `DataQuorum=True` AND количество Diskful RVR с `InQuorum=True` == QMR (ровно на границе) + - `DataQuorum=True` AND НЕ все Diskful RVR имеют `InSync=True` + - `False` — кворум данных безопасен +- `reason`: + - `DataQuorumAtRisk` — кворум данных на грани + - `DataQuorumSafe` — кворум данных с запасом + - `DataQuorumLost` — кворум данных потерян + - `ReplicasOutOfSync` — есть несинхронизированные реплики +- Описание: кворум данных есть, но нет N+1, или не все InSync. +- Применение: alerting, UI warning. + +### `type=DataAtRisk` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — данные в единственном экземпляре + - количество Diskful RVR с `InSync=True` == 1 + - `False` — данные реплицированы +- `reason`: + - `DataAtRisk` — данные только на одной реплике + - `DataRedundant` — данные реплицированы на несколько реплик +- Описание: данные в единственном экземпляре. Потеря этой реплики = потеря данных. +- Применение: critical alerting, UI critical warning. + +### `type=SplitBrain` + +- Обновляется: **rv-status-conditions-controller**. +- `status`: + - `True` — обнаружен split-brain + - `False` — split-brain не обнаружен +- `reason`: + - `SplitBrainDetected` — обнаружен split-brain + - `NoSplitBrain` — split-brain не обнаружен + - `SplitBrainResolved` — split-brain был, но разрешён +- Описание: требуется исследование логики определения. +- Возможные признаки: + - несколько Primary реплик без `allowTwoPrimaries` + - `rvr.status.drbd.status.connections[].connectionState=SplitBrain` + - несовпадение данных между репликами (out-of-sync с обеих сторон) +- TODO: требуется детальное исследование DRBD status для определения. 
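+
+Для условий `Quorum`/`DataQuorum` и производных `QuorumAtRisk`/`DataQuorumAtRisk`/`DataAtRisk` логика сводится к сравнению счётчиков реплик с порогами из формул выше. Ниже минимальный набросок; код условный, имена типов и функций выдуманы для иллюстрации:
+
+```go
+// Набросок (условный код): пороги кворума и оценка условий *AtRisk по счётчикам реплик.
+type replicaCounts struct {
+	total           int // все реплики (Diskful + TieBreaker + Access)
+	diskful         int // только Diskful реплики
+	inQuorum        int // реплики с InQuorum=True
+	diskfulInQuorum int // Diskful реплики с InQuorum=True
+	diskfulInSync   int // Diskful реплики с InSync=True
+}
+
+// thresholds возвращает quorum и quorumMinimumRedundancy (QMR) по формулам выше.
+func thresholds(c replicaCounts) (quorum, qmr int) {
+	if c.diskful > 1 {
+		return max(2, c.total/2+1), max(2, c.diskful/2+1)
+	}
+	return 0, 0 // кворум отключён для single-replica
+}
+
+func atRisk(c replicaCounts) (quorumAtRisk, dataQuorumAtRisk, dataAtRisk bool) {
+	quorum, qmr := thresholds(c)
+	hasQuorum := quorum > 0 && c.inQuorum >= quorum
+	hasDataQuorum := qmr > 0 && c.diskfulInQuorum >= qmr
+
+	quorumAtRisk = hasQuorum && c.inQuorum == quorum // кворум есть, но без запаса (N+0)
+	dataQuorumAtRisk = hasDataQuorum &&
+		(c.diskfulInQuorum == qmr || c.diskfulInSync < c.diskful) // на грани или не все InSync
+	dataAtRisk = c.diskfulInSync == 1 // данные в единственном экземпляре
+	return
+}
+```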
+ +## RVR Future Conditions + +### `type=FullyConnected` + +- Обновляется: на агенте (предположительно **drbd-status-controller**). +- `status`: + - `True` — есть связь со всеми peers + - `len(rvr.status.drbd.status.connections) == len(rvr.status.drbd.config.peers)` + - все connections имеют `connectionState=Connected` + - `False` — нет связи с частью peers +- `reason`: + - `FullyConnected` — связь со всеми peers установлена + - `PartiallyConnected` — связь только с частью peers + - `Disconnected` — нет связи ни с одним peer + - `Connecting` — установка соединений в процессе +- Примечание: НЕ влияет на `Online` или `IOReady`. +- Применение: диагностика сетевых проблем. + +### `type=ResizeInProgress` + +- Обновляется: на агенте (предположительно **drbd-resize-controller**). +- `status`: + - `True` — resize операция в процессе + - `rv.spec.size > rv.status.actualSize` + - `False` — resize не требуется или завершён +- `reason`: + - `ResizeInProgress` — изменение размера в процессе + - `ResizeCompleted` — изменение размера завершено + - `ResizeNotNeeded` — изменение размера не требуется + - `ResizeFailed` — ошибка изменения размера +- Применение: UI индикация, блокировка некоторых операций. + +--- + +# Спецификации контроллеров conditions + +## rvr-status-conditions-controller + +### Цель + +Вычислять computed RVR conditions с проверкой доступности ноды/агента. + +### Архитектура + +```go +builder.ControllerManagedBy(mgr). + For(&v1alpha3.ReplicatedVolumeReplica{}). + Watches(&corev1.Pod{}, handler.EnqueueRequestsFromMapFunc(agentPodToRVRMapper), + builder.WithPredicates(agentPodPredicate)). + Complete(rec) +``` + +### Условия + +| Condition | Логика | Примерный список reasons | +|-----------|--------|--------------------------| +| `Online` | `Scheduled ∧ Initialized ∧ InQuorum` → True | `Online`, `Unscheduled`, `Uninitialized`, `QuorumLost`, `NodeNotReady`, `AgentNotReady` | +| `IOReady` | `Online ∧ InSync` → True | `IOReady`, `Offline`, `OutOfSync`, `Synchronizing`, `NodeNotReady`, `AgentNotReady` | + +> **Примерный список reasons, добавьте/уберите если необходимо.** + +### Проверка доступности + +``` +1. Get Agent Pod: + - labels: app=sds-drbd-agent, spec.nodeName=rvr.spec.nodeName + - If Pod not found OR phase != Running OR Ready != True: + → Agent NotReady, продолжаем к шагу 2 + +2. If Agent NotReady — определяем reason: + - Get Node by rvr.spec.nodeName + - If node not found OR node.Ready == False/Unknown: + → reason = NodeNotReady + - Else: + → reason = AgentNotReady + +3. Set conditions: + RVR.Online = False, reason = + RVR.IOReady = False, reason = +``` + +### Сценарии + +**NodeNotReady:** +- Node failure (нода упала) +- Node unreachable (network partition) +- Kubelet не отвечает (node.Ready = Unknown) + +**AgentNotReady (node OK):** +- Agent pod CrashLoopBackOff +- Agent pod OOMKilled +- Agent pod Evicted +- Agent pod Pending/Terminating + +### Вывод + +- `rvr.status.conditions[type=Online]` +- `rvr.status.conditions[type=IOReady]` + +--- + +## rv-status-conditions-controller + +### Цель + +Агрегировать RVR conditions в RV conditions и обновлять счётчики. + +### Архитектура + +```go +builder.ControllerManagedBy(mgr). + For(&v1alpha3.ReplicatedVolume{}). + Owns(&v1alpha3.ReplicatedVolumeReplica{}). 
+    Complete(rec)
+```
+
+### Условия
+
+| Condition | Логика | Примерный список reasons |
+|-----------|--------|--------------------------|
+| `Scheduled` | ALL `RVR.Scheduled=True` | `AllReplicasScheduled`, `ReplicasNotScheduled`, `SchedulingInProgress` |
+| `BackingVolumeCreated` | ALL Diskful `RVR.BackingVolumeCreated=True` | `AllBackingVolumesReady`, `BackingVolumesNotReady`, `WaitingForBackingVolumes` |
+| `Configured` | ALL `RVR.Configured=True` | `AllReplicasConfigured`, `ReplicasNotConfigured`, `ConfigurationInProgress` |
+| `Initialized` | count(Initialized=True) >= threshold | `Initialized`, `WaitingForReplicas`, `InitializationInProgress` |
+| `Quorum` | count(All InQuorum=True) >= quorum | `QuorumReached`, `QuorumLost`, `QuorumDegraded` |
+| `DataQuorum` | count(Diskful InQuorum=True) >= QMR | `DataQuorumReached`, `DataQuorumLost`, `DataQuorumDegraded` |
+| `IOReady` | count(Diskful IOReady=True) >= threshold | `IOReady`, `InsufficientIOReadyReplicas`, `NoIOReadyReplicas` |
+
+> **Примерный список reasons, добавьте/уберите если необходимо.**
+
+### Счётчики
+
+| Counter | Формат | Описание |
+|---------|--------|----------|
+| `diskfulReplicaCount` | `current/desired` | Diskful реплик |
+| `diskfulReplicasInSync` | `current/total` | InSync Diskful реплик |
+| `attachedAndIOReadyCount` | `current/requested` | Attached + IOReady |
+
+### Вывод
+
+- `rv.status.conditions[type=*]`
+- `rv.status.diskfulReplicaCount`
+- `rv.status.diskfulReplicasInSync`
+- `rv.status.attachedAndIOReadyCount`
+
+---
+
+## Время обнаружения
+
+| Метод | Контроллер | Что обнаруживает | Скорость |
+|-------|------------|------------------|----------|
+| Agent Pod watch | rvr-status-conditions-controller | Agent crash/OOM/evict | ~секунды |
+| Agent Pod watch | rvr-status-conditions-controller | Node failure (pod → Unknown/Failed) | ~секунды |
+| Owns(RVR) | rv-status-conditions-controller | RVR condition changes, quorum loss | ~секунды |
+
+**Как это работает:**
+
+1. **rvr-status-conditions-controller** — смотрит на Agent Pod; если pod недоступен, проверяет Node.Ready и ставит `NodeNotReady` или `AgentNotReady`.
+
+2. **rv-status-conditions-controller** — получает события через `Owns(RVR)`, когда условия RVR меняются (включая изменения от DRBD агентов на других нодах).
+
+**Примечание о DRBD:**
+Если нода падает, DRBD агент на других нодах обнаружит потерю connection и обновит свой `rvr.status.drbd.status.connections[]`. Это триггерит reconcile для `rv-status-conditions-controller` через `Owns(RVR)`.
+
+---
+
+# Влияние на контроллеры (удаление conditions)
+
+### rvr-diskful-count-controller
+
+| Изменение | Описание |
+|-----------|----------|
+| Read: `Ready` → `Initialized` | Проверяем `Initialized=True` вместо `Ready=True` |
+| ❌ Убрать: `DiskfulReplicaCountReached` | Дублирует счётчик `diskfulReplicaCount` |
+
+### rv-status-config-quorum-controller
+
+| Изменение | Описание |
+|-----------|----------|
+| ❌ Убрать: `QuorumConfigured` | Дублирует `quorum != nil` |
+| ❌ Убрать: `AllReplicasReady` | Зависит от удалённого `Ready` |
+| ❌ Убрать: `DiskfulReplicaCountReached` | Использовать счётчик `diskfulReplicaCount` |
+| 🆕 Read: `RV.Configured` | Заменяет все проверки sharedSecret |
+
+**Новая логика `isReadyForQuorum` (пример):**
+```go
+// Пример (условный код). parseDiskfulReplicaCount - см. набросок ниже.
+func isReadyForQuorum(rv *v1alpha3.ReplicatedVolume) bool {
+	current, desired := parseDiskfulReplicaCount(rv.Status.DiskfulReplicaCount)
+	return current >= desired && current > 0 &&
+		meta.IsStatusConditionTrue(rv.Status.Conditions, "Configured")
+}
+```
+
+**Потребители `QuorumConfigured`:** проверять `rv.status.drbd.config.quorum != nil`.
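+
+Вспомогательная функция `parseDiskfulReplicaCount` из примера выше в спецификации не определена; ниже минимальный набросок разбора строковых счётчиков формата `current/desired` (например, `3/3`). Код условный, сигнатура и обработка ошибок - предположение:
+
+```go
+import (
+	"strconv"
+	"strings"
+)
+
+// parseDiskfulReplicaCount разбирает счётчик вида "current/desired".
+// Набросок; при некорректном формате возвращает нули (счётчик считается не заполненным).
+func parseDiskfulReplicaCount(s string) (current, desired int) {
+	parts := strings.SplitN(s, "/", 2)
+	if len(parts) != 2 {
+		return 0, 0
+	}
+	current, _ = strconv.Atoi(parts[0])
+	desired, _ = strconv.Atoi(parts[1])
+	return current, desired
+}
+```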
+ +--- + diff --git a/go.work b/go.work new file mode 100644 index 000000000..ca767b154 --- /dev/null +++ b/go.work @@ -0,0 +1,14 @@ +go 1.24.11 + +use ( + ./api + ./hooks/go + ./images/agent + ./images/controller + ./images/csi-driver + ./images/linstor-drbd-wait + ./images/megatest + ./images/sds-replicated-volume-controller + ./images/webhooks + ./lib/go/common +) diff --git a/go.work.sum b/go.work.sum new file mode 100644 index 000000000..547c8a26c --- /dev/null +++ b/go.work.sum @@ -0,0 +1,525 @@ +bitbucket.org/creachadair/shell v0.0.8/go.mod h1:vINzudofoUXZSJ5tREgpy+Etyjsag3ait5WOWImEVZ0= +bitbucket.org/liamstask/goose v0.0.0-20150115234039-8488cc47d90c/go.mod h1:hSVuE3qU7grINVSwrmzHfpg9k87ALBk+XaualNyUzI4= +buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.4-20250130201111-63bb56e20495.1/go.mod h1:novQBstnxcGpfKf8qGRATqn1anQKwMJIbH5Q581jibU= +cel.dev/expr v0.20.0/go.mod h1:MrpN08Q+lEBs+bGYdLxxHkZoUSsCp0nSKTs0nTymJgw= +cel.dev/expr v0.23.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw= +cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw= +cloud.google.com/go v0.116.0/go.mod h1:cEPSRWPzZEswwdr9BxE6ChEn01dWlTaF05LiC2Xs70U= +cloud.google.com/go v0.121.1/go.mod h1:nRFlrHq39MNVWu+zESP2PosMWA0ryJw8KUBZ2iZpxbw= +cloud.google.com/go/ai v0.8.0/go.mod h1:t3Dfk4cM61sytiggo2UyGsDVW3RF1qGZaUKDrZFyqkE= +cloud.google.com/go/auth v0.15.0/go.mod h1:WJDGqZ1o9E9wKIL+IwStfyn/+s59zl4Bi+1KQNVXLZ8= +cloud.google.com/go/auth v0.16.2/go.mod h1:sRBas2Y1fB1vZTdurouM0AzuYQBMZinrUYL8EufhtEA= +cloud.google.com/go/auth/oauth2adapt v0.2.7/go.mod h1:NTbTTzfvPl1Y3V1nPpOgl2w6d/FjO7NNUQaWSox6ZMc= +cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c= +cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= +cloud.google.com/go/compute v1.19.1/go.mod h1:6ylj3a05WF8leseCdIf77NK0g1ey+nj5IKd5/kvShxE= +cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k= +cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg= +cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo= +cloud.google.com/go/firestore v1.6.1/go.mod h1:asNXNOzBdyVQmEU+ggO8UPodTkEVFW5Qx+rwHnAz+EY= +cloud.google.com/go/iam v1.2.2/go.mod h1:0Ys8ccaZHdI1dEUilwzqng/6ps2YB6vRsjIe00/+6JY= +cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE= +cloud.google.com/go/longrunning v0.5.7/go.mod h1:8GClkudohy1Fxm3owmBGid8W0pSgodEMwEAztp38Xng= +cloud.google.com/go/longrunning v0.6.7/go.mod h1:EAFV3IZAKmM56TyiE6VAP3VoTzhZzySwI/YI1s/nRsY= +cloud.google.com/go/monitoring v1.21.2/go.mod h1:hS3pXvaG8KgWTSz+dAdyzPrGUYmi2Q+WFX8g2hqVEZU= +cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U= +cloud.google.com/go/spanner v1.82.0/go.mod h1:BzybQHFQ/NqGxvE/M+/iU29xgutJf7Q85/4U9RWMto0= +cloud.google.com/go/storage v1.49.0/go.mod h1:k1eHhhpLvrPjVGfo0mOUPEJ4Y2+a/Hv5PiwehZI9qGU= +cloud.google.com/go/storage v1.55.0/go.mod h1:ztSmTTwzsdXe5syLVS0YsbFxXuvEmEyZj7v7zChEmuY= +cloud.google.com/go/trace v1.11.3/go.mod h1:pt7zCYiDSQjC9Y2oqCsh9jF4GStB/hmjrYLsxRR27q8= +contrib.go.opencensus.io/exporter/stackdriver v0.13.14/go.mod h1:5pSSGY0Bhuk7waTHuDf4aQ8D2DrhgETRo9fy6k3Xlzc= +dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= +github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= 
+github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU= +github.com/GoogleCloudPlatform/grpc-gcp-go/grpcgcp v1.5.2/go.mod h1:dppbR7CwXD4pgtV9t3wD1812RaLDcBjtblcDF5f1vI0= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.26.0/go.mod h1:2bIszWvQRlJVmJLiuLhukLImRjKPcYdzzsx6darK02A= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0/go.mod h1:yAZHSGnqScoU556rBOVkwLze6WP5N+U11RHuWaGVxwY= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.48.1/go.mod h1:jyqM3eLpJ3IbIFDTKVz2rF9T/xWGW0rIriGwnz8l9Tk= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.48.1/go.mod h1:viRWSEhtMZqz1rhwmOVKkWl6SwmVowfL9O2YR5gI2PE= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo= +github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE= +github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= +github.com/Masterminds/sprig/v3 v3.3.0/go.mod h1:Zy1iXRYNqNLUolqCpL4uhk6SHUMAOSCzdgBfDb35Lz0= +github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU= +github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g= +github.com/VividCortex/ewma v1.2.0/go.mod h1:nz4BbCtbLyFDeC9SUHbtcT5644juEuWfUAUnGx7j5l4= +github.com/adrg/xdg v0.5.3/go.mod h1:nlTsY+NNiCBGCK2tpm09vRqfVzrc2fLmXGpBLF0zlTQ= +github.com/ajeddeloh/go-json v0.0.0-20200220154158-5ae607161559/go.mod h1:otnto4/Icqn88WCcM4bhIJNSgsh9VLBuspyyCfvof9c= +github.com/alecthomas/kingpin/v2 v2.4.0/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE= +github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= +github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE= +github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= +github.com/antlr4-go/antlr/v4 v4.13.0/go.mod h1:pfChB/xh/Unjila75QW7+VU4TSnWnnk9UTnmpPaOR2g= +github.com/armon/go-metrics v0.3.10/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc= +github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= +github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY= +github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw= +github.com/avast/retry-go/v4 v4.6.1/go.mod h1:V6oF8njAwxJ5gRo1Q7Cxab24xs5NCWZBeaHHBklR8mA= +github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= +github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8= +github.com/bgentry/speakeasy v0.2.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= +github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk= +github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ= +github.com/bufbuild/protocompile v0.14.1/go.mod 
h1:ppVdAIhbr2H8asPk6k4pY7t9zB1OU5DoEw9xY/FUi1c= +github.com/bufbuild/protovalidate-go v0.9.1/go.mod h1:5jptBxfvlY51RhX32zR6875JfPBRXUsQjyZjm/NqkLQ= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= +github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw= +github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw= +github.com/charmbracelet/colorprofile v0.3.1/go.mod h1:/GkGusxNs8VB/RSOh3fu0TJmQ4ICMMPApIIVn0KszZ0= +github.com/charmbracelet/lipgloss v1.1.0/go.mod h1:/6Q8FR2o+kj8rz4Dq0zQc3vYf7X+B0binUUBwA0aL30= +github.com/charmbracelet/x/ansi v0.9.2/go.mod h1:3RQDQ6lDnROptfpWuUVIUG64bD2g2BgntdxH0Ya5TeE= +github.com/charmbracelet/x/cellbuf v0.0.13/go.mod h1:xe0nKWGd3eJgtqZRaN9RjMtK7xUYchjzPr7q6kcvCCs= +github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg= +github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E= +github.com/cheggaaa/pb/v3 v3.1.6/go.mod h1:urxmfVtaxT+9aWk92DbsvXFZtNSWQSO5TRAp+MJ3l1s= +github.com/chzyer/logex v1.2.1/go.mod h1:JLbx6lG2kDbNRFnfkgvh4eRJRPX1QCoOIWomwysCBrQ= +github.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObkaSkeBlk= +github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8= +github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA= +github.com/cloudflare/backoff v0.0.0-20161212185259-647f3cdfc87a/go.mod h1:rzgs2ZOiguV6/NpiDgADjRLPNyZlApIWxKpkT+X8SdY= +github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs= +github.com/cloudflare/redoctober v0.0.0-20211013234631-6a74ccc611f6/go.mod h1:Ikt4Wfpln1YOrak+auA8BNxgiilj0Y2y7nO+aN2eMzk= +github.com/cncf/udpa/go v0.0.0-20220112060539-c52dc94e7fbe/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI= +github.com/cncf/xds/go v0.0.0-20250121191232-2f005788dc42/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8= +github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8= +github.com/cockroachdb/datadriven v1.0.2/go.mod h1:a9RdTaap04u637JoCzcUoIcDmvwSUtcUFtT/C3kJlTU= +github.com/container-storage-interface/spec v1.11.0/go.mod h1:DtUvaQszPml1YJfIK7c00mlv6/g4wNMLanLgiUbKFRI= +github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U= +github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M= +github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk= +github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo= +github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw= +github.com/coredns/caddy v1.1.1/go.mod h1:A6ntJQlAWuQfFlsd9hvigKbo2WS0VUs2l1e2F+BawD4= +github.com/coredns/corefile-migration v1.0.29/go.mod h1:56DPqONc3njpVPsdilEnfijCwNGC3/kTJLl7i7SPavY= +github.com/coreos/go-oidc v2.3.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc= +github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec= +github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU= +github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= +github.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= 
+github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4= +github.com/cristalhq/acmd v0.12.0/go.mod h1:LG5oa43pE/BbxtfMoImHCQN++0Su7dzipdgBjMCBVDQ= +github.com/cyberphone/json-canonicalization v0.0.0-20241213102144-19d51d7fe467/go.mod h1:uzvlm1mxhHkdfqitSA92i7Se+S9ksOn3a3qmv/kyOCw= +github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4= +github.com/danieljoos/wincred v1.2.2/go.mod h1:w7w4Utbrz8lqeMbDAK0lkNJUv5sAOkFi7nd/ogr0Uh8= +github.com/digitorus/pkcs7 v0.0.0-20230818184609-3a137a874352/go.mod h1:SKVExuS+vpu2l9IoOc0RwqE7NYnb0JlcFHFnEJkVDzc= +github.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7/go.mod h1:GvWntX9qiTlOud0WkQ6ewFm0LPy5JUR1Xo0Ngbd1w6Y= +github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E= +github.com/docker/docker v28.2.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc= +github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= +github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= +github.com/drone/envsubst/v2 v2.0.0-20210730161058-179042472c46/go.mod h1:esf2rsHFNlZlxsqsZDojNBcnNs5REqIvRrWRHqX0vEU= +github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto= +github.com/ebitengine/purego v0.8.2/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ= +github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA= +github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw= +github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4= +github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= +github.com/flatcar/container-linux-config-transpiler v0.9.4/go.mod h1:LxanhPvXkWgHG9PrkT4rX/p7YhUPdDGGsUdkNpV3L5U= +github.com/flatcar/ignition v0.36.2/go.mod h1:uk1tpzLFRXus4RrvzgMI+IqmmB8a/RGFSBlI+tMTbbA= +github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fullstorydev/grpcurl v1.9.3/go.mod h1:/b4Wxe8bG6ndAjlfSUjwseQReUDUvBJiFEB7UllOlUE= +github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= +github.com/getsentry/sentry-go v0.11.0/go.mod h1:KBQIxiZAetw62Cj8Ri964vAEWVdgfaUCn30Q3bCvANo= +github.com/globocom/go-buffer v1.2.2/go.mod h1:kY1ALQS0ChiiThmWhsFoT5CYSiuad0t3keIew5LsWdM= +github.com/go-chi/chi v4.1.2+incompatible/go.mod h1:eB3wogJHnLi3x/kFX2A+IbTBlXxmMeXJVKy9tTv1XzQ= +github.com/go-jose/go-jose/v4 v4.0.4/go.mod h1:NKb5HO1EZccyMpiZNbdUw/14tiXNyUJh188dfnMCAfc= +github.com/go-jose/go-jose/v4 v4.0.5/go.mod h1:s3P1lRrkT8igV8D9OjyL4WRyHvjB6a4JSllnOrmmBOA= +github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0= +github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs= +github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= +github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= 
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= +github.com/go-openapi/analysis v0.23.0/go.mod h1:9mz9ZWaSlV8TvjQHLl2mUW2PbZtemkE8yA5v22ohupo= +github.com/go-openapi/errors v0.22.1/go.mod h1:+n/5UdIqdVnLIJ6Q9Se8HNGUXYaY6CN8ImWzfi/Gzp0= +github.com/go-openapi/jsonreference v0.20.1/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= +github.com/go-openapi/loads v0.22.0/go.mod h1:yLsaTCS92mnSAZX5WWoxszLj0u+Ojl+Zs5Stn1oF+rs= +github.com/go-openapi/runtime v0.28.0/go.mod h1:QN7OzcS+XuYmkQLw05akXk0jRH/eZ3kb18+1KwW9gyc= +github.com/go-openapi/spec v0.21.0/go.mod h1:78u6VdPw81XU44qEWGhtr982gJ5BWg2c0I5XwVMotYk= +github.com/go-openapi/strfmt v0.23.0/go.mod h1:NrtIpfKtWIygRkKVsxh7XQMDQW5HKQl6S5ik2elW+K4= +github.com/go-openapi/validate v0.24.0/go.mod h1:iyeX1sEufmv3nPbBdX3ieNviWnOZaJ1+zquzJEf2BAQ= +github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI= +github.com/go-sql-driver/mysql v1.9.2/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU= +github.com/gobuffalo/flect v1.0.3/go.mod h1:A5msMlrHtLqh9umBSnvabjsMrCcCpAyzglnDvkbYKHs= +github.com/gofrs/uuid/v5 v5.3.0/go.mod h1:CDOjlDMVAtN56jqyRUZh58JT31Tiw7/oQyEXZV+9bD8= +github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk= +github.com/golang/glog v1.2.4/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w= +github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= +github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw= +github.com/golang/mock v1.7.0-rc.1/go.mod h1:s42URUywIqd+OcERslBJvOjepvNymP31m3q8d/GkuRs= +github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= +github.com/golangci/modinfo v0.3.3/go.mod h1:wytF1M5xl9u0ij8YSvhkEVPP3M5Mc7XLl1pxH3B2aUM= +github.com/google/cel-go v0.26.0/go.mod h1:A9O8OU9rdvrK5MQyrqfIxo1a0u4g3sF8KB6PUIaryMM= +github.com/google/generative-ai-go v0.19.0/go.mod h1:JYolL13VG7j79kM5BtHz4qwONHkeJQzOCkKXnpqtS/E= +github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw= +github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-github/v53 v53.2.0/go.mod h1:XhFRObz+m/l+UCm9b7KSIC3lT3NWSXGt7mOsAWEloao= +github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8/go.mod h1:K1liHPHnj73Fdn/EKuT8nrFqBihUSKXoLYU0BuatOYo= +github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144= +github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM= +github.com/google/trillian v1.7.2/go.mod h1:mfQJW4qRH6/ilABtPYNBerVJAJ/upxHLX81zxNQw05s= +github.com/googleapis/enterprise-certificate-proxy v0.3.4/go.mod h1:YKe7cfqYXjKGpGvmSg28/fFvhNzinZQm8DGnaburhGA= +github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA= 
+github.com/googleapis/gax-go/v2 v2.14.1/go.mod h1:Hb/NubMaVM88SrNkvl8X/o8XWwDJEPqouaLeN2IUxoA= +github.com/googleapis/gax-go/v2 v2.14.2/go.mod h1:ON64QhlJkhVtSqp4v1uaK92VyZ2gmvDQsweuyLV+8+w= +github.com/gookit/color v1.5.4/go.mod h1:pZJOeOS8DM43rXbp4AZo1n9zCU2qjpcRko0b6/QJi9w= +github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ= +github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= +github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA= +github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA= +github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y= +github.com/grpc-ecosystem/go-grpc-middleware v1.4.0/go.mod h1:g5qyo/la0ALbONm6Vbp88Yd8NsDy6rZz+RcrMPxvld8= +github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1/go.mod h1:lXGCsh6c22WGtjr+qGHj1otzZpV/1kwTMAqkwZsnWRU= +github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.0/go.mod h1:qOchhhIlmRcqk/O9uCo/puJlyo07YINaIqdZfZG3Jkc= +github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= +github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1/go.mod h1:Zanoh4+gvIgluNqcfMVTJueD4wSS5hT7zTt4Mrutd90= +github.com/hashicorp/consul/api v1.12.0/go.mod h1:6pVBMo0ebnYdt2S3H87XhekM/HHrUoTD2XXb/VrZVy0= +github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= +github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48= +github.com/hashicorp/go-hclog v1.2.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ= +github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc= +github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60= +github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM= +github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8= +github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc= +github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hashicorp/serf v0.9.7/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4= +github.com/hexdigest/gowrap v1.4.3/go.mod h1:XWL8oQW2H3fX5ll8oT3Fduh4mt2H3cUAGQHQLMUbmG4= +github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= +github.com/iancoleman/strcase v0.3.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho= +github.com/ianlancetaylor/demangle v0.0.0-20250417193237-f615e6bd150b/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw= +github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY= +github.com/in-toto/attestation v1.1.2/go.mod h1:gYFddHMZj3DiQ0b62ltNi1Vj5rC879bTmBbrv9CRHpM= +github.com/in-toto/in-toto-golang v0.9.0/go.mod h1:xsBVrVsHNsB61++S6Dy2vWosKhuA3lUTQd+eF9HdeMo= +github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg= +github.com/jackc/pgservicefile 
v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM= +github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM= +github.com/jackc/pgx/v5 v5.4.3/go.mod h1:Ig06C2Vu0t5qXC60W8sqIthScaEnFvojjj9dSljmHRA= +github.com/jackc/pgx/v5 v5.7.5/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M= +github.com/jackc/puddle/v2 v2.2.1/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4= +github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4= +github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267/go.mod h1:h1nSAbGFqGVzn6Jyl1R/iCcBUHN4g+gW1u9CoBTrb9E= +github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= +github.com/jessevdk/go-flags v1.6.1/go.mod h1:Mk8T1hIAWpOiJiHa9rJASDK2UGWji0EuPGBnNLMooyc= +github.com/jhump/protoreflect v1.17.0/go.mod h1:h9+vUUL38jiBzck8ck+6G/aeMX8Z4QUY/NiJPwPNi+8= +github.com/jmespath/go-jmespath v0.4.1-0.20220621161143-b0104c826a24/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= +github.com/jmhodges/clock v1.2.0/go.mod h1:qKjhA7x7u/lQpPB1XAqX1b1lCI/w3/fNuYpI/ZjLynI= +github.com/jmoiron/sqlx v1.3.5/go.mod h1:nRVWtLre0KfCLJvgxzCsLVMogSvQ1zNJtpYr2Ccp0mQ= +github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4= +github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM= +github.com/keybase/go-keychain v0.0.1/go.mod h1:PdEILRW3i9D8JcdM+FmY6RwkHGnhHxXwkPPMeUgOK1k= +github.com/kisielk/sqlstruct v0.0.0-20201105191214-5f3e10d3ab46/go.mod h1:yyMNCyc/Ib3bDTKd379tNMpB/7/H5TjM2Y9QJ5THLbE= +github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0= +github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg= +github.com/kubernetes-csi/csi-lib-utils v0.21.0/go.mod h1:ZCVRTYuup+bwX9tOeE5Q3LDw64QvltSwMUQ3M3g2T+Q= +github.com/kylelemons/go-gypsy v1.0.0/go.mod h1:chkXM0zjdpXOiqkCW1XcCHDfjfk14PH2KKkQWxfJUcU= +github.com/letsencrypt/boulder v0.0.0-20240620165639-de9c06129bec/go.mod h1:TmwEoGCwIti7BCeJ9hescZgRtatxRE+A72pCoPfmcfk= +github.com/letsencrypt/pkcs11key/v4 v4.0.0/go.mod h1:EFUvBDay26dErnNb70Nd0/VW3tJiIbETBPTl9ATXQag= +github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0= +github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I= +github.com/lyft/protoc-gen-star/v2 v2.0.4-0.20230330145011-496ad1ac90a4/go.mod h1:amey7yeodaJhXSbf/TlLvWiqQfLOSpEk//mLlc+axEk= +github.com/magefile/mage v1.14.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A= +github.com/magiconair/properties v1.8.6/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60= +github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg= +github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM= +github.com/mattn/go-sqlite3 v1.14.28/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y= +github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= +github.com/mgechev/dots v0.0.0-20210922191527-e955255bf517/go.mod h1:KQ7+USdGKfpPjXk4Ga+5XxQM4Lm4e3gAogrreFAYpOg= +github.com/miekg/pkcs11 v1.1.1/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs= +github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s= +github.com/mitchellh/mapstructure 
v1.5.1-0.20231216201459-8508981c8b6c/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= +github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= +github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo= +github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI= +github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs= +github.com/moby/term v0.0.0-20221205130635-1aeaba878587/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y= +github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y= +github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc= +github.com/mozilla/tls-observatory v0.0.0-20210609171429-7bc42856d2e5/go.mod h1:FUqVoUPHSEdDR0MnFM3Dh8AU0pZHLXUD127SAJGER/s= +github.com/mreiferson/go-httpclient v0.0.0-20201222173833-5e475fde3a4d/go.mod h1:OQA4XLvDbMgS8P0CevmM4m9Q3Jq4phKUzcocxuGJ5m8= +github.com/mrunalp/fileutils v0.5.1/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ= +github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk= +github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= +github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw= +github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU= +github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= +github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE= +github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU= +github.com/onsi/ginkgo/v2 v2.19.0/go.mod h1:rlwLi9PilAFJ8jCg9UE1QP6VBpd6/xj3SRC0d6TU0To= +github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo= +github.com/onsi/ginkgo/v2 v2.22.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo= +github.com/onsi/ginkgo/v2 v2.23.4/go.mod h1:Bt66ApGPBFzHyR+JO10Zbt0Gsp4uWxu5mIOTusL46e8= +github.com/onsi/ginkgo/v2 v2.25.3/go.mod h1:43uiyQC4Ed2tkOzLsEYm7hnrb7UJTWHYNsuy3bG/snE= +github.com/onsi/gomega v1.33.1/go.mod h1:U4R44UsT+9eLIaYRB2a5qajjtQYn0hauxvRm16AVYg0= +github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog= +github.com/onsi/gomega v1.36.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog= +github.com/onsi/gomega v1.38.0/go.mod h1:OcXcwId0b9QsE7Y49u+BTrL4IdKOBOKnD6VQNTJEB6o= +github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk= +github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI= +github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc= +github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU= +github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d/go.mod h1:3OzsM7FXDQlpCiw2j81fOmAwQLnZnLGXVKUzeKQXIAw= +github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= +github.com/pkg/sftp v1.13.7/go.mod h1:KMKI0t3T6hfA+lTR/ssZdunHo+uwq7ghoN09/FSu3DY= +github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8= +github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE= 
+github.com/pquerna/cachecontrol v0.1.0/go.mod h1:NrUG3Z7Rdu85UNR3vm7SOsl1nFIeSiQnrHV5K9mBcUI= +github.com/prometheus/client_golang v1.20.4/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE= +github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE= +github.com/prometheus/common v0.61.0/go.mod h1:zr29OCN/2BsJRaFwG8QOBr41D6kkchKbpeNH7pAjb/s= +github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is= +github.com/prometheus/prometheus v0.51.0/go.mod h1:yv4MwOn3yHMQ6MZGHPg/U7Fcyqf+rxqiZfSur6myVtc= +github.com/quasilyte/go-ruleguard/rules v0.0.0-20211022131956-028d6511ab71/go.mod h1:4cgAphtvu7Ftv7vOT2ZOYhC6CvBxZixcasr8qIOTA50= +github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= +github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= +github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= +github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= +github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= +github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o= +github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU= +github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY= +github.com/sagikazarmark/crypt v0.6.0/go.mod h1:U8+INwJo3nBv1m6A/8OBXAq7Jnpspk5AxSgDyEQcea8= +github.com/santhosh-tekuri/jsonschema/v5 v5.3.1/go.mod h1:uToXkOrWAZ6/Oc07xWQrPOhJotwFIyu2bBVN41fcDUY= +github.com/sassoftware/relic v7.2.1+incompatible/go.mod h1:CWfAxv73/iLZ17rbyhIEq3K9hs5w6FpNMdUT//qR+zk= +github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg= +github.com/secure-systems-lab/go-securesystemslib v0.9.0/go.mod h1:DVHKMcZ+V4/woA/peqr+L0joiRXbPpQ042GgJckkFgw= +github.com/shibumi/go-pathspec v1.3.0/go.mod h1:Xutfslp817l2I1cZvgcfeMQJG5QnU2lh5tVaaMCl3jE= +github.com/shirou/gopsutil/v4 v4.25.2/go.mod h1:34gBYJzyqCDT11b6bMHP0XCvWeU3J61XRT7a2EmCRTA= +github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME= +github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= +github.com/sigstore/cosign/v2 v2.5.3/go.mod h1:eihZ0ZZyx7dtrwQA3UbkQLetICc2HAiJ8jnt8aMfSvI= +github.com/sigstore/protobuf-specs v0.5.0/go.mod h1:+gXR+38nIa2oEupqDdzg4qSBT0Os+sP7oYv6alWewWc= +github.com/sigstore/rekor v1.3.10/go.mod h1:JvryKJ40O0XA48MdzYUPu0y4fyvqt0C4iSY7ri9iu3A= +github.com/sigstore/rekor-tiles v0.1.7-0.20250624231741-98cd4a77300f/go.mod h1:1Epq0PQ73v5Z276rAY241JyaP8gtD64I6sgYIECHPvc= +github.com/sigstore/sigstore v1.9.5/go.mod h1:VtxgvGqCmEZN9X2zhFSOkfXxvKUjpy8RpUW39oCtoII= +github.com/sigstore/sigstore-go v1.1.0/go.mod h1:97lDVpZVBCTFX114KPAManEsShVe934KyaVhZGhPVBM= +github.com/sigstore/timestamp-authority v1.2.8/go.mod h1:G2/0hAZmLPnevEwT1S9IvtNHUm9Ktzvso6xuRhl94ZY= +github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0= +github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0= +github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo= +github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g= +github.com/stoewer/go-strcase v1.3.0/go.mod h1:fAH5hQ5pehh+j3nZfvwdk2RgEgQjAoM8wodgtPmh1xo= 
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.5/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww= +github.com/theupdateframework/go-tuf v0.7.0/go.mod h1:uEB7WSY+7ZIugK6R1hiBMBjQftaFzn7ZCDJcp1tCUug= +github.com/theupdateframework/go-tuf/v2 v2.1.1/go.mod h1:V675cQGhZONR0OGQ8r1feO0uwtsTBYPDWHzAAPn5rjE= +github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399/go.mod h1:LdwHTNJT99C5fTAzDz0ud328OgXz+gierycbcIx2fRs= +github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI= +github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY= +github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75/go.mod h1:KO6IkyS8Y3j8OdNO85qEYBsRPuteD+YciPomcXdrMnk= +github.com/tomasen/realip v0.0.0-20180522021738-f0c99a92ddce/go.mod h1:o8v6yHRoik09Xen7gje4m9ERNah1d1PPsVq1VEx9vE4= +github.com/transparency-dev/formats v0.0.0-20250421220931-bb8ad4d07c26/go.mod h1:ODywn0gGarHMMdSkWT56ULoK8Hk71luOyRseKek9COw= +github.com/transparency-dev/merkle v0.0.2/go.mod h1:pqSy+OXefQ1EDUVmAJ8MUhHB9TXGuzVAT58PqBoHz1A= +github.com/transparency-dev/tessera v0.2.1-0.20250610150926-8ee4e93b2823/go.mod h1:Jv2IDwG1q8QNXZTaI1X6QX8s96WlJn73ka2hT1n4N5c= +github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0= +github.com/urfave/cli v1.22.16/go.mod h1:EeJR6BKodywf4zciqrdw6hpCPk68JO9z5LazXZMn5Po= +github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc= +github.com/valyala/fastjson v1.6.4/go.mod h1:CLCAqky6SMuOcxStkYQvblddUtoRxhYMGLrsQns1aXY= +github.com/valyala/quicktemplate v1.8.0/go.mod h1:qIqW8/igXt8fdrUln5kOSb+KWMaJ4Y8QUsfd1k6L2jM= +github.com/vincent-petithory/dataurl v1.0.0/go.mod h1:FHafX5vmDzyP+1CQATJn7WFKc9CvnvxyvZy6I1MrG/U= +github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE= +github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU= +github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU= +github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= +github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs= +github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM= +github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0= +github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4= +github.com/ziutek/mymysql v1.5.4/go.mod h1:LMSpPZ6DbqWFxNCHW77HeMg9I646SAhApZ/wKdgO/C0= +github.com/zmap/zcertificate v0.0.1/go.mod h1:q0dlN54Jm4NVSSuzisusQY0hqDWvu92C+TWveAxiVWk= +go.etcd.io/bbolt v1.4.2/go.mod h1:Is8rSHO/b4f3XigBC0lL0+4FwAQv3HXEEIgFMuKHceM= +go.etcd.io/etcd/api/v3 v3.6.4/go.mod h1:eFhhvfR8Px1P6SEuLT600v+vrhdDTdcfMzmnxVXXSbk= +go.etcd.io/etcd/client/pkg/v3 v3.6.4/go.mod h1:sbdzr2cl3HzVmxNw//PH7aLGVtY4QySjQFuaCgcRFAI= +go.etcd.io/etcd/client/v2 v2.305.4/go.mod h1:Ud+VUwIi9/uQHOMA+4ekToJ12lTxlv0zB/+DHwTGEbU= +go.etcd.io/etcd/client/v2 
v2.305.21/go.mod h1:OKkn4hlYNf43hpjEM3Ke3aRdUkhSl8xjKjSf8eCq2J8= +go.etcd.io/etcd/client/v3 v3.6.4/go.mod h1:jaNNHCyg2FdALyKWnd7hxZXZxZANb0+KGY+YQaEMISo= +go.etcd.io/etcd/etcdctl/v3 v3.6.0/go.mod h1:ukAtyfIbiTajTDRfXruqUluVGvqcn/aGn0HEWdnzWC4= +go.etcd.io/etcd/etcdutl/v3 v3.6.0/go.mod h1:gheEcr7WMMV9TN+TvXSxP9ixk8Bg5Lwp63uz1OANeKg= +go.etcd.io/etcd/pkg/v3 v3.6.4/go.mod h1:kKcYWP8gHuBRcteyv6MXWSN0+bVMnfgqiHueIZnKMtE= +go.etcd.io/etcd/raft/v3 v3.5.21/go.mod h1:fmcuY5R2SNkklU4+fKVBQi2biVp5vafMrWUEj4TJ4Cs= +go.etcd.io/etcd/server/v3 v3.6.4/go.mod h1:aYCL/h43yiONOv0QIR82kH/2xZ7m+IWYjzRmyQfnCAg= +go.etcd.io/etcd/tests/v3 v3.6.0/go.mod h1:wuyuwvXTF33++K6kQtpsMrbsISxCQZNbVGpFgx63E9w= +go.etcd.io/etcd/v3 v3.6.0/go.mod h1:0sMPTfyOUZNFRYJEweFWFmr2vppoupl4gBiDF/IB7ng= +go.etcd.io/gofail v0.2.0/go.mod h1:nL3ILMGfkXTekKI3clMBNazKnjUZjYLKmBHzsVAnC1o= +go.etcd.io/raft/v3 v3.6.0/go.mod h1:nLvLevg6+xrVtHUmVaTcTz603gQPHfh7kUAwV6YpfGo= +go.mongodb.org/mongo-driver v1.14.0/go.mod h1:Vzb0Mk/pa7e6cWw85R4F/endUC3u0U9jGcNU603k65c= +go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= +go.opentelemetry.io/contrib/detectors/gcp v1.34.0/go.mod h1:cV4BMFcscUR/ckqLkbfQmF0PRsq8w/lMGzdbCSveBHo= +go.opentelemetry.io/contrib/detectors/gcp v1.35.0/go.mod h1:qGWP8/+ILwMRIUf9uIVLloR1uo5ZYAslM4O6OqUi1DA= +go.opentelemetry.io/contrib/detectors/gcp v1.36.0/go.mod h1:IbBN8uAIIx734PTonTPxAxnjc2pQTxWNkwfstZ+6H2k= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.58.0/go.mod h1:HDBUsEjOuRC0EzKZ1bSaRGZWUBAzo+MhAcUUORSr4D0= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0/go.mod h1:rg+RlpR5dKwaS95IyyZqj5Wd4E13lk/msnTS0Xl9lJM= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0/go.mod h1:umTcuxiv1n/s/S6/c2AT/g2CQ7u5C59sHDNmfSwgz7Q= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.59.0/go.mod h1:FRmFuRJfag1IZ2dPkHnEoSFVgTVPUd2qf5Vi69hLb8I= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q= +go.opentelemetry.io/otel v1.33.0/go.mod h1:SUUkR6csvUQl+yjReHu5uM3EtVV7MBm5FHKRlNx4I8I= +go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI= +go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0/go.mod h1:7Bept48yIeqxP2OZ9/AqIpYS94h2or0aB4FypJTc8ZM= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0/go.mod h1:U7HYyW0zt/a9x5J1Kjs+r1f/d4ZHnYFclhYY2+YbeoE= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.33.0/go.mod h1:wAy0T/dUbs468uOlkT31xjvqQgEVXv58BRFWEgn5v/0= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.29.0/go.mod h1:2uL/xnOXh0CHOBFCWXz5u1A4GXLiW+0IQIzVbeOEQ0U= +go.opentelemetry.io/otel/metric v1.33.0/go.mod h1:L9+Fyctbp6HFTddIxClbQkjtubW6O9QS3Ann/M82u6M= +go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE= +go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs= +go.opentelemetry.io/otel/sdk v1.35.0/go.mod h1:+ga1bZliga3DxJ3CQGg3updiaAJoNECOgJREo9KHGQg= +go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY= +go.opentelemetry.io/otel/sdk/metric v1.35.0/go.mod h1:is6XYCUMpcKi+ZsOvfluY5YstFnhW0BidkR+gL+qN+w= 
+go.opentelemetry.io/otel/trace v1.33.0/go.mod h1:uIcdVUZMpTAmz0tI1z04GoVSezK37CbGV4fr1f2nBck= +go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE= +go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA= +go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4= +go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= +go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o= +go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ= +go.yaml.in/yaml/v3 v3.0.3/go.mod h1:tBHosrYAkRZjRAOREWbDnBXUf08JOwYq++0QNwQiWzI= +go4.org v0.0.0-20201209231011-d4a079459e60/go.mod h1:CIiUVy99QCPfoE13bO4EZaz5GZMZXMSBGhxRdsvzbkg= +golang.org/x/crypto v0.33.0/go.mod h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M= +golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc= +golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY= +golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.21.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY= +golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww= +golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww= +golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI= +golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20211123203042-d83791d6bcd9/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4= +golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk= +golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8= +golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8= +golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E= +golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds= +golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA= +golang.org/x/net v0.45.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY= +golang.org/x/oauth2 v0.24.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI= +golang.org/x/oauth2 v0.26.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI= +golang.org/x/oauth2 v0.28.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= +golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU= +golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys 
v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= +golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= +golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= +golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8/go.mod h1:Pi4ztBfryZoJEkyFTI5/Ocsu2jXyDr6iSdgJiYE/uwE= +golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g= +golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ= +golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ= +golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY= +golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4= +golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU= +golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA= +golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA= +golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/time v0.9.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/tools v0.16.1/go.mod h1:kYVVN6I1mBNoB1OX+noeBjbRk4IUEPa7JJ+TJMEooJ0= +golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0= +golang.org/x/tools v0.32.0/go.mod h1:ZxrU41P/wAbZD8EDa6dDCa6XfpkhJ7HFMjHJXfBDu8s= +golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI= +golang.org/x/tools v0.37.0/go.mod h1:MBN5QPQtLMHVdvsbtarmTNukZDdgwdwlO5qGacAzF0w= +golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= +google.golang.org/api v0.223.0/go.mod h1:C+RS7Z+dDwds2b+zoAk5hN/eSfsiCn0UDrYof/M4d2M= +google.golang.org/api v0.241.0/go.mod h1:cOVEm2TpdAGHL2z+UwyS+kmlGr3bVWQQ6sYEqkKje50= +google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= +google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 h1:ToEetK57OidYuqD4Q5w+vfEnPvPpuTwedCNVohYJfNk= +google.golang.org/genproto v0.0.0-20241118233622-e639e219e697/go.mod h1:JJrvXBWRZaFMxBufik1a4RpFw4HhgVtBBWQeQgUj2cc= +google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2 h1:1tXaIXCracvtsRxSBsYDiSBN0cuJvM7QYW+MrpIRY78= +google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2/go.mod h1:49MsLSx0oWMOZqcpB3uL8ZOkAh1+TndpJ8ONoCBWiZk= +google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a/go.mod h1:3kWAYMk1I75K4vykHtKt2ycnOgpA6974V7bREqbsenU= +google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:jbe3Bkdp+Dh2IrslsFCklNhweNTBgSYanP1UXhJDhKg= +google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463/go.mod h1:U90ffi8eUL9MwPcrJylN5+Mk2v3vuPDptd5yyNUiRR8= +google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc= +google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5/go.mod h1:zBEcrKX2ZOcEkHWxBPAIvYUWOKKMIhYcmNiUIu2ji3I= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240826202546-f6391c0de4c7/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU= 
+google.golang.org/genproto/googleapis/rpc v0.0.0-20241216192217-9240e9c98484/go.mod h1:lcTa1sDdWEIHMWlITnIczmw5w60CF9ffkb8Z+DVmmjA= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a/go.mod h1:uRxBH1mhmO8PGhU89cMcHaXKZqO+OfakD8QQO0oYwlQ= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250227231956-55c901821b1e/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= +google.golang.org/grpc v1.57.1/go.mod h1:Sd+9RMTACXwmub0zcNY2c4arhtrbBYD1AUHI/dt16Mo= +google.golang.org/grpc v1.67.1/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA= +google.golang.org/grpc v1.69.0/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4= +google.golang.org/grpc v1.71.0/go.mod h1:H0GRtasmQOh9LkFoCPDu3ZrwUtD1YGE+b2vYBYd/8Ec= +google.golang.org/grpc v1.72.1/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM= +google.golang.org/grpc/examples v0.0.0-20230224211313-3775f633ce20/go.mod h1:Nr5H8+MlGWr5+xX/STzdoEqJrO+YteqFbMyCsrb6mH0= +google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= +google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw= +google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= +google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= +google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= +google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY= +google.golang.org/protobuf v1.36.8/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/go-jose/go-jose.v2 v2.6.3/go.mod h1:zzZDPkNNw/c9IE7Z9jr11mBZQhKQTMzoEEIoEdZlFBI= +gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= +gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc= +gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= +k8s.io/api v0.32.1/go.mod h1:/Yi/BqkuueW1BgpoePYBRdDYfjPF5sgTr5+YqDZra5k= +k8s.io/api v0.33.0/go.mod h1:CTO61ECK/KU7haa3qq8sarQ0biLq2ju405IZAd9zsiM= +k8s.io/apiextensions-apiserver v0.33.0/go.mod h1:VeJ8u9dEEN+tbETo+lFkwaaZPg6uFKLGj5vyNEwwSzc= +k8s.io/apimachinery v0.32.3/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE= +k8s.io/apimachinery v0.33.0/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM= +k8s.io/apimachinery v0.34.1/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/apiserver v0.34.1/go.mod h1:eOOc9nrVqlBI1AFCvVzsob0OxtPZUCPiUJL45JOTBG0= +k8s.io/apiserver v0.34.3/go.mod h1:QPnnahMO5C2m3lm6fPW3+JmyQbvHZQ8uudAu/493P2w= +k8s.io/client-go v0.32.1/go.mod h1:aTTKZY7MdxUaJ/KiUs8D+GssR9zJZi77ZqtzcGXIiDg= +k8s.io/cluster-bootstrap v0.33.3/go.mod h1:p970f8u8jf273zyQ5raD8WUu2XyAl0SAWOY82o7i/ds= +k8s.io/code-generator v0.33.0/go.mod 
h1:KnJRokGxjvbBQkSJkbVuBbu6z4B0rC7ynkpY5Aw6m9o= +k8s.io/code-generator v0.34.1/go.mod h1:DeWjekbDnJWRwpw3s0Jat87c+e0TgkxoR4ar608yqvg= +k8s.io/code-generator v0.34.3/go.mod h1:oW73UPYpGLsbRN8Ozkhd6ZzkF8hzFCiYmvEuWZDroI4= +k8s.io/component-base v0.32.1/go.mod h1:j1iMMHi/sqAHeG5z+O9BFNCF698a1u0186zkjMZQ28w= +k8s.io/component-base v0.34.1/go.mod h1:mknCpLlTSKHzAQJJnnHVKqjxR7gBeHRv0rPXA7gdtQ0= +k8s.io/component-base v0.34.3/go.mod h1:5iIlD8wPfWE/xSHTRfbjuvUul2WZbI2nOUK65XL0E/c= +k8s.io/gengo/v2 v2.0.0-20250207200755-1244d31929d7/go.mod h1:EJykeLsmFC60UQbYJezXkEsG2FLrt0GPNkU5iK5GWxU= +k8s.io/gengo/v2 v2.0.0-20250604051438-85fd79dbfd9f/go.mod h1:EJykeLsmFC60UQbYJezXkEsG2FLrt0GPNkU5iK5GWxU= +k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y= +k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= +k8s.io/kms v0.34.1/go.mod h1:s1CFkLG7w9eaTYvctOxosx88fl4spqmixnNpys0JAtM= +k8s.io/kms v0.34.3/go.mod h1:s1CFkLG7w9eaTYvctOxosx88fl4spqmixnNpys0JAtM= +k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7/go.mod h1:GewRfANuJ70iYzvn+i4lezLDAFzvjxZYK1gn1lWcfas= +k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8= +k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/utils v0.0.0-20241210054802-24370beab758/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw= +sigs.k8s.io/controller-runtime v0.20.4/go.mod h1:xg2XB0K5ShQzAgsoujxuKN4LNXR2LfwwHsPj7Iaw+XY= +sigs.k8s.io/controller-tools v0.18.0/go.mod h1:gLKoiGBriyNh+x1rWtUQnakUYEujErjXs9pf+x/8n1U= +sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= +sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo= +sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v4 v4.5.0/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4= +sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps= +sigs.k8s.io/structured-merge-diff/v6 v6.2.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= diff --git a/hack/boilerplate.txt b/hack/boilerplate.txt new file mode 100644 index 000000000..5749b43c6 --- /dev/null +++ b/hack/boilerplate.txt @@ -0,0 +1,15 @@ +/* +Copyright YEAR Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ \ No newline at end of file diff --git a/hack/build_prototype.sh b/hack/build_prototype.sh new file mode 100644 index 000000000..e8e382323 --- /dev/null +++ b/hack/build_prototype.sh @@ -0,0 +1,45 @@ +#!/bin/bash + +# Copyright 2025 Flant JSC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set -e + +cd images/agent +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s -w" -o ./out ./cmd +rm -f ./out +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go test ./... +echo "agent ok" +cd - > /dev/null + +cd images/controller +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s -w" -o ./out ./cmd +rm -f ./out +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go test ./... +echo "controller ok" +cd - > /dev/null + +cd images/csi-driver +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s -w" -o ./out ./cmd +rm -f ./out +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go test ./... +echo "csi-driver ok" +cd - > /dev/null + +cd api +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s -w" -o ./out ./v1alpha1 +rm -f ./out +GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go test ./... +echo "api ok" +cd - > /dev/null diff --git a/hack/for-each-mod b/hack/for-each-mod new file mode 100755 index 000000000..92b751b7c --- /dev/null +++ b/hack/for-each-mod @@ -0,0 +1,37 @@ +#!/bin/bash + +# Copyright 2025 Flant JSC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Runs command in each folder with go.mod file +# +# Examples: +# Tidy all the modules: +# `for-each-mod go mod tidy` +# Generate all the modules: +# `for-each-mod go generate ./...` +# + +os="$(uname -s)" + +case "$os" in + Darwin) + # BSD find on macOS: keep expression simple and portable. + find . -name go.mod -execdir sh -c "$*" {} + + ;; + *) + # Original behaviour (Linux / CI, etc.). + find . -type f -name go.mod -execdir sh -c "$*" {} + + ;; +esac diff --git a/hack/generate_code.sh b/hack/generate_code.sh new file mode 100755 index 000000000..45ca3b865 --- /dev/null +++ b/hack/generate_code.sh @@ -0,0 +1,39 @@ +#!/bin/bash + +# Copyright 2025 Flant JSC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.

+# run from repository root with: 'bash hack/generate_code.sh'
+set -euo pipefail
+cd api
+
+# crds
+# Run controller-gen without pinning it into this module's go.mod.
+# Force module mode to allow updating go.sum for tool dependencies when needed.
+go run -mod=mod sigs.k8s.io/controller-tools/cmd/controller-gen@v0.19 \
+  object:headerFile=../hack/boilerplate.txt \
+  crd paths=./v1alpha1 output:crd:dir=../crds \
+  paths=./v1alpha1
+
+# remove development dependencies
+go mod tidy -go=1.24.11
+
+cd ..
+
+# generate mocks and any other go:generate targets across all modules
+./hack/for-each-mod go generate ./...
+
+# TODO: re-generate spec according to changes in CRDs with AI
+
+echo "OK"
diff --git a/hack/go-mod-tidy b/hack/go-mod-tidy
new file mode 100644
index 000000000..74391e8bf
--- /dev/null
+++ b/hack/go-mod-tidy
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+# Copyright 2025 Flant JSC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Tidy go.mod/go.sum
+# for every Go module in this repository.
+#
+# Run from repository root with: 'bash hack/go-mod-tidy'
+
+set -euo pipefail
+
+hack/for-each-mod go mod tidy -go=1.24.11
+
diff --git a/hack/go-mod-upgrade b/hack/go-mod-upgrade
new file mode 100644
index 000000000..50ef5fbc1
--- /dev/null
+++ b/hack/go-mod-upgrade
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+# Copyright 2025 Flant JSC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Upgrade all direct and transitive dependencies and tidy go.mod/go.sum
+# for every Go module in this repository.
+#
+# Run from repository root with: 'bash hack/go-mod-upgrade'
+
+set -euo pipefail
+
+hack/for-each-mod "go get -t -u ./... && go mod tidy"
+
diff --git a/hack/local_build.sh b/hack/local_build.sh
new file mode 100755
index 000000000..0b3701a57
--- /dev/null
+++ b/hack/local_build.sh
@@ -0,0 +1,167 @@
+#!/bin/bash
+
+# Copyright 2025 Flant JSC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# prevent the script from being sourced
+if [[ "${BASH_SOURCE[0]}" != "$0" ]]; then
+  echo "ERROR: This script must not be sourced." >&2
+  return 1
+fi
+
+REGISTRY_PATH=registry.flant.com/deckhouse/storage/localbuild
+NAMESPACE=d8-sds-replicated-volume
+DAEMONSET_NAME=agent
+SECRET_NAME=sds-replicated-volume-module-registry
+
+# CI and werf variables
+export SOURCE_REPO="https://github.com"
+export CI_COMMIT_REF_NAME="null"
+
+# PAT TOKEN must be created here https://fox.flant.com/-/user_settings/personal_access_tokens
+# It must have api, read_api, read_repository, read_registry, write_registry scopes
+#PAT_TOKEN='REPLACEME'
+# prefix of the custom tag
+# the final image will look like: ${REGISTRY_PATH}:${CUSTOM_TAG}-<image>
+#CUSTOM_TAG=username
+
+# you can optionally define secrets in a gitignored folder:
+source ./.secret/$(basename "$0") 2>/dev/null || true
+
+if [ -z "$PAT_TOKEN" ];then
+  echo "ERR: empty PAT_TOKEN"
+  exit 1
+fi
+if [ -z "$CUSTOM_TAG" ];then
+  echo "ERR: empty CUSTOM_TAG"
+  exit 1
+fi
+if [ -z "$REGISTRY_PATH" ];then
+  echo "ERR: empty REGISTRY_PATH"
+  exit 1
+fi
+if ! command -v werf &> /dev/null; then
+  echo "ERR: werf is not installed or not in PATH"
+  exit 1
+fi
+
+build_action() {
+  if [ $# -eq 0 ]; then
+    echo "ERR: at least one image name is required"
+    exit 1
+  else
+    IMAGE_NAMES="$@"
+  fi
+
+  echo "Get base_images.yml"
+
+  BASE_IMAGES_VERSION=$(grep -oP 'BASE_IMAGES_VERSION:\s+"v\d+\.\d+\.\d+"' ./.github/workflows/build_dev.yml | grep -oP 'v\d+\.\d+\.\d+' | head -n1)
+  if [ -z "$BASE_IMAGES_VERSION" ];then
+    echo "ERR: empty BASE_IMAGES_VERSION"
+    exit 1
+  fi
+
+  echo BASE_IMAGES_VERSION=$BASE_IMAGES_VERSION
+
+  curl -OJL https://fox.flant.com/api/v4/projects/deckhouse%2Fbase-images/packages/generic/base_images/${BASE_IMAGES_VERSION}/base_images.yml
+
+  echo "Start building for images:"
+
+  werf cr login $REGISTRY_PATH --username='pat' --password=$PAT_TOKEN
+
+  for image in $IMAGE_NAMES; do
+    echo "Building image: $image"
+    werf build $image --add-custom-tag=$CUSTOM_TAG"-"$image --repo=$REGISTRY_PATH --dev
+  done
+
+  echo "Delete base_images.yml"
+  rm -rf base_images.yml
+}
+
+_create_secret() {
+  echo "{\"auths\":{\"${REGISTRY_PATH}\":{\"auth\":\"$(echo -n pat:${PAT_TOKEN} | base64 -w 0)\"}}}" | base64 -w 0
+}
+
+patch_agent() {
+  (
+    set -exuo pipefail
+
+    DAEMONSET_CONTAINER_NAME=agent
+    IMAGE=${REGISTRY_PATH}:${CUSTOM_TAG}-agent
+
+    SECRET_DATA=$(_create_secret)
+
+    kubectl -n d8-system scale deployment deckhouse --replicas=0
+
+    kubectl -n $NAMESPACE patch secret $SECRET_NAME -p \
+      "{\"data\": {\".dockerconfigjson\": \"$SECRET_DATA\"}}"
+
+    kubectl -n $NAMESPACE patch daemonset $DAEMONSET_NAME -p \
+      "{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"$DAEMONSET_CONTAINER_NAME\", \"image\": \"$IMAGE\"}]}}}}"
+
+    kubectl -n $NAMESPACE patch daemonset $DAEMONSET_NAME -p \
+      "{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"$DAEMONSET_CONTAINER_NAME\", \"imagePullPolicy\": \"Always\"}]}}}}"
+
+    kubectl -n $NAMESPACE rollout restart daemonset $DAEMONSET_NAME
+  )
+}
+
+restore_agent() {
+  (
+    set -exuo pipefail
+
+    kubectl -n $NAMESPACE delete secret $SECRET_NAME
+
+    kubectl -n $NAMESPACE delete daemonset $DAEMONSET_NAME
+
+    kubectl -n d8-system scale deployment deckhouse --replicas=1
+  )
+}
+
+print_help() {
+  echo " Usage: $0 build [<image> [<image> ...]]"
+  echo " Possible actions: build, patch_agent, build_patch_agent, restore_agent"
+}
+
+if [ $# -lt 1 ]; then
+  print_help
+  exit 1
+fi
+ACTION=$1
+
+shift
+
+case "$ACTION" in
+  --help)
+    print_help
+    ;;
+  build)
+    build_action "$@"
+    ;;
+  patch_agent)
+    patch_agent
+    ;;
+  build_patch_agent)
+    build_action agent
+    patch_agent
+    ;;
+  restore_agent)
+    restore_agent
+    ;;
+  *)
+    echo "Unknown action: $ACTION"
+    print_help
+    exit 1
+    ;;
+esac
diff --git a/hack/run-linter.sh b/hack/run-linter.sh
new file mode 100755
index 000000000..91db26ce6
--- /dev/null
+++ b/hack/run-linter.sh
@@ -0,0 +1,173 @@
+#!/bin/bash
+
+# Copyright 2025 Flant JSC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+set -euo pipefail
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+# Default values
+BUILD_TAGS="ee fe"
+FIX_ISSUES="false"
+NEW_FROM_MERGE_BASE=""
+
+# Function to print colored output
+print_status() {
+  local color=$1
+  local message=$2
+  echo -e "${color}${message}${NC}"
+}
+
+# Function to show usage
+show_usage() {
+  cat << EOF
+Usage: $0 [OPTIONS]
+
+Run golangci-lint on the project using go tool golangci-lint.
+
+OPTIONS:
+  -h, --help              Show this help message
+  -t, --tags TAGS         Build tags to use, space-separated (default: $BUILD_TAGS)
+  -f, --fix               Auto-fix issues where possible
+  -n, --new-from-base SHA Run linter only on files changed since merge base SHA
+
+EXAMPLES:
+  $0                        # Run linter with default settings
+  $0 --fix                  # Run linter and auto-fix issues
+  $0 --tags "ee"            # Run linter only for 'ee' build tag
+  $0 --new-from-base abc123 # Run linter only on changed files
+
+EOF
+}
+
+# Parse command line arguments
+while [[ $# -gt 0 ]]; do
+  case $1 in
+    -h|--help)
+      show_usage
+      exit 0
+      ;;
+    -t|--tags)
+      BUILD_TAGS="$2"
+      shift 2
+      ;;
+    -f|--fix)
+      FIX_ISSUES="true"
+      shift
+      ;;
+    -n|--new-from-base)
+      NEW_FROM_MERGE_BASE="$2"
+      shift 2
+      ;;
+    *)
+      print_status $RED "Unknown option: $1"
+      show_usage
+      exit 1
+      ;;
+  esac
+done
+
+# Run linter on a directory
+run_linter() {
+  local dir="$1"
+  local edition="$2"
+  local extra_args="$3"
+
+  print_status $YELLOW "Running linter in $dir (edition: $edition)"
+
+  # Change to the directory and run linter
+  (cd "$dir" && {
+    local linter_cmd="go tool golangci-lint run --color=always --allow-parallel-runners --build-tags $edition"
+
+    if [[ "$FIX_ISSUES" == "true" ]]; then
+      linter_cmd="$linter_cmd --fix"
+    fi
+
+    if [[ -n "$extra_args" ]]; then
+      linter_cmd="$linter_cmd $extra_args"
+    fi
+
+    if eval "$linter_cmd"; then
+      print_status $GREEN "Linter PASSED in $dir (edition: $edition)"
+      return 0
+    else
+      print_status $RED "Linter FAILED in $dir (edition: $edition)"
+      return 1
+    fi
+  })
+}
+
+# Main function
+main() {
+  print_status $GREEN "Starting golangci-lint run using go tool"
+
+  # Convert space-separated tags to array
+  read -ra TAGS_ARRAY <<< "$BUILD_TAGS"
+
+  local basedir=$(pwd)
+  local failed=false
+  local extra_args=""
+
+  # Prepare extra arguments
+  if [[ -n "$NEW_FROM_MERGE_BASE" ]]; then
+    extra_args="--new-from-merge-base=$NEW_FROM_MERGE_BASE"
+    print_status $YELLOW "Running linter only on files changed since
$NEW_FROM_MERGE_BASE" + fi + + # Find all go.mod files in images directory + local go_mod_files=$(find . -name "go.mod" -type f) + + if [[ -z "$go_mod_files" ]]; then + print_status $RED "No go.mod files found in images directory" + exit 1 + fi + + # Run linter for each go.mod file and each build tag + for go_mod_file in $go_mod_files; do + local dir=$(dirname "$go_mod_file") + + for edition in "${TAGS_ARRAY[@]}"; do + if ! run_linter "$dir" "$edition" "$extra_args"; then + failed=true + fi + done + done + + # Check for uncommitted changes if --fix was used + if [[ "$FIX_ISSUES" == "true" ]]; then + if [[ -n "$(git status --porcelain --untracked-files=no 2>/dev/null || true)" ]]; then + print_status $YELLOW "Linter made changes to files. Review the changes:" + git diff --name-only + print_status $YELLOW "To apply all changes: git add . && git commit -m 'Fix linter issues'" + else + print_status $GREEN "No changes made by linter" + fi + fi + + if [[ "$failed" == "true" ]]; then + print_status $RED "Linter failed on one or more directories" + exit 1 + else + print_status $GREEN "All linter checks passed!" + exit 0 + fi +} + +# Run main function +main "$@" diff --git a/hack/run-tests.sh b/hack/run-tests.sh new file mode 100755 index 000000000..62826ba17 --- /dev/null +++ b/hack/run-tests.sh @@ -0,0 +1,72 @@ +#!/bin/bash + +# Copyright 2026 Flant JSC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set -euo pipefail + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' # No Color + +print_status() { + local color=$1 + local message=$2 + echo -e "${color}${message}${NC}" +} + +print_status "$YELLOW" "Starting test run..." + +# Get all workspace modules from go.work +modules=$(go work edit -json | jq -r '.Use[].DiskPath') + +if [ -z "$modules" ]; then + print_status "$RED" "No modules found in go.work" + exit 1 +fi + +# Track results +total_modules=0 +failed_modules=0 +passed_modules=0 + +for mod in $modules; do + print_status "$YELLOW" "Testing $mod" + total_modules=$((total_modules + 1)) + + if (cd "$mod" && go test -v ./...); then + print_status "$GREEN" "✓ PASSED: $mod" + passed_modules=$((passed_modules + 1)) + else + print_status "$RED" "✗ FAILED: $mod" + failed_modules=$((failed_modules + 1)) + fi + echo +done + +# Print summary +echo "==========================================" +print_status "$YELLOW" "Test Summary:" +echo "Total modules: $total_modules" +print_status "$GREEN" "Passed: $passed_modules" +if [ $failed_modules -gt 0 ]; then + print_status "$RED" "Failed: $failed_modules" + exit 1 +else + print_status "$GREEN" "Failed: $failed_modules" + print_status "$GREEN" "All tests passed!" 
+ exit 0 +fi diff --git a/hack/todo_prototype.sh b/hack/todo_prototype.sh new file mode 100644 index 000000000..d38f47f97 --- /dev/null +++ b/hack/todo_prototype.sh @@ -0,0 +1,32 @@ +#!/usr/bin/env bash + +# Copyright 2025 Flant JSC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Print all todos in the selected folders + +BASE_URL="https://github.com/deckhouse/sds-replicated-volume" +BRANCH="astef-prototype" + +grep -RIn "TODO" api images/controller images/agent images/csi-driver | \ +while IFS=: read -r file line text; do + # Trim leading/trailing whitespace from the TODO line + trimmed_text=$(printf '%s' "$text" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//') + + echo "$trimmed_text" + # Normalize path (remove leading ./ if present) + rel="${file#./}" + echo "${BASE_URL}/blob/${BRANCH}/${rel}#L${line}" + echo +done \ No newline at end of file diff --git a/hooks/go/060-manual-cert-renewal/manual_cert_renewal.go b/hooks/go/060-manual-cert-renewal/manual_cert_renewal.go index 0c01e53e6..0c3a56bd8 100644 --- a/hooks/go/060-manual-cert-renewal/manual_cert_renewal.go +++ b/hooks/go/060-manual-cert-renewal/manual_cert_renewal.go @@ -172,7 +172,7 @@ func getTrigger(ctx context.Context, cl client.Client, input *pkg.HookInput) *v1 return nil } - if err := snapshots[0].UnmarhalTo(cm); err != nil { + if err := snapshots[0].UnmarshalTo(cm); err != nil { input.Logger.Error("failed unmarshalling snapshot, skip update", "err", err) return nil } diff --git a/hooks/go/060-manual-cert-renewal/state_machine.go b/hooks/go/060-manual-cert-renewal/state_machine.go index cd3a64d73..dc3edd85e 100644 --- a/hooks/go/060-manual-cert-renewal/state_machine.go +++ b/hooks/go/060-manual-cert-renewal/state_machine.go @@ -26,7 +26,6 @@ import ( appsv1 "k8s.io/api/apps/v1" corev1 "k8s.io/api/core/v1" - v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/util/wait" "sigs.k8s.io/controller-runtime/pkg/client" @@ -78,14 +77,14 @@ func (s step) String() string { return s.Name } type stateMachine struct { ctx context.Context - trigger *v1.ConfigMap + trigger *corev1.ConfigMap cl client.Client log pkg.Logger currentStepIdx int steps []step - cachedSecrets map[string]*v1.Secret + cachedSecrets map[string]*corev1.Secret cachedDaemonSets map[string]*appsv1.DaemonSet cachedDeployments map[string]*appsv1.Deployment @@ -96,7 +95,7 @@ func newStateMachine( ctx context.Context, cl client.Client, log pkg.Logger, - trigger *v1.ConfigMap, + trigger *corev1.ConfigMap, hookInput *pkg.HookInput, ) *stateMachine { s := &stateMachine{} @@ -342,10 +341,10 @@ func (s *stateMachine) turnOffDaemonSetAndWait(name string) error { // turn off patch := client.MergeFrom(ds.DeepCopy()) - ds.Spec.Template.Spec.Affinity = &v1.Affinity{ - NodeAffinity: &v1.NodeAffinity{ - RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{ - NodeSelectorTerms: []v1.NodeSelectorTerm{ + ds.Spec.Template.Spec.Affinity = &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: 
&corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ {}, // match no objects }, }, diff --git a/hooks/go/080-discover-data-nodes-checksum/080-discovery-data-nodes-checksum.go b/hooks/go/080-discover-data-nodes-checksum/080-discovery-data-nodes-checksum.go index c3fcdc4cb..ade28b6b6 100644 --- a/hooks/go/080-discover-data-nodes-checksum/080-discovery-data-nodes-checksum.go +++ b/hooks/go/080-discover-data-nodes-checksum/080-discovery-data-nodes-checksum.go @@ -54,7 +54,6 @@ var _ = registry.RegisterFunc( ) func discoveryDataNodesChecksum(_ context.Context, input *pkg.HookInput) error { - uidList, err := objectpatch.UnmarshalToStruct[string](input.Snapshots, nodeSnapshotName) if err != nil { return fmt.Errorf("failed to unmarshal node UIDs: %w", err) diff --git a/hooks/go/090-on-start-checks/090-on-start-checks.go b/hooks/go/090-on-start-checks/090-on-start-checks.go index 62cc30368..e0e9c4025 100644 --- a/hooks/go/090-on-start-checks/090-on-start-checks.go +++ b/hooks/go/090-on-start-checks/090-on-start-checks.go @@ -20,12 +20,13 @@ import ( "context" "encoding/json" - "github.com/deckhouse/module-sdk/pkg" - "github.com/deckhouse/module-sdk/pkg/registry" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/types" "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/deckhouse/module-sdk/pkg" + "github.com/deckhouse/module-sdk/pkg/registry" ) var _ = registry.RegisterFunc( @@ -65,7 +66,6 @@ func onStartChecks(ctx context.Context, input *pkg.HookInput) error { propValue, _, _ := unstructured.NestedString(spec, "prop_value") if propKey == "DrbdOptions/AutoEvictAllowEviction" && propValue == "True" { - patch := map[string]interface{}{ "spec": map[string]interface{}{ "prop_value": "False", @@ -105,7 +105,6 @@ func onStartChecks(ctx context.Context, input *pkg.HookInput) error { err := cl.Get(ctx, client.ObjectKey{Name: "sds-replicated-volume"}, modCfg) if err != nil { - if client.IgnoreNotFound(err) == nil { input.Logger.Info("ModuleConfig not found, creating new one") } else { @@ -157,59 +156,58 @@ func onStartChecks(ctx context.Context, input *pkg.HookInput) error { } return nil + } - } else { - input.Logger.Info("No thin pool granularity found, checking if thin provisioning should be disabled") + input.Logger.Info("No thin pool granularity found, checking if thin provisioning should be disabled") - // Check existing ModuleConfig for enableThinProvisioning setting - modCfg := &unstructured.Unstructured{} - modCfg.SetGroupVersionKind(schema.GroupVersionKind{ - Group: "deckhouse.io", - Version: "v1alpha1", - Kind: "ModuleConfig", - }) - modCfg.SetName("sds-replicated-volume") + // Check existing ModuleConfig for enableThinProvisioning setting + modCfg := &unstructured.Unstructured{} + modCfg.SetGroupVersionKind(schema.GroupVersionKind{ + Group: "deckhouse.io", + Version: "v1alpha1", + Kind: "ModuleConfig", + }) + modCfg.SetName("sds-replicated-volume") - err := cl.Get(ctx, client.ObjectKey{Name: "sds-replicated-volume"}, modCfg) - if err != nil { - if client.IgnoreNotFound(err) == nil { - input.Logger.Info("ModuleConfig not found, nothing to disable") - } else { - input.Logger.Error("Failed to get ModuleConfig", "err", err) - return err - } + err := cl.Get(ctx, client.ObjectKey{Name: "sds-replicated-volume"}, modCfg) + if err != nil { + if client.IgnoreNotFound(err) == nil { + input.Logger.Info("ModuleConfig not found, nothing to disable") } else { - // Check if enableThinProvisioning is currently true - 
enableThinProvisioning, found, _ := unstructured.NestedBool(modCfg.Object, "spec", "settings", "enableThinProvisioning") + input.Logger.Error("Failed to get ModuleConfig", "err", err) + return err + } + return nil + } - if found && enableThinProvisioning { + // Check if enableThinProvisioning is currently true + enableThinProvisioning, found, _ := unstructured.NestedBool(modCfg.Object, "spec", "settings", "enableThinProvisioning") - // Disable thin provisioning + if found && enableThinProvisioning { + // Disable thin provisioning - input.Logger.Info("Thin provisioning in moduleconfig set to True - disabling") + input.Logger.Info("Thin provisioning in moduleconfig set to True - disabling") - patch := map[string]interface{}{ - "spec": map[string]interface{}{ - "settings": map[string]interface{}{ - "enableThinProvisioning": false, - }, - }, - } + patch := map[string]interface{}{ + "spec": map[string]interface{}{ + "settings": map[string]interface{}{ + "enableThinProvisioning": false, + }, + }, + } - patchBytes, err := json.Marshal(patch) - if err != nil { - input.Logger.Info("Failed to marshal patch for moduleconfig", "err", err) - } else { - if err := cl.Patch(ctx, modCfg, client.RawPatch(types.MergePatchType, patchBytes)); err != nil { - input.Logger.Info("Failed to patch moduleconfig", "err", err) - } else { - input.Logger.Info("Patched moduleconfig with thin provisioning disabled") - } - } + patchBytes, err := json.Marshal(patch) + if err != nil { + input.Logger.Info("Failed to marshal patch for moduleconfig", "err", err) + } else { + if err := cl.Patch(ctx, modCfg, client.RawPatch(types.MergePatchType, patchBytes)); err != nil { + input.Logger.Info("Failed to patch moduleconfig", "err", err) } else { - input.Logger.Info("Thin provisioning already disabled or not set") + input.Logger.Info("Patched moduleconfig with thin provisioning disabled") } } + } else { + input.Logger.Info("Thin provisioning already disabled or not set") } return nil diff --git a/hooks/go/certs/webhook_certs.go b/hooks/go/certs/webhook_certs.go index 507436f58..c45d6d59b 100644 --- a/hooks/go/certs/webhook_certs.go +++ b/hooks/go/certs/webhook_certs.go @@ -30,7 +30,6 @@ import ( func RegisterWebhookCertsHook() { tlscertificate.RegisterManualTLSHookEM(WebhookCertConfigs()) - } func WebhookCertConfigs() tlscertificate.GenSelfSignedTLSGroupHookConf { diff --git a/hooks/go/go.mod b/hooks/go/go.mod index 60e0bd407..f0a2b16e5 100644 --- a/hooks/go/go.mod +++ b/hooks/go/go.mod @@ -4,95 +4,273 @@ go 1.24.11 require ( github.com/cloudflare/cfssl v1.6.5 - github.com/deckhouse/deckhouse/pkg/log v0.0.0-20241205040953-7b376bae249c - github.com/deckhouse/module-sdk v0.1.1-0.20250225114715-86f38bb419fe - k8s.io/api v0.29.8 - k8s.io/apimachinery v0.29.8 - k8s.io/client-go v0.29.8 - sigs.k8s.io/controller-runtime v0.17.0 + github.com/deckhouse/module-sdk v0.4.0 + k8s.io/api v0.34.3 + k8s.io/apimachinery v0.34.3 + k8s.io/client-go v0.34.3 + sigs.k8s.io/controller-runtime v0.22.4 ) require ( + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect github.com/DataDog/gostackparse v0.7.0 // indirect + github.com/Djarvur/go-err113 
v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect github.com/beorn7/perks v1.0.1 // indirect - github.com/caarlos0/env/v11 v11.2.2 // indirect - github.com/cespare/xxhash/v2 v2.2.0 // indirect - github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/caarlos0/env/v11 v11.3.1 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/containerd/stargz-snapshotter/estargz v0.17.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect - github.com/docker/cli v24.0.0+incompatible // indirect - github.com/docker/distribution v2.8.2+incompatible // indirect - github.com/docker/docker v28.1.1+incompatible // indirect - github.com/docker/docker-credential-helpers v0.7.0 // indirect - github.com/emicklei/go-restful/v3 v3.11.0 // indirect + github.com/deckhouse/deckhouse/pkg/log v0.0.0-20250909165437-ef0b7f73d870 // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/docker/cli v28.4.0+incompatible // indirect + github.com/docker/distribution v2.8.3+incompatible // indirect + github.com/docker/docker-credential-helpers v0.9.3 // indirect + github.com/emicklei/go-restful/v3 v3.13.0 // indirect github.com/ettle/strcase v0.2.0 // indirect - github.com/evanphx/json-patch/v5 v5.9.0 // indirect - github.com/fsnotify/fsnotify v1.7.0 // indirect - github.com/go-logr/logr v1.4.2 // indirect - github.com/go-openapi/jsonpointer v0.19.6 // indirect - github.com/go-openapi/jsonreference v0.20.2 // indirect - github.com/go-openapi/swag v0.22.5 // indirect + github.com/evanphx/json-patch/v5 v5.9.11 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + 
github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect github.com/gogo/protobuf v1.3.2 // indirect - github.com/gojuno/minimock/v3 v3.4.3 // indirect - github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect - github.com/golang/protobuf v1.5.4 // indirect - github.com/google/certificate-transparency-go v1.1.7 // indirect - github.com/google/gnostic-models v0.6.8 // indirect - github.com/google/go-cmp v0.6.0 // indirect - github.com/google/go-containerregistry v0.17.0 // indirect - github.com/google/gofuzz v1.2.0 // indirect + github.com/gojuno/minimock/v3 v3.4.7 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/btree v1.1.3 // indirect + github.com/google/certificate-transparency-go v1.3.2 // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/go-containerregistry v0.20.6 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect github.com/google/uuid v1.6.0 // indirect - github.com/imdario/mergo v0.3.16 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect github.com/inconshreveable/mousetrap v1.1.0 // indirect - github.com/jmoiron/sqlx v1.3.5 // indirect - github.com/jonboulle/clockwork v0.4.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect + github.com/jmoiron/sqlx v1.4.0 // 
indirect + github.com/jonboulle/clockwork v0.5.0 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect - github.com/klauspost/compress v1.16.5 // indirect - github.com/mailru/easyjson v0.7.7 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/klauspost/compress v1.18.0 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect - github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/onsi/ginkgo/v2 v2.27.2 // indirect + github.com/onsi/gomega v1.38.3 // indirect github.com/opencontainers/go-digest v1.0.0 // indirect - github.com/opencontainers/image-spec v1.1.0-rc3 // indirect - github.com/pelletier/go-toml v1.9.3 // indirect + github.com/opencontainers/image-spec v1.1.1 // indirect + github.com/pelletier/go-toml v1.9.5 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect github.com/pkg/errors v0.9.1 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect - github.com/prometheus/client_golang v1.19.0 // indirect - github.com/prometheus/client_model v0.5.0 // indirect - github.com/prometheus/common v0.48.0 // indirect - github.com/prometheus/procfs v0.12.0 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // 
indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect github.com/sirupsen/logrus v1.9.3 // indirect - github.com/spf13/cobra v1.8.1 // indirect - github.com/spf13/pflag v1.0.5 // indirect - github.com/sylabs/oci-tools v0.7.0 // indirect - github.com/tidwall/gjson v1.14.4 // indirect - github.com/tidwall/match v1.1.1 // indirect - github.com/tidwall/pretty v1.2.0 // indirect - github.com/vbatts/tar-split v0.11.3 // indirect - github.com/weppos/publicsuffix-go v0.30.0 // indirect - github.com/zmap/zcrypto v0.0.0-20230310154051-c8b263fd8300 // indirect - github.com/zmap/zlint/v3 v3.5.0 // indirect - golang.org/x/crypto v0.38.0 // indirect - golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect - golang.org/x/net v0.40.0 // indirect - golang.org/x/oauth2 v0.27.0 // indirect - golang.org/x/sync v0.14.0 // indirect - golang.org/x/sys v0.33.0 // indirect - golang.org/x/term v0.32.0 // indirect - golang.org/x/text v0.25.0 // indirect - golang.org/x/time v0.8.0 // indirect - gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect - google.golang.org/protobuf v1.35.1 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/sylabs/oci-tools v0.18.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/tidwall/gjson v1.18.0 // indirect + github.com/tidwall/match v1.2.0 // indirect + github.com/tidwall/pretty v1.2.1 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/vbatts/tar-split v0.12.1 // indirect + github.com/weppos/publicsuffix-go v0.50.1-0.20250829105427-5340293a34a1 // indirect + github.com/x448/float16 v0.8.4 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + github.com/zmap/zcrypto v0.0.0-20250830192831-dcac38cad4c0 // indirect + github.com/zmap/zlint/v3 v3.6.7 // indirect + gitlab.com/bosi/decorder v0.4.2 // 
indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/crypto v0.43.0 // indirect + golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + golang.org/x/tools v0.38.0 // indirect + golang.org/x/tools/go/expect v0.1.1-deprecated // indirect + gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/apiextensions-apiserver v0.29.8 // indirect - k8s.io/component-base v0.29.8 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/apiextensions-apiserver v0.34.3 // indirect k8s.io/klog/v2 v2.130.1 // indirect - k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect - k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 // indirect - sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect - sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect - sigs.k8s.io/yaml v1.4.0 // indirect + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + github.com/onsi/ginkgo/v2/ginkgo ) diff --git a/hooks/go/go.sum b/hooks/go/go.sum index 8dc7b6b18..24a7812e0 100644 --- a/hooks/go/go.sum +++ b/hooks/go/go.sum @@ -1,318 +1,729 @@ -github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ= +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil 
v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= github.com/DataDog/gostackparse v0.7.0 h1:i7dLkXHvYzHV308hnkvVGDL3BR4FWl7IsXNPz/IGQh4= github.com/DataDog/gostackparse v0.7.0/go.mod h1:lTfqcJKqS9KnXQGnyQMCugq3u1FP6UZMfWR0aitKFMM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= -github.com/caarlos0/env/v11 v11.2.2 h1:95fApNrUyueipoZN/EhA8mMxiNxrBwDa+oAZrMWl3Kg= -github.com/caarlos0/env/v11 v11.2.2/go.mod 
h1:JBfcdeQiBoI3Zh1QRAWfe+tpiNTmDtcCj/hHHHMx0vc= -github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44= -github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/caarlos0/env/v11 v11.3.1 h1:cArPWC15hWmEt+gWk7YBi7lEXTXCvpaSdCiZE2X5mCA= +github.com/caarlos0/env/v11 v11.3.1/go.mod h1:qupehSf/Y0TUTsxKywqRt/vJjN5nz6vauiYEUUr8P4U= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= github.com/cloudflare/cfssl v1.6.5 h1:46zpNkm6dlNkMZH/wMW22ejih6gIaJbzL2du6vD7ZeI= github.com/cloudflare/cfssl v1.6.5/go.mod h1:Bk1si7sq8h2+yVEDrFJiz3d7Aw+pfjjJSZVaD+Taky4= -github.com/containerd/stargz-snapshotter/estargz v0.14.3 h1:OqlDCK3ZVUO6C3B/5FSkDwbkEETK84kQgEeFwDC+62k= -github.com/containerd/stargz-snapshotter/estargz v0.14.3/go.mod h1:KY//uOCIkSuNAHhJogcZtrNHdKrA99/FCCRjE3HD36o= -github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= -github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= -github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/containerd/stargz-snapshotter/estargz v0.17.0 h1:+TyQIsR/zSFI1Rm31EQBwpAA1ovYgIKHy7kctL3sLcE= +github.com/containerd/stargz-snapshotter/estargz v0.17.0/go.mod 
h1:s06tWAiJcXQo9/8AReBCIo/QxcXFZ2n4qfsRnpl71SM= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/deckhouse/deckhouse/pkg/log v0.0.0-20241205040953-7b376bae249c h1:dK30IW9uGg0DvSy+IcdQ6zwEBRV55R7tEtaruEKYkSA= -github.com/deckhouse/deckhouse/pkg/log v0.0.0-20241205040953-7b376bae249c/go.mod h1:Mk5HRzkc5pIcDIZ2JJ6DPuuqnwhXVkb3you8M8Mg+4w= -github.com/deckhouse/module-sdk v0.1.1-0.20250225114715-86f38bb419fe h1:v9jkJ8J9eP9jLOAshgghjCHdCwWeBZjMJyqNf9MocIo= -github.com/deckhouse/module-sdk v0.1.1-0.20250225114715-86f38bb419fe/go.mod h1:xZuqvKXZunp9VNAiF70fgYiN/HQkLDo8tvGymXNpu0o= -github.com/docker/cli v24.0.0+incompatible h1:0+1VshNwBQzQAx9lOl+OYCTCEAD8fKs/qeXMx3O0wqM= -github.com/docker/cli v24.0.0+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= -github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8= -github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= -github.com/docker/docker v28.1.1+incompatible h1:49M11BFLsVO1gxY9UX9p/zwkE/rswggs8AdFmXQw51I= -github.com/docker/docker v28.1.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= -github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A= -github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0= -github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= -github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/deckhouse/deckhouse/pkg/log v0.0.0-20250909165437-ef0b7f73d870 h1:oFbNkr/7Y2SibUSjbqENMS1dTVPWVskDEzhJUK4jrgQ= +github.com/deckhouse/deckhouse/pkg/log v0.0.0-20250909165437-ef0b7f73d870/go.mod h1:pbAxTSDcPmwyl3wwKDcEB3qdxHnRxqTV+J0K+sha8bw= +github.com/deckhouse/module-sdk v0.4.0 h1:kRtJgCCh5/+xgFPR5zbo4UD+noh69hSj+QC+OM5ZmhM= +github.com/deckhouse/module-sdk v0.4.0/go.mod h1:J7zhZcxEuVWlwBNraEi5sZX+s86ATdxuecvvdrwWC0E= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/docker/cli v28.4.0+incompatible h1:RBcf3Kjw2pMtwui5V0DIMdyeab8glEw5QY0UUU4C9kY= +github.com/docker/cli v28.4.0+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= +github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk= +github.com/docker/distribution v2.8.3+incompatible/go.mod 
h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= +github.com/docker/docker-credential-helpers v0.9.3 h1:gAm/VtF9wgqJMoxzT3Gj5p4AqIjCBS4wrsOh9yRqcz8= +github.com/docker/docker-credential-helpers v0.9.3/go.mod h1:x+4Gbw9aGmChi3qTLZj8Dfn0TD20M/fuWy0E5+WDeCo= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= -github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg= -github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ= -github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= -github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= -github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= -github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= +github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 
h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= -github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE= -github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= -github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= -github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= -github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= -github.com/go-openapi/swag v0.22.5 h1:fVS63IE3M0lsuWRzuom3RLwUMVI2peDH01s6M70ugys= -github.com/go-openapi/swag v0.22.5/go.mod h1:Gl91UqO+btAM0plGGxHqJcQZ1ZTy6jbmridBTsDy8A0= -github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= -github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI= -github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= +github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= +github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= 
+github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 
h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/gojuno/minimock/v3 v3.4.3 h1:CGH14iGxTd6kW6ZetOA/teusRN710VQ2nq8SdEuI3OQ= -github.com/gojuno/minimock/v3 v3.4.3/go.mod h1:b+hbQhEU0Csi1eyzpvi0LhlmjDHyCDPzwhXbDaKTSrQ= -github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE= -github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= -github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= -github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= -github.com/google/certificate-transparency-go v1.1.7 h1:IASD+NtgSTJLPdzkthwvAG1ZVbF2WtFg4IvoA68XGSw= -github.com/google/certificate-transparency-go v1.1.7/go.mod h1:FSSBo8fyMVgqptbfF6j5p/XNdgQftAhSmXcIxV9iphE= -github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= -github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= -github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= -github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= -github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= -github.com/google/go-containerregistry v0.17.0 h1:5p+zYs/R4VGHkhyvgWurWrpJ2hW4Vv9fQI+GzdcwXLk= -github.com/google/go-containerregistry v0.17.0/go.mod h1:u0qB2l7mvtWVR5kNcbFIhFY1hLbf8eeGapA+vbFDCtQ= -github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ= -github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= +github.com/gojuno/minimock/v3 v3.4.7 h1:vhE5zpniyPDRT0DXd5s3DbtZJVlcbmC5k80izYtj9lY= +github.com/gojuno/minimock/v3 v3.4.7/go.mod h1:QxJk4mdPrVyYUmEZGc2yD2NONpqM/j4dWhsy9twjFHg= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert 
v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= +github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= +github.com/google/certificate-transparency-go v1.3.2 h1:9ahSNZF2o7SYMaKaXhAumVEzXB2QaayzII9C8rv7v+A= +github.com/google/certificate-transparency-go v1.3.2/go.mod h1:H5FpMUaGa5Ab2+KCYsxg6sELw3Flkl7pGZzWdBoYLXs= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/go-containerregistry v0.20.6 h1:cvWX87UxxLgaH76b4hIvya6Dzz9qHB31qAwjAohdSTU= +github.com/google/go-containerregistry v0.20.6/go.mod h1:T0x8MuoAoKX/873bkeSfLD2FAkwCDf9/HZgsFJ02E2Y= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec= -github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4= -github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod 
h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= github.com/itchyny/gojq v0.12.17 h1:8av8eGduDb5+rvEdaOO+zQUjA04MS0m3Ps8HiD+fceg= github.com/itchyny/gojq v0.12.17/go.mod h1:WBrEMkgAfAGO1LUcGOckBl5O726KPp+OlkKug0I/FEY= github.com/itchyny/timefmt-go v0.1.6 h1:ia3s54iciXDdzWzwaVKXZPbiXzxxnv1SPGFfM/myJ5Q= github.com/itchyny/timefmt-go v0.1.6/go.mod h1:RRDZYC5s9ErkjQvTvvU7keJjxUYzIISJGxm9/mAERQg= -github.com/jmoiron/sqlx v1.3.5 h1:vFFPA71p1o5gAeqtEAwLU4dnX2napprKtHr7PYIcN3g= -github.com/jmoiron/sqlx v1.3.5/go.mod h1:nRVWtLre0KfCLJvgxzCsLVMogSvQ1zNJtpYr2Ccp0mQ= -github.com/jonboulle/clockwork v0.4.0 h1:p4Cf1aMWXnXAUh8lVfewRBx1zaTSYKrKMF2g3ST4RZ4= -github.com/jonboulle/clockwork v0.4.0/go.mod h1:xgRqUGwRcjKCO1vbZUEtSLrqKoPSsUpK7fnezOII0kc= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o= +github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY= +github.com/jonboulle/clockwork v0.5.0 h1:Hyh9A8u51kptdkR+cqRpT1EebBwTn1oK9YfGYbdFz6I= +github.com/jonboulle/clockwork v0.5.0/go.mod h1:3mZlmanh0g2NDKO5TWZVJAfofYk64M7XN3SzBPjZF60= github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= 
+github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/klauspost/compress v1.16.5 h1:IFV2oUNUzZaz+XyusxpLzpzS8Pt5rh0Z16For/djlyI= -github.com/klauspost/compress v1.16.5/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE= -github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= -github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= -github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= -github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= -github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= -github.com/mattn/go-sqlite3 v1.14.6/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= 
+github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= +github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir 
v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= -github.com/mreiferson/go-httpclient v0.0.0-20160630210159-31f0106b4474/go.mod h1:OQA4XLvDbMgS8P0CevmM4m9Q3Jq4phKUzcocxuGJ5m8= -github.com/mreiferson/go-httpclient v0.0.0-20201222173833-5e475fde3a4d/go.mod h1:OQA4XLvDbMgS8P0CevmM4m9Q3Jq4phKUzcocxuGJ5m8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= -github.com/onsi/ginkgo/v2 v2.14.0 h1:vSmGj2Z5YPb9JwCWT6z6ihcUvDhuXLc3sJiqd3jMKAY= -github.com/onsi/ginkgo/v2 v2.14.0/go.mod h1:JkUdW7JkN0V6rFvsHcJ478egV3XH9NxpD27Hal/PhZw= -github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8= -github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ= -github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U= github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM= -github.com/opencontainers/image-spec v1.1.0-rc3 h1:fzg1mXZFj8YdPeNkRXMg+zb88BFV0Ys52cJydRwBkb8= -github.com/opencontainers/image-spec 
v1.1.0-rc3/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8= -github.com/pelletier/go-toml v1.9.3 h1:zeC5b1GviRUyKYd6OJPvBU/mcVDVoL1OhT17FCt5dSQ= -github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c= +github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040= +github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8= +github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU= -github.com/prometheus/client_golang v1.19.0/go.mod h1:ZRM9uEAypZakd+q/x7+gmsvXdURP+DABIEIjnmDdp+k= -github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw= -github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI= -github.com/prometheus/common v0.48.0 h1:QO8U2CdOzSn1BBsmXJXduaaW+dY/5QLjfB8svtSzKKE= -github.com/prometheus/common v0.48.0/go.mod h1:0/KsvlIEfPQCQ5I2iNSAWKPZziNCvRs5EC6ILDTlAPc= -github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo= -github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo= -github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ= -github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod 
h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= -github.com/sebdah/goldie/v2 v2.5.3 h1:9ES/mNN+HNUbNWpVAlrzuZ7jE+Nrczbj8uFRjM7624Y= -github.com/sebdah/goldie/v2 v2.5.3/go.mod h1:oZ9fp0+se1eapSRjfYbsV/0Hqhbuu3bJVvKI/NNtssI= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= 
+github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/sebdah/goldie/v2 v2.7.1 h1:PkBHymaYdtvEkZV7TmyqKxdmn5/Vcj+8TpATWZjnG5E= +github.com/sebdah/goldie/v2 v2.7.1/go.mod h1:oZ9fp0+se1eapSRjfYbsV/0Hqhbuu3bJVvKI/NNtssI= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= github.com/sergi/go-diff v1.3.1 h1:xkr+Oxo4BOQKmkn/B9eMK0g5Kg/983T9DqqPHwYqD+8= github.com/sergi/go-diff v1.3.1/go.mod h1:aMJSSKb2lpPvRNec0+w3fl7LP9IOFzdc9Pa4NFbPK1I= -github.com/sirupsen/logrus v1.3.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= -github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0= -github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= -github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM= -github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y= -github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= 
+github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= -github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= -github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA= -github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= -github.com/sylabs/oci-tools v0.7.0 h1:SIisUvcEL+Vpa9/kmQDy1W3AwV2XVGad83sgZmXLlb0= -github.com/sylabs/oci-tools v0.7.0/go.mod h1:Ry6ngChflh20WPq6mLvCKSw2OTd9iDB5aR8OQzeq4hM= -github.com/sylabs/sif/v2 v2.15.0 h1:Nv0tzksFnoQiQ2eUwpAis9nVqEu4c3RcNSxX8P3Cecw= -github.com/sylabs/sif/v2 v2.15.0/go.mod h1:X1H7eaPz6BAxA84POMESXoXfTqgAnLQkujyF/CQFWTc= -github.com/tidwall/gjson v1.14.4 h1:uo0p8EbA09J7RQaflQ1aBRffTR7xedD2bcIVSYxLnkM= -github.com/tidwall/gjson v1.14.4/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= -github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/sylabs/oci-tools v0.18.0 h1:6Fv8zGRiMC0Z6vKTzxHb1a8TD6ZtJXkEQiX0QN73ufY= +github.com/sylabs/oci-tools v0.18.0/go.mod h1:QBTammEL5Wuy94tVib6O3equoUH5OPp4NXo9MBcu5Bo= +github.com/sylabs/sif/v2 v2.22.0 h1:Y+xXufp4RdgZe02SR3nWEg7S6q4tPWN237WHYzkDSKA= +github.com/sylabs/sif/v2 v2.22.0/go.mod h1:W1XhWTmG1KcG7j5a3KSYdMcUIFvbs240w/MMVW627hs= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= 
+github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= -github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= -github.com/urfave/cli v1.22.12/go.mod h1:sSBEIC79qR6OvcmsD4U3KABeOTxDqQtdDnaFuUN30b8= -github.com/vbatts/tar-split v0.11.3 h1:hLFqsOLQ1SsppQNTMpkpPXClLDfC2A3Zgy9OUU+RVck= -github.com/vbatts/tar-split v0.11.3/go.mod h1:9QlHN18E+fEH7RdG+QAJJcuya3rqT7eXSTY7wGrAokY= -github.com/weppos/publicsuffix-go v0.12.0/go.mod h1:z3LCPQ38eedDQSwmsSRW4Y7t2L8Ln16JPQ02lHAdn5k= -github.com/weppos/publicsuffix-go v0.13.0/go.mod h1:z3LCPQ38eedDQSwmsSRW4Y7t2L8Ln16JPQ02lHAdn5k= -github.com/weppos/publicsuffix-go v0.30.0 h1:QHPZ2GRu/YE7cvejH9iyavPOkVCB4dNxp2ZvtT+vQLY= -github.com/weppos/publicsuffix-go v0.30.0/go.mod h1:kBi8zwYnR0zrbm8RcuN1o9Fzgpnnn+btVN8uWPMyXAY= -github.com/weppos/publicsuffix-go/publicsuffix/generator v0.0.0-20220927085643-dc0d00c92642/go.mod h1:GHfoeIdZLdZmLjMlzBftbTDntahTttUMWjxZwQJhULE= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 
h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= +github.com/vbatts/tar-split v0.12.1 h1:CqKoORW7BUWBe7UL/iqTVvkTBOF8UvOMKOIZykxnnbo= +github.com/vbatts/tar-split v0.12.1/go.mod h1:eF6B6i6ftWQcDqEn3/iGFRFRo8cBIMSJVOpnNdfTMFA= +github.com/weppos/publicsuffix-go v0.50.1-0.20250829105427-5340293a34a1 h1:e+uu4AaRkDK7dfU29WbMpf+jDS8TYmLw97dtNbSA4DE= +github.com/weppos/publicsuffix-go v0.50.1-0.20250829105427-5340293a34a1/go.mod h1:VXhClBYMlDrUsome4pOTpe68Ui0p6iQRAbyHQD1yKoU= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -github.com/zmap/rc2 v0.0.0-20131011165748-24b9757f5521/go.mod h1:3YZ9o3WnatTIZhuOtot4IcUfzoKVjUHqu6WALIyI0nE= -github.com/zmap/rc2 v0.0.0-20190804163417-abaa70531248/go.mod h1:3YZ9o3WnatTIZhuOtot4IcUfzoKVjUHqu6WALIyI0nE= -github.com/zmap/zcertificate v0.0.0-20180516150559-0e3d58b1bac4/go.mod h1:5iU54tB79AMBcySS0R2XIyZBAVmeHranShAFELYx7is= -github.com/zmap/zcertificate v0.0.1/go.mod h1:q0dlN54Jm4NVSSuzisusQY0hqDWvu92C+TWveAxiVWk= -github.com/zmap/zcrypto v0.0.0-20201128221613-3719af1573cf/go.mod h1:aPM7r+JOkfL+9qSB4KbYjtoEzJqUK50EXkkJabeNJDQ= -github.com/zmap/zcrypto v0.0.0-20201211161100-e54a5822fb7e/go.mod h1:aPM7r+JOkfL+9qSB4KbYjtoEzJqUK50EXkkJabeNJDQ= -github.com/zmap/zcrypto v0.0.0-20230310154051-c8b263fd8300 h1:DZH5n7L3L8RxKdSyJHZt7WePgwdhHnPhQFdQSJaHF+o= -github.com/zmap/zcrypto v0.0.0-20230310154051-c8b263fd8300/go.mod h1:mOd4yUMgn2fe2nV9KXsa9AyQBFZGzygVPovsZR+Rl5w= -github.com/zmap/zlint/v3 v3.0.0/go.mod h1:paGwFySdHIBEMJ61YjoqT4h7Ge+fdYG4sUQhnTb1lJ8= -github.com/zmap/zlint/v3 v3.5.0 h1:Eh2B5t6VKgVH0DFmTwOqE50POvyDhUaU9T2mJOe1vfQ= -github.com/zmap/zlint/v3 v3.5.0/go.mod h1:JkNSrsDJ8F4VRtBZcYUQSvnWFL7utcjDIn+FE64mlBI= +github.com/zmap/zcrypto v0.0.0-20250830192831-dcac38cad4c0 h1:wpo70uPQ9XOSFBjccR4jFCh7P9JWC1C6WzA8eH/V9Xk= +github.com/zmap/zcrypto v0.0.0-20250830192831-dcac38cad4c0/go.mod h1:AKX5NNnkZBK+CSiHJExY89oimgqfqXHhNyMjWieJFIk= +github.com/zmap/zlint/v3 v3.6.7 h1:ETRdgQ0MpcoyZqGGhBINCWnlFJ8TmmFotX9ezjzQRsU= +github.com/zmap/zlint/v3 v3.6.7/go.mod 
h1:Tm0qwwaO629pgJ/En7M9U9Edx4+rQRuoeXVpXvgVHhA= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= -go.uber.org/zap v1.26.0 h1:sI7k6L95XOKS281NhVKOFCUNIvv9e0w4BF8N3u+tCRo= -go.uber.org/zap v1.26.0/go.mod h1:dtElttAiwGvoJ/vj4IwHBS/gXsEu/pZ50mUIRWuG0so= -golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20201124201722-c8d3bf9c5392/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I= -golang.org/x/crypto v0.0.0-20201208171446-5f87f3452ae9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= -golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8= -golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw= -golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8= -golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04= +golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod 
h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= -golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= -golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY= -golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds= -golang.org/x/oauth2 v0.27.0 h1:da9Vo7/tDv5RH/7nZDz1eMGS/q1Vv1N/7FCrBhI9I3M= -golang.org/x/oauth2 v0.27.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= 
+golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ= -golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= -golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201126233918-771906719818/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220906165534-d0df966e6959/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw= -golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= -golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= -golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg= -golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= -golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4= -golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA= -golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg= -golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time 
v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.23.0 h1:SGsXPZ+2l4JsgaCKkx+FQ9YZ5XEtA1GZYuoDjenLjvg= -golang.org/x/tools v0.23.0/go.mod h1:pnu6ufv6vQkll6szChhK3C3L/ruaIv5eBeztNG8wtsI= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw= -gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= -google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA= 
-google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= +gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0= +gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= @@ -320,27 +731,33 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o= gotest.tools/v3 v3.4.0/go.mod h1:CtbdzLSsqVhDgMtKsx03ird5YTGB3ar27v0u/yKBW5g= -k8s.io/api v0.29.8 h1:ZBKg9clWnIGtQ5yGhNwMw2zyyrsIAQaXhZACcYNflQE= -k8s.io/api v0.29.8/go.mod h1:XlGIpmpzKGrtVca7GlgNryZJ19SvQdI808NN7fy1SgQ= -k8s.io/apiextensions-apiserver v0.29.8 h1:VkyGgClTTWs8i81O13wsTLSs9Q1PWVr0L880F2GjwUI= -k8s.io/apiextensions-apiserver v0.29.8/go.mod h1:e6dPglIfPWm9ydsXuNqefecEVDH0uLfzClJEupSk2VU= -k8s.io/apimachinery v0.29.8 h1:uBHc9WuKiTHClIspJqtR84WNpG0aOGn45HWqxgXkk8Y= -k8s.io/apimachinery v0.29.8/go.mod h1:i3FJVwhvSp/6n8Fl4K97PJEP8C+MM+aoDq4+ZJBf70Y= -k8s.io/client-go v0.29.8 h1:QMRKcIzqE/qawknXcsi51GdIAYN8UP39S/M5KnFu/J0= -k8s.io/client-go v0.29.8/go.mod h1:ZzrAAVrqO2jVXMb8My/jTke8n0a/mIynnA3y/1y1UB0= -k8s.io/component-base v0.29.8 h1:4LJ94/eOJpDFZFbGbRH4CEyk29a7PZr8noVe9tBJUUY= -k8s.io/component-base v0.29.8/go.mod h1:FYOQSsKgh9/+FNleq8m6cXH2Cq8fNiUnJzDROowLaqU= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= +k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= +k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= 
+k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= -k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= -k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= -k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A= -k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -sigs.k8s.io/controller-runtime v0.17.0 h1:fjJQf8Ukya+VjogLO6/bNX9HE6Y2xpsO5+fyS26ur/s= -sigs.k8s.io/controller-runtime v0.17.0/go.mod h1:+MngTvIQQQhfXtwfdGw/UOQ/aIaqsYywfCINOtwMO/s= -sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= -sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= -sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= -sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08= -sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= -sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A= +sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/hooks/go/tls-certificate/manual_tls.go b/hooks/go/tls-certificate/manual_tls.go index a1a5c2517..c569653a2 100644 --- a/hooks/go/tls-certificate/manual_tls.go +++ b/hooks/go/tls-certificate/manual_tls.go @@ -140,7 +140,6 @@ func GenerateNewSelfSignedTLSGroup( input *pkg.HookInput, confGroup GenSelfSignedTLSGroupHookConf, ) ([]*certificate.Certificate, error) { - var res []*certificate.Certificate caConf := confGroup[0] diff --git a/images/agent/LICENSE 
b/images/agent/LICENSE new file mode 100644 index 000000000..b77c0c92a --- /dev/null +++ b/images/agent/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. 
Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/images/agent/cmd/main.go b/images/agent/cmd/main.go new file mode 100644 index 000000000..5528b6da2 --- /dev/null +++ b/images/agent/cmd/main.go @@ -0,0 +1,86 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package main + +import ( + "context" + "errors" + "fmt" + "log/slog" + "os" + "time" + + "github.com/go-logr/logr" + "golang.org/x/sync/errgroup" + crlog "sigs.k8s.io/controller-runtime/pkg/log" + "sigs.k8s.io/controller-runtime/pkg/manager/signals" + + "github.com/deckhouse/sds-common-lib/slogh" + u "github.com/deckhouse/sds-common-lib/utils" + "github.com/deckhouse/sds-replicated-volume/images/agent/internal/env" +) + +func main() { + ctx := signals.SetupSignalHandler() + + slogh.EnableConfigReload(ctx, nil) + logHandler := &slogh.Handler{} + log := slog.New(logHandler). + With("startedAt", time.Now().Format(time.RFC3339)) + crlog.SetLogger(logr.FromSlogHandler(logHandler)) + slog.SetDefault(log) + + log.Info("agent app started") + + err := run(ctx, log) + if !errors.Is(err, context.Canceled) || ctx.Err() != context.Canceled { + log.Error("agent exited unexpectedly", "err", err, "ctxerr", ctx.Err()) + os.Exit(1) + } + log.Info( + "agent gracefully shutdown", + // cleanup errors do not affect status code, but worth logging + "err", err, + ) +} + +func run(ctx context.Context, log *slog.Logger) (err error) { + // The derived Context is canceled the first time a function passed to eg.Go + // returns a non-nil error or the first time Wait returns + eg, ctx := errgroup.WithContext(ctx) + + envConfig, err := env.GetConfig() + if err != nil { + return u.LogError(log, fmt.Errorf("getting env config: %w", err)) + } + log = log.With("nodeName", envConfig.NodeName()) + + // MANAGER + mgr, err := newManager(ctx, log, envConfig) + if err != nil { + return err + } + + eg.Go(func() error { + if err := mgr.Start(ctx); err != nil { + return u.LogError(log, fmt.Errorf("starting controller: %w", err)) + } + return ctx.Err() + }) + + return eg.Wait() +} diff --git a/images/agent/cmd/manager.go b/images/agent/cmd/manager.go new file mode 100644 index 000000000..50049a0a0 --- /dev/null +++ b/images/agent/cmd/manager.go @@ -0,0 +1,88 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package main + +import ( + "context" + "fmt" + "log/slog" + + "github.com/go-logr/logr" + "sigs.k8s.io/controller-runtime/pkg/client/config" + "sigs.k8s.io/controller-runtime/pkg/healthz" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/metrics/server" + + u "github.com/deckhouse/sds-common-lib/utils" + "github.com/deckhouse/sds-replicated-volume/images/agent/internal/controllers" + "github.com/deckhouse/sds-replicated-volume/images/agent/internal/indexes" + "github.com/deckhouse/sds-replicated-volume/images/agent/internal/scheme" +) + +type managerConfig interface { + HealthProbeBindAddress() string + MetricsBindAddress() string +} + +func newManager( + ctx context.Context, + log *slog.Logger, + cfg managerConfig, +) (manager.Manager, error) { + config, err := config.GetConfig() + if err != nil { + return nil, u.LogError(log, fmt.Errorf("getting rest config: %w", err)) + } + + scheme, err := scheme.New() + if err != nil { + return nil, u.LogError(log, fmt.Errorf("building scheme: %w", err)) + } + + mgrOpts := manager.Options{ + Scheme: scheme, + BaseContext: func() context.Context { return ctx }, + Logger: logr.FromSlogHandler(log.Handler()), + HealthProbeBindAddress: cfg.HealthProbeBindAddress(), + Metrics: server.Options{ + BindAddress: cfg.MetricsBindAddress(), + }, + } + + mgr, err := manager.New(config, mgrOpts) + if err != nil { + return nil, u.LogError(log, fmt.Errorf("creating manager: %w", err)) + } + + if err := indexes.RegisterIndexes(ctx, mgr); err != nil { + return nil, u.LogError(log, fmt.Errorf("registering indexes: %w", err)) + } + + if err = mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil { + return nil, u.LogError(log, fmt.Errorf("AddHealthzCheck: %w", err)) + } + + if err = mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil { + return nil, u.LogError(log, fmt.Errorf("AddReadyzCheck: %w", err)) + } + + if err := controllers.BuildAll(mgr); err != nil { + return nil, err + } + + return mgr, nil +} diff --git a/images/agent/go.mod b/images/agent/go.mod new file mode 100644 index 000000000..049c84b9b --- /dev/null +++ b/images/agent/go.mod @@ -0,0 +1,252 @@ +module github.com/deckhouse/sds-replicated-volume/images/agent + +go 1.24.11 + +replace github.com/deckhouse/sds-replicated-volume/lib/go/common => ../../lib/go/common + +require ( + github.com/deckhouse/sds-common-lib v0.6.3 + github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da + github.com/deckhouse/sds-replicated-volume/api v0.0.0-00010101000000-000000000000 + github.com/go-logr/logr v1.4.3 + github.com/google/go-cmp v0.7.0 + golang.org/x/sync v0.19.0 + k8s.io/api v0.34.3 + k8s.io/apimachinery v0.34.3 + sigs.k8s.io/controller-runtime v0.22.4 +) + +require ( + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 
// indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/emicklei/go-restful/v3 v3.13.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/evanphx/json-patch v5.9.0+incompatible // indirect + github.com/evanphx/json-patch/v5 v5.9.11 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + 
github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect + github.com/google/uuid v1.6.0 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/onsi/ginkgo/v2 v2.27.2 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib 
v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/x448/float16 v0.8.4 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + 
golang.org/x/tools v0.38.0 // indirect + gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/apiextensions-apiserver v0.34.3 // indirect + k8s.io/client-go v0.34.3 // indirect + k8s.io/klog/v2 v2.130.1 // indirect + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +replace github.com/deckhouse/sds-replicated-volume/api => ../../api + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + github.com/onsi/ginkgo/v2/ginkgo +) diff --git a/images/agent/go.sum b/images/agent/go.sum new file mode 100644 index 000000000..7af68cb92 --- /dev/null +++ b/images/agent/go.sum @@ -0,0 +1,703 @@ +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 
h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= 
+github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/deckhouse/sds-common-lib v0.6.3 h1:k0OotLuQaKuZt8iyph9IusDixjAE0MQRKyuTe2wZP3I= +github.com/deckhouse/sds-common-lib v0.6.3/go.mod h1:UHZMKkqEh6RAO+vtA7dFTwn/2m5lzfPn0kfULBmDf2o= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da h1:LFk9OC/+EVWfYDRe54Hip4kVKwjNcPhHZTftlm5DCpg= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da/go.mod h1:X5ftUa4MrSXMKiwQYa4lwFuGtrs+HoCNa8Zl6TPrGo8= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= +github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= +github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 
h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= +github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= +github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= +github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils 
v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= +github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt 
v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= +github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.2.0 
h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= +github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck 
v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod 
h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= +github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 
h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= +github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 
h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 
h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= 
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 
h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod 
h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod 
h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0= +gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= +k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= 
+k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= +k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A= +sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/images/agent/internal/controllers/registry.go b/images/agent/internal/controllers/registry.go new file mode 100644 index 000000000..98c8ae5e5 --- /dev/null +++ b/images/agent/internal/controllers/registry.go @@ -0,0 +1,39 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package controllers + +import ( + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/manager" +) + +var registry []func(mgr manager.Manager) error + +func init() { + // ... 
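+	// Anything appended to registry here is invoked later by BuildAll, in registration order;
+	// if a builder fails, BuildAll stops and wraps the failing builder's index into the error.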
+} + +func BuildAll(mgr manager.Manager) error { + for i, buildCtl := range registry { + err := buildCtl(mgr) + if err != nil { + return fmt.Errorf("building controller %d: %w", i, err) + } + } + return nil +} diff --git a/images/agent/internal/env/config.go b/images/agent/internal/env/config.go new file mode 100644 index 000000000..dcd304131 --- /dev/null +++ b/images/agent/internal/env/config.go @@ -0,0 +1,138 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package env + +import ( + "errors" + "fmt" + "os" + "strconv" +) + +const ( + NodeNameEnvVar = "NODE_NAME" + + DRBDMinPortEnvVar = "DRBD_MIN_PORT" + DRBDMaxPortEnvVar = "DRBD_MAX_PORT" + + DRBDMinPortDefault uint = 7000 + DRBDMaxPortDefault uint = 7999 + + HealthProbeBindAddressEnvVar = "HEALTH_PROBE_BIND_ADDRESS" + MetricsPortEnvVar = "METRICS_BIND_ADDRESS" + + // defaults are different for each app, do not merge them + DefaultHealthProbeBindAddress = ":4269" + DefaultMetricsBindAddress = ":4270" +) + +var ErrInvalidConfig = errors.New("invalid config") + +type config struct { + nodeName string + drbdMinPort uint + drbdMaxPort uint + healthProbeBindAddress string + metricsBindAddress string +} + +func (c *config) HealthProbeBindAddress() string { + return c.healthProbeBindAddress +} + +func (c *config) MetricsBindAddress() string { + return c.metricsBindAddress +} + +func (c *config) DRBDMaxPort() uint { + return c.drbdMaxPort +} + +func (c *config) DRBDMinPort() uint { + return c.drbdMinPort +} + +func (c *config) NodeName() string { + return c.nodeName +} + +type Config interface { + NodeName() string + DRBDMinPort() uint + DRBDMaxPort() uint + HealthProbeBindAddress() string + MetricsBindAddress() string +} + +var _ Config = &config{} + +func GetConfig() (Config, error) { + cfg := &config{} + + // + cfg.nodeName = os.Getenv(NodeNameEnvVar) + if cfg.nodeName == "" { + hostName, err := os.Hostname() + if err != nil { + return nil, fmt.Errorf("getting hostname: %w", err) + } + cfg.nodeName = hostName + } + + // + minPortStr := os.Getenv(DRBDMinPortEnvVar) + if minPortStr == "" { + cfg.drbdMinPort = DRBDMinPortDefault + } else { + minPort, err := strconv.ParseUint(minPortStr, 10, 32) + if err != nil { + return cfg, fmt.Errorf("parsing %s: %w", DRBDMinPortEnvVar, err) + } + cfg.drbdMinPort = uint(minPort) + } + + // + maxPortStr := os.Getenv(DRBDMaxPortEnvVar) + if maxPortStr == "" { + cfg.drbdMaxPort = DRBDMaxPortDefault + } else { + maxPort, err := strconv.ParseUint(maxPortStr, 10, 32) + if err != nil { + return cfg, fmt.Errorf("parsing %s: %w", DRBDMaxPortEnvVar, err) + } + cfg.drbdMaxPort = uint(maxPort) + } + + // + if cfg.drbdMaxPort < cfg.drbdMinPort { + return cfg, fmt.Errorf("%w: invalid port range %d-%d", ErrInvalidConfig, cfg.drbdMinPort, cfg.drbdMaxPort) + } + + // + cfg.healthProbeBindAddress = os.Getenv(HealthProbeBindAddressEnvVar) + if cfg.healthProbeBindAddress == "" { + cfg.healthProbeBindAddress = DefaultHealthProbeBindAddress + } + + // + cfg.metricsBindAddress = os.Getenv(MetricsPortEnvVar) + if 
cfg.metricsBindAddress == "" { + cfg.metricsBindAddress = DefaultMetricsBindAddress + } + + return cfg, nil +} diff --git a/images/agent/internal/errors/errors.go b/images/agent/internal/errors/errors.go new file mode 100644 index 000000000..c884bff96 --- /dev/null +++ b/images/agent/internal/errors/errors.go @@ -0,0 +1,50 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package errors + +import ( + "errors" + "fmt" +) + +var ErrNotImplemented = errors.New("not implemented") + +var ErrInvalidCluster = errors.New("invalid cluster state") + +var ErrInvalidNode = errors.New("invalid node") + +var ErrUnknown = errors.New("unknown error") + +func WrapErrorf(err error, format string, a ...any) error { + return fmt.Errorf("%w: %w", err, fmt.Errorf(format, a...)) +} + +func ErrInvalidClusterf(format string, a ...any) error { + return WrapErrorf(ErrInvalidCluster, format, a...) +} + +func ErrInvalidNodef(format string, a ...any) error { + return WrapErrorf(ErrInvalidNode, format, a...) +} + +func ErrNotImplementedf(format string, a ...any) error { + return WrapErrorf(ErrNotImplemented, format, a...) +} + +func ErrUnknownf(format string, a ...any) error { + return WrapErrorf(ErrUnknown, format, a...) +} diff --git a/images/agent/internal/indexes/field_indexes.go b/images/agent/internal/indexes/field_indexes.go new file mode 100644 index 000000000..294f4ef7a --- /dev/null +++ b/images/agent/internal/indexes/field_indexes.go @@ -0,0 +1,61 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package indexes + +import ( + "context" + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +const ( + // RVRByRVNameAndNodeName indexes ReplicatedVolumeReplica by composite key + // of spec.replicatedVolumeName and spec.nodeName. + RVRByRVNameAndNodeName = "spec.replicatedVolumeName+spec.nodeName" +) + +// RVRByRVNameAndNodeNameKey returns the index key for the composite index. +func RVRByRVNameAndNodeNameKey(rvName, nodeName string) string { + return rvName + "/" + nodeName +} + +// RegisterIndexes registers all field indexes used by the agent. 
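+//
+// Illustrative call site (a sketch under assumed wiring, not part of this patch): register the
+// indexes during manager setup, then query replicas by the composite key:
+//
+//	if err := indexes.RegisterIndexes(ctx, mgr); err != nil {
+//		return fmt.Errorf("registering field indexes: %w", err)
+//	}
+//	// later, in a controller (cl, rvrList, rvName, nodeName are assumed names):
+//	err := cl.List(ctx, &rvrList, client.MatchingFields{
+//		indexes.RVRByRVNameAndNodeName: indexes.RVRByRVNameAndNodeNameKey(rvName, nodeName),
+//	})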
+func RegisterIndexes(ctx context.Context, mgr manager.Manager) error { + indexer := mgr.GetFieldIndexer() + + // Index by composite key: spec.replicatedVolumeName + spec.nodeName + if err := indexer.IndexField( + ctx, + &v1alpha1.ReplicatedVolumeReplica{}, + RVRByRVNameAndNodeName, + func(rawObj client.Object) []string { + replica := rawObj.(*v1alpha1.ReplicatedVolumeReplica) + if replica.Spec.ReplicatedVolumeName == "" || replica.Spec.NodeName == "" { + return nil + } + return []string{RVRByRVNameAndNodeNameKey(replica.Spec.ReplicatedVolumeName, replica.Spec.NodeName)} + }, + ); err != nil { + return fmt.Errorf("indexing %s: %w", RVRByRVNameAndNodeName, err) + } + + return nil +} diff --git a/images/agent/internal/scheme/scheme.go b/images/agent/internal/scheme/scheme.go new file mode 100644 index 000000000..d2ab3b10d --- /dev/null +++ b/images/agent/internal/scheme/scheme.go @@ -0,0 +1,47 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package scheme + +import ( + "fmt" + + corev1 "k8s.io/api/core/v1" + storagev1 "k8s.io/api/storage/v1" + "k8s.io/apimachinery/pkg/runtime" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +func New() (*runtime.Scheme, error) { + scheme := runtime.NewScheme() + + var schemeFuncs = []func(s *runtime.Scheme) error{ + corev1.AddToScheme, + storagev1.AddToScheme, + v1alpha1.AddToScheme, + snc.AddToScheme, + } + + for i, f := range schemeFuncs { + if err := f(scheme); err != nil { + return nil, fmt.Errorf("adding scheme %d: %w", i, err) + } + } + + return scheme, nil +} diff --git a/images/agent/pkg/drbdadm/adjust.go b/images/agent/pkg/drbdadm/adjust.go new file mode 100644 index 000000000..d5b34c389 --- /dev/null +++ b/images/agent/pkg/drbdadm/adjust.go @@ -0,0 +1,38 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" +) + +func ExecuteAdjust(ctx context.Context, resource string) CommandError { + args := AdjustArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) 
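+	// Pattern shared by the drbdadm wrappers in this package: run the command through the
+	// overridable ExecCommandContext factory and, on failure, return a commandError that keeps
+	// the full command line, combined output and exit code for the caller.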
+ + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} diff --git a/images/agent/pkg/drbdadm/cmd.go b/images/agent/pkg/drbdadm/cmd.go new file mode 100644 index 000000000..85fc7148d --- /dev/null +++ b/images/agent/pkg/drbdadm/cmd.go @@ -0,0 +1,61 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" + "io" + "os/exec" +) + +type Cmd interface { + CombinedOutput() ([]byte, error) + SetStderr(io.Writer) + Run() error +} + +type ExecCommandContextFactory func(ctx context.Context, name string, arg ...string) Cmd + +// overridable for testing purposes +var ExecCommandContext ExecCommandContextFactory = func( + ctx context.Context, + name string, + arg ...string, +) Cmd { + return (*execCmd)(exec.CommandContext(ctx, name, arg...)) +} + +// dummy decorator to isolate from [exec.Cmd] struct fields +type execCmd exec.Cmd + +var _ Cmd = &execCmd{} + +func (r *execCmd) Run() error { return (*exec.Cmd)(r).Run() } +func (r *execCmd) SetStderr(w io.Writer) { (*exec.Cmd)(r).Stderr = w } +func (r *execCmd) CombinedOutput() ([]byte, error) { return (*exec.Cmd)(r).CombinedOutput() } +func (r *execCmd) ProcessStateExitCode() int { return (*exec.Cmd)(r).ProcessState.ExitCode() } + +// helper to isolate from [exec.ExitError] +func errToExitCode(err error) int { + type exitCode interface{ ExitCode() int } + + if errWithExitCode, ok := err.(exitCode); ok { + return errWithExitCode.ExitCode() + } + + return 0 +} diff --git a/images/agent/pkg/drbdadm/create-md.go b/images/agent/pkg/drbdadm/create-md.go new file mode 100644 index 000000000..92736a819 --- /dev/null +++ b/images/agent/pkg/drbdadm/create-md.go @@ -0,0 +1,38 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" +) + +func ExecuteCreateMD(ctx context.Context, resource string) CommandError { + args := CreateMDArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) 
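+	// CreateMDArgs (see vars.go) expands to "create-md --max-peers=7 --force <resource>"; the
+	// argument builders are package variables, so tests can substitute them just like
+	// ExecCommandContext.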
+ + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} diff --git a/images/agent/pkg/drbdadm/down.go b/images/agent/pkg/drbdadm/down.go new file mode 100644 index 000000000..81f704e8e --- /dev/null +++ b/images/agent/pkg/drbdadm/down.go @@ -0,0 +1,38 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" +) + +func ExecuteDown(ctx context.Context, resource string) CommandError { + args := DownArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) + + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} diff --git a/images/agent/pkg/drbdadm/dump-md.go b/images/agent/pkg/drbdadm/dump-md.go new file mode 100644 index 000000000..0ebf7de97 --- /dev/null +++ b/images/agent/pkg/drbdadm/dump-md.go @@ -0,0 +1,54 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "bytes" + "context" + "strings" +) + +// ExecuteDumpMD executes a command and returns: +// - (true, nil) if it exits with code 0 +// - (false, nil) if it exits with code 1 and contains "No valid meta data found" +// - (false, error) for any other case +func ExecuteDumpMDMetadataExists(ctx context.Context, resource string) (bool, CommandError) { + args := DumpMDArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) + + var stderr bytes.Buffer + cmd.SetStderr(&stderr) + + err := cmd.Run() + if err == nil { + return true, nil + } + + exitCode := errToExitCode(err) + output := stderr.String() + + if exitCode == 1 && strings.Contains(output, "No valid meta data found") { + return false, nil + } + + return false, &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: output, + exitCode: exitCode, + } +} diff --git a/images/agent/pkg/drbdadm/error.go b/images/agent/pkg/drbdadm/error.go new file mode 100644 index 000000000..8c06709dd --- /dev/null +++ b/images/agent/pkg/drbdadm/error.go @@ -0,0 +1,49 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +type CommandError interface { + error + CommandWithArgs() []string + Output() string + ExitCode() int +} + +var _ CommandError = &commandError{} + +type commandError struct { + error + commandWithArgs []string + output string + exitCode int +} + +func (e *commandError) CommandWithArgs() []string { + return e.commandWithArgs +} + +func (e *commandError) Error() string { + return e.error.Error() +} + +func (e *commandError) ExitCode() int { + return e.exitCode +} + +func (e *commandError) Output() string { + return e.output +} diff --git a/images/agent/pkg/drbdadm/fake/fake.go b/images/agent/pkg/drbdadm/fake/fake.go new file mode 100644 index 000000000..1301136de --- /dev/null +++ b/images/agent/pkg/drbdadm/fake/fake.go @@ -0,0 +1,105 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package fake + +import ( + "bytes" + "context" + "io" + "slices" + "testing" + + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdadm" +) + +type Exec struct { + cmds []*ExpectedCmd +} + +func (b *Exec) ExpectCommands(cmds ...*ExpectedCmd) { + b.cmds = append(b.cmds, cmds...) +} + +func (b *Exec) Setup(t *testing.T) { + t.Helper() + + tmp := drbdadm.ExecCommandContext + + i := 0 + + drbdadm.ExecCommandContext = func(_ context.Context, name string, args ...string) drbdadm.Cmd { + if len(b.cmds) <= i { + t.Fatalf("expected %d command executions, got more", len(b.cmds)) + } + cmd := b.cmds[i] + + if !cmd.Matches(name, args...) 
{ + t.Fatalf("ExecCommandContext was called with unexpected arguments (call index %d)", i) + } + + i++ + return cmd + } + + t.Cleanup(func() { + // actual cleanup + drbdadm.ExecCommandContext = tmp + + // assert all commands executed + if i != len(b.cmds) { + t.Errorf("expected %d command executions, got %d", len(b.cmds), i) + } + }) +} + +type ExpectedCmd struct { + Name string + Args []string + + ResultOutput []byte + ResultErr error + + stderr io.Writer +} + +var _ drbdadm.Cmd = &ExpectedCmd{} + +func (c *ExpectedCmd) Matches(name string, args ...string) bool { + return c.Name == name && slices.Equal(c.Args, args) +} + +func (c *ExpectedCmd) CombinedOutput() ([]byte, error) { + return c.ResultOutput, c.ResultErr +} + +func (c *ExpectedCmd) SetStderr(w io.Writer) { + c.stderr = w +} + +func (c *ExpectedCmd) Run() error { + if c.stderr != nil { + if _, err := io.Copy(c.stderr, bytes.NewBuffer(c.ResultOutput)); err != nil { + return err + } + } + return c.ResultErr +} + +type ExitErr struct{ Code int } + +func (e ExitErr) Error() string { return "ExitErr" } +func (e ExitErr) ExitCode() int { return e.Code } diff --git a/images/agent/pkg/drbdadm/primary.go b/images/agent/pkg/drbdadm/primary.go new file mode 100644 index 000000000..53da2aace --- /dev/null +++ b/images/agent/pkg/drbdadm/primary.go @@ -0,0 +1,72 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" +) + +func ExecutePrimary(ctx context.Context, resource string) CommandError { + args := PrimaryArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) + + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} + +func ExecutePrimaryForce(ctx context.Context, resource string) CommandError { + args := PrimaryForceArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) + + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} + +func ExecuteSecondary(ctx context.Context, resource string) CommandError { + args := SecondaryArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) + + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} diff --git a/images/agent/pkg/drbdadm/resize.go b/images/agent/pkg/drbdadm/resize.go new file mode 100644 index 000000000..13f54ce66 --- /dev/null +++ b/images/agent/pkg/drbdadm/resize.go @@ -0,0 +1,38 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" +) + +func ExecuteResize(ctx context.Context, resource string) CommandError { + args := ResizeArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) + + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} diff --git a/images/agent/pkg/drbdadm/sh-nop.go b/images/agent/pkg/drbdadm/sh-nop.go new file mode 100644 index 000000000..89a6293d7 --- /dev/null +++ b/images/agent/pkg/drbdadm/sh-nop.go @@ -0,0 +1,38 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" +) + +func ExecuteShNop(ctx context.Context, configToTest string, configToExclude string) CommandError { + args := ShNopArgs(configToTest, configToExclude) + cmd := ExecCommandContext(ctx, Command, args...) + + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} diff --git a/images/agent/pkg/drbdadm/status.go b/images/agent/pkg/drbdadm/status.go new file mode 100644 index 000000000..f264de7ed --- /dev/null +++ b/images/agent/pkg/drbdadm/status.go @@ -0,0 +1,59 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "bytes" + "context" + "errors" + "os/exec" + "strings" +) + +// ExecuteStatusIsUp runs `drbdadm status` for the resource and returns: +// - (true, nil) if it exits with code 0 +// - (false, nil) if it exits with code 10 and contains "No such resource" +// - (false, error) for any other case +func ExecuteStatusIsUp(ctx context.Context, resource string) (bool, CommandError) { + args := StatusArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...)
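+	// Unlike the CombinedOutput-based wrappers, status captures only stderr: exit code 10
+	// combined with "No such resource" is mapped to (false, nil), so callers can distinguish
+	// "resource not up" from a real failure.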
+ + var stderr bytes.Buffer + cmd.SetStderr(&stderr) + + err := cmd.Run() + if err == nil { + return true, nil + } + + var exitErr *exec.ExitError + if errors.As(err, &exitErr) { + exitCode := exitErr.ExitCode() + output := stderr.String() + + if exitCode == 10 && strings.Contains(output, "No such resource") { + return false, nil + } + } + + return false, &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: stderr.String(), + exitCode: errToExitCode(err), + } +} diff --git a/images/agent/pkg/drbdadm/up.go b/images/agent/pkg/drbdadm/up.go new file mode 100644 index 000000000..7ab7f8afe --- /dev/null +++ b/images/agent/pkg/drbdadm/up.go @@ -0,0 +1,38 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdadm + +import ( + "context" +) + +func ExecuteUp(ctx context.Context, resource string) CommandError { + args := UpArgs(resource) + cmd := ExecCommandContext(ctx, Command, args...) + + out, err := cmd.CombinedOutput() + if err != nil { + return &commandError{ + error: err, + commandWithArgs: append([]string{Command}, args...), + output: string(out), + exitCode: errToExitCode(err), + } + } + + return nil +} diff --git a/images/agent/pkg/drbdadm/vars.go b/images/agent/pkg/drbdadm/vars.go new file mode 100644 index 000000000..1798d8887 --- /dev/null +++ b/images/agent/pkg/drbdadm/vars.go @@ -0,0 +1,65 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package drbdadm + +var Command = "drbdadm" + +var DumpMDArgs = func(resource string) []string { + return []string{"dump-md", "--force", resource} +} + +var StatusArgs = func(resource string) []string { + return []string{"status", resource} +} + +var UpArgs = func(resource string) []string { + return []string{"up", resource} +} + +var AdjustArgs = func(resource string) []string { + return []string{"adjust", resource} +} + +var ShNopArgs = func(configToTest string, configToExclude string) []string { + return []string{"--config-to-test", configToTest, "--config-to-exclude", configToExclude, "sh-nop"} +} + +var CreateMDArgs = func(resource string) []string { + return []string{"create-md", "--max-peers=7", "--force", resource} +} + +var DownArgs = func(resource string) []string { + return []string{"down", resource} +} + +var PrimaryArgs = func(resource string) []string { + return []string{"primary", resource} +} + +var PrimaryForceArgs = func(resource string) []string { + return []string{"primary", "--force", resource} +} + +var SecondaryArgs = func(resource string) []string { + return []string{"secondary", resource} +} + +var Events2Args = []string{"events2", "--timestamps"} + +var ResizeArgs = func(resource string) []string { + return []string{"resize", resource} +} diff --git a/images/agent/pkg/drbdconf/codec.go b/images/agent/pkg/drbdconf/codec.go new file mode 100644 index 000000000..683a3b794 --- /dev/null +++ b/images/agent/pkg/drbdconf/codec.go @@ -0,0 +1,189 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package drbdconf + +import ( + "fmt" + "reflect" + "strconv" + "strings" + "sync" +) + +var parameterTypeCodecs = map[reflect.Type]ParameterTypeCodec{ + reflect.TypeFor[[]string](): &stringSliceParameterCodec{}, + reflect.TypeFor[string](): &stringParameterCodec{}, + reflect.TypeFor[bool](): &boolParameterCodec{}, + reflect.TypeFor[*bool](): &boolPtrParameterCodec{}, + reflect.TypeFor[*int](): &intPtrParameterCodec{}, + reflect.TypeFor[*uint](): &uintPtrParameterCodec{}, +} + +var parameterTypeCodecsMu = &sync.Mutex{} + +func RegisterParameterTypeCodec[T any](codec ParameterTypeCodec) { + parameterTypeCodecsMu.Lock() + defer parameterTypeCodecsMu.Unlock() + parameterTypeCodecs[reflect.TypeFor[T]()] = codec +} + +type ParameterTypeCodec interface { + MarshalParameter(v any) ([]string, error) + UnmarshalParameter(p []Word) (any, error) +} + +// ======== [string] ======== + +type stringParameterCodec struct { +} + +var _ ParameterTypeCodec = &stringParameterCodec{} + +func (c *stringParameterCodec) MarshalParameter(v any) ([]string, error) { + return []string{v.(string)}, nil +} + +func (*stringParameterCodec) UnmarshalParameter(p []Word) (any, error) { + if err := EnsureLen(p, 2); err != nil { + return nil, err + } + return p[1].Value, nil +} + +// ======== [[]string] ======== + +type stringSliceParameterCodec struct { +} + +var _ ParameterTypeCodec = &stringSliceParameterCodec{} + +func (c *stringSliceParameterCodec) MarshalParameter(v any) ([]string, error) { + return v.([]string), nil +} + +func (*stringSliceParameterCodec) UnmarshalParameter(par []Word) (any, error) { + res := []string{} + for i := 1; i < len(par); i++ { + res = append(res, par[i].Value) + } + return res, nil +} + +// ======== [bool] ======== + +type boolParameterCodec struct { +} + +var _ ParameterTypeCodec = &boolParameterCodec{} + +func (*boolParameterCodec) MarshalParameter(_ any) ([]string, error) { + return nil, nil +} + +func (*boolParameterCodec) UnmarshalParameter(_ []Word) (any, error) { + return true, nil +} + +// ======== [*bool] ======== + +type boolPtrParameterCodec struct { +} + +var _ ParameterTypeCodec = &boolPtrParameterCodec{} + +func (*boolPtrParameterCodec) MarshalParameter(v any) ([]string, error) { + if *(v.(*bool)) { + return []string{"yes"}, nil + } + return []string{"no"}, nil +} + +func (*boolPtrParameterCodec) UnmarshalParameter(par []Word) (any, error) { + if strings.HasPrefix(par[0].Value, "no-") && len(par) == 1 { + return ptr(false), nil + } + + if len(par) == 1 || par[1].Value == "yes" { + return ptr(true), nil + } + + if par[1].Value == "no" { + return ptr(false), nil + } + + return nil, fmt.Errorf("format error: expected 'yes' or 'no'") +} + +// ======== [*int] ======== + +type intPtrParameterCodec struct { +} + +var _ ParameterTypeCodec = &intPtrParameterCodec{} + +func (*intPtrParameterCodec) MarshalParameter(v any) ([]string, error) { + return []string{strconv.Itoa(*(v.(*int)))}, nil +} + +func (*intPtrParameterCodec) UnmarshalParameter(p []Word) (any, error) { + if err := EnsureLen(p, 2); err != nil { + return nil, + fmt.Errorf("unmarshaling '%s' to *int: %w", p[0].Value, err) + } + + i, err := strconv.Atoi(p[1].Value) + if err != nil { + return nil, + fmt.Errorf( + "unmarshaling '%s' value to *int: %w", + p[0].Value, err, + ) + } + + return &i, nil +} + +// ======== [*uint] ======== + +type uintPtrParameterCodec struct { +} + +var _ ParameterTypeCodec = &uintPtrParameterCodec{} + +func (*uintPtrParameterCodec) MarshalParameter(v any) ([]string, error) { + return 
[]string{strconv.FormatUint(uint64(*(v.(*uint))), 10)}, nil +} + +func (*uintPtrParameterCodec) UnmarshalParameter(p []Word) (any, error) { + if err := EnsureLen(p, 2); err != nil { + return nil, + fmt.Errorf("unmarshaling '%s' to *uint: %w", p[0].Value, err) + } + + i64, err := strconv.ParseUint(p[1].Value, 10, 0) + if err != nil { + return nil, + fmt.Errorf( + "unmarshaling '%s' value to *uint: %w", + p[0].Value, err, + ) + } + + i := uint(i64) + + return &i, nil +} diff --git a/images/agent/pkg/drbdconf/common.go b/images/agent/pkg/drbdconf/common.go new file mode 100644 index 000000000..3e9abde42 --- /dev/null +++ b/images/agent/pkg/drbdconf/common.go @@ -0,0 +1,121 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdconf + +import ( + "fmt" + "reflect" +) + +type visitedField struct { + Field reflect.StructField + FieldVal reflect.Value + ParameterNames []string + SectionName string +} + +func visitStructFields( + ptrVal reflect.Value, + visit func(f *visitedField) error, +) error { + if !isNonNilStructPtr(ptrVal) { + return fmt.Errorf("expected non-nil pointer to a struct") + } + + val := ptrVal.Elem() + + valType := val.Type() + for i := range valType.NumField() { + field := valType.Field(i) + // skip unexported fields + if field.PkgPath != "" { + continue + } + + fieldVal := val.Field(i) + + parNames, err := getDRBDParameterNames(field) + if err != nil { + return err + } + + if !isSectionKeyworder(ptrVal) && len(parNames) > 0 { + return fmt.Errorf( + "`drbd` tag found on non-section type %s", + valType.Name(), + ) + } + + _, secName := isStructPtrAndSectionKeyworder(fieldVal) + + err = visit( + &visitedField{ + Field: field, + FieldVal: fieldVal, + ParameterNames: parNames, + SectionName: secName, + }, + ) + if err != nil { + return err + } + } + + return nil +} + +func isStructPtrAndSectionKeyworder(v reflect.Value) (ok bool, kw string) { + ok = isNonNilStructPtr(v) && + v.Type().Implements(reflect.TypeFor[SectionKeyworder]()) + if ok { + kw = v.Interface().(SectionKeyworder).SectionKeyword() + } + return +} + +func isNonNilStructPtr(v reflect.Value) bool { + return v.Kind() == reflect.Pointer && + !v.IsNil() && + v.Elem().Kind() == reflect.Struct +} + +func isSectionKeyworder(v reflect.Value) bool { + return v.Type().Implements(reflect.TypeFor[SectionKeyworder]()) +} + +func isSliceOfStructPtrsAndSectionKeyworders( + t reflect.Type, +) (ok bool, elType reflect.Type, kw string) { + if t.Kind() != reflect.Slice { + return + } + elType = t.Elem() + ok, kw = typeIsStructPtrAndSectionKeyworder(elType) + return +} + +func typeIsStructPtrAndSectionKeyworder(t reflect.Type) (ok bool, kw string) { + ok = t.Kind() == reflect.Pointer && + t.Elem().Kind() == reflect.Struct && + t.Implements(reflect.TypeFor[SectionKeyworder]()) + if ok { + kw = reflect.Zero(t). + Interface().(SectionKeyworder).
+ SectionKeyword() + } + return +} diff --git a/images/agent/pkg/drbdconf/decode.go b/images/agent/pkg/drbdconf/decode.go new file mode 100644 index 000000000..f04092e37 --- /dev/null +++ b/images/agent/pkg/drbdconf/decode.go @@ -0,0 +1,245 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdconf + +import ( + "fmt" + "reflect" + "slices" +) + +// Unmarshal unmarshals a low-level src section into the dst struct. +// See also the docs for [Marshal]. +func Unmarshal[T any, PT Ptr[T]](src *Section, dst PT) error { + err := unmarshalSection(src, reflect.ValueOf(dst)) + if err != nil { + return err + } + + return nil +} + +func unmarshalSection( + src *Section, + ptrVal reflect.Value, +) error { + err := visitStructFields( + ptrVal, + func(f *visitedField) error { + if len(f.ParameterNames) > 0 { + var selectedSrcPars [][]Word + if f.ParameterNames[0] == "" { + // value is in current section key + if len(src.Key) > 1 { + selectedSrcPars = append(selectedSrcPars, src.Key) + } + } else { + // value is in parameters + for _, parName := range f.ParameterNames { + srcPars := slices.Collect(src.ParametersByKey(parName)) + + for _, srcPar := range srcPars { + selectedSrcPars = append( + selectedSrcPars, + srcPar.Key, + ) + } + + if len(srcPars) > 0 { + // ignore the rest of ParameterNames + break + } + } + } + + if len(selectedSrcPars) > 0 { + return unmarshalParameterValue( + selectedSrcPars, + f.FieldVal, + f.Field.Type, + ) + } + } else if ok, elType, kw := isSliceOfStructPtrsAndSectionKeyworders( + f.Field.Type, + ); ok { + sliceIsNonEmpty := f.FieldVal.Len() > 0 + for subSection := range src.SectionsByKey(kw) { + if sliceIsNonEmpty { + return fmt.Errorf( + "unmarshaling field %s: destination slice is not empty", + f.Field.Name, + ) + } + newVal := reflect.New(elType.Elem()) + if err := unmarshalSection(subSection, newVal); err != nil { + return fmt.Errorf( + "unmarshaling section %s to field %s: %w", + subSection.Location(), + f.Field.Name, + err, + ) + } + f.FieldVal.Set(reflect.Append(f.FieldVal, newVal)) + } + } else if ok, kw := typeIsStructPtrAndSectionKeyworder( + f.Field.Type, + ); ok { + subSections := slices.Collect(src.SectionsByKey(kw)) + if len(subSections) == 0 { + return nil + } + if len(subSections) > 1 { + return fmt.Errorf( + "unmarshaling field %s: "+ + "cannot map more than one section: "+ + "%s, %s", + f.Field.Name, + subSections[0].Location(), + subSections[1].Location(), + ) + } + + if f.FieldVal.IsNil() { + newVal := reflect.New(f.FieldVal.Type().Elem()) + f.FieldVal.Set(newVal) + } + err := unmarshalSection(subSections[0], f.FieldVal) + if err != nil { + return fmt.Errorf( + "unmarshaling section %s to field %s: %w", + subSections[0].Location(), + f.Field.Name, + err, + ) + } + } + return nil + }, + ) + + if err != nil { + return err + } + + return nil +} + +func unmarshalParameterValue( + srcPars [][]Word, + dstVal reflect.Value, + dstType reflect.Type, +) error { + // parameterTypeCodecs have the highest priority + if typeCodec := parameterTypeCodecs[dstType]; typeCodec != nil { + if 
len(srcPars) > 1 { + return fmt.Errorf("cannot map more than one section") + } + + v, err := typeCodec.UnmarshalParameter(srcPars[0]) + if err != nil { + return err + } + + val := reflect.ValueOf(v) + + if !val.Type().AssignableTo(dstVal.Type()) { + return fmt.Errorf( + "type codec returned value of type %s, which is not "+ + "assignable to destination type %s", + val.Type().Name(), dstVal.Type().Name(), + ) + } + + dstVal.Set(val) + return nil + } + + // value type may be different in case when dstType is slice element type + if dstVal.Type() != dstType { + if typeCodec := parameterTypeCodecs[dstVal.Type()]; typeCodec != nil { + if len(srcPars) > 1 { + return fmt.Errorf("cannot map more than one section") + } + + v, err := typeCodec.UnmarshalParameter(srcPars[0]) + if err != nil { + return err + } + val := reflect.ValueOf(v) + + if !val.Type().AssignableTo(dstVal.Type()) { + return fmt.Errorf( + "type codec returned value of type %s, which is not "+ + "assignable to destination type %s", + val.Type().Name(), dstVal.Type().Name(), + ) + } + + dstVal.Set(val) + return nil + } + } + + if dstVal.Kind() == reflect.Slice { + elType := dstType.Elem() + elVarType := elType + if elType.Kind() == reflect.Pointer { + elVarType = elType.Elem() + } + for i, srcPar := range srcPars { + elVar := reflect.New(elVarType) + err := unmarshalParameterValue( + [][]Word{srcPar}, + elVar, + reflect.PointerTo(elVarType), + ) + if err != nil { + return fmt.Errorf( + "unmarshaling parameter at %s to slice element %d "+ + "of type %s: %w", + srcPar[len(srcPar)-1].Location, i, + elType.Name(), err, + ) + } + if elType.Kind() != reflect.Pointer { + elVar = elVar.Elem() + } + dstVal.Set(reflect.Append(dstVal, elVar)) + } + return nil + } + + if len(srcPars) > 1 { + return fmt.Errorf("cannot map more than one section") + } + + if dstVal.Kind() == reflect.Pointer { + if dstVal.Type().Implements(reflect.TypeFor[ParameterUnmarshaler]()) { + if dstVal.IsNil() { + newVal := reflect.New(dstVal.Type().Elem()) + dstVal.Set(newVal) + } + return dstVal. + Interface().(ParameterUnmarshaler). + UnmarshalParameter(srcPars[0]) + } + } else if um, ok := dstVal.Addr().Interface().(ParameterUnmarshaler); ok { + return um.UnmarshalParameter(srcPars[0]) + } + + return fmt.Errorf("unsupported field type") +} diff --git a/images/agent/pkg/drbdconf/encode.go b/images/agent/pkg/drbdconf/encode.go new file mode 100644 index 000000000..d47035c42 --- /dev/null +++ b/images/agent/pkg/drbdconf/encode.go @@ -0,0 +1,246 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdconf + +import ( + "errors" + "fmt" + "reflect" + "strings" +) + +/* +# Mapping of Parameter Types + +All primitive types' zero values should semantically correspond to a missing +DRBD section parameter (even for required parameters). + +# Tags + + - `drbd:"parametername"` to select the name of the parameter. 
There can be one + parameterless tag: `drbd:""`, which selects the section key itself. + - [SectionKeyworder] and slices of such types SHOULD NOT be tagged; their name + is always taken from [SectionKeyworder] + - subsections should always be represented with struct pointers + - the `drbd:"parname1,parname2"` tag form allows specifying alternative + parameter names, which will be tried during unmarshaling. Marshaling will + always use the first name. + +# Primitive Types Support + +To add marshaling/unmarshaling support for another primitive type, consider the +following options: + - implement [ParameterTypeCodec] and register it with + [RegisterParameterTypeCodec]. It will be used for every usage of that type, + with the highest priority. It will even take precedence over built-in slice + support. This method is useful for fields of "marker" interface types. + - implement [ParameterCodec]. This is a last-resort method; it is used only + when there is no [ParameterTypeCodec] for the type. +*/ +func Marshal[T any, TP Ptr[T]](src TP, dst *Section) error { + return marshalSection(reflect.ValueOf(src), dst) +} + +func marshalSection(srcPtrVal reflect.Value, dst *Section) error { + err := visitStructFields( + srcPtrVal, + func(f *visitedField) error { + if len(f.ParameterNames) > 0 { + // zero values always mean a missing parameter + if isZeroValue(f.FieldVal) { + return nil + } + + pars, err := marshalParameters(f.Field, f.FieldVal) + if err != nil { + return err + } + + if f.ParameterNames[0] == "" { + if len(pars) > 1 { + return fmt.Errorf( + "marshaling field %s: cannot "+ + "render more than one parameter value to the key", + f.Field.Name, + ) + } + // current section key + dst.Key = append(dst.Key, pars[0]...) + } else { + for _, words := range pars { + // new parameter + par := &Parameter{} + par.Key = append(par.Key, NewWord(f.ParameterNames[0])) + par.Key = append(par.Key, words...)
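+ // Each marshaled value group becomes its own parameter line: the first tag name is used as the parameter keyword, followed by the value words.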
+ dst.Elements = append(dst.Elements, par) + } + } + } else if ok, _, kw := isSliceOfStructPtrsAndSectionKeyworders( + f.Field.Type, + ); ok { + for i := range f.FieldVal.Len() { + elem := f.FieldVal.Index(i) + + subsecItem := &Section{Key: []Word{NewWord(kw)}} + err := marshalSection(elem, subsecItem) + if err != nil { + return fmt.Errorf( + "marshaling field %s, item %d: %w", + f.Field.Name, i, err, + ) + } + dst.Elements = append(dst.Elements, subsecItem) + } + } else if ok, kw := isStructPtrAndSectionKeyworder(f.FieldVal); ok { + subsec := &Section{Key: []Word{NewWord(kw)}} + err := marshalSection(f.FieldVal, subsec) + if err != nil { + return fmt.Errorf( + "marshaling field %s: %w", + f.Field.Name, err, + ) + } + dst.Elements = append(dst.Elements, subsec) + } + return nil + }, + ) + + if err != nil { + return err + } + + return nil +} + +func marshalParameters( + field reflect.StructField, + fieldVal reflect.Value, +) ([][]Word, error) { + parsStrs, err := marshalParameterValue(fieldVal, field.Type) + if err != nil { + return nil, fmt.Errorf("marshaling field %s: %w", field.Name, err) + } + + var pars [][]Word + for _, parStr := range parsStrs { + pars = append(pars, NewWords(parStr)) + } + + return pars, nil +} + +func marshalParameterValue( + srcVal reflect.Value, + srcType reflect.Type, +) ([][]string, error) { + if typeCodec := parameterTypeCodecs[srcType]; typeCodec != nil { + res, err := typeCodec.MarshalParameter(srcVal.Interface()) + if err != nil { + return nil, err + } + return [][]string{res}, nil + } + + // value type may be different in case when srcType is slice element type + if srcVal.Type() != srcType { + if typeCodec := parameterTypeCodecs[srcVal.Type()]; typeCodec != nil { + resItem, err := typeCodec.MarshalParameter(srcVal.Interface()) + if err != nil { + return nil, err + } + return [][]string{resItem}, nil + } + } + + if srcType.Kind() == reflect.Slice { + var res [][]string + for i := 0; i < srcVal.Len(); i++ { + elVal := srcVal.Index(i) + + elRes, err := marshalParameterValue(elVal, srcType.Elem()) + if err != nil { + return nil, err + } + if len(elRes) > 1 { + return nil, errors.New( + "marshaling slices of slices is not supported", + ) + } + res = append(res, elRes[0]) + } + return res, nil + } + + if m, ok := srcVal.Interface().(ParameterMarshaler); ok { + resItem, err := m.MarshalParameter() + if err != nil { + return nil, err + } + return [][]string{resItem}, nil + } + + // interface may be implemented for pointer receiver + if srcVal.Kind() != reflect.Pointer { + if m, ok := srcVal.Addr().Interface().(ParameterMarshaler); ok { + resItem, err := m.MarshalParameter() + if err != nil { + return nil, err + } + return [][]string{resItem}, nil + } + } + + return nil, fmt.Errorf("unsupported field type") +} + +func isZeroValue(v reflect.Value) bool { + if v.IsZero() { + return true + } + if v.Kind() == reflect.Slice && v.Len() == 0 { + return true + } + return false +} + +func getDRBDParameterNames(field reflect.StructField) ([]string, error) { + tagValue, ok := field.Tag.Lookup("drbd") + if !ok { + return nil, nil + } + + tagValue = strings.TrimSpace(tagValue) + + if tagValue == "" { + return []string{""}, nil + } + + names := strings.Split(tagValue, ",") + for i, n := range names { + n = strings.TrimSpace(n) + if len(n) == 0 || !isTokenStr(n) { + return nil, + fmt.Errorf( + "field %s tag `drbd` value: invalid format", + field.Name, + ) + } + names[i] = n + } + return names, nil +} diff --git a/images/agent/pkg/drbdconf/interfaces.go 
b/images/agent/pkg/drbdconf/interfaces.go new file mode 100644 index 000000000..f3b9c0b59 --- /dev/null +++ b/images/agent/pkg/drbdconf/interfaces.go @@ -0,0 +1,45 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdconf + +type SectionKeyworder interface { + SectionKeyword() string +} + +type ParameterCodec interface { + ParameterMarshaler + ParameterUnmarshaler +} + +type ParameterMarshaler interface { + MarshalParameter() ([]string, error) +} + +type ParameterUnmarshaler interface { + UnmarshalParameter(p []Word) error +} + +// # Type constraints + +type SectionPtr[T any] interface { + *T + SectionKeyworder +} + +type Ptr[T any] interface { + *T +} diff --git a/images/agent/pkg/drbdconf/parser.go b/images/agent/pkg/drbdconf/parser.go new file mode 100644 index 000000000..8932a540c --- /dev/null +++ b/images/agent/pkg/drbdconf/parser.go @@ -0,0 +1,401 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +// Format description: +// - https://linbit.com/man/v9/?linbitman=drbd.conf.5.html +// - https://manpages.debian.org/bookworm/drbd-utils/drbd.conf-9.0.5.en.html +package drbdconf + +import ( + "errors" + "fmt" + "io/fs" + "path/filepath" +) + +func Parse(fsys fs.FS, name string) (*Root, error) { + parser := &fileParser{} + if err := parser.parseFile(fsys, name); err != nil { + return nil, err + } + + return parser.root, nil +} + +type fileParser struct { + included map[string]*Root + fsys fs.FS + + data []byte + idx int + + root *Root + + // for error reporting only, zero-based + loc Location +} + +// [Word] or [trivia] +type token interface { + _token() +} + +const TokenMaxLen = 255 + +type trivia byte + +func (*trivia) _token() {} + +const ( + triviaOpenBrace trivia = '{' + triviaCloseBrace trivia = '}' + triviaSemicolon trivia = ';' +) + +func (p *fileParser) parseFile(fsys fs.FS, name string) (err error) { + data, err := fs.ReadFile(fsys, name) + if err != nil { + return fmt.Errorf("reading file %s: %w", name, err) + } + + p.fsys = fsys + p.data = data + p.root = &Root{ + Filename: name, + } + p.loc = Location{Filename: name} + if p.included == nil { + p.included = map[string]*Root{} + } + p.included[name] = p.root + + // since comments are checked only on position advance, + // we have to do an early check before the first advance happens + p.skipComment() + + var words []Word + + for { + var token token + if token, err = p.parseToken(); err != nil { + return p.report(err) + } + + if token == nil { // EOF + break + } + + switch t := token.(type) { + case *Word: + words = append(words, *t) + case *trivia: + switch *t { + case triviaOpenBrace: + if len(words) == 0 { + return p.report(errors.New("unexpected character '{'")) + } + s := &Section{ + Key: words, + } + words = nil + if s.Elements, err = p.parseSectionElements(); err != nil { + return err + } + p.root.Elements = append(p.root.Elements, s) + case triviaCloseBrace: + return p.report(errors.New("unexpected character '}'")) + case triviaSemicolon: + if len(words) == 0 { + return p.report(errors.New("unexpected character ';'")) + } + if words[0].Value != "include" { + // be tolerant to new keywords + continue + } + if len(words) != 2 { + return p.report(errors.New("expected exactly 1 argument in 'include'")) + } + + incl := &Include{ + Glob: words[1].Value, + } + words = nil + + var inclNames []string + + if inclNames, err = fs.Glob(p.fsys, incl.Glob); err != nil { + return p.report(fmt.Errorf("parsing glob pattern: %w", err)) + } + + for _, inclName := range inclNames { + if !filepath.IsAbs(inclName) { + // filepath is relative to current file + inclName = filepath.Join(filepath.Dir(name), inclName) + } + + inclRoot := p.included[inclName] + if inclRoot == nil { + includedParser := &fileParser{ + included: p.included, + } + if err := includedParser.parseFile(fsys, inclName); err != nil { + return err + } + inclRoot = includedParser.root + } + + incl.Files = append(incl.Files, inclRoot) + } + + p.root.Elements = append(p.root.Elements, incl) + default: + panic("unexpected trivia type") + } + default: + panic("unexpected token type") + } + } + + if len(words) > 0 { + return fmt.Errorf("unexpected EOF") + } + + return nil +} + +// Returns: +// - (slice of [Section] or [Parameter] elements, nil) in case of success +// - (nil, [error]) in case of error +func (p *fileParser) parseSectionElements() (elements []SectionElement, err error) { + p.skipWhitespace() + + var words []Word + + for { + var token token + if token, err = 
p.parseToken(); err != nil { + return nil, err + } + + if token == nil { // EOF + return nil, p.report(errors.New("unexpected EOF")) + } + + switch t := token.(type) { + case *Word: + words = append(words, *t) + case *trivia: + switch *t { + case triviaOpenBrace: + if len(words) == 0 { + return nil, p.report(errors.New("unexpected character '{'")) + } + s := &Section{ + Key: words, + } + words = nil + if s.Elements, err = p.parseSectionElements(); err != nil { + return nil, err + } + elements = append(elements, s) + case triviaCloseBrace: + if len(words) > 0 { + return nil, p.report(errors.New("unexpected character '}'")) + } + return + case triviaSemicolon: + if len(words) == 0 { + return nil, p.report(errors.New("unexpected character ';'")) + } + + p := &Parameter{ + Key: words, + } + words = nil + elements = append(elements, p) + default: + panic("unexpected trivia type") + } + default: + panic("unexpected token type") + } + } +} + +// Returns: +// - ([trivia], nil) for trivia tokens. +// - ([Word], nil) for word tokens. +// - (nil, nil) in case of EOF. +// - (nil, [error]) in case of error +func (p *fileParser) parseToken() (token, error) { + p.skipWhitespace() + if p.eof() { + return nil, nil + } + + if p.ch() == '"' { + p.advance(false) + return p.parseQuotedWord() + } + + if tr, ok := newTrivia(p.ch()); ok { + p.advance(true) + return tr, nil + } + + var word []byte + loc := p.loc + + for ; !p.eof() && !isWordTerminatorChar(p.ch()); p.advance(true) { + if !isTokenChar(p.ch()) { + return nil, p.report(errors.New("unexpected char")) + } + if len(word) == TokenMaxLen { + return nil, p.report(fmt.Errorf("token maximum length exceeded: %d", TokenMaxLen)) + } + + word = append(word, p.ch()) + } + + return &Word{Value: string(word), Location: loc}, nil +} + +func (p *fileParser) parseQuotedWord() (*Word, error) { + var word []byte + loc := p.loc + + var escaping bool + for ; ; p.advance(false) { + if p.eof() { + return nil, p.report(errors.New("unexpected EOF")) + } + + if escaping { + switch p.ch() { + case '\\': + word = append(word, '\\') + case 'n': + word = append(word, '\n') + case '"': + word = append(word, '"') + default: + return nil, p.report(errors.New("unexpected escape sequence")) + } + escaping = false + } else { + switch p.ch() { + case '\\': + escaping = true + case '\n': + return nil, p.report(errors.New("unexpected EOL")) + case '"': + // success + p.advance(true) + return &Word{ + IsQuoted: true, + Value: string(word), + Location: loc, + }, nil + default: + word = append(word, p.ch()) + } + } + } +} + +func (p *fileParser) ch() byte { + return p.data[p.idx] +} + +func (p *fileParser) advance(skipComment bool) { + p.advanceAndCountPosition() + + if skipComment { + p.skipComment() + } +} + +func (p *fileParser) advanceAndCountPosition() { + if p.ch() == '\n' { + p.loc = p.loc.NextLine() + } else { + p.loc = p.loc.NextCol() + } + + p.idx++ +} + +func (p *fileParser) eof() bool { + return p.idx == len(p.data) +} + +func (p *fileParser) skipComment() { + if p.eof() || p.ch() != '#' { + return + } + for !p.eof() && p.ch() != '\n' { + p.advanceAndCountPosition() + } +} + +func (p *fileParser) skipWhitespace() { + for !p.eof() && isWhitespace(p.ch()) { + p.advance(true) + } +} + +func (p *fileParser) report(err error) error { + return fmt.Errorf("%s: parsing error: %w", p.loc, err) +} + +func newTrivia(ch byte) (*trivia, bool) { + tr := trivia(ch) + switch tr { + case triviaCloseBrace: + return &tr, true + case triviaOpenBrace: + return &tr, true + case triviaSemicolon: + return 
&tr, true + default: + return nil, false + } +} + +func isTokenStr(s string) bool { + for i := 0; i < len(s); i++ { + if !isTokenChar(s[i]) { + return false + } + } + return true +} + +func isTokenChar(ch byte) bool { + return (ch >= 'a' && ch <= 'z') || + (ch >= 'A' && ch <= 'Z') || + (ch >= '0' && ch <= '9') || + ch == '.' || ch == '/' || ch == '_' || ch == '-' || ch == ':' || + ch == '[' || ch == ']' || ch == '%' +} + +func isWordTerminatorChar(ch byte) bool { + return isWhitespace(ch) || ch == ';' || ch == '{' +} + +func isWhitespace(ch byte) bool { + return ch == ' ' || ch == '\t' || ch == '\r' || ch == '\n' +} diff --git a/images/agent/pkg/drbdconf/parser_test.go b/images/agent/pkg/drbdconf/parser_test.go new file mode 100644 index 000000000..5284e3721 --- /dev/null +++ b/images/agent/pkg/drbdconf/parser_test.go @@ -0,0 +1,59 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdconf + +import ( + "fmt" + "os" + "path/filepath" + "testing" +) + +func TestConf(t *testing.T) { + root, err := os.OpenRoot("./testdata/") + if err != nil { + t.Fatal(err) + } + + cfg, err := Parse(root.FS(), "root.conf") + if err != nil { + t.Fatal(err) + } + + err = cfg.WalkConfigs(func(conf *Root) error { + filename := "./testdata/out/" + conf.Filename + dir := filepath.Dir(filename) + if err := os.MkdirAll(dir, 0755); err != nil { + return fmt.Errorf("create directory %s: %w", dir, err) + } + file, err := os.OpenFile(filename, os.O_CREATE|os.O_RDWR|os.O_TRUNC, 0644) + if err != nil { + return fmt.Errorf("open file %s: %w", filename, err) + } + n, err := conf.WriteTo(file) + if err != nil { + return fmt.Errorf("writing to file %s: %w", filename, err) + } + t.Logf("wrote %d bytes to %s", n, filename) + return nil + }) + if err != nil { + t.Fatal(err) + } + + _ = cfg +} diff --git a/images/agent/pkg/drbdconf/root.go b/images/agent/pkg/drbdconf/root.go new file mode 100644 index 000000000..5e00842c9 --- /dev/null +++ b/images/agent/pkg/drbdconf/root.go @@ -0,0 +1,196 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package drbdconf + +import ( + "fmt" + "iter" +) + +type Root struct { + Filename string + Elements []RootElement +} + +func (root *Root) AsSection() *Section { + sec := &Section{} + + for _, subSec := range root.TopLevelSections() { + sec.Elements = append(sec.Elements, subSec) + } + + return sec +} + +func (root *Root) TopLevelSections() iter.Seq2[*Root, *Section] { + return func(yield func(*Root, *Section) bool) { + visited := map[*Root]struct{}{root: {}} + + for _, el := range root.Elements { + if sec, ok := el.(*Section); ok { + if !yield(root, sec) { + return + } + continue + } + incl := el.(*Include) + for _, subRoot := range incl.Files { + if _, ok := visited[subRoot]; ok { + continue + } + visited[subRoot] = struct{}{} + + for secRoot, sec := range subRoot.TopLevelSections() { + if !yield(secRoot, sec) { + return + } + } + } + } + } +} + +// [Section] or [Include] +type RootElement interface { + _rootElement() +} + +type Include struct { + Glob string + Files []*Root +} + +func (*Include) _rootElement() {} + +type Section struct { + Key []Word + Elements []SectionElement +} + +// [Section] or [Parameter] +type SectionElement interface { + _sectionElement() +} + +func (*Section) _rootElement() {} +func (*Section) _sectionElement() {} + +func (s *Section) Location() Location { return s.Key[0].Location } + +func (s *Section) ParametersByKey(name string) iter.Seq[*Parameter] { + return func(yield func(*Parameter) bool) { + for par := range s.Parameters() { + if par.Key[0].Value == name { + if !yield(par) { + return + } + } + } + } +} + +func (s *Section) Parameters() iter.Seq[*Parameter] { + return func(yield func(*Parameter) bool) { + for _, el := range s.Elements { + if par, ok := el.(*Parameter); ok { + if !yield(par) { + return + } + } + } + } +} + +func (s *Section) SectionsByKey(name string) iter.Seq[*Section] { + return func(yield func(*Section) bool) { + for par := range s.Sections() { + if par.Key[0].Value == name { + if !yield(par) { + return + } + } + } + } +} + +func (s *Section) Sections() iter.Seq[*Section] { + return func(yield func(*Section) bool) { + for _, el := range s.Elements { + if par, ok := el.(*Section); ok { + if !yield(par) { + return + } + } + } + } +} + +type Parameter struct { + Key []Word +} + +func (*Parameter) _sectionElement() {} +func (p *Parameter) Location() Location { return p.Key[0].Location } + +type Word struct { + // means that token is definitely not a keyword, but a value + IsQuoted bool + // Unquoted value + Value string + Location Location +} + +func NewWord(word string) Word { + return Word{ + Value: word, + IsQuoted: len(word) == 0 || !isTokenStr(word), + } +} + +func NewWords(wordStrs []string) []Word { + words := make([]Word, len(wordStrs)) + for i, s := range wordStrs { + words[i] = NewWord(s) + } + return words +} + +func (*Word) _token() {} + +func (w *Word) LocationEnd() Location { + loc := w.Location + loc.ColIndex += len(w.Value) + return loc +} + +type Location struct { + // for error reporting only, zero-based + LineIndex, ColIndex int + Filename string +} + +func (l Location) NextLine() Location { + return Location{l.LineIndex + 1, 0, l.Filename} +} + +func (l Location) NextCol() Location { + return Location{l.LineIndex, l.ColIndex + 1, l.Filename} +} + +func (l Location) String() string { + return fmt.Sprintf("%s [Ln %d, Col %d]", l.Filename, l.LineIndex+1, l.ColIndex+1) +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/adjust_switch_diskfull_r0.res 
b/images/agent/pkg/drbdconf/testdata/drbd-utils/adjust_switch_diskfull_r0.res new file mode 100644 index 000000000..79ccfa24d --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/adjust_switch_diskfull_r0.res @@ -0,0 +1,47 @@ +resource r0 { + options { + quorum majority; + } + volume 0 { + device minor 10; + disk /dev/scratch/r0_0; + meta-disk internal; + } + on undertest { node-id 1; volume 0 { disk none; } } + on i2 { node-id 2; } + on i3 { node-id 3; } + + net { load-balance-paths yes; } + + skip { + path { + host undertest address 192.168.122.11:7000; + host i2 address 192.168.122.12:7000; + } + path { + host undertest address 192.168.122.11:7001; + host i2 address 192.168.122.12:7001; + } + } + connection { + path { + host i2 address 192.168.122.12:7000; + host i3 address 192.168.122.13:7000; + } + path { + host i2 address 192.168.122.12:7001; + host i3 address 192.168.122.13:7001; + } + } + connection { + path { + host undertest address 192.168.122.11:7000; + host i3 address 192.168.122.13:7000; + } + path { + host undertest address 192.168.122.11:7001; + host i3 address 192.168.122.13:7001; + } + } +} + diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/block-size.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/block-size.res new file mode 100644 index 000000000..36e8984f7 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/block-size.res @@ -0,0 +1,71 @@ +resource res { + disk { + disk-flushes no; + md-flushes no; + block-size 4096; + } + + on undertest.ryzen9.home { + node-id 0; + volume 0 { + device /dev/drbd1; + disk none; + } + + } + + on u2.ryzen9.home { + node-id 1; + volume 0 { + device /dev/drbd1; + disk /dev/mapper/diskless-logical-block-size-20230320-100658-disk0-ebs; + meta-disk internal; + } + + } + + on u3.ryzen9.home { + node-id 2; + volume 0 { + device /dev/drbd1; + disk /dev/mapper/diskless-logical-block-size-20230320-100658-disk0-ebs; + meta-disk internal; + } + + } + + connection { + net { + } + + path { + host undertest address 192.168.123.51:7789; + host u2 address 192.168.123.52:7789; + } + + } + + connection { + net { + } + + path { + host undertest address 192.168.123.51:7789; + host u3 address 192.168.123.53:7789; + } + + } + + connection { + net { + } + + path { + host u2 address 192.168.123.52:7789; + host u3 address 192.168.123.53:7789; + } + + } + +} + diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/drbd_8.4.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/drbd_8.4.res new file mode 100644 index 000000000..4cc3e139a --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/drbd_8.4.res @@ -0,0 +1,49 @@ +common { + protocol C; + syncer { + rate 25M; + al-extents 379; + csums-alg md5; + verify-alg crc32c; + c-min-rate 20M; + c-max-rate 500M; + } + handlers { + outdate-peer "/opt/root-scripts/bin/fence-peer"; + local-io-error "/opt/root-scripts/bin/handle-io-error"; + pri-on-incon-degr "/opt/root-scripts/bin/handle-io-error"; + } + net { + timeout 50; + connect-int 10; + ping-int 5; + ping-timeout 50; + cram-hmac-alg md5; + csums-after-crash-only; + shared-secret "gaeWoor7dawei3Oo"; + ko-count 0; + } +} + +resource dbdata_resource { + startup { + wfc-timeout 1; + } + disk { + no-disk-flushes; + no-md-flushes; + fencing resource-and-stonith; + on-io-error call-local-io-error; + disk-timeout 0; + } + device /dev/drbd1; + disk /dev/dbdata01/lvdbdata01; + meta-disk internal; + + on undertest { + address 172.16.6.211:1120; + } + on peer-host { + address 172.16.0.249:1120; + } +} diff --git 
a/images/agent/pkg/drbdconf/testdata/drbd-utils/drbdctrl.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/drbdctrl.res new file mode 100644 index 000000000..20ce1685c --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/drbdctrl.res @@ -0,0 +1,35 @@ +resource r0 { + net { + cram-hmac-alg sha256; + shared-secret "Uwni5ZRVCvbqk3AwHD4K"; + allow-two-primaries no; + } + volume 0 { + device minor 0; + disk /dev/drbdpool/.drbdctrl_0; + meta-disk internal; + } + volume 1 { + device minor 1; + disk /dev/drbdpool/.drbdctrl_1; + meta-disk internal; + } + on undertest { + node-id 0; + address ipv4 10.43.70.115:6999; + } + on rckdebb { + node-id 1; + address ipv4 10.43.70.116:6999; + } + on rckdebd { + node-id 2; + address ipv4 10.43.70.118:6999; + } + connection-mesh { + hosts undertest rckdebb rckdebd; + net { + protocol C; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/drbdmeta_force_flag.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/drbdmeta_force_flag.res new file mode 100644 index 000000000..8973661cf --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/drbdmeta_force_flag.res @@ -0,0 +1,18 @@ +resource r0 { + on undertest { + node-id 1; + address ipv4 10.1.1.1:7006; + volume 0 { + device minor 1; + disk /dev/foo/fun/0; + } + } + on other { + node-id 2; + address ipv4 10.1.1.2:7006; + volume 0 { + device minor 1; + disk /dev/foo/fun/0; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/floating-ipv4.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/floating-ipv4.res new file mode 100644 index 000000000..24d4240c0 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/floating-ipv4.res @@ -0,0 +1,5 @@ +resource r0 { + volume 0 { device minor 1; disk /dev/foo/fun/0; } + floating 127.0.0.1:7706 { node-id 1; } # undertest + floating 127.1.2.3:7706 { node-id 2; } # other +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/floating-ipv6.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/floating-ipv6.res new file mode 100644 index 000000000..e1ba9a6ae --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/floating-ipv6.res @@ -0,0 +1,7 @@ +resource r0 { + volume 0 { device minor 1; disk /dev/foo/fun/0; } + # undertest, 127.0.0.1 used to identify "self", + # we can not rely on ::1%lo to be present in all CI pipelines + floating 127.0.0.1:7706 { node-id 1; } + floating ipv6 [fe80::1022:53ff:feb7:614f%vethX]:7706 { node-id 2; } # other +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/man.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/man.res new file mode 100644 index 000000000..37ee34e68 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/man.res @@ -0,0 +1,26 @@ +resource r0 { + net { + cram-hmac-alg sha1; + shared-secret "FooFunFactory"; + } + volume 0 { + device /dev/drbd1; + disk /dev/sda7; + meta-disk internal; + } + on undertest { + node-id 0; + address 10.1.1.31:7000; + } + on bob { + node-id 1; + address 10.1.1.32:7000; + } + connection { + host undertest port 7000; + host bob port 7000; + net { + protocol C; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/nat-address.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/nat-address.res new file mode 100644 index 000000000..b90fd0e31 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/nat-address.res @@ -0,0 +1,15 @@ +resource "nat-address" { + volume 0 { + device minor 99; + disk "/dev/foo/bar4"; + meta-disk "internal"; + } + on "undertest" { + node-id 0; + } + on 
"other" { + node-id 1; + } + connection { + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/node-id-missing.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/node-id-missing.res new file mode 100644 index 000000000..100aac641 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/node-id-missing.res @@ -0,0 +1,92 @@ + +resource site1 { + net { + cram-hmac-alg "sha1"; + shared-secret "Gei6mahcui4Ai0Oh1"; + } + + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + on undertest { address 192.168.1.17:7000; } + on bravo { node-id 2; address 192.168.2.17:7000; } + on charlie { node-id 3; address 192.168.3.17:7000; } + connection-mesh { hosts undertest bravo charlie; } +} + +resource site2 { + net { + cram-hmac-alg "sha1"; + shared-secret "Gei6mahcui4Ai0Oh2"; + } + + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + on delta { node-id 4; address 192.168.4.17:7000; } + on echo { node-id 5; address 192.168.5.17:7000; } + on fox { node-id 6; address 192.168.6.17:7000; } + connection-mesh { hosts delta echo fox; } +} + +resource stacked_multi_path { + net { + protocol A; + + on-congestion pull-ahead; + congestion-fill 400M; + congestion-extents 1000; + } + + disk { + c-fill-target 10M; + } + + volume 0 { device minor 10; } + + stacked-on-top-of site1 { node-id 0; } + stacked-on-top-of site2 { node-id 1; } + + connection { # site1 - site2 + path { + host undertest address 192.168.1.17:7100; + host delta address 192.168.4.17:7100; + } + path { + host bravo address 192.168.2.17:7100; + host delta address 192.168.4.17:7100; + } + path { + host charlie address 192.168.3.17:7100; + host delta address 192.168.4.17:7100; + } + path { + host undertest address 192.168.1.17:7100; + host echo address 192.168.5.17:7100; + } + path { + host bravo address 192.168.2.17:7100; + host echo address 192.168.5.17:7100; + } + path { + host charlie address 192.168.3.17:7100; + host echo address 192.168.5.17:7100; + } + path { + host undertest address 192.168.1.17:7100; + host fox address 192.168.6.17:7100; + } + path { + host bravo address 192.168.2.17:7100; + host fox address 192.168.6.17:7100; + } + path { + host charlie address 192.168.3.17:7100; + host fox address 192.168.6.17:7100; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/proxy_2sites_3nodes.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/proxy_2sites_3nodes.res new file mode 100644 index 000000000..b5c62fd4a --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/proxy_2sites_3nodes.res @@ -0,0 +1,46 @@ + +resource proxy_2sites_3nodes { + volume 0 { + device minor 19; + disk /dev/foo/bar; + meta-disk internal; + } + + on alpha { + node-id 0; + address 192.168.31.1:7800; + } + on bravo { + node-id 1; + address 192.168.31.2:7800; + } + on charlie { + node-id 2; + address 192.168.31.3:7800; + } + + connection { + host alpha; + host bravo; + net { protocol C; } + } + + connection { + net { protocol A; } + + volume 0 { + disk { + resync-rate 10M; + c-plan-ahead 20; + c-delay-target 10; + c-fill-target 100; + c-min-rate 10; + c-max-rate 100M; + } + } + } + + connection { + net { protocol A; } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/release_9_1_1.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/release_9_1_1.res new file mode 100644 index 000000000..83c08d2e3 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/release_9_1_1.res @@ -0,0 +1,49 @@ +# This file was generated by drbdmanage(8), do not edit manually. 
+#dm-meta:{"create_date": "2017-09-02T12:32:53.114854"} + +resource r0 { + net { + allow-two-primaries yes; + shared-secret "EIUhGoz9e+FUY+XB/wX3"; + cram-hmac-alg sha1; + } + connection-mesh { + hosts undertest pve3 pve2; + } + on undertest { + node-id 1; + address ipv4 10.1.1.1:7006; + volume 0 { + device minor 143; + disk /dev/drbdpool/vm-105-disk-1_00; + disk { + size 4194304k; + } + meta-disk internal; + } + } + on pve3 { + node-id 2; + address ipv4 10.1.1.3:7006; + volume 0 { + device minor 143; + disk /dev/drbdpool/vm-105-disk-1_00; + disk { + size 4194304k; + } + meta-disk internal; + } + } + on pve2 { + node-id 0; + address ipv4 10.1.1.2:7006; + volume 0 { + device minor 143; + disk /dev/drbdpool/vm-105-disk-1_00; + disk { + size 4194304k; + } + meta-disk internal; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/require-drbd-module-version.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/require-drbd-module-version.res new file mode 100644 index 000000000..c0b8ad880 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/require-drbd-module-version.res @@ -0,0 +1,27 @@ +require-drbd-module-version-eq 9.0.0; +resource r0 { + net { + cram-hmac-alg sha1; + shared-secret "FooFunFactory"; + } + volume 0 { + device /dev/drbd1; + disk /dev/sda7; + meta-disk internal; + } + on undertest { + node-id 0; + address 10.1.1.31:7000; + } + on bob { + node-id 1; + address 10.1.1.32:7000; + } + connection { + host undertest port 7000; + host bob port 7000; + net { + protocol C; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/resync_after.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/resync_after.res new file mode 100644 index 000000000..74ccb81c8 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/resync_after.res @@ -0,0 +1,66 @@ +resource res0 { + on swiftfox { + volume 0 { + disk /dev/ssdpool/res0_0; + disk { + discard-zeroes-if-aligned yes; + rs-discard-granularity 65536; + } + device minor 1039; + } + node-id 1; + } + on undertest { + volume 0 { + disk /dev/ssdpool/res0_0; + disk { + discard-zeroes-if-aligned yes; + rs-discard-granularity 65536; + } + device minor 1039; + } + node-id 2; + } + connection { + disk { + c-fill-target 1048576; + } + host swiftfox address ipv4 10.43.241.3:7039; + host undertest address ipv4 10.43.241.4:7039; + } +} + +resource res1 { + on swiftfox { + volume 0 { + disk /dev/ssdpool/res0_0; + disk { + discard-zeroes-if-aligned yes; + resync-after "testing_that_this_is_accepted_although_not_defined_in_here/0"; + rs-discard-granularity 65536; + } + device minor 1041; + } + node-id 1; + + } + on undertest { + volume 0 { + disk /dev/ssdpool/res0_0; + disk { + discard-zeroes-if-aligned yes; + resync-after "res0/0"; + rs-discard-granularity 65536; + } + device minor 1041; + } + node-id 2; + } + connection { + disk { + c-fill-target 1048576; + } + host swiftfox address ipv4 10.43.241.3:7041; + host undertest address ipv4 10.43.241.4:7041; + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_implicit_conn.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_implicit_conn.res new file mode 100644 index 000000000..6ec1e854b --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_implicit_conn.res @@ -0,0 +1,42 @@ +resource r0 { + net { + protocol C; + } + + startup { + wfc-timeout 60; + degr-wfc-timeout 60; + } + + on undertest { + device /dev/drbd0; + disk /dev/sdb1; + address 10.56.84.138:7788; + meta-disk internal; + } + + on node_b { + device 
/dev/drbd0; + disk /dev/sdb1; + address 10.56.84.139:7788; + meta-disk internal; + } +} + +resource r0-U { + net { + protocol B; + } + + stacked-on-top-of r0 { + device /dev/drbd10; + address 10.56.84.142:7788; + } + + on node_c { + device /dev/drbd10; + disk /dev/sdb1; + address 10.56.85.140:7788; + meta-disk internal; + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_multi_path_2sites_3nodes.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_multi_path_2sites_3nodes.res new file mode 100644 index 000000000..a56fc2ba5 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_multi_path_2sites_3nodes.res @@ -0,0 +1,91 @@ +resource site1 { + net { + cram-hmac-alg "sha1"; + shared-secret "Gei6mahcui4Ai0Oh1"; + } + + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + on alfa { node-id 1; address 192.168.1.17:7000; } + on bravo { node-id 2; address 192.168.2.17:7000; } + on charlie { node-id 3; address 192.168.3.17:7000; } + connection-mesh { hosts alfa bravo charlie; } +} + +resource site2 { + net { + cram-hmac-alg "sha1"; + shared-secret "Gei6mahcui4Ai0Oh2"; + } + + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + on delta { node-id 4; address 192.168.4.17:7000; } + on echo { node-id 5; address 192.168.5.17:7000; } + on fox { node-id 6; address 192.168.6.17:7000; } + connection-mesh { hosts delta echo fox; } +} + +resource stacked_multi_path { + net { + protocol A; + + on-congestion pull-ahead; + congestion-fill 400M; + congestion-extents 1000; + } + + disk { + c-fill-target 10M; + } + + volume 0 { device minor 10; } + + stacked-on-top-of site1 { node-id 0; } + stacked-on-top-of site2 { node-id 1; } + + connection { # site1 - site2 + path { + host alfa address 192.168.1.17:7100; + host delta address 192.168.4.17:7100; + } + path { + host bravo address 192.168.2.17:7100; + host delta address 192.168.4.17:7100; + } + path { + host charlie address 192.168.3.17:7100; + host delta address 192.168.4.17:7100; + } + path { + host alfa address 192.168.1.17:7100; + host echo address 192.168.5.17:7100; + } + path { + host bravo address 192.168.2.17:7100; + host echo address 192.168.5.17:7100; + } + path { + host charlie address 192.168.3.17:7100; + host echo address 192.168.5.17:7100; + } + path { + host alfa address 192.168.1.17:7100; + host fox address 192.168.6.17:7100; + } + path { + host bravo address 192.168.2.17:7100; + host fox address 192.168.6.17:7100; + } + path { + host charlie address 192.168.3.17:7100; + host fox address 192.168.6.17:7100; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_multi_path_3sites_2nodes.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_multi_path_3sites_2nodes.res new file mode 100644 index 000000000..82310ba6b --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/stacked_multi_path_3sites_2nodes.res @@ -0,0 +1,158 @@ + +resource site1 { + net { + cram-hmac-alg "sha1"; + shared-secret "Gei6mahcui4Ai0Oh1"; + } + + on undertest { + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + address 192.168.23.21:7000; + } + on bravo { + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + address 192.168.23.22:7000; + } +} + +resource site2 { + net { + cram-hmac-alg "sha1"; + shared-secret "Gei6mahcui4Ai0Oh2"; + } + + on charlie { + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + address 192.168.24.21:7000; + } + on delta { + volume 0 { + device minor 0; + 
disk /dev/foo; + meta-disk /dev/bar; + } + address 192.168.24.22:7000; + } +} + +resource site3 { + net { + cram-hmac-alg "sha1"; + shared-secret "Gei6mahcui4Ai0Oh3"; + } + + on echo { + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + address 192.168.25.21:7000; + } + on foxtrott { + volume 0 { + device minor 0; + disk /dev/foo; + meta-disk /dev/bar; + } + address 192.168.25.22:7000; + } +} + +resource stacked_multi_path { + net { + protocol A; + + on-congestion pull-ahead; + congestion-fill 400M; + congestion-extents 1000; + } + + disk { + c-fill-target 10M; + } + + volume 0 { + device minor 10; + } + + stacked-on-top-of site1 { + node-id 0; + } + stacked-on-top-of site2 { + node-id 1; + } + stacked-on-top-of site3 { + node-id 2; + } + + connection { # site1 - site2 + path { + host undertest address 192.168.23.21:7100; + host charlie address 192.168.24.21:7100; + } + path { + host bravo address 192.168.23.22:7100; + host delta address 192.168.24.22:7100; + } + path { + host undertest address 192.168.23.21:7100; + host delta address 192.168.24.22:7100; + } + path { + host bravo address 192.168.23.22:7100; + host charlie address 192.168.24.21:7100; + } + } + + connection { + path { + host undertest address 192.168.23.21:7100; + host echo address 192.168.25.21:7100; + } + path { + host bravo address 192.168.23.22:7100; + host foxtrott address 192.168.25.22:7100; + } + path { + host undertest address 192.168.23.21:7100; + host foxtrott address 192.168.25.22:7100; + } + path { + host bravo address 192.168.23.22:7100; + host echo address 192.168.25.21:7100; + } + + } + + connection { + path { + host charlie address 192.168.24.21:7100; + host echo address 192.168.25.21:7100; + } + path { + host delta address 192.168.24.22:7100; + host foxtrott address 192.168.25.22:7100; + } + path { + host charlie address 192.168.24.21:7100; + host foxtrott address 192.168.25.22:7100; + } + path { + host delta address 192.168.24.22:7100; + host echo address 192.168.25.21:7100; + } + } +} diff --git a/images/agent/pkg/drbdconf/testdata/drbd-utils/top-level-meta-disk.res b/images/agent/pkg/drbdconf/testdata/drbd-utils/top-level-meta-disk.res new file mode 100644 index 000000000..175b72d0a --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/drbd-utils/top-level-meta-disk.res @@ -0,0 +1,30 @@ +resource drbd_testqm { + device /dev/drbd1; + meta-disk /dev/mqmvg/MD-testqm; + syncer { + verify-alg sha1; + } + disk { + disk-flushes no; + md-flushes no; + disable-write-same yes; + resync-rate 184320; + c-fill-target 1048576; + c-max-rate 4194304; + c-min-rate 0; + } + net { + max-buffers 131072; + sndbuf-size 10485760; + rcvbuf-size 10485760; + } + on ADDRLeft { + disk /dev/mqmvg/QM-testqm; + address 192.168.45.122:7789; + } + on ADDRRight { + disk /dev/mqmvg/QM-testqm; + address 192.168.45.121:7789; + } +# discarded some stuff +} diff --git a/images/agent/pkg/drbdconf/testdata/example.res b/images/agent/pkg/drbdconf/testdata/example.res new file mode 100644 index 000000000..3477b2cb7 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/example.res @@ -0,0 +1,108 @@ +include "/var/lib/linstor.d/*.res"; + +resource r0 { + net { + protocol C; + cram-hmac-alg sha1; + shared-secret "FooFunFactory"; + } + disk { + resync-rate 10M; + } + on alice { + volume 0 { + device minor 1; + disk /dev/sda7; + meta-disk internal; + } + address 10.1.1.31:7789; + } + on bob { + # asd + volume 0 { + device minor 1; + disk /dev/sda7; + meta-disk internal; + } + address 10.1.1.32:7789; + } +} + +resource 
"pvc-65bee3d7-ae9a-435c-980f-1c84c7621d27" +{ + + options + { + on-no-data-accessible suspend-io; + on-no-quorum suspend-io; # overrides value 'suspend-io' from RG (sc-2b1e7e36-3a82-53b4-84df-d7dd70927e67) + on-suspended-primary-outdated force-secondary; + quorum majority; + quorum-minimum-redundancy 2; + } + + net + { + cram-hmac-alg sha1; + shared-secret "fvdXdAsLg5aWzOepD0SO"; + protocol C; + rr-conflict retry-connect; + verify-alg "crct10dif-pclmul"; + } + + on "a-stefurishin-worker-0" + { + volume 0 + { + disk /dev/vg-0/pvc-65bee3d7-ae9a-435c-980f-1c84c7621d27_00000; + disk + { + discard-zeroes-if-aligned no; + } + meta-disk internal; + device minor 1000; + } + node-id 0; + } + + on "a-stefurishin-worker-1" "a-stefurishin-worker-1" + { + volume 0 + { + disk /dev/drbd/this/is/not/used; + disk + { + discard-zeroes-if-aligned no; + } + meta-disk internal; + device minor 1000; + } + node-id 1; + } + + on "a-stefurishin-worker-2" + { + volume 0 + { + disk /dev/drbd/this/is/not/used; + disk + { + discard-zeroes-if-aligned no; + } + meta-disk internal; + device minor 1000; + } + node-id 2; + } + + connection + { + host "a-stefurishin-worker-0" address 10.10.11.52:7000; + host "a-stefurishin-worker-1" address ipv4 10.10.11.149:7000; + } + + connection + { + host "a-stefurishin-worker-0" address ipv4 10.10.11.52:7000; + host "a-stefurishin-worker-2" address ipv4 10.10.11.150:7000; + } +} \ No newline at end of file diff --git a/images/agent/pkg/drbdconf/testdata/root.conf b/images/agent/pkg/drbdconf/testdata/root.conf new file mode 100644 index 000000000..baf91b6d2 --- /dev/null +++ b/images/agent/pkg/drbdconf/testdata/root.conf @@ -0,0 +1,2 @@ +include "*.res"; +include "drbd-utils/*.res"; \ No newline at end of file diff --git a/images/agent/pkg/drbdconf/utils.go b/images/agent/pkg/drbdconf/utils.go new file mode 100644 index 000000000..6ac1654b3 --- /dev/null +++ b/images/agent/pkg/drbdconf/utils.go @@ -0,0 +1,72 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package drbdconf + +import "fmt" + +func SectionKeyword[T any, TP SectionPtr[T]]() string { + return TP(nil).SectionKeyword() +} + +func ptr[T any](v T) *T { return &v } + +func EnsureLen(words []Word, lenAtLeast int) error { + if len(words) < lenAtLeast { + var loc Location + if len(words) > 0 { + loc = words[len(words)-1].LocationEnd() + } + return fmt.Errorf("%s: missing value", loc) + } + + return nil +} + +func ReadEnum[T ~string]( + dst *T, + knownValues map[T]struct{}, + value string, +) error { + if err := EnsureEnum(knownValues, value); err != nil { + return err + } + *dst = T(value) + return nil +} + +func ReadEnumAt[T ~string]( + dst *T, + knownValues map[T]struct{}, + p []Word, + idx int, +) error { + if err := EnsureLen(p, idx+1); err != nil { + return err + } + if err := ReadEnum(dst, knownValues, p[idx].Value); err != nil { + return err + } + return nil +} + +func EnsureEnum[T ~string](knownValues map[T]struct{}, value string) error { + if _, ok := knownValues[T(value)]; !ok { + return fmt.Errorf("unrecognized value: '%s'", value) + } + return nil +} diff --git a/images/agent/pkg/drbdconf/v9/config.go b/images/agent/pkg/drbdconf/v9/config.go new file mode 100644 index 000000000..6fc9be57b --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/config.go @@ -0,0 +1,29 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Missing sections: +// - require-drbd-module-version-{eq,ne,gt,ge,lt,le} +// - stacked-on-top-of +// +// Missing section parameters: +// - net.transport +package v9 + +type Config struct { + Common *Common + Global *Global + Resources []*Resource +} diff --git a/images/agent/pkg/drbdconf/v9/config_test.go b/images/agent/pkg/drbdconf/v9/config_test.go new file mode 100644 index 000000000..731868f96 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/config_test.go @@ -0,0 +1,219 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package v9 + +import ( + "os" + "strings" + "testing" + + "github.com/google/go-cmp/cmp" + + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" +) + +func TestMarshalUnmarshal(t *testing.T) { + inCfg := &Config{ + Global: &Global{ + DialogRefresh: ptr(42), + DisableIPVerification: true, + UsageCount: UsageCountValueAsk, + UdevAlwaysUseVNR: true, + }, + Common: &Common{ + Disk: &DiskOptions{ + ALExtents: ptr(uint(123)), + ALUpdates: ptr(false), + DiskDrain: ptr(true), + OnIOError: IOErrorPolicyDetach, + ReadBalancing: ReadBalancingPolicy64KStriping, + ResyncAfter: "asd/asd", + }, + }, + Resources: []*Resource{ + { + Name: "r1", + Disk: &DiskOptions{ + MDFlushes: ptr(true), + }, + Connections: []*Connection{ + {}, + { + Name: "con1", + Hosts: []HostAddress{ + { + Name: "addr1", + AddressWithPort: "123.123.124.124:1000", + }, + { + Name: "addr2", + Port: ptr[uint](1232), + }, + }, + Paths: []*Path{ + { + Hosts: []HostAddress{ + { + Name: "addr1", + AddressWithPort: "123.123.124.124:123123", + }, + { + Name: "addr2", + AddressWithPort: "123.123.124.224", + }, + }, + }, + {}, + }, + }, + }, + On: []*On{ + { + HostNames: []string{"h1", "h2", "h3"}, + Address: &AddressWithPort{ + AddressFamily: "ipv4", + Address: "123.123.123.123", + Port: 1234, + }, + Volumes: []*Volume{ + { + Number: ptr(0), + Disk: ptr(VolumeDisk("/dev/a")), + }, + { + Number: ptr(1), + Disk: ptr(VolumeDisk("/dev/b")), + }, + }, + }, + { + HostNames: []string{"h1", "h2", "h3"}, + Address: &AddressWithPort{ + AddressFamily: "ipv4", + Address: "123.123.123.123", + Port: 1234, + }, + }, + }, + Floating: []*Floating{ + { + NodeID: ptr(123), + Address: &AddressWithPort{ + Address: "0.0.0.0", + Port: 222, + }, + }, + }, + Net: &Net{ + MaxBuffers: ptr(123), + KOCount: ptr(1234), + }, + Handlers: &Handlers{ + BeforeResyncTarget: "asd", + }, + Startup: &Startup{ + OutdatedWFCTimeout: ptr(23), + WaitAfterSB: true, + }, + ConnectionMesh: &ConnectionMesh{ + Hosts: []string{"g", "h", "j"}, + Net: &Net{ + Fencing: FencingPolicyResourceAndSTONITH, + }, + }, + Options: &Options{ + AutoPromote: ptr(true), + PeerAckWindow: &Unit{ + Value: 5, + Suffix: "s", + }, + Quorum: &QuorumMajority{}, + }, + }, + {Name: "r2"}, + }, + } + + rootSec := &drbdconf.Section{} + + err := drbdconf.Marshal(inCfg, rootSec) + if err != nil { + t.Fatal(err) + } + + root := &drbdconf.Root{} + + for _, sec := range rootSec.Elements { + root.Elements = append(root.Elements, sec.(*drbdconf.Section)) + } + + sb := &strings.Builder{} + + _, err = root.WriteTo(sb) + if err != nil { + t.Fatal(err) + } + t.Log("\n", sb.String()) + + outCfg := &Config{} + if err := drbdconf.Unmarshal(root.AsSection(), outCfg); err != nil { + t.Fatal(err) + } + + if !cmp.Equal(inCfg, outCfg) { + t.Error( + "expected inCfg to be equal to outCfg, got diff", + "\n", + cmp.Diff(inCfg, outCfg), + ) + } +} + +func TestUnmarshalReal(t *testing.T) { + fsRoot, err := os.OpenRoot("./../testdata/") + if err != nil { + t.Fatal(err) + } + + root, err := drbdconf.Parse(fsRoot.FS(), "root.conf") + if err != nil { + t.Fatal(err) + } + + v9Conf := &Config{} + + if err := drbdconf.Unmarshal(root.AsSection(), v9Conf); err != nil { + t.Fatal(err) + } + + dst := &drbdconf.Section{} + if err := drbdconf.Marshal(v9Conf, dst); err != nil { + t.Fatal(err) + } + dstRoot := &drbdconf.Root{} + for _, sec := range dst.Elements { + dstRoot.Elements = append(dstRoot.Elements, sec.(*drbdconf.Section)) + } + + sb := &strings.Builder{} + + _, err = dstRoot.WriteTo(sb) + if err != nil { + t.Fatal(err) + 
} + t.Log("\n", sb.String()) +} diff --git a/images/agent/pkg/drbdconf/v9/primitive_types.go b/images/agent/pkg/drbdconf/v9/primitive_types.go new file mode 100644 index 000000000..ec3da76fc --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/primitive_types.go @@ -0,0 +1,204 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import ( + "fmt" + "strconv" + "strings" + + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" +) + +// [address []
:] [port ] +type HostAddress struct { + Name string + AddressWithPort string + AddressFamily string + Port *uint +} + +func (h *HostAddress) MarshalParameter() ([]string, error) { + res := []string{h.Name} + if h.AddressWithPort != "" { + res = append(res, "address") + if h.AddressFamily != "" { + res = append(res, h.AddressFamily) + } + res = append(res, h.AddressWithPort) + } else if h.Port != nil { + res = append(res, "port") + res = append(res, strconv.FormatUint(uint64(*h.Port), 10)) + } + return res, nil +} + +func (h *HostAddress) UnmarshalParameter(p []drbdconf.Word) error { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return err + } + + hostname := p[1].Value + + if len(p) == 2 { + h.Name = hostname + return nil + } + + p = p[2:] + + addressWithPort, addressFamily, portStr, err := unmarshalHostAddress(p) + if err != nil { + return err + } + + // write result + var port *uint + if portStr != "" { + p, err := strconv.ParseUint(portStr, 10, 64) + if err != nil { + return err + } + port = ptr(uint(p)) + } + h.Name = hostname + h.AddressWithPort = addressWithPort + h.AddressFamily = addressFamily + h.Port = port + + return nil +} + +func unmarshalHostAddress(p []drbdconf.Word) ( + addressWithPort, addressFamily, portStr string, + err error, +) { + if err = drbdconf.EnsureLen(p, 2); err != nil { + return + } + + switch p[0].Value { + case "address": + if len(p) == 2 { + addressWithPort = p[1].Value + } else { // >=3 + addressFamily = p[1].Value + addressWithPort = p[2].Value + } + case "port": + portStr = p[1].Value + default: + err = fmt.Errorf("unrecognized keyword: '%s'", p[0].Value) + } + + return +} + +var _ drbdconf.ParameterCodec = &HostAddress{} + +// + +// []
: +type AddressWithPort struct { + Address string + AddressFamily string + Port uint +} + +var _ drbdconf.ParameterCodec = &AddressWithPort{} + +func (a *AddressWithPort) UnmarshalParameter(p []drbdconf.Word) error { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return err + } + + addrIdx := 1 + if len(p) >= 3 { + a.AddressFamily = p[1].Value + addrIdx++ + } + addrVal := p[addrIdx].Value + + portSepIdx := strings.LastIndexByte(addrVal, ':') + if portSepIdx < 0 { + return fmt.Errorf("invalid format: ':port' is required") + } + + addrParts := []string{addrVal[0:portSepIdx], addrVal[portSepIdx+1:]} + + a.Address = addrParts[0] + port, err := strconv.ParseUint(addrParts[1], 10, 64) + if err != nil { + return err + } + a.Port = uint(port) + return nil +} + +func (a *AddressWithPort) MarshalParameter() ([]string, error) { + res := []string{} + + if a.AddressFamily != "" { + res = append(res, a.AddressFamily) + } + res = append(res, a.Address+":"+strconv.FormatUint(uint64(a.Port), 10)) + + return res, nil +} + +type Port struct { + PortNumber uint16 +} + +type Unit struct { + Value int + Suffix string +} + +var _ drbdconf.ParameterCodec = new(Unit) + +func (u *Unit) MarshalParameter() ([]string, error) { + return []string{strconv.FormatUint(uint64(u.Value), 10) + u.Suffix}, nil +} + +func (u *Unit) UnmarshalParameter(p []drbdconf.Word) error { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return err + } + + strVal := p[1].Value + + // treat non-digit suffix as units + suffix := []byte{} + for i := len(strVal) - 1; i >= 0; i-- { + ch := strVal[i] + if ch < '0' || ch > '9' { + suffix = append(suffix, ch) + } else { + strVal = strVal[0 : i+1] + } + } + + val, err := strconv.Atoi(strVal) + if err != nil { + return err + } + + u.Value = val + u.Suffix = string(suffix) + return nil +} diff --git a/images/agent/pkg/drbdconf/v9/section_common.go b/images/agent/pkg/drbdconf/v9/section_common.go new file mode 100644 index 000000000..43675b0b3 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_common.go @@ -0,0 +1,35 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// This section can contain each a disk, handlers, net, options, and startup +// section. All resources inherit the parameters in these sections as their +// default values. +type Common struct { + Disk *DiskOptions + Handlers *Handlers + Net *Net + Startup *Startup +} + +var _ drbdconf.SectionKeyworder = &Common{} + +func (*Common) SectionKeyword() string { + return "common" +} diff --git a/images/agent/pkg/drbdconf/v9/section_connection.go b/images/agent/pkg/drbdconf/v9/section_connection.go new file mode 100644 index 000000000..e16038684 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_connection.go @@ -0,0 +1,49 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// Define a connection between two hosts. This section must contain two [HostAddress] +// parameters or multiple [Path] sections. The optional name is used to refer to +// the connection in the system log and in other messages. If no name is +// specified, the peer's host name is used instead. +type Connection struct { + Name string `drbd:""` + + // Defines an endpoint for a connection. Each [Host] statement refers to an + // [On] section in a [Resource]. If a port number is defined, this endpoint + // will use the specified port instead of the port defined in the on + // section. Each [Connection] section must contain exactly two [Host] + // parameters. Instead of two [Host] parameters the connection may contain + // multiple [Path] sections. + Hosts []HostAddress `drbd:"host"` + + Paths []*Path + + Net *Net + + Volume *ConnectionVolume + + PeerDeviceOptions *PeerDeviceOptions +} + +func (c *Connection) SectionKeyword() string { + return "connection" +} + +var _ drbdconf.SectionKeyworder = &Connection{} diff --git a/images/agent/pkg/drbdconf/v9/section_connection_mesh.go b/images/agent/pkg/drbdconf/v9/section_connection_mesh.go new file mode 100644 index 000000000..4fc70e2ff --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_connection_mesh.go @@ -0,0 +1,37 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// Define a connection mesh between multiple hosts. This section must contain a +// hosts parameter, which has the host names as arguments. This section is a +// shortcut to define many connections which share the same network options. +type ConnectionMesh struct { + // Defines all nodes of a mesh. Each name refers to an [On] section in a + // resource. The port that is defined in the [On] section will be used. + Hosts []string `drbd:"hosts"` + + Net *Net +} + +// SectionKeyword implements drbdconf.SectionKeyworder. +func (c *ConnectionMesh) SectionKeyword() string { + return "connection-mesh" +} + +var _ drbdconf.SectionKeyworder = &ConnectionMesh{} diff --git a/images/agent/pkg/drbdconf/v9/section_connection_volume.go b/images/agent/pkg/drbdconf/v9/section_connection_volume.go new file mode 100644 index 000000000..258e67967 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_connection_volume.go @@ -0,0 +1,78 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// Define a volume within a resource. The volume numbers in the various [Volume] +// sections of a resource define which devices on which hosts form a replicated +// device. +type ConnectionVolume struct { + Number *int `drbd:""` + + DiskOptions *PeerDeviceOptions + + // Define the device name and minor number of a replicated block device. + // This is the device that applications are supposed to access; in most + // cases, the device is not used directly, but as a file system. This + // parameter is required and the standard device naming convention is + // assumed. + // + // In addition to this device, udev will create + // /dev/drbd/by-res/resource/volume and /dev/drbd/by-disk/lower-level-device + // symlinks to the device. + Device *DeviceMinorNumber `drbd:"device"` + + // Define the lower-level block device that DRBD will use for storing the + // actual data. While the replicated drbd device is configured, the + // lower-level device must not be used directly. Even read-only access with + // tools like dumpe2fs(8) and similar is not allowed. The keyword none + // specifies that no lower-level block device is configured; this also + // overrides inheritance of the lower-level device. + // + // Either [VolumeDisk] or [VolumeDiskNone]. + Disk DiskValue `drbd:"disk"` + + // Define where the metadata of a replicated block device resides: it can be + // internal, meaning that the lower-level device contains both the data and + // the metadata, or on a separate device. + // + // When the index form of this parameter is used, multiple replicated + // devices can share the same metadata device, each using a separate index. + // Each index occupies 128 MiB of data, which corresponds to a replicated + // device size of at most 4 TiB with two cluster nodes. We recommend not to + // share metadata devices anymore, and to instead use the lvm volume manager + // for creating metadata devices as needed. + // + // When the index form of this parameter is not used, the size of the + // lower-level device determines the size of the metadata. The size needed + // is 36 KiB + (size of lower-level device) / 32K * (number of nodes - 1). + // If the metadata device is bigger than that, the extra space is not used. + // + // This parameter is required if a disk other than none is specified, and + // ignored if disk is set to none. A meta-disk parameter without a disk + // parameter is not allowed. + // + // Either [VolumeMetaDiskInternal] or [VolumeMetaDiskDevice]. 
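+	//
+	// As a rough illustration of the sizing formula above (an added example,
+	// not part of the original man-page text): a 1 TiB lower-level device in
+	// a 3-node cluster needs about
+	// 36 KiB + 1 TiB / 32K * (3 - 1) = 36 KiB + 64 MiB
+	// of internal metadata, so the usable data area shrinks by roughly 64 MiB.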
+ MetaDisk MetaDiskValue `drbd:"meta-disk"` +} + +var _ drbdconf.SectionKeyworder = &ConnectionVolume{} + +func (v *ConnectionVolume) SectionKeyword() string { + return "volume" +} diff --git a/images/agent/pkg/drbdconf/v9/section_disk_options.go b/images/agent/pkg/drbdconf/v9/section_disk_options.go new file mode 100644 index 000000000..e0cb6e935 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_disk_options.go @@ -0,0 +1,309 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import ( + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" +) + +// Define parameters for a volume. All parameters in this section are optional. +type DiskOptions struct { + // DRBD automatically maintains a "hot" or "active" disk area likely to be + // written to again soon based on the recent write activity. The "active" + // disk area can be written to immediately, while "inactive" disk areas must + // be "activated" first, which requires a meta-data write. We also refer to + // this active disk area as the "activity log". + // + // The activity log saves meta-data writes, but the whole log must be + // resynced upon recovery of a failed node. The size of the activity log is + // a major factor of how long a resync will take and how fast a replicated + // disk will become consistent after a crash. + // + // The activity log consists of a number of 4-Megabyte segments; the + // al-extents parameter determines how many of those segments can be active + // at the same time. The default value for al-extents is 1237, with a + // minimum of 7 and a maximum of 65536. + // + // Note that the effective maximum may be smaller, depending on how you + // created the device meta data, see also drbdmeta(8) The effective maximum + // is 919 * (available on-disk activity-log ring-buffer area/4kB -1), the + // default 32kB ring-buffer effects a maximum of 6433 (covers more than + // 25 GiB of data). + // + // We recommend to keep this well within the amount your backend storage and + // replication link are able to resync inside of about 5 minutes. + ALExtents *uint `drbd:"al-extents,no-al-extents"` + + // With this parameter, the activity log can be turned off entirely (see the + // al-extents parameter). This will speed up writes because fewer meta-data + // writes will be necessary, but the entire device needs to be + // resynchronized opon recovery of a failed primary node. The default value + // for al-updates is yes. + ALUpdates *bool `drbd:"al-updates,no-al-updates"` + + // Use disk barriers to make sure that requests are written to disk in the + // right order. Barriers ensure that all requests submitted before a barrier + // make it to the disk before any requests submitted after the barrier. This + // is implemented using 'tagged command queuing' on SCSI devices and 'native + // command queuing' on SATA devices. Only some devices and device stacks + // support this method. The device mapper (LVM) only supports barriers in + // some configurations. 
+ // + // Note that on systems which do not support disk barriers, enabling this + // option can lead to data loss or corruption. Until DRBD 8.4.1, + // disk-barrier was turned on if the I/O stack below DRBD did support + // barriers. Kernels since linux-2.6.36 (or 2.6.32 RHEL6) no longer allow to + // detect if barriers are supported. Since drbd-8.4.2, this option is off by + // default and needs to be enabled explicitly. + DiskBarrier *bool `drbd:"disk-barrier,no-disk-barrier"` + + // Use disk flushes between dependent write requests, also referred to as + // 'force unit access' by drive vendors. This forces all data to disk. This + // option is enabled by default. + DiskFlushes *bool `drbd:"disk-flushes,no-disk-flushes"` + + // Wait for the request queue to "drain" (that is, wait for the requests to + // finish) before submitting a dependent write request. This method requires + // that requests are stable on disk when they finish. Before DRBD 8.0.9, + // this was the only method implemented. This option is enabled by default. + // Do not disable in production environments. + // + // From these three methods, drbd will use the first that is enabled and + // supported by the backing storage device. If all three of these options + // are turned off, DRBD will submit write requests without bothering about + // dependencies. Depending on the I/O stack, write requests can be + // reordered, and they can be submitted in a different order on different + // cluster nodes. This can result in data loss or corruption. Therefore, + // turning off all three methods of controlling write ordering is strongly + // discouraged. + // + // A general guideline for configuring write ordering is to use disk + // barriers or disk flushes when using ordinary disks (or an ordinary disk + // array) with a volatile write cache. On storage without cache or with a + // battery backed write cache, disk draining can be a reasonable choice. + DiskDrain *bool `drbd:"disk-drain,no-disk-drain"` + + // If the lower-level device on which a DRBD device stores its data does not + // finish an I/O request within the defined disk-timeout, DRBD treats this + // as a failure. The lower-level device is detached, and the device's disk + // state advances to Diskless. If DRBD is connected to one or more peers, + // the failed request is passed on to one of them. + // + // This option is dangerous and may lead to kernel panic! + // + // "Aborting" requests, or force-detaching the disk, is intended for + // completely blocked/hung local backing devices which do no longer complete + // requests at all, not even do error completions. In this situation, + // usually a hard-reset and failover is the only way out. + // + // By "aborting", basically faking a local error-completion, we allow for a + // more graceful swichover by cleanly migrating services. Still the affected + // node has to be rebooted "soon". + // + // By completing these requests, we allow the upper layers to re-use the + // associated data pages. + // + // If later the local backing device "recovers", and now DMAs some data from + // disk into the original request pages, in the best case it will just put + // random data into unused pages; but typically it will corrupt meanwhile + // completely unrelated data, causing all sorts of damage. + // + // Which means delayed successful completion, especially for READ requests, + // is a reason to panic(). We assume that a delayed *error* completion is + // OK, though we still will complain noisily about it. 
+ // + // The default value of disk-timeout is 0, which stands for an infinite + // timeout. Timeouts are specified in units of 0.1 seconds. This option is + // available since DRBD 8.3.12. + DiskTimeout *int `drbd:"disk-timeout"` + + // Enable disk flushes and disk barriers on the meta-data device. This + // option is enabled by default. See the disk-flushes parameter. + MDFlushes *bool `drbd:"md-flushes,no-md-flushes"` + + // Configure how DRBD reacts to I/O errors on a lower-level device. + OnIOError IOErrorPolicy `drbd:"on-io-error"` + + // Distribute read requests among cluster nodes as defined by policy. The + // supported policies are prefer-local (the default), prefer-remote, + // round-robin, least-pending, when-congested-remote, 32K-striping, + // 64K-striping, 128K-striping, 256K-striping, 512K-striping and + // 1M-striping. + // + // This option is available since DRBD 8.4.1. + ReadBalancing ReadBalancingPolicy `drbd:"read-balancing"` + + // Define that a device should only resynchronize after the specified other + // device. By default, no order between devices is defined, and all devices + // will resynchronize in parallel. Depending on the configuration of the + // lower-level devices, and the available network and disk bandwidth, this + // can slow down the overall resync process. This option can be used to form + // a chain or tree of dependencies among devices. + ResyncAfter string `drbd:"resync-after"` + + // When rs-discard-granularity is set to a non zero, positive value then + // DRBD tries to do a resync operation in requests of this size. In case + // such a block contains only zero bytes on the sync source node, the sync + // target node will issue a discard/trim/unmap command for the area. + // + // The value is constrained by the discard granularity of the backing block + // device. In case rs-discard-granularity is not a multiplier of the discard + // granularity of the backing block device DRBD rounds it up. The feature + // only gets active if the backing block device reads back zeroes after a + // discard command. + // + // The usage of rs-discard-granularity may cause c-max-rate to be exceeded. + // In particular, the resync rate may reach 10x the value of + // rs-discard-granularity per second. + // + // The default value of rs-discard-granularity is 0. This option is + // available since 8.4.7. + RsDiscardGranularity *uint `drbd:"rs-discard-granularity"` + + // There are several aspects to discard/trim/unmap support on linux block + // devices. Even if discard is supported in general, it may fail silently, + // or may partially ignore discard requests. Devices also announce whether + // reading from unmapped blocks returns defined data (usually zeroes), or + // undefined data (possibly old data, possibly garbage). + // + // If on different nodes, DRBD is backed by devices with differing discard + // characteristics, discards may lead to data divergence (old data or + // garbage left over on one backend, zeroes due to unmapped areas on the + // other backend). Online verify would now potentially report tons of + // spurious differences. While probably harmless for most use cases (fstrim + // on a file system), DRBD cannot have that. + // + // To play safe, we have to disable discard support, if our local backend + // (on a Primary) does not support "discard_zeroes_data=true". 
We also have + // to translate discards to explicit zero-out on the receiving side, unless + // the receiving side (Secondary) supports "discard_zeroes_data=true", + // thereby allocating areas what were supposed to be unmapped. + // + // There are some devices (notably the LVM/DM thin provisioning) that are + // capable of discard, but announce discard_zeroes_data=false. In the case + // of DM-thin, discards aligned to the chunk size will be unmapped, and + // reading from unmapped sectors will return zeroes. However, unaligned + // partial head or tail areas of discard requests will be silently ignored. + // + // If we now add a helper to explicitly zero-out these unaligned partial + // areas, while passing on the discard of the aligned full chunks, we + // effectively achieve discard_zeroes_data=true on such devices. + // + // Setting discard-zeroes-if-aligned to yes will allow DRBD to use discards, + // and to announce discard_zeroes_data=true, even on backends that announce + // discard_zeroes_data=false. + // + // Setting discard-zeroes-if-aligned to no will cause DRBD to always + // fall-back to zero-out on the receiving side, and to not even announce + // discard capabilities on the Primary, if the respective backend announces + // discard_zeroes_data=false. + // + // We used to ignore the discard_zeroes_data setting completely. To not + // break established and expected behaviour, and suddenly cause fstrim on + // thin-provisioned LVs to run out-of-space instead of freeing up space, the + // default value is yes. + // + // This option is available since 8.4.7. + DiscardZeroesIfAligned *bool `drbd:"discard-zeroes-if-aligned,no-discard-zeroes-if-aligned"` + + // Some disks announce WRITE_SAME support to the kernel but fail with an I/O + // error upon actually receiving such a request. This mostly happens when + // using virtualized disks -- notably, this behavior has been observed with + // VMware's virtual disks. + // + // When disable-write-same is set to yes, WRITE_SAME detection is manually + // overridden and support is disabled. + // + // The default value of disable-write-same is no. This option is available + // since 8.4.7. + DisableWriteSame *bool `drbd:"disable-write-same"` +} + +var _ drbdconf.SectionKeyworder = &DiskOptions{} + +func (d *DiskOptions) SectionKeyword() string { + return "disk" +} + +type IOErrorPolicy string + +var _ drbdconf.ParameterCodec = new(IOErrorPolicy) + +var knownValuesIOErrorPolicy = map[IOErrorPolicy]struct{}{ + IOErrorPolicyPassOn: {}, + IOErrorPolicyCallLocalIOError: {}, + IOErrorPolicyDetach: {}, +} + +func (i *IOErrorPolicy) MarshalParameter() ([]string, error) { + return []string{string(*i)}, nil +} + +func (i *IOErrorPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(i, knownValuesIOErrorPolicy, p, 1) +} + +const ( + // Change the disk status to Inconsistent, mark the failed block as + // inconsistent in the bitmap, and retry the I/O operation on a remote + // cluster node. + IOErrorPolicyPassOn IOErrorPolicy = "pass_on" + // Call the local-io-error handler (see the [Handlers] section). + IOErrorPolicyCallLocalIOError IOErrorPolicy = "call-local-io-error" + // Detach the lower-level device and continue in diskless mode. 
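+	//
+	// Illustrative drbd.conf snippet (a sketch, not taken from the testdata
+	// in this change):
+	//   disk { on-io-error detach; }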
+ IOErrorPolicyDetach IOErrorPolicy = "detach" +) + +type ReadBalancingPolicy string + +var knownValuesReadBalancingPolicy = map[ReadBalancingPolicy]struct{}{ + ReadBalancingPolicyPreferLocal: {}, + ReadBalancingPolicyPreferRemote: {}, + ReadBalancingPolicyRoundRobin: {}, + ReadBalancingPolicyLeastPending: {}, + ReadBalancingPolicyWhenCongestedRemote: {}, + ReadBalancingPolicy32KStriping: {}, + ReadBalancingPolicy64KStriping: {}, + ReadBalancingPolicy128KStriping: {}, + ReadBalancingPolicy256KStriping: {}, + ReadBalancingPolicy512KStriping: {}, + ReadBalancingPolicy1MStriping: {}, +} + +var _ drbdconf.ParameterCodec = ptr(ReadBalancingPolicy("")) + +func (r *ReadBalancingPolicy) MarshalParameter() ([]string, error) { + return []string{string(*r)}, nil +} + +func (r *ReadBalancingPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(r, knownValuesReadBalancingPolicy, p, 1) +} + +const ( + ReadBalancingPolicyPreferLocal ReadBalancingPolicy = "prefer-local" + ReadBalancingPolicyPreferRemote ReadBalancingPolicy = "prefer-remote" + ReadBalancingPolicyRoundRobin ReadBalancingPolicy = "round-robin" + ReadBalancingPolicyLeastPending ReadBalancingPolicy = "least-pending" + ReadBalancingPolicyWhenCongestedRemote ReadBalancingPolicy = "when-congested-remote" + ReadBalancingPolicy32KStriping ReadBalancingPolicy = "32K-striping" + ReadBalancingPolicy64KStriping ReadBalancingPolicy = "64K-striping" + ReadBalancingPolicy128KStriping ReadBalancingPolicy = "128K-striping" + ReadBalancingPolicy256KStriping ReadBalancingPolicy = "256K-striping" + ReadBalancingPolicy512KStriping ReadBalancingPolicy = "512K-striping" + ReadBalancingPolicy1MStriping ReadBalancingPolicy = "1M-striping" +) diff --git a/images/agent/pkg/drbdconf/v9/section_global.go b/images/agent/pkg/drbdconf/v9/section_global.go new file mode 100644 index 000000000..0227e54e1 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_global.go @@ -0,0 +1,97 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import ( + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" +) + +// Define some global parameters. All parameters in this section are optional. +// Only one [Global] section is allowed in the configuration. +type Global struct { + // The DRBD init script can be used to configure and start DRBD devices, + // which can involve waiting for other cluster nodes. While waiting, the + // init script shows the remaining waiting time. The dialog-refresh defines + // the number of seconds between updates of that countdown. The default + // value is 1; a value of 0 turns off the countdown. + DialogRefresh *int `drbd:"dialog-refresh"` + + // Normally, DRBD verifies that the IP addresses in the configuration match + // the host names. Use the disable-ip-verification parameter to disable + // these checks. 
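+	//
+	// Written out as a drbd.conf flag this would look roughly like
+	// (illustrative sketch):
+	//   global { disable-ip-verification; }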
+ DisableIPVerification bool `drbd:"disable-ip-verification"` + + // A explained on DRBD's Online Usage Counter[2] web page, DRBD includes a + // mechanism for anonymously counting how many installations are using which + // versions of DRBD. The results are available on the web page for anyone to + // see. + // + // This parameter defines if a cluster node participates in the usage + // counter; the supported values are yes, no, and ask (ask the user, the + // default). + // + // We would like to ask users to participate in the online usage counter as + // this provides us valuable feedback for steering the development of DRBD. + UsageCount UsageCountValue `drbd:"usage-count"` + + // When udev asks drbdadm for a list of device related symlinks, drbdadm + // would suggest symlinks with differing naming conventions, depending on + // whether the resource has explicit volume VNR { } definitions, or only one + // single volume with the implicit volume number 0: + // # implicit single volume without "volume 0 {}" block + // DEVICE=drbd + // SYMLINK_BY_RES=drbd/by-res/ + // SYMLINK_BY_DISK=drbd/by-disk/ + // # explicit volume definition: volume VNR { } + // DEVICE=drbd + // SYMLINK_BY_RES=drbd/by-res//VNR + // SYMLINK_BY_DISK=drbd/by-disk/ + // If you define this parameter in the global section, drbdadm will always + // add the .../VNR part, and will not care for whether the volume definition + // was implicit or explicit. + // For legacy backward compatibility, this is off by default, but we do + // recommend to enable it. + UdevAlwaysUseVNR bool `drbd:"udev-always-use-vnr"` +} + +var _ drbdconf.SectionKeyworder = &Global{} + +func (g *Global) SectionKeyword() string { return "global" } + +type UsageCountValue string + +const ( + UsageCountValueYes UsageCountValue = "yes" + UsageCountValueNo UsageCountValue = "no" + UsageCountValueAsk UsageCountValue = "ask" +) + +var _ drbdconf.ParameterCodec = ptr(UsageCountValue("")) + +var knownValuesUsageCountValue = map[UsageCountValue]struct{}{ + UsageCountValueYes: {}, + UsageCountValueNo: {}, + UsageCountValueAsk: {}, +} + +func (u *UsageCountValue) MarshalParameter() ([]string, error) { + return []string{string(*u)}, nil +} + +func (u *UsageCountValue) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(u, knownValuesUsageCountValue, p, 1) +} diff --git a/images/agent/pkg/drbdconf/v9/section_handlers.go b/images/agent/pkg/drbdconf/v9/section_handlers.go new file mode 100644 index 000000000..db2f86fae --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_handlers.go @@ -0,0 +1,108 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// Define handlers to be invoked when certain events occur. 
The kernel passes +// the resource name in the first command-line argument and sets the following +// environment variables depending on the event's context: +// - For events related to a particular device: the device's minor number in +// DRBD_MINOR, the device's volume number in DRBD_VOLUME. +// - For events related to a particular device on a particular peer: the +// connection endpoints in DRBD_MY_ADDRESS, DRBD_MY_AF, DRBD_PEER_ADDRESS, +// and DRBD_PEER_AF; the device's local minor number in DRBD_MINOR, and the +// device's volume number in DRBD_VOLUME. +// - For events related to a particular connection: the connection endpoints +// in DRBD_MY_ADDRESS, DRBD_MY_AF, DRBD_PEER_ADDRESS, and DRBD_PEER_AF; and, +// for each device defined for that connection: the device's minor number in +// DRBD_MINOR_volume-number. +// - For events that identify a device, if a lower-level device is attached, +// the lower-level device's device name is passed in DRBD_BACKING_DEV (or +// DRBD_BACKING_DEV_volume-number). +// +// All parameters in this section are optional. Only a single handler can be +// defined for each event; if no handler is defined, nothing will happen. +type Handlers struct { + // Called on a resync target when a node state changes from Inconsistent to + // Consistent when a resync finishes. This handler can be used for removing + // the snapshot created in the before-resync-target handler. + AfterResyncTarget string `drbd:"after-resync-target"` + + // Called on a resync target before a resync begins. This handler can be + // used for creating a snapshot of the lower-level device for the duration + // of the resync: if the resync source becomes unavailable during a resync, + // reverting to the snapshot can restore a consistent state. + BeforeResyncTarget string `drbd:"before-resync-target"` + + // Called on a resync source before a resync begins. + BeforeResyncSource string `drbd:"before-resync-source"` + + // Called on all nodes after a verify finishes and out-of-sync blocks were + // found. This handler is mainly used for monitoring purposes. An example + // would be to call a script that sends an alert SMS. + OutOfSync string `drbd:"out-of-sync"` + + // Called on a Primary that lost quorum. This handler is usually used to + // reboot the node if it is not possible to restart the application that + // uses the storage on top of DRBD. + QuorumLost string `drbd:"quorum-lost"` + + // Called when a node should fence a resource on a particular peer. The + // handler should not use the same communication path that DRBD uses for + // talking to the peer. + FencePeer string `drbd:"fence-peer"` + + // Called when a node should remove fencing constraints from other nodes. + UnfencePeer string `drbd:"unfence-peer"` + + // Called when DRBD connects to a peer and detects that the peer is in a + // split-brain state with the local node. This handler is also called for + // split-brain scenarios which will be resolved automatically. + InitialSplitBrain string `drbd:"initial-split-brain"` + + // Called when an I/O error occurs on a lower-level device. + LocalIOError string `drbd:"local-io-error"` + + // The local node is currently primary, but DRBD believes that it should + // become a sync target. The node should give up its primary role. + PriLost string `drbd:"pri-lost"` + + // The local node is currently primary, but it has lost the + // after-split-brain auto recovery procedure. The node should be abandoned. 
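+	//
+	// Illustrative handler configuration (a sketch; the reboot command is
+	// only an example, not a recommendation made by this change):
+	//   handlers { pri-lost-after-sb "echo b > /proc/sysrq-trigger ; reboot -f"; }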
+ PriLostAfterSB string `drbd:"pri-lost-after-sb"` + + // The local node is primary, and neither the local lower-level device nor a + // lower-level device on a peer is up to date. (The primary has no device to + // read from or to write to.) + PriOnInconDegr string `drbd:"pri-on-incon-degr"` + + // DRBD has detected a split-brain situation which could not be resolved + // automatically. Manual recovery is necessary. This handler can be used to + // call for administrator attention. + SplitBrain string `drbd:"split-brain"` + + // A connection to a peer went down. The handler can learn about the reason + // for the disconnect from the DRBD_CSTATE environment variable. + Disconnected string `drbd:"disconnected"` +} + +var _ drbdconf.SectionKeyworder = &Handlers{} + +func (h *Handlers) SectionKeyword() string { + return "handlers" +} diff --git a/images/agent/pkg/drbdconf/v9/section_net.go b/images/agent/pkg/drbdconf/v9/section_net.go new file mode 100644 index 000000000..a974c81b3 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_net.go @@ -0,0 +1,630 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import ( + "errors" + "strings" + + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" +) + +// Define parameters for a connection. All parameters in this section are +// optional. +type Net struct { + // Define how to react if a split-brain scenario is detected and none of the + // two nodes is in primary role. (We detect split-brain scenarios when two + // nodes connect; split-brain decisions are always between two nodes.) + AfterSB0Pri AfterSB0PriPolicy `drbd:"after-sb-0pri"` + + // Define how to react if a split-brain scenario is detected, with one node + // in primary role and one node in secondary role. (We detect split-brain + // scenarios when two nodes connect, so split-brain decisions are always + // among two nodes.) + AfterSB1Pri AfterSB1PriPolicy `drbd:"after-sb-1pri"` + + // Define how to react if a split-brain scenario is detected and both nodes + // are in primary role. (We detect split-brain scenarios when two nodes + // connect, so split-brain decisions are always among two nodes.) + AfterSB2Pri AfterSB2PriPolicy `drbd:"after-sb-2pri"` + + // The most common way to configure DRBD devices is to allow only one node + // to be primary (and thus writable) at a time. + // + // In some scenarios it is preferable to allow two nodes to be primary at + // once; a mechanism outside of DRBD then must make sure that writes to the + // shared, replicated device happen in a coordinated way. This can be done + // with a shared-storage cluster file system like OCFS2 and GFS, or with + // virtual machine images and a virtual machine manager that can migrate + // virtual machines between physical machines. + // + // The allow-two-primaries parameter tells DRBD to allow two nodes to be + // primary at the same time. 
Never enable this option when using a + // non-distributed file system; otherwise, data corruption and node crashes + // will result! + AllowTwoPrimaries bool `drbd:"allow-two-primaries"` + + // Normally the automatic after-split-brain policies are only used if + // current states of the UUIDs do not indicate the presence of a third node. + // + // With this option you request that the automatic after-split-brain + // policies are used as long as the data sets of the nodes are somehow + // related. This might cause a full sync, if the UUIDs indicate the presence + // of a third node. (Or double faults led to strange UUID sets.) + AlwaysASBP bool `drbd:"always-asbp"` + + // As soon as a connection between two nodes is configured with drbdsetup + // connect, DRBD immediately tries to establish the connection. If this + // fails, DRBD waits for connect-int seconds and then repeats. The default + // value of connect-int is 10 seconds. + ConnectInt *uint `drbd:"connect-int"` + + // Configure the hash-based message authentication code (HMAC) or secure + // hash algorithm to use for peer authentication. The kernel supports a + // number of different algorithms, some of which may be loadable as kernel + // modules. See the shash algorithms listed in /proc/crypto. By default, + // cram-hmac-alg is unset. Peer authentication also requires a shared-secret + // to be configured. + CRAMHMACAlg string `drbd:"cram-hmac-alg"` + + // Normally, when two nodes resynchronize, the sync target requests a piece + // of out-of-sync data from the sync source, and the sync source sends the + // data. With many usage patterns, a significant number of those blocks will + // actually be identical. + // + // When a csums-alg algorithm is specified, when requesting a piece of + // out-of-sync data, the sync target also sends along a hash of the data it + // currently has. The sync source compares this hash with its own version of + // the data. It sends the sync target the new data if the hashes differ, and + // tells it that the data are the same otherwise. This reduces the network + // bandwidth required, at the cost of higher cpu utilization and possibly + // increased I/O on the sync target. + // + // The csums-alg can be set to one of the secure hash algorithms supported + // by the kernel; see the shash algorithms listed in /proc/crypto. By + // default, csums-alg is unset. + CSumsAlg string `drbd:"csums-alg"` + + // Enabling this option (and csums-alg, above) makes it possible to use the + // checksum based resync only for the first resync after primary crash, but + // not for later "network hickups". + // + // In most cases, block that are marked as need-to-be-resynced are in fact + // changed, so calculating checksums, and both reading and writing the + // blocks on the resync target is all effective overhead. + // + // The advantage of checksum based resync is mostly after primary crash + // recovery, where the recovery marked larger areas (those covered by the + // activity log) as need-to-be-resynced, just in case. Introduced in 8.4.5. + CSumsAfterCrashOnly bool `drbd:"csums-after-crash-only"` + + // DRBD normally relies on the data integrity checks built into the TCP/IP + // protocol, but if a data integrity algorithm is configured, it will + // additionally use this algorithm to make sure that the data received over + // the network match what the sender has sent. If a data integrity error is + // detected, DRBD will close the network connection and reconnect, which + // will trigger a resync. 
+ // + // The data-integrity-alg can be set to one of the secure hash algorithms + // supported by the kernel; see the shash algorithms listed in /proc/crypto. + // By default, this mechanism is turned off. + // + // Because of the CPU overhead involved, we recommend not to use this option + // in production environments. Also see the notes on data integrity below. + DataIntegrityAlg string `drbd:"data-integrity-alg"` + + // Fencing is a preventive measure to avoid situations where both nodes are + // primary and disconnected. This is also known as a split-brain situation. + Fencing FencingPolicy `drbd:"fencing"` + + // If a secondary node fails to complete a write request in ko-count times + // the timeout parameter, it is excluded from the cluster. The primary node + // then sets the connection to this secondary node to Standalone. To disable + // this feature, you should explicitly set it to 0; defaults may change + // between versions. + KOCount *int `drbd:"ko-count"` + + // Limits the memory usage per DRBD minor device on the receiving side, or + // for internal buffers during resync or online-verify. Unit is PAGE_SIZE, + // which is 4 KiB on most systems. The minimum possible setting is hard + // coded to 32 (=128 KiB). These buffers are used to hold data blocks while + // they are written to/read from disk. To avoid possible distributed + // deadlocks on congestion, this setting is used as a throttle threshold + // rather than a hard limit. Once more than max-buffers pages are in use, + // further allocation from this pool is throttled. You want to increase + // max-buffers if you cannot saturate the IO backend on the receiving side. + MaxBuffers *int `drbd:"max-buffers"` + + // Define the maximum number of write requests DRBD may issue before issuing + // a write barrier. The default value is 2048, with a minimum of 1 and a + // maximum of 20000. Setting this parameter to a value below 10 is likely to + // decrease performance. + MaxEpochSize *int `drbd:"max-epoch-size"` + + // By default, DRBD blocks when the TCP send queue is full. This prevents + // applications from generating further write requests until more buffer + // space becomes available again. + // + // When DRBD is used together with DRBD-proxy, it can be better to use the + // pull-ahead on-congestion policy, which can switch DRBD into ahead/behind + // mode before the send queue is full. DRBD then records the differences + // between itself and the peer in its bitmap, but it no longer replicates + // them to the peer. When enough buffer space becomes available again, the + // node resynchronizes with the peer and switches back to normal + // replication. + // + // This has the advantage of not blocking application I/O even when the + // queues fill up, and the disadvantage that peer nodes can fall behind much + // further. Also, while resynchronizing, peer nodes will become + // inconsistent. + OnCongestion OnCongestionPolicy `drbd:"on-congestion"` + + // The congestion-fill parameter defines how much data is allowed to be + // "in flight" in this connection. The default value is 0, which disables + // this mechanism of congestion control, with a maximum of 10 GiBytes. + // + // Also see OnCongestion. + CongestionFill *Unit `drbd:"congestion-fill"` + + // The congestion-extents parameter defines how many bitmap extents may be + // active before switching into ahead/behind mode, with the same default and + // limits as the al-extents parameter. 
The congestion-extents parameter is + // effective only when set to a value smaller than al-extents. + // + // Also see OnCongestion. + CongestionExtents *int `drbd:"congestion-extents"` + + // When the TCP/IP connection to a peer is idle for more than ping-int + // seconds, DRBD will send a keep-alive packet to make sure that a failed + // peer or network connection is detected reasonably soon. The default value + // is 10 seconds, with a minimum of 1 and a maximum of 120 seconds. The + // unit is seconds. + PingInt *int `drbd:"ping-int"` + + // Define the timeout for replies to keep-alive packets. If the peer does + // not reply within ping-timeout, DRBD will close and try to reestablish the + // connection. The default value is 0.5 seconds, with a minimum of 0.1 + // seconds and a maximum of 30 seconds. The unit is tenths of a second. + PingTimeout *int `drbd:"ping-timeout"` + + // In setups involving a DRBD-proxy and connections that experience a lot of + // buffer-bloat it might be necessary to set ping-timeout to an unusual high + // value. By default DRBD uses the same value to wait if a newly established + // TCP-connection is stable. Since the DRBD-proxy is usually located in the + // same data center such a long wait time may hinder DRBD's connect process. + // + // In such setups socket-check-timeout should be set to at least to the + // round trip time between DRBD and DRBD-proxy. I.e. in most cases to 1. + // + // The default unit is tenths of a second, the default value is 0 (which + // causes DRBD to use the value of ping-timeout instead). Introduced in + // 8.4.5. + SocketCheckTimeout *int `drbd:"socket-check-timeout"` + + // Use the specified protocol on this connection. + Protocol Protocol `drbd:"protocol"` + + // Configure the size of the TCP/IP receive buffer. A value of 0 (the + // default) causes the buffer size to adjust dynamically. This parameter + // usually does not need to be set, but it can be set to a value up to + // 10 MiB. The default unit is bytes. + RcvbufSize *Unit `drbd:"rcvbuf-size"` + + // This option helps to solve the cases when the outcome of the resync + // decision is incompatible with the current role assignment in the cluster. + RRConflict RRConflictPolicy `drbd:"rr-conflict"` + + // Configure the shared secret used for peer authentication. The secret is a + // string of up to 64 characters. Peer authentication also requires the + // cram-hmac-alg parameter to be set. + SharedSecret string `drbd:"shared-secret"` + + // Configure the size of the TCP/IP send buffer. Since DRBD 8.0.13 / 8.2.7, + // a value of 0 (the default) causes the buffer size to adjust dynamically. + // Values below 32 KiB are harmful to the throughput on this connection. + // Large buffer sizes can be useful especially when protocol A is used over + // high-latency networks; the maximum value supported is 10 MiB. + SndbufSize *Unit `drbd:"sndbuf-size"` + + // By default, DRBD uses the TCP_CORK socket option to prevent the kernel + // from sending partial messages; this results in fewer and bigger packets + // on the network. Some network stacks can perform worse with this + // optimization. On these, the tcp-cork parameter can be used to turn this + // optimization off. + TCPCork *bool `drbd:"tcp-cork"` + + // Define the timeout for replies over the network: if a peer node does not + // send an expected reply within the specified timeout, it is considered + // dead and the TCP/IP connection is closed. 
The timeout value must be lower + // than connect-int and lower than ping-int. The default is 6 seconds; the + // value is specified in tenths of a second. + Timeout *int `drbd:"timeout"` + + // Each replicated device on a cluster node has a separate bitmap for each + // of its peer devices. The bitmaps are used for tracking the differences + // between the local and peer device: depending on the cluster state, a disk + // range can be marked as different from the peer in the device's bitmap, in + // the peer device's bitmap, or in both bitmaps. When two cluster nodes + // connect, they exchange each other's bitmaps, and they each compute the + // union of the local and peer bitmap to determine the overall differences. + // + // Bitmaps of very large devices are also relatively large, but they usually + // compress very well using run-length encoding. This can save time and + // bandwidth for the bitmap transfers. + // + // The use-rle parameter determines if run-length encoding should be used. + // It is on by default since DRBD 8.4.0. + UseRLE *bool `drbd:"use-rle"` + + // Online verification (drbdadm verify) computes and compares checksums of + // disk blocks (i.e., hash values) in order to detect if they differ. The + // verify-alg parameter determines which algorithm to use for these + // checksums. It must be set to one of the secure hash algorithms supported + // by the kernel before online verify can be used; see the shash algorithms + // listed in /proc/crypto. + // + // We recommend to schedule online verifications regularly during low-load + // periods, for example once a month. Also see the notes on data integrity + // below. + VerifyAlg string `drbd:"verify-alg"` + + // Allows or disallows DRBD to read from a peer node. + // + // When the disk of a primary node is detached, DRBD will try to continue + // reading and writing from another node in the cluster. For this purpose, + // it searches for nodes with up-to-date data, and uses any found node to + // resume operations. In some cases it may not be desirable to read back + // data from a peer node, because the node should only be used as a + // replication target. In this case, the allow-remote-read parameter can be + // set to no, which would prohibit this node from reading data from the peer + // node. + // + // The allow-remote-read parameter is available since DRBD 9.0.19, and + // defaults to yes. 
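+	//
+	// Illustrative drbd.conf snippet for a pure replication target (a sketch,
+	// not taken from the testdata in this change):
+	//   net { allow-remote-read no; }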
+ AllowRemoteRead *bool `drbd:"allow-remote-read"` +} + +var _ drbdconf.SectionKeyworder = &Net{} + +func (*Net) SectionKeyword() string { + return "net" +} + +// + +type AfterSB0PriPolicy interface { + _isAfterSB0PriPolicy() +} + +func init() { + drbdconf.RegisterParameterTypeCodec[AfterSB0PriPolicy]( + &AfterSB0PriPolicyParameterTypeCodec{}, + ) +} + +type AfterSB0PriPolicyParameterTypeCodec struct { +} + +func (*AfterSB0PriPolicyParameterTypeCodec) MarshalParameter( + v any, +) ([]string, error) { + switch vt := v.(type) { + case *AfterSB0PriPolicyDisconnect: + return []string{"disconnect"}, nil + case *AfterSB0PriPolicyDiscardYoungerPrimary: + return []string{"discard-younger-primary"}, nil + case *AfterSB0PriPolicyDiscardOlderPrimary: + return []string{"discard-older-primary"}, nil + case *AfterSB0PriPolicyDiscardZeroChanges: + return []string{"discard-zero-changes"}, nil + case *AfterSB0PriPolicyDiscardLeastChanges: + return []string{"discard-least-changes"}, nil + case *AfterSB0PriPolicyDiscardNode: + return []string{"discard-node-" + vt.NodeName}, nil + } + return nil, errors.New("unrecognized value type") +} + +func (*AfterSB0PriPolicyParameterTypeCodec) UnmarshalParameter( + p []drbdconf.Word, +) (any, error) { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return nil, err + } + switch p[1].Value { + case "disconnect": + return &AfterSB0PriPolicyDisconnect{}, nil + case "discard-younger-primary": + return &AfterSB0PriPolicyDiscardYoungerPrimary{}, nil + case "discard-older-primary": + return &AfterSB0PriPolicyDiscardOlderPrimary{}, nil + case "discard-zero-changes": + return &AfterSB0PriPolicyDiscardZeroChanges{}, nil + case "discard-least-changes": + return &AfterSB0PriPolicyDiscardLeastChanges{}, nil + default: + if nodeName, ok := strings.CutPrefix(p[1].Value, "discard-node-"); ok { + return &AfterSB0PriPolicyDiscardNode{NodeName: nodeName}, nil + } + return nil, errors.New("unrecognized value") + } +} + +// No automatic resynchronization; simply disconnect. +type AfterSB0PriPolicyDisconnect struct{} + +var _ AfterSB0PriPolicy = &AfterSB0PriPolicyDisconnect{} + +func (a *AfterSB0PriPolicyDisconnect) _isAfterSB0PriPolicy() {} + +// Resynchronize from the node which became primary first. If both nodes +// became primary independently, the discard-least-changes policy is used. +type AfterSB0PriPolicyDiscardYoungerPrimary struct{} + +var _ AfterSB0PriPolicy = &AfterSB0PriPolicyDiscardYoungerPrimary{} + +func (a *AfterSB0PriPolicyDiscardYoungerPrimary) _isAfterSB0PriPolicy() {} + +// Resynchronize from the node which became primary last. If both nodes +// became primary independently, the discard-least-changes policy is used. +type AfterSB0PriPolicyDiscardOlderPrimary struct{} + +var _ AfterSB0PriPolicy = &AfterSB0PriPolicyDiscardOlderPrimary{} + +func (a *AfterSB0PriPolicyDiscardOlderPrimary) _isAfterSB0PriPolicy() {} + +// If only one of the nodes wrote data since the split brain situation was +// detected, resynchronize from this node to the other. If both nodes wrote +// data, disconnect. +type AfterSB0PriPolicyDiscardZeroChanges struct{} + +var _ AfterSB0PriPolicy = &AfterSB0PriPolicyDiscardZeroChanges{} + +func (a *AfterSB0PriPolicyDiscardZeroChanges) _isAfterSB0PriPolicy() {} + +// Resynchronize from the node with more modified blocks. 
+type AfterSB0PriPolicyDiscardLeastChanges struct{} + +var _ AfterSB0PriPolicy = &AfterSB0PriPolicyDiscardLeastChanges{} + +func (a *AfterSB0PriPolicyDiscardLeastChanges) _isAfterSB0PriPolicy() {} + +// Always resynchronize to the named node. +type AfterSB0PriPolicyDiscardNode struct { + NodeName string +} + +var _ AfterSB0PriPolicy = &AfterSB0PriPolicyDiscardNode{} + +func (a *AfterSB0PriPolicyDiscardNode) _isAfterSB0PriPolicy() {} + +// + +type AfterSB1PriPolicy string + +var _ drbdconf.ParameterCodec = new(AfterSB1PriPolicy) + +var knownValuesAfterSB1PriPolicy = map[AfterSB1PriPolicy]struct{}{ + AfterSB1PriPolicyDisconnect: {}, + AfterSB1PriPolicyConsensus: {}, + AfterSB1PriPolicyViolentlyAS0P: {}, + AfterSB1PriPolicyDiscardSecondary: {}, + AfterSB1PriPolicyCallPriLostAfterSB: {}, +} + +func (a *AfterSB1PriPolicy) MarshalParameter() ([]string, error) { + return []string{string(*a)}, nil +} + +func (a *AfterSB1PriPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(a, knownValuesAfterSB1PriPolicy, p, 1) +} + +const ( + // No automatic resynchronization, simply disconnect. + AfterSB1PriPolicyDisconnect AfterSB1PriPolicy = "disconnect" + // Discard the data on the secondary node if the after-sb-0pri algorithm + // would also discard the data on the secondary node. Otherwise, disconnect. + AfterSB1PriPolicyConsensus AfterSB1PriPolicy = "consensus" + // Always take the decision of the after-sb-0pri algorithm, even if it + // causes an erratic change of the primary's view of the data. This is only + // useful if a single-node file system (i.e., not OCFS2 or GFS) with the + // allow-two-primaries flag is used. This option can cause the primary node + // to crash, and should not be used. + AfterSB1PriPolicyViolentlyAS0P AfterSB1PriPolicy = "violently-as0p" + // Discard the data on the secondary node. + AfterSB1PriPolicyDiscardSecondary AfterSB1PriPolicy = "discard-secondary" + // Always take the decision of the after-sb-0pri algorithm. If the decision + // is to discard the data on the primary node, call the pri-lost-after-sb + // handler on the primary node. + AfterSB1PriPolicyCallPriLostAfterSB AfterSB1PriPolicy = "call-pri-lost-after-sb" +) + +// + +type AfterSB2PriPolicy string + +var _ drbdconf.ParameterCodec = new(AfterSB2PriPolicy) + +var knownValuesAfterSB2PriPolicy = map[AfterSB2PriPolicy]struct{}{ + AfterSB2PriPolicyDisconnect: {}, + AfterSB2PriPolicyViolentlyAS0P: {}, + AfterSB2PriPolicyCallPriLostAfterSB: {}, +} + +func (a *AfterSB2PriPolicy) MarshalParameter() ([]string, error) { + return []string{string(*a)}, nil +} + +func (a *AfterSB2PriPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(a, knownValuesAfterSB2PriPolicy, p, 1) +} + +const ( + // No automatic resynchronization, simply disconnect. + AfterSB2PriPolicyDisconnect AfterSB2PriPolicy = "disconnect" + // See the violently-as0p policy for after-sb-1pri. + AfterSB2PriPolicyViolentlyAS0P AfterSB2PriPolicy = "violently-as0p" + // Call the pri-lost-after-sb helper program on one of the machines unless + // that machine can demote to secondary. The helper program is expected to + // reboot the machine, which brings the node into a secondary role. Which + // machine runs the helper program is determined by the after-sb-0pri + // strategy. 
+ AfterSB2PriPolicyCallPriLostAfterSB AfterSB2PriPolicy = "call-pri-lost-after-sb" +) + +// + +type FencingPolicy string + +var _ drbdconf.ParameterCodec = new(FencingPolicy) + +var knownValuesFencingPolicy = map[FencingPolicy]struct{}{ + FencingPolicyDontCare: {}, + FencingPolicyResourceOnly: {}, + FencingPolicyResourceAndSTONITH: {}, +} + +const ( + // No fencing actions are taken. This is the default policy. + FencingPolicyDontCare FencingPolicy = "dont-care" + // If a node becomes a disconnected primary, it tries to fence the peer. + // This is done by calling the fence-peer handler. The handler is supposed + // to reach the peer over an alternative communication path and call + // 'drbdadm outdate minor' there. + FencingPolicyResourceOnly FencingPolicy = "resource-only" + // If a node becomes a disconnected primary, it freezes all its IO + // operations and calls its fence-peer handler. The fence-peer handler is + // supposed to reach the peer over an alternative communication path and + // call 'drbdadm outdate minor' there. In case it cannot do that, it should + // stonith the peer. IO is resumed as soon as the situation is resolved. In + // case the fence-peer handler fails, I/O can be resumed manually with + // 'drbdadm resume-io'. + FencingPolicyResourceAndSTONITH FencingPolicy = "resource-and-stonith" +) + +func (f *FencingPolicy) MarshalParameter() ([]string, error) { + return []string{string(*f)}, nil +} + +func (f *FencingPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(f, knownValuesFencingPolicy, p, 1) +} + +// + +type OnCongestionPolicy string + +var _ drbdconf.ParameterCodec = new(OnCongestionPolicy) + +var knownValuesOnCongestionPolicy = map[OnCongestionPolicy]struct{}{ + OnCongestionPolicyBlock: {}, + OnCongestionPolicyPullAhead: {}, +} + +const ( + OnCongestionPolicyBlock OnCongestionPolicy = "block" + OnCongestionPolicyPullAhead OnCongestionPolicy = "pull-ahead" +) + +// MarshalParameter implements drbdconf.ParameterCodec. +func (o *OnCongestionPolicy) MarshalParameter() ([]string, error) { + return []string{string(*o)}, nil +} + +// UnmarshalParameter implements drbdconf.ParameterCodec. +func (o *OnCongestionPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(o, knownValuesOnCongestionPolicy, p, 1) +} + +// + +type Protocol string + +var _ drbdconf.ParameterCodec = new(Protocol) + +var knownValuesProtocol = map[Protocol]struct{}{ + ProtocolA: {}, + ProtocolB: {}, + ProtocolC: {}, +} + +const ( + // Writes to the DRBD device complete as soon as they have reached the local + // disk and the TCP/IP send buffer. + ProtocolA Protocol = "A" + // Writes to the DRBD device complete as soon as they have reached the local + // disk, and all peers have acknowledged the receipt of the write requests. + ProtocolB Protocol = "B" + // Writes to the DRBD device complete as soon as they have reached the local + // and all remote disks. 
+ ProtocolC Protocol = "C" +) + +func (pr *Protocol) MarshalParameter() ([]string, error) { + return []string{string(*pr)}, nil +} + +func (pr *Protocol) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(pr, knownValuesProtocol, p, 1) +} + +// + +type RRConflictPolicy string + +var _ drbdconf.ParameterCodec = new(RRConflictPolicy) + +var knownValuesRRConflictPolicy = map[RRConflictPolicy]struct{}{ + RRConflictPolicyDisconnect: {}, + RRConflictPolicyRetryConnect: {}, + RRConflictPolicyViolently: {}, + RRConflictPolicyCallPriLost: {}, + RRConflictPolicyAutoDiscard: {}, +} + +const ( + // No automatic resynchronization, simply disconnect. + RRConflictPolicyDisconnect RRConflictPolicy = "disconnect" + // Disconnect now, and retry to connect immediately afterwards. + RRConflictPolicyRetryConnect RRConflictPolicy = "retry-connect" + // Resync to the primary node is allowed, violating the assumption that data + // on a block device are stable for one of the nodes. Do not use this + // option, it is dangerous. + RRConflictPolicyViolently RRConflictPolicy = "violently" + // Call the pri-lost handler on one of the machines. The handler is expected + // to reboot the machine, which puts it into secondary role. + RRConflictPolicyCallPriLost RRConflictPolicy = "call-pri-lost" + // Auto-discard reverses the resync direction, so that DRBD resyncs the + // current primary to the current secondary. Auto-discard only applies when + // protocol A is in use and the resync decision is based on the principle + // that a crashed primary should be the source of a resync. When a primary + // node crashes, it might have written some last updates to its disk, which + // were not received by a protocol A secondary. By promoting the secondary + // in the meantime the user accepted that those last updates have been lost. + // By using auto-discard you consent that the last updates (before the crash + // of the primary) should be rolled back automatically. + RRConflictPolicyAutoDiscard RRConflictPolicy = "auto-discard" +) + +func (r *RRConflictPolicy) MarshalParameter() ([]string, error) { + return []string{string(*r)}, nil +} + +func (r *RRConflictPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(r, knownValuesRRConflictPolicy, p, 1) +} diff --git a/images/agent/pkg/drbdconf/v9/section_on.go b/images/agent/pkg/drbdconf/v9/section_on.go new file mode 100644 index 000000000..2b500a0c0 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_on.go @@ -0,0 +1,113 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// Define the properties of a resource on a particular host or set of hosts. +// Specifying more than one host name can make sense in a setup with IP address +// failover, for example. The host-name argument must match the Linux host name +// (uname -n). +// +// Usually contains or inherits at least one [Volume] section. 
The node-id and +// address parameters must be defined in this section. The device, disk, and +// meta-disk parameters must be defined in, or inherited by, this section. +// +// A normal configuration file contains two or more [On] sections for each +// resource. Also see the [Floating] section. +type On struct { + HostNames []string `drbd:""` + + // Defines the address family, address, and port of a connection endpoint. + // + // The address families ipv4, ipv6, ssocks (Dolphin Interconnect Solutions' + // "super sockets"), sdp (Infiniband Sockets Direct Protocol), and sci are + // supported (sci is an alias for ssocks). If no address family is + // specified, ipv4 is assumed. For all address families except ipv6, the + // address is specified in IPV4 address notation (for example, 1.2.3.4). For + // ipv6, the address is enclosed in brackets and uses IPv6 address notation + // (for example, [fd01:2345:6789:abcd::1]). The port is always specified as + // a decimal number from 1 to 65535. + // + // On each host, the port numbers must be unique for each address; ports + // cannot be shared. + Address *AddressWithPort `drbd:"address"` + + // Defines the unique node identifier for a node in the cluster. Node + // identifiers are used to identify individual nodes in the network + // protocol, and to assign bitmap slots to nodes in the metadata. + // + // Node identifiers can only be reasssigned in a cluster when the cluster is + // down. It is essential that the node identifiers in the configuration and + // in the device metadata are changed consistently on all hosts. To change + // the metadata, dump the current state with drbdmeta dump-md, adjust the + // bitmap slot assignment, and update the metadata with drbdmeta restore-md. + // + // The node-id parameter exists since DRBD 9. Its value ranges from 0 to 16; + // there is no default. + NodeID *uint `drbd:"node-id"` + + Volumes []*Volume +} + +func (o *On) SectionKeyword() string { + return "on" +} + +var _ drbdconf.SectionKeyworder = &On{} + +// Like the [On] section, except that instead of the host name a network address +// is used to determine if it matches a floating section. +// +// The node-id parameter in this section is required. If the address parameter +// is not provided, no connections to peers will be created by default. The +// device, disk, and meta-disk parameters must be defined in, or inherited by, +// this section. +type Floating struct { + // Defines the address family, address, and port of a connection endpoint. + // + // The address families ipv4, ipv6, ssocks (Dolphin Interconnect Solutions' + // "super sockets"), sdp (Infiniband Sockets Direct Protocol), and sci are + // supported (sci is an alias for ssocks). If no address family is + // specified, ipv4 is assumed. For all address families except ipv6, the + // address is specified in IPV4 address notation (for example, 1.2.3.4). For + // ipv6, the address is enclosed in brackets and uses IPv6 address notation + // (for example, [fd01:2345:6789:abcd::1]). The port is always specified as + // a decimal number from 1 to 65535. + // + // On each host, the port numbers must be unique for each address; ports + // cannot be shared. + Address *AddressWithPort `drbd:""` + + // Defines the unique node identifier for a node in the cluster. Node + // identifiers are used to identify individual nodes in the network + // protocol, and to assign bitmap slots to nodes in the metadata. + // + // Node identifiers can only be reasssigned in a cluster when the cluster is + // down. 
It is essential that the node identifiers in the configuration and + // in the device metadata are changed consistently on all hosts. To change + // the metadata, dump the current state with drbdmeta dump-md, adjust the + // bitmap slot assignment, and update the metadata with drbdmeta restore-md. + // + // The node-id parameter exists since DRBD 9. Its value ranges from 0 to 16; + // there is no default. + NodeID *int `drbd:"node-id"` +} + +func (o *Floating) SectionKeyword() string { + return "floating" +} diff --git a/images/agent/pkg/drbdconf/v9/section_options.go b/images/agent/pkg/drbdconf/v9/section_options.go new file mode 100644 index 000000000..5a53f6fb1 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_options.go @@ -0,0 +1,424 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import ( + "errors" + "strconv" + + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" +) + +// Define parameters for a resource. All parameters in this section are +// optional. +type Options struct { + // A resource must be promoted to primary role before any of its devices can + // be mounted or opened for writing. + // Before DRBD 9, this could only be done explicitly ("drbdadm primary"). + // Since DRBD 9, the auto-promote parameter allows to automatically promote + // a resource to primary role when one of its devices is mounted or opened + // for writing. As soon as all devices are unmounted or closed with no more + // remaining users, the role of the resource changes back to secondary. + // + // Automatic promotion only succeeds if the cluster state allows it (that + // is, if an explicit drbdadm primary command would succeed). Otherwise, + // mounting or opening the device fails as it already did before DRBD 9: the + // mount(2) system call fails with errno set to EROFS (Read-only file + // system); the open(2) system call fails with errno set to EMEDIUMTYPE + // (wrong medium type). + // + // Irrespective of the auto-promote parameter, if a device is promoted + // explicitly (drbdadm primary), it also needs to be demoted explicitly + // (drbdadm secondary). + // + // The auto-promote parameter is available since DRBD 9.0.0, and defaults to + // yes. + AutoPromote *bool `drbd:"auto-promote"` + + // Set the cpu affinity mask for DRBD kernel threads. The cpu mask is + // specified as a hexadecimal number. The default value is 0, which lets the + // scheduler decide which kernel threads run on which CPUs. CPU numbers in + // cpu-mask which do not exist in the system are ignored. + CPUMask string `drbd:"cpu-mask"` + + // Determine how to deal with I/O requests when the requested data is not + // available locally or remotely (for example, when all disks have failed). + // When quorum is enabled, on-no-data-accessible should be set to the same + // value as on-no-quorum. 
+ OnNoDataAccessible OnNoDataAccessiblePolicy `drbd:"on-no-data-accessible"` + + // On each node and for each device, DRBD maintains a bitmap of the + // differences between the local and remote data for each peer device. For + // example, in a three-node setup (nodes A, B, C) each with a single device, + // every node maintains one bitmap for each of its peers. + + // When nodes receive write requests, they know how to update the bitmaps + // for the writing node, but not how to update the bitmaps between + // themselves. In this example, when a write request propagates from node A + // to B and C, nodes B and C know that they have the same data as node A, + // but not whether or not they both have the same data. + + // As a remedy, the writing node occasionally sends peer-ack packets to its + // peers which tell them which state they are in relative to each other. + + // The peer-ack-window parameter specifies how much data a primary node may + // send before sending a peer-ack packet. A low value causes increased + // network traffic; a high value causes less network traffic but higher + // memory consumption on secondary nodes and higher resync times between the + // secondary nodes after primary node failures. (Note: peer-ack packets may + // be sent due to other reasons as well, e.g. membership changes or expiry + // of the peer-ack-delay timer.) + + // The default value for peer-ack-window is 2 MiB, the default unit is + // sectors. This option is available since 9.0.0. + PeerAckWindow *Unit `drbd:"peer-ack-window"` + + // If after the last finished write request no new write request gets issued + // for expiry-time, then a peer-ack packet is sent. If a new write request + // is issued before the timer expires, the timer gets reset to expiry-time. + // (Note: peer-ack packets may be sent due to other reasons as well, e.g. + // membership changes or the peer-ack-window option.) + + // This parameter may influence resync behavior on remote nodes. Peer nodes + // need to wait until they receive a peer-ack for releasing a lock on an + // AL-extent. Resync operations between peers may need to wait for these + // locks. + + // The default value for peer-ack-delay is 100 milliseconds, the default + // unit is milliseconds. This option is available since 9.0.0. + PeerAckDelay *Unit `drbd:"peer-ack-delay"` + + // When activated, a cluster partition requires quorum in order to modify + // the replicated data set. That means a node in the cluster partition can + // only be promoted to primary if the cluster partition has quorum. Every + // node with a disk directly connected to the node that should be promoted + // counts. If a primary node should execute a write request, but the cluster + // partition has lost quorum, it will freeze IO or reject the write request + // with an error (depending on the on-no-quorum setting). Upon losing + // quorum, a primary always invokes the quorum-lost handler. The handler is + // intended for notification purposes; its return code is ignored. + + // The option's value might be set to off, majority, all or a numeric value. + // If you set it to a numeric value, make sure that the value is greater + // than half of your number of nodes. Quorum is a mechanism to avoid data + // divergence; it might be used instead of fencing when there are more than + // two replicas. It defaults to off. + + // If all missing nodes are marked as outdated, a partition always has + // quorum, no matter how small it is. That is,
if you disconnect all secondary + nodes gracefully, a single primary continues to operate. The moment a + single secondary is lost, it has to be assumed that it forms a partition + with all the missing outdated nodes. If its own partition might be + smaller than the other, quorum is lost at that moment. + + // In case you want to allow permanently diskless nodes to gain quorum it is + // recommended not to use majority or all. It is recommended to specify an + // absolute number, since DRBD's heuristic to determine the complete number + // of diskful nodes in the cluster is unreliable. + + // The quorum implementation is available starting with the DRBD kernel + // driver version 9.0.7. + Quorum Quorum `drbd:"quorum"` + + // This option sets the minimal required number of nodes with an UpToDate + // disk to allow the partition to gain quorum. This is a different + // requirement than the plain quorum option expresses. + + // The option's value might be set to off, majority, all or a numeric value. + // If you set it to a numeric value, make sure that the value is greater + // than half of your number of nodes. + + // In case you want to allow permanently diskless nodes to gain quorum it is + // recommended not to use majority or all. It is recommended to specify an + // absolute number, since DRBD's heuristic to determine the complete number + // of diskful nodes in the cluster is unreliable. + + // This option is available starting with the DRBD kernel driver version + // 9.0.10. + QuorumMinimumRedundancy QuorumMinimumRedundancy `drbd:"quorum-minimum-redundancy"` + + // By default, DRBD freezes IO on a device that has lost quorum. By setting + // on-no-quorum to io-error, it completes all IO operations with an error if + // quorum is lost. + + // Usually, on-no-data-accessible should be set to the same value as + // on-no-quorum, as it has precedence. + + // The on-no-quorum option is available starting with the DRBD kernel + // driver version 9.0.8. + OnNoQuorum OnNoQuorumPolicy `drbd:"on-no-quorum"` + + // This setting is only relevant when on-no-quorum is set to suspend-io. It + // is relevant in the following scenario. A primary node loses quorum and hence + // has all IO requests frozen. This primary node then connects to another, + // quorate partition. It detects that a node in this quorate partition was + // promoted to primary, and started a newer data-generation there. As a + // result, the first primary learns that it has to consider itself outdated. + + // When it is set to force-secondary, it will demote to secondary + // immediately, and fail all pending (and new) IO requests with IO errors. + // It will refuse to allow any process to open the DRBD devices until all + // openers have closed the device. This state is visible in status and events2 + // under the name force-io-failures. + + // The disconnect setting simply causes that node to reject connect attempts + // and stay isolated. + + // The on-suspended-primary-outdated option is available starting with the + // DRBD kernel driver version 9.1.7. It has a default value of disconnect.
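Taken together, the quorum-related knobs are all set on one Options value. A minimal sketch, assuming the policy types declared just below in this file and the ptr helper from utils.go (the values are illustrative, not recommendations):

```go
opts := &Options{
	AutoPromote:                ptr(true),
	Quorum:                     &QuorumMajority{},
	QuorumMinimumRedundancy:    &QuorumMinimumRedundancyNumeric{Value: 2},
	OnNoQuorum:                 OnNoQuorumPolicySuspendIO,
	OnSuspendedPrimaryOutdated: OnSuspendedPrimaryOutdatedPolicyForceSecondary,
}
_ = opts
```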
+ OnSuspendedPrimaryOutdated OnSuspendedPrimaryOutdatedPolicy `drbd:"on-suspended-primary-outdated"` +} + +var _ drbdconf.SectionKeyworder = &Options{} + +func (*Options) SectionKeyword() string { return "options" } + +// + +type OnNoDataAccessiblePolicy string + +const ( + OnNoDataAccessiblePolicyIOError OnNoDataAccessiblePolicy = "io-error" + OnNoDataAccessiblePolicySuspendIO OnNoDataAccessiblePolicy = "suspend-io" +) + +var knownValuesOnNoDataAccessiblePolicy = map[OnNoDataAccessiblePolicy]struct{}{ + OnNoDataAccessiblePolicyIOError: {}, + OnNoDataAccessiblePolicySuspendIO: {}, +} + +var _ drbdconf.ParameterCodec = new(OnNoDataAccessiblePolicy) + +func (o *OnNoDataAccessiblePolicy) MarshalParameter() ([]string, error) { + return []string{string(*o)}, nil +} + +func (o *OnNoDataAccessiblePolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(o, knownValuesOnNoDataAccessiblePolicy, p, 1) +} + +// + +type Quorum interface { + _isQuorum() +} + +func init() { + drbdconf.RegisterParameterTypeCodec[Quorum]( + &QuorumParameterTypeCodec{}, + ) +} + +type QuorumParameterTypeCodec struct { +} + +func (*QuorumParameterTypeCodec) MarshalParameter( + v any, +) ([]string, error) { + switch vt := v.(type) { + case *QuorumOff: + return []string{"off"}, nil + case *QuorumMajority: + return []string{"majority"}, nil + case *QuorumAll: + return []string{"all"}, nil + case *QuorumNumeric: + return []string{strconv.Itoa(vt.Value)}, nil + } + return nil, errors.New("unrecognized value type") +} + +func (*QuorumParameterTypeCodec) UnmarshalParameter( + p []drbdconf.Word, +) (any, error) { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return nil, err + } + + switch p[1].Value { + case "off": + return &QuorumOff{}, nil + case "majority": + return &QuorumMajority{}, nil + case "all": + return &QuorumAll{}, nil + default: + val, err := strconv.ParseInt(p[1].Value, 10, 64) + if err != nil { + return nil, err + } + return &QuorumNumeric{Value: int(val)}, nil + } +} + +// + +type QuorumOff struct{} + +var _ Quorum = &QuorumOff{} + +func (q *QuorumOff) _isQuorum() {} + +type QuorumMajority struct{} + +var _ Quorum = &QuorumMajority{} + +func (q *QuorumMajority) _isQuorum() {} + +type QuorumAll struct{} + +var _ Quorum = &QuorumAll{} + +func (q *QuorumAll) _isQuorum() {} + +type QuorumNumeric struct { + Value int +} + +var _ Quorum = &QuorumNumeric{} + +func (q *QuorumNumeric) _isQuorum() {} + +// + +type QuorumMinimumRedundancy interface { + _isQuorumMinimumRedundancy() +} + +func init() { + drbdconf.RegisterParameterTypeCodec[QuorumMinimumRedundancy]( + &QuorumMinimumRedundancyParameterTypeCodec{}, + ) +} + +type QuorumMinimumRedundancyParameterTypeCodec struct { +} + +func (*QuorumMinimumRedundancyParameterTypeCodec) MarshalParameter( + v any, +) ([]string, error) { + switch vt := v.(type) { + case *QuorumMinimumRedundancyOff: + return []string{"off"}, nil + case *QuorumMinimumRedundancyMajority: + return []string{"majority"}, nil + case *QuorumMinimumRedundancyAll: + return []string{"all"}, nil + case *QuorumMinimumRedundancyNumeric: + return []string{strconv.Itoa(vt.Value)}, nil + } + return nil, errors.New("unrecognized value type") +} + +func (*QuorumMinimumRedundancyParameterTypeCodec) UnmarshalParameter( + p []drbdconf.Word, +) (any, error) { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return nil, err + } + + switch p[1].Value { + case "off": + return &QuorumMinimumRedundancyOff{}, nil + case "majority": + return &QuorumMinimumRedundancyMajority{}, nil + case "all": 
+ return &QuorumMinimumRedundancyAll{}, nil + default: + val, err := strconv.ParseInt(p[1].Value, 10, 64) + if err != nil { + return nil, err + } + return &QuorumMinimumRedundancyNumeric{Value: int(val)}, nil + } +} + +// + +type QuorumMinimumRedundancyOff struct{} + +var _ QuorumMinimumRedundancy = &QuorumMinimumRedundancyOff{} + +func (q *QuorumMinimumRedundancyOff) _isQuorumMinimumRedundancy() {} + +type QuorumMinimumRedundancyMajority struct{} + +var _ QuorumMinimumRedundancy = &QuorumMinimumRedundancyMajority{} + +func (q *QuorumMinimumRedundancyMajority) _isQuorumMinimumRedundancy() {} + +type QuorumMinimumRedundancyAll struct{} + +var _ QuorumMinimumRedundancy = &QuorumMinimumRedundancyAll{} + +func (q *QuorumMinimumRedundancyAll) _isQuorumMinimumRedundancy() {} + +type QuorumMinimumRedundancyNumeric struct { + Value int +} + +var _ QuorumMinimumRedundancy = &QuorumMinimumRedundancyNumeric{} + +func (q *QuorumMinimumRedundancyNumeric) _isQuorumMinimumRedundancy() {} + +// + +type OnNoQuorumPolicy string + +const ( + OnNoQuorumPolicyIOError OnNoQuorumPolicy = "io-error" + OnNoQuorumPolicySuspendIO OnNoQuorumPolicy = "suspend-io" +) + +var knownValuesOnNoQuorumPolicy = map[OnNoQuorumPolicy]struct{}{ + OnNoQuorumPolicyIOError: {}, + OnNoQuorumPolicySuspendIO: {}, +} + +var _ drbdconf.ParameterCodec = new(OnNoQuorumPolicy) + +func (o *OnNoQuorumPolicy) MarshalParameter() ([]string, error) { + return []string{string(*o)}, nil +} + +func (o *OnNoQuorumPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(o, knownValuesOnNoQuorumPolicy, p, 1) +} + +// + +type OnSuspendedPrimaryOutdatedPolicy string + +const ( + OnSuspendedPrimaryOutdatedPolicyDisconnect OnSuspendedPrimaryOutdatedPolicy = "disconnect" + OnSuspendedPrimaryOutdatedPolicyForceSecondary OnSuspendedPrimaryOutdatedPolicy = "force-secondary" +) + +var knownValuesOnSuspendedPrimaryOutdatedPolicy = map[OnSuspendedPrimaryOutdatedPolicy]struct{}{ + OnSuspendedPrimaryOutdatedPolicyDisconnect: {}, + OnSuspendedPrimaryOutdatedPolicyForceSecondary: {}, +} + +var _ drbdconf.ParameterCodec = new(OnSuspendedPrimaryOutdatedPolicy) + +func (o *OnSuspendedPrimaryOutdatedPolicy) MarshalParameter() ([]string, error) { + return []string{string(*o)}, nil +} + +func (o *OnSuspendedPrimaryOutdatedPolicy) UnmarshalParameter(p []drbdconf.Word) error { + return drbdconf.ReadEnumAt(o, knownValuesOnSuspendedPrimaryOutdatedPolicy, p, 1) +} diff --git a/images/agent/pkg/drbdconf/v9/section_path.go b/images/agent/pkg/drbdconf/v9/section_path.go new file mode 100644 index 000000000..7a9e58d3d --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_path.go @@ -0,0 +1,33 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// Define a path between two hosts. This section must contain two host +// parameters. +type Path struct { + // Defines an endpoint for a connection. 
Each [Host] statement refers to an + // [On] section in a resource. If a port number is defined, this endpoint + // will use the specified port instead of the port defined in the [On] + // section. Each [Path] section must contain exactly two [Host] parameters. + Hosts []HostAddress `drbd:"host"` +} + +var _ drbdconf.SectionKeyworder = &Path{} + +func (*Path) SectionKeyword() string { return "path" } diff --git a/images/agent/pkg/drbdconf/v9/section_peer_device_options.go b/images/agent/pkg/drbdconf/v9/section_peer_device_options.go new file mode 100644 index 000000000..b9ca6110b --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_peer_device_options.go @@ -0,0 +1,91 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +type PeerDeviceOptions struct { + // The c-delay-target parameter defines the delay in the resync path that + // DRBD should aim for. This should be set to five times the network + // round-trip time or more. The default value of c-delay-target is 10, in + // units of 0.1 seconds. + // Also see CPlanAhead. + CDelayTarget *int `drbd:"c-delay-target"` + + // The c-fill-target parameter defines the how much resync data DRBD should + // aim to have in-flight at all times. Common values for "normal" data paths + // range from 4K to 100K. The default value of c-fill-target is 100, in + // units of sectors + // Also see CPlanAhead. + CFillTarget *Unit `drbd:"c-fill-target"` + + // The c-max-rate parameter limits the maximum bandwidth used by dynamically + // controlled resyncs. Setting this to zero removes the limitation + // (since DRBD 9.0.28). It should be set to either the bandwidth available + // between the DRBD hosts and the machines hosting DRBD-proxy, or to the + // available disk bandwidth. The default value of c-max-rate is 102400, in + // units of KiB/s. + // Also see CPlanAhead. + CMaxRate *Unit `drbd:"c-max-rate"` + + // The c-plan-ahead parameter defines how fast DRBD adapts to changes in the + // resync speed. It should be set to five times the network round-trip time + // or more. The default value of c-plan-ahead is 20, in units of + // 0.1 seconds. + // + // # Dynamically control the resync speed + // + // The following modes are available: + // - Dynamic control with fill target (default). Enabled when c-plan-ahead + // is non-zero and c-fill-target is non-zero. The goal is to fill the + // buffers along the data path with a defined amount of data. This mode is + // recommended when DRBD-proxy is used. Configured with c-plan-ahead, + // c-fill-target and c-max-rate. + // - Dynamic control with delay target. Enabled when c-plan-ahead is + // non-zero (default) and c-fill-target is zero. The goal is to have a + // defined delay along the path. Configured with c-plan-ahead, + // c-delay-target and c-max-rate. + // - Fixed resync rate. Enabled when c-plan-ahead is zero. DRBD will try to + // perform resync I/O at a fixed rate. Configured with resync-rate. 
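In terms of this struct, the mode selection above comes down to which fields are set. For example, a delay-target configuration can leave c-plan-ahead and c-fill-target at their defaults and only pin the delay target; a hedged sketch (package v9, illustrative value only):

```go
// Dynamic resync control with a delay target: c-plan-ahead keeps its non-zero
// default and c-fill-target stays unset, so only the delay target is pinned.
pdo := &PeerDeviceOptions{
	CDelayTarget: ptr(50), // 5 seconds, in units of 0.1 s
}
_ = pdo
```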
+ CPlanAhead *Unit `drbd:"c-plan-ahead"` + + // A node which is primary and sync-source has to schedule application I/O + // requests and resync I/O requests. The c-min-rate parameter limits how + // much bandwidth is available for resync I/O; the remaining bandwidth is + // used for application I/O. + // + // A c-min-rate value of 0 means that there is no limit on the resync I/O + // bandwidth. This can slow down application I/O significantly. Use a value + // of 1 (1 KiB/s) for the lowest possible resync rate. + // + // The default value of c-min-rate is 250, in units of KiB/s. + CMinRate *Unit `drbd:"c-min-rate"` + + // Define how much bandwidth DRBD may use for resynchronizing. DRBD allows + // "normal" application I/O even during a resync. If the resync takes up too + // much bandwidth, application I/O can become very slow. This parameter + // allows to avoid that. Please note this is option only works when the + // dynamic resync controller is disabled. + ResyncRate *Unit `drbd:"resync-rate"` +} + +var _ drbdconf.SectionKeyworder = &PeerDeviceOptions{} + +func (p *PeerDeviceOptions) SectionKeyword() string { + // "Please note that you open the section with the disk keyword." + return "disk" +} diff --git a/images/agent/pkg/drbdconf/v9/section_resource.go b/images/agent/pkg/drbdconf/v9/section_resource.go new file mode 100644 index 000000000..3e6ce168e --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_resource.go @@ -0,0 +1,40 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// Define a resource. Usually contains at least two [On] sections and at least +// one [Connection] section. +type Resource struct { + Name string `drbd:""` + Connections []*Connection + ConnectionMesh *ConnectionMesh + Disk *DiskOptions + Floating []*Floating + Handlers *Handlers + Net *Net + On []*On + Options *Options + Startup *Startup +} + +var _ drbdconf.SectionKeyworder = &Resource{} + +func (r *Resource) SectionKeyword() string { + return "resource" +} diff --git a/images/agent/pkg/drbdconf/v9/section_startup.go b/images/agent/pkg/drbdconf/v9/section_startup.go new file mode 100644 index 000000000..6874b9689 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_startup.go @@ -0,0 +1,74 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package v9 + +import "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" + +// The parameters in this section determine the behavior of a resource at +// startup time. They have no effect once the system is up and running. +type Startup struct { + // Define how long to wait until all peers are connected in case the cluster + // consisted of a single node only when the system went down. This parameter + // is usually set to a value smaller than wfc-timeout. The assumption here + // is that peers which were unreachable before a reboot are less likely to + // be reachable after the reboot, so waiting is less likely to help. + // + // The timeout is specified in seconds. The default value is 0, which stands + // for an infinite timeout. Also see the wfc-timeout parameter. + DegrWFCTimeout *int `drbd:"degr-wfc-timeout"` + + // Define how long to wait until all peers are connected if all peers were + // outdated when the system went down. This parameter is usually set to a + // value smaller than wfc-timeout. The assumption here is that an outdated + // peer cannot have become primary in the meantime, so we don't need to wait + // for it as long as for a node which was alive before. + // + // The timeout is specified in seconds. The default value is 0, which stands + // for an infinite timeout. Also see the wfc-timeout parameter. + OutdatedWFCTimeout *int `drbd:"outdated-wfc-timeout"` + + // On stacked devices, the wfc-timeout and degr-wfc-timeout parameters in + // the configuration are usually ignored, and both timeouts are set to twice + // the connect-int timeout. The stacked-timeouts parameter tells DRBD to use + // the wfc-timeout and degr-wfc-timeout parameters as defined in the + // configuration, even on stacked devices. Only use this parameter if the + // peer of the stacked resource is usually not available, or will not become + // primary. Incorrect use of this parameter can lead to unexpected + // split-brain scenarios. + StackedTimeouts bool `drbd:"stacked-timeouts"` + + // This parameter causes DRBD to continue waiting in the init script even + // when a split-brain situation has been detected, and the nodes therefore + // refuse to connect to each other. + WaitAfterSB bool `drbd:"wait-after-sb"` + + // Define how long the init script waits until all peers are connected. This + // can be useful in combination with a cluster manager which cannot manage + // DRBD resources: when the cluster manager starts, the DRBD resources will + // already be up and running. With a more capable cluster manager such as + // Pacemaker, it makes more sense to let the cluster manager control DRBD + // resources. The timeout is specified in seconds. The default value is 0, + // which stands for an infinite timeout. Also see the degr-wfc-timeout + // parameter. + WFCTimeout *int `drbd:"wfc-timeout"` +} + +var _ drbdconf.SectionKeyworder = &Startup{} + +func (h *Startup) SectionKeyword() string { + return "startup" +} diff --git a/images/agent/pkg/drbdconf/v9/section_volume.go b/images/agent/pkg/drbdconf/v9/section_volume.go new file mode 100644 index 000000000..f37d2d10b --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/section_volume.go @@ -0,0 +1,244 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +import ( + "errors" + "strconv" + "strings" + + "github.com/deckhouse/sds-replicated-volume/images/agent/pkg/drbdconf" +) + +// Define a volume within a resource. The volume numbers in the various [Volume] +// sections of a resource define which devices on which hosts form a replicated +// device. +type Volume struct { + Number *int `drbd:""` + + DiskOptions *DiskOptions + + // Define the device name and minor number of a replicated block device. + // This is the device that applications are supposed to access; in most + // cases, the device is not used directly, but as a file system. This + // parameter is required and the standard device naming convention is + // assumed. + // + // In addition to this device, udev will create + // /dev/drbd/by-res/resource/volume and /dev/drbd/by-disk/lower-level-device + // symlinks to the device. + Device *DeviceMinorNumber `drbd:"device"` + + // Define the lower-level block device that DRBD will use for storing the + // actual data. While the replicated drbd device is configured, the + // lower-level device must not be used directly. Even read-only access with + // tools like dumpe2fs(8) and similar is not allowed. The keyword none + // specifies that no lower-level block device is configured; this also + // overrides inheritance of the lower-level device. + // + // Either [VolumeDisk] or [VolumeDiskNone]. + Disk DiskValue `drbd:"disk"` + + // Define where the metadata of a replicated block device resides: it can be + // internal, meaning that the lower-level device contains both the data and + // the metadata, or on a separate device. + // + // When the index form of this parameter is used, multiple replicated + // devices can share the same metadata device, each using a separate index. + // Each index occupies 128 MiB of data, which corresponds to a replicated + // device size of at most 4 TiB with two cluster nodes. We recommend not to + // share metadata devices anymore, and to instead use the lvm volume manager + // for creating metadata devices as needed. + // + // When the index form of this parameter is not used, the size of the + // lower-level device determines the size of the metadata. The size needed + // is 36 KiB + (size of lower-level device) / 32K * (number of nodes - 1). + // If the metadata device is bigger than that, the extra space is not used. + // + // This parameter is required if a disk other than none is specified, and + // ignored if disk is set to none. A meta-disk parameter without a disk + // parameter is not allowed. + // + // Either [VolumeMetaDiskInternal] or [VolumeMetaDiskDevice]. 
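To show how a Volume plugs into the sections defined earlier, here is a hedged composition sketch (package v9): one host, one volume on a lower-level LVM device with internal metadata. Addresses, connections and all tuning sections are omitted; the names and values are illustrative.

```go
r := &Resource{
	Name: "r0",
	Options: &Options{
		AutoPromote: ptr(true),
	},
	On: []*On{{
		HostNames: []string{"node-a"},
		NodeID:    ptr(uint(0)),
		Volumes: []*Volume{{
			Number:   ptr(0),
			Device:   ptr(DeviceMinorNumber(0)),
			Disk:     ptr(VolumeDisk("/dev/vg0/r0_00")),
			MetaDisk: &VolumeMetaDiskInternal{},
		}},
	}},
}
_ = r
```

For external metadata, the MetaDisk field would instead hold a *VolumeMetaDiskDevice, optionally with an Index when several volumes share one metadata device.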
+ MetaDisk MetaDiskValue `drbd:"meta-disk"` +} + +var _ drbdconf.SectionKeyworder = &Volume{} + +func (v *Volume) SectionKeyword() string { + return "volume" +} + +// + +type DeviceMinorNumber uint + +func (d *DeviceMinorNumber) MarshalParameter() ([]string, error) { + return []string{"/dev/drbd" + strconv.FormatUint(uint64(*d), 10)}, nil +} + +func (d *DeviceMinorNumber) UnmarshalParameter(p []drbdconf.Word) error { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return err + } + + var numberStr string + if after, found := strings.CutPrefix(p[1].Value, "/dev/drbd"); found { + numberStr = after + } else if p[1].Value == "minor" { + // also try one old format: + // "device minor " + if err := drbdconf.EnsureLen(p, 3); err != nil { + return err + } + numberStr = p[2].Value + } else { + return errors.New("unrecognized value format") + } + + n, err := strconv.ParseUint(numberStr, 10, 64) + if err != nil { + return err + } + *d = DeviceMinorNumber(n) + + return nil +} + +var _ drbdconf.ParameterCodec = new(DeviceMinorNumber) + +// + +type DiskValue interface { + _diskValue() +} + +func init() { + drbdconf.RegisterParameterTypeCodec[DiskValue]( + &DiskValueParameterTypeCodec{}, + ) +} + +type DiskValueParameterTypeCodec struct { +} + +func (d *DiskValueParameterTypeCodec) MarshalParameter( + v any, +) ([]string, error) { + switch typedVal := v.(type) { + case *VolumeDiskNone: + return []string{"none"}, nil + case *VolumeDisk: + return []string{string(*typedVal)}, nil + } + return nil, errors.New("unexpected DiskValue value") +} + +func (d *DiskValueParameterTypeCodec) UnmarshalParameter( + p []drbdconf.Word, +) (any, error) { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return nil, err + } + if p[1].Value == "none" { + return &VolumeDiskNone{}, nil + } + return ptr(VolumeDisk(p[1].Value)), nil +} + +type VolumeDiskNone struct{} + +var _ DiskValue = &VolumeDiskNone{} + +func (v *VolumeDiskNone) _diskValue() {} + +type VolumeDisk string + +var _ DiskValue = new(VolumeDisk) + +func (v *VolumeDisk) _diskValue() {} + +// + +type MetaDiskValue interface { + _metaDiskValue() +} + +func init() { + drbdconf.RegisterParameterTypeCodec[MetaDiskValue]( + &MetaDiskValueParameterTypeCodec{}, + ) +} + +type MetaDiskValueParameterTypeCodec struct { +} + +func (d *MetaDiskValueParameterTypeCodec) MarshalParameter( + v any, +) ([]string, error) { + switch typedVal := v.(type) { + case *VolumeMetaDiskInternal: + return []string{"internal"}, nil + case *VolumeMetaDiskDevice: + res := []string{typedVal.Device} + if typedVal.Index != nil { + res = append(res, strconv.FormatUint(uint64(*typedVal.Index), 10)) + } + return res, nil + } + return nil, errors.New("unexpected MetaDiskValue value") +} + +func (d *MetaDiskValueParameterTypeCodec) UnmarshalParameter( + p []drbdconf.Word, +) (any, error) { + if err := drbdconf.EnsureLen(p, 2); err != nil { + return nil, err + } + if p[1].Value == "internal" { + return &VolumeMetaDiskInternal{}, nil + } + + res := &VolumeMetaDiskDevice{ + Device: p[1].Value, + } + + if len(p) >= 3 { + idx, err := strconv.ParseUint(p[2].Value, 10, 64) + if err != nil { + return nil, err + } + res.Index = ptr(uint(idx)) + } + + return res, nil +} + +type VolumeMetaDiskInternal struct{} + +var _ MetaDiskValue = new(VolumeMetaDiskInternal) + +func (v *VolumeMetaDiskInternal) _metaDiskValue() {} + +type VolumeMetaDiskDevice struct { + Device string + Index *uint +} + +var _ MetaDiskValue = new(VolumeMetaDiskDevice) + +func (v *VolumeMetaDiskDevice) _metaDiskValue() {} diff --git 
a/images/agent/pkg/drbdconf/v9/utils.go b/images/agent/pkg/drbdconf/v9/utils.go new file mode 100644 index 000000000..692e90268 --- /dev/null +++ b/images/agent/pkg/drbdconf/v9/utils.go @@ -0,0 +1,19 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v9 + +func ptr[T any](v T) *T { return &v } diff --git a/images/agent/pkg/drbdconf/writer.go b/images/agent/pkg/drbdconf/writer.go new file mode 100644 index 000000000..7f19a2b67 --- /dev/null +++ b/images/agent/pkg/drbdconf/writer.go @@ -0,0 +1,96 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdconf + +import ( + "fmt" + "io" + "strconv" + "strings" +) + +var _ io.WriterTo = &Root{} + +func (r *Root) WalkConfigs(accept func(conf *Root) error) error { + for _, el := range r.Elements { + if incl, ok := el.(*Include); ok { + for _, childConf := range incl.Files { + if err := childConf.WalkConfigs(accept); err != nil { + return fmt.Errorf("callback error: %w", err) + } + } + } + } + if err := accept(r); err != nil { + return fmt.Errorf("callback error: %w", err) + } + return nil +} + +func (r *Root) WriteTo(w io.Writer) (n int64, err error) { + // TODO streaming + sb := &strings.Builder{} + + for _, el := range r.Elements { + switch tEl := el.(type) { + case *Include: + sb.WriteString("include ") + sb.WriteString(strconv.Quote(tEl.Glob)) + sb.WriteString(";\n") + case *Section: + writeSectionTo(tEl, sb, "") + } + sb.WriteString("\n") + } + + return io.Copy(w, strings.NewReader(sb.String())) +} + +func writeSectionTo(s *Section, sb *strings.Builder, indent string) { + writeWordsTo(s.Key, sb, indent) + sb.WriteString(" {\n") + + nextIndent := indent + "\t" + for _, el := range s.Elements { + switch tEl := el.(type) { + case (*Section): + writeSectionTo(tEl, sb, nextIndent) + case (*Parameter): + writeWordsTo(tEl.Key, sb, nextIndent) + sb.WriteString(";\n") + default: + panic("unknown section element type") + } + } + + sb.WriteString(indent) + sb.WriteString("}\n") +} + +func writeWordsTo(words []Word, sb *strings.Builder, indent string) { + sb.WriteString(indent) + for i, word := range words { + if i > 0 { + sb.WriteString(" ") + } + if word.IsQuoted { + sb.WriteString(strconv.Quote(word.Value)) + } else { + sb.WriteString(word.Value) + } + } +} diff --git a/hooks/go/060-manual-cert-renewal/manual_cert_renewal_test.go b/images/agent/pkg/drbdsetup/down.go similarity index 62% rename from hooks/go/060-manual-cert-renewal/manual_cert_renewal_test.go rename to images/agent/pkg/drbdsetup/down.go index 
4b3464666..03c3c2757 100644 --- a/hooks/go/060-manual-cert-renewal/manual_cert_renewal_test.go +++ b/images/agent/pkg/drbdsetup/down.go @@ -1,5 +1,5 @@ /* -Copyright 2022 Flant JSC +Copyright 2025 Flant JSC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -14,26 +14,25 @@ See the License for the specific language governing permissions and limitations under the License. */ -package manualcertrenewal +package drbdsetup import ( "context" - "os" - "testing" - - "github.com/deckhouse/deckhouse/pkg/log" - "github.com/deckhouse/module-sdk/pkg" + "fmt" + "os/exec" ) -func TestManualCertRenewal(t *testing.T) { - devMode = true - os.Setenv("LOG_LEVEL", "INFO") - - err := manualCertRenewal(context.Background(), &pkg.HookInput{ - Logger: log.Default(), - }) +func ExecuteDown(ctx context.Context, resource string) error { + args := DownArgs(resource) + cmd := exec.CommandContext(ctx, Command, args...) + out, err := cmd.CombinedOutput() if err != nil { - t.Fatal(err) + return fmt.Errorf( + "running command %s %v: %w; output: %q", + Command, args, err, string(out), + ) } + + return nil } diff --git a/images/agent/pkg/drbdsetup/events2.go b/images/agent/pkg/drbdsetup/events2.go new file mode 100644 index 000000000..ba31a5cd3 --- /dev/null +++ b/images/agent/pkg/drbdsetup/events2.go @@ -0,0 +1,153 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package drbdsetup + +import ( + "bufio" + "context" + "fmt" + "iter" + "os/exec" + "strings" + "time" +) + +type Events2Result interface { + _isEvents2Result() +} + +type Event struct { + Timestamp time.Time + // "exists" for an existing object; + // + // "create", "destroy", and "change" if an object + // is created, destroyed, or changed; + // + // "call" or "response" if an event handler + // is called or it returns; + // + // or "rename" when the name of an object is changed + Kind string + // "resource", "device", "connection", "peer-device", "path", "helper", or + // a dash ("-") to indicate that the current state has been dumped + // completely + Object string + // Identify the object and describe the state that the object is in + State map[string]string +} + +var _ Events2Result = &Event{} + +func (*Event) _isEvents2Result() {} + +type UnparsedEvent struct { + RawEventLine string + Err error +} + +var _ Events2Result = &UnparsedEvent{} + +func (u UnparsedEvent) _isEvents2Result() {} + +func ExecuteEvents2( + ctx context.Context, + resultErr *error, +) iter.Seq[Events2Result] { + if resultErr == nil { + panic("resultErr is required to be non-nil pointer") + } + + return func(yield func(Events2Result) bool) { + cmd := exec.CommandContext( + ctx, + Command, + Events2Args..., + ) + + stdout, err := cmd.StdoutPipe() + if err != nil { + *resultErr = fmt.Errorf("getting stdout pipe: %w", err) + return + } + + if err := cmd.Start(); err != nil { + *resultErr = fmt.Errorf("starting command: %w", err) + return + } + + scanner := bufio.NewScanner(stdout) + for scanner.Scan() { + line := scanner.Text() + if !yield(parseLine(line)) { + return + } + } + + if err := scanner.Err(); err != nil { + *resultErr = fmt.Errorf("error reading command output: %w", err) + return + } + + if err := cmd.Wait(); err != nil { + *resultErr = fmt.Errorf("command finished with error: %w", err) + return + } + } +} + +// parseLine parses a single line of drbdsetup events2 output +func parseLine(line string) Events2Result { + fields := strings.Fields(line) + if len(fields) < 3 { + return &UnparsedEvent{ + RawEventLine: line, + Err: fmt.Errorf("line has fewer than 3 fields"), + } + } + + // ISO 8601 timestamp + tsStr := fields[0] + ts, err := time.Parse(time.RFC3339Nano, tsStr) + if err != nil { + return &UnparsedEvent{ + RawEventLine: line, + Err: fmt.Errorf("invalid timestamp %q: %v", tsStr, err), + } + } + + kind := fields[1] + object := fields[2] + + state := make(map[string]string) + for _, kv := range fields[3:] { + parts := strings.SplitN(kv, ":", 2) + if len(parts) != 2 { + return &UnparsedEvent{ + RawEventLine: line, + Err: fmt.Errorf("invalid key-value pair: %s", kv), + } + } + state[parts[0]] = parts[1] + } + + return &Event{ + Timestamp: ts, + Kind: kind, + Object: object, + State: state, + } +} diff --git a/images/agent/pkg/drbdsetup/status.go b/images/agent/pkg/drbdsetup/status.go new file mode 100644 index 000000000..168fbb166 --- /dev/null +++ b/images/agent/pkg/drbdsetup/status.go @@ -0,0 +1,124 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
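Because the error is reported through the resultErr out-parameter rather than through the yielded values, a consumer of ExecuteEvents2 typically checks it after the range loop. A hedged usage sketch (package drbdsetup); the sample line in the comment is hypothetical but follows the timestamp/kind/object/key:value shape parseLine expects:

```go
func exampleConsumeEvents2(ctx context.Context) error {
	var iterErr error
	for res := range ExecuteEvents2(ctx, &iterErr) {
		switch e := res.(type) {
		case *Event:
			// A line such as
			//   2025-01-01T00:00:00.000000+03:00 change resource name:r0 role:Primary
			// would arrive here as Kind "change", Object "resource",
			// State {"name": "r0", "role": "Primary"}.
			_ = e.State["name"]
		case *UnparsedEvent:
			fmt.Printf("skipping line %q: %v\n", e.RawEventLine, e.Err)
		}
	}
	return iterErr
}
```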
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdsetup + +import ( + "context" + "encoding/json" + "fmt" + "os/exec" +) + +type StatusResult []Resource + +type Resource struct { + Name string `json:"name"` + NodeID int `json:"node-id"` + Role string `json:"role"` + Suspended bool `json:"suspended"` + SuspendedUser bool `json:"suspended-user"` + SuspendedNoData bool `json:"suspended-no-data"` + SuspendedFencing bool `json:"suspended-fencing"` + SuspendedQuorum bool `json:"suspended-quorum"` + ForceIOFailures bool `json:"force-io-failures"` + WriteOrdering string `json:"write-ordering"` + Devices []Device `json:"devices"` + Connections []Connection `json:"connections"` +} + +type Device struct { + Volume int `json:"volume"` + Minor int `json:"minor"` + DiskState string `json:"disk-state"` + Client bool `json:"client"` + Open bool `json:"open"` + Quorum bool `json:"quorum"` + Size int `json:"size"` + Read int `json:"read"` + Written int `json:"written"` + ALWrites int `json:"al-writes"` + BMWrites int `json:"bm-writes"` + UpperPending int `json:"upper-pending"` + LowerPending int `json:"lower-pending"` +} + +type Connection struct { + PeerNodeID int `json:"peer-node-id"` + Name string `json:"name"` + ConnectionState string `json:"connection-state"` + Congested bool `json:"congested"` + Peerrole string `json:"peer-role"` + TLS bool `json:"tls"` + APInFlight int `json:"ap-in-flight"` + RSInFlight int `json:"rs-in-flight"` + + Paths []Path `json:"paths"` + PeerDevices []PeerDevice `json:"peer_devices"` +} + +type Path struct { + ThisHost Host `json:"this_host"` + RemoteHost Host `json:"remote_host"` + Established bool `json:"established"` +} + +type Host struct { + Address string `json:"address"` + Port int `json:"port"` + Family string `json:"family"` +} + +type PeerDevice struct { + Volume int `json:"volume"` + ReplicationState string `json:"replication-state"` + PeerDiskState string `json:"peer-disk-state"` + PeerClient bool `json:"peer-client"` + ResyncSuspended string `json:"resync-suspended"` + Received int `json:"received"` + Sent int `json:"sent"` + OutOfSync int `json:"out-of-sync"` + Pending int `json:"pending"` + Unacked int `json:"unacked"` + HasSyncDetails bool `json:"has-sync-details"` + HasOnlineVerifyDetails bool `json:"has-online-verify-details"` + PercentInSync float64 `json:"percent-in-sync"` +} + +func ExecuteStatus(ctx context.Context) (StatusResult, error) { + cmd := exec.CommandContext(ctx, Command, StatusArgs...) + + jsonBytes, err := cmd.CombinedOutput() + if err != nil { + return nil, + fmt.Errorf( + "running command: %w; output: %q", + err, string(jsonBytes), + ) + } + + // TODO: we need all items to be sorted and not rely on sorting on DRBD side + var res StatusResult + if err := json.Unmarshal(jsonBytes, &res); err != nil { + return nil, + fmt.Errorf( + "unmarshaling command output: %w; output: %q", + err, string(jsonBytes), + ) + } + + return res, nil +} diff --git a/images/agent/pkg/drbdsetup/vars.go b/images/agent/pkg/drbdsetup/vars.go new file mode 100644 index 000000000..5dcd7623c --- /dev/null +++ b/images/agent/pkg/drbdsetup/vars.go @@ -0,0 +1,24 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
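A hedged usage sketch for the status wrapper (package drbdsetup); it only walks the JSON structures defined above:

```go
func examplePrintStatus(ctx context.Context) error {
	resources, err := ExecuteStatus(ctx)
	if err != nil {
		return err
	}
	for _, r := range resources {
		for _, d := range r.Devices {
			fmt.Printf("%s/%d: disk=%s quorum=%t\n", r.Name, d.Volume, d.DiskState, d.Quorum)
		}
		for _, c := range r.Connections {
			fmt.Printf("%s -> peer %s: %s\n", r.Name, c.Name, c.ConnectionState)
		}
	}
	return nil
}
```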
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drbdsetup + +var Command = "drbdsetup" +var StatusArgs = []string{"status", "--json"} +var Events2Args = []string{"events2", "--timestamps"} +var DownArgs = func(resource string) []string { + return []string{"down", resource} +} diff --git a/images/agent/werf.inc.yaml b/images/agent/werf.inc.yaml new file mode 100644 index 000000000..3dd7ca7ae --- /dev/null +++ b/images/agent/werf.inc.yaml @@ -0,0 +1,112 @@ +--- +# do not remove this image: used in external audits (DKP CSE) +image: {{ .ImageName }}-src-artifact +fromImage: builder/src +final: false +git: + - add: / + to: /src + includePaths: + - api + - lib/go + - images/{{ $.ImageName }} + stageDependencies: + install: + - '**/*' + excludePaths: + - images/{{ $.ImageName }}/werf.yaml +shell: + install: + - git clone --depth 1 --branch v{{ $.Versions.DRBD_UTILS }} {{ $.Root.SOURCE_REPO }}/LINBIT/drbd-utils /src/drbd-utils + - cd /src/drbd-utils + - git submodule update --init --recursive + #- rm -rf /src/drbd-utils/.git # needed for make + # LVM2 + - git clone --depth 1 --branch v{{ $.Versions.LVM2 }} {{ $.Root.SOURCE_REPO }}/lvmteam/lvm2.git /src/lvm2 + - rm -rf /src/lvm2/.git + + +--- +{{- $drbdBinaries := "/drbd-utils/sbin/* /drbd-utils/etc/drbd.conf /drbd-utils/etc/drbd.d/global_common.conf /drbd-utils/etc/multipath/conf.d/drbd.conf /usr/sbin/dmsetup /usr/sbin/dmstats" }} +image: {{ .ImageName }}-binaries-artifact +fromImage: builder/alt +final: false +import: + - image: {{ .ImageName }}-src-artifact + add: /src + to: /src + includePaths: + - drbd-utils + - lvm2 + before: install +git: + - add: /tools/dev_images/additional_tools/alt/binary_replace.sh + to: /binary_replace.sh + stageDependencies: + beforeSetup: + - '**/*' +shell: + beforeInstall: + - apt-get update + - apt-get install -y make automake pkg-config gcc libtool git curl rsync + - apt-get install -y flex libkeyutils-devel udev + - {{ $.Root.ALT_CLEANUP_CMD }} + install: + - cd /src/drbd-utils + - ./autogen.sh + - ./configure --prefix=/ --sysconfdir=/etc --localstatedir=/var --without-manual + # Fix the command startup error: + # 'git rev-parse HEAD' + # fatal: not a git repository (or any of the parent directories): .git + - if ! 
test -e .git/refs;then echo "-- mkdir -p .git/refs" ;mkdir -p .git/refs ;fi + - make + - make install DESTDIR=/drbd-utils + - sed -i 's/usage-count\s*yes;/usage-count no;/' /drbd-utils/etc/drbd.d/global_common.conf + # LVM2 - Let's take only dmsetup + - cd /src/lvm2 + - ./configure --prefix=/ --libdir=/lib64 --build=x86_64-linux-gnu --disable-readline --without-systemd + - make libdm.device-mapper + - make -C libdm install_device-mapper + beforeSetup: + - chmod +x /binary_replace.sh + - /binary_replace.sh -i "{{ $drbdBinaries }}" -o /relocate + setup: + - rsync -avz /relocate/drbd-utils/ /relocate/ + - rm -rf /relocate/drbd-utils/ + - echo 'include "/var/lib/sds-replicated-volume-agent.d/*.res";' > /relocate/etc/drbd.d/sds-replicated-volume-agent.res + +--- +image: {{ .ImageName }}-golang-artifact +fromImage: builder/golang-alpine +final: false +import: + - image: {{ .ImageName }}-src-artifact + add: /src + to: /src + excludePaths: + - drbd-utils + before: setup +mount: + - fromPath: ~/go-pkg-cache + to: /go/pkg +shell: + setup: + - cd /src/images/{{ $.ImageName }}/cmd + - GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s -w" -tags {{ $.Root.MODULE_EDITION }} -o /{{ $.ImageName }} + - chmod +x /{{ $.ImageName }} + +--- +image: {{ .ImageName }} +fromImage: builder/src +import: + - image: {{ $.ImageName }}-binaries-artifact + add: /relocate + to: / + before: setup + - image: {{ $.ImageName }}-golang-artifact + add: /{{ $.ImageName }} + to: /{{ $.ImageName }} + before: setup +imageSpec: + config: + entrypoint: ["/{{ $.ImageName }}"] diff --git a/images/controller/LICENSE b/images/controller/LICENSE new file mode 100644 index 000000000..b77c0c92a --- /dev/null +++ b/images/controller/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/images/controller/README.md b/images/controller/README.md new file mode 100644 index 000000000..a0897f96c --- /dev/null +++ b/images/controller/README.md @@ -0,0 +1,67 @@ +# sds-replicated-volume-controller + +This binary contains controllers for managing replicated storage resources. 
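+
+All of them are wired into a single controller-runtime manager (see `cmd/main.go` and `cmd/manager.go` in this image). A minimal sketch of that wiring, with `restConfig`, `scheme`, `podNamespace` and `ctx` as placeholder names and error handling elided:
+
+```go
+// Sketch only: mirrors cmd/manager.go in this image.
+mgr, _ := manager.New(restConfig, manager.Options{Scheme: scheme})
+_ = controllers.BuildAll(mgr, podNamespace) // registers the controllers listed below
+_ = mgr.Start(ctx)
+```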
+ +## Controllers + +| Controller | Primary Resource | Purpose | +|------------|------------------|---------| +| [rsp_controller](internal/controllers/rsp_controller/README.md) | ReplicatedStoragePool | Calculates eligible nodes from LVGs, Nodes, and agent Pods | +| [rsc_controller](internal/controllers/rsc_controller/README.md) | ReplicatedStorageClass | Manages RSP, validates configuration, aggregates volume stats | +| [node_controller](internal/controllers/node_controller/README.md) | Node | Manages agent node labels based on RSP eligibility and DRBDResources | + +## Architecture + +```mermaid +flowchart TB + subgraph external [External Resources] + Node[Node] + LVG[LVMVolumeGroup] + AgentPod[Pod agent] + DRBD[DRBDResource] + end + + subgraph resources [Module Resources] + RSP[ReplicatedStoragePool] + RSC[ReplicatedStorageClass] + RV[ReplicatedVolume] + end + + subgraph controllers [Controllers] + RSPCtrl[rsp_controller] + RSCCtrl[rsc_controller] + NodeCtrl[node_controller] + end + + subgraph managed [Managed State] + RSPStatus[RSP.status.eligibleNodes] + RSCStatus[RSC.status] + NodeLabel[Node label] + end + + LVG --> RSPCtrl + Node --> RSPCtrl + AgentPod --> RSPCtrl + RSP --> RSPCtrl + RSPCtrl --> RSPStatus + + RSPStatus --> RSCCtrl + RSC --> RSCCtrl + RV --> RSCCtrl + RSCCtrl -->|creates| RSP + RSCCtrl --> RSCStatus + + RSPStatus --> NodeCtrl + DRBD --> NodeCtrl + NodeCtrl --> NodeLabel +``` + +## Dependency Chain + +Controllers have a logical dependency order: + +1. **rsp_controller** — runs first, aggregates external resources into `RSP.status.eligibleNodes` +2. **rsc_controller** — depends on RSP status for configuration validation +3. **node_controller** — depends on RSP status for node label decisions + +Each controller reconciles independently, reacting to changes in its watched resources. diff --git a/images/controller/cmd/main.go b/images/controller/cmd/main.go new file mode 100644 index 000000000..a79774bd0 --- /dev/null +++ b/images/controller/cmd/main.go @@ -0,0 +1,88 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package main + +import ( + "context" + "errors" + "fmt" + "log/slog" + "os" + "time" + + "github.com/go-logr/logr" + "golang.org/x/sync/errgroup" + crlog "sigs.k8s.io/controller-runtime/pkg/log" + "sigs.k8s.io/controller-runtime/pkg/manager/signals" + + "github.com/deckhouse/sds-common-lib/slogh" + u "github.com/deckhouse/sds-common-lib/utils" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/env" +) + +func main() { + ctx := signals.SetupSignalHandler() + + slogh.EnableConfigReload(ctx, nil) + logHandler := &slogh.Handler{} + log := slog.New(logHandler). 
+ With("startedAt", time.Now().Format(time.RFC3339)) + slog.SetDefault(log) + + crlog.SetLogger(logr.FromSlogHandler(logHandler)) + + log.Info("controller app started") + + err := run(ctx, log) + if !errors.Is(err, context.Canceled) || ctx.Err() != context.Canceled { + log.Error("exited unexpectedly", "err", err, "ctxerr", ctx.Err()) + os.Exit(1) + } + log.Info( + "gracefully shutdown", + // cleanup errors do not affect status code, but worth logging + "err", err, + ) +} + +func run(ctx context.Context, log *slog.Logger) (err error) { + // The derived Context is canceled the first time a function passed to eg.Go + // returns a non-nil error or the first time Wait returns + eg, ctx := errgroup.WithContext(ctx) + + envConfig, err := env.GetConfig() + if err != nil { + return fmt.Errorf("getting env config: %w", err) + } + + // MANAGER + mgr, err := newManager(ctx, log, envConfig) + if err != nil { + return err + } + + eg.Go(func() error { + if err := mgr.Start(ctx); err != nil { + return u.LogError(log, fmt.Errorf("starting controller: %w", err)) + } + return ctx.Err() + }) + + // ... + + return eg.Wait() +} diff --git a/images/controller/cmd/manager.go b/images/controller/cmd/manager.go new file mode 100644 index 000000000..701d5f708 --- /dev/null +++ b/images/controller/cmd/manager.go @@ -0,0 +1,102 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package main + +import ( + "context" + "fmt" + "log/slog" + + "github.com/go-logr/logr" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/labels" + "sigs.k8s.io/controller-runtime/pkg/cache" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/config" + "sigs.k8s.io/controller-runtime/pkg/healthz" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/metrics/server" + + u "github.com/deckhouse/sds-common-lib/utils" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/scheme" +) + +type managerConfig interface { + PodNamespace() string + HealthProbeBindAddress() string + MetricsBindAddress() string +} + +func newManager( + ctx context.Context, + log *slog.Logger, + envConfig managerConfig, +) (manager.Manager, error) { + config, err := config.GetConfig() + if err != nil { + return nil, u.LogError(log, fmt.Errorf("getting rest config: %w", err)) + } + + scheme, err := scheme.New() + if err != nil { + return nil, u.LogError(log, fmt.Errorf("building scheme: %w", err)) + } + + // Configure cache to only watch agent pods in the controller's namespace. + // This reduces memory usage and API server load. 
+ cacheOpt := cache.Options{ + ByObject: map[client.Object]cache.ByObject{ + &corev1.Pod{}: { + Namespaces: map[string]cache.Config{ + envConfig.PodNamespace(): {}, + }, + Label: labels.SelectorFromSet(labels.Set{"app": "agent"}), + }, + }, + } + + mgrOpts := manager.Options{ + Scheme: scheme, + BaseContext: func() context.Context { return ctx }, + Logger: logr.FromSlogHandler(log.Handler()), + HealthProbeBindAddress: envConfig.HealthProbeBindAddress(), + Cache: cacheOpt, + Metrics: server.Options{ + BindAddress: envConfig.MetricsBindAddress(), + }, + } + + mgr, err := manager.New(config, mgrOpts) + if err != nil { + return nil, u.LogError(log, fmt.Errorf("creating manager: %w", err)) + } + + if err = mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil { + return nil, u.LogError(log, fmt.Errorf("AddHealthzCheck: %w", err)) + } + + if err = mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil { + return nil, u.LogError(log, fmt.Errorf("AddReadyzCheck: %w", err)) + } + + if err := controllers.BuildAll(mgr, envConfig.PodNamespace()); err != nil { + return nil, err + } + + return mgr, nil +} diff --git a/images/controller/cmd/slogh.cfg b/images/controller/cmd/slogh.cfg new file mode 100644 index 000000000..78fcdd64d --- /dev/null +++ b/images/controller/cmd/slogh.cfg @@ -0,0 +1,13 @@ +# those are all keys with default values: + +# any slog level, or just a number +level=DEBUG + +# also supported: "text" +format=text + +# for each log print "source" property with information about callsite +callsite=true + +render=true +stringValues=true diff --git a/images/controller/go.mod b/images/controller/go.mod new file mode 100644 index 000000000..18384223d --- /dev/null +++ b/images/controller/go.mod @@ -0,0 +1,254 @@ +module github.com/deckhouse/sds-replicated-volume/images/controller + +go 1.24.11 + +replace github.com/deckhouse/sds-replicated-volume/api => ../../api + +replace github.com/deckhouse/sds-replicated-volume/lib/go/common => ../../lib/go/common + +require ( + github.com/deckhouse/sds-common-lib v0.6.3 + github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da + github.com/deckhouse/sds-replicated-volume/api v0.0.0-20251121101523-5ed5ba65d062 + github.com/deckhouse/sds-replicated-volume/lib/go/common v0.0.0-00010101000000-000000000000 + github.com/go-logr/logr v1.4.3 + github.com/onsi/ginkgo/v2 v2.27.2 + github.com/onsi/gomega v1.38.3 + golang.org/x/sync v0.19.0 + k8s.io/api v0.34.3 + k8s.io/apimachinery v0.34.3 + k8s.io/client-go v0.34.3 + k8s.io/component-helpers v0.34.3 + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 + sigs.k8s.io/controller-runtime v0.22.4 +) + +require ( + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + 
github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/emicklei/go-restful/v3 v3.13.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/evanphx/json-patch/v5 v5.9.11 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + 
github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/btree v1.1.3 // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect + github.com/google/uuid v1.6.0 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + 
github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/x448/float16 v0.8.4 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + golang.org/x/tools v0.38.0 // indirect + gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/apiextensions-apiserver v0.34.3 // indirect + k8s.io/klog/v2 v2.130.1 // indirect + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect + mvdan.cc/gofumpt 
v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +tool github.com/onsi/ginkgo/v2/ginkgo + +tool github.com/golangci/golangci-lint/cmd/golangci-lint diff --git a/images/controller/go.sum b/images/controller/go.sum new file mode 100644 index 000000000..6184b715a --- /dev/null +++ b/images/controller/go.sum @@ -0,0 +1,705 @@ +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= 
+github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod 
h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/deckhouse/sds-common-lib v0.6.3 h1:k0OotLuQaKuZt8iyph9IusDixjAE0MQRKyuTe2wZP3I= +github.com/deckhouse/sds-common-lib v0.6.3/go.mod h1:UHZMKkqEh6RAO+vtA7dFTwn/2m5lzfPn0kfULBmDf2o= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da h1:LFk9OC/+EVWfYDRe54Hip4kVKwjNcPhHZTftlm5DCpg= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da/go.mod h1:X5ftUa4MrSXMKiwQYa4lwFuGtrs+HoCNa8Zl6TPrGo8= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= +github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= +github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod 
h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= +github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= +github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= +github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod 
h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= +github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock 
v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= +github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= +github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gordonklaus/ineffassign v0.1.0 
h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod 
h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod 
h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= +github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= 
+github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= +github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard 
v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= 
+github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 
h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan 
v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= 
+golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= 
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod 
h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0= +gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= +k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= +k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= +k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= +k8s.io/component-helpers v0.34.3 h1:Iws1GQfM89Lxo7IZITGmVdFOW0Bmyd7SVwwIu1/CCkE= 
+k8s.io/component-helpers v0.34.3/go.mod h1:S8HjjMTrUDVMVPo2EdNYRtQx9uIEIueQYdPMOe9UxJs= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A= +sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/images/controller/internal/controllers/indexes.go b/images/controller/internal/controllers/indexes.go new file mode 100644 index 000000000..20fa41670 --- /dev/null +++ b/images/controller/internal/controllers/indexes.go @@ -0,0 +1,73 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package controllers + +import ( + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// RegisterIndexes registers controller-runtime cache indexes used by controllers. +// It must be invoked before any controller starts listing with MatchingFields. 
+func RegisterIndexes(mgr manager.Manager) error { + // ReplicatedVolume (RV) + if err := indexes.RegisterRVByReplicatedStorageClassName(mgr); err != nil { + return err + } + + // ReplicatedVolumeAttachment (RVA) + if err := indexes.RegisterRVAByReplicatedVolumeName(mgr); err != nil { + return err + } + + // ReplicatedVolumeReplica (RVR) + if err := indexes.RegisterRVRByNodeName(mgr); err != nil { + return err + } + if err := indexes.RegisterRVRByReplicatedVolumeName(mgr); err != nil { + return err + } + + // Node + if err := indexes.RegisterNodeByMetadataName(mgr); err != nil { + return err + } + + // DRBDResource + if err := indexes.RegisterDRBDResourceByNodeName(mgr); err != nil { + return err + } + + // ReplicatedStorageClass (RSC) + if err := indexes.RegisterRSCByStoragePool(mgr); err != nil { + return err + } + + // ReplicatedStoragePool (RSP) + if err := indexes.RegisterRSPByLVMVolumeGroupName(mgr); err != nil { + return err + } + if err := indexes.RegisterRSPByEligibleNodeName(mgr); err != nil { + return err + } + if err := indexes.RegisterRSPByUsedByRSCName(mgr); err != nil { + return err + } + + return nil +} diff --git a/images/controller/internal/controllers/node_controller/README.md b/images/controller/internal/controllers/node_controller/README.md new file mode 100644 index 000000000..6e3bfd357 --- /dev/null +++ b/images/controller/internal/controllers/node_controller/README.md @@ -0,0 +1,134 @@ +# node_controller + +This controller manages the `storage.deckhouse.io/sds-replicated-volume-node` label on cluster nodes. + +## Purpose + +The `storage.deckhouse.io/sds-replicated-volume-node` label determines which nodes should run the sds-replicated-volume agent. +The controller automatically adds this label to nodes that are in at least one `ReplicatedStoragePool` (RSP) `eligibleNodes` list, +and removes it from nodes that are not in any RSP's `eligibleNodes`. + +**Important**: The label is also preserved on nodes that have at least one `DRBDResource`, +even if the node is not in any RSP's `eligibleNodes`. This prevents orphaning DRBD resources when RSP configuration changes. + +## Interactions + +| Direction | Resource/Controller | Relationship | +|-----------|---------------------|--------------| +| ← input | rsp_controller | Reads `RSP.Status.EligibleNodes` to decide node labels | +| ← input | DRBDResource | Reads presence of DRBDResources to preserve labels | +| → output | Node | Manages `AgentNodeLabelKey` label | + +## Algorithm + +A node receives the label if **at least one** of the following conditions is met (OR): + +1. **RSP Eligibility**: The node is in at least one `ReplicatedStoragePool`'s `status.eligibleNodes` list. +2. **DRBDResource Presence**: The node has at least one `DRBDResource` (`spec.nodeName == node.Name`). 
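+
+As a minimal Go sketch, the decision reduces to the following (the function name is illustrative and not part of the controller's code; in the real reconciler both counts come from the cache index lookups listed below):
+
+```go
+// shouldHaveAgentLabel is a sketch of the labeling rule only.
+// rspEligibleCount: number of RSPs whose status.eligibleNodes include the node.
+// drbdResourceCount: number of DRBDResources with spec.nodeName equal to the node name.
+func shouldHaveAgentLabel(rspEligibleCount, drbdResourceCount int) bool {
+	// Keep (or add) the label if the node is eligible in at least one RSP
+	// OR still hosts at least one DRBDResource.
+	return rspEligibleCount > 0 || drbdResourceCount > 0
+}
+```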
+ +``` +shouldHaveLabel = (rspCount > 0) OR (drbdCount > 0) +``` + +## Reconciliation Structure + +The controller reconciles individual nodes (not a singleton): + +``` +Reconcile(nodeName) +├── getNodeAgentLabelPresence — check if node exists and has label (index lookup) +├── getNumberOfDRBDResourcesByNode — count DRBDResources on node (index lookup) +├── getNumberOfRSPByEligibleNode — count RSPs with this node eligible (index lookup) +├── if hasLabel == shouldHaveLabel → Done (no patch needed) +├── getNode — fetch full node object +└── Patch node label (add or remove) +``` + +## Algorithm Flow + +```mermaid +flowchart TD + Start([Reconcile Node]) --> CheckExists{Node exists?} + CheckExists -->|No| Done([Done]) + CheckExists -->|Yes| GetDRBD[Count DRBDResources on node] + GetDRBD --> GetRSP[Count RSPs with node eligible] + GetRSP --> ComputeTarget[shouldHaveLabel = drbd > 0 OR rsp > 0] + ComputeTarget --> CheckSync{hasLabel == shouldHaveLabel?} + CheckSync -->|Yes| Done + CheckSync -->|No| FetchNode[Fetch full Node object] + FetchNode --> Patch[Patch Node label] + Patch --> Done +``` + +## Managed Metadata + +| Type | Key | Managed On | Purpose | +|------|-----|------------|---------| +| Label | `storage.deckhouse.io/sds-replicated-volume-node` | Node | Mark nodes that should run the agent | + +## Watches + +The controller watches three event sources: + +| Resource | Events | Handler | +|----------|--------|---------| +| Node | Create, Update | Reacts to `AgentNodeLabelKey` presence changes | +| ReplicatedStoragePool | Create, Update, Delete | Reacts to `eligibleNodes` changes (delta computation) | +| DRBDResource | Create, Delete | Maps to node via `spec.nodeName` | + +### RSP Delta Computation + +When an RSP's `eligibleNodes` changes, the controller computes the delta (added/removed nodes) +and enqueues reconcile requests only for affected nodes, not for all nodes in the cluster. + +## Indexes + +| Index | Field | Purpose | +|-------|-------|---------| +| Node by metadata.name | `metadata.name` | Efficient node existence and label check | +| DRBDResource by node | `spec.nodeName` | Count DRBDResources per node | +| RSP by eligible node | `status.eligibleNodes[].nodeName` | Count RSPs where node is eligible | + +## Data Flow + +```mermaid +flowchart TD + subgraph events [Event Sources] + NodeEvents[Node label changes] + RSPEvents[RSP eligibleNodes changes] + DRBDEvents[DRBDResource create/delete] + end + + subgraph indexes [Index Lookups] + NodeIndex[Node by metadata.name] + DRBDIndex[DRBDResource by spec.nodeName] + RSPIndex[RSP by eligibleNodeName] + end + + subgraph reconcile [Reconcile] + CheckLabel[getNodeAgentLabelPresence] + CountDRBD[getNumberOfDRBDResourcesByNode] + CountRSP[getNumberOfRSPByEligibleNode] + Decision[shouldHaveLabel?] + PatchNode[Patch Node] + end + + subgraph output [Output] + NodeLabel[Node label
AgentNodeLabelKey] + end + + NodeEvents --> CheckLabel + RSPEvents --> CountRSP + DRBDEvents --> CountDRBD + + NodeIndex --> CheckLabel + DRBDIndex --> CountDRBD + RSPIndex --> CountRSP + + CheckLabel --> Decision + CountDRBD --> Decision + CountRSP --> Decision + + Decision -->|Need patch| PatchNode + PatchNode --> NodeLabel +``` diff --git a/images/controller/internal/controllers/node_controller/controller.go b/images/controller/internal/controllers/node_controller/controller.go new file mode 100644 index 000000000..939cb1373 --- /dev/null +++ b/images/controller/internal/controllers/node_controller/controller.go @@ -0,0 +1,154 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package nodecontroller + +import ( + "context" + + corev1 "k8s.io/api/core/v1" + "k8s.io/client-go/util/workqueue" + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// NodeControllerName is the controller name for node_controller. +const NodeControllerName = "node-controller" + +func BuildController(mgr manager.Manager) error { + cl := mgr.GetClient() + + rec := NewReconciler(cl) + + return builder.ControllerManagedBy(mgr). + Named(NodeControllerName). + // This controller reconciles individual Node objects. + // It also watches RSP and DRBDResource events. + For(&corev1.Node{}, builder.WithPredicates(nodePredicates()...)). + Watches( + &v1alpha1.ReplicatedStoragePool{}, + rspEventHandler(), + builder.WithPredicates(rspPredicates()...), + ). + Watches( + &v1alpha1.DRBDResource{}, + handler.EnqueueRequestsFromMapFunc(mapDRBDResourceToNode), + builder.WithPredicates(drbdResourcePredicates()...), + ). + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). + Complete(rec) +} + +// rspEventHandler returns an event handler for RSP that computes the delta of eligibleNodes +// and enqueues reconcile requests for affected nodes. +func rspEventHandler() handler.TypedEventHandler[client.Object, reconcile.Request] { + return handler.TypedFuncs[client.Object, reconcile.Request]{ + CreateFunc: func(_ context.Context, e event.TypedCreateEvent[client.Object], q workqueue.TypedRateLimitingInterface[reconcile.Request]) { + rsp, ok := e.Object.(*v1alpha1.ReplicatedStoragePool) + if !ok || rsp == nil { + return + } + enqueueNodesFromRSP(q, rsp) + }, + UpdateFunc: func(_ context.Context, e event.TypedUpdateEvent[client.Object], q workqueue.TypedRateLimitingInterface[reconcile.Request]) { + oldRSP, okOld := e.ObjectOld.(*v1alpha1.ReplicatedStoragePool) + newRSP, okNew := e.ObjectNew.(*v1alpha1.ReplicatedStoragePool) + if !okOld || !okNew || oldRSP == nil || newRSP == nil { + return + } + // Compute delta: nodes added or removed from eligibleNodes. 
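+			// Only nodes added to or removed from eligibleNodes are enqueued,
+			// so an RSP status update does not trigger a reconcile of every node in the cluster.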
+ enqueueEligibleNodesDelta(q, oldRSP.Status.EligibleNodes, newRSP.Status.EligibleNodes) + }, + DeleteFunc: func(_ context.Context, e event.TypedDeleteEvent[client.Object], q workqueue.TypedRateLimitingInterface[reconcile.Request]) { + rsp, ok := e.Object.(*v1alpha1.ReplicatedStoragePool) + if !ok || rsp == nil { + return + } + enqueueNodesFromRSP(q, rsp) + }, + } +} + +// enqueueNodesFromRSP enqueues reconcile requests for all nodes in RSP's eligibleNodes. +func enqueueNodesFromRSP(q workqueue.TypedRateLimitingInterface[reconcile.Request], rsp *v1alpha1.ReplicatedStoragePool) { + for i := range rsp.Status.EligibleNodes { + nodeName := rsp.Status.EligibleNodes[i].NodeName + if nodeName != "" { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{Name: nodeName}}) + } + } +} + +// enqueueEligibleNodesDelta enqueues reconcile requests for nodes that were added or removed. +// Precondition: both oldNodes and newNodes are sorted by NodeName (RSP controller guarantees this). +func enqueueEligibleNodesDelta( + q workqueue.TypedRateLimitingInterface[reconcile.Request], + oldNodes, newNodes []v1alpha1.ReplicatedStoragePoolEligibleNode, +) { + // Merge-style traversal of two sorted lists to find delta. + i, j := 0, 0 + for i < len(oldNodes) || j < len(newNodes) { + switch { + case i >= len(oldNodes): + // Remaining newNodes are all added. + if newNodes[j].NodeName != "" { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{Name: newNodes[j].NodeName}}) + } + j++ + case j >= len(newNodes): + // Remaining oldNodes are all removed. + if oldNodes[i].NodeName != "" { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{Name: oldNodes[i].NodeName}}) + } + i++ + case oldNodes[i].NodeName < newNodes[j].NodeName: + // Node was removed. + if oldNodes[i].NodeName != "" { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{Name: oldNodes[i].NodeName}}) + } + i++ + case oldNodes[i].NodeName > newNodes[j].NodeName: + // Node was added. + if newNodes[j].NodeName != "" { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{Name: newNodes[j].NodeName}}) + } + j++ + default: + // Same node in both lists, no change. + i++ + j++ + } + } +} + +// mapDRBDResourceToNode maps a DRBDResource event to a reconcile request for the node it belongs to. +func mapDRBDResourceToNode(_ context.Context, obj client.Object) []reconcile.Request { + dr, ok := obj.(*v1alpha1.DRBDResource) + if !ok || dr == nil { + return nil + } + if dr.Spec.NodeName == "" { + return nil + } + return []reconcile.Request{{NamespacedName: client.ObjectKey{Name: dr.Spec.NodeName}}} +} diff --git a/images/controller/internal/controllers/node_controller/controller_test.go b/images/controller/internal/controllers/node_controller/controller_test.go new file mode 100644 index 000000000..181ae4838 --- /dev/null +++ b/images/controller/internal/controllers/node_controller/controller_test.go @@ -0,0 +1,268 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package nodecontroller + +import ( + "context" + "time" + + . 
"github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// testQueue is a minimal implementation to capture enqueued requests. +type testQueue struct { + items []reconcile.Request +} + +func (q *testQueue) Add(item reconcile.Request) { q.items = append(q.items, item) } +func (q *testQueue) Len() int { return len(q.items) } +func (q *testQueue) Get() (reconcile.Request, bool) { return reconcile.Request{}, false } +func (q *testQueue) Done(reconcile.Request) {} +func (q *testQueue) ShutDown() {} +func (q *testQueue) ShutDownWithDrain() {} +func (q *testQueue) ShuttingDown() bool { return false } +func (q *testQueue) AddAfter(reconcile.Request, time.Duration) {} +func (q *testQueue) AddRateLimited(reconcile.Request) {} +func (q *testQueue) Forget(reconcile.Request) {} +func (q *testQueue) NumRequeues(reconcile.Request) int { return 0 } + +func requestNames(items []reconcile.Request) []string { + names := make([]string, 0, len(items)) + for _, item := range items { + names = append(names, item.Name) + } + return names +} + +var _ = Describe("enqueueEligibleNodesDelta", func() { + var q *testQueue + + BeforeEach(func() { + q = &testQueue{} + }) + + It("enqueues nothing when both slices are empty", func() { + enqueueEligibleNodesDelta(q, nil, nil) + + Expect(q.items).To(BeEmpty()) + }) + + It("enqueues nothing when slices are equal", func() { + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + enqueueEligibleNodesDelta(q, nodes, nodes) + + Expect(q.items).To(BeEmpty()) + }) + + It("enqueues added node", func() { + oldNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + } + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + enqueueEligibleNodesDelta(q, oldNodes, newNodes) + + Expect(requestNames(q.items)).To(ConsistOf("node-2")) + }) + + It("enqueues removed node", func() { + oldNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + } + + enqueueEligibleNodesDelta(q, oldNodes, newNodes) + + Expect(requestNames(q.items)).To(ConsistOf("node-2")) + }) + + It("enqueues all changed nodes (added and removed)", func() { + oldNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-3"}, + } + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-2"}, + {NodeName: "node-3"}, + } + + enqueueEligibleNodesDelta(q, oldNodes, newNodes) + + Expect(requestNames(q.items)).To(ConsistOf("node-1", "node-2")) + }) + + It("enqueues all nodes when old is empty", func() { + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + enqueueEligibleNodesDelta(q, nil, newNodes) + + Expect(requestNames(q.items)).To(ConsistOf("node-1", "node-2")) + }) + + It("enqueues all nodes when new is empty", func() { + oldNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + enqueueEligibleNodesDelta(q, oldNodes, nil) + + Expect(requestNames(q.items)).To(ConsistOf("node-1", "node-2")) + }) + + It("handles completely different sets", func() { + oldNodes := 
[]v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-a"}, + {NodeName: "node-b"}, + } + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-x"}, + {NodeName: "node-y"}, + } + + enqueueEligibleNodesDelta(q, oldNodes, newNodes) + + Expect(requestNames(q.items)).To(ConsistOf("node-a", "node-b", "node-x", "node-y")) + }) + + It("skips nodes with empty names", func() { + oldNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: ""}, + {NodeName: "node-1"}, + } + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + enqueueEligibleNodesDelta(q, oldNodes, newNodes) + + Expect(requestNames(q.items)).To(ConsistOf("node-2")) + }) +}) + +var _ = Describe("mapDRBDResourceToNode", func() { + It("returns request for node when nodeName is set", func() { + dr := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + + requests := mapDRBDResourceToNode(context.Background(), dr) + + Expect(requests).To(HaveLen(1)) + Expect(requests[0].Name).To(Equal("node-1")) + }) + + It("returns nil when nodeName is empty", func() { + dr := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: ""}, + } + + requests := mapDRBDResourceToNode(context.Background(), dr) + + Expect(requests).To(BeNil()) + }) + + It("returns nil when object is not DRBDResource", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + + requests := mapDRBDResourceToNode(context.Background(), node) + + Expect(requests).To(BeNil()) + }) + + It("returns nil when object is nil", func() { + requests := mapDRBDResourceToNode(context.Background(), nil) + + Expect(requests).To(BeNil()) + }) +}) + +var _ = Describe("enqueueNodesFromRSP", func() { + var q *testQueue + + BeforeEach(func() { + q = &testQueue{} + }) + + It("enqueues all nodes from RSP eligibleNodes", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + {NodeName: "node-3"}, + }, + }, + } + + enqueueNodesFromRSP(q, rsp) + + Expect(requestNames(q.items)).To(ConsistOf("node-1", "node-2", "node-3")) + }) + + It("enqueues nothing when eligibleNodes is empty", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{}, + }, + } + + enqueueNodesFromRSP(q, rsp) + + Expect(q.items).To(BeEmpty()) + }) + + It("skips nodes with empty names", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: ""}, + {NodeName: "node-2"}, + }, + }, + } + + enqueueNodesFromRSP(q, rsp) + + Expect(requestNames(q.items)).To(ConsistOf("node-1", "node-2")) + }) +}) diff --git a/images/controller/internal/controllers/node_controller/predicates.go b/images/controller/internal/controllers/node_controller/predicates.go new file mode 100644 index 000000000..bb8ffca95 --- /dev/null +++ b/images/controller/internal/controllers/node_controller/predicates.go @@ -0,0 +1,100 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package nodecontroller + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// nodePredicates returns predicates for Node events. +// Reacts to: +// - Create: always +// - Update: only if AgentNodeLabelKey presence/absence changed +// - Delete: never +func nodePredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + // Only react if AgentNodeLabelKey presence/absence changed. + _, oldHas := e.ObjectOld.GetLabels()[v1alpha1.AgentNodeLabelKey] + _, newHas := e.ObjectNew.GetLabels()[v1alpha1.AgentNodeLabelKey] + + return oldHas != newHas + }, + DeleteFunc: func(_ event.TypedDeleteEvent[client.Object]) bool { + // Node deletions are not interesting for this controller. + return false + }, + }, + } +} + +// rspPredicates returns predicates for ReplicatedStoragePool events. +// Reacts to: +// - Create: always (new RSP may have eligibleNodes) +// - Update: only if eligibleNodes changed +// - Delete: always (RSP removed, nodes may need label removed) +func rspPredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + oldRSP, okOld := e.ObjectOld.(*v1alpha1.ReplicatedStoragePool) + newRSP, okNew := e.ObjectNew.(*v1alpha1.ReplicatedStoragePool) + if !okOld || !okNew || oldRSP == nil || newRSP == nil { + return true + } + + // React only if eligibleNodes changed. + return !eligibleNodesEqual(oldRSP.Status.EligibleNodes, newRSP.Status.EligibleNodes) + }, + }, + } +} + +// eligibleNodesEqual compares two eligibleNodes slices by node names only. +// Precondition: both slices are sorted by NodeName (RSP controller guarantees this). +func eligibleNodesEqual(a, b []v1alpha1.ReplicatedStoragePoolEligibleNode) bool { + if len(a) != len(b) { + return false + } + for i := range a { + if a[i].NodeName != b[i].NodeName { + return false + } + } + return true +} + +// drbdResourcePredicates returns predicates for DRBDResource events. +// Reacts to: +// - Create: always (new resource appeared on a node) +// - Update: never (nodeName is immutable, other fields don't affect decision) +// - Delete: always (resource removed from a node) +func drbdResourcePredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + UpdateFunc: func(_ event.TypedUpdateEvent[client.Object]) bool { + // nodeName is immutable, other fields don't affect label decisions. 
+ return false + }, + }, + } +} diff --git a/images/controller/internal/controllers/node_controller/predicates_test.go b/images/controller/internal/controllers/node_controller/predicates_test.go new file mode 100644 index 000000000..b061ec24c --- /dev/null +++ b/images/controller/internal/controllers/node_controller/predicates_test.go @@ -0,0 +1,475 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package nodecontroller + +import ( + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +var _ = Describe("nodePredicates", func() { + var preds []func(event.TypedUpdateEvent[client.Object]) bool + + BeforeEach(func() { + predicates := nodePredicates() + preds = make([]func(event.TypedUpdateEvent[client.Object]) bool, 0) + for _, p := range predicates { + if fp, ok := p.(predicate.TypedFuncs[client.Object]); ok { + if fp.UpdateFunc != nil { + preds = append(preds, fp.UpdateFunc) + } + } + } + }) + + Describe("UpdateFunc", func() { + It("returns true when AgentNodeLabelKey is added", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when AgentNodeLabelKey is removed", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns false when AgentNodeLabelKey is unchanged (both have)", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeFalse()) + } + }) + + It("returns false when AgentNodeLabelKey is unchanged (both lack)", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + 
newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeFalse()) + } + }) + + It("returns false when other labels change but AgentNodeLabelKey unchanged", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + "env": "prod", + }, + }, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + "env": "staging", + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeFalse()) + } + }) + }) + + Describe("DeleteFunc", func() { + It("returns false always", func() { + predicates := nodePredicates() + for _, p := range predicates { + if fp, ok := p.(predicate.TypedFuncs[client.Object]); ok && fp.DeleteFunc != nil { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + e := event.TypedDeleteEvent[client.Object]{Object: node} + Expect(fp.DeleteFunc(e)).To(BeFalse()) + } + } + }) + }) +}) + +var _ = Describe("rspPredicates", func() { + Describe("UpdateFunc", func() { + var preds []func(event.TypedUpdateEvent[client.Object]) bool + + BeforeEach(func() { + predicates := rspPredicates() + preds = make([]func(event.TypedUpdateEvent[client.Object]) bool, 0) + for _, p := range predicates { + if fp, ok := p.(predicate.TypedFuncs[client.Object]); ok && fp.UpdateFunc != nil { + preds = append(preds, fp.UpdateFunc) + } + } + }) + + It("returns true when eligibleNodes changed (node added)", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when eligibleNodes changed (node removed)", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + }, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when eligibleNodes changed (different nodes)", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + 
ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-2"}, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns false when eligibleNodes unchanged", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + }, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeFalse()) + } + }) + + It("returns false when eligibleNodes both empty", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{}, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{}, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeFalse()) + } + }) + + It("returns true when cast fails (conservative)", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when old is nil (conservative)", func() { + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: nil, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when new is nil (conservative)", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: nil, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + }) +}) + +var _ = Describe("eligibleNodesEqual", func() { + It("returns true for empty slices", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{} + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{} + + Expect(eligibleNodesEqual(a, b)).To(BeTrue()) + }) + + It("returns true for nil slices", func() { + var a []v1alpha1.ReplicatedStoragePoolEligibleNode + var b []v1alpha1.ReplicatedStoragePoolEligibleNode + + Expect(eligibleNodesEqual(a, b)).To(BeTrue()) + }) + + It("returns true for equal slices", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + 
Expect(eligibleNodesEqual(a, b)).To(BeTrue()) + }) + + It("returns false for different lengths", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + Expect(eligibleNodesEqual(a, b)).To(BeFalse()) + }) + + It("returns false for different node names", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-3"}, + } + + Expect(eligibleNodesEqual(a, b)).To(BeFalse()) + }) + + It("ignores other fields in EligibleNode (only compares NodeName)", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1", ZoneName: "zone-a"}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1", ZoneName: "zone-b"}, + } + + Expect(eligibleNodesEqual(a, b)).To(BeTrue()) + }) +}) + +var _ = Describe("drbdResourcePredicates", func() { + Describe("UpdateFunc", func() { + It("returns false always", func() { + predicates := drbdResourcePredicates() + for _, p := range predicates { + if fp, ok := p.(predicate.TypedFuncs[client.Object]); ok && fp.UpdateFunc != nil { + oldDR := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + newDR := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldDR, + ObjectNew: newDR, + } + Expect(fp.UpdateFunc(e)).To(BeFalse()) + } + } + }) + }) +}) diff --git a/images/controller/internal/controllers/node_controller/reconciler.go b/images/controller/internal/controllers/node_controller/reconciler.go new file mode 100644 index 000000000..bab61b7c5 --- /dev/null +++ b/images/controller/internal/controllers/node_controller/reconciler.go @@ -0,0 +1,176 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package nodecontroller + +import ( + "context" + + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/reconciliation/flow" +) + +// ────────────────────────────────────────────────────────────────────────────── +// Wiring / construction +// + +type Reconciler struct { + cl client.Client +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +func NewReconciler(cl client.Client) *Reconciler { + return &Reconciler{cl: cl} +} + +// ────────────────────────────────────────────────────────────────────────────── +// Reconcile +// + +// Reconcile pattern: Conditional desired evaluation +// +// Reconciles a single Node by checking if it should have the AgentNodeLabelKey. +// A node should have the label if: +// - it is in at least one RSP's eligibleNodes, OR +// - it has at least one DRBDResource (to prevent orphaning DRBD resources) +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + rf := flow.BeginRootReconcile(ctx) + + nodeName := req.Name + + // Check current label state (cheap, uses UnsafeDisableDeepCopy). + nodeExists, hasLabel, err := r.getNodeAgentLabelPresence(rf.Ctx(), nodeName) + if err != nil { + return rf.Fail(err).ToCtrl() + } + if !nodeExists { + // Node was deleted, nothing to do. + return rf.Done().ToCtrl() + } + + // Check if node has any DRBDResources. + drbdCount, err := r.getNumberOfDRBDResourcesByNode(rf.Ctx(), nodeName) + if err != nil { + return rf.Fail(err).ToCtrl() + } + + // Check if node is in any RSP's eligibleNodes. + rspCount, err := r.getNumberOfRSPByEligibleNode(rf.Ctx(), nodeName) + if err != nil { + return rf.Fail(err).ToCtrl() + } + + // Node should have label if it has any DRBDResource OR is in any RSP's eligibleNodes. + shouldHaveLabel := drbdCount > 0 || rspCount > 0 + + // Check if node is already in sync. + if hasLabel == shouldHaveLabel { + return rf.Done().ToCtrl() + } + + // Need to patch: fetch full node. + node, err := r.getNode(rf.Ctx(), nodeName) + if err != nil { + if apierrors.IsNotFound(err) { + // Node was deleted between checks, nothing to do. + return rf.Done().ToCtrl() + } + return rf.Fail(err).ToCtrl() + } + + // Take patch base. + base := node.DeepCopy() + + // Ensure label state. + if shouldHaveLabel { + obju.SetLabel(node, v1alpha1.AgentNodeLabelKey, nodeName) + } else { + obju.RemoveLabel(node, v1alpha1.AgentNodeLabelKey) + } + + // Patch node. + if err := r.cl.Patch(rf.Ctx(), node, client.MergeFrom(base)); err != nil { + return rf.Fail(err).ToCtrl() + } + + return rf.Done().ToCtrl() +} + +// ────────────────────────────────────────────────────────────────────────────── +// Single-call I/O helper categories +// + +// getNodeAgentLabelPresence checks if a node exists and whether it has the AgentNodeLabelKey. +// Uses UnsafeDisableDeepCopy for performance since we only need to read the label. +// Returns (exists, hasLabel, err). 
+func (r *Reconciler) getNodeAgentLabelPresence(ctx context.Context, name string) (bool, bool, error) { + var list corev1.NodeList + if err := r.cl.List(ctx, &list, + client.MatchingFields{indexes.IndexFieldNodeByMetadataName: name}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return false, false, err + } + if len(list.Items) == 0 { + return false, false, nil + } + hasLabel := obju.HasLabel(&list.Items[0], v1alpha1.AgentNodeLabelKey) + return true, hasLabel, nil +} + +// getNode fetches a Node by name. Returns NotFound error if node doesn't exist. +func (r *Reconciler) getNode(ctx context.Context, name string) (*corev1.Node, error) { + var node corev1.Node + if err := r.cl.Get(ctx, client.ObjectKey{Name: name}, &node); err != nil { + return nil, err + } + return &node, nil +} + +// getNumberOfDRBDResourcesByNode returns the count of DRBDResource objects on the specified node. +// Uses index for efficient lookup and UnsafeDisableDeepCopy for performance. +func (r *Reconciler) getNumberOfDRBDResourcesByNode(ctx context.Context, nodeName string) (int, error) { + var list v1alpha1.DRBDResourceList + if err := r.cl.List(ctx, &list, + client.MatchingFields{indexes.IndexFieldDRBDResourceByNodeName: nodeName}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return 0, err + } + return len(list.Items), nil +} + +// getNumberOfRSPByEligibleNode returns the count of RSP objects that have the specified node +// in their eligibleNodes list. +// Uses index for efficient lookup and UnsafeDisableDeepCopy for performance. +func (r *Reconciler) getNumberOfRSPByEligibleNode(ctx context.Context, nodeName string) (int, error) { + var list v1alpha1.ReplicatedStoragePoolList + if err := r.cl.List(ctx, &list, + client.MatchingFields{indexes.IndexFieldRSPByEligibleNodeName: nodeName}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return 0, err + } + return len(list.Items), nil +} diff --git a/images/controller/internal/controllers/node_controller/reconciler_test.go b/images/controller/internal/controllers/node_controller/reconciler_test.go new file mode 100644 index 000000000..9d5cc91b8 --- /dev/null +++ b/images/controller/internal/controllers/node_controller/reconciler_test.go @@ -0,0 +1,516 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package nodecontroller + +import ( + "context" + "testing" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" +) + +func TestNodeController(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "node_controller Reconciler Suite") +} + +var _ = Describe("Reconciler", func() { + var ( + scheme *runtime.Scheme + cl client.WithWatch + rec *Reconciler + ) + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + cl = nil + rec = nil + }) + + Describe("Reconcile", func() { + It("returns Done when node does not exist", func() { + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "non-existent-node"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + }) + + It("does not patch node that is already in sync (no label, no DRBD, not in RSP)", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).NotTo(HaveKey(v1alpha1.AgentNodeLabelKey)) + }) + + It("removes label from node that has no DRBD and is not in any RSP eligibleNodes", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).NotTo(HaveKey(v1alpha1.AgentNodeLabelKey)) + }) + + It("adds label to node that has DRBDResource", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + drbdResource := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: 
v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, drbdResource), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("adds label to node that is in RSP eligibleNodes", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, rsp), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("does not patch node that is already in sync (has label, has DRBD)", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + drbdResource := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, drbdResource), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("does not patch node that is already in sync (has label, in RSP)", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + 
testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, rsp), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("keeps label on node with DRBD even when not in any RSP", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + drbdResource := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + // No RSP with this node in eligibleNodes + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, drbdResource), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("removes label once node is removed from RSP and has no DRBD", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + v1alpha1.AgentNodeLabelKey: "node-1", + }, + }, + } + // RSP without node-1 in eligibleNodes (simulating removal) + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-2"}, // not node-1 + }, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, rsp), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).NotTo(HaveKey(v1alpha1.AgentNodeLabelKey)) + }) + + It("adds label when node is in multiple RSPs", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + rsp1 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + rsp2 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-2"}, + Status: 
v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, rsp1, rsp2), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("adds label when node has multiple DRBDResources", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + drbd1 := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + drbd2 := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-2"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, drbd1, drbd2), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("handles node with both DRBD and RSP eligibility", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + drbdResource := &v1alpha1.DRBDResource{ + ObjectMeta: metav1.ObjectMeta{Name: "drbd-1"}, + Spec: v1alpha1.DRBDResourceSpec{NodeName: "node-1"}, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node, drbdResource, rsp), + ), + ), + ).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode)).To(Succeed()) + Expect(updatedNode.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + }) + + It("only affects the reconciled node, not others", func() { + node1 := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{}, + }, + } + node2 := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: 
"node-2", + Labels: map[string]string{}, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + }, + }, + } + cl = testhelpers.WithNodeByMetadataNameIndex( + testhelpers.WithDRBDResourceByNodeNameIndex( + testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder().WithScheme(scheme).WithObjects(node1, node2, rsp), + ), + ), + ).Build() + rec = NewReconciler(cl) + + // Reconcile only node-1 + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "node-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedNode1 corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-1"}, &updatedNode1)).To(Succeed()) + Expect(updatedNode1.Labels).To(HaveKeyWithValue(v1alpha1.AgentNodeLabelKey, "node-1")) + + // node-2 should remain unchanged (no label) + var updatedNode2 corev1.Node + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "node-2"}, &updatedNode2)).To(Succeed()) + Expect(updatedNode2.Labels).NotTo(HaveKey(v1alpha1.AgentNodeLabelKey)) + }) + }) +}) diff --git a/images/controller/internal/controllers/registry.go b/images/controller/internal/controllers/registry.go new file mode 100644 index 000000000..49a49d8f1 --- /dev/null +++ b/images/controller/internal/controllers/registry.go @@ -0,0 +1,72 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package controllers + +import ( + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/manager" + + nodecontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/node_controller" + rsccontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rsc_controller" + rspcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rsp_controller" + rvattachcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rv_attach_controller" + rvcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rv_controller" + rvdeletepropagation "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rv_delete_propagation" + rvrcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_controller" + rvrmetadata "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_metadata" + rvrschedulingcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_scheduling_controller" + rvrtiebreakercount "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_tie_breaker_count" + rvrvolume "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_volume" +) + +// BuildAll builds all controllers. +// podNamespace is the namespace where the controller pod runs, used by controllers +// that need to access other pods in this namespace (e.g., agent pods). +func BuildAll(mgr manager.Manager, podNamespace string) error { + // Must be first: controllers rely on MatchingFields against these indexes. + if err := RegisterIndexes(mgr); err != nil { + return fmt.Errorf("building indexes: %w", err) + } + + // Controllers that don't need podNamespace. + builders := []func(mgr manager.Manager) error{ + rvrtiebreakercount.BuildController, + rvcontroller.BuildController, + rvrvolume.BuildController, + rvrmetadata.BuildController, + rvdeletepropagation.BuildController, + rvrschedulingcontroller.BuildController, + rvrcontroller.BuildController, + rvattachcontroller.BuildController, + rsccontroller.BuildController, + nodecontroller.BuildController, + } + + for i, buildCtl := range builders { + if err := buildCtl(mgr); err != nil { + return fmt.Errorf("building controller %d: %w", i, err) + } + } + + // RSP controller needs podNamespace for agent pod discovery. + if err := rspcontroller.BuildController(mgr, podNamespace); err != nil { + return fmt.Errorf("building rsp controller: %w", err) + } + + return nil +} diff --git a/images/controller/internal/controllers/rsc_controller/README.md b/images/controller/internal/controllers/rsc_controller/README.md new file mode 100644 index 000000000..20e33e160 --- /dev/null +++ b/images/controller/internal/controllers/rsc_controller/README.md @@ -0,0 +1,256 @@ +# rsc_controller + +This controller manages `ReplicatedStorageClass` (RSC) resources by aggregating status from associated `ReplicatedStoragePool` (RSP) and `ReplicatedVolume` (RV) resources. + +## Purpose + +The controller reconciles `ReplicatedStorageClass` status with: + +1. **Storage pool management** — auto-generates and manages an RSP based on `spec.storage` configuration +2. **Configuration snapshot** — resolved configuration from spec, stored in `status.configuration` +3. **Generations/Revisions** — for quick change detection between RSC and RSP +4. 
**Conditions** — 4 conditions describing the current state +5. **Volume statistics** — counts of total, aligned, stale, and conflict volumes + +> **Note:** RSC does not calculate eligible nodes directly. It uses `RSP.Status.EligibleNodes` from the associated storage pool and validates them against topology/replication requirements. + +## Interactions + +| Direction | Resource/Controller | Relationship | +|-----------|---------------------|--------------| +| ← input | rsp_controller | Reads `RSP.Status.EligibleNodes` for validation | +| ← input | ReplicatedVolume | Reads RVs for volume statistics | +| → manages | ReplicatedStoragePool | Creates/updates auto-generated RSP | + +## Algorithm + +The controller creates/updates an RSP from `spec.storage`, validates eligible nodes against topology/replication requirements, and aggregates volume statistics: + +``` +readiness = storagePoolReady AND eligibleNodesValid +configuration = resolved(spec) if readiness else previous +volumeStats = aggregate(RVs) if allObserved else partial +``` + +## Reconciliation Structure + +``` +Reconcile (root) [Pure orchestration] +├── getRSC +├── getSortedRVsByRSC +├── reconcileMigrationFromRSP [Target-state driven] +│ └── migrate spec.storagePool → spec.storage (deprecated field) +├── reconcileMain [Target-state driven] +│ └── finalizer management +├── reconcileStatus [In-place reconciliation] +│ ├── reconcileRSP [Conditional desired evaluation] +│ │ └── create/update auto-generated RSP +│ ├── ensureStoragePool +│ │ └── status.storagePoolName + StoragePoolReady condition +│ ├── ensureConfiguration +│ │ └── status.configuration + Ready condition +│ └── ensureVolumeSummaryAndConditions +│ └── status.volumes + ConfigurationRolledOut/VolumesSatisfyEligibleNodes conditions +└── reconcileUnusedRSPs [Pure orchestration] + └── reconcileRSPRelease [Conditional desired evaluation] + └── release RSPs no longer referenced by this RSC +``` + +## Algorithm Flow + +```mermaid +flowchart TD + Start([Reconcile]) --> GetRSC[Get RSC] + GetRSC -->|NotFound| Done1([Done]) + GetRSC --> GetRVs[Get RVs by RSC] + + GetRVs --> Migration[reconcileMigrationFromRSP] + Migration -->|storagePool empty| Main + Migration -->|RSP not found| SetMigrationFailed[Set Ready=False, StoragePoolReady=False] + SetMigrationFailed --> Done2([Done]) + Migration -->|RSP found| MigrateStorage[Copy RSP config to spec.storage] + MigrateStorage --> Main + + Main[reconcileMain] --> CheckFinalizer{Finalizer check} + CheckFinalizer -->|Add/Remove| PatchMain[Patch main] + CheckFinalizer -->|No change| Status + PatchMain -->|Finalizer removed| Done3([Done]) + PatchMain --> Status + + Status[reconcileStatus] --> ReconcileRSP[reconcileRSP] + ReconcileRSP -->|RSP not exists| CreateRSP[Create RSP] + CreateRSP --> EnsureRSPMain[Ensure RSP finalizer and usedBy] + ReconcileRSP -->|RSP exists| EnsureRSPMain + + EnsureRSPMain --> EnsureStoragePool[ensureStoragePool] + EnsureStoragePool --> EnsureConfig[ensureConfiguration] + + EnsureConfig -->|StoragePoolReady != True| SetWaiting[Ready=False WaitingForStoragePool] + EnsureConfig -->|Eligible nodes invalid| SetInvalid[Ready=False InsufficientEligibleNodes] + EnsureConfig -->|Valid| SetReady[Ready=True, update configuration] + SetWaiting --> EnsureVolumes + SetInvalid --> EnsureVolumes + SetReady --> EnsureVolumes + + EnsureVolumes[ensureVolumeSummaryAndConditions] --> Changed{Changed?} + Changed -->|Yes| PatchStatus[Patch status] + Changed -->|No| ReleaseRSPs + PatchStatus --> ReleaseRSPs + + ReleaseRSPs[reconcileUnusedRSPs] --> 
EndNode([Done]) +``` + +## Conditions + +### Ready + +Indicates overall readiness of the storage class configuration. + +| Status | Reason | When | +|--------|--------|------| +| True | Ready | Configuration accepted and validated | +| False | InvalidConfiguration | Configuration validation failed | +| False | InsufficientEligibleNodes | RSP eligible nodes do not meet topology/replication requirements | +| False | WaitingForStoragePool | Waiting for RSP to become ready | + +### StoragePoolReady + +Indicates whether the associated storage pool exists and is ready. + +| Status | Reason | When | +|--------|--------|------| +| True | Ready | RSP exists and has Ready=True | +| False | StoragePoolNotFound | RSP does not exist (migration from deprecated storagePool field failed) | +| False | Pending | RSP has no Ready condition yet | +| False | (from RSP) | Propagated from RSP.Ready condition | + +### ConfigurationRolledOut + +Indicates whether all volumes' configuration matches the storage class. + +| Status | Reason | When | +|--------|--------|------| +| True | RolledOutToAllVolumes | All RVs have `ConfigurationReady=True` | +| False | ConfigurationRolloutInProgress | Rolling update in progress | +| False | ConfigurationRolloutDisabled | `ConfigurationRolloutStrategy.type=NewVolumesOnly` AND `staleConfiguration > 0` | +| Unknown | NewConfigurationNotYetObserved | Some volumes haven't observed the new configuration yet | + +### VolumesSatisfyEligibleNodes + +Indicates whether all volumes' replicas are placed on eligible nodes. + +| Status | Reason | When | +|--------|--------|------| +| True | AllVolumesSatisfy | All RVs have `SatisfyEligibleNodes=True` | +| False | ConflictResolutionInProgress | Conflict resolution in progress | +| False | ManualConflictResolution | `EligibleNodesConflictResolutionStrategy.type=Manual` AND `inConflictWithEligibleNodes > 0` | +| Unknown | UpdatedEligibleNodesNotYetObserved | Some volumes haven't observed the updated eligible nodes yet | + +## Eligible Nodes Validation + +RSC does not calculate eligible nodes. The `rsp_controller` calculates them and stores them in `RSP.Status.EligibleNodes`. + +RSC validates that the eligible nodes from RSP meet replication and topology requirements: + +| Replication | Topology | Requirement | +|-------------|----------|-------------| +| None | any | ≥1 node | +| Availability | Ignored/default | ≥3 nodes, ≥2 with disks | +| Availability | TransZonal | ≥3 zones, ≥2 with disks | +| Availability | Zonal | per zone: ≥3 nodes, ≥2 with disks | +| Consistency | Ignored/default | ≥2 nodes with disks | +| Consistency | TransZonal | ≥2 zones with disks | +| Consistency | Zonal | per zone: ≥2 nodes with disks | +| ConsistencyAndAvailability | Ignored/default | ≥3 nodes with disks | +| ConsistencyAndAvailability | TransZonal | ≥3 zones with disks | +| ConsistencyAndAvailability | Zonal | per zone: ≥3 nodes with disks | + +If validation fails, RSC sets `Ready=False` with reason `InsufficientEligibleNodes`. 
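+For illustration, the Ignored/default rows of this table can be read as simple threshold checks over `RSP.Status.EligibleNodes`. A minimal sketch follows (not the controller's actual code: the `eligibleNode` type and its `HasDisk` field are hypothetical stand-ins, and the TransZonal/Zonal rows apply the same thresholds to zones or per zone):
+
+```go
+// eligibleNode is a simplified stand-in for an RSP.Status.EligibleNodes entry.
+type eligibleNode struct {
+	NodeName string
+	HasDisk  bool
+}
+
+// enoughEligibleNodes checks the Ignored/default topology rows of the table above.
+func enoughEligibleNodes(replication string, nodes []eligibleNode) bool {
+	withDisk := 0
+	for _, n := range nodes {
+		if n.HasDisk {
+			withDisk++
+		}
+	}
+	switch replication {
+	case "None":
+		return len(nodes) >= 1
+	case "Availability":
+		return len(nodes) >= 3 && withDisk >= 2
+	case "Consistency":
+		return withDisk >= 2
+	case "ConsistencyAndAvailability":
+		return withDisk >= 3
+	default:
+		return false
+	}
+}
+```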
+ +## Volume Statistics + +The controller aggregates statistics from all `ReplicatedVolume` resources referencing this RSC: + +- **Total** — count of all volumes +- **Aligned** — volumes where both `ConfigurationReady` and `SatisfyEligibleNodes` conditions are `True` +- **StaleConfiguration** — volumes where `ConfigurationReady` is `False` +- **InConflictWithEligibleNodes** — volumes where `SatisfyEligibleNodes` is `False` +- **PendingObservation** — volumes that haven't observed current RSC configuration/eligible nodes +- **UsedStoragePoolNames** — sorted list of storage pool names referenced by volumes + +> **Note:** Counters other than `Total` and `PendingObservation` are only computed when all volumes have observed the current configuration. + +## Managed Metadata + +| Type | Key | Managed On | Purpose | +|------|-----|------------|---------| +| Finalizer | `storage.deckhouse.io/rsc-controller` | RSC | Prevent deletion while RSP exists | +| Finalizer | `storage.deckhouse.io/rsc-controller` | RSP | Prevent deletion while RSC references it | +| Label | `storage.deckhouse.io/rsc-managed-rsp` | RSP | Mark RSP as auto-generated by RSC | +| Annotation | `storage.deckhouse.io/used-by-rsc` | RSP | Track which RSC uses this RSP | + +## Watches + +| Resource | Events | Handler | +|----------|--------|---------| +| RSC | For() (primary) | — | +| RSP | Generation change, EligibleNodesRevision change, Ready condition change | mapRSPToRSC | +| RV | spec.replicatedStorageClassName change, status.ConfigurationObservedGeneration change, ConfigurationReady/SatisfyEligibleNodes condition changes | rvEventHandler | + +## Indexes + +| Index | Field | Purpose | +|-------|-------|---------| +| `IndexFieldRSCByStoragePool` | `spec.storagePool` | Find RSCs referencing an RSP (migration from deprecated field) | +| `IndexFieldRSCByStatusStoragePoolName` | `status.storagePoolName` | Find RSCs using an RSP | +| `IndexFieldRVByRSC` | `spec.replicatedStorageClassName` | Find RVs referencing an RSC | + +## Data Flow + +```mermaid +flowchart TD + subgraph inputs [Inputs] + RSCSpec[RSC.spec] + RSP[RSP.status] + RVs[ReplicatedVolumes] + end + + subgraph reconcilers [Reconcilers] + ReconcileRSP[reconcileRSP] + EnsureStoragePool[ensureStoragePool] + EnsureConfig[ensureConfiguration] + EnsureVols[ensureVolumeSummaryAndConditions] + end + + subgraph status [Status Output] + StoragePoolName[status.storagePoolName] + StoragePoolGen[status.storagePoolBasedOnGeneration] + EligibleRev[status.storagePoolEligibleNodesRevision] + Config[status.configuration] + ConfigGen[status.configurationGeneration] + Conds[status.conditions] + Vol[status.volumes] + end + + RSCSpec --> ReconcileRSP + ReconcileRSP -->|Creates/updates| RSP + + RSCSpec --> EnsureStoragePool + RSP --> EnsureStoragePool + EnsureStoragePool --> StoragePoolName + EnsureStoragePool --> StoragePoolGen + EnsureStoragePool -->|StoragePoolReady| Conds + + RSCSpec --> EnsureConfig + RSP --> EnsureConfig + EnsureConfig --> Config + EnsureConfig --> ConfigGen + EnsureConfig --> EligibleRev + EnsureConfig -->|Ready| Conds + + RSCSpec --> EnsureVols + RVs --> EnsureVols + EnsureVols --> Vol + EnsureVols -->|ConfigurationRolledOut| Conds + EnsureVols -->|VolumesSatisfyEligibleNodes| Conds +``` diff --git a/images/controller/internal/controllers/rsc_controller/controller.go b/images/controller/internal/controllers/rsc_controller/controller.go new file mode 100644 index 000000000..8f8132a7a --- /dev/null +++ b/images/controller/internal/controllers/rsc_controller/controller.go @@ 
-0,0 +1,147 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rsccontroller + +import ( + "context" + + "k8s.io/client-go/util/workqueue" + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +const ( + // RSCControllerName is the controller name for rsc_controller. + RSCControllerName = "rsc-controller" +) + +func BuildController(mgr manager.Manager) error { + cl := mgr.GetClient() + + rec := NewReconciler(cl) + + return builder.ControllerManagedBy(mgr). + Named(RSCControllerName). + For(&v1alpha1.ReplicatedStorageClass{}). + Watches( + &v1alpha1.ReplicatedStoragePool{}, + handler.EnqueueRequestsFromMapFunc(mapRSPToRSC(cl)), + builder.WithPredicates(rspPredicates()...), + ). + Watches( + &v1alpha1.ReplicatedVolume{}, + rvEventHandler(), + builder.WithPredicates(rvPredicates()...), + ). + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). + Complete(rec) +} + +// mapRSPToRSC maps a ReplicatedStoragePool to all ReplicatedStorageClass resources that reference it. +// It queries RSCs using two indexes: +// - spec.storagePool (for migration from deprecated field) +// - status.storagePoolName (for auto-generated RSPs) +func mapRSPToRSC(cl client.Client) handler.MapFunc { + return func(ctx context.Context, obj client.Object) []reconcile.Request { + rsp, ok := obj.(*v1alpha1.ReplicatedStoragePool) + if !ok || rsp == nil { + return nil + } + + // Deduplicate RSC names from both indexes. + seen := make(map[string]struct{}) + + // Query by spec.storagePool (migration). + var listBySpec v1alpha1.ReplicatedStorageClassList + if err := cl.List(ctx, &listBySpec, + client.MatchingFields{indexes.IndexFieldRSCByStoragePool: rsp.Name}, + client.UnsafeDisableDeepCopy, + ); err == nil { + for i := range listBySpec.Items { + seen[listBySpec.Items[i].Name] = struct{}{} + } + } + + // Query by status.storagePoolName (auto-generated). + var listByStatus v1alpha1.ReplicatedStorageClassList + if err := cl.List(ctx, &listByStatus, + client.MatchingFields{indexes.IndexFieldRSCByStatusStoragePoolName: rsp.Name}, + client.UnsafeDisableDeepCopy, + ); err == nil { + for i := range listByStatus.Items { + seen[listByStatus.Items[i].Name] = struct{}{} + } + } + + if len(seen) == 0 { + return nil + } + + requests := make([]reconcile.Request, 0, len(seen)) + for name := range seen { + requests = append(requests, reconcile.Request{ + NamespacedName: client.ObjectKey{Name: name}, + }) + } + return requests + } +} + +// rvEventHandler returns an event handler for ReplicatedVolume events. +// On Update, it enqueues both old and new storage classes if they differ. 
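+// Create and Delete enqueue the single storage class referenced by the volume; empty storage class names are never enqueued.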
+func rvEventHandler() handler.TypedEventHandler[client.Object, reconcile.Request] { + enqueueRSC := func(q workqueue.TypedRateLimitingInterface[reconcile.Request], rscName string) { + if rscName != "" { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{Name: rscName}}) + } + } + + return handler.TypedFuncs[client.Object, reconcile.Request]{ + CreateFunc: func(_ context.Context, e event.TypedCreateEvent[client.Object], q workqueue.TypedRateLimitingInterface[reconcile.Request]) { + rv, ok := e.Object.(*v1alpha1.ReplicatedVolume) + if !ok || rv == nil { + return + } + enqueueRSC(q, rv.Spec.ReplicatedStorageClassName) + }, + UpdateFunc: func(_ context.Context, e event.TypedUpdateEvent[client.Object], q workqueue.TypedRateLimitingInterface[reconcile.Request]) { + oldRV, okOld := e.ObjectOld.(*v1alpha1.ReplicatedVolume) + newRV, okNew := e.ObjectNew.(*v1alpha1.ReplicatedVolume) + if !okOld || !okNew || oldRV == nil || newRV == nil { + return + } + // Enqueue both old and new storage classes (deduplication happens in workqueue). + enqueueRSC(q, oldRV.Spec.ReplicatedStorageClassName) + enqueueRSC(q, newRV.Spec.ReplicatedStorageClassName) + }, + DeleteFunc: func(_ context.Context, e event.TypedDeleteEvent[client.Object], q workqueue.TypedRateLimitingInterface[reconcile.Request]) { + rv, ok := e.Object.(*v1alpha1.ReplicatedVolume) + if !ok || rv == nil { + return + } + enqueueRSC(q, rv.Spec.ReplicatedStorageClassName) + }, + } +} diff --git a/images/controller/internal/controllers/rsc_controller/controller_test.go b/images/controller/internal/controllers/rsc_controller/controller_test.go new file mode 100644 index 000000000..ae129761b --- /dev/null +++ b/images/controller/internal/controllers/rsc_controller/controller_test.go @@ -0,0 +1,271 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rsccontroller + +import ( + "context" + "time" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" +) + +var _ = Describe("Mapper functions", func() { + var scheme *runtime.Scheme + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + }) + + Describe("mapRSPToRSC", func() { + It("returns requests for RSCs referencing the RSP via spec.storagePool", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "pool-1"}, + } + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{StoragePool: "pool-1"}, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{StoragePool: "pool-1"}, + } + rscOther := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-other"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{StoragePool: "other-pool"}, + } + + cl := testhelpers.WithRSCByStatusStoragePoolNameIndex( + testhelpers.WithRSCByStoragePoolIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, rsc1, rsc2, rscOther), + ), + ).Build() + + mapFunc := mapRSPToRSC(cl) + requests := mapFunc(context.Background(), rsp) + + Expect(requests).To(HaveLen(2)) + names := []string{requests[0].Name, requests[1].Name} + Expect(names).To(ContainElements("rsc-1", "rsc-2")) + }) + + It("returns requests for RSCs referencing the RSP via status.storagePoolName", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "auto-rsp-abc123"}, + } + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Status: v1alpha1.ReplicatedStorageClassStatus{StoragePoolName: "auto-rsp-abc123"}, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Status: v1alpha1.ReplicatedStorageClassStatus{StoragePoolName: "auto-rsp-abc123"}, + } + rscOther := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-other"}, + Status: v1alpha1.ReplicatedStorageClassStatus{StoragePoolName: "other-pool"}, + } + + cl := testhelpers.WithRSCByStatusStoragePoolNameIndex( + testhelpers.WithRSCByStoragePoolIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, rsc1, rsc2, rscOther), + ), + ).Build() + + mapFunc := mapRSPToRSC(cl) + requests := mapFunc(context.Background(), rsp) + + Expect(requests).To(HaveLen(2)) + names := []string{requests[0].Name, requests[1].Name} + Expect(names).To(ContainElements("rsc-1", "rsc-2")) + }) + + It("returns deduplicated requests when RSC matches both indexes", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "pool-1"}, + } + // RSC matches both spec.storagePool and status.storagePoolName. 
+ rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{StoragePool: "pool-1"}, + Status: v1alpha1.ReplicatedStorageClassStatus{StoragePoolName: "pool-1"}, + } + + cl := testhelpers.WithRSCByStatusStoragePoolNameIndex( + testhelpers.WithRSCByStoragePoolIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, rsc), + ), + ).Build() + + mapFunc := mapRSPToRSC(cl) + requests := mapFunc(context.Background(), rsp) + + Expect(requests).To(HaveLen(1)) + Expect(requests[0].Name).To(Equal("rsc-1")) + }) + + It("returns empty slice when no RSCs reference the RSP", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "pool-unused"}, + } + rscOther := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-other"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{StoragePool: "other-pool"}, + } + + cl := testhelpers.WithRSCByStatusStoragePoolNameIndex( + testhelpers.WithRSCByStoragePoolIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, rscOther), + ), + ).Build() + + mapFunc := mapRSPToRSC(cl) + requests := mapFunc(context.Background(), rsp) + + Expect(requests).To(BeNil()) + }) + + It("returns nil for non-RSP object", func() { + cl := fake.NewClientBuilder().WithScheme(scheme).Build() + + mapFunc := mapRSPToRSC(cl) + requests := mapFunc(context.Background(), &corev1.Node{}) + + Expect(requests).To(BeNil()) + }) + }) + + Describe("rvEventHandler", func() { + var handler = rvEventHandler() + var queue *fakeQueue + + BeforeEach(func() { + queue = &fakeQueue{} + }) + + It("enqueues RSC on RV create", func() { + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + } + + handler.Create(context.Background(), toCreateEvent(rv), queue) + + Expect(queue.items).To(HaveLen(1)) + Expect(queue.items[0].Name).To(Equal("rsc-1")) + }) + + It("enqueues both old and new RSC on RV update with changed RSC", func() { + oldRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-old", + }, + } + newRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-new", + }, + } + + handler.Update(context.Background(), toUpdateEvent(oldRV, newRV), queue) + + Expect(queue.items).To(HaveLen(2)) + names := []string{queue.items[0].Name, queue.items[1].Name} + Expect(names).To(ContainElements("rsc-old", "rsc-new")) + }) + + It("enqueues RSC on RV delete", func() { + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + } + + handler.Delete(context.Background(), toDeleteEvent(rv), queue) + + Expect(queue.items).To(HaveLen(1)) + Expect(queue.items[0].Name).To(Equal("rsc-1")) + }) + + It("does not enqueue when RSC name is empty", func() { + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{}, + } + + handler.Create(context.Background(), toCreateEvent(rv), queue) + + Expect(queue.items).To(BeEmpty()) + }) + }) +}) + +// fakeQueue implements workqueue.TypedRateLimitingInterface for testing. 
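+// Only Add and AddAfter record enqueued requests; the remaining methods are no-op stubs sufficient for these tests.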
+type fakeQueue struct { + items []reconcile.Request +} + +func (q *fakeQueue) Add(item reconcile.Request) { q.items = append(q.items, item) } +func (q *fakeQueue) Len() int { return len(q.items) } +func (q *fakeQueue) Get() (reconcile.Request, bool) { return reconcile.Request{}, false } +func (q *fakeQueue) Done(reconcile.Request) {} +func (q *fakeQueue) ShutDown() {} +func (q *fakeQueue) ShutDownWithDrain() {} +func (q *fakeQueue) ShuttingDown() bool { return false } +func (q *fakeQueue) AddAfter(item reconcile.Request, _ time.Duration) { + q.items = append(q.items, item) +} +func (q *fakeQueue) AddRateLimited(reconcile.Request) {} +func (q *fakeQueue) Forget(reconcile.Request) {} +func (q *fakeQueue) NumRequeues(reconcile.Request) int { return 0 } + +func toCreateEvent(obj client.Object) event.TypedCreateEvent[client.Object] { + return event.TypedCreateEvent[client.Object]{Object: obj} +} + +func toUpdateEvent(oldObj, newObj client.Object) event.TypedUpdateEvent[client.Object] { + return event.TypedUpdateEvent[client.Object]{ObjectOld: oldObj, ObjectNew: newObj} +} + +func toDeleteEvent(obj client.Object) event.TypedDeleteEvent[client.Object] { + return event.TypedDeleteEvent[client.Object]{Object: obj} +} diff --git a/images/controller/internal/controllers/rsc_controller/predicates.go b/images/controller/internal/controllers/rsc_controller/predicates.go new file mode 100644 index 000000000..7925693c0 --- /dev/null +++ b/images/controller/internal/controllers/rsc_controller/predicates.go @@ -0,0 +1,103 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rsccontroller + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// rspPredicates returns predicates for ReplicatedStoragePool events. +// Filters to only react to: +// - Generation changes (spec updates) +// - Ready condition changes (status) +// - EligibleNodesRevision changes (status) +func rspPredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + // Be conservative if objects are nil. + if e.ObjectOld == nil || e.ObjectNew == nil { + return true + } + + // Generation change (spec updates). + if e.ObjectNew.GetGeneration() != e.ObjectOld.GetGeneration() { + return true + } + + oldRSP, okOld := e.ObjectOld.(*v1alpha1.ReplicatedStoragePool) + newRSP, okNew := e.ObjectNew.(*v1alpha1.ReplicatedStoragePool) + if !okOld || !okNew || oldRSP == nil || newRSP == nil { + return true + } + + // EligibleNodesRevision change. + if oldRSP.Status.EligibleNodesRevision != newRSP.Status.EligibleNodesRevision { + return true + } + + // Ready condition change. 
+ return !obju.AreConditionsSemanticallyEqual( + oldRSP, newRSP, + v1alpha1.ReplicatedStoragePoolCondReadyType, + ) + }, + }, + } +} + +// rvPredicates returns predicates for ReplicatedVolume events. +// Filters to only react to changes in: +// - spec.replicatedStorageClassName (storage class reference) +// - status.storageClass (observed RSC state for acknowledgment tracking) +// - ConfigurationReady condition +// - SatisfyEligibleNodes condition +func rvPredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + GenericFunc: func(event.TypedGenericEvent[client.Object]) bool { return false }, + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + oldRV, okOld := e.ObjectOld.(*v1alpha1.ReplicatedVolume) + newRV, okNew := e.ObjectNew.(*v1alpha1.ReplicatedVolume) + if !okOld || !okNew || oldRV == nil || newRV == nil { + return true + } + + // Storage class reference change. + if oldRV.Spec.ReplicatedStorageClassName != newRV.Spec.ReplicatedStorageClassName { + return true + } + + // Configuration observation state change. + if oldRV.Status.ConfigurationObservedGeneration != newRV.Status.ConfigurationObservedGeneration { + return true + } + + return !obju.AreConditionsSemanticallyEqual( + oldRV, newRV, + v1alpha1.ReplicatedVolumeCondConfigurationReadyType, + v1alpha1.ReplicatedVolumeCondSatisfyEligibleNodesType, + ) + }, + }, + } +} diff --git a/images/controller/internal/controllers/rsc_controller/predicates_test.go b/images/controller/internal/controllers/rsc_controller/predicates_test.go new file mode 100644 index 000000000..4f66c7d52 --- /dev/null +++ b/images/controller/internal/controllers/rsc_controller/predicates_test.go @@ -0,0 +1,495 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rsccontroller + +import ( + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +var _ = Describe("rspPredicates", func() { + Describe("UpdateFunc", func() { + var preds []func(event.TypedUpdateEvent[client.Object]) bool + + BeforeEach(func() { + predicates := rspPredicates() + preds = make([]func(event.TypedUpdateEvent[client.Object]) bool, 0) + for _, p := range predicates { + if fp, ok := p.(predicate.TypedFuncs[client.Object]); ok && fp.UpdateFunc != nil { + preds = append(preds, fp.UpdateFunc) + } + } + }) + + It("returns true when Generation changes", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 1, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 2, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when EligibleNodesRevision changes", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 1, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 1, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 1, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 2, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when Ready condition changes", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 1, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionFalse, + Reason: "NotReady", + }, + }, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 1, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + }, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns false when all unchanged", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 1, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + }, + }, + }, + } + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsp-1", + Generation: 1, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + }, + }, + }, + } + e := 
event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeFalse()) + } + }) + + It("returns true when cast fails (conservative)", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when old is nil (conservative)", func() { + newRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: nil, + ObjectNew: newRSP, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when new is nil (conservative)", func() { + oldRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRSP, + ObjectNew: nil, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + }) +}) + +var _ = Describe("rvPredicates", func() { + Describe("UpdateFunc", func() { + var preds []func(event.TypedUpdateEvent[client.Object]) bool + + BeforeEach(func() { + predicates := rvPredicates() + preds = make([]func(event.TypedUpdateEvent[client.Object]) bool, 0) + for _, p := range predicates { + if fp, ok := p.(predicate.TypedFuncs[client.Object]); ok && fp.UpdateFunc != nil { + preds = append(preds, fp.UpdateFunc) + } + } + }) + + It("returns true when spec.replicatedStorageClassName changes", func() { + oldRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-old", + }, + } + newRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-new", + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRV, + ObjectNew: newRV, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when status.ConfigurationObservedGeneration changes", func() { + oldRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 1, + }, + } + newRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 2, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRV, + ObjectNew: newRV, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when ConfigurationReady condition changes", func() { + oldRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondConfigurationReadyType, + Status: metav1.ConditionFalse, + Reason: "NotReady", + }, + }, + }, + } + newRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: 
metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondConfigurationReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + }, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRV, + ObjectNew: newRV, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when SatisfyEligibleNodes condition changes", func() { + oldRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondSatisfyEligibleNodesType, + Status: metav1.ConditionFalse, + Reason: "NotSatisfied", + }, + }, + }, + } + newRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondSatisfyEligibleNodesType, + Status: metav1.ConditionTrue, + Reason: "Satisfied", + }, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRV, + ObjectNew: newRV, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns false when all unchanged", func() { + oldRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondConfigurationReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + }, + { + Type: v1alpha1.ReplicatedVolumeCondSatisfyEligibleNodesType, + Status: metav1.ConditionTrue, + Reason: "Satisfied", + }, + }, + }, + } + newRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ConfigurationObservedGeneration: 1, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondConfigurationReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + }, + { + Type: v1alpha1.ReplicatedVolumeCondSatisfyEligibleNodesType, + Status: metav1.ConditionTrue, + Reason: "Satisfied", + }, + }, + }, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRV, + ObjectNew: newRV, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeFalse()) + } + }) + + It("returns true when cast fails (conservative)", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when old is nil (conservative)", func() { + newRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: nil, + ObjectNew: 
newRV, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + + It("returns true when new is nil (conservative)", func() { + oldRV := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + } + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldRV, + ObjectNew: nil, + } + + for _, pred := range preds { + Expect(pred(e)).To(BeTrue()) + } + }) + }) + + Describe("GenericFunc", func() { + It("returns false always", func() { + predicates := rvPredicates() + for _, p := range predicates { + if fp, ok := p.(predicate.TypedFuncs[client.Object]); ok && fp.GenericFunc != nil { + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + } + e := event.TypedGenericEvent[client.Object]{Object: rv} + Expect(fp.GenericFunc(e)).To(BeFalse()) + } + } + }) + }) +}) diff --git a/images/controller/internal/controllers/rsc_controller/reconciler.go b/images/controller/internal/controllers/rsc_controller/reconciler.go new file mode 100644 index 000000000..885042519 --- /dev/null +++ b/images/controller/internal/controllers/rsc_controller/reconciler.go @@ -0,0 +1,1369 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rsccontroller + +import ( + "context" + "encoding/hex" + "encoding/json" + "fmt" + "hash/fnv" + "slices" + "sort" + + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/utils/ptr" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/reconciliation/flow" +) + +// ────────────────────────────────────────────────────────────────────────────── +// Wiring / construction +// + +type Reconciler struct { + cl client.Client +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +func NewReconciler(cl client.Client) *Reconciler { + return &Reconciler{cl: cl} +} + +// ────────────────────────────────────────────────────────────────────────────── +// Reconcile +// + +// Reconcile pattern: Pure orchestration +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + rf := flow.BeginRootReconcile(ctx) + + // Get RSC. + rsc, err := r.getRSC(rf.Ctx(), req.Name) + if err != nil { + if apierrors.IsNotFound(err) { + return rf.Done().ToCtrl() + } + return rf.Fail(err).ToCtrl() + } + + // Get RVs referencing this RSC. + rvs, err := r.getSortedRVsByRSC(rf.Ctx(), rsc.Name) + if err != nil { + return rf.Fail(err).ToCtrl() + } + + // Reconcile migration from RSP (deprecated storagePool field). + outcome := r.reconcileMigrationFromRSP(rf.Ctx(), rsc) + if outcome.ShouldReturn() { + return outcome.ToCtrl() + } + + // Reconcile main (finalizer management). 
+ outcome = r.reconcileMain(rf.Ctx(), rsc, rvs) + if outcome.ShouldReturn() { + return outcome.ToCtrl() + } + + // Reconcile status. + outcome = r.reconcileStatus(rf.Ctx(), rsc, rvs) + if outcome.ShouldReturn() { + return outcome.ToCtrl() + } + + // Release storage pools that are no longer used. + return r.reconcileUnusedRSPs(rf.Ctx(), rsc).ToCtrl() +} + +// reconcileMigrationFromRSP migrates StoragePool to spec.Storage. +// +// Reconcile pattern: Target-state driven +// +// Logic: +// - If storagePool is empty → Continue (nothing to migrate) +// - If storagePool set AND RSP not found → set conditions (Ready=False, StoragePoolReady=False), patch status, return Done +// - If storagePool set AND RSP found → copy type+lvmVolumeGroups to spec.storage, clear storagePool +func (r *Reconciler) reconcileMigrationFromRSP( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, +) (outcome flow.ReconcileOutcome) { + rf := flow.BeginReconcile(ctx, "migration-from-rsp") + defer rf.OnEnd(&outcome) + + // Nothing to migrate. + if rsc.Spec.StoragePool == "" { + return rf.Continue() + } + + rsp, err := r.getRSP(rf.Ctx(), rsc.Spec.StoragePool) + if err != nil { + return rf.Fail(err) + } + + // RSP not found - set conditions and wait. + if rsp == nil { + base := rsc.DeepCopy() + changed := applyReadyCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondReadyReasonWaitingForStoragePool, + fmt.Sprintf("Cannot migrate from storagePool field: ReplicatedStoragePool %q not found", rsc.Spec.StoragePool)) + changed = applyStoragePoolReadyCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondStoragePoolReadyReasonStoragePoolNotFound, + fmt.Sprintf("ReplicatedStoragePool %q not found", rsc.Spec.StoragePool)) || changed + if changed { + if err := r.patchRSCStatus(rf.Ctx(), rsc, base, false); err != nil { + return rf.Fail(err) + } + } + return rf.Done() + } + + // RSP found, migrate storage configuration. + targetStorage := computeTargetStorageFromRSP(rsp) + + base := rsc.DeepCopy() + applyStorageMigration(rsc, targetStorage) + + if err := r.patchRSC(rf.Ctx(), rsc, base, true); err != nil { + return rf.Fail(err) + } + + return rf.Continue() +} + +// reconcileMain manages the finalizer. +// +// Reconcile pattern: Target-state driven +// +// Logic: +// - If no finalizer → add it +// - If deletionTimestamp set AND no RVs → remove finalizer +func (r *Reconciler) reconcileMain( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + rvs []rvView, +) (outcome flow.ReconcileOutcome) { + rf := flow.BeginReconcile(ctx, "main") + defer rf.OnEnd(&outcome) + + // Compute target for finalizer. + actualFinalizerPresent := computeActualFinalizerPresent(rsc) + targetFinalizerPresent := computeTargetFinalizerPresent(rsc, rvs) + + // If nothing changed, continue. + if targetFinalizerPresent == actualFinalizerPresent { + return rf.Continue() + } + + base := rsc.DeepCopy() + applyFinalizer(rsc, targetFinalizerPresent) + + if err := r.patchRSC(rf.Ctx(), rsc, base, true); err != nil { + return rf.Fail(err) + } + + // If finalizer was removed, we're done (object will be deleted). + if !targetFinalizerPresent { + return rf.Done() + } + + return rf.Continue() +} + +// computeTargetStorageFromRSP computes the target Storage from the RSP spec. +func computeTargetStorageFromRSP(rsp *v1alpha1.ReplicatedStoragePool) v1alpha1.ReplicatedStorageClassStorage { + // Clone LVMVolumeGroups to avoid aliasing. 
+ lvmVolumeGroups := make([]v1alpha1.ReplicatedStoragePoolLVMVolumeGroups, len(rsp.Spec.LVMVolumeGroups)) + copy(lvmVolumeGroups, rsp.Spec.LVMVolumeGroups) + + return v1alpha1.ReplicatedStorageClassStorage{ + Type: rsp.Spec.Type, + LVMVolumeGroups: lvmVolumeGroups, + } +} + +// applyStorageMigration applies the target storage and clears the storagePool field. +func applyStorageMigration(rsc *v1alpha1.ReplicatedStorageClass, targetStorage v1alpha1.ReplicatedStorageClassStorage) { + rsc.Spec.Storage = targetStorage + rsc.Spec.StoragePool = "" +} + +// computeActualFinalizerPresent returns whether the controller finalizer is present on the RSC. +func computeActualFinalizerPresent(rsc *v1alpha1.ReplicatedStorageClass) bool { + return objutilv1.HasFinalizer(rsc, v1alpha1.RSCControllerFinalizer) +} + +// computeTargetFinalizerPresent returns whether the controller finalizer should be present. +// The finalizer should be present unless the RSC is being deleted AND has no RVs. +func computeTargetFinalizerPresent(rsc *v1alpha1.ReplicatedStorageClass, rvs []rvView) bool { + isDeleting := rsc.DeletionTimestamp != nil + hasRVs := len(rvs) > 0 + + // Keep finalizer if not deleting or if there are still RVs. + return !isDeleting || hasRVs +} + +// applyFinalizer adds or removes the controller finalizer based on target state. +func applyFinalizer(rsc *v1alpha1.ReplicatedStorageClass, targetPresent bool) { + if targetPresent { + objutilv1.AddFinalizer(rsc, v1alpha1.RSCControllerFinalizer) + } else { + objutilv1.RemoveFinalizer(rsc, v1alpha1.RSCControllerFinalizer) + } +} + +// --- Reconcile: status --- + +// reconcileStatus reconciles the RSC status. +// +// Reconcile pattern: In-place reconciliation +func (r *Reconciler) reconcileStatus( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + rvs []rvView, +) (outcome flow.ReconcileOutcome) { + rf := flow.BeginReconcile(ctx, "status") + defer rf.OnEnd(&outcome) + + // Compute target storage pool name (cached if already computed for this generation). + targetStoragePoolName := computeTargetStoragePool(rsc) + + // Ensure auto-generated RSP exists and is configured. + outcome, rsp := r.reconcileRSP(rf.Ctx(), rsc, targetStoragePoolName) + if outcome.ShouldReturn() { + return outcome + } + + // Take patch base before mutations. + base := rsc.DeepCopy() + + eo := flow.MergeEnsures( + // Ensure storagePool name and condition are up to date. + ensureStoragePool(rf.Ctx(), rsc, targetStoragePoolName, rsp), + + // Ensure configuration is up to date based on RSP state. + ensureConfiguration(rf.Ctx(), rsc, rsp), + + // Ensure volume summary and conditions. + ensureVolumeSummaryAndConditions(rf.Ctx(), rsc, rvs), + ) + + // Patch if changed. + if eo.DidChange() { + if err := r.patchRSCStatus(rf.Ctx(), rsc, base, eo.OptimisticLockRequired()); err != nil { + return rf.Fail(err) + } + } + + return rf.Done() +} + +// --- Ensure helpers --- + +// ensureStoragePool ensures status.storagePoolName and StoragePoolReady condition are up to date. 
+// +// Logic: +// - If storagePool not in sync → update status.storagePoolName and status.storagePoolBasedOnGeneration +// - If rsp == nil → set StoragePoolReady=False (not found) +// - If rsp != nil → copy Ready condition from RSP to our StoragePoolReady +func ensureStoragePool( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + targetStoragePoolName string, + rsp *v1alpha1.ReplicatedStoragePool, +) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "storage-pool") + defer ef.OnEnd(&outcome) + + // Update storagePoolName. + changed := applyStoragePool(rsc, targetStoragePoolName) + + // Update StoragePoolReady condition based on RSP existence and state. + if rsp == nil { + changed = applyStoragePoolReadyCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondStoragePoolReadyReasonStoragePoolNotFound, + fmt.Sprintf("ReplicatedStoragePool %q not found", targetStoragePoolName)) || changed + } else { + changed = applyStoragePoolReadyCondFromRSP(rsc, rsp) || changed + } + + return ef.Ok().ReportChangedIf(changed) +} + +// ensureConfiguration ensures configuration is up to date based on RSP state. +// +// Algorithm: +// 1. Panic if StoragePoolBasedOnGeneration != Generation (caller bug). +// 2. If StoragePoolReady != True: set Ready=False (WaitingForStoragePool) and return. +// 3. If RSP.EligibleNodesRevision != rsc.status.StoragePoolEligibleNodesRevision: +// - Validate RSP.EligibleNodes against topology/replication requirements. +// - If invalid: Ready=False (InsufficientEligibleNodes) and return. +// - Update rsc.status.StoragePoolEligibleNodesRevision. +// 4. If ConfigurationGeneration == Generation: done (configuration already in sync). +// 5. Otherwise: apply new Configuration, set ConfigurationGeneration. +func ensureConfiguration( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + rsp *v1alpha1.ReplicatedStoragePool, +) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "configuration") + defer ef.OnEnd(&outcome) + + // 1. Panic if StoragePoolBasedOnGeneration != Generation (caller bug). + if rsc.Status.StoragePoolBasedOnGeneration != rsc.Generation { + panic(fmt.Sprintf("ensureConfiguration: StoragePoolBasedOnGeneration (%d) != Generation (%d); ensureStoragePool must be called first", + rsc.Status.StoragePoolBasedOnGeneration, rsc.Generation)) + } + + changed := false + + // 2. If StoragePoolReady != True: set Ready=False and return. + if !objutilv1.IsStatusConditionPresentAndTrue(rsc, v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType) { + changed = applyReadyCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondReadyReasonWaitingForStoragePool, + "Waiting for ReplicatedStoragePool to become ready") + return ef.Ok().ReportChangedIf(changed) + } + + // 3. Validate eligibleNodes if revision changed. + if rsp.Status.EligibleNodesRevision != rsc.Status.StoragePoolEligibleNodesRevision { + if err := validateEligibleNodes(rsp.Status.EligibleNodes, rsc.Spec.Topology, rsc.Spec.Replication); err != nil { + changed = applyReadyCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondReadyReasonInsufficientEligibleNodes, + err.Error()) + return ef.Ok().ReportChangedIf(changed) + } + + // Update StoragePoolEligibleNodesRevision. + rsc.Status.StoragePoolEligibleNodesRevision = rsp.Status.EligibleNodesRevision + changed = true + } + + // 4. If configuration is in sync, we're done. + if isConfigurationInSync(rsc) { + return ef.Ok().ReportChangedIf(changed) + } + + // 5. Apply new configuration. 
+ config := makeConfiguration(rsc, rsc.Status.StoragePoolName) + rsc.Status.Configuration = &config + rsc.Status.ConfigurationGeneration = rsc.Generation + + // Set Ready condition. + applyReadyCondTrue(rsc, + v1alpha1.ReplicatedStorageClassCondReadyReasonReady, + "Storage class is ready", + ) + + return ef.Ok().ReportChanged().RequireOptimisticLock() +} + +// ensureVolumeSummaryAndConditions computes and applies volume summary and conditions in-place. +// +// Sets ConfigurationRolledOut and VolumesSatisfyEligibleNodes conditions based on +// volume counters (StaleConfiguration, InConflictWithEligibleNodes, PendingObservation). +func ensureVolumeSummaryAndConditions( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + rvs []rvView, +) (outcome flow.EnsureOutcome) { + ef := flow.BeginEnsure(ctx, "volume-summary-and-conditions") + defer ef.OnEnd(&outcome) + + // Compute and apply volume summary. + summary := computeActualVolumesSummary(rsc, rvs) + changed := applyVolumesSummary(rsc, summary) + + maxParallelConfigurationRollouts, maxParallelConflictResolutions := computeRollingStrategiesConfiguration(rsc) + + // Apply VolumesSatisfyEligibleNodes condition (calculated regardless of acknowledgment). + if *rsc.Status.Volumes.InConflictWithEligibleNodes > 0 { + if maxParallelConflictResolutions > 0 { + changed = applyVolumesSatisfyEligibleNodesCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonConflictResolutionInProgress, + "not implemented", + ) || changed + } else { + changed = applyVolumesSatisfyEligibleNodesCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonManualConflictResolution, + "not implemented", + ) || changed + } + } else { + changed = applyVolumesSatisfyEligibleNodesCondTrue(rsc, + v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonAllVolumesSatisfy, + "All volumes have replicas on eligible nodes", + ) || changed + } + + // ConfigurationRolledOut requires all volumes to acknowledge. + if *rsc.Status.Volumes.PendingObservation > 0 { + msg := fmt.Sprintf("%d volume(s) pending observation", *rsc.Status.Volumes.PendingObservation) + changed = applyConfigurationRolledOutCondUnknown(rsc, + v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonNewConfigurationNotYetObserved, + msg, + ) || changed + // Don't process configuration rolling updates until all volumes acknowledge. + return ef.Ok().ReportChangedIf(changed) + } + + // Apply ConfigurationRolledOut condition. + if *rsc.Status.Volumes.StaleConfiguration > 0 { + if maxParallelConfigurationRollouts > 0 { + changed = applyConfigurationRolledOutCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonConfigurationRolloutInProgress, + "not implemented", + ) || changed + } else { + changed = applyConfigurationRolledOutCondFalse(rsc, + v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonConfigurationRolloutDisabled, + "not implemented", + ) || changed + } + } else { + changed = applyConfigurationRolledOutCondTrue(rsc, + v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonRolledOutToAllVolumes, + "All volumes have configuration matching the storage class", + ) || changed + } + + return ef.Ok().ReportChangedIf(changed) +} + +// ────────────────────────────────────────────────────────────────────────────── +// View types +// + +// rvView is a lightweight projection of ReplicatedVolume fields used by this controller. 
+type rvView struct { + name string + configurationStoragePoolName string + configurationObservedGeneration int64 + conditions rvViewConditions +} + +type rvViewConditions struct { + satisfyEligibleNodes bool + configurationReady bool +} + +// newRVView creates an rvView from a ReplicatedVolume. +// The unsafeRV may come from cache without DeepCopy; rvView copies only the needed scalar fields. +func newRVView(unsafeRV *v1alpha1.ReplicatedVolume) rvView { + view := rvView{ + name: unsafeRV.Name, + configurationObservedGeneration: unsafeRV.Status.ConfigurationObservedGeneration, + conditions: rvViewConditions{ + satisfyEligibleNodes: objutilv1.IsStatusConditionPresentAndTrue(unsafeRV, v1alpha1.ReplicatedVolumeCondSatisfyEligibleNodesType), + configurationReady: objutilv1.IsStatusConditionPresentAndTrue(unsafeRV, v1alpha1.ReplicatedVolumeCondConfigurationReadyType), + }, + } + + if unsafeRV.Status.Configuration != nil { + view.configurationStoragePoolName = unsafeRV.Status.Configuration.StoragePoolName + } + + return view +} + +// computeRollingStrategiesConfiguration determines max parallel limits for configuration rollouts and conflict resolutions. +// Returns 0 for a strategy if it's not set to RollingUpdate/RollingRepair type (meaning disabled). +func computeRollingStrategiesConfiguration(rsc *v1alpha1.ReplicatedStorageClass) (maxParallelConfigurationRollouts, maxParallelConflictResolutions int32) { + if rsc.Spec.ConfigurationRolloutStrategy.Type == v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeRollingUpdate { + if rsc.Spec.ConfigurationRolloutStrategy.RollingUpdate == nil { + panic("ConfigurationRolloutStrategy.RollingUpdate is nil but Type is RollingUpdate; API validation should prevent this") + } + maxParallelConfigurationRollouts = rsc.Spec.ConfigurationRolloutStrategy.RollingUpdate.MaxParallel + } + + if rsc.Spec.EligibleNodesConflictResolutionStrategy.Type == v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeRollingRepair { + if rsc.Spec.EligibleNodesConflictResolutionStrategy.RollingRepair == nil { + panic("EligibleNodesConflictResolutionStrategy.RollingRepair is nil but Type is RollingRepair; API validation should prevent this") + } + maxParallelConflictResolutions = rsc.Spec.EligibleNodesConflictResolutionStrategy.RollingRepair.MaxParallel + } + + return maxParallelConfigurationRollouts, maxParallelConflictResolutions +} + +// makeConfiguration computes the intended configuration from RSC spec. +func makeConfiguration(rsc *v1alpha1.ReplicatedStorageClass, storagePoolName string) v1alpha1.ReplicatedStorageClassConfiguration { + return v1alpha1.ReplicatedStorageClassConfiguration{ + Topology: rsc.Spec.Topology, + Replication: rsc.Spec.Replication, + VolumeAccess: rsc.Spec.VolumeAccess, + StoragePoolName: storagePoolName, + } +} + +// applyConfigurationRolledOutCondUnknown sets the ConfigurationRolledOut condition to Unknown. +// Returns true if the condition was changed. +func applyConfigurationRolledOutCondUnknown(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType, + Status: metav1.ConditionUnknown, + Reason: reason, + Message: message, + }) +} + +// applyReadyCondTrue sets the Ready condition to True. +// Returns true if the condition was changed. 
+func applyReadyCondTrue(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondReadyType, + Status: metav1.ConditionTrue, + Reason: reason, + Message: message, + }) +} + +// applyReadyCondFalse sets the Ready condition to False. +// Returns true if the condition was changed. +func applyReadyCondFalse(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondReadyType, + Status: metav1.ConditionFalse, + Reason: reason, + Message: message, + }) +} + +// applyStoragePoolReadyCondFalse sets the StoragePoolReady condition to False. +// Returns true if the condition was changed. +func applyStoragePoolReadyCondFalse(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: metav1.ConditionFalse, + Reason: reason, + Message: message, + }) +} + +// applyStoragePoolReadyCondFromRSP copies the Ready condition from RSP to RSC's StoragePoolReady condition. +// Returns true if the condition was changed. +func applyStoragePoolReadyCondFromRSP(rsc *v1alpha1.ReplicatedStorageClass, rsp *v1alpha1.ReplicatedStoragePool) bool { + readyCond := objutilv1.GetStatusCondition(rsp, v1alpha1.ReplicatedStoragePoolCondReadyType) + if readyCond == nil { + // RSP has no Ready condition yet - set StoragePoolReady to Unknown. + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: metav1.ConditionUnknown, + Reason: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyReasonPending, + Message: "ReplicatedStoragePool has no Ready condition yet", + }) + } + + // Copy Ready condition from RSP to RSC's StoragePoolReady. + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: readyCond.Status, + Reason: readyCond.Reason, + Message: readyCond.Message, + }) +} + +// applyConfigurationRolledOutCondTrue sets the ConfigurationRolledOut condition to True. +// Returns true if the condition was changed. +func applyConfigurationRolledOutCondTrue(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType, + Status: metav1.ConditionTrue, + Reason: reason, + Message: message, + }) +} + +// applyConfigurationRolledOutCondFalse sets the ConfigurationRolledOut condition to False. +// Returns true if the condition was changed. +func applyConfigurationRolledOutCondFalse(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType, + Status: metav1.ConditionFalse, + Reason: reason, + Message: message, + }) +} + +// applyVolumesSatisfyEligibleNodesCondTrue sets the VolumesSatisfyEligibleNodes condition to True. +// Returns true if the condition was changed. 
+func applyVolumesSatisfyEligibleNodesCondTrue(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType, + Status: metav1.ConditionTrue, + Reason: reason, + Message: message, + }) +} + +// applyVolumesSatisfyEligibleNodesCondFalse sets the VolumesSatisfyEligibleNodes condition to False. +// Returns true if the condition was changed. +func applyVolumesSatisfyEligibleNodesCondFalse(rsc *v1alpha1.ReplicatedStorageClass, reason, message string) bool { + return objutilv1.SetStatusCondition(rsc, metav1.Condition{ + Type: v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType, + Status: metav1.ConditionFalse, + Reason: reason, + Message: message, + }) +} + +// validateEligibleNodes validates that eligible nodes from RSP meet the requirements +// for the RSC's replication mode and topology. +// +// Requirements by replication mode: +// - None: at least 1 node +// - Availability: at least 3 nodes, at least 2 with disks +// - Consistency: 2 nodes, both with disks +// - ConsistencyAndAvailability: at least 3 nodes with disks +// +// Additional topology requirements: +// - TransZonal: nodes must be distributed across required number of zones +// - Zonal: each zone must independently meet the requirements +func validateEligibleNodes( + eligibleNodes []v1alpha1.ReplicatedStoragePoolEligibleNode, + topology v1alpha1.ReplicatedStorageClassTopology, + replication v1alpha1.ReplicatedStorageClassReplication, +) error { + if len(eligibleNodes) == 0 { + return fmt.Errorf("no eligible nodes") + } + + // Count nodes and nodes with disks. + totalNodes := len(eligibleNodes) + nodesWithDisks := 0 + for _, n := range eligibleNodes { + if len(n.LVMVolumeGroups) > 0 { + nodesWithDisks++ + } + } + + // Group nodes by zone. + nodesByZone := make(map[string][]v1alpha1.ReplicatedStoragePoolEligibleNode) + for _, n := range eligibleNodes { + zone := n.ZoneName + if zone == "" { + zone = "" // empty zone key for nodes without zone + } + nodesByZone[zone] = append(nodesByZone[zone], n) + } + + // Count zones and zones with disks. + zonesWithDisks := 0 + for _, nodes := range nodesByZone { + for _, n := range nodes { + if len(n.LVMVolumeGroups) > 0 { + zonesWithDisks++ + break + } + } + } + + switch replication { + case v1alpha1.ReplicationNone: + // At least 1 node required. + if totalNodes < 1 { + return fmt.Errorf("replication None requires at least 1 node, have %d", totalNodes) + } + + case v1alpha1.ReplicationAvailability: + // At least 3 nodes, at least 2 with disks. + if err := validateAvailabilityReplication(topology, totalNodes, nodesWithDisks, nodesByZone, zonesWithDisks); err != nil { + return err + } + + case v1alpha1.ReplicationConsistency: + // 2 nodes, both with disks. + if err := validateConsistencyReplication(topology, totalNodes, nodesWithDisks, nodesByZone, zonesWithDisks); err != nil { + return err + } + + case v1alpha1.ReplicationConsistencyAndAvailability: + // At least 3 nodes with disks. + if err := validateConsistencyAndAvailabilityReplication(topology, nodesWithDisks, nodesByZone, zonesWithDisks); err != nil { + return err + } + } + + return nil +} + +// validateAvailabilityReplication validates requirements for Availability replication mode. 
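+// The checks below mirror the Availability rows of the eligible-nodes requirements table: TransZonal needs nodes in at least 3 zones with at least 2 zones having disks; Zonal needs at least 3 nodes (at least 2 with disks) in every zone; the default topology needs at least 3 nodes with at least 2 having disks.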
+func validateAvailabilityReplication( + topology v1alpha1.ReplicatedStorageClassTopology, + totalNodes, nodesWithDisks int, + nodesByZone map[string][]v1alpha1.ReplicatedStoragePoolEligibleNode, + zonesWithDisks int, +) error { + switch topology { + case v1alpha1.RSCTopologyTransZonal: + // 3 different zones, at least 2 with disks. + if len(nodesByZone) < 3 { + return fmt.Errorf("replication Availability with TransZonal topology requires nodes in at least 3 zones, have %d", len(nodesByZone)) + } + if zonesWithDisks < 2 { + return fmt.Errorf("replication Availability with TransZonal topology requires at least 2 zones with disks, have %d", zonesWithDisks) + } + + case v1alpha1.RSCTopologyZonal: + // Per zone: at least 3 nodes, at least 2 with disks. + for zone, nodes := range nodesByZone { + zoneNodesWithDisks := 0 + for _, n := range nodes { + if len(n.LVMVolumeGroups) > 0 { + zoneNodesWithDisks++ + } + } + if len(nodes) < 3 { + return fmt.Errorf("replication Availability with Zonal topology requires at least 3 nodes in each zone, zone %q has %d", zone, len(nodes)) + } + if zoneNodesWithDisks < 2 { + return fmt.Errorf("replication Availability with Zonal topology requires at least 2 nodes with disks in each zone, zone %q has %d", zone, zoneNodesWithDisks) + } + } + + default: + // Ignored topology or unspecified: global check. + if totalNodes < 3 { + return fmt.Errorf("replication Availability requires at least 3 nodes, have %d", totalNodes) + } + if nodesWithDisks < 2 { + return fmt.Errorf("replication Availability requires at least 2 nodes with disks, have %d", nodesWithDisks) + } + } + + return nil +} + +// validateConsistencyReplication validates requirements for Consistency replication mode. +func validateConsistencyReplication( + topology v1alpha1.ReplicatedStorageClassTopology, + totalNodes, nodesWithDisks int, + nodesByZone map[string][]v1alpha1.ReplicatedStoragePoolEligibleNode, + zonesWithDisks int, +) error { + switch topology { + case v1alpha1.RSCTopologyTransZonal: + // 2 different zones with disks. + if zonesWithDisks < 2 { + return fmt.Errorf("replication Consistency with TransZonal topology requires at least 2 zones with disks, have %d", zonesWithDisks) + } + + case v1alpha1.RSCTopologyZonal: + // Per zone: at least 2 nodes with disks. + for zone, nodes := range nodesByZone { + zoneNodesWithDisks := 0 + for _, n := range nodes { + if len(n.LVMVolumeGroups) > 0 { + zoneNodesWithDisks++ + } + } + if zoneNodesWithDisks < 2 { + return fmt.Errorf("replication Consistency with Zonal topology requires at least 2 nodes with disks in each zone, zone %q has %d", zone, zoneNodesWithDisks) + } + } + + default: + // Ignored topology or unspecified: global check. + if totalNodes < 2 { + return fmt.Errorf("replication Consistency requires at least 2 nodes, have %d", totalNodes) + } + if nodesWithDisks < 2 { + return fmt.Errorf("replication Consistency requires at least 2 nodes with disks, have %d", nodesWithDisks) + } + } + + return nil +} + +// validateConsistencyAndAvailabilityReplication validates requirements for ConsistencyAndAvailability replication mode. +func validateConsistencyAndAvailabilityReplication( + topology v1alpha1.ReplicatedStorageClassTopology, + nodesWithDisks int, + nodesByZone map[string][]v1alpha1.ReplicatedStoragePoolEligibleNode, + zonesWithDisks int, +) error { + switch topology { + case v1alpha1.RSCTopologyTransZonal: + // 3 zones with disks. 
+ if zonesWithDisks < 3 { + return fmt.Errorf("replication ConsistencyAndAvailability with TransZonal topology requires at least 3 zones with disks, have %d", zonesWithDisks) + } + + case v1alpha1.RSCTopologyZonal: + // Per zone: at least 3 nodes with disks. + for zone, nodes := range nodesByZone { + zoneNodesWithDisks := 0 + for _, n := range nodes { + if len(n.LVMVolumeGroups) > 0 { + zoneNodesWithDisks++ + } + } + if zoneNodesWithDisks < 3 { + return fmt.Errorf("replication ConsistencyAndAvailability with Zonal topology requires at least 3 nodes with disks in each zone, zone %q has %d", zone, zoneNodesWithDisks) + } + } + + default: + // Ignored topology or unspecified: global check. + if nodesWithDisks < 3 { + return fmt.Errorf("replication ConsistencyAndAvailability requires at least 3 nodes with disks, have %d", nodesWithDisks) + } + } + + return nil +} + +// isConfigurationInSync checks if the RSC status configuration matches current generation. +func isConfigurationInSync(rsc *v1alpha1.ReplicatedStorageClass) bool { + // Configuration must exist and generation must match. + return rsc.Status.Configuration != nil && rsc.Status.ConfigurationGeneration == rsc.Generation +} + +// computeActualVolumesSummary computes volume statistics from RV conditions. +// +// InConflictWithEligibleNodes is always calculated (regardless of acknowledgment). +// If any RV hasn't acknowledged the current RSC state (name/configurationGeneration mismatch), +// returns Total, PendingObservation, and InConflictWithEligibleNodes with Aligned/StaleConfiguration as nil - +// because we don't know the real counts for those until all RVs acknowledge. +// RVs without status.storageClass are considered acknowledged (to avoid flapping on new volumes). +func computeActualVolumesSummary(rsc *v1alpha1.ReplicatedStorageClass, rvs []rvView) v1alpha1.ReplicatedStorageClassVolumesSummary { + total := int32(len(rvs)) + var pendingObservation, aligned, staleConfiguration, inConflictWithEligibleNodes int32 + usedStoragePoolNames := make(map[string]struct{}) + + for i := range rvs { + rv := &rvs[i] + + // Collect used storage pool names. + if rv.configurationStoragePoolName != "" { + usedStoragePoolNames[rv.configurationStoragePoolName] = struct{}{} + } + + // Check nodes condition regardless of acknowledgment. + if !rv.conditions.satisfyEligibleNodes { + inConflictWithEligibleNodes++ + } + + // Count unobserved volumes (aligned/staleConfiguration require acknowledgment). + if !isRSCConfigurationAcknowledgedByRV(rsc, rv) { + pendingObservation++ + continue + } + + if rv.conditions.configurationReady && rv.conditions.satisfyEligibleNodes { + aligned++ + } + + if !rv.conditions.configurationReady { + staleConfiguration++ + } + } + + // Build sorted list of used storage pool names. + usedPoolNames := make([]string, 0, len(usedStoragePoolNames)) + for name := range usedStoragePoolNames { + usedPoolNames = append(usedPoolNames, name) + } + slices.Sort(usedPoolNames) + + // If any volumes haven't observed, return Total, PendingObservation, and InConflictWithEligibleNodes. + // We don't know the real counts for aligned/staleConfiguration until all RVs observe. 
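+ // Leaving Aligned/StaleConfiguration as nil lets consumers tell "not yet known"
+ // apart from a real zero count.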
+ if pendingObservation > 0 { + return v1alpha1.ReplicatedStorageClassVolumesSummary{ + Total: &total, + PendingObservation: &pendingObservation, + InConflictWithEligibleNodes: &inConflictWithEligibleNodes, + UsedStoragePoolNames: usedPoolNames, + } + } + + zero := int32(0) + return v1alpha1.ReplicatedStorageClassVolumesSummary{ + Total: &total, + PendingObservation: &zero, + Aligned: &aligned, + StaleConfiguration: &staleConfiguration, + InConflictWithEligibleNodes: &inConflictWithEligibleNodes, + UsedStoragePoolNames: usedPoolNames, + } +} + +// isRSCConfigurationAcknowledgedByRV checks if the RV has acknowledged +// the current RSC configuration. +func isRSCConfigurationAcknowledgedByRV(rsc *v1alpha1.ReplicatedStorageClass, rv *rvView) bool { + return rv.configurationObservedGeneration == rsc.Status.ConfigurationGeneration +} + +// applyVolumesSummary applies volume summary to rsc.Status.Volumes. +// Returns true if any counter changed. +func applyVolumesSummary(rsc *v1alpha1.ReplicatedStorageClass, summary v1alpha1.ReplicatedStorageClassVolumesSummary) bool { + changed := false + if !ptr.Equal(rsc.Status.Volumes.Total, summary.Total) { + rsc.Status.Volumes.Total = summary.Total + changed = true + } + if !ptr.Equal(rsc.Status.Volumes.PendingObservation, summary.PendingObservation) { + rsc.Status.Volumes.PendingObservation = summary.PendingObservation + changed = true + } + if !ptr.Equal(rsc.Status.Volumes.Aligned, summary.Aligned) { + rsc.Status.Volumes.Aligned = summary.Aligned + changed = true + } + if !ptr.Equal(rsc.Status.Volumes.StaleConfiguration, summary.StaleConfiguration) { + rsc.Status.Volumes.StaleConfiguration = summary.StaleConfiguration + changed = true + } + if !ptr.Equal(rsc.Status.Volumes.InConflictWithEligibleNodes, summary.InConflictWithEligibleNodes) { + rsc.Status.Volumes.InConflictWithEligibleNodes = summary.InConflictWithEligibleNodes + changed = true + } + if !slices.Equal(rsc.Status.Volumes.UsedStoragePoolNames, summary.UsedStoragePoolNames) { + rsc.Status.Volumes.UsedStoragePoolNames = summary.UsedStoragePoolNames + changed = true + } + return changed +} + +// --- Compute/Apply helpers: storagePool --- + +// computeTargetStoragePool computes the target storagePool name. +// If status already has a value for the current generation, returns it without recomputing. +func computeTargetStoragePool(rsc *v1alpha1.ReplicatedStorageClass) string { + // Return cached value if already computed for this generation. + if rsc.Status.StoragePoolBasedOnGeneration == rsc.Generation && rsc.Status.StoragePoolName != "" { + return rsc.Status.StoragePoolName + } + + checksum := computeStoragePoolChecksum(rsc) + return "auto-rsp-" + checksum +} + +// computeStoragePoolChecksum computes FNV-128a checksum of RSC spec fields that go into RSP. +// Fields: storage.type, storage.lvmVolumeGroups, zones, nodeLabelSelector, systemNetworkNames. 
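+//
+// List fields are sorted and each written field is followed by a zero-byte separator,
+// so the digest is deterministic for a given spec regardless of declaration order.
+// computeTargetStoragePool prefixes the hex digest with "auto-rsp-" to build the pool name.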
+func computeStoragePoolChecksum(rsc *v1alpha1.ReplicatedStorageClass) string { + h := fnv.New128a() + + // storage.type + h.Write([]byte(rsc.Spec.Storage.Type)) + h.Write([]byte{0}) // separator + + // storage.lvmVolumeGroups (sorted for determinism) + lvgs := make([]string, 0, len(rsc.Spec.Storage.LVMVolumeGroups)) + for _, lvg := range rsc.Spec.Storage.LVMVolumeGroups { + // Include both name and thinPoolName + lvgs = append(lvgs, lvg.Name+":"+lvg.ThinPoolName) + } + slices.Sort(lvgs) + for _, lvg := range lvgs { + h.Write([]byte(lvg)) + h.Write([]byte{0}) + } + + // zones (sorted for determinism) + zones := slices.Clone(rsc.Spec.Zones) + slices.Sort(zones) + for _, z := range zones { + h.Write([]byte(z)) + h.Write([]byte{0}) + } + + // nodeLabelSelector (JSON for deterministic serialization) + if rsc.Spec.NodeLabelSelector != nil { + selectorBytes, _ := json.Marshal(rsc.Spec.NodeLabelSelector) + h.Write(selectorBytes) + } + h.Write([]byte{0}) + + // systemNetworkNames (sorted for determinism) + networkNames := slices.Clone(rsc.Spec.SystemNetworkNames) + slices.Sort(networkNames) + for _, n := range networkNames { + h.Write([]byte(n)) + h.Write([]byte{0}) + } + + return hex.EncodeToString(h.Sum(nil)) +} + +// applyStoragePool applies target storagePool fields to status. Returns true if changed. +func applyStoragePool(rsc *v1alpha1.ReplicatedStorageClass, targetName string) bool { + changed := false + if rsc.Status.StoragePoolBasedOnGeneration != rsc.Generation { + rsc.Status.StoragePoolBasedOnGeneration = rsc.Generation + changed = true + } + if rsc.Status.StoragePoolName != targetName { + rsc.Status.StoragePoolName = targetName + changed = true + } + return changed +} + +// --- Reconcile: RSP --- + +// reconcileRSP ensures the auto-generated RSP exists and is properly configured. +// Creates RSP if not found, updates finalizer and usedBy if needed. +// +// Reconcile pattern: Conditional desired evaluation +func (r *Reconciler) reconcileRSP( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + targetStoragePoolName string, +) (outcome flow.ReconcileOutcome, rsp *v1alpha1.ReplicatedStoragePool) { + rf := flow.BeginReconcile(ctx, "rsp") + defer rf.OnEnd(&outcome) + + // Get existing RSP. + var err error + rsp, err = r.getRSP(rf.Ctx(), targetStoragePoolName) + if err != nil { + return rf.Fail(err), nil + } + + // If RSP doesn't exist, create it. + if rsp == nil { + rsp = newRSP(targetStoragePoolName, rsc) + if err := r.createRSP(rf.Ctx(), rsp); err != nil { + return rf.Fail(err), nil + } + // Continue to ensure usedBy is set below. + } + + // Ensure finalizer is set. + if !objutilv1.HasFinalizer(rsp, v1alpha1.RSCControllerFinalizer) { + base := rsp.DeepCopy() + applyRSPFinalizer(rsp, true) + if err := r.patchRSP(rf.Ctx(), rsp, base, true); err != nil { + return rf.Fail(err), nil + } + } + + // Ensure usedBy is set. + if !slices.Contains(rsp.Status.UsedBy.ReplicatedStorageClassNames, rsc.Name) { + base := rsp.DeepCopy() + applyRSPUsedBy(rsp, rsc.Name) + if err := r.patchRSPStatus(rf.Ctx(), rsp, base, true); err != nil { + return rf.Fail(err), nil + } + } + + return rf.Continue(), rsp +} + +// reconcileUnusedRSPs releases storage pools that are no longer used by this RSC. +// +// Reconcile pattern: Pure orchestration +func (r *Reconciler) reconcileUnusedRSPs( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, +) (outcome flow.ReconcileOutcome) { + rf := flow.BeginReconcile(ctx, "unused-rsps") + defer rf.OnEnd(&outcome) + + // Get all RSPs that reference this RSC. 
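+ // getUsedStoragePoolNames is an index-backed lookup: it returns only the pools
+ // whose status.usedBy lists this RSC.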
+ usedStoragePoolNames, err := r.getUsedStoragePoolNames(rf.Ctx(), rsc.Name) + if err != nil { + return rf.Fail(err) + } + + // Filter out RSPs that are still in use. + unusedStoragePoolNames := slices.DeleteFunc(slices.Clone(usedStoragePoolNames), func(name string) bool { + if name == rsc.Status.StoragePoolName { + return true + } + _, found := slices.BinarySearch(rsc.Status.Volumes.UsedStoragePoolNames, name) + return found + }) + + // Release each unused RSP. + outcomes := make([]flow.ReconcileOutcome, 0, len(unusedStoragePoolNames)) + for _, rspName := range unusedStoragePoolNames { + outcomes = append(outcomes, r.reconcileRSPRelease(rf.Ctx(), rsc.Name, rspName)) + } + + return flow.MergeReconciles(outcomes...) +} + +// reconcileRSPRelease releases the RSP from this RSC. +// Removes RSC from usedBy, and if no more users - deletes the RSP. +// +// Reconcile pattern: Conditional desired evaluation +func (r *Reconciler) reconcileRSPRelease( + ctx context.Context, + rscName string, + rspName string, +) (outcome flow.ReconcileOutcome) { + rf := flow.BeginReconcile(ctx, "rsp-release", "rsp", rspName) + defer rf.OnEnd(&outcome) + + // Get RSP. If not found - nothing to release. + rsp, err := r.getRSP(rf.Ctx(), rspName) + if err != nil { + return rf.Fail(err) + } + if rsp == nil { + return rf.Continue() + } + + // Check if this RSC is in usedBy (sorted list). + if _, found := slices.BinarySearch(rsp.Status.UsedBy.ReplicatedStorageClassNames, rscName); !found { + return rf.Continue() + } + + // Remove RSC from usedBy with optimistic lock. + base := rsp.DeepCopy() + applyRSPRemoveUsedBy(rsp, rscName) + if err := r.patchRSPStatus(rf.Ctx(), rsp, base, true); err != nil { + return rf.Fail(err) + } + + // If no more users - delete RSP. + if len(rsp.Status.UsedBy.ReplicatedStorageClassNames) == 0 { + // Remove finalizer first (if present). + if objutilv1.HasFinalizer(rsp, v1alpha1.RSCControllerFinalizer) { + base := rsp.DeepCopy() + applyRSPFinalizer(rsp, false) + if err := r.patchRSP(rf.Ctx(), rsp, base, true); err != nil { + return rf.Fail(err) + } + } + + // Delete RSP. + if err := r.deleteRSP(rf.Ctx(), rsp); err != nil { + return rf.Fail(err) + } + } + + return rf.Continue() +} + +// --- Helpers: Reconcile (non-I/O) --- + +// --- Helpers: RSP --- + +// newRSP constructs a new RSP from RSC spec. +func newRSP(name string, rsc *v1alpha1.ReplicatedStorageClass) *v1alpha1.ReplicatedStoragePool { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: rsc.Spec.Storage.Type, + LVMVolumeGroups: slices.Clone(rsc.Spec.Storage.LVMVolumeGroups), + Zones: slices.Clone(rsc.Spec.Zones), + SystemNetworkNames: slices.Clone(rsc.Spec.SystemNetworkNames), + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: rsc.Spec.EligibleNodesPolicy.NotReadyGracePeriod, + }, + }, + } + + // Copy NodeLabelSelector if present. + if rsc.Spec.NodeLabelSelector != nil { + rsp.Spec.NodeLabelSelector = rsc.Spec.NodeLabelSelector.DeepCopy() + } + + return rsp +} + +// applyRSPFinalizer adds or removes the RSC controller finalizer on RSP. +// Returns true if the finalizer list was changed. +// +//nolint:unparam // Return value might be unused because callers pre-check with HasFinalizer. 
+func applyRSPFinalizer(rsp *v1alpha1.ReplicatedStoragePool, present bool) bool { + if present { + return objutilv1.AddFinalizer(rsp, v1alpha1.RSCControllerFinalizer) + } + return objutilv1.RemoveFinalizer(rsp, v1alpha1.RSCControllerFinalizer) +} + +// applyRSPUsedBy adds the RSC name to RSP status.usedBy if not already present. +func applyRSPUsedBy(rsp *v1alpha1.ReplicatedStoragePool, rscName string) bool { + if slices.Contains(rsp.Status.UsedBy.ReplicatedStorageClassNames, rscName) { + return false + } + rsp.Status.UsedBy.ReplicatedStorageClassNames = append( + rsp.Status.UsedBy.ReplicatedStorageClassNames, + rscName, + ) + // Sort for deterministic ordering. + sort.Strings(rsp.Status.UsedBy.ReplicatedStorageClassNames) + return true +} + +// applyRSPRemoveUsedBy removes the RSC name from RSP status.usedBy. +func applyRSPRemoveUsedBy(rsp *v1alpha1.ReplicatedStoragePool, rscName string) bool { + idx := slices.Index(rsp.Status.UsedBy.ReplicatedStorageClassNames, rscName) + if idx < 0 { + return false + } + rsp.Status.UsedBy.ReplicatedStorageClassNames = slices.Delete( + rsp.Status.UsedBy.ReplicatedStorageClassNames, + idx, idx+1, + ) + return true +} + +// ────────────────────────────────────────────────────────────────────────────── +// Single-call I/O helper categories +// + +// getRSC fetches an RSC by name. +func (r *Reconciler) getRSC(ctx context.Context, name string) (*v1alpha1.ReplicatedStorageClass, error) { + var rsc v1alpha1.ReplicatedStorageClass + if err := r.cl.Get(ctx, client.ObjectKey{Name: name}, &rsc); err != nil { + return nil, err + } + return &rsc, nil +} + +// getRSP fetches an RSP by name. Returns (nil, nil) if not found. +func (r *Reconciler) getRSP(ctx context.Context, name string) (*v1alpha1.ReplicatedStoragePool, error) { + var rsp v1alpha1.ReplicatedStoragePool + if err := r.cl.Get(ctx, client.ObjectKey{Name: name}, &rsp); err != nil { + if apierrors.IsNotFound(err) { + return nil, nil + } + return nil, err + } + return &rsp, nil +} + +// getUsedStoragePoolNames returns names of RSPs used by this RSC. +// Uses the index for efficient lookup and UnsafeDisableDeepCopy for performance. +func (r *Reconciler) getUsedStoragePoolNames(ctx context.Context, rscName string) ([]string, error) { + var unsafeList v1alpha1.ReplicatedStoragePoolList + if err := r.cl.List(ctx, &unsafeList, + client.MatchingFields{indexes.IndexFieldRSPByUsedByRSCName: rscName}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return nil, err + } + + names := make([]string, len(unsafeList.Items)) + for i := range unsafeList.Items { + names[i] = unsafeList.Items[i].Name + } + return names, nil +} + +// getSortedRVsByRSC fetches RVs referencing a specific RSC using the index, sorted by name. +func (r *Reconciler) getSortedRVsByRSC(ctx context.Context, rscName string) ([]rvView, error) { + var unsafeList v1alpha1.ReplicatedVolumeList + if err := r.cl.List(ctx, &unsafeList, + client.MatchingFields{indexes.IndexFieldRVByReplicatedStorageClassName: rscName}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return nil, err + } + + rvs := make([]rvView, len(unsafeList.Items)) + for i := range unsafeList.Items { + rvs[i] = newRVView(&unsafeList.Items[i]) + } + + sort.Slice(rvs, func(i, j int) bool { + return rvs[i].name < rvs[j].name + }) + + return rvs, nil +} + +// patchRSC patches the RSC main resource. 
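+// When optimisticLock is true, the patch is built with MergeFromWithOptimisticLock,
+// so it carries the base resourceVersion and the API server rejects it with a conflict
+// if the object changed in the meantime.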
+func (r *Reconciler) patchRSC( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + base *v1alpha1.ReplicatedStorageClass, + optimisticLock bool, +) error { + var patch client.Patch + if optimisticLock { + patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}) + } else { + patch = client.MergeFrom(base) + } + return r.cl.Patch(ctx, rsc, patch) +} + +// patchRSCStatus patches the RSC status subresource. +func (r *Reconciler) patchRSCStatus( + ctx context.Context, + rsc *v1alpha1.ReplicatedStorageClass, + base *v1alpha1.ReplicatedStorageClass, + optimisticLock bool, +) error { + var patch client.Patch + if optimisticLock { + patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}) + } else { + patch = client.MergeFrom(base) + } + return r.cl.Status().Patch(ctx, rsc, patch) +} + +// createRSP creates an RSP. +func (r *Reconciler) createRSP(ctx context.Context, rsp *v1alpha1.ReplicatedStoragePool) error { + return r.cl.Create(ctx, rsp) +} + +// patchRSP patches the RSP main resource. +func (r *Reconciler) patchRSP( + ctx context.Context, + rsp *v1alpha1.ReplicatedStoragePool, + base *v1alpha1.ReplicatedStoragePool, + optimisticLock bool, +) error { + var patch client.Patch + if optimisticLock { + patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}) + } else { + patch = client.MergeFrom(base) + } + return r.cl.Patch(ctx, rsp, patch) +} + +// patchRSPStatus patches the RSP status subresource. +func (r *Reconciler) patchRSPStatus( + ctx context.Context, + rsp *v1alpha1.ReplicatedStoragePool, + base *v1alpha1.ReplicatedStoragePool, + optimisticLock bool, +) error { + var patch client.Patch + if optimisticLock { + patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}) + } else { + patch = client.MergeFrom(base) + } + return r.cl.Status().Patch(ctx, rsp, patch) +} + +// deleteRSP deletes an RSP. +func (r *Reconciler) deleteRSP(ctx context.Context, rsp *v1alpha1.ReplicatedStoragePool) error { + return r.cl.Delete(ctx, rsp, client.Preconditions{ + UID: &rsp.UID, + ResourceVersion: &rsp.ResourceVersion, + }) +} diff --git a/images/controller/internal/controllers/rsc_controller/reconciler_test.go b/images/controller/internal/controllers/rsc_controller/reconciler_test.go new file mode 100644 index 000000000..f1cb19f43 --- /dev/null +++ b/images/controller/internal/controllers/rsc_controller/reconciler_test.go @@ -0,0 +1,2617 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rsccontroller + +import ( + "context" + "testing" + "time" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/utils/ptr" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/reconciliation/flow" +) + +func TestRSCController(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "rsc_controller Reconciler Suite") +} + +var _ = Describe("computeActualVolumesSummary", func() { + var rsc *v1alpha1.ReplicatedStorageClass + + BeforeEach(func() { + rsc = &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Status: v1alpha1.ReplicatedStorageClassStatus{ + ConfigurationGeneration: 1, + StoragePoolEligibleNodesRevision: 1, + }, + } + }) + + It("returns zero counts for empty RV list", func() { + counters := computeActualVolumesSummary(rsc, nil) + + Expect(*counters.Total).To(Equal(int32(0))) + Expect(*counters.Aligned).To(Equal(int32(0))) + Expect(*counters.StaleConfiguration).To(Equal(int32(0))) + Expect(*counters.InConflictWithEligibleNodes).To(Equal(int32(0))) + }) + + It("counts total volumes (RVs without configurationObservedGeneration are considered acknowledged)", func() { + rvs := []rvView{ + {name: "rv-1"}, + {name: "rv-2"}, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(*counters.Total).To(Equal(int32(2))) + }) + + It("counts aligned volumes with both conditions true", func() { + rvs := []rvView{ + { + name: "rv-1", + configurationObservedGeneration: 1, // Matches rsc.Status.ConfigurationGeneration. + conditions: rvViewConditions{ + configurationReady: true, + satisfyEligibleNodes: true, + }, + }, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(*counters.Aligned).To(Equal(int32(1))) + }) + + It("counts configuration not aligned volumes (configurationReady false)", func() { + rvs := []rvView{ + { + name: "rv-1", + configurationObservedGeneration: 1, // Matches rsc.Status.ConfigurationGeneration. 
+ conditions: rvViewConditions{ + configurationReady: false, + satisfyEligibleNodes: true, + }, + }, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(*counters.StaleConfiguration).To(Equal(int32(1))) + }) + + It("counts eligible nodes not aligned volumes (satisfyEligibleNodes false)", func() { + rvs := []rvView{ + { + name: "rv-1", + conditions: rvViewConditions{ + configurationReady: true, + satisfyEligibleNodes: false, + }, + }, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(*counters.InConflictWithEligibleNodes).To(Equal(int32(1))) + }) + + It("returns total and inConflictWithEligibleNodes when RV has not acknowledged (mismatched configurationGeneration)", func() { + rvs := []rvView{ + { + name: "rv-1", + configurationObservedGeneration: 0, // Mismatch - RSC has 1 + conditions: rvViewConditions{ + configurationReady: true, + satisfyEligibleNodes: false, // nodesOK=false + }, + }, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(*counters.Total).To(Equal(int32(1))) + Expect(*counters.PendingObservation).To(Equal(int32(1))) + Expect(counters.Aligned).To(BeNil()) + Expect(counters.StaleConfiguration).To(BeNil()) + // inConflictWithEligibleNodes is calculated regardless of acknowledgment + Expect(*counters.InConflictWithEligibleNodes).To(Equal(int32(1))) + }) + + It("returns all counters when all RVs have acknowledged", func() { + rvs := []rvView{ + { + name: "rv-1", + configurationObservedGeneration: 1, + conditions: rvViewConditions{ + configurationReady: true, + satisfyEligibleNodes: true, + }, + }, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(*counters.Total).To(Equal(int32(1))) + Expect(*counters.Aligned).To(Equal(int32(1))) + Expect(*counters.StaleConfiguration).To(Equal(int32(0))) + Expect(*counters.InConflictWithEligibleNodes).To(Equal(int32(0))) + }) + + It("collects used storage pool names from RVs", func() { + rvs := []rvView{ + { + name: "rv-1", + configurationStoragePoolName: "pool-b", + }, + { + name: "rv-2", + configurationStoragePoolName: "pool-a", + }, + { + name: "rv-3", + configurationStoragePoolName: "pool-b", // Duplicate. + }, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + // Should be sorted and deduplicated. + Expect(counters.UsedStoragePoolNames).To(Equal([]string{"pool-a", "pool-b"})) + }) + + It("returns empty UsedStoragePoolNames when no RVs have storage pool", func() { + rvs := []rvView{ + {name: "rv-1"}, + {name: "rv-2"}, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(counters.UsedStoragePoolNames).To(BeEmpty()) + }) + + It("includes UsedStoragePoolNames even when RVs have not acknowledged", func() { + rvs := []rvView{ + { + name: "rv-1", + configurationStoragePoolName: "pool-a", + configurationObservedGeneration: 0, // Not acknowledged. + }, + } + + counters := computeActualVolumesSummary(rsc, rvs) + + Expect(*counters.PendingObservation).To(Equal(int32(1))) + Expect(counters.UsedStoragePoolNames).To(Equal([]string{"pool-a"})) + }) +}) + +var _ = Describe("validateEligibleNodes", func() { + // Helper to create eligible node with or without LVG. 
+ makeNode := func(name, zone string, hasLVG bool) v1alpha1.ReplicatedStoragePoolEligibleNode { + node := v1alpha1.ReplicatedStoragePoolEligibleNode{ + NodeName: name, + ZoneName: zone, + } + if hasLVG { + node.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + {Name: "lvg-1"}, + } + } + return node + } + + Describe("Replication None", func() { + It("passes with 1 node", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationNone, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails with 0 nodes", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationNone, + }, + } + + err := validateEligibleNodes(nil, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("no eligible nodes")) + }) + }) + + Describe("Replication Availability - Ignored topology", func() { + It("passes with 3 nodes, 2 with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + makeNode("node-2", "", true), + makeNode("node-3", "", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails with 2 nodes", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + makeNode("node-2", "", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 3 nodes")) + }) + + It("fails with 3 nodes but only 1 with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + makeNode("node-2", "", false), + makeNode("node-3", "", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 2 nodes with disks")) + }) + }) + + Describe("Replication Availability - TransZonal topology", func() { + It("passes with 3 zones, 2 with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyTransZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "zone-a", true), + makeNode("node-2", "zone-b", true), + makeNode("node-3", "zone-c", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails with 2 zones", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ 
+ Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyTransZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "zone-a", true), + makeNode("node-2", "zone-b", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 3 zones")) + }) + + It("fails with 3 zones but only 1 with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyTransZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "zone-a", true), + makeNode("node-2", "zone-b", false), + makeNode("node-3", "zone-c", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 2 zones with disks")) + }) + }) + + Describe("Replication Availability - Zonal topology", func() { + It("passes with per zone: 3 nodes, 2 with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1a", "zone-a", true), + makeNode("node-2a", "zone-a", true), + makeNode("node-3a", "zone-a", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails when zone has only 2 nodes", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1a", "zone-a", true), + makeNode("node-2a", "zone-a", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 3 nodes in each zone")) + }) + + It("fails when zone has 3 nodes but only 1 with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationAvailability, + Topology: v1alpha1.RSCTopologyZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1a", "zone-a", true), + makeNode("node-2a", "zone-a", false), + makeNode("node-3a", "zone-a", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 2 nodes with disks in each zone")) + }) + }) + + Describe("Replication Consistency - Ignored topology", func() { + It("passes with 2 nodes both with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistency, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + makeNode("node-2", "", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails with 1 node with disks", func() { + rsc := 
&v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistency, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 2 nodes")) + }) + + It("fails with 2 nodes but only 1 with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistency, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + makeNode("node-2", "", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 2 nodes with disks")) + }) + }) + + Describe("Replication Consistency - TransZonal topology", func() { + It("passes with 2 zones with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistency, + Topology: v1alpha1.RSCTopologyTransZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "zone-a", true), + makeNode("node-2", "zone-b", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails with 1 zone with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistency, + Topology: v1alpha1.RSCTopologyTransZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "zone-a", true), + makeNode("node-2", "zone-b", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 2 zones with disks")) + }) + }) + + Describe("Replication Consistency - Zonal topology", func() { + It("passes with per zone: 2 nodes with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistency, + Topology: v1alpha1.RSCTopologyZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1a", "zone-a", true), + makeNode("node-2a", "zone-a", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails when zone has 1 node with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistency, + Topology: v1alpha1.RSCTopologyZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1a", "zone-a", true), + makeNode("node-2a", "zone-a", false), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 2 nodes with disks in each zone")) + }) + }) + + Describe("Replication ConsistencyAndAvailability - Ignored topology", func() { + It("passes with 3 nodes with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ 
+ Replication: v1alpha1.ReplicationConsistencyAndAvailability, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + makeNode("node-2", "", true), + makeNode("node-3", "", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails with 2 nodes with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistencyAndAvailability, + Topology: v1alpha1.RSCTopologyIgnored, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "", true), + makeNode("node-2", "", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 3 nodes with disks")) + }) + }) + + Describe("Replication ConsistencyAndAvailability - TransZonal topology", func() { + It("passes with 3 zones with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistencyAndAvailability, + Topology: v1alpha1.RSCTopologyTransZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "zone-a", true), + makeNode("node-2", "zone-b", true), + makeNode("node-3", "zone-c", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails with 2 zones with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistencyAndAvailability, + Topology: v1alpha1.RSCTopologyTransZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1", "zone-a", true), + makeNode("node-2", "zone-b", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 3 zones with disks")) + }) + }) + + Describe("Replication ConsistencyAndAvailability - Zonal topology", func() { + It("passes with per zone: 3 nodes with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistencyAndAvailability, + Topology: v1alpha1.RSCTopologyZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1a", "zone-a", true), + makeNode("node-2a", "zone-a", true), + makeNode("node-3a", "zone-a", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("fails when zone has 2 nodes with disks", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistencyAndAvailability, + Topology: v1alpha1.RSCTopologyZonal, + }, + } + nodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + makeNode("node-1a", "zone-a", true), + makeNode("node-2a", "zone-a", true), + } + + err := validateEligibleNodes(nodes, rsc.Spec.Topology, rsc.Spec.Replication) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("at least 3 nodes with disks in each zone")) + }) + }) +}) + +var _ = Describe("isConfigurationInSync", func() { + It("returns false when Status.Configuration is 
nil", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Generation: 1}, + Status: v1alpha1.ReplicatedStorageClassStatus{}, + } + + result := isConfigurationInSync(rsc) + + Expect(result).To(BeFalse()) + }) + + It("returns false when ConfigurationGeneration != Generation", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Generation: 2}, + Status: v1alpha1.ReplicatedStorageClassStatus{ + Configuration: &v1alpha1.ReplicatedStorageClassConfiguration{}, + ConfigurationGeneration: 1, + }, + } + + result := isConfigurationInSync(rsc) + + Expect(result).To(BeFalse()) + }) + + It("returns true when ConfigurationGeneration == Generation", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Generation: 5}, + Status: v1alpha1.ReplicatedStorageClassStatus{ + Configuration: &v1alpha1.ReplicatedStorageClassConfiguration{}, + ConfigurationGeneration: 5, + }, + } + + result := isConfigurationInSync(rsc) + + Expect(result).To(BeTrue()) + }) +}) + +var _ = Describe("computeRollingStrategiesConfiguration", func() { + It("returns (0, 0) when both policies are not RollingUpdate/RollingRepair", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + ConfigurationRolloutStrategy: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategy{ + Type: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeNewVolumesOnly, + }, + EligibleNodesConflictResolutionStrategy: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategy{ + Type: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeManual, + }, + }, + } + + rollouts, conflicts := computeRollingStrategiesConfiguration(rsc) + + Expect(rollouts).To(Equal(int32(0))) + Expect(conflicts).To(Equal(int32(0))) + }) + + It("returns maxParallel for rollouts when ConfigurationRolloutStrategy is RollingUpdate", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + ConfigurationRolloutStrategy: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategy{ + Type: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeRollingUpdate, + RollingUpdate: &v1alpha1.ReplicatedStorageClassConfigurationRollingUpdateStrategy{ + MaxParallel: 5, + }, + }, + EligibleNodesConflictResolutionStrategy: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategy{ + Type: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeManual, + }, + }, + } + + rollouts, conflicts := computeRollingStrategiesConfiguration(rsc) + + Expect(rollouts).To(Equal(int32(5))) + Expect(conflicts).To(Equal(int32(0))) + }) + + It("returns maxParallel for conflicts when EligibleNodesConflictResolutionStrategy is RollingRepair", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + ConfigurationRolloutStrategy: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategy{ + Type: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeNewVolumesOnly, + }, + EligibleNodesConflictResolutionStrategy: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategy{ + Type: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeRollingRepair, + RollingRepair: &v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair{ + MaxParallel: 10, + }, + }, + }, + } + + rollouts, conflicts := computeRollingStrategiesConfiguration(rsc) + + Expect(rollouts).To(Equal(int32(0))) + 
Expect(conflicts).To(Equal(int32(10))) + }) + + It("returns both maxParallel values when both policies are rolling", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + ConfigurationRolloutStrategy: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategy{ + Type: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeRollingUpdate, + RollingUpdate: &v1alpha1.ReplicatedStorageClassConfigurationRollingUpdateStrategy{ + MaxParallel: 3, + }, + }, + EligibleNodesConflictResolutionStrategy: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategy{ + Type: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeRollingRepair, + RollingRepair: &v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionRollingRepair{ + MaxParallel: 7, + }, + }, + }, + } + + rollouts, conflicts := computeRollingStrategiesConfiguration(rsc) + + Expect(rollouts).To(Equal(int32(3))) + Expect(conflicts).To(Equal(int32(7))) + }) + + It("panics when ConfigurationRolloutStrategy is RollingUpdate but RollingUpdate config is nil", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + ConfigurationRolloutStrategy: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategy{ + Type: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeRollingUpdate, + RollingUpdate: nil, + }, + EligibleNodesConflictResolutionStrategy: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategy{ + Type: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeManual, + }, + }, + } + + Expect(func() { + computeRollingStrategiesConfiguration(rsc) + }).To(Panic()) + }) + + It("panics when EligibleNodesConflictResolutionStrategy is RollingRepair but RollingRepair config is nil", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + ConfigurationRolloutStrategy: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategy{ + Type: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeNewVolumesOnly, + }, + EligibleNodesConflictResolutionStrategy: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategy{ + Type: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeRollingRepair, + RollingRepair: nil, + }, + }, + } + + Expect(func() { + computeRollingStrategiesConfiguration(rsc) + }).To(Panic()) + }) +}) + +var _ = Describe("ensureVolumeSummaryAndConditions", func() { + var ( + ctx context.Context + rsc *v1alpha1.ReplicatedStorageClass + ) + + // makeAcknowledgedRV creates an rvView that has acknowledged the RSC configuration. + makeAcknowledgedRV := func(name string, configOK, nodesOK bool) rvView { + return rvView{ + name: name, + configurationObservedGeneration: 1, + conditions: rvViewConditions{ + configurationReady: configOK, + satisfyEligibleNodes: nodesOK, + }, + } + } + + // makePendingRV creates an rvView that has NOT acknowledged the RSC configuration. 
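+ // Such a volume counts as PendingObservation; its satisfyEligibleNodes=false still
+ // contributes to InConflictWithEligibleNodes, which is computed regardless of acknowledgment.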
+ makePendingRV := func(name string) rvView { + return rvView{ + name: name, + configurationObservedGeneration: 0, // Mismatch - RSC has 1 + conditions: rvViewConditions{ + configurationReady: false, + satisfyEligibleNodes: false, + }, + } + } + + BeforeEach(func() { + ctx = flow.BeginRootReconcile(context.Background()).Ctx() + rsc = &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rsc", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + ConfigurationRolloutStrategy: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategy{ + Type: v1alpha1.ReplicatedStorageClassConfigurationRolloutStrategyTypeNewVolumesOnly, + }, + EligibleNodesConflictResolutionStrategy: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategy{ + Type: v1alpha1.ReplicatedStorageClassEligibleNodesConflictResolutionStrategyTypeManual, + }, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + ConfigurationGeneration: 1, + StoragePoolEligibleNodesRevision: 1, + }, + } + }) + + It("sets ConfigurationRolledOut to Unknown and VolumesSatisfyEligibleNodes based on actual when PendingObservation > 0", func() { + rvs := []rvView{ + makePendingRV("rv-1"), + makePendingRV("rv-2"), + makePendingRV("rv-3"), + } + + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeTrue()) + + // Check summary + Expect(rsc.Status.Volumes.PendingObservation).To(Equal(ptr.To(int32(3)))) + Expect(rsc.Status.Volumes.InConflictWithEligibleNodes).To(Equal(ptr.To(int32(3)))) + + // ConfigurationRolledOut is Unknown because we can't determine config status without acknowledgment. + configCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType) + Expect(configCond).NotTo(BeNil()) + Expect(configCond.Status).To(Equal(metav1.ConditionUnknown)) + Expect(configCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonNewConfigurationNotYetObserved)) + Expect(configCond.Message).To(ContainSubstring("3 volume(s) pending observation")) + + // VolumesSatisfyEligibleNodes is calculated regardless of acknowledgment. 
+ nodesCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType) + Expect(nodesCond).NotTo(BeNil()) + Expect(nodesCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(nodesCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonManualConflictResolution)) + }) + + It("sets ConfigurationRolledOut to False when StaleConfiguration > 0", func() { + rvs := []rvView{ + makeAcknowledgedRV("rv-1", false, true), // configOK=false + makeAcknowledgedRV("rv-2", false, true), // configOK=false + } + + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeTrue()) + + configCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType) + Expect(configCond).NotTo(BeNil()) + Expect(configCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(configCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonConfigurationRolloutDisabled)) + }) + + It("sets ConfigurationRolledOut to True when StaleConfiguration == 0", func() { + rvs := []rvView{ + makeAcknowledgedRV("rv-1", true, true), + makeAcknowledgedRV("rv-2", true, true), + } + + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeTrue()) + + configCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType) + Expect(configCond).NotTo(BeNil()) + Expect(configCond.Status).To(Equal(metav1.ConditionTrue)) + Expect(configCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonRolledOutToAllVolumes)) + }) + + It("sets VolumesSatisfyEligibleNodes to False when InConflictWithEligibleNodes > 0", func() { + rvs := []rvView{ + makeAcknowledgedRV("rv-1", true, false), // nodesOK=false + makeAcknowledgedRV("rv-2", true, false), // nodesOK=false + } + + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeTrue()) + + nodesCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType) + Expect(nodesCond).NotTo(BeNil()) + Expect(nodesCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(nodesCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonManualConflictResolution)) + }) + + It("sets VolumesSatisfyEligibleNodes to True when InConflictWithEligibleNodes == 0", func() { + rvs := []rvView{ + makeAcknowledgedRV("rv-1", true, true), + makeAcknowledgedRV("rv-2", true, true), + } + + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeTrue()) + + nodesCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType) + Expect(nodesCond).NotTo(BeNil()) + Expect(nodesCond.Status).To(Equal(metav1.ConditionTrue)) + Expect(nodesCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonAllVolumesSatisfy)) + }) + + It("sets both conditions correctly when StaleConfiguration > 0 and InConflictWithEligibleNodes > 0", func() { + rvs := []rvView{ + makeAcknowledgedRV("rv-1", false, false), // both false + makeAcknowledgedRV("rv-2", false, false), // both false + } + + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeTrue()) + 
+ configCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType) + Expect(configCond).NotTo(BeNil()) + Expect(configCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(configCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutReasonConfigurationRolloutDisabled)) + + nodesCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType) + Expect(nodesCond).NotTo(BeNil()) + Expect(nodesCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(nodesCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesReasonManualConflictResolution)) + }) + + It("reports no change when conditions already match the target state", func() { + rvs := []rvView{ + makeAcknowledgedRV("rv-1", true, true), + } + + // First call to set conditions + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + Expect(outcome.DidChange()).To(BeTrue()) + + // Second call should report no change + outcome = ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeFalse()) + }) + + It("sets conditions to True when no volumes exist", func() { + rvs := []rvView{} + + outcome := ensureVolumeSummaryAndConditions(ctx, rsc, rvs) + + Expect(outcome.Error()).To(BeNil()) + Expect(outcome.DidChange()).To(BeTrue()) + + // Check summary + Expect(rsc.Status.Volumes.Total).To(Equal(ptr.To(int32(0)))) + Expect(rsc.Status.Volumes.PendingObservation).To(Equal(ptr.To(int32(0)))) + + configCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondConfigurationRolledOutType) + Expect(configCond).NotTo(BeNil()) + Expect(configCond.Status).To(Equal(metav1.ConditionTrue)) + + nodesCond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondVolumesSatisfyEligibleNodesType) + Expect(nodesCond).NotTo(BeNil()) + Expect(nodesCond.Status).To(Equal(metav1.ConditionTrue)) + }) +}) + +var _ = Describe("makeConfiguration", func() { + It("copies all fields from spec correctly", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Topology: v1alpha1.RSCTopologyTransZonal, + Replication: v1alpha1.ReplicationAvailability, + VolumeAccess: v1alpha1.VolumeAccessLocal, + }, + } + + config := makeConfiguration(rsc, "my-storage-pool") + + Expect(config.Topology).To(Equal(v1alpha1.RSCTopologyTransZonal)) + Expect(config.Replication).To(Equal(v1alpha1.ReplicationAvailability)) + Expect(config.VolumeAccess).To(Equal(v1alpha1.VolumeAccessLocal)) + Expect(config.StoragePoolName).To(Equal("my-storage-pool")) + }) +}) + +var _ = Describe("Reconciler", func() { + var ( + scheme *runtime.Scheme + cl client.WithWatch + rec *Reconciler + ) + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(snc.AddToScheme(scheme)).To(Succeed()) + cl = nil + rec = nil + }) + + Describe("Reconcile", func() { + It("does nothing when RSC is not found", func() { + cl = fake.NewClientBuilder().WithScheme(scheme).Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + }) + + It("migrates StoragePool to spec.Storage when RSP exists", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: 
"rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "rsp-1", + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVMThin, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + {Name: "lvg-2"}, + }, + }, + } + cl = testhelpers.WithRVByReplicatedStorageClassNameIndex(fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc, rsp). + WithStatusSubresource(rsc, &v1alpha1.ReplicatedStoragePool{})). + Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSC v1alpha1.ReplicatedStorageClass + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsc-1"}, &updatedRSC)).To(Succeed()) + + // StoragePool should be cleared. + Expect(updatedRSC.Spec.StoragePool).To(BeEmpty()) + + // spec.Storage should contain data from RSP. + Expect(updatedRSC.Spec.Storage.Type).To(Equal(v1alpha1.ReplicatedStoragePoolTypeLVMThin)) + Expect(updatedRSC.Spec.Storage.LVMVolumeGroups).To(HaveLen(2)) + Expect(updatedRSC.Spec.Storage.LVMVolumeGroups[0].Name).To(Equal("lvg-1")) + Expect(updatedRSC.Spec.Storage.LVMVolumeGroups[1].Name).To(Equal("lvg-2")) + + // Finalizer should be added. + Expect(updatedRSC.Finalizers).To(ContainElement(v1alpha1.RSCControllerFinalizer)) + }) + + It("sets conditions when RSP is not found", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "rsp-not-found", + }, + } + cl = testhelpers.WithRVByReplicatedStorageClassNameIndex(fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc). + WithStatusSubresource(rsc)). + Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSC v1alpha1.ReplicatedStorageClass + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsc-1"}, &updatedRSC)).To(Succeed()) + + // StoragePool should remain unchanged (waiting for RSP to exist). + Expect(updatedRSC.Spec.StoragePool).To(Equal("rsp-not-found")) + + // Finalizer should NOT be added (reconcileMigrationFromRSP returns Done before reconcileMain). + Expect(updatedRSC.Finalizers).To(BeEmpty()) + + // Conditions should be set. 
+ readyCond := meta.FindStatusCondition(updatedRSC.Status.Conditions, v1alpha1.ReplicatedStorageClassCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(readyCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondReadyReasonWaitingForStoragePool)) + + storagePoolReadyCond := meta.FindStatusCondition(updatedRSC.Status.Conditions, v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType) + Expect(storagePoolReadyCond).NotTo(BeNil()) + Expect(storagePoolReadyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(storagePoolReadyCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondStoragePoolReadyReasonStoragePoolNotFound)) + }) + + It("does nothing when storagePool is already empty", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "", // Already empty - no migration needed. + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-existing"}, + }, + }, + }, + } + cl = testhelpers.WithRVByReplicatedStorageClassNameIndex(fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc). + WithStatusSubresource(rsc, &v1alpha1.ReplicatedStoragePool{})). + Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSC v1alpha1.ReplicatedStorageClass + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsc-1"}, &updatedRSC)).To(Succeed()) + + // Nothing should change. + Expect(updatedRSC.Spec.StoragePool).To(BeEmpty()) + Expect(updatedRSC.Spec.Storage.Type).To(Equal(v1alpha1.ReplicatedStoragePoolTypeLVM)) + Expect(updatedRSC.Spec.Storage.LVMVolumeGroups).To(HaveLen(1)) + Expect(updatedRSC.Spec.Storage.LVMVolumeGroups[0].Name).To(Equal("lvg-existing")) + }) + + It("sets condition StoragePoolReady=False when RSP is not found during migration", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "rsp-not-found", + }, + } + cl = testhelpers.WithRVByReplicatedStorageClassNameIndex(fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc). + WithStatusSubresource(rsc)). + Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSC v1alpha1.ReplicatedStorageClass + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsc-1"}, &updatedRSC)).To(Succeed()) + + // Check Ready condition is false. + readyCond := obju.GetStatusCondition(&updatedRSC, v1alpha1.ReplicatedStorageClassCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(readyCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondReadyReasonWaitingForStoragePool)) + + // Check StoragePoolReady condition is false. 
+ storagePoolCond := obju.GetStatusCondition(&updatedRSC, v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType) + Expect(storagePoolCond).NotTo(BeNil()) + Expect(storagePoolCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(storagePoolCond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondStoragePoolReadyReasonStoragePoolNotFound)) + }) + + It("adds finalizer when RSC is created", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + // No storagePool - using direct storage configuration. + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + }, + }, + } + cl = testhelpers.WithRVByReplicatedStorageClassNameIndex(fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc). + WithStatusSubresource(rsc, &v1alpha1.ReplicatedStoragePool{})). + Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSC v1alpha1.ReplicatedStorageClass + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsc-1"}, &updatedRSC)).To(Succeed()) + Expect(updatedRSC.Finalizers).To(ContainElement(v1alpha1.RSCControllerFinalizer)) + }) + + It("keeps finalizer when RSC has deletionTimestamp but RVs exist", func() { + now := metav1.Now() + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + DeletionTimestamp: &now, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + // No storagePool - using direct storage configuration. + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + }, + }, + } + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc-1", + }, + } + cl = testhelpers.WithRVByReplicatedStorageClassNameIndex(fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc, rv). + WithStatusSubresource(rsc, &v1alpha1.ReplicatedStoragePool{})). + Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSC v1alpha1.ReplicatedStorageClass + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsc-1"}, &updatedRSC)).To(Succeed()) + Expect(updatedRSC.Finalizers).To(ContainElement(v1alpha1.RSCControllerFinalizer)) + }) + + It("removes finalizer when RSC has deletionTimestamp and no RVs", func() { + now := metav1.Now() + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + DeletionTimestamp: &now, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + // No storagePool - using direct storage configuration. 
+ Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + }, + }, + } + cl = testhelpers.WithRVByReplicatedStorageClassNameIndex(fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc). + WithStatusSubresource(rsc)). + Build() + rec = NewReconciler(cl) + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsc-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + // After removing the finalizer, the object is deleted by the API server. + var updatedRSC v1alpha1.ReplicatedStorageClass + err = cl.Get(context.Background(), client.ObjectKey{Name: "rsc-1"}, &updatedRSC) + Expect(err).To(HaveOccurred()) + Expect(client.IgnoreNotFound(err)).To(BeNil()) + }) + }) + + Describe("reconcileRSP", func() { + It("creates RSP when it does not exist", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + {Name: "lvg-2"}, + }, + }, + Zones: []string{"zone-a", "zone-b"}, + SystemNetworkNames: []string{"Internal"}, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: 5 * time.Minute}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc). + WithStatusSubresource(rsc, &v1alpha1.ReplicatedStoragePool{}). + Build() + rec = NewReconciler(cl) + + targetStoragePoolName := "auto-rsp-test123" + outcome, rsp := rec.reconcileRSP(context.Background(), rsc, targetStoragePoolName) + + Expect(outcome.ShouldReturn()).To(BeFalse()) + Expect(rsp).NotTo(BeNil()) + Expect(rsp.Name).To(Equal(targetStoragePoolName)) + + // Verify RSP was created. + var createdRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: targetStoragePoolName}, &createdRSP)).To(Succeed()) + + // Verify finalizer is set. + Expect(createdRSP.Finalizers).To(ContainElement(v1alpha1.RSCControllerFinalizer)) + + // Verify spec. + Expect(createdRSP.Spec.Type).To(Equal(v1alpha1.ReplicatedStoragePoolTypeLVM)) + Expect(createdRSP.Spec.LVMVolumeGroups).To(HaveLen(2)) + Expect(createdRSP.Spec.Zones).To(Equal([]string{"zone-a", "zone-b"})) + Expect(createdRSP.Spec.SystemNetworkNames).To(Equal([]string{"Internal"})) + Expect(createdRSP.Spec.EligibleNodesPolicy.NotReadyGracePeriod.Duration).To(Equal(5 * time.Minute)) + + // Verify usedBy is set. + Expect(createdRSP.Status.UsedBy.ReplicatedStorageClassNames).To(ContainElement("rsc-1")) + }) + + It("adds finalizer to existing RSP without finalizer", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + existingRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "auto-rsp-existing", + // No finalizer. 
+ }, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc, existingRSP). + WithStatusSubresource(rsc, existingRSP). + Build() + rec = NewReconciler(cl) + + outcome, rsp := rec.reconcileRSP(context.Background(), rsc, "auto-rsp-existing") + + Expect(outcome.ShouldReturn()).To(BeFalse()) + Expect(rsp).NotTo(BeNil()) + + // Verify finalizer was added. + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "auto-rsp-existing"}, &updatedRSP)).To(Succeed()) + Expect(updatedRSP.Finalizers).To(ContainElement(v1alpha1.RSCControllerFinalizer)) + }) + + It("adds RSC name to usedBy", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + existingRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "auto-rsp-existing", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, // Already has finalizer. + }, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"other-rsc"}, // Another RSC already uses this. + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc, existingRSP). + WithStatusSubresource(rsc, existingRSP). + Build() + rec = NewReconciler(cl) + + outcome, rsp := rec.reconcileRSP(context.Background(), rsc, "auto-rsp-existing") + + Expect(outcome.ShouldReturn()).To(BeFalse()) + Expect(rsp).NotTo(BeNil()) + + // Verify RSC name was added to usedBy. + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "auto-rsp-existing"}, &updatedRSP)).To(Succeed()) + Expect(updatedRSP.Status.UsedBy.ReplicatedStorageClassNames).To(ContainElement("rsc-1")) + Expect(updatedRSP.Status.UsedBy.ReplicatedStorageClassNames).To(ContainElement("other-rsc")) + // Verify sorted order. 
+ Expect(updatedRSP.Status.UsedBy.ReplicatedStorageClassNames).To(Equal([]string{"other-rsc", "rsc-1"})) + }) + + It("does not update when RSP already has finalizer and usedBy contains RSC", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + existingRSP := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "auto-rsp-existing", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + ResourceVersion: "123", + }, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"rsc-1"}, // Already has this RSC. + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsc, existingRSP). + WithStatusSubresource(rsc, existingRSP). + Build() + rec = NewReconciler(cl) + + outcome, rsp := rec.reconcileRSP(context.Background(), rsc, "auto-rsp-existing") + + Expect(outcome.ShouldReturn()).To(BeFalse()) + Expect(rsp).NotTo(BeNil()) + + // Verify nothing changed. + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "auto-rsp-existing"}, &updatedRSP)).To(Succeed()) + // ResourceVersion should be unchanged if no updates were made. + // Note: fake client may update resourceVersion anyway, so we check content instead. + Expect(updatedRSP.Status.UsedBy.ReplicatedStorageClassNames).To(Equal([]string{"rsc-1"})) + }) + }) + + Describe("reconcileRSPRelease", func() { + It("does nothing when RSP does not exist", func() { + cl = fake.NewClientBuilder(). + WithScheme(scheme). + Build() + rec = NewReconciler(cl) + + outcome := rec.reconcileRSPRelease(context.Background(), "rsc-1", "non-existent-rsp") + + Expect(outcome.ShouldReturn()).To(BeFalse()) + }) + + It("does nothing when RSC not in usedBy", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-rsp", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"other-rsc"}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp). + WithStatusSubresource(rsp). + Build() + rec = NewReconciler(cl) + + outcome := rec.reconcileRSPRelease(context.Background(), "rsc-1", "my-rsp") + + Expect(outcome.ShouldReturn()).To(BeFalse()) + + // RSP should be unchanged. 
+ var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "my-rsp"}, &updatedRSP)).To(Succeed()) + Expect(updatedRSP.Status.UsedBy.ReplicatedStorageClassNames).To(Equal([]string{"other-rsc"})) + }) + + It("removes RSC from usedBy when RSC is in usedBy", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-rsp", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"other-rsc", "rsc-1"}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp). + WithStatusSubresource(rsp). + Build() + rec = NewReconciler(cl) + + outcome := rec.reconcileRSPRelease(context.Background(), "rsc-1", "my-rsp") + + Expect(outcome.ShouldReturn()).To(BeFalse()) + + // RSP should have rsc-1 removed from usedBy. + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "my-rsp"}, &updatedRSP)).To(Succeed()) + Expect(updatedRSP.Status.UsedBy.ReplicatedStorageClassNames).To(Equal([]string{"other-rsc"})) + // RSP should still exist. + Expect(updatedRSP.Finalizers).To(ContainElement(v1alpha1.RSCControllerFinalizer)) + }) + + It("deletes RSP when usedBy becomes empty", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-rsp", + Finalizers: []string{v1alpha1.RSCControllerFinalizer}, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"rsc-1"}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp). + WithStatusSubresource(rsp). + Build() + rec = NewReconciler(cl) + + outcome := rec.reconcileRSPRelease(context.Background(), "rsc-1", "my-rsp") + + Expect(outcome.ShouldReturn()).To(BeFalse()) + + // RSP should be deleted. 
+ var updatedRSP v1alpha1.ReplicatedStoragePool + err := cl.Get(context.Background(), client.ObjectKey{Name: "my-rsp"}, &updatedRSP) + Expect(err).To(HaveOccurred()) + Expect(client.IgnoreNotFound(err)).To(BeNil()) + }) + }) + + Describe("newRSP", func() { + It("builds RSP with correct spec from RSC", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVMThin, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1", ThinPoolName: "thin-1"}, + {Name: "lvg-2", ThinPoolName: "thin-2"}, + }, + }, + Zones: []string{"zone-a", "zone-b", "zone-c"}, + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"node-type": "storage"}, + }, + SystemNetworkNames: []string{"Internal"}, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: 15 * time.Minute}, + }, + }, + } + + rsp := newRSP("auto-rsp-abc123", rsc) + + Expect(rsp.Name).To(Equal("auto-rsp-abc123")) + Expect(rsp.Finalizers).To(ContainElement(v1alpha1.RSCControllerFinalizer)) + + Expect(rsp.Spec.Type).To(Equal(v1alpha1.ReplicatedStoragePoolTypeLVMThin)) + Expect(rsp.Spec.LVMVolumeGroups).To(HaveLen(2)) + Expect(rsp.Spec.LVMVolumeGroups[0].Name).To(Equal("lvg-1")) + Expect(rsp.Spec.LVMVolumeGroups[0].ThinPoolName).To(Equal("thin-1")) + Expect(rsp.Spec.LVMVolumeGroups[1].Name).To(Equal("lvg-2")) + Expect(rsp.Spec.LVMVolumeGroups[1].ThinPoolName).To(Equal("thin-2")) + + Expect(rsp.Spec.Zones).To(Equal([]string{"zone-a", "zone-b", "zone-c"})) + Expect(rsp.Spec.NodeLabelSelector).NotTo(BeNil()) + Expect(rsp.Spec.NodeLabelSelector.MatchLabels).To(HaveKeyWithValue("node-type", "storage")) + Expect(rsp.Spec.SystemNetworkNames).To(Equal([]string{"Internal"})) + Expect(rsp.Spec.EligibleNodesPolicy.NotReadyGracePeriod.Duration).To(Equal(15 * time.Minute)) + }) + + It("builds RSP without NodeLabelSelector when not set", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + SystemNetworkNames: []string{"Internal"}, + }, + } + + rsp := newRSP("auto-rsp-xyz", rsc) + + Expect(rsp.Spec.NodeLabelSelector).To(BeNil()) + }) + + It("does not share slices with RSC", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + }, + Zones: []string{"zone-a"}, + SystemNetworkNames: []string{"Internal"}, + }, + } + + rsp := newRSP("auto-rsp-test", rsc) + + // Modify RSP slices. + rsp.Spec.LVMVolumeGroups[0].Name = "modified" + rsp.Spec.Zones[0] = "modified" + rsp.Spec.SystemNetworkNames[0] = "modified" + + // Verify RSC slices are unchanged. 
+ Expect(rsc.Spec.Storage.LVMVolumeGroups[0].Name).To(Equal("lvg-1")) + Expect(rsc.Spec.Zones[0]).To(Equal("zone-a")) + Expect(rsc.Spec.SystemNetworkNames[0]).To(Equal("Internal")) + }) + }) + + Describe("ensureStoragePool", func() { + It("updates storagePoolName and storagePoolBasedOnGeneration when not in sync", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 3, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 2, // Different from Generation. + StoragePoolName: "old-pool-name", + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "new-pool-name"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + Conditions: []metav1.Condition{ + {Type: v1alpha1.ReplicatedStoragePoolCondReadyType, Status: metav1.ConditionTrue, Reason: "Ready"}, + }, + }, + } + + outcome := ensureStoragePool(context.Background(), rsc, "new-pool-name", rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + Expect(rsc.Status.StoragePoolName).To(Equal("new-pool-name")) + Expect(rsc.Status.StoragePoolBasedOnGeneration).To(Equal(int64(3))) + }) + + It("reports no change when already in sync", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + StoragePoolName: "my-pool", + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + ObservedGeneration: 5, // Must match RSC Generation. + }, + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "my-pool"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + Conditions: []metav1.Condition{ + {Type: v1alpha1.ReplicatedStoragePoolCondReadyType, Status: metav1.ConditionTrue, Reason: "Ready"}, + }, + }, + } + + outcome := ensureStoragePool(context.Background(), rsc, "my-pool", rsp) + + Expect(outcome.DidChange()).To(BeFalse()) + }) + + It("sets StoragePoolReady=False when RSP is nil", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + } + + outcome := ensureStoragePool(context.Background(), rsc, "missing-pool", nil) + + Expect(outcome.DidChange()).To(BeTrue()) + cond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondStoragePoolReadyReasonStoragePoolNotFound)) + Expect(cond.Message).To(ContainSubstring("missing-pool")) + }) + + It("sets StoragePoolReady=Unknown when RSP has no Ready condition", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "new-pool"}, + // No conditions. 
+ } + + outcome := ensureStoragePool(context.Background(), rsc, "new-pool", rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + cond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionUnknown)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondStoragePoolReadyReasonPending)) + }) + + It("copies RSP Ready=True to RSC StoragePoolReady=True", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "ready-pool"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionTrue, + Reason: "AllNodesEligible", + Message: "All nodes are eligible", + }, + }, + }, + } + + outcome := ensureStoragePool(context.Background(), rsc, "ready-pool", rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + cond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionTrue)) + Expect(cond.Reason).To(Equal("AllNodesEligible")) + Expect(cond.Message).To(Equal("All nodes are eligible")) + }) + + It("copies RSP Ready=False to RSC StoragePoolReady=False", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "not-ready-pool"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionFalse, + Reason: "LVGNotReady", + Message: "LVMVolumeGroup is not ready", + }, + }, + }, + } + + outcome := ensureStoragePool(context.Background(), rsc, "not-ready-pool", rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + cond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal("LVGNotReady")) + Expect(cond.Message).To(Equal("LVMVolumeGroup is not ready")) + }) + }) + + Describe("ensureConfiguration", func() { + It("panics when StoragePoolBasedOnGeneration != Generation", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 4, // Mismatch. + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{} + + Expect(func() { + ensureConfiguration(context.Background(), rsc, rsp) + }).To(Panic()) + }) + + It("sets Ready=False when StoragePoolReady is not True", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + // No StoragePoolReady condition - defaults to not-true. 
+ }, + } + rsp := &v1alpha1.ReplicatedStoragePool{} + + outcome := ensureConfiguration(context.Background(), rsc, rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + cond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondReadyType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondReadyReasonWaitingForStoragePool)) + }) + + It("sets Ready=False when eligible nodes validation fails", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationConsistencyAndAvailability, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + StoragePoolEligibleNodesRevision: 1, // Different from RSP. + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + ObservedGeneration: 5, + }, + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 2, // Changed. + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, // Not enough for ConsistencyAndAvailability. + }, + }, + } + + outcome := ensureConfiguration(context.Background(), rsc, rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + cond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondReadyType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondReadyReasonInsufficientEligibleNodes)) + }) + + It("updates StoragePoolEligibleNodesRevision when RSP revision changes and validation passes", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationNone, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + StoragePoolEligibleNodesRevision: 1, + ConfigurationGeneration: 5, // Already in sync. + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + ObservedGeneration: 5, + }, + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 2, // Changed. + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, // Enough for ReplicationNone. + }, + }, + } + + outcome := ensureConfiguration(context.Background(), rsc, rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + Expect(rsc.Status.StoragePoolEligibleNodesRevision).To(Equal(int64(2))) + }) + + It("skips configuration update when ConfigurationGeneration matches Generation", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationNone, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + StoragePoolName: "my-pool", + StoragePoolEligibleNodesRevision: 2, // Already in sync. + ConfigurationGeneration: 5, // Already in sync. 
+ Configuration: &v1alpha1.ReplicatedStorageClassConfiguration{ + StoragePoolName: "my-pool", + }, + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + ObservedGeneration: 5, + }, + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 2, // Same as rsc. + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + + outcome := ensureConfiguration(context.Background(), rsc, rsp) + + Expect(outcome.DidChange()).To(BeFalse()) + }) + + It("updates configuration and sets Ready=True when generation mismatch", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 6, // New generation. + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: v1alpha1.ReplicationNone, + VolumeAccess: v1alpha1.VolumeAccessPreferablyLocal, + Topology: v1alpha1.RSCTopologyIgnored, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 6, + StoragePoolName: "my-pool", + StoragePoolEligibleNodesRevision: 2, + ConfigurationGeneration: 5, // Old generation. + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedStorageClassCondStoragePoolReadyType, + Status: metav1.ConditionTrue, + Reason: "Ready", + ObservedGeneration: 6, + }, + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 2, + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + + outcome := ensureConfiguration(context.Background(), rsc, rsp) + + Expect(outcome.DidChange()).To(BeTrue()) + Expect(outcome.OptimisticLockRequired()).To(BeTrue()) + Expect(rsc.Status.ConfigurationGeneration).To(Equal(int64(6))) + Expect(rsc.Status.Configuration).NotTo(BeNil()) + Expect(rsc.Status.Configuration.StoragePoolName).To(Equal("my-pool")) + + // Ready should be True. 
+ cond := obju.GetStatusCondition(rsc, v1alpha1.ReplicatedStorageClassCondReadyType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionTrue)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedStorageClassCondReadyReasonReady)) + }) + }) + + Describe("applyStoragePool", func() { + It("returns true and updates when generation differs", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 4, + StoragePoolName: "old-name", + }, + } + + changed := applyStoragePool(rsc, "new-name") + + Expect(changed).To(BeTrue()) + Expect(rsc.Status.StoragePoolBasedOnGeneration).To(Equal(int64(5))) + Expect(rsc.Status.StoragePoolName).To(Equal("new-name")) + }) + + It("returns true and updates when name differs", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + StoragePoolName: "old-name", + }, + } + + changed := applyStoragePool(rsc, "new-name") + + Expect(changed).To(BeTrue()) + Expect(rsc.Status.StoragePoolName).To(Equal("new-name")) + }) + + It("returns false when already in sync", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + StoragePoolName: "same-name", + }, + } + + changed := applyStoragePool(rsc, "same-name") + + Expect(changed).To(BeFalse()) + }) + }) + + Describe("applyRSPRemoveUsedBy", func() { + It("removes RSC name and returns true when present", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"rsc-a", "rsc-b", "rsc-c"}, + }, + }, + } + + changed := applyRSPRemoveUsedBy(rsp, "rsc-b") + + Expect(changed).To(BeTrue()) + Expect(rsp.Status.UsedBy.ReplicatedStorageClassNames).To(Equal([]string{"rsc-a", "rsc-c"})) + }) + + It("returns false when RSC name not present", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"rsc-a", "rsc-c"}, + }, + }, + } + + changed := applyRSPRemoveUsedBy(rsp, "rsc-b") + + Expect(changed).To(BeFalse()) + Expect(rsp.Status.UsedBy.ReplicatedStorageClassNames).To(Equal([]string{"rsc-a", "rsc-c"})) + }) + + It("handles empty usedBy list", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{}, + }, + }, + } + + changed := applyRSPRemoveUsedBy(rsp, "rsc-a") + + Expect(changed).To(BeFalse()) + Expect(rsp.Status.UsedBy.ReplicatedStorageClassNames).To(BeEmpty()) + }) + + It("removes last element correctly", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + Status: v1alpha1.ReplicatedStoragePoolStatus{ + UsedBy: v1alpha1.ReplicatedStoragePoolUsedBy{ + ReplicatedStorageClassNames: []string{"rsc-only"}, + }, + }, + } + + changed := applyRSPRemoveUsedBy(rsp, "rsc-only") + + Expect(changed).To(BeTrue()) + Expect(rsp.Status.UsedBy.ReplicatedStorageClassNames).To(BeEmpty()) + }) + }) + + Describe("computeStoragePoolChecksum", func() { + It("produces deterministic output for same 
parameters", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + {Name: "lvg-2"}, + }, + }, + Zones: []string{"zone-a", "zone-b"}, + SystemNetworkNames: []string{"Internal"}, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc) + checksum2 := computeStoragePoolChecksum(rsc) + + Expect(checksum1).To(Equal(checksum2)) + }) + + It("produces same checksum regardless of LVMVolumeGroups order", func() { + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-a"}, + {Name: "lvg-b"}, + }, + }, + }, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-b"}, + {Name: "lvg-a"}, + }, + }, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc1) + checksum2 := computeStoragePoolChecksum(rsc2) + + Expect(checksum1).To(Equal(checksum2)) + }) + + It("produces same checksum regardless of zones order", func() { + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + }, + Zones: []string{"zone-a", "zone-b", "zone-c"}, + }, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + }, + Zones: []string{"zone-c", "zone-a", "zone-b"}, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc1) + checksum2 := computeStoragePoolChecksum(rsc2) + + Expect(checksum1).To(Equal(checksum2)) + }) + + It("produces different checksums for different types", func() { + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVMThin, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc1) + checksum2 := computeStoragePoolChecksum(rsc2) + + Expect(checksum1).NotTo(Equal(checksum2)) + }) + + It("produces different checksums for different LVMVolumeGroups", func() { + rsc1 := 
&v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-2"}}, + }, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc1) + checksum2 := computeStoragePoolChecksum(rsc2) + + Expect(checksum1).NotTo(Equal(checksum2)) + }) + + It("produces different checksums for different zones", func() { + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + Zones: []string{"zone-a"}, + }, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + Zones: []string{"zone-b"}, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc1) + checksum2 := computeStoragePoolChecksum(rsc2) + + Expect(checksum1).NotTo(Equal(checksum2)) + }) + + It("produces different checksums for different NodeLabelSelector", func() { + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"tier": "storage"}, + }, + }, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"tier": "compute"}, + }, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc1) + checksum2 := computeStoragePoolChecksum(rsc2) + + Expect(checksum1).NotTo(Equal(checksum2)) + }) + + It("produces 32-character hex string (FNV-128)", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + + checksum := computeStoragePoolChecksum(rsc) + + Expect(checksum).To(HaveLen(32)) + // Verify it's a valid hex string. 
+ Expect(checksum).To(MatchRegexp("^[0-9a-f]{32}$")) + }) + + It("includes thinPoolName in checksum", func() { + rsc1 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVMThin, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1", ThinPoolName: "thin-1"}}, + }, + }, + } + rsc2 := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-2"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVMThin, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1", ThinPoolName: "thin-2"}}, + }, + }, + } + + checksum1 := computeStoragePoolChecksum(rsc1) + checksum2 := computeStoragePoolChecksum(rsc2) + + Expect(checksum1).NotTo(Equal(checksum2)) + }) + }) + + Describe("computeTargetStoragePool", func() { + It("returns auto-rsp- format", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + } + + name := computeTargetStoragePool(rsc) + + Expect(name).To(HavePrefix("auto-rsp-")) + Expect(name).To(HaveLen(9 + 32)) // "auto-rsp-" + 32-char checksum + }) + + It("returns cached value when StoragePoolBasedOnGeneration matches Generation", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, // Matches Generation. + StoragePoolName: "auto-rsp-cached-value", + }, + } + + name := computeTargetStoragePool(rsc) + + Expect(name).To(Equal("auto-rsp-cached-value")) + }) + + It("recomputes when StoragePoolBasedOnGeneration does not match Generation", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 6, // Changed from 5. + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, // Does not match Generation. 
+ StoragePoolName: "auto-rsp-old-value", + }, + } + + name := computeTargetStoragePool(rsc) + + Expect(name).NotTo(Equal("auto-rsp-old-value")) + Expect(name).To(HavePrefix("auto-rsp-")) + }) + + It("recomputes when StoragePoolName is empty even if generation matches", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 5, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{{Name: "lvg-1"}}, + }, + }, + Status: v1alpha1.ReplicatedStorageClassStatus{ + StoragePoolBasedOnGeneration: 5, + StoragePoolName: "", // Empty. + }, + } + + name := computeTargetStoragePool(rsc) + + Expect(name).To(HavePrefix("auto-rsp-")) + Expect(name).NotTo(BeEmpty()) + }) + + It("is deterministic for same spec", func() { + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc-1", + Generation: 1, + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Storage: v1alpha1.ReplicatedStorageClassStorage{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + {Name: "lvg-2"}, + }, + }, + Zones: []string{"zone-a", "zone-b"}, + SystemNetworkNames: []string{"Internal"}, + }, + } + + name1 := computeTargetStoragePool(rsc) + name2 := computeTargetStoragePool(rsc) + + Expect(name1).To(Equal(name2)) + }) + }) +}) diff --git a/images/controller/internal/controllers/rsp_controller/README.md b/images/controller/internal/controllers/rsp_controller/README.md new file mode 100644 index 000000000..439b6575d --- /dev/null +++ b/images/controller/internal/controllers/rsp_controller/README.md @@ -0,0 +1,209 @@ +# rsp_controller + +> **TODO(systemnetwork): IMPORTANT!** This controller does not yet support custom SystemNetworkNames. +> Currently only "Internal" (default node network) is allowed by API validation. +> When systemnetwork feature stabilizes, the controller must: +> - Watch NetworkNode resources +> - Filter eligible nodes based on configured networks availability +> - Add NetworkNode predicates for Ready condition changes +> +> See `controller.go` for detailed TODO comments. + +This controller manages the `ReplicatedStoragePool` status fields by aggregating information from LVMVolumeGroups, Nodes, and agent Pods. + +## Purpose + +The controller reconciles `ReplicatedStoragePool` status with: + +1. **Eligible nodes** — nodes that can host volumes of this storage pool +2. **Eligible nodes revision** — for quick change detection +3. **Ready condition** — describing the current state + +## Interactions + +| Direction | Resource/Controller | Relationship | +|-----------|---------------------|--------------| +| ← input | LVMVolumeGroup | Reads LVGs referenced by RSP spec | +| ← input | Node | Reads nodes matching selector | +| ← input | Pod (agent) | Reads agent pod readiness | +| → used by | rsc_controller | RSC uses `RSP.Status.EligibleNodes` for validation | +| → used by | node_controller | Reads `RSP.Status.EligibleNodes` to manage node labels | + +## Algorithm + +A node is eligible if **all** conditions are met: + +``` +eligible = matchesNodeLabelSelector + AND matchesZones + AND (nodeReady OR withinGracePeriod) +``` + +For each eligible node, the controller also records LVG readiness and agent readiness. 
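+
+A minimal Go sketch of this rule (illustrative only: the function `eligible`, its parameters, and the `notReadySince` convention are assumptions made for this example, not the controller's actual API):
+
+```go
+// Package example sketches the eligibility rule described above.
+package example
+
+import (
+    "slices"
+    "time"
+
+    "k8s.io/apimachinery/pkg/labels"
+)
+
+// eligible reports whether a node may host volumes of a storage pool.
+// notReadySince is the zero time when the node is currently Ready.
+func eligible(nodeLabels labels.Set, nodeZone string, selector labels.Selector,
+    zones []string, notReadySince time.Time, gracePeriod time.Duration) bool {
+    // NodeLabelSelector: a nil selector matches every node.
+    if selector != nil && !selector.Matches(nodeLabels) {
+        return false
+    }
+    // Zones: an empty list matches every zone.
+    if len(zones) > 0 && !slices.Contains(zones, nodeZone) {
+        return false
+    }
+    // Ready status: a NotReady node stays eligible until the grace period expires.
+    return notReadySince.IsZero() || time.Since(notReadySince) < gracePeriod
+}
+```
+
+The grace-period check corresponds to `spec.eligibleNodesPolicy.notReadyGracePeriod`: a node that has been NotReady for less than the grace period is still treated as eligible.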
+ +## Reconciliation Structure + +``` +Reconcile (root) +├── getRSP — fetch the RSP +├── getLVGsByRSP — fetch LVGs referenced by RSP +├── validateRSPAndLVGs — validate RSP/LVG configuration +├── getSortedNodes — fetch nodes (filtered by selector) +├── getAgentReadiness — fetch agent pods and compute readiness +├── computeActualEligibleNodes — compute eligible nodes list +├── applyEligibleNodesAndIncrementRevisionIfChanged +├── applyReadyCondTrue/applyReadyCondFalse — set Ready condition +└── patchRSPStatus — persist status changes +``` + +## Algorithm Flow + +```mermaid +flowchart TD + Start([Reconcile]) --> GetRSP[Get RSP] + GetRSP -->|NotFound| Done1([Done]) + GetRSP --> GetLVGs[Get LVGs by RSP] + + GetLVGs -->|Error| Fail1([Fail]) + GetLVGs -->|Some NotFound| SetLVGNotFound[Ready=False
LVMVolumeGroupNotFound] + GetLVGs --> ValidateRSP[Validate RSP and LVGs] + + SetLVGNotFound --> PatchStatus1[Patch status] + PatchStatus1 --> Done2([Done]) + + ValidateRSP -->|Invalid| SetInvalidLVG[Ready=False
InvalidLVMVolumeGroup] + ValidateRSP --> ValidateSelector[Validate NodeLabelSelector] + + SetInvalidLVG --> PatchStatus2[Patch status] + PatchStatus2 --> Done3([Done]) + + ValidateSelector -->|Invalid| SetInvalidSelector[Ready=False
InvalidNodeLabelSelector] + ValidateSelector --> ValidateZones[Validate Zones] + + SetInvalidSelector --> PatchStatus3[Patch status] + PatchStatus3 --> Done4([Done]) + + ValidateZones -->|Invalid| SetInvalidZones[Ready=False
InvalidNodeLabelSelector] + ValidateZones --> GetNodes[Get Nodes
filtered by selector] + + SetInvalidZones --> PatchStatus4[Patch status] + PatchStatus4 --> Done5([Done]) + + GetNodes --> GetAgentReadiness[Get Agent Readiness] + GetAgentReadiness --> ComputeEligible[Compute Eligible Nodes] + + ComputeEligible --> ApplyEligible[Apply eligible nodes
Increment revision if changed] + ApplyEligible --> SetReady[Ready=True] + + SetReady --> Changed{Changed?} + Changed -->|Yes| PatchStatus5[Patch status] + Changed -->|No| CheckGrace{Grace period
expiration?} + PatchStatus5 --> CheckGrace + + CheckGrace -->|Yes| Requeue([RequeueAfter]) + CheckGrace -->|No| Done6([Done]) +``` + +## Conditions + +### Ready + +Indicates whether the storage pool eligible nodes have been calculated successfully. + +| Status | Reason | When | +|--------|--------|------| +| True | Ready | Eligible nodes calculated successfully | +| False | LVMVolumeGroupNotFound | Some LVMVolumeGroups not found | +| False | InvalidLVMVolumeGroup | RSP/LVG validation failed (e.g., thin pool not found) | +| False | InvalidNodeLabelSelector | NodeLabelSelector or Zones parsing failed | + +## Eligible Nodes Details + +A node is considered eligible for an RSP if **all** conditions are met (AND): + +1. **NodeLabelSelector** — if the RSP has `nodeLabelSelector` specified, the node must match this selector; if not specified, the condition is satisfied for any node + +2. **Zones** — if the RSP has `zones` specified, the node's `topology.kubernetes.io/zone` label must be in that list; if `zones` is not specified, the condition is satisfied for any node + +3. **Ready status** — if the node has been `NotReady` longer than `spec.eligibleNodesPolicy.notReadyGracePeriod`, it is excluded from the eligible nodes list + +> **Note:** Nodes are filtered by NodeLabelSelector and Zones before being passed to the eligible nodes computation. Nodes without matching LVMVolumeGroups are still included as they can serve as client-only or tiebreaker nodes. + +For each eligible node, the controller records: + +- **NodeName** — Kubernetes node name +- **ZoneName** — from `topology.kubernetes.io/zone` label +- **NodeReady** — current node readiness status +- **Unschedulable** — from `node.spec.unschedulable` +- **AgentReady** — whether the sds-replicated-volume agent pod on this node is ready +- **LVMVolumeGroups** — list of matching LVGs with: + - **Name** — LVMVolumeGroup resource name + - **ThinPoolName** — thin pool name (for LVMThin storage pools) + - **Unschedulable** — from `storage.deckhouse.io/lvmVolumeGroupUnschedulable` annotation + - **Ready** — LVG Ready condition status (and thin pool ready status for LVMThin) + +## Managed Metadata + +This controller manages `RSP.Status` fields only and does not create external labels, annotations, or finalizers. + +| Type | Key | Managed On | Purpose | +|------|-----|------------|---------| +| Status field | `status.eligibleNodes` | RSP | List of eligible nodes | +| Status field | `status.eligibleNodesRevision` | RSP | Change detection counter | +| Status field | `status.conditions[Ready]` | RSP | Controller health condition | + +## Watches + +| Resource | Events | Handler | +|----------|--------|---------| +| ReplicatedStoragePool | Generation changes | Direct (primary) | +| Node | Label changes, Ready condition, spec.unschedulable | Index + selector matching | +| LVMVolumeGroup | Generation, unschedulable annotation, Ready condition, ThinPools[].Ready | Index by LVG name | +| Pod (agent) | Ready condition changes, namespace + label filter | Index by node name | + +## Indexes + +| Index | Field | Purpose | +|-------|-------|---------| +| RSP by eligible node name | `status.eligibleNodes[].nodeName` | Find RSPs where a node is eligible | +| LVMVolumeGroup by name | `metadata.name` | Fetch LVGs referenced by RSP | + +## Data Flow + +```mermaid +flowchart TD + subgraph inputs [Inputs] + RSP[RSP.spec] + Nodes[Nodes] + LVGs[LVMVolumeGroups] + AgentPods[Agent Pods] + end + + subgraph compute [Compute] + BuildSelector[Build node selector
from NodeLabelSelector + Zones] + BuildLVGMap[buildLVGByNodeMap] + GetAgent[getAgentReadiness] + ComputeEligible[computeActualEligibleNodes] + end + + subgraph status [Status Output] + EN[status.eligibleNodes] + ENRev[status.eligibleNodesRevision] + Conds[status.conditions] + end + + RSP --> BuildSelector + RSP --> BuildLVGMap + Nodes --> BuildSelector + BuildSelector -->|filtered nodes| ComputeEligible + + LVGs --> BuildLVGMap + BuildLVGMap --> ComputeEligible + + AgentPods --> GetAgent + GetAgent --> ComputeEligible + + ComputeEligible --> EN + ComputeEligible --> ENRev + ComputeEligible -->|Ready| Conds +``` diff --git a/images/controller/internal/controllers/rsp_controller/controller.go b/images/controller/internal/controllers/rsp_controller/controller.go new file mode 100644 index 000000000..25eddebfb --- /dev/null +++ b/images/controller/internal/controllers/rsp_controller/controller.go @@ -0,0 +1,231 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rspcontroller + +import ( + "context" + "slices" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +const RSPControllerName = "rsp-controller" + +// BuildController registers the RSP controller with the manager. +// It sets up watches on ReplicatedStoragePool, Node, LVMVolumeGroup, and agent Pod resources. +func BuildController(mgr manager.Manager, podNamespace string) error { + cl := mgr.GetClient() + + rec := NewReconciler(cl, mgr.GetLogger().WithName(RSPControllerName), podNamespace) + + return builder.ControllerManagedBy(mgr). + Named(RSPControllerName). + For(&v1alpha1.ReplicatedStoragePool{}, builder.WithPredicates(rspPredicates()...)). + Watches( + &corev1.Node{}, + handler.EnqueueRequestsFromMapFunc(mapNodeToRSP(cl)), + builder.WithPredicates(nodePredicates()...), + ). + Watches( + &snc.LVMVolumeGroup{}, + handler.EnqueueRequestsFromMapFunc(mapLVGToRSP(cl)), + builder.WithPredicates(lvgPredicates()...), + ). + Watches( + &corev1.Pod{}, + handler.EnqueueRequestsFromMapFunc(mapAgentPodToRSP(cl, podNamespace)), + builder.WithPredicates(agentPodPredicates(podNamespace)...), + ). + // TODO(systemnetwork): IMPORTANT! Watch NetworkNode resources and filter eligible nodes. + // + // Currently missing: + // 1. Watch on NetworkNode resources (requires new index + mapNetworkNodeToRSP mapping function). + // 2. 
Filter eligible nodes to include only nodes where the specified SystemNetworkNames + // are configured (i.e., the node has corresponding NetworkNode resources with ready status). + // 3. Add NetworkNode predicates to react on NetworkNode Ready condition changes. + // + // This is not implemented because the systemnetwork feature is still under active development. + // Once systemnetwork stabilizes, this controller MUST be updated to: + // - Subscribe to NetworkNode changes + // - Validate that RSP's spec.systemNetworkNames are available on each eligible node + // - Exclude nodes from EligibleNodes if required networks are not configured/ready + // + // Current workaround: The only allowed value for spec.systemNetworkNames is "Internal" + // (the default node internal network). The API (kubebuilder validation) currently forbids + // other values. This means no NetworkNode filtering is needed until custom networks are supported. + // + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). + Complete(rec) +} + +// mapNodeToRSP maps a Node to ReplicatedStoragePool resources that are affected. +// This includes RSPs where: +// 1. Node is already in EligibleNodes (for updates/removals) +// 2. Node matches RSP's NodeLabelSelector and Zones (for potential additions) +func mapNodeToRSP(cl client.Client) handler.MapFunc { + return func(ctx context.Context, obj client.Object) []reconcile.Request { + node, ok := obj.(*corev1.Node) + if !ok || node == nil { + return nil + } + + // 1. Find RSPs where this node is already in EligibleNodes (for update/removal). + var byIndex v1alpha1.ReplicatedStoragePoolList + if err := cl.List(ctx, &byIndex, + client.MatchingFields{indexes.IndexFieldRSPByEligibleNodeName: node.Name}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return nil + } + + // 2. Find all RSPs to check if node could be added. + var all v1alpha1.ReplicatedStoragePoolList + if err := cl.List(ctx, &all, client.UnsafeDisableDeepCopy); err != nil { + return nil + } + + // Collect unique RSP names that need reconciliation. + seen := make(map[string]struct{}, len(byIndex.Items)+len(all.Items)) + + // Add RSPs where node is already tracked. + for i := range byIndex.Items { + name := byIndex.Items[i].Name + seen[name] = struct{}{} + } + + // Add RSPs where node matches selector/zones (potential addition). + nodeLabels := labels.Set(node.Labels) + nodeZone := node.Labels[corev1.LabelTopologyZone] + for i := range all.Items { + rsp := &all.Items[i] + if _, exists := seen[rsp.Name]; exists { + continue // Already included. + } + if nodeMatchesRSP(rsp, nodeLabels, nodeZone) { + seen[rsp.Name] = struct{}{} + } + } + + // Build requests. + requests := make([]reconcile.Request, 0, len(seen)) + for name := range seen { + requests = append(requests, reconcile.Request{ + NamespacedName: client.ObjectKey{Name: name}, + }) + } + return requests + } +} + +// nodeMatchesRSP checks if a node could potentially be added to RSP's EligibleNodes. +// This is a quick check based on NodeLabelSelector and Zones. +func nodeMatchesRSP(rsp *v1alpha1.ReplicatedStoragePool, nodeLabels labels.Set, nodeZone string) bool { + // Check zones filter. + if len(rsp.Spec.Zones) > 0 && !slices.Contains(rsp.Spec.Zones, nodeZone) { + return false + } + + // Check NodeLabelSelector. + if rsp.Spec.NodeLabelSelector == nil { + return true + } + + selector, err := metav1.LabelSelectorAsSelector(rsp.Spec.NodeLabelSelector) + if err != nil { + return true // Be conservative: if we can't parse, trigger reconciliation. 
+ } + + return selector.Matches(nodeLabels) +} + +// mapLVGToRSP maps an LVMVolumeGroup to all ReplicatedStoragePool resources that reference it. +func mapLVGToRSP(cl client.Client) handler.MapFunc { + return func(ctx context.Context, obj client.Object) []reconcile.Request { + lvg, ok := obj.(*snc.LVMVolumeGroup) + if !ok || lvg == nil { + return nil + } + + // Find all RSPs that reference this LVG (using index). + var rspList v1alpha1.ReplicatedStoragePoolList + if err := cl.List(ctx, &rspList, client.MatchingFields{ + indexes.IndexFieldRSPByLVMVolumeGroupName: lvg.Name, + }); err != nil { + return nil + } + + requests := make([]reconcile.Request, 0, len(rspList.Items)) + for i := range rspList.Items { + requests = append(requests, reconcile.Request{ + NamespacedName: client.ObjectKeyFromObject(&rspList.Items[i]), + }) + } + return requests + } +} + +// mapAgentPodToRSP maps an agent pod to ReplicatedStoragePool resources +// where the pod's node is in EligibleNodes. +func mapAgentPodToRSP(cl client.Client, podNamespace string) handler.MapFunc { + return func(ctx context.Context, obj client.Object) []reconcile.Request { + pod, ok := obj.(*corev1.Pod) + if !ok || pod == nil { + return nil + } + + // Only handle pods in the agent namespace with the agent label. + if pod.Namespace != podNamespace { + return nil + } + if pod.Labels["app"] != "agent" { + return nil + } + + nodeName := pod.Spec.NodeName + if nodeName == "" { + return nil // Pod not yet scheduled. + } + + // Only reconcile RSPs where this node is in EligibleNodes. + var rspList v1alpha1.ReplicatedStoragePoolList + if err := cl.List(ctx, &rspList, client.MatchingFields{ + indexes.IndexFieldRSPByEligibleNodeName: nodeName, + }); err != nil { + return nil + } + + requests := make([]reconcile.Request, 0, len(rspList.Items)) + for i := range rspList.Items { + requests = append(requests, reconcile.Request{ + NamespacedName: client.ObjectKeyFromObject(&rspList.Items[i]), + }) + } + return requests + } +} diff --git a/images/controller/internal/controllers/rsp_controller/controller_test.go b/images/controller/internal/controllers/rsp_controller/controller_test.go new file mode 100644 index 000000000..7b36a5b88 --- /dev/null +++ b/images/controller/internal/controllers/rsp_controller/controller_test.go @@ -0,0 +1,642 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rspcontroller + +import ( + "context" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" +) + +func requestNames(requests []reconcile.Request) []string { + names := make([]string, 0, len(requests)) + for _, r := range requests { + names = append(names, r.Name) + } + return names +} + +var _ = Describe("mapNodeToRSP", func() { + var scheme *runtime.Scheme + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + }) + + It("returns RSPs where node is in EligibleNodes or could be added", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + } + rsp1 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + // rsp2 has no filtering criteria, so any node could potentially be added. + rsp2 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-2"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-2"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(node, rsp1, rsp2), + ).Build() + + mapFunc := mapNodeToRSP(cl) + requests := mapFunc(context.Background(), node) + + // Both RSPs returned: rsp-1 (node in EligibleNodes) and rsp-2 (no selector, any node matches). + Expect(requestNames(requests)).To(ConsistOf("rsp-1", "rsp-2")) + }) + + It("returns RSPs where node matches NodeLabelSelector and Zones", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + "env": "prod", + corev1.LabelTopologyZone: "zone-a", + }, + }, + } + rspMatches := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-matches"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-a"}, + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"env": "prod"}, + }, + }, + } + rspWrongZone := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-wrong-zone"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-b"}, + }, + } + rspWrongSelector := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-wrong-selector"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"env": "dev"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). 
+ WithObjects(node, rspMatches, rspWrongZone, rspWrongSelector), + ).Build() + + mapFunc := mapNodeToRSP(cl) + requests := mapFunc(context.Background(), node) + + Expect(requestNames(requests)).To(ConsistOf("rsp-matches")) + }) + + It("deduplicates RSPs when node is in eligibleNodes AND matches selector", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{ + "env": "prod", + corev1.LabelTopologyZone: "zone-a", + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-a"}, + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"env": "prod"}, + }, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(node, rsp), + ).Build() + + mapFunc := mapNodeToRSP(cl) + requests := mapFunc(context.Background(), node) + + // Should appear only once despite matching both index and selector + Expect(requests).To(HaveLen(1)) + Expect(requests[0].Name).To(Equal("rsp-1")) + }) + + It("returns empty when node matches nothing", func() { + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-orphan", + Labels: map[string]string{ + corev1.LabelTopologyZone: "zone-x", + }, + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-a"}, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "other-node"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). 
+ WithObjects(node, rsp), + ).Build() + + mapFunc := mapNodeToRSP(cl) + requests := mapFunc(context.Background(), node) + + Expect(requests).To(BeEmpty()) + }) + + It("returns nil for non-Node object", func() { + cl := fake.NewClientBuilder().WithScheme(scheme).Build() + + mapFunc := mapNodeToRSP(cl) + requests := mapFunc(context.Background(), &v1alpha1.ReplicatedStoragePool{}) + + Expect(requests).To(BeNil()) + }) + + It("returns nil for nil object", func() { + cl := fake.NewClientBuilder().WithScheme(scheme).Build() + + mapFunc := mapNodeToRSP(cl) + requests := mapFunc(context.Background(), nil) + + Expect(requests).To(BeNil()) + }) +}) + +var _ = Describe("nodeMatchesRSP", func() { + It("returns true when RSP has no zones and no selector (matches all)", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{}, + } + nodeLabels := labels.Set{"any": "label"} + nodeZone := "any-zone" + + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeTrue()) + }) + + It("returns true when node is in RSP zones", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-a", "zone-b"}, + }, + } + nodeLabels := labels.Set{} + nodeZone := "zone-a" + + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeTrue()) + }) + + It("returns false when node is not in RSP zones", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-a", "zone-b"}, + }, + } + nodeLabels := labels.Set{} + nodeZone := "zone-c" + + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeFalse()) + }) + + It("returns true when node matches selector", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"env": "prod"}, + }, + }, + } + nodeLabels := labels.Set{"env": "prod", "other": "value"} + nodeZone := "" + + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeTrue()) + }) + + It("returns false when node does not match selector", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"env": "prod"}, + }, + }, + } + nodeLabels := labels.Set{"env": "dev"} + nodeZone := "" + + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeFalse()) + }) + + It("returns true for invalid selector (conservative)", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + NodeLabelSelector: &metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + { + Key: "key", + Operator: "InvalidOperator", // Invalid operator + Values: []string{"value"}, + }, + }, + }, + }, + } + nodeLabels := labels.Set{"any": "label"} + nodeZone := "" + + // Should return true (be conservative) if selector cannot be parsed + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeTrue()) + }) + + It("returns true when both zones and selector match", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-a"}, + NodeLabelSelector: 
&metav1.LabelSelector{ + MatchLabels: map[string]string{"env": "prod"}, + }, + }, + } + nodeLabels := labels.Set{"env": "prod"} + nodeZone := "zone-a" + + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeTrue()) + }) + + It("returns false when zones match but selector does not", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Zones: []string{"zone-a"}, + NodeLabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"env": "prod"}, + }, + }, + } + nodeLabels := labels.Set{"env": "dev"} + nodeZone := "zone-a" + + Expect(nodeMatchesRSP(rsp, nodeLabels, nodeZone)).To(BeFalse()) + }) +}) + +var _ = Describe("mapLVGToRSP", func() { + var scheme *runtime.Scheme + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(snc.AddToScheme(scheme)).To(Succeed()) + }) + + It("returns RSPs that reference LVG via spec.lvmVolumeGroups", func() { + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + } + rsp1 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + }, + } + rsp2 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-2"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + {Name: "lvg-2"}, + }, + }, + } + rspOther := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-other"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-other"}, + }, + }, + } + + cl := testhelpers.WithRSPByLVMVolumeGroupNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(lvg, rsp1, rsp2, rspOther), + ).Build() + + mapFunc := mapLVGToRSP(cl) + requests := mapFunc(context.Background(), lvg) + + Expect(requestNames(requests)).To(ConsistOf("rsp-1", "rsp-2")) + }) + + It("returns empty when no RSPs reference LVG", func() { + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-orphan"}, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-other"}, + }, + }, + } + + cl := testhelpers.WithRSPByLVMVolumeGroupNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). 
+ WithObjects(lvg, rsp), + ).Build() + + mapFunc := mapLVGToRSP(cl) + requests := mapFunc(context.Background(), lvg) + + Expect(requests).To(BeEmpty()) + }) + + It("returns nil for non-LVG object", func() { + cl := fake.NewClientBuilder().WithScheme(scheme).Build() + + mapFunc := mapLVGToRSP(cl) + requests := mapFunc(context.Background(), &corev1.Node{}) + + Expect(requests).To(BeNil()) + }) + + It("returns nil for nil object", func() { + cl := fake.NewClientBuilder().WithScheme(scheme).Build() + + mapFunc := mapLVGToRSP(cl) + requests := mapFunc(context.Background(), nil) + + Expect(requests).To(BeNil()) + }) +}) + +var _ = Describe("mapAgentPodToRSP", func() { + var scheme *runtime.Scheme + const testNamespace = "d8-sds-replicated-volume" + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + }) + + It("returns RSPs where pod's node is in EligibleNodes", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-pod-1", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + Spec: corev1.PodSpec{ + NodeName: "node-1", + }, + } + rsp1 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + rsp2 := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-2"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-2"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(pod, rsp1, rsp2), + ).Build() + + mapFunc := mapAgentPodToRSP(cl, testNamespace) + requests := mapFunc(context.Background(), pod) + + Expect(requestNames(requests)).To(ConsistOf("rsp-1")) + }) + + It("returns nil for pod in wrong namespace", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-pod-1", + Namespace: "other-namespace", + Labels: map[string]string{"app": "agent"}, + }, + Spec: corev1.PodSpec{ + NodeName: "node-1", + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(pod, rsp), + ).Build() + + mapFunc := mapAgentPodToRSP(cl, testNamespace) + requests := mapFunc(context.Background(), pod) + + Expect(requests).To(BeNil()) + }) + + It("returns nil for pod without app=agent label", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "other-pod", + Namespace: testNamespace, + Labels: map[string]string{"app": "other"}, + }, + Spec: corev1.PodSpec{ + NodeName: "node-1", + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). 
+ WithObjects(pod, rsp), + ).Build() + + mapFunc := mapAgentPodToRSP(cl, testNamespace) + requests := mapFunc(context.Background(), pod) + + Expect(requests).To(BeNil()) + }) + + It("returns nil for unscheduled pod (empty NodeName)", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-pod-unscheduled", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + Spec: corev1.PodSpec{ + NodeName: "", // Not yet scheduled + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(pod, rsp), + ).Build() + + mapFunc := mapAgentPodToRSP(cl, testNamespace) + requests := mapFunc(context.Background(), pod) + + Expect(requests).To(BeNil()) + }) + + It("returns nil for non-Pod object", func() { + cl := fake.NewClientBuilder().WithScheme(scheme).Build() + + mapFunc := mapAgentPodToRSP(cl, testNamespace) + requests := mapFunc(context.Background(), &corev1.Node{}) + + Expect(requests).To(BeNil()) + }) + + It("returns nil for nil object", func() { + cl := fake.NewClientBuilder().WithScheme(scheme).Build() + + mapFunc := mapAgentPodToRSP(cl, testNamespace) + requests := mapFunc(context.Background(), nil) + + Expect(requests).To(BeNil()) + }) + + It("returns empty when node is not in any RSP eligibleNodes", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-pod-1", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + Spec: corev1.PodSpec{ + NodeName: "orphan-node", + }, + } + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "other-node"}, + }, + }, + } + + cl := testhelpers.WithRSPByEligibleNodeNameIndex( + fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(pod, rsp), + ).Build() + + mapFunc := mapAgentPodToRSP(cl, testNamespace) + requests := mapFunc(context.Background(), pod) + + Expect(requests).To(BeEmpty()) + }) +}) diff --git a/images/controller/internal/controllers/rsp_controller/predicates.go b/images/controller/internal/controllers/rsp_controller/predicates.go new file mode 100644 index 000000000..fa43016a8 --- /dev/null +++ b/images/controller/internal/controllers/rsp_controller/predicates.go @@ -0,0 +1,202 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rspcontroller + +import ( + "maps" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + nodeutil "k8s.io/component-helpers/node/util" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// rspPredicates returns predicates for ReplicatedStoragePool events. +// Filters to only react to generation changes (spec updates). +func rspPredicates() []predicate.Predicate { + return []predicate.Predicate{predicate.GenerationChangedPredicate{}} +} + +// nodePredicates returns predicates for Node events. +// Filters to only react to: +// - Label changes (for zone and node matching) +// - Ready condition changes +// - spec.unschedulable changes +func nodePredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + oldNode, okOld := e.ObjectOld.(*corev1.Node) + newNode, okNew := e.ObjectNew.(*corev1.Node) + if !okOld || !okNew || oldNode == nil || newNode == nil { + return true + } + + // Any label change (for zone and node matching). + if !maps.Equal(e.ObjectOld.GetLabels(), e.ObjectNew.GetLabels()) { + return true + } + + // Ready condition change. + _, oldReady := nodeutil.GetNodeCondition(&oldNode.Status, corev1.NodeReady) + _, newReady := nodeutil.GetNodeCondition(&newNode.Status, corev1.NodeReady) + if (oldReady == nil) != (newReady == nil) || + (oldReady != nil && newReady != nil && oldReady.Status != newReady.Status) { + return true + } + + // spec.unschedulable change. + if oldNode.Spec.Unschedulable != newNode.Spec.Unschedulable { + return true + } + + return false + }, + }, + } +} + +// lvgPredicates returns predicates for LVMVolumeGroup events. +// Filters to only react to: +// - Generation changes (spec updates, including spec.local.nodeName) +// - Unschedulable annotation changes +// - Ready condition status changes +// - ThinPools[].Ready status changes +func lvgPredicates() []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + // Generation change (spec updates). + if e.ObjectNew.GetGeneration() != e.ObjectOld.GetGeneration() { + return true + } + + oldLVG, okOld := e.ObjectOld.(*snc.LVMVolumeGroup) + newLVG, okNew := e.ObjectNew.(*snc.LVMVolumeGroup) + if !okOld || !okNew || oldLVG == nil || newLVG == nil { + return true + } + + // Unschedulable annotation change. + _, oldUnschedulable := oldLVG.Annotations[v1alpha1.LVMVolumeGroupUnschedulableAnnotationKey] + _, newUnschedulable := newLVG.Annotations[v1alpha1.LVMVolumeGroupUnschedulableAnnotationKey] + if oldUnschedulable != newUnschedulable { + return true + } + + // Ready condition status change. + if lvgReadyConditionStatus(oldLVG) != lvgReadyConditionStatus(newLVG) { + return true + } + + // ThinPools[].Ready status change. + if !areThinPoolsReadyEqual(oldLVG.Status.ThinPools, newLVG.Status.ThinPools) { + return true + } + + return false + }, + }, + } +} + +// lvgReadyConditionStatus returns the status of the Ready condition on an LVG. 
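+// When the Ready condition is absent, metav1.ConditionUnknown is returned, so the LVG update
+// predicate treats a missing condition the same as an explicitly Unknown one.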
+func lvgReadyConditionStatus(lvg *snc.LVMVolumeGroup) metav1.ConditionStatus { + if cond := meta.FindStatusCondition(lvg.Status.Conditions, "Ready"); cond != nil { + return cond.Status + } + return metav1.ConditionUnknown +} + +// areThinPoolsReadyEqual compares only the Ready field of thin pools by name. +func areThinPoolsReadyEqual(oldPools, newPools []snc.LVMVolumeGroupThinPoolStatus) bool { + // Build map of name -> ready for old thin pools. + oldReady := make(map[string]bool, len(oldPools)) + for _, tp := range oldPools { + oldReady[tp.Name] = tp.Ready + } + + // Check new thin pools against old. + if len(oldPools) != len(newPools) { + return false + } + for _, tp := range newPools { + if oldReady[tp.Name] != tp.Ready { + return false + } + } + return true +} + +// agentPodPredicates returns predicates for agent Pod events. +// Filters to only react to: +// - Pods in the specified namespace with label app=agent +// - Ready condition changes +// - Create/Delete events +func agentPodPredicates(podNamespace string) []predicate.Predicate { + return []predicate.Predicate{ + predicate.TypedFuncs[client.Object]{ + CreateFunc: func(e event.TypedCreateEvent[client.Object]) bool { + pod, ok := e.Object.(*corev1.Pod) + if !ok || pod == nil { + return true // Be conservative on type assertion failure. + } + return pod.Namespace == podNamespace && pod.Labels["app"] == "agent" + }, + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + oldPod, okOld := e.ObjectOld.(*corev1.Pod) + newPod, okNew := e.ObjectNew.(*corev1.Pod) + if !okOld || !okNew || oldPod == nil || newPod == nil { + return true // Be conservative on type assertion failure. + } + + // Only care about agent pods in the target namespace. + if newPod.Namespace != podNamespace || newPod.Labels["app"] != "agent" { + return false + } + + // React to Ready condition changes. + oldReady := isPodReady(oldPod) + newReady := isPodReady(newPod) + return oldReady != newReady + }, + DeleteFunc: func(e event.TypedDeleteEvent[client.Object]) bool { + pod, ok := e.Object.(*corev1.Pod) + if !ok || pod == nil { + return true // Be conservative on type assertion failure. + } + return pod.Namespace == podNamespace && pod.Labels["app"] == "agent" + }, + }, + } +} + +// isPodReady checks if a pod has the Ready condition set to True. +func isPodReady(pod *corev1.Pod) bool { + for _, cond := range pod.Status.Conditions { + if cond.Type == corev1.PodReady { + return cond.Status == corev1.ConditionTrue + } + } + return false +} diff --git a/images/controller/internal/controllers/rsp_controller/predicates_test.go b/images/controller/internal/controllers/rsp_controller/predicates_test.go new file mode 100644 index 000000000..225ef400d --- /dev/null +++ b/images/controller/internal/controllers/rsp_controller/predicates_test.go @@ -0,0 +1,457 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rspcontroller + +import ( + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +var _ = Describe("nodePredicates", func() { + var predicates []predicate.Predicate + + BeforeEach(func() { + predicates = nodePredicates() + }) + + It("returns true for label change", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{"zone": "a"}, + }, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{"zone": "b"}, + }, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns true for Ready condition status change", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + Status: corev1.NodeStatus{ + Conditions: []corev1.NodeCondition{ + {Type: corev1.NodeReady, Status: corev1.ConditionTrue}, + }, + }, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + Status: corev1.NodeStatus{ + Conditions: []corev1.NodeCondition{ + {Type: corev1.NodeReady, Status: corev1.ConditionFalse}, + }, + }, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns true for spec.unschedulable change", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + Spec: corev1.NodeSpec{Unschedulable: false}, + } + newNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + Spec: corev1.NodeSpec{Unschedulable: true}, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns false when none of the relevant fields changed", func() { + oldNode := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + Status: corev1.NodeStatus{ + Conditions: []corev1.NodeCondition{ + {Type: corev1.NodeReady, Status: corev1.ConditionTrue}, + }, + }, + } + newNode := oldNode.DeepCopy() + newNode.ResourceVersion = "2" // Only resource version changed. 
+ + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldNode, + ObjectNew: newNode, + } + + result := predicates[0].Update(e) + Expect(result).To(BeFalse()) + }) +}) + +var _ = Describe("lvgPredicates", func() { + var predicates []predicate.Predicate + + BeforeEach(func() { + predicates = lvgPredicates() + }) + + It("returns true for generation change", func() { + oldLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 1}, + } + newLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 2}, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldLVG, + ObjectNew: newLVG, + } + + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns true for unschedulable annotation change", func() { + oldLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 1}, + } + newLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{ + Name: "lvg-1", + Generation: 1, + Annotations: map[string]string{ + v1alpha1.LVMVolumeGroupUnschedulableAnnotationKey: "", + }, + }, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldLVG, + ObjectNew: newLVG, + } + + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns true for Ready condition status change", func() { + oldLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 1}, + Status: snc.LVMVolumeGroupStatus{ + Conditions: []metav1.Condition{ + {Type: "Ready", Status: metav1.ConditionFalse}, + }, + }, + } + newLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 1}, + Status: snc.LVMVolumeGroupStatus{ + Conditions: []metav1.Condition{ + {Type: "Ready", Status: metav1.ConditionTrue}, + }, + }, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldLVG, + ObjectNew: newLVG, + } + + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns true for ThinPools[].Ready change", func() { + oldLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 1}, + Status: snc.LVMVolumeGroupStatus{ + ThinPools: []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: false}, + }, + }, + } + newLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 1}, + Status: snc.LVMVolumeGroupStatus{ + ThinPools: []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: true}, + }, + }, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldLVG, + ObjectNew: newLVG, + } + + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns false when none of above changed", func() { + oldLVG := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1", Generation: 1}, + } + newLVG := oldLVG.DeepCopy() + newLVG.ResourceVersion = "2" + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldLVG, + ObjectNew: newLVG, + } + + result := predicates[0].Update(e) + Expect(result).To(BeFalse()) + }) +}) + +var _ = Describe("agentPodPredicates", func() { + var predicates []predicate.Predicate + const testNamespace = "test-namespace" + + BeforeEach(func() { + predicates = agentPodPredicates(testNamespace) + }) + + Context("CreateFunc", func() { + It("returns true for agent pod in namespace", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-abc", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + } + + e := 
event.TypedCreateEvent[client.Object]{Object: pod} + result := predicates[0].Create(e) + Expect(result).To(BeTrue()) + }) + + It("returns false for non-agent pod", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "other-pod", + Namespace: testNamespace, + Labels: map[string]string{"app": "other"}, + }, + } + + e := event.TypedCreateEvent[client.Object]{Object: pod} + result := predicates[0].Create(e) + Expect(result).To(BeFalse()) + }) + + It("returns false for pod in wrong namespace", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-abc", + Namespace: "other-namespace", + Labels: map[string]string{"app": "agent"}, + }, + } + + e := event.TypedCreateEvent[client.Object]{Object: pod} + result := predicates[0].Create(e) + Expect(result).To(BeFalse()) + }) + }) + + Context("UpdateFunc", func() { + It("returns true when Ready condition changes", func() { + oldPod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-abc", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{ + {Type: corev1.PodReady, Status: corev1.ConditionFalse}, + }, + }, + } + newPod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-abc", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{ + {Type: corev1.PodReady, Status: corev1.ConditionTrue}, + }, + }, + } + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldPod, + ObjectNew: newPod, + } + result := predicates[0].Update(e) + Expect(result).To(BeTrue()) + }) + + It("returns false when Ready unchanged", func() { + oldPod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-abc", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{ + {Type: corev1.PodReady, Status: corev1.ConditionTrue}, + }, + }, + } + newPod := oldPod.DeepCopy() + newPod.ResourceVersion = "2" + + e := event.TypedUpdateEvent[client.Object]{ + ObjectOld: oldPod, + ObjectNew: newPod, + } + result := predicates[0].Update(e) + Expect(result).To(BeFalse()) + }) + }) + + Context("DeleteFunc", func() { + It("returns true for agent pod", func() { + pod := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "agent-abc", + Namespace: testNamespace, + Labels: map[string]string{"app": "agent"}, + }, + } + + e := event.TypedDeleteEvent[client.Object]{Object: pod} + result := predicates[0].Delete(e) + Expect(result).To(BeTrue()) + }) + }) +}) + +var _ = Describe("Helper functions", func() { + Describe("isPodReady", func() { + It("returns true when PodReady=True", func() { + pod := &corev1.Pod{ + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{ + {Type: corev1.PodReady, Status: corev1.ConditionTrue}, + }, + }, + } + Expect(isPodReady(pod)).To(BeTrue()) + }) + + It("returns false when PodReady=False", func() { + pod := &corev1.Pod{ + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{ + {Type: corev1.PodReady, Status: corev1.ConditionFalse}, + }, + }, + } + Expect(isPodReady(pod)).To(BeFalse()) + }) + + It("returns false when no PodReady condition", func() { + pod := &corev1.Pod{} + Expect(isPodReady(pod)).To(BeFalse()) + }) + }) + + Describe("lvgReadyConditionStatus", func() { + It("returns status when Ready condition exists", func() { + lvg := &snc.LVMVolumeGroup{ + Status: snc.LVMVolumeGroupStatus{ + Conditions: 
[]metav1.Condition{ + {Type: "Ready", Status: metav1.ConditionTrue}, + }, + }, + } + Expect(lvgReadyConditionStatus(lvg)).To(Equal(metav1.ConditionTrue)) + }) + + It("returns Unknown when no Ready condition", func() { + lvg := &snc.LVMVolumeGroup{} + Expect(lvgReadyConditionStatus(lvg)).To(Equal(metav1.ConditionUnknown)) + }) + }) + + Describe("areThinPoolsReadyEqual", func() { + It("returns true for equal Ready states", func() { + oldPools := []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: true}, + } + newPools := []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: true}, + } + Expect(areThinPoolsReadyEqual(oldPools, newPools)).To(BeTrue()) + }) + + It("returns false for different Ready states", func() { + oldPools := []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: false}, + } + newPools := []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: true}, + } + Expect(areThinPoolsReadyEqual(oldPools, newPools)).To(BeFalse()) + }) + + It("returns false for different length", func() { + oldPools := []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: true}, + } + newPools := []snc.LVMVolumeGroupThinPoolStatus{ + {Name: "tp-1", Ready: true}, + {Name: "tp-2", Ready: true}, + } + Expect(areThinPoolsReadyEqual(oldPools, newPools)).To(BeFalse()) + }) + }) +}) diff --git a/images/controller/internal/controllers/rsp_controller/reconciler.go b/images/controller/internal/controllers/rsp_controller/reconciler.go new file mode 100644 index 000000000..760b5f140 --- /dev/null +++ b/images/controller/internal/controllers/rsp_controller/reconciler.go @@ -0,0 +1,676 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rspcontroller + +import ( + "context" + "errors" + "fmt" + "sort" + "time" + + "github.com/go-logr/logr" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/selection" + nodeutil "k8s.io/component-helpers/node/util" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/reconciliation/flow" +) + +// ────────────────────────────────────────────────────────────────────────────── +// Wiring / construction +// + +// Reconciler reconciles ReplicatedStoragePool resources. +// It calculates EligibleNodes based on LVMVolumeGroups, Nodes, and agent pod status. +type Reconciler struct { + cl client.Client + log logr.Logger + agentPodNamespace string +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +// NewReconciler creates a new RSP reconciler. +// agentPodNamespace is the namespace where agent pods are deployed (used for AgentReady status). 
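+//
+// Minimal wiring sketch (mirrors BuildController; watches and predicates omitted, mgr/podNamespace assumed to come from the caller):
+//
+//	rec := NewReconciler(mgr.GetClient(), mgr.GetLogger().WithName(RSPControllerName), podNamespace)
+//	err := builder.ControllerManagedBy(mgr).Named(RSPControllerName).For(&v1alpha1.ReplicatedStoragePool{}).Complete(rec)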
+func NewReconciler(cl client.Client, log logr.Logger, agentPodNamespace string) *Reconciler { + return &Reconciler{ + cl: cl, + log: log, + agentPodNamespace: agentPodNamespace, + } +} + +// ────────────────────────────────────────────────────────────────────────────── +// Reconcile +// + +// Reconcile pattern: In-place reconciliation +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + rf := flow.BeginRootReconcile(ctx) + + // Get RSP. + rsp, err := r.getRSP(rf.Ctx(), req.Name) + if err != nil { + if apierrors.IsNotFound(err) { + return rf.Done().ToCtrl() + } + return rf.Fail(err).ToCtrl() + } + + // Take patch base before mutations. + base := rsp.DeepCopy() + + // Get LVGs referenced by RSP. + lvgs, lvgsNotFoundErr, err := r.getLVGsByRSP(rf.Ctx(), rsp) + if err != nil { + return rf.Fail(err).ToCtrl() + } + + // Cannot calculate eligible nodes if LVGs are missing. + // Set condition and keep old eligible nodes. + if lvgsNotFoundErr != nil { + if applyReadyCondFalse(rsp, + v1alpha1.ReplicatedStoragePoolCondReadyReasonLVMVolumeGroupNotFound, + fmt.Sprintf("Some LVMVolumeGroups not found: %v", lvgsNotFoundErr), + ) { + return rf.DoneOrFail(r.patchRSPStatus(rf.Ctx(), rsp, base, false)).ToCtrl() + } + + return rf.Done().ToCtrl() + } + + // Validate RSP and LVGs are correctly configured. + if err := validateRSPAndLVGs(rsp, lvgs); err != nil { + if applyReadyCondFalse(rsp, + v1alpha1.ReplicatedStoragePoolCondReadyReasonInvalidLVMVolumeGroup, + fmt.Sprintf("RSP/LVG validation failed: %v", err), + ) { + return rf.DoneOrFail(r.patchRSPStatus(rf.Ctx(), rsp, base, false)).ToCtrl() + } + + return rf.Done().ToCtrl() + } + + nodeSelector := labels.Everything() + + // Validate NodeLabelSelector if present. + if rsp.Spec.NodeLabelSelector != nil { + selector, err := metav1.LabelSelectorAsSelector(rsp.Spec.NodeLabelSelector) + + if err != nil { + if applyReadyCondFalse(rsp, + v1alpha1.ReplicatedStoragePoolCondReadyReasonInvalidNodeLabelSelector, + fmt.Sprintf("Invalid NodeLabelSelector: %v", err), + ) { + return rf.DoneOrFail(r.patchRSPStatus(rf.Ctx(), rsp, base, false)).ToCtrl() + } + return rf.Done().ToCtrl() + } + + reqs, _ := selector.Requirements() + nodeSelector = nodeSelector.Add(reqs...) + } + + if len(rsp.Spec.Zones) > 0 { + req, err := labels.NewRequirement(corev1.LabelTopologyZone, selection.In, rsp.Spec.Zones) + if err != nil { + if applyReadyCondFalse(rsp, + v1alpha1.ReplicatedStoragePoolCondReadyReasonInvalidNodeLabelSelector, + fmt.Sprintf("Invalid Zones: %v", err), + ) { + return rf.DoneOrFail(r.patchRSPStatus(rf.Ctx(), rsp, base, false)).ToCtrl() + } + return rf.Done().ToCtrl() + } + nodeSelector = nodeSelector.Add(*req) + } + + // Get all nodes matching selector. + nodes, err := r.getSortedNodes(rf.Ctx(), nodeSelector) + if err != nil { + return rf.Fail(err).ToCtrl() + } + + // Get agent readiness per node. + agentReadyByNode, err := r.getAgentReadiness(rf.Ctx()) + if err != nil { + return rf.Fail(err).ToCtrl() + } + + eligibleNodes, worldStateExpiresAt := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + // Apply changes to status. + changed := applyEligibleNodesAndIncrementRevisionIfChanged(rsp, eligibleNodes) + + // Set condition to success. 
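+	// applyReadyCondTrue is placed on the left of `||` so it always runs and refreshes the Ready
+	// condition even when the eligible nodes list itself did not change.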
+ changed = applyReadyCondTrue(rsp, + v1alpha1.ReplicatedStoragePoolCondReadyReasonReady, + fmt.Sprintf("Eligible nodes calculated successfully: %d nodes", len(eligibleNodes)), + ) || changed + + if changed { + if err := r.patchRSPStatus(rf.Ctx(), rsp, base, true); err != nil { + return rf.Fail(err).ToCtrl() + } + } + + // Schedule requeue when grace period will expire, even if nothing changed. + // This ensures nodes beyond grace period will be removed from EligibleNodes. + if worldStateExpiresAt != nil { + return rf.RequeueAfter(time.Until(*worldStateExpiresAt)).ToCtrl() + } + + return rf.Done().ToCtrl() +} + +// ────────────────────────────────────────────────────────────────────────────── +// Helpers: Reconcile (non-I/O) +// + +// --- Compute helpers --- + +// computeActualEligibleNodes computes the list of eligible nodes for an RSP. +// +// IMPORTANT: The nodes slice must be pre-filtered by the caller (Reconcile) to include +// only nodes matching RSP's NodeLabelSelector and Zones. This function does NOT perform +// zone/label filtering - it assumes all passed nodes are potential candidates. +// +// It also returns worldStateExpiresAt - the earliest time when a node's grace period +// will expire and the eligible nodes list may change. Returns nil if no expiration is needed. +func computeActualEligibleNodes( + rsp *v1alpha1.ReplicatedStoragePool, + lvgs map[string]lvgView, + nodes []nodeView, + agentReadyByNode map[string]bool, +) (eligibleNodes []v1alpha1.ReplicatedStoragePoolEligibleNode, worldStateExpiresAt *time.Time) { + // Build LVG lookup by node name. + lvgByNode := buildLVGByNodeMap(lvgs, rsp) + + // Get grace period for not-ready nodes from spec. + gracePeriod := rsp.Spec.EligibleNodesPolicy.NotReadyGracePeriod.Duration + + result := make([]v1alpha1.ReplicatedStoragePoolEligibleNode, 0) + var earliestExpiration *time.Time + + for i := range nodes { + node := &nodes[i] + + // Check node readiness and grace period. + nodeReady, notReadyBeyondGrace, graceExpiresAt := isNodeReadyOrWithinGrace(node.ready, gracePeriod) + if notReadyBeyondGrace { + // Node has been not-ready beyond grace period - exclude from eligible nodes. + continue + } + + // Track earliest grace period expiration for NotReady nodes within grace. + if !nodeReady && !graceExpiresAt.IsZero() { + if earliestExpiration == nil || graceExpiresAt.Before(*earliestExpiration) { + earliestExpiration = &graceExpiresAt + } + } + + // Get LVGs for this node (may be empty for client-only/tiebreaker nodes). + nodeLVGs := lvgByNode[node.name] + + // Build eligible node entry. + eligibleNode := v1alpha1.ReplicatedStoragePoolEligibleNode{ + NodeName: node.name, + ZoneName: node.zoneName, + NodeReady: nodeReady, + Unschedulable: node.unschedulable, + LVMVolumeGroups: nodeLVGs, + AgentReady: agentReadyByNode[node.name], + } + + result = append(result, eligibleNode) + } + + // Result is already sorted by node name because nodes are pre-sorted by getSortedNodes. + return result, earliestExpiration +} + +// ────────────────────────────────────────────────────────────────────────────── +// View types +// + +// nodeView is a lightweight read-only snapshot of Node fields needed for RSP reconciliation. +// It is safe to use with UnsafeDisableDeepCopy because it copies only scalar values. +type nodeView struct { + name string + zoneName string + unschedulable bool + ready nodeViewReady +} + +// nodeViewReady contains Ready condition state needed for grace period calculation. 
+type nodeViewReady struct { + status bool // True if Ready condition is True + hasCondition bool // True if Ready condition exists + lastTransitionTime time.Time // For grace period calculation +} + +// newNodeView creates a nodeView from a Node. +// The unsafeNode may come from cache without DeepCopy; nodeView copies only the needed scalar fields. +func newNodeView(unsafeNode *corev1.Node) nodeView { + view := nodeView{ + name: unsafeNode.Name, + zoneName: unsafeNode.Labels[corev1.LabelTopologyZone], + unschedulable: unsafeNode.Spec.Unschedulable, + } + + _, readyCond := nodeutil.GetNodeCondition(&unsafeNode.Status, corev1.NodeReady) + if readyCond != nil { + view.ready = nodeViewReady{ + hasCondition: true, + status: readyCond.Status == corev1.ConditionTrue, + lastTransitionTime: readyCond.LastTransitionTime.Time, + } + } + + return view +} + +// lvgView is a lightweight read-only snapshot of LVMVolumeGroup fields needed for RSP reconciliation. +// It is safe to use with UnsafeDisableDeepCopy because it copies only scalar values and small maps. +type lvgView struct { + name string + nodeName string + unschedulable bool + ready bool // Ready condition status + specThinPoolNames map[string]struct{} // set of thin pool names from spec + thinPoolReady map[string]struct{} // set of ready thin pool names from status +} + +// newLVGView creates an lvgView from an LVMVolumeGroup. +// The unsafeLVG may come from cache without DeepCopy; lvgView copies only the needed fields. +func newLVGView(unsafeLVG *snc.LVMVolumeGroup) lvgView { + view := lvgView{ + name: unsafeLVG.Name, + nodeName: unsafeLVG.Spec.Local.NodeName, + ready: meta.IsStatusConditionTrue(unsafeLVG.Status.Conditions, "Ready"), + } + + // Check unschedulable annotation. + _, view.unschedulable = unsafeLVG.Annotations[v1alpha1.LVMVolumeGroupUnschedulableAnnotationKey] + + // Copy spec thin pool names (for validation). + if len(unsafeLVG.Spec.ThinPools) > 0 { + view.specThinPoolNames = make(map[string]struct{}, len(unsafeLVG.Spec.ThinPools)) + for _, tp := range unsafeLVG.Spec.ThinPools { + view.specThinPoolNames[tp.Name] = struct{}{} + } + } + + // Copy status thin pool readiness (only ready thin pools). + if len(unsafeLVG.Status.ThinPools) > 0 { + view.thinPoolReady = make(map[string]struct{}, len(unsafeLVG.Status.ThinPools)) + for _, tp := range unsafeLVG.Status.ThinPools { + if tp.Ready { + view.thinPoolReady[tp.Name] = struct{}{} + } + } + } + + return view +} + +// --- Pure helpers --- + +// isLVGReady checks if an LVMVolumeGroup is ready. +// For LVM (no thin pool): checks if the LVG Ready condition is True. +// For LVMThin (with thin pool): checks if the LVG Ready condition is True AND +// the specific thin pool status.ready is true. +func isLVGReady(lvg *lvgView, thinPoolName string) bool { + // Check LVG Ready condition. + if !lvg.ready { + return false + } + + // If no thin pool specified (LVM type), LVG Ready condition is sufficient. + if thinPoolName == "" { + return true + } + + // For LVMThin, also check thin pool readiness. + _, ready := lvg.thinPoolReady[thinPoolName] + return ready +} + +// isNodeReadyOrWithinGrace checks node readiness and grace period status. 
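+// The grace period is taken from spec.eligibleNodesPolicy.notReadyGracePeriod by the caller.
+// Illustrative example: with a 5m grace period, a node whose Ready condition turned False 2m ago
+// is still within grace and graceExpiresAt is lastTransitionTime + 5m; once 5m have passed, it is excluded.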
+// Returns: +// - nodeReady: true if node is Ready +// - notReadyBeyondGrace: true if node is NotReady and beyond grace period (should be excluded) +// - graceExpiresAt: when the grace period will expire (zero if node is Ready or beyond grace) +func isNodeReadyOrWithinGrace(ready nodeViewReady, gracePeriod time.Duration) (nodeReady bool, notReadyBeyondGrace bool, graceExpiresAt time.Time) { + if !ready.hasCondition { + // No Ready condition - consider not ready but within grace (unknown state). + return false, false, time.Time{} + } + + if ready.status { + return true, false, time.Time{} + } + + // Node is not ready - check grace period. + graceExpiresAt = ready.lastTransitionTime.Add(gracePeriod) + if time.Now().After(graceExpiresAt) { + return false, true, time.Time{} // Beyond grace period. + } + + return false, false, graceExpiresAt // Within grace period. +} + +// --- Comparison helpers --- + +// areEligibleNodesEqual compares two eligible nodes slices for equality. +func areEligibleNodesEqual(a, b []v1alpha1.ReplicatedStoragePoolEligibleNode) bool { + if len(a) != len(b) { + return false + } + for i := range a { + if a[i].NodeName != b[i].NodeName || + a[i].ZoneName != b[i].ZoneName || + a[i].NodeReady != b[i].NodeReady || + a[i].Unschedulable != b[i].Unschedulable || + a[i].AgentReady != b[i].AgentReady { + return false + } + if !areLVGsEqual(a[i].LVMVolumeGroups, b[i].LVMVolumeGroups) { + return false + } + } + return true +} + +// areLVGsEqual compares two LVG slices for equality. +func areLVGsEqual(a, b []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup) bool { + if len(a) != len(b) { + return false + } + for i := range a { + if a[i].Name != b[i].Name || + a[i].ThinPoolName != b[i].ThinPoolName || + a[i].Unschedulable != b[i].Unschedulable || + a[i].Ready != b[i].Ready { + return false + } + } + return true +} + +// --- Apply helpers --- + +// applyReadyCondTrue sets the Ready condition to True. +// Returns true if the condition was changed. +func applyReadyCondTrue(rsp *v1alpha1.ReplicatedStoragePool, reason, message string) bool { + return objutilv1.SetStatusCondition(rsp, metav1.Condition{ + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionTrue, + Reason: reason, + Message: message, + }) +} + +// applyReadyCondFalse sets the Ready condition to False. +// Returns true if the condition was changed. +func applyReadyCondFalse(rsp *v1alpha1.ReplicatedStoragePool, reason, message string) bool { + return objutilv1.SetStatusCondition(rsp, metav1.Condition{ + Type: v1alpha1.ReplicatedStoragePoolCondReadyType, + Status: metav1.ConditionFalse, + Reason: reason, + Message: message, + }) +} + +// applyEligibleNodesAndIncrementRevisionIfChanged updates eligible nodes in RSP status +// and increments revision if nodes changed. Returns true if changed. +func applyEligibleNodesAndIncrementRevisionIfChanged( + rsp *v1alpha1.ReplicatedStoragePool, + eligibleNodes []v1alpha1.ReplicatedStoragePoolEligibleNode, +) bool { + if areEligibleNodesEqual(rsp.Status.EligibleNodes, eligibleNodes) { + return false + } + rsp.Status.EligibleNodes = eligibleNodes + rsp.Status.EligibleNodesRevision++ + return true +} + +// --- Validate helpers --- + +// validateRSPAndLVGs validates that RSP and LVGs are correctly configured. +// It checks: +// - For LVMThin type, thinPoolName exists in each referenced LVG's Spec.ThinPools +func validateRSPAndLVGs(rsp *v1alpha1.ReplicatedStoragePool, lvgs map[string]lvgView) error { + // Validate ThinPool references for LVMThin type. 
+ if rsp.Spec.Type == v1alpha1.ReplicatedStoragePoolTypeLVMThin { + for _, rspLVG := range rsp.Spec.LVMVolumeGroups { + if rspLVG.ThinPoolName == "" { + return fmt.Errorf("LVMVolumeGroup %q: thinPoolName is required for LVMThin type", rspLVG.Name) + } + + lvg, ok := lvgs[rspLVG.Name] + if !ok { + // LVG not found in the provided map - this is a bug in the calling code. + panic(fmt.Sprintf("validateRSPAndLVGs: LVG %q not found in lvgs (invariant violation)", rspLVG.Name)) + } + + // Check if ThinPool exists in LVG. + if _, thinPoolFound := lvg.specThinPoolNames[rspLVG.ThinPoolName]; !thinPoolFound { + return fmt.Errorf("LVMVolumeGroup %q: thinPool %q not found in Spec.ThinPools", rspLVG.Name, rspLVG.ThinPoolName) + } + } + } + + return nil +} + +// --- Construction helpers --- + +// buildLVGByNodeMap builds a map of node name to LVG entries for the RSP. +// Iterates over rsp.Spec.LVMVolumeGroups and looks up each LVG in the provided map. +// LVGs are sorted by name per node for deterministic output. +func buildLVGByNodeMap( + lvgs map[string]lvgView, + rsp *v1alpha1.ReplicatedStoragePool, +) map[string][]v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup { + result := make(map[string][]v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup) + + for _, ref := range rsp.Spec.LVMVolumeGroups { + lvg, ok := lvgs[ref.Name] + if !ok { + // LVG not found - skip (caller should have validated). + continue + } + + // Get node name from LVG. + nodeName := lvg.nodeName + if nodeName == "" { + continue + } + + // Determine readiness of the LVG (and thin pool if applicable). + ready := isLVGReady(&lvg, ref.ThinPoolName) + + entry := v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + Name: lvg.name, + ThinPoolName: ref.ThinPoolName, + Unschedulable: lvg.unschedulable, + Ready: ready, + } + + result[nodeName] = append(result[nodeName], entry) + } + + // Sort LVGs by name for deterministic output. + for nodeName := range result { + sort.Slice(result[nodeName], func(i, j int) bool { + return result[nodeName][i].Name < result[nodeName][j].Name + }) + } + + return result +} + +// ────────────────────────────────────────────────────────────────────────────── +// Single-call I/O helper categories +// + +// --- RSP --- + +// getRSP fetches an RSP by name. +func (r *Reconciler) getRSP(ctx context.Context, name string) (*v1alpha1.ReplicatedStoragePool, error) { + var rsp v1alpha1.ReplicatedStoragePool + if err := r.cl.Get(ctx, client.ObjectKey{Name: name}, &rsp); err != nil { + return nil, err + } + return &rsp, nil +} + +// patchRSPStatus patches the RSP status subresource. +func (r *Reconciler) patchRSPStatus( + ctx context.Context, + rsp *v1alpha1.ReplicatedStoragePool, + base *v1alpha1.ReplicatedStoragePool, + optimisticLock bool, +) error { + var patch client.Patch + if optimisticLock { + patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}) + } else { + patch = client.MergeFrom(base) + } + return r.cl.Status().Patch(ctx, rsp, patch) +} + +// --- LVG --- + +// getLVGsByRSP fetches LVGs referenced by the given RSP and returns them as a map keyed by LVG name. +// Uses UnsafeDisableDeepCopy for efficiency. 
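+// Because listed items are not deep-copied, only lvgView snapshots (plain scalars and
+// small maps built by newLVGView) escape this function; the unsafe objects are not retained.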
+// Returns: +// - lvgs: map of LVG name to lvgView snapshot for found LVGs +// - lvgsNotFoundErr: merged error for any NotFound cases (nil if all found) +// - err: non-NotFound error (if any occurred, lvgs will be nil) +func (r *Reconciler) getLVGsByRSP(ctx context.Context, rsp *v1alpha1.ReplicatedStoragePool) ( + lvgs map[string]lvgView, + lvgsNotFoundErr error, + err error, +) { + if rsp == nil || len(rsp.Spec.LVMVolumeGroups) == 0 { + return nil, nil, nil + } + + // Build a set of wanted LVG names. + wantedNames := make(map[string]struct{}, len(rsp.Spec.LVMVolumeGroups)) + for _, ref := range rsp.Spec.LVMVolumeGroups { + wantedNames[ref.Name] = struct{}{} + } + + // List all LVGs with UnsafeDisableDeepCopy and filter in-memory. + var unsafeList snc.LVMVolumeGroupList + if err := r.cl.List(ctx, &unsafeList, client.UnsafeDisableDeepCopy); err != nil { + return nil, nil, err + } + + lvgs = make(map[string]lvgView, len(rsp.Spec.LVMVolumeGroups)) + + for i := range unsafeList.Items { + unsafeLVG := &unsafeList.Items[i] + if _, wanted := wantedNames[unsafeLVG.Name]; !wanted { + continue + } + lvgs[unsafeLVG.Name] = newLVGView(unsafeLVG) + } + + // Check for not found LVGs. + var notFoundErrs []error + for name := range wantedNames { + if _, found := lvgs[name]; !found { + notFoundErrs = append(notFoundErrs, fmt.Errorf("LVMVolumeGroup %q not found", name)) + } + } + + // Sort notFoundErrs for deterministic error message. + sort.Slice(notFoundErrs, func(i, j int) bool { + return notFoundErrs[i].Error() < notFoundErrs[j].Error() + }) + + return lvgs, errors.Join(notFoundErrs...), nil +} + +// --- Node --- + +// getSortedNodes fetches nodes matching the given selector and returns lightweight nodeView snapshots, +// sorted by name. Uses UnsafeDisableDeepCopy for efficiency. +// The selector should include NodeLabelSelector and Zones requirements from RSP. +func (r *Reconciler) getSortedNodes(ctx context.Context, selector labels.Selector) ([]nodeView, error) { + var unsafeList corev1.NodeList + if err := r.cl.List(ctx, &unsafeList, + client.MatchingLabelsSelector{Selector: selector}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return nil, err + } + + views := make([]nodeView, len(unsafeList.Items)) + for i := range unsafeList.Items { + views[i] = newNodeView(&unsafeList.Items[i]) + } + + sort.Slice(views, func(i, j int) bool { + return views[i].name < views[j].name + }) + + return views, nil +} + +// --- Pod --- + +// getAgentReadiness fetches agent pods and returns a map of nodeName -> isReady. +// Uses UnsafeDisableDeepCopy for efficiency. Nodes without agent pods are not included +// in the map, which results in AgentReady=false when accessed via map lookup. 
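+// Illustrative example: agentReady["node-without-agent-pod"] evaluates to false via Go's
+// zero-value map lookup, so callers need no special handling for nodes lacking an agent pod.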
+func (r *Reconciler) getAgentReadiness(ctx context.Context) (map[string]bool, error) { + var unsafeList corev1.PodList + if err := r.cl.List(ctx, &unsafeList, + client.InNamespace(r.agentPodNamespace), + client.MatchingLabels{"app": "agent"}, + client.UnsafeDisableDeepCopy, + ); err != nil { + return nil, err + } + + result := make(map[string]bool, len(unsafeList.Items)) + for i := range unsafeList.Items { + unsafePod := &unsafeList.Items[i] + nodeName := unsafePod.Spec.NodeName + if nodeName == "" { + continue + } + result[nodeName] = isPodReady(unsafePod) + } + return result, nil +} diff --git a/images/controller/internal/controllers/rsp_controller/reconciler_test.go b/images/controller/internal/controllers/rsp_controller/reconciler_test.go new file mode 100644 index 000000000..a28296193 --- /dev/null +++ b/images/controller/internal/controllers/rsp_controller/reconciler_test.go @@ -0,0 +1,1239 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rspcontroller + +import ( + "context" + "testing" + "time" + + "github.com/go-logr/logr" + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// testGracePeriod is the grace period used in tests for NotReady nodes. 
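+// It is the same value wired into each test RSP via EligibleNodesPolicy.NotReadyGracePeriod.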
+const testGracePeriod = 5 * time.Minute + +func TestRSPController(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "rsp_controller Reconciler Suite") +} + +var _ = Describe("computeActualEligibleNodes", func() { + var ( + rsp *v1alpha1.ReplicatedStoragePool + lvgs map[string]lvgView + nodes []nodeView + agentReadyByNode map[string]bool + ) + + BeforeEach(func() { + rsp = &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + lvgs = map[string]lvgView{ + "lvg-1": { + name: "lvg-1", + nodeName: "node-1", + ready: true, + }, + } + nodes = []nodeView{ + { + name: "node-1", + zoneName: "zone-a", + ready: nodeViewReady{ + hasCondition: true, + status: true, + }, + }, + } + agentReadyByNode = map[string]bool{ + "node-1": true, + } + }) + + It("returns eligible node when all conditions match", func() { + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].NodeName).To(Equal("node-1")) + Expect(result[0].ZoneName).To(Equal("zone-a")) + Expect(result[0].NodeReady).To(BeTrue()) + Expect(result[0].AgentReady).To(BeTrue()) + Expect(result[0].LVMVolumeGroups).To(HaveLen(1)) + Expect(result[0].LVMVolumeGroups[0].Name).To(Equal("lvg-1")) + Expect(result[0].LVMVolumeGroups[0].Ready).To(BeTrue()) + }) + + Context("zone extraction", func() { + // Note: Zone/label filtering is done in Reconcile before calling computeActualEligibleNodes. + // This function only extracts the zone label from nodes that are passed to it. + + It("extracts zone label from node", func() { + nodes[0].zoneName = "zone-x" + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].ZoneName).To(Equal("zone-x")) + }) + + It("sets empty zone when label is missing", func() { + nodes[0].zoneName = "" + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].ZoneName).To(BeEmpty()) + }) + }) + + Context("LVG matching", func() { + It("includes node without matching LVG (client-only/tiebreaker nodes)", func() { + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-2"}, // This LVG does not exist on node-1. + } + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + // Node is still eligible but without LVGs. 
+ Expect(result).To(HaveLen(1)) + Expect(result[0].NodeName).To(Equal("node-1")) + Expect(result[0].LVMVolumeGroups).To(BeEmpty()) + }) + }) + + Context("node readiness", func() { + It("excludes node NotReady beyond grace period", func() { + nodes[0].ready = nodeViewReady{ + hasCondition: true, + status: false, + lastTransitionTime: time.Now().Add(-10 * time.Minute), + } + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(BeEmpty()) + }) + + It("includes node NotReady within grace period", func() { + nodes[0].ready = nodeViewReady{ + hasCondition: true, + status: false, + lastTransitionTime: time.Now().Add(-2 * time.Minute), + } + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].NodeReady).To(BeFalse()) + }) + }) + + Context("LVG unschedulable annotation", func() { + It("marks LVG as unschedulable when annotation is present", func() { + lvg := lvgs["lvg-1"] + lvg.unschedulable = true + lvgs["lvg-1"] = lvg + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].LVMVolumeGroups[0].Unschedulable).To(BeTrue()) + }) + }) + + Context("node unschedulable", func() { + It("marks node as unschedulable when spec.unschedulable is true", func() { + nodes[0].unschedulable = true + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].Unschedulable).To(BeTrue()) + }) + }) + + Context("agent readiness", func() { + It("populates AgentReady from agentReadyByNode map", func() { + agentReadyByNode["node-1"] = false + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].AgentReady).To(BeFalse()) + }) + + It("sets AgentReady to false when node not in map", func() { + delete(agentReadyByNode, "node-1") + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].AgentReady).To(BeFalse()) + }) + }) + + Context("LVG Ready status", func() { + It("marks LVG as not ready when Ready condition is False", func() { + lvg := lvgs["lvg-1"] + lvg.ready = false + lvgs["lvg-1"] = lvg + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].LVMVolumeGroups[0].Ready).To(BeFalse()) + }) + + It("marks LVG as not ready when thin pool is not ready", func() { + rsp.Spec.Type = v1alpha1.ReplicatedStoragePoolTypeLVMThin + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1", ThinPoolName: "thin-pool-1"}, + } + // thin pool not in thinPoolReady set = not ready + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].LVMVolumeGroups[0].Ready).To(BeFalse()) + }) + + It("marks LVG as ready when thin pool is ready", func() { + rsp.Spec.Type = v1alpha1.ReplicatedStoragePoolTypeLVMThin + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1", ThinPoolName: "thin-pool-1"}, + } + lvg := lvgs["lvg-1"] + lvg.thinPoolReady = map[string]struct{}{"thin-pool-1": {}} + lvgs["lvg-1"] = lvg + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(1)) + Expect(result[0].LVMVolumeGroups[0].Ready).To(BeTrue()) + }) + }) + + 
Context("worldStateExpiresAt", func() { + It("returns nil when no grace period is active", func() { + _, expiresAt := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(expiresAt).To(BeNil()) + }) + + It("returns earliest grace expiration time", func() { + transitionTime := time.Now().Add(-2 * time.Minute) + nodes[0].ready = nodeViewReady{ + hasCondition: true, + status: false, + lastTransitionTime: transitionTime, + } + + _, expiresAt := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(expiresAt).NotTo(BeNil()) + expected := transitionTime.Add(testGracePeriod) + Expect(expiresAt.Sub(expected)).To(BeNumerically("<", time.Second)) + }) + }) + + It("sorts eligible nodes by name", func() { + lvgs["lvg-2"] = lvgView{ + name: "lvg-2", + nodeName: "node-2", + ready: true, + } + rsp.Spec.LVMVolumeGroups = append(rsp.Spec.LVMVolumeGroups, v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{Name: "lvg-2"}) + nodes = append(nodes, nodeView{ + name: "node-2", + ready: nodeViewReady{ + hasCondition: true, + status: true, + }, + }) + + result, _ := computeActualEligibleNodes(rsp, lvgs, nodes, agentReadyByNode) + + Expect(result).To(HaveLen(2)) + Expect(result[0].NodeName).To(Equal("node-1")) + Expect(result[1].NodeName).To(Equal("node-2")) + }) +}) + +var _ = Describe("buildLVGByNodeMap", func() { + var ( + rsp *v1alpha1.ReplicatedStoragePool + lvgs map[string]lvgView + ) + + BeforeEach(func() { + rsp = &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + lvgs = map[string]lvgView{ + "lvg-1": { + name: "lvg-1", + nodeName: "node-1", + ready: true, + }, + } + }) + + It("returns empty map for empty LVGs", func() { + result := buildLVGByNodeMap(nil, rsp) + + Expect(result).To(BeEmpty()) + }) + + It("maps LVG to node correctly", func() { + result := buildLVGByNodeMap(lvgs, rsp) + + Expect(result).To(HaveKey("node-1")) + Expect(result["node-1"]).To(HaveLen(1)) + Expect(result["node-1"][0].Name).To(Equal("lvg-1")) + }) + + It("skips LVG not referenced by RSP", func() { + lvgs["lvg-not-referenced"] = lvgView{ + name: "lvg-not-referenced", + nodeName: "node-2", + ready: true, + } + + result := buildLVGByNodeMap(lvgs, rsp) + + Expect(result).NotTo(HaveKey("node-2")) + }) + + It("skips LVG with empty nodeName", func() { + lvg := lvgs["lvg-1"] + lvg.nodeName = "" + lvgs["lvg-1"] = lvg + + result := buildLVGByNodeMap(lvgs, rsp) + + Expect(result).To(BeEmpty()) + }) + + It("sorts LVGs by name per node", func() { + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-c"}, + {Name: "lvg-a"}, + {Name: "lvg-b"}, + } + lvgs = map[string]lvgView{ + "lvg-c": {name: "lvg-c", nodeName: "node-1", ready: true}, + "lvg-a": {name: "lvg-a", nodeName: "node-1", ready: true}, + "lvg-b": {name: "lvg-b", nodeName: "node-1", ready: true}, + } + + result := buildLVGByNodeMap(lvgs, rsp) + + Expect(result["node-1"]).To(HaveLen(3)) + Expect(result["node-1"][0].Name).To(Equal("lvg-a")) + Expect(result["node-1"][1].Name).To(Equal("lvg-b")) + Expect(result["node-1"][2].Name).To(Equal("lvg-c")) + }) + + It("sets Ready field based on LVG condition", func() { + lvg := lvgs["lvg-1"] + lvg.ready = false + 
lvgs["lvg-1"] = lvg + + result := buildLVGByNodeMap(lvgs, rsp) + + Expect(result["node-1"][0].Ready).To(BeFalse()) + }) + + It("sets Ready field based on thin pool status for LVMThin", func() { + rsp.Spec.Type = v1alpha1.ReplicatedStoragePoolTypeLVMThin + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1", ThinPoolName: "thin-pool-1"}, + } + // thin pool not in thinPoolReady set = not ready + + result := buildLVGByNodeMap(lvgs, rsp) + + Expect(result["node-1"][0].Ready).To(BeFalse()) + Expect(result["node-1"][0].ThinPoolName).To(Equal("thin-pool-1")) + }) + + It("marks LVG as unschedulable when annotation present", func() { + lvg := lvgs["lvg-1"] + lvg.unschedulable = true + lvgs["lvg-1"] = lvg + + result := buildLVGByNodeMap(lvgs, rsp) + + Expect(result["node-1"][0].Unschedulable).To(BeTrue()) + }) +}) + +var _ = Describe("isLVGReady", func() { + var lvg *lvgView + + BeforeEach(func() { + lvg = &lvgView{ + name: "lvg-1", + ready: true, + } + }) + + It("returns false when LVG has no Ready condition", func() { + lvg.ready = false + + result := isLVGReady(lvg, "") + + Expect(result).To(BeFalse()) + }) + + It("returns false when LVG Ready=False", func() { + lvg.ready = false + + result := isLVGReady(lvg, "") + + Expect(result).To(BeFalse()) + }) + + It("returns true when LVG Ready=True and no thin pool specified", func() { + result := isLVGReady(lvg, "") + + Expect(result).To(BeTrue()) + }) + + It("returns true when LVG Ready=True and thin pool Ready=true", func() { + lvg.thinPoolReady = map[string]struct{}{"thin-pool-1": {}} + + result := isLVGReady(lvg, "thin-pool-1") + + Expect(result).To(BeTrue()) + }) + + It("returns false when LVG Ready=True but thin pool Ready=false", func() { + // thin pool not in thinPoolReady set = not ready + + result := isLVGReady(lvg, "thin-pool-1") + + Expect(result).To(BeFalse()) + }) + + It("returns false when thin pool not found in status", func() { + lvg.thinPoolReady = map[string]struct{}{"other-pool": {}} + + result := isLVGReady(lvg, "thin-pool-1") + + Expect(result).To(BeFalse()) + }) +}) + +var _ = Describe("isNodeReadyOrWithinGrace", func() { + var ready nodeViewReady + + BeforeEach(func() { + ready = nodeViewReady{ + hasCondition: true, + status: true, + } + }) + + It("returns (true, false, zero) for Ready node", func() { + isReady, excluded, expiresAt := isNodeReadyOrWithinGrace(ready, testGracePeriod) + + Expect(isReady).To(BeTrue()) + Expect(excluded).To(BeFalse()) + Expect(expiresAt.IsZero()).To(BeTrue()) + }) + + It("returns (false, false, zero) for node without Ready condition (unknown state)", func() { + ready.hasCondition = false + + isReady, excluded, expiresAt := isNodeReadyOrWithinGrace(ready, testGracePeriod) + + Expect(isReady).To(BeFalse()) + Expect(excluded).To(BeFalse()) // Unknown state is treated as within grace. 
+ Expect(expiresAt.IsZero()).To(BeTrue()) + }) + + It("returns (false, true, zero) for NotReady beyond grace period", func() { + ready = nodeViewReady{ + hasCondition: true, + status: false, + lastTransitionTime: time.Now().Add(-10 * time.Minute), + } + + isReady, excluded, expiresAt := isNodeReadyOrWithinGrace(ready, testGracePeriod) + + Expect(isReady).To(BeFalse()) + Expect(excluded).To(BeTrue()) + Expect(expiresAt.IsZero()).To(BeTrue()) + }) + + It("returns (false, false, expiresAt) for NotReady within grace period", func() { + transitionTime := time.Now().Add(-2 * time.Minute) + ready = nodeViewReady{ + hasCondition: true, + status: false, + lastTransitionTime: transitionTime, + } + + isReady, excluded, expiresAt := isNodeReadyOrWithinGrace(ready, testGracePeriod) + + Expect(isReady).To(BeFalse()) + Expect(excluded).To(BeFalse()) + expected := transitionTime.Add(testGracePeriod) + Expect(expiresAt.Sub(expected)).To(BeNumerically("<", time.Second)) + }) +}) + +var _ = Describe("validateRSPAndLVGs", func() { + var ( + rsp *v1alpha1.ReplicatedStoragePool + lvgs map[string]lvgView + ) + + BeforeEach(func() { + rsp = &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + lvgs = map[string]lvgView{ + "lvg-1": { + name: "lvg-1", + nodeName: "node-1", + }, + } + }) + + It("returns nil when type is not LVMThin", func() { + err := validateRSPAndLVGs(rsp, lvgs) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("returns error for LVMThin when thinPoolName is empty", func() { + rsp.Spec.Type = v1alpha1.ReplicatedStoragePoolTypeLVMThin + // thinPoolName is empty + + err := validateRSPAndLVGs(rsp, lvgs) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("thinPoolName is required")) + }) + + It("returns error for LVMThin when thinPool not found in LVG spec", func() { + rsp.Spec.Type = v1alpha1.ReplicatedStoragePoolTypeLVMThin + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1", ThinPoolName: "missing-thin-pool"}, + } + lvg := lvgs["lvg-1"] + lvg.specThinPoolNames = map[string]struct{}{"other-thin-pool": {}} + lvgs["lvg-1"] = lvg + + err := validateRSPAndLVGs(rsp, lvgs) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("not found in Spec.ThinPools")) + }) + + It("returns nil when all validations pass for LVMThin", func() { + rsp.Spec.Type = v1alpha1.ReplicatedStoragePoolTypeLVMThin + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1", ThinPoolName: "thin-pool-1"}, + } + lvg := lvgs["lvg-1"] + lvg.specThinPoolNames = map[string]struct{}{"thin-pool-1": {}} + lvgs["lvg-1"] = lvg + + err := validateRSPAndLVGs(rsp, lvgs) + + Expect(err).NotTo(HaveOccurred()) + }) + + It("panics when LVG referenced by RSP not in lvgs map", func() { + rsp.Spec.Type = v1alpha1.ReplicatedStoragePoolTypeLVMThin + rsp.Spec.LVMVolumeGroups = []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-missing", ThinPoolName: "thin-pool-1"}, + } + + Expect(func() { + _ = validateRSPAndLVGs(rsp, lvgs) + }).To(Panic()) + }) +}) + +var _ = Describe("applyEligibleNodesAndIncrementRevisionIfChanged", func() { + var rsp 
*v1alpha1.ReplicatedStoragePool + + BeforeEach(func() { + rsp = &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 1, + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + }, + }, + } + }) + + It("returns false when eligible nodes unchanged", func() { + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + } + + changed := applyEligibleNodesAndIncrementRevisionIfChanged(rsp, newNodes) + + Expect(changed).To(BeFalse()) + Expect(rsp.Status.EligibleNodesRevision).To(Equal(int64(1))) + }) + + It("returns true and increments revision when nodes changed", func() { + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + changed := applyEligibleNodesAndIncrementRevisionIfChanged(rsp, newNodes) + + Expect(changed).To(BeTrue()) + Expect(rsp.Status.EligibleNodesRevision).To(Equal(int64(2))) + Expect(rsp.Status.EligibleNodes).To(HaveLen(2)) + }) + + It("detects change in NodeReady field", func() { + newNodes := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1", NodeReady: true}, + } + + changed := applyEligibleNodesAndIncrementRevisionIfChanged(rsp, newNodes) + + Expect(changed).To(BeTrue()) + }) +}) + +var _ = Describe("areEligibleNodesEqual", func() { + It("returns true for equal slices", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1", ZoneName: "zone-a"}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1", ZoneName: "zone-a"}, + } + + Expect(areEligibleNodesEqual(a, b)).To(BeTrue()) + }) + + It("returns false for different lengths", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1"}, + {NodeName: "node-2"}, + } + + Expect(areEligibleNodesEqual(a, b)).To(BeFalse()) + }) + + It("returns false for different field values", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1", AgentReady: true}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-1", AgentReady: false}, + } + + Expect(areEligibleNodesEqual(a, b)).To(BeFalse()) + }) + + It("handles empty slices", func() { + var a []v1alpha1.ReplicatedStoragePoolEligibleNode + var b []v1alpha1.ReplicatedStoragePoolEligibleNode + + Expect(areEligibleNodesEqual(a, b)).To(BeTrue()) + }) +}) + +var _ = Describe("areLVGsEqual", func() { + It("returns true for equal slices", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + {Name: "lvg-1", ThinPoolName: "tp-1", Unschedulable: false, Ready: true}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + {Name: "lvg-1", ThinPoolName: "tp-1", Unschedulable: false, Ready: true}, + } + + Expect(areLVGsEqual(a, b)).To(BeTrue()) + }) + + It("returns false for different lengths", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + {Name: "lvg-1"}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + {Name: "lvg-1"}, + {Name: "lvg-2"}, + } + + Expect(areLVGsEqual(a, b)).To(BeFalse()) + }) + + It("returns false for different Ready values", func() { + a := []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + {Name: "lvg-1", Ready: true}, + } + b := []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup{ + {Name: "lvg-1", Ready: false}, + } + + 
Expect(areLVGsEqual(a, b)).To(BeFalse()) + }) + + It("handles empty slices", func() { + var a []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup + var b []v1alpha1.ReplicatedStoragePoolEligibleNodeLVMVolumeGroup + + Expect(areLVGsEqual(a, b)).To(BeTrue()) + }) +}) + +// ============================================================================= +// Integration Tests +// ============================================================================= + +var _ = Describe("Reconciler", func() { + var ( + scheme *runtime.Scheme + cl client.WithWatch + rec *Reconciler + ) + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(snc.AddToScheme(scheme)).To(Succeed()) + cl = nil + rec = nil + }) + + Describe("Reconcile", func() { + It("does nothing when RSP is not found", func() { + cl = fake.NewClientBuilder().WithScheme(scheme).Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsp-not-found"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + }) + + It("sets Ready=False when LVGs not found", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-missing"}, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp). + WithStatusSubresource(rsp). + Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsp-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsp-1"}, &updatedRSP)).To(Succeed()) + readyCond := obju.GetStatusCondition(&updatedRSP, v1alpha1.ReplicatedStoragePoolCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(readyCond.Reason).To(Equal(v1alpha1.ReplicatedStoragePoolCondReadyReasonLVMVolumeGroupNotFound)) + }) + + It("sets Ready=False when validation fails for LVMThin", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVMThin, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, // Missing thinPoolName. + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + Spec: snc.LVMVolumeGroupSpec{ + Local: snc.LVMVolumeGroupLocalSpec{NodeName: "node-1"}, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, lvg). + WithStatusSubresource(rsp). 
+ Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsp-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsp-1"}, &updatedRSP)).To(Succeed()) + readyCond := obju.GetStatusCondition(&updatedRSP, v1alpha1.ReplicatedStoragePoolCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(readyCond.Reason).To(Equal(v1alpha1.ReplicatedStoragePoolCondReadyReasonInvalidLVMVolumeGroup)) + }) + + It("sets Ready=False when NodeLabelSelector is invalid", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + NodeLabelSelector: &metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + { + Key: "invalid key with spaces", + Operator: metav1.LabelSelectorOpIn, + Values: []string{"value"}, + }, + }, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + Spec: snc.LVMVolumeGroupSpec{ + Local: snc.LVMVolumeGroupLocalSpec{NodeName: "node-1"}, + }, + Status: snc.LVMVolumeGroupStatus{ + Conditions: []metav1.Condition{ + {Type: "Ready", Status: metav1.ConditionTrue}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, lvg). + WithStatusSubresource(rsp). 
+ Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsp-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsp-1"}, &updatedRSP)).To(Succeed()) + readyCond := obju.GetStatusCondition(&updatedRSP, v1alpha1.ReplicatedStoragePoolCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(readyCond.Reason).To(Equal(v1alpha1.ReplicatedStoragePoolCondReadyReasonInvalidNodeLabelSelector)) + Expect(readyCond.Message).To(ContainSubstring("Invalid NodeLabelSelector")) + }) + + It("sets Ready=True and updates EligibleNodes on success", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + Spec: snc.LVMVolumeGroupSpec{ + Local: snc.LVMVolumeGroupLocalSpec{NodeName: "node-1"}, + }, + Status: snc.LVMVolumeGroupStatus{ + Conditions: []metav1.Condition{ + {Type: "Ready", Status: metav1.ConditionTrue}, + }, + }, + } + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + Labels: map[string]string{corev1.LabelTopologyZone: "zone-a"}, + }, + Status: corev1.NodeStatus{ + Conditions: []corev1.NodeCondition{ + {Type: corev1.NodeReady, Status: corev1.ConditionTrue}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, lvg, node). + WithStatusSubresource(rsp). 
+ Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsp-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsp-1"}, &updatedRSP)).To(Succeed()) + readyCond := obju.GetStatusCondition(&updatedRSP, v1alpha1.ReplicatedStoragePoolCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionTrue)) + Expect(updatedRSP.Status.EligibleNodes).To(HaveLen(1)) + Expect(updatedRSP.Status.EligibleNodes[0].NodeName).To(Equal("node-1")) + Expect(updatedRSP.Status.EligibleNodes[0].ZoneName).To(Equal("zone-a")) + Expect(updatedRSP.Status.EligibleNodesRevision).To(BeNumerically(">", 0)) + }) + + It("increments EligibleNodesRevision when nodes change", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + Status: v1alpha1.ReplicatedStoragePoolStatus{ + EligibleNodesRevision: 5, + EligibleNodes: []v1alpha1.ReplicatedStoragePoolEligibleNode{ + {NodeName: "node-old"}, + }, + }, + } + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + Spec: snc.LVMVolumeGroupSpec{ + Local: snc.LVMVolumeGroupLocalSpec{NodeName: "node-1"}, + }, + Status: snc.LVMVolumeGroupStatus{ + Conditions: []metav1.Condition{ + {Type: "Ready", Status: metav1.ConditionTrue}, + }, + }, + } + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + Status: corev1.NodeStatus{ + Conditions: []corev1.NodeCondition{ + {Type: corev1.NodeReady, Status: corev1.ConditionTrue}, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, lvg, node). + WithStatusSubresource(rsp). 
+ Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsp-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + var updatedRSP v1alpha1.ReplicatedStoragePool + Expect(cl.Get(context.Background(), client.ObjectKey{Name: "rsp-1"}, &updatedRSP)).To(Succeed()) + Expect(updatedRSP.Status.EligibleNodesRevision).To(Equal(int64(6))) + }) + + It("requeues when grace period will expire", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + }, + EligibleNodesPolicy: v1alpha1.ReplicatedStoragePoolEligibleNodesPolicy{ + NotReadyGracePeriod: metav1.Duration{Duration: testGracePeriod}, + }, + }, + } + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + Spec: snc.LVMVolumeGroupSpec{ + Local: snc.LVMVolumeGroupLocalSpec{NodeName: "node-1"}, + }, + Status: snc.LVMVolumeGroupStatus{ + Conditions: []metav1.Condition{ + {Type: "Ready", Status: metav1.ConditionTrue}, + }, + }, + } + // Node is NotReady within grace period. + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: "node-1"}, + Status: corev1.NodeStatus{ + Conditions: []corev1.NodeCondition{ + { + Type: corev1.NodeReady, + Status: corev1.ConditionFalse, + LastTransitionTime: metav1.NewTime(time.Now().Add(-2 * time.Minute)), + }, + }, + }, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(rsp, lvg, node). + WithStatusSubresource(rsp). + Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + result, err := rec.Reconcile(context.Background(), reconcile.Request{ + NamespacedName: client.ObjectKey{Name: "rsp-1"}, + }) + + Expect(err).NotTo(HaveOccurred()) + Expect(result.RequeueAfter).To(BeNumerically(">", 0)) + Expect(result.RequeueAfter).To(BeNumerically("<=", testGracePeriod)) + }) + }) +}) + +var _ = Describe("getLVGsByRSP", func() { + var ( + scheme *runtime.Scheme + cl client.WithWatch + rec *Reconciler + ) + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(snc.AddToScheme(scheme)).To(Succeed()) + }) + + It("returns nil for nil RSP", func() { + cl = fake.NewClientBuilder().WithScheme(scheme).Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + lvgs, notFoundErr, err := rec.getLVGsByRSP(context.Background(), nil) + + Expect(err).NotTo(HaveOccurred()) + Expect(notFoundErr).To(BeNil()) + Expect(lvgs).To(BeNil()) + }) + + It("returns nil for RSP with empty LVMVolumeGroups", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{}, + }, + } + cl = fake.NewClientBuilder().WithScheme(scheme).Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + lvgs, notFoundErr, err := rec.getLVGsByRSP(context.Background(), rsp) + + Expect(err).NotTo(HaveOccurred()) + Expect(notFoundErr).To(BeNil()) + Expect(lvgs).To(BeNil()) + }) + + It("returns all LVGs when all are found", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: 
metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + {Name: "lvg-2"}, + }, + }, + } + lvg1 := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + } + lvg2 := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-2"}, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(lvg1, lvg2). + Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + lvgs, notFoundErr, err := rec.getLVGsByRSP(context.Background(), rsp) + + Expect(err).NotTo(HaveOccurred()) + Expect(notFoundErr).To(BeNil()) + Expect(lvgs).To(HaveLen(2)) + Expect(lvgs).To(HaveKey("lvg-1")) + Expect(lvgs).To(HaveKey("lvg-2")) + Expect(lvgs["lvg-1"].name).To(Equal("lvg-1")) + Expect(lvgs["lvg-2"].name).To(Equal("lvg-2")) + }) + + It("returns found LVGs + NotFoundErr when some LVGs are missing", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-1"}, + {Name: "lvg-missing-1"}, + {Name: "lvg-2"}, + {Name: "lvg-missing-2"}, + }, + }, + } + lvg1 := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-1"}, + } + lvg2 := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-2"}, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(lvg1, lvg2). + Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + lvgs, notFoundErr, err := rec.getLVGsByRSP(context.Background(), rsp) + + Expect(err).NotTo(HaveOccurred()) + Expect(notFoundErr).To(HaveOccurred()) + Expect(notFoundErr.Error()).To(ContainSubstring("lvg-missing-1")) + Expect(notFoundErr.Error()).To(ContainSubstring("lvg-missing-2")) + Expect(lvgs).To(HaveLen(2)) + Expect(lvgs).To(HaveKey("lvg-1")) + Expect(lvgs).To(HaveKey("lvg-2")) + }) + + It("returns LVGs as map keyed by name", func() { + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "rsp-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: v1alpha1.ReplicatedStoragePoolTypeLVM, + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "lvg-c"}, + {Name: "lvg-a"}, + {Name: "lvg-b"}, + }, + }, + } + lvgC := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-c"}, + } + lvgA := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-a"}, + } + lvgB := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "lvg-b"}, + } + cl = fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(lvgC, lvgA, lvgB). 
+ Build() + rec = NewReconciler(cl, logr.Discard(), "test-namespace") + + lvgs, notFoundErr, err := rec.getLVGsByRSP(context.Background(), rsp) + + Expect(err).NotTo(HaveOccurred()) + Expect(notFoundErr).To(BeNil()) + Expect(lvgs).To(HaveLen(3)) + Expect(lvgs).To(HaveKey("lvg-a")) + Expect(lvgs).To(HaveKey("lvg-b")) + Expect(lvgs).To(HaveKey("lvg-c")) + }) +}) diff --git a/images/controller/internal/controllers/rv_attach_controller/controller.go b/images/controller/internal/controllers/rv_attach_controller/controller.go new file mode 100644 index 000000000..ae46d79af --- /dev/null +++ b/images/controller/internal/controllers/rv_attach_controller/controller.go @@ -0,0 +1,60 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvattachcontroller + +import ( + "context" + + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +func BuildController(mgr manager.Manager) error { + const controllerName = "rv_attach_controller" + + log := mgr.GetLogger().WithName(controllerName) + + var rec = NewReconciler(mgr.GetClient(), log) + + return builder.ControllerManagedBy(mgr). + Named(controllerName). + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). + For(&v1alpha1.ReplicatedVolume{}, builder.WithPredicates(replicatedVolumePredicate())). + Watches( + &v1alpha1.ReplicatedVolumeReplica{}, + handler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &v1alpha1.ReplicatedVolume{}), + builder.WithPredicates(replicatedVolumeReplicaPredicate()), + ). + Watches( + &v1alpha1.ReplicatedVolumeAttachment{}, + handler.EnqueueRequestsFromMapFunc(func(_ context.Context, obj client.Object) []reconcile.Request { + rva, ok := obj.(*v1alpha1.ReplicatedVolumeAttachment) + if !ok || rva.Spec.ReplicatedVolumeName == "" { + return nil + } + return []reconcile.Request{{NamespacedName: client.ObjectKey{Name: rva.Spec.ReplicatedVolumeName}}} + }), + builder.WithPredicates(replicatedVolumeAttachmentPredicate()), + ). + Complete(rec) +} diff --git a/images/controller/internal/controllers/rv_attach_controller/doc.go b/images/controller/internal/controllers/rv_attach_controller/doc.go new file mode 100644 index 000000000..a35005040 --- /dev/null +++ b/images/controller/internal/controllers/rv_attach_controller/doc.go @@ -0,0 +1,79 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Package rvattachcontroller implements the rv-attach-controller.
+//
+// The controller reconciles desired/actual attachment state for a ReplicatedVolume (RV)
+// using ReplicatedVolumeAttachment (RVA) objects as the user-facing "intent", and
+// ReplicatedVolumeReplica (RVR) objects as the per-node DRBD execution target.
+//
+// # Main responsibilities
+//
+// - Derive rv.status.desiredAttachTo from the active RVA set (FIFO, unique nodes, max 2),
+// while also using the existing rv.status.desiredAttachTo as a preference/"memory".
+// - Compute rv.status.actuallyAttachedTo from replicas whose rvr.status.drbd.status.role=="Primary".
+// - Drive replica role changes by patching rvr.status.drbd.config.primary (promotion/demotion).
+// - Manage rv.status.drbd.config.allowTwoPrimaries for 2-node attachment (live migration),
+// and wait until rvr.status.drbd.actual.allowTwoPrimaries is applied before requesting
+// the second Primary.
+// - Maintain RVA status (phase + Ready condition) as the externally observable attach progress/result.
+// - Convert TieBreaker replicas to Access replicas when attachment requires promotion.
+//
+// # Watched resources (conceptually)
+//
+// - ReplicatedVolume (RV)
+// - ReplicatedVolumeAttachment (RVA)
+// - ReplicatedVolumeReplica (RVR)
+// - ReplicatedStorageClass (RSC)
+//
+// # Attach enablement / detach-only mode
+//
+// The controller may run in "detach-only" mode where it does not add new nodes into
+// desiredAttachTo (but still performs demotions and keeps RVA status/finalizers consistent).
+//
+// Attaching is enabled only when:
+// - RV exists and is not deleting
+// - RV has the module controller finalizer
+// - rv.status is initialized and rv.status.conditions[type=IOReady] is True
+// - referenced RSC is available
+//
+// # desiredAttachTo derivation
+//
+// High-level rules:
+// - Start from current rv.status.desiredAttachTo (may be empty/nil).
+// - Drop nodes that no longer have an active (non-deleting) RVA.
+// - If attaching is enabled, fill remaining slots from the active RVA set (FIFO) up to 2 nodes.
+// - For Local access, only *new* attachments are allowed on nodes with a Diskful replica,
+// confirmed by rvr.status.actualType==Diskful (agent must initialize status first).
+// - New attachments are not allowed on nodes whose replica is marked for deletion.
+//
+// # RVA status model
+//
+// The controller sets RVA.Status.Phase and a Ready condition (type=Ready) with a reason:
+// - Attached (Ready=True, Reason=Attached) when the node is in actuallyAttachedTo.
+// - Detaching (Ready=True, Reason=Attached) when RVA is deleting but the node is still attached.
+// - Pending (Ready=False) when attachment cannot progress:
+// WaitingForReplicatedVolume, WaitingForReplicatedVolumeReady, WaitingForActiveAttachmentsToDetach,
+// LocalityNotSatisfied.
+// - Attaching (Ready=False) while progressing:
+// WaitingForReplica, ConvertingTieBreakerToAccess, SettingPrimary.
+//
+// # Notes
+//
+// Local volume access:
+// - Locality constraints are reported via RVA status.
+// - Existing desired nodes may be kept even if Locality becomes violated later.
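+//
+// Illustrative two-node (live migration) flow under the rules above, with placeholder
+// node names: while "node-a" is attached and a second RVA is created for "node-b",
+// the controller first sets rv.status.drbd.config.allowTwoPrimaries, waits for replicas
+// to report rvr.status.drbd.actual.allowTwoPrimaries, and only then requests the second
+// Primary on "node-b".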
+package rvattachcontroller diff --git a/images/controller/internal/controllers/rv_attach_controller/predicates.go b/images/controller/internal/controllers/rv_attach_controller/predicates.go new file mode 100644 index 000000000..1b6bacadf --- /dev/null +++ b/images/controller/internal/controllers/rv_attach_controller/predicates.go @@ -0,0 +1,169 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvattachcontroller + +import ( + "slices" + + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// replicatedVolumePredicate filters RV events so rv_attach_controller does not reconcile on its own status-only updates +// (desiredAttachTo/actuallyAttachedTo/allowTwoPrimaries), but still reacts to inputs that affect attach logic. +func replicatedVolumePredicate() predicate.Predicate { + return predicate.Funcs{ + CreateFunc: func(event.CreateEvent) bool { return true }, + DeleteFunc: func(event.DeleteEvent) bool { return true }, + GenericFunc: func(event.GenericEvent) bool { + // Be conservative: don't reconcile on generic events (rare), rely on real create/update/delete. + return false + }, + UpdateFunc: func(e event.UpdateEvent) bool { + oldRV := e.ObjectOld.(*v1alpha1.ReplicatedVolume) + newRV := e.ObjectNew.(*v1alpha1.ReplicatedVolume) + + // Spec change (generation bump) can affect which storage class we load. + if oldRV.Generation != newRV.Generation { + return true + } + + // Start of deletion must be observed (detach-only mode). + if oldRV.DeletionTimestamp.IsZero() != newRV.DeletionTimestamp.IsZero() { + return true + } + + // Controller finalizer gate affects whether attachments are allowed. + if obju.HasFinalizer(oldRV, v1alpha1.ControllerFinalizer) != obju.HasFinalizer(newRV, v1alpha1.ControllerFinalizer) { + return true + } + + // IOReady condition gates attachments; it is status-managed by another controller. + oldIOReady := meta.IsStatusConditionTrue(oldRV.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) + newIOReady := meta.IsStatusConditionTrue(newRV.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) + return oldIOReady != newIOReady + }, + } +} + +// replicatedVolumeReplicaPredicate filters RVR events to only those that can affect RV attach/detach logic. 
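+// All other RVR updates (e.g. unrelated status-only writes) are dropped so the RV is not
+// requeued on every replica status change.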
+func replicatedVolumeReplicaPredicate() predicate.Predicate { + return predicate.Funcs{ + CreateFunc: func(event.CreateEvent) bool { return true }, + DeleteFunc: func(event.DeleteEvent) bool { return true }, + GenericFunc: func(event.GenericEvent) bool { return false }, + UpdateFunc: func(e event.UpdateEvent) bool { + oldRVR := e.ObjectOld.(*v1alpha1.ReplicatedVolumeReplica) + newRVR := e.ObjectNew.(*v1alpha1.ReplicatedVolumeReplica) + + // If controller owner reference is set later, allow this update so EnqueueRequestForOwner can start working. + if metav1.GetControllerOf(oldRVR) == nil && metav1.GetControllerOf(newRVR) != nil { + return true + } + + // Deletion start affects eligibility of a node for new attachments. + if oldRVR.DeletionTimestamp.IsZero() != newRVR.DeletionTimestamp.IsZero() { + return true + } + + // Node/type changes affect locality checks and promotion flow. + if oldRVR.Spec.NodeName != newRVR.Spec.NodeName { + return true + } + if oldRVR.Spec.Type != newRVR.Spec.Type { + return true + } + + // Local volume access requires Diskful actualType on requested node. + oldActualType := oldRVR.Status.ActualType + newActualType := newRVR.Status.ActualType + if oldActualType != newActualType { + return true + } + + // actuallyAttachedTo is derived from DRBD role == Primary. + oldRole := rvrDRBDRole(oldRVR) + newRole := rvrDRBDRole(newRVR) + if oldRole != newRole { + return true + } + + // allowTwoPrimaries readiness gate is derived from DRBD Actual.AllowTwoPrimaries. + if rvrAllowTwoPrimariesActual(oldRVR) != rvrAllowTwoPrimariesActual(newRVR) { + return true + } + + // RVA ReplicaReady mirrors replica condition Ready, so changes must trigger reconcile. + // Compare (status, reason, message) to keep mirroring accurate even when status doesn't change. + oldCond := meta.FindStatusCondition(oldRVR.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondReadyType) + newCond := meta.FindStatusCondition(newRVR.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondReadyType) + return !obju.ConditionSemanticallyEqual(oldCond, newCond) + }, + } +} + +// replicatedVolumeAttachmentPredicate filters RVA events so we don't reconcile on our own status-only updates. +// It still reacts to create/delete, start of deletion and finalizer changes. +func replicatedVolumeAttachmentPredicate() predicate.Predicate { + return predicate.Funcs{ + CreateFunc: func(event.CreateEvent) bool { return true }, + DeleteFunc: func(event.DeleteEvent) bool { return true }, + GenericFunc: func(event.GenericEvent) bool { return false }, + UpdateFunc: func(e event.UpdateEvent) bool { + oldRVA := e.ObjectOld.(*v1alpha1.ReplicatedVolumeAttachment) + newRVA := e.ObjectNew.(*v1alpha1.ReplicatedVolumeAttachment) + + // Start of deletion affects desiredAttachTo and finalizer reconciliation. + if oldRVA.DeletionTimestamp.IsZero() != newRVA.DeletionTimestamp.IsZero() { + return true + } + + // Even though spec fields are immutable, generation bump is a safe signal for any spec-level changes. + if oldRVA.Generation != newRVA.Generation { + return true + } + + // Finalizers are important for safe detach/cleanup. 
+ if !slices.Equal(oldRVA.Finalizers, newRVA.Finalizers) { + return true + } + + return false + }, + } +} + +func rvrDRBDRole(rvr *v1alpha1.ReplicatedVolumeReplica) string { + if rvr == nil || rvr.Status.DRBD == nil || rvr.Status.DRBD.Status == nil { + return "" + } + return rvr.Status.DRBD.Status.Role +} + +func rvrAllowTwoPrimariesActual(rvr *v1alpha1.ReplicatedVolumeReplica) bool { + if rvr == nil || rvr.Status.DRBD == nil || rvr.Status.DRBD.Actual == nil { + return false + } + return rvr.Status.DRBD.Actual.AllowTwoPrimaries +} + +// Note: condition equality is delegated to v1alpha1.ConditionSpecAgnosticEqual. diff --git a/images/controller/internal/controllers/rv_attach_controller/reconciler.go b/images/controller/internal/controllers/rv_attach_controller/reconciler.go new file mode 100644 index 000000000..0950665c7 --- /dev/null +++ b/images/controller/internal/controllers/rv_attach_controller/reconciler.go @@ -0,0 +1,1001 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvattachcontroller + +import ( + "context" + "errors" + "slices" + "sort" + + "github.com/go-logr/logr" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +type Reconciler struct { + cl client.Client + log logr.Logger +} + +func NewReconciler(cl client.Client, log logr.Logger) *Reconciler { + return &Reconciler{ + cl: cl, + log: log, + } +} + +var _ reconcile.Reconciler = &Reconciler{} + +func (r *Reconciler) Reconcile( + ctx context.Context, + req reconcile.Request, +) (reconcile.Result, error) { + log := r.log.WithName("Reconcile").WithValues("request", req) + + // Fetch ReplicatedVolume, if possible (RV might be missing) + rv, err := r.getReplicatedVolume(ctx, req.Name) + if err != nil { + log.Error(err, "unable to get ReplicatedVolume") + return reconcile.Result{}, err + } + + // Fetch ReplicatedStorageClass, if possible (SC might be missing) + var sc *v1alpha1.ReplicatedStorageClass + if rv != nil { + sc, err = r.getReplicatedVolumeStorageClass(ctx, *rv) + if err != nil { + // If ReplicatedStorageClass cannot be loaded, proceed in detach-only mode. 
+ log.Error(err, "unable to get ReplicatedStorageClass; proceeding in detach-only mode") + sc = nil + } + } + + // Fetch ReplicatedVolumeReplicas + replicas, err := r.getReplicatedVolumeReplicas(ctx, req.Name) + if err != nil { + log.Error(err, "unable to get ReplicatedVolumeReplicas") + return reconcile.Result{}, err + } + + // Fetch ReplicatedVolumeAttachments + rvas, err := r.getSortedReplicatedVolumeAttachments(ctx, req.Name) + if err != nil { + log.Error(err, "unable to get ReplicatedVolumeAttachments") + return reconcile.Result{}, err + } + + // Compute actuallyAttachedTo (based on RVRs) + actuallyAttachedTo := computeActuallyAttachedTo(replicas) + + // Compute desiredAttachTo (based on RVAs and RVRs) + rvaDesiredAttachTo := computeDesiredAttachToBaseOnlyOnRVA(rvas) + desiredAttachTo := computeDesiredAttachTo(rv, sc, replicas, rvaDesiredAttachTo) + + // Compute desiredAllowTwoPrimaries (based on RVAs and actual attachments) + desiredAllowTwoPrimaries := computeDesiredTwoPrimaries(desiredAttachTo, actuallyAttachedTo) + + // Reconcile RVA finalizers (don't release deleting RVA while it's still attached). + if err := r.reconcileRVAsFinalizers(ctx, rvas, actuallyAttachedTo, rvaDesiredAttachTo); err != nil { + log.Error(err, "unable to reconcile ReplicatedVolumeAttachments finalizers", "rvaCount", len(rvas)) + return reconcile.Result{}, err + } + + // Reconcile RV status (desiredAttachTo + actuallyAttachedTo), if possible + if rv != nil { + if err := r.ensureRV(ctx, rv, desiredAttachTo, actuallyAttachedTo, desiredAllowTwoPrimaries); err != nil { + log.Error(err, "unable to patch ReplicatedVolume status") + return reconcile.Result{}, err + } + } + + // Reconcile RVAs statuses even when RV is missing or deleting: + // RVA finalizers/statuses must remain consistent for external waiters and for safe cleanup. + if err := r.reconcileRVAsStatus(ctx, rvas, rv, sc, desiredAttachTo, actuallyAttachedTo, replicas); err != nil { + log.Error(err, "unable to reconcile ReplicatedVolumeAttachments status", "rvaCount", len(rvas)) + return reconcile.Result{}, err + } + + // If RV does not exist, stop reconciliation after we have reconciled RVAs. + // Having replicas without the corresponding RV is unexpected and likely indicates a bug in other controllers. + if rv == nil { + if len(replicas) > 0 { + log.Error(nil, "ReplicatedVolume not found, but ReplicatedVolumeReplicas exist; this is likely a bug in other controllers", + "replicaCount", len(replicas)) + } + return reconcile.Result{}, nil + } + + promoteEnabled := meta.IsStatusConditionTrue(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) + + // Reconcile RVRs + if err := r.reconcileRVRs(ctx, replicas, desiredAttachTo, actuallyAttachedTo, promoteEnabled); err != nil { + log.Error(err, "unable to reconcile ReplicatedVolumeReplicas", "replicaCount", len(replicas)) + return reconcile.Result{}, err + } + + return reconcile.Result{}, nil +} + +// getReplicatedVolume fetches ReplicatedVolume by name. +// If the object does not exist, it returns (nil, nil). +func (r *Reconciler) getReplicatedVolume(ctx context.Context, rvName string) (*v1alpha1.ReplicatedVolume, error) { + rv := &v1alpha1.ReplicatedVolume{} + if err := r.cl.Get(ctx, client.ObjectKey{Name: rvName}, rv); err != nil { + if client.IgnoreNotFound(err) == nil { + return nil, nil + } + return nil, err + } + return rv, nil +} + +// getReplicatedVolumeStorageClass fetches ReplicatedStorageClass referenced by the given RV. 
+// If RV does not reference a storage class (empty name) or the class does not exist, it returns (nil, nil). +func (r *Reconciler) getReplicatedVolumeStorageClass(ctx context.Context, rv v1alpha1.ReplicatedVolume) (*v1alpha1.ReplicatedStorageClass, error) { + if rv.Spec.ReplicatedStorageClassName == "" { + return nil, nil + } + + sc := &v1alpha1.ReplicatedStorageClass{} + if err := r.cl.Get(ctx, client.ObjectKey{Name: rv.Spec.ReplicatedStorageClassName}, sc); err != nil { + if client.IgnoreNotFound(err) == nil { + return nil, nil + } + return nil, err + } + return sc, nil +} + +// getReplicatedVolumeReplicas lists all ReplicatedVolumeReplica objects belonging to the given RV. +func (r *Reconciler) getReplicatedVolumeReplicas(ctx context.Context, rvName string) ([]v1alpha1.ReplicatedVolumeReplica, error) { + rvrList := &v1alpha1.ReplicatedVolumeReplicaList{} + if err := r.cl.List(ctx, rvrList, client.MatchingFields{ + indexes.IndexFieldRVRByReplicatedVolumeName: rvName, + }); err != nil { + return nil, err + } + + return rvrList.Items, nil +} + +// getSortedReplicatedVolumeAttachments lists all ReplicatedVolumeAttachment objects and returns those belonging +// to the given RV, sorted by creation timestamp (FIFO). +func (r *Reconciler) getSortedReplicatedVolumeAttachments(ctx context.Context, rvName string) ([]v1alpha1.ReplicatedVolumeAttachment, error) { + rvaList := &v1alpha1.ReplicatedVolumeAttachmentList{} + if err := r.cl.List(ctx, rvaList, client.MatchingFields{ + indexes.IndexFieldRVAByReplicatedVolumeName: rvName, + }); err != nil { + return nil, err + } + + rvasForRV := rvaList.Items + + // Sort by creation timestamp + sort.SliceStable(rvasForRV, func(i, j int) bool { + ti := rvasForRV[i].CreationTimestamp.Time + tj := rvasForRV[j].CreationTimestamp.Time + if ti.Equal(tj) { + return false + } + return ti.Before(tj) + }) + + return rvasForRV, nil +} + +// computeActuallyAttachedTo returns a sorted list of node names where the volume is actually attached. +// We treat a node as "attached" when its replica reports DRBD role "Primary". +// The returned slice is kept sorted and unique while building it (BinarySearch + Insert). +func computeActuallyAttachedTo(replicas []v1alpha1.ReplicatedVolumeReplica) []string { + out := make([]string, 0, 2) + + for _, rvr := range replicas { + if rvr.Spec.NodeName == "" { + continue + } + if rvr.Status.DRBD == nil || rvr.Status.DRBD.Status == nil { + continue + } + if rvr.Status.DRBD.Status.Role != "Primary" { + continue + } + + i, found := slices.BinarySearch(out, rvr.Spec.NodeName) + if found { + continue + } + out = slices.Insert(out, i, rvr.Spec.NodeName) + } + + return out +} + +// computeDesiredAttachTo calculates rv.status.desiredAttachTo using current RV status and the RVA set. +// +// High-level rules: +// - Start from current desiredAttachTo stored in RV status (if any). +// - Remove nodes that no longer have an active (non-deleting) RVA. +// - If attaching is not allowed (RV is nil/deleting, no controller finalizer, no status, not IOReady, or no StorageClass), +// return the filtered desiredAttachTo as-is (detach-only mode: we do not add new nodes). +// - If attaching is allowed, we may add new nodes from RVA set (FIFO order, assuming rvas are sorted by creationTimestamp), +// but we keep at most 2 nodes in desiredAttachTo. +// - For Local volume access, new attachments are only allowed on nodes that have a Diskful replica according to +// ReplicatedVolumeReplica status.actualType. 
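+// (An already-desired node, however, is only dropped when it has no replica at all; a node whose replica is merely
+// non-Diskful may remain in desiredAttachTo even though it violates Locality.)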
+// - New attachments are not allowed on nodes whose replicas are marked for deletion. +func computeDesiredAttachTo( + rv *v1alpha1.ReplicatedVolume, + sc *v1alpha1.ReplicatedStorageClass, + replicas []v1alpha1.ReplicatedVolumeReplica, + rvaDesiredAttachTo []string, +) []string { + desired := []string(nil) + + // Get current desiredAttachTo from ReplicatedVolume status. + if rv != nil { + desired = rv.Status.DesiredAttachTo + } + + // Exclude nodes that are not any more desired by existing RVA. + desired = slices.DeleteFunc(desired, func(node string) bool { + return !slices.Contains(rvaDesiredAttachTo, node) + }) + + attachEnabled := + rv != nil && + rv.DeletionTimestamp.IsZero() && + obju.HasFinalizer(rv, v1alpha1.ControllerFinalizer) && + meta.IsStatusConditionTrue(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) && + sc != nil + + // Finish early if we are not allowed to attach. + if !attachEnabled { + return desired + } + + nodesWithDiskfulReplicas := make([]string, 0, len(replicas)) + nodesWithDeletingReplicas := make([]string, 0, len(replicas)) + nodesWithAnyReplica := make([]string, 0, len(replicas)) + for _, rvr := range replicas { + // Skip replicas without node + if rvr.Spec.NodeName == "" { + continue + } + + // No uniqueness check required: per design there can't be two replicas on the same node. + + // Add to nodesWithAnyReplica to check if the node has any replica at all. + nodesWithAnyReplica = append(nodesWithAnyReplica, rvr.Spec.NodeName) + + // Add to nodesWithDeletingReplicas to check if the node is marked for deletion. + if !rvr.DeletionTimestamp.IsZero() { + nodesWithDeletingReplicas = append(nodesWithDeletingReplicas, rvr.Spec.NodeName) + } + + // Add to nodesWithDiskfulReplicas to check if the node has a Diskful replica. + if rvr.Spec.Type == v1alpha1.ReplicaTypeDiskful && rvr.Status.ActualType == v1alpha1.ReplicaTypeDiskful { + nodesWithDiskfulReplicas = append(nodesWithDiskfulReplicas, rvr.Spec.NodeName) + } + } + + filteredRvaDesiredAttachTo := append([]string(nil), rvaDesiredAttachTo...) + + // For Local volume access, we must not keep a desired node that has no replica at all. + // (Unlike the "non-Diskful replica" case: an existing desired node may remain even if it violates Locality.) + if sc.Spec.VolumeAccess == v1alpha1.VolumeAccessLocal { + desired = slices.DeleteFunc(desired, func(node string) bool { + return !slices.Contains(nodesWithAnyReplica, node) + }) + } + + // For Local volume access, new attachments are only possible on nodes that have a Diskful replica. + if sc.Spec.VolumeAccess == v1alpha1.VolumeAccessLocal { + filteredRvaDesiredAttachTo = slices.DeleteFunc(filteredRvaDesiredAttachTo, func(node string) bool { + return !slices.Contains(nodesWithDiskfulReplicas, node) + }) + } + + // New attachments are only possible on replicas not marked for deletion. + filteredRvaDesiredAttachTo = slices.DeleteFunc(filteredRvaDesiredAttachTo, func(node string) bool { + return slices.Contains(nodesWithDeletingReplicas, node) + }) + + // Fill desired from RVA (FIFO) until we reach 2 nodes, skipping duplicates. + for _, node := range filteredRvaDesiredAttachTo { + if len(desired) >= 2 { + break + } + if slices.Contains(desired, node) { + continue + } + desired = append(desired, node) + } + + return desired +} + +// computeDesiredAttachToBaseOnlyOnRVA computes desiredAttachTo based only on active RVAs. 
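+// ("Active" means the RVA is not being deleted; deleting RVAs are skipped.)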
+// It picks unique node names from the given RVA list, preserving the order of RVAs +// (caller is expected to pass RVAs sorted by creation timestamp if FIFO semantics are desired). +func computeDesiredAttachToBaseOnlyOnRVA(rvas []v1alpha1.ReplicatedVolumeAttachment) []string { + desired := make([]string, 0, len(rvas)) + seen := map[string]struct{}{} + + for _, rva := range rvas { + if rva.Spec.NodeName == "" { + continue + } + // Only active (non-deleting) RVAs participate in desiredAttachTo. + if !rva.DeletionTimestamp.IsZero() { + continue + } + if _, ok := seen[rva.Spec.NodeName]; ok { + continue + } + seen[rva.Spec.NodeName] = struct{}{} + desired = append(desired, rva.Spec.NodeName) + } + + return desired +} + +// reconcileRVAsFinalizers reconciles finalizers for all provided RVAs. +// It continues through all RVAs, joining any errors encountered. +func (r *Reconciler) reconcileRVAsFinalizers( + ctx context.Context, + rvas []v1alpha1.ReplicatedVolumeAttachment, + actuallyAttachedTo []string, + rvaDesiredAttachTo []string, +) error { + var joinedErr error + for i := range rvas { + rva := &rvas[i] + if err := r.reconcileRVAFinalizers(ctx, rva, actuallyAttachedTo, rvaDesiredAttachTo); err != nil { + joinedErr = errors.Join(joinedErr, err) + } + } + return joinedErr +} + +// reconcileRVAsStatus reconciles status (phase + conditions) for all provided RVAs. +// It continues through all RVAs, joining any errors encountered. +func (r *Reconciler) reconcileRVAsStatus( + ctx context.Context, + rvas []v1alpha1.ReplicatedVolumeAttachment, + rv *v1alpha1.ReplicatedVolume, + sc *v1alpha1.ReplicatedStorageClass, + desiredAttachTo []string, + actuallyAttachedTo []string, + replicas []v1alpha1.ReplicatedVolumeReplica, +) error { + var joinedErr error + for i := range rvas { + rva := &rvas[i] + + // Find replica on RVA node (include deleting replicas if any). + var replicaOnNode *v1alpha1.ReplicatedVolumeReplica + for i := range replicas { + if replicas[i].Spec.NodeName == rva.Spec.NodeName && replicas[i].Spec.NodeName != "" { + replicaOnNode = &replicas[i] + break + } + } + + if err := r.reconcileRVAStatus(ctx, rva, rv, sc, desiredAttachTo, actuallyAttachedTo, replicaOnNode); err != nil { + joinedErr = errors.Join(joinedErr, err) + } + } + return joinedErr +} + +// reconcileRVAFinalizers ensures RVA finalizers are in the desired state: +// - If RVA is not deleting, it ensures ControllerFinalizer is present. +// - If RVA is deleting, it removes ControllerFinalizer only when the node is not actually attached anymore (or a duplicate RVA exists). +// +// It persists changes to the API via ensureRVAFinalizers (optimistic lock) and performs no-op when no changes are needed. +func (r *Reconciler) reconcileRVAFinalizers( + ctx context.Context, + rva *v1alpha1.ReplicatedVolumeAttachment, + actuallyAttachedTo []string, + rvaDesiredAttachTo []string, +) error { + if rva == nil { + panic("reconcileRVAFinalizers: nil rva (programmer error)") + } + + if rva.DeletionTimestamp.IsZero() { + // Add controller finalizer if RVA is not deleting. + desiredFinalizers := append([]string(nil), rva.Finalizers...) + if !slices.Contains(desiredFinalizers, v1alpha1.ControllerFinalizer) { + desiredFinalizers = append(desiredFinalizers, v1alpha1.ControllerFinalizer) + } + return r.ensureRVAFinalizers(ctx, rva, desiredFinalizers) + } + + // RVA is deleting: remove controller finalizer only when safe. 
+ // Safe when: + // - the node is not actually attached anymore, OR + // - the node is still attached, but there is another active RVA for the same node (so we don't need to wait for detach). + if !slices.Contains(actuallyAttachedTo, rva.Spec.NodeName) || slices.Contains(rvaDesiredAttachTo, rva.Spec.NodeName) { + currentFinalizers := append([]string(nil), rva.Finalizers...) + desiredFinalizers := slices.DeleteFunc(currentFinalizers, func(f string) bool { + return f == v1alpha1.ControllerFinalizer + }) + return r.ensureRVAFinalizers(ctx, rva, desiredFinalizers) + } + + return nil +} + +// ensureRVAFinalizers ensures RVA finalizers match the desired set. +// It patches the object with optimistic lock only when finalizers actually change. +func (r *Reconciler) ensureRVAFinalizers( + ctx context.Context, + rva *v1alpha1.ReplicatedVolumeAttachment, + desiredFinalizers []string, +) error { + if rva == nil { + panic("ensureRVAFinalizers: nil rva (programmer error)") + } + + if slices.Equal(rva.Finalizers, desiredFinalizers) { + return nil + } + + original := rva.DeepCopy() + rva.Finalizers = append([]string(nil), desiredFinalizers...) + if err := r.cl.Patch(ctx, rva, client.MergeFromWithOptions(original, client.MergeFromWithOptimisticLock{})); err != nil { + return err + } + + return nil +} + +// reconcileRVAStatus computes desired phase and RVA conditions for a single RVA and persists them via ensureRVAStatus. +func (r *Reconciler) reconcileRVAStatus( + ctx context.Context, + rva *v1alpha1.ReplicatedVolumeAttachment, + rv *v1alpha1.ReplicatedVolume, + sc *v1alpha1.ReplicatedStorageClass, + desiredAttachTo []string, + actuallyAttachedTo []string, + replicaOnNode *v1alpha1.ReplicatedVolumeReplica, +) error { + if rva == nil { + panic("reconcileRVAStatus: nil rva (programmer error)") + } + + var desiredPhase v1alpha1.ReplicatedVolumeAttachmentPhase + var desiredAttachedCondition metav1.Condition + + // ReplicaReady mirrors replica condition Ready (if available). + desiredReplicaReadyCondition := metav1.Condition{ + Status: metav1.ConditionUnknown, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondReplicaReadyReasonWaitingForReplica, + Message: "Waiting for replica Ready condition on the requested node", + } + + // Helper: if we have replica and its Ready condition, mirror it. + if replicaOnNode != nil { + if rvrReady := meta.FindStatusCondition(replicaOnNode.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondReadyType); rvrReady != nil { + desiredReplicaReadyCondition.Status = rvrReady.Status + desiredReplicaReadyCondition.Reason = rvrReady.Reason + desiredReplicaReadyCondition.Message = rvrReady.Message + } + } + + // Attached always wins (even if RVA/RV are deleting): reflect the actual state. + if slices.Contains(actuallyAttachedTo, rva.Spec.NodeName) { + if !rva.DeletionTimestamp.IsZero() { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhaseDetaching + } else { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhaseAttached + } + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionTrue, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonAttached, + Message: "Volume is attached to the requested node", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + + // RV might be missing (not yet created / already deleted). In this case we can't attach and keep RVA Pending. 
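+ // (This and the following checks are ordered from "cannot attach at all" to "attach in progress":
+ // missing RV/SC, Locality violation, RV not IOReady, no free attach slot, and finally the Attaching sub-states.)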
+ if rv == nil { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhasePending + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolume, + Message: "Waiting for ReplicatedVolume to exist", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + + // StorageClass might be missing (not yet created / already deleted). In this case we can't attach and keep RVA Pending. + if sc == nil { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhasePending + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolume, + Message: "Waiting for ReplicatedStorageClass to exist", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + + // For Local volume access, attachment is only possible when the requested node has a Diskful replica. + // If this is not satisfied, keep RVA in Pending (do not move to Attaching). + if sc.Spec.VolumeAccess == v1alpha1.VolumeAccessLocal { + if replicaOnNode == nil || replicaOnNode.Status.ActualType != v1alpha1.ReplicaTypeDiskful { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhasePending + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied, + Message: "Local volume access requires a Diskful replica on the requested node", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + } + + // If RV status is not initialized or not IOReady, we can't progress attachment; keep informative Pending. + if !meta.IsStatusConditionTrue(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhasePending + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolumeReady, + Message: "Waiting for ReplicatedVolume to become IOReady", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + + // Not active (not in desiredAttachTo): must wait until one of the active nodes detaches. + if !slices.Contains(desiredAttachTo, rva.Spec.NodeName) { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhasePending + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForActiveAttachmentsToDetach, + Message: "Waiting for active nodes to detach (maximum 2 nodes are supported)", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + + // Active but not yet attached. 
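+ // All branches below keep the phase at Attaching; the Attached reason narrows down what we are still waiting for.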
+ if replicaOnNode == nil { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhaseAttaching + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplica, + Message: "Waiting for replica on the requested node", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + + // TieBreaker replica cannot be promoted directly; it must be converted first. + if replicaOnNode.Spec.Type == v1alpha1.ReplicaTypeTieBreaker || + replicaOnNode.Status.ActualType == v1alpha1.ReplicaTypeTieBreaker { + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhaseAttaching + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonConvertingTieBreakerToAccess, + Message: "Converting TieBreaker replica to Access to allow promotion", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) + } + + desiredPhase = v1alpha1.ReplicatedVolumeAttachmentPhaseAttaching + desiredAttachedCondition = metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonSettingPrimary, + Message: "Waiting for replica to become Primary", + } + return r.ensureRVAStatus(ctx, rva, desiredPhase, desiredAttachedCondition, desiredReplicaReadyCondition, computeAggregateReadyCondition(desiredAttachedCondition, desiredReplicaReadyCondition)) +} + +func computeAggregateReadyCondition(attached metav1.Condition, replicaReady metav1.Condition) metav1.Condition { + // Ready is a strict aggregate: Attached=True AND ReplicaReady=True + if attached.Status != metav1.ConditionTrue { + return metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonNotAttached, + Message: "Waiting for volume to be attached to the requested node", + } + } + if replicaReady.Status != metav1.ConditionTrue { + return metav1.Condition{ + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonReplicaNotReady, + Message: "Waiting for replica on the requested node to become Ready", + } + } + return metav1.Condition{ + Status: metav1.ConditionTrue, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonReady, + Message: "Volume is attached and replica is Ready on the requested node", + } +} + +// ensureRVAStatus ensures RVA status.phase and conditions match desired values. +// It patches status with optimistic lock only when something actually changes. 
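+// Condition Type and ObservedGeneration are set here, so callers only need to provide Status, Reason and Message.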
+func (r *Reconciler) ensureRVAStatus( + ctx context.Context, + rva *v1alpha1.ReplicatedVolumeAttachment, + desiredPhase v1alpha1.ReplicatedVolumeAttachmentPhase, + desiredAttachedCondition metav1.Condition, + desiredReplicaReadyCondition metav1.Condition, + desiredReadyCondition metav1.Condition, +) error { + if rva == nil { + panic("ensureRVAStatus: nil rva (programmer error)") + } + + desiredAttachedCondition.Type = v1alpha1.ReplicatedVolumeAttachmentCondAttachedType + desiredReplicaReadyCondition.Type = v1alpha1.ReplicatedVolumeAttachmentCondReplicaReadyType + desiredReadyCondition.Type = v1alpha1.ReplicatedVolumeAttachmentCondReadyType + + desiredAttachedCondition.ObservedGeneration = rva.Generation + desiredReplicaReadyCondition.ObservedGeneration = rva.Generation + desiredReadyCondition.ObservedGeneration = rva.Generation + + currentPhase := rva.Status.Phase + currentAttached := meta.FindStatusCondition(rva.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + currentReplicaReady := meta.FindStatusCondition(rva.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondReplicaReadyType) + currentReady := meta.FindStatusCondition(rva.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondReadyType) + + phaseEqual := currentPhase == desiredPhase + attachedEqual := obju.ConditionSemanticallyEqual(currentAttached, &desiredAttachedCondition) + replicaReadyEqual := obju.ConditionSemanticallyEqual(currentReplicaReady, &desiredReplicaReadyCondition) + readyEqual := obju.ConditionSemanticallyEqual(currentReady, &desiredReadyCondition) + if phaseEqual && attachedEqual && replicaReadyEqual && readyEqual { + return nil + } + + original := rva.DeepCopy() + rva.Status.Phase = desiredPhase + meta.SetStatusCondition(&rva.Status.Conditions, desiredAttachedCondition) + meta.SetStatusCondition(&rva.Status.Conditions, desiredReplicaReadyCondition) + meta.SetStatusCondition(&rva.Status.Conditions, desiredReadyCondition) + + if err := r.cl.Status().Patch(ctx, rva, client.MergeFromWithOptions(original, client.MergeFromWithOptimisticLock{})); err != nil { + if client.IgnoreNotFound(err) == nil { + return nil + } + return err + } + + return nil +} + +// ensureRV updates ReplicatedVolume status fields derived from replicas/RVAs: +// - status.desiredAttachTo +// - status.actuallyAttachedTo +// +// It patches status with optimistic lock only when something actually changes. +func (r *Reconciler) ensureRV( + ctx context.Context, + rv *v1alpha1.ReplicatedVolume, + desiredAttachTo []string, + actuallyAttachedTo []string, + desiredAllowTwoPrimaries bool, +) error { + if rv == nil { + panic("ensureRV: nil rv (programmer error)") + } + + currentDesired := []string(nil) + currentActual := []string(nil) + currentAllowTwoPrimaries := false + currentDesired = rv.Status.DesiredAttachTo + currentActual = rv.Status.ActuallyAttachedTo + if rv.Status.DRBD != nil && rv.Status.DRBD.Config != nil { + currentAllowTwoPrimaries = rv.Status.DRBD.Config.AllowTwoPrimaries + } + + if slices.Equal(currentDesired, desiredAttachTo) && + slices.Equal(currentActual, actuallyAttachedTo) && + currentAllowTwoPrimaries == desiredAllowTwoPrimaries { + return nil + } + + original := rv.DeepCopy() + if rv.Status.DRBD == nil { + rv.Status.DRBD = &v1alpha1.DRBDResourceDetails{} + } + if rv.Status.DRBD.Config == nil { + rv.Status.DRBD.Config = &v1alpha1.DRBDResourceConfig{} + } + rv.Status.DesiredAttachTo = append([]string(nil), desiredAttachTo...) + rv.Status.ActuallyAttachedTo = append([]string(nil), actuallyAttachedTo...) 
+ rv.Status.DRBD.Config.AllowTwoPrimaries = desiredAllowTwoPrimaries
+
+ if err := r.cl.Status().Patch(ctx, rv, client.MergeFromWithOptions(original, client.MergeFromWithOptimisticLock{})); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// reconcileRVRs reconciles status for all ReplicatedVolumeReplica objects of an RV.
+//
+// It computes the desired DRBD configuration (including allowTwoPrimaries and which nodes should be Primary),
+// applies it to each replica via reconcileRVR, and joins errors (does not fail-fast).
+//
+// Safety notes:
+// - never request 2 Primaries until allowTwoPrimaries is confirmed applied everywhere;
+// - when switching the active Primary node without allowTwoPrimaries, do it as "demote first, then promote".
+func (r *Reconciler) reconcileRVRs(
+ ctx context.Context,
+ replicas []v1alpha1.ReplicatedVolumeReplica,
+ desiredAttachTo []string,
+ actuallyAttachedTo []string,
+ promoteEnabled bool,
+) error {
+ actualAllowTwoPrimaries := computeActualTwoPrimaries(replicas)
+
+ // DRBD safety rule #1:
+ // - we only allow 2 Primaries after allowTwoPrimaries is confirmed applied everywhere;
+ // - until then, we keep at most 1 Primary to reduce split-brain risk.
+
+ // DRBD safety rule #2:
+ // - when switching the active Primary node (in any mode), the transition must be "demote first, then promote"
+ // (i.e. never request two Primaries without allowTwoPrimaries).
+
+ // Start from the current reality: nodes that are Primary right now.
+ desiredPrimaryNodes := append([]string(nil), actuallyAttachedTo...)
+
+ // Try to promote additional desired nodes if we have capacity (capacity depends on actualAllowTwoPrimaries),
+ // but only when promotions are enabled (RV must be IOReady).
+ if promoteEnabled {
+ desiredPrimaryNodes = promoteNewDesiredNodesIfPossible(actualAllowTwoPrimaries, desiredPrimaryNodes, desiredAttachTo)
+ }
+
+ // Demote nodes that are Primary but are no longer desired. This is necessary to free up "places" for future promotions.
+ desiredPrimaryNodes = demoteNotAnyMoreDesiredNodes(desiredPrimaryNodes, desiredAttachTo)
+
+ var joinedErr error
+ for i := range replicas {
+ rvr := &replicas[i]
+ if err := r.reconcileRVR(ctx, rvr, desiredPrimaryNodes); err != nil {
+ joinedErr = errors.Join(joinedErr, err)
+ }
+ }
+ return joinedErr
+}
+
+// computeDesiredTwoPrimaries returns whether we want to allow two Primary replicas.
+//
+// Rule:
+// - if we desire two attachments, we must allow two Primaries;
+// - if we already have >1 Primary (actuallyAttachedTo), we MUST NOT disable allowTwoPrimaries until we demote down to <=1.
+func computeDesiredTwoPrimaries(desiredAttachTo []string, actuallyAttachedTo []string) bool {
+ // The desiredAttachTo list can't contain more than 2 nodes; this is enforced by computeDesiredAttachTo.
+ return len(desiredAttachTo) == 2 || len(actuallyAttachedTo) > 1
+}
+
+// computeActualTwoPrimaries returns whether allowTwoPrimaries is actually applied on all relevant replicas.
+// A replica is considered relevant when it is assigned to a node.
+func computeActualTwoPrimaries(replicas []v1alpha1.ReplicatedVolumeReplica) bool {
+ for _, rvr := range replicas {
+ // Skip replicas without a node (unscheduled replicas or TieBreaker without node assignment).
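+ // Such replicas do not take part in the check, so a not-yet-scheduled replica never blocks
+ // allowTwoPrimaries from being considered applied.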
+ if rvr.Spec.NodeName == "" { + continue + } + if rvr.Status.DRBD == nil || rvr.Status.DRBD.Actual == nil || !rvr.Status.DRBD.Actual.AllowTwoPrimaries { + return false + } + } + return true +} + +// promoteNewDesiredNodesIfPossible returns actualPrimaryNodes extended with 0..2 additional desired nodes, if possible. +// +// The function respects the current allowTwoPrimaries readiness: +// - if actualAllowTwoPrimaries is false: maxNodesAllowed=1 +// - if actualAllowTwoPrimaries is true: maxNodesAllowed=2 +// +// Output size is 0..2. +func promoteNewDesiredNodesIfPossible( + actualAllowTwoPrimaries bool, + actualPrimaryNodes []string, + desiredPrimaryNodes []string, +) []string { + maxNodesAllowed := 1 + if actualAllowTwoPrimaries { + maxNodesAllowed = 2 + } + + // Start with actual Primary nodes. + out := append([]string(nil), actualPrimaryNodes...) + + // Add missing desired nodes (FIFO) until we reach maxNodesAllowed or run out of candidates. + if len(out) >= maxNodesAllowed { + return out + } + for _, node := range desiredPrimaryNodes { + if slices.Contains(out, node) { + continue + } + out = append(out, node) + if len(out) >= maxNodesAllowed { + break + } + } + + return out +} + +// demoteNotAnyMoreDesiredNodes returns actualPrimaryNodes with nodes that are not present in desiredPrimaryNodes removed. +// The order of remaining nodes is preserved. +func demoteNotAnyMoreDesiredNodes( + actualPrimaryNodes []string, + desiredPrimaryNodes []string, +) []string { + out := make([]string, 0, len(actualPrimaryNodes)) + for _, node := range actualPrimaryNodes { + if slices.Contains(desiredPrimaryNodes, node) { + out = append(out, node) + } + } + return out +} + +// reconcileRVR reconciles a single replica (spec.type + status: DRBD config.primary and Attached condition) +// for the given RV plan. desiredPrimary is derived from whether the replica node is present in desiredPrimaryNodes. +func (r *Reconciler) reconcileRVR( + ctx context.Context, + rvr *v1alpha1.ReplicatedVolumeReplica, + desiredPrimaryNodes []string, +) error { + if rvr == nil { + panic("reconcileRVR: rvr is nil") + } + + desiredPrimaryWanted := slices.Contains(desiredPrimaryNodes, rvr.Spec.NodeName) + + // TieBreaker cannot be promoted, so convert it to Access first. + desiredType := rvr.Spec.Type + if desiredPrimaryWanted && rvr.Spec.Type == v1alpha1.ReplicaTypeTieBreaker { + desiredType = v1alpha1.ReplicaTypeAccess + } + if err := r.ensureRVRType(ctx, rvr, desiredType); err != nil { + return err + } + + desiredPrimary := desiredPrimaryWanted + + // We only request Primary on replicas that are actually Diskful or Access (by status.actualType). + // This prevents trying to promote TieBreaker (or not-yet-initialized replicas). + if desiredPrimary { + if rvr.Status.ActualType != v1alpha1.ReplicaTypeDiskful && rvr.Status.ActualType != v1alpha1.ReplicaTypeAccess { + desiredPrimary = false + } + } + + // Build desired Attached condition. + desiredAttachedCondition := computeAttachedCondition(rvr, desiredPrimary) + + return r.ensureRVRStatus(ctx, rvr, desiredPrimary, desiredAttachedCondition) +} + +// ensureRVRType ensures rvr.spec.type matches the desired value. +// It patches the object with optimistic lock only when something actually changes. 
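+// Unlike the status patches elsewhere in this controller, this is a spec patch, so applying it bumps the object's generation.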
+func (r *Reconciler) ensureRVRType( + ctx context.Context, + rvr *v1alpha1.ReplicatedVolumeReplica, + desiredType v1alpha1.ReplicaType, +) error { + if rvr == nil { + panic("ensureRVRType: rvr is nil") + } + + if rvr.Spec.Type == desiredType { + return nil + } + + original := rvr.DeepCopy() + rvr.Spec.Type = desiredType + + if err := r.cl.Patch(ctx, rvr, client.MergeFromWithOptions(original, client.MergeFromWithOptimisticLock{})); err != nil { + return err + } + + return nil +} + +// computeAttachedCondition computes the Attached condition for a replica based on its current state. +func computeAttachedCondition(rvr *v1alpha1.ReplicatedVolumeReplica, shouldBePrimary bool) metav1.Condition { + if rvr.Spec.Type != v1alpha1.ReplicaTypeAccess && rvr.Spec.Type != v1alpha1.ReplicaTypeDiskful { + return metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeReplicaCondAttachedType, + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeReplicaCondAttachedReasonAttachingNotApplicable, + } + } + + if rvr.Spec.NodeName == "" || rvr.Status.DRBD == nil || rvr.Status.DRBD.Status == nil { + return metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeReplicaCondAttachedType, + Status: metav1.ConditionUnknown, + Reason: v1alpha1.ReplicatedVolumeReplicaCondAttachedReasonAttachingNotInitialized, + } + } + + isPrimary := rvr.Status.DRBD.Status.Role == "Primary" + + cond := metav1.Condition{Type: v1alpha1.ReplicatedVolumeReplicaCondAttachedType} + + if isPrimary { + cond.Status = metav1.ConditionTrue + cond.Reason = v1alpha1.ReplicatedVolumeReplicaCondAttachedReasonAttached + } else { + cond.Status = metav1.ConditionFalse + if shouldBePrimary { + cond.Reason = v1alpha1.ReplicatedVolumeReplicaCondAttachedReasonPending + } else { + cond.Reason = v1alpha1.ReplicatedVolumeReplicaCondAttachedReasonDetached + } + } + + return cond +} + +// ensureRVRStatus ensures rvr.status.drbd.config.primary and the Attached condition match the desired values. +// It patches status with optimistic lock only when something actually changes. 
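+// As in ensureRVAStatus, the condition Type and ObservedGeneration are filled in here.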
+func (r *Reconciler) ensureRVRStatus( + ctx context.Context, + rvr *v1alpha1.ReplicatedVolumeReplica, + desiredPrimary bool, + desiredAttachedCondition metav1.Condition, +) error { + if rvr == nil { + panic("ensureRVRStatus: rvr is nil") + } + + primary := false + if rvr.Status.DRBD != nil && rvr.Status.DRBD.Config != nil && rvr.Status.DRBD.Config.Primary != nil { + primary = *rvr.Status.DRBD.Config.Primary + } + attachedCond := meta.FindStatusCondition(rvr.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondAttachedType) + + desiredAttachedCondition.Type = v1alpha1.ReplicatedVolumeReplicaCondAttachedType + desiredAttachedCondition.ObservedGeneration = rvr.Generation + + if primary == desiredPrimary && + obju.ConditionSemanticallyEqual(attachedCond, &desiredAttachedCondition) { + return nil + } + + original := rvr.DeepCopy() + if rvr.Status.DRBD == nil { + rvr.Status.DRBD = &v1alpha1.DRBD{} + } + if rvr.Status.DRBD.Config == nil { + rvr.Status.DRBD.Config = &v1alpha1.DRBDConfig{} + } + + rvr.Status.DRBD.Config.Primary = &desiredPrimary + meta.SetStatusCondition(&rvr.Status.Conditions, desiredAttachedCondition) + + if err := r.cl.Status().Patch(ctx, rvr, client.MergeFromWithOptions(original, client.MergeFromWithOptimisticLock{})); err != nil { + return err + } + + return nil +} diff --git a/images/controller/internal/controllers/rv_attach_controller/reconciler_test.go b/images/controller/internal/controllers/rv_attach_controller/reconciler_test.go new file mode 100644 index 000000000..bcdf31cb6 --- /dev/null +++ b/images/controller/internal/controllers/rv_attach_controller/reconciler_test.go @@ -0,0 +1,2669 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvattachcontroller_test + +import ( + "context" + "errors" + "fmt" + "testing" + "time" + + "github.com/go-logr/logr" + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/client/interceptor" + "sigs.k8s.io/controller-runtime/pkg/log" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + rvattachcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rv_attach_controller" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" +) + +func TestRvAttachReconciler(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "rv-attach-controller Reconciler Suite") +} + +var errExpectedTestError = errors.New("test error") + +var _ = Describe("Reconcile", func() { + scheme := runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + + var ( + builder *fake.ClientBuilder + cl client.WithWatch + rec *rvattachcontroller.Reconciler + ) + + BeforeEach(func() { + builder = testhelpers.WithRVRByReplicatedVolumeNameIndex(testhelpers.WithRVAByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme))). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}) + cl = nil + rec = nil + }) + + JustBeforeEach(func() { + cl = builder.Build() + rec = rvattachcontroller.NewReconciler(cl, logr.New(log.NullLogSink{})) + }) + + It("does not patch ReplicatedVolume status when computed fields already match (ensureRV no-op)", func(ctx SpecContext) { + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-noop", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + DesiredAttachTo: []string{}, + ActuallyAttachedTo: []string{}, + }, + } + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc1", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: "Remote", + }, + } + + localBuilder := testhelpers.WithRVRByReplicatedVolumeNameIndex(testhelpers.WithRVAByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme))). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}). + WithObjects(rv, rsc). + WithInterceptorFuncs(interceptor.Funcs{ + SubResourcePatch: func(ctx context.Context, cl client.Client, subResourceName string, obj client.Object, patch client.Patch, opts ...client.SubResourcePatchOption) error { + if subResourceName == "status" { + if _, ok := obj.(*v1alpha1.ReplicatedVolume); ok { + return errExpectedTestError + } + } + return cl.SubResource(subResourceName).Patch(ctx, obj, patch, opts...) 
+ }, + }) + + localCl := localBuilder.Build() + localRec := rvattachcontroller.NewReconciler(localCl, logr.New(log.NullLogSink{})) + + result, err := localRec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rv)}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + }) + + It("returns nil when ReplicatedVolume not found", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKey{Name: "non-existent"}})).To(Equal(reconcile.Result{})) + }) + + It("sets RVA Pending/Ready=False with WaitingForReplicatedVolume when ReplicatedVolume does not exist", func(ctx SpecContext) { + // Fake client does not support setting deletionTimestamp via Update() and deletes objects immediately on Delete(). + // To simulate a deleting object, we seed the fake client with an RVA that already has DeletionTimestamp set. + now := metav1.Now() + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-missing-rv", + DeletionTimestamp: &now, + Finalizers: []string{ + v1alpha1.ControllerFinalizer, + }, + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: "non-existent", + NodeName: "node-1", + }, + } + + localCl := testhelpers.WithRVRByReplicatedVolumeNameIndex(testhelpers.WithRVAByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme))). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}). + WithObjects(rva). + Build() + localRec := rvattachcontroller.NewReconciler(localCl, logr.New(log.NullLogSink{})) + + Expect(localRec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKey{Name: "non-existent"}})).To(Equal(reconcile.Result{})) + + got := &v1alpha1.ReplicatedVolumeAttachment{} + err := localCl.Get(ctx, client.ObjectKeyFromObject(rva), got) + if client.IgnoreNotFound(err) == nil { + // Once finalizer is released, the object may disappear immediately. + return + } + Expect(err).NotTo(HaveOccurred()) + // When RV is missing, deleting RVA finalizer must be released. 
+ Expect(got.Finalizers).NotTo(ContainElement(v1alpha1.ControllerFinalizer)) + Expect(got.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(got.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolume)) + }) + + It("sets RVA Pending/Ready=False with WaitingForReplicatedVolume when ReplicatedVolume was deleted", func(ctx SpecContext) { + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-to-delete", + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + } + Expect(cl.Create(ctx, rv)).To(Succeed()) + + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-for-deleted-rv", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: "rv-to-delete", + NodeName: "node-1", + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + + Expect(cl.Delete(ctx, rv)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKey{Name: "rv-to-delete"}})).To(Equal(reconcile.Result{})) + + got := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), got)).To(Succeed()) + Expect(got.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(got.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolume)) + }) + + It("does not error when ReplicatedVolume is missing but replicas exist", func(ctx SpecContext) { + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-orphan", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-missing", + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: "Primary", + }, + }, + }, + } + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKey{Name: "rv-missing"}})).To(Equal(reconcile.Result{})) + }) + + It("runs detach-only: keeps attached RVA Attached, sets others Pending/WaitingForReplicatedVolumeReady, and releases finalizer only when safe", func(ctx SpecContext) { + // Same reason as in the test above: to simulate a deleting RVA, we seed the fake client with it. 
+ now := metav1.Now() + + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-detach-only", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionFalse, + }}, + ActuallyAttachedTo: []string{"node-1"}, + DesiredAttachTo: []string{"node-1", "node-2"}, + }, + } + + rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-node-1", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-node-2", + DeletionTimestamp: &now, + Finalizers: []string{ + v1alpha1.ControllerFinalizer, + }, + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + }, + } + + // Replica on node-1 is Primary (actual attachment). + rvr1 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-node-1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: "Primary", + }, + }, + }, + } + + // Replica on node-2 is Primary=true; detach-only must demote it. + primaryTrue := true + rvr2 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-node-2", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + DRBD: &v1alpha1.DRBD{ + Config: &v1alpha1.DRBDConfig{ + Primary: &primaryTrue, + }, + }, + }, + } + + localCl := testhelpers.WithRVRByReplicatedVolumeNameIndex(testhelpers.WithRVAByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme))). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}). + WithObjects(rv, rva1, rva2, rvr1, rvr2). + Build() + localRec := rvattachcontroller.NewReconciler(localCl, logr.New(log.NullLogSink{})) + + Expect(localRec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKey{Name: rv.Name}})).To(Equal(reconcile.Result{})) + + // desiredAttachTo must be reduced to only node-1 (no new nodes added). + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(localCl.Get(ctx, client.ObjectKeyFromObject(rv), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1"})) + + // rva1: attached node must stay Attached/Ready=True and should have finalizer added. 
+ gotRVA1 := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(localCl.Get(ctx, client.ObjectKeyFromObject(rva1), gotRVA1)).To(Succeed()) + Expect(gotRVA1.Finalizers).To(ContainElement(v1alpha1.ControllerFinalizer)) + Expect(gotRVA1.Status).NotTo(BeNil()) + Expect(gotRVA1.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + cond1 := meta.FindStatusCondition(gotRVA1.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond1).NotTo(BeNil()) + Expect(cond1.Status).To(Equal(metav1.ConditionTrue)) + + // rva2: deleting + not attached => finalizer removed, status Pending with WaitingForReplicatedVolumeReady. + gotRVA2 := &v1alpha1.ReplicatedVolumeAttachment{} + err := localCl.Get(ctx, client.ObjectKeyFromObject(rva2), gotRVA2) + if client.IgnoreNotFound(err) != nil { + Expect(err).NotTo(HaveOccurred()) + Expect(gotRVA2.Finalizers).NotTo(ContainElement(v1alpha1.ControllerFinalizer)) + Expect(gotRVA2.Status).NotTo(BeNil()) + Expect(gotRVA2.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond2 := meta.FindStatusCondition(gotRVA2.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond2).NotTo(BeNil()) + Expect(cond2.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond2.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolumeReady)) + } + + // rvr-node-2 should be demoted + gotRVR2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(localCl.Get(ctx, client.ObjectKeyFromObject(rvr2), gotRVR2)).To(Succeed()) + Expect(gotRVR2.Status).NotTo(BeNil()) + Expect(gotRVR2.Status.DRBD).NotTo(BeNil()) + Expect(gotRVR2.Status.DRBD.Config).NotTo(BeNil()) + Expect(gotRVR2.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*gotRVR2.Status.DRBD.Config.Primary).To(BeFalse()) + }) + + When("rv created", func() { + var rv v1alpha1.ReplicatedVolume + + BeforeEach(func() { + rv = v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv1", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Create(ctx, &rv)).To(Succeed()) + }) + + When("status is empty", func() { + BeforeEach(func() { + rv.Status = v1alpha1.ReplicatedVolumeStatus{} + }) + + It("does not error when status is empty", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + }) + }) + + When("IOReady condition is False", func() { + BeforeEach(func() { + rv.Status = v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionFalse, + }, + }, + // Keep desiredAttachTo pre-initialized to ensure the controller does not attempt + // to promote replicas just because desiredAttachTo already contains nodes. + DesiredAttachTo: []string{"node-1", "node-2"}, + } + + // ensure that if controller tried to read RSC, it would fail + builder.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedStorageClass); ok { + return errExpectedTestError + } + return c.Get(ctx, key, obj, opts...) 
+ }, + }) + }) + + It("runs detach-only when IOReady condition is False without touching ReplicatedStorageClass", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + }) + + It("does not request Primary on replicas before RV IOReady even if desiredAttachTo already contains nodes", func(ctx SpecContext) { + rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-node-1-not-ioready", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-node-2-not-ioready", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + }, + } + Expect(cl.Create(ctx, rva1)).To(Succeed()) + Expect(cl.Create(ctx, rva2)).To(Succeed()) + + rvr1 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-node-1-not-ioready", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + }, + } + rvr2 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-node-2-not-ioready", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + }, + } + Expect(cl.Create(ctx, rvr1)).To(Succeed()) + Expect(cl.Create(ctx, rvr2)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR1 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr1), gotRVR1)).To(Succeed()) + primaryRequested1 := false + if gotRVR1.Status.DRBD != nil && gotRVR1.Status.DRBD.Config != nil && gotRVR1.Status.DRBD.Config.Primary != nil { + primaryRequested1 = *gotRVR1.Status.DRBD.Config.Primary + } + Expect(primaryRequested1).To(BeFalse()) + + gotRVR2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr2), gotRVR2)).To(Succeed()) + primaryRequested2 := false + if gotRVR2.Status.DRBD != nil && gotRVR2.Status.DRBD.Config != nil && gotRVR2.Status.DRBD.Config.Primary != nil { + primaryRequested2 = *gotRVR2.Status.DRBD.Config.Primary + } + Expect(primaryRequested2).To(BeFalse()) + }) + }) + + When("ReplicatedStorageClassName is empty", func() { + BeforeEach(func() { + rv.Spec.ReplicatedStorageClassName = "" + + // interceptor to fail any RSC Get if it ever happens + builder.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedStorageClass); ok { + return errExpectedTestError + } + return c.Get(ctx, key, obj, opts...) 
+ }, + }) + }) + + It("runs detach-only when ReplicatedStorageClassName is empty", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + }) + }) + + When("publish context loaded", func() { + var ( + rsc v1alpha1.ReplicatedStorageClass + rvrList v1alpha1.ReplicatedVolumeReplicaList + attachTo []string + volumeAccess v1alpha1.ReplicatedStorageClassVolumeAccess + ) + + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessLocal + attachTo = []string{"node-1", "node-2"} + + rv.Status = v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }, + }, + } + // Keep RV.status.desiredAttachTo pre-initialized: + // for Local access the controller may be unable to "add" nodes from RVA until replicas are initialized + // (status.actualType must be reported by the agent), but it still must keep already-desired nodes. + rv.Status.DesiredAttachTo = attachTo + + rsc = v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc1", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: volumeAccess, + }, + } + + rvrList = v1alpha1.ReplicatedVolumeReplicaList{ + Items: []v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-df1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-df2", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Create(ctx, &rsc)).To(Succeed()) + for i := range rvrList.Items { + Expect(cl.Create(ctx, &rvrList.Items[i])).To(Succeed()) + } + + // Create RVA objects according to desired attachTo. + // The controller derives rv.status.desiredAttachTo from the RVA set. + for i, nodeName := range attachTo { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: fmt.Sprintf("rva-%d-%s", i, nodeName), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: nodeName, + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + } + }) + + When("volumeAccess is not Local", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessAny + rsc.Spec.VolumeAccess = volumeAccess + }) + + It("does not set any AttachSucceeded condition (it is not used on RV anymore)", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + rvList := &v1alpha1.ReplicatedVolumeList{} + Expect(cl.List(ctx, rvList)).To(Succeed()) + Expect(rvList.Items).To(SatisfyAll( + HaveLen(1), + HaveEach(HaveField("Status.Conditions", Not(ContainElement(HaveField("Type", Equal("AttachSucceeded")))))), + )) + }) + }) + + When("ReplicatedStorageClass switches from Remote to Local", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessAny + rsc.Spec.VolumeAccess = volumeAccess + }) + + It("does not detach already-desired nodes even if they violate Locality after the switch", func(ctx SpecContext) { + // Simulate that the agent already reported actual types: + // node-2 is not Diskful (will violate Locality once SC becomes Local). 
+ for _, item := range []struct { + name string + actualType v1alpha1.ReplicaType + }{ + {name: "rvr-df1", actualType: v1alpha1.ReplicaTypeDiskful}, + {name: "rvr-df2", actualType: v1alpha1.ReplicaTypeAccess}, + } { + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: item.name}, got)).To(Succeed()) + orig := got.DeepCopy() + got.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: item.actualType, + } + Expect(cl.Status().Patch(ctx, got, client.MergeFrom(orig))).To(Succeed()) + } + + // Reconcile #1 with Remote: desiredAttachTo remains as-is. + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV1 := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV1)).To(Succeed()) + Expect(gotRV1.Status).NotTo(BeNil()) + Expect(gotRV1.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"})) + + // Switch storage class to Local. + gotRSC := &v1alpha1.ReplicatedStorageClass{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rsc), gotRSC)).To(Succeed()) + origRSC := gotRSC.DeepCopy() + gotRSC.Spec.VolumeAccess = "Local" + Expect(cl.Patch(ctx, gotRSC, client.MergeFrom(origRSC))).To(Succeed()) + + // Reconcile #2 with Local: existing desired nodes must not be detached. + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV2 := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV2)).To(Succeed()) + Expect(gotRV2.Status).NotTo(BeNil()) + Expect(gotRV2.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"})) + + // But the violating node must be reflected in RVA status. + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied)) + }) + + When("node was actually attached before the switch", func() { + It("keeps RVA Attached (does not downgrade to Pending) even if Locality is violated after the switch", func(ctx SpecContext) { + // Simulate actual attachment on node-2: DRBD role Primary => actuallyAttachedTo contains node-2. + // Also simulate that node-2 is not Diskful (will violate Locality once SC becomes Local). 
+ rvr1 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, rvr1)).To(Succeed()) + orig1 := rvr1.DeepCopy() + rvr1.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{Role: "Secondary"}, + }, + } + Expect(cl.Status().Patch(ctx, rvr1, client.MergeFrom(orig1))).To(Succeed()) + + rvr2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, rvr2)).To(Succeed()) + orig2 := rvr2.DeepCopy() + rvr2.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeAccess, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{Role: "Primary"}, + }, + } + Expect(cl.Status().Patch(ctx, rvr2, client.MergeFrom(orig2))).To(Succeed()) + + // Reconcile #1 with Remote: RVA on node-2 must be Attached (it is actually attached). + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVA1 := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA1)).To(Succeed()) + Expect(gotRVA1.Status).NotTo(BeNil()) + Expect(gotRVA1.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + cond1 := meta.FindStatusCondition(gotRVA1.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond1).NotTo(BeNil()) + Expect(cond1.Status).To(Equal(metav1.ConditionTrue)) + Expect(cond1.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonAttached)) + + // Switch storage class to Local. + gotRSC := &v1alpha1.ReplicatedStorageClass{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rsc), gotRSC)).To(Succeed()) + origRSC := gotRSC.DeepCopy() + gotRSC.Spec.VolumeAccess = "Local" + Expect(cl.Patch(ctx, gotRSC, client.MergeFrom(origRSC))).To(Succeed()) + + // Reconcile #2 with Local: attached must still win over Locality. + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVA2 := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA2)).To(Succeed()) + Expect(gotRVA2.Status).NotTo(BeNil()) + Expect(gotRVA2.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + cond2 := meta.FindStatusCondition(gotRVA2.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond2).NotTo(BeNil()) + Expect(cond2.Status).To(Equal(metav1.ConditionTrue)) + Expect(cond2.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonAttached)) + }) + }) + }) + + When("Local access and replica violates Locality", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessLocal + rsc.Spec.VolumeAccess = volumeAccess + }) + + When("node was not previously desired", func() { + BeforeEach(func() { + // RVAs exist for node-1 and node-2, but RV status currently desires only node-1. + rv.Status.DesiredAttachTo = []string{"node-1"} + }) + + It("does not add the node into desiredAttachTo", func(ctx SpecContext) { + // Simulate that the agent already reported actual types: + // node-2 is not Diskful, so it must not be added into desiredAttachTo under Local access. 
+ for _, item := range []struct { + name string + actualType v1alpha1.ReplicaType + }{ + {name: "rvr-df1", actualType: v1alpha1.ReplicaTypeDiskful}, + {name: "rvr-df2", actualType: v1alpha1.ReplicaTypeAccess}, + } { + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: item.name}, got)).To(Succeed()) + orig := got.DeepCopy() + got.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: item.actualType, + } + Expect(cl.Status().Patch(ctx, got, client.MergeFrom(orig))).To(Succeed()) + } + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1"})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied)) + }) + }) + }) + + When("Local access and Diskful replicas exist on all attachTo nodes", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessLocal + rsc.Spec.VolumeAccess = volumeAccess + }) + + It("does not set any AttachSucceeded condition and proceeds with reconciliation", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + rvList := &v1alpha1.ReplicatedVolumeList{} + Expect(cl.List(ctx, rvList)).To(Succeed()) + Expect(rvList.Items).To(HaveLen(1)) + got := &rvList.Items[0] + + // AttachSucceeded condition is not used on RV anymore + Expect(meta.FindStatusCondition(got.Status.Conditions, "AttachSucceeded")).To(BeNil()) + }) + }) + + When("Local access but Diskful replica is missing on one of attachTo nodes", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessLocal + rsc.Spec.VolumeAccess = volumeAccess + + // remove Diskful replica for node-2 + rvrList.Items = rvrList.Items[:1] + }) + + It("keeps RVA Pending with LocalityNotSatisfied and does not include the node into desiredAttachTo", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1"})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied)) + }) + }) 
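+
+ // The following specs cover the allowTwoPrimaries flow: the controller publishes
+ // rv.status.drbd.config.allowTwoPrimaries first and requests a second Primary only
+ // after all replicas report actual.allowTwoPrimaries.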
+ + When("allowTwoPrimaries is configured and actual flag not yet applied on replicas", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessLocal + rsc.Spec.VolumeAccess = volumeAccess + + // request two primaries (via RVA set; attachTo is also used for initial desired preference) + attachTo = []string{"node-1", "node-2"} + + // replicas without actual.AllowTwoPrimaries + rvrList.Items[0].Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{ + AllowTwoPrimaries: false, + }, + }, + } + rvrList.Items[1].Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{ + AllowTwoPrimaries: false, + }, + }, + } + }) + + It("sets rv.status.drbd.config.allowTwoPrimaries=true and waits for replicas", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + rvList := &v1alpha1.ReplicatedVolumeList{} + Expect(cl.List(ctx, rvList)).To(Succeed()) + Expect(rvList.Items).To(HaveLen(1)) + got := &rvList.Items[0] + Expect(got.Status).NotTo(BeNil()) + Expect(got.Status.DRBD).NotTo(BeNil()) + Expect(got.Status.DRBD.Config).NotTo(BeNil()) + Expect(got.Status.DRBD.Config.AllowTwoPrimaries).To(BeTrue()) + }) + + It("does not request the 2nd Primary until allowTwoPrimaries is applied on all replicas", func(ctx SpecContext) { + // Simulate that node-1 is Primary right now, but allowTwoPrimaries is not applied on replicas yet. + rvr1 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, rvr1)).To(Succeed()) + orig1 := rvr1.DeepCopy() + rvr1.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false}, + Status: &v1alpha1.DRBDStatus{Role: "Primary"}, + }, + } + Expect(cl.Status().Patch(ctx, rvr1, client.MergeFrom(orig1))).To(Succeed()) + + rvr2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, rvr2)).To(Succeed()) + orig2 := rvr2.DeepCopy() + rvr2.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false}, + Status: &v1alpha1.DRBDStatus{Role: "Secondary"}, + }, + } + Expect(cl.Status().Patch(ctx, rvr2, client.MergeFrom(orig2))).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR1 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, gotRVR1)).To(Succeed()) + Expect(gotRVR1.Status).NotTo(BeNil()) + Expect(gotRVR1.Status.DRBD).NotTo(BeNil()) + Expect(gotRVR1.Status.DRBD.Config).NotTo(BeNil()) + Expect(gotRVR1.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*gotRVR1.Status.DRBD.Config.Primary).To(BeTrue()) + + gotRVR2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, gotRVR2)).To(Succeed()) + if gotRVR2.Status.DRBD != nil && + gotRVR2.Status.DRBD.Config != nil && + gotRVR2.Status.DRBD.Config.Primary != nil { + Expect(*gotRVR2.Status.DRBD.Config.Primary).To(BeFalse()) + } + }) + }) + + When("allowTwoPrimaries becomes applied after being not applied", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessAny + rsc.Spec.VolumeAccess = volumeAccess + + attachTo = []string{"node-1", "node-2"} + }) + + It("adds 
the 2nd Primary only after allowTwoPrimaries is applied on all replicas", func(ctx SpecContext) { + // Initial state: node-1 is Primary, allowTwoPrimaries is not applied. + rvr1 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, rvr1)).To(Succeed()) + orig1 := rvr1.DeepCopy() + rvr1.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false}, + Status: &v1alpha1.DRBDStatus{Role: "Primary"}, + }, + } + Expect(cl.Status().Patch(ctx, rvr1, client.MergeFrom(orig1))).To(Succeed()) + + rvr2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, rvr2)).To(Succeed()) + orig2 := rvr2.DeepCopy() + rvr2.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false}, + Status: &v1alpha1.DRBDStatus{Role: "Secondary"}, + }, + } + Expect(cl.Status().Patch(ctx, rvr2, client.MergeFrom(orig2))).To(Succeed()) + + // Reconcile #1: do not request 2nd Primary yet. + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, gotRVR2)).To(Succeed()) + // Do not allow a request to become Primary on the 2nd node until allowTwoPrimaries is applied. + // Primary can be nil (no request) or false (explicit demotion request); it must not be true. + primaryRequested := false + if gotRVR2.Status.DRBD != nil && + gotRVR2.Status.DRBD.Config != nil && + gotRVR2.Status.DRBD.Config.Primary != nil { + primaryRequested = *gotRVR2.Status.DRBD.Config.Primary + } + Expect(primaryRequested).To(BeFalse()) + + // Simulate allowTwoPrimaries applied by the agent. + rvr1b := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, rvr1b)).To(Succeed()) + orig1b := rvr1b.DeepCopy() + rvr1b.Status.DRBD.Actual.AllowTwoPrimaries = true + Expect(cl.Status().Patch(ctx, rvr1b, client.MergeFrom(orig1b))).To(Succeed()) + + rvr2b := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, rvr2b)).To(Succeed()) + orig2b := rvr2b.DeepCopy() + rvr2b.Status.DRBD.Actual.AllowTwoPrimaries = true + Expect(cl.Status().Patch(ctx, rvr2b, client.MergeFrom(orig2b))).To(Succeed()) + + // Reconcile #2: now the controller may request 2 Primaries. + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR2b := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, gotRVR2b)).To(Succeed()) + Expect(gotRVR2b.Status).NotTo(BeNil()) + Expect(gotRVR2b.Status.DRBD).NotTo(BeNil()) + Expect(gotRVR2b.Status.DRBD.Config).NotTo(BeNil()) + Expect(gotRVR2b.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*gotRVR2b.Status.DRBD.Config.Primary).To(BeTrue()) + }) + }) + + When("allowTwoPrimaries applied on all replicas", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessLocal + rsc.Spec.VolumeAccess = volumeAccess + + attachTo = []string{"node-1", "node-2"} + + // Both replicas are initialized by the agent (status.actualType is set) and already have + // actual.AllowTwoPrimaries=true. 
+ for i := range rvrList.Items { + rvrList.Items[i].Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{ + AllowTwoPrimaries: true, + }, + Status: &v1alpha1.DRBDStatus{ + Role: "Secondary", + }, + }, + } + } + }) + + It("updates primary roles and attachedTo", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + // RVRs on attachTo nodes should be configured as Primary + gotRVRs := &v1alpha1.ReplicatedVolumeReplicaList{} + Expect(cl.List(ctx, gotRVRs)).To(Succeed()) + + for i := range gotRVRs.Items { + rvr := &gotRVRs.Items[i] + if rvr.Spec.ReplicatedVolumeName != rv.Name { + continue + } + _, shouldBePrimary := map[string]struct{}{ + "node-1": {}, + "node-2": {}, + }[rvr.Spec.NodeName] + + if rvr.Status.DRBD == nil || rvr.Status.DRBD.Config == nil { + // if no config present, it must not be primary + Expect(shouldBePrimary).To(BeFalse()) + continue + } + + if shouldBePrimary { + Expect(rvr.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*rvr.Status.DRBD.Config.Primary).To(BeTrue()) + } + } + + // rv.status.actuallyAttachedTo should reflect RVRs with Role=Primary + rvList := &v1alpha1.ReplicatedVolumeList{} + Expect(cl.List(ctx, rvList)).To(Succeed()) + Expect(rvList.Items).To(HaveLen(1)) + gotRV := &rvList.Items[0] + // we don't assert exact content here, just that field is present and length <= 2 + Expect(len(gotRV.Status.ActuallyAttachedTo)).To(BeNumerically("<=", 2)) + }) + }) + + When("a deleting replica exists without actual.allowTwoPrimaries", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessAny + rsc.Spec.VolumeAccess = volumeAccess + + attachTo = []string{"node-1", "node-2"} + }) + + It("does not promote the 2nd Primary until allowTwoPrimaries is applied on all existing replicas (even deleting ones)", func(ctx SpecContext) { + // Desired: two primaries on node-1 and node-2, and allowTwoPrimaries already applied on relevant replicas. + for _, item := range []struct { + name string + role string + }{ + {name: "rvr-df1", role: "Primary"}, + {name: "rvr-df2", role: "Secondary"}, + } { + rvr := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: item.name}, rvr)).To(Succeed()) + orig := rvr.DeepCopy() + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: true}, + Status: &v1alpha1.DRBDStatus{Role: item.role}, + }, + } + Expect(cl.Status().Patch(ctx, rvr, client.MergeFrom(orig))).To(Succeed()) + } + + // A deleting replica without actual.allowTwoPrimaries should be ignored for readiness. 
+ now := metav1.Now() + rvrDeleting := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-deleting", + DeletionTimestamp: &now, + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-3", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false}, + Status: &v1alpha1.DRBDStatus{Role: "Secondary"}, + }, + }, + } + Expect(cl.Create(ctx, rvrDeleting)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, gotRVR2)).To(Succeed()) + // Deleting replica still exists with actual.allowTwoPrimaries=false -> must not request the 2nd Primary. + primaryRequested := false + if gotRVR2.Status.DRBD != nil && + gotRVR2.Status.DRBD.Config != nil && + gotRVR2.Status.DRBD.Config.Primary != nil { + primaryRequested = *gotRVR2.Status.DRBD.Config.Primary + } + Expect(primaryRequested).To(BeFalse()) + }) + }) + + When("an unscheduled replica exists (spec.nodeName is empty)", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessAny + rsc.Spec.VolumeAccess = volumeAccess + }) + + It("does not panic and keeps Attached condition in Unknown/NotInitialized", func(ctx SpecContext) { + rvrUnscheduled := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-unscheduled"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + Expect(cl.Create(ctx, rvrUnscheduled)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvrUnscheduled), got)).To(Succeed()) + Expect(got.Status).NotTo(BeNil()) + cond := meta.FindStatusCondition(got.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionUnknown)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeReplicaCondAttachedReasonAttachingNotInitialized)) + }) + }) + + When("volumeAccess is not Local and TieBreaker replica should become primary", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessAny + rsc.Spec.VolumeAccess = volumeAccess + + attachTo = []string{"node-1"} + + rvrList = v1alpha1.ReplicatedVolumeReplicaList{ + Items: []v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-tb1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeTieBreaker, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{ + AllowTwoPrimaries: false, + }, + Status: &v1alpha1.DRBDStatus{ + Role: "Secondary", + }, + }, + }, + }, + }, + } + }) + + It("converts TieBreaker to Access first, then requests primary=true after actualType becomes Access", func(ctx SpecContext) { + // Reconcile #1: conversion only (the agent must first report actualType=Access). 
+ Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-tb1"}, gotRVR)).To(Succeed()) + + Expect(gotRVR.Spec.Type).To(Equal(v1alpha1.ReplicaTypeAccess)) + + // Simulate the agent updating actualType after conversion (TieBreaker -> Access). + orig := gotRVR.DeepCopy() + gotRVR.Status.ActualType = v1alpha1.ReplicaTypeAccess + if gotRVR.Status.DRBD == nil { + gotRVR.Status.DRBD = &v1alpha1.DRBD{} + } + if gotRVR.Status.DRBD.Actual == nil { + gotRVR.Status.DRBD.Actual = &v1alpha1.DRBDActual{} + } + gotRVR.Status.DRBD.Actual.AllowTwoPrimaries = false + if gotRVR.Status.DRBD.Status == nil { + gotRVR.Status.DRBD.Status = &v1alpha1.DRBDStatus{} + } + gotRVR.Status.DRBD.Status.Role = "Secondary" + Expect(cl.Status().Patch(ctx, gotRVR, client.MergeFrom(orig))).To(Succeed()) + + // Reconcile #2: now primary request is allowed for Access/Diskful actualType. + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-tb1"}, gotRVR2)).To(Succeed()) + Expect(gotRVR2.Status).NotTo(BeNil()) + Expect(gotRVR2.Status.DRBD).NotTo(BeNil()) + Expect(gotRVR2.Status.DRBD.Config).NotTo(BeNil()) + Expect(gotRVR2.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*gotRVR2.Status.DRBD.Config.Primary).To(BeTrue()) + }) + }) + + When("replica on node outside attachTo does not become primary", func() { + BeforeEach(func() { + volumeAccess = v1alpha1.VolumeAccessAny + rsc.Spec.VolumeAccess = volumeAccess + + attachTo = []string{"node-1"} + + rvrList = v1alpha1.ReplicatedVolumeReplicaList{ + Items: []v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-node-1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-node-2", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeAccess, + }, + }, + }, + } + }) + + It("keeps replica on non-attachTo node non-primary", func(ctx SpecContext) { + // Simulate that the agent has already initialized replicas (status.actualType is set), + // otherwise the controller must not request Primary. 
+ for _, item := range []struct {
+ name string
+ actualType v1alpha1.ReplicaType
+ }{
+ {name: "rvr-node-1", actualType: v1alpha1.ReplicaTypeDiskful},
+ {name: "rvr-node-2", actualType: v1alpha1.ReplicaTypeAccess},
+ } {
+ rvr := &v1alpha1.ReplicatedVolumeReplica{}
+ Expect(cl.Get(ctx, client.ObjectKey{Name: item.name}, rvr)).To(Succeed())
+ orig := rvr.DeepCopy()
+ rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{
+ ActualType: item.actualType,
+ DRBD: &v1alpha1.DRBD{
+ Actual: &v1alpha1.DRBDActual{
+ AllowTwoPrimaries: false,
+ },
+ Status: &v1alpha1.DRBDStatus{
+ Role: "Secondary",
+ },
+ },
+ }
+ Expect(cl.Status().Patch(ctx, rvr, client.MergeFrom(orig))).To(Succeed())
+ }
+
+ Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{}))
+
+ gotRVRs := &v1alpha1.ReplicatedVolumeReplicaList{}
+ Expect(cl.List(ctx, gotRVRs)).To(Succeed())
+
+ var rvrNode1, rvrNode2 *v1alpha1.ReplicatedVolumeReplica
+ for i := range gotRVRs.Items {
+ r := &gotRVRs.Items[i]
+ switch r.Name {
+ case "rvr-node-1":
+ rvrNode1 = r
+ case "rvr-node-2":
+ rvrNode2 = r
+ }
+ }
+
+ Expect(rvrNode1).NotTo(BeNil())
+ Expect(rvrNode2).NotTo(BeNil())
+
+ // node-1 must become primary
+ Expect(rvrNode1.Status).NotTo(BeNil())
+ Expect(rvrNode1.Status.DRBD).NotTo(BeNil())
+ Expect(rvrNode1.Status.DRBD.Config).NotTo(BeNil())
+ Expect(rvrNode1.Status.DRBD.Config.Primary).NotTo(BeNil())
+ Expect(*rvrNode1.Status.DRBD.Config.Primary).To(BeTrue())
+
+ // node-2 must not become primary
+ primaryRequested := false
+ if rvrNode2.Status.DRBD != nil &&
+ rvrNode2.Status.DRBD.Config != nil &&
+ rvrNode2.Status.DRBD.Config.Primary != nil {
+ primaryRequested = *rvrNode2.Status.DRBD.Config.Primary
+ }
+ Expect(primaryRequested).To(BeFalse())
+ })
+ })
+
+ When("switching Primary node in single-primary mode", func() {
+ BeforeEach(func() {
+ volumeAccess = v1alpha1.VolumeAccessAny
+ rsc.Spec.VolumeAccess = volumeAccess
+
+ // Only node-2 is desired now (RVA set), but node-1 is still Primary at the moment.
+ attachTo = []string{"node-2"}
+ })
+
+ It("demotes the old Primary first and promotes the new one only after actual Primary becomes empty", func(ctx SpecContext) {
+ // node-1 is Primary right now.
+ rvr1 := &v1alpha1.ReplicatedVolumeReplica{}
+ Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, rvr1)).To(Succeed())
+ orig1 := rvr1.DeepCopy()
+ rvr1.Status = v1alpha1.ReplicatedVolumeReplicaStatus{
+ ActualType: v1alpha1.ReplicaTypeDiskful,
+ DRBD: &v1alpha1.DRBD{
+ Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false},
+ Status: &v1alpha1.DRBDStatus{Role: "Primary"},
+ },
+ }
+ Expect(cl.Status().Patch(ctx, rvr1, client.MergeFrom(orig1))).To(Succeed())
+
+ rvr2 := &v1alpha1.ReplicatedVolumeReplica{}
+ Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, rvr2)).To(Succeed())
+ orig2 := rvr2.DeepCopy()
+ rvr2.Status = v1alpha1.ReplicatedVolumeReplicaStatus{
+ ActualType: v1alpha1.ReplicaTypeDiskful,
+ DRBD: &v1alpha1.DRBD{
+ Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false},
+ Status: &v1alpha1.DRBDStatus{Role: "Secondary"},
+ },
+ }
+ Expect(cl.Status().Patch(ctx, rvr2, client.MergeFrom(orig2))).To(Succeed())
+
+ // Reconcile #1: request demotion only (no new Primary while old one exists).
+ Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR1 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, gotRVR1)).To(Succeed()) + Expect(gotRVR1.Status).NotTo(BeNil()) + Expect(gotRVR1.Status.DRBD).NotTo(BeNil()) + Expect(gotRVR1.Status.DRBD.Config).NotTo(BeNil()) + Expect(gotRVR1.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*gotRVR1.Status.DRBD.Config.Primary).To(BeFalse()) + + gotRVR2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, gotRVR2)).To(Succeed()) + primaryRequested := false + if gotRVR2.Status.DRBD != nil && + gotRVR2.Status.DRBD.Config != nil && + gotRVR2.Status.DRBD.Config.Primary != nil { + primaryRequested = *gotRVR2.Status.DRBD.Config.Primary + } + Expect(primaryRequested).To(BeFalse()) + + // Simulate the agent demoting node-1: no actual Primary remains. + gotRVR1b := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df1"}, gotRVR1b)).To(Succeed()) + orig1b := gotRVR1b.DeepCopy() + gotRVR1b.Status.DRBD.Status.Role = "Secondary" + Expect(cl.Status().Patch(ctx, gotRVR1b, client.MergeFrom(orig1b))).To(Succeed()) + + // Reconcile #2: now we can promote the new desired Primary (node-2). + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVR2b := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-df2"}, gotRVR2b)).To(Succeed()) + Expect(gotRVR2b.Status).NotTo(BeNil()) + Expect(gotRVR2b.Status.DRBD).NotTo(BeNil()) + Expect(gotRVR2b.Status.DRBD.Config).NotTo(BeNil()) + Expect(gotRVR2b.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*gotRVR2b.Status.DRBD.Config.Primary).To(BeTrue()) + }) + }) + + When("switching two Primaries to two other nodes (2 -> 2 transition)", func() { + BeforeEach(func() { + volumeAccess = "Remote" + rsc.Spec.VolumeAccess = volumeAccess + + // Desired attachments are now node-3 and node-4. + attachTo = []string{"node-3", "node-4"} + + rvrList = v1alpha1.ReplicatedVolumeReplicaList{ + Items: []v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-n1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-n2"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-n3"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-3", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-n4"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-4", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + }, + } + }) + + It("first requests demotion of old Primaries, then promotes new Primaries as slots become available", func(ctx SpecContext) { + // Current reality: node-1 and node-2 are Primary, allowTwoPrimaries is already applied. + // Patch statuses in a separate loop with explicit objects to keep it readable. 
+ for _, item := range []struct { + name string + role string + }{ + {name: "rvr-n1", role: "Primary"}, + {name: "rvr-n2", role: "Primary"}, + {name: "rvr-n3", role: "Secondary"}, + {name: "rvr-n4", role: "Secondary"}, + } { + rvr := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: item.name}, rvr)).To(Succeed()) + orig := rvr.DeepCopy() + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: true}, + Status: &v1alpha1.DRBDStatus{Role: item.role}, + }, + } + Expect(cl.Status().Patch(ctx, rvr, client.MergeFrom(orig))).To(Succeed()) + } + + // Reconcile #1: desiredPrimaryNodes must become empty first (demote-only phase). + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + for _, name := range []string{"rvr-n1", "rvr-n2"} { + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: name}, got)).To(Succeed()) + Expect(got.Status).NotTo(BeNil()) + Expect(got.Status.DRBD).NotTo(BeNil()) + Expect(got.Status.DRBD.Config).NotTo(BeNil()) + Expect(got.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*got.Status.DRBD.Config.Primary).To(BeFalse()) + } + for _, name := range []string{"rvr-n3", "rvr-n4"} { + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: name}, got)).To(Succeed()) + if got.Status.DRBD != nil && + got.Status.DRBD.Config != nil && + got.Status.DRBD.Config.Primary != nil { + Expect(*got.Status.DRBD.Config.Primary).To(BeFalse()) + } + } + + // Simulate agent demotion completing: node-1 and node-2 are no longer Primary. + for _, name := range []string{"rvr-n1", "rvr-n2"} { + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: name}, got)).To(Succeed()) + orig := got.DeepCopy() + got.Status.DRBD.Status.Role = "Secondary" + Expect(cl.Status().Patch(ctx, got, client.MergeFrom(orig))).To(Succeed()) + } + + // Reconcile #2: with two free slots, promote both desired nodes (node-3 and node-4). + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + for _, name := range []string{"rvr-n3", "rvr-n4"} { + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: name}, got)).To(Succeed()) + Expect(got.Status).NotTo(BeNil()) + Expect(got.Status.DRBD).NotTo(BeNil()) + Expect(got.Status.DRBD.Config).NotTo(BeNil()) + Expect(got.Status.DRBD.Config.Primary).NotTo(BeNil()) + Expect(*got.Status.DRBD.Config.Primary).To(BeTrue()) + } + }) + }) + + When("Local access but replica on attachTo node is Access", func() { + BeforeEach(func() { + volumeAccess = "Local" + rsc.Spec.VolumeAccess = volumeAccess + }) + + When("replica type is set via spec.type", func() { + BeforeEach(func() { + // Make replica on node-2 Access instead of Diskful (via spec). 
+ rvrList.Items[1].Spec.Type = v1alpha1.ReplicaTypeAccess + }) + + It("keeps desiredAttachTo (does not detach an already desired node) and keeps RVA Pending with LocalityNotSatisfied", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied)) + }) + }) + + When("replica type is set via status.actualType", func() { + BeforeEach(func() { + // Keep spec.type Diskful, but mark replica on node-2 as actually Access (via status). + rvrList.Items[1].Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeAccess, + } + }) + + It("keeps desiredAttachTo (does not detach an already desired node) and keeps RVA Pending with LocalityNotSatisfied", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied)) + }) + }) + }) + + When("Local access but replica on attachTo node is TieBreaker", func() { + BeforeEach(func() { + volumeAccess = "Local" + rsc.Spec.VolumeAccess = volumeAccess + }) + + When("replica type is set via spec.type", func() { + BeforeEach(func() { + // Make replica on node-2 TieBreaker instead of Diskful (via spec). 
+ rvrList.Items[1].Spec.Type = v1alpha1.ReplicaTypeTieBreaker
+ })
+
+ It("keeps desiredAttachTo (does not detach an already desired node) and keeps RVA Pending with LocalityNotSatisfied", func(ctx SpecContext) {
+ Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{}))
+
+ gotRV := &v1alpha1.ReplicatedVolume{}
+ Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed())
+ Expect(gotRV.Status).NotTo(BeNil())
+ Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"}))
+
+ gotRVA := &v1alpha1.ReplicatedVolumeAttachment{}
+ Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA)).To(Succeed())
+ Expect(gotRVA.Status).NotTo(BeNil())
+ Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending))
+ cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType)
+ Expect(cond).NotTo(BeNil())
+ Expect(cond.Status).To(Equal(metav1.ConditionFalse))
+ Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied))
+ })
+ })
+
+ When("replica type is set via status.actualType", func() {
+ BeforeEach(func() {
+ // Keep spec.type Diskful, but mark replica on node-2 as actually TieBreaker (via status).
+ rvrList.Items[1].Status = v1alpha1.ReplicatedVolumeReplicaStatus{
+ ActualType: v1alpha1.ReplicaTypeTieBreaker,
+ }
+ })
+
+ It("keeps desiredAttachTo (does not detach an already desired node) and keeps RVA Pending with LocalityNotSatisfied", func(ctx SpecContext) {
+ Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{}))
+
+ gotRV := &v1alpha1.ReplicatedVolume{}
+ Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed())
+ Expect(gotRV.Status).NotTo(BeNil())
+ Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"}))
+
+ gotRVA := &v1alpha1.ReplicatedVolumeAttachment{}
+ Expect(cl.Get(ctx, client.ObjectKey{Name: "rva-1-node-2"}, gotRVA)).To(Succeed())
+ Expect(gotRVA.Status).NotTo(BeNil())
+ Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending))
+ cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType)
+ Expect(cond).NotTo(BeNil())
+ Expect(cond.Status).To(Equal(metav1.ConditionFalse))
+ Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied))
+ })
+ })
+ })
+
+ When("attachTo shrinks to a single node", func() {
+ BeforeEach(func() {
+ volumeAccess = "Local"
+ rsc.Spec.VolumeAccess = volumeAccess
+
+ attachTo = []string{"node-1"}
+
+ // simulate a situation where allowTwoPrimaries was already enabled previously
+ rv.Status.DRBD = &v1alpha1.DRBDResourceDetails{
+ Config: &v1alpha1.DRBDResourceConfig{
+ AllowTwoPrimaries: true,
+ },
+ }
+
+ for i := range rvrList.Items {
+ rvrList.Items[i].Status = v1alpha1.ReplicatedVolumeReplicaStatus{
+ DRBD: &v1alpha1.DRBD{
+ Actual: &v1alpha1.DRBDActual{
+ AllowTwoPrimaries: true,
+ },
+ },
+ }
+ }
+ })
+
+ It("sets allowTwoPrimaries=false when less than two nodes in attachTo", func(ctx SpecContext) {
+ Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{}))
+
+ got := &v1alpha1.ReplicatedVolume{}
+ Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), got)).To(Succeed())
+ Expect(got.Status).NotTo(BeNil())
+ Expect(got.Status.DRBD).NotTo(BeNil())
+ Expect(got.Status.DRBD.Config).NotTo(BeNil()) + Expect(got.Status.DRBD.Config.AllowTwoPrimaries).To(BeFalse()) + }) + }) + + When("replicas already have Primary role set in status", func() { + BeforeEach(func() { + volumeAccess = "Remote" + rsc.Spec.VolumeAccess = volumeAccess + + attachTo = []string{"node-1", "node-2"} + + for i := range rvrList.Items { + role := "Secondary" + if rvrList.Items[i].Spec.NodeName == "node-1" { + role = "Primary" + } + rvrList.Items[i].Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{ + AllowTwoPrimaries: true, + }, + Status: &v1alpha1.DRBDStatus{ + Role: role, + }, + }, + } + } + }) + + It("recomputes attachedTo from replicas with Primary role", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + rvList := &v1alpha1.ReplicatedVolumeList{} + Expect(cl.List(ctx, rvList)).To(Succeed()) + Expect(rvList.Items).To(HaveLen(1)) + gotRV := &rvList.Items[0] + + Expect(gotRV.Status.ActuallyAttachedTo).To(ConsistOf("node-1")) + }) + }) + + }) + + When("RVA-driven attachTo and RVA statuses", func() { + BeforeEach(func() { + rv.Status = v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }, + }, + } + // start with empty desiredAttachTo; controller will derive it from RVA set + rv.Status.DesiredAttachTo = nil + + rsc := v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc1", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: "Remote", + }, + } + builder.WithObjects(&rsc) + }) + + It("sets Detaching + Ready=True when deleting RVA targets a node that is still actually attached", func(ctx SpecContext) { + now := metav1.Now() + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-detaching", + DeletionTimestamp: &now, + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-primary-detaching", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: "Primary", + }, + }, + }, + } + localRV := rv + localRSC := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: "Remote", + }, + } + localCl := testhelpers.WithRVRByReplicatedVolumeNameIndex(testhelpers.WithRVAByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme))). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}). + WithObjects(&localRV, localRSC, rva, rvr). 
+ Build() + localRec := rvattachcontroller.NewReconciler(localCl, logr.New(log.NullLogSink{})) + + Expect(localRec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&localRV)})).To(Equal(reconcile.Result{})) + + got := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(localCl.Get(ctx, client.ObjectKeyFromObject(rva), got)).To(Succeed()) + Expect(got.Status).NotTo(BeNil()) + Expect(got.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseDetaching)) + cond := meta.FindStatusCondition(got.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionTrue)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonAttached)) + }) + + It("sets Attaching + SettingPrimary when attachment is allowed and controller is ready to request Primary", func(ctx SpecContext) { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-setting-primary", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-secondary-setting-primary", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Actual: &v1alpha1.DRBDActual{AllowTwoPrimaries: false}, + Status: &v1alpha1.DRBDStatus{Role: "Secondary"}, + }, + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + got := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), got)).To(Succeed()) + Expect(got.Status).NotTo(BeNil()) + Expect(got.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttaching)) + cond := meta.FindStatusCondition(got.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonSettingPrimary)) + }) + + It("does not extend desiredAttachTo from RVA set when RV has no controller finalizer", func(ctx SpecContext) { + // Ensure RV has no controller finalizer: this must disable adding new nodes into desiredAttachTo. + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed()) + origRV := gotRV.DeepCopy() + gotRV.Finalizers = nil + Expect(cl.Patch(ctx, gotRV, client.MergeFrom(origRV))).To(Succeed()) + + // Pre-initialize desiredAttachTo with node-1 only. 
+ gotRV2 := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV2)).To(Succeed()) + origRV2 := gotRV2.DeepCopy() + gotRV2.Status.DesiredAttachTo = []string{"node-1"} + Expect(cl.Status().Patch(ctx, gotRV2, client.MergeFrom(origRV2))).To(Succeed()) + + rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{Name: "rva-nofinalizer-1"}, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{Name: "rva-nofinalizer-2"}, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + }, + } + Expect(cl.Create(ctx, rva1)).To(Succeed()) + Expect(cl.Create(ctx, rva2)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV3 := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV3)).To(Succeed()) + Expect(gotRV3.Status).NotTo(BeNil()) + Expect(gotRV3.Status.DesiredAttachTo).To(Equal([]string{"node-1"})) + + gotRVA2 := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva2), gotRVA2)).To(Succeed()) + Expect(gotRVA2.Status).NotTo(BeNil()) + Expect(gotRVA2.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA2.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForActiveAttachmentsToDetach)) + }) + + It("does not add a node into desiredAttachTo when its replica is deleting", func(ctx SpecContext) { + rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{Name: "rva-delrep-1"}, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{Name: "rva-delrep-2"}, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + }, + } + now := metav1.Now() + rvr1 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-delrep-1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + }, + } + rvr2 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-delrep-2", + DeletionTimestamp: &now, + Finalizers: []string{"test-finalizer"}, + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + }, + } + localRV := rv + localRSC := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: "Remote", + }, + } + localCl := testhelpers.WithRVRByReplicatedVolumeNameIndex(testhelpers.WithRVAByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme))). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). 
+ WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}). + WithObjects(&localRV, localRSC, rva1, rva2, rvr1, rvr2). + Build() + localRec := rvattachcontroller.NewReconciler(localCl, logr.New(log.NullLogSink{})) + + Expect(localRec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&localRV)})).To(Equal(reconcile.Result{})) + + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(localCl.Get(ctx, client.ObjectKeyFromObject(&localRV), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1"})) + + gotRVA2 := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(localCl.Get(ctx, client.ObjectKeyFromObject(rva2), gotRVA2)).To(Succeed()) + Expect(gotRVA2.Status).NotTo(BeNil()) + Expect(gotRVA2.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA2.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForActiveAttachmentsToDetach)) + }) + + It("derives desiredAttachTo FIFO from active RVAs, unique per node, ignoring deleting RVAs", func(ctx SpecContext) { + now := time.Unix(3000, 0) + delNow := metav1.NewTime(now) + rvaDeleting := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-del-old", + CreationTimestamp: metav1.NewTime(now.Add(-10 * time.Second)), + DeletionTimestamp: &delNow, + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-3", + }, + } + rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-node-1-old", + CreationTimestamp: metav1.NewTime(now), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva1dup := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-node-1-dup", + CreationTimestamp: metav1.NewTime(now.Add(1 * time.Second)), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-node-2", + CreationTimestamp: metav1.NewTime(now.Add(2 * time.Second)), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + }, + } + // Fake client may mutate metadata on Create(); seed a dedicated client instead. + localRV := rv + localRSC := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: "Remote", + }, + } + localCl := testhelpers.WithRVRByReplicatedVolumeNameIndex(testhelpers.WithRVAByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme))). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}). + WithObjects(&localRV, localRSC, rvaDeleting, rva1, rva1dup, rva2). 
+ Build() + localRec := rvattachcontroller.NewReconciler(localCl, logr.New(log.NullLogSink{})) + + Expect(localRec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&localRV)})).To(Equal(reconcile.Result{})) + + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(localCl.Get(ctx, client.ObjectKeyFromObject(&localRV), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"})) + }) + + It("limits active attachments to two oldest RVAs and sets Pending/Ready=False for the rest", func(ctx SpecContext) { + now := time.Unix(1000, 0) + rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-1", + CreationTimestamp: metav1.NewTime(now), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-2", + CreationTimestamp: metav1.NewTime(now.Add(1 * time.Second)), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + }, + } + rva3 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-3", + CreationTimestamp: metav1.NewTime(now.Add(2 * time.Second)), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-3", + }, + } + Expect(cl.Create(ctx, rva1)).To(Succeed()) + Expect(cl.Create(ctx, rva2)).To(Succeed()) + Expect(cl.Create(ctx, rva3)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed()) + Expect(gotRV.Status).NotTo(BeNil()) + Expect(gotRV.Status.DesiredAttachTo).To(Equal([]string{"node-1", "node-2"})) + + gotRVA3 := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva3), gotRVA3)).To(Succeed()) + Expect(gotRVA3.Status).NotTo(BeNil()) + Expect(gotRVA3.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(gotRVA3.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForActiveAttachmentsToDetach)) + }) + + It("keeps nodes already present in rv.status.desiredAttachTo first (if such RVAs exist), then fills remaining slots", func(ctx SpecContext) { + // Pre-set desiredAttachTo with a preferred order. Controller should keep these nodes + // if there are corresponding RVAs, regardless of the FIFO order of other RVAs. + gotRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV)).To(Succeed()) + original := gotRV.DeepCopy() + gotRV.Status.DesiredAttachTo = []string{"node-2", "node-1"} + Expect(cl.Status().Patch(ctx, gotRV, client.MergeFrom(original))).To(Succeed()) + + now := time.Unix(2000, 0) + // Make node-3 RVA older than node-1 to ensure FIFO would pick it if not for attachTo preference. 
+ rva3 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-3", + CreationTimestamp: metav1.NewTime(now), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-3", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-2", + CreationTimestamp: metav1.NewTime(now.Add(1 * time.Second)), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + }, + } + rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-1", + CreationTimestamp: metav1.NewTime(now.Add(2 * time.Second)), + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + Expect(cl.Create(ctx, rva3)).To(Succeed()) + Expect(cl.Create(ctx, rva2)).To(Succeed()) + Expect(cl.Create(ctx, rva1)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRV2 := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), gotRV2)).To(Succeed()) + Expect(gotRV2.Status).NotTo(BeNil()) + Expect(gotRV2.Status.DesiredAttachTo).To(Equal([]string{"node-2", "node-1"})) + }) + + It("sets Attaching + WaitingForReplica when active RVA has no replica yet", func(ctx SpecContext) { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-wait-replica", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttaching)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplica)) + }) + + It("sets Attaching + ConvertingTieBreakerToAccess when active RVA targets a TieBreaker replica", func(ctx SpecContext) { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-tb", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-tb-1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeTieBreaker, + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttaching)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, 
v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonConvertingTieBreakerToAccess)) + }) + + It("sets Attached=True when RV reports the node in status.actuallyAttachedTo", func(ctx SpecContext) { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-attached", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rolePrimary := "Primary" + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-primary-1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: rolePrimary, + }, + }, + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + cond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionTrue)) + }) + + It("sets Ready=True when Attached=True and replica Ready=True", func(ctx SpecContext) { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-ready-true", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rolePrimary := "Primary" + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-io-ready-true", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: rolePrimary, + }, + }, + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeReplicaCondReadyType, + Status: metav1.ConditionTrue, + Reason: v1alpha1.ReplicatedVolumeReplicaCondReadyReasonReady, + Message: "replica is io ready", + }}, + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + + attachedCond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(attachedCond).NotTo(BeNil()) + Expect(attachedCond.Status).To(Equal(metav1.ConditionTrue)) + Expect(attachedCond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonAttached)) + + replicaReadyCond := 
meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondReplicaReadyType) + Expect(replicaReadyCond).NotTo(BeNil()) + Expect(replicaReadyCond.Status).To(Equal(metav1.ConditionTrue)) + Expect(replicaReadyCond.Reason).To(Equal(v1alpha1.ReplicatedVolumeReplicaCondReadyReasonReady)) + + readyCond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionTrue)) + Expect(readyCond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonReady)) + }) + + It("sets Ready=False/ReplicaNotReady when Attached=True but replica Ready=False", func(ctx SpecContext) { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-ready-false", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rolePrimary := "Primary" + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-io-ready-false", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: rolePrimary, + }, + }, + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeReplicaCondReadyType, + Status: metav1.ConditionFalse, + Reason: "OutOfSync", + Message: "replica is not in sync", + }}, + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotRVA := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), gotRVA)).To(Succeed()) + Expect(gotRVA.Status).NotTo(BeNil()) + Expect(gotRVA.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + + replicaReadyCond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondReplicaReadyType) + Expect(replicaReadyCond).NotTo(BeNil()) + Expect(replicaReadyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(replicaReadyCond.Reason).To(Equal("OutOfSync")) + + readyCond := meta.FindStatusCondition(gotRVA.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondReadyType) + Expect(readyCond).NotTo(BeNil()) + Expect(readyCond.Status).To(Equal(metav1.ConditionFalse)) + Expect(readyCond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonReplicaNotReady)) + }) + + It("marks all RVAs for the same attached node as successful (Attached=True)", func(ctx SpecContext) { + // Create 3 RVA objects for the same node. 
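+ // All three should be reported as Attached=True once node-1 is observed as actually attached below.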
+ rva1 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-attached-1", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva2 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-attached-2", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rva3 := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-attached-3", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + Expect(cl.Create(ctx, rva1)).To(Succeed()) + Expect(cl.Create(ctx, rva2)).To(Succeed()) + Expect(cl.Create(ctx, rva3)).To(Succeed()) + + // Also create a replica on that node and mark it Primary so the controller sees actual attachment. + rolePrimary := "Primary" + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-df-1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: rolePrimary, + }, + }, + }, + } + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + for _, obj := range []*v1alpha1.ReplicatedVolumeAttachment{rva1, rva2, rva3} { + got := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(obj), got)).To(Succeed()) + Expect(got.Status).NotTo(BeNil()) + Expect(got.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + cond := meta.FindStatusCondition(got.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionTrue)) + } + }) + + It("releases finalizer for deleting duplicate RVA on the same node (does not wait for actual detach)", func(ctx SpecContext) { + now := metav1.Now() + rvaAlive := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-alive", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + rvaDeleting := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-deleting", + DeletionTimestamp: &now, + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + Expect(cl.Create(ctx, rvaAlive)).To(Succeed()) + Expect(cl.Create(ctx, rvaDeleting)).To(Succeed()) + + // Mark node-1 as attached. 
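+ // A replica on node-1 with DRBD role Primary is what the controller treats as an actual attachment.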
+ rolePrimary := "Primary" + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-df-1-delcase", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + DRBD: &v1alpha1.DRBD{ + Status: &v1alpha1.DRBDStatus{ + Role: rolePrimary, + }, + }, + }, + } + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + gotAlive := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvaAlive), gotAlive)).To(Succeed()) + Expect(gotAlive.Finalizers).To(ContainElement(v1alpha1.ControllerFinalizer)) + Expect(gotAlive.Status).NotTo(BeNil()) + Expect(gotAlive.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + condAlive := meta.FindStatusCondition(gotAlive.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(condAlive).NotTo(BeNil()) + Expect(condAlive.Status).To(Equal(metav1.ConditionTrue)) + + gotDel := &v1alpha1.ReplicatedVolumeAttachment{} + err := cl.Get(ctx, client.ObjectKeyFromObject(rvaDeleting), gotDel) + if client.IgnoreNotFound(err) == nil { + // After finalizer is released, fake client may delete the object immediately. + return + } + Expect(err).NotTo(HaveOccurred()) + Expect(gotDel.Finalizers).NotTo(ContainElement(v1alpha1.ControllerFinalizer)) + Expect(gotDel.Status).NotTo(BeNil()) + Expect(gotDel.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhaseAttached)) + condDel := meta.FindStatusCondition(gotDel.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(condDel).NotTo(BeNil()) + Expect(condDel.Status).To(Equal(metav1.ConditionTrue)) + }) + }) + + // AttachSucceeded condition on RV is intentionally not used anymore. + + When("patching RVR primary status fails", func() { + BeforeEach(func() { + rv.Status = v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }, + }, + } + + rsc := v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc1", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: "Remote", + }, + } + + rva := v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-primary-1", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + + rvr := v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-primary-1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + + builder.WithObjects(&rsc, &rva, &rvr) + + builder.WithInterceptorFuncs(interceptor.Funcs{ + SubResourcePatch: func(ctx context.Context, cl client.Client, subResourceName string, obj client.Object, patch client.Patch, opts ...client.SubResourcePatchOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedVolumeReplica); ok { + return errExpectedTestError + } + return cl.SubResource(subResourceName).Patch(ctx, obj, patch, opts...) 
+ }, + }) + }) + + It("returns error when updating RVR primary status fails", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).To(MatchError(errExpectedTestError)) + Expect(result).To(Equal(reconcile.Result{})) + }) + }) + + When("Get ReplicatedVolume fails", func() { + BeforeEach(func() { + builder.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedVolume); ok { + return errExpectedTestError + } + return c.Get(ctx, key, obj, opts...) + }, + }) + }) + + It("returns same error", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).To(MatchError(errExpectedTestError)) + Expect(result).To(Equal(reconcile.Result{})) + }) + }) + + When("Get ReplicatedStorageClass fails", func() { + BeforeEach(func() { + rv.Status = v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }, + }, + } + + builder.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedStorageClass); ok { + return errExpectedTestError + } + return c.Get(ctx, key, obj, opts...) + }, + }) + }) + + It("does not error (switches to detach-only)", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + }) + + It("keeps RVA Pending/Ready=False with WaitingForReplicatedVolume when StorageClass cannot be loaded", func(ctx SpecContext) { + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rva-sc-missing", + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + }, + } + Expect(cl.Create(ctx, rva)).To(Succeed()) + + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + got := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rva), got)).To(Succeed()) + Expect(got.Status).NotTo(BeNil()) + Expect(got.Status.Phase).To(Equal(v1alpha1.ReplicatedVolumeAttachmentPhasePending)) + cond := meta.FindStatusCondition(got.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + Expect(cond).NotTo(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonWaitingForReplicatedVolume)) + }) + }) + + When("List ReplicatedVolumeReplica fails", func() { + BeforeEach(func() { + rv.Status = v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{ + { + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }, + }, + } + + rsc := v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc1", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + VolumeAccess: "Local", + }, + } + + builder.WithObjects(&rsc) + + builder.WithInterceptorFuncs(interceptor.Funcs{ + List: func(ctx context.Context, 
c client.WithWatch, list client.ObjectList, opts ...client.ListOption) error { + if _, ok := list.(*v1alpha1.ReplicatedVolumeReplicaList); ok { + return errExpectedTestError + } + return c.List(ctx, list, opts...) + }, + }) + }) + + It("returns same error", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).To(MatchError(errExpectedTestError)) + Expect(result).To(Equal(reconcile.Result{})) + }) + }) + }) +}) diff --git a/images/controller/internal/controllers/rv_controller/controller.go b/images/controller/internal/controllers/rv_controller/controller.go new file mode 100644 index 000000000..f4ea7244a --- /dev/null +++ b/images/controller/internal/controllers/rv_controller/controller.go @@ -0,0 +1,82 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvcontroller + +import ( + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/predicate" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +const ( + // RVControllerName is the controller name for rv_controller. + RVControllerName = "rv_controller" +) + +func BuildController(mgr manager.Manager) error { + cl := mgr.GetClient() + + rec := NewReconciler(cl) + + return builder.ControllerManagedBy(mgr). + Named(RVControllerName). + For( + &v1alpha1.ReplicatedVolume{}, + builder.WithPredicates( + predicate.Funcs{ + UpdateFunc: func(e event.TypedUpdateEvent[client.Object]) bool { + if e.ObjectNew == nil || e.ObjectOld == nil { + return true + } + + // If reconciliation uses status.conditions (or any generation-driven logic), + // react to generation changes for spec-driven updates. + if e.ObjectNew.GetGeneration() != e.ObjectOld.GetGeneration() { + return true + } + + // If RV deletion started, reconcile to execute finalization paths (metadata-only updates don't bump generation). + oldDT := e.ObjectOld.GetDeletionTimestamp() + newDT := e.ObjectNew.GetDeletionTimestamp() + if (oldDT == nil) != (newDT == nil) { + return true + } + + // The controller enforces this label to match spec.replicatedStorageClassName. + // Metadata-only updates don't bump generation, so react to changes of this single label key. + oldLabels := e.ObjectOld.GetLabels() + newLabels := e.ObjectNew.GetLabels() + oldV, oldOK := oldLabels[v1alpha1.ReplicatedStorageClassLabelKey] + newV, newOK := newLabels[v1alpha1.ReplicatedStorageClassLabelKey] + if oldOK != newOK || oldV != newV { + return true + } + + // Ignore pure status updates to avoid reconcile loops. + return false + }, + }, + ), + ). + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). 
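+ // Allow up to 10 ReplicatedVolumes to be reconciled in parallel; controller-runtime still serializes reconciles per object key.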
+ Complete(rec) +} diff --git a/images/controller/internal/controllers/rv_controller/doc.go b/images/controller/internal/controllers/rv_controller/doc.go new file mode 100644 index 000000000..e58633cb1 --- /dev/null +++ b/images/controller/internal/controllers/rv_controller/doc.go @@ -0,0 +1,34 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package rvcontroller implements the rv_controller controller, which manages ReplicatedVolume +// metadata (labels). +// +// # Controller Responsibilities +// +// The controller ensures that the ReplicatedStorageClass label is set on each ReplicatedVolume +// to match spec.replicatedStorageClassName. +// +// # Watched Resources +// +// The controller watches: +// - ReplicatedVolume: To reconcile metadata +// +// # Triggers +// +// The controller reconciles when: +// - RV create/update (idempotent; label set only if missing or mismatched) +package rvcontroller diff --git a/images/controller/internal/controllers/rv_controller/reconciler.go b/images/controller/internal/controllers/rv_controller/reconciler.go new file mode 100644 index 000000000..bb397e6b1 --- /dev/null +++ b/images/controller/internal/controllers/rv_controller/reconciler.go @@ -0,0 +1,86 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvcontroller + +import ( + "context" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/reconciliation/flow" +) + +type Reconciler struct { + cl client.Client +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +func NewReconciler(cl client.Client) *Reconciler { + return &Reconciler{cl: cl} +} + +// Reconcile pattern: Pure orchestration +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + rf := flow.BeginRootReconcile(ctx) + + // Get the ReplicatedVolume + rv := &v1alpha1.ReplicatedVolume{} + if err := r.cl.Get(rf.Ctx(), req.NamespacedName, rv); err != nil { + if client.IgnoreNotFound(err) != nil { + return rf.Failf(err, "getting ReplicatedVolume").ToCtrl() + } + + // NotFound: object deleted, nothing to do. 
+ return rf.Done().ToCtrl() + } + + // Reconcile main resource + outcome := r.reconcileMain(rf.Ctx(), rv) + if outcome.ShouldReturn() { + return outcome.ToCtrl() + } + + return rf.Done().ToCtrl() +} + +// Reconcile pattern: Conditional desired evaluation +func (r *Reconciler) reconcileMain(ctx context.Context, rv *v1alpha1.ReplicatedVolume) (outcome flow.ReconcileOutcome) { + rf := flow.BeginReconcile(ctx, "main") + defer rf.OnEnd(&outcome) + + if rv == nil { + return rf.Continue() + } + + if obju.HasLabelValue(rv, v1alpha1.ReplicatedStorageClassLabelKey, rv.Spec.ReplicatedStorageClassName) { + return rf.Continue() + } + + base := rv.DeepCopy() + + obju.SetLabel(rv, v1alpha1.ReplicatedStorageClassLabelKey, rv.Spec.ReplicatedStorageClassName) + + if err := r.cl.Patch(rf.Ctx(), rv, client.MergeFrom(base)); err != nil { + return rf.Fail(err).Enrichf("patching ReplicatedVolume") + } + + return rf.Continue() +} diff --git a/images/controller/internal/controllers/rv_controller/reconciler_test.go b/images/controller/internal/controllers/rv_controller/reconciler_test.go new file mode 100644 index 000000000..e7919cc76 --- /dev/null +++ b/images/controller/internal/controllers/rv_controller/reconciler_test.go @@ -0,0 +1,288 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvcontroller_test + +import ( + "context" + "errors" + "reflect" + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/client/interceptor" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + rvcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rv_controller" +) + +func TestRvControllerReconciler(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "rv_controller Reconciler Suite") +} + +func RequestFor(object client.Object) reconcile.Request { + return reconcile.Request{NamespacedName: client.ObjectKeyFromObject(object)} +} + +func Requeue() OmegaMatcher { + return Not(Equal(reconcile.Result{})) +} + +func InterceptGet[T client.Object](intercept func(T) error) interceptor.Funcs { + var zero T + tType := reflect.TypeOf(zero) + if tType == nil { + panic("cannot determine type") + } + + return interceptor.Funcs{ + Get: func(ctx context.Context, client client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if reflect.TypeOf(obj).AssignableTo(tType) { + return intercept(obj.(T)) + } + return client.Get(ctx, key, obj, opts...) 
+ }, + List: func(ctx context.Context, client client.WithWatch, list client.ObjectList, opts ...client.ListOption) error { + if reflect.TypeOf(list).Elem().Elem().AssignableTo(tType) { + items := reflect.ValueOf(list).Elem().FieldByName("Items") + if items.IsValid() && items.Kind() == reflect.Slice { + for i := 0; i < items.Len(); i++ { + item := items.Index(i).Addr().Interface().(T) + if err := intercept(item); err != nil { + return err + } + } + } + } + return client.List(ctx, list, opts...) + }, + } +} + +var _ = Describe("Reconciler", func() { + var ( + clientBuilder *fake.ClientBuilder + scheme *runtime.Scheme + ) + var ( + cl client.WithWatch + rec *rvcontroller.Reconciler + ) + + BeforeEach(func() { + scheme = runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + clientBuilder = fake.NewClientBuilder(). + WithScheme(scheme). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}) + cl = nil + rec = nil + }) + + JustBeforeEach(func() { + cl = clientBuilder.Build() + rec = rvcontroller.NewReconciler(cl) + }) + + Describe("Reconcile (metadata)", func() { + type tc struct { + name string + objects []client.Object + reqName string + wantLabels map[string]string + } + + DescribeTable( + "updates labels", + func(ctx SpecContext, tt tc) { + localCl := fake.NewClientBuilder(). + WithScheme(scheme). + WithStatusSubresource(&v1alpha1.ReplicatedVolume{}). + WithObjects(tt.objects...). + Build() + localRec := rvcontroller.NewReconciler(localCl) + + _, err := localRec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKey{Name: tt.reqName}}) + Expect(err).NotTo(HaveOccurred()) + + rv := &v1alpha1.ReplicatedVolume{} + Expect(localCl.Get(ctx, client.ObjectKey{Name: tt.reqName}, rv)).To(Succeed()) + + for k, want := range tt.wantLabels { + Expect(rv.Labels).To(HaveKeyWithValue(k, want)) + } + }, + Entry("adds label when rsc specified", tc{ + name: "adds label when rsc specified", + objects: []client.Object{ + &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "rv-with-rsc", ResourceVersion: "1"}, + Spec: v1alpha1.ReplicatedVolumeSpec{ReplicatedStorageClassName: "my-storage-class"}, + }, + }, + reqName: "rv-with-rsc", + wantLabels: map[string]string{ + v1alpha1.ReplicatedStorageClassLabelKey: "my-storage-class", + }, + }), + Entry("does not change label if already set correctly", tc{ + name: "does not change label if already set correctly", + objects: []client.Object{ + &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-with-label", + ResourceVersion: "1", + Labels: map[string]string{ + v1alpha1.ReplicatedStorageClassLabelKey: "existing-class", + }, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "existing-class", + }, + }, + }, + reqName: "rv-with-label", + wantLabels: map[string]string{ + v1alpha1.ReplicatedStorageClassLabelKey: "existing-class", + }, + }), + ) + }) + + It("returns no error when ReplicatedVolume does not exist", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(&v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{Name: "non-existent"}, + }))).ToNot(Requeue(), "should ignore NotFound errors") + }) + + When("RV created", func() { + var rv *v1alpha1.ReplicatedVolume + + BeforeEach(func() { + rv = &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "volume-1", + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("1Gi"), + ReplicatedStorageClassName: "my-storage-class", + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + if rv != nil 
{ + Expect(cl.Create(ctx, rv)).To(Succeed(), "should create ReplicatedVolume") + } + }) + + When("Get fails with non-NotFound error", func() { + var testError error + + BeforeEach(func() { + testError = errors.New("internal server error") + clientBuilder = clientBuilder.WithInterceptorFuncs( + InterceptGet(func(_ *v1alpha1.ReplicatedVolume) error { + return testError + }), + ) + }) + + It("should fail if getting ReplicatedVolume failed with non-NotFound error", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, RequestFor(rv)) + Expect(err).To(HaveOccurred(), "should return error when Get fails") + Expect(errors.Is(err, testError)).To(BeTrue(), "returned error should wrap the original Get error") + }) + }) + + It("sets label on RV", func(ctx SpecContext) { + By("Reconciling ReplicatedVolume") + result, err := rec.Reconcile(ctx, RequestFor(rv)) + Expect(err).NotTo(HaveOccurred(), "reconciliation should succeed") + Expect(result).ToNot(Requeue(), "should not requeue after successful reconciliation") + + By("Verifying label was set") + updatedRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rv), updatedRV)).To(Succeed(), "should get updated ReplicatedVolume") + Expect(updatedRV.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedStorageClassLabelKey, "my-storage-class")) + }) + + When("label already set correctly", func() { + BeforeEach(func() { + rv.Labels = map[string]string{ + v1alpha1.ReplicatedStorageClassLabelKey: "my-storage-class", + } + }) + + It("is idempotent and does not modify RV", func(ctx SpecContext) { + By("Reconciling multiple times") + for i := 0; i < 3; i++ { + result, err := rec.Reconcile(ctx, RequestFor(rv)) + Expect(err).NotTo(HaveOccurred(), "reconciliation should succeed") + Expect(result).ToNot(Requeue(), "should not requeue") + } + + By("Verifying label remains unchanged") + updatedRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rv), updatedRV)).To(Succeed()) + Expect(updatedRV.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedStorageClassLabelKey, "my-storage-class")) + }) + }) + }) + + When("Patch fails with non-NotFound error", func() { + var rv *v1alpha1.ReplicatedVolume + var testError error + + BeforeEach(func() { + rv = &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "volume-patch-1", + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "my-storage-class", + }, + } + testError = errors.New("failed to patch") + clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Patch: func(ctx context.Context, cl client.WithWatch, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedVolume); ok { + return testError + } + return cl.Patch(ctx, obj, patch, opts...) 
+ }, + }) + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Create(ctx, rv)).To(Succeed(), "should create ReplicatedVolume") + }) + + It("should fail if patching ReplicatedVolume failed with non-NotFound error", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, RequestFor(rv)) + Expect(err).To(HaveOccurred(), "should return error when Patch fails") + Expect(errors.Is(err, testError)).To(BeTrue(), "returned error should wrap the original Patch error") + }) + }) +}) diff --git a/images/controller/internal/controllers/rv_delete_propagation/const.go b/images/controller/internal/controllers/rv_delete_propagation/const.go new file mode 100644 index 000000000..1184165b1 --- /dev/null +++ b/images/controller/internal/controllers/rv_delete_propagation/const.go @@ -0,0 +1,19 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvdeletepropagation + +var ControllerName = "rv_delete_propagation_controller" diff --git a/images/controller/internal/controllers/rv_delete_propagation/controller.go b/images/controller/internal/controllers/rv_delete_propagation/controller.go new file mode 100644 index 000000000..d4e60d35f --- /dev/null +++ b/images/controller/internal/controllers/rv_delete_propagation/controller.go @@ -0,0 +1,40 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvdeletepropagation + +import ( + "log/slog" + + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/manager" + + u "github.com/deckhouse/sds-common-lib/utils" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +func BuildController(mgr manager.Manager) error { + log := slog.Default().With("name", ControllerName) + + rec := NewReconciler(mgr.GetClient(), log) + + return u.LogError( + log, + builder.ControllerManagedBy(mgr). + Named(ControllerName). + For(&v1alpha1.ReplicatedVolume{}). + Complete(rec)) +} diff --git a/images/controller/internal/controllers/rv_delete_propagation/doc.go b/images/controller/internal/controllers/rv_delete_propagation/doc.go new file mode 100644 index 000000000..2c9708e8e --- /dev/null +++ b/images/controller/internal/controllers/rv_delete_propagation/doc.go @@ -0,0 +1,47 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package rvdeletepropagation implements the rv-delete-propagation-controller, +// which propagates deletion from ReplicatedVolume to all its ReplicatedVolumeReplicas. +// +// # Controller Responsibilities +// +// The controller ensures proper cleanup by: +// - Detecting when a ReplicatedVolume has metadata.deletionTimestamp set +// - Triggering deletion of all associated ReplicatedVolumeReplicas +// +// # Watched Resources +// +// The controller watches: +// - ReplicatedVolume: To detect deletion events +// - ReplicatedVolumeReplica: To identify replicas belonging to deleted volumes +// +// # Reconciliation Flow +// +// 1. Check if ReplicatedVolume has metadata.deletionTimestamp set +// 2. List all ReplicatedVolumeReplicas with rvr.spec.replicatedVolumeName matching the RV +// 3. For each RVR without deletionTimestamp: +// - Trigger deletion by calling Delete on the RVR +// +// # Status Updates +// +// This controller does not update any status fields; it only triggers RVR deletions. +// +// # Special Notes +// +// This controller works in conjunction with rv-finalizer-controller, which manages +// the RV finalizer and ensures the RV is not fully deleted until all RVRs are removed. +package rvdeletepropagation diff --git a/images/controller/internal/controllers/rv_delete_propagation/reconciler.go b/images/controller/internal/controllers/rv_delete_propagation/reconciler.go new file mode 100644 index 000000000..f6c9076d3 --- /dev/null +++ b/images/controller/internal/controllers/rv_delete_propagation/reconciler.go @@ -0,0 +1,93 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvdeletepropagation + +import ( + "context" + "fmt" + "log/slog" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +type Reconciler struct { + cl client.Client + log *slog.Logger +} + +var _ reconcile.Reconciler = &Reconciler{} + +func NewReconciler(cl client.Client, log *slog.Logger) *Reconciler { + if log == nil { + log = slog.Default() + } + return &Reconciler{ + cl: cl, + log: log, + } +} + +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + rv := &v1alpha1.ReplicatedVolume{} + if err := r.cl.Get(ctx, req.NamespacedName, rv); err != nil { + if client.IgnoreNotFound(err) == nil { + r.log.Info("ReplicatedVolume not found, probably deleted", "req", req) + return reconcile.Result{}, nil + } + return reconcile.Result{}, fmt.Errorf("getting rv: %w", err) + } + + log := r.log.With("rvName", rv.Name) + + if !linkedRVRsNeedToBeDeleted(rv) { + log.Debug("linked do not need to be deleted") + return reconcile.Result{}, nil + } + + rvrList := &v1alpha1.ReplicatedVolumeReplicaList{} + if err := r.cl.List(ctx, rvrList, client.MatchingFields{ + indexes.IndexFieldRVRByReplicatedVolumeName: rv.Name, + }); err != nil { + return reconcile.Result{}, fmt.Errorf("listing rvrs: %w", err) + } + + for i := range rvrList.Items { + rvr := &rvrList.Items[i] + if rvr.DeletionTimestamp == nil { + if err := r.cl.Delete(ctx, rvr); err != nil { + if client.IgnoreNotFound(err) != nil { + return reconcile.Result{}, fmt.Errorf("deleting rvr: %w", err) + } + log.Debug("rvr already deleted", "rvrName", rvr.Name) + continue + } + + log.Info("deleted rvr", "rvrName", rvr.Name) + } + } + + log.Info("finished rvr deletion") + return reconcile.Result{}, nil +} + +func linkedRVRsNeedToBeDeleted(rv *v1alpha1.ReplicatedVolume) bool { + return rv.DeletionTimestamp != nil +} diff --git a/images/controller/internal/controllers/rv_delete_propagation/reconciler_test.go b/images/controller/internal/controllers/rv_delete_propagation/reconciler_test.go new file mode 100644 index 000000000..1c36d4e4a --- /dev/null +++ b/images/controller/internal/controllers/rv_delete_propagation/reconciler_test.go @@ -0,0 +1,170 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvdeletepropagation_test + +import ( + "log/slog" + "testing" + "time" + + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + rvdeletepropagation "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rv_delete_propagation" + testhelpers "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" +) + +func TestReconciler_Reconcile(t *testing.T) { + scheme := runtime.NewScheme() + if err := v1alpha1.AddToScheme(scheme); err != nil { + t.Fatalf("adding scheme: %v", err) + } + + tests := []struct { + name string // description of this test case + objects []client.Object + req reconcile.Request + want reconcile.Result + wantErr bool + expectDeleted []types.NamespacedName + expectRemaining []types.NamespacedName + }{ + { + name: "skips deletion when rv is active", + objects: []client.Object{ + &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-active", + ResourceVersion: "1", + }, + }, + &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-linked", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-active", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + }, + req: reconcile.Request{NamespacedName: types.NamespacedName{Name: "rv-active"}}, + expectRemaining: []types.NamespacedName{{Name: "rvr-linked"}}, + }, + { + name: "deletes linked rvrs when rv is being removed", + objects: []client.Object{ + &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-deleting", + DeletionTimestamp: func() *metav1.Time { + ts := metav1.NewTime(time.Now()) + return &ts + }(), + Finalizers: []string{"keep-me"}, + ResourceVersion: "1", + }, + }, + &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-linked", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-deleting", + Type: "Diskful", + }, + }, + &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-other", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-other", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-already-deleting", + DeletionTimestamp: func() *metav1.Time { + ts := metav1.NewTime(time.Now()) + return &ts + }(), + Finalizers: []string{"keep-me"}, + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-deleting", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + }, + req: reconcile.Request{NamespacedName: types.NamespacedName{Name: "rv-deleting"}}, + expectDeleted: []types.NamespacedName{{Name: "rvr-linked"}}, + expectRemaining: []types.NamespacedName{ + {Name: "rvr-other"}, + {Name: "rvr-already-deleting"}, + }, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + cl := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithObjects(tt.objects...). 
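+ // WithRVRByReplicatedVolumeNameIndex registers the field index the reconciler relies on for its MatchingFields List.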
+ Build() + + r := rvdeletepropagation.NewReconciler(cl, slog.Default()) + got, gotErr := r.Reconcile(t.Context(), tt.req) + if gotErr != nil { + if !tt.wantErr { + t.Errorf("Reconcile() failed: %v", gotErr) + } + return + } + if tt.wantErr { + t.Fatal("Reconcile() succeeded unexpectedly") + } + if got != tt.want { + t.Errorf("Reconcile() = %v, want %v", got, tt.want) + } + + for _, nn := range tt.expectDeleted { + rvr := &v1alpha1.ReplicatedVolumeReplica{} + err := cl.Get(t.Context(), nn, rvr) + if err == nil { + t.Fatalf("expected rvr %s to be deleted, but it still exists", nn.Name) + } + if !apierrors.IsNotFound(err) { + t.Fatalf("expected not found for rvr %s, got %v", nn.Name, err) + } + } + + for _, nn := range tt.expectRemaining { + rvr := &v1alpha1.ReplicatedVolumeReplica{} + if err := cl.Get(t.Context(), nn, rvr); err != nil { + t.Fatalf("expected rvr %s to remain, get err: %v", nn.Name, err) + } + } + }) + } +} diff --git a/images/controller/internal/controllers/rvr_controller/controller.go b/images/controller/internal/controllers/rvr_controller/controller.go new file mode 100644 index 000000000..dd24bcf46 --- /dev/null +++ b/images/controller/internal/controllers/rvr_controller/controller.go @@ -0,0 +1,39 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrcontroller + +import ( + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +const RVRControllerName = "rvr-controller" + +func BuildController(mgr manager.Manager) error { + cl := mgr.GetClient() + + rec := NewReconciler(cl, mgr.GetLogger().WithName(RVRControllerName)) + + return builder.ControllerManagedBy(mgr). + Named(RVRControllerName). + For(&v1alpha1.ReplicatedVolumeReplica{}). + WithOptions(controller.Options{MaxConcurrentReconciles: 10}). + Complete(rec) +} diff --git a/images/controller/internal/controllers/rvr_controller/reconciler.go b/images/controller/internal/controllers/rvr_controller/reconciler.go new file mode 100644 index 000000000..f543b686d --- /dev/null +++ b/images/controller/internal/controllers/rvr_controller/reconciler.go @@ -0,0 +1,50 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvrcontroller + +import ( + "context" + + "github.com/go-logr/logr" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" +) + +type Reconciler struct { + cl client.Client + log logr.Logger +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +func NewReconciler(cl client.Client, log logr.Logger) *Reconciler { + return &Reconciler{ + cl: cl, + log: log, + } +} + +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + _ = r.log.WithValues("req", req) + + _ = ctx + _ = req + + // TODO: implement reconciliation logic + + return reconcile.Result{}, nil +} diff --git a/images/controller/internal/controllers/rvr_metadata/controller.go b/images/controller/internal/controllers/rvr_metadata/controller.go new file mode 100644 index 000000000..cad4dc0e7 --- /dev/null +++ b/images/controller/internal/controllers/rvr_metadata/controller.go @@ -0,0 +1,39 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrmetadata + +import ( + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +func BuildController(mgr manager.Manager) error { + nameController := "rvr_metadata_controller" + + r := &Reconciler{ + cl: mgr.GetClient(), + log: mgr.GetLogger().WithName(nameController).WithName("Reconciler"), + scheme: mgr.GetScheme(), + } + + return builder.ControllerManagedBy(mgr). + Named(nameController). + For(&v1alpha1.ReplicatedVolumeReplica{}). + Complete(r) +} diff --git a/images/controller/internal/controllers/rvr_metadata/doc.go b/images/controller/internal/controllers/rvr_metadata/doc.go new file mode 100644 index 000000000..4ccbe08cd --- /dev/null +++ b/images/controller/internal/controllers/rvr_metadata/doc.go @@ -0,0 +1,74 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package rvrmetadata implements the rvr-metadata-controller, which manages +// metadata (owner references and labels) on ReplicatedVolumeReplica resources. 
+// +// # Controller Responsibilities +// +// The controller ensures proper ownership and metadata by: +// - Setting metadata.ownerReferences on each RVR to point to its parent RV +// - Using the controller reference pattern for proper cascading deletion +// - Updating owner references if they become missing or incorrect +// - Setting replicated-storage-class label from the parent RV +// - Setting replicated-volume label from rvr.spec.replicatedVolumeName +// +// Note: node-name label (sds-replicated-volume.deckhouse.io/node-name) is managed by rvr_scheduling_controller. +// +// # Watched Resources +// +// The controller watches: +// - ReplicatedVolumeReplica: To maintain owner references and labels +// +// # Owner Reference Configuration +// +// The controller uses controllerutil.SetControllerReference() to set: +// - apiVersion: storage.deckhouse.io/v1alpha1 +// - kind: ReplicatedVolume +// - name: From rvr.spec.replicatedVolumeName +// - uid: From the actual RV resource +// - controller: true +// - blockOwnerDeletion: true +// +// # Labels Managed +// +// - sds-replicated-volume.deckhouse.io/replicated-storage-class: Name of the ReplicatedStorageClass (from RV) +// - sds-replicated-volume.deckhouse.io/replicated-volume: Name of the ReplicatedVolume +// +// Note: sds-replicated-volume.deckhouse.io/node-name label is managed by rvr_scheduling_controller +// (set during scheduling, restored if manually removed). +// +// # Reconciliation Flow +// +// 1. Get the RVR being reconciled +// 2. Fetch the parent ReplicatedVolume using rvr.spec.replicatedVolumeName +// 3. Set owner reference using controllerutil.SetControllerReference() +// 4. Ensure replicated-storage-class label is set from rv.spec.replicatedStorageClassName +// 5. Ensure replicated-volume label is set from rvr.spec.replicatedVolumeName +// 6. Patch RVR if any changes were made +// +// # Special Notes +// +// Owner references enable Kubernetes garbage collection: +// - When a ReplicatedVolume is deleted, all its RVRs are automatically marked for deletion +// - blockOwnerDeletion=true prevents RV deletion if RVRs still exist (works with finalizers) +// +// The controller reference pattern ensures only one controller owns each RVR, +// preventing conflicts in lifecycle management. +// +// This controller complements rv-metadata-controller and rv-delete-propagation-controller +// to provide robust lifecycle management. +package rvrmetadata diff --git a/images/controller/internal/controllers/rvr_metadata/reconciler.go b/images/controller/internal/controllers/rvr_metadata/reconciler.go new file mode 100644 index 000000000..6fe6be19e --- /dev/null +++ b/images/controller/internal/controllers/rvr_metadata/reconciler.go @@ -0,0 +1,132 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvrmetadata + +import ( + "context" + "reflect" + + "github.com/go-logr/logr" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +type Reconciler struct { + cl client.Client + log logr.Logger + scheme *runtime.Scheme +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +func NewReconciler(cl client.Client, log logr.Logger, scheme *runtime.Scheme) *Reconciler { + return &Reconciler{ + cl: cl, + log: log, + scheme: scheme, + } +} + +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + log := r.log.WithName("Reconcile").WithValues("req", req) + + rvr := &v1alpha1.ReplicatedVolumeReplica{} + if err := r.cl.Get(ctx, req.NamespacedName, rvr); err != nil { + return reconcile.Result{}, client.IgnoreNotFound(err) + } + + if !rvr.DeletionTimestamp.IsZero() && !obju.HasFinalizersOtherThan(rvr, v1alpha1.ControllerFinalizer, v1alpha1.AgentFinalizer) { + return reconcile.Result{}, nil + } + + if rvr.Spec.ReplicatedVolumeName == "" { + return reconcile.Result{}, nil + } + + rv := &v1alpha1.ReplicatedVolume{} + if err := r.cl.Get(ctx, client.ObjectKey{Name: rvr.Spec.ReplicatedVolumeName}, rv); err != nil { + return reconcile.Result{}, client.IgnoreNotFound(err) + } + + originalRVR := rvr.DeepCopy() + + // Set owner reference + if err := controllerutil.SetControllerReference(rv, rvr, r.scheme); err != nil { + log.Error(err, "unable to set controller reference") + return reconcile.Result{}, err + } + + // Process labels + labelsChanged := r.processLabels(log, rvr, rv) + + ownerRefChanged := !ownerReferencesUnchanged(originalRVR, rvr) + + if !ownerRefChanged && !labelsChanged { + return reconcile.Result{}, nil + } + + if err := r.cl.Patch(ctx, rvr, client.MergeFrom(originalRVR)); err != nil { + if client.IgnoreNotFound(err) == nil { + log.V(1).Info("ReplicatedVolumeReplica was deleted during reconciliation, skipping patch", "rvr", rvr.Name) + return reconcile.Result{}, nil + } + log.Error(err, "unable to patch ReplicatedVolumeReplica metadata", "rvr", rvr.Name) + return reconcile.Result{}, err + } + + return reconcile.Result{}, nil +} + +// processLabels ensures required labels are set on the RVR. +// Returns true if any label was changed. +func (r *Reconciler) processLabels(log logr.Logger, rvr *v1alpha1.ReplicatedVolumeReplica, rv *v1alpha1.ReplicatedVolume) bool { + var changed, labelChanged bool + + // Set replicated-volume label from spec + if rvr.Spec.ReplicatedVolumeName != "" { + labelChanged = obju.SetLabel(rvr, v1alpha1.ReplicatedVolumeLabelKey, rvr.Spec.ReplicatedVolumeName) + if labelChanged { + log.V(1).Info("replicated-volume label set on rvr", + "rv", rvr.Spec.ReplicatedVolumeName) + changed = true + } + } + + // Set replicated-storage-class label from RV + if rv.Spec.ReplicatedStorageClassName != "" { + labelChanged = obju.SetLabel(rvr, v1alpha1.ReplicatedStorageClassLabelKey, rv.Spec.ReplicatedStorageClassName) + if labelChanged { + log.V(1).Info("replicated-storage-class label set on rvr", + "rsc", rv.Spec.ReplicatedStorageClassName) + changed = true + } + } + + // Note: node-name label (sds-replicated-volume.deckhouse.io/node-name) is managed + // by rvr_scheduling_controller, which sets it when scheduling and restores if manually removed. 
+ + return changed +} + +func ownerReferencesUnchanged(before, after *v1alpha1.ReplicatedVolumeReplica) bool { + return reflect.DeepEqual(before.OwnerReferences, after.OwnerReferences) +} diff --git a/images/controller/internal/controllers/rvr_metadata/reconciler_test.go b/images/controller/internal/controllers/rvr_metadata/reconciler_test.go new file mode 100644 index 000000000..c6de2a64b --- /dev/null +++ b/images/controller/internal/controllers/rvr_metadata/reconciler_test.go @@ -0,0 +1,382 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrmetadata_test + +import ( + "context" + "fmt" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" + "k8s.io/utils/ptr" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/client/interceptor" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + rvrmetadata "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_metadata" +) + +var _ = Describe("Reconciler", func() { + scheme := runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + + var ( + clientBuilder *fake.ClientBuilder + ) + + var ( + cl client.Client + rec *rvrmetadata.Reconciler + ) + + BeforeEach(func() { + clientBuilder = fake.NewClientBuilder(). 
+ WithScheme(scheme) + + cl = nil + rec = nil + }) + + JustBeforeEach(func() { + cl = clientBuilder.Build() + rec = rvrmetadata.NewReconciler(cl, GinkgoLogr, scheme) + }) + + It("returns no error when ReplicatedVolumeReplica does not exist", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: "non-existent"}}) + Expect(err).NotTo(HaveOccurred()) + }) + + When("ReplicatedVolumeReplica exists", func() { + var rvr *v1alpha1.ReplicatedVolumeReplica + var rv *v1alpha1.ReplicatedVolume + + BeforeEach(func() { + rv = &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv1", + UID: "good-uid", + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "test-storage-class", + }, + } + rvr = &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + if rv != nil { + Expect(cl.Create(ctx, rv)).To(Succeed()) + } + Expect(cl.Create(ctx, rvr)).To(Succeed()) + }) + + It("sets ownerReference to the corresponding ReplicatedVolume", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + + Expect(got.OwnerReferences).To(ContainElement(SatisfyAll( + HaveField("Name", Equal(rv.Name)), + HaveField("Kind", Equal("ReplicatedVolume")), + HaveField("APIVersion", Equal("storage.deckhouse.io/v1alpha1")), + HaveField("Controller", Not(BeNil())), + HaveField("BlockOwnerDeletion", Not(BeNil())), + ))) + }) + + It("sets replicated-volume and replicated-storage-class labels", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + + Expect(got.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedVolumeLabelKey, rv.Name)) + Expect(got.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedStorageClassLabelKey, rv.Spec.ReplicatedStorageClassName)) + }) + + // Note: node-name label is tested in rvr_scheduling_controller tests + // as it's managed by that controller, not rvr_metadata. 
+ + When("labels are already set correctly", func() { + BeforeEach(func() { + rvr.Labels = map[string]string{ + v1alpha1.ReplicatedVolumeLabelKey: rv.Name, + v1alpha1.ReplicatedStorageClassLabelKey: rv.Spec.ReplicatedStorageClassName, + } + rvr.OwnerReferences = []metav1.OwnerReference{ + { + Name: rv.Name, + Kind: "ReplicatedVolume", + APIVersion: "storage.deckhouse.io/v1alpha1", + Controller: ptr.To(true), + BlockOwnerDeletion: ptr.To(true), + UID: rv.UID, + }, + } + + clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Patch: func(_ context.Context, _ client.WithWatch, _ client.Object, _ client.Patch, _ ...client.PatchOption) error { + return errors.NewInternalError(fmt.Errorf("patch should not be called")) + }, + }) + }) + + It("does nothing and returns no error", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedVolumeLabelKey, rv.Name)) + Expect(got.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedStorageClassLabelKey, rv.Spec.ReplicatedStorageClassName)) + }) + }) + + When("ReplicatedVolumeReplica has DeletionTimestamp", func() { + const externalFinalizer = "test-finalizer" + + When("has only controller finalizer", func() { + BeforeEach(func() { + rvr.Finalizers = []string{v1alpha1.ControllerFinalizer} + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Delete(ctx, rvr)).To(Succeed()) + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.DeletionTimestamp).NotTo(BeNil()) + Expect(got.Finalizers).To(ContainElement(v1alpha1.ControllerFinalizer)) + Expect(got.OwnerReferences).To(BeEmpty()) + }) + + It("skips reconciliation", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.DeletionTimestamp).NotTo(BeNil()) + Expect(got.Finalizers).To(ContainElement(v1alpha1.ControllerFinalizer)) + Expect(got.OwnerReferences).To(BeEmpty()) + }) + }) + + When("has external finalizer in addition to controller finalizer", func() { + BeforeEach(func() { + rvr.Finalizers = []string{v1alpha1.ControllerFinalizer, externalFinalizer} + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Delete(ctx, rvr)).To(Succeed()) + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.DeletionTimestamp).NotTo(BeNil()) + Expect(got.Finalizers).To(ContainElement(externalFinalizer)) + }) + + It("still sets ownerReference", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.DeletionTimestamp).NotTo(BeNil()) + Expect(got.Finalizers).To(ContainElement(externalFinalizer)) + Expect(got.OwnerReferences).To(ContainElement(SatisfyAll( + HaveField("Name", Equal(rv.Name)), + HaveField("Kind", Equal("ReplicatedVolume")), + HaveField("APIVersion", Equal("storage.deckhouse.io/v1alpha1")), + ))) + }) + }) + }) + 
+ When("has empty ReplicatedVolumeName", func() { + BeforeEach(func() { + rvr.Spec.ReplicatedVolumeName = "" + }) + + It("does nothing and returns no error", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.OwnerReferences).To(BeEmpty()) + }) + }) + + When("ReplicatedVolume does not exist", func() { + BeforeEach(func() { + rv = nil + }) + + It("ignores missing ReplicatedVolume", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.OwnerReferences).To(BeEmpty()) + }) + }) + + When("Get for ReplicatedVolume fails", func() { + BeforeEach(func() { + clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedVolume); ok { + return errors.NewInternalError(fmt.Errorf("test error")) + } + return c.Get(ctx, key, obj, opts...) + }, + }) + }) + + It("returns error from client", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).To(HaveOccurred()) + }) + }) + + When("Patch for ReplicatedVolumeReplica fails", func() { + BeforeEach(func() { + clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Patch: func(_ context.Context, _ client.WithWatch, _ client.Object, _ client.Patch, _ ...client.PatchOption) error { + return errors.NewInternalError(fmt.Errorf("test error")) + }, + }) + }) + + It("returns error from client", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).To(HaveOccurred()) + }) + }) + + When("ReplicatedVolumeReplica has another ownerReference", func() { + BeforeEach(func() { + rvr.OwnerReferences = []metav1.OwnerReference{ + { + Name: "other-owner", + }, + } + }) + + It("sets another ownerReference to the corresponding ReplicatedVolume and keeps the original ownerReference", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.OwnerReferences).To(HaveLen(2)) + Expect(got.OwnerReferences).To(ContainElement(SatisfyAll( + HaveField("Name", Equal(rv.Name)), + HaveField("Kind", Equal("ReplicatedVolume")), + HaveField("APIVersion", Equal("storage.deckhouse.io/v1alpha1")), + HaveField("Controller", Not(BeNil())), + HaveField("BlockOwnerDeletion", Not(BeNil())), + ))) + Expect(got.OwnerReferences).To(ContainElement(HaveField("Name", Equal("other-owner")))) + }) + }) + + When("ReplicatedVolumeReplica already has ownerReference and labels set correctly", func() { + BeforeEach(func() { + rvr.OwnerReferences = []metav1.OwnerReference{ + { + Name: "rv1", + Kind: "ReplicatedVolume", + APIVersion: "storage.deckhouse.io/v1alpha1", + Controller: ptr.To(true), + BlockOwnerDeletion: ptr.To(true), + UID: "good-uid", + }, + } + rvr.Labels = map[string]string{ + 
v1alpha1.ReplicatedVolumeLabelKey: rv.Name, + v1alpha1.ReplicatedStorageClassLabelKey: rv.Spec.ReplicatedStorageClassName, + } + + clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Patch: func(_ context.Context, _ client.WithWatch, _ client.Object, _ client.Patch, _ ...client.PatchOption) error { + return errors.NewInternalError(fmt.Errorf("test error")) + }, + }) + }) + + It("do nothing and returns no error", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.OwnerReferences).To(HaveLen(1)) + Expect(got.OwnerReferences).To(ContainElement(HaveField("Name", Equal("rv1")))) + Expect(got.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedVolumeLabelKey, rv.Name)) + Expect(got.Labels).To(HaveKeyWithValue(v1alpha1.ReplicatedStorageClassLabelKey, rv.Spec.ReplicatedStorageClassName)) + }) + }) + + When("ReplicatedVolumeReplica already has ownerReference to the ReplicatedVolume with different UID", func() { + BeforeEach(func() { + rvr.OwnerReferences = []metav1.OwnerReference{ + { + Name: "rv1", + Kind: "ReplicatedVolume", + APIVersion: "storage.deckhouse.io/v1alpha1", + Controller: ptr.To(true), + BlockOwnerDeletion: ptr.To(true), + UID: "bad-uid", + }, + } + }) + + It("sets ownerReference to the corresponding ReplicatedVolume", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rvr)}) + Expect(err).NotTo(HaveOccurred()) + + got := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), got)).To(Succeed()) + Expect(got.OwnerReferences).To(HaveLen(1)) + Expect(got.OwnerReferences).To(ContainElement(SatisfyAll( + HaveField("Name", Equal(rv.Name)), + HaveField("Kind", Equal("ReplicatedVolume")), + HaveField("APIVersion", Equal("storage.deckhouse.io/v1alpha1")), + HaveField("Controller", Not(BeNil())), + HaveField("BlockOwnerDeletion", Not(BeNil())), + HaveField("UID", Equal(types.UID("good-uid"))), + ))) + }) + }) + }) +}) diff --git a/images/controller/internal/controllers/rvr_metadata/rvr_metadata_controller_suite_test.go b/images/controller/internal/controllers/rvr_metadata/rvr_metadata_controller_suite_test.go new file mode 100644 index 000000000..230d7b45f --- /dev/null +++ b/images/controller/internal/controllers/rvr_metadata/rvr_metadata_controller_suite_test.go @@ -0,0 +1,29 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrmetadata_test + +import ( + "testing" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" +) + +func TestRvrMetadataController(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "RvrMetadataController Suite") +} diff --git a/images/controller/internal/controllers/rvr_scheduling_controller/controller.go b/images/controller/internal/controllers/rvr_scheduling_controller/controller.go new file mode 100644 index 000000000..ad4464a7f --- /dev/null +++ b/images/controller/internal/controllers/rvr_scheduling_controller/controller.go @@ -0,0 +1,47 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrschedulingcontroller + +import ( + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +const controllerName = "rvr-scheduling-controller" + +func BuildController(mgr manager.Manager) error { + r, err := NewReconciler( + mgr.GetClient(), + mgr.GetLogger().WithName(controllerName).WithName("Reconciler"), + mgr.GetScheme(), + ) + if err != nil { + return err + } + + return builder.ControllerManagedBy(mgr). + Named(controllerName). + For(&v1alpha1.ReplicatedVolume{}). + Watches( + &v1alpha1.ReplicatedVolumeReplica{}, + handler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &v1alpha1.ReplicatedVolume{}), + ). + Complete(r) +} diff --git a/images/controller/internal/controllers/rvr_scheduling_controller/doc.go b/images/controller/internal/controllers/rvr_scheduling_controller/doc.go new file mode 100644 index 000000000..86b5b50aa --- /dev/null +++ b/images/controller/internal/controllers/rvr_scheduling_controller/doc.go @@ -0,0 +1,134 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package rvrschedulingcontroller implements the rvr-scheduling-controller, which +// assigns nodes to ReplicatedVolumeReplicas based on topology, storage capacity, +// and placement requirements. 
+// +// # Controller Responsibilities +// +// The controller performs intelligent replica placement by: +// - Assigning unique nodes to each replica of a ReplicatedVolume +// - Respecting topology constraints (Zonal, TransZonal, Ignored) +// - Checking storage capacity via scheduler-extender API +// - Preferring nodes in rv.status.desiredAttachTo when possible +// - Handling different scheduling requirements for Diskful, Access, and TieBreaker replicas +// +// # Watched Resources +// +// The controller watches: +// - ReplicatedVolumeReplica: To detect replicas needing node assignment +// - ReplicatedVolume: To get placement hints (attachTo) +// - ReplicatedStorageClass: To get topology and zone constraints +// - ReplicatedStoragePool: To determine available nodes with storage +// - Node: To get zone information +// +// # Node Selection Criteria +// +// Eligible nodes are determined by intersection of: +// - Nodes in zones specified by rsc.spec.zones (or all zones if not specified) +// - Exception: For Access replicas, all nodes are eligible regardless of zones +// - Nodes with LVG from rsp.spec.lvmVolumeGroups (only for Diskful replicas) +// - Access and TieBreaker replicas can be scheduled on any node +// +// # Scheduling Phases +// +// The controller schedules replicas in three sequential phases: +// +// Phase 1: Diskful Replicas +// - Exclude nodes already hosting any replica of this RV +// - Apply topology constraints: +// - Zonal: All replicas in one zone +// - If Diskful replicas exist, use their zone +// - Else if rv.status.desiredAttachTo specified, choose best zone from those nodes +// - Else choose best zone from allowed zones +// - TransZonal: Distribute replicas evenly across zones +// - Place each replica in zone with fewest Diskful replicas +// - Fail if even distribution is impossible +// - Ignored: No zone constraints +// - Check storage capacity via scheduler-extender API +// - Prefer nodes in rv.status.desiredAttachTo (increase priority) +// +// Phase 2: Access Replicas +// - Only when rv.status.desiredAttachTo is set AND rsc.spec.volumeAccess != Local +// - Exclude nodes already hosting any replica of this RV +// - Target nodes in rv.status.desiredAttachTo without replicas +// - No topology or storage capacity constraints +// - OK if some attachTo nodes cannot get replicas (already have other replica types) +// - OK if some Access replicas cannot be scheduled (all attachTo nodes have replicas) +// +// Phase 3: TieBreaker Replicas +// - Exclude nodes already hosting any replica of this RV +// - Apply topology constraints: +// - Zonal: Place in same zone as Diskful replicas +// - Fail if no Diskful replicas exist +// - Fail if insufficient free nodes in zone +// - TransZonal: Place in zone with fewest replicas (any type) +// - If multiple zones tied, choose any +// - Fail if no free nodes in least-populated zones (cannot guarantee balance) +// - Ignored: No zone constraints +// - Fail if insufficient free nodes +// +// # Reconciliation Flow +// +// 1. Check prerequisites: +// - RV must have the controller finalizer +// 2. Get ReplicatedStorageClass and determine topology mode +// 3. List all RVRs for this RV to see existing placements +// 4. Schedule Diskful replicas: +// a. Collect eligible nodes based on zones and storage pools +// b. Apply topology rules +// c. Call scheduler-extender to verify storage capacity +// d. Assign rvr.spec.nodeName +// 5. Schedule Access replicas (if applicable): +// a. Identify nodes in attachTo without replicas +// b. 
Assign rvr.spec.nodeName +// 6. Schedule TieBreaker replicas: +// a. Apply topology rules +// b. Assign rvr.spec.nodeName +// 7. Update rvr.status.conditions[type=Scheduled]: +// - status=True, reason=ReplicaScheduled when successful +// - status=False with appropriate reason when scheduling fails: +// * InsufficientNodes, NoEligibleNodes, TopologyConstraintViolation, etc. +// - For unscheduled replicas: reason=WaitingForAnotherReplica +// +// # Status Updates +// +// The controller maintains: +// - rvr.spec.nodeName - Assigned node for the replica +// - rvr.status.conditions[type=Scheduled] - Scheduling success/failure status +// +// # Scheduler-Extender Integration +// +// For Diskful replicas, the controller calls the scheduler-extender API to: +// - Filter nodes with sufficient storage capacity +// - Consider LVM volume group availability +// - Ensure the volume can actually be created on selected nodes +// +// # Special Notes +// +// Best Zone Selection: +// - Chooses the zone with most available capacity and nodes +// - Considers storage pool availability +// +// Topology Guarantees: +// - Zonal: Failure locality within one availability zone +// - TransZonal: Replicas survive zone failures, even distribution required +// - Ignored: No zone awareness, simplest scheduling +// +// The scheduling algorithm ensures that replica placement supports the high +// availability and data consistency guarantees of the storage system. +package rvrschedulingcontroller diff --git a/images/controller/internal/controllers/rvr_scheduling_controller/reconciler.go b/images/controller/internal/controllers/rvr_scheduling_controller/reconciler.go new file mode 100644 index 000000000..7bebca288 --- /dev/null +++ b/images/controller/internal/controllers/rvr_scheduling_controller/reconciler.go @@ -0,0 +1,1347 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvrschedulingcontroller + +import ( + "context" + "errors" + "fmt" + "slices" + + "github.com/go-logr/logr" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +const ( + nodeZoneLabel = "topology.kubernetes.io/zone" + topologyIgnored = "Ignored" + topologyZonal = "Zonal" + topologyTransZonal = "TransZonal" +) + +var ( + errSchedulingTopologyConflict = errors.New("scheduling topology conflict") + errSchedulingNoCandidateNodes = errors.New("scheduling no candidate nodes") + errSchedulingPending = errors.New("scheduling pending") +) + +type Reconciler struct { + cl client.Client + log logr.Logger + scheme *runtime.Scheme + extenderClient *SchedulerExtenderClient +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +func NewReconciler(cl client.Client, log logr.Logger, scheme *runtime.Scheme) (*Reconciler, error) { + extenderClient, err := NewSchedulerHTTPClient() + if err != nil { + log.Error(err, "failed to create scheduler-extender client") + return nil, err // TODO: implement graceful shutdown + } + + // Initialize reconciler with Kubernetes client, logger, scheme and scheduler-extender client. + return &Reconciler{ + cl: cl, + log: log, + scheme: scheme, + extenderClient: extenderClient, + }, nil +} + +func (r *Reconciler) Reconcile( + ctx context.Context, + req reconcile.Request, +) (reconcile.Result, error) { + log := r.log.WithName("RVRScheduler").WithValues( + "rv", req.Name, + ) + log.V(1).Info("starting reconciliation cycle") + + // Load ReplicatedVolume, its ReplicatedStorageClass and all relevant replicas. + sctx, err := r.prepareSchedulingContext(ctx, req, log) + if err != nil { + return reconcile.Result{}, r.handlePhaseError(ctx, req.Name, "prepare", err, log) + } + if sctx == nil { + // ReplicatedVolume not found, skip reconciliation + return reconcile.Result{}, nil + } + log.V(1).Info("scheduling context prepared", "rsc", sctx.Rsc.Name, "topology", sctx.Rsc.Spec.Topology, "volumeAccess", sctx.Rsc.Spec.VolumeAccess) + + // Ensure all previously scheduled replicas have correct Scheduled condition + // This is done early so that even if phases fail, existing replicas have correct conditions + if err := r.ensureScheduledConditionOnExistingReplicas(ctx, sctx, log); err != nil { + return reconcile.Result{}, err + } + + // Phase 1: place Diskful replicas. + log.V(1).Info("starting Diskful phase", "unscheduledCount", len(sctx.UnscheduledDiskfulReplicas)) + if err := r.scheduleDiskfulPhase(ctx, sctx); err != nil { + return reconcile.Result{}, r.handlePhaseError(ctx, req.Name, string(v1alpha1.ReplicaTypeDiskful), err, log) + } + log.V(1).Info("Diskful phase completed", "scheduledCountTotal", len(sctx.RVRsToSchedule)) + + // Phase 2: place Access replicas. 
+ log.V(1).Info("starting Access phase", "unscheduledCount", len(sctx.UnscheduledAccessReplicas)) + if err := r.scheduleAccessPhase(sctx); err != nil { + return reconcile.Result{}, r.handlePhaseError(ctx, req.Name, string(v1alpha1.ReplicaTypeAccess), err, log) + } + log.V(1).Info("Access phase completed", "scheduledCountTotal", len(sctx.RVRsToSchedule)) + + // Phase 3: place TieBreaker replicas. + log.V(1).Info("starting TieBreaker phase", "unscheduledCount", len(sctx.UnscheduledTieBreakerReplicas)) + if err := r.scheduleTieBreakerPhase(ctx, sctx); err != nil { + return reconcile.Result{}, r.handlePhaseError(ctx, req.Name, string(v1alpha1.ReplicaTypeTieBreaker), err, log) + } + log.V(1).Info("TieBreaker phase completed", "scheduledCountTotal", len(sctx.RVRsToSchedule)) + + log.V(1).Info("patching scheduled replicas", "countTotal", len(sctx.RVRsToSchedule)) + if err := r.patchScheduledReplicas(ctx, sctx, log); err != nil { + return reconcile.Result{}, err + } + + log.V(1).Info("reconciliation completed successfully", "totalScheduled", len(sctx.RVRsToSchedule)) + return reconcile.Result{}, nil +} + +// rvrNotReadyReason describes why an RVR is not ready for scheduling. +type rvrNotReadyReason struct { + reason string + message string +} + +// handlePhaseError handles errors that occur during scheduling phases. +// It logs the error, sets failed condition on RVRs, and returns the error. +func (r *Reconciler) handlePhaseError( + ctx context.Context, + rvName string, + phaseName string, + err error, + log logr.Logger, +) error { + log.Error(err, phaseName+" phase failed") + reason := schedulingErrorToReason(err) + if setErr := r.setFailedScheduledConditionOnNonScheduledRVRs(ctx, rvName, reason, log); setErr != nil { + log.Error(setErr, "failed to set Scheduled condition on RVRs after scheduling error") + } + return err +} + +// schedulingErrorToReason converts a scheduling error to rvrNotReadyReason. +func schedulingErrorToReason(err error) *rvrNotReadyReason { + reason := v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonSchedulingFailed + switch { + case errors.Is(err, errSchedulingTopologyConflict): + reason = v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonTopologyConstraintsFailed + case errors.Is(err, errSchedulingNoCandidateNodes): + reason = v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes + case errors.Is(err, errSchedulingPending): + reason = v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonSchedulingPending + } + return &rvrNotReadyReason{ + reason: reason, + message: err.Error(), + } +} + +// patchScheduledReplicas patches all scheduled replicas with their assigned node names +// and sets the Scheduled condition to True. +func (r *Reconciler) patchScheduledReplicas( + ctx context.Context, + sctx *SchedulingContext, + log logr.Logger, +) error { + if len(sctx.RVRsToSchedule) == 0 { + log.V(1).Info("no scheduled replicas to patch") + return nil + } + + for _, rvr := range sctx.RVRsToSchedule { + log.V(2).Info("patching replica", "rvr", rvr.Name, "nodeName", rvr.Spec.NodeName, "type", rvr.Spec.Type) + // Create original state for patch (without NodeName and node-name label) + original := rvr.DeepCopy() + original.Spec.NodeName = "" + + // Set node-name label together with NodeName. + // Note: if label is removed manually, it won't be restored until next condition check + // in ensureScheduledConditionOnExistingReplicas (which runs on each reconcile). 
+ _ = obju.SetLabel(rvr, v1alpha1.NodeNameLabelKey, rvr.Spec.NodeName) + + // Apply the patch; ignore NotFound errors because the replica may have been deleted meanwhile. + if err := r.cl.Patch(ctx, rvr, client.MergeFrom(original)); err != nil { + if apierrors.IsNotFound(err) { + log.V(1).Info("replica not found during patch, skipping", "rvr", rvr.Name) + continue // Replica may have been deleted + } + return fmt.Errorf("failed to patch RVR %s: %w", rvr.Name, err) + } + + // Set Scheduled condition to True for successfully scheduled replicas + if err := r.setScheduledConditionOnRVR( + ctx, + rvr, + metav1.ConditionTrue, + v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonReplicaScheduled, + "", + ); err != nil { + return fmt.Errorf("failed to set Scheduled condition on RVR %s: %w", rvr.Name, err) + } + } + return nil +} + +// ensureScheduledConditionOnExistingReplicas ensures that all already-scheduled replicas +// (those that had NodeName set before this reconcile) have the correct Scheduled condition. +// This handles cases where condition was missing or incorrect. +func (r *Reconciler) ensureScheduledConditionOnExistingReplicas( + ctx context.Context, + sctx *SchedulingContext, + log logr.Logger, +) error { + // Collect all scheduled replicas that were NOT scheduled in this cycle + alreadyScheduledReplicas := make([]*v1alpha1.ReplicatedVolumeReplica, 0) + + // Also check for scheduled Access and TieBreaker replicas from RvrList + for _, rvr := range sctx.RvrList { + if rvr.Spec.NodeName == "" { + continue // Skip unscheduled + } + alreadyScheduledReplicas = append(alreadyScheduledReplicas, rvr) + } + + for _, rvr := range alreadyScheduledReplicas { + log.V(2).Info("fixing Scheduled condition on existing replica", "rvr", rvr.Name) + + // Ensure node-name label is set (restores label if manually removed) + if err := r.ensureNodeNameLabel(ctx, log, rvr); err != nil { + return fmt.Errorf("failed to ensure node-name label on RVR %s: %w", rvr.Name, err) + } + + if err := r.setScheduledConditionOnRVR( + ctx, + rvr, + metav1.ConditionTrue, + v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonReplicaScheduled, + "", + ); err != nil { + return fmt.Errorf("failed to set Scheduled condition on existing RVR %s: %w", rvr.Name, err) + } + } + + return nil +} + +// isRVReadyToSchedule checks if the ReplicatedVolume is ready for scheduling. +// Returns nil if ready, or an error wrapped with errSchedulingPending if not ready. +func isRVReadyToSchedule(rv *v1alpha1.ReplicatedVolume) error { + if rv.Finalizers == nil { + return fmt.Errorf("%w: ReplicatedVolume has no finalizers", errSchedulingPending) + } + + if !slices.Contains(rv.Finalizers, v1alpha1.ControllerFinalizer) { + return fmt.Errorf("%w: ReplicatedVolume is missing controller finalizer", errSchedulingPending) + } + + if rv.Spec.ReplicatedStorageClassName == "" { + return fmt.Errorf("%w: ReplicatedStorageClassName is not specified in ReplicatedVolume spec", errSchedulingPending) + } + + if rv.Spec.Size.IsZero() { + return fmt.Errorf("%w: ReplicatedVolume size is zero in ReplicatedVolume spec", errSchedulingPending) + } + + return nil +} + +func (r *Reconciler) prepareSchedulingContext( + ctx context.Context, + req reconcile.Request, + log logr.Logger, +) (*SchedulingContext, error) { + // Fetch the target ReplicatedVolume for this reconcile request. + rv := &v1alpha1.ReplicatedVolume{} + if err := r.cl.Get(ctx, req.NamespacedName, rv); err != nil { + // If the volume no longer exists, exit reconciliation without error. 
+ if apierrors.IsNotFound(err) { + log.V(1).Info("ReplicatedVolume not found, skipping reconciliation") + return nil, nil + } + return nil, fmt.Errorf("unable to get ReplicatedVolume: %w", err) + } + + if err := isRVReadyToSchedule(rv); err != nil { + return nil, err + } + + // Load the referenced ReplicatedStorageClass. + rsc := &v1alpha1.ReplicatedStorageClass{} + if err := r.cl.Get(ctx, client.ObjectKey{Name: rv.Spec.ReplicatedStorageClassName}, rsc); err != nil { + return nil, fmt.Errorf("unable to get ReplicatedStorageClass: %w", err) + } + + // List all ReplicatedVolumeReplica resources for this RV. + replicaList := &v1alpha1.ReplicatedVolumeReplicaList{} + if err := r.cl.List(ctx, replicaList, client.MatchingFields{ + indexes.IndexFieldRVRByReplicatedVolumeName: rv.Name, + }); err != nil { + return nil, fmt.Errorf("unable to list ReplicatedVolumeReplica: %w", err) + } + + // Collect replicas for this RV: + // - replicasForRV: non-deleting replicas + // - nodesWithRVReplica: all occupied nodes (including nodes with deleting replicas) + replicasForRV, nodesWithRVReplica := collectReplicasAndOccupiedNodes(replicaList.Items) + + rsp := &v1alpha1.ReplicatedStoragePool{} + if err := r.cl.Get(ctx, client.ObjectKey{Name: rsc.Spec.StoragePool}, rsp); err != nil { + return nil, fmt.Errorf("unable to get ReplicatedStoragePool %s: %w", rsc.Spec.StoragePool, err) + } + + rspLvgToNodeInfoMap, err := r.getLVGToNodesByStoragePool(ctx, rsp, log) + if err != nil { + return nil, fmt.Errorf("unable to get LVG to nodes mapping: %w", err) + } + + // Build list of RSP nodes WITHOUT replicas - exclude nodes that already have replicas. + rspNodesWithoutReplica := []string{} + for _, info := range rspLvgToNodeInfoMap { + if _, hasReplica := nodesWithRVReplica[info.NodeName]; !hasReplica { + rspNodesWithoutReplica = append(rspNodesWithoutReplica, info.NodeName) + } + } + + nodeNameToZone, err := r.getNodeNameToZoneMap(ctx, log) + if err != nil { + return nil, fmt.Errorf("unable to get node to zone mapping: %w", err) + } + + attachToList := getAttachToNodeList(rv) + scheduledDiskfulReplicas, unscheduledDiskfulReplicas := getTypedReplicasLists(replicasForRV, v1alpha1.ReplicaTypeDiskful) + _, unscheduledAccessReplicas := getTypedReplicasLists(replicasForRV, v1alpha1.ReplicaTypeAccess) + _, unscheduledTieBreakerReplicas := getTypedReplicasLists(replicasForRV, v1alpha1.ReplicaTypeTieBreaker) + attachToNodesWithoutAnyReplica := getAttachToNodesWithoutAnyReplica(attachToList, nodesWithRVReplica) + + schedulingCtx := &SchedulingContext{ + Log: log, + Rv: rv, + Rsc: rsc, + Rsp: rsp, + RvrList: replicasForRV, + AttachToNodes: attachToList, + AttachToNodesWithoutRvReplica: attachToNodesWithoutAnyReplica, + RspLvgToNodeInfoMap: rspLvgToNodeInfoMap, + NodesWithAnyReplica: nodesWithRVReplica, + UnscheduledDiskfulReplicas: unscheduledDiskfulReplicas, + ScheduledDiskfulReplicas: scheduledDiskfulReplicas, + UnscheduledAccessReplicas: unscheduledAccessReplicas, + UnscheduledTieBreakerReplicas: unscheduledTieBreakerReplicas, + RspNodesWithoutReplica: rspNodesWithoutReplica, + NodeNameToZone: nodeNameToZone, + } + + return schedulingCtx, nil +} + +func (r *Reconciler) scheduleDiskfulPhase( + ctx context.Context, + sctx *SchedulingContext, +) error { + if len(sctx.UnscheduledDiskfulReplicas) == 0 { + sctx.Log.V(1).Info("no unscheduled Diskful replicas. 
Skipping Diskful phase.") + return nil + } + + candidateNodes := sctx.RspNodesWithoutReplica + sctx.Log.V(1).Info("Diskful phase: initial candidate nodes", "count", len(candidateNodes), "nodes", candidateNodes) + + // Try to schedule replicas, collect failure reason if any step fails + failureReason := r.tryScheduleDiskfulReplicas(ctx, sctx, candidateNodes) + + // Set Scheduled=False condition on remaining unscheduled Diskful replicas + if len(sctx.UnscheduledDiskfulReplicas) > 0 && failureReason != nil { + sctx.Log.V(1).Info("setting Scheduled=False on unscheduled Diskful replicas", + "count", len(sctx.UnscheduledDiskfulReplicas), + "reason", failureReason.reason) + return r.setScheduledConditionOnRVRs( + ctx, + sctx.UnscheduledDiskfulReplicas, + metav1.ConditionFalse, + failureReason.reason, + failureReason.message, + sctx.Log, + ) + } + + return nil +} + +// tryScheduleDiskfulReplicas attempts to schedule Diskful replicas and returns failure reason if not all could be scheduled. +func (r *Reconciler) tryScheduleDiskfulReplicas( + ctx context.Context, + sctx *SchedulingContext, + candidateNodes []string, +) *rvrNotReadyReason { + // Apply topology constraints (also checks for empty candidates) + if err := r.applyTopologyFilter(candidateNodes, true, sctx); err != nil { + sctx.Log.V(1).Info("topology filter failed", "error", err) + return schedulingErrorToReason(err) + } + + // Apply capacity filtering + if err := r.applyCapacityFilterAndScoreCandidates(ctx, sctx); err != nil { + sctx.Log.V(1).Info("capacity filter failed", "error", err) + return schedulingErrorToReason(err) + } + sctx.Log.V(1).Info("capacity filter applied and candidates scored", "zonesCount", len(sctx.ZonesToNodeCandidatesMap)) + + sctx.ApplyAttachToBonus() + sctx.Log.V(1).Info("attachTo bonus applied") + + // Assign replicas in best-effort mode + assignedReplicas, err := r.assignReplicasToNodes(sctx, sctx.UnscheduledDiskfulReplicas, v1alpha1.ReplicaTypeDiskful, true) + if err != nil { + sctx.Log.Error(err, "unexpected error during replica assignment") + return schedulingErrorToReason(err) + } + sctx.Log.V(1).Info("Diskful replicas assigned", "count", len(assignedReplicas)) + + sctx.UpdateAfterScheduling(assignedReplicas) + + // Return failure reason if not all replicas were scheduled + if len(sctx.UnscheduledDiskfulReplicas) > 0 { + return schedulingErrorToReason(fmt.Errorf("%w: not enough candidate nodes to schedule all Diskful replicas", errSchedulingNoCandidateNodes)) + } + + return nil +} + +// assignReplicasToNodes assigns nodes to unscheduled replicas based on topology and node scores. +// For Ignored topology: selects best nodes by score. +// For Zonal topology: selects the best zone first (by total score), then best nodes from that zone. +// For TransZonal topology: distributes replicas across zones, picking zones with fewer scheduled replicas first. +// replicaTypeFilter: for TransZonal, which replica types to count for zone balancing (empty = all types). +// bestEffort: if true, don't return error when not enough nodes. +// Note: This function returns the list of replicas that were assigned nodes in this call. 
+func (r *Reconciler) assignReplicasToNodes( + sctx *SchedulingContext, + unscheduledReplicas []*v1alpha1.ReplicatedVolumeReplica, + replicaTypeFilter v1alpha1.ReplicaType, + bestEffort bool, +) ([]*v1alpha1.ReplicatedVolumeReplica, error) { + if len(unscheduledReplicas) == 0 { + sctx.Log.Info("no unscheduled replicas to assign", "rv", sctx.Rv.Name) + return nil, nil + } + + switch sctx.Rsc.Spec.Topology { + case topologyIgnored: + return r.assignReplicasIgnoredTopology(sctx, unscheduledReplicas, bestEffort) + case topologyZonal: + return r.assignReplicasZonalTopology(sctx, unscheduledReplicas, bestEffort) + case topologyTransZonal: + return r.assignReplicasTransZonalTopology(sctx, unscheduledReplicas, replicaTypeFilter, bestEffort) + default: + return nil, fmt.Errorf("unknown topology: %s", sctx.Rsc.Spec.Topology) + } +} + +// assignReplicasIgnoredTopology assigns replicas to best nodes by score (ignoring zones). +// If bestEffort=true, assigns as many as possible without error. +// Returns the list of replicas that were assigned nodes. +func (r *Reconciler) assignReplicasIgnoredTopology( + sctx *SchedulingContext, + unscheduledReplicas []*v1alpha1.ReplicatedVolumeReplica, + bestEffort bool, +) ([]*v1alpha1.ReplicatedVolumeReplica, error) { + sctx.Log.V(1).Info("assigning replicas with Ignored topology", "replicasCount", len(unscheduledReplicas), "bestEffort", bestEffort) + // Collect all candidates from all zones + var allCandidates []NodeCandidate + for _, candidates := range sctx.ZonesToNodeCandidatesMap { + allCandidates = append(allCandidates, candidates...) + } + sctx.Log.V(2).Info("collected candidates", "count", len(allCandidates)) + + // Assign nodes to replicas + var assignedReplicas []*v1alpha1.ReplicatedVolumeReplica + for _, rvr := range unscheduledReplicas { + selectedNode, remaining := SelectAndRemoveBestNode(allCandidates) + if selectedNode == "" { + sctx.Log.V(1).Info("not enough candidate nodes for all replicas", "assigned", len(assignedReplicas), "total", len(unscheduledReplicas)) + if bestEffort { + break // Best-effort: return what we have + } + return assignedReplicas, fmt.Errorf("%w: not enough candidate nodes for all replicas", errSchedulingNoCandidateNodes) + } + allCandidates = remaining + + // Mark replica for scheduling + sctx.Log.V(2).Info("assigned replica to node", "rvr", rvr.Name, "node", selectedNode) + rvr.Spec.NodeName = selectedNode + assignedReplicas = append(assignedReplicas, rvr) + } + + return assignedReplicas, nil +} + +// assignReplicasZonalTopology selects the best zone first, then assigns replicas to best nodes in that zone. +// If bestEffort=true, assigns as many as possible without error. +// Returns the list of replicas that were assigned nodes. 
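+//
+// Worked example of the zone metric used below (zoneScore = totalScore *
+// number of candidates): zone-a with two candidate nodes scored 10 each gives
+// 20*2 = 40, while zone-b with a single node scored 30 gives 30*1 = 30, so
+// zone-a is selected even though its best individual node has a lower score.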
+func (r *Reconciler) assignReplicasZonalTopology( + sctx *SchedulingContext, + unscheduledReplicas []*v1alpha1.ReplicatedVolumeReplica, + bestEffort bool, +) ([]*v1alpha1.ReplicatedVolumeReplica, error) { + sctx.Log.V(1).Info("assigning replicas with Zonal topology", "replicasCount", len(unscheduledReplicas), "bestEffort", bestEffort) + // Find the best zone by combined metric: totalScore * len(candidates) + // This ensures zones with more nodes are preferred when scores are comparable + var bestZone string + bestZoneScore := -1 + + for zone, candidates := range sctx.ZonesToNodeCandidatesMap { + totalScore := 0 + for _, c := range candidates { + totalScore += c.Score + } + // Combined metric: zones with more nodes and good scores are preferred + zoneScore := totalScore * len(candidates) + sctx.Log.V(2).Info("evaluating zone", "zone", zone, "candidatesCount", len(candidates), "totalScore", totalScore, "zoneScore", zoneScore) + if zoneScore > bestZoneScore { + bestZoneScore = zoneScore + bestZone = zone + } + } + + if bestZone == "" { + sctx.Log.V(1).Info("no zones with candidates available") + if bestEffort { + return nil, nil // Best-effort: no candidates, no error + } + return nil, fmt.Errorf("%w: no zones with candidates available", errSchedulingNoCandidateNodes) + } + sctx.Log.V(1).Info("selected best zone", "zone", bestZone, "score", bestZoneScore) + + // Assign nodes to replicas + var assignedReplicas []*v1alpha1.ReplicatedVolumeReplica + for _, rvr := range unscheduledReplicas { + selectedNode, remaining := SelectAndRemoveBestNode(sctx.ZonesToNodeCandidatesMap[bestZone]) + if selectedNode == "" { + sctx.Log.V(1).Info("not enough candidate nodes in zone", "zone", bestZone, "assigned", len(assignedReplicas), "total", len(unscheduledReplicas)) + if bestEffort { + break // Best-effort: return what we have + } + return assignedReplicas, fmt.Errorf("%w: not enough candidate nodes in zone %s for all replicas", errSchedulingNoCandidateNodes, bestZone) + } + sctx.ZonesToNodeCandidatesMap[bestZone] = remaining + + // Mark replica for scheduling + sctx.Log.V(2).Info("assigned replica to node in zone", "rvr", rvr.Name, "node", selectedNode, "zone", bestZone) + rvr.Spec.NodeName = selectedNode + assignedReplicas = append(assignedReplicas, rvr) + } + + return assignedReplicas, nil +} + +// assignReplicasTransZonalTopology distributes replicas across zones, preferring zones with fewer scheduled replicas of the same type. +// It modifies rvr.Spec.NodeName and adds replicas to sctx.RVRsToSchedule for later patching. +// If bestEffort=true, assigns as many as possible without error when distribution constraints can't be met. +// Returns the list of replicas that were assigned nodes. 
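+//
+// Worked example of the even-distribution check below: if the allowed zones
+// currently hold {zone-a: 0, zone-b: 1, zone-c: 1} replicas of the filtered
+// type but zone-a has no candidate nodes left, the global minimum (0 in
+// zone-a) is lower than the minimum among zones that still have candidates
+// (1), so the next replica is not placed: putting it into zone-b or zone-c
+// would break the even-distribution guarantee.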
+func (r *Reconciler) assignReplicasTransZonalTopology( + sctx *SchedulingContext, + unscheduledReplicas []*v1alpha1.ReplicatedVolumeReplica, + replicaTypeFilter v1alpha1.ReplicaType, + bestEffort bool, +) ([]*v1alpha1.ReplicatedVolumeReplica, error) { + if len(unscheduledReplicas) == 0 { + return nil, nil + } + + sctx.Log.V(1).Info("assigning replicas with TransZonal topology", "replicasCount", len(unscheduledReplicas), "replicaTypeFilter", replicaTypeFilter, "bestEffort", bestEffort) + + // Count already scheduled replicas per zone (filtered by type if specified) + zoneReplicaCount := countReplicasByZone(sctx.RvrList, replicaTypeFilter, sctx.NodeNameToZone) + sctx.Log.V(2).Info("current zone replica distribution", "zoneReplicaCount", zoneReplicaCount) + + // Get all allowed zones for TransZonal topology + allowedZones := getAllowedZones(nil, sctx.Rsc.Spec.Zones, sctx.NodeNameToZone) + sctx.Log.V(2).Info("allowed zones for TransZonal", "zones", allowedZones) + + // Build set of zones that have available candidates + availableZones := make(map[string]struct{}) + for zone, candidates := range sctx.ZonesToNodeCandidatesMap { + if len(candidates) > 0 { + availableZones[zone] = struct{}{} + } + } + + // For each unscheduled replica, pick the zone with fewest replicas, then best node + var assignedReplicas []*v1alpha1.ReplicatedVolumeReplica + for i, rvr := range unscheduledReplicas { + sctx.Log.V(2).Info("scheduling replica", "index", i, "rvr", rvr.Name) + + // Find zone with minimum replica count among ALL allowed zones + globalMinZone, globalMinCount := findZoneWithMinReplicaCount(allowedZones, zoneReplicaCount) + + // Find zone with minimum replica count that has available candidates + selectedZone, availableMinCount := findZoneWithMinReplicaCount(availableZones, zoneReplicaCount) + + if selectedZone == "" { + // No more zones with available candidates + sctx.Log.V(1).Info("no more zones with available candidates", "assigned", len(assignedReplicas), "total", len(unscheduledReplicas)) + if bestEffort { + break // Best-effort: return what we have + } + return assignedReplicas, fmt.Errorf( + "%w: no zones with available nodes to place replica", + errSchedulingNoCandidateNodes, + ) + } + + // Check if we can guarantee even distribution: + // If the global minimum (across all allowed zones) is less than the minimum among available zones, + // it means there's a zone that should have replicas but has no available nodes. 
+ if globalMinCount < availableMinCount { + sctx.Log.V(1).Info("cannot guarantee even distribution: zone with fewer replicas has no available nodes", + "unavailableZone", globalMinZone, "replicasInZone", globalMinCount, "minReplicasInAvailableZones", availableMinCount) + if bestEffort { + break // Best-effort: return what we have, can't maintain even distribution + } + return assignedReplicas, fmt.Errorf( + "%w: zone %q has %d replicas but no available nodes; replica should be placed there to maintain even distribution across zones", + errSchedulingNoCandidateNodes, + globalMinZone, + globalMinCount, + ) + } + + sctx.Log.V(2).Info("selected zone for replica", "zone", selectedZone, "replicaCount", availableMinCount) + + // Select best node from zone and remove it from candidates + selectedNode, remaining := SelectAndRemoveBestNode(sctx.ZonesToNodeCandidatesMap[selectedZone]) + if selectedNode == "" { + // No available node in this zone - stop scheduling remaining replicas + sctx.Log.V(1).Info("no available node in selected zone", "zone", selectedZone) + return assignedReplicas, nil + } + sctx.ZonesToNodeCandidatesMap[selectedZone] = remaining + + // Update availableZones if zone has no more candidates + if len(remaining) == 0 { + delete(availableZones, selectedZone) + } + + // Update replica node name + sctx.Log.V(2).Info("assigned replica to node", "rvr", rvr.Name, "node", selectedNode, "zone", selectedZone) + rvr.Spec.NodeName = selectedNode + assignedReplicas = append(assignedReplicas, rvr) + + // Update zone replica count + zoneReplicaCount[selectedZone]++ + } + + sctx.Log.V(1).Info("TransZonal assignment completed", "assigned", len(assignedReplicas)) + return assignedReplicas, nil +} + +//nolint:unparam // error is always nil by design - Access phase never fails +func (r *Reconciler) scheduleAccessPhase( + sctx *SchedulingContext, +) error { + // Spec «Access»: phase works only when: + // - rv.status.desiredAttachTo is set AND not all desiredAttachTo nodes have replicas + // - rsc.spec.volumeAccess != Local + if len(sctx.AttachToNodes) == 0 { + sctx.Log.V(1).Info("skipping Access phase: no attachTo nodes") + return nil + } + + if sctx.Rsc.Spec.VolumeAccess == "Local" { + sctx.Log.V(1).Info("skipping Access phase: volumeAccess is Local") + return nil + } + + if len(sctx.UnscheduledAccessReplicas) == 0 { + sctx.Log.V(1).Info("no unscheduled Access replicas") + return nil + } + sctx.Log.V(1).Info("Access phase: processing replicas", "unscheduledCount", len(sctx.UnscheduledAccessReplicas)) + + // Spec «Access»: exclude nodes that already host any replica of this RV (any type) + // Use AttachToNodesWithoutRvReplica which already contains attachTo nodes without any replica + candidateNodes := sctx.AttachToNodesWithoutRvReplica + if len(candidateNodes) == 0 { + // All attachTo nodes already have replicas; nothing to do. + // Spec «Access»: it is allowed to have replicas that could not be scheduled + sctx.Log.V(1).Info("Access phase: all attachTo nodes already have replicas") + return nil + } + sctx.Log.V(1).Info("Access phase: candidate nodes", "count", len(candidateNodes), "nodes", candidateNodes) + + // We are not required to place all Access replicas or to cover all desiredAttachTo nodes. 
+ // Spec «Access»: it is allowed to have nodes in rv.status.desiredAttachTo without enough replicas + // Spec «Access»: it is allowed to have replicas that could not be scheduled + nodesToFill := min(len(candidateNodes), len(sctx.UnscheduledAccessReplicas)) + sctx.Log.V(1).Info("Access phase: scheduling replicas", "nodesToFill", nodesToFill) + + var assignedReplicas []*v1alpha1.ReplicatedVolumeReplica + for i := range nodesToFill { + nodeName := candidateNodes[i] + rvr := sctx.UnscheduledAccessReplicas[i] + + sctx.Log.V(2).Info("Access phase: assigning replica", "rvr", rvr.Name, "node", nodeName) + rvr.Spec.NodeName = nodeName + assignedReplicas = append(assignedReplicas, rvr) + } + + // Update context after scheduling + sctx.UpdateAfterScheduling(assignedReplicas) + sctx.Log.V(1).Info("Access phase: completed", "assigned", len(assignedReplicas)) + + return nil +} + +func (r *Reconciler) scheduleTieBreakerPhase( + ctx context.Context, + sctx *SchedulingContext, +) error { + if len(sctx.UnscheduledTieBreakerReplicas) == 0 { + sctx.Log.V(1).Info("no unscheduled TieBreaker replicas") + return nil + } + sctx.Log.V(1).Info("TieBreaker phase: processing replicas", "unscheduledCount", len(sctx.UnscheduledTieBreakerReplicas), "topology", sctx.Rsc.Spec.Topology) + + // Build candidate nodes (nodes without any replica of this RV) + candidateNodes := r.getTieBreakerCandidateNodes(sctx) + sctx.Log.V(2).Info("TieBreaker phase: candidate nodes", "count", len(candidateNodes)) + + failureReason := r.tryScheduleTieBreakerReplicas(sctx, candidateNodes) + + // Set Scheduled=False condition on remaining unscheduled TieBreaker replicas + if len(sctx.UnscheduledTieBreakerReplicas) > 0 && failureReason != nil { + if err := r.setScheduledConditionOnRVRs( + ctx, + sctx.UnscheduledTieBreakerReplicas, + metav1.ConditionFalse, + failureReason.reason, + failureReason.message, + sctx.Log, + ); err != nil { + return err + } + } + + return nil +} + +// tryScheduleTieBreakerReplicas attempts to schedule TieBreaker replicas and returns failure reason if not all could be scheduled. 
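+//
+// Illustrative example: with TransZonal topology and existing replicas
+// {zone-a: 2 (Diskful + Access), zone-b: 1 (Diskful), zone-c: 1 (Diskful)},
+// the next TieBreaker goes to zone-b or zone-c (ties are broken arbitrarily),
+// because the empty replica type filter passed below counts replicas of all
+// types when balancing zones.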
+func (r *Reconciler) tryScheduleTieBreakerReplicas( + sctx *SchedulingContext, + candidateNodes []string, +) *rvrNotReadyReason { + // Apply topology filter (isDiskfulPhase=false) + if err := r.applyTopologyFilter(candidateNodes, false, sctx); err != nil { + sctx.Log.V(1).Info("topology filter failed", "error", err) + return schedulingErrorToReason(err) + } + + // Assign replicas: count ALL replica types for zone balancing, best-effort mode + assignedReplicas, err := r.assignReplicasToNodes(sctx, sctx.UnscheduledTieBreakerReplicas, v1alpha1.ReplicaType(""), true) + if err != nil { + sctx.Log.Error(err, "unexpected error during TieBreaker replica assignment") + return schedulingErrorToReason(err) + } + + // Update context after scheduling + sctx.UpdateAfterScheduling(assignedReplicas) + sctx.Log.V(1).Info("TieBreaker phase: completed", "assigned", len(assignedReplicas)) + + // Return failure reason if not all replicas were scheduled + if len(sctx.UnscheduledTieBreakerReplicas) > 0 { + return schedulingErrorToReason(fmt.Errorf("%w: not enough candidate nodes to schedule all TieBreaker replicas", errSchedulingNoCandidateNodes)) + } + + return nil +} + +// getTieBreakerCandidateNodes returns nodes that can host TieBreaker replicas: +// - Nodes without any replica of this RV +// Zone filtering is done later in applyTopologyFilter which considers scheduled Diskful replicas +func (r *Reconciler) getTieBreakerCandidateNodes(sctx *SchedulingContext) []string { + var candidateNodes []string + for nodeName := range sctx.NodeNameToZone { + if _, hasReplica := sctx.NodesWithAnyReplica[nodeName]; hasReplica { + continue + } + candidateNodes = append(candidateNodes, nodeName) + } + return candidateNodes +} + +func getAttachToNodeList(rv *v1alpha1.ReplicatedVolume) []string { + if rv == nil { + return nil + } + return slices.Clone(rv.Status.DesiredAttachTo) +} + +// collectReplicasAndOccupiedNodes processes replicas (already filtered for a given RV) and returns: +// - activeReplicas: non-deleting replicas (both scheduled and unscheduled) +// - occupiedNodes: all nodes with replicas (including deleting ones) to prevent scheduling collisions +func collectReplicasAndOccupiedNodes( + allReplicas []v1alpha1.ReplicatedVolumeReplica, +) (activeReplicas []*v1alpha1.ReplicatedVolumeReplica, occupiedNodes map[string]struct{}) { + occupiedNodes = make(map[string]struct{}) + + for i := range allReplicas { + rvr := &allReplicas[i] + // Track nodes from ALL replicas (including deleting ones) for occupancy + // This prevents scheduling new replicas on nodes where replicas are being deleted + if rvr.Spec.NodeName != "" { + occupiedNodes[rvr.Spec.NodeName] = struct{}{} + } + // Only include non-deleting replicas (active replicas) + if rvr.DeletionTimestamp.IsZero() { + activeReplicas = append(activeReplicas, rvr) + } + } + return activeReplicas, occupiedNodes +} + +func getTypedReplicasLists( + replicasForRV []*v1alpha1.ReplicatedVolumeReplica, + replicaType v1alpha1.ReplicaType, +) (scheduled, unscheduled []*v1alpha1.ReplicatedVolumeReplica) { + // Collect replicas of the given type, separating them by NodeName assignment. + for _, rvr := range replicasForRV { + if rvr.Spec.Type != replicaType { + continue + } + if rvr.Spec.NodeName != "" { + scheduled = append(scheduled, rvr) + } else { + unscheduled = append(unscheduled, rvr) + } + } + + return scheduled, unscheduled +} + +// setScheduledConditionOnRVRs sets the Scheduled condition on a list of RVRs. 
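+// It stops at the first patch error and returns it; conditions that are already up to date are left unchanged.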
+func (r *Reconciler) setScheduledConditionOnRVRs( + ctx context.Context, + rvrs []*v1alpha1.ReplicatedVolumeReplica, + status metav1.ConditionStatus, + reason string, + message string, + log logr.Logger, +) error { + for _, rvr := range rvrs { + if err := r.setScheduledConditionOnRVR(ctx, rvr, status, reason, message); err != nil { + log.Error(err, "failed to set Scheduled condition", "rvr", rvr.Name) + return err + } + } + return nil +} + +// setScheduledConditionOnRVR sets the Scheduled condition on a single RVR. +func (r *Reconciler) setScheduledConditionOnRVR( + ctx context.Context, + rvr *v1alpha1.ReplicatedVolumeReplica, + status metav1.ConditionStatus, + reason string, + message string, +) error { + patch := client.MergeFrom(rvr.DeepCopy()) + + changed := meta.SetStatusCondition( + &rvr.Status.Conditions, + metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeReplicaCondScheduledType, + Status: status, + Reason: reason, + Message: message, + ObservedGeneration: rvr.Generation, + }, + ) + + if !changed { + return nil + } + + err := r.cl.Status().Patch(ctx, rvr, patch) + if apierrors.IsNotFound(err) { + return nil + } + + return err +} + +// ensureNodeNameLabel ensures the node-name label is set on RVR matching its NodeName. +// This restores label if manually removed. +func (r *Reconciler) ensureNodeNameLabel( + ctx context.Context, + log logr.Logger, + rvr *v1alpha1.ReplicatedVolumeReplica, +) error { + if rvr.Spec.NodeName == "" { + return nil + } + + original := rvr.DeepCopy() + changed := obju.SetLabel(rvr, v1alpha1.NodeNameLabelKey, rvr.Spec.NodeName) + if !changed { + return nil + } + + log.V(2).Info("restoring node-name label on RVR", "rvr", rvr.Name, "node", rvr.Spec.NodeName) + + patch := client.MergeFrom(original) + if err := r.cl.Patch(ctx, rvr, patch); err != nil { + if apierrors.IsNotFound(err) { + return nil + } + return err + } + + return nil +} + +// setFailedScheduledConditionOnNonScheduledRVRs sets the Scheduled condition to False on all RVRs +// belonging to the given RV when the RV is not ready for scheduling. +func (r *Reconciler) setFailedScheduledConditionOnNonScheduledRVRs( + ctx context.Context, + rvName string, + notReadyReason *rvrNotReadyReason, + log logr.Logger, +) error { + // List all ReplicatedVolumeReplica resources for this RV. + replicaList := &v1alpha1.ReplicatedVolumeReplicaList{} + if err := r.cl.List(ctx, replicaList, client.MatchingFields{ + indexes.IndexFieldRVRByReplicatedVolumeName: rvName, + }); err != nil { + log.Error(err, "unable to list ReplicatedVolumeReplica") + return err + } + + // Update Scheduled condition on all RVRs belonging to this RV. + for _, rvr := range replicaList.Items { + // TODO: fix checking for deletion + if !rvr.DeletionTimestamp.IsZero() { + continue + } + + // Skip if the replica is already scheduled (has NodeName assigned). 
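+ // Such replicas keep whatever Scheduled condition they already have; only pending ones are marked False below.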
+ if rvr.Spec.NodeName != "" { + continue + } + + if err := r.setScheduledConditionOnRVR( + ctx, + &rvr, + metav1.ConditionFalse, + notReadyReason.reason, + notReadyReason.message, + ); err != nil { + log.Error(err, "failed to set Scheduled condition", "rvr", rvr.Name, "reason", notReadyReason.reason, "message", notReadyReason.message) + return err + } + } + + return nil +} + +func getAttachToNodesWithoutAnyReplica( + attachToList []string, + nodesWithRVReplica map[string]struct{}, +) []string { + attachToNodesWithoutAnyReplica := make([]string, 0, len(attachToList)) + + for _, node := range attachToList { + if _, hasReplica := nodesWithRVReplica[node]; !hasReplica { + attachToNodesWithoutAnyReplica = append(attachToNodesWithoutAnyReplica, node) + } + } + return attachToNodesWithoutAnyReplica +} + +// applyTopologyFilter groups candidate nodes by zones based on RSC topology. +// isDiskfulPhase affects only Zonal topology: +// - true: falls back to attachTo or any allowed zone if no ScheduledDiskfulReplicas +// - false: returns error if no ScheduledDiskfulReplicas (TieBreaker needs Diskful zone) +// +// For Ignored and TransZonal, logic is the same for both phases. +func (r *Reconciler) applyTopologyFilter( + candidateNodes []string, + isDiskfulPhase bool, + sctx *SchedulingContext, +) error { + sctx.Log.V(1).Info("applying topology filter", "topology", sctx.Rsc.Spec.Topology, "candidatesCount", len(candidateNodes), "isDiskfulPhase", isDiskfulPhase) + + switch sctx.Rsc.Spec.Topology { + case topologyIgnored: + // Same for both phases: all candidates in single "zone" + sctx.Log.V(1).Info("topology filter: Ignored - creating single zone with all candidates") + nodeCandidates := make([]NodeCandidate, 0, len(candidateNodes)) + for _, nodeName := range candidateNodes { + nodeCandidates = append(nodeCandidates, NodeCandidate{ + Name: nodeName, + Score: 0, + }) + } + sctx.ZonesToNodeCandidatesMap = map[string][]NodeCandidate{ + topologyIgnored: nodeCandidates, + } + + case topologyZonal: + sctx.Log.V(1).Info("topology filter: Zonal - grouping candidates by zone") + if err := r.applyZonalTopologyFilter(candidateNodes, isDiskfulPhase, sctx); err != nil { + return err + } + + case topologyTransZonal: + // Same for both phases: group by allowed zones + sctx.Log.V(1).Info("topology filter: TransZonal - distributing across zones") + allowedZones := getAllowedZones(nil, sctx.Rsc.Spec.Zones, sctx.NodeNameToZone) + sctx.ZonesToNodeCandidatesMap = r.groupCandidateNodesByZone(candidateNodes, allowedZones, sctx) + + default: + return fmt.Errorf("unknown RSC topology: %s", sctx.Rsc.Spec.Topology) + } + + // Check for empty candidates after topology filtering + if len(sctx.ZonesToNodeCandidatesMap) == 0 { + return fmt.Errorf("%w: no candidate nodes found after topology filtering", errSchedulingNoCandidateNodes) + } + sctx.Log.V(1).Info("topology filter applied", "zonesCount", len(sctx.ZonesToNodeCandidatesMap)) + return nil +} + +// applyZonalTopologyFilter handles Zonal topology logic. 
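+// All already scheduled Diskful replicas must share a single zone; otherwise a topology conflict error is returned.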
+// For isDiskfulPhase=true: ScheduledDiskfulReplicas -> attachTo -> any allowed zone +// For isDiskfulPhase=false: ScheduledDiskfulReplicas -> ERROR (TieBreaker needs Diskful zone) +func (r *Reconciler) applyZonalTopologyFilter( + candidateNodes []string, + isDiskfulPhase bool, + sctx *SchedulingContext, +) error { + sctx.Log.V(1).Info("applyZonalTopologyFilter: starting", "candidatesCount", len(candidateNodes), "isDiskfulPhase", isDiskfulPhase) + + // Find zones of already scheduled diskful replicas + var zonesWithScheduledDiskfulReplicas []string + for _, rvr := range sctx.ScheduledDiskfulReplicas { + zone, ok := sctx.NodeNameToZone[rvr.Spec.NodeName] + if !ok || zone == "" { + return fmt.Errorf("%w: scheduled diskful replica %s is on node %s without zone label for Zonal topology", + errSchedulingTopologyConflict, rvr.Name, rvr.Spec.NodeName) + } + if !slices.Contains(zonesWithScheduledDiskfulReplicas, zone) { + zonesWithScheduledDiskfulReplicas = append(zonesWithScheduledDiskfulReplicas, zone) + } + } + sctx.Log.V(2).Info("applyZonalTopologyFilter: zones with scheduled diskful replicas", "zones", zonesWithScheduledDiskfulReplicas) + + // For Zonal topology, all scheduled diskful replicas must be in the same zone + if len(zonesWithScheduledDiskfulReplicas) > 1 { + return fmt.Errorf("%w: scheduled diskful replicas are in multiple zones %v for Zonal topology", + errSchedulingTopologyConflict, zonesWithScheduledDiskfulReplicas) + } + + // Determine target zones based on phase + var targetZones []string + + switch { + case len(zonesWithScheduledDiskfulReplicas) > 0: + // Use zone of scheduled Diskful replicas + targetZones = zonesWithScheduledDiskfulReplicas + case !isDiskfulPhase: + // TieBreaker phase: no ScheduledDiskfulReplicas is an error + return fmt.Errorf("%w: cannot schedule TieBreaker for Zonal topology: no Diskful replicas scheduled", + errSchedulingNoCandidateNodes) + default: + // Diskful phase: fallback to attachTo zones + for _, nodeName := range sctx.AttachToNodes { + zone, ok := sctx.NodeNameToZone[nodeName] + if !ok || zone == "" { + return fmt.Errorf("%w: attachTo node %s has no zone label for Zonal topology", + errSchedulingTopologyConflict, nodeName) + } + if !slices.Contains(targetZones, zone) { + targetZones = append(targetZones, zone) + } + } + sctx.Log.V(2).Info("applyZonalTopologyFilter: attachTo zones", "zones", targetZones) + // If still empty, getAllowedZones will use rsc.spec.zones or all cluster zones + } + + sctx.Log.V(2).Info("applyZonalTopologyFilter: target zones", "zones", targetZones) + + // Build candidate nodes map + allowedZones := getAllowedZones(targetZones, sctx.Rsc.Spec.Zones, sctx.NodeNameToZone) + sctx.Log.V(2).Info("applyZonalTopologyFilter: allowed zones", "zones", allowedZones) + + // Group candidate nodes by zone + sctx.ZonesToNodeCandidatesMap = r.groupCandidateNodesByZone(candidateNodes, allowedZones, sctx) + sctx.Log.V(1).Info("applyZonalTopologyFilter: completed", "zonesCount", len(sctx.ZonesToNodeCandidatesMap)) + return nil +} + +// applyCapacityFilterAndScoreCandidates filters nodes by available storage capacity using the scheduler extender. +// It converts nodes to LVGs, queries the extender for capacity scores, and updates ZonesToNodeCandidatesMap. 
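+// Candidates whose LVG is missing from the extender response are dropped, and zones left without candidates are removed from the map.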
+func (r *Reconciler) applyCapacityFilterAndScoreCandidates( + ctx context.Context, + sctx *SchedulingContext, +) error { + // Collect all candidate nodes from ZonesToNodeCandidatesMap + candidateNodeSet := make(map[string]struct{}) + for _, candidates := range sctx.ZonesToNodeCandidatesMap { + for _, candidate := range candidates { + candidateNodeSet[candidate.Name] = struct{}{} + } + } + + // Build LVG list from RspLvgToNodeInfoMap, but only for nodes in candidateNodeSet + reqLVGs := make([]schedulerExtenderLVG, 0, len(sctx.RspLvgToNodeInfoMap)) + for lvgName, info := range sctx.RspLvgToNodeInfoMap { + // Skip LVGs whose nodes are not in the candidate list + if _, ok := candidateNodeSet[info.NodeName]; !ok { + continue + } + reqLVGs = append(reqLVGs, schedulerExtenderLVG{ + Name: lvgName, + ThinPoolName: info.ThinPoolName, + }) + } + + if len(reqLVGs) == 0 { + // No LVGs to check — no candidate nodes have LVGs from the storage pool + sctx.Log.V(1).Info("no candidate nodes have LVGs from storage pool", "storagePool", sctx.Rsc.Spec.StoragePool) + return fmt.Errorf("%w: no candidate nodes have LVGs from storage pool %s", errSchedulingNoCandidateNodes, sctx.Rsc.Spec.StoragePool) + } + + // Convert RSP volume type to scheduler extender volume type + var volType string + switch sctx.Rsp.Spec.Type { + case "LVMThin": + volType = "thin" + case "LVM": + volType = "thick" + default: + return fmt.Errorf("RSP volume type is not supported: %s", sctx.Rsp.Spec.Type) + } + size := sctx.Rv.Spec.Size.Value() + + // Query scheduler extender for LVG scores + volumeInfo := VolumeInfo{ + Name: sctx.Rv.Name, + Size: size, + Type: volType, + } + lvgScores, err := r.extenderClient.queryLVGScores(ctx, reqLVGs, volumeInfo) + if err != nil { + sctx.Log.Error(err, "scheduler extender query failed") + return fmt.Errorf("%w: %v", errSchedulingNoCandidateNodes, err) + } + + // Build map of node -> score based on LVG scores + // Node gets the score of its LVG (if LVG is in the response) + nodeScores := make(map[string]int) + for lvgName, info := range sctx.RspLvgToNodeInfoMap { + if score, ok := lvgScores[lvgName]; ok { + nodeScores[info.NodeName] = score + } + } + + // Filter ZonesToNodeCandidatesMap: keep only nodes that have score (i.e., their LVG was returned) + // and update their scores + for zone, candidates := range sctx.ZonesToNodeCandidatesMap { + filteredCandidates := make([]NodeCandidate, 0, len(candidates)) + for _, candidate := range candidates { + if score, ok := nodeScores[candidate.Name]; ok { + filteredCandidates = append(filteredCandidates, NodeCandidate{ + Name: candidate.Name, + Score: score, + }) + } + // Node not in response — skip (no capacity) + } + if len(filteredCandidates) > 0 { + sctx.ZonesToNodeCandidatesMap[zone] = filteredCandidates + } else { + delete(sctx.ZonesToNodeCandidatesMap, zone) + } + } + + if len(sctx.ZonesToNodeCandidatesMap) == 0 { + sctx.Log.V(1).Info("no nodes with sufficient storage space found after capacity filtering") + return fmt.Errorf("%w: no nodes with sufficient storage space found", errSchedulingNoCandidateNodes) + } + + return nil +} + +// countReplicasByZone counts how many replicas are scheduled in each zone. +// If replicaType is not empty, only replicas of that type are counted. +// If replicaType is empty, all replica types are counted. 
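+// Replicas without an assigned node, or whose node has no zone in nodeNameToZone, are ignored.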
+func countReplicasByZone( + replicas []*v1alpha1.ReplicatedVolumeReplica, + replicaType v1alpha1.ReplicaType, + nodeNameToZone map[string]string, +) map[string]int { + zoneReplicaCount := make(map[string]int) + for _, rvr := range replicas { + if replicaType != "" && rvr.Spec.Type != replicaType { + continue + } + if rvr.Spec.NodeName == "" { + continue + } + zone, ok := nodeNameToZone[rvr.Spec.NodeName] + if !ok || zone == "" { + continue + } + zoneReplicaCount[zone]++ + } + return zoneReplicaCount +} + +// groupCandidateNodesByZone groups candidate nodes by their zones, filtering by allowed zones +func (r *Reconciler) groupCandidateNodesByZone( + candidateNodes []string, + allowedZones map[string]struct{}, + sctx *SchedulingContext, +) map[string][]NodeCandidate { + zonesToCandidates := make(map[string][]NodeCandidate) + + for _, nodeName := range candidateNodes { + zone, ok := sctx.NodeNameToZone[nodeName] + if !ok || zone == "" { + continue // Skip nodes without zone label + } + + if _, ok := allowedZones[zone]; !ok { + continue // Skip nodes not in allowed zones + } + + zonesToCandidates[zone] = append(zonesToCandidates[zone], NodeCandidate{ + Name: nodeName, + Score: 0, + }) + } + + return zonesToCandidates +} + +// getAllowedZones determines which zones should be used for replica placement. +// Priority order: +// 1. If targetZones is provided and not empty, use those zones +// 2. If RSC spec defines zones, use those +// 3. Otherwise, use all zones from the cluster (from NodeNameToZone map) +func getAllowedZones(targetZones []string, rscZones []string, nodeNameToZone map[string]string) map[string]struct{} { + allowedZones := make(map[string]struct{}) + + switch { + case len(targetZones) > 0: + for _, zone := range targetZones { + allowedZones[zone] = struct{}{} + } + case len(rscZones) > 0: + for _, zone := range rscZones { + allowedZones[zone] = struct{}{} + } + default: + for _, zone := range nodeNameToZone { + if zone != "" { + allowedZones[zone] = struct{}{} + } + } + } + + return allowedZones +} + +func (r *Reconciler) getLVGToNodesByStoragePool( + ctx context.Context, + rsp *v1alpha1.ReplicatedStoragePool, + log logr.Logger, +) (map[string]LvgInfo, error) { + if rsp == nil || len(rsp.Spec.LVMVolumeGroups) == 0 { + return nil, fmt.Errorf("storage pool does not define any LVGs") + } + + lvgList := &snc.LVMVolumeGroupList{} + if err := r.cl.List(ctx, lvgList); err != nil { + log.Error(err, "unable to list LVMVolumeGroup") + return nil, err + } + + // Build lookup map: LVG name -> LVG object + lvgByName := make(map[string]*snc.LVMVolumeGroup, len(lvgList.Items)) + for i := range lvgList.Items { + lvgByName[lvgList.Items[i].Name] = &lvgList.Items[i] + } + + // Build result map from RSP's LVGs + result := make(map[string]LvgInfo, len(rsp.Spec.LVMVolumeGroups)) + for _, rspLvg := range rsp.Spec.LVMVolumeGroups { + lvg, ok := lvgByName[rspLvg.Name] + if !ok || len(lvg.Status.Nodes) == 0 { + continue + } + result[rspLvg.Name] = LvgInfo{ + NodeName: lvg.Status.Nodes[0].Name, + ThinPoolName: rspLvg.ThinPoolName, + } + } + + return result, nil +} + +func (r *Reconciler) getNodeNameToZoneMap( + ctx context.Context, + log logr.Logger, +) (map[string]string, error) { + // List all Kubernetes Nodes to inspect their zone labels. + nodes := &corev1.NodeList{} + if err := r.cl.List(ctx, nodes); err != nil { + log.Error(err, "unable to list Nodes") + return nil, err + } + + // Build a map from node name to its zone (may be empty if label is missing). 
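+ // Nodes with an empty zone are excluded from zone-aware placement later in groupCandidateNodesByZone.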
+ nodeNameToZone := make(map[string]string, len(nodes.Items)) + + for _, node := range nodes.Items { + zone := node.Labels[nodeZoneLabel] + nodeNameToZone[node.Name] = zone + } + + return nodeNameToZone, nil +} diff --git a/images/controller/internal/controllers/rvr_scheduling_controller/reconciler_test.go b/images/controller/internal/controllers/rvr_scheduling_controller/reconciler_test.go new file mode 100644 index 000000000..979e86ef2 --- /dev/null +++ b/images/controller/internal/controllers/rvr_scheduling_controller/reconciler_test.go @@ -0,0 +1,1751 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrschedulingcontroller_test + +import ( + "context" + "encoding/json" + "fmt" + "net/http" + "net/http/httptest" + "os" + "slices" + + "github.com/go-logr/logr" + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" + utilruntime "k8s.io/apimachinery/pkg/util/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + rvrschedulingcontroller "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_scheduling_controller" + testhelpers "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" +) + +// ClusterSetup defines a cluster configuration for tests +type ClusterSetup struct { + Name string + Zones []string // zones in cluster + RSCZones []string // zones in RSC (can be less than cluster zones) + NodesPerZone int // nodes per zone + NodeScores map[string]int // node -> score from scheduler extender +} + +// ExistingReplica represents an already scheduled replica +type ExistingReplica struct { + Type v1alpha1.ReplicaType // Diskful, Access, TieBreaker + NodeName string +} + +// ReplicasToSchedule defines how many replicas of each type need to be scheduled +type ReplicasToSchedule struct { + Diskful int + TieBreaker int +} + +// ExpectedResult defines the expected outcome of a test +type ExpectedResult struct { + Error string // expected error substring (empty if success) + DiskfulZones []string // zones where Diskful replicas should be (nil = any) + TieBreakerZones []string // zones where TieBreaker replicas should be (nil = any) + DiskfulNodes []string // specific nodes for Diskful (nil = check zones only) + TieBreakerNodes []string // specific nodes for TieBreaker (nil = check zones only) + // Partial scheduling support for Diskful + ScheduledDiskfulCount *int // expected number of scheduled Diskful (nil = all must be scheduled) + UnscheduledDiskfulCount *int // expected number of unscheduled Diskful (nil = 0) + UnscheduledReason string // 
expected condition reason for unscheduled Diskful replicas + // Partial scheduling support for TieBreaker + ScheduledTieBreakerCount *int // expected number of scheduled TieBreaker (nil = all must be scheduled) + UnscheduledTieBreakerCount *int // expected number of unscheduled TieBreaker (nil = 0) + UnscheduledTieBreakerReason string // expected condition reason for unscheduled TieBreaker replicas +} + +// IntegrationTestCase defines a full integration test case +type IntegrationTestCase struct { + Name string + Cluster string // reference to ClusterSetup.Name + Topology v1alpha1.ReplicatedStorageClassTopology // Zonal, TransZonal, Ignored + AttachTo []string + Existing []ExistingReplica + ToSchedule ReplicasToSchedule + Expected ExpectedResult +} + +// intPtr returns a pointer to an int value +func intPtr(i int) *int { + return &i +} + +// generateNodes creates nodes for a cluster setup +func generateNodes(setup ClusterSetup) ([]*corev1.Node, map[string]int) { + var nodes []*corev1.Node + scores := make(map[string]int) + + for _, zone := range setup.Zones { + for i := 1; i <= setup.NodesPerZone; i++ { + nodeName := fmt.Sprintf("node-%s%d", zone[len(zone)-1:], i) // e.g., node-a1, node-a2 + node := &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: nodeName, + Labels: map[string]string{"topology.kubernetes.io/zone": zone}, + }, + } + nodes = append(nodes, node) + + // Use predefined score or generate based on position + if score, ok := setup.NodeScores[nodeName]; ok { + scores[nodeName] = score + } else { + // Default: first node in first zone gets highest score + scores[nodeName] = 100 - (len(nodes)-1)*10 + } + } + } + return nodes, scores +} + +// generateLVGs creates LVMVolumeGroups for nodes +func generateLVGs(nodes []*corev1.Node) ([]*snc.LVMVolumeGroup, *v1alpha1.ReplicatedStoragePool) { + var lvgs []*snc.LVMVolumeGroup + var lvgRefs []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups + + for _, node := range nodes { + lvgName := fmt.Sprintf("vg-%s", node.Name) + lvg := &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: lvgName}, + Status: snc.LVMVolumeGroupStatus{Nodes: []snc.LVMVolumeGroupNode{{Name: node.Name}}}, + } + lvgs = append(lvgs, lvg) + lvgRefs = append(lvgRefs, v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{Name: lvgName}) + } + + rsp := &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "pool-1"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: "LVM", + LVMVolumeGroups: lvgRefs, + }, + } + + return lvgs, rsp +} + +// createMockServer creates a mock scheduler extender server. +// Only LVGs found in lvgToNode are returned with their scores. +// LVGs not found in lvgToNode are NOT returned (simulates "no space"). 
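+// A configured node score of 0 is treated as unset and replaced with a default score of 50.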
+func createMockServer(scores map[string]int, lvgToNode map[string]string) *httptest.Server { + return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + var req struct { + LVGS []struct{ Name string } `json:"lvgs"` + } + _ = json.NewDecoder(r.Body).Decode(&req) + resp := map[string]any{"lvgs": []map[string]any{}} + for _, lvg := range req.LVGS { + nodeName, ok := lvgToNode[lvg.Name] + if !ok { + // LVG not configured - don't include in response (simulates no space) + continue + } + score := scores[nodeName] + if score == 0 { + score = 50 // default score if not explicitly configured + } + resp["lvgs"] = append(resp["lvgs"].([]map[string]any), map[string]any{"name": lvg.Name, "score": score}) + } + _ = json.NewEncoder(w).Encode(resp) + })) +} + +// Cluster configurations +var clusterConfigs = map[string]ClusterSetup{ + "small-1z": { + Name: "small-1z", + Zones: []string{"zone-a"}, + RSCZones: []string{"zone-a"}, + NodesPerZone: 2, + NodeScores: map[string]int{"node-a1": 100, "node-a2": 80}, + }, + "small-1z-4n": { + Name: "small-1z-4n", + Zones: []string{"zone-a"}, + RSCZones: []string{"zone-a"}, + NodesPerZone: 4, + NodeScores: map[string]int{"node-a1": 100, "node-a2": 90, "node-a3": 80, "node-a4": 70}, + }, + "medium-2z": { + Name: "medium-2z", + Zones: []string{"zone-a", "zone-b"}, + RSCZones: []string{"zone-a", "zone-b"}, + NodesPerZone: 2, + NodeScores: map[string]int{"node-a1": 100, "node-a2": 80, "node-b1": 90, "node-b2": 70}, + }, + "medium-2z-4n": { + Name: "medium-2z-4n", + Zones: []string{"zone-a", "zone-b"}, + RSCZones: []string{"zone-a", "zone-b"}, + NodesPerZone: 4, + NodeScores: map[string]int{ + "node-a1": 100, "node-a2": 90, "node-a3": 80, "node-a4": 70, + "node-b1": 95, "node-b2": 85, "node-b3": 75, "node-b4": 65, + }, + }, + "large-3z": { + Name: "large-3z", + Zones: []string{"zone-a", "zone-b", "zone-c"}, + RSCZones: []string{"zone-a", "zone-b", "zone-c"}, + NodesPerZone: 2, + NodeScores: map[string]int{ + "node-a1": 100, "node-a2": 80, + "node-b1": 90, "node-b2": 70, + "node-c1": 85, "node-c2": 65, + }, + }, + "large-3z-3n": { + Name: "large-3z-3n", + Zones: []string{"zone-a", "zone-b", "zone-c"}, + RSCZones: []string{"zone-a", "zone-b", "zone-c"}, + NodesPerZone: 3, + NodeScores: map[string]int{ + "node-a1": 100, "node-a2": 90, "node-a3": 80, + "node-b1": 95, "node-b2": 85, "node-b3": 75, + "node-c1": 92, "node-c2": 82, "node-c3": 72, + }, + }, + "xlarge-4z": { + Name: "xlarge-4z", + Zones: []string{"zone-a", "zone-b", "zone-c", "zone-d"}, + RSCZones: []string{"zone-a", "zone-b", "zone-c"}, // zone-d NOT in RSC! 
+ NodesPerZone: 2, + NodeScores: map[string]int{ + "node-a1": 100, "node-a2": 80, + "node-b1": 90, "node-b2": 70, + "node-c1": 85, "node-c2": 65, + "node-d1": 95, "node-d2": 75, + }, + }, +} + +var _ = Describe("RVR Scheduling Integration Tests", Ordered, func() { + var ( + scheme *runtime.Scheme + ) + + BeforeEach(func() { + scheme = runtime.NewScheme() + utilruntime.Must(corev1.AddToScheme(scheme)) + utilruntime.Must(snc.AddToScheme(scheme)) + utilruntime.Must(v1alpha1.AddToScheme(scheme)) + }) + + // Helper to run a test case + runTestCase := func(ctx context.Context, tc IntegrationTestCase) { + cluster := clusterConfigs[tc.Cluster] + Expect(cluster.Name).ToNot(BeEmpty(), "Unknown cluster: %s", tc.Cluster) + + // Generate cluster resources + nodes, scores := generateNodes(cluster) + lvgs, rsp := generateLVGs(nodes) + + // Build lvg -> node mapping for mock server + lvgToNode := make(map[string]string) + for _, lvg := range lvgs { + if len(lvg.Status.Nodes) > 0 { + lvgToNode[lvg.Name] = lvg.Status.Nodes[0].Name + } + } + + // Create mock server + mockServer := createMockServer(scores, lvgToNode) + defer mockServer.Close() + os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + defer os.Unsetenv("SCHEDULER_EXTENDER_URL") + + // Create RSC + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-test"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-1", + VolumeAccess: "Any", + Topology: tc.Topology, + Zones: cluster.RSCZones, + }, + } + + // Create RV + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-test", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-test", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + DesiredAttachTo: tc.AttachTo, + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + + // Create RVRs + var rvrList []*v1alpha1.ReplicatedVolumeReplica + rvrIndex := 1 + + // Existing replicas (already scheduled) + for _, existing := range tc.Existing { + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("rvr-existing-%d", rvrIndex)}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: existing.Type, + NodeName: existing.NodeName, + }, + } + rvrList = append(rvrList, rvr) + rvrIndex++ + } + + // Diskful replicas to schedule + for i := 0; i < tc.ToSchedule.Diskful; i++ { + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("rvr-diskful-%d", i+1)}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + rvrList = append(rvrList, rvr) + } + + // TieBreaker replicas to schedule + for i := 0; i < tc.ToSchedule.TieBreaker; i++ { + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("rvr-tiebreaker-%d", i+1)}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeTieBreaker, + }, + } + rvrList = append(rvrList, rvr) + } + + // Build objects list + objects := []runtime.Object{rv, rsc, rsp} + for _, node := range nodes { + objects = append(objects, node) + } + for _, lvg := range lvgs { + objects = append(objects, lvg) + } + for _, rvr := range rvrList { + objects = append(objects, rvr) + } + + // Create client and reconciler + cl := 
testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithRuntimeObjects(objects...). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + Build() + rec, err := rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + + // Reconcile + _, err = rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + + // Check result + if tc.Expected.Error != "" { + Expect(err).To(HaveOccurred(), "Expected error but got none") + Expect(err.Error()).To(ContainSubstring(tc.Expected.Error), "Error message mismatch") + return + } + + Expect(err).ToNot(HaveOccurred(), "Unexpected error: %v", err) + + // Verify Diskful replicas + var scheduledDiskful []string + var unscheduledDiskful []string + var diskfulZones []string + for i := 0; i < tc.ToSchedule.Diskful; i++ { + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: fmt.Sprintf("rvr-diskful-%d", i+1)}, updated)).To(Succeed()) + + if updated.Spec.NodeName != "" { + scheduledDiskful = append(scheduledDiskful, updated.Spec.NodeName) + // Find zone for this node + for _, node := range nodes { + if node.Name == updated.Spec.NodeName { + zone := node.Labels["topology.kubernetes.io/zone"] + if !slices.Contains(diskfulZones, zone) { + diskfulZones = append(diskfulZones, zone) + } + break + } + } + } else { + unscheduledDiskful = append(unscheduledDiskful, updated.Name) + // Check condition on unscheduled replica + if tc.Expected.UnscheduledReason != "" { + cond := meta.FindStatusCondition(updated.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(cond).ToNot(BeNil(), "Unscheduled replica %s should have Scheduled condition", updated.Name) + Expect(cond.Status).To(Equal(metav1.ConditionFalse), "Unscheduled replica %s should have Scheduled=False", updated.Name) + Expect(cond.Reason).To(Equal(tc.Expected.UnscheduledReason), "Unscheduled replica %s has wrong reason", updated.Name) + } + } + } + + // Check scheduled/unscheduled counts if specified + if tc.Expected.ScheduledDiskfulCount != nil { + Expect(len(scheduledDiskful)).To(Equal(*tc.Expected.ScheduledDiskfulCount), "Scheduled Diskful count mismatch") + } else if tc.Expected.UnscheduledDiskfulCount == nil { + // Default: all must be scheduled + Expect(len(unscheduledDiskful)).To(Equal(0), "All Diskful replicas should be scheduled, but %d were not: %v", len(unscheduledDiskful), unscheduledDiskful) + } + if tc.Expected.UnscheduledDiskfulCount != nil { + Expect(len(unscheduledDiskful)).To(Equal(*tc.Expected.UnscheduledDiskfulCount), "Unscheduled Diskful count mismatch") + } + + // Check Diskful zones + if tc.Expected.DiskfulZones != nil { + Expect(diskfulZones).To(ConsistOf(tc.Expected.DiskfulZones), "Diskful zones mismatch") + } + + // Check Diskful nodes + if tc.Expected.DiskfulNodes != nil { + Expect(scheduledDiskful).To(ConsistOf(tc.Expected.DiskfulNodes), "Diskful nodes mismatch") + } + + // Verify TieBreaker replicas + var scheduledTieBreaker []string + var unscheduledTieBreaker []string + var tieBreakerZones []string + for i := 0; i < tc.ToSchedule.TieBreaker; i++ { + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: fmt.Sprintf("rvr-tiebreaker-%d", i+1)}, updated)).To(Succeed()) + if updated.Spec.NodeName != "" { + scheduledTieBreaker = append(scheduledTieBreaker, updated.Spec.NodeName) + + // Find zone for this node + for _, node := range nodes { + if node.Name == 
updated.Spec.NodeName { + zone := node.Labels["topology.kubernetes.io/zone"] + if !slices.Contains(tieBreakerZones, zone) { + tieBreakerZones = append(tieBreakerZones, zone) + } + break + } + } + } else { + unscheduledTieBreaker = append(unscheduledTieBreaker, updated.Name) + // Check condition on unscheduled TieBreaker replica + if tc.Expected.UnscheduledTieBreakerReason != "" { + cond := meta.FindStatusCondition(updated.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(cond).ToNot(BeNil(), "Unscheduled TieBreaker replica %s should have Scheduled condition", updated.Name) + Expect(cond.Status).To(Equal(metav1.ConditionFalse), "Unscheduled TieBreaker replica %s should have Scheduled=False", updated.Name) + Expect(cond.Reason).To(Equal(tc.Expected.UnscheduledTieBreakerReason), "Unscheduled TieBreaker replica %s has wrong reason", updated.Name) + } + } + } + + // Check scheduled/unscheduled TieBreaker counts if specified + if tc.Expected.ScheduledTieBreakerCount != nil { + Expect(len(scheduledTieBreaker)).To(Equal(*tc.Expected.ScheduledTieBreakerCount), "Scheduled TieBreaker count mismatch") + } else if tc.Expected.UnscheduledTieBreakerCount == nil { + // Default: all must be scheduled + Expect(len(unscheduledTieBreaker)).To(Equal(0), "All TieBreaker replicas should be scheduled, but %d were not: %v", len(unscheduledTieBreaker), unscheduledTieBreaker) + } + if tc.Expected.UnscheduledTieBreakerCount != nil { + Expect(len(unscheduledTieBreaker)).To(Equal(*tc.Expected.UnscheduledTieBreakerCount), "Unscheduled TieBreaker count mismatch") + } + + // Check TieBreaker zones + if tc.Expected.TieBreakerZones != nil { + Expect(tieBreakerZones).To(ConsistOf(tc.Expected.TieBreakerZones), "TieBreaker zones mismatch") + } + + // Check TieBreaker nodes + if tc.Expected.TieBreakerNodes != nil { + Expect(scheduledTieBreaker).To(ConsistOf(tc.Expected.TieBreakerNodes), "TieBreaker nodes mismatch") + } + + // Verify no node has multiple replicas + allScheduled := append(scheduledDiskful, scheduledTieBreaker...) + // Add existing replica nodes + for _, existing := range tc.Existing { + allScheduled = append(allScheduled, existing.NodeName) + } + nodeCount := make(map[string]int) + for _, node := range allScheduled { + nodeCount[node]++ + Expect(nodeCount[node]).To(Equal(1), "Node %s has multiple replicas", node) + } + } + + // ==================== ZONAL TOPOLOGY ==================== + Context("Zonal Topology", func() { + zonalTestCases := []IntegrationTestCase{ + { + Name: "1. small-1z: D:2, TB:1 - all in zone-a", + Cluster: "small-1z", + Topology: "Zonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 2, TieBreaker: 0}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-a"}}, + }, + { + Name: "2. small-1z: attachTo node-a1 - D on node-a1", + Cluster: "small-1z", + Topology: "Zonal", + AttachTo: []string{"node-a1"}, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 1}, + Expected: ExpectedResult{DiskfulNodes: []string{"node-a1"}, TieBreakerNodes: []string{"node-a2"}}, + }, + { + Name: "3. medium-2z: attachTo same zone - all in zone-a", + Cluster: "medium-2z", + Topology: "Zonal", + AttachTo: []string{"node-a1", "node-a2"}, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 2, TieBreaker: 0}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-a"}}, + }, + { + Name: "4. 
medium-2z: attachTo different zones - pick one zone", + Cluster: "medium-2z", + Topology: "Zonal", + AttachTo: []string{"node-a1", "node-b1"}, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + Expected: ExpectedResult{}, // any zone is ok + }, + { + Name: "5. medium-2z-4n: existing D in zone-a - new D and TB in zone-a", + Cluster: "medium-2z-4n", + Topology: "Zonal", + AttachTo: nil, + Existing: []ExistingReplica{{Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}}, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 1}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-a"}, TieBreakerZones: []string{"zone-a"}}, + }, + { + Name: "6. medium-2z: existing D in different zones - topology conflict", + Cluster: "medium-2z", + Topology: "Zonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b1"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + // With best-effort scheduling, topology conflict doesn't return error, + // but sets Scheduled=False on unscheduled replicas + Expected: ExpectedResult{ + ScheduledDiskfulCount: intPtr(0), + UnscheduledDiskfulCount: intPtr(1), + UnscheduledReason: v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonTopologyConstraintsFailed, + }, + }, + { + Name: "7. large-3z: no attachTo - pick best zone by score", + Cluster: "large-3z", + Topology: "Zonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 2, TieBreaker: 0}, + Expected: ExpectedResult{}, // any zone, best score wins + }, + { + Name: "8. xlarge-4z: attachTo zone-d (not in RSC) - D in zone-d (targetZones priority)", + Cluster: "xlarge-4z", + Topology: "Zonal", + AttachTo: []string{"node-d1"}, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 1}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-d"}, TieBreakerZones: []string{"zone-d"}}, + }, + { + Name: "9. small-1z: all nodes occupied - no candidate nodes", + Cluster: "small-1z", + Topology: "Zonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a2"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{ + ScheduledTieBreakerCount: intPtr(0), + UnscheduledTieBreakerCount: intPtr(1), + UnscheduledTieBreakerReason: v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes, + }, + }, + { + Name: "10. medium-2z: TB only without Diskful - no candidate nodes", + Cluster: "medium-2z", + Topology: "Zonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{ + ScheduledTieBreakerCount: intPtr(0), + UnscheduledTieBreakerCount: intPtr(1), + UnscheduledTieBreakerReason: v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes, + }, + }, + { + Name: "11. medium-2z-4n: existing D+TB in zone-a - new D in zone-a", + Cluster: "medium-2z-4n", + Topology: "Zonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeTieBreaker, NodeName: "node-a2"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-a"}}, + }, + { + Name: "12. 
medium-2z-4n: existing D+Access in zone-a - new TB in zone-a", + Cluster: "medium-2z-4n", + Topology: "Zonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeAccess, NodeName: "node-a2"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{TieBreakerZones: []string{"zone-a"}}, + }, + } + + for _, tc := range zonalTestCases { + It(tc.Name, func(ctx SpecContext) { + runTestCase(ctx, tc) + }) + } + }) + + // ==================== TRANSZONAL TOPOLOGY ==================== + Context("TransZonal Topology", func() { + transZonalTestCases := []IntegrationTestCase{ + { + Name: "1. large-3z: D:3 - one per zone", + Cluster: "large-3z", + Topology: "TransZonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 3, TieBreaker: 0}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-a", "zone-b", "zone-c"}}, + }, + { + Name: "2. large-3z: D:2, TB:1 - even distribution across 3 zones", + Cluster: "large-3z", + Topology: "TransZonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 2, TieBreaker: 1}, + // TransZonal distributes replicas evenly across zones + // D:2 go to 2 different zones, TB goes to 3rd zone + // Exact zone selection depends on map iteration order, so we just verify coverage + Expected: ExpectedResult{}, // all 3 zones should be covered (verified by runTestCase) + }, + { + Name: "3. large-3z: existing D in zone-a,b - new D in zone-c", + Cluster: "large-3z", + Topology: "TransZonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b1"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-c"}}, + }, + { + Name: "4. large-3z: existing D in zone-a,b - TB in zone-c", + Cluster: "large-3z", + Topology: "TransZonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b1"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{TieBreakerZones: []string{"zone-c"}}, + }, + { + Name: "5. medium-2z: existing D in zone-a - new D in zone-b", + Cluster: "medium-2z", + Topology: "TransZonal", + AttachTo: nil, + Existing: []ExistingReplica{{Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}}, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-b"}}, + }, + { + Name: "6. medium-2z: zones full, new D - cannot guarantee even", + Cluster: "medium-2z", + Topology: "TransZonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b1"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + Expected: ExpectedResult{}, // will place in any zone with free node + }, + { + Name: "7. xlarge-4z: D:3, TB:1 - D in RSC zones only", + Cluster: "xlarge-4z", + Topology: "TransZonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 3, TieBreaker: 1}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-a", "zone-b", "zone-c"}}, + }, + { + Name: "8. 
large-3z-3n: D:5, TB:1 - distribution 2-2-1", + Cluster: "large-3z-3n", + Topology: "TransZonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 5, TieBreaker: 1}, + Expected: ExpectedResult{}, // 2-2-1 distribution + 1 TB + }, + { + Name: "9. medium-2z: all nodes occupied - no candidate nodes", + Cluster: "medium-2z", + Topology: "TransZonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a2"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b2"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{ + ScheduledTieBreakerCount: intPtr(0), + UnscheduledTieBreakerCount: intPtr(1), + UnscheduledTieBreakerReason: v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes, + }, + }, + { + Name: "10. large-3z: TB only, no existing - TB in any zone", + Cluster: "large-3z", + Topology: "TransZonal", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{}, // any zone ok (all have 0 replicas) + }, + { + Name: "11. large-3z-3n: existing D+TB in zone-a,b - new D in zone-c", + Cluster: "large-3z-3n", + Topology: "TransZonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeTieBreaker, NodeName: "node-a2"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b1"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + Expected: ExpectedResult{DiskfulZones: []string{"zone-c"}}, + }, + { + Name: "12. large-3z-3n: existing D+Access across zones - new TB balances", + Cluster: "large-3z-3n", + Topology: "TransZonal", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeAccess, NodeName: "node-a2"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-b1"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{TieBreakerZones: []string{"zone-c"}}, // zone-c has 0 replicas + }, + } + + for _, tc := range transZonalTestCases { + It(tc.Name, func(ctx SpecContext) { + runTestCase(ctx, tc) + }) + } + }) + + // ==================== IGNORED TOPOLOGY ==================== + Context("Ignored Topology", func() { + ignoredTestCases := []IntegrationTestCase{ + { + Name: "1. large-3z: D:2, TB:1 - Diskful uses best scores", + Cluster: "large-3z", + Topology: "Ignored", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 2, TieBreaker: 1}, + // Scores: node-a1(100), node-b1(90) - D:2 get best 2 nodes + // TieBreaker doesn't use scheduler extender (no disk space needed) + Expected: ExpectedResult{ + DiskfulNodes: []string{"node-a1", "node-b1"}, + // TieBreaker goes to any remaining node (no score-based selection) + }, + }, + { + Name: "2. medium-2z: attachTo - prefer attachTo nodes", + Cluster: "medium-2z", + Topology: "Ignored", + AttachTo: []string{"node-a1", "node-b1"}, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 2, TieBreaker: 1}, + Expected: ExpectedResult{DiskfulNodes: []string{"node-a1", "node-b1"}}, + }, + { + Name: "3. 
small-1z-4n: D:2, TB:2 - 4 replicas on 4 nodes", + Cluster: "small-1z-4n", + Topology: "Ignored", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 2, TieBreaker: 2}, + Expected: ExpectedResult{}, // all 4 nodes used + }, + { + Name: "4. xlarge-4z: D:3, TB:1 - any 4 nodes by score", + Cluster: "xlarge-4z", + Topology: "Ignored", + AttachTo: nil, + Existing: nil, + ToSchedule: ReplicasToSchedule{Diskful: 3, TieBreaker: 1}, + Expected: ExpectedResult{}, // best 4 nodes + }, + { + Name: "5. small-1z: all nodes occupied - no candidate nodes", + Cluster: "small-1z", + Topology: "Ignored", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a2"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{ + ScheduledTieBreakerCount: intPtr(0), + UnscheduledTieBreakerCount: intPtr(1), + UnscheduledTieBreakerReason: v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes, + }, + }, + { + Name: "6. small-1z-4n: existing D+TB - new D on best remaining", + Cluster: "small-1z-4n", + Topology: "Ignored", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeTieBreaker, NodeName: "node-a2"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 0}, + Expected: ExpectedResult{}, // any of remaining nodes + }, + { + Name: "7. small-1z-4n: existing D+Access - new TB", + Cluster: "small-1z-4n", + Topology: "Ignored", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeAccess, NodeName: "node-a2"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 0, TieBreaker: 1}, + Expected: ExpectedResult{}, // any of remaining nodes + }, + { + Name: "8. 
medium-2z-4n: existing mixed types - new D+TB", + Cluster: "medium-2z-4n", + Topology: "Ignored", + AttachTo: nil, + Existing: []ExistingReplica{ + {Type: v1alpha1.ReplicaTypeDiskful, NodeName: "node-a1"}, + {Type: v1alpha1.ReplicaTypeAccess, NodeName: "node-a2"}, + {Type: v1alpha1.ReplicaTypeTieBreaker, NodeName: "node-b1"}, + }, + ToSchedule: ReplicasToSchedule{Diskful: 1, TieBreaker: 1}, + Expected: ExpectedResult{}, // best remaining nodes by score + }, + } + + for _, tc := range ignoredTestCases { + It(tc.Name, func(ctx SpecContext) { + runTestCase(ctx, tc) + }) + } + }) + + // ==================== EXTENDER FILTERING ==================== + Context("Extender Filtering", func() { + It("sets Scheduled=False when extender filters out all nodes (no space)", func(ctx SpecContext) { + cluster := clusterConfigs["medium-2z"] + + // Generate cluster resources + nodes, _ := generateNodes(cluster) + lvgs, rsp := generateLVGs(nodes) + + // Create mock server that returns EMPTY lvgs (simulates no space on any node) + mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + resp := map[string]any{"lvgs": []map[string]any{}} + _ = json.NewEncoder(w).Encode(resp) + })) + defer mockServer.Close() + os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + defer os.Unsetenv("SCHEDULER_EXTENDER_URL") + + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-test"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-1", + VolumeAccess: "Any", + Topology: "Ignored", + Zones: cluster.RSCZones, + }, + } + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-test", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-test", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful-1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + + objects := []runtime.Object{rv, rsc, rsp, rvr} + for _, node := range nodes { + objects = append(objects, node) + } + for _, lvg := range lvgs { + objects = append(objects, lvg) + } + + cl := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithRuntimeObjects(objects...). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). 
+ Build() + rec, err := rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + + // With best-effort scheduling, no error is returned + _, err = rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + // Check that replica has Scheduled=False condition + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-diskful-1"}, updated)).To(Succeed()) + Expect(updated.Spec.NodeName).To(BeEmpty(), "Replica should not be scheduled when no space") + cond := meta.FindStatusCondition(updated.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(cond).ToNot(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes)) + }) + + It("filters nodes where extender doesn't return LVG", func(ctx SpecContext) { + cluster := clusterConfigs["medium-2z"] + + nodes, scores := generateNodes(cluster) + lvgs, rsp := generateLVGs(nodes) + + // Only include zone-a LVGs in mapping - zone-b will be filtered out + lvgToNode := make(map[string]string) + for _, lvg := range lvgs { + if len(lvg.Status.Nodes) > 0 { + nodeName := lvg.Status.Nodes[0].Name + // Only include node-a* nodes + if nodeName == "node-a1" || nodeName == "node-a2" { + lvgToNode[lvg.Name] = nodeName + } + } + } + + mockServer := createMockServer(scores, lvgToNode) + defer mockServer.Close() + os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + defer os.Unsetenv("SCHEDULER_EXTENDER_URL") + + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-test"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-1", + VolumeAccess: "Any", + Topology: "Ignored", + Zones: cluster.RSCZones, + }, + } + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-test", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-test", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful-1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + + objects := []runtime.Object{rv, rsc, rsp, rvr} + for _, node := range nodes { + objects = append(objects, node) + } + for _, lvg := range lvgs { + objects = append(objects, lvg) + } + + cl := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithRuntimeObjects(objects...). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). 
+ Build() + rec, err := rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + + _, err = rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-diskful-1"}, updated)).To(Succeed()) + // Must be on zone-a node since zone-b was filtered out + Expect(updated.Spec.NodeName).To(Or(Equal("node-a1"), Equal("node-a2"))) + }) + }) +}) + +// ==================== ACCESS PHASE TESTS (kept separate) ==================== +var _ = Describe("Access Phase Tests", Ordered, func() { + var ( + scheme *runtime.Scheme + cl client.WithWatch + rec *rvrschedulingcontroller.Reconciler + mockServer *httptest.Server + ) + + BeforeEach(func() { + mockServer = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + var req struct { + LVGS []struct{ Name string } `json:"lvgs"` + } + _ = json.NewDecoder(r.Body).Decode(&req) + resp := map[string]any{"lvgs": []map[string]any{}} + for _, lvg := range req.LVGS { + resp["lvgs"] = append(resp["lvgs"].([]map[string]any), map[string]any{"name": lvg.Name, "score": 100}) + } + _ = json.NewEncoder(w).Encode(resp) + })) + os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + scheme = runtime.NewScheme() + utilruntime.Must(corev1.AddToScheme(scheme)) + utilruntime.Must(snc.AddToScheme(scheme)) + utilruntime.Must(v1alpha1.AddToScheme(scheme)) + }) + + AfterEach(func() { + os.Unsetenv("SCHEDULER_EXTENDER_URL") + mockServer.Close() + }) + + var ( + rv *v1alpha1.ReplicatedVolume + rsc *v1alpha1.ReplicatedStorageClass + rsp *v1alpha1.ReplicatedStoragePool + lvgA *snc.LVMVolumeGroup + lvgB *snc.LVMVolumeGroup + nodeA *corev1.Node + nodeB *corev1.Node + rvrList []*v1alpha1.ReplicatedVolumeReplica + withStatusSubresource bool + ) + + BeforeEach(func() { + rv = &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-access", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-access", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + DesiredAttachTo: []string{"node-a", "node-b"}, + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + + rsc = &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-access"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-access", + VolumeAccess: "Any", + Topology: "Ignored", + }, + } + + rsp = &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{Name: "pool-access"}, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + Type: "LVM", + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + {Name: "vg-a"}, {Name: "vg-b"}, + }, + }, + } + + lvgA = &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "vg-a"}, + Status: snc.LVMVolumeGroupStatus{Nodes: []snc.LVMVolumeGroupNode{{Name: "node-a"}}}, + } + lvgB = &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{Name: "vg-b"}, + Status: snc.LVMVolumeGroupStatus{Nodes: []snc.LVMVolumeGroupNode{{Name: "node-b"}}}, + } + + nodeA = &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-a", + Labels: map[string]string{"topology.kubernetes.io/zone": "zone-a"}, + }, + } + nodeB = &corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-b", + Labels: map[string]string{"topology.kubernetes.io/zone": 
"zone-a"}, + }, + } + + rvrList = nil + withStatusSubresource = true // Enable by default - reconciler always writes status + }) + + JustBeforeEach(func() { + objects := []runtime.Object{rv, rsc, rsp, lvgA, nodeA} + if lvgB != nil { + objects = append(objects, lvgB) + } + if nodeB != nil { + objects = append(objects, nodeB) + } + for _, rvr := range rvrList { + objects = append(objects, rvr) + } + builder := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme)).WithRuntimeObjects(objects...) + if withStatusSubresource { + builder = builder.WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}) + } + cl = builder.Build() + var err error + rec, err = rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + }) + + When("one attachTo node has diskful replica", func() { + BeforeEach(func() { + rvrList = []*v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeDiskful, + NodeName: "node-a", + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-access-1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeAccess, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-access-2"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeAccess, + }, + }, + } + }) + + It("schedules access replica only on free attachTo node", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + updated1 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-access-1"}, updated1)).To(Succeed()) + updated2 := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-access-2"}, updated2)).To(Succeed()) + + nodeNames := []string{updated1.Spec.NodeName, updated2.Spec.NodeName} + Expect(nodeNames).To(ContainElement("node-b")) + Expect(nodeNames).To(ContainElement("")) + }) + }) + + When("all attachTo nodes already have replicas", func() { + BeforeEach(func() { + rvrList = []*v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-a"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeDiskful, + NodeName: "node-a", + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-b"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeAccess, + NodeName: "node-b", + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-access-unscheduled"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeAccess, + }, + }, + } + }) + + It("does not schedule unscheduled access replica", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-access-unscheduled"}, updated)).To(Succeed()) + Expect(updated.Spec.NodeName).To(Equal("")) + }) + }) + + When("checking Scheduled condition", func() { + BeforeEach(func() { + rv.Status.DesiredAttachTo = []string{"node-a", "node-b"} + rvrList = []*v1alpha1.ReplicatedVolumeReplica{ + 
{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-scheduled"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeDiskful, + NodeName: "node-a", + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{}, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-to-schedule"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-access", + Type: v1alpha1.ReplicaTypeDiskful, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{}, + }, + } + }) + + It("sets Scheduled=True for all scheduled replicas", func(ctx SpecContext) { + _, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + // Check already-scheduled replica gets condition fixed + updatedScheduled := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-scheduled"}, updatedScheduled)).To(Succeed()) + condScheduled := meta.FindStatusCondition(updatedScheduled.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(condScheduled).ToNot(BeNil()) + Expect(condScheduled.Status).To(Equal(metav1.ConditionTrue)) + Expect(condScheduled.Reason).To(Equal(v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonReplicaScheduled)) + + // Check newly-scheduled replica gets NodeName and Scheduled condition + updatedNewlyScheduled := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-to-schedule"}, updatedNewlyScheduled)).To(Succeed()) + Expect(updatedNewlyScheduled.Spec.NodeName).To(Equal("node-b")) + condNewlyScheduled := meta.FindStatusCondition(updatedNewlyScheduled.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(condNewlyScheduled).ToNot(BeNil()) + Expect(condNewlyScheduled.Status).To(Equal(metav1.ConditionTrue)) + Expect(condNewlyScheduled.Reason).To(Equal(v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonReplicaScheduled)) + }) + }) +}) + +// ==================== PARTIAL SCHEDULING AND EDGE CASES TESTS ==================== +var _ = Describe("Partial Scheduling and Edge Cases", Ordered, func() { + var ( + scheme *runtime.Scheme + ) + + BeforeEach(func() { + scheme = runtime.NewScheme() + utilruntime.Must(corev1.AddToScheme(scheme)) + utilruntime.Must(snc.AddToScheme(scheme)) + utilruntime.Must(v1alpha1.AddToScheme(scheme)) + }) + + Context("Partial Diskful Scheduling", func() { + It("schedules as many Diskful replicas as possible and sets Scheduled=False on remaining", func(ctx SpecContext) { + // Setup: 3 Diskful replicas to schedule, only 2 candidate nodes + cluster := clusterConfigs["small-1z"] + nodes, scores := generateNodes(cluster) + lvgs, rsp := generateLVGs(nodes) + + // Build lvg -> node mapping for mock server + lvgToNode := make(map[string]string) + for _, lvg := range lvgs { + if len(lvg.Status.Nodes) > 0 { + lvgToNode[lvg.Name] = lvg.Status.Nodes[0].Name + } + } + + mockServer := createMockServer(scores, lvgToNode) + defer mockServer.Close() + os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + defer os.Unsetenv("SCHEDULER_EXTENDER_URL") + + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-test"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-1", + VolumeAccess: "Any", + Topology: "Ignored", + Zones: cluster.RSCZones, + }, + } + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-test", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + 
Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-test", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + + // Create 3 Diskful replicas but only 2 nodes available + rvr1 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful-1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + rvr2 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful-2"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + rvr3 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful-3"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + + objects := []runtime.Object{rv, rsc, rsp, rvr1, rvr2, rvr3} + for _, node := range nodes { + objects = append(objects, node) + } + for _, lvg := range lvgs { + objects = append(objects, lvg) + } + + cl := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithRuntimeObjects(objects...). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). + Build() + rec, err := rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + + // Reconcile should succeed (no error) even though not all replicas can be scheduled + _, err = rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + // Count scheduled replicas and check conditions + var scheduledCount int + var unscheduledCount int + for _, rvrName := range []string{"rvr-diskful-1", "rvr-diskful-2", "rvr-diskful-3"} { + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: rvrName}, updated)).To(Succeed()) + + if updated.Spec.NodeName != "" { + scheduledCount++ + // Check Scheduled=True for scheduled replicas + cond := meta.FindStatusCondition(updated.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(cond).ToNot(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionTrue)) + } else { + unscheduledCount++ + // Check Scheduled=False for unscheduled replicas with appropriate reason + cond := meta.FindStatusCondition(updated.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(cond).ToNot(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + Expect(cond.Reason).To(Equal(v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes)) + } + } + + // Expect 2 scheduled (we have 2 nodes) and 1 unscheduled + Expect(scheduledCount).To(Equal(2)) + Expect(unscheduledCount).To(Equal(1)) + }) + }) + + Context("Deleting Replica Node Occupancy", func() { + It("does not schedule new replica on node with deleting replica", func(ctx SpecContext) { + // Setup: existing replica being deleted on node-a, new replica to schedule + cluster := clusterConfigs["small-1z"] + nodes, scores := generateNodes(cluster) + lvgs, rsp := generateLVGs(nodes) + + lvgToNode := make(map[string]string) + for _, lvg := range lvgs { + if len(lvg.Status.Nodes) > 0 { + lvgToNode[lvg.Name] = lvg.Status.Nodes[0].Name + } + } + + mockServer := createMockServer(scores, lvgToNode) + defer mockServer.Close() + 
os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + defer os.Unsetenv("SCHEDULER_EXTENDER_URL") + + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-test"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-1", + VolumeAccess: "Any", + Topology: "Ignored", + Zones: cluster.RSCZones, + }, + } + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-test", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-test", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + + // Create a deleting replica on node-a1 (best score node) + deletingTime := metav1.Now() + deletingRvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-deleting", + DeletionTimestamp: &deletingTime, + Finalizers: []string{"test-finalizer"}, // Finalizer to prevent actual deletion + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + NodeName: "node-a1", // Best score node + }, + } + + // New replica to schedule + newRvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-new"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + + objects := []runtime.Object{rv, rsc, rsp, deletingRvr, newRvr} + for _, node := range nodes { + objects = append(objects, node) + } + for _, lvg := range lvgs { + objects = append(objects, lvg) + } + + cl := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithRuntimeObjects(objects...). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). 
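+		// Status subresource is enabled above so the fake client honors the
+		// reconciler's status writes on ReplicatedVolumeReplica objects.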
+ Build() + rec, err := rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + + _, err = rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + // New replica should be scheduled on node-a2 (not node-a1 which has deleting replica) + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-new"}, updated)).To(Succeed()) + Expect(updated.Spec.NodeName).To(Equal("node-a2")) + Expect(updated.Spec.NodeName).ToNot(Equal("node-a1")) // Should NOT be on node with deleting replica + }) + }) + + Context("RVR with DeletionTimestamp", func() { + It("does not schedule RVR that is being deleted", func(ctx SpecContext) { + cluster := clusterConfigs["small-1z"] + nodes, scores := generateNodes(cluster) + lvgs, rsp := generateLVGs(nodes) + + lvgToNode := make(map[string]string) + for _, lvg := range lvgs { + if len(lvg.Status.Nodes) > 0 { + lvgToNode[lvg.Name] = lvg.Status.Nodes[0].Name + } + } + + mockServer := createMockServer(scores, lvgToNode) + defer mockServer.Close() + os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + defer os.Unsetenv("SCHEDULER_EXTENDER_URL") + + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-test"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-1", + VolumeAccess: "Any", + Topology: "Ignored", + Zones: cluster.RSCZones, + }, + } + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-test", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-test", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + + // RVR with DeletionTimestamp and no NodeName - should NOT be scheduled + deletingTime := metav1.Now() + deletingRvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-deleting-unscheduled", + DeletionTimestamp: &deletingTime, + Finalizers: []string{"test-finalizer"}, + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + // No NodeName - not scheduled + }, + } + + objects := []runtime.Object{rv, rsc, rsp, deletingRvr} + for _, node := range nodes { + objects = append(objects, node) + } + for _, lvg := range lvgs { + objects = append(objects, lvg) + } + + cl := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithRuntimeObjects(objects...). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). 
+ Build() + rec, err := rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + + _, err = rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + // Deleting RVR should NOT be scheduled + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: "rvr-deleting-unscheduled"}, updated)).To(Succeed()) + Expect(updated.Spec.NodeName).To(BeEmpty()) // Should remain unscheduled + }) + }) + + Context("Constraint Violation Conditions", func() { + It("sets Scheduled=False with appropriate reason when topology constraints fail", func(ctx SpecContext) { + // Setup: TransZonal topology, existing replicas in 2 zones, need to place more but distribution can't be satisfied + cluster := clusterConfigs["medium-2z"] + nodes, scores := generateNodes(cluster) + lvgs, rsp := generateLVGs(nodes) + + // Only include zone-a nodes in lvgToNode (simulating zone-b has no capacity) + lvgToNode := make(map[string]string) + for _, lvg := range lvgs { + if len(lvg.Status.Nodes) > 0 { + nodeName := lvg.Status.Nodes[0].Name + if nodeName == "node-a1" || nodeName == "node-a2" { + lvgToNode[lvg.Name] = nodeName + } + } + } + + mockServer := createMockServer(scores, lvgToNode) + defer mockServer.Close() + os.Setenv("SCHEDULER_EXTENDER_URL", mockServer.URL) + defer os.Unsetenv("SCHEDULER_EXTENDER_URL") + + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc-test"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "pool-1", + VolumeAccess: "Any", + Topology: "TransZonal", + Zones: cluster.RSCZones, + }, + } + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv-test", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("10Gi"), + ReplicatedStorageClassName: "rsc-test", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondIOReadyType, + Status: metav1.ConditionTrue, + }}, + }, + } + + // Create Diskful replicas to schedule - TransZonal will fail to place evenly + rvr1 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful-1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + rvr2 := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-diskful-2"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv-test", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + + objects := []runtime.Object{rv, rsc, rsp, rvr1, rvr2} + for _, node := range nodes { + objects = append(objects, node) + } + for _, lvg := range lvgs { + objects = append(objects, lvg) + } + + cl := testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder(). + WithScheme(scheme)). + WithRuntimeObjects(objects...). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeReplica{}). 
+ Build() + rec, err := rvrschedulingcontroller.NewReconciler(cl, logr.Discard(), scheme) + Expect(err).ToNot(HaveOccurred()) + + // Reconcile - should succeed but some replicas may not be scheduled + _, err = rec.Reconcile(ctx, reconcile.Request{NamespacedName: types.NamespacedName{Name: rv.Name}}) + Expect(err).ToNot(HaveOccurred()) + + // Check that unscheduled replicas have Scheduled=False condition + for _, rvrName := range []string{"rvr-diskful-1", "rvr-diskful-2"} { + updated := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKey{Name: rvrName}, updated)).To(Succeed()) + + if updated.Spec.NodeName == "" { + // Unscheduled replica should have Scheduled=False + cond := meta.FindStatusCondition(updated.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondScheduledType) + Expect(cond).ToNot(BeNil()) + Expect(cond.Status).To(Equal(metav1.ConditionFalse)) + // Reason should indicate why scheduling failed + Expect(cond.Reason).To(Or( + Equal(v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonNoAvailableNodes), + Equal(v1alpha1.ReplicatedVolumeReplicaCondScheduledReasonTopologyConstraintsFailed), + )) + } + } + }) + }) +}) diff --git a/images/controller/internal/controllers/rvr_scheduling_controller/rvr_scheduling_controller_suite_test.go b/images/controller/internal/controllers/rvr_scheduling_controller/rvr_scheduling_controller_suite_test.go new file mode 100644 index 000000000..4dee41ec0 --- /dev/null +++ b/images/controller/internal/controllers/rvr_scheduling_controller/rvr_scheduling_controller_suite_test.go @@ -0,0 +1,72 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrschedulingcontroller_test + +import ( + "context" + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/interceptor" + "sigs.k8s.io/controller-runtime/pkg/reconcile" +) + +func TestRvrSchedulingController(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "RvrSchedulingController Suite") +} + +func Requeue() OmegaMatcher { + return Not(Equal(reconcile.Result{})) +} + +// InterceptGet creates an interceptor that modifies objects in both Get and List operations. +// If Get or List returns an error, intercept is called with a nil (zero) value of type T allowing +// the test to override the error. +func InterceptGet[T client.Object]( + intercept func(T) error, +) interceptor.Funcs { + return interceptor.Funcs{ + Get: func(ctx context.Context, cl client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if target, ok := obj.(T); ok { + if err := cl.Get(ctx, key, obj, opts...); err != nil { + var zero T + if err := intercept(zero); err != nil { + return err + } + return err + } + if err := intercept(target); err != nil { + return err + } + } + return cl.Get(ctx, key, obj, opts...) 
+ }, + List: func(ctx context.Context, cl client.WithWatch, list client.ObjectList, opts ...client.ListOption) error { + if err := cl.List(ctx, list, opts...); err != nil { + var zero T + if err := intercept(zero); err != nil { + return err + } + return err + } + return nil + }, + } +} diff --git a/images/controller/internal/controllers/rvr_scheduling_controller/scheduler_extender.go b/images/controller/internal/controllers/rvr_scheduling_controller/scheduler_extender.go new file mode 100644 index 000000000..aa71a66b0 --- /dev/null +++ b/images/controller/internal/controllers/rvr_scheduling_controller/scheduler_extender.go @@ -0,0 +1,142 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrschedulingcontroller + +import ( + "bytes" + "context" + "crypto/tls" + "encoding/json" + "errors" + "fmt" + "net/http" + "net/url" + "os" +) + +type schedulerExtenderLVG struct { + Name string `json:"name"` + ThinPoolName string `json:"thinPoolName,omitempty"` +} + +type schedulerExtenderVolume struct { + Name string `json:"name"` + Size int64 `json:"size"` + Type string `json:"type"` +} + +type schedulerExtenderRequest struct { + LVGS []schedulerExtenderLVG `json:"lvgs"` + Volume schedulerExtenderVolume `json:"volume"` +} + +type schedulerExtenderResponseLVG struct { + Name string `json:"name"` + Score int `json:"score"` +} + +type schedulerExtenderResponse struct { + LVGS []schedulerExtenderResponseLVG `json:"lvgs"` +} + +type SchedulerExtenderClient struct { + httpClient *http.Client + url string +} + +func NewSchedulerHTTPClient() (*SchedulerExtenderClient, error) { + extURL := os.Getenv("SCHEDULER_EXTENDER_URL") // TODO init in the other place later + if extURL == "" { + // No scheduler-extender URL configured — disable external capacity filtering. + return nil, errors.New("scheduler-extender URL is not configured") + } + + // Parse URL to validate it + _, err := url.Parse(extURL) + if err != nil { + return nil, fmt.Errorf("invalid scheduler-extender URL: %w", err) + } + + // Create HTTP client that trusts any certificate + tr := &http.Transport{ + TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, + } + httpClient := &http.Client{Transport: tr} + + return &SchedulerExtenderClient{ + httpClient: httpClient, + url: extURL, + }, nil +} + +// VolumeInfo contains information about the volume to query scores for. +type VolumeInfo struct { + Name string + Size int64 + Type string // "thin" or "thick" +} + +// queryLVGScores queries the scheduler extender for LVG scores. +// It performs HTTP communication only and returns a map of LVG name to score. 
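+// The request body carries the candidate LVGs plus the volume name/size/type,
+// and the response is expected to map each LVG name to an integer score
+// (higher score means a better placement candidate). Illustrative exchange,
+// with example-only names and size:
+//
+//	request:  {"lvgs":[{"name":"vg-a"},{"name":"vg-b"}],"volume":{"name":"pvc-1","size":10737418240,"type":"thick"}}
+//	response: {"lvgs":[{"name":"vg-a","score":80},{"name":"vg-b","score":100}]}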
+func (c *SchedulerExtenderClient) queryLVGScores( + ctx context.Context, + lvgs []schedulerExtenderLVG, + volumeInfo VolumeInfo, +) (map[string]int, error) { + if len(lvgs) == 0 { + return nil, fmt.Errorf("no LVGs provided for query") + } + + reqBody := schedulerExtenderRequest{ + LVGS: lvgs, + Volume: schedulerExtenderVolume(volumeInfo), + } + + data, err := json.Marshal(reqBody) + if err != nil { + return nil, fmt.Errorf("unable to marshal scheduler-extender request: %w", err) + } + + httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, c.url, bytes.NewReader(data)) + if err != nil { + return nil, fmt.Errorf("unable to build scheduler-extender request: %w", err) + } + httpReq.Header.Set("Content-Type", "application/json") + + resp, err := c.httpClient.Do(httpReq) + if err != nil { + return nil, fmt.Errorf("scheduler-extender request failed: %w", err) + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("scheduler-extender returned unexpected status %d", resp.StatusCode) + } + + var respBody schedulerExtenderResponse + if err := json.NewDecoder(resp.Body).Decode(&respBody); err != nil { + return nil, fmt.Errorf("unable to decode scheduler-extender response: %w", err) + } + + // Build map of LVG name -> score from response + lvgScores := make(map[string]int, len(respBody.LVGS)) + for _, lvg := range respBody.LVGS { + lvgScores[lvg.Name] = lvg.Score + } + + return lvgScores, nil +} diff --git a/images/controller/internal/controllers/rvr_scheduling_controller/types.go b/images/controller/internal/controllers/rvr_scheduling_controller/types.go new file mode 100644 index 000000000..1970788cf --- /dev/null +++ b/images/controller/internal/controllers/rvr_scheduling_controller/types.go @@ -0,0 +1,168 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvrschedulingcontroller + +import ( + "slices" + + "github.com/go-logr/logr" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +type SchedulingContext struct { + Log logr.Logger + Rv *v1alpha1.ReplicatedVolume + Rsc *v1alpha1.ReplicatedStorageClass + Rsp *v1alpha1.ReplicatedStoragePool + RvrList []*v1alpha1.ReplicatedVolumeReplica + AttachToNodes []string + NodesWithAnyReplica map[string]struct{} + AttachToNodesWithoutRvReplica []string + UnscheduledDiskfulReplicas []*v1alpha1.ReplicatedVolumeReplica + ScheduledDiskfulReplicas []*v1alpha1.ReplicatedVolumeReplica + UnscheduledAccessReplicas []*v1alpha1.ReplicatedVolumeReplica + UnscheduledTieBreakerReplicas []*v1alpha1.ReplicatedVolumeReplica + RspLvgToNodeInfoMap map[string]LvgInfo // {lvgName: {NodeName, ThinPoolName}} + RspNodesWithoutReplica []string + NodeNameToZone map[string]string // {nodeName: zoneName} + ZonesToNodeCandidatesMap map[string][]NodeCandidate // {zone1: [{name: node1, score: 100}, {name: node2, score: 90}]} + // RVRs with nodes assigned in this reconcile + RVRsToSchedule []*v1alpha1.ReplicatedVolumeReplica +} + +type NodeCandidate struct { + Name string + Score int +} + +// SelectAndRemoveBestNode sorts candidates by score (descending), selects the best one, +// removes it from the slice, and returns the node name along with the updated slice. +// Returns empty string and original slice if no candidates available. +func SelectAndRemoveBestNode(candidates []NodeCandidate) (string, []NodeCandidate) { + if len(candidates) == 0 { + return "", candidates + } + + // Sort by score descending (higher score = better) + slices.SortFunc(candidates, func(a, b NodeCandidate) int { + return b.Score - a.Score + }) + + // Select the best node and remove it from the slice + bestNode := candidates[0].Name + return bestNode, candidates[1:] +} + +type LvgInfo struct { + NodeName string + ThinPoolName string +} + +// UpdateAfterScheduling updates the scheduling context after replicas have been assigned nodes. +// It removes assigned replicas from the appropriate unscheduled list based on their type, +// adds them to ScheduledDiskfulReplicas (for Diskful type), +// adds the assigned nodes to NodesWithAnyReplica, and removes them from AttachToNodesWithoutRvReplica. +func (sctx *SchedulingContext) UpdateAfterScheduling(assignedReplicas []*v1alpha1.ReplicatedVolumeReplica) { + if len(assignedReplicas) == 0 { + return + } + + // Build sets for fast lookup in a single pass + assignedNames := make(map[string]struct{}, len(assignedReplicas)) + assignedNodes := make(map[string]struct{}, len(assignedReplicas)) + var diskfulReplicas []*v1alpha1.ReplicatedVolumeReplica + + for _, rvr := range assignedReplicas { + assignedNames[rvr.Name] = struct{}{} + assignedNodes[rvr.Spec.NodeName] = struct{}{} + sctx.NodesWithAnyReplica[rvr.Spec.NodeName] = struct{}{} + if rvr.Spec.Type == v1alpha1.ReplicaTypeDiskful { + diskfulReplicas = append(diskfulReplicas, rvr) + } + } + + // Filter unscheduled lists + sctx.UnscheduledDiskfulReplicas = removeAssigned(sctx.UnscheduledDiskfulReplicas, assignedNames) + sctx.UnscheduledAccessReplicas = removeAssigned(sctx.UnscheduledAccessReplicas, assignedNames) + sctx.UnscheduledTieBreakerReplicas = removeAssigned(sctx.UnscheduledTieBreakerReplicas, assignedNames) + + // Add diskful replicas to ScheduledDiskfulReplicas + sctx.ScheduledDiskfulReplicas = append(sctx.ScheduledDiskfulReplicas, diskfulReplicas...) 
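+	// Note: only Diskful replicas are tracked in ScheduledDiskfulReplicas;
+	// Access and TieBreaker replicas are merely dropped from their unscheduled lists above.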
+ + // Remove assigned nodes from AttachToNodesWithoutRvReplica + var remainingAttachToNodes []string + for _, node := range sctx.AttachToNodesWithoutRvReplica { + if _, assigned := assignedNodes[node]; !assigned { + remainingAttachToNodes = append(remainingAttachToNodes, node) + } + } + sctx.AttachToNodesWithoutRvReplica = remainingAttachToNodes + + // Add assigned replicas to RVRsToSchedule + sctx.RVRsToSchedule = append(sctx.RVRsToSchedule, assignedReplicas...) +} + +// removeAssigned removes replicas that are in the assigned set and returns the rest. +func removeAssigned(replicas []*v1alpha1.ReplicatedVolumeReplica, assigned map[string]struct{}) []*v1alpha1.ReplicatedVolumeReplica { + var result []*v1alpha1.ReplicatedVolumeReplica + for _, rvr := range replicas { + if _, ok := assigned[rvr.Name]; !ok { + result = append(result, rvr) + } + } + return result +} + +const attachToScoreBonus = 1000 + +// ApplyAttachToBonus increases score for nodes in rv.status.desiredAttachTo. +// This ensures attachTo nodes are preferred when scheduling Diskful replicas. +func (sctx *SchedulingContext) ApplyAttachToBonus() { + if len(sctx.AttachToNodes) == 0 { + return + } + + attachToSet := make(map[string]struct{}, len(sctx.AttachToNodes)) + for _, node := range sctx.AttachToNodes { + attachToSet[node] = struct{}{} + } + + for zone, candidates := range sctx.ZonesToNodeCandidatesMap { + for i := range candidates { + if _, isAttachTo := attachToSet[candidates[i].Name]; isAttachTo { + candidates[i].Score += attachToScoreBonus + } + } + sctx.ZonesToNodeCandidatesMap[zone] = candidates + } +} + +// findZoneWithMinReplicaCount finds the zone with the minimum replica count among the given zones. +// Returns the zone name and its replica count. If zones is empty, returns ("", -1). +func findZoneWithMinReplicaCount(zones map[string]struct{}, zoneReplicaCount map[string]int) (string, int) { + var minZone string + minCount := -1 + for zone := range zones { + count := zoneReplicaCount[zone] + if minCount == -1 || count < minCount { + minCount = count + minZone = zone + } + } + return minZone, minCount +} diff --git a/images/controller/internal/controllers/rvr_tie_breaker_count/controller.go b/images/controller/internal/controllers/rvr_tie_breaker_count/controller.go new file mode 100644 index 000000000..afce3f6c9 --- /dev/null +++ b/images/controller/internal/controllers/rvr_tie_breaker_count/controller.go @@ -0,0 +1,45 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/
+
+package rvrtiebreakercount
+
+import (
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+
+ "github.com/deckhouse/sds-replicated-volume/api/v1alpha1"
+)
+
+func BuildController(mgr manager.Manager) error {
+ const controllerName = "rvr_tie_breaker_count_controller"
+
+ log := mgr.GetLogger().WithName(controllerName)
+
+ rec, err := NewReconciler(mgr.GetClient(), log, mgr.GetScheme())
+ if err != nil {
+ return err
+ }
+
+ return builder.ControllerManagedBy(mgr).
+ Named(controllerName).
+ For(&v1alpha1.ReplicatedVolume{}).
+ Watches(
+ &v1alpha1.ReplicatedVolumeReplica{},
+ handler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &v1alpha1.ReplicatedVolume{}),
+ ).
+ Complete(rec)
+}
diff --git a/images/controller/internal/controllers/rvr_tie_breaker_count/doc.go b/images/controller/internal/controllers/rvr_tie_breaker_count/doc.go
new file mode 100644
index 000000000..62ec166f2
--- /dev/null
+++ b/images/controller/internal/controllers/rvr_tie_breaker_count/doc.go
@@ -0,0 +1,102 @@
+/*
+Copyright 2025 Flant JSC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Package rvrtiebreakercount implements the rvr-tie-breaker-count-controller,
+// which manages TieBreaker replicas to maintain odd replica counts and prevent
+// quorum ties in failure scenarios.
+//
+// # Controller Responsibilities
+//
+// The controller manages TieBreaker replicas by:
+// - Creating TieBreaker replicas to ensure odd total replica count
+// - Balancing replica distribution across failure domains
+// - Deleting unnecessary TieBreaker replicas
+// - Ensuring failure of any single failure domain doesn't cause quorum loss
+// - Preventing majority failure domain loss from leaving a quorum
+//
+// # Watched Resources
+//
+// The controller watches:
+// - ReplicatedVolume: To determine replica requirements
+// - ReplicatedVolumeReplica: To count existing replicas
+// - ReplicatedStorageClass: To get topology mode
+//
+// # Failure Domain Definition
+//
+// Failure Domain (FD) depends on topology:
+// - When rsc.spec.topology==TransZonal: FD is the zone (availability zone)
+// - Otherwise: FD is the node
+//
+// # TieBreaker Requirements
+//
+// The controller ensures:
+// 1. Single FD failure must NOT cause quorum loss
+// 2. Majority FD failure MUST cause quorum loss
+// 3. Total replica count is odd
+// 4. Replica difference between FDs is at most 1
+//
+// To achieve this, TieBreaker replicas are added to balance FDs to the minimum
+// count where these conditions are satisfied.
+//
+// # Reconciliation Flow
+//
+// 1. Check prerequisites:
+// - RV must have the controller finalizer
+// 2. If RV is being deleted (only module finalizers remain):
+// - Do not create new replicas
+// 3. Get ReplicatedStorageClass to determine topology
+// 4. Determine failure domains:
+// - TransZonal: Count replicas per zone
+// - Other: Count replicas per node
+// 5. Count existing replicas in each FD (Diskful, Access, TieBreaker)
+// 6. 
Calculate target replica distribution: +// a. Determine minimum replica count per FD to satisfy requirements +// b. Ensure total count is odd +// c. Ensure FD counts differ by at most 1 +// 7. For FDs with fewer than target count: +// - Create TieBreaker replicas to reach target +// 8. For FDs with more than target count: +// - Delete excess TieBreaker replicas (only TieBreaker type) +// 9. Set rvr.metadata.deletionTimestamp for replicas to be deleted +// +// # Status Updates +// +// This controller creates and deletes ReplicatedVolumeReplica resources with +// spec.type=TieBreaker. It does not directly update status fields. +// +// # Special Notes +// +// Quorum Safety: +// - TieBreaker replicas participate in quorum but don't store data +// - They prevent split-brain in even-replica configurations +// - Example: With 2 Diskful replicas, add 1 TieBreaker for 3 total (quorum=2) +// +// TransZonal Topology: +// - Replicas are distributed to maintain zone balance +// - Zone failure should not cause quorum loss +// - Majority zone failure should cause quorum loss (prevents split-brain) +// +// Dynamic Adjustment: +// - As Diskful and Access replicas are added/removed, TieBreaker count adjusts +// - Maintains odd count and balanced distribution automatically +// +// Conversion to Access: +// - rv-attach-controller may convert TieBreaker to Access when needed for attaching +// - This controller will create new TieBreaker replicas if balance is disrupted +// +// The TieBreaker mechanism is crucial for maintaining data consistency and +// availability in distributed replicated storage systems. +package rvrtiebreakercount diff --git a/images/controller/internal/controllers/rvr_tie_breaker_count/failure_domain.go b/images/controller/internal/controllers/rvr_tie_breaker_count/failure_domain.go new file mode 100644 index 000000000..d77b35533 --- /dev/null +++ b/images/controller/internal/controllers/rvr_tie_breaker_count/failure_domain.go @@ -0,0 +1,69 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvrtiebreakercount + +import ( + "slices" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +type baseReplica *v1alpha1.ReplicatedVolumeReplica + +type tb *v1alpha1.ReplicatedVolumeReplica + +type failureDomain struct { + nodeNames []string // for Any/Zonal topology it is always single node + baseReplicas []baseReplica + tbs []tb +} + +func (fd *failureDomain) baseReplicaCount() int { + return len(fd.baseReplicas) +} + +func (fd *failureDomain) tbReplicaCount() int { + return len(fd.tbs) +} + +func (fd *failureDomain) addTBReplica(rvr tb) bool { + if !slices.Contains(fd.nodeNames, rvr.Spec.NodeName) { + return false + } + fd.tbs = append(fd.tbs, rvr) + + return true +} + +func (fd *failureDomain) addBaseReplica(rvr baseReplica) bool { + if !slices.Contains(fd.nodeNames, rvr.Spec.NodeName) { + return false + } + + fd.baseReplicas = append(fd.baseReplicas, rvr) + + return true +} + +func (fd *failureDomain) popTBReplica() *v1alpha1.ReplicatedVolumeReplica { + if len(fd.tbs) == 0 { + return nil + } + tb := fd.tbs[len(fd.tbs)-1] + fd.tbs = fd.tbs[0 : len(fd.tbs)-1] + return tb +} diff --git a/images/controller/internal/controllers/rvr_tie_breaker_count/reconciler.go b/images/controller/internal/controllers/rvr_tie_breaker_count/reconciler.go new file mode 100644 index 000000000..677323a97 --- /dev/null +++ b/images/controller/internal/controllers/rvr_tie_breaker_count/reconciler.go @@ -0,0 +1,432 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rvrtiebreakercount + +import ( + "context" + "errors" + "fmt" + "slices" + + "github.com/go-logr/logr" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + uslices "github.com/deckhouse/sds-common-lib/utils/slices" + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + interrors "github.com/deckhouse/sds-replicated-volume/images/controller/internal/errors" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +type Reconciler struct { + cl client.Client + log logr.Logger + scheme *runtime.Scheme +} + +func NewReconciler(cl client.Client, log logr.Logger, scheme *runtime.Scheme) (*Reconciler, error) { + if err := interrors.ValidateArgNotNil(cl, "cl"); err != nil { + return nil, err + } + if err := interrors.ValidateArgNotNil(scheme, "scheme"); err != nil { + return nil, err + } + return &Reconciler{ + cl: cl, + log: log, + scheme: scheme, + }, nil +} + +var _ reconcile.Reconciler = &Reconciler{} +var ErrNoZoneLabel = errors.New("can't find zone label") +var ErrBaseReplicaNodeIsNotInReplicatedStorageClassZones = errors.New("node is not in rsc.spec.zones") + +func (r *Reconciler) Reconcile( + ctx context.Context, + req reconcile.Request, +) (reconcile.Result, error) { + log := r.log.WithName("Reconcile").WithValues("request", req) + rv, err := r.getReplicatedVolume(ctx, req, log) + if err != nil { + if client.IgnoreNotFound(err) == nil { + return reconcile.Result{}, nil + } + return reconcile.Result{}, err + } + + // TODO: fail ReplicatedVolume if it has empty ReplicatedStorageClassName + if shouldSkipRV(rv, log) { + return reconcile.Result{}, nil + } + + rsc, err := r.getReplicatedStorageClass(ctx, rv, log) + if err != nil { + return reconcile.Result{}, err + } + if rsc == nil { + return reconcile.Result{}, nil + } + + rvrList := &v1alpha1.ReplicatedVolumeReplicaList{} + if err = r.cl.List(ctx, rvrList, client.MatchingFields{ + indexes.IndexFieldRVRByReplicatedVolumeName: rv.Name, + }); err != nil { + return reconcile.Result{}, logError(log, fmt.Errorf("listing rvrs: %w", err)) + } + + fds, tbs, nonFDtbs, err := r.loadFailureDomains(ctx, log, rv.Name, rvrList.Items, rsc) + if err != nil { + return reconcile.Result{}, err + } + + // delete TBs, which are scheduled to FDs, which are outside our FDs + for i, tbToDelete := range nonFDtbs { + rvr := (*v1alpha1.ReplicatedVolumeReplica)(tbToDelete) + if err := r.cl.Delete(ctx, rvr); client.IgnoreNotFound(err) != nil { + return reconcile.Result{}, + logError(log.WithValues("tbToDelete", tbToDelete.Name), fmt.Errorf("deleting nonFDtbs rvr: %w", err)) + } + + log.Info(fmt.Sprintf("deleted rvr %d/%d", i+1, len(nonFDtbs)), "tbToDelete", tbToDelete.Name) + } + + return r.syncTieBreakers(ctx, log, rv, fds, tbs, rvrList.Items) +} + +func (r *Reconciler) getReplicatedVolume( + ctx context.Context, + req reconcile.Request, + log logr.Logger, +) (*v1alpha1.ReplicatedVolume, error) { + rv := &v1alpha1.ReplicatedVolume{} + if err := r.cl.Get(ctx, req.NamespacedName, rv); err != nil { + log.Error(err, "Can't get ReplicatedVolume") + return nil, err + } + return rv, nil +} + +func shouldSkipRV(rv *v1alpha1.ReplicatedVolume, log 
logr.Logger) bool { + if !meta.IsStatusConditionTrue(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondInitializedType) { + log.Info("ReplicatedVolume is not initialized yet") + return true + } + + if rv.Spec.ReplicatedStorageClassName == "" { + log.Info("Empty ReplicatedStorageClassName") + return true + } + return false +} + +func ensureRVControllerFinalizer(ctx context.Context, cl client.Client, rv *v1alpha1.ReplicatedVolume) error { + if rv == nil { + panic("ensureRVControllerFinalizer: nil rv (programmer error)") + } + if obju.HasFinalizer(rv, v1alpha1.ControllerFinalizer) { + return nil + } + + original := rv.DeepCopy() + rv.Finalizers = append(rv.Finalizers, v1alpha1.ControllerFinalizer) + return cl.Patch(ctx, rv, client.MergeFromWithOptions(original, client.MergeFromWithOptimisticLock{})) +} + +func (r *Reconciler) getReplicatedStorageClass( + ctx context.Context, + rv *v1alpha1.ReplicatedVolume, + log logr.Logger, +) (*v1alpha1.ReplicatedStorageClass, error) { + rsc := &v1alpha1.ReplicatedStorageClass{} + if err := r.cl.Get(ctx, client.ObjectKey{Name: rv.Spec.ReplicatedStorageClassName}, rsc); err != nil { + if client.IgnoreNotFound(err) == nil { + log.V(1).Info("ReplicatedStorageClass not found", "name", rv.Spec.ReplicatedStorageClassName) + return nil, nil + } + log.Error(err, "Can't get ReplicatedStorageClass") + return nil, err + } + return rsc, nil +} + +func (r *Reconciler) loadFailureDomains( + ctx context.Context, + log logr.Logger, + rvName string, + rvrs []v1alpha1.ReplicatedVolumeReplica, + rsc *v1alpha1.ReplicatedStorageClass, +) (fds map[string]*failureDomain, tbs []tb, nonFDtbs []tb, err error) { + // initialize empty failure domains + nodeList := &corev1.NodeList{} + if err := r.cl.List(ctx, nodeList); err != nil { + return nil, nil, nil, logError(r.log, fmt.Errorf("listing nodes: %w", err)) + } + + if rsc.Spec.Topology == "TransZonal" { + // each zone is a failure domain + fds = make(map[string]*failureDomain, len(rsc.Spec.Zones)) + for _, zone := range rsc.Spec.Zones { + fds[zone] = &failureDomain{} + } + + for node := range uslices.Ptrs(nodeList.Items) { + zone, ok := node.Labels[corev1.LabelTopologyZone] + if !ok { + log.WithValues("node", node.Name).Error(ErrNoZoneLabel, "No zone label") + return nil, nil, nil, fmt.Errorf("%w: node is %s", ErrNoZoneLabel, node.Name) + } + + if fd, ok := fds[zone]; ok { + fd.nodeNames = append(fd.nodeNames, node.Name) + } + } + } else { + // each node is a failure domain + fds = make(map[string]*failureDomain, len(nodeList.Items)) + + for node := range uslices.Ptrs(nodeList.Items) { + fds[node.Name] = &failureDomain{nodeNames: []string{node.Name}} + } + } + + // init failure domains with RVRs + + for rvr := range uslices.Ptrs(rvrs) { + if rvr.Spec.ReplicatedVolumeName != rvName { + continue + } + + // ignore non-scheduled base replicas + if rvr.Spec.NodeName == "" && rvr.Spec.Type != v1alpha1.ReplicaTypeTieBreaker { + continue + } + + if rvr.Spec.Type == v1alpha1.ReplicaTypeTieBreaker { + var fdFound bool + if rvr.Spec.NodeName != "" { + for _, fd := range fds { + if fd.addTBReplica(rvr) { + // rvr always maps to single fd + fdFound = true + break + } + } + } else { + fdFound = true + } + + if fdFound { + tbs = append(tbs, rvr) + } else { + nonFDtbs = append(nonFDtbs, rvr) + } + } else { + var fdFound bool + for _, fd := range fds { + if fd.addBaseReplica(rvr) { + // rvr always maps to single fd + fdFound = true + break + } + } + if !fdFound { + return nil, nil, nil, logError( + log, + fmt.Errorf( + "cannot map base replica 
'%s' (node '%s') to failure domain: %w", + rvr.Name, rvr.Spec.NodeName, ErrBaseReplicaNodeIsNotInReplicatedStorageClassZones, + ), + ) + } + } + } + + return fds, tbs, nonFDtbs, nil +} + +func (r *Reconciler) syncTieBreakers( + ctx context.Context, + log logr.Logger, + rv *v1alpha1.ReplicatedVolume, + fds map[string]*failureDomain, + tbs []tb, + rvrs []v1alpha1.ReplicatedVolumeReplica, +) (reconcile.Result, error) { + var maxBaseReplicaCount, totalBaseReplicaCount int + for _, fd := range fds { + fdBaseReplicaCount := fd.baseReplicaCount() + maxBaseReplicaCount = max(maxBaseReplicaCount, fdBaseReplicaCount) + totalBaseReplicaCount += fdBaseReplicaCount + } + + // delete useless TBs: + // useless TB is scheduled to FD where number of other replicas is not less then + // maximum number of replicas per FD in cluster by 2 + baseReplicaCountForTBusefulness := maxBaseReplicaCount - 2 + for _, fd := range fds { + if len(fd.tbs) == 0 { + continue + } + + fdBaseReplicaCount := fd.baseReplicaCount() + + usefulTBNum := max(0, baseReplicaCountForTBusefulness-fdBaseReplicaCount) + + uselessTBNum := max(0, len(fd.tbs)-usefulTBNum) + + for i := range uselessTBNum { + uselessTB := fd.popTBReplica() + tbs = slices.DeleteFunc(tbs, func(rvr tb) bool { return rvr.Name == uselessTB.Name }) + + if err := r.cl.Delete(ctx, uselessTB); client.IgnoreNotFound(err) != nil { + return reconcile.Result{}, + logError(log.WithValues("uselessTB", uselessTB.Name), fmt.Errorf("deleting useless tb rvr: %w", err)) + } + + log.Info( + fmt.Sprintf("deleted useless tb rvr %d/%d", i+1, uselessTBNum), + "uselessTB", uselessTB.Name, + ) + } + } + // + + currentTB := len(tbs) + + var desiredTB int + for _, fd := range fds { + baseReplicaCountDiffFromMax := maxBaseReplicaCount - fd.baseReplicaCount() + if baseReplicaCountDiffFromMax >= 2 { + desiredTB += baseReplicaCountDiffFromMax - 1 + } + } + + desiredTotalReplicaCount := totalBaseReplicaCount + desiredTB + if desiredTotalReplicaCount > 0 && desiredTotalReplicaCount%2 == 0 { + // add one more in order to keep total number of replicas odd + desiredTB++ + } + + if currentTB == desiredTB { + log.Info("No need to change") + return reconcile.Result{}, nil + } + + if desiredTB > currentTB { + // Ensure controller finalizer is installed on RV before creating replicas. 
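+		// The finalizer is added with an optimistic-lock patch; a conflict below simply requeues the request.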
+ if err := ensureRVControllerFinalizer(ctx, r.cl, rv); err != nil { + if apierrors.IsConflict(err) { + return reconcile.Result{Requeue: true}, nil + } + return reconcile.Result{}, err + } + } + + for i := range desiredTB - currentTB { + // creating + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + Type: v1alpha1.ReplicaTypeTieBreaker, + }, + } + + if !rvr.ChooseNewName(rvrs) { + return reconcile.Result{}, + fmt.Errorf("unable to create new rvr: too many existing replicas for rv %s", rv.Name) + } + + if err := controllerutil.SetControllerReference(rv, rvr, r.scheme); err != nil { + return reconcile.Result{}, err + } + + if err := r.cl.Create(ctx, rvr); err != nil { + return reconcile.Result{}, err + } + + rvrs = append(rvrs, *rvr) + + log.Info(fmt.Sprintf("created rvr %d/%d", i+1, desiredTB-currentTB), "newRVR", rvr.Name) + } + + for i := range currentTB - desiredTB { + // deleting starting from scheduled TBs + var tbToDelete *v1alpha1.ReplicatedVolumeReplica + for _, fd := range fds { + if fd.tbReplicaCount() == 0 { + continue + } + + wantFDTotalReplicaCount := fd.baseReplicaCount() + fd.tbReplicaCount() + + // can we remove one tb from this fd? + wantFDTotalReplicaCount-- + + baseReplicaCountDiffFromMax := maxBaseReplicaCount - wantFDTotalReplicaCount + if baseReplicaCountDiffFromMax < 2 { + // found tb, which is not necessary for this fd + tbToDelete = fd.popTBReplica() + + break + } + } + + if tbToDelete == nil { + for _, tb := range tbs { + // take the first non-scheduled + if tb.Spec.NodeName == "" { + tbToDelete = tb + break + } + } + } + + if tbToDelete == nil { + // this should not happen, but let's be safe + log.V(1).Info("failed to select TB to delete") + return reconcile.Result{}, nil + } + + if err := r.cl.Delete(ctx, tbToDelete); client.IgnoreNotFound(err) != nil { + return reconcile.Result{}, + logError(log.WithValues("tbToDelete", tbToDelete.Name), fmt.Errorf("deleting tb rvr: %w", err)) + } + + log.Info(fmt.Sprintf("deleted rvr %d/%d", i+1, currentTB-desiredTB), "tbToDelete", tbToDelete.Name) + } + + return reconcile.Result{}, nil +} + +func logError(log logr.Logger, err error) error { + if err != nil { + log.Error(err, err.Error()) + return err + } + return nil +} diff --git a/images/controller/internal/controllers/rvr_tie_breaker_count/reconciler_test.go b/images/controller/internal/controllers/rvr_tie_breaker_count/reconciler_test.go new file mode 100644 index 000000000..ef1fcf1ec --- /dev/null +++ b/images/controller/internal/controllers/rvr_tie_breaker_count/reconciler_test.go @@ -0,0 +1,882 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrtiebreakercount_test + +import ( + "context" + "errors" + "fmt" + "maps" + "slices" + "strings" + + "github.com/go-logr/logr" + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/client/interceptor" + "sigs.k8s.io/controller-runtime/pkg/log" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + u "github.com/deckhouse/sds-common-lib/utils" + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + rvrtiebreakercount "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_tie_breaker_count" + testhelpers "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers" +) + +var errExpectedTestError = errors.New("test error") + +var _ = Describe("Reconcile", func() { + scheme := runtime.NewScheme() + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + + var ( + builder *fake.ClientBuilder + cl client.WithWatch + rec *rvrtiebreakercount.Reconciler + ) + + BeforeEach(func() { + builder = testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme)) + cl = nil + rec = nil + }) + + JustBeforeEach(func() { + cl = builder.Build() + rec, _ = rvrtiebreakercount.NewReconciler(cl, logr.New(log.NullLogSink{}), scheme) + }) + + It("returns nil when ReplicatedVolume not found", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKey{Name: "non-existent"}}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + }) + + When("rv created", func() { + var rv v1alpha1.ReplicatedVolume + BeforeEach(func() { + rv = v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv1", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + } + + setRVInitializedCondition(&rv, metav1.ConditionTrue) + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Create(ctx, &rv)).To(Succeed()) + }) + + When("ReplicatedStorageClassName is empty", func() { + BeforeEach(func() { + rv.Spec.ReplicatedStorageClassName = "" + }) + + It("returns nil when ReplicatedStorageClassName is empty", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).Error().NotTo(HaveOccurred()) + }) + }) + + When("RVRs created", func() { + var ( + rvrList v1alpha1.ReplicatedVolumeReplicaList + nodeList []corev1.Node + rsc v1alpha1.ReplicatedStorageClass + ) + + BeforeEach(func() { + rsc = v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc1", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + Topology: "", + }, + } + + // reset lists before populating them + nodeList = nil + rvrList = v1alpha1.ReplicatedVolumeReplicaList{} + + for i := 1; i <= 2; i++ { + node := corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: fmt.Sprintf("node-%d", i), + }, + } + nodeList = append(nodeList, node) + + rvrList.Items = append(rvrList.Items, v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: fmt.Sprintf("rvr-df%d", i), + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: node.Name, + Type: v1alpha1.ReplicaTypeDiskful, + }, + }) + } + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Create(ctx, 
&rsc)).To(Succeed()) + for i := range nodeList { + Expect(cl.Create(ctx, &nodeList[i])).To(Succeed()) + } + for i := range rvrList.Items { + Expect(cl.Create(ctx, &rvrList.Items[i])).To(Succeed()) + } + }) + + When("RV is not initialized yet", func() { + BeforeEach(func() { + setRVInitializedCondition(&rv, metav1.ConditionFalse) + }) + + It("skips reconciliation until Initialized=True", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + Expect(cl.List(ctx, &rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(0))) + }) + }) + + // Initial State: + // FD "node-1": [Diskful] + // FD "node-2": [Diskful] + // TB: [] + // Replication: Availability + // Violates: + // - total replica count must be odd + // Desired state: + // FD "node-1": [Diskful] + // FD "node-2": [Diskful, TieBreaker] + // TB total: 1 + // replicas total: 3 (odd) + It("1. creates one TieBreaker for two Diskful on different FDs", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + Expect(cl.List(ctx, &rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(1))) + + }) + + When("RV has no controller finalizer but tie-breaker creation is needed", func() { + BeforeEach(func() { + rv.Finalizers = nil + builder.WithInterceptorFuncs(interceptor.Funcs{ + Create: func(ctx context.Context, c client.WithWatch, obj client.Object, opts ...client.CreateOption) error { + if rvr, ok := obj.(*v1alpha1.ReplicatedVolumeReplica); ok && rvr.Spec.Type == v1alpha1.ReplicaTypeTieBreaker { + currentRV := &v1alpha1.ReplicatedVolume{} + Expect(c.Get(ctx, client.ObjectKeyFromObject(&rv), currentRV)).To(Succeed()) + Expect(currentRV.Finalizers).To(ContainElement(v1alpha1.ControllerFinalizer)) + } + return c.Create(ctx, obj, opts...) 
+ }, + }) + }) + + It("adds controller finalizer and creates TieBreaker", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + currentRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(&rv), currentRV)).To(Succeed()) + Expect(currentRV.Finalizers).To(ContainElement(v1alpha1.ControllerFinalizer)) + + Expect(cl.List(ctx, &rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(1))) + }) + }) + + When("Access replicas", func() { + BeforeEach(func() { + rv = v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv1", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + } + setRVInitializedCondition(&rv, metav1.ConditionTrue) + rsc = v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{Replication: "Availability"}, + } + nodeList = []corev1.Node{ + {ObjectMeta: metav1.ObjectMeta{Name: "node-1"}}, + {ObjectMeta: metav1.ObjectMeta{Name: "node-2"}}, + } + rvrList.Items = []v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-df1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-1", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-acc1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeAccess, + }, + }, + } + }) + + It("counts Access replicas in FD distribution", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + rvrList := &v1alpha1.ReplicatedVolumeReplicaList{} + Expect(cl.List(ctx, rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(1))) + }) + }) + + /* + + */ + When("more than one TieBreaker is required", func() { + BeforeEach(func() { + rv = v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv1", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + } + setRVInitializedCondition(&rv, metav1.ConditionTrue) + rsc = v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{Replication: "Availability"}, + } + nodeList = []corev1.Node{ + {ObjectMeta: metav1.ObjectMeta{Name: "node-a"}}, + {ObjectMeta: metav1.ObjectMeta{Name: "node-b"}}, + {ObjectMeta: metav1.ObjectMeta{Name: "node-c"}}, + } + rvrList.Items = []v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-df-a1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-a", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-df-b1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-b", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-df-c1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-c", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-acc-c2"}, + Spec: 
v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-c", + Type: v1alpha1.ReplicaTypeAccess, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{Name: "rvr-acc-c3"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "node-c", + Type: v1alpha1.ReplicaTypeAccess, + }, + }, + } + }) + + It("creates two TieBreakers for FD distribution 1+1+3", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + rvrList := &v1alpha1.ReplicatedVolumeReplicaList{} + Expect(cl.List(ctx, rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(2))) + }) + }) + + When("replicas without NodeName", func() { + BeforeEach(func() { + rsc = v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{Name: "rsc1"}, + Spec: v1alpha1.ReplicatedStorageClassSpec{Replication: "Availability"}, + } + nodeList = []corev1.Node{ + {ObjectMeta: metav1.ObjectMeta{Name: "node-1"}}, + } + rvrList.Items = rvrList.Items[:1] + rvrList.Items[0] = v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{Name: "rvr-df1"}, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: "", + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + }) + + It("handles replicas without NodeName", func(ctx SpecContext) { + result, err := rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}) + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + }) + }) + + When("different zones", func() { + BeforeEach(func() { + rsc.Spec.Topology = "TransZonal" + rsc.Spec.Zones = []string{"zone-0", "zone-1"} + for i := range nodeList { + nodeList[i].Labels = map[string]string{corev1.LabelTopologyZone: fmt.Sprintf("zone-%d", i)} + } + }) + // Initial State: + // FD "zone-a/node-1": [Diskful] + // FD "zone-b/node-2": [Diskful] + // TB: [] + // Replication: Availability + // Topology: TransZonal + // Violates: + // - total replica count must be odd + // Desired state: + // FD "zone-a/node-1": [Diskful] + // FD "zone-b/node-2": [Diskful] + // FD "zone-b/node-3": [TieBreaker] + // TB total: 1 + // replicas total: 3 (odd) + It("2. creates one TieBreaker for two Diskful on different FDs with TransZonal topology", func(ctx SpecContext) { + + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + Expect(cl.List(ctx, &rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(1))) + }) + }) + + When("replicas on the same node", func() { + BeforeEach(func() { + for i := range rvrList.Items { + rvrList.Items[i].Spec.NodeName = nodeList[0].Name + } + }) + + It("3. 
create TieBreaker when all Diskful are in the same FD", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + Expect(cl.List(ctx, &rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(1))) + }) + }) + + When("extra TieBreakers", func() { + BeforeEach(func() { + rvrList.Items = []v1alpha1.ReplicatedVolumeReplica{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-df1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: nodeList[0].Name, + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-df2", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv1", + NodeName: "node-2", + Type: v1alpha1.ReplicaTypeDiskful, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-tb1", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv1", + Type: v1alpha1.ReplicaTypeTieBreaker, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-tb2", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "rv1", + Type: v1alpha1.ReplicaTypeTieBreaker, + }, + }, + } + }) + + // Initial State: + // FD "node-1": [Diskful] + // FD "node-2": [Diskful] + // TB: [TieBreaker, TieBreaker] + // Violates: + // - minimality of TieBreaker count for given FD distribution and odd total replica requirement + // Desired state: + // FD "node-1": [Diskful] + // FD "node-2": [Diskful, TieBreaker] + // TB total: 1 + // replicas total: 3 (odd) + It("4. deletes extra TieBreakers and leaves one", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)})).To(Equal(reconcile.Result{})) + + Expect(cl.List(ctx, &rvrList)).To(Succeed()) + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(1))) + }) + + When("Delete RVR fails", func() { + BeforeEach(func() { + builder.WithInterceptorFuncs(interceptor.Funcs{ + Delete: func(ctx context.Context, c client.WithWatch, obj client.Object, opts ...client.DeleteOption) error { + if rvr, ok := obj.(*v1alpha1.ReplicatedVolumeReplica); ok && rvr.Spec.Type == v1alpha1.ReplicaTypeTieBreaker { + return errExpectedTestError + } + return c.Delete(ctx, obj, opts...) + }, + }) + }) + + It("returns same error", func(ctx SpecContext) { + Expect(rec.Reconcile( + ctx, + reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}, + )).Error().To(MatchError(errExpectedTestError)) + }) + }) + }) + + DescribeTableSubtree("propagates client errors", + func(setupInterceptors func(*fake.ClientBuilder)) { + BeforeEach(func() { + setupInterceptors(builder) + }) + + It("returns same error", func(ctx SpecContext) { + Expect(rec.Reconcile( + ctx, + reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&rv)}, + )).Error().To(MatchError(errExpectedTestError)) + }) + }, + Entry("Get ReplicatedVolume fails", func(b *fake.ClientBuilder) { + b.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedVolume); ok { + return errExpectedTestError + } + return c.Get(ctx, key, obj, opts...) 
+ }, + }) + }), + Entry("Get ReplicatedStorageClass fails", func(b *fake.ClientBuilder) { + b.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedStorageClass); ok { + return errExpectedTestError + } + return c.Get(ctx, key, obj, opts...) + }, + }) + }), + Entry("List Nodes fails", func(b *fake.ClientBuilder) { + b.WithInterceptorFuncs(interceptor.Funcs{ + List: func(ctx context.Context, c client.WithWatch, list client.ObjectList, opts ...client.ListOption) error { + if _, ok := list.(*corev1.NodeList); ok { + return errExpectedTestError + } + return c.List(ctx, list, opts...) + }, + }) + }), + Entry("List ReplicatedVolumeReplicaList fails", func(b *fake.ClientBuilder) { + b.WithInterceptorFuncs(interceptor.Funcs{ + List: func(ctx context.Context, c client.WithWatch, list client.ObjectList, opts ...client.ListOption) error { + if _, ok := list.(*v1alpha1.ReplicatedVolumeReplicaList); ok { + return errExpectedTestError + } + return c.List(ctx, list, opts...) + }, + }) + }), + Entry("Create RVR fails", func(b *fake.ClientBuilder) { + b.WithInterceptorFuncs(interceptor.Funcs{ + Create: func(ctx context.Context, c client.WithWatch, obj client.Object, opts ...client.CreateOption) error { + if rvr, ok := obj.(*v1alpha1.ReplicatedVolumeReplica); ok && rvr.Spec.Type == v1alpha1.ReplicaTypeTieBreaker { + return errExpectedTestError + } + return c.Create(ctx, obj, opts...) + }, + }) + }), + ) + + }) + + }) +}) + +type FDReplicaCounts struct { + Diskful int + Access int + TieBreaker int +} + +// EntryConfig allows overriding default test configuration per entry +type EntryConfig struct { + // Topology overrides RSC topology. Defaults to "TransZonal" if empty. + Topology v1alpha1.ReplicatedStorageClassTopology + // Zones overrides RSC zones. If nil, uses all FD keys. If empty slice, uses no zones. 
+ Zones *[]string + + ExpectedReconcileError error +} + +func setRVInitializedCondition(rv *v1alpha1.ReplicatedVolume, status metav1.ConditionStatus) { + rv.Status = v1alpha1.ReplicatedVolumeStatus{ + Conditions: []metav1.Condition{{ + Type: v1alpha1.ReplicatedVolumeCondInitializedType, + Status: status, + LastTransitionTime: metav1.Now(), + Reason: "test", + }}, + } +} + +var _ = Describe("DesiredTieBreakerTotal", func() { + DescribeTableSubtree("returns correct TieBreaker count for fdCount < 4", + func(_ string, fdExtended map[string]FDReplicaCounts, expected int, cfgPtr *EntryConfig) { + When("reconciler creates expected TieBreaker replicas", func() { + scheme := runtime.NewScheme() + Expect(corev1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + + var ( + builder *fake.ClientBuilder + cl client.WithWatch + rec *rvrtiebreakercount.Reconciler + rv *v1alpha1.ReplicatedVolume + cfg EntryConfig + rscZones []string + nodeList []corev1.Node + ) + + BeforeEach(func() { + // Apply defaults for config + cfg = EntryConfig{Topology: "TransZonal"} + if cfgPtr != nil { + if cfgPtr.Topology != "" { + cfg.Topology = cfgPtr.Topology + } + cfg.Zones = cfgPtr.Zones + } + + cl = nil + rec = nil + nodeList = nil + + rv = &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rv1", + Finalizers: []string{v1alpha1.ControllerFinalizer}, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + ReplicatedStorageClassName: "rsc1", + }, + } + setRVInitializedCondition(rv, metav1.ConditionTrue) + + // Determine zones for RSC + if cfg.Zones != nil { + rscZones = *cfg.Zones + } else { + // Default: use all FD keys as zones + rscZones = slices.Collect(maps.Keys(fdExtended)) + } + + rsc := &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rsc1", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + Replication: "Availability", + Topology: cfg.Topology, + Zones: rscZones, + }, + } + + var objects []client.Object + objects = append(objects, rv, rsc) + + for fdName, fdReplicaCounts := range fdExtended { + var nodeNameSlice []string + for i := range 10 { + nodeName := fmt.Sprintf("node-%s-%d", fdName, i) + node := corev1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: nodeName, + Labels: map[string]string{corev1.LabelTopologyZone: fdName}, + }, + } + nodeList = append(nodeList, node) + objects = append(objects, &node) + nodeNameSlice = append(nodeNameSlice, nodeName) + + } + index := 0 + for j := 0; j < fdReplicaCounts.Diskful; j++ { + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: fmt.Sprintf("rvr-df-%s-%d", fdName, j+1), + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: nodeNameSlice[index], + Type: v1alpha1.ReplicaTypeDiskful, + }, + } + objects = append(objects, rvr) + index++ + } + + for j := 0; j < fdReplicaCounts.Access; j++ { + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: fmt.Sprintf("rvr-ac-%s-%d", fdName, j+1), + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: rv.Name, + NodeName: nodeNameSlice[index], + Type: v1alpha1.ReplicaTypeAccess, + }, + } + objects = append(objects, rvr) + index++ + } + + for j := 0; j < fdReplicaCounts.TieBreaker; j++ { + rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: fmt.Sprintf("rvr-tb-%s-%d", fdName, j+1), + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: 
rv.Name, + NodeName: nodeNameSlice[index], + Type: v1alpha1.ReplicaTypeTieBreaker, + }, + } + objects = append(objects, rvr) + index++ + } + } + builder = testhelpers.WithRVRByReplicatedVolumeNameIndex(fake.NewClientBuilder().WithScheme(scheme)).WithObjects(objects...) + }) + + JustBeforeEach(func() { + cl = builder.Build() + rec, _ = rvrtiebreakercount.NewReconciler(cl, logr.New(log.NullLogSink{}), scheme) + }) + + It("Reconcile works", func(ctx SpecContext) { + req := reconcile.Request{NamespacedName: client.ObjectKeyFromObject(rv)} + result, err := rec.Reconcile(context.Background(), req) + + fmt.Fprintf(GinkgoWriter, " reconcile result: %#v, err: %v\n", result, err) + + if cfgPtr != nil && cfgPtr.ExpectedReconcileError != nil { + Expect(err).To(MatchError(cfgPtr.ExpectedReconcileError)) + Expect(result).To(Equal(reconcile.Result{})) + return + } + + Expect(err).NotTo(HaveOccurred()) + Expect(result).To(Equal(reconcile.Result{})) + + rvrList := &v1alpha1.ReplicatedVolumeReplicaList{} + Expect(cl.List(ctx, rvrList)).To(Succeed()) + + fmt.Fprintf(GinkgoWriter, " total replicas after reconcile: %d\n", len(rvrList.Items)) + + Expect(rvrList.Items).To(HaveTieBreakerCount(Equal(expected))) + + // Check FD distribution balance (only for TransZonal topology) + if cfg.Topology == "TransZonal" { + Expect(rvrList.Items).To(HaveBalancedFDDistribution(rscZones, nodeList)) + } + }) + }) + }, + func(name string, fd map[string]FDReplicaCounts, expected int, cfgPtr *EntryConfig) string { + // Sort zone names for predictable output + zones := slices.Collect(maps.Keys(fd)) + slices.Sort(zones) + + s := []string{} + for _, zone := range zones { + counts := fd[zone] + // Sum only Diskful + Access (without TieBreaker) + total := counts.Diskful + counts.Access + s = append(s, fmt.Sprintf("%d", total)) + } + + // Add topology info if non-default + topologyInfo := "" + if cfgPtr != nil && cfgPtr.Topology != "" && cfgPtr.Topology != "TransZonal" { + topologyInfo = fmt.Sprintf(" [%s]", cfgPtr.Topology) + } + if cfgPtr != nil && cfgPtr.Zones != nil { + topologyInfo += fmt.Sprintf(" zones=%v", *cfgPtr.Zones) + } + + return fmt.Sprintf("case %s: %d FDs, %s -> %d%s", name, len(fd), strings.Join(s, "+"), expected, topologyInfo) + }, + Entry(nil, "1", map[string]FDReplicaCounts{}, 0, nil), + Entry(nil, "2", map[string]FDReplicaCounts{"a": {Diskful: 1}}, 0, nil), + Entry(nil, "3", map[string]FDReplicaCounts{"a": {Diskful: 0}, "b": {Diskful: 0}}, 0, nil), + Entry(nil, "4", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}}, 1, nil), + Entry(nil, "5", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 2}, "c": {}}, 2, nil), + Entry(nil, "6", map[string]FDReplicaCounts{"a": {Diskful: 2}, "b": {Diskful: 2}, "c": {}}, 1, nil), + Entry(nil, "7", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 3}, "c": {}}, 3, nil), + Entry(nil, "8", map[string]FDReplicaCounts{"a": {Diskful: 2}, "b": {Diskful: 3}, "c": {}}, 2, nil), + Entry(nil, "8.1", map[string]FDReplicaCounts{"a": {Diskful: 2}, "b": {Diskful: 3}}, 0, nil), + Entry(nil, "9", map[string]FDReplicaCounts{"a": {Diskful: 3}, "b": {Diskful: 3}, "c": {}}, 3, nil), + Entry(nil, "10", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}}, 0, nil), + + Entry(nil, "11", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 2}}, 1, nil), + Entry(nil, "12", map[string]FDReplicaCounts{"a": {Diskful: 2}, "b": {Diskful: 2}, "c": {Diskful: 2}}, 1, nil), + Entry(nil, "13", 
map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 2}, "c": {Diskful: 2}}, 0, nil), + Entry(nil, "14", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 3}}, 2, nil), + Entry(nil, "15", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 3}, "c": {Diskful: 5}}, 4, nil), + // Test cases with mixed replica types + Entry(nil, "16", map[string]FDReplicaCounts{"a": {Diskful: 1, Access: 1}, "b": {Diskful: 1}}, 0, nil), + Entry(nil, "17", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Access: 1}}, 1, nil), + Entry(nil, "18", map[string]FDReplicaCounts{"a": {Diskful: 1, Access: 1}, "b": {Diskful: 1, Access: 1}}, 1, nil), + Entry(nil, "19", map[string]FDReplicaCounts{"a": {Diskful: 2, Access: 1}, "b": {Diskful: 1, Access: 2}}, 1, nil), + Entry(nil, "20", map[string]FDReplicaCounts{"a": {Diskful: 1, Access: 1}, "b": {Diskful: 1, Access: 1}, "c": {Diskful: 1}}, 0, nil), + Entry(nil, "21", map[string]FDReplicaCounts{"a": {Diskful: 2, Access: 1, TieBreaker: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}, "d": {}}, 4, nil), + // with deletion of existing TBs + Entry(nil, "22", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}, "d": {TieBreaker: 1}}, 0, nil), + Entry(nil, "23", map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}, "d": {TieBreaker: 2}}, 0, nil), + Entry(nil, "24", map[string]FDReplicaCounts{"a": {Diskful: 1, Access: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}, "d": {TieBreaker: 2}}, 1, nil), + + // ===== Tests with Zonal topology (FD = node, not zone) ===== + Entry(nil, "Z1", map[string]FDReplicaCounts{"node-a": {Diskful: 1}, "node-b": {Diskful: 1}}, 1, + &EntryConfig{Topology: "Zonal"}), + Entry(nil, "Z2", map[string]FDReplicaCounts{"node-a": {Diskful: 1}, "node-b": {Diskful: 1}, "node-c": {Diskful: 1}}, 0, + &EntryConfig{Topology: "Zonal"}), + Entry(nil, "Z3", map[string]FDReplicaCounts{"node-a": {Diskful: 2}, "node-b": {Diskful: 1}}, 0, + &EntryConfig{Topology: "Zonal"}), + Entry(nil, "Z4", map[string]FDReplicaCounts{"node-a": {Diskful: 1}, "node-b": {Diskful: 1}, "node-c": {TieBreaker: 1}}, 1, + &EntryConfig{Topology: "Zonal"}), + + // ===== Tests with Any topology (FD = node) ===== + Entry(nil, "A1", map[string]FDReplicaCounts{"node-a": {Diskful: 1}, "node-b": {Diskful: 1}}, 1, + &EntryConfig{Topology: "Any"}), + Entry(nil, "A2", map[string]FDReplicaCounts{"node-a": {Diskful: 1}, "node-b": {Diskful: 1}, "node-c": {Diskful: 1}}, 0, + &EntryConfig{Topology: "Any"}), + + // ===== BUG REPRODUCTION: TB on node outside allowed zones should be deleted ===== + // 3 Diskful in allowed zones (odd total), 1 TB in zone "c" (not allowed) -> TB should be deleted + Entry(nil, "TB-outside-zones-1", + map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 2}, "c": {TieBreaker: 1}}, 0, + &EntryConfig{Zones: u.Ptr([]string{"a", "b"})}), + // TB in zone "d" (not allowed), 3 Diskful across allowed zones a,b,c -> no TB needed, delete the one in d + Entry(nil, "TB-outside-zones-2", + map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}, "d": {TieBreaker: 1}}, 0, + &EntryConfig{Zones: u.Ptr([]string{"a", "b", "c"})}), + // 3 Diskful in allowed zones (odd), 2 TBs outside allowed zones -> all TBs should be deleted + Entry(nil, "TB-outside-zones-3", + map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}, "d": {TieBreaker: 1}, "e": {TieBreaker: 1}}, 0, + &EntryConfig{Zones: u.Ptr([]string{"a", "b", "c"})}), + // TB in excluded zone when no 
TB is needed at all + Entry(nil, "TB-outside-zones-4", + map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}, "excluded": {TieBreaker: 2}}, 0, + &EntryConfig{Zones: u.Ptr([]string{"a", "b", "c"})}), + + // ===== Diskful replica in zone outside RSC zones ===== + // Diskful in zone "c" which is NOT in RSC zones ["a", "b"] + // Total replicas = 3 (odd), so no TB needed + // BUG REPRODUCTION: if controller ignores replicas outside zones, it will see only 2 Diskful + // and create 1 TB, resulting in 4 total replicas (even) - violates spec! + Entry(nil, "Diskful-outside-zones-1", + map[string]FDReplicaCounts{"a": {Diskful: 1}, "b": {Diskful: 1}, "c": {Diskful: 1}}, 0, + &EntryConfig{Zones: u.Ptr([]string{"a", "b"}), ExpectedReconcileError: rvrtiebreakercount.ErrBaseReplicaNodeIsNotInReplicatedStorageClassZones}), + + // ===== TB in wrong zone - should be redistributed ===== + // Initial: a has 1df+2ac+2tb=5 replicas, b has 1df, c has 1df + // Controller sees currentTB=2, desiredTB=2 -> "No need to change" + // BUG REPRODUCTION: TB should be in zones b and c, not in a! + // Distribution after reconcile should be balanced (diff <= 1) + Entry(nil, "TB-wrong-distribution", + map[string]FDReplicaCounts{ + "a": {Diskful: 1, Access: 2, TieBreaker: 2}, + "b": {Diskful: 1}, + "c": {Diskful: 1}, + }, 2, &EntryConfig{Zones: u.Ptr([]string{"a", "b", "c"})}), + + Entry(nil, "TB-wrong-distribution2", + map[string]FDReplicaCounts{ + "a": {Diskful: 4, Access: 2}, //6 + "b": {Diskful: 1, TieBreaker: 8}, // 1+4 + "c": {Diskful: 1}, // 1+4 + }, 9, &EntryConfig{Zones: u.Ptr([]string{"a", "b", "c"})}), + ) +}) diff --git a/images/controller/internal/controllers/rvr_tie_breaker_count/rvr_tie_breaker_count_suite_test.go b/images/controller/internal/controllers/rvr_tie_breaker_count/rvr_tie_breaker_count_suite_test.go new file mode 100644 index 000000000..40c9fb837 --- /dev/null +++ b/images/controller/internal/controllers/rvr_tie_breaker_count/rvr_tie_breaker_count_suite_test.go @@ -0,0 +1,118 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrtiebreakercount_test + +import ( + "fmt" + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + "github.com/onsi/gomega/types" + corev1 "k8s.io/api/core/v1" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +func TestRvrTieBreakerCount(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "RvrTieBreakerCount Suite") +} + +func HaveTieBreakerCount(matcher types.GomegaMatcher) types.GomegaMatcher { + return WithTransform(func(list []v1alpha1.ReplicatedVolumeReplica) int { + tbCount := 0 + for _, rvr := range list { + if rvr.Spec.Type == v1alpha1.ReplicaTypeTieBreaker { + tbCount++ + } + } + return tbCount + }, matcher) +} + +// HaveBalancedFDDistribution checks that scheduled TBs are not in zones with max base replica count +// (if there are zones with fewer base replicas). +// Note: This matcher only checks TB placement, not the total count. 
+// Total TB count is verified by HaveTieBreakerCount separately. +// zones - list of allowed zones from RSC +// nodeList - list of nodes with zone labels +func HaveBalancedFDDistribution(zones []string, nodeList []corev1.Node) types.GomegaMatcher { + return WithTransform(func(list []v1alpha1.ReplicatedVolumeReplica) error { + // Build node -> zone map + nodeToZone := make(map[string]string) + for _, node := range nodeList { + nodeToZone[node.Name] = node.Labels[corev1.LabelTopologyZone] + } + + // Count base replicas (Diskful + Access) per zone + zoneBaseCounts := make(map[string]int) + for _, zone := range zones { + zoneBaseCounts[zone] = 0 + } + + // Count scheduled TBs per zone + zoneTBCounts := make(map[string]int) + + for _, rvr := range list { + if rvr.Spec.NodeName == "" { + continue // skip unscheduled + } + + zone := nodeToZone[rvr.Spec.NodeName] + if _, ok := zoneBaseCounts[zone]; !ok { + continue // zone not in allowed zones + } + + if rvr.Spec.Type == v1alpha1.ReplicaTypeTieBreaker { + zoneTBCounts[zone]++ + } else { + zoneBaseCounts[zone]++ + } + } + + // Find max base replica count + maxBaseCount := 0 + for _, count := range zoneBaseCounts { + maxBaseCount = max(maxBaseCount, count) + } + + // Check: scheduled TBs should not be in zones with max base count + // (if there are zones with fewer base replicas) + hasZonesWithFewerBase := false + for _, count := range zoneBaseCounts { + if count < maxBaseCount { + hasZonesWithFewerBase = true + break + } + } + + if hasZonesWithFewerBase { + for zone, tbCount := range zoneTBCounts { + if tbCount > 0 && zoneBaseCounts[zone] == maxBaseCount { + return fmt.Errorf( + "scheduled TB in zone %q with max base count (%d), but there are zones with fewer base replicas; "+ + "zoneBaseCounts=%v, zoneTBCounts=%v", + zone, maxBaseCount, zoneBaseCounts, zoneTBCounts, + ) + } + } + } + + return nil + }, Succeed()) +} diff --git a/images/controller/internal/controllers/rvr_volume/controller.go b/images/controller/internal/controllers/rvr_volume/controller.go new file mode 100644 index 000000000..d5386100c --- /dev/null +++ b/images/controller/internal/controllers/rvr_volume/controller.go @@ -0,0 +1,47 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrvolume + +import ( + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +const ( + controllerName = "rvr_volume_controller" +) + +func BuildController(mgr manager.Manager) error { + r := &Reconciler{ + cl: mgr.GetClient(), + log: mgr.GetLogger().WithName(controllerName).WithName("Reconciler"), + scheme: mgr.GetScheme(), + } + + return builder.ControllerManagedBy(mgr). + Named(controllerName). + For( + &v1alpha1.ReplicatedVolumeReplica{}). 
+ Watches( + &snc.LVMLogicalVolume{}, + handler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &v1alpha1.ReplicatedVolumeReplica{})). + Complete(r) +} diff --git a/images/controller/internal/controllers/rvr_volume/doc.go b/images/controller/internal/controllers/rvr_volume/doc.go new file mode 100644 index 000000000..b67cc4611 --- /dev/null +++ b/images/controller/internal/controllers/rvr_volume/doc.go @@ -0,0 +1,109 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package rvrvolume implements the rvr-volume-controller, which manages the lifecycle +// of LVM Logical Volumes (LLV) backing Diskful replicas. +// +// # Controller Responsibilities +// +// The controller manages LVM volumes by: +// - Creating LLV resources for Diskful replicas +// - Setting owner references on LLVs pointing to RVRs +// - Updating rvr.status.lvmLogicalVolumeName when LLV is ready +// - Deleting LLVs when replica type changes from Diskful +// - Clearing lvmLogicalVolumeName status after LLV deletion +// +// # Watched Resources +// +// The controller watches: +// - ReplicatedVolumeReplica: To manage LLV lifecycle +// - LvmLogicalVolume: To track LLV readiness and status +// +// # LLV Lifecycle Management +// +// Create LLV when: +// - rvr.spec.type==Diskful +// - rvr.metadata.deletionTimestamp==nil (not being deleted) +// - No LLV exists yet for this RVR +// +// Delete LLV when: +// - rvr.spec.type!=Diskful (type changed to Access or TieBreaker) +// - rvr.status.actualType==rvr.spec.type (actual type matches desired) +// - This ensures DRBD has released the volume before deletion +// +// # Reconciliation Flow +// +// 1. Check prerequisites: +// - RV must have the controller finalizer +// 2. Get the RVR being reconciled +// 3. Check rvr.spec.type: +// +// For Diskful replicas (rvr.spec.type==Diskful AND deletionTimestamp==nil): +// +// a. Check if LLV already exists (by owner reference or name) +// b. If LLV doesn't exist: +// - Create new LLV resource +// - Set spec.size from RV +// - Set spec.lvmVolumeGroupName from storage pool +// - Set metadata.ownerReferences pointing to RVR +// c. If LLV exists and is ready: +// - Update rvr.status.lvmLogicalVolumeName to LLV name +// +// For non-Diskful replicas (rvr.spec.type!=Diskful): +// +// a. Check if rvr.status.actualType==rvr.spec.type (type transition complete) +// b. If types match and LLV exists: +// - Delete the LLV +// c. After LLV deletion: +// - Clear rvr.status.lvmLogicalVolumeName +// +// 4. 
If rvr.metadata.deletionTimestamp is set: +// - LLV will be deleted via owner reference cascade (handled by Kubernetes) +// +// # Status Updates +// +// The controller maintains: +// - rvr.status.lvmLogicalVolumeName - Name of the associated LLV (when ready) +// +// Creates and manages: +// - LvmLogicalVolume resources with owner references +// +// # Owner References +// +// LLVs have ownerReferences set to point to their RVR: +// - Enables automatic LLV cleanup when RVR is deleted +// - Uses controller reference pattern (controller=true, blockOwnerDeletion=true) +// +// # Special Notes +// +// Type Transitions: +// - When replica type changes (e.g., Diskful→Access for quorum rebalancing) +// - Must wait for rvr.status.actualType to match rvr.spec.type +// - This ensures DRBD has released the disk before LVM volume deletion +// - Prevents data corruption and resource conflicts +// +// LLV Readiness: +// - Only set lvmLogicalVolumeName when LLV is ready (can be used by DRBD) +// - This prevents drbd-config-controller from trying to use non-ready volumes +// +// Storage Pool Integration: +// - LLV is created on the storage pool specified in ReplicatedStorageClass +// - Node must have the required LVM volume group available +// - Scheduling controller ensures nodes are selected appropriately +// +// The LLV provides the underlying storage layer for DRBD replication, bridging +// the ReplicatedVolume abstraction with actual LVM-based storage on nodes. +package rvrvolume diff --git a/images/controller/internal/controllers/rvr_volume/reconciler.go b/images/controller/internal/controllers/rvr_volume/reconciler.go new file mode 100644 index 000000000..ce6b1460a --- /dev/null +++ b/images/controller/internal/controllers/rvr_volume/reconciler.go @@ -0,0 +1,443 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrvolume + +import ( + "context" + "fmt" + "strings" + "time" + + "github.com/go-logr/logr" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + obju "github.com/deckhouse/sds-replicated-volume/api/objutilv1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// TODO: Update sds-node-configurator to export this contants and reuse here +const ( + llvTypeThick = "Thick" + llvTypeThin = "Thin" + + // llvNamePrefix is used for both the K8s object name of LVMLogicalVolume and the actual LV name on the node. + // NOTE: Keep in sync with name length constraints (see api/v1alpha1 validations). 
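+	// Hypothetical example: an RVR named "pvc-xxx-r0" would get an LVMLogicalVolume (and LV on the node) named "rvr-pvc-xxx-r0".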
+ llvNamePrefix = "rvr-" +) + +type Reconciler struct { + cl client.Client + log logr.Logger + scheme *runtime.Scheme +} + +var _ reconcile.Reconciler = (*Reconciler)(nil) + +// NewReconciler is a small helper constructor that is primarily useful for tests. +func NewReconciler(cl client.Client, log logr.Logger, scheme *runtime.Scheme) *Reconciler { + return &Reconciler{ + cl: cl, + log: log, + scheme: scheme, + } +} + +// Reconcile reconciles a ReplicatedVolumeReplica by managing its associated LVMLogicalVolume. +// It handles creation, deletion, and status updates of LVMLogicalVolumes based on the RVR state. +func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) { + log := r.log.WithName("Reconcile").WithValues("req", req) + log.Info("Reconciling started") + start := time.Now() + defer func() { + log.Info("Reconcile finished", "duration", time.Since(start).String()) + }() + + rvr := &v1alpha1.ReplicatedVolumeReplica{} + err := r.cl.Get(ctx, req.NamespacedName, rvr) + if err != nil { + if apierrors.IsNotFound(err) { + log.Info("ReplicatedVolumeReplica not found, ignoring reconcile request") + return reconcile.Result{}, nil + } + log.Error(err, "getting ReplicatedVolumeReplica") + return reconcile.Result{}, err + } + + if !rvr.DeletionTimestamp.IsZero() { + return reconcile.Result{}, wrapReconcileLLVDeletion(ctx, r.cl, log, rvr) + } + + // rvr.spec.nodeName will be set once and will not change again. + if rvr.Spec.Type == v1alpha1.ReplicaTypeDiskful && rvr.Spec.NodeName != "" { + return reconcile.Result{}, wrapReconcileLLVNormal(ctx, r.cl, r.scheme, log, rvr) + } + + // RVR is not diskful, so we need to delete the LLV if it exists and the actual type is the same as the spec type. + if rvr.Spec.Type != v1alpha1.ReplicaTypeDiskful && rvr.Status.ActualType == rvr.Spec.Type { + return reconcile.Result{}, wrapReconcileLLVDeletion(ctx, r.cl, log, rvr) + } + + return reconcile.Result{}, nil +} + +// wrapReconcileLLVDeletion wraps reconcileLLVDeletion and updates the BackingVolumeCreated condition. +func wrapReconcileLLVDeletion(ctx context.Context, cl client.Client, log logr.Logger, rvr *v1alpha1.ReplicatedVolumeReplica) error { + if err := reconcileLLVDeletion(ctx, cl, log, rvr); err != nil { + reconcileErr := err + // TODO: Can record the reconcile error in the message to the condition + if conditionErr := updateBackingVolumeCreatedCondition(ctx, cl, log, rvr, metav1.ConditionTrue, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeDeletionFailed, "Backing volume deletion failed: "+reconcileErr.Error()); conditionErr != nil { + return fmt.Errorf("updating BackingVolumeCreated condition: %w; reconcile error: %w", conditionErr, reconcileErr) + } + return reconcileErr + } + + if err := updateBackingVolumeCreatedCondition(ctx, cl, log, rvr, metav1.ConditionFalse, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonNotApplicable, "Replica is not diskful"); err != nil { + return fmt.Errorf("updating BackingVolumeCreated condition: %w", err) + } + + return nil +} + +// reconcileLLVDeletion handles deletion of LVMLogicalVolume associated with the RVR. +// If LLV is not found, it clears the LVMLogicalVolumeName from RVR status. +// If LLV exists, it deletes it and clears the LVMLogicalVolumeName from RVR status when LLV is actually deleted. 
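+// The deletion flow is idempotent: while the LLV is still terminating, repeated reconciles are no-ops,
+// and once it is gone the stale name is cleared from the RVR status.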
+func reconcileLLVDeletion(ctx context.Context, cl client.Client, log logr.Logger, rvr *v1alpha1.ReplicatedVolumeReplica) error { + log = log.WithName("ReconcileLLVDeletion") + + if rvr.Status.LVMLogicalVolumeName == "" { + log.V(4).Info("No LVMLogicalVolumeName in status, skipping deletion") + return nil + } + + llvName := rvr.Status.LVMLogicalVolumeName + llv, err := getLLVByName(ctx, cl, llvName) + switch { + case err != nil && apierrors.IsNotFound(err): + log.V(4).Info("LVMLogicalVolume not found in cluster, clearing status", "llvName", llvName) + if err := ensureLVMLogicalVolumeNameInStatus(ctx, cl, rvr, ""); err != nil { + return fmt.Errorf("clearing LVMLogicalVolumeName from status: %w", err) + } + case err != nil: + return fmt.Errorf("checking if llv exists: %w", err) + default: + log.V(4).Info("LVMLogicalVolume found in cluster, deleting it", "llvName", llvName) + if err := deleteLLV(ctx, cl, llv, log); err != nil { + return fmt.Errorf("deleting llv: %w", err) + } + } + + return nil +} + +// wrapReconcileLLVNormal wraps reconcileLLVNormal and updates the BackingVolumeCreated condition. +func wrapReconcileLLVNormal(ctx context.Context, cl client.Client, scheme *runtime.Scheme, log logr.Logger, rvr *v1alpha1.ReplicatedVolumeReplica) error { + if err := reconcileLLVNormal(ctx, cl, scheme, log, rvr); err != nil { + reconcileErr := err + // TODO: Can record the reconcile error in the message to the condition + if conditionErr := updateBackingVolumeCreatedCondition(ctx, cl, log, rvr, metav1.ConditionFalse, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeCreationFailed, "Backing volume creation failed: "+reconcileErr.Error()); conditionErr != nil { + return fmt.Errorf("updating BackingVolumeCreated condition: %w; reconcile error: %w", conditionErr, reconcileErr) + } + return reconcileErr + } + return nil +} + +// reconcileLLVNormal reconciles LVMLogicalVolume for a normal (non-deleting) RVR +// by finding it via ownerReference. If not found, creates a new LLV. If found and created, +// updates RVR status with the LLV name. +func reconcileLLVNormal(ctx context.Context, cl client.Client, scheme *runtime.Scheme, log logr.Logger, rvr *v1alpha1.ReplicatedVolumeReplica) error { + log = log.WithName("ReconcileLLVNormal") + + llv, err := getLLVByRVR(ctx, cl, rvr) + + if err != nil && !apierrors.IsNotFound(err) { + return fmt.Errorf("getting LVMLogicalVolume by name %s: %w", rvr.Name, err) + } + + if llv == nil { + log.V(4).Info("LVMLogicalVolume not found, creating it", "rvrName", rvr.Name) + if err := createLLV(ctx, cl, scheme, rvr, log); err != nil { + return fmt.Errorf("creating LVMLogicalVolume: %w", err) + } + + if err := updateBackingVolumeCreatedCondition(ctx, cl, log, rvr, metav1.ConditionFalse, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeNotReady, "Backing volume is not ready"); err != nil { + return fmt.Errorf("updating BackingVolumeCreated condition: %w", err) + } + + // Finish reconciliation by returning nil. When LLV becomes ready we get another reconcile event. 
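+		// (The controller watches LVMLogicalVolume objects via owner reference, see BuildController in controller.go.)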
+ return nil + } + + log.Info("LVMLogicalVolume found, checking if it is ready", "llvName", llv.Name) + if !isLLVPhaseCreated(llv) { + if err := updateBackingVolumeCreatedCondition(ctx, cl, log, rvr, metav1.ConditionFalse, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeNotReady, "Backing volume is not ready"); err != nil { + return fmt.Errorf("updating BackingVolumeCreated condition: %w", err) + } + log.Info("LVMLogicalVolume is not ready, returning nil to wait for next reconcile event", "llvName", llv.Name) + return nil + } + + log.Info("LVMLogicalVolume is ready, updating status", "llvName", llv.Name) + + // TODO: Analyze for future optimization: consider combining multiple patches into fewer API calls. + // Currently we have separate patches for status (LVMLogicalVolumeName + condition) and labels (LVG). + // This could potentially be optimized to reduce API server load and avoid cache inconsistency issues. + + if err := ensureLVMLogicalVolumeNameInStatus(ctx, cl, rvr, llv.Name); err != nil { + return fmt.Errorf("updating LVMLogicalVolumeName in status: %w", err) + } + + // Set LVG label when LLV is ready + if err := ensureLVGLabel(ctx, cl, log, rvr, llv.Spec.LVMVolumeGroupName); err != nil { + return fmt.Errorf("setting LVG label: %w", err) + } + + // TODO: Uncomment when thin pools are extracted to separate objects + // if llv.Spec.Thin != nil && llv.Spec.Thin.PoolName != "" { + // if err := ensureThinPoolLabel(ctx, cl, log, rvr, llv.Spec.Thin.PoolName); err != nil { + // return fmt.Errorf("setting thin pool label: %w", err) + // } + // } + + if err := updateBackingVolumeCreatedCondition(ctx, cl, log, rvr, metav1.ConditionTrue, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeReady, "Backing volume is ready"); err != nil { + return fmt.Errorf("updating BackingVolumeCreated condition: %w", err) + } + + return nil +} + +// getLLV gets a LVMLogicalVolume from the cluster by name. +// Returns the llv object and nil error if found, or nil and an error if not found or on failure. +// The error will be a NotFound error if the object doesn't exist. +func getLLVByName(ctx context.Context, cl client.Client, llvName string) (*snc.LVMLogicalVolume, error) { + llv := &snc.LVMLogicalVolume{} + if err := cl.Get(ctx, client.ObjectKey{Name: llvName}, llv); err != nil { + return nil, fmt.Errorf("getting LVMLogicalVolume %s: %w", llvName, err) + } + return llv, nil +} + +func getLLVByRVR(ctx context.Context, cl client.Client, rvr *v1alpha1.ReplicatedVolumeReplica) (*snc.LVMLogicalVolume, error) { + // If status already points to a specific LLV name, trust it (supports legacy names too). + if rvr.Status.LVMLogicalVolumeName != "" { + return getLLVByName(ctx, cl, rvr.Status.LVMLogicalVolumeName) + } + + // Otherwise, use the prefixed name (new behavior). + return getLLVByName(ctx, cl, llvNamePrefix+rvr.Name) +} + +// ensureLVMLogicalVolumeNameInStatus sets or clears the LVMLogicalVolumeName field in RVR status if needed. +// If llvName is empty string, the field is cleared. Otherwise, it is set to the provided value. +func ensureLVMLogicalVolumeNameInStatus(ctx context.Context, cl client.Client, rvr *v1alpha1.ReplicatedVolumeReplica, llvName string) error { + if rvr.Status.LVMLogicalVolumeName == llvName { + return nil + } + patch := client.MergeFrom(rvr.DeepCopy()) + rvr.Status.LVMLogicalVolumeName = llvName + return cl.Status().Patch(ctx, rvr, patch) +} + +// ensureLVGLabel sets the LVG label on RVR if not already set correctly. 
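+// It patches the RVR only when the label value actually changes, so repeated calls are cheap no-ops.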
+func ensureLVGLabel(ctx context.Context, cl client.Client, log logr.Logger, rvr *v1alpha1.ReplicatedVolumeReplica, lvgName string) error { + if lvgName == "" { + return nil + } + + original := rvr.DeepCopy() + + changed := obju.SetLabel(rvr, v1alpha1.LVMVolumeGroupLabelKey, lvgName) + if !changed { + return nil + } + + patch := client.MergeFrom(original) + if err := cl.Patch(ctx, rvr, patch); err != nil { + return err + } + log.V(4).Info("LVG label set on RVR", "lvg", lvgName) + return nil +} + +// createLLV creates a LVMLogicalVolume with ownerReference pointing to RVR. +// It retrieves the ReplicatedVolume and determines the appropriate LVMVolumeGroup and ThinPool +// based on the RVR's node name, then creates the LLV with the correct configuration. +func createLLV(ctx context.Context, cl client.Client, scheme *runtime.Scheme, rvr *v1alpha1.ReplicatedVolumeReplica, log logr.Logger) error { + llvName := llvNamePrefix + rvr.Name + log = log.WithValues("llvName", llvName, "nodeName", rvr.Spec.NodeName) + log.Info("Creating LVMLogicalVolume") + + rv, err := getReplicatedVolumeByName(ctx, cl, rvr.Spec.ReplicatedVolumeName) + if err != nil { + return fmt.Errorf("getting ReplicatedVolume: %w", err) + } + + lvmVolumeGroupName, thinPoolName, err := getLVMVolumeGroupNameAndThinPoolName(ctx, cl, rv.Spec.ReplicatedStorageClassName, rvr.Spec.NodeName) + if err != nil { + return fmt.Errorf("getting LVMVolumeGroupName and ThinPoolName: %w", err) + } + + llvNew := &snc.LVMLogicalVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: llvName, + }, + Spec: snc.LVMLogicalVolumeSpec{ + ActualLVNameOnTheNode: llvName, + LVMVolumeGroupName: lvmVolumeGroupName, + Size: rv.Spec.Size.String(), + }, + } + if thinPoolName == "" { + llvNew.Spec.Type = llvTypeThick + } else { + llvNew.Spec.Type = llvTypeThin + llvNew.Spec.Thin = &snc.LVMLogicalVolumeThinSpec{ + PoolName: thinPoolName, + } + } + + if err := controllerutil.SetControllerReference(rvr, llvNew, scheme); err != nil { + return fmt.Errorf("setting controller reference: %w", err) + } + + // TODO: Define in our spec how to handle IsAlreadyExists here (LLV with this name already exists) + if err := cl.Create(ctx, llvNew); err != nil { + return fmt.Errorf("creating LVMLogicalVolume: %w", err) + } + + log.Info("LVMLogicalVolume created successfully", "llvName", llvNew.Name) + return nil +} + +// isLLVPhaseCreated checks if LLV status phase is "Created". +func isLLVPhaseCreated(llv *snc.LVMLogicalVolume) bool { + return llv.Status != nil && llv.Status.Phase == "Created" +} + +// deleteLLV deletes a LVMLogicalVolume from the cluster. +func deleteLLV(ctx context.Context, cl client.Client, llv *snc.LVMLogicalVolume, log logr.Logger) error { + if llv.DeletionTimestamp != nil { + return nil + } + if err := cl.Delete(ctx, llv); err != nil && !apierrors.IsNotFound(err) { + return fmt.Errorf("deleting LVMLogicalVolume %s: %w", llv.Name, err) + } + log.Info("LVMLogicalVolume marked for deletion", "llvName", llv.Name) + return nil +} + +// getReplicatedVolumeByName gets a ReplicatedVolume from the cluster by name. +// Returns the ReplicatedVolume object and nil error if found, or nil and an error if not found or on failure. 
+func getReplicatedVolumeByName(ctx context.Context, cl client.Client, rvName string) (*v1alpha1.ReplicatedVolume, error) { + rv := &v1alpha1.ReplicatedVolume{} + if err := cl.Get(ctx, client.ObjectKey{Name: rvName}, rv); err != nil { + return nil, err + } + return rv, nil +} + +// getLVMVolumeGroupNameAndThinPoolName gets LVMVolumeGroupName and ThinPoolName from ReplicatedStorageClass. +// It retrieves the ReplicatedStorageClass, then the ReplicatedStoragePool, and finds the LVMVolumeGroup +// that matches the specified node name. +// Returns the LVMVolumeGroup name, ThinPool name (empty string for Thick volumes), and an error. +func getLVMVolumeGroupNameAndThinPoolName(ctx context.Context, cl client.Client, rscName, nodeName string) (string, string, error) { + // Get ReplicatedStorageClass + rsc := &v1alpha1.ReplicatedStorageClass{} + if err := cl.Get(ctx, client.ObjectKey{Name: rscName}, rsc); err != nil { + return "", "", err + } + + // Get StoragePool name from ReplicatedStorageClass + storagePoolName := rsc.Spec.StoragePool + if storagePoolName == "" { + return "", "", fmt.Errorf("ReplicatedStorageClass %s has empty StoragePool", rscName) + } + + // Get ReplicatedStoragePool + rsp := &v1alpha1.ReplicatedStoragePool{} + if err := cl.Get(ctx, client.ObjectKey{Name: storagePoolName}, rsp); err != nil { + return "", "", fmt.Errorf("getting ReplicatedStoragePool %s: %w", storagePoolName, err) + } + + // Find LVMVolumeGroup that matches the node + for _, rspLVG := range rsp.Spec.LVMVolumeGroups { + // Get LVMVolumeGroup resource to check its node + lvg := &snc.LVMVolumeGroup{} + if err := cl.Get(ctx, client.ObjectKey{Name: rspLVG.Name}, lvg); err != nil { + return "", "", fmt.Errorf("getting LVMVolumeGroup %s: %w", rspLVG.Name, err) + } + + // Check if this LVMVolumeGroup is on the specified node + if strings.EqualFold(lvg.Spec.Local.NodeName, nodeName) { + return rspLVG.Name, rspLVG.ThinPoolName, nil + } + } + + return "", "", fmt.Errorf("no LVMVolumeGroup found in ReplicatedStoragePool %s for node %s", storagePoolName, nodeName) +} + +// updateBackingVolumeCreatedCondition updates the BackingVolumeCreated condition on the RVR status +// with the provided status, reason, and message. It checks if the condition already has the same +// parameters before updating to avoid unnecessary status patches. +// Returns error if the patch failed, nil otherwise. 
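+// LastTransitionTime is managed by meta.SetStatusCondition, so it only moves when the condition status actually changes.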
+func updateBackingVolumeCreatedCondition( + ctx context.Context, + cl client.Client, + log logr.Logger, + rvr *v1alpha1.ReplicatedVolumeReplica, + conditionStatus metav1.ConditionStatus, + reason, + message string, +) error { + // Check if condition is already set correctly + if rvr.Status.Conditions != nil { + cond := meta.FindStatusCondition(rvr.Status.Conditions, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedType) + if cond != nil && + cond.Status == conditionStatus && + cond.Reason == reason && + cond.Message == message { + // Already set correctly, no need to update + return nil + } + } + + log.V(4).Info("Updating BackingVolumeCreated condition", "status", conditionStatus, "reason", reason, "message", message) + + // Create patch before making changes + patch := client.MergeFrom(rvr.DeepCopy()) + + // Apply changes + meta.SetStatusCondition( + &rvr.Status.Conditions, + metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedType, + Status: conditionStatus, + Reason: reason, + Message: message, + }, + ) + + // Patch the status in Kubernetes + return cl.Status().Patch(ctx, rvr, patch) +} diff --git a/images/controller/internal/controllers/rvr_volume/reconciler_test.go b/images/controller/internal/controllers/rvr_volume/reconciler_test.go new file mode 100644 index 000000000..939c3fd2d --- /dev/null +++ b/images/controller/internal/controllers/rvr_volume/reconciler_test.go @@ -0,0 +1,1222 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// cspell:words Diskless Logr Subresource apimachinery gomega gvks metav onsi + +package rvrvolume_test + +import ( + "context" + "errors" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + apierrors "k8s.io/apimachinery/pkg/api/errors" // cspell:words apierrors + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/client/interceptor" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" // cspell:words controllerutil + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + rvrvolume "github.com/deckhouse/sds-replicated-volume/images/controller/internal/controllers/rvr_volume" +) + +var _ = Describe("Reconciler", func() { + scheme := runtime.NewScheme() + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(v1alpha1.AddToScheme(scheme)).To(Succeed()) + Expect(snc.AddToScheme(scheme)).To(Succeed()) + + // Available in BeforeEach + var ( + clientBuilder *fake.ClientBuilder + ) + + // Available in JustBeforeEach + var ( + cl client.WithWatch + rec *rvrvolume.Reconciler + ) + + BeforeEach(func() { + clientBuilder = fake.NewClientBuilder(). + WithScheme(scheme). 
+ WithStatusSubresource( + &v1alpha1.ReplicatedVolumeReplica{}, + &v1alpha1.ReplicatedVolume{}) + + // To be safe. To make sure we don't use client from previous iterations + cl = nil + rec = nil + }) + + JustBeforeEach(func() { + cl = clientBuilder.Build() + rec = rvrvolume.NewReconciler(cl, GinkgoLogr, scheme) + }) + + It("returns no error when ReplicatedVolumeReplica does not exist", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{ + NamespacedName: types.NamespacedName{Name: "not-existing-rvr"}, + })).NotTo(Requeue()) + }) + + When("Get fails with non-NotFound error", func() { + internalServerError := errors.New("internal server error") + BeforeEach(func() { + clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, cl client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*v1alpha1.ReplicatedVolumeReplica); ok { + return internalServerError + } + return cl.Get(ctx, key, obj, opts...) + }, + }) + }) + + It("should fail if getting ReplicatedVolumeReplica failed with non-NotFound error", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, reconcile.Request{ + NamespacedName: types.NamespacedName{Name: "test-rvr"}, + })).Error().To(MatchError(internalServerError)) + }) + }) + + When("ReplicatedVolumeReplica created", func() { + var rvr *v1alpha1.ReplicatedVolumeReplica + + BeforeEach(func() { + rvr = &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rvr", + UID: "test-uid", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "test-rv", + Type: v1alpha1.ReplicaTypeDiskful, + NodeName: "node-1", + }, + } + }) + + When("RVR has DeletionTimestamp", func() { + BeforeEach(func() { + rvr.Finalizers = []string{} + }) + + JustBeforeEach(func(ctx SpecContext) { + By("Adding finalizer to RVR so it can be marked for deletion") + rvr.Finalizers = append(rvr.Finalizers, "test-finalizer") + + By("Create RVR first, then delete it to set DeletionTimestamp") + Expect(cl.Create(ctx, rvr)).To(Succeed()) + Expect(cl.Delete(ctx, rvr)).To(Succeed()) + }) + + DescribeTableSubtree("when status does not have LLV name because", + Entry("empty Status", func() { rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{} }), + Entry("empty LVMLogicalVolumeName", func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{LVMLogicalVolumeName: ""} + }), + func(setup func()) { + BeforeEach(func() { + setup() + // Finalizer is already set in parent BeforeEach + }) + + It("should reconcile successfully without error", func(ctx SpecContext) { + // reconcileLLVDeletion should return early when status is nil or empty + // The RVR is already created and deleted in parent JustBeforeEach, setting DeletionTimestamp + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + + When("status has LVMLogicalVolumeName", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + LVMLogicalVolumeName: "test-llv", + } + }) + + When("LLV does not exist in cluster", func() { + It("should clear LVMLogicalVolumeName from status", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + By("Refreshing RVR from cluster to get updated status") + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveNoLVMLogicalVolumeName()) + }) + + When("clearing status fails", func() { + statusPatchError := errors.New("failed to patch status") + BeforeEach(func() { + 
clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + SubResourcePatch: func(ctx context.Context, cl client.Client, subResourceName string, obj client.Object, patch client.Patch, opts ...client.SubResourcePatchOption) error { + if rvrObj, ok := obj.(*v1alpha1.ReplicatedVolumeReplica); ok && rvrObj.Name == "test-rvr" { + if subResourceName == "status" { + return statusPatchError + } + } + return cl.SubResource(subResourceName).Patch(ctx, obj, patch, opts...) + }, + }) + }) + + // RVR is already created and deleted in parent JustBeforeEach + // Client is already created in top-level JustBeforeEach with interceptors from BeforeEach + + It("should fail if patching status failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("clearing LVMLogicalVolumeName from status"))) + }) + }) + }) + + When("LLV exists in cluster", func() { + var llv *snc.LVMLogicalVolume + + BeforeEach(func() { + llv = &snc.LVMLogicalVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-llv", + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + Expect(cl.Create(ctx, llv)).To(Succeed()) + }) + + When("LLV is not marked for deletion", func() { + It("should mark LLV for deletion", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // LLV should be marked for deletion (fake client doesn't delete immediately) + updatedLLV := &snc.LVMLogicalVolume{} + err := cl.Get(ctx, client.ObjectKeyFromObject(llv), updatedLLV) + if err == nil { + // If still exists, it should be marked for deletion + Expect(updatedLLV.DeletionTimestamp).NotTo(BeNil()) + } else { + // Or it might be deleted + Expect(apierrors.IsNotFound(err)).To(BeTrue()) + } + }) + + When("LLV has another finalizer", func() { + BeforeEach(func() { + llv.Finalizers = []string{"other-finalizer"} + }) + + It("should keep other finalizers and set DeletionTimestamp", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + updatedLLV := &snc.LVMLogicalVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(llv), updatedLLV)).To(Succeed()) + Expect(updatedLLV.Finalizers).To(ConsistOf("other-finalizer")) + Expect(updatedLLV.DeletionTimestamp).NotTo(BeNil()) + }) + }) + + When("Delete fails", func() { + deleteError := errors.New("failed to delete") + BeforeEach(func() { + clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Delete: func(ctx context.Context, cl client.WithWatch, obj client.Object, opts ...client.DeleteOption) error { + if llvObj, ok := obj.(*snc.LVMLogicalVolume); ok && llvObj.Name == "test-llv" { + return deleteError + } + return cl.Delete(ctx, obj, opts...) 
+ }, + }) + }) + + // RVR and LLV are already created in parent JustBeforeEach + // Client is already created in top-level JustBeforeEach with interceptors from BeforeEach + + It("should fail if deleting LLV failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("deleting llv"))) + + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveBackingVolumeCreatedConditionDeletionFailed()) + }) + }) + + When("LLV is marked for deletion", func() { + JustBeforeEach(func(ctx SpecContext) { + existingLLV := &snc.LVMLogicalVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(llv), existingLLV)).To(Succeed()) + Expect(cl.Delete(ctx, existingLLV)).To(Succeed()) + }) + + It("should reconcile successfully when LLV already deleting", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + existingLLV := &snc.LVMLogicalVolume{} + err := cl.Get(ctx, client.ObjectKeyFromObject(llv), existingLLV) + if err == nil { + Expect(existingLLV.DeletionTimestamp).NotTo(BeNil()) + } else { + Expect(apierrors.IsNotFound(err)).To(BeTrue()) + } + }) + }) + + When("Get LLV fails with non-NotFound error", func() { + getError := errors.New("failed to get") + BeforeEach(func() { + clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Get: func(ctx context.Context, cl client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + if _, ok := obj.(*snc.LVMLogicalVolume); ok && key.Name == "test-llv" { + return getError + } + return cl.Get(ctx, key, obj, opts...) + }, + }) + }) + + // RVR and LLV are already created in parent JustBeforeEach + // Client is already created in top-level JustBeforeEach with interceptors from BeforeEach + + It("should fail if getting LLV failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("checking if llv exists"))) + }) + }) + }) + }) + }) + + When("RVR does not have DeletionTimestamp", func() { + DescribeTableSubtree("when RVR is not diskful because", + Entry("Type is Access", func() { rvr.Spec.Type = v1alpha1.ReplicaTypeAccess }), + Entry("Type is TieBreaker", func() { rvr.Spec.Type = v1alpha1.ReplicaTypeTieBreaker }), + func(setup func()) { + BeforeEach(func() { + setup() + }) + + When("ActualType matches Spec.Type", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: rvr.Spec.Type, + } + }) + + It("should call reconcileLLVDeletion", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveBackingVolumeCreatedConditionNotApplicable()) + }) + }) + + When("ActualType does not match Spec.Type", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + LVMLogicalVolumeName: "keep-llv", + } + }) + + It("should reconcile successfully without error", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + + When("Status is empty", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{} + }) + + It("should reconcile successfully without error", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + 
Expect(rvr).To(HaveBackingVolumeCreatedConditionNotApplicable()) + }) + }) + }) + + When("RVR is Diskful", func() { + BeforeEach(func() { + rvr.Spec.Type = v1alpha1.ReplicaTypeDiskful + }) + + DescribeTableSubtree("when RVR cannot create LLV because", + Entry("NodeName is empty", func() { rvr.Spec.NodeName = "" }), + Entry("Type is not Diskful", func() { rvr.Spec.Type = v1alpha1.ReplicaTypeAccess }), + func(setup func()) { + BeforeEach(func() { + setup() + }) + + It("should reconcile successfully without error", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + + When("RVR has NodeName and is Diskful", func() { + BeforeEach(func() { + rvr.Spec.NodeName = "node-1" + rvr.Spec.Type = v1alpha1.ReplicaTypeDiskful + }) + + When("Status is empty", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{} + }) + + It("should call reconcileLLVNormal", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + + When("Status.LVMLogicalVolumeName is empty", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + LVMLogicalVolumeName: "", + } + }) + + It("should call reconcileLLVNormal", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + + When("Status.LVMLogicalVolumeName is set", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + LVMLogicalVolumeName: "existing-llv", + } + }) + + It("should reconcile successfully without error", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + }) + }) + }) + }) + + When("reconcileLLVNormal scenarios", func() { + var rvr *v1alpha1.ReplicatedVolumeReplica + var rv *v1alpha1.ReplicatedVolume + var rsc *v1alpha1.ReplicatedStorageClass + var rsp *v1alpha1.ReplicatedStoragePool + var lvg *snc.LVMVolumeGroup + + BeforeEach(func() { + rvr = &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rvr", + UID: "test-uid", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "test-rv", + Type: v1alpha1.ReplicaTypeDiskful, + NodeName: "node-1", + }, + } + + rv = &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rv", + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("1Gi"), + ReplicatedStorageClassName: "test-rsc", + }, + } + + rsc = &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rsc", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "test-rsp", + }, + } + + rsp = &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rsp", + }, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + { + Name: "test-lvg", + ThinPoolName: "", + }, + }, + }, + } + + lvg = &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-lvg", + }, + Spec: snc.LVMVolumeGroupSpec{ + Local: snc.LVMVolumeGroupLocalSpec{ + NodeName: "node-1", + }, + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + // Clear metadata before creating to avoid ResourceVersion issues + rvrCopy := rvr.DeepCopy() + rvrCopy.ResourceVersion = "" + rvrCopy.UID = "" + rvrCopy.Generation = 0 + Expect(cl.Create(ctx, rvrCopy)).To(Succeed()) + if rv != nil { + rvCopy := rv.DeepCopy() + rvCopy.ResourceVersion = "" + rvCopy.UID = "" + rvCopy.Generation = 0 + Expect(cl.Create(ctx, rvCopy)).To(Succeed()) + } + if rsc != nil { 
+ rscCopy := rsc.DeepCopy() + rscCopy.ResourceVersion = "" + rscCopy.UID = "" + rscCopy.Generation = 0 + Expect(cl.Create(ctx, rscCopy)).To(Succeed()) + } + if rsp != nil { + rspCopy := rsp.DeepCopy() + rspCopy.ResourceVersion = "" + rspCopy.UID = "" + rspCopy.Generation = 0 + Expect(cl.Create(ctx, rspCopy)).To(Succeed()) + } + if lvg != nil { + lvgCopy := lvg.DeepCopy() + lvgCopy.ResourceVersion = "" + lvgCopy.UID = "" + lvgCopy.Generation = 0 + Expect(cl.Create(ctx, lvgCopy)).To(Succeed()) + } + }) + + When("RVR is Diskful with NodeName and no LLV name in status", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{} + }) + + When("LLV does not exist", func() { + It("should create LLV", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + var llvList snc.LVMLogicalVolumeList + Expect(cl.List(ctx, &llvList)).To(Succeed()) + Expect(llvList.Items).To(HaveLen(1)) + + llv := &llvList.Items[0] + Expect(llv).To(HaveLLVWithOwnerReference(rvr.Name)) + Expect(llv.Name).To(Equal("rvr-" + rvr.Name)) + Expect(llv.Spec.LVMVolumeGroupName).To(Equal("test-lvg")) + Expect(llv.Spec.Size).To(Equal("1Gi")) + Expect(llv.Spec.Type).To(Equal("Thick")) + Expect(llv.Spec.ActualLVNameOnTheNode).To(Equal("rvr-" + rvr.Name)) + + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveNoLVMLogicalVolumeName()) + Expect(rvr).To(HaveBackingVolumeCreatedConditionNotReady()) + }) + + When("ActualType was Access before switching to Diskful", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeAccess, + } + }) + + It("should create LLV for Diskful mode", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + var llvList snc.LVMLogicalVolumeList + Expect(cl.List(ctx, &llvList)).To(Succeed()) + Expect(llvList.Items).To(HaveLen(1)) + + llv := &llvList.Items[0] + Expect(llv).To(HaveLLVWithOwnerReference(rvr.Name)) + Expect(llv.Name).To(Equal("rvr-" + rvr.Name)) + Expect(llv.Spec.LVMVolumeGroupName).To(Equal("test-lvg")) + Expect(llv.Spec.Size).To(Equal("1Gi")) + Expect(llv.Spec.Type).To(Equal("Thick")) + Expect(llv.Spec.ActualLVNameOnTheNode).To(Equal("rvr-" + rvr.Name)) + + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveNoLVMLogicalVolumeName()) + }) + }) + + When("ReplicatedVolume does not exist", func() { + BeforeEach(func() { + rv = nil + }) + + JustBeforeEach(func(ctx SpecContext) { + // RVR is already created in parent JustBeforeEach, don't recreate it + // Don't create RV (it's nil), but create other objects if they don't exist + if rsc != nil { + existingRSC := &v1alpha1.ReplicatedStorageClass{} + err := cl.Get(ctx, client.ObjectKeyFromObject(rsc), existingRSC) + if err != nil { + rscCopy := rsc.DeepCopy() + rscCopy.ResourceVersion = "" + rscCopy.UID = "" + rscCopy.Generation = 0 + Expect(cl.Create(ctx, rscCopy)).To(Succeed()) + } + } + if rsp != nil { + existingRSP := &v1alpha1.ReplicatedStoragePool{} + err := cl.Get(ctx, client.ObjectKeyFromObject(rsp), existingRSP) + if err != nil { + rspCopy := rsp.DeepCopy() + rspCopy.ResourceVersion = "" + rspCopy.UID = "" + rspCopy.Generation = 0 + Expect(cl.Create(ctx, rspCopy)).To(Succeed()) + } + } + if lvg != nil { + existingLVG := &snc.LVMVolumeGroup{} + err := cl.Get(ctx, client.ObjectKeyFromObject(lvg), existingLVG) + if err != nil { + lvgCopy := lvg.DeepCopy() + lvgCopy.ResourceVersion = "" + lvgCopy.UID = "" + 
lvgCopy.Generation = 0 + Expect(cl.Create(ctx, lvgCopy)).To(Succeed()) + } + } + }) + + It("should fail if getting ReplicatedVolume failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("getting ReplicatedVolume"))) + }) + }) + + When("ReplicatedStorageClass does not exist", func() { + BeforeEach(func() { + rsc = nil + }) + + JustBeforeEach(func(ctx SpecContext) { + // RVR and RV are already created in parent JustBeforeEach, don't recreate them + // Don't create RSC (it's nil), but create other objects if they don't exist + if rsp != nil { + existingRSP := &v1alpha1.ReplicatedStoragePool{} + err := cl.Get(ctx, client.ObjectKeyFromObject(rsp), existingRSP) + if err != nil { + rspCopy := rsp.DeepCopy() + rspCopy.ResourceVersion = "" + rspCopy.UID = "" + rspCopy.Generation = 0 + Expect(cl.Create(ctx, rspCopy)).To(Succeed()) + } + } + if lvg != nil { + existingLVG := &snc.LVMVolumeGroup{} + err := cl.Get(ctx, client.ObjectKeyFromObject(lvg), existingLVG) + if err != nil { + lvgCopy := lvg.DeepCopy() + lvgCopy.ResourceVersion = "" + lvgCopy.UID = "" + lvgCopy.Generation = 0 + Expect(cl.Create(ctx, lvgCopy)).To(Succeed()) + } + } + }) + + It("should fail if getting ReplicatedStorageClass failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("getting LVMVolumeGroupName and ThinPoolName"))) + }) + }) + + When("ReplicatedStoragePool does not exist", func() { + BeforeEach(func() { + rsp = nil + }) + + JustBeforeEach(func(ctx SpecContext) { + // RVR, RV, and RSC are already created in parent JustBeforeEach, don't recreate them + // Don't create RSP (it's nil), but create other objects if they don't exist + if lvg != nil { + existingLVG := &snc.LVMVolumeGroup{} + err := cl.Get(ctx, client.ObjectKeyFromObject(lvg), existingLVG) + if err != nil { + lvgCopy := lvg.DeepCopy() + lvgCopy.ResourceVersion = "" + lvgCopy.UID = "" + lvgCopy.Generation = 0 + Expect(cl.Create(ctx, lvgCopy)).To(Succeed()) + } + } + }) + + It("should fail if getting ReplicatedStoragePool failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("getting ReplicatedStoragePool"))) + }) + }) + + When("LVMVolumeGroup does not exist", func() { + BeforeEach(func() { + lvg = nil + }) + + JustBeforeEach(func() { + // RVR, RV, RSC, and RSP are already created in parent JustBeforeEach, don't recreate them + // Don't create LVG (it's nil) + }) + + It("should fail if getting LVMVolumeGroup failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("getting LVMVolumeGroup"))) + }) + }) + + When("no LVMVolumeGroup matches node", func() { + BeforeEach(func() { + lvg.Spec.Local.NodeName = "other-node" + }) + + It("should fail if no LVMVolumeGroup found for node", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("no LVMVolumeGroup found"))) + }) + }) + + When("Create LLV fails", func() { + createError := errors.New("failed to create") + BeforeEach(func() { + clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + Create: func(ctx context.Context, cl client.WithWatch, obj client.Object, opts ...client.CreateOption) error { + if llvObj, ok := obj.(*snc.LVMLogicalVolume); ok && llvObj.Name == "rvr-test-rvr" { + return createError + } + return cl.Create(ctx, obj, opts...) 
+ }, + }) + }) + + // RVR, RV, RSC, RSP, and LVG are already created in parent JustBeforeEach + // Client is already created in top-level JustBeforeEach with interceptors from BeforeEach + + It("should fail if creating LLV failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("creating LVMLogicalVolume"))) + + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveBackingVolumeCreatedConditionCreationFailed()) + }) + }) + + When("ThinPool is specified", func() { + BeforeEach(func() { + rsp.Spec.LVMVolumeGroups[0].ThinPoolName = "test-thin-pool" + }) + + It("should create LLV with Thin type", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + var llvList snc.LVMLogicalVolumeList + Expect(cl.List(ctx, &llvList)).To(Succeed()) + Expect(llvList.Items).To(HaveLen(1)) + + llv := &llvList.Items[0] + Expect(llv.Name).To(Equal("rvr-" + rvr.Name)) + Expect(llv.Spec.Type).To(Equal("Thin")) + Expect(llv.Spec.Thin).NotTo(BeNil()) + Expect(llv.Spec.Thin.PoolName).To(Equal("test-thin-pool")) + Expect(llv.Spec.ActualLVNameOnTheNode).To(Equal("rvr-" + rvr.Name)) + }) + }) + }) + + When("LLV exists with ownerReference", func() { + var llv *snc.LVMLogicalVolume + + BeforeEach(func() { + llv = &snc.LVMLogicalVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rvr-" + rvr.Name, + }, + } + Expect(controllerutil.SetControllerReference(rvr, llv, scheme)).To(Succeed()) + }) + + JustBeforeEach(func(ctx SpecContext) { + // RVR is already created in parent JustBeforeEach + // Get the created RVR to set ownerReference correctly + createdRVR := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), createdRVR)).To(Succeed()) + // Clear metadata and recreate ownerReference + llvCopy := llv.DeepCopy() + llvCopy.ResourceVersion = "" + llvCopy.UID = "" + llvCopy.Generation = 0 + llvCopy.OwnerReferences = nil + // Set status if available (it might be set in nested BeforeEach) + // We'll create with status, and nested JustBeforeEach can update if needed + if llv.Status != nil { + llvCopy.Status = llv.Status.DeepCopy() + } + Expect(controllerutil.SetControllerReference(createdRVR, llvCopy, scheme)).To(Succeed()) + Expect(cl.Create(ctx, llvCopy)).To(Succeed()) + // If status was set, update it after creation (fake client might need this) + if llvCopy.Status != nil { + createdLLV := &snc.LVMLogicalVolume{} + if err := cl.Get(ctx, client.ObjectKeyFromObject(llvCopy), createdLLV); err == nil { + createdLLV.Status = llvCopy.Status.DeepCopy() + // Try to update status, but don't fail if it doesn't work + _ = cl.Status().Update(ctx, createdLLV) + } + } + }) + + When("LLV phase is Created", func() { + BeforeEach(func() { + llv.Status = &snc.LVMLogicalVolumeStatus{ + Phase: "Created", + } + }) + + // Status is already set in parent JustBeforeEach when creating LLV + // No need to update it here + + When("RVR status does not have LLV name", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{} + }) + + It("should update RVR status with LLV name", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveLVMLogicalVolumeName(llv.Name)) + Expect(rvr).To(HaveBackingVolumeCreatedConditionReady()) + }) + + When("updating status fails", func() { + statusPatchError := errors.New("failed to patch status") + 
BeforeEach(func() { + clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + SubResourcePatch: func(ctx context.Context, cl client.Client, subResourceName string, obj client.Object, patch client.Patch, opts ...client.SubResourcePatchOption) error { + if rvrObj, ok := obj.(*v1alpha1.ReplicatedVolumeReplica); ok && rvrObj.Name == "test-rvr" { + if subResourceName == "status" { + return statusPatchError + } + } + return cl.SubResource(subResourceName).Patch(ctx, obj, patch, opts...) + }, + }) + }) + + // RVR, RV, RSC, RSP, LVG, and LLV are already created in parent JustBeforeEach + // Client is already created in top-level JustBeforeEach with interceptors from BeforeEach + + It("should fail if patching status failed", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).Error().To(MatchError(ContainSubstring("updating LVMLogicalVolumeName in status"))) + }) + }) + }) + + When("RVR status already has LLV name", func() { + BeforeEach(func() { + rvr.Status = v1alpha1.ReplicatedVolumeReplicaStatus{ + LVMLogicalVolumeName: llv.Name, + } + }) + + It("should reconcile successfully without error", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + }) + + DescribeTableSubtree("when LLV phase is not Created because", + Entry("phase is empty", func() { + llv.Status = &snc.LVMLogicalVolumeStatus{Phase: ""} + }), + Entry("phase is Pending", func() { + llv.Status = &snc.LVMLogicalVolumeStatus{Phase: "Pending"} + }), + Entry("status is nil", func() { + llv.Status = nil + }), + func(setup func()) { + BeforeEach(func() { + setup() + }) + + // Status is already set in parent JustBeforeEach when creating LLV + // No need to update it here - parent JustBeforeEach handles it + + It("should reconcile successfully and wait", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + + When("List LLVs fails", func() { + listError := errors.New("failed to list") + BeforeEach(func() { + clientBuilder = clientBuilder.WithInterceptorFuncs(interceptor.Funcs{ + List: func(ctx context.Context, cl client.WithWatch, list client.ObjectList, opts ...client.ListOption) error { + if _, ok := list.(*snc.LVMLogicalVolumeList); ok { + return listError + } + return cl.List(ctx, list, opts...) 
+ }, + }) + }) + + // RVR, RV, RSC, RSP, LVG, and LLV are already created in parent JustBeforeEach + // Client is already created in top-level JustBeforeEach with interceptors from BeforeEach + + It("should reconcile successfully without listing LLVs", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + }) + }) + }) + }) + }) + }) + + When("Spec.Type changes from Diskful to Access", func() { + var rvr *v1alpha1.ReplicatedVolumeReplica + var llv *snc.LVMLogicalVolume + + BeforeEach(func() { + rvr = &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "type-switch-rvr", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "type-switch-rv", + Type: v1alpha1.ReplicaTypeAccess, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeAccess, + LVMLogicalVolumeName: "type-switch-llv", + }, + } + + llv = &snc.LVMLogicalVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "type-switch-llv", + Finalizers: []string{"other-finalizer"}, + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + rvrCopy := rvr.DeepCopy() + rvrCopy.ResourceVersion = "" + rvrCopy.UID = "" + rvrCopy.Generation = 0 + Expect(cl.Create(ctx, rvrCopy)).To(Succeed()) + + llvCopy := llv.DeepCopy() + llvCopy.ResourceVersion = "" + llvCopy.UID = "" + llvCopy.Generation = 0 + Expect(cl.Create(ctx, llvCopy)).To(Succeed()) + }) + + It("should mark LLV for deletion and keep other finalizers", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + updatedLLV := &snc.LVMLogicalVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(llv), updatedLLV)).To(Succeed()) + Expect(updatedLLV.DeletionTimestamp).NotTo(BeNil()) + Expect(updatedLLV.Finalizers).To(ConsistOf("other-finalizer")) + }) + + When("LLV has no finalizers and gets fully removed", func() { + BeforeEach(func() { + llv.Finalizers = nil + }) + + It("should clear LVMLogicalVolumeName in status", func(ctx SpecContext) { + // First reconcile: delete LLV (it disappears immediately because no finalizers) + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + err := cl.Get(ctx, client.ObjectKeyFromObject(llv), &snc.LVMLogicalVolume{}) + Expect(apierrors.IsNotFound(err)).To(BeTrue()) + + // Second reconcile: see LLV gone and clear status + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + fetchedRVR := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), fetchedRVR)).To(Succeed()) + Expect(fetchedRVR.Status.LVMLogicalVolumeName).To(BeEmpty()) + }) + }) + }) + + When("Spec.Type is Access but ActualType is Diskful and LLV exists", func() { + var rvr *v1alpha1.ReplicatedVolumeReplica + var llv *snc.LVMLogicalVolume + + BeforeEach(func() { + rvr = &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "mismatch-rvr", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "mismatch-rv", + Type: v1alpha1.ReplicaTypeAccess, + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + ActualType: v1alpha1.ReplicaTypeDiskful, + LVMLogicalVolumeName: "keep-llv", + }, + } + + llv = &snc.LVMLogicalVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "keep-llv", + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + rvrCopy := rvr.DeepCopy() + rvrCopy.ResourceVersion = "" + rvrCopy.UID = "" + rvrCopy.Generation = 0 + Expect(cl.Create(ctx, rvrCopy)).To(Succeed()) + + llvCopy := llv.DeepCopy() + llvCopy.ResourceVersion = "" + llvCopy.UID = "" + 
llvCopy.Generation = 0 + Expect(cl.Create(ctx, llvCopy)).To(Succeed()) + }) + + It("should leave LLV intact when ActualType differs", func(ctx SpecContext) { + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + existingLLV := &snc.LVMLogicalVolume{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(llv), existingLLV)).To(Succeed()) + }) + }) + + When("integration test for full controller lifecycle", func() { + var rvr *v1alpha1.ReplicatedVolumeReplica + var rv *v1alpha1.ReplicatedVolume + var rsc *v1alpha1.ReplicatedStorageClass + var rsp *v1alpha1.ReplicatedStoragePool + var lvg *snc.LVMVolumeGroup + + BeforeEach(func() { + rvr = &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rvr", + UID: "test-uid", + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: "test-rv", + Type: v1alpha1.ReplicaTypeDiskful, + NodeName: "node-1", + }, + Status: v1alpha1.ReplicatedVolumeReplicaStatus{ + LVMLogicalVolumeName: "", + }, + } + + rv = &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rv", + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("1Gi"), + ReplicatedStorageClassName: "test-rsc", + }, + } + + rsc = &v1alpha1.ReplicatedStorageClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rsc", + }, + Spec: v1alpha1.ReplicatedStorageClassSpec{ + StoragePool: "test-rsp", + }, + } + + rsp = &v1alpha1.ReplicatedStoragePool{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-rsp", + }, + Spec: v1alpha1.ReplicatedStoragePoolSpec{ + LVMVolumeGroups: []v1alpha1.ReplicatedStoragePoolLVMVolumeGroups{ + { + Name: "test-lvg", + ThinPoolName: "", + }, + }, + }, + } + + lvg = &snc.LVMVolumeGroup{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-lvg", + }, + Spec: snc.LVMVolumeGroupSpec{ + Local: snc.LVMVolumeGroupLocalSpec{ + NodeName: "node-1", + }, + }, + } + }) + + JustBeforeEach(func(ctx SpecContext) { + // Create all required objects + Expect(cl.Create(ctx, rvr)).To(Succeed()) + + rvCopy := rv.DeepCopy() + rvCopy.ResourceVersion = "" + rvCopy.UID = "" + rvCopy.Generation = 0 + Expect(cl.Create(ctx, rvCopy)).To(Succeed()) + + rscCopy := rsc.DeepCopy() + rscCopy.ResourceVersion = "" + rscCopy.UID = "" + rscCopy.Generation = 0 + Expect(cl.Create(ctx, rscCopy)).To(Succeed()) + + rspCopy := rsp.DeepCopy() + rspCopy.ResourceVersion = "" + rspCopy.UID = "" + rspCopy.Generation = 0 + Expect(cl.Create(ctx, rspCopy)).To(Succeed()) + + lvgCopy := lvg.DeepCopy() + lvgCopy.ResourceVersion = "" + lvgCopy.UID = "" + lvgCopy.Generation = 0 + Expect(cl.Create(ctx, lvgCopy)).To(Succeed()) + }) + + It("should handle full controller lifecycle", func(ctx SpecContext) { + // Step 1: Initial reconcile - should create LLV + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // Verify LLV was created + var llvList snc.LVMLogicalVolumeList + Expect(cl.List(ctx, &llvList)).To(Succeed()) + Expect(llvList.Items).To(HaveLen(1)) + llvName := llvList.Items[0].Name + Expect(llvName).To(Equal("rvr-" + rvr.Name)) + Expect(llvList.Items[0].Spec.ActualLVNameOnTheNode).To(Equal("rvr-" + rvr.Name)) + + // Verify condition is set to NotReady after LLV creation + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), rvr)).To(Succeed()) + Expect(rvr).To(HaveBackingVolumeCreatedConditionNotReady()) + + // Step 2: Set LLV phase to Pending and reconcile - should not update RVR status + // Get the created LLV + llv := &snc.LVMLogicalVolume{} + Expect(cl.Get(ctx, client.ObjectKey{Name: llvName}, llv)).To(Succeed()) + llv.Status = 
&snc.LVMLogicalVolumeStatus{ + Phase: "Pending", + } + // Use regular Update for LLV status in fake client + Expect(cl.Update(ctx, llv)).To(Succeed()) + Expect(cl.Get(ctx, client.ObjectKey{Name: llvName}, llv)).To(Succeed()) + Expect(llv.Status.Phase).To(Equal("Pending")) + + Eventually(func(g Gomega) *v1alpha1.ReplicatedVolumeReplica { + g.Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // Verify RVR status was not updated with LLV name + notUpdatedRVR := &v1alpha1.ReplicatedVolumeReplica{} + g.Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), notUpdatedRVR)).To(Succeed()) + return notUpdatedRVR + }).WithContext(ctx).Should(HaveNoLVMLogicalVolumeName()) + + // Step 3: Set LLV phase to Created and reconcile - should update RVR status + // Get LLV again to get fresh state + Expect(cl.Get(ctx, client.ObjectKey{Name: llvName}, llv)).To(Succeed()) + llv.Status.Phase = "Created" + // Use regular Update for LLV status in fake client + Expect(cl.Update(ctx, llv)).To(Succeed()) + + // Use Eventually to support future async client migration + Eventually(func(g Gomega) *v1alpha1.ReplicatedVolumeReplica { + g.Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // Verify RVR status was updated with LLV name + updatedRVR := &v1alpha1.ReplicatedVolumeReplica{} + g.Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), updatedRVR)).To(Succeed()) + return updatedRVR + }).WithContext(ctx).Should(And( + HaveLVMLogicalVolumeName(llvName), + HaveBackingVolumeCreatedConditionReady(), + )) + + // Get updatedRVR for next steps + updatedRVR := &v1alpha1.ReplicatedVolumeReplica{} + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), updatedRVR)).To(Succeed()) + + // Step 4: Change RVR type to Access - LLV should remain + // updatedRVR already obtained above + updatedRVR.Spec.Type = v1alpha1.ReplicaTypeAccess + Expect(cl.Update(ctx, updatedRVR)).To(Succeed()) + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // Verify LLV still exists + Expect(cl.Get(ctx, client.ObjectKey{Name: llvName}, llv)).To(Succeed()) + + // Step 5: Set actualType to Access - LLV should be deleted + // Get fresh RVR state + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), updatedRVR)).To(Succeed()) + updatedRVR.Status.ActualType = v1alpha1.ReplicaTypeAccess + Expect(cl.Status().Update(ctx, updatedRVR)).To(Succeed()) + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // Verify LLV was deleted + err := cl.Get(ctx, client.ObjectKey{Name: llvName}, &snc.LVMLogicalVolume{}) + Expect(apierrors.IsNotFound(err)).To(BeTrue()) + + // Step 6: Reconcile again - should clear LVMLogicalVolumeName from status + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // Verify status was cleared and condition is set to NotApplicable + Expect(cl.Get(ctx, client.ObjectKeyFromObject(rvr), updatedRVR)).To(Succeed()) + Expect(updatedRVR).To(HaveNoLVMLogicalVolumeName()) + Expect(updatedRVR).To(HaveBackingVolumeCreatedConditionNotApplicable()) + + // Step 7: Change type back to Diskful - should create LLV again + updatedRVR.Spec.Type = v1alpha1.ReplicaTypeDiskful + Expect(cl.Update(ctx, updatedRVR)).To(Succeed()) + Expect(rec.Reconcile(ctx, RequestFor(rvr))).NotTo(Requeue()) + + // Verify LLV was created again + Expect(cl.List(ctx, &llvList)).To(Succeed()) + Expect(llvList.Items).To(HaveLen(1)) + Expect(llvList.Items[0].Name).To(Equal("rvr-" + rvr.Name)) + Expect(llvList.Items[0].Spec.ActualLVNameOnTheNode).To(Equal("rvr-" + rvr.Name)) + }) + }) +}) diff --git 
a/images/controller/internal/controllers/rvr_volume/rvr_volume_suite_test.go b/images/controller/internal/controllers/rvr_volume/rvr_volume_suite_test.go new file mode 100644 index 000000000..e5c96d50d --- /dev/null +++ b/images/controller/internal/controllers/rvr_volume/rvr_volume_suite_test.go @@ -0,0 +1,183 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rvrvolume_test + +import ( + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + "github.com/onsi/gomega/gcustom" + gomegatypes "github.com/onsi/gomega/types" // cspell:words gomegatypes + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +func TestRvrVolume(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "RvrVolume Suite") +} + +// Requeue returns a matcher that checks if reconcile result requires requeue +func Requeue() gomegatypes.GomegaMatcher { + return Not(Equal(reconcile.Result{})) +} + +// RequestFor creates a reconcile request for the given object +func RequestFor(object client.Object) reconcile.Request { + return reconcile.Request{NamespacedName: client.ObjectKeyFromObject(object)} +} + +// HaveLVMLogicalVolumeName returns a matcher that checks if RVR has the specified LLV name in status +func HaveLVMLogicalVolumeName(llvName string) gomegatypes.GomegaMatcher { + if llvName == "" { + return SatisfyAny( + HaveField("Status", BeNil()), + HaveField("Status.LVMLogicalVolumeName", BeEmpty()), + ) + } + return SatisfyAll( + HaveField("Status", Not(BeNil())), + HaveField("Status.LVMLogicalVolumeName", Equal(llvName)), + ) +} + +// HaveNoLVMLogicalVolumeName returns a matcher that checks if RVR has no LLV name in status +func HaveNoLVMLogicalVolumeName() gomegatypes.GomegaMatcher { + return HaveLVMLogicalVolumeName("") +} + +// BeLLVPhase returns a matcher that checks if LLV has the specified phase +func BeLLVPhase(phase string) gomegatypes.GomegaMatcher { + if phase == "" { + return SatisfyAny( + HaveField("Status", BeNil()), + HaveField("Status.Phase", BeEmpty()), + ) + } + return SatisfyAll( + HaveField("Status", Not(BeNil())), + HaveField("Status.Phase", Equal(phase)), + ) +} + +// HaveLLVWithOwnerReference returns a matcher that checks if LLV has owner reference to RVR +func HaveLLVWithOwnerReference(rvrName string) gomegatypes.GomegaMatcher { + return gcustom.MakeMatcher(func(llv *snc.LVMLogicalVolume) (bool, error) { + ownerRef := metav1.GetControllerOf(llv) + if ownerRef == nil { + return false, nil + } + return ownerRef.Kind == "ReplicatedVolumeReplica" && ownerRef.Name == rvrName, nil + }).WithMessage("expected LLV to have owner reference to RVR " + rvrName) +} + +// HaveFinalizer returns a matcher that checks if object has the specified finalizer +func HaveFinalizer(finalizerName string) gomegatypes.GomegaMatcher { + return 
gcustom.MakeMatcher(func(obj client.Object) (bool, error) { + for _, f := range obj.GetFinalizers() { + if f == finalizerName { + return true, nil + } + } + return false, nil + }).WithTemplate("Expected:\n{{.FormattedActual}}\n{{.To}} have finalizer:\n{{format .Data 1}}").WithTemplateData(finalizerName) +} + +// NotHaveFinalizer returns a matcher that checks if object does not have the specified finalizer +func NotHaveFinalizer(finalizerName string) gomegatypes.GomegaMatcher { + return gcustom.MakeMatcher(func(obj client.Object) (bool, error) { + for _, f := range obj.GetFinalizers() { + if f == finalizerName { + return false, nil + } + } + return true, nil + }).WithMessage("expected object to not have finalizer " + finalizerName) +} + +// BeDiskful returns a matcher that checks if RVR is diskful +func BeDiskful() gomegatypes.GomegaMatcher { + return HaveField("Spec.Type", Equal(v1alpha1.ReplicaTypeDiskful)) +} + +// BeNonDiskful returns a matcher that checks if RVR is not diskful +func BeNonDiskful() gomegatypes.GomegaMatcher { + return Not(BeDiskful()) +} + +// HaveDeletionTimestamp returns a matcher that checks if object has deletion timestamp +func HaveDeletionTimestamp() gomegatypes.GomegaMatcher { + return HaveField("DeletionTimestamp", Not(BeNil())) +} + +// NotHaveDeletionTimestamp returns a matcher that checks if object does not have deletion timestamp +func NotHaveDeletionTimestamp() gomegatypes.GomegaMatcher { + return SatisfyAny( + HaveField("DeletionTimestamp", BeNil()), + ) +} + +// HaveBackingVolumeCreatedCondition returns a matcher that checks if RVR has BackingVolumeCreated condition +// with the specified status and reason. +func HaveBackingVolumeCreatedCondition(status metav1.ConditionStatus, reason string) gomegatypes.GomegaMatcher { + return gcustom.MakeMatcher(func(rvr *v1alpha1.ReplicatedVolumeReplica) (bool, error) { + if rvr.Status.Conditions == nil { + return false, nil + } + for _, cond := range rvr.Status.Conditions { + if cond.Type == v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedType { + return cond.Status == status && cond.Reason == reason, nil + } + } + return false, nil + }).WithMessage("expected RVR to have BackingVolumeCreated condition with status " + string(status) + " and reason " + reason) +} + +// HaveBackingVolumeCreatedConditionReady is a convenience matcher that checks if +// the BackingVolumeCreated condition is True with ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeReady. +func HaveBackingVolumeCreatedConditionReady() gomegatypes.GomegaMatcher { + return HaveBackingVolumeCreatedCondition(metav1.ConditionTrue, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeReady) +} + +// HaveBackingVolumeCreatedConditionNotReady is a convenience matcher that checks if +// the BackingVolumeCreated condition is False with ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeNotReady. +func HaveBackingVolumeCreatedConditionNotReady() gomegatypes.GomegaMatcher { + return HaveBackingVolumeCreatedCondition(metav1.ConditionFalse, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeNotReady) +} + +// HaveBackingVolumeCreatedConditionNotApplicable is a convenience matcher that checks if +// the BackingVolumeCreated condition is False with ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonNotApplicable. 
+func HaveBackingVolumeCreatedConditionNotApplicable() gomegatypes.GomegaMatcher { + return HaveBackingVolumeCreatedCondition(metav1.ConditionFalse, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonNotApplicable) +} + +// HaveBackingVolumeCreatedConditionCreationFailed is a convenience matcher that checks if +// the BackingVolumeCreated condition is False with ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeCreationFailed. +func HaveBackingVolumeCreatedConditionCreationFailed() gomegatypes.GomegaMatcher { + return HaveBackingVolumeCreatedCondition(metav1.ConditionFalse, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeCreationFailed) +} + +// HaveBackingVolumeCreatedConditionDeletionFailed is a convenience matcher that checks if +// the BackingVolumeCreated condition is True with ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeDeletionFailed. +func HaveBackingVolumeCreatedConditionDeletionFailed() gomegatypes.GomegaMatcher { + return HaveBackingVolumeCreatedCondition(metav1.ConditionTrue, v1alpha1.ReplicatedVolumeReplicaCondBackingVolumeCreatedReasonBackingVolumeDeletionFailed) +} diff --git a/images/controller/internal/env/config.go b/images/controller/internal/env/config.go new file mode 100644 index 000000000..4203bad4e --- /dev/null +++ b/images/controller/internal/env/config.go @@ -0,0 +1,85 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package env + +import ( + "errors" + "fmt" + "os" +) + +const ( + PodNamespaceEnvVar = "POD_NAMESPACE" + HealthProbeBindAddressEnvVar = "HEALTH_PROBE_BIND_ADDRESS" + MetricsPortEnvVar = "METRICS_BIND_ADDRESS" + + // defaults are different for each app, do not merge them + DefaultHealthProbeBindAddress = ":4271" + DefaultMetricsBindAddress = ":4272" +) + +var ErrInvalidConfig = errors.New("invalid config") + +type Config struct { + podNamespace string + healthProbeBindAddress string + metricsBindAddress string +} + +func (c *Config) HealthProbeBindAddress() string { + return c.healthProbeBindAddress +} + +func (c *Config) MetricsBindAddress() string { + return c.metricsBindAddress +} + +func (c *Config) PodNamespace() string { + return c.podNamespace +} + +type ConfigProvider interface { + PodNamespace() string + HealthProbeBindAddress() string + MetricsBindAddress() string +} + +var _ ConfigProvider = &Config{} + +func GetConfig() (*Config, error) { + cfg := &Config{} + + // Pod namespace (required): used to discover agent pods. + cfg.podNamespace = os.Getenv(PodNamespaceEnvVar) + if cfg.podNamespace == "" { + return nil, fmt.Errorf("%w: %s is required", ErrInvalidConfig, PodNamespaceEnvVar) + } + + // Health probe bind address (optional, has default). + cfg.healthProbeBindAddress = os.Getenv(HealthProbeBindAddressEnvVar) + if cfg.healthProbeBindAddress == "" { + cfg.healthProbeBindAddress = DefaultHealthProbeBindAddress + } + + // Metrics bind address (optional, has default). 
+ cfg.metricsBindAddress = os.Getenv(MetricsPortEnvVar) + if cfg.metricsBindAddress == "" { + cfg.metricsBindAddress = DefaultMetricsBindAddress + } + + return cfg, nil +} diff --git a/images/controller/internal/errors/errors.go b/images/controller/internal/errors/errors.go new file mode 100644 index 000000000..c884bff96 --- /dev/null +++ b/images/controller/internal/errors/errors.go @@ -0,0 +1,50 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package errors + +import ( + "errors" + "fmt" +) + +var ErrNotImplemented = errors.New("not implemented") + +var ErrInvalidCluster = errors.New("invalid cluster state") + +var ErrInvalidNode = errors.New("invalid node") + +var ErrUnknown = errors.New("unknown error") + +func WrapErrorf(err error, format string, a ...any) error { + return fmt.Errorf("%w: %w", err, fmt.Errorf(format, a...)) +} + +func ErrInvalidClusterf(format string, a ...any) error { + return WrapErrorf(ErrInvalidCluster, format, a...) +} + +func ErrInvalidNodef(format string, a ...any) error { + return WrapErrorf(ErrInvalidNode, format, a...) +} + +func ErrNotImplementedf(format string, a ...any) error { + return WrapErrorf(ErrNotImplemented, format, a...) +} + +func ErrUnknownf(format string, a ...any) error { + return WrapErrorf(ErrUnknown, format, a...) +} diff --git a/images/controller/internal/errors/validation.go b/images/controller/internal/errors/validation.go new file mode 100644 index 000000000..36b9baa48 --- /dev/null +++ b/images/controller/internal/errors/validation.go @@ -0,0 +1,34 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package errors + +import ( + "fmt" + "reflect" +) + +func ValidateArgNotNil(arg any, argName string) error { + if arg == nil { + return fmt.Errorf("expected '%s' to be non-nil", argName) + } + // Check for typed nil pointers (e.g., (*SomeStruct)(nil) passed as any) + v := reflect.ValueOf(arg) + if v.Kind() == reflect.Pointer && v.IsNil() { + return fmt.Errorf("expected '%s' to be non-nil", argName) + } + return nil +} diff --git a/images/controller/internal/errors/validation_test.go b/images/controller/internal/errors/validation_test.go new file mode 100644 index 000000000..d89c2a457 --- /dev/null +++ b/images/controller/internal/errors/validation_test.go @@ -0,0 +1,48 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package errors_test + +import ( + "testing" + "time" + + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/errors" +) + +func TestValidateArgNotNil(t *testing.T) { + var err error + + err = errors.ValidateArgNotNil(nil, "testArgName") + if err == nil { + t.Fatal("ValidateArgNotNil() succeeded unexpectedly") + } + + timeArg := time.Now() + timeArgPtr := &timeArg + + err = errors.ValidateArgNotNil(timeArgPtr, "timeArgPtr") + if err != nil { + t.Fatalf("ValidateArgNotNil() failed: %v", err) + } + + timeArgPtr = nil + + err = errors.ValidateArgNotNil(timeArgPtr, "testArgName") + if err == nil { + t.Fatal("ValidateArgNotNil() succeeded unexpectedly") + } +} diff --git a/images/controller/internal/indexes/drbdresource.go b/images/controller/internal/indexes/drbdresource.go new file mode 100644 index 000000000..624315bc1 --- /dev/null +++ b/images/controller/internal/indexes/drbdresource.go @@ -0,0 +1,54 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package indexes + +import ( + "context" + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// IndexFieldDRBDResourceByNodeName is used to quickly list +// DRBDResource objects on a specific node. +const IndexFieldDRBDResourceByNodeName = "spec.nodeName" + +// RegisterDRBDResourceByNodeName registers the index for listing +// DRBDResource objects by spec.nodeName. +func RegisterDRBDResourceByNodeName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.DRBDResource{}, + IndexFieldDRBDResourceByNodeName, + func(obj client.Object) []string { + dr, ok := obj.(*v1alpha1.DRBDResource) + if !ok { + return nil + } + if dr.Spec.NodeName == "" { + return nil + } + return []string{dr.Spec.NodeName} + }, + ); err != nil { + return fmt.Errorf("index DRBDResource by spec.nodeName: %w", err) + } + return nil +} diff --git a/images/controller/internal/indexes/node.go b/images/controller/internal/indexes/node.go new file mode 100644 index 000000000..6407839ca --- /dev/null +++ b/images/controller/internal/indexes/node.go @@ -0,0 +1,46 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package indexes + +import ( + "context" + "fmt" + + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" +) + +// IndexFieldNodeByMetadataName is used to quickly look up +// a Node by its metadata.name. +const IndexFieldNodeByMetadataName = "metadata.name" + +// RegisterNodeByMetadataName registers the index for looking up +// Node objects by metadata.name. +func RegisterNodeByMetadataName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &corev1.Node{}, + IndexFieldNodeByMetadataName, + func(obj client.Object) []string { + return []string{obj.GetName()} + }, + ); err != nil { + return fmt.Errorf("index Node by metadata.name: %w", err) + } + return nil +} diff --git a/images/controller/internal/indexes/rsc.go b/images/controller/internal/indexes/rsc.go new file mode 100644 index 000000000..4182af0f1 --- /dev/null +++ b/images/controller/internal/indexes/rsc.go @@ -0,0 +1,81 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package indexes + +import ( + "context" + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// IndexFieldRSCByStoragePool is used to quickly list +// ReplicatedStorageClass objects referencing a specific RSP (deprecated field for migration). +const IndexFieldRSCByStoragePool = "spec.storagePool" + +// IndexFieldRSCByStatusStoragePoolName is used to quickly list +// ReplicatedStorageClass objects by their auto-generated RSP name. +const IndexFieldRSCByStatusStoragePoolName = "status.storagePoolName" + +// RegisterRSCByStoragePool registers the index for listing +// ReplicatedStorageClass objects by spec.storagePool. +func RegisterRSCByStoragePool(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedStorageClass{}, + IndexFieldRSCByStoragePool, + func(obj client.Object) []string { + rsc, ok := obj.(*v1alpha1.ReplicatedStorageClass) + if !ok { + return nil + } + if rsc.Spec.StoragePool == "" { + return nil + } + return []string{rsc.Spec.StoragePool} + }, + ); err != nil { + return fmt.Errorf("index ReplicatedStorageClass by spec.storagePool: %w", err) + } + return nil +} + +// RegisterRSCByStatusStoragePoolName registers the index for listing +// ReplicatedStorageClass objects by status.storagePoolName. 
+func RegisterRSCByStatusStoragePoolName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedStorageClass{}, + IndexFieldRSCByStatusStoragePoolName, + func(obj client.Object) []string { + rsc, ok := obj.(*v1alpha1.ReplicatedStorageClass) + if !ok { + return nil + } + if rsc.Status.StoragePoolName == "" { + return nil + } + return []string{rsc.Status.StoragePoolName} + }, + ); err != nil { + return fmt.Errorf("index ReplicatedStorageClass by status.storagePoolName: %w", err) + } + return nil +} diff --git a/images/controller/internal/indexes/rsp.go b/images/controller/internal/indexes/rsp.go new file mode 100644 index 000000000..e347de381 --- /dev/null +++ b/images/controller/internal/indexes/rsp.go @@ -0,0 +1,120 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package indexes + +import ( + "context" + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// IndexFieldRSPByLVMVolumeGroupName is used to quickly list +// ReplicatedStoragePool objects referencing a specific LVMVolumeGroup. +// The index extracts all LVG names from spec.lvmVolumeGroups[*].name. +const IndexFieldRSPByLVMVolumeGroupName = "spec.lvmVolumeGroups.name" + +// RegisterRSPByLVMVolumeGroupName registers the index for listing +// ReplicatedStoragePool objects by spec.lvmVolumeGroups[*].name. +func RegisterRSPByLVMVolumeGroupName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedStoragePool{}, + IndexFieldRSPByLVMVolumeGroupName, + func(obj client.Object) []string { + rsp, ok := obj.(*v1alpha1.ReplicatedStoragePool) + if !ok { + return nil + } + if len(rsp.Spec.LVMVolumeGroups) == 0 { + return nil + } + names := make([]string, 0, len(rsp.Spec.LVMVolumeGroups)) + for _, lvg := range rsp.Spec.LVMVolumeGroups { + if lvg.Name != "" { + names = append(names, lvg.Name) + } + } + return names + }, + ); err != nil { + return fmt.Errorf("index ReplicatedStoragePool by spec.lvmVolumeGroups.name: %w", err) + } + return nil +} + +// IndexFieldRSPByUsedByRSCName is used to quickly list +// ReplicatedStoragePool objects that are used by a specific RSC. +// The index extracts all RSC names from status.usedBy.replicatedStorageClassNames. +const IndexFieldRSPByUsedByRSCName = "status.usedBy.replicatedStorageClassNames" + +// RegisterRSPByUsedByRSCName registers the index for listing +// ReplicatedStoragePool objects by status.usedBy.replicatedStorageClassNames. 
+func RegisterRSPByUsedByRSCName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedStoragePool{}, + IndexFieldRSPByUsedByRSCName, + func(obj client.Object) []string { + rsp, ok := obj.(*v1alpha1.ReplicatedStoragePool) + if !ok { + return nil + } + return rsp.Status.UsedBy.ReplicatedStorageClassNames + }, + ); err != nil { + return fmt.Errorf("index ReplicatedStoragePool by status.usedBy.replicatedStorageClassNames: %w", err) + } + return nil +} + +// IndexFieldRSPByEligibleNodeName is used to quickly list +// ReplicatedStoragePool objects that have a specific node in their EligibleNodes. +// The index extracts all node names from status.eligibleNodes[*].nodeName. +const IndexFieldRSPByEligibleNodeName = "status.eligibleNodes.nodeName" + +// RegisterRSPByEligibleNodeName registers the index for listing +// ReplicatedStoragePool objects by status.eligibleNodes[*].nodeName. +func RegisterRSPByEligibleNodeName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedStoragePool{}, + IndexFieldRSPByEligibleNodeName, + func(obj client.Object) []string { + rsp, ok := obj.(*v1alpha1.ReplicatedStoragePool) + if !ok { + return nil + } + if len(rsp.Status.EligibleNodes) == 0 { + return nil + } + names := make([]string, 0, len(rsp.Status.EligibleNodes)) + for _, en := range rsp.Status.EligibleNodes { + if en.NodeName != "" { + names = append(names, en.NodeName) + } + } + return names + }, + ); err != nil { + return fmt.Errorf("index ReplicatedStoragePool by status.eligibleNodes.nodeName: %w", err) + } + return nil +} diff --git a/images/controller/internal/indexes/rv.go b/images/controller/internal/indexes/rv.go new file mode 100644 index 000000000..80db34681 --- /dev/null +++ b/images/controller/internal/indexes/rv.go @@ -0,0 +1,54 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package indexes + +import ( + "context" + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// IndexFieldRVByReplicatedStorageClassName is used to quickly list +// ReplicatedVolume objects referencing a specific RSC. +const IndexFieldRVByReplicatedStorageClassName = "spec.replicatedStorageClassName" + +// RegisterRVByReplicatedStorageClassName registers the index for listing +// ReplicatedVolume objects by spec.replicatedStorageClassName. 
+func RegisterRVByReplicatedStorageClassName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedVolume{}, + IndexFieldRVByReplicatedStorageClassName, + func(obj client.Object) []string { + rv, ok := obj.(*v1alpha1.ReplicatedVolume) + if !ok { + return nil + } + if rv.Spec.ReplicatedStorageClassName == "" { + return nil + } + return []string{rv.Spec.ReplicatedStorageClassName} + }, + ); err != nil { + return fmt.Errorf("index ReplicatedVolume by spec.replicatedStorageClassName: %w", err) + } + return nil +} diff --git a/images/controller/internal/indexes/rva.go b/images/controller/internal/indexes/rva.go new file mode 100644 index 000000000..8cc939e88 --- /dev/null +++ b/images/controller/internal/indexes/rva.go @@ -0,0 +1,54 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package indexes + +import ( + "context" + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +// IndexFieldRVAByReplicatedVolumeName is used to quickly list +// ReplicatedVolumeAttachment objects belonging to a specific RV. +const IndexFieldRVAByReplicatedVolumeName = "spec.replicatedVolumeName" + +// RegisterRVAByReplicatedVolumeName registers the index for listing +// ReplicatedVolumeAttachment objects by spec.replicatedVolumeName. +func RegisterRVAByReplicatedVolumeName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedVolumeAttachment{}, + IndexFieldRVAByReplicatedVolumeName, + func(obj client.Object) []string { + rva, ok := obj.(*v1alpha1.ReplicatedVolumeAttachment) + if !ok { + return nil + } + if rva.Spec.ReplicatedVolumeName == "" { + return nil + } + return []string{rva.Spec.ReplicatedVolumeName} + }, + ); err != nil { + return fmt.Errorf("index ReplicatedVolumeAttachment by spec.replicatedVolumeName: %w", err) + } + return nil +} diff --git a/images/controller/internal/indexes/rvr.go b/images/controller/internal/indexes/rvr.go new file mode 100644 index 000000000..6070a2406 --- /dev/null +++ b/images/controller/internal/indexes/rvr.go @@ -0,0 +1,83 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package indexes + +import ( + "context" + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + v1alpha1 "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +const ( + // IndexFieldRVRByNodeName is used to quickly list + // ReplicatedVolumeReplica objects on a specific node. + IndexFieldRVRByNodeName = "spec.nodeName" + + // IndexFieldRVRByReplicatedVolumeName is used to quickly list + // ReplicatedVolumeReplica objects belonging to a specific RV. + IndexFieldRVRByReplicatedVolumeName = "spec.replicatedVolumeName" +) + +// RegisterRVRByNodeName registers the index for listing +// ReplicatedVolumeReplica objects by spec.nodeName. +func RegisterRVRByNodeName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedVolumeReplica{}, + IndexFieldRVRByNodeName, + func(obj client.Object) []string { + rvr, ok := obj.(*v1alpha1.ReplicatedVolumeReplica) + if !ok { + return nil + } + if rvr.Spec.NodeName == "" { + return nil + } + return []string{rvr.Spec.NodeName} + }, + ); err != nil { + return fmt.Errorf("index ReplicatedVolumeReplica by spec.nodeName: %w", err) + } + return nil +} + +// RegisterRVRByReplicatedVolumeName registers the index for listing +// ReplicatedVolumeReplica objects by spec.replicatedVolumeName. +func RegisterRVRByReplicatedVolumeName(mgr manager.Manager) error { + if err := mgr.GetFieldIndexer().IndexField( + context.Background(), + &v1alpha1.ReplicatedVolumeReplica{}, + IndexFieldRVRByReplicatedVolumeName, + func(obj client.Object) []string { + rvr, ok := obj.(*v1alpha1.ReplicatedVolumeReplica) + if !ok { + return nil + } + if rvr.Spec.ReplicatedVolumeName == "" { + return nil + } + return []string{rvr.Spec.ReplicatedVolumeName} + }, + ); err != nil { + return fmt.Errorf("index ReplicatedVolumeReplica by spec.replicatedVolumeName: %w", err) + } + return nil +} diff --git a/images/controller/internal/indexes/testhelpers/drbdresource.go b/images/controller/internal/indexes/testhelpers/drbdresource.go new file mode 100644 index 000000000..29662c662 --- /dev/null +++ b/images/controller/internal/indexes/testhelpers/drbdresource.go @@ -0,0 +1,40 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package testhelpers + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// WithDRBDResourceByNodeNameIndex registers the IndexFieldDRBDResourceByNodeName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. 
+func WithDRBDResourceByNodeNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.DRBDResource{}, indexes.IndexFieldDRBDResourceByNodeName, func(obj client.Object) []string { + dr, ok := obj.(*v1alpha1.DRBDResource) + if !ok { + return nil + } + if dr.Spec.NodeName == "" { + return nil + } + return []string{dr.Spec.NodeName} + }) +} diff --git a/images/controller/internal/indexes/testhelpers/node.go b/images/controller/internal/indexes/testhelpers/node.go new file mode 100644 index 000000000..88afbebb3 --- /dev/null +++ b/images/controller/internal/indexes/testhelpers/node.go @@ -0,0 +1,33 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package testhelpers + +import ( + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// WithNodeByMetadataNameIndex registers the IndexFieldNodeByMetadataName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. +func WithNodeByMetadataNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&corev1.Node{}, indexes.IndexFieldNodeByMetadataName, func(obj client.Object) []string { + return []string{obj.GetName()} + }) +} diff --git a/images/controller/internal/indexes/testhelpers/rsc.go b/images/controller/internal/indexes/testhelpers/rsc.go new file mode 100644 index 000000000..7ab3c1679 --- /dev/null +++ b/images/controller/internal/indexes/testhelpers/rsc.go @@ -0,0 +1,55 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package testhelpers + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// WithRSCByStoragePoolIndex registers the IndexFieldRSCByStoragePool index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. 
+func WithRSCByStoragePoolIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedStorageClass{}, indexes.IndexFieldRSCByStoragePool, func(obj client.Object) []string { + rsc, ok := obj.(*v1alpha1.ReplicatedStorageClass) + if !ok { + return nil + } + if rsc.Spec.StoragePool == "" { + return nil + } + return []string{rsc.Spec.StoragePool} + }) +} + +// WithRSCByStatusStoragePoolNameIndex registers the IndexFieldRSCByStatusStoragePoolName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. +func WithRSCByStatusStoragePoolNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedStorageClass{}, indexes.IndexFieldRSCByStatusStoragePoolName, func(obj client.Object) []string { + rsc, ok := obj.(*v1alpha1.ReplicatedStorageClass) + if !ok { + return nil + } + if rsc.Status.StoragePoolName == "" { + return nil + } + return []string{rsc.Status.StoragePoolName} + }) +} diff --git a/images/controller/internal/indexes/testhelpers/rsp.go b/images/controller/internal/indexes/testhelpers/rsp.go new file mode 100644 index 000000000..bfca5c90f --- /dev/null +++ b/images/controller/internal/indexes/testhelpers/rsp.go @@ -0,0 +1,79 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package testhelpers + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// WithRSPByLVMVolumeGroupNameIndex registers the IndexFieldRSPByLVMVolumeGroupName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. +func WithRSPByLVMVolumeGroupNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedStoragePool{}, indexes.IndexFieldRSPByLVMVolumeGroupName, func(obj client.Object) []string { + rsp, ok := obj.(*v1alpha1.ReplicatedStoragePool) + if !ok { + return nil + } + if len(rsp.Spec.LVMVolumeGroups) == 0 { + return nil + } + names := make([]string, 0, len(rsp.Spec.LVMVolumeGroups)) + for _, lvg := range rsp.Spec.LVMVolumeGroups { + if lvg.Name != "" { + names = append(names, lvg.Name) + } + } + return names + }) +} + +// WithRSPByEligibleNodeNameIndex registers the IndexFieldRSPByEligibleNodeName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. 
+func WithRSPByEligibleNodeNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedStoragePool{}, indexes.IndexFieldRSPByEligibleNodeName, func(obj client.Object) []string { + rsp, ok := obj.(*v1alpha1.ReplicatedStoragePool) + if !ok { + return nil + } + if len(rsp.Status.EligibleNodes) == 0 { + return nil + } + names := make([]string, 0, len(rsp.Status.EligibleNodes)) + for _, en := range rsp.Status.EligibleNodes { + if en.NodeName != "" { + names = append(names, en.NodeName) + } + } + return names + }) +} + +// WithRSPByUsedByRSCNameIndex registers the IndexFieldRSPByUsedByRSCName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. +func WithRSPByUsedByRSCNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedStoragePool{}, indexes.IndexFieldRSPByUsedByRSCName, func(obj client.Object) []string { + rsp, ok := obj.(*v1alpha1.ReplicatedStoragePool) + if !ok { + return nil + } + return rsp.Status.UsedBy.ReplicatedStorageClassNames + }) +} diff --git a/images/controller/internal/indexes/testhelpers/rv.go b/images/controller/internal/indexes/testhelpers/rv.go new file mode 100644 index 000000000..f66a4cf45 --- /dev/null +++ b/images/controller/internal/indexes/testhelpers/rv.go @@ -0,0 +1,41 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package testhelpers provides utilities for registering indexes with fake clients in tests. +package testhelpers + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// WithRVByReplicatedStorageClassNameIndex registers the IndexFieldRVByReplicatedStorageClassName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. +func WithRVByReplicatedStorageClassNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedVolume{}, indexes.IndexFieldRVByReplicatedStorageClassName, func(obj client.Object) []string { + rv, ok := obj.(*v1alpha1.ReplicatedVolume) + if !ok { + return nil + } + if rv.Spec.ReplicatedStorageClassName == "" { + return nil + } + return []string{rv.Spec.ReplicatedStorageClassName} + }) +} diff --git a/images/controller/internal/indexes/testhelpers/rva.go b/images/controller/internal/indexes/testhelpers/rva.go new file mode 100644 index 000000000..e30016002 --- /dev/null +++ b/images/controller/internal/indexes/testhelpers/rva.go @@ -0,0 +1,40 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package testhelpers + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// WithRVAByReplicatedVolumeNameIndex registers the IndexFieldRVAByReplicatedVolumeName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. +func WithRVAByReplicatedVolumeNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedVolumeAttachment{}, indexes.IndexFieldRVAByReplicatedVolumeName, func(obj client.Object) []string { + rva, ok := obj.(*v1alpha1.ReplicatedVolumeAttachment) + if !ok { + return nil + } + if rva.Spec.ReplicatedVolumeName == "" { + return nil + } + return []string{rva.Spec.ReplicatedVolumeName} + }) +} diff --git a/images/controller/internal/indexes/testhelpers/rvr.go b/images/controller/internal/indexes/testhelpers/rvr.go new file mode 100644 index 000000000..3b078489b --- /dev/null +++ b/images/controller/internal/indexes/testhelpers/rvr.go @@ -0,0 +1,55 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package testhelpers + +import ( + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes" +) + +// WithRVRByNodeNameIndex registers the IndexFieldRVRByNodeName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. +func WithRVRByNodeNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedVolumeReplica{}, indexes.IndexFieldRVRByNodeName, func(obj client.Object) []string { + rvr, ok := obj.(*v1alpha1.ReplicatedVolumeReplica) + if !ok { + return nil + } + if rvr.Spec.NodeName == "" { + return nil + } + return []string{rvr.Spec.NodeName} + }) +} + +// WithRVRByReplicatedVolumeNameIndex registers the IndexFieldRVRByReplicatedVolumeName index +// on a fake.ClientBuilder. This is useful for tests that need to use the index. 
+func WithRVRByReplicatedVolumeNameIndex(b *fake.ClientBuilder) *fake.ClientBuilder { + return b.WithIndex(&v1alpha1.ReplicatedVolumeReplica{}, indexes.IndexFieldRVRByReplicatedVolumeName, func(obj client.Object) []string { + rvr, ok := obj.(*v1alpha1.ReplicatedVolumeReplica) + if !ok { + return nil + } + if rvr.Spec.ReplicatedVolumeName == "" { + return nil + } + return []string{rvr.Spec.ReplicatedVolumeName} + }) +} diff --git a/images/controller/internal/scheme/scheme.go b/images/controller/internal/scheme/scheme.go new file mode 100644 index 000000000..d2ab3b10d --- /dev/null +++ b/images/controller/internal/scheme/scheme.go @@ -0,0 +1,47 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package scheme + +import ( + "fmt" + + corev1 "k8s.io/api/core/v1" + storagev1 "k8s.io/api/storage/v1" + "k8s.io/apimachinery/pkg/runtime" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +func New() (*runtime.Scheme, error) { + scheme := runtime.NewScheme() + + var schemeFuncs = []func(s *runtime.Scheme) error{ + corev1.AddToScheme, + storagev1.AddToScheme, + v1alpha1.AddToScheme, + snc.AddToScheme, + } + + for i, f := range schemeFuncs { + if err := f(scheme); err != nil { + return nil, fmt.Errorf("adding scheme %d: %w", i, err) + } + } + + return scheme, nil +} diff --git a/images/controller/werf.inc.yaml b/images/controller/werf.inc.yaml new file mode 100644 index 000000000..588ac997f --- /dev/null +++ b/images/controller/werf.inc.yaml @@ -0,0 +1,56 @@ +--- +image: {{ $.ImageName }}-src-artifact +from: {{ $.Root.BASE_ALT_P11 }} +final: false + +git: + - add: / + to: /src + includePaths: + - api + - lib/go + - images/{{ $.ImageName }} + stageDependencies: + install: + - '**/*' + excludePaths: + - images/{{ $.ImageName }}/werf.yaml + +shell: + install: + - echo "src artifact" + +--- +image: {{ $.ImageName }}-golang-artifact +fromImage: builder/golang-alpine +final: false + +import: + - image: {{ $.ImageName }}-src-artifact + add: /src + to: /src + before: install + +mount: + - fromPath: ~/go-pkg-cache + to: /go/pkg + +shell: + setup: + - cd /src/images/{{ $.ImageName }}/cmd + - GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s -w" -o /{{ $.ImageName }} + - chmod +x /{{ $.ImageName }} + +--- +image: {{ $.ImageName }} +fromImage: base/distroless + +import: + - image: {{ $.ImageName }}-golang-artifact + add: /{{ $.ImageName }} + to: /{{ $.ImageName }} + before: setup + +docker: + ENTRYPOINT: ["/{{ $.ImageName }}"] + USER: deckhouse:deckhouse diff --git a/images/csi-driver/LICENSE b/images/csi-driver/LICENSE new file mode 100644 index 000000000..b77c0c92a --- /dev/null +++ b/images/csi-driver/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. 
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
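The `indexes`, `testhelpers`, and `scheme` packages introduced above are meant to be used together: controllers register the field indexes on the manager's field indexer, while tests register the same index functions on a fake client through the `testhelpers` wrappers and query them with `client.MatchingFields`. The sketch below is illustrative only and is not part of this change set: the `ReplicatedVolumeReplicaSpec` and `ReplicatedVolumeReplicaList` type names, the object and node names, and the test itself are assumptions about the API package, shown to make the intended usage concrete.

```go
package indexes_test

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"

	"github.com/deckhouse/sds-replicated-volume/api/v1alpha1"
	"github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes"
	"github.com/deckhouse/sds-replicated-volume/images/controller/internal/indexes/testhelpers"
	"github.com/deckhouse/sds-replicated-volume/images/controller/internal/scheme"
)

// TestListRVRByNodeName is an illustrative sketch: the spec/list type names and
// the concrete field values are assumptions, not taken from this change set.
func TestListRVRByNodeName(t *testing.T) {
	s, err := scheme.New()
	if err != nil {
		t.Fatal(err)
	}

	// Assumed shape: a ReplicatedVolumeReplica with spec.nodeName populated.
	rvr := &v1alpha1.ReplicatedVolumeReplica{
		ObjectMeta: metav1.ObjectMeta{Name: "replica-1"},
		Spec:       v1alpha1.ReplicatedVolumeReplicaSpec{NodeName: "node-a"},
	}

	// Register the spec.nodeName index on the fake client the same way
	// RegisterRVRByNodeName registers it on the manager's field indexer.
	cl := testhelpers.WithRVRByNodeNameIndex(
		fake.NewClientBuilder().WithScheme(s).WithObjects(rvr),
	).Build()

	var list v1alpha1.ReplicatedVolumeReplicaList
	if err := cl.List(context.Background(), &list,
		client.MatchingFields{indexes.IndexFieldRVRByNodeName: "node-a"},
	); err != nil {
		t.Fatal(err)
	}
	if len(list.Items) != 1 {
		t.Fatalf("expected 1 replica on node-a, got %d", len(list.Items))
	}
}
```

The same pattern applies to the other helpers (for example `WithRVAByReplicatedVolumeNameIndex` or `WithRSPByEligibleNodeNameIndex`), which keep the fake client's index definitions aligned with the ones registered on the manager.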
diff --git a/images/csi-driver/cmd/main.go b/images/csi-driver/cmd/main.go
new file mode 100644
index 000000000..48a2ac60e
--- /dev/null
+++ b/images/csi-driver/cmd/main.go
@@ -0,0 +1,102 @@
+/*
+Copyright 2025 Flant JSC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package main
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"os"
+	"os/signal"
+	"syscall"
+
+	v1 "k8s.io/api/core/v1"
+	sv1 "k8s.io/api/storage/v1"
+	extv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
+	"k8s.io/klog/v2"
+
+	"github.com/deckhouse/sds-common-lib/kubeclient"
+	snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1"
+	"github.com/deckhouse/sds-replicated-volume/api/v1alpha1"
+	"github.com/deckhouse/sds-replicated-volume/images/csi-driver/config"
+	"github.com/deckhouse/sds-replicated-volume/images/csi-driver/driver"
+	"github.com/deckhouse/sds-replicated-volume/lib/go/common/logger"
+)
+
+func healthHandler(w http.ResponseWriter, _ *http.Request) {
+	w.WriteHeader(http.StatusOK)
+	_, err := fmt.Fprint(w, "OK")
+	if err != nil {
+		klog.Fatalf("Error while generating healthcheck, err: %s", err.Error())
+	}
+}
+
+func main() {
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
+	c := make(chan os.Signal, 1)
+	signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
+	go func() {
+		<-c
+		cancel()
+	}()
+
+	cfgParams, err := config.NewConfig()
+	if err != nil {
+		klog.Fatalf("unable to create NewConfig, err: %s", err.Error())
+	}
+
+	log, err := logger.NewLogger(cfgParams.Loglevel)
+	if err != nil {
+		klog.Fatalf("unable to create NewLogger, err: %v", err)
+	}
+
+	log.Info("version = ", cfgParams.Version)
+
+	cl, err := kubeclient.New(
+		snc.AddToScheme,
+		v1alpha1.AddToScheme,
+		clientgoscheme.AddToScheme,
+		extv1.AddToScheme,
+		v1.AddToScheme,
+		sv1.AddToScheme,
+	)
+	if err != nil {
+		log.Error(err, "[main] unable to create kubeclient")
+		klog.Fatalf("unable to create kubeclient, err: %v", err)
+	}
+
+	http.HandleFunc("/healthz", healthHandler)
+	http.HandleFunc("/readyz", healthHandler)
+	go func() {
+		err = http.ListenAndServe(cfgParams.HealthProbeBindAddress, nil)
+		if err != nil {
+			log.Error(err, "[main] create probes")
+		}
+	}()
+
+	drv, err := driver.NewDriver(cfgParams.CsiAddress, cfgParams.DriverName, cfgParams.Address, &cfgParams.NodeName, log, cl)
+	if err != nil {
+		klog.Fatalf("unable to create NewDriver, err: %v", err)
+	}
+
+	if err := drv.Run(ctx); err != nil {
+		log.Error(err, "[drv.Run]")
+	}
+}
diff --git a/images/csi-driver/config/config.go b/images/csi-driver/config/config.go
new file mode 100644
index 000000000..d1f229207
--- /dev/null
+++ b/images/csi-driver/config/config.go
@@ -0,0 +1,78 @@
+/*
+Copyright 2025 Flant JSC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package config + +import ( + "flag" + "fmt" + "os" + + "github.com/deckhouse/sds-replicated-volume/images/csi-driver/driver" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" +) + +const ( + NodeName = "KUBE_NODE_NAME" + LogLevel = "LOG_LEVEL" + DefaultHealthProbeBindAddressEnvName = "HEALTH_PROBE_BIND_ADDRESS" + DefaultHealthProbeBindAddress = ":8081" +) + +type Options struct { + NodeName string + Version string + Loglevel logger.Verbosity + HealthProbeBindAddress string + CsiAddress string + DriverName string + Address string +} + +func NewConfig() (*Options, error) { + var opts Options + + opts.NodeName = os.Getenv(NodeName) + if opts.NodeName == "" { + return nil, fmt.Errorf("[NewConfig] required %s env variable is not specified", NodeName) + } + + opts.HealthProbeBindAddress = os.Getenv(DefaultHealthProbeBindAddressEnvName) + if opts.HealthProbeBindAddress == "" { + opts.HealthProbeBindAddress = DefaultHealthProbeBindAddress + } + + loglevel := os.Getenv(LogLevel) + if loglevel == "" { + opts.Loglevel = logger.DebugLevel + } else { + opts.Loglevel = logger.Verbosity(loglevel) + } + + opts.Version = "dev" + + fl := flag.NewFlagSet(os.Args[0], flag.ExitOnError) + fl.StringVar(&opts.CsiAddress, "csi-endpoint", "unix:///var/lib/kubelet/plugins/"+driver.DefaultDriverName+"/csi.sock", "CSI endpoint") + fl.StringVar(&opts.DriverName, "driver-name", driver.DefaultDriverName, "Name for the driver") + fl.StringVar(&opts.Address, "address", driver.DefaultAddress, "Address to serve on") + + err := fl.Parse(os.Args[1:]) + if err != nil { + return &opts, err + } + + return &opts, nil +} diff --git a/images/csi-driver/driver/controller.go b/images/csi-driver/driver/controller.go new file mode 100644 index 000000000..59ca42090 --- /dev/null +++ b/images/csi-driver/driver/controller.go @@ -0,0 +1,394 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/
+
+package driver
+
+import (
+	"context"
+	"errors"
+	"fmt"
+
+	"github.com/container-storage-interface/spec/lib/go/csi"
+	"github.com/google/uuid"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/status"
+	kerrors "k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/api/resource"
+
+	"github.com/deckhouse/sds-replicated-volume/images/csi-driver/internal"
+	"github.com/deckhouse/sds-replicated-volume/images/csi-driver/pkg/utils"
+)
+
+const (
+	ReplicatedStorageClassParamNameKey = "replicated.csi.storage.deckhouse.io/replicatedStorageClassName"
+)
+
+func (d *Driver) CreateVolume(ctx context.Context, request *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
+	traceID := uuid.New().String()
+
+	d.log.Trace(fmt.Sprintf("[CreateVolume][traceID:%s] ========== CreateVolume ============", traceID))
+	d.log.Trace(request.String())
+	d.log.Trace(fmt.Sprintf("[CreateVolume][traceID:%s] ========== CreateVolume ============", traceID))
+
+	if len(request.Name) == 0 {
+		return nil, status.Error(codes.InvalidArgument, "Volume Name cannot be empty")
+	}
+	volumeID := request.Name
+	if request.VolumeCapabilities == nil {
+		return nil, status.Error(codes.InvalidArgument, "Volume Capability cannot be empty")
+	}
+
+	BindingMode := request.Parameters[internal.BindingModeKey]
+	d.log.Info(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] storage class BindingMode: %s", traceID, volumeID, BindingMode))
+
+	// Get LVMVolumeGroups from StoragePool
+	storagePoolName := request.Parameters[internal.StoragePoolKey]
+	if len(storagePoolName) == 0 {
+		err := errors.New("no StoragePool specified in a storage class's parameters")
+		d.log.Error(err, fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] no StoragePool was found for the request: %+v", traceID, volumeID, request))
+		return nil, status.Errorf(codes.InvalidArgument, "no StoragePool specified in a storage class's parameters")
+	}
+
+	d.log.Info(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] using StoragePool: %s", traceID, volumeID, storagePoolName))
+	storagePoolInfo, err := utils.GetStoragePoolInfo(ctx, d.cl, d.log, storagePoolName)
+	if err != nil {
+		d.log.Error(err, fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] error GetStoragePoolInfo", traceID, volumeID))
+		return nil, status.Errorf(codes.Internal, "error during GetStoragePoolInfo: %v", err)
+	}
+
+	LvmType := storagePoolInfo.LVMType
+	if LvmType != internal.LVMTypeThin && LvmType != internal.LVMTypeThick {
+		d.log.Warning(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] Unknown LVM type from StoragePool: %s, defaulting to Thick", traceID, volumeID, LvmType))
+		LvmType = internal.LVMTypeThick
+	}
+	d.log.Info(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] LVM type from StoragePool: %s", traceID, volumeID, LvmType))
+
+	rvSize := resource.NewQuantity(request.CapacityRange.GetRequiredBytes(), resource.BinarySI)
+	d.log.Info(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] ReplicatedVolume size: %s", traceID, volumeID, rvSize.String()))
+
+	// Extract preferred node from AccessibilityRequirements for WaitForFirstConsumer
+	// Kubernetes provides the selected node in AccessibilityRequirements.Preferred[].Segments
+	// with key "kubernetes.io/hostname"
+	// NOTE: We no longer use rv.spec.attachTo. Attachment intent is expressed via ReplicatedVolumeAttachment (RVA)
+	// created in ControllerPublishVolume.
+ + // Build ReplicatedVolumeSpec + rvSpec := utils.BuildReplicatedVolumeSpec( + *rvSize, + request.Parameters[ReplicatedStorageClassParamNameKey], + ) + + d.log.Info(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] ReplicatedVolumeSpec: %+v", traceID, volumeID, rvSpec)) + + // Create ReplicatedVolume + d.log.Trace(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] ------------ CreateReplicatedVolume start ------------", traceID, volumeID)) + _, err = utils.CreateReplicatedVolume(ctx, d.cl, d.log, traceID, volumeID, rvSpec) + if err != nil { + if kerrors.IsAlreadyExists(err) { + d.log.Info(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] ReplicatedVolume %s already exists. Skip creating", traceID, volumeID, volumeID)) + } else { + d.log.Error(err, fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] error CreateReplicatedVolume", traceID, volumeID)) + return nil, err + } + } + d.log.Trace(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] ------------ CreateReplicatedVolume end ------------", traceID, volumeID)) + + // Wait for ReplicatedVolume to become ready + d.log.Trace(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] start wait ReplicatedVolume", traceID, volumeID)) + attemptCounter, err := utils.WaitForReplicatedVolumeReady(ctx, d.cl, d.log, traceID, volumeID) + if err != nil { + d.log.Error(err, fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] error WaitForReplicatedVolumeReady. Delete ReplicatedVolume %s", traceID, volumeID, volumeID)) + + deleteErr := utils.DeleteReplicatedVolume(ctx, d.cl, d.log, traceID, volumeID) + if deleteErr != nil { + d.log.Error(deleteErr, fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] error DeleteReplicatedVolume", traceID, volumeID)) + } + + d.log.Error(err, fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] error creating ReplicatedVolume", traceID, volumeID)) + return nil, err + } + d.log.Trace(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] finish wait ReplicatedVolume, attempt counter = %d", traceID, volumeID, attemptCounter)) + + // Build volume context + volumeCtx := make(map[string]string, len(request.Parameters)) + for k, v := range request.Parameters { + volumeCtx[k] = v + } + volumeCtx[internal.ReplicatedVolumeNameKey] = volumeID + + d.log.Info(fmt.Sprintf("[CreateVolume][traceID:%s][volumeID:%s] Volume created successfully. 
volumeCtx: %+v", traceID, volumeID, volumeCtx)) + + // Don't set AccessibleTopology - let scheduler-extender handle pod scheduling + + return &csi.CreateVolumeResponse{ + Volume: &csi.Volume{ + CapacityBytes: request.CapacityRange.GetRequiredBytes(), + VolumeId: request.Name, + VolumeContext: volumeCtx, + ContentSource: request.VolumeContentSource, + AccessibleTopology: nil, // No nodeAffinity - scheduling handled by scheduler-extender + }, + }, nil +} + +func (d *Driver) DeleteVolume(ctx context.Context, request *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) { + traceID := uuid.New().String() + d.log.Info(fmt.Sprintf("[DeleteVolume][traceID:%s] ========== Start DeleteVolume ============", traceID)) + if len(request.VolumeId) == 0 { + return nil, status.Error(codes.InvalidArgument, "Volume ID cannot be empty") + } + + err := utils.DeleteReplicatedVolume(ctx, d.cl, d.log, traceID, request.VolumeId) + if err != nil { + d.log.Error(err, "error DeleteReplicatedVolume") + return nil, err + } + d.log.Info(fmt.Sprintf("[DeleteVolume][traceID:%s][volumeID:%s] Volume deleted successfully", traceID, request.VolumeId)) + d.log.Info(fmt.Sprintf("[DeleteVolume][traceID:%s] ========== END DeleteVolume ============", traceID)) + return &csi.DeleteVolumeResponse{}, nil +} + +func (d *Driver) ControllerPublishVolume(ctx context.Context, request *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) { + traceID := uuid.New().String() + d.log.Info(fmt.Sprintf("[ControllerPublishVolume][traceID:%s] ========== ControllerPublishVolume ============", traceID)) + d.log.Trace(request.String()) + + if request.VolumeId == "" { + return nil, status.Error(codes.InvalidArgument, "Volume ID cannot be empty") + } + if request.NodeId == "" { + return nil, status.Error(codes.InvalidArgument, "Node ID cannot be empty") + } + + volumeID := request.VolumeId + nodeID := request.NodeId + + d.log.Info(fmt.Sprintf("[ControllerPublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Creating ReplicatedVolumeAttachment and waiting for Ready=true", traceID, volumeID, nodeID)) + + _, err := utils.EnsureRVA(ctx, d.cl, d.log, traceID, volumeID, nodeID) + if err != nil { + d.log.Error(err, fmt.Sprintf("[ControllerPublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Failed to create ReplicatedVolumeAttachment", traceID, volumeID, nodeID)) + return nil, status.Errorf(codes.Internal, "Failed to create ReplicatedVolumeAttachment: %v", err) + } + + err = utils.WaitForRVAReady(ctx, d.cl, d.log, traceID, volumeID, nodeID) + if err != nil { + d.log.Error(err, fmt.Sprintf("[ControllerPublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Failed waiting for RVA Ready=true", traceID, volumeID, nodeID)) + // Preserve RVA reason/message for better user diagnostics. + var waitErr *utils.RVAWaitError + if errors.As(err, &waitErr) { + // Permanent failures: waiting won't help (e.g. locality constraints). + if waitErr.Permanent { + return nil, status.Errorf(codes.FailedPrecondition, "ReplicatedVolumeAttachment not ready: %v", waitErr) + } + // Context-aware mapping (external-attacher controls ctx deadline). 
+ if errors.Is(err, context.DeadlineExceeded) { + return nil, status.Errorf(codes.DeadlineExceeded, "Timed out waiting for ReplicatedVolumeAttachment to become Ready=true: %v", waitErr) + } + if errors.Is(err, context.Canceled) { + return nil, status.Errorf(codes.Canceled, "Canceled waiting for ReplicatedVolumeAttachment to become Ready=true: %v", waitErr) + } + return nil, status.Errorf(codes.Internal, "Failed waiting for ReplicatedVolumeAttachment Ready=true: %v", waitErr) + } + // Fallback for unexpected errors. + if errors.Is(err, context.DeadlineExceeded) { + return nil, status.Errorf(codes.DeadlineExceeded, "Timed out waiting for ReplicatedVolumeAttachment to become Ready=true: %v", err) + } + if errors.Is(err, context.Canceled) { + return nil, status.Errorf(codes.Canceled, "Canceled waiting for ReplicatedVolumeAttachment to become Ready=true: %v", err) + } + return nil, status.Errorf(codes.Internal, "Failed waiting for ReplicatedVolumeAttachment Ready=true: %v", err) + } + + d.log.Info(fmt.Sprintf("[ControllerPublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Volume attached successfully", traceID, volumeID, nodeID)) + d.log.Info(fmt.Sprintf("[ControllerPublishVolume][traceID:%s] ========== END ControllerPublishVolume ============", traceID)) + + return &csi.ControllerPublishVolumeResponse{ + PublishContext: map[string]string{ + internal.ReplicatedVolumeNameKey: volumeID, + }, + }, nil +} + +func (d *Driver) ControllerUnpublishVolume(ctx context.Context, request *csi.ControllerUnpublishVolumeRequest) (*csi.ControllerUnpublishVolumeResponse, error) { + traceID := uuid.New().String() + d.log.Info(fmt.Sprintf("[ControllerUnpublishVolume][traceID:%s] ========== ControllerUnpublishVolume ============", traceID)) + d.log.Trace(request.String()) + + if request.VolumeId == "" { + return nil, status.Error(codes.InvalidArgument, "Volume ID cannot be empty") + } + if request.NodeId == "" { + return nil, status.Error(codes.InvalidArgument, "Node ID cannot be empty") + } + + volumeID := request.VolumeId + nodeID := request.NodeId + + d.log.Info(fmt.Sprintf("[ControllerUnpublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Deleting ReplicatedVolumeAttachment", traceID, volumeID, nodeID)) + + err := utils.DeleteRVA(ctx, d.cl, d.log, traceID, volumeID, nodeID) + if err != nil { + d.log.Error(err, fmt.Sprintf("[ControllerUnpublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Failed to delete ReplicatedVolumeAttachment", traceID, volumeID, nodeID)) + return nil, status.Errorf(codes.Internal, "Failed to delete ReplicatedVolumeAttachment: %v", err) + } + + d.log.Info(fmt.Sprintf("[ControllerUnpublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Waiting for node to disappear from status.actuallyAttachedTo", traceID, volumeID, nodeID)) + + // Wait for node to disappear from status.actuallyAttachedTo + err = utils.WaitForAttachedToRemoved(ctx, d.cl, d.log, traceID, volumeID, nodeID) + if err != nil { + d.log.Error(err, fmt.Sprintf("[ControllerUnpublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Failed to wait for status.actuallyAttachedTo removal", traceID, volumeID, nodeID)) + return nil, status.Errorf(codes.Internal, "Failed to wait for status.actuallyAttachedTo removal: %v", err) + } + + d.log.Info(fmt.Sprintf("[ControllerUnpublishVolume][traceID:%s][volumeID:%s][nodeID:%s] Volume detached successfully", traceID, volumeID, nodeID)) + d.log.Info(fmt.Sprintf("[ControllerUnpublishVolume][traceID:%s] ========== END ControllerUnpublishVolume ============", traceID)) + + return 
&csi.ControllerUnpublishVolumeResponse{}, nil +} + +func (d *Driver) ValidateVolumeCapabilities(_ context.Context, _ *csi.ValidateVolumeCapabilitiesRequest) (*csi.ValidateVolumeCapabilitiesResponse, error) { + d.log.Info("call method ValidateVolumeCapabilities") + return nil, nil +} + +func (d *Driver) ListVolumes(_ context.Context, _ *csi.ListVolumesRequest) (*csi.ListVolumesResponse, error) { + d.log.Info("call method ListVolumes") + return nil, nil +} + +func (d *Driver) GetCapacity(_ context.Context, _ *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) { + d.log.Info("method GetCapacity") + + // Return maximum int64 value to indicate unlimited capacity + // This prevents Kubernetes scheduler from rejecting pods due to insufficient storage + // Real capacity validation happens during volume creation + // Note: CSIDriver has storageCapacity: false, but external-provisioner may still call this method + return &csi.GetCapacityResponse{ + AvailableCapacity: int64(^uint64(0) >> 1), // Max int64: ~9.2 exabytes + MaximumVolumeSize: nil, + MinimumVolumeSize: nil, + }, nil +} + +func (d *Driver) ControllerGetCapabilities(_ context.Context, _ *csi.ControllerGetCapabilitiesRequest) (*csi.ControllerGetCapabilitiesResponse, error) { + d.log.Info("method ControllerGetCapabilities") + + var capabilities = []csi.ControllerServiceCapability_RPC_Type{ + csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME, + csi.ControllerServiceCapability_RPC_CLONE_VOLUME, + csi.ControllerServiceCapability_RPC_GET_CAPACITY, + csi.ControllerServiceCapability_RPC_EXPAND_VOLUME, + csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME, + // TODO: Add snapshot support if needed + // csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT, + } + + csiCaps := make([]*csi.ControllerServiceCapability, len(capabilities)) + + for i, capability := range capabilities { + csiCaps[i] = &csi.ControllerServiceCapability{ + Type: &csi.ControllerServiceCapability_Rpc{ + Rpc: &csi.ControllerServiceCapability_RPC{ + Type: capability, + }, + }, + } + } + + return &csi.ControllerGetCapabilitiesResponse{ + Capabilities: csiCaps, + }, nil +} + +func (d *Driver) ControllerExpandVolume(ctx context.Context, request *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) { + traceID := uuid.New().String() + + d.log.Info(fmt.Sprintf("[ControllerExpandVolume][traceID:%s] method ControllerExpandVolume", traceID)) + d.log.Trace(fmt.Sprintf("[ControllerExpandVolume][traceID:%s] ========== ControllerExpandVolume ============", traceID)) + d.log.Trace(request.String()) + d.log.Trace(fmt.Sprintf("[ControllerExpandVolume][traceID:%s] ========== ControllerExpandVolume ============", traceID)) + + volumeID := request.GetVolumeId() + if len(volumeID) == 0 { + return nil, status.Error(codes.InvalidArgument, "Volume id cannot be empty") + } + + rv, err := utils.GetReplicatedVolume(ctx, d.cl, volumeID) + if err != nil { + d.log.Error(err, fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] error getting ReplicatedVolume", traceID, volumeID)) + return nil, status.Errorf(codes.Internal, "error getting ReplicatedVolume: %s", err.Error()) + } + + resizeDelta, err := resource.ParseQuantity(internal.ResizeDelta) + if err != nil { + d.log.Error(err, fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] error ParseQuantity for ResizeDelta", traceID, volumeID)) + return nil, err + } + d.log.Trace(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] resizeDelta: %s", traceID, volumeID, 
resizeDelta.String()))
+	requestCapacity := resource.NewQuantity(request.CapacityRange.GetRequiredBytes(), resource.BinarySI)
+	d.log.Trace(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] requestCapacity: %s", traceID, volumeID, requestCapacity.String()))
+
+	nodeExpansionRequired := true
+	if request.GetVolumeCapability().GetBlock() != nil {
+		nodeExpansionRequired = false
+	}
+	d.log.Info(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] NodeExpansionRequired: %t", traceID, volumeID, nodeExpansionRequired))
+
+	// Check if resize is needed
+	currentSize := rv.Spec.Size
+	if currentSize.Value() > requestCapacity.Value()+resizeDelta.Value() || utils.AreSizesEqualWithinDelta(*requestCapacity, currentSize, resizeDelta) {
+		d.log.Warning(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] requested size is less than or equal to the actual size of the volume (within resize delta %s); no need to resize ReplicatedVolume %s. Requested size: %s, actual size: %s; returning NodeExpansionRequired: %t and CapacityBytes: %d", traceID, volumeID, resizeDelta.String(), volumeID, requestCapacity.String(), currentSize.String(), nodeExpansionRequired, currentSize.Value()))
+		return &csi.ControllerExpandVolumeResponse{
+			CapacityBytes:         currentSize.Value(),
+			NodeExpansionRequired: nodeExpansionRequired,
+		}, nil
+	}
+
+	d.log.Info(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] start resize ReplicatedVolume", traceID, volumeID))
+	d.log.Info(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] requested size: %s, actual size: %s", traceID, volumeID, requestCapacity.String(), currentSize.String()))
+	err = utils.ExpandReplicatedVolume(ctx, d.cl, rv, *requestCapacity)
+	if err != nil {
+		d.log.Error(err, fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] error updating ReplicatedVolume", traceID, volumeID))
+		return nil, status.Errorf(codes.Internal, "error updating ReplicatedVolume: %v", err)
+	}
+
+	// Wait for ReplicatedVolume to become ready after resize
+	attemptCounter, err := utils.WaitForReplicatedVolumeReady(ctx, d.cl, d.log, traceID, volumeID)
+	if err != nil {
+		d.log.Error(err, fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] error WaitForReplicatedVolumeReady", traceID, volumeID))
+		return nil, err
+	}
+	d.log.Info(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] finish resize ReplicatedVolume, attempt counter = %d", traceID, volumeID, attemptCounter))
+
+	d.log.Info(fmt.Sprintf("[ControllerExpandVolume][traceID:%s][volumeID:%s] Volume expanded successfully", traceID, volumeID))
+
+	return &csi.ControllerExpandVolumeResponse{
+		CapacityBytes:         request.CapacityRange.RequiredBytes,
+		NodeExpansionRequired: nodeExpansionRequired,
+	}, nil
+}
+
+func (d *Driver) ControllerGetVolume(_ context.Context, _ *csi.ControllerGetVolumeRequest) (*csi.ControllerGetVolumeResponse, error) {
+	d.log.Info("call method ControllerGetVolume")
+	return &csi.ControllerGetVolumeResponse{}, nil
+}
+
+func (d *Driver) ControllerModifyVolume(_ context.Context, _ *csi.ControllerModifyVolumeRequest) (*csi.ControllerModifyVolumeResponse, error) {
+	d.log.Info("call method ControllerModifyVolume")
+	return &csi.ControllerModifyVolumeResponse{}, nil
+}
diff --git a/images/csi-driver/driver/driver.go b/images/csi-driver/driver/driver.go
new file mode 100644
index 000000000..032ebe00d
--- /dev/null
+++ b/images/csi-driver/driver/driver.go
@@ -0,0 +1,191 @@
+/*
+Copyright 2025 Flant JSC
+
+Licensed under the Apache License, Version 2.0 (the 
"License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package driver + +import ( + "context" + "errors" + "fmt" + "net" + "net/http" + "net/url" + "os" + "path" + "path/filepath" + "sync" + "time" + + "github.com/container-storage-interface/spec/lib/go/csi" + "golang.org/x/sync/errgroup" + "google.golang.org/grpc" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/deckhouse/sds-replicated-volume/images/csi-driver/internal" + "github.com/deckhouse/sds-replicated-volume/images/csi-driver/pkg/utils" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" +) + +const ( + // DefaultDriverName defines the name that is used in Kubernetes and the CSI + // system for the canonical, official name of this plugin + DefaultDriverName = "replicated.csi.storage.deckhouse.io" + // DefaultAddress is the default address that the csi plugin will serve its + // http handler on. + DefaultAddress = "127.0.0.1:12302" + defaultWaitActionTimeout = 5 * time.Minute +) + +var ( + version string +) + +type Driver struct { + name string + + csiAddress string + address string + hostID string + waitActionTimeout time.Duration + + srv *grpc.Server + httpSrv http.Server + log *logger.Logger + + readyMu sync.Mutex // protects ready + ready bool + cl client.Client + storeManager utils.NodeStoreManager + inFlight *internal.InFlight + + csi.UnimplementedControllerServer + csi.UnimplementedIdentityServer + csi.UnimplementedNodeServer +} + +// NewDriver returns a CSI plugin that contains the necessary gRPC +// interfaces to interact with Kubernetes over unix domain sockets for +// managing disks +func NewDriver(csiAddress, driverName, address string, nodeName *string, log *logger.Logger, cl client.Client) (*Driver, error) { + if driverName == "" { + driverName = DefaultDriverName + } + + st := utils.NewStore(log) + + return &Driver{ + name: driverName, + hostID: *nodeName, + csiAddress: csiAddress, + address: address, + log: log, + waitActionTimeout: defaultWaitActionTimeout, + cl: cl, + storeManager: st, + inFlight: internal.NewInFlight(), + }, nil +} + +func (d *Driver) Run(ctx context.Context) error { + u, err := url.Parse(d.csiAddress) + if err != nil { + return fmt.Errorf("unable to parse address: %q", err) + } + + fmt.Print("d.csiAddress", d.csiAddress) + fmt.Print("u", u) + + grpcAddr := path.Join(u.Host, filepath.FromSlash(u.Path)) + if u.Host == "" { + grpcAddr = filepath.FromSlash(u.Path) + } + + fmt.Print("grpcAddr", grpcAddr) + + // CSI plugins talk only over UNIX sockets currently + if u.Scheme != "unix" { + return fmt.Errorf("currently only unix domain sockets are supported, have: %s", u.Scheme) + } + // remove the socket if it's already there. This can happen if we + // deploy a new version and the socket was created from the old running + // plugin. 
+ d.log.Info(fmt.Sprintf("socket %s removing socket", grpcAddr)) + if err := os.Remove(grpcAddr); err != nil && !os.IsNotExist(err) { + return fmt.Errorf("failed to remove unix domain socket file %s, error: %s", grpcAddr, err) + } + + grpcListener, err := net.Listen(u.Scheme, grpcAddr) + if err != nil { + return fmt.Errorf("failed to listen: %v", err) + } + + // log response errors for better observability + errHandler := func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) { + resp, err := handler(ctx, req) + if err != nil { + d.log.Error(err, fmt.Sprintf("method %s method failed ", info.FullMethod)) + } + return resp, err + } + + d.srv = grpc.NewServer(grpc.UnaryInterceptor(errHandler)) + csi.RegisterIdentityServer(d.srv, d) + csi.RegisterControllerServer(d.srv, d) + csi.RegisterNodeServer(d.srv, d) + + httpListener, err := net.Listen("tcp", d.address) + if err != nil { + return fmt.Errorf("failed to listen: %v", err) + } + + mux := http.NewServeMux() + mux.HandleFunc("/health", func(w http.ResponseWriter, _ *http.Request) { + w.WriteHeader(http.StatusOK) + }) + + d.httpSrv = http.Server{ + Handler: mux, + } + + d.ready = true + d.log.Info(fmt.Sprintf("grpc_addr %s http_addr %s starting server", grpcAddr, d.address)) + + var eg errgroup.Group + eg.Go(func() error { + <-ctx.Done() + return d.httpSrv.Shutdown(context.Background()) // TODO: Should we use just ctx here? + }) + eg.Go(func() error { + go func() { + <-ctx.Done() + d.log.Info("server stopped") + d.readyMu.Lock() + d.ready = false + d.readyMu.Unlock() + d.srv.GracefulStop() + }() + return d.srv.Serve(grpcListener) + }) + eg.Go(func() error { + err := d.httpSrv.Serve(httpListener) + if errors.Is(err, http.ErrServerClosed) { + return nil + } + return err + }) + + return eg.Wait() +} diff --git a/images/csi-driver/driver/health.go b/images/csi-driver/driver/health.go new file mode 100644 index 000000000..8afe5b742 --- /dev/null +++ b/images/csi-driver/driver/health.go @@ -0,0 +1,38 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package driver + +import "context" + +// HealthCheck is the interface that must be implemented to be compatible with +// `HealthChecker`. +type HealthCheck interface { + Name() string + Check(ctx context.Context) +} + +// HealthChecker helps with writing multi component health checkers. +type HealthChecker struct { + checks []HealthCheck +} + +// NewHealthChecker configures a new health checker with the passed in checks. +func NewHealthChecker(checks ...HealthCheck) *HealthChecker { + return &HealthChecker{ + checks: checks, + } +} diff --git a/images/csi-driver/driver/identity.go b/images/csi-driver/driver/identity.go new file mode 100644 index 000000000..e0047e0ac --- /dev/null +++ b/images/csi-driver/driver/identity.go @@ -0,0 +1,89 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package driver
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/container-storage-interface/spec/lib/go/csi"
+	"github.com/golang/protobuf/ptypes/wrappers"
+)
+
+// GetPluginInfo returns metadata of the plugin
+func (d *Driver) GetPluginInfo(_ context.Context, _ *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
+	resp := &csi.GetPluginInfoResponse{
+		Name:          d.name,
+		VendorVersion: version,
+	}
+
+	d.log.Info(fmt.Sprintf("GetPluginInfo response: %+v", resp))
+	return resp, nil
+}
+
+// GetPluginCapabilities returns available capabilities of the plugin
+func (d *Driver) GetPluginCapabilities(_ context.Context, _ *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
+	d.log.Info("method GetPluginCapabilities")
+	resp := &csi.GetPluginCapabilitiesResponse{
+		Capabilities: []*csi.PluginCapability{
+			{
+				Type: &csi.PluginCapability_Service_{
+					Service: &csi.PluginCapability_Service{
+						Type: csi.PluginCapability_Service_CONTROLLER_SERVICE,
+					},
+				},
+			},
+			{
+				Type: &csi.PluginCapability_Service_{
+					Service: &csi.PluginCapability_Service{
+						Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS,
+					},
+				},
+			},
+			{
+				Type: &csi.PluginCapability_VolumeExpansion_{
+					VolumeExpansion: &csi.PluginCapability_VolumeExpansion{
+						Type: csi.PluginCapability_VolumeExpansion_ONLINE,
+					},
+				},
+			},
+			{
+				Type: &csi.PluginCapability_VolumeExpansion_{
+					VolumeExpansion: &csi.PluginCapability_VolumeExpansion{
+						Type: csi.PluginCapability_VolumeExpansion_OFFLINE,
+					},
+				},
+			},
+		},
+	}
+
+	d.log.Info(fmt.Sprintf("GetPluginCapabilities response: %+v", resp))
+	return resp, nil
+}
+
+// Probe returns the health and readiness of the plugin
+func (d *Driver) Probe(_ context.Context, _ *csi.ProbeRequest) (*csi.ProbeResponse, error) {
+	d.log.Info("method Probe")
+	d.readyMu.Lock()
+	defer d.readyMu.Unlock()
+
+	return &csi.ProbeResponse{
+		Ready: &wrappers.BoolValue{
+			Value: d.ready,
+		},
+	}, nil
+}
diff --git a/images/csi-driver/driver/node.go b/images/csi-driver/driver/node.go
new file mode 100644
index 000000000..2b00e840d
--- /dev/null
+++ b/images/csi-driver/driver/node.go
@@ -0,0 +1,537 @@
+/*
+Copyright 2025 Flant JSC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/ + +package driver + +import ( + "context" + "fmt" + "os" + "slices" + "strconv" + "strings" + "syscall" + "unsafe" + + "github.com/container-storage-interface/spec/lib/go/csi" + "golang.org/x/sys/unix" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + + "github.com/deckhouse/sds-replicated-volume/images/csi-driver/internal" + "github.com/deckhouse/sds-replicated-volume/images/csi-driver/pkg/utils" +) + +const ( + // default file system type to be used when it is not provided + defaultFsType = internal.FSTypeExt4 + + // VolumeOperationAlreadyExists is message fmt returned to CO when there is another in-flight call on the given volumeID + VolumeOperationAlreadyExists = "An operation with the given volume=%q is already in progress" + + BLKGETSIZE64 = 0x80081272 +) + +var ( + // nodeCaps represents the capability of node service. + nodeCaps = []csi.NodeServiceCapability_RPC_Type{ + csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME, + csi.NodeServiceCapability_RPC_EXPAND_VOLUME, + csi.NodeServiceCapability_RPC_GET_VOLUME_STATS, + } + + ValidFSTypes = map[string]struct{}{ + internal.FSTypeExt4: {}, + internal.FSTypeXfs: {}, + } +) + +func (d *Driver) NodeStageVolume(ctx context.Context, request *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) { + volumeID := request.GetVolumeId() + if len(volumeID) == 0 { + return nil, status.Error(codes.InvalidArgument, "[NodeStageVolume] Volume id cannot be empty") + } + + target := request.GetStagingTargetPath() + if len(target) == 0 { + return nil, status.Error(codes.InvalidArgument, "[NodeStageVolume] Staging target path cannot be empty") + } + + volCap := request.GetVolumeCapability() + if volCap == nil { + return nil, status.Error(codes.InvalidArgument, "[NodeStageVolume] Volume capability cannot be empty") + } + + if volCap.GetBlock() != nil { + d.log.Info("[NodeStageVolume] Block volume detected. Skipping staging.") + return &csi.NodeStageVolumeResponse{}, nil + } + + mountVolume := volCap.GetMount() + if mountVolume == nil { + return nil, status.Error(codes.InvalidArgument, "[NodeStageVolume] Volume capability mount cannot be empty") + } + + fsType := mountVolume.GetFsType() + if fsType == "" { + fsType = defaultFsType + } + + _, ok := ValidFSTypes[strings.ToLower(fsType)] + if !ok { + d.log.Error(fmt.Errorf("[NodeStageVolume] Invalid fsType: %s. 
Supported values: %v", fsType, ValidFSTypes), "Invalid fsType") + return nil, status.Errorf(codes.InvalidArgument, "invalid fsType") + } + + formatOptions := []string{} + + // support mounting on old linux kernels + needLegacySupport, err := needLegacyXFSSupport() + if err != nil { + return nil, err + } + if fsType == internal.FSTypeXfs && needLegacySupport { + d.log.Info("[NodeStageVolume] legacy xfs support is on") + formatOptions = append(formatOptions, "-m", "bigtime=0,inobtcount=0,reflink=0", "-i", "nrext64=0") + } + + mountOptions := collectMountOptions(fsType, mountVolume.GetMountFlags(), []string{}) + + d.log.Debug(fmt.Sprintf("[NodeStageVolume] Volume %s operation started", volumeID)) + ok = d.inFlight.Insert(volumeID) + if !ok { + return nil, status.Errorf(codes.Aborted, VolumeOperationAlreadyExists, volumeID) + } + defer func() { + d.log.Debug(fmt.Sprintf("[NodeStageVolume] Volume %s operation completed", volumeID)) + d.inFlight.Delete(volumeID) + }() + + // Get DRBD device path from ReplicatedVolumeReplica + rvr, err := utils.GetReplicatedVolumeReplicaForNode(ctx, d.cl, volumeID, d.hostID) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodeStageVolume] Error getting ReplicatedVolumeReplica: %v", err) + } + + devPath, err := utils.GetDRBDDevicePath(rvr) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodeStageVolume] Error getting DRBD device path: %v", err) + } + + d.log.Debug(fmt.Sprintf("[NodeStageVolume] Checking if device exists: %s", devPath)) + exists, err := d.storeManager.PathExists(devPath) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodeStageVolume] Error checking if device exists: %v", err) + } + if !exists { + return nil, status.Errorf(codes.NotFound, "[NodeStageVolume] Device %s not found", devPath) + } + + d.log.Trace(fmt.Sprintf("formatOptions = %s", formatOptions)) + d.log.Trace(fmt.Sprintf("mountOptions = %s", mountOptions)) + d.log.Trace(fmt.Sprintf("fsType = %s", fsType)) + + err = d.storeManager.NodeStageVolumeFS(devPath, target, fsType, mountOptions, formatOptions, "", "") + if err != nil { + d.log.Error(err, "[NodeStageVolume] Error mounting volume") + return nil, status.Errorf(codes.Internal, "[NodeStageVolume] Error format device %q and mounting volume at %q: %v", devPath, target, err) + } + + needResize, err := d.storeManager.NeedResize(devPath, target) + if err != nil { + d.log.Error(err, "[NodeStageVolume] Error checking if volume needs resize") + return nil, status.Errorf(codes.Internal, "[NodeStageVolume] Error checking if the volume %q (%q) mounted at %q needs resizing: %v", volumeID, devPath, target, err) + } + + if needResize { + d.log.Info(fmt.Sprintf("[NodeStageVolume] Resizing volume %q (%q) mounted at %q", volumeID, devPath, target)) + err = d.storeManager.ResizeFS(target) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodeStageVolume] Error resizing volume %q (%q) mounted at %q: %v", volumeID, devPath, target, err) + } + } + + d.log.Info(fmt.Sprintf("[NodeStageVolume] Volume %q (%q) successfully staged at %s. 
FsType: %s", volumeID, devPath, target, fsType)) + + return &csi.NodeStageVolumeResponse{}, nil +} + +func (d *Driver) NodeUnstageVolume(_ context.Context, request *csi.NodeUnstageVolumeRequest) (*csi.NodeUnstageVolumeResponse, error) { + d.log.Debug(fmt.Sprintf("[NodeUnstageVolume] method called with request: %v", request)) + volumeID := request.GetVolumeId() + if len(volumeID) == 0 { + return nil, status.Error(codes.InvalidArgument, "[NodeUnstageVolume] Volume id cannot be empty") + } + + target := request.GetStagingTargetPath() + if len(target) == 0 { + return nil, status.Error(codes.InvalidArgument, "[NodeUnstageVolume] Staging target path cannot be empty") + } + + d.log.Debug(fmt.Sprintf("[NodeUnstageVolume] Volume %s operation started", volumeID)) + ok := d.inFlight.Insert(volumeID) + if !ok { + return nil, status.Errorf(codes.Aborted, VolumeOperationAlreadyExists, volumeID) + } + defer func() { + d.log.Debug(fmt.Sprintf("[NodeUnstageVolume] Volume %s operation completed", volumeID)) + d.inFlight.Delete(volumeID) + }() + err := d.storeManager.Unstage(target) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodeUnstageVolume] Error unmounting volume %q mounted at %q: %v", volumeID, target, err) + } + + return &csi.NodeUnstageVolumeResponse{}, nil +} + +func (d *Driver) NodePublishVolume(ctx context.Context, request *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) { + d.log.Info("Start method NodePublishVolume") + d.log.Trace("------------- NodePublishVolume --------------") + d.log.Trace(request.String()) + d.log.Trace("------------- NodePublishVolume --------------") + + volumeID := request.GetVolumeId() + if len(volumeID) == 0 { + return nil, status.Error(codes.InvalidArgument, "[NodePublishVolume] Volume id cannot be empty") + } + + source := request.GetStagingTargetPath() + if len(source) == 0 { + return nil, status.Error(codes.InvalidArgument, "[NodePublishVolume] Staging target path cannot be empty") + } + + target := request.GetTargetPath() + if len(target) == 0 { + return nil, status.Error(codes.InvalidArgument, "[NodePublishVolume] Target path cannot be empty") + } + + volCap := request.GetVolumeCapability() + if volCap == nil { + return nil, status.Error(codes.InvalidArgument, "[NodePublishVolume] Volume capability cannot be empty") + } + + mountOptions := []string{"bind"} + if request.GetReadonly() { + mountOptions = append(mountOptions, "ro") + } + + // Get DRBD device path from ReplicatedVolumeReplica + rvr, err := utils.GetReplicatedVolumeReplicaForNode(ctx, d.cl, volumeID, d.hostID) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodePublishVolume] Error getting ReplicatedVolumeReplica: %v", err) + } + + devPath, err := utils.GetDRBDDevicePath(rvr) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodePublishVolume] Error getting DRBD device path: %v", err) + } + + d.log.Debug(fmt.Sprintf("[NodePublishVolume] Checking if device exists: %s", devPath)) + exists, err := d.storeManager.PathExists(devPath) + if err != nil { + return nil, status.Errorf(codes.Internal, "[NodePublishVolume] Error checking if device exists: %v", err) + } + if !exists { + return nil, status.Errorf(codes.NotFound, "[NodePublishVolume] Device %q not found", devPath) + } + + d.log.Debug(fmt.Sprintf("[NodePublishVolume] Volume %s operation started", volumeID)) + + ok := d.inFlight.Insert(volumeID) + if !ok { + return nil, status.Errorf(codes.Aborted, VolumeOperationAlreadyExists, volumeID) + } + defer func() { + 
+		d.log.Debug(fmt.Sprintf("[NodePublishVolume] Volume %s operation completed", volumeID))
+		d.inFlight.Delete(volumeID)
+	}()
+
+	switch volCap.GetAccessType().(type) {
+	case *csi.VolumeCapability_Block:
+		d.log.Trace("[NodePublishVolume] Block volume detected.")
+
+		err := d.storeManager.NodePublishVolumeBlock(devPath, target, mountOptions)
+		if err != nil {
+			return nil, status.Errorf(codes.Internal, "[NodePublishVolume] Error mounting volume %q at %q: %v", devPath, target, err)
+		}
+
+	case *csi.VolumeCapability_Mount:
+		d.log.Trace("[NodePublishVolume] FS type volume detected.")
+		mountVolume := volCap.GetMount()
+		if mountVolume == nil {
+			return nil, status.Error(codes.InvalidArgument, "[NodePublishVolume] Volume capability mount cannot be empty")
+		}
+		fsType := mountVolume.GetFsType()
+		if fsType == "" {
+			fsType = defaultFsType
+		}
+
+		_, ok = ValidFSTypes[strings.ToLower(fsType)]
+		if !ok {
+			d.log.Error(fmt.Errorf("[NodePublishVolume] Invalid fsType: %s. Supported values: %v", fsType, ValidFSTypes), "Invalid fsType")
+			return nil, status.Errorf(codes.InvalidArgument, "Invalid fsType")
+		}
+
+		mountOptions = collectMountOptions(fsType, mountVolume.GetMountFlags(), mountOptions)
+
+		err := d.storeManager.NodePublishVolumeFS(source, devPath, target, fsType, mountOptions)
+		if err != nil {
+			return nil, status.Errorf(codes.Internal, "[NodePublishVolume] Error bind mounting volume %q. Source: %q. Target: %q. Mount options: %v. Err: %v", volumeID, source, target, mountOptions, err)
+		}
+	}
+
+	return &csi.NodePublishVolumeResponse{}, nil
+}
+
+func (d *Driver) NodeUnpublishVolume(_ context.Context, request *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
+	d.log.Debug(fmt.Sprintf("[NodeUnpublishVolume] method called with request: %v", request))
+	d.log.Trace("------------- NodeUnpublishVolume --------------")
+	d.log.Trace(request.String())
+	d.log.Trace("------------- NodeUnpublishVolume --------------")
+
+	volumeID := request.GetVolumeId()
+	if len(volumeID) == 0 {
+		return nil, status.Error(codes.InvalidArgument, "[NodeUnpublishVolume] Volume id cannot be empty")
+	}
+
+	target := request.GetTargetPath()
+	if len(target) == 0 {
+		return nil, status.Error(codes.InvalidArgument, "[NodeUnpublishVolume] Target path cannot be empty")
+	}
+
+	d.log.Debug(fmt.Sprintf("[NodeUnpublishVolume] Volume %s operation started", volumeID))
+	ok := d.inFlight.Insert(volumeID)
+	if !ok {
+		return nil, status.Errorf(codes.Aborted, VolumeOperationAlreadyExists, volumeID)
+	}
+	defer func() {
+		d.log.Debug(fmt.Sprintf("[NodeUnpublishVolume] Volume %s operation completed", volumeID))
+		d.inFlight.Delete(volumeID)
+	}()
+
+	err := d.storeManager.Unpublish(target)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "[NodeUnpublishVolume] Error unmounting volume %q mounted at %q: %v", volumeID, target, err)
+	}
+
+	return &csi.NodeUnpublishVolumeResponse{}, nil
+}
+
+// IsBlockDevice checks if the given path is a block device
+func (d *Driver) IsBlockDevice(fullPath string) (bool, error) {
+	var st unix.Stat_t
+	err := unix.Stat(fullPath, &st)
+	if err != nil {
+		return false, err
+	}
+
+	return (st.Mode & unix.S_IFMT) == unix.S_IFBLK, nil
+}
+
+// getBlockSizeBytes returns the size of the block device in bytes
+func (d *Driver) getBlockSizeBytes(devicePath string) (uint64, error) {
+	file, err := os.OpenFile(devicePath, os.O_RDONLY, 0)
+	if err != nil {
+		return 0, fmt.Errorf("failed to open device %s: %w", devicePath, err)
+	}
+	defer file.Close()
+
+	fd := file.Fd()
+
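+	// BLKGETSIZE64 is the ioctl that reports the total size of a block device in
+	// bytes; the kernel writes the result into the uint64 pointed to by the third
+	// ioctl argument.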
+ var size uint64 + _, _, errno := syscall.Syscall(syscall.SYS_IOCTL, fd, BLKGETSIZE64, uintptr(unsafe.Pointer(&size))) + if errno != 0 { + return 0, fmt.Errorf("failed to get device size for %s: %w", devicePath, errno) + } + + return size, nil +} + +func (d *Driver) NodeGetVolumeStats(_ context.Context, req *csi.NodeGetVolumeStatsRequest) (*csi.NodeGetVolumeStatsResponse, error) { + d.log.Info("method NodeGetVolumeStats") + + isBlock, err := d.IsBlockDevice(req.VolumePath) + + if err != nil { + return nil, status.Errorf(codes.Internal, "failed to determine whether %s is block device: %v", req.VolumePath, err) + } + + if isBlock { + bcap, err := d.getBlockSizeBytes(req.VolumePath) + if err != nil { + return nil, status.Errorf(codes.Internal, "failed to get block capacity on path %s: %v", req.VolumePath, err) + } + return &csi.NodeGetVolumeStatsResponse{ + Usage: []*csi.VolumeUsage{ + { + Unit: csi.VolumeUsage_BYTES, + Total: int64(bcap), + }, + }, + }, nil + } + + // For filesystem mounts, get filesystem statistics + var fsStat syscall.Statfs_t + if err := syscall.Statfs(req.VolumePath, &fsStat); err != nil { + return nil, status.Errorf(codes.Internal, "failed to statfs %s: %v", req.VolumePath, err) + } + + // NOTE: syscall.Statfs_t field types are OS-dependent. + // On linux Bsize is already int64 (so the conversion is redundant and triggers unconvert), + // but on darwin it's not, and we need int64 for computations below. + blockSize := int64(fsStat.Bsize) //nolint:unconvert + available := int64(fsStat.Bavail) * blockSize + total := int64(fsStat.Blocks) * blockSize + used := (int64(fsStat.Blocks) - int64(fsStat.Bfree)) * blockSize + + inodes := int64(fsStat.Files) + inodesFree := int64(fsStat.Ffree) + inodesUsed := inodes - inodesFree + + return &csi.NodeGetVolumeStatsResponse{ + Usage: []*csi.VolumeUsage{ + { + Available: available, + Total: total, + Used: used, + Unit: csi.VolumeUsage_BYTES, + }, + { + Available: inodesFree, + Total: inodes, + Used: inodesUsed, + Unit: csi.VolumeUsage_INODES, + }, + }, + }, nil +} + +func (d *Driver) NodeExpandVolume(_ context.Context, request *csi.NodeExpandVolumeRequest) (*csi.NodeExpandVolumeResponse, error) { + d.log.Info("Call method NodeExpandVolume") + + d.log.Trace("========== NodeExpandVolume ============") + d.log.Trace(request.String()) + d.log.Trace("========== NodeExpandVolume ============") + + volumeID := request.GetVolumeId() + volumePath := request.GetVolumePath() + if len(volumeID) == 0 { + return nil, status.Error(codes.InvalidArgument, "Volume id cannot be empty") + } + if len(volumePath) == 0 { + return nil, status.Error(codes.InvalidArgument, "Volume Path cannot be empty") + } + + err := d.storeManager.ResizeFS(volumePath) + if err != nil { + d.log.Error(err, "d.mounter.ResizeFS:") + return nil, status.Error(codes.Internal, err.Error()) + } + + return &csi.NodeExpandVolumeResponse{}, nil +} + +func (d *Driver) NodeGetCapabilities(_ context.Context, request *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) { + d.log.Debug(fmt.Sprintf("[NodeGetCapabilities] method called with request: %v", request)) + + caps := make([]*csi.NodeServiceCapability, len(nodeCaps)) + for i, capability := range nodeCaps { + caps[i] = &csi.NodeServiceCapability{ + Type: &csi.NodeServiceCapability_Rpc{ + Rpc: &csi.NodeServiceCapability_RPC{ + Type: capability, + }, + }, + } + } + + return &csi.NodeGetCapabilitiesResponse{ + Capabilities: caps, + }, nil +} + +func (d *Driver) NodeGetInfo(_ context.Context, _ *csi.NodeGetInfoRequest) 
(*csi.NodeGetInfoResponse, error) {
+	d.log.Info("method NodeGetInfo")
+	d.log.Info(fmt.Sprintf("hostID = %s", d.hostID))
+
+	return &csi.NodeGetInfoResponse{
+		NodeId: d.hostID,
+		//MaxVolumesPerNode: 10,
+		// Don't set AccessibleTopology - scheduling handled by scheduler-extender
+		AccessibleTopology: nil,
+	}, nil
+}
+
+// collectMountOptions returns array of mount options from
+// VolumeCapability_MountVolume and special mount options for
+// given filesystem.
+func collectMountOptions(fsType string, mountFlags, mountOptions []string) []string {
+	for _, opt := range mountFlags {
+		if !slices.Contains(mountOptions, opt) {
+			mountOptions = append(mountOptions, opt)
+		}
+	}
+
+	// By default, xfs does not allow mounting of two volumes with the same filesystem uuid.
+	// Force ignore this uuid to be able to mount volume + its clone / restored snapshot on the same node.
+	if fsType == internal.FSTypeXfs {
+		if !slices.Contains(mountOptions, "nouuid") {
+			mountOptions = append(mountOptions, "nouuid")
+		}
+	}
+
+	return mountOptions
+}
+
+func readCString(arr []byte) string {
+	b := make([]byte, 0, len(arr))
+	for _, v := range arr {
+		if v == 0x00 {
+			break
+		}
+		b = append(b, v)
+	}
+	return string(b)
+}
+
+func needLegacyXFSSupport() (bool, error) {
+	// checking if Linux kernel version is <= 5.15
+	var uname unix.Utsname
+	if err := unix.Uname(&uname); err != nil {
+		return false, fmt.Errorf("unable to get kernel version via uname: %w", err)
+	}
+
+	fullVersion := readCString(uname.Release[:]) // similar to: "6.8.0-44-generic"
+
+	parts := strings.SplitN(fullVersion, ".", 3)
+	if len(parts) < 3 {
+		return false, fmt.Errorf("unexpected kernel version: %s", fullVersion)
+	}
+
+	major, err := strconv.Atoi(parts[0])
+	if err != nil {
+		return false, fmt.Errorf("unexpected kernel version (major part): %s", fullVersion)
+	}
+
+	minor, err := strconv.Atoi(parts[1])
+	if err != nil {
+		return false, fmt.Errorf("unexpected kernel version (minor part): %s", fullVersion)
+	}
+
+	return major < 5 || major == 5 && minor <= 15, nil
+}
diff --git a/images/csi-driver/go.mod b/images/csi-driver/go.mod
new file mode 100644
index 000000000..437d8ef31
--- /dev/null
+++ b/images/csi-driver/go.mod
@@ -0,0 +1,267 @@
+module github.com/deckhouse/sds-replicated-volume/images/csi-driver
+
+go 1.24.11
+
+require (
+	github.com/container-storage-interface/spec v1.12.0
+	github.com/deckhouse/sds-common-lib v0.6.3
+	github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da
+	github.com/deckhouse/sds-replicated-volume/api v0.0.0-20250907192450-6e1330e9e380
+	github.com/deckhouse/sds-replicated-volume/lib/go/common v0.0.0-00010101000000-000000000000
+	github.com/golang/protobuf v1.5.4
+	github.com/google/uuid v1.6.0
+	github.com/onsi/ginkgo/v2 v2.27.2
+	github.com/onsi/gomega v1.38.3
+	github.com/stretchr/testify v1.11.1
+	golang.org/x/sync v0.19.0
+	golang.org/x/sys v0.39.0
+	google.golang.org/grpc v1.73.0
+	gopkg.in/yaml.v2 v2.4.0
+	k8s.io/api v0.34.3
+	k8s.io/apiextensions-apiserver v0.34.3
+	k8s.io/apimachinery v0.34.3
+	k8s.io/client-go v0.34.3
+	k8s.io/klog/v2 v2.130.1
+	k8s.io/mount-utils v0.31.0
+	k8s.io/utils v0.0.0-20251002143259-bc988d571ff4
+	sigs.k8s.io/controller-runtime v0.22.4
+)
+
+require (
+	4d63.com/gocheckcompilerdirectives v1.3.0 // indirect
+	4d63.com/gochecknoglobals v0.2.2 // indirect
+	github.com/4meepo/tagalign v1.4.2 // indirect
+	github.com/Abirdcfly/dupword v0.1.3 // indirect
+	github.com/Antonboom/errname v1.0.0 // indirect
+	github.com/Antonboom/nilnil v1.0.1 // indirect
+
github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/coreos/go-systemd/v22 v22.5.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/emicklei/go-restful/v3 v3.13.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/evanphx/json-patch/v5 v5.9.11 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + 
github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/godbus/dbus/v5 v5.1.0 // indirect + github.com/gofrs/flock v0.12.1 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/moby/sys/mountinfo v0.7.1 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter 
v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/opencontainers/runc v1.1.13 // indirect + github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/x448/float16 v0.8.4 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.opentelemetry.io/otel v1.37.0 // indirect + go.opentelemetry.io/otel/sdk v1.37.0 // indirect + go.opentelemetry.io/otel/sdk/metric v1.36.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // 
indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + golang.org/x/tools v0.38.0 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +replace github.com/imdario/mergo => github.com/imdario/mergo v0.3.16 + +replace github.com/deckhouse/sds-replicated-volume/api => ../../api + +replace github.com/deckhouse/sds-replicated-volume/lib/go/common => ../../lib/go/common + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + github.com/onsi/ginkgo/v2/ginkgo +) diff --git a/images/csi-driver/go.sum b/images/csi-driver/go.sum new file mode 100644 index 000000000..32f872b7c --- /dev/null +++ b/images/csi-driver/go.sum @@ -0,0 +1,724 @@ +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= 
+github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint 
v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/container-storage-interface/spec v1.12.0 h1:zrFOEqpR5AghNaaDG4qyedwPBqU2fU0dWjLQMP/azK0= +github.com/container-storage-interface/spec v1.12.0/go.mod h1:txsm+MA2B2WDa5kW69jNbqPnvTtfvZma7T/zsAZ9qX8= +github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs= +github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/deckhouse/sds-common-lib v0.6.3 h1:k0OotLuQaKuZt8iyph9IusDixjAE0MQRKyuTe2wZP3I= +github.com/deckhouse/sds-common-lib v0.6.3/go.mod h1:UHZMKkqEh6RAO+vtA7dFTwn/2m5lzfPn0kfULBmDf2o= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da h1:LFk9OC/+EVWfYDRe54Hip4kVKwjNcPhHZTftlm5DCpg= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da/go.mod h1:X5ftUa4MrSXMKiwQYa4lwFuGtrs+HoCNa8Zl6TPrGo8= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/evanphx/json-patch/v5 v5.9.11 
h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= +github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= +github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= +github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= +github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= 
+github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= +github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod 
h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= +github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk= +github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert 
v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 
v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= 
+github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= +github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/moby/sys/mountinfo v0.7.1 h1:/tTvQaSJRr2FshkhXiIpux6fQ2Zvc4j7tAhMTStAG2g= +github.com/moby/sys/mountinfo v0.7.1/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod 
h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/opencontainers/runc v1.1.13 h1:98S2srgG9vw0zWcDpFMn5TRrh8kLxa/5OFUstuUhmRs= +github.com/opencontainers/runc v1.1.13/go.mod h1:R016aXacfp/gwQBYw2FDGa9m+n6atbLWrYY8hNMT/sA= +github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78 h1:R5M2qXZiK/mWPMT4VldCOiSL9HIAMuxQZWdG0CSM5+4= +github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= 
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck 
v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 
v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= 
+github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA= +go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A= +go.opentelemetry.io/otel v1.37.0 
h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ= +go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I= +go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE= +go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E= +go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI= +go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg= +go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis= +go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4= +go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4= +go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod 
h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= 
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text 
v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod 
h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= +google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok= +google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= +k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= +k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= +k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/mount-utils v0.31.0 h1:o+a+n6gyZ7MGc6bIERU3LeFTHbLDBiVReaDpWlJotUE= +k8s.io/mount-utils v0.31.0/go.mod h1:HV/VYBUGqYUj4vt82YltzpWvgv8FPg0G9ItyInT3NPU= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod 
h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A= +sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/images/csi-driver/internal/const.go b/images/csi-driver/internal/const.go new file mode 100644 index 000000000..de94edc97 --- /dev/null +++ b/images/csi-driver/internal/const.go @@ -0,0 +1,42 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package internal + +const ( + LvmTypeKey = "replicated.csi.storage.deckhouse.io/lvm-type" + BindingModeKey = "replicated.csi.storage.deckhouse.io/volume-binding-mode" + StoragePoolKey = "replicated.csi.storage.deckhouse.io/storagePool" + LVMVThickContiguousParamKey = "replicated.csi.storage.deckhouse.io/lvm-thick-contiguous" + ActualNameOnTheNodeKey = "replicated.csi.storage.deckhouse.io/actualNameOnTheNode" + TopologyKey = "topology.sds-replicated-volume-csi/node" + SubPath = "subPath" + VGNameKey = "vgname" + ThinPoolNameKey = "thinPoolName" + LVMTypeThin = "Thin" + LVMTypeThick = "Thick" + BindingModeWFFC = "WaitForFirstConsumer" + BindingModeI = "Immediate" + ResizeDelta = "32Mi" + ReplicatedVolumeNameKey = "replicatedVolumeName" + DRBDDeviceMinorKey = "drbdDeviceMinor" + + FSTypeKey = "csi.storage.k8s.io/fstype" + + // supported filesystem types + FSTypeExt4 = "ext4" + FSTypeXfs = "xfs" +) diff --git a/images/csi-driver/internal/inflight.go b/images/csi-driver/internal/inflight.go new file mode 100644 index 000000000..9f5a87a76 --- /dev/null +++ b/images/csi-driver/internal/inflight.go @@ -0,0 +1,75 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package internal + +import ( + "sync" + + "k8s.io/klog/v2" +) + +// Idempotent is the interface required to manage in-flight requests. +type Idempotent interface { + // The CSI data types are generated using a protobuf. + // The generated structures are guaranteed to implement the Stringer interface. + // Example: https://github.com/container-storage-interface/spec/blob/master/lib/go/csi/csi.pb.go#L3508 + // We can use the generated string as the key of our internal inflight database of requests. + String() string +} + +const ( + VolumeOperationAlreadyExistsErrorMsg = "An operation with the given Volume %s already exists" +) + +// InFlight is a struct used to manage in-flight requests for a unique identifier. +type InFlight struct { + mux *sync.Mutex + inFlight map[string]bool +} + +// NewInFlight instantiates an InFlight structure. +func NewInFlight() *InFlight { + return &InFlight{ + mux: &sync.Mutex{}, + inFlight: make(map[string]bool), + } +} + +// Insert adds the entry to the current set of in-flight requests; the request key is a unique identifier. +// Returns false when the key already exists. +func (db *InFlight) Insert(key string) bool { + db.mux.Lock() + defer db.mux.Unlock() + + _, ok := db.inFlight[key] + if ok { + return false + } + + db.inFlight[key] = true + return true +} + +// Delete removes the entry from the inFlight entries map. +// It doesn't return anything, and will do nothing if the specified key doesn't exist. +func (db *InFlight) Delete(key string) { + db.mux.Lock() + defer db.mux.Unlock() + + delete(db.inFlight, key) + klog.V(4).InfoS("Node Service: volume operation finished", "key", key) +} diff --git a/images/csi-driver/internal/inflight_test.go b/images/csi-driver/internal/inflight_test.go new file mode 100644 index 000000000..1aa6d8a4c --- /dev/null +++ b/images/csi-driver/internal/inflight_test.go @@ -0,0 +1,111 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package internal + +import ( + "testing" +) + +type testRequest struct { + volumeID string + extra string + expResp bool + delete bool +} + +func TestInFlight(t *testing.T) { + testCases := []struct { + name string + requests []testRequest + }{ + { + name: "success normal", + requests: []testRequest{ + { + + volumeID: "random-vol-name", + expResp: true, + }, + }, + }, + { + name: "success adding request with different volumeID", + requests: []testRequest{ + { + volumeID: "random-vol-foobar", + expResp: true, + }, + { + volumeID: "random-vol-name-foobar", + expResp: true, + }, + }, + }, + { + name: "failed adding request with same volumeID", + requests: []testRequest{ + { + volumeID: "random-vol-name-foobar", + expResp: true, + }, + { + volumeID: "random-vol-name-foobar", + expResp: false, + }, + }, + }, + + { + name: "success add, delete, add copy", + requests: []testRequest{ + { + volumeID: "random-vol-name", + extra: "random-node-id", + expResp: true, + }, + { + volumeID: "random-vol-name", + extra: "random-node-id", + expResp: false, + delete: true, + }, + { + volumeID: "random-vol-name", + extra: "random-node-id", + expResp: true, + }, + }, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + db := NewInFlight() + for _, r := range tc.requests { + var resp bool + if r.delete { + db.Delete(r.volumeID) + } else { + resp = db.Insert(r.volumeID) + } + if r.expResp != resp { + t.Fatalf("expected insert to be %+v, got %+v", r.expResp, resp) + } + } + }) + } +} diff --git a/images/csi-driver/pkg/utils/func.go b/images/csi-driver/pkg/utils/func.go new file mode 100644 index 000000000..d9ee9eeb1 --- /dev/null +++ b/images/csi-driver/pkg/utils/func.go @@ -0,0 +1,741 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package utils + +import ( + "context" + "crypto/sha1" + "encoding/hex" + "fmt" + "math" + "slices" + "strings" + "time" + + "gopkg.in/yaml.v2" + kerrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" + srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" +) + +const ( + KubernetesAPIRequestLimit = 3 + KubernetesAPIRequestTimeout = 1 + SDSReplicatedVolumeCSIFinalizer = "storage.deckhouse.io/sds-replicated-volume-csi" +) + +func AreSizesEqualWithinDelta(leftSize, rightSize, allowedDelta resource.Quantity) bool { + leftSizeFloat := float64(leftSize.Value()) + rightSizeFloat := float64(rightSize.Value()) + + return math.Abs(leftSizeFloat-rightSizeFloat) < float64(allowedDelta.Value()) +} + +func GetStorageClassLVGsAndParameters( + ctx context.Context, + kc client.Client, + log *logger.Logger, + storageClassLVGParametersString string, +) (storageClassLVGs []snc.LVMVolumeGroup, storageClassLVGParametersMap map[string]string, err error) { + var storageClassLVGParametersList LVMVolumeGroups + err = yaml.Unmarshal([]byte(storageClassLVGParametersString), &storageClassLVGParametersList) + if err != nil { + log.Error(err, "unmarshal yaml lvmVolumeGroup") + return nil, nil, err + } + + storageClassLVGParametersMap = make(map[string]string, len(storageClassLVGParametersList)) + for _, v := range storageClassLVGParametersList { + storageClassLVGParametersMap[v.Name] = v.Thin.PoolName + } + log.Info(fmt.Sprintf("[GetStorageClassLVGs] StorageClass LVM volume groups parameters map: %+v", storageClassLVGParametersMap)) + + lvgs, err := GetLVGList(ctx, kc) + if err != nil { + return nil, nil, err + } + + for _, lvg := range lvgs.Items { + log.Trace(fmt.Sprintf("[GetStorageClassLVGs] process lvg: %+v", lvg)) + + _, ok := storageClassLVGParametersMap[lvg.Name] + if ok { + log.Info(fmt.Sprintf("[GetStorageClassLVGs] found lvg from storage class: %s", lvg.Name)) + log.Info(fmt.Sprintf("[GetStorageClassLVGs] lvg.Status.Nodes[0].Name: %s", lvg.Status.Nodes[0].Name)) + storageClassLVGs = append(storageClassLVGs, lvg) + } else { + log.Trace(fmt.Sprintf("[GetStorageClassLVGs] skip lvg: %s", lvg.Name)) + } + } + + return storageClassLVGs, storageClassLVGParametersMap, nil +} + +func GetLVGList(ctx context.Context, kc client.Client) (*snc.LVMVolumeGroupList, error) { + listLvgs := &snc.LVMVolumeGroupList{} + return listLvgs, kc.List(ctx, listLvgs) +} + +// StoragePoolInfo contains information extracted from ReplicatedStoragePool +type StoragePoolInfo struct { + LVMVolumeGroups []snc.LVMVolumeGroup + LVGToThinPool map[string]string // maps LVMVolumeGroup name to ThinPool name + LVMType string // "Thick" or "Thin" +} + +// GetReplicatedStoragePool retrieves ReplicatedStoragePool by name +func GetReplicatedStoragePool( + ctx context.Context, + kc client.Client, + storagePoolName string, +) (*srv.ReplicatedStoragePool, error) { + rsp := &srv.ReplicatedStoragePool{} + err := kc.Get(ctx, client.ObjectKey{Name: storagePoolName}, rsp) + if err != nil { + return nil, fmt.Errorf("failed to get ReplicatedStoragePool %s: %w", storagePoolName, err) + } + return rsp, nil +} + +// GetLVMTypeFromStoragePool extracts LVM type from ReplicatedStoragePool +// Returns "Thick" for "LVM" and "Thin" for "LVMThin" +func 
GetLVMTypeFromStoragePool(rsp *srv.ReplicatedStoragePool) string { + switch rsp.Spec.Type { + case "LVMThin": + return "Thin" + case "LVM": + return "Thick" + default: + return "Thick" // default fallback + } +} + +// GetLVGToThinPoolMap creates a map from LVMVolumeGroup name to ThinPool name +// from ReplicatedStoragePool spec +func GetLVGToThinPoolMap(rsp *srv.ReplicatedStoragePool) map[string]string { + lvgToThinPool := make(map[string]string, len(rsp.Spec.LVMVolumeGroups)) + for _, rspLVG := range rsp.Spec.LVMVolumeGroups { + lvgToThinPool[rspLVG.Name] = rspLVG.ThinPoolName + } + return lvgToThinPool +} + +// GetStoragePoolInfo gets all information needed from ReplicatedStoragePool +func GetStoragePoolInfo( + ctx context.Context, + kc client.Client, + log *logger.Logger, + storagePoolName string, +) (*StoragePoolInfo, error) { + // Get ReplicatedStoragePool + rsp, err := GetReplicatedStoragePool(ctx, kc, storagePoolName) + if err != nil { + log.Error(err, fmt.Sprintf("failed to get ReplicatedStoragePool: %s", storagePoolName)) + return nil, err + } + + // Extract LVM type + lvmType := GetLVMTypeFromStoragePool(rsp) + log.Info(fmt.Sprintf("[GetStoragePoolInfo] StoragePool %s LVM type: %s", storagePoolName, lvmType)) + + // Extract LVG to ThinPool mapping + lvgToThinPool := GetLVGToThinPoolMap(rsp) + log.Info(fmt.Sprintf("[GetStoragePoolInfo] StoragePool %s LVG to ThinPool map: %+v", storagePoolName, lvgToThinPool)) + + // Build set of LVG names from StoragePool + lvgNamesSet := make(map[string]struct{}, len(rsp.Spec.LVMVolumeGroups)) + for _, rspLVG := range rsp.Spec.LVMVolumeGroups { + lvgNamesSet[rspLVG.Name] = struct{}{} + } + + // Get all LVMVolumeGroups from cluster and filter by names from StoragePool + allLVGs, err := GetLVGList(ctx, kc) + if err != nil { + return nil, fmt.Errorf("failed to get LVMVolumeGroups list: %w", err) + } + + lvmVolumeGroups := make([]snc.LVMVolumeGroup, 0) + for _, lvg := range allLVGs.Items { + log.Trace(fmt.Sprintf("[GetStoragePoolInfo] process lvg: %+v", lvg)) + + if _, ok := lvgNamesSet[lvg.Name]; ok { + log.Info(fmt.Sprintf("[GetStoragePoolInfo] found lvg from StoragePool: %s", lvg.Name)) + lvmVolumeGroups = append(lvmVolumeGroups, lvg) + } else { + log.Trace(fmt.Sprintf("[GetStoragePoolInfo] skip lvg: %s (not in StoragePool)", lvg.Name)) + } + } + + return &StoragePoolInfo{ + LVMVolumeGroups: lvmVolumeGroups, + LVGToThinPool: lvgToThinPool, + LVMType: lvmType, + }, nil +} + +// CreateReplicatedVolume creates a ReplicatedVolume resource +func CreateReplicatedVolume( + ctx context.Context, + kc client.Client, + log *logger.Logger, + traceID, name string, + rvSpec srv.ReplicatedVolumeSpec, +) (*srv.ReplicatedVolume, error) { + rv := &srv.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + OwnerReferences: []metav1.OwnerReference{}, + Finalizers: []string{SDSReplicatedVolumeCSIFinalizer}, + }, + Spec: rvSpec, + } + + log.Trace(fmt.Sprintf("[CreateReplicatedVolume][traceID:%s][volumeID:%s] ReplicatedVolume: %+v", traceID, name, rv)) + + err := kc.Create(ctx, rv) + return rv, err +} + +// GetReplicatedVolume gets a ReplicatedVolume resource +func GetReplicatedVolume(ctx context.Context, kc client.Client, name string) (*srv.ReplicatedVolume, error) { + rv := &srv.ReplicatedVolume{} + err := kc.Get(ctx, client.ObjectKey{Name: name}, rv) + return rv, err +} + +// WaitForReplicatedVolumeReady waits for ReplicatedVolume to become ready +func WaitForReplicatedVolumeReady( + ctx context.Context, + kc client.Client, + log *logger.Logger, + 
traceID, name string, +) (int, error) { + var attemptCounter int + log.Info(fmt.Sprintf("[WaitForReplicatedVolumeReady][traceID:%s][volumeID:%s] Waiting for ReplicatedVolume to become ready", traceID, name)) + for { + attemptCounter++ + select { + case <-ctx.Done(): + log.Warning(fmt.Sprintf("[WaitForReplicatedVolumeReady][traceID:%s][volumeID:%s] context done. Failed to wait for ReplicatedVolume", traceID, name)) + return attemptCounter, ctx.Err() + default: + time.Sleep(500 * time.Millisecond) + } + + rv, err := GetReplicatedVolume(ctx, kc, name) + if err != nil { + return attemptCounter, err + } + + if attemptCounter%10 == 0 { + log.Info(fmt.Sprintf("[WaitForReplicatedVolumeReady][traceID:%s][volumeID:%s] Attempt: %d, ReplicatedVolume: %+v", traceID, name, attemptCounter, rv)) + } + + if rv.DeletionTimestamp != nil { + return attemptCounter, fmt.Errorf("failed to create ReplicatedVolume %s, reason: ReplicatedVolume is being deleted", name) + } + + readyCond := meta.FindStatusCondition(rv.Status.Conditions, srv.ReplicatedVolumeCondIOReadyType) + if readyCond != nil && readyCond.Status == metav1.ConditionTrue { + log.Info(fmt.Sprintf("[WaitForReplicatedVolumeReady][traceID:%s][volumeID:%s] ReplicatedVolume is IOReady", traceID, name)) + return attemptCounter, nil + } + log.Trace(fmt.Sprintf("[WaitForReplicatedVolumeReady][traceID:%s][volumeID:%s] Attempt %d, ReplicatedVolume not IOReady yet. Waiting...", traceID, name, attemptCounter)) + } +} + +// DeleteReplicatedVolume deletes a ReplicatedVolume resource +func DeleteReplicatedVolume(ctx context.Context, kc client.Client, log *logger.Logger, traceID, name string) error { + log.Trace(fmt.Sprintf("[DeleteReplicatedVolume][traceID:%s][volumeID:%s] Trying to find ReplicatedVolume", traceID, name)) + rv, err := GetReplicatedVolume(ctx, kc, name) + if err != nil { + if kerrors.IsNotFound(err) { + log.Info(fmt.Sprintf("[DeleteReplicatedVolume][traceID:%s][volumeID:%s] ReplicatedVolume not found, already deleted", traceID, name)) + return nil + } + return fmt.Errorf("get ReplicatedVolume %s: %w", name, err) + } + + log.Trace(fmt.Sprintf("[DeleteReplicatedVolume][traceID:%s][volumeID:%s] ReplicatedVolume found: %+v", traceID, name, rv)) + log.Trace(fmt.Sprintf("[DeleteReplicatedVolume][traceID:%s][volumeID:%s] Removing finalizer %s if exists", traceID, name, SDSReplicatedVolumeCSIFinalizer)) + + removed, err := removervdeletepropagationIfExist(ctx, kc, log, rv, SDSReplicatedVolumeCSIFinalizer) + if err != nil { + return fmt.Errorf("remove finalizers from ReplicatedVolume %s: %w", name, err) + } + if removed { + log.Trace(fmt.Sprintf("[DeleteReplicatedVolume][traceID:%s][volumeID:%s] finalizer %s removed from ReplicatedVolume %s", traceID, name, SDSReplicatedVolumeCSIFinalizer, name)) + } else { + log.Warning(fmt.Sprintf("[DeleteReplicatedVolume][traceID:%s][volumeID:%s] finalizer %s not found in ReplicatedVolume %s", traceID, name, SDSReplicatedVolumeCSIFinalizer, name)) + } + + log.Trace(fmt.Sprintf("[DeleteReplicatedVolume][traceID:%s][volumeID:%s] Trying to delete ReplicatedVolume", traceID, name)) + err = kc.Delete(ctx, rv) + return err +} + +func removervdeletepropagationIfExist(ctx context.Context, kc client.Client, log *logger.Logger, rv *srv.ReplicatedVolume, finalizer string) (bool, error) { + for attempt := 0; attempt < KubernetesAPIRequestLimit; attempt++ { + removed := false + for i, val := range rv.Finalizers { + if val == finalizer { + rv.Finalizers = slices.Delete(rv.Finalizers, i, i+1) + removed = true + break + } + } + + if 
!removed { + return false, nil + } + + log.Trace(fmt.Sprintf("[removervdeletepropagationIfExist] removing finalizer %s from ReplicatedVolume %s", finalizer, rv.Name)) + err := kc.Update(ctx, rv) + if err == nil { + return true, nil + } + + if !kerrors.IsConflict(err) { + return false, fmt.Errorf("[removervdeletepropagationIfExist] error updating ReplicatedVolume %s: %w", rv.Name, err) + } + + if attempt < KubernetesAPIRequestLimit-1 { + log.Trace(fmt.Sprintf("[removervdeletepropagationIfExist] conflict while updating ReplicatedVolume %s, retrying...", rv.Name)) + select { + case <-ctx.Done(): + return false, ctx.Err() + default: + time.Sleep(KubernetesAPIRequestTimeout * time.Second) + freshRV, getErr := GetReplicatedVolume(ctx, kc, rv.Name) + if getErr != nil { + return false, fmt.Errorf("[removervdeletepropagationIfExist] error getting ReplicatedVolume %s after update conflict: %w", rv.Name, getErr) + } + *rv = *freshRV + } + } + } + + return false, fmt.Errorf("failed to remove finalizer %s from ReplicatedVolume %s after %d attempts due to repeated update conflicts", finalizer, rv.Name, KubernetesAPIRequestLimit) +} + +// GetReplicatedVolumeReplicaForNode gets ReplicatedVolumeReplica for a specific node +func GetReplicatedVolumeReplicaForNode(ctx context.Context, kc client.Client, volumeName, nodeName string) (*srv.ReplicatedVolumeReplica, error) { + rvrList := &srv.ReplicatedVolumeReplicaList{} + // Both field selectors must be set in a single MatchingFields option; passing two separate options would make the second overwrite the first. + err := kc.List( + ctx, + rvrList, + client.MatchingFields{"spec.replicatedVolumeName": volumeName, "spec.nodeName": nodeName}, + ) + if err != nil { + return nil, err + } + + for i := range rvrList.Items { + if rvrList.Items[i].Spec.NodeName == nodeName { + return &rvrList.Items[i], nil + } + } + + return nil, fmt.Errorf("ReplicatedVolumeReplica not found for volume %s on node %s", volumeName, nodeName) +} + +// GetDRBDDevicePath gets DRBD device path from ReplicatedVolumeReplica status +func GetDRBDDevicePath(rvr *srv.ReplicatedVolumeReplica) (string, error) { + if rvr.Status.DRBD == nil || + rvr.Status.DRBD.Status == nil || len(rvr.Status.DRBD.Status.Devices) == 0 { + return "", fmt.Errorf("DRBD status not available or no devices found") + } + + minor := rvr.Status.DRBD.Status.Devices[0].Minor + return fmt.Sprintf("/dev/drbd%d", minor), nil +} + +// ExpandReplicatedVolume expands a ReplicatedVolume +func ExpandReplicatedVolume(ctx context.Context, kc client.Client, rv *srv.ReplicatedVolume, newSize resource.Quantity) error { + rv.Spec.Size = newSize + return kc.Update(ctx, rv) +} + +// BuildReplicatedVolumeSpec builds ReplicatedVolumeSpec from parameters +func BuildReplicatedVolumeSpec( + size resource.Quantity, + rscName string, +) srv.ReplicatedVolumeSpec { + return srv.ReplicatedVolumeSpec{ + Size: size, + ReplicatedStorageClassName: rscName, + } +} + +func BuildRVAName(volumeName, nodeName string) string { + base := "rva-" + volumeName + "-" + nodeName + if len(base) <= 253 { + return base + } + + sum := sha1.Sum([]byte(base)) + hash := hex.EncodeToString(sum[:])[:8] + + // "rva-" + vol + "-" + node + "-" + hash + const prefixLen = 4 // len("rva-") + const sepCount = 2 // "-" between parts + "-" before hash + const hashLen = 8 + maxPartsLen := 253 - prefixLen - sepCount - hashLen + if maxPartsLen < 2 { + // Should never happen, but keep a valid, bounded name.
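+ // With the current constants maxPartsLen is 253-4-2-8 = 239, so this guard can only trigger if the constants above change.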
+ return "rva-" + hash + } + + volMax := maxPartsLen / 2 + nodeMax := maxPartsLen - volMax + + volPart := truncateString(volumeName, volMax) + nodePart := truncateString(nodeName, nodeMax) + return "rva-" + volPart + "-" + nodePart + "-" + hash +} + +func truncateString(s string, maxLen int) string { + if maxLen <= 0 { + return "" + } + if len(s) <= maxLen { + return s + } + // Make the truncation stable and avoid trailing '-' (purely cosmetic, but improves readability). + out := s[:maxLen] + out = strings.TrimSuffix(out, "-") + out = strings.TrimSuffix(out, ".") + return out +} + +func EnsureRVA(ctx context.Context, kc client.Client, log *logger.Logger, traceID, volumeName, nodeName string) (string, error) { + rvaName := BuildRVAName(volumeName, nodeName) + + existing := &srv.ReplicatedVolumeAttachment{} + if err := kc.Get(ctx, client.ObjectKey{Name: rvaName}, existing); err == nil { + // Validate it matches the intended binding. + if existing.Spec.ReplicatedVolumeName != volumeName || existing.Spec.NodeName != nodeName { + return "", fmt.Errorf("ReplicatedVolumeAttachment %s already exists but has different spec (volume=%s,node=%s)", + rvaName, existing.Spec.ReplicatedVolumeName, existing.Spec.NodeName, + ) + } + return rvaName, nil + } else if client.IgnoreNotFound(err) != nil { + return "", fmt.Errorf("get ReplicatedVolumeAttachment %s: %w", rvaName, err) + } + + rva := &srv.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: rvaName, + }, + Spec: srv.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: volumeName, + NodeName: nodeName, + }, + } + + log.Info(fmt.Sprintf("[EnsureRVA][traceID:%s][volumeID:%s][node:%s] Creating ReplicatedVolumeAttachment %s", traceID, volumeName, nodeName, rvaName)) + if err := kc.Create(ctx, rva); err != nil { + if kerrors.IsAlreadyExists(err) { + return rvaName, nil + } + return "", fmt.Errorf("create ReplicatedVolumeAttachment %s: %w", rvaName, err) + } + return rvaName, nil +} + +func DeleteRVA(ctx context.Context, kc client.Client, log *logger.Logger, traceID, volumeName, nodeName string) error { + rvaName := BuildRVAName(volumeName, nodeName) + rva := &srv.ReplicatedVolumeAttachment{} + if err := kc.Get(ctx, client.ObjectKey{Name: rvaName}, rva); err != nil { + if client.IgnoreNotFound(err) == nil { + log.Info(fmt.Sprintf("[DeleteRVA][traceID:%s][volumeID:%s][node:%s] ReplicatedVolumeAttachment %s not found, skipping", traceID, volumeName, nodeName, rvaName)) + return nil + } + return fmt.Errorf("get ReplicatedVolumeAttachment %s: %w", rvaName, err) + } + + log.Info(fmt.Sprintf("[DeleteRVA][traceID:%s][volumeID:%s][node:%s] Deleting ReplicatedVolumeAttachment %s", traceID, volumeName, nodeName, rvaName)) + if err := kc.Delete(ctx, rva); err != nil { + return client.IgnoreNotFound(err) + } + return nil +} + +// RVAWaitError represents a failure to observe RVA Ready=True. +// It may wrap a context cancellation/deadline error, while still preserving the last seen RVA Ready condition. +type RVAWaitError struct { + VolumeName string + NodeName string + RVAName string + + // LastReadyCondition is the last observed Ready condition (may be nil if status/condition was never observed). + LastReadyCondition *metav1.Condition + + // LastAttachedCondition is the last observed Attached condition (may be nil if missing). + // This is useful for surfacing detailed attach progress and permanent attach failures. + LastAttachedCondition *metav1.Condition + + // Permanent indicates that waiting won't help (e.g. locality constraint violation). 
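+ // Callers are expected to treat a Permanent error as final and stop retrying the wait.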
+ Permanent bool + + // Cause is the underlying error (e.g. context.DeadlineExceeded). May be nil for non-context failures. + Cause error +} + +func (e *RVAWaitError) Unwrap() error { return e.Cause } + +func (e *RVAWaitError) Error() string { + base := fmt.Sprintf("RVA %s for volume=%s node=%s not ready", e.RVAName, e.VolumeName, e.NodeName) + if e.LastReadyCondition != nil { + base = fmt.Sprintf("%s: Ready=%s reason=%s message=%q", base, e.LastReadyCondition.Status, e.LastReadyCondition.Reason, e.LastReadyCondition.Message) + } + if e.LastAttachedCondition != nil { + base = fmt.Sprintf("%s: Attached=%s reason=%s message=%q", base, e.LastAttachedCondition.Status, e.LastAttachedCondition.Reason, e.LastAttachedCondition.Message) + } + if e.Permanent { + base += " (permanent)" + } + if e.Cause != nil { + base = fmt.Sprintf("%s: %v", base, e.Cause) + } + return base +} + +func sleepWithContext(ctx context.Context) error { + t := time.NewTimer(500 * time.Millisecond) + defer t.Stop() + select { + case <-ctx.Done(): + return ctx.Err() + case <-t.C: + return nil + } +} + +func WaitForRVAReady( + ctx context.Context, + kc client.Client, + log *logger.Logger, + traceID, volumeName, nodeName string, +) error { + rvaName := BuildRVAName(volumeName, nodeName) + var attemptCounter int + var lastReadyCond *metav1.Condition + var lastAttachedCond *metav1.Condition + log.Info(fmt.Sprintf("[WaitForRVAReady][traceID:%s][volumeID:%s][node:%s] Waiting for ReplicatedVolumeAttachment %s to become Ready=True", traceID, volumeName, nodeName, rvaName)) + for { + attemptCounter++ + if err := ctx.Err(); err != nil { + log.Warning(fmt.Sprintf("[WaitForRVAReady][traceID:%s][volumeID:%s][node:%s] context done", traceID, volumeName, nodeName)) + return &RVAWaitError{ + VolumeName: volumeName, + NodeName: nodeName, + RVAName: rvaName, + LastReadyCondition: lastReadyCond, + LastAttachedCondition: lastAttachedCond, + Cause: err, + } + } + + rva := &srv.ReplicatedVolumeAttachment{} + if err := kc.Get(ctx, client.ObjectKey{Name: rvaName}, rva); err != nil { + if client.IgnoreNotFound(err) == nil { + if attemptCounter%10 == 0 { + log.Info(fmt.Sprintf("[WaitForRVAReady][traceID:%s][volumeID:%s][node:%s] Attempt: %d, RVA not found yet", traceID, volumeName, nodeName, attemptCounter)) + } + if err := sleepWithContext(ctx); err != nil { + return &RVAWaitError{ + VolumeName: volumeName, + NodeName: nodeName, + RVAName: rvaName, + LastReadyCondition: lastReadyCond, + LastAttachedCondition: lastAttachedCond, + Cause: err, + } + } + continue + } + return fmt.Errorf("get ReplicatedVolumeAttachment %s: %w", rvaName, err) + } + + readyCond := meta.FindStatusCondition(rva.Status.Conditions, srv.ReplicatedVolumeAttachmentCondReadyType) + attachedCond := meta.FindStatusCondition(rva.Status.Conditions, srv.ReplicatedVolumeAttachmentCondAttachedType) + + if attachedCond != nil { + attachedCopy := *attachedCond + lastAttachedCond = &attachedCopy + } + + if readyCond == nil { + if attemptCounter%10 == 0 { + log.Info(fmt.Sprintf("[WaitForRVAReady][traceID:%s][volumeID:%s][node:%s] Attempt: %d, RVA Ready condition missing", traceID, volumeName, nodeName, attemptCounter)) + } + if err := sleepWithContext(ctx); err != nil { + return &RVAWaitError{ + VolumeName: volumeName, + NodeName: nodeName, + RVAName: rvaName, + LastReadyCondition: lastReadyCond, + LastAttachedCondition: lastAttachedCond, + Cause: err, + } + } + continue + } + + // Keep a stable copy of the last observed condition for error reporting. 
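+ // readyCond points into the rva object fetched on this iteration, which is replaced on the next Get, so store a copy.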
+ condCopy := *readyCond + lastReadyCond = &condCopy + + if attemptCounter%10 == 0 { + log.Info(fmt.Sprintf("[WaitForRVAReady][traceID:%s][volumeID:%s][node:%s] Attempt: %d, Ready=%s reason=%s message=%q", traceID, volumeName, nodeName, attemptCounter, readyCond.Status, readyCond.Reason, readyCond.Message)) + } + + if readyCond.Status == metav1.ConditionTrue { + log.Info(fmt.Sprintf("[WaitForRVAReady][traceID:%s][volumeID:%s][node:%s] RVA Ready=True", traceID, volumeName, nodeName)) + return nil + } + + // Early exit for conditions that will not become Ready without changing the request or topology. + // Waiting here only burns time and hides the real cause from CSI callers. + if lastAttachedCond != nil && + lastAttachedCond.Status == metav1.ConditionFalse && + (lastAttachedCond.Reason == srv.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied || lastAttachedCond.Reason == srv.ReplicatedVolumeAttachmentCondAttachedReasonUnableToProvideLocalVolumeAccess) { + return &RVAWaitError{ + VolumeName: volumeName, + NodeName: nodeName, + RVAName: rvaName, + LastReadyCondition: lastReadyCond, + LastAttachedCondition: lastAttachedCond, + Permanent: true, + } + } + + if err := sleepWithContext(ctx); err != nil { + return &RVAWaitError{ + VolumeName: volumeName, + NodeName: nodeName, + RVAName: rvaName, + LastReadyCondition: lastReadyCond, + LastAttachedCondition: lastAttachedCond, + Cause: err, + } + } + } +} + +// WaitForAttachedToProvided waits for a node name to appear in rv.status.actuallyAttachedTo +func WaitForAttachedToProvided( + ctx context.Context, + kc client.Client, + log *logger.Logger, + traceID, volumeName, nodeName string, +) error { + var attemptCounter int + log.Info(fmt.Sprintf("[WaitForAttachedToProvided][traceID:%s][volumeID:%s][node:%s] Waiting for node to appear in status.actuallyAttachedTo", traceID, volumeName, nodeName)) + for { + attemptCounter++ + select { + case <-ctx.Done(): + log.Warning(fmt.Sprintf("[WaitForAttachedToProvided][traceID:%s][volumeID:%s][node:%s] context done", traceID, volumeName, nodeName)) + return ctx.Err() + default: + time.Sleep(500 * time.Millisecond) + } + + rv, err := GetReplicatedVolume(ctx, kc, volumeName) + if err != nil { + if kerrors.IsNotFound(err) { + return fmt.Errorf("ReplicatedVolume %s not found", volumeName) + } + return err + } + + if attemptCounter%10 == 0 { + log.Info(fmt.Sprintf("[WaitForAttachedToProvided][traceID:%s][volumeID:%s][node:%s] Attempt: %d, status.actuallyAttachedTo: %v", traceID, volumeName, nodeName, attemptCounter, rv.Status.ActuallyAttachedTo)) + } + + // Check if node is in status.actuallyAttachedTo + for _, attachedNode := range rv.Status.ActuallyAttachedTo { + if attachedNode == nodeName { + log.Info(fmt.Sprintf("[WaitForAttachedToProvided][traceID:%s][volumeID:%s][node:%s] Node is now in status.actuallyAttachedTo", traceID, volumeName, nodeName)) + return nil + } + } + + log.Trace(fmt.Sprintf("[WaitForAttachedToProvided][traceID:%s][volumeID:%s][node:%s] Attempt %d, node not in status.actuallyAttachedTo yet. 
Waiting...", traceID, volumeName, nodeName, attemptCounter)) + } +} + +// WaitForAttachedToRemoved waits for a node name to disappear from rv.status.actuallyAttachedTo +func WaitForAttachedToRemoved( + ctx context.Context, + kc client.Client, + log *logger.Logger, + traceID, volumeName, nodeName string, +) error { + var attemptCounter int + log.Info(fmt.Sprintf("[WaitForAttachedToRemoved][traceID:%s][volumeID:%s][node:%s] Waiting for node to disappear from status.actuallyAttachedTo", traceID, volumeName, nodeName)) + for { + attemptCounter++ + select { + case <-ctx.Done(): + log.Warning(fmt.Sprintf("[WaitForAttachedToRemoved][traceID:%s][volumeID:%s][node:%s] context done", traceID, volumeName, nodeName)) + return ctx.Err() + default: + time.Sleep(500 * time.Millisecond) + } + + rv, err := GetReplicatedVolume(ctx, kc, volumeName) + if err != nil { + if kerrors.IsNotFound(err) { + // Volume deleted, consider it as removed + log.Info(fmt.Sprintf("[WaitForAttachedToRemoved][traceID:%s][volumeID:%s][node:%s] ReplicatedVolume not found, considering node as removed", traceID, volumeName, nodeName)) + return nil + } + return err + } + + if attemptCounter%10 == 0 { + log.Info(fmt.Sprintf("[WaitForAttachedToRemoved][traceID:%s][volumeID:%s][node:%s] Attempt: %d, status.actuallyAttachedTo: %v", traceID, volumeName, nodeName, attemptCounter, rv.Status.ActuallyAttachedTo)) + } + + // Check if node is NOT in status.actuallyAttachedTo + found := false + for _, attachedNode := range rv.Status.ActuallyAttachedTo { + if attachedNode == nodeName { + found = true + break + } + } + + if !found { + log.Info(fmt.Sprintf("[WaitForAttachedToRemoved][traceID:%s][volumeID:%s][node:%s] Node is no longer in status.actuallyAttachedTo", traceID, volumeName, nodeName)) + return nil + } + + log.Trace(fmt.Sprintf("[WaitForAttachedToRemoved][traceID:%s][volumeID:%s][node:%s] Attempt %d, node still in status.actuallyAttachedTo. Waiting...", traceID, volumeName, nodeName, attemptCounter)) + } +} diff --git a/images/csi-driver/pkg/utils/func_publish_test.go b/images/csi-driver/pkg/utils/func_publish_test.go new file mode 100644 index 000000000..4693c3e5a --- /dev/null +++ b/images/csi-driver/pkg/utils/func_publish_test.go @@ -0,0 +1,423 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package utils + +import ( + "context" + "errors" + "testing" + "time" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/kubernetes/scheme" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" +) + +func TestPublishUtils(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "RVA Utils Suite") +} + +var _ = Describe("ReplicatedVolumeAttachment utils", func() { + var ( + cl client.Client + log logger.Logger + traceID string + ) + + BeforeEach(func() { + cl = newFakeClient() + log = logger.WrapLorg(GinkgoLogr) + traceID = "test-trace-id" + }) + + It("EnsureRVA creates a new RVA when it does not exist", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rvaName, err := EnsureRVA(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + Expect(rvaName).ToNot(BeEmpty()) + + got := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: rvaName}, got)).To(Succeed()) + Expect(got.Spec.ReplicatedVolumeName).To(Equal(volumeName)) + Expect(got.Spec.NodeName).To(Equal(nodeName)) + }) + + It("EnsureRVA is idempotent when RVA already exists", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + _, err := EnsureRVA(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + _, err = EnsureRVA(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + + list := &v1alpha1.ReplicatedVolumeAttachmentList{} + Expect(cl.List(ctx, list)).To(Succeed()) + Expect(list.Items).To(HaveLen(1)) + }) + + It("DeleteRVA deletes existing RVA and is idempotent", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + _, err := EnsureRVA(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + + Expect(DeleteRVA(ctx, cl, &log, traceID, volumeName, nodeName)).To(Succeed()) + Expect(DeleteRVA(ctx, cl, &log, traceID, volumeName, nodeName)).To(Succeed()) + + list := &v1alpha1.ReplicatedVolumeAttachmentList{} + Expect(cl.List(ctx, list)).To(Succeed()) + Expect(list.Items).To(HaveLen(0)) + }) + + It("WaitForRVAReady returns nil when Ready=True", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rvaName, err := EnsureRVA(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + + rva := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: rvaName}, rva)).To(Succeed()) + meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeAttachmentCondAttachedType, + Status: metav1.ConditionTrue, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonAttached, + Message: "attached", + ObservedGeneration: rva.Generation, + }) + meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeAttachmentCondReplicaReadyType, + Status: metav1.ConditionTrue, + Reason: v1alpha1.ReplicatedVolumeReplicaCondReadyReasonReady, + Message: "io ready", + ObservedGeneration: rva.Generation, + }) + meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeAttachmentCondReadyType, + Status: metav1.ConditionTrue, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonReady, + Message: "ok", + ObservedGeneration: 
rva.Generation, + }) + Expect(cl.Status().Update(ctx, rva)).To(Succeed()) + + Expect(WaitForRVAReady(ctx, cl, &log, traceID, volumeName, nodeName)).To(Succeed()) + }) + + It("WaitForRVAReady returns error immediately when Attached=False and reason=LocalityNotSatisfied", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rvaName, err := EnsureRVA(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + + rva := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: rvaName}, rva)).To(Succeed()) + meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeAttachmentCondAttachedType, + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied, + Message: "Local volume access requires a Diskful replica on the requested node", + ObservedGeneration: rva.Generation, + }) + meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeAttachmentCondReadyType, + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonNotAttached, + Message: "Waiting for volume to be attached to the requested node", + ObservedGeneration: rva.Generation, + }) + Expect(cl.Status().Update(ctx, rva)).To(Succeed()) + + start := time.Now() + err = WaitForRVAReady(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).To(HaveOccurred()) + Expect(time.Since(start)).To(BeNumerically("<", time.Second)) + + var waitErr *RVAWaitError + Expect(errors.As(err, &waitErr)).To(BeTrue()) + Expect(waitErr.Permanent).To(BeTrue()) + Expect(waitErr.LastReadyCondition).NotTo(BeNil()) + Expect(waitErr.LastReadyCondition.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonNotAttached)) + Expect(waitErr.LastAttachedCondition).NotTo(BeNil()) + Expect(waitErr.LastAttachedCondition.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied)) + }) + + It("WaitForRVAReady returns context deadline error but includes last observed reason/message", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rvaName, err := EnsureRVA(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + + rva := &v1alpha1.ReplicatedVolumeAttachment{} + Expect(cl.Get(ctx, client.ObjectKey{Name: rvaName}, rva)).To(Succeed()) + meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeAttachmentCondAttachedType, + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonSettingPrimary, + Message: "Waiting for replica to become Primary", + ObservedGeneration: rva.Generation, + }) + meta.SetStatusCondition(&rva.Status.Conditions, metav1.Condition{ + Type: v1alpha1.ReplicatedVolumeAttachmentCondReadyType, + Status: metav1.ConditionFalse, + Reason: v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonNotAttached, + Message: "Waiting for volume to be attached to the requested node", + ObservedGeneration: rva.Generation, + }) + Expect(cl.Status().Update(ctx, rva)).To(Succeed()) + + timeoutCtx, cancel := context.WithTimeout(ctx, 150*time.Millisecond) + defer cancel() + + err = WaitForRVAReady(timeoutCtx, cl, &log, traceID, volumeName, nodeName) + Expect(err).To(HaveOccurred()) + Expect(errors.Is(err, context.DeadlineExceeded)).To(BeTrue()) + + var waitErr *RVAWaitError + Expect(errors.As(err, &waitErr)).To(BeTrue()) + 
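+ // The deadline error should still carry the last observed Ready and Attached conditions for diagnostics.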
Expect(waitErr.LastReadyCondition).NotTo(BeNil()) + Expect(waitErr.LastReadyCondition.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondReadyReasonNotAttached)) + Expect(waitErr.LastAttachedCondition).NotTo(BeNil()) + Expect(waitErr.LastAttachedCondition.Reason).To(Equal(v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonSettingPrimary)) + Expect(waitErr.LastAttachedCondition.Message).To(Equal("Waiting for replica to become Primary")) + }) +}) + +var _ = Describe("WaitForAttachedToProvided", func() { + var ( + cl client.Client + log logger.Logger + traceID string + ) + + BeforeEach(func() { + cl = newFakeClient() + log = logger.WrapLorg(GinkgoLogr) + traceID = "test-trace-id" + }) + + Context("when node already in status.actuallyAttachedTo", func() { + It("should return immediately", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rv := createTestReplicatedVolume(volumeName) + rv.Status.ActuallyAttachedTo = []string{nodeName} + Expect(cl.Create(ctx, rv)).To(Succeed()) + + err := WaitForAttachedToProvided(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + }) + }) + + Context("when node appears in status.actuallyAttachedTo", func() { + It("should wait and return successfully", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rv := createTestReplicatedVolume(volumeName) + Expect(cl.Create(ctx, rv)).To(Succeed()) + + // Update status in background after a short delay + go func() { + defer GinkgoRecover() + time.Sleep(100 * time.Millisecond) + updatedRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKey{Name: volumeName}, updatedRV)).To(Succeed()) + updatedRV.Status.ActuallyAttachedTo = []string{nodeName} + // Use Update instead of Status().Update for fake client + Expect(cl.Update(ctx, updatedRV)).To(Succeed()) + }() + + // Use context with timeout to prevent hanging + timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second) + defer cancel() + + err := WaitForAttachedToProvided(timeoutCtx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + }) + }) + + Context("when ReplicatedVolume does not exist", func() { + It("should return an error", func(ctx SpecContext) { + volumeName := "non-existent-volume" + nodeName := "node-1" + + err := WaitForAttachedToProvided(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("ReplicatedVolume")) + }) + }) + + Context("when context is cancelled", func() { + It("should return context error", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rv := createTestReplicatedVolume(volumeName) + Expect(cl.Create(ctx, rv)).To(Succeed()) + + cancelledCtx, cancel := context.WithCancel(ctx) + cancel() + + err := WaitForAttachedToProvided(cancelledCtx, cl, &log, traceID, volumeName, nodeName) + Expect(err).To(HaveOccurred()) + Expect(err).To(Equal(context.Canceled)) + }) + }) +}) + +var _ = Describe("WaitForAttachedToRemoved", func() { + var ( + cl client.Client + log logger.Logger + traceID string + ) + + BeforeEach(func() { + cl = newFakeClient() + log = logger.WrapLorg(GinkgoLogr) + traceID = "test-trace-id" + }) + + Context("when node already not in status.actuallyAttachedTo", func() { + It("should return immediately", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rv := createTestReplicatedVolume(volumeName) + Expect(cl.Create(ctx, rv)).To(Succeed()) + + err := 
WaitForAttachedToRemoved(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + }) + }) + + Context("when node is removed from status.actuallyAttachedTo", func() { + It("should wait and return successfully", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rv := createTestReplicatedVolume(volumeName) + rv.Status.ActuallyAttachedTo = []string{nodeName} + Expect(cl.Create(ctx, rv)).To(Succeed()) + + // Update status in background after a short delay + go func() { + defer GinkgoRecover() + time.Sleep(100 * time.Millisecond) + updatedRV := &v1alpha1.ReplicatedVolume{} + Expect(cl.Get(ctx, client.ObjectKey{Name: volumeName}, updatedRV)).To(Succeed()) + updatedRV.Status.ActuallyAttachedTo = []string{} + // Use Update instead of Status().Update for fake client + Expect(cl.Update(ctx, updatedRV)).To(Succeed()) + }() + + // Use context with timeout to prevent hanging + timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second) + defer cancel() + + err := WaitForAttachedToRemoved(timeoutCtx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + }) + }) + + Context("when ReplicatedVolume does not exist", func() { + It("should return nil (considered success)", func(ctx SpecContext) { + volumeName := "non-existent-volume" + nodeName := "node-1" + + err := WaitForAttachedToRemoved(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + }) + }) + + Context("when status is empty", func() { + It("should return nil (considered success)", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rv := createTestReplicatedVolume(volumeName) + rv.Status = v1alpha1.ReplicatedVolumeStatus{} + Expect(cl.Create(ctx, rv)).To(Succeed()) + + err := WaitForAttachedToRemoved(ctx, cl, &log, traceID, volumeName, nodeName) + Expect(err).NotTo(HaveOccurred()) + }) + }) + + Context("when context is cancelled", func() { + It("should return context error", func(ctx SpecContext) { + volumeName := "test-volume" + nodeName := "node-1" + + rv := createTestReplicatedVolume(volumeName) + rv.Status.ActuallyAttachedTo = []string{nodeName} + Expect(cl.Create(ctx, rv)).To(Succeed()) + + cancelledCtx, cancel := context.WithCancel(ctx) + cancel() + + err := WaitForAttachedToRemoved(cancelledCtx, cl, &log, traceID, volumeName, nodeName) + Expect(err).To(HaveOccurred()) + Expect(err).To(Equal(context.Canceled)) + }) + }) +}) + +// Helper functions + +func newFakeClient() client.Client { + s := scheme.Scheme + _ = metav1.AddMetaToScheme(s) + _ = v1alpha1.AddToScheme(s) + + builder := fake.NewClientBuilder(). + WithScheme(s). + WithStatusSubresource(&v1alpha1.ReplicatedVolumeAttachment{}) + return builder.Build() +} + +func createTestReplicatedVolume(name string) *v1alpha1.ReplicatedVolume { + return &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + }, + Spec: v1alpha1.ReplicatedVolumeSpec{ + Size: resource.MustParse("1Gi"), + ReplicatedStorageClassName: "rsc", + }, + Status: v1alpha1.ReplicatedVolumeStatus{ + ActuallyAttachedTo: []string{}, + }, + } +} diff --git a/images/csi-driver/pkg/utils/node_store_maganer_test.go b/images/csi-driver/pkg/utils/node_store_maganer_test.go new file mode 100644 index 000000000..74a036064 --- /dev/null +++ b/images/csi-driver/pkg/utils/node_store_maganer_test.go @@ -0,0 +1,112 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package utils + +import ( + "testing" + + "github.com/stretchr/testify/assert" + mountutils "k8s.io/mount-utils" + + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" +) + +func TestNodeStoreManager(t *testing.T) { + t.Run("toMapperPath", func(t *testing.T) { + t.Run("does_not_have_prefix_returns_empty", func(t *testing.T) { + assert.Equal(t, "", toMapperPath("not-dev-path")) + }) + + t.Run("have_prefix_returns_path", func(t *testing.T) { + path := "/dev/some-good/path" + expected := "/dev/mapper/some--good-path" + + assert.Equal(t, expected, toMapperPath(path)) + }) + }) + + t.Run("checkMount", func(t *testing.T) { + t.Run("all_good", func(t *testing.T) { + const ( + devPath = "/dev/some-good/path" + target = "some-target" + ) + f := &mountutils.FakeMounter{} + f.MountPoints = []mountutils.MountPoint{ + { + Device: devPath, + Path: target, + }, + } + store := &Store{ + Log: &logger.Logger{}, + NodeStorage: mountutils.SafeFormatAndMount{ + Interface: f, + }, + } + + err := checkMount(store, devPath, target, []string{}) + assert.NoError(t, err) + }) + + t.Run("device_is_not_devPath_nor_mapperDevPath_returns_error", func(t *testing.T) { + const ( + devPath = "weird-path" + target = "some-target" + ) + f := &mountutils.FakeMounter{} + f.MountPoints = []mountutils.MountPoint{ + { + Device: "other-name", + Path: target, + }, + } + store := &Store{ + Log: &logger.Logger{}, + NodeStorage: mountutils.SafeFormatAndMount{ + Interface: f, + }, + } + + err := checkMount(store, devPath, target, []string{}) + assert.ErrorContains(t, err, "[checkMount] device from mount point \"other-name\" does not match expected source device path weird-path or mapper device path ") + }) + + t.Run("path_is_not_target_returns_error", func(t *testing.T) { + const ( + devPath = "weird-path" + target = "some-target" + ) + f := &mountutils.FakeMounter{} + f.MountPoints = []mountutils.MountPoint{ + { + Device: devPath, + Path: "other-path", + }, + } + store := &Store{ + Log: &logger.Logger{}, + NodeStorage: mountutils.SafeFormatAndMount{ + Interface: f, + }, + } + + err := checkMount(store, devPath, target, []string{}) + assert.ErrorContains(t, err, "[checkMount] mount point \"some-target\" not found in mount info") + }) + }) +} diff --git a/images/csi-driver/pkg/utils/node_store_manager.go b/images/csi-driver/pkg/utils/node_store_manager.go new file mode 100644 index 000000000..5ee450ddd --- /dev/null +++ b/images/csi-driver/pkg/utils/node_store_manager.go @@ -0,0 +1,318 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package utils + +import ( + "fmt" + "os" + "slices" + "strings" + + mountutils "k8s.io/mount-utils" + utilexec "k8s.io/utils/exec" + + "github.com/deckhouse/sds-replicated-volume/images/csi-driver/internal" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" +) + +type NodeStoreManager interface { + NodeStageVolumeFS(source, target string, fsType string, mountOpts []string, formatOpts []string, lvmType, lvmThinPoolName string) error + NodePublishVolumeBlock(source, target string, mountOpts []string) error + NodePublishVolumeFS(source, devPath, target, fsType string, mountOpts []string) error + Unstage(target string) error + Unpublish(target string) error + IsNotMountPoint(target string) (bool, error) + ResizeFS(target string) error + PathExists(path string) (bool, error) + NeedResize(devicePath string, deviceMountPath string) (bool, error) +} + +type Store struct { + Log *logger.Logger + NodeStorage mountutils.SafeFormatAndMount +} + +func NewStore(logger *logger.Logger) *Store { + return &Store{ + Log: logger, + NodeStorage: mountutils.SafeFormatAndMount{ + Interface: mountutils.New("/bin/mount"), + Exec: utilexec.New(), + }, + } +} + +func (s *Store) NodeStageVolumeFS(source, target string, fsType string, mountOpts []string, formatOpts []string, lvmType, lvmThinPoolName string) error { + s.Log.Trace(" ----== Start NodeStageVolumeFS ==---- ") + + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ Format options ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + s.Log.Trace(fmt.Sprintf("[format] params device=%s fs=%s formatOptions=%v", source, fsType, formatOpts)) + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ Format options ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ Mount options ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + s.Log.Trace(fmt.Sprintf("[mount] params source=%s target=%s fs=%s mountOptions=%v", source, target, fsType, mountOpts)) + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ Mount options ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + + info, err := os.Stat(source) + if err != nil { + return fmt.Errorf("failed to stat source device: %w", err) + } + + if (info.Mode() & os.ModeDevice) != os.ModeDevice { + return fmt.Errorf("[NewMount] path %s is not a device", source) + } + + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ MODE SOURCE ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + s.Log.Trace(info.Mode().String()) + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ MODE SOURCE ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ FS MOUNT ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + s.Log.Trace("-----------------== start MkdirAll ==-----------------") + s.Log.Trace("mkdir create dir =" + target) + exists, err := s.PathExists(target) + if err != nil { + return fmt.Errorf("[PathExists] could not check if target directory %s exists: %w", target, err) + } + if !exists { + s.Log.Debug(fmt.Sprintf("Creating target directory %s", target)) + if err := os.MkdirAll(target, os.FileMode(0755)); err != nil { + return fmt.Errorf("[MkdirAll] could not create target directory %s: %w", target, err) + } + } + s.Log.Trace("-----------------== stop MkdirAll ==-----------------") + + isMountPoint, err := s.NodeStorage.IsMountPoint(target) + if err != nil { + return fmt.Errorf("[s.NodeStorage.IsMountPoint] unable to determine mount status of %s: %w", target, err) + } + + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ isMountPoint ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + s.Log.Trace(fmt.Sprintf("%t", isMountPoint)) + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ isMountPoint ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + + if isMountPoint { + mapperSourcePath := toMapperPath(source) + s.Log.Trace(fmt.Sprintf("Target %s is a mount point. 
Checking if it is already mounted to source %s or %s", target, source, mapperSourcePath)) + + mountedDevicePath, _, err := mountutils.GetDeviceNameFromMount(s.NodeStorage.Interface, target) + if err != nil { + return fmt.Errorf("failed to find the device mounted at %s: %w", target, err) + } + s.Log.Trace(fmt.Sprintf("Found device mounted at %s: %s", target, mountedDevicePath)) + + if mountedDevicePath != source && mountedDevicePath != mapperSourcePath { + return fmt.Errorf("target %s is a mount point and is not mounted to source %s or %s", target, source, mapperSourcePath) + } + + s.Log.Trace(fmt.Sprintf("Target %s is a mount point and already mounted to source %s. Skipping FormatAndMount without any checks", target, source)) + return nil + } + + s.Log.Trace("-----------------== start FormatAndMount ==---------------") + + if lvmType == internal.LVMTypeThin { + s.Log.Trace(fmt.Sprintf("LVM type is Thin. Thin pool name: %s", lvmThinPoolName)) + } + err = s.NodeStorage.FormatAndMountSensitiveWithFormatOptions(source, target, fsType, mountOpts, nil, formatOpts) + if err != nil { + return fmt.Errorf("failed to FormatAndMount : %w", err) + } + s.Log.Trace("-----------------== stop FormatAndMount ==---------------") + + s.Log.Trace("-----------------== stop NodeStageVolumeFS ==---------------") + return nil +} + +func (s *Store) NodePublishVolumeBlock(source, target string, mountOpts []string) error { + s.Log.Info(" ----== Start NodePublishVolumeBlock ==---- ") + + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ Mount options ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + s.Log.Trace(fmt.Sprintf("[NodePublishVolumeBlock] params source=%s target=%s mountOptions=%v", source, target, mountOpts)) + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ Mount options ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + + info, err := os.Stat(source) + if err != nil { + return fmt.Errorf("failed to stat source device: %w", err) + } + + if (info.Mode() & os.ModeDevice) != os.ModeDevice { + return fmt.Errorf("[NodePublishVolumeBlock] path %s is not a device", source) + } + + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ MODE SOURCE ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + s.Log.Trace(info.Mode().String()) + s.Log.Trace("≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈ MODE SOURCE ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈") + + s.Log.Trace("-----------------== start Create File ==---------------") + f, err := os.OpenFile(target, os.O_CREATE, os.FileMode(0644)) + if err != nil { + if !os.IsExist(err) { + return fmt.Errorf("[NodePublishVolumeBlock] could not create bind target for block volume %s, %w", target, err) + } + } else { + _ = f.Close() + } + s.Log.Trace("-----------------== stop Create File ==---------------") + s.Log.Trace("-----------------== start Mount ==---------------") + err = s.NodeStorage.Mount(source, target, "", mountOpts) + if err != nil { + s.Log.Error(err, "[NodePublishVolumeBlock] mount error :") + return err + } + s.Log.Trace("-----------------== stop Mount ==---------------") + s.Log.Trace("-----------------== stop NodePublishVolumeBlock ==---------------") + return nil +} + +func (s *Store) NodePublishVolumeFS(source, devPath, target, fsType string, mountOpts []string) error { + s.Log.Info(" ----== Start NodePublishVolumeFS ==---- ") + s.Log.Trace(fmt.Sprintf("[NodePublishVolumeFS] params source=%q target=%q mountOptions=%v", source, target, mountOpts)) + isMountPoint := false + exists, err := s.PathExists(target) + if err != nil { + return fmt.Errorf("[NodePublishVolumeFS] could not check if target file %s exists: %w", target, err) + } + + if exists { + s.Log.Trace(fmt.Sprintf("[NodePublishVolumeFS] target file %s 
already exists", target)) + isMountPoint, err = s.NodeStorage.IsMountPoint(target) + if err != nil { + return fmt.Errorf("[NodePublishVolumeFS] could not check if target file %s is a mount point: %w", target, err) + } + } else { + s.Log.Trace(fmt.Sprintf("[NodePublishVolumeFS] creating target file %q", target)) + if err := os.MkdirAll(target, os.FileMode(0755)); err != nil { + return fmt.Errorf("[NodePublishVolumeFS] could not create target file %q: %w", target, err) + } + } + + if isMountPoint { + s.Log.Trace(fmt.Sprintf("[NodePublishVolumeFS] target directory %q is a mount point. Check mount", target)) + err := checkMount(s, devPath, target, mountOpts) + if err != nil { + return fmt.Errorf("[NodePublishVolumeFS] failed to check mount info for %q: %w", target, err) + } + s.Log.Trace(fmt.Sprintf("[NodePublishVolumeFS] target directory %q is a mount point and already mounted to source %s. Skipping mount", target, source)) + return nil + } + + err = s.NodeStorage.Interface.Mount(source, target, fsType, mountOpts) + if err != nil { + return fmt.Errorf("[NodePublishVolumeFS] failed to bind mount %q to %q with mount options %v: %w", source, target, mountOpts, err) + } + + s.Log.Trace("-----------------== stop NodePublishVolumeFS ==---------------") + return nil +} + +func (s *Store) Unpublish(target string) error { + return s.Unstage(target) +} + +func (s *Store) Unstage(target string) error { + s.Log.Info(fmt.Sprintf("[unmount volume] target=%s", target)) + err := mountutils.CleanupMountPoint(target, s.NodeStorage.Interface, false) + // Ignore the error when it contains "not mounted", because that indicates the + // world is already in the desired state + // + // mount-utils attempts to detect this on its own but fails when running on + // a read-only root filesystem + if err == nil || strings.Contains(fmt.Sprint(err), "not mounted") { + return nil + } + + return err +} + +func (s *Store) IsNotMountPoint(target string) (bool, error) { + isMountPoint, err := s.NodeStorage.IsMountPoint(target) + if err != nil { + if os.IsNotExist(err) { + return true, nil + } + return false, err + } + return !isMountPoint, nil +} + +func (s *Store) ResizeFS(mountTarget string) error { + s.Log.Info(" ----== Resize FS ==---- ") + devicePath, _, err := mountutils.GetDeviceNameFromMount(s.NodeStorage.Interface, mountTarget) + if err != nil { + s.Log.Error(err, "Failed to find the device mounted at mountTarget", "mountTarget", mountTarget) + return fmt.Errorf("failed to find the device mounted at %s: %w", mountTarget, err) + } + + s.Log.Info("Found device for resizing", "devicePath", devicePath, "mountTarget", mountTarget) + + _, err = mountutils.NewResizeFs(s.NodeStorage.Exec).Resize(devicePath, mountTarget) + if err != nil { + s.Log.Error(err, "Failed to resize filesystem", "devicePath", devicePath, "mountTarget", mountTarget) + return fmt.Errorf("failed to resize filesystem %s on device %s: %w", mountTarget, devicePath, err) + } + + s.Log.Info("Filesystem resized successfully", "devicePath", devicePath) + return nil +} + +func (s *Store) PathExists(path string) (bool, error) { + return mountutils.PathExists(path) +} + +func (s *Store) NeedResize(devicePath string, deviceMountPath string) (bool, error) { + return mountutils.NewResizeFs(s.NodeStorage.Exec).NeedResize(devicePath, deviceMountPath) +} + +func toMapperPath(devPath string) string { + if !strings.HasPrefix(devPath, "/dev/") { + return "" + } + + // Device-mapper names escape hyphens inside VG/LV names by doubling them and join path components with a single hyphen. + shortPath := strings.TrimPrefix(devPath, "/dev/") + mapperPath := strings.ReplaceAll(shortPath, "-", "--") + 
mapperPath = strings.ReplaceAll(mapperPath, "/", "-") + return "/dev/mapper/" + mapperPath +} + +func checkMount(s *Store, devPath, target string, mountOpts []string) error { + mntInfo, err := s.NodeStorage.Interface.List() + if err != nil { + return fmt.Errorf("[checkMount] failed to list mounts: %w", err) + } + + for _, m := range mntInfo { + if m.Path == target { + mapperDevicePath := toMapperPath(devPath) + if m.Device != devPath && m.Device != mapperDevicePath { + return fmt.Errorf("[checkMount] device from mount point %q does not match expected source device path %s or mapper device path %s", m.Device, devPath, mapperDevicePath) + } + s.Log.Trace(fmt.Sprintf("[checkMount] mount point %s is mounted to device %s", target, m.Device)) + + if slices.Contains(mountOpts, "ro") { + if !slices.Contains(m.Opts, "ro") { + return fmt.Errorf("[checkMount] passed mount options contain 'ro' but mount options from mount point %q do not", target) + } + s.Log.Trace(fmt.Sprintf("[checkMount] mount point %s is mounted read-only", target)) + } + s.Log.Trace(fmt.Sprintf("[checkMount] mount point %s is mounted to device %s with mount options %v", target, m.Device, m.Opts)) + + return nil + } + } + + return fmt.Errorf("[checkMount] mount point %q not found in mount info", target) +} diff --git a/images/csi-driver/pkg/utils/type.go b/images/csi-driver/pkg/utils/type.go new file mode 100644 index 000000000..4cc39b072 --- /dev/null +++ b/images/csi-driver/pkg/utils/type.go @@ -0,0 +1,26 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package utils + +type VolumeGroup struct { + Name string `yaml:"name"` + Thin struct { + PoolName string `yaml:"poolName"` + } `yaml:"thin"` +} + +type LVMVolumeGroups []VolumeGroup diff --git a/images/csi-driver/werf.inc.yaml b/images/csi-driver/werf.inc.yaml new file mode 100644 index 000000000..5e87b6f2f --- /dev/null +++ b/images/csi-driver/werf.inc.yaml @@ -0,0 +1,116 @@ +--- +image: {{ $.ImageName }}-src-artifact +from: {{ $.Root.BASE_ALT_P11 }} +final: false + +git: + - add: / + to: /src + includePaths: + - api + - lib/go + - images/{{ $.ImageName }} + stageDependencies: + install: + - '**/*' + excludePaths: + - images/{{ $.ImageName }}/werf.yaml + +shell: + install: + - echo "src artifact" +--- +image: {{ $.ImageName }}-golang-artifact +fromImage: builder/golang-alpine +final: false + +import: + - image: {{ $.ImageName }}-src-artifact + add: /src + to: /src + before: install + +mount: + - fromPath: ~/go-pkg-cache + to: /go/pkg + +shell: + setup: + - cd /src/images/{{ $.ImageName }}/cmd + - GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s -w" -o /{{ $.ImageName }} + - chmod +x /{{ $.ImageName }} + +--- +{{- $csiBinariesMount := "/lib64/libnss_files.so.2 /lib64/libnss_dns.so.2 /bin/mount /bin/umount" }} +{{- $csiBinariesE2fsprogs := "/etc/e2scrub.conf /etc/mke2fs.conf /sbin/badblocks /sbin/debugfs /sbin/dumpe2fs /sbin/e2freefrag /sbin/e2fsck /sbin/e2image /sbin/e2initrd_helper /sbin/e2label /sbin/e2mmpstatus /sbin/e2scrub /sbin/e2scrub_all /sbin/e2undo /sbin/e4crypt /sbin/e4defrag /sbin/filefrag /sbin/fsck.ext2 /sbin/fsck.ext3 /sbin/fsck.ext4 /sbin/fsck.ext4dev /sbin/logsave /sbin/mke2fs /sbin/mkfs.ext2 /sbin/mkfs.ext3 /sbin/mkfs.ext4 /sbin/mkfs.ext4dev /sbin/mklost+found /sbin/resize2fs /sbin/tune2fs /usr/bin/chattr /usr/bin/lsattr" }} +{{- $csiBinariesXfsprogs := "/usr/lib64/xfsprogs/xfs_scrub_fail /usr/sbin/fsck.xfs /usr/sbin/mkfs.xfs /usr/sbin/xfs_* /usr/share/xfsprogs/mkfs/* " }} +{{- $csiBinariesUtilLinux := "/usr/sbin/blkid /usr/sbin/blockdev" }} +image: {{ $.ImageName }}-binaries-artifact +from: {{ $.Root.BASE_ALT_P11 }} +final: false +git: + - add: /tools/dev_images/additional_tools/alt/binary_replace.sh + to: /binary_replace.sh + stageDependencies: + beforeSetup: + - '**/*' +shell: + beforeInstall: + - apt-get update + - apt-get install -y glibc-utils glibc-nss glibc-core util-linux mount xfsprogs e2fsprogs + - {{ $.Root.ALT_CLEANUP_CMD }} + beforeSetup: + - chmod +x /binary_replace.sh + - /binary_replace.sh -i "{{ $csiBinariesMount }}" -o /relocate + - /binary_replace.sh -i "{{ $csiBinariesE2fsprogs }}" -o /relocate + - /binary_replace.sh -i "{{ $csiBinariesXfsprogs }}" -o /relocate + - /binary_replace.sh -i "{{ $csiBinariesUtilLinux }}" -o /relocate + setup: + - mkdir -p /relocate/etc + - ln -sf /proc/mounts /relocate/etc/mtab +--- +image: {{ $.ImageName }}-distroless-artifact +from: {{ $.Root.BASE_ALT_P11 }} +final: false +shell: + beforeInstall: + - apt-get update + - apt-get install -y openssl tzdata libtirpc + - {{ $.Root.ALT_CLEANUP_CMD }} + install: + - mkdir -p /relocate/bin /relocate/sbin /relocate/etc /relocate/var/lib/ssl /relocate/usr/bin /relocate/usr/sbin /relocate/usr/share + - cp -pr /tmp /relocate + - cp -pr /etc/passwd /etc/group /etc/hostname /etc/hosts /etc/shadow /etc/protocols /etc/services /etc/nsswitch.conf /etc/netconfig /relocate/etc + - cp -pr /usr/share/ca-certificates /relocate/usr/share + - cp -pr /usr/share/zoneinfo /relocate/usr/share + - cp -pr /var/lib/ssl/cert.pem /relocate/var/lib/ssl + - cp -pr /var/lib/ssl/certs 
/relocate/var/lib/ssl + - echo "deckhouse:x:{{ $.Root.DECKHOUSE_UID_GID }}:{{ $.Root.DECKHOUSE_UID_GID }}:deckhouse:/:/sbin/nologin" >> /relocate/etc/passwd + - echo "deckhouse:x:{{ $.Root.DECKHOUSE_UID_GID }}:" >> /relocate/etc/group + - echo "deckhouse:!::0:::::" >> /relocate/etc/shadow +--- +image: {{ $.ImageName }}-distroless +from: {{ $.Root.BASE_SCRATCH }} +final: false +import: + - image: {{ $.ImageName }}-distroless-artifact + add: /relocate + to: / + before: setup +--- +image: {{ $.ImageName }} +fromImage: {{ $.ImageName }}-distroless +import: + - image: {{ $.ImageName }}-golang-artifact + add: /{{ $.ImageName }} + to: /{{ $.ImageName }} + before: setup + - image: {{ $.ImageName }}-binaries-artifact + add: /relocate + to: / + before: setup +docker: + ENTRYPOINT: ["/{{ $.ImageName }}"] + LABEL: + distro: all + version: all diff --git a/images/linstor-drbd-wait/cmd/main.go b/images/linstor-drbd-wait/cmd/main.go index b8c68d7be..c51567336 100644 --- a/images/linstor-drbd-wait/cmd/main.go +++ b/images/linstor-drbd-wait/cmd/main.go @@ -25,7 +25,7 @@ import ( "strings" "time" - "github.com/sds-replicated-volume/images/linstor-drbd-wait/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) const ( diff --git a/images/linstor-drbd-wait/go.mod b/images/linstor-drbd-wait/go.mod index 501f9742e..bbd217f40 100644 --- a/images/linstor-drbd-wait/go.mod +++ b/images/linstor-drbd-wait/go.mod @@ -1,8 +1,204 @@ module github.com/sds-replicated-volume/images/linstor-drbd-wait -go 1.23.6 +go 1.24.11 + +require github.com/deckhouse/sds-replicated-volume/lib/go/common v0.0.0-00010101000000-000000000000 require ( - github.com/go-logr/logr v1.4.2 - k8s.io/klog/v2 v2.130.1 + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect 
+ github.com/daixiang0/gci v0.13.5 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty 
v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/moricho/tparallel v0.3.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/onsi/ginkgo/v2 v2.27.2 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + 
go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/tools v0.38.0 // indirect + golang.org/x/tools/go/expect v0.1.1-deprecated // indirect + golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/klog/v2 v2.130.1 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect +) + +replace github.com/deckhouse/sds-replicated-volume/api => ../../api + +replace github.com/deckhouse/sds-replicated-volume/lib/go/common => ../../lib/go/common + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + github.com/onsi/ginkgo/v2/ginkgo ) diff --git a/images/linstor-drbd-wait/go.sum b/images/linstor-drbd-wait/go.sum index dc00cea4e..53d833dcc 100644 --- a/images/linstor-drbd-wait/go.sum +++ b/images/linstor-drbd-wait/go.sum @@ -1,4 +1,591 @@ -github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= -github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 
h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod 
h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod 
h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod 
h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod 
h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck 
v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 
h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/polyfloyd/go-errorlint v1.7.1 
h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= 
+github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= 
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 
h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= 
+go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net 
v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys 
v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= 
+golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= diff --git a/images/linstor-drbd-wait/werf.inc.yaml b/images/linstor-drbd-wait/werf.inc.yaml index df7d9bf76..644259d78 100644 --- a/images/linstor-drbd-wait/werf.inc.yaml +++ b/images/linstor-drbd-wait/werf.inc.yaml @@ -1,6 +1,6 @@ --- image: {{ $.ImageName }}-src-artifact -from: {{ $.Root.BASE_ALT_P11 }} +fromImage: builder/alt final: false git: @@ -9,6 +9,7 @@ git: includePaths: - api - images/{{ $.ImageName }} + - lib/go stageDependencies: install: - '**/*' @@ -42,7 +43,7 @@ shell: --- image: {{ $.ImageName }}-distroless-artifact -from: {{ $.Root.BASE_ALT }} +fromImage: builder/alt final: false shell: 
beforeInstall: @@ -63,7 +64,7 @@ shell: --- image: {{ $.ImageName }}-distroless -from: {{ $.Root.BASE_SCRATCH }} +fromImage: base/distroless final: false import: - image: {{ $.ImageName }}-distroless-artifact @@ -73,7 +74,7 @@ import: --- image: {{ $.ImageName }} -fromImage: {{ $.ImageName }}-distroless +fromImage: base/distroless import: - image: {{ $.ImageName }}-golang-artifact diff --git a/images/megatest/LICENSE b/images/megatest/LICENSE new file mode 100644 index 000000000..b77c0c92a --- /dev/null +++ b/images/megatest/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/images/megatest/cmd/main.go b/images/megatest/cmd/main.go new file mode 100644 index 000000000..46628d637 --- /dev/null +++ b/images/megatest/cmd/main.go @@ -0,0 +1,180 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package main + +import ( + "context" + "fmt" + "log/slog" + "os" + "os/signal" + "syscall" + "time" + + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/runners" +) + +func main() { + // Parse options + var opt Opt + opt.Parse() + + // Convert log level string to slog.Level + var logLevel slog.Level + switch opt.LogLevel { + case "debug": + logLevel = slog.LevelDebug + case "info": + logLevel = slog.LevelInfo + case "warn": + logLevel = slog.LevelWarn + case "error": + logLevel = slog.LevelError + default: + logLevel = slog.LevelInfo + } + + // Setup logger with stdout output + logHandler := slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{ + Level: logLevel, + AddSource: false, + }) + log := slog.New(logHandler) + slog.SetDefault(log) + + start := time.Now() + log.Info("megatest started") + + // Create Kubernetes client first, before setting up signal handling + // This allows us to exit early if the cluster is unreachable + kubeClient, err := kubeutils.NewClientWithKubeconfig(opt.Kubeconfig) + if err != nil { + log.Error("failed to create Kubernetes client", "error", err) + os.Exit(1) + } + + // Setup signal handling + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + // Stop informers used by the VolumeChecker + defer kubeClient.StopInformers() + + sigChan := make(chan os.Signal, 1) + signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM) + + // Channel to broadcast second signal to all cleanup handlers + // When closed, all readers will receive notification simultaneously (broadcast mechanism) + forceCleanupChan := make(chan struct{}) + + // Handle signals: first signal stops volume creation, second signal forces cleanup cancellation + go func() { + sig := <-sigChan + log.Info("received first signal, stopping RV creation and cleanup", "signal", sig) + cancel() + + // Wait for second signal to broadcast to all cleanup handlers + sig = <-sigChan + log.Info("received second signal, forcing cleanup cancellation for all", "signal", sig) + close(forceCleanupChan) // Broadcast: all readers will get notification simultaneously + }() + + // Create multivolume config + cfg := config.MultiVolumeConfig{ + StorageClasses: opt.StorageClasses, + MaxVolumes: opt.MaxVolumes, + VolumeStep: config.StepMinMax{Min: opt.VolumeStepMin, Max: opt.VolumeStepMax}, + StepPeriod: config.DurationMinMax{Min: opt.StepPeriodMin, Max: opt.StepPeriodMax}, + VolumePeriod: config.DurationMinMax{Min: opt.VolumePeriodMin, Max: opt.VolumePeriodMax}, + DisablePodDestroyer: opt.DisablePodDestroyer, + DisableVolumeResizer: opt.DisableVolumeResizer, + DisableVolumeReplicaDestroyer: opt.DisableVolumeReplicaDestroyer, + DisableVolumeReplicaCreator: opt.DisableVolumeReplicaCreator, + } + + multiVolume := runners.NewMultiVolume(cfg, kubeClient, forceCleanupChan) + _ = multiVolume.Run(ctx) + + // Print statistics + stats := multiVolume.GetStats() + checkerStats := multiVolume.GetCheckerStats() + duration := time.Since(start) + + fmt.Fprintf(os.Stdout, "\nStatistics:\n") + fmt.Fprintf(os.Stdout, "Total RV created: %d\n", stats.CreatedRVCount) + fmt.Fprintf(os.Stdout, "Total create RV errors: %d\n", stats.CreateRVErrorCount) + + // Calculate average times + var avgCreateTime, avgDeleteTime, avgWaitTime time.Duration + if stats.CreatedRVCount > 0 { + avgCreateTime = stats.TotalCreateRVTime / 
time.Duration(stats.CreatedRVCount) + avgDeleteTime = stats.TotalDeleteRVTime / time.Duration(stats.CreatedRVCount) + avgWaitTime = stats.TotalWaitForRVReadyTime / time.Duration(stats.CreatedRVCount) + } + + // Print the detailed API-level timing only when debug logging is enabled + if logLevel <= slog.LevelDebug { + fmt.Fprintf(os.Stdout, "Total time to create RV via API and RVAs: %s (avg: %s)\n", stats.TotalCreateRVTime.String(), avgCreateTime.String()) + } + fmt.Fprintf(os.Stdout, "Total create RV time: %s (avg: %s)\n", stats.TotalWaitForRVReadyTime.String(), avgWaitTime.String()) + fmt.Fprintf(os.Stdout, "Total delete RV time: %s (avg: %s)\n", stats.TotalDeleteRVTime.String(), avgDeleteTime.String()) + + // Print checker statistics + printCheckerStats(checkerStats) + + fmt.Fprintf(os.Stdout, "\nTest duration: %s\n", duration.String()) + + os.Stdout.Sync() + + // Function returns normally, defer statements will execute +} + +// printCheckerStats prints a summary table of all checker statistics +func printCheckerStats(stats []*runners.CheckerStats) { + if len(stats) == 0 { + fmt.Fprintf(os.Stdout, "\nChecker Statistics: no data\n") + return + } + + fmt.Fprintf(os.Stdout, "\nChecker Statistics:\n") + fmt.Fprintf(os.Stdout, "%-40s %20s %20s\n", "RV Name", "IOReady Transitions", "Quorum Transitions") + fmt.Fprintf(os.Stdout, "%s\n", "────────────────────────────────────────────────────────────────────────────────") + + var stableCount, recoveredCount, brokenCount int + + for _, s := range stats { + ioReady := s.IOReadyTransitions.Load() + quorum := s.QuorumTransitions.Load() + + fmt.Fprintf(os.Stdout, "%-40s %20d %20d\n", s.RVName, ioReady, quorum) + + // Categorize RV state + switch { + case ioReady == 0 && quorum == 0: + stableCount++ // No issues at all + case ioReady%2 == 1 || quorum%2 == 1: + brokenCount++ // Odd = still in bad state + default: + recoveredCount++ // Even >0 = had issues but recovered + } + } + + fmt.Fprintf(os.Stdout, "%s\n", "────────────────────────────────────────────────────────────────────────────────") + fmt.Fprintf(os.Stdout, "Stable (0 transitions): %d\n", stableCount) + fmt.Fprintf(os.Stdout, "Recovered (even transitions): %d\n", recoveredCount) + fmt.Fprintf(os.Stdout, "Broken (odd transitions): %d\n", brokenCount) +} diff --git a/images/megatest/cmd/opt.go b/images/megatest/cmd/opt.go new file mode 100644 index 000000000..55fe36f6a --- /dev/null +++ b/images/megatest/cmd/opt.go @@ -0,0 +1,144 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package main + +import ( + "errors" + "os" + "regexp" + "time" + + "github.com/spf13/cobra" +) + +type Opt struct { + StorageClasses []string + Kubeconfig string + MaxVolumes int + VolumeStepMin int + VolumeStepMax int + StepPeriodMin time.Duration + StepPeriodMax time.Duration + VolumePeriodMin time.Duration + VolumePeriodMax time.Duration + LogLevel string + + // Disable flags for goroutines + DisablePodDestroyer bool + DisableVolumeResizer bool + DisableVolumeReplicaDestroyer bool + DisableVolumeReplicaCreator bool +} + +func (o *Opt) Parse() { + var rootCmd = &cobra.Command{ + Use: "megatest", + Short: "A tool for testing ReplicatedVolume operations in Kubernetes", + Long: `megatest is a testing tool that creates and manages multiple ReplicatedVolumes concurrently +to test the stability and performance of the SDS Replicated Volume system. + +Graceful Shutdown: + To stop megatest, press Ctrl+C (SIGINT). This will: + 1. Stop creating new ReplicatedVolumes + 2. Begin cleanup process that will delete all created ReplicatedVolumes and related resources + 3. After cleanup completes, display test statistics + +Interrupting Cleanup: + If you need to interrupt the cleanup process, press Ctrl+C a second time. + This will force immediate termination, leaving all objects created by megatest + in the cluster. These objects will need to be manually deleted later.`, + RunE: func(_ *cobra.Command, _ []string) error { + if len(o.StorageClasses) == 0 { + return errors.New("storage-classes flag is required") + } + + if !regexp.MustCompile(`^debug$|^info$|^warn$|^error$`).MatchString(o.LogLevel) { + return errors.New("invalid 'log-level' (allowed values: debug, info, warn, error)") + } + + if o.VolumeStepMin < 1 { + return errors.New("volume-step-min must be at least 1") + } + + if o.VolumeStepMax < o.VolumeStepMin { + return errors.New("volume-step-max must be greater than or equal to volume-step-min") + } + + if o.StepPeriodMin <= 0 { + return errors.New("step-period-min must be positive") + } + + if o.StepPeriodMax < o.StepPeriodMin { + return errors.New("step-period-max must be greater than or equal to step-period-min") + } + + if o.VolumePeriodMin <= 0 { + return errors.New("volume-period-min must be positive") + } + + if o.VolumePeriodMax < o.VolumePeriodMin { + return errors.New("volume-period-max must be greater than or equal to volume-period-min") + } + + if o.MaxVolumes < 1 { + return errors.New("max-volumes must be at least 1") + } + + return nil + }, + } + + // Exit after displaying the help information + rootCmd.SetHelpFunc(func(cmd *cobra.Command, _ []string) { + // Print Short description if available + if cmd.Short != "" { + cmd.Println(cmd.Short) + cmd.Println() + } + // Print Long description if available + if cmd.Long != "" { + cmd.Println(cmd.Long) + cmd.Println() + } + // Print usage and flags + cmd.Print(cmd.UsageString()) + os.Exit(0) + }) + + // Add flags + rootCmd.Flags().StringSliceVarP(&o.StorageClasses, "storage-classes", "", nil, "Comma-separated list of storage class names to use (required)") + rootCmd.Flags().StringVarP(&o.Kubeconfig, "kubeconfig", "", "", "Path to kubeconfig file") + rootCmd.Flags().IntVarP(&o.MaxVolumes, "max-volumes", "", 10, "Maximum number of concurrent ReplicatedVolumes") + rootCmd.Flags().IntVarP(&o.VolumeStepMin, "volume-step-min", "", 1, "Minimum number of ReplicatedVolumes to create per step") + rootCmd.Flags().IntVarP(&o.VolumeStepMax, "volume-step-max", "", 3, "Maximum number of ReplicatedVolumes to create per step") + 
rootCmd.Flags().DurationVarP(&o.StepPeriodMin, "step-period-min", "", 10*time.Second, "Minimum wait between ReplicatedVolume creation steps") + rootCmd.Flags().DurationVarP(&o.StepPeriodMax, "step-period-max", "", 30*time.Second, "Maximum wait between ReplicatedVolume creation steps") + rootCmd.Flags().DurationVarP(&o.VolumePeriodMin, "volume-period-min", "", 60*time.Second, "Minimum ReplicatedVolume lifetime") + rootCmd.Flags().DurationVarP(&o.VolumePeriodMax, "volume-period-max", "", 300*time.Second, "Maximum ReplicatedVolume lifetime") + rootCmd.Flags().StringVarP(&o.LogLevel, "log-level", "", "info", "Log level (allowed values: debug, info, warn, error)") + + // Disable flags for goroutines + rootCmd.Flags().BoolVarP(&o.DisablePodDestroyer, "disable-pod-destroyer", "", false, "Disable pod-destroyer goroutines") + rootCmd.Flags().BoolVarP(&o.DisableVolumeResizer, "disable-volume-resizer", "", false, "Disable volume-resizer goroutine") + rootCmd.Flags().BoolVarP(&o.DisableVolumeReplicaDestroyer, "disable-volume-replica-destroyer", "", false, "Disable volume-replica-destroyer goroutine") + rootCmd.Flags().BoolVarP(&o.DisableVolumeReplicaCreator, "disable-volume-replica-creator", "", false, "Disable volume-replica-creator goroutine") + + if err := rootCmd.Execute(); err != nil { + // we expect err to be logged already + os.Exit(1) + } +} diff --git a/images/megatest/go.mod b/images/megatest/go.mod new file mode 100644 index 000000000..fd5458463 --- /dev/null +++ b/images/megatest/go.mod @@ -0,0 +1,240 @@ +module github.com/deckhouse/sds-replicated-volume/images/megatest + +go 1.24.11 + +replace github.com/deckhouse/sds-replicated-volume/api => ../../api + +require ( + github.com/deckhouse/sds-replicated-volume/api v0.0.0-00010101000000-000000000000 + github.com/google/uuid v1.6.0 + github.com/spf13/cobra v1.10.2 + k8s.io/api v0.34.3 + k8s.io/apimachinery v0.34.3 + k8s.io/client-go v0.34.3 + sigs.k8s.io/controller-runtime v0.22.4 +) + +require ( + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // 
indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/emicklei/go-restful/v3 v3.13.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/evanphx/json-patch/v5 v5.9.11 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + 
github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // 
indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/x448/float16 v0.8.4 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + golang.org/x/tools v0.38.0 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/apiextensions-apiserver v0.34.3 // indirect + k8s.io/klog/v2 v2.130.1 // indirect + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +tool github.com/golangci/golangci-lint/cmd/golangci-lint diff --git a/images/megatest/go.sum b/images/megatest/go.sum new file mode 100644 index 000000000..405391431 --- /dev/null +++ b/images/megatest/go.sum @@ -0,0 +1,663 @@ +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod 
h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod 
h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod 
h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= +github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= +github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= 
+github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= +github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= +github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 
h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= 
+github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= 
+github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= 
+github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= +github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 
h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= 
+github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 
h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= 
+github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod 
h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= 
+golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync 
v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod 
h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= +k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= +k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= +k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A= +sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 
h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/images/megatest/internal/config/config.go b/images/megatest/internal/config/config.go new file mode 100644 index 000000000..fb604a4dc --- /dev/null +++ b/images/megatest/internal/config/config.go @@ -0,0 +1,93 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package config + +import ( + "time" + + "k8s.io/apimachinery/pkg/api/resource" +) + +// Duration represents a time duration range with min and max values +type DurationMinMax struct { + Min time.Duration + Max time.Duration +} + +// Count represents a count range with min and max values +type StepMinMax struct { + Min int + Max int +} + +// Size represents a size range with min and max values +type SizeMinMax struct { + Min resource.Quantity + Max resource.Quantity +} + +// MultiVolumeConfig configures the multivolume orchestrator +type MultiVolumeConfig struct { + StorageClasses []string + MaxVolumes int + VolumeStep StepMinMax + StepPeriod DurationMinMax + VolumePeriod DurationMinMax + DisablePodDestroyer bool + DisableVolumeResizer bool + DisableVolumeReplicaDestroyer bool + DisableVolumeReplicaCreator bool +} + +// VolumeMainConfig configures the volume-main goroutine +type VolumeMainConfig struct { + StorageClassName string + VolumeLifetime time.Duration + InitialSize resource.Quantity + DisableVolumeResizer bool + DisableVolumeReplicaDestroyer bool + DisableVolumeReplicaCreator bool +} + +// VolumeAttacherConfig configures the volume-attacher goroutine +type VolumeAttacherConfig struct { + Period DurationMinMax +} + +// VolumeReplicaDestroyerConfig configures the volume-replica-destroyer goroutine +type VolumeReplicaDestroyerConfig struct { + Period DurationMinMax +} + +// VolumeReplicaCreatorConfig configures the volume-replica-creator goroutine +type VolumeReplicaCreatorConfig struct { + Period DurationMinMax +} + +// VolumeResizerConfig configures the volume-resizer goroutine +type VolumeResizerConfig struct { + Period DurationMinMax + Step SizeMinMax +} + +// PodDestroyerConfig configures the pod-destroyer goroutine +type PodDestroyerConfig struct { + Namespace string + LabelSelector string + PodCount StepMinMax + Period DurationMinMax +} diff --git a/images/megatest/internal/kubeutils/client.go b/images/megatest/internal/kubeutils/client.go new file mode 100644 index 000000000..4aee71dcd --- /dev/null +++ 
b/images/megatest/internal/kubeutils/client.go @@ -0,0 +1,604 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package kubeutils + +import ( + "context" + "crypto/sha1" + "encoding/hex" + "fmt" + "math/rand/v2" + "sync" + "time" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/serializer" + "k8s.io/apimachinery/pkg/watch" + "k8s.io/client-go/rest" + "k8s.io/client-go/tools/cache" + "k8s.io/client-go/tools/clientcmd" + "k8s.io/client-go/util/flowcontrol" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/config" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" +) + +const ( + // rvInformerResyncPeriod is the resync period for the RV informer. + // Normally events arrive instantly via Watch. Resync is a safety net + // for rare cases (~1%) when Watch connection drops and events are missed. + // Every resync period, informer re-lists all RVs to ensure cache is accurate. + rvInformerResyncPeriod = 30 * time.Second + + // nodesCacheTTL is the time-to-live for the nodes cache. + nodesCacheTTL = 30 * time.Second +) + +// Client wraps a controller-runtime client with helper methods +type Client struct { + cl client.Client + cfg *rest.Config + scheme *runtime.Scheme + + // Cached nodes with TTL + cachedNodes []corev1.Node + nodesCacheTime time.Time + nodesMutex sync.RWMutex + + // RV informer with dispatcher for VolumeCheckers. + // Uses dispatcher pattern instead of per-checker handlers for efficiency: + // - One handler processes all events (not N handlers for N checkers) + // - Map lookup O(1) instead of N filter calls per event + // - Better for 100+ concurrent RV watchers + rvInformer cache.SharedIndexInformer + rvInformerMu sync.RWMutex + informerStop chan struct{} + informerReady bool + + // Dispatcher: routes RV events to registered checkers by name. + // Key: RV name, Value: channel to send updates. + rvCheckersMu sync.RWMutex + rvCheckers map[string]chan *v1alpha1.ReplicatedVolume +} + +// NewClient creates a new Kubernetes client +func NewClient() (*Client, error) { + return NewClientWithKubeconfig("") +} + +// NewClientWithKubeconfig creates a new Kubernetes client with the specified kubeconfig path +func NewClientWithKubeconfig(kubeconfigPath string) (*Client, error) { + var cfg *rest.Config + var err error + + if kubeconfigPath != "" { + cfg, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath) + if err != nil { + return nil, fmt.Errorf("building config from kubeconfig file %s: %w", kubeconfigPath, err) + } + } else { + cfg, err = config.GetConfig() + if err != nil { + return nil, fmt.Errorf("getting kubeconfig: %w", err) + } + } + + // Disable rate limiter for megatest to avoid "rate: Wait(n=1) would exceed context deadline" errors. + // megatest is a test tool that creates/deletes many resources concurrently. 
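+ // (When no limiter is configured, client-go defaults to about 5 QPS with a burst of 10,
+ // which is far too low for this kind of concurrent churn.)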
+ // In test environments, disabling client-side rate limiting is acceptable. + // Note: API server may still throttle requests, but client won't block waiting. + cfg.RateLimiter = flowcontrol.NewFakeAlwaysRateLimiter() + + scheme := runtime.NewScheme() + if err := corev1.AddToScheme(scheme); err != nil { + return nil, fmt.Errorf("adding corev1 to scheme: %w", err) + } + if err := v1alpha1.AddToScheme(scheme); err != nil { + return nil, fmt.Errorf("adding v1alpha1 to scheme: %w", err) + } + + cl, err := client.New(cfg, client.Options{Scheme: scheme}) + if err != nil { + return nil, fmt.Errorf("creating client: %w", err) + } + + c := &Client{ + cl: cl, + cfg: cfg, + scheme: scheme, + informerStop: make(chan struct{}), + rvCheckers: make(map[string]chan *v1alpha1.ReplicatedVolume), + } + + // Initialize RV informer + if err := c.initRVInformer(); err != nil { + return nil, fmt.Errorf("initializing RV informer: %w", err) + } + + return c, nil +} + +// initRVInformer creates and starts the shared informer for ReplicatedVolumes. +// Called once during NewClient(). VolumeCheckers register handlers via AddRVEventHandler(). +func (c *Client) initRVInformer() error { + // Create REST client for RV + restCfg := rest.CopyConfig(c.cfg) + restCfg.GroupVersion = &v1alpha1.SchemeGroupVersion + restCfg.APIPath = "/apis" + // Use WithoutConversion() to avoid "no kind X is registered for internal version" errors. + // CRDs don't have internal versions like core Kubernetes types, so we need to skip + // version conversion when decoding watch events. + restCfg.NegotiatedSerializer = serializer.NewCodecFactory(c.scheme).WithoutConversion() + + restClient, err := rest.RESTClientFor(restCfg) + if err != nil { + return fmt.Errorf("creating REST client: %w", err) + } + + // Create ListWatch for ReplicatedVolumes using REST client methods directly + lw := &cache.ListWatch{ + ListWithContextFunc: func(_ context.Context, options metav1.ListOptions) (runtime.Object, error) { + result := &v1alpha1.ReplicatedVolumeList{} + err := restClient.Get(). + Resource("replicatedvolumes"). + VersionedParams(&options, metav1.ParameterCodec). + Do(context.Background()). + Into(result) + return result, err + }, + WatchFuncWithContext: func(_ context.Context, options metav1.ListOptions) (watch.Interface, error) { + options.Watch = true + return restClient.Get(). + Resource("replicatedvolumes"). + VersionedParams(&options, metav1.ParameterCodec). + Watch(context.Background()) + }, + } + + // Create shared informer + c.rvInformer = cache.NewSharedIndexInformer( + lw, + &v1alpha1.ReplicatedVolume{}, + rvInformerResyncPeriod, + cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, + ) + + // Register single dispatcher handler. + // This handler routes events to registered checkers by RV name. + // More efficient than N handlers for N checkers (O(1) map lookup vs O(N) filter calls). 
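+ // Illustrative flow (comment only, not executed here): for an RV named "rv-1",
+ // an Add/Update event reaches dispatchRVEvent below, which does a single map lookup:
+ //
+ //	ch, ok := c.rvCheckers["rv-1"] // O(1) regardless of how many checkers are registered
+ //	if ok {
+ //		select {
+ //		case ch <- rv:
+ //		default: // checker busy; it catches up on the next event or resync
+ //		}
+ //	}
+ //
+ // With per-checker handlers, every event would instead invoke N handlers that each
+ // filter by RV name.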
+ _, _ = c.rvInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{ + AddFunc: func(obj interface{}) { + c.dispatchRVEvent(obj) + }, + UpdateFunc: func(_, newObj interface{}) { + c.dispatchRVEvent(newObj) + }, + DeleteFunc: func(_ interface{}) { + // Delete events are not dispatched - checker stops before RV deletion + }, + }) + + // Start informer in background + go c.rvInformer.Run(c.informerStop) + + // Wait for cache sync with timeout to detect connectivity issues early + syncCtx, syncCancel := context.WithTimeout(context.Background(), 5*time.Second) + defer syncCancel() + + syncDone := make(chan struct{}) + var syncErr error + go func() { + // This is a blocking call that waits for the cache to be synced or the context is cancelled + if !cache.WaitForCacheSync(c.informerStop, c.rvInformer.HasSynced) { + syncErr = fmt.Errorf("cache sync failed") + } + close(syncDone) + }() + + select { + case <-syncDone: + if syncErr != nil { + return syncErr + } + // Cache synced successfully + case <-syncCtx.Done(): + // Timeout - cluster might be unreachable or API server is slow + return fmt.Errorf("timeout waiting for RV informer cache sync: cluster may be unreachable") + case <-c.informerStop: + // Informer was stopped (shouldn't happen during init) + return fmt.Errorf("informer stopped unexpectedly during initialization") + } + + c.rvInformerMu.Lock() + c.informerReady = true + c.rvInformerMu.Unlock() + + return nil +} + +// dispatchRVEvent routes an RV event to the registered checker (if any). +// Called by informer handler for Add/Update events. +func (c *Client) dispatchRVEvent(obj interface{}) { + rv, ok := obj.(*v1alpha1.ReplicatedVolume) + if !ok { + return + } + + c.rvCheckersMu.RLock() + ch, exists := c.rvCheckers[rv.Name] + c.rvCheckersMu.RUnlock() + + if exists { + select { + case ch <- rv: + default: + // Channel full, skip event (checker will get next one or resync) + } + } +} + +// StopInformers stops all running informers. +// Called on application shutdown from main.go via defer. +func (c *Client) StopInformers() { + c.rvInformerMu.Lock() + defer c.rvInformerMu.Unlock() + + if c.informerReady { + close(c.informerStop) + c.informerReady = false + } +} + +// RegisterRVChecker registers a VolumeChecker to receive events for specific RV. +// Returns channel where RV updates will be sent. Caller must call UnregisterRVChecker on shutdown. +// Uses dispatcher pattern: one informer handler routes to many checkers via map lookup. +func (c *Client) RegisterRVChecker(rvName string, ch chan *v1alpha1.ReplicatedVolume) error { + c.rvInformerMu.RLock() + ready := c.informerReady + c.rvInformerMu.RUnlock() + + if !ready { + return fmt.Errorf("RV informer is not ready") + } + + c.rvCheckersMu.Lock() + c.rvCheckers[rvName] = ch + c.rvCheckersMu.Unlock() + + return nil +} + +// UnregisterRVChecker removes a VolumeChecker registration. +// Called by VolumeChecker during shutdown to stop receiving events. +func (c *Client) UnregisterRVChecker(rvName string) { + c.rvCheckersMu.Lock() + delete(c.rvCheckers, rvName) + c.rvCheckersMu.Unlock() +} + +// GetRVFromCache gets a ReplicatedVolume from the informer cache by name. +// Used by VolumeChecker.checkInitialState() to get RV without API call. 
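+// Illustrative usage (hypothetical caller, not part of this package):
+//
+//	if rv, err := c.GetRVFromCache("rv-1"); err == nil {
+//		ready := c.IsRVReady(rv) // cached object, no API round-trip
+//		_ = ready
+//	}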
+func (c *Client) GetRVFromCache(name string) (*v1alpha1.ReplicatedVolume, error) { + c.rvInformerMu.RLock() + defer c.rvInformerMu.RUnlock() + + if !c.informerReady { + return nil, fmt.Errorf("RV informer is not ready") + } + + obj, exists, err := c.rvInformer.GetStore().GetByKey(name) + if err != nil { + return nil, err + } + if !exists { + return nil, fmt.Errorf("RV %s not found in cache", name) + } + + rv, ok := obj.(*v1alpha1.ReplicatedVolume) + if !ok { + return nil, fmt.Errorf("unexpected object type in cache: %T", obj) + } + + return rv, nil +} + +// Client returns the underlying controller-runtime client +func (c *Client) Client() client.Client { + return c.cl +} + +// GetRandomNodes selects n random unique nodes from the cluster +func (c *Client) GetRandomNodes(ctx context.Context, n int) ([]corev1.Node, error) { + nodes, err := c.ListNodes(ctx) + if err != nil { + return nil, err + } + if len(nodes) < n { + n = len(nodes) + } + + // Fisher-Yates shuffle and take first n + //nolint:gosec // G404: math/rand is fine for non-security-critical random selection + rand.Shuffle(len(nodes), func(i, j int) { + nodes[i], nodes[j] = nodes[j], nodes[i] + }) + + return nodes[:n], nil +} + +// ListNodes returns all nodes in the cluster with label storage.deckhouse.io/sds-replicated-volume-node="" +// The result is cached with TTL of nodesCacheTTL +func (c *Client) ListNodes(ctx context.Context) ([]corev1.Node, error) { + c.nodesMutex.RLock() + if c.cachedNodes != nil && time.Since(c.nodesCacheTime) < nodesCacheTTL { + nodes := make([]corev1.Node, len(c.cachedNodes)) + for i := range c.cachedNodes { + nodes[i] = *c.cachedNodes[i].DeepCopy() + } + c.nodesMutex.RUnlock() + return nodes, nil + } + c.nodesMutex.RUnlock() + + c.nodesMutex.Lock() + defer c.nodesMutex.Unlock() + + // Double-check after acquiring write lock + if c.cachedNodes != nil && time.Since(c.nodesCacheTime) < nodesCacheTTL { + nodes := make([]corev1.Node, len(c.cachedNodes)) + for i := range c.cachedNodes { + nodes[i] = *c.cachedNodes[i].DeepCopy() + } + return nodes, nil + } + + nodeList := &corev1.NodeList{} + err := c.cl.List(ctx, nodeList, client.MatchingLabels{ + "storage.deckhouse.io/sds-replicated-volume-node": "", + }) + if err != nil { + return nil, err + } + + // Cache the result with timestamp + c.cachedNodes = make([]corev1.Node, len(nodeList.Items)) + for i := range nodeList.Items { + c.cachedNodes[i] = *nodeList.Items[i].DeepCopy() + } + c.nodesCacheTime = time.Now() + + // Return a deep copy to prevent external modifications + nodes := make([]corev1.Node, len(c.cachedNodes)) + for i := range c.cachedNodes { + nodes[i] = *c.cachedNodes[i].DeepCopy() + } + return nodes, nil +} + +// CreateRV creates a new ReplicatedVolume +func (c *Client) CreateRV(ctx context.Context, rv *v1alpha1.ReplicatedVolume) error { + return c.cl.Create(ctx, rv) +} + +// DeleteRV deletes a ReplicatedVolume +func (c *Client) DeleteRV(ctx context.Context, rv *v1alpha1.ReplicatedVolume) error { + return c.cl.Delete(ctx, rv) +} + +// GetRV gets a ReplicatedVolume by name (from API server, not cache) +func (c *Client) GetRV(ctx context.Context, name string) (*v1alpha1.ReplicatedVolume, error) { + rv := &v1alpha1.ReplicatedVolume{} + err := c.cl.Get(ctx, client.ObjectKey{Name: name}, rv) + if err != nil { + return nil, err + } + return rv, nil +} + +// IsRVReady checks if a ReplicatedVolume is in IOReady and Quorum conditions +func (c *Client) IsRVReady(rv *v1alpha1.ReplicatedVolume) bool { + if rv == nil { + return false + } + return 
meta.IsStatusConditionTrue(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) && + meta.IsStatusConditionTrue(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondQuorumType) +} + +// PatchRV patches a ReplicatedVolume using merge patch strategy +func (c *Client) PatchRV(ctx context.Context, originalRV *v1alpha1.ReplicatedVolume, updatedRV *v1alpha1.ReplicatedVolume) error { + return c.cl.Patch(ctx, updatedRV, client.MergeFrom(originalRV)) +} + +func buildRVAName(rvName, nodeName string) string { + base := "rva-" + rvName + "-" + nodeName + if len(base) <= 253 { + return base + } + sum := sha1.Sum([]byte(base)) + hash := hex.EncodeToString(sum[:])[:8] + // "rva-" + rv + "-" + node + "-" + hash + const prefixLen = 4 + const sepCount = 2 + const hashLen = 8 + maxPartsLen := 253 - prefixLen - sepCount - hashLen + if maxPartsLen < 2 { + return "rva-" + hash + } + rvMax := maxPartsLen / 2 + nodeMax := maxPartsLen - rvMax + rvPart := rvName + if len(rvPart) > rvMax { + rvPart = rvPart[:rvMax] + } + nodePart := nodeName + if len(nodePart) > nodeMax { + nodePart = nodePart[:nodeMax] + } + return "rva-" + rvPart + "-" + nodePart + "-" + hash +} + +// EnsureRVA creates a ReplicatedVolumeAttachment for (rvName, nodeName) if it does not exist. +func (c *Client) EnsureRVA(ctx context.Context, rvName, nodeName string) (*v1alpha1.ReplicatedVolumeAttachment, error) { + rvaName := buildRVAName(rvName, nodeName) + existing := &v1alpha1.ReplicatedVolumeAttachment{} + if err := c.cl.Get(ctx, client.ObjectKey{Name: rvaName}, existing); err == nil { + return existing, nil + } else if client.IgnoreNotFound(err) != nil { + return nil, fmt.Errorf("get RVA %s: %w", rvaName, err) + } + + rva := &v1alpha1.ReplicatedVolumeAttachment{ + ObjectMeta: metav1.ObjectMeta{ + Name: rvaName, + }, + Spec: v1alpha1.ReplicatedVolumeAttachmentSpec{ + ReplicatedVolumeName: rvName, + NodeName: nodeName, + }, + } + if err := c.cl.Create(ctx, rva); err != nil { + return nil, err + } + return rva, nil +} + +// DeleteRVA deletes a ReplicatedVolumeAttachment for (rvName, nodeName). It is idempotent. +func (c *Client) DeleteRVA(ctx context.Context, rvName, nodeName string) error { + rvaName := buildRVAName(rvName, nodeName) + rva := &v1alpha1.ReplicatedVolumeAttachment{} + if err := c.cl.Get(ctx, client.ObjectKey{Name: rvaName}, rva); err != nil { + return client.IgnoreNotFound(err) + } + return client.IgnoreNotFound(c.cl.Delete(ctx, rva)) +} + +// ListRVAsByRVName lists non-deleting RVAs for a given RV (cluster-scoped). +func (c *Client) ListRVAsByRVName(ctx context.Context, rvName string) ([]v1alpha1.ReplicatedVolumeAttachment, error) { + list := &v1alpha1.ReplicatedVolumeAttachmentList{} + if err := c.cl.List(ctx, list); err != nil { + return nil, err + } + var out []v1alpha1.ReplicatedVolumeAttachment + for _, item := range list.Items { + if !item.DeletionTimestamp.IsZero() { + continue + } + if item.Spec.ReplicatedVolumeName != rvName { + continue + } + out = append(out, item) + } + return out, nil +} + +// WaitForRVAReady waits until RVA Ready condition becomes True. 
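+// It polls every 500ms and returns early with an error when the Attached condition
+// reports a permanently unsatisfiable reason. Illustrative usage (hypothetical caller;
+// the timeout value is an assumption, callers normally pass their own context):
+//
+//	attachCtx, cancel := context.WithTimeout(ctx, 2*time.Minute)
+//	defer cancel()
+//	if _, err := c.EnsureRVA(attachCtx, rvName, nodeName); err == nil {
+//		err = c.WaitForRVAReady(attachCtx, rvName, nodeName)
+//	}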
+func (c *Client) WaitForRVAReady(ctx context.Context, rvName, nodeName string) error { + rvaName := buildRVAName(rvName, nodeName) + for { + rva := &v1alpha1.ReplicatedVolumeAttachment{} + if err := c.cl.Get(ctx, client.ObjectKey{Name: rvaName}, rva); err != nil { + if client.IgnoreNotFound(err) != nil { + return err + } + if err := waitWithContext(ctx, 500*time.Millisecond); err != nil { + return err + } + continue + } + cond := meta.FindStatusCondition(rva.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondReadyType) + if cond != nil && cond.Status == metav1.ConditionTrue { + return nil + } + // Early exit for permanent attach failures: these are reported via Attached condition reason. + attachedCond := meta.FindStatusCondition(rva.Status.Conditions, v1alpha1.ReplicatedVolumeAttachmentCondAttachedType) + if attachedCond != nil && + attachedCond.Status == metav1.ConditionFalse && + (attachedCond.Reason == v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonLocalityNotSatisfied || attachedCond.Reason == v1alpha1.ReplicatedVolumeAttachmentCondAttachedReasonUnableToProvideLocalVolumeAccess) { + return fmt.Errorf("RVA %s for volume=%s node=%s not attachable: Attached=%s reason=%s message=%q", + rvaName, rvName, nodeName, attachedCond.Status, attachedCond.Reason, attachedCond.Message) + } + if err := waitWithContext(ctx, 500*time.Millisecond); err != nil { + return err + } + } +} + +// ListRVRsByRVName lists all ReplicatedVolumeReplicas for a given RV +// Filters by spec.replicatedVolumeName field +func (c *Client) ListRVRsByRVName(ctx context.Context, rvName string) ([]v1alpha1.ReplicatedVolumeReplica, error) { + rvrList := &v1alpha1.ReplicatedVolumeReplicaList{} + err := c.cl.List(ctx, rvrList) + if err != nil { + return nil, err + } + + // Filter by replicatedVolumeName + var result []v1alpha1.ReplicatedVolumeReplica + for _, rvr := range rvrList.Items { + if rvr.Spec.ReplicatedVolumeName == rvName { + result = append(result, rvr) + } + } + return result, nil +} + +// DeleteRVR deletes a ReplicatedVolumeReplica +func (c *Client) DeleteRVR(ctx context.Context, rvr *v1alpha1.ReplicatedVolumeReplica) error { + return c.cl.Delete(ctx, rvr) +} + +// CreateRVR creates a ReplicatedVolumeReplica +func (c *Client) CreateRVR(ctx context.Context, rvr *v1alpha1.ReplicatedVolumeReplica) error { + return c.cl.Create(ctx, rvr) +} + +// ListPods returns pods in namespace matching label selector +func (c *Client) ListPods(ctx context.Context, namespace, labelSelector string) ([]corev1.Pod, error) { + podList := &corev1.PodList{} + + selector, err := labels.Parse(labelSelector) + if err != nil { + return nil, fmt.Errorf("parsing label selector %q: %w", labelSelector, err) + } + + err = c.cl.List(ctx, podList, client.InNamespace(namespace), client.MatchingLabelsSelector{Selector: selector}) + if err != nil { + return nil, fmt.Errorf("listing pods in namespace %q with selector %q: %w", namespace, labelSelector, err) + } + + return podList.Items, nil +} + +// DeletePod deletes a pod (does not wait for deletion) +func (c *Client) DeletePod(ctx context.Context, pod *corev1.Pod) error { + return c.cl.Delete(ctx, pod) +} + +// waitWithContext waits for the specified duration or until context is cancelled +func waitWithContext(ctx context.Context, d time.Duration) error { + select { + case <-ctx.Done(): + return ctx.Err() + case <-time.After(d): + return nil + } +} diff --git a/images/megatest/internal/runners/common.go b/images/megatest/internal/runners/common.go new file mode 100644 index 
000000000..dbdfea613 --- /dev/null +++ b/images/megatest/internal/runners/common.go @@ -0,0 +1,78 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package runners + +import ( + "context" + "math/rand" + "time" + + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" +) + +const ( + // CleanupTimeout is the timeout for cleanup operations. + // Increased to 3 minutes to handle rate limiter delays when deleting many RVs concurrently. + CleanupTimeout = 3 * time.Minute +) + +// Runner represents a goroutine that can be started and stopped +type Runner interface { + // Run starts the runner and blocks until the context is cancelled + Run(ctx context.Context) error +} + +// randomDuration returns a random duration between min and max +func randomDuration(d config.DurationMinMax) time.Duration { + if d.Max <= d.Min { + return d.Min + } + delta := d.Max - d.Min + //nolint:gosec // G404: math/rand is fine for non-security-critical delays + return d.Min + time.Duration(rand.Int63n(int64(delta))) +} + +// randomInt returns a random int between minVal and maxVal (inclusive) +func randomInt(minVal, maxVal int) int { + if maxVal <= minVal { + return minVal + } + //nolint:gosec // G404: math/rand is fine for non-security-critical random selection + return minVal + rand.Intn(maxVal-minVal+1) +} + +// waitWithContext waits for the specified duration or until context is cancelled +func waitWithContext(ctx context.Context, d time.Duration) error { + select { + case <-ctx.Done(): + return ctx.Err() + case <-time.After(d): + return nil + } +} + +// waitRandomWithContext waits for a random duration within the given range +func waitRandomWithContext(ctx context.Context, d config.DurationMinMax) error { + return waitWithContext(ctx, randomDuration(d)) +} + +// measureDurationError measures the execution time of a function that returns only error +func measureDurationError(fn func() error) (time.Duration, error) { + startTime := time.Now() + err := fn() + return time.Since(startTime), err +} diff --git a/images/megatest/internal/runners/multivolume.go b/images/megatest/internal/runners/multivolume.go new file mode 100644 index 000000000..393860d2e --- /dev/null +++ b/images/megatest/internal/runners/multivolume.go @@ -0,0 +1,245 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package runners + +import ( + "context" + "fmt" + "log/slog" + "math/rand" + "sync" + "sync/atomic" + "time" + + "github.com/google/uuid" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +var ( + // PodDestroyer Agent configuration + podDestroyerAgentNamespace = "d8-sds-replicated-volume" + podDestroyerAgentLabelSelector = "app=agent" + podDestroyerAgentPodCountMinMax = []int{1, 5} + podDestroyerAgentPeriodMinMax = []int{30, 60} + + // PodDestroyer Controller configuration + podDestroyerControllerNamespace = "d8-sds-replicated-volume" + podDestroyerControllerLabelSelector = "app=controller" + podDestroyerControllerPodCountMinMax = []int{1, 3} + podDestroyerControllerPeriodMinMax = []int{30, 60} +) + +// Stats contains statistics about the test run +type Stats struct { + CreatedRVCount int64 + TotalCreateRVTime time.Duration + TotalDeleteRVTime time.Duration + TotalWaitForRVReadyTime time.Duration + CreateRVErrorCount int64 +} + +// MultiVolume orchestrates multiple volume-main instances and pod-destroyers +type MultiVolume struct { + cfg config.MultiVolumeConfig + client *kubeutils.Client + log *slog.Logger + forceCleanupChan <-chan struct{} + + // Tracking running volumes + runningVolumes atomic.Int32 + + // Statistics + createdRVCount atomic.Int64 + totalCreateRVTime atomic.Int64 // nanoseconds + totalDeleteRVTime atomic.Int64 // nanoseconds + totalWaitForRVReadyTime atomic.Int64 // nanoseconds + createRVErrorCount atomic.Int64 + + // Checker stats from all VolumeCheckers + checkerStatsMu sync.Mutex + checkerStats []*CheckerStats +} + +// NewMultiVolume creates a new MultiVolume orchestrator +func NewMultiVolume( + cfg config.MultiVolumeConfig, + client *kubeutils.Client, + forceCleanupChan <-chan struct{}, +) *MultiVolume { + return &MultiVolume{ + cfg: cfg, + client: client, + log: slog.Default().With("runner", "multivolume"), + forceCleanupChan: forceCleanupChan, + } +} + +// Run starts the multivolume orchestration until context is cancelled +func (m *MultiVolume) Run(ctx context.Context) error { + var disabledRunners []string + if m.cfg.DisablePodDestroyer { + disabledRunners = append(disabledRunners, "pod-destroyer") + } + if m.cfg.DisableVolumeResizer { + disabledRunners = append(disabledRunners, "volume-resizer") + } + if m.cfg.DisableVolumeReplicaDestroyer { + disabledRunners = append(disabledRunners, "volume-replica-destroyer") + } + if m.cfg.DisableVolumeReplicaCreator { + disabledRunners = append(disabledRunners, "volume-replica-creator") + } + + m.log.Info("started", "disabled_runners", disabledRunners) + defer m.log.Info("finished") + + if m.cfg.DisablePodDestroyer { + m.log.Debug("pod-destroyer runners are disabled") + } else { + m.startPodDestroyers(ctx) + } + + // Main volume creation loop + for { + // Check if we can create more volumes + currentVolumes := int(m.runningVolumes.Load()) + if currentVolumes < m.cfg.MaxVolumes { + // Determine how many to create + toCreate := randomInt(m.cfg.VolumeStep.Min, m.cfg.VolumeStep.Max) + m.log.Debug("create volumes", "count", toCreate) + + for i := 0; i < toCreate; i++ { + // Select random storage class + //nolint:gosec // G404: math/rand is fine for non-security-critical random selection + storageClass := m.cfg.StorageClasses[rand.Intn(len(m.cfg.StorageClasses))] + + // Select random volume period + volumeLifetime := randomDuration(m.cfg.VolumePeriod) + + // 
Generate unique name + rvName := fmt.Sprintf("mgt-%s", uuid.New().String()) + + // Start volume-main + m.startVolumeMain(ctx, rvName, storageClass, volumeLifetime) + } + } + + // Wait before next iteration + randomDuration := randomDuration(m.cfg.StepPeriod) + m.log.Debug("wait before next iteration of volume creation", "duration", randomDuration.String()) + if err := waitWithContext(ctx, randomDuration); err != nil { + m.cleanup(err) + return nil + } + } +} + +// GetStats returns statistics about the test run +func (m *MultiVolume) GetStats() Stats { + return Stats{ + CreatedRVCount: m.createdRVCount.Load(), + TotalCreateRVTime: time.Duration(m.totalCreateRVTime.Load()), + TotalDeleteRVTime: time.Duration(m.totalDeleteRVTime.Load()), + TotalWaitForRVReadyTime: time.Duration(m.totalWaitForRVReadyTime.Load()), + CreateRVErrorCount: m.createRVErrorCount.Load(), + } +} + +// AddCheckerStats registers stats from a VolumeChecker +func (m *MultiVolume) AddCheckerStats(stats *CheckerStats) { + m.checkerStatsMu.Lock() + defer m.checkerStatsMu.Unlock() + m.checkerStats = append(m.checkerStats, stats) +} + +// GetCheckerStats returns all collected checker stats +func (m *MultiVolume) GetCheckerStats() []*CheckerStats { + m.checkerStatsMu.Lock() + defer m.checkerStatsMu.Unlock() + return m.checkerStats +} + +func (m *MultiVolume) cleanup(reason error) { + log := m.log.With("reason", reason, "func", "cleanup") + log.Info("started") + defer log.Info("finished") + + for m.runningVolumes.Load() > 0 { + log.Info("waiting for volumes to stop", "remaining", m.runningVolumes.Load()) + time.Sleep(1 * time.Second) + } +} + +func (m *MultiVolume) startVolumeMain(ctx context.Context, rvName string, storageClass string, volumeLifetime time.Duration) { + cfg := config.VolumeMainConfig{ + StorageClassName: storageClass, + VolumeLifetime: volumeLifetime, + InitialSize: resource.MustParse("100Mi"), + DisableVolumeResizer: m.cfg.DisableVolumeResizer, + DisableVolumeReplicaDestroyer: m.cfg.DisableVolumeReplicaDestroyer, + DisableVolumeReplicaCreator: m.cfg.DisableVolumeReplicaCreator, + } + volumeMain := NewVolumeMain( + rvName, cfg, m.client, + &m.createdRVCount, &m.totalCreateRVTime, &m.totalDeleteRVTime, &m.totalWaitForRVReadyTime, + &m.createRVErrorCount, + m.AddCheckerStats, m.forceCleanupChan, + ) + + volumeCtx, cancel := context.WithCancel(ctx) + + go func() { + m.runningVolumes.Add(1) + defer func() { + cancel() + m.runningVolumes.Add(-1) + }() + + _ = volumeMain.Run(volumeCtx) + }() +} + +func (m *MultiVolume) startPodDestroyers(ctx context.Context) { + // Create agent pod-destroyer config + agentCfg := config.PodDestroyerConfig{ + Namespace: podDestroyerAgentNamespace, + LabelSelector: podDestroyerAgentLabelSelector, + PodCount: config.StepMinMax{Min: podDestroyerAgentPodCountMinMax[0], Max: podDestroyerAgentPodCountMinMax[1]}, + Period: config.DurationMinMax{Min: time.Duration(podDestroyerAgentPeriodMinMax[0]) * time.Second, Max: time.Duration(podDestroyerAgentPeriodMinMax[1]) * time.Second}, + } + + // Start agent destroyer + go func() { + _ = NewPodDestroyer(agentCfg, m.client, podDestroyerAgentPodCountMinMax, podDestroyerAgentPeriodMinMax).Run(ctx) + }() + + // Create controller pod-destroyer config + controllerCfg := config.PodDestroyerConfig{ + Namespace: podDestroyerControllerNamespace, + LabelSelector: podDestroyerControllerLabelSelector, + PodCount: config.StepMinMax{Min: podDestroyerControllerPodCountMinMax[0], Max: podDestroyerControllerPodCountMinMax[1]}, + Period: config.DurationMinMax{Min: 
time.Duration(podDestroyerControllerPeriodMinMax[0]) * time.Second, Max: time.Duration(podDestroyerControllerPeriodMinMax[1]) * time.Second}, + } + + // Start controller destroyer + go func() { + _ = NewPodDestroyer(controllerCfg, m.client, podDestroyerControllerPodCountMinMax, podDestroyerControllerPeriodMinMax).Run(ctx) + }() +} diff --git a/images/megatest/internal/runners/pod_destroyer.go b/images/megatest/internal/runners/pod_destroyer.go new file mode 100644 index 000000000..846fb41d6 --- /dev/null +++ b/images/megatest/internal/runners/pod_destroyer.go @@ -0,0 +1,123 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package runners + +import ( + "context" + "log/slog" + "math/rand" + + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +// PodDestroyer periodically deletes random control-plane pods by label selector +// It does NOT wait for deletion to succeed +type PodDestroyer struct { + cfg config.PodDestroyerConfig + client *kubeutils.Client + log *slog.Logger +} + +// NewPodDestroyer creates a new PodDestroyer +func NewPodDestroyer( + cfg config.PodDestroyerConfig, + client *kubeutils.Client, + podCountMinMax []int, + periodMinMax []int, +) *PodDestroyer { + return &PodDestroyer{ + cfg: cfg, + client: client, + log: slog.Default().With( + "runner", "pod-destroyer", + "namespace", cfg.Namespace, + "label_selector", cfg.LabelSelector, + "pod_count_min_max", podCountMinMax, + "period_min_max", periodMinMax, + ), + } +} + +// Run starts the destroy cycle until context is cancelled +func (p *PodDestroyer) Run(ctx context.Context) error { + p.log.Info("started") + defer p.log.Info("finished") + + for { + // Wait random duration before delete + if err := waitRandomWithContext(ctx, p.cfg.Period); err != nil { + return err + } + + // Perform delete + if err := p.doDestroy(ctx); err != nil { + p.log.Error("destroy failed", "error", err) + } + } +} + +func (p *PodDestroyer) doDestroy(ctx context.Context) error { + // Get list of pods + pods, err := p.client.ListPods(ctx, p.cfg.Namespace, p.cfg.LabelSelector) + if err != nil { + return err + } + + if len(pods) == 0 { + p.log.Debug("no pods found to delete") + return nil + } + + // Shuffle the list + //nolint:gosec // G404: math/rand is fine for non-security-critical random selection + rand.Shuffle(len(pods), func(i, j int) { + pods[i], pods[j] = pods[j], pods[i] + }) + + // Determine how many to delete + toDelete := randomInt(p.cfg.PodCount.Min, p.cfg.PodCount.Max) + if toDelete > len(pods) { + toDelete = len(pods) + } + + p.log.Debug("deleting pods", "total_pods", len(pods), "to_delete", toDelete) + + // Delete pods + deleted := 0 + for i := 0; i < len(pods) && deleted < toDelete; i++ { + pod := &pods[i] + + p.log.Info("pod delete initiated", + "pod_name", pod.Name, + "namespace", pod.Namespace, + "action", "delete", + ) + + if err := p.client.DeletePod(ctx, pod); err != nil { + p.log.Error("failed to delete pod", 
+ "pod_name", pod.Name, + "namespace", pod.Namespace, + "error", err, + ) + // Continue with other pods even on failure + } + deleted++ + } + + return nil +} diff --git a/images/megatest/internal/runners/volume_attacher.go b/images/megatest/internal/runners/volume_attacher.go new file mode 100644 index 000000000..08e51f25f --- /dev/null +++ b/images/megatest/internal/runners/volume_attacher.go @@ -0,0 +1,336 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package runners + +import ( + "context" + "fmt" + "log/slog" + "math/rand" + "slices" + "time" + + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +const ( + // attachCycleProbability is the probability of a attach cycle (vs detach) + attachCycleProbability = 0.10 +) + +// VolumeAttacher periodically attaches and detaches a volume to random nodes +type VolumeAttacher struct { + rvName string + cfg config.VolumeAttacherConfig + client *kubeutils.Client + log *slog.Logger + forceCleanupChan <-chan struct{} +} + +// NewVolumeAttacher creates a new VolumeAttacher +func NewVolumeAttacher(rvName string, cfg config.VolumeAttacherConfig, client *kubeutils.Client, periodrMinMax []int, forceCleanupChan <-chan struct{}) *VolumeAttacher { + return &VolumeAttacher{ + rvName: rvName, + cfg: cfg, + client: client, + log: slog.Default().With("runner", "volume-attacher", "rv_name", rvName, "period_min_max", periodrMinMax), + forceCleanupChan: forceCleanupChan, + } +} + +// Run starts the attach/detach cycle until context is cancelled +func (v *VolumeAttacher) Run(ctx context.Context) error { + v.log.Info("started") + defer v.log.Info("finished") + + // Helper function to check context and cleanup before return + checkAndCleanup := func(err error) error { + if ctx.Err() != nil { + v.cleanup(ctx, ctx.Err()) + } + return err + } + + for { + if err := waitRandomWithContext(ctx, v.cfg.Period); err != nil { + return checkAndCleanup(nil) + } + + // Determine current desired attachments from RVA set (max 2 active attachments supported). + rvas, err := v.client.ListRVAsByRVName(ctx, v.rvName) + if err != nil { + v.log.Error("failed to list RVAs", "error", err) + return checkAndCleanup(err) + } + desiredNodes := make([]string, 0, len(rvas)) + for _, rva := range rvas { + if rva.Spec.NodeName == "" { + continue + } + desiredNodes = append(desiredNodes, rva.Spec.NodeName) + } + + // get a random node + nodes, err := v.client.GetRandomNodes(ctx, 1) + if err != nil { + v.log.Error("failed to get random node", "error", err) + return checkAndCleanup(err) + } + nodeName := nodes[0].Name + log := v.log.With("node_name", nodeName) + + // TODO: maybe it's necessary to collect time statistics by cycles? 
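+ // At this point desiredNodes holds the nodes that currently have an RVA for this RV:
+ //   0 attachments -> usually attach+detach on the random node, occasionally attach-only
+ //   1 attachment  -> detach if the random node is the attached one, otherwise migrate to it
+ //   2 attachments -> detach one of them (the random node if attached, else the first)
+ // Anything else is unexpected and stops the runner with an error.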
+ switch len(desiredNodes) { + case 0: + if v.isAttachCycle() { + if err := v.attachCycle(ctx, nodeName); err != nil { + log.Error("failed to attachCycle", "error", err, "case", 0) + return checkAndCleanup(err) + } + } else { + if err := v.attachAndDetachCycle(ctx, nodeName); err != nil { + log.Error("failed to attachAndDetachCycle", "error", err, "case", 0) + return checkAndCleanup(err) + } + } + case 1: + otherNodeName := desiredNodes[0] + if otherNodeName == nodeName { + if err := v.detachCycle(ctx, nodeName); err != nil { + log.Error("failed to detachCycle", "error", err, "case", 1) + return checkAndCleanup(err) + } + } else { + if err := v.migrationCycle(ctx, otherNodeName, nodeName); err != nil { + log.Error("failed to migrationCycle", "error", err, "case", 1) + return checkAndCleanup(err) + } + } + case 2: + if !slices.Contains(desiredNodes, nodeName) { + nodeName = desiredNodes[0] + } + if err := v.detachCycle(ctx, nodeName); err != nil { + log.Error("failed to detachCycle", "error", err, "case", 2) + return checkAndCleanup(err) + } + default: + err := fmt.Errorf("unexpected number of active attachments (RVA): %d", len(desiredNodes)) + log.Error("error", "error", err) + return checkAndCleanup(err) + } + } +} + +func (v *VolumeAttacher) cleanup(ctx context.Context, reason error) { + log := v.log.With("reason", reason, "func", "cleanup") + log.Info("started") + defer log.Info("finished") + + cleanupCtx, cleanupCancel := context.WithTimeout(context.Background(), CleanupTimeout) + defer cleanupCancel() + + // If context was cancelled, listen for second signal to force cleanup cancellation. + // First signal already cancelled the main context (stopped volume operations). + // Second signal will close forceCleanupChan, and all cleanup handlers will receive + // notification simultaneously (broadcast mechanism via channel closure). 
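+ // For reference, the producing side is expected to close forceCleanupChan on the
+ // second signal. A minimal sketch of such a signal handler (hypothetical, not part
+ // of this file; names are assumptions):
+ //
+ //	sig := make(chan os.Signal, 1)
+ //	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
+ //	<-sig               // first signal: cancel the root context
+ //	cancelRoot()
+ //	<-sig               // second signal: broadcast forced cleanup to every runner
+ //	close(forceCleanup)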
+ if ctx.Err() != nil && v.forceCleanupChan != nil { + log.Info("cleanup can be interrupted by second signal") + go func() { + select { + case <-v.forceCleanupChan: // All handlers receive this simultaneously when channel is closed + log.Info("received second signal, forcing cleanup cancellation") + cleanupCancel() + case <-cleanupCtx.Done(): + // Cleanup already completed or was cancelled + } + }() + } + + if err := v.detachCycle(cleanupCtx, ""); err != nil { + v.log.Error("failed to detachCycle", "error", err) + } +} + +func (v *VolumeAttacher) attachCycle(ctx context.Context, nodeName string) error { + log := v.log.With("node_name", nodeName, "func", "attachCycle") + log.Debug("started") + defer log.Debug("finished") + + if err := v.doAttach(ctx, nodeName); err != nil { + log.Error("failed to doAttach", "error", err) + return err + } + return nil +} + +func (v *VolumeAttacher) attachAndDetachCycle(ctx context.Context, nodeName string) error { + log := v.log.With("node_name", nodeName, "func", "attachAndDetachCycle") + log.Debug("started") + defer log.Debug("finished") + + // Step 1: Attach the node and wait for it to be attached + if err := v.attachCycle(ctx, nodeName); err != nil { + return err + } + + // Step 2: Random delay between attach and detach + randomDelay := randomDuration(v.cfg.Period) + log.Debug("waiting random delay before detach", "duration", randomDelay.String()) + if err := waitWithContext(ctx, randomDelay); err != nil { + return err + } + + // Step 3: Get fresh RV and detach + return v.detachCycle(ctx, nodeName) +} + +func (v *VolumeAttacher) migrationCycle(ctx context.Context, otherNodeName, nodeName string) error { + log := v.log.With("node_name", nodeName, "func", "migrationCycle") + log.Debug("started") + defer log.Debug("finished") + + if otherNodeName == nodeName { + return fmt.Errorf("other node name equals selected node name: %s", nodeName) + } + + // Step 1: Attach the selected node and wait for it + if err := v.attachCycle(ctx, nodeName); err != nil { + return err + } + + // Verify both nodes are now attached + for { + log.Debug("waiting for both nodes to be attached", "selected_node", nodeName, "other_node", otherNodeName) + + rv, err := v.client.GetRV(ctx, v.rvName) + if err != nil { + return err + } + + if len(rv.Status.ActuallyAttachedTo) == 2 { + break + } + + if err := waitWithContext(ctx, 1*time.Second); err != nil { + return err + } + } + + // Step 2: Random delay + randomDelay1 := randomDuration(v.cfg.Period) + log.Debug("waiting random delay before detaching other node", "duration", randomDelay1.String()) + if err := waitWithContext(ctx, randomDelay1); err != nil { + return err + } + + // Step 3: Get fresh RV and detach the other node + if err := v.detachCycle(ctx, otherNodeName); err != nil { + return err + } + + // Step 4: Random delay + randomDelay2 := randomDuration(v.cfg.Period) + log.Debug("waiting random delay before detaching selected node", "duration", randomDelay2.String()) + if err := waitWithContext(ctx, randomDelay2); err != nil { + return err + } + + // Step 5: Get fresh RV and detach the selected node + return v.detachCycle(ctx, nodeName) +} + +func (v *VolumeAttacher) doAttach(ctx context.Context, nodeName string) error { + if _, err := v.client.EnsureRVA(ctx, v.rvName, nodeName); err != nil { + return fmt.Errorf("failed to create RVA: %w", err) + } + if err := v.client.WaitForRVAReady(ctx, v.rvName, nodeName); err != nil { + return fmt.Errorf("failed to wait for RVA Ready: %w", err) + } + return nil +} + +func (v *VolumeAttacher) 
detachCycle(ctx context.Context, nodeName string) error { + log := v.log.With("node_name", nodeName, "func", "detachCycle") + log.Debug("started") + defer log.Debug("finished") + + if err := v.doUnattach(ctx, nodeName); err != nil { + log.Error("failed to doUnattach", "error", err) + return err + } + + // Wait for node(s) to be detached + for { + if nodeName == "" { + log.Debug("waiting for all nodes to be detached") + } else { + log.Debug("waiting for node to be detached") + } + + rv, err := v.client.GetRV(ctx, v.rvName) + if err != nil { + return err + } + + if nodeName == "" { + // Check if all nodes are detached + if len(rv.Status.ActuallyAttachedTo) == 0 { + return nil + } + } else { + // Check if specific node is detached + if !slices.Contains(rv.Status.ActuallyAttachedTo, nodeName) { + return nil + } + } + + if err := waitWithContext(ctx, 1*time.Second); err != nil { + return err + } + } +} + +func (v *VolumeAttacher) doUnattach(ctx context.Context, nodeName string) error { + if nodeName == "" { + // Detach from all nodes - delete all RVAs for this RV. + rvas, err := v.client.ListRVAsByRVName(ctx, v.rvName) + if err != nil { + return err + } + for _, rva := range rvas { + if rva.Spec.NodeName == "" { + continue + } + _ = v.client.DeleteRVA(ctx, v.rvName, rva.Spec.NodeName) + } + return nil + } + + // Detach from a specific node + if err := v.client.DeleteRVA(ctx, v.rvName, nodeName); err != nil { + return err + } + return nil +} + +func (v *VolumeAttacher) isAttachCycle() bool { + //nolint:gosec // G404: math/rand is fine for non-security-critical random selection + r := rand.Float64() + return r < attachCycleProbability +} diff --git a/images/megatest/internal/runners/volume_checker.go b/images/megatest/internal/runners/volume_checker.go new file mode 100644 index 000000000..913f8d424 --- /dev/null +++ b/images/megatest/internal/runners/volume_checker.go @@ -0,0 +1,359 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package runners + +import ( + "context" + "log/slog" + "strings" + "sync/atomic" + "time" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +const ( + // apiCallTimeout is the timeout for individual API calls to avoid hanging + apiCallTimeout = 10 * time.Second + + // updateChBufferSize is the buffer size for RV update channel. + // Provides headroom for burst updates while checker processes events. + updateChBufferSize = 10 +) + +// CheckerStats holds statistics about condition transitions for a ReplicatedVolume. +// Even number of transitions means RV maintains desired state despite disruption attempts. +// Odd number means disruption attempts succeeded. +// Ideal: all counters at zero. 
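+// For example, IOReadyTransitions == 2 typically means IOReady went True -> False -> True
+// (the volume degraded under disruption but recovered), while an odd value means the
+// condition was still not True when the checker stopped.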
+type CheckerStats struct { + RVName string + IOReadyTransitions atomic.Int64 + QuorumTransitions atomic.Int64 +} + +// conditionState tracks the current state of monitored conditions +type conditionState struct { + ioReadyStatus metav1.ConditionStatus + quorumStatus metav1.ConditionStatus +} + +// VolumeChecker watches a ReplicatedVolume and logs state changes. +// It monitors IOReady and Quorum conditions and counts transitions. +// +// Uses shared informer with dispatcher pattern: +// - One informer handler for all checkers (not N handlers for N checkers) +// - Events routed via map lookup O(1) instead of N filter calls +// - Efficient for 100+ concurrent RV watchers +// - Automatic reconnection on API disconnects via informer +// +// If registration fails, it retries until RV lifetime expires. +type VolumeChecker struct { + rvName string + client *kubeutils.Client + log *slog.Logger + stats *CheckerStats + state conditionState + + // Channel for receiving RV updates (dispatcher sends here) + updateCh chan *v1alpha1.ReplicatedVolume +} + +// NewVolumeChecker creates a new VolumeChecker for the given RV +func NewVolumeChecker(rvName string, client *kubeutils.Client, stats *CheckerStats) *VolumeChecker { + return &VolumeChecker{ + rvName: rvName, + client: client, + log: slog.Default().With("runner", "volume-checker", "rv_name", rvName), + stats: stats, + state: conditionState{ + // Initial expected state: both conditions should be True + ioReadyStatus: metav1.ConditionTrue, + quorumStatus: metav1.ConditionTrue, + }, + updateCh: make(chan *v1alpha1.ReplicatedVolume, updateChBufferSize), + } +} + +// Run starts watching the RV until context is cancelled. +func (v *VolumeChecker) Run(ctx context.Context) error { + v.log.Info("started") + defer v.log.Info("finished") + + // Registration always succeeds if app started (informer is ready after NewClient) + v.register() + defer v.unregister() + + // Check initial state + v.checkInitialState(ctx) + + v.log.Debug("watching via shared informer dispatcher") + + // Process events from dispatcher + for { + select { + case <-ctx.Done(): + return nil + case rv := <-v.updateCh: + v.processRVUpdate(ctx, rv) + } + } +} + +// register adds this checker to the dispatcher. +// Dispatcher will route RV events matching our name to updateCh. +func (v *VolumeChecker) register() { + // Error only possible if informer not ready, but it's always ready after NewClient() + _ = v.client.RegisterRVChecker(v.rvName, v.updateCh) +} + +// unregister removes this checker from the dispatcher. +func (v *VolumeChecker) unregister() { + v.client.UnregisterRVChecker(v.rvName) +} + +// checkInitialState checks current RV state and counts transition if not in expected state. +// Uses processRVUpdate to detect changes from initial True state. +func (v *VolumeChecker) checkInitialState(ctx context.Context) { + if ctx.Err() != nil { + return + } + + // Try to get from cache first, fall back to API with timeout + rv, err := v.client.GetRVFromCache(v.rvName) + if err != nil { + v.log.Debug("not in cache, fetching from API") + + callCtx, cancel := context.WithTimeout(ctx, apiCallTimeout) + defer cancel() + + rv, err = v.client.GetRV(callCtx, v.rvName) + if err != nil { + if ctx.Err() != nil { + return // Context cancelled, normal shutdown + } + v.log.Error("failed to get from API", "error", err) + return + } + } + + // Reuse processRVUpdate - it will detect and log changes from initial True state + // (v.state is initialized as {True, True}). If state is OK, nothing is logged. 
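+ // For example, if the RV is still provisioning and IOReady is currently False, this
+ // first call records a True->False transition; the later recovery to True makes the
+ // counter even again.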
+ v.processRVUpdate(ctx, rv) +} + +// processRVUpdate checks for condition changes and logs them +func (v *VolumeChecker) processRVUpdate(ctx context.Context, rv *v1alpha1.ReplicatedVolume) { + if rv == nil { + v.log.Debug("RV is nil, skipping condition check") + return + } + + // Status is a struct in the API, but Conditions can be empty (e.g. just created / during deletion). + if len(rv.Status.Conditions) == 0 { + v.log.Debug("RV has no conditions yet, skipping condition check") + return + } + + newIOReadyStatus := getConditionStatus(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) + newQuorumStatus := getConditionStatus(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondQuorumType) + + // Check IOReady transition. + // v.state stores previous status (default: True = expected healthy state). + // If new status differs from saved → log + count transition + update saved state. + if newIOReadyStatus != v.state.ioReadyStatus { + oldStatus := v.state.ioReadyStatus // Save old for logging + v.stats.IOReadyTransitions.Add(1) // Count transition for final stats + v.state.ioReadyStatus = newIOReadyStatus // Update saved state + + v.log.Warn("condition changed", + "condition", v1alpha1.ReplicatedVolumeCondIOReadyType, + "transition", string(oldStatus)+"->"+string(newIOReadyStatus)) + + // On False: log failed RVRs for debugging + if newIOReadyStatus == metav1.ConditionFalse { + reason := getConditionReason(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) + message := getConditionMessage(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondIOReadyType) + v.logConditionDetails(ctx, v1alpha1.ReplicatedVolumeCondIOReadyType, reason, message) + } // FYI: we can make here else block, if we need some details then conditions going from Fase to True + } + + // Check Quorum transition (same logic as IOReady). + if newQuorumStatus != v.state.quorumStatus { + oldStatus := v.state.quorumStatus // Save old for logging + v.stats.QuorumTransitions.Add(1) // Count transition for final stats + v.state.quorumStatus = newQuorumStatus // Update saved state + + v.log.Warn("condition changed", + "condition", v1alpha1.ReplicatedVolumeCondQuorumType, + "transition", string(oldStatus)+"->"+string(newQuorumStatus)) + + // Log RVRs only if IOReady didn't just log them (avoid duplicate output) + if newQuorumStatus == metav1.ConditionFalse && v.state.ioReadyStatus != metav1.ConditionFalse { + reason := getConditionReason(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondQuorumType) + message := getConditionMessage(rv.Status.Conditions, v1alpha1.ReplicatedVolumeCondQuorumType) + v.logConditionDetails(ctx, v1alpha1.ReplicatedVolumeCondQuorumType, reason, message) + } // FYI: we can make here else block, if we need some details then conditions going from Fase to True + } +} + +// logConditionDetails logs condition details with failed RVRs listing. +// Uses structured logging with rv_name from logger context. +// RVR table is included in "failed_rvrs_details" field when there are failures. 
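+// The RVR lookup is a cluster-scoped List filtered client-side (see ListRVRsByRVName)
+// and is bounded by apiCallTimeout, so a slow or unreachable API server cannot stall
+// the checker.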
+func (v *VolumeChecker) logConditionDetails(ctx context.Context, condType, reason, message string) { + // Check if context is already done - skip RVR listing + if ctx.Err() != nil { + v.log.Warn("condition details (context cancelled, skipped RVR listing)", + "condition", condType, + "reason", reason, + "message", message) + return + } + + // Use timeout for API call + callCtx, cancel := context.WithTimeout(ctx, apiCallTimeout) + defer cancel() + + rvrs, err := v.client.ListRVRsByRVName(callCtx, v.rvName) + if err != nil { + v.log.Warn("condition details", + "condition", condType, + "reason", reason, + "message", message, + "list_rvrs_error", err.Error()) + return + } + + // Find failed RVRs (those with at least one False condition) + var failedRVRs []v1alpha1.ReplicatedVolumeReplica + for _, rvr := range rvrs { + if hasAnyFalseCondition(rvr.Status) { + failedRVRs = append(failedRVRs, rvr) + } + } + + if len(failedRVRs) == 0 { + v.log.Warn("condition details", + "condition", condType, + "reason", reason, + "message", message, + "failed_rvrs", 0) + return + } + + // Build RVR details table + var sb strings.Builder + for _, rvr := range failedRVRs { + sb.WriteString(buildRVRConditionsTable(&rvr)) + } + + v.log.Warn("condition details", + "condition", condType, + "reason", reason, + "message", message, + "failed_rvrs", len(failedRVRs), + "failed_rvrs_details", "\n"+sb.String()) +} + +// hasAnyFalseCondition checks if RVR has at least one condition with False status +func hasAnyFalseCondition(status v1alpha1.ReplicatedVolumeReplicaStatus) bool { + for _, cond := range status.Conditions { + if cond.Status == metav1.ConditionFalse { + return true + } + } + return false +} + +// buildRVRConditionsTable builds a formatted table of all conditions for an RVR. +// Format: +// +// RVR: (node: , type: ) +// - : | | +// +// Example: +// +// RVR: test-rv-1-abc (node: worker-1, type: Diskful) +// - Ready: False | StoragePoolUnavailable | Pool xyz not found +// - Synchronized: True | InSync +func buildRVRConditionsTable(rvr *v1alpha1.ReplicatedVolumeReplica) string { + var sb strings.Builder + sb.WriteString(" RVR: ") + sb.WriteString(rvr.Name) + sb.WriteString(" (node: ") + sb.WriteString(rvr.Spec.NodeName) + sb.WriteString(", type: ") + sb.WriteString(string(rvr.Spec.Type)) + sb.WriteString(")\n") + + if len(rvr.Status.Conditions) == 0 { + sb.WriteString(" (no status conditions available)\n") + return sb.String() + } + + for _, cond := range rvr.Status.Conditions { + sb.WriteString(" - ") + sb.WriteString(cond.Type) + sb.WriteString(": ") + sb.WriteString(string(cond.Status)) + sb.WriteString(" | ") + sb.WriteString(cond.Reason) + if cond.Message != "" { + sb.WriteString(" | ") + // Truncate message if too long + msg := cond.Message + if len(msg) > 60 { + msg = msg[:57] + "..." 
+ } + sb.WriteString(msg) + } + sb.WriteString("\n") + } + + return sb.String() +} + +// Helper functions to extract condition fields + +func getConditionStatus(conditions []metav1.Condition, condType string) metav1.ConditionStatus { + for _, cond := range conditions { + if cond.Type == condType { + return cond.Status + } + } + return metav1.ConditionUnknown +} + +func getConditionReason(conditions []metav1.Condition, condType string) string { + for _, cond := range conditions { + if cond.Type == condType { + return cond.Reason + } + } + return "" +} + +func getConditionMessage(conditions []metav1.Condition, condType string) string { + for _, cond := range conditions { + if cond.Type == condType { + return cond.Message + } + } + return "" +} diff --git a/images/megatest/internal/runners/volume_main.go b/images/megatest/internal/runners/volume_main.go new file mode 100644 index 000000000..cce281dae --- /dev/null +++ b/images/megatest/internal/runners/volume_main.go @@ -0,0 +1,524 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package runners + +import ( + "context" + "log/slog" + "math/rand" + "sync/atomic" + "time" + + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +var ( + attacherPeriodMinMax = []int{30, 60} + replicaDestroyerPeriodMinMax = []int{30, 300} + replicaCreatorPeriodMinMax = []int{30, 300} + + volumeResizerPeriodMinMax = []int{50, 50} + volumeResizerStepMinMax = []string{"51Mi", "101Mi"} +) + +// VolumeMain manages the lifecycle of a single ReplicatedVolume and its sub-runners +type VolumeMain struct { + rvName string + storageClass string + volumeLifetime time.Duration + initialSize resource.Quantity + client *kubeutils.Client + log *slog.Logger + + // Disable flags for sub-runners + disableVolumeResizer bool + disableVolumeReplicaDestroyer bool + disableVolumeReplicaCreator bool + + // Tracking running volumes + runningSubRunners atomic.Int32 + checkerStarted atomic.Bool + + // Statistics + createdRVCount *atomic.Int64 + totalCreateRVTime *atomic.Int64 // nanoseconds + totalDeleteRVTime *atomic.Int64 // nanoseconds + totalWaitForRVReadyTime *atomic.Int64 // nanoseconds + createRVErrorCount *atomic.Int64 + + // Callback to register checker stats in MultiVolume + registerCheckerStats func(*CheckerStats) + + // Channel to receive broadcast notification when second Ctrl+C is pressed + // When closed, all cleanup handlers will receive notification simultaneously + forceCleanupChan <-chan struct{} +} + +// NewVolumeMain creates a new VolumeMain +func NewVolumeMain( + rvName string, + cfg config.VolumeMainConfig, + client *kubeutils.Client, + createdRVCount *atomic.Int64, + totalCreateRVTime *atomic.Int64, + totalDeleteRVTime *atomic.Int64, + 
totalWaitForRVReadyTime *atomic.Int64,
+	createRVErrorCount *atomic.Int64,
+	registerCheckerStats func(*CheckerStats),
+	forceCleanupChan <-chan struct{},
+) *VolumeMain {
+	return &VolumeMain{
+		rvName:                        rvName,
+		storageClass:                  cfg.StorageClassName,
+		volumeLifetime:                cfg.VolumeLifetime,
+		initialSize:                   cfg.InitialSize,
+		client:                        client,
+		log:                           slog.Default().With("runner", "volume-main", "rv_name", rvName, "storage_class", cfg.StorageClassName, "volume_lifetime", cfg.VolumeLifetime),
+		disableVolumeResizer:          cfg.DisableVolumeResizer,
+		disableVolumeReplicaDestroyer: cfg.DisableVolumeReplicaDestroyer,
+		disableVolumeReplicaCreator:   cfg.DisableVolumeReplicaCreator,
+		createdRVCount:                createdRVCount,
+		totalCreateRVTime:             totalCreateRVTime,
+		totalDeleteRVTime:             totalDeleteRVTime,
+		totalWaitForRVReadyTime:       totalWaitForRVReadyTime,
+		createRVErrorCount:            createRVErrorCount,
+		registerCheckerStats:          registerCheckerStats,
+		forceCleanupChan:              forceCleanupChan,
+	}
+}
+
+// Run executes the full lifecycle of a volume
+func (v *VolumeMain) Run(ctx context.Context) error {
+	v.log.Info("started")
+	defer v.log.Info("finished")
+
+	// Create lifetime context
+	lifetimeCtx, lifetimeCancel := context.WithTimeout(ctx, v.volumeLifetime)
+	defer lifetimeCancel()
+
+	// Determine initial attach nodes (random distribution: 0=30%, 1=60%, 2=10%)
+	numberOfPublishNodes := v.getRandomNumberForNodes()
+	attachNodes, err := v.getPublishNodes(ctx, numberOfPublishNodes)
+	if err != nil {
+		v.log.Error("failed to get publish nodes", "error", err)
+		return err
+	}
+	v.log.Debug("attach nodes", "nodes", attachNodes)
+
+	// Create RV and RVAs
+	// We wait for each RVA to become ready, so this may take a long time.
+	createDuration, err := measureDurationError(func() error {
+		return v.createRV(ctx, attachNodes)
+	})
+	if err != nil {
+		v.log.Error("failed to create RV and RVAs", "error", err)
+		if v.createRVErrorCount != nil {
+			v.createRVErrorCount.Add(1)
+		}
+		v.cleanup(ctx, lifetimeCtx, v.forceCleanupChan)
+		return nil
+	}
+	if v.totalCreateRVTime != nil {
+		v.totalCreateRVTime.Add(createDuration.Nanoseconds())
+	}
+
+	// Start all sub-runners immediately after RV creation
+	// They will operate while we wait for Ready
+	v.startSubRunners(lifetimeCtx)
+
+	// Wait for RV to become ready
+	waitDuration, err := measureDurationError(func() error {
+		return v.waitForRVReady(lifetimeCtx)
+	})
+	if err != nil {
+		v.log.Error("failed waiting for RV to become ready", "error", err)
+		// Continue to cleanup
+	} else {
+		// Start checker after Ready (to monitor for state changes)
+		v.log.Debug("RV is ready, starting checker")
+		v.startVolumeChecker(lifetimeCtx)
+	}
+	if v.totalWaitForRVReadyTime != nil {
+		v.totalWaitForRVReadyTime.Add(waitDuration.Nanoseconds())
+	}
+
+	// Wait for lifetime to expire or context to be cancelled
+	<-lifetimeCtx.Done()
+
+	// Cleanup sequence
+	v.cleanup(ctx, lifetimeCtx, v.forceCleanupChan)
+
+	return nil
+}
+
+func (v *VolumeMain) cleanup(ctx context.Context, lifetimeCtx context.Context, forceCleanupChan <-chan struct{}) {
+	reason := ctx.Err()
+	if reason == nil {
+		reason = lifetimeCtx.Err()
+	}
+	log := v.log.With("reason", reason, "func", "cleanup")
+	log.Info("started")
+	defer log.Info("finished")
+
+	cleanupCtx, cleanupCancel := context.WithTimeout(context.Background(), CleanupTimeout)
+	defer cleanupCancel()
+
+	// If the context was cancelled, listen for a second signal to force cleanup cancellation.
+	// The first signal already cancelled the main context (stopped volume creation).
+	// The second signal will close forceCleanupChan, and all cleanup handlers will receive
+	// the notification simultaneously (broadcast mechanism via channel closure).
+	if ctx.Err() != nil && forceCleanupChan != nil {
+		log.Info("cleanup can be interrupted by second signal")
+		go func() {
+			select {
+			case <-forceCleanupChan: // All handlers receive this simultaneously when the channel is closed
+				log.Info("received second signal, forcing cleanup cancellation")
+				cleanupCancel()
+			case <-cleanupCtx.Done():
+				// Cleanup already completed or was cancelled
+			}
+		}()
+	}
+
+	// Wait for ALL sub-runners to stop (including VolumeChecker)
+waitLoop:
+	for v.runningSubRunners.Load() > 0 {
+		select {
+		case <-cleanupCtx.Done():
+			log.Info("cleanup interrupted, skipping sub-runners wait", "remaining", v.runningSubRunners.Load())
+			break waitLoop
+		default:
+		}
+		log.Debug("waiting for sub-runners to stop", "remaining", v.runningSubRunners.Load())
+		time.Sleep(500 * time.Millisecond)
+	}
+
+	// Start the volume-checker if it wasn't started earlier, to capture the final RV state before deletion
+	if !v.checkerStarted.Load() {
+		log.Debug("checker was not started earlier, starting it now to capture final state")
+		v.startVolumeCheckerForFinalState(cleanupCtx, log)
+	}
+
+	deleteDuration, err := measureDurationError(func() error {
+		return v.deleteRVAndWait(cleanupCtx, log)
+	})
+	if err != nil {
+		v.log.Error("failed to delete RV", "error", err)
+	}
+	if v.totalDeleteRVTime != nil {
+		v.totalDeleteRVTime.Add(deleteDuration.Nanoseconds())
+	}
+}
+
+func (v *VolumeMain) getRandomNumberForNodes() int {
+	// 0 nodes = 30%, 1 node = 60%, 2 nodes = 10%
+	//nolint:gosec // G404: math/rand is fine for non-security-critical random selection
+	r := rand.Float64()
+	switch {
+	case r < 0.30:
+		return 0
+	case r < 0.90:
+		return 1
+	default:
+		return 2
+	}
+}
+
+func (v *VolumeMain) getPublishNodes(ctx context.Context, count int) ([]string, error) {
+	if count == 0 {
+		return nil, nil
+	}
+
+	nodes, err := v.client.GetRandomNodes(ctx, count)
+	if err != nil {
+		return nil, err
+	}
+
+	names := make([]string, len(nodes))
+	for i, node := range nodes {
+		names[i] = node.Name
+	}
+	return names, nil
+}
+
+// createRV creates a ReplicatedVolume and RVAs for the given nodes.
+func (v *VolumeMain) createRV(ctx context.Context, attachNodes []string) error {
+	rv := &v1alpha1.ReplicatedVolume{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: v.rvName,
+		},
+		Spec: v1alpha1.ReplicatedVolumeSpec{
+			Size:                       v.initialSize,
+			ReplicatedStorageClassName: v.storageClass,
+		},
+	}
+
+	err := v.client.CreateRV(ctx, rv)
+	if err != nil {
+		return err
+	}
+
+	// Increment statistics counter on successful creation
+	if v.createdRVCount != nil {
+		v.createdRVCount.Add(1)
+	}
+
+	// Create initial attachment intents via RVA (if requested).
+	for _, nodeName := range attachNodes {
+		if nodeName == "" {
+			continue
+		}
+		if _, err := v.client.EnsureRVA(ctx, v.rvName, nodeName); err != nil {
+			return err
+		}
+		if err := v.client.WaitForRVAReady(ctx, v.rvName, nodeName); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func (v *VolumeMain) deleteRVAndWait(ctx context.Context, log *slog.Logger) error {
+	// Detach from all nodes - delete all RVAs for this RV.
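+	// Deleting an RVA removes that node's attachment intent; we do not wait for the RVAs to disappear before deleting the RV itself.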
+ rvas, err := v.client.ListRVAsByRVName(ctx, v.rvName) + if err != nil { + return err + } + for _, rva := range rvas { + if rva.Spec.NodeName == "" { + continue + } + if err := v.client.DeleteRVA(ctx, v.rvName, rva.Spec.NodeName); err != nil { + return err + } + } + + rv := &v1alpha1.ReplicatedVolume{ + ObjectMeta: metav1.ObjectMeta{ + Name: v.rvName, + }, + } + + err = v.client.DeleteRV(ctx, rv) + if err != nil { + return err + } + + err = v.WaitForRVDeleted(ctx, log) + if err != nil { + return err + } + + return nil +} + +func (v *VolumeMain) waitForRVReady(ctx context.Context) error { + for { + v.log.Debug("waiting for RV to become ready") + + rv, err := v.client.GetRV(ctx, v.rvName) + if err != nil { + if apierrors.IsNotFound(err) { + if err := waitWithContext(ctx, 500*time.Millisecond); err != nil { + return err + } + continue + } + return err + } + + if v.client.IsRVReady(rv) { + return nil + } + + if err := waitWithContext(ctx, 1*time.Second); err != nil { + return err + } + } +} + +func (v *VolumeMain) WaitForRVDeleted(ctx context.Context, log *slog.Logger) error { + for { + log.Debug("waiting for RV to be deleted") + + _, err := v.client.GetRV(ctx, v.rvName) + if apierrors.IsNotFound(err) { + return nil + } + if err != nil { + return err + } + + if err := waitWithContext(ctx, 1*time.Second); err != nil { + return err + } + } +} + +func (v *VolumeMain) startSubRunners(ctx context.Context) { + // Start attacher + attacherCfg := config.VolumeAttacherConfig{ + Period: config.DurationMinMax{ + Min: time.Duration(attacherPeriodMinMax[0]) * time.Second, + Max: time.Duration(attacherPeriodMinMax[1]) * time.Second, + }, + } + attacher := NewVolumeAttacher(v.rvName, attacherCfg, v.client, attacherPeriodMinMax, v.forceCleanupChan) + attacherCtx, cancel := context.WithCancel(ctx) + go func() { + v.runningSubRunners.Add(1) + defer func() { + cancel() + v.runningSubRunners.Add(-1) + }() + + _ = attacher.Run(attacherCtx) + }() + + // Start replica destroyer + if v.disableVolumeReplicaDestroyer { + v.log.Debug("volume-replica-destroyer runner is disabled") + } else { + v.log.Debug("volume-replica-destroyer runner is enabled") + replicaDestroyerCfg := config.VolumeReplicaDestroyerConfig{ + Period: config.DurationMinMax{ + Min: time.Duration(replicaDestroyerPeriodMinMax[0]) * time.Second, + Max: time.Duration(replicaDestroyerPeriodMinMax[1]) * time.Second, + }, + } + replicaDestroyer := NewVolumeReplicaDestroyer(v.rvName, replicaDestroyerCfg, v.client, replicaDestroyerPeriodMinMax) + destroyerCtx, cancel := context.WithCancel(ctx) + go func() { + v.runningSubRunners.Add(1) + defer func() { + cancel() + v.runningSubRunners.Add(-1) + }() + + _ = replicaDestroyer.Run(destroyerCtx) + }() + } + + // Start replica creator + if v.disableVolumeReplicaCreator { + v.log.Debug("volume-replica-creator runner is disabled") + } else { + v.log.Debug("volume-replica-creator runner is enabled") + replicaCreatorCfg := config.VolumeReplicaCreatorConfig{ + Period: config.DurationMinMax{ + Min: time.Duration(replicaCreatorPeriodMinMax[0]) * time.Second, + Max: time.Duration(replicaCreatorPeriodMinMax[1]) * time.Second, + }, + } + replicaCreator := NewVolumeReplicaCreator(v.rvName, replicaCreatorCfg, v.client, replicaCreatorPeriodMinMax) + creatorCtx, cancel := context.WithCancel(ctx) + go func() { + v.runningSubRunners.Add(1) + defer func() { + cancel() + v.runningSubRunners.Add(-1) + }() + + _ = replicaCreator.Run(creatorCtx) + }() + } + + // Start resizer + if v.disableVolumeResizer { + v.log.Debug("volume-resizer 
runner is disabled") + } else { + v.log.Debug("volume-resizer runner is enabled") + volumeResizerCfg := config.VolumeResizerConfig{ + Period: config.DurationMinMax{ + Min: time.Duration(volumeResizerPeriodMinMax[0]) * time.Second, + Max: time.Duration(volumeResizerPeriodMinMax[1]) * time.Second, + }, + Step: config.SizeMinMax{ + Min: resource.MustParse(volumeResizerStepMinMax[0]), + Max: resource.MustParse(volumeResizerStepMinMax[1]), + }, + } + volumeResizer := NewVolumeResizer(v.rvName, volumeResizerCfg, v.client, volumeResizerPeriodMinMax, volumeResizerStepMinMax) + resizerCtx, cancel := context.WithCancel(ctx) + go func() { + v.runningSubRunners.Add(1) + defer func() { + cancel() + v.runningSubRunners.Add(-1) + }() + + _ = volumeResizer.Run(resizerCtx) + }() + } +} + +func (v *VolumeMain) startVolumeChecker(ctx context.Context) { + // Mark checker as started + v.checkerStarted.Store(true) + + // Create stats for this checker and register in MultiVolume + stats := &CheckerStats{RVName: v.rvName} + if v.registerCheckerStats != nil { + v.registerCheckerStats(stats) + } + + volumeChecker := NewVolumeChecker(v.rvName, v.client, stats) + checkerCtx, cancel := context.WithCancel(ctx) + go func() { + v.runningSubRunners.Add(1) + defer func() { + cancel() + v.runningSubRunners.Add(-1) + }() + + _ = volumeChecker.Run(checkerCtx) + }() +} + +// startVolumeCheckerForFinalState starts a volume checker briefly to capture the final state +// of the RV before deletion. This is used when the checker wasn't started earlier (e.g., if RV +// never reached Ready state). The checker will capture the current state via checkInitialState +// and then exit. +func (v *VolumeMain) startVolumeCheckerForFinalState(ctx context.Context, log *slog.Logger) { + // Create stats for this checker and register in MultiVolume + stats := &CheckerStats{RVName: v.rvName} + if v.registerCheckerStats != nil { + v.registerCheckerStats(stats) + } + + volumeChecker := NewVolumeChecker(v.rvName, v.client, stats) + + // Create a context with timeout to allow checker to capture state and exit + // 5 seconds should be enough for checkInitialState to complete + checkerCtx, cancel := context.WithTimeout(ctx, 5*time.Second) + defer cancel() + + // Run checker synchronously - it will capture initial state and exit when context is done + // This ensures we capture the final state before deletion + if err := volumeChecker.Run(checkerCtx); err != nil { + log.Debug("checker finished with error (expected)", "error", err) + } else { + log.Debug("checker finished successfully") + } +} diff --git a/images/megatest/internal/runners/volume_replica_creator.go b/images/megatest/internal/runners/volume_replica_creator.go new file mode 100644 index 000000000..ad327c533 --- /dev/null +++ b/images/megatest/internal/runners/volume_replica_creator.go @@ -0,0 +1,134 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package runners + +import ( + "context" + "log/slog" + "math/rand" + "time" + + "github.com/google/uuid" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +// availableReplicaTypes returns the list of replica types that can be created. +// Uncomment ReplicaTypeDiskful when diskful destroyer is implemented. +func availableReplicaTypes() []string { + return []string{ + string(v1alpha1.ReplicaTypeAccess), + string(v1alpha1.ReplicaTypeTieBreaker), + // string(v1alpha1.ReplicaTypeDiskful), // TODO: uncomment when diskful destroyer is ready + } +} + +// VolumeReplicaCreator periodically creates random replicas for a volume. +// It does NOT wait for creation to succeed. +type VolumeReplicaCreator struct { + rvName string + cfg config.VolumeReplicaCreatorConfig + client *kubeutils.Client + log *slog.Logger +} + +// NewVolumeReplicaCreator creates a new VolumeReplicaCreator +func NewVolumeReplicaCreator( + rvName string, + cfg config.VolumeReplicaCreatorConfig, + client *kubeutils.Client, + periodMinMax []int, +) *VolumeReplicaCreator { + return &VolumeReplicaCreator{ + rvName: rvName, + cfg: cfg, + client: client, + log: slog.Default().With("runner", "volume-replica-creator", "rv_name", rvName, "period_min_max", periodMinMax), + } +} + +// Run starts the create cycle until context is cancelled +func (v *VolumeReplicaCreator) Run(ctx context.Context) error { + v.log.Info("started") + defer v.log.Info("finished") + + for { + // Wait random duration before create + if err := waitRandomWithContext(ctx, v.cfg.Period); err != nil { + return nil + } + + // Perform create (errors are logged, not returned) + v.doCreate(ctx) + } +} + +// selectRandomType selects a random replica type from available types +func (v *VolumeReplicaCreator) selectRandomType() string { + types := availableReplicaTypes() + //nolint:gosec // G404: math/rand is fine for non-security-critical random selection + return types[rand.Intn(len(types))] +} + +// generateRVRName generates a unique name for a new RVR +func (v *VolumeReplicaCreator) generateRVRName() string { + // Use short UUID suffix for uniqueness + shortUUID := uuid.New().String()[:8] + return v.rvName + "-mt-" + shortUUID +} + +func (v *VolumeReplicaCreator) doCreate(ctx context.Context) { + startTime := time.Now() + + // Select random type + replicaType := v.selectRandomType() + + // Generate unique name + rvrName := v.generateRVRName() + + // Create RVR object + // Note: We don't set OwnerReference here. + // The rvr_metadata_controller handles this automatically + // based on spec.replicatedVolumeName. 
+ rvr := &v1alpha1.ReplicatedVolumeReplica{ + ObjectMeta: metav1.ObjectMeta{ + Name: rvrName, + }, + Spec: v1alpha1.ReplicatedVolumeReplicaSpec{ + ReplicatedVolumeName: v.rvName, + Type: v1alpha1.ReplicaType(replicaType), + // NodeName is not set - controller will schedule it + }, + } + + // Create RVR (do NOT wait for success) + if err := v.client.CreateRVR(ctx, rvr); err != nil { + v.log.Error("failed to create RVR", + "rvr_name", rvrName, + "error", err) + return + } + + // Log success + v.log.Info("RVR created", + "rvr_name", rvrName, + "rvr_type", replicaType, + "duration", time.Since(startTime)) +} diff --git a/images/megatest/internal/runners/volume_replica_destroyer.go b/images/megatest/internal/runners/volume_replica_destroyer.go new file mode 100644 index 000000000..a99518ae7 --- /dev/null +++ b/images/megatest/internal/runners/volume_replica_destroyer.go @@ -0,0 +1,103 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package runners + +import ( + "context" + "log/slog" + "math/rand" + "time" + + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +// VolumeReplicaDestroyer periodically deletes random replicas from a volume. +// It does NOT wait for deletion to succeed. 
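+// Errors from a delete attempt are logged, and the runner simply waits for the next random period.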
+type VolumeReplicaDestroyer struct { + rvName string + cfg config.VolumeReplicaDestroyerConfig + client *kubeutils.Client + log *slog.Logger +} + +// NewVolumeReplicaDestroyer creates a new VolumeReplicaDestroyer +func NewVolumeReplicaDestroyer( + rvName string, + cfg config.VolumeReplicaDestroyerConfig, + client *kubeutils.Client, + periodrMinMax []int, +) *VolumeReplicaDestroyer { + return &VolumeReplicaDestroyer{ + rvName: rvName, + cfg: cfg, + client: client, + log: slog.Default().With("runner", "volume-replica-destroyer", "rv_name", rvName, "period_min_max", periodrMinMax), + } +} + +// Run starts the destroy cycle until context is cancelled +func (v *VolumeReplicaDestroyer) Run(ctx context.Context) error { + v.log.Info("started") + defer v.log.Info("finished") + + for { + // Wait random duration before delete + if err := waitRandomWithContext(ctx, v.cfg.Period); err != nil { + return nil + } + + // Perform delete (errors are logged, not returned) + v.doDestroy(ctx) + } +} + +func (v *VolumeReplicaDestroyer) doDestroy(ctx context.Context) { + startTime := time.Now() + + // Get list of RVRs for this RV + rvrs, err := v.client.ListRVRsByRVName(ctx, v.rvName) + if err != nil { + v.log.Error("failed to list RVRs", "error", err) + return + } + + if len(rvrs) == 0 { + v.log.Debug("no RVRs found to destroy") + return + } + + // Select random RVR + //nolint:gosec // G404: math/rand is fine for non-security-critical random selection + idx := rand.Intn(len(rvrs)) + selectedRVR := &rvrs[idx] + + // Delete RVR (do NOT wait for success) + if err := v.client.DeleteRVR(ctx, selectedRVR); err != nil { + v.log.Error("failed to delete RVR", + "rvr_name", selectedRVR.Name, + "error", err) + return + } + + // Log success + v.log.Info("RVR deleted", + "rvr_name", selectedRVR.Name, + "rvr_type", selectedRVR.Spec.Type, + "rvr_node", selectedRVR.Spec.NodeName, + "duration", time.Since(startTime)) +} diff --git a/images/megatest/internal/runners/volume_resizer.go b/images/megatest/internal/runners/volume_resizer.go new file mode 100644 index 000000000..28d6dd73b --- /dev/null +++ b/images/megatest/internal/runners/volume_resizer.go @@ -0,0 +1,75 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package runners + +import ( + "context" + "errors" + "log/slog" + + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/config" + "github.com/deckhouse/sds-replicated-volume/images/megatest/internal/kubeutils" +) + +// VolumeResizer periodically increases the size of a ReplicatedVolume +type VolumeResizer struct { + rvName string + cfg config.VolumeResizerConfig + client *kubeutils.Client + log *slog.Logger +} + +// NewVolumeResizer creates a new VolumeResizer +func NewVolumeResizer( + rvName string, + cfg config.VolumeResizerConfig, + client *kubeutils.Client, + periodMinMax []int, + stepMinMax []string, +) *VolumeResizer { + return &VolumeResizer{ + rvName: rvName, + cfg: cfg, + client: client, + log: slog.Default().With("runner", "volume-resizer", "rv_name", rvName, "period_min_max", periodMinMax, "step_min_max", stepMinMax), + } +} + +// Run starts the resize cycle until context is cancelled +func (v *VolumeResizer) Run(ctx context.Context) error { + v.log.Info("started") + defer v.log.Info("finished") + + for { + // Wait random duration before resize + if err := waitRandomWithContext(ctx, v.cfg.Period); err != nil { + return nil + } + + // Perform resize + if err := v.doResize(ctx); err != nil { + v.log.Error("resize failed", "error", err) + // Continue even on failure + } + } +} + +func (v *VolumeResizer) doResize(ctx context.Context) error { + v.log.Debug("resizing volume -------------------------------------") + _ = ctx + return errors.New("resize not implemented") +} diff --git a/images/sds-replicated-volume-controller/cmd/main.go b/images/sds-replicated-volume-controller/cmd/main.go index 46961cb0b..16f06cecb 100644 --- a/images/sds-replicated-volume-controller/cmd/main.go +++ b/images/sds-replicated-volume-controller/cmd/main.go @@ -38,8 +38,8 @@ import ( srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/config" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/controller" - kubutils "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/kubeutils" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/kubutils" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) var ( diff --git a/images/sds-replicated-volume-controller/config/config.go b/images/sds-replicated-volume-controller/config/config.go index a8ee9a701..298d5db0e 100644 --- a/images/sds-replicated-volume-controller/config/config.go +++ b/images/sds-replicated-volume-controller/config/config.go @@ -20,7 +20,7 @@ import ( "log" "os" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) // ScanInterval Scan block device interval seconds diff --git a/images/sds-replicated-volume-controller/go.mod b/images/sds-replicated-volume-controller/go.mod index 42c7575e9..64d1c1dd5 100644 --- a/images/sds-replicated-volume-controller/go.mod +++ b/images/sds-replicated-volume-controller/go.mod @@ -3,80 +3,255 @@ module github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-c go 1.24.11 require ( - github.com/LINBIT/golinstor v0.49.0 - github.com/deckhouse/sds-node-configurator/api v0.0.0-20250424082358-e271071c2a57 - github.com/deckhouse/sds-replicated-volume/api v0.0.0-20240812165341-a73e664454b9 - 
github.com/go-logr/logr v1.4.2 - github.com/onsi/ginkgo/v2 v2.22.0 - github.com/onsi/gomega v1.36.1 + github.com/LINBIT/golinstor v0.56.2 + github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da + github.com/deckhouse/sds-replicated-volume/api v0.0.0-20250907192450-6e1330e9e380 + github.com/deckhouse/sds-replicated-volume/lib/go/common v0.0.0-00010101000000-000000000000 + github.com/google/uuid v1.6.0 + github.com/onsi/ginkgo/v2 v2.27.2 + github.com/onsi/gomega v1.38.3 + github.com/stretchr/testify v1.11.1 gopkg.in/yaml.v3 v3.0.1 - k8s.io/api v0.32.3 - k8s.io/apiextensions-apiserver v0.32.3 - k8s.io/apimachinery v0.32.3 - k8s.io/client-go v0.32.3 - sigs.k8s.io/controller-runtime v0.20.4 -) - -replace github.com/deckhouse/sds-replicated-volume/api => ../../api - -require ( - github.com/fxamacker/cbor/v2 v2.7.0 // indirect - github.com/go-task/slim-sprig/v3 v3.0.0 // indirect - github.com/google/btree v1.1.3 // indirect - github.com/google/gnostic-models v0.6.8 // indirect - github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect - github.com/x448/float16 v0.8.4 // indirect - golang.org/x/sync v0.14.0 // indirect - gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect + k8s.io/api v0.34.3 + k8s.io/apiextensions-apiserver v0.34.3 + k8s.io/apimachinery v0.34.3 + k8s.io/client-go v0.34.3 + k8s.io/klog/v2 v2.130.1 + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 + sigs.k8s.io/controller-runtime v0.22.4 ) require ( + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect github.com/donovanhide/eventsource 
v0.0.0-20210830082556-c59027999da0 // indirect - github.com/emicklei/go-restful/v3 v3.11.0 // indirect - github.com/evanphx/json-patch v5.6.0+incompatible // indirect + github.com/emicklei/go-restful/v3 v3.13.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect github.com/evanphx/json-patch/v5 v5.9.11 // indirect - github.com/fsnotify/fsnotify v1.7.0 // indirect - github.com/go-openapi/jsonpointer v0.21.0 // indirect - github.com/go-openapi/jsonreference v0.20.2 // indirect - github.com/go-openapi/swag v0.23.0 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect github.com/gogo/protobuf v1.3.2 // indirect - github.com/golang/protobuf v1.5.4 // indirect - github.com/google/go-cmp v0.6.0 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/btree v1.1.3 // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/go-cmp v0.7.0 // indirect github.com/google/go-querystring v1.1.0 // indirect - github.com/google/gofuzz v1.2.0 // indirect - github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect - github.com/google/uuid v1.6.0 + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + 
github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect - github.com/mailru/easyjson v0.7.7 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect - github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect - github.com/pkg/errors v0.9.1 // indirect - github.com/prometheus/client_golang v1.19.1 // indirect - github.com/prometheus/client_model v0.6.1 // indirect - github.com/prometheus/common v0.55.0 // indirect - github.com/prometheus/procfs v0.15.1 // indirect - github.com/spf13/pflag v1.0.5 // indirect - github.com/stretchr/testify v1.9.0 - golang.org/x/net v0.40.0 // indirect - golang.org/x/oauth2 v0.27.0 // indirect - golang.org/x/sys v0.33.0 // indirect - golang.org/x/term v0.32.0 // indirect - golang.org/x/text v0.25.0 // indirect - golang.org/x/time v0.7.0 // indirect - golang.org/x/tools v0.26.0 // indirect - gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect - google.golang.org/protobuf v1.35.1 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // 
indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/x448/float16 v0.8.4 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + golang.org/x/tools v0.38.0 // indirect + gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/evanphx/json-patch.v4 
v4.13.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect - k8s.io/klog/v2 v2.130.1 - k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect - k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 + gopkg.in/yaml.v2 v2.4.0 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect moul.io/http2curl/v2 v2.3.0 // indirect - sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect - sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect - sigs.k8s.io/yaml v1.4.0 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +replace github.com/deckhouse/sds-replicated-volume/lib/go/common => ../../lib/go/common + +replace github.com/deckhouse/sds-replicated-volume/api => ../../api + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + github.com/onsi/ginkgo/v2/ginkgo ) diff --git a/images/sds-replicated-volume-controller/go.sum b/images/sds-replicated-volume-controller/go.sum index a7cf8b54b..f25949411 100644 --- a/images/sds-replicated-volume-controller/go.sum +++ b/images/sds-replicated-volume-controller/go.sum @@ -1,210 +1,714 @@ -github.com/LINBIT/golinstor v0.49.0 h1:2Q5u0mjB+vMA8xkFfB04eT09qg1wFRxnmS1SkfK4Jr0= -github.com/LINBIT/golinstor v0.49.0/go.mod h1:wwtsHgmgK/+Kz0g3uJoEljqBEsEfmnCXvM64JcyuiwU= +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= 
+github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/LINBIT/golinstor v0.56.2 h1:efT4d8C712bSEyxvhgMoExpPAVJhkViX8g+GOgC3fEI= +github.com/LINBIT/golinstor v0.56.2/go.mod h1:JF2dGKWa9wyT6M9GOHmlzqFB9/s84Z9bt3tRkZLvZSU= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 
h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20250424082358-e271071c2a57 h1:13GafAaD2xfKtklUnNoNkMtYhYSWwC7wOCAChB7yH1w= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20250424082358-e271071c2a57/go.mod h1:asf5aASltd0t84HVMO95dgrZlLwYO7VJbfLsrL2NjsI= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da h1:LFk9OC/+EVWfYDRe54Hip4kVKwjNcPhHZTftlm5DCpg= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da/go.mod h1:X5ftUa4MrSXMKiwQYa4lwFuGtrs+HoCNa8Zl6TPrGo8= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= github.com/donovanhide/eventsource v0.0.0-20210830082556-c59027999da0 h1:C7t6eeMaEQVy6e8CarIhscYQlNmw5e3G36y7l7Y21Ao= github.com/donovanhide/eventsource v0.0.0-20210830082556-c59027999da0/go.mod h1:56wL82FO0bfMU5RvfXoIwSOP2ggqqxT+tAfNEIyxuHw= -github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= -github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= -github.com/evanphx/json-patch 
v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U= -github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= +github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= -github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= -github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= -github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= -github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= -github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= -github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod 
h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= -github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= -github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= -github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= -github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= -github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= -github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= -github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE= -github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= +github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= +github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= +github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils 
v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= -github.com/golang/protobuf v1.5.4/go.mod 
h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= -github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= -github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= -github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= -github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8= github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/google/pprof 
v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo= -github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 
h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= -github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= -github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= 
+github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= +github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent 
v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= -github.com/onsi/ginkgo/v2 v2.22.0 h1:Yed107/8DjTr0lKCNt7Dn8yQ6ybuDRQoMGrNFKzMfHg= -github.com/onsi/ginkgo/v2 v2.22.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo= -github.com/onsi/gomega v1.36.1 h1:bJDPBO7ibjxcbHMgSCoo4Yj18UWbKDlLwX1x9sybDcw= -github.com/onsi/gomega v1.36.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= github.com/pkg/diff v0.0.0-20200914180035-5b29258ca4f7/go.mod h1:zO8QMzTeZd5cpnIkz/Gn6iK0jDfGicM1nynOkkPIl28= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod 
h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE= -github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho= -github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E= -github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY= -github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc= -github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8= -github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= -github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= -github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8= -github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 
h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= -github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 
h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= -github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= -github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= github.com/tailscale/depaware v0.0.0-20210622194025-720c4b409502/go.mod h1:p9lPsd+cx33L3H9nNoecRRxPssFKUwwI50I3pZ0yT+8= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 
h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint 
v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod 
h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY= -golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds= -golang.org/x/oauth2 v0.27.0 h1:da9Vo7/tDv5RH/7nZDz1eMGS/q1Vv1N/7FCrBhI9I3M= -golang.org/x/oauth2 v0.27.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync 
v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ= -golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw= -golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= -golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg= -golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= 
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4= -golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA= -golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ= -golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201211185031-d93e913c1a58/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ= -golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0= 
+golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw= -gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= -google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA= -google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= +gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0= +gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= -gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= -gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/yaml.v2 
v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -k8s.io/api v0.32.3 h1:Hw7KqxRusq+6QSplE3NYG4MBxZw1BZnq4aP4cJVINls= -k8s.io/api v0.32.3/go.mod h1:2wEDTXADtm/HA7CCMD8D8bK4yuBUptzaRhYcYEEYA3k= -k8s.io/apiextensions-apiserver v0.32.3 h1:4D8vy+9GWerlErCwVIbcQjsWunF9SUGNu7O7hiQTyPY= -k8s.io/apiextensions-apiserver v0.32.3/go.mod h1:8YwcvVRMVzw0r1Stc7XfGAzB/SIVLunqApySV5V7Dss= -k8s.io/apimachinery v0.32.3 h1:JmDuDarhDmA/Li7j3aPrwhpNBA94Nvk5zLeOge9HH1U= -k8s.io/apimachinery v0.32.3/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE= -k8s.io/client-go v0.32.3 h1:RKPVltzopkSgHS7aS98QdscAgtgah/+zmpAogooIqVU= -k8s.io/client-go v0.32.3/go.mod h1:3v0+3k4IcT9bXTc4V2rt+d2ZPPG700Xy6Oi0Gdl2PaY= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= +k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= +k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= +k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= -k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f h1:GA7//TjRY9yWGy1poLzYYJJ4JRdzg3+O6e8I+e+8T5Y= -k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f/go.mod h1:R/HEjbvWI0qdfb8viZUeVZm0X6IZnxAydC7YU42CMw4= -k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro= -k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= moul.io/http2curl/v2 v2.3.0 h1:9r3JfDzWPcbIklMOs2TnIFzDYvfAZvjeavG6EzP7jYs= moul.io/http2curl/v2 v2.3.0/go.mod h1:RW4hyBjTWSYDOxapodpNEtX0g5Eb16sxklBqmd2RHcE= -sigs.k8s.io/controller-runtime v0.20.4 h1:X3c+Odnxz+iPTRobG4tp092+CvBU9UK0t/bRf+n0DGU= -sigs.k8s.io/controller-runtime v0.20.4/go.mod h1:xg2XB0K5ShQzAgsoujxuKN4LNXR2LfwwHsPj7Iaw+XY= -sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8= -sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo= -sigs.k8s.io/structured-merge-diff/v4 v4.4.2 h1:MdmvkGuXi/8io6ixD5wud3vOLwc1rj0aNqRlpuvjmwA= 
-sigs.k8s.io/structured-merge-diff/v4 v4.4.2/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4=
-sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
-sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
+mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU=
+mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo=
+mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U=
+mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ=
+sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A=
+sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8=
+sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
+sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
+sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
+sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
+sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
+sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
+sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
+sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=
diff --git a/images/sds-replicated-volume-controller/pkg/controller/controller_suite_test.go b/images/sds-replicated-volume-controller/pkg/controller/controller_suite_test.go
index 2126ea7f2..71190e70f 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/controller_suite_test.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/controller_suite_test.go
@@ -1,5 +1,5 @@
 /*
-Copyright 2025 Flant JSC
+Copyright 2026 Flant JSC
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -200,7 +200,7 @@ func getAndValidateNotReconciledRSC(ctx context.Context, cl client.Client, testN
 	Expect(err).NotTo(HaveOccurred())
 	Expect(replicatedSC.Name).To(Equal(testName))
 	Expect(replicatedSC.Finalizers).To(BeNil())
-	Expect(replicatedSC.Status.Phase).To(Equal(""))
+	Expect(replicatedSC.Status.Phase).To(Equal(srv.ReplicatedStorageClassPhase("")))
 	Expect(replicatedSC.Status.Reason).To(Equal(""))
 
 	return replicatedSC
@@ -277,8 +277,8 @@ func getConfigMap(ctx context.Context, cl client.Client, namespace string) (*cor
 	return configMap, err
 }
 
-func getVolumeBindingMode(volumeAccess string) storagev1.VolumeBindingMode {
-	if volumeAccess == controller.VolumeAccessAny {
+func getVolumeBindingMode(volumeAccess srv.ReplicatedStorageClassVolumeAccess) storagev1.VolumeBindingMode {
+	if volumeAccess == srv.VolumeAccessAny {
 		return storagev1.VolumeBindingImmediate
 	}
 
diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_leader.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_leader.go
index 4ae86b994..0b5b5c808 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/linstor_leader.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_leader.go
@@ -33,7 +33,7 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/reconcile"
 	"sigs.k8s.io/controller-runtime/pkg/source"
 
-	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger"
+	"github.com/deckhouse/sds-replicated-volume/lib/go/common/logger"
 )
 
 const (
diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_leader_test.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_leader_test.go
index a9d47d5ad..13d9a0a5d 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/linstor_leader_test.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_leader_test.go
@@ -17,7 +17,6 @@ limitations under the License.
 package controller
 
 import (
-	"context"
 	"fmt"
 	"testing"
 
@@ -27,13 +26,13 @@ import (
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 
-	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger"
+	"github.com/deckhouse/sds-replicated-volume/lib/go/common/logger"
 )
 
 func TestLinstorLeaderController(t *testing.T) {
 	var (
 		cl        = newFakeClient()
-		ctx       = context.Background()
+		ctx       = t.Context()
 		log       = logger.Logger{}
 		namespace = "test-ns"
 		leaseName = "test-lease"
diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_node.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_node.go
index 3f0c319ab..d249b139b 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/linstor_node.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_node.go
@@ -39,7 +39,7 @@ import (
 	srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1"
 	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/config"
-	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger"
+	"github.com/deckhouse/sds-replicated-volume/lib/go/common/logger"
 )
 
 const (
diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_node_t_test.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_node_t_test.go
index 520c7b091..ad3435879 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/linstor_node_t_test.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_node_t_test.go
@@ -17,7 +17,6 @@ limitations under the License.
 package controller
 
 import (
-	"context"
 	"testing"
 
 	"github.com/stretchr/testify/assert"
@@ -25,11 +24,11 @@ import (
 	v12 "k8s.io/api/storage/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 
-	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger"
+	"github.com/deckhouse/sds-replicated-volume/lib/go/common/logger"
 )
 
 func TestReconcileCSINodeLabelsIfDiffExists(t *testing.T) {
-	ctx := context.Background()
+	ctx := t.Context()
 	cl := newFakeClient()
 	log := logger.Logger{}
 
@@ -160,7 +159,7 @@ func TestReconcileCSINodeLabelsIfDiffExists(t *testing.T) {
 }
 
 func TestReconcileCSINodeLabelsIfDiffDoesNotExists(t *testing.T) {
-	ctx := context.Background()
+	ctx := t.Context()
 	cl := newFakeClient()
 	log := logger.Logger{}
 
@@ -293,7 +292,7 @@ func TestRenameLinbitLabels(t *testing.T) {
 		SdsDfltDisklessStorPoolLabelKey    = "storage.deckhouse.io/sds-replicated-volume-sp-DfltDisklessStorPool"
 		LinbitDfltDisklessStorPoolLabelKey = "linbit.com/sp-DfltDisklessStorPool"
 	)
-	ctx := context.Background()
+	ctx := t.Context()
 	cl := newFakeClient()
 	nodes := []v1.Node{
 		{
diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_node_test.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_node_test.go
index 24a3a606f..28b115de4 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/linstor_node_test.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_node_test.go
@@ -17,7 +17,6 @@ limitations under the License.
 package controller_test
 
 import (
-	"context"
 	"fmt"
 
 	linstor "github.com/LINBIT/golinstor/client"
@@ -28,7 +27,7 @@ import (
 	srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1"
 	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/controller"
-	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger"
+	"github.com/deckhouse/sds-replicated-volume/lib/go/common/logger"
 )
 
 var _ = Describe(controller.LinstorNodeControllerName, func() {
@@ -38,7 +37,6 @@ var _ = Describe(controller.LinstorNodeControllerName, func() {
 	)
 
 	var (
-		ctx = context.Background()
 		cl  = newFakeClient()
 
 		cfgSecret *v1.Secret
@@ -50,7 +48,7 @@ var _ = Describe(controller.LinstorNodeControllerName, func() {
 		}
 	)
 
-	It("GetKubernetesSecretByName", func() {
+	It("GetKubernetesSecretByName", func(ctx SpecContext) {
 		err := cl.Create(ctx, testSecret)
 		Expect(err).NotTo(HaveOccurred())
 
@@ -82,7 +80,7 @@ var _ = Describe(controller.LinstorNodeControllerName, func() {
 		selectedKubeNodes *v1.NodeList
 	)
 
-	It("GetKubernetesNodesBySelector", func() {
+	It("GetKubernetesNodesBySelector", func(ctx SpecContext) {
 		cfgNodeSelector := map[string]string{}
 		testLabels := map[string]string{testLblKey: testLblVal}
 		testNode := v1.Node{
@@ -112,7 +110,7 @@ var _ = Describe(controller.LinstorNodeControllerName, func() {
 		Expect(actualNode.Status.Addresses[0].Address).To(Equal(testNodeAddress))
 	})
 
-	It("GetAllKubernetesNodes", func() {
+	It("GetAllKubernetesNodes", func(ctx SpecContext) {
 		allKubsNodes, err := controller.GetAllKubernetesNodes(ctx, cl)
 		Expect(err).NotTo(HaveOccurred())
 		Expect(len(allKubsNodes.Items)).To(Equal(1))
@@ -197,7 +195,7 @@ var _ = Describe(controller.LinstorNodeControllerName, func() {
 		mockLc *linstor.Client
 	)
 
-	It("AddOrConfigureDRBDNodes", func() {
+	It("AddOrConfigureDRBDNodes", func(ctx SpecContext) {
 		mockLc, err := NewLinstorClientWithMockNodes()
 		Expect(err).NotTo(HaveOccurred())
 
@@ -236,7 +234,7 @@ var _ = Describe(controller.LinstorNodeControllerName, func() {
 		Expect(drbdNodeProps["Aux/"+testKey2]).To(Equal(testValue2))
 	})
 
-	It("ConfigureDRBDNode", func() {
+	It("ConfigureDRBDNode", func(ctx SpecContext) {
 		err := controller.ConfigureDRBDNode(ctx, mockLc, linstor.Node{}, drbdNodeProps)
 		Expect(err).NotTo(HaveOccurred())
 	})
diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher.go
index 36d4a7044..dded8061d 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher.go
@@ -36,7 +36,7 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/source"
 
 	"github.com/deckhouse/sds-replicated-volume/api/linstor"
-	"github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger"
+	"github.com/deckhouse/sds-replicated-volume/lib/go/common/logger"
 )
 
 const (
diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher_test.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher_test.go
index 411830cff..4f5e71da5 100644
--- a/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher_test.go
+++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_port_range_cm_watcher_test.go
@@ -17,7 +17,6 @@ limitations under the License.
package controller import ( - "context" "fmt" "strconv" "testing" @@ -30,11 +29,11 @@ import ( "sigs.k8s.io/controller-runtime/pkg/client" "sigs.k8s.io/controller-runtime/pkg/reconcile" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) func TestLinstorPortRangeWatcher(t *testing.T) { - ctx := context.Background() + ctx := t.Context() log := logger.Logger{} cl := newFakeClient() diff --git a/images/sds-replicated-volume-controller/pkg/controller/linstor_resources_watcher.go b/images/sds-replicated-volume-controller/pkg/controller/linstor_resources_watcher.go index 37a6a9269..c8067db4d 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/linstor_resources_watcher.go +++ b/images/sds-replicated-volume-controller/pkg/controller/linstor_resources_watcher.go @@ -32,7 +32,7 @@ import ( "sigs.k8s.io/controller-runtime/pkg/client" "sigs.k8s.io/controller-runtime/pkg/manager" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) const ( @@ -76,7 +76,7 @@ func NewLinstorResourcesWatcher( log logger.Logger, ) { cl := mgr.GetClient() - ctx := context.Background() + ctx := context.Background() // TODO: should use external context to make it cancelable log.Info(fmt.Sprintf("[NewLinstorResourcesWatcher] the controller %s starts the work", linstorResourcesWatcherCtrlName)) @@ -369,7 +369,7 @@ func createTieBreaker(ctx context.Context, lc *lapi.Client, resourceName, nodeNa Name: resourceName, NodeName: nodeName, Flags: disklessFlags, - LayerObject: lapi.ResourceLayer{}, + LayerObject: &lapi.ResourceLayer{}, }, } diff --git a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class.go b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class.go index 584158733..821b16fa7 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class.go +++ b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class.go @@ -1,5 +1,5 @@ /* -Copyright 2025 Flant JSC +Copyright 2026 Flant JSC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
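Aside on the `// TODO: should use external context to make it cancelable` note above: the idea is to accept the caller's context instead of creating `context.Background()` inside the watcher, so the manager can stop the goroutine on shutdown. A hypothetical sketch only; the name and signature are illustrative, not the repo's actual `NewLinstorResourcesWatcher` API:

```go
package controller

import (
	"context"
	"time"
)

// Hypothetical sketch of the TODO: cancellation propagates from the caller's
// context, so the polling loop exits cleanly when the manager shuts down.
func startResourcesWatcherSketch(ctx context.Context, interval time.Duration, scan func(context.Context)) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done(): // caller-driven shutdown
				return
			case <-ticker.C:
				scan(ctx)
			}
		}
	}()
}
```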
@@ -43,7 +43,7 @@ import ( srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/config" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) const ( @@ -83,6 +83,8 @@ const ( StorageClassParamAllowRemoteVolumeAccessKey = "replicated.csi.storage.deckhouse.io/allowRemoteVolumeAccess" StorageClassParamAllowRemoteVolumeAccessValue = "- fromSame:\n - topology.kubernetes.io/zone" ReplicatedStorageClassParamNameKey = "replicated.csi.storage.deckhouse.io/replicatedStorageClassName" + StorageClassParamTopologyKey = "replicated.csi.storage.deckhouse.io/topology" + StorageClassParamZonesKey = "replicated.csi.storage.deckhouse.io/zones" StorageClassParamFSTypeKey = "csi.storage.k8s.io/fstype" FsTypeExt4 = "ext4" @@ -518,6 +520,22 @@ func GenerateStorageClassFromReplicatedStorageClass(replicatedSC *srv.Replicated volumeBindingMode = "Immediate" } + // Add topology parameter + storageClassParameters[StorageClassParamTopologyKey] = string(replicatedSC.Spec.Topology) + + // Add zones parameter (serialize array to YAML list format) + if len(replicatedSC.Spec.Zones) > 0 { + var zonesBuilder strings.Builder + for i, zone := range replicatedSC.Spec.Zones { + if i > 0 { + zonesBuilder.WriteString("\n") + } + zonesBuilder.WriteString("- ") + zonesBuilder.WriteString(zone) + } + storageClassParameters[StorageClassParamZonesKey] = zonesBuilder.String() + } + switch replicatedSC.Spec.Topology { case TopologyTransZonal: storageClassParameters[StorageClassParamReplicasOnSameKey] = fmt.Sprintf("%s/%s", StorageClassLabelKeyPrefix, replicatedSC.Name) @@ -647,10 +665,19 @@ func canRecreateStorageClass(newSC, oldSC *storagev1.StorageClass) (bool, string // We can recreate StorageClass only if the following parameters are not equal. // If other parameters are not equal, we can't recreate StorageClass and // users must delete ReplicatedStorageClass resource and create it again manually. 
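Aside on the new `StorageClassParamZonesKey` value added above: the zones slice is serialized as a YAML-style list, one `- <zone>` entry per line. A small illustrative sketch showing the same result with `strings.Join` and a sample input/output (the controller itself uses the `strings.Builder` loop shown in the hunk):

```go
package main

import (
	"fmt"
	"strings"
)

// Equivalent to the strings.Builder loop above:
// []string{"first", "second", "third"} -> "- first\n- second\n- third".
func zonesParam(zones []string) string {
	if len(zones) == 0 {
		return "" // the controller skips the parameter entirely when zones is empty
	}
	return "- " + strings.Join(zones, "\n- ")
}

func main() {
	fmt.Printf("%q\n", zonesParam([]string{"first", "second", "third"}))
}
```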
+ // Ignore these parameters during comparison as they may be missing in old StorageClass: + // - QuorumMinimumRedundancyWithPrefixSCKey: optional parameter + // - ReplicatedStorageClassParamNameKey: optional parameter + // - StorageClassParamTopologyKey: new parameter, may be missing in old StorageClass + // - StorageClassParamZonesKey: new parameter, may be missing in old StorageClass delete(newSCCopy.Parameters, QuorumMinimumRedundancyWithPrefixSCKey) delete(newSCCopy.Parameters, ReplicatedStorageClassParamNameKey) + delete(newSCCopy.Parameters, StorageClassParamTopologyKey) + delete(newSCCopy.Parameters, StorageClassParamZonesKey) delete(oldSCCopy.Parameters, QuorumMinimumRedundancyWithPrefixSCKey) delete(oldSCCopy.Parameters, ReplicatedStorageClassParamNameKey) + delete(oldSCCopy.Parameters, StorageClassParamTopologyKey) + delete(oldSCCopy.Parameters, StorageClassParamZonesKey) return CompareStorageClasses(newSCCopy, oldSCCopy) } @@ -789,7 +816,7 @@ func updateReplicatedStorageClassStatus( phase string, reason string, ) error { - replicatedSC.Status.Phase = phase + replicatedSC.Status.Phase = srv.ReplicatedStorageClassPhase(phase) replicatedSC.Status.Reason = reason log.Trace(fmt.Sprintf("[updateReplicatedStorageClassStatus] update ReplicatedStorageClass %+v", replicatedSC)) return UpdateReplicatedStorageClass(ctx, cl, replicatedSC) diff --git a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_test.go b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_test.go index e9a268c4f..4e0db7a2a 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_test.go +++ b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_test.go @@ -1,5 +1,5 @@ /* -Copyright 2025 Flant JSC +Copyright 2026 Flant JSC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -17,7 +17,6 @@ limitations under the License. package controller_test import ( - "context" "fmt" "reflect" "slices" @@ -36,15 +35,14 @@ import ( srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/config" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/controller" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { var ( - ctx = context.Background() - cl = newFakeClient() - log = logger.Logger{} + cl client.WithWatch + log = logger.WrapLorg(GinkgoLogr) validCFG, _ = config.NewConfig() validZones = []string{"first", "second", "third"} @@ -78,6 +76,12 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { } ) + BeforeEach(func() { + // Ensure test isolation: this suite creates cluster-scoped objects with identical names across tests. + // Using a fresh fake client per spec avoids cross-test pollution (AlreadyExists errors). 
+ cl = newFakeClient() + }) + It("GenerateStorageClassFromReplicatedStorageClass_Generates_expected_StorageClass", func() { var ( testName = generateTestName() @@ -102,6 +106,8 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { controller.StorageClassParamReplicasOnDifferentKey: controller.ZoneLabel, controller.StorageClassParamAllowRemoteVolumeAccessKey: "false", controller.QuorumMinimumRedundancyWithPrefixSCKey: "2", + controller.StorageClassParamTopologyKey: string(validSpecReplicatedSCTemplate.Spec.Topology), + controller.StorageClassParamZonesKey: "- first\n- second\n- third", } expectedSC = &storagev1.StorageClass{ @@ -138,7 +144,84 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(actualSC).To(Equal(expectedSC)) }) - It("GetStorageClass_Returns_storage_class_and_no_error", func() { + It("GenerateStorageClassFromReplicatedStorageClass_Adds_topology_and_zones_parameters", func() { + testName := generateTestName() + replicatedSC := validSpecReplicatedSCTemplate + replicatedSC.Name = testName + + storageClass := controller.GenerateStorageClassFromReplicatedStorageClass(&replicatedSC) + + Expect(storageClass.Parameters).To(HaveKey(controller.StorageClassParamTopologyKey)) + Expect(storageClass.Parameters[controller.StorageClassParamTopologyKey]).To(Equal(controller.TopologyTransZonal)) + Expect(storageClass.Parameters).To(HaveKey(controller.StorageClassParamZonesKey)) + Expect(storageClass.Parameters[controller.StorageClassParamZonesKey]).To(Equal("- first\n- second\n- third")) + }) + + It("GenerateStorageClassFromReplicatedStorageClass_Does_not_add_zones_when_empty", func() { + testName := generateTestName() + replicatedSC := validSpecReplicatedSCTemplate + replicatedSC.Name = testName + replicatedSC.Spec.Zones = []string{} + + storageClass := controller.GenerateStorageClassFromReplicatedStorageClass(&replicatedSC) + + Expect(storageClass.Parameters).To(HaveKey(controller.StorageClassParamTopologyKey)) + Expect(storageClass.Parameters).NotTo(HaveKey(controller.StorageClassParamZonesKey)) + }) + + It("GenerateStorageClassFromReplicatedStorageClass_Formats_single_zone_correctly", func() { + testName := generateTestName() + replicatedSC := validSpecReplicatedSCTemplate + replicatedSC.Name = testName + replicatedSC.Spec.Zones = []string{"single-zone"} + + storageClass := controller.GenerateStorageClassFromReplicatedStorageClass(&replicatedSC) + + Expect(storageClass.Parameters).To(HaveKey(controller.StorageClassParamZonesKey)) + Expect(storageClass.Parameters[controller.StorageClassParamZonesKey]).To(Equal("- single-zone")) + }) + + It("GenerateStorageClassFromReplicatedStorageClass_Formats_multiple_zones_correctly", func() { + testName := generateTestName() + replicatedSC := validSpecReplicatedSCTemplate + replicatedSC.Name = testName + replicatedSC.Spec.Zones = []string{"zone-a", "zone-b", "zone-c", "zone-d"} + + storageClass := controller.GenerateStorageClassFromReplicatedStorageClass(&replicatedSC) + + Expect(storageClass.Parameters).To(HaveKey(controller.StorageClassParamZonesKey)) + Expect(storageClass.Parameters[controller.StorageClassParamZonesKey]).To(Equal("- zone-a\n- zone-b\n- zone-c\n- zone-d")) + }) + + It("GenerateStorageClassFromReplicatedStorageClass_Adds_topology_for_Zonal", func() { + testName := generateTestName() + replicatedSC := validSpecReplicatedSCTemplate + replicatedSC.Name = testName + replicatedSC.Spec.Topology = controller.TopologyZonal + replicatedSC.Spec.Zones = []string{} + + storageClass := 
controller.GenerateStorageClassFromReplicatedStorageClass(&replicatedSC) + + Expect(storageClass.Parameters).To(HaveKey(controller.StorageClassParamTopologyKey)) + Expect(storageClass.Parameters[controller.StorageClassParamTopologyKey]).To(Equal(controller.TopologyZonal)) + Expect(storageClass.Parameters).NotTo(HaveKey(controller.StorageClassParamZonesKey)) + }) + + It("GenerateStorageClassFromReplicatedStorageClass_Adds_topology_for_Ignored", func() { + testName := generateTestName() + replicatedSC := validSpecReplicatedSCTemplate + replicatedSC.Name = testName + replicatedSC.Spec.Topology = controller.TopologyIgnored + replicatedSC.Spec.Zones = []string{} + + storageClass := controller.GenerateStorageClassFromReplicatedStorageClass(&replicatedSC) + + Expect(storageClass.Parameters).To(HaveKey(controller.StorageClassParamTopologyKey)) + Expect(storageClass.Parameters[controller.StorageClassParamTopologyKey]).To(Equal(controller.TopologyIgnored)) + Expect(storageClass.Parameters).NotTo(HaveKey(controller.StorageClassParamZonesKey)) + }) + + It("GetStorageClass_Returns_storage_class_and_no_error", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -161,7 +244,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(sc.Namespace).To(Equal(testNamespaceConst)) }) - It("DeleteStorageClass_Deletes_needed_one_Returns_no_error", func() { + It("DeleteStorageClass_Deletes_needed_one_Returns_no_error", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -195,7 +278,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(sc).To(BeNil()) }) - It("CreateStorageClass_Creates_one_Returns_no_error", func() { + It("CreateStorageClass_Creates_one_Returns_no_error", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -218,11 +301,11 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(sc.Namespace).To(Equal(testNamespaceConst)) }) - It("UpdateReplicatedStorageClass_Updates_resource", func() { + It("UpdateReplicatedStorageClass_Updates_resource", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName - replicatedSC.Status.Phase = controller.Created + replicatedSC.Status.Phase = srv.RSCPhaseCreated err := cl.Create(ctx, &replicatedSC) if err == nil { @@ -240,9 +323,9 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { oldResource := resources[testName] Expect(oldResource.Name).To(Equal(testName)) Expect(oldResource.Namespace).To(Equal(testNamespaceConst)) - Expect(oldResource.Status.Phase).To(Equal(controller.Created)) + Expect(oldResource.Status.Phase).To(Equal(srv.RSCPhaseCreated)) - oldResource.Status.Phase = controller.Failed + oldResource.Status.Phase = srv.RSCPhaseFailed updatedMessage := "new message" oldResource.Status.Reason = updatedMessage @@ -255,7 +338,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { updatedResource := resources[testName] Expect(updatedResource.Name).To(Equal(testName)) Expect(updatedResource.Namespace).To(Equal(testNamespaceConst)) - Expect(updatedResource.Status.Phase).To(Equal(controller.Failed)) + Expect(updatedResource.Status.Phase).To(Equal(srv.RSCPhaseFailed)) 
Expect(updatedResource.Status.Reason).To(Equal(updatedMessage)) }) @@ -282,12 +365,12 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { } }) - It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_not_nil_Status_created_StorageClass_is_absent_Deletes_Resource_Successfully", func() { + It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_not_nil_Status_created_StorageClass_is_absent_Deletes_Resource_Successfully", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName replicatedSC.Finalizers = []string{controller.ReplicatedStorageClassFinalizerName} - replicatedSC.Status.Phase = controller.Created + replicatedSC.Status.Phase = srv.RSCPhaseCreated request := reconcile.Request{ NamespacedName: types.NamespacedName{ @@ -328,12 +411,12 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(reflect.ValueOf(resources[testName]).IsZero()).To(BeTrue()) }) - It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_not_nil_Status_created_StorageClass_exists_Deletes_resource_and_storage_class_successfully", func() { + It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_not_nil_Status_created_StorageClass_exists_Deletes_resource_and_storage_class_successfully", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName replicatedSC.Finalizers = []string{controller.ReplicatedStorageClassFinalizerName} - replicatedSC.Status.Phase = controller.Created + replicatedSC.Status.Phase = srv.RSCPhaseCreated request := reconcile.Request{ NamespacedName: types.NamespacedName{ @@ -391,12 +474,12 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(sc).To(BeNil()) }) - It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_not_nil_Status_failed_StorageClass_exists_Does_NOT_delete_StorageClass_Deletes_resource", func() { + It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_not_nil_Status_failed_StorageClass_exists_Does_NOT_delete_StorageClass_Deletes_resource", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName replicatedSC.Finalizers = []string{controller.ReplicatedStorageClassFinalizerName} - replicatedSC.Status.Phase = controller.Failed + replicatedSC.Status.Phase = srv.RSCPhaseFailed request := reconcile.Request{ NamespacedName: types.NamespacedName{ @@ -448,11 +531,11 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(reflect.ValueOf(resources[testName]).IsZero()).To(BeTrue()) }) - It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_is_nil_returns_false_no_error_Doesnt_delete_resource", func() { + It("ReconcileReplicatedStorageClassEvent_Resource_exists_DeletionTimestamp_is_nil_returns_false_no_error_Doesnt_delete_resource", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName - replicatedSC.Status.Phase = controller.Created + replicatedSC.Status.Phase = srv.RSCPhaseCreated request := reconcile.Request{ NamespacedName: types.NamespacedName{ @@ -481,7 +564,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(resources[testName].Namespace).To(Equal(testNamespaceConst)) }) - 
It("ReconcileReplicatedStorageClassEvent_Resource_does_not_exist_Returns_false_no_error", func() { + It("ReconcileReplicatedStorageClassEvent_Resource_does_not_exist_Returns_false_no_error", func(ctx SpecContext) { testName := generateTestName() req := reconcile.Request{NamespacedName: types.NamespacedName{ Namespace: testNamespaceConst, @@ -524,7 +607,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(validation).Should(BeTrue()) }) - It("GetClusterZones_nodes_in_zones_returns_correct_zones", func() { + It("GetClusterZones_nodes_in_zones_returns_correct_zones", func(ctx SpecContext) { const ( testZone = "zone1" ) @@ -571,7 +654,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(zones).To(Equal(expectedZones)) }) - It("GetClusterZones_nodes_NOT_in_zones_returns_correct_zones", func() { + It("GetClusterZones_nodes_NOT_in_zones_returns_correct_zones", func(ctx SpecContext) { nodeNotInZone1 := v1.Node{ ObjectMeta: metav1.ObjectMeta{ Name: "nodeNotInZone1", @@ -611,7 +694,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(len(zones)).To(Equal(0)) }) - It("ReconcileReplicatedStorageClass_Validation_failed_Updates_status_to_failed_and_reason", func() { + It("ReconcileReplicatedStorageClass_Validation_failed_Updates_status_to_failed_and_reason", func(ctx SpecContext) { testName := generateTestName() replicatedSC := invalidReplicatedSCTemplate replicatedSC.Name = testName @@ -655,18 +738,18 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Namespace: testNamespaceConst, }, &replicatedSC) Expect(err).NotTo(HaveOccurred()) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Failed)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseFailed)) Expect(replicatedSC.Status.Reason).To(Equal(failedMessage)) resources, err := getTestAPIStorageClasses(ctx, cl) Expect(err).NotTo(HaveOccurred()) resource := resources[testName] - Expect(resource.Status.Phase).To(Equal(controller.Failed)) + Expect(resource.Status.Phase).To(Equal(srv.RSCPhaseFailed)) Expect(resource.Status.Reason).To(Equal(failedMessage)) }) - It("ReconcileReplicatedStorageClass_Validation_passed_StorageClass_not_found_Creates_one_Adds_finalizers_and_Returns_no_error", func() { + It("ReconcileReplicatedStorageClass_Validation_passed_StorageClass_not_found_Creates_one_Adds_finalizers_and_Returns_no_error", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -698,7 +781,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { resource := resources[testName] - Expect(resource.Status.Phase).To(Equal(controller.Created)) + Expect(resource.Status.Phase).To(Equal(srv.RSCPhaseCreated)) Expect(resource.Status.Reason).To(Equal("ReplicatedStorageClass and StorageClass are equal.")) Expect(slices.Contains(resource.Finalizers, controller.ReplicatedStorageClassFinalizerName)).To(BeTrue()) @@ -728,7 +811,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_Validation_passed_StorageClass_already_exists_Resource_and_StorageClass_ARE_EQUAL_Resource.Status.Phase_equals_Created", func() { + It("ReconcileReplicatedStorageClass_Validation_passed_StorageClass_already_exists_Resource_and_StorageClass_ARE_EQUAL_Resource.Status.Phase_equals_Created", func(ctx SpecContext) { testName := generateTestName() replicatedSC 
:= validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -762,7 +845,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(err).NotTo(HaveOccurred()) resource := resources[testName] - Expect(resource.Status.Phase).To(Equal(controller.Created)) + Expect(resource.Status.Phase).To(Equal(srv.RSCPhaseCreated)) Expect(resource.Status.Reason).To(Equal("ReplicatedStorageClass and StorageClass are equal.")) resFinalizers := strings.Join(resource.Finalizers, "") @@ -775,11 +858,11 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(storageClass.Namespace).To(Equal(testNamespaceConst)) }) - It("ReconcileReplicatedStorageClass_Validation_passed_StorageClass_founded_Resource_and_StorageClass_ARE_NOT_EQUAL_Updates_resource_status_to_failed_and_reason", func() { + It("ReconcileReplicatedStorageClass_Validation_passed_StorageClass_founded_Resource_and_StorageClass_ARE_NOT_EQUAL_Updates_resource_status_to_failed_and_reason", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName - replicatedSC.Status.Phase = controller.Created + replicatedSC.Status.Phase = srv.RSCPhaseCreated anotherReplicatedSC := validSpecReplicatedSCTemplate anotherReplicatedSC.Spec.ReclaimPolicy = "not-equal" @@ -824,7 +907,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }, &replicatedSCafterReconcile) Expect(err).NotTo(HaveOccurred()) Expect(replicatedSCafterReconcile.Name).To(Equal(testName)) - Expect(replicatedSCafterReconcile.Status.Phase).To(Equal(controller.Failed)) + Expect(replicatedSCafterReconcile.Status.Phase).To(Equal(srv.RSCPhaseFailed)) storageClass, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(HaveOccurred()) @@ -837,7 +920,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName - replicatedSC.Status.Phase = controller.Created + replicatedSC.Status.Phase = srv.RSCPhaseCreated storageClass := controller.GenerateStorageClassFromReplicatedStorageClass(&replicatedSC) equal, _ := controller.CompareStorageClasses(storageClass, storageClass) @@ -869,7 +952,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(message).NotTo(Equal("")) }) - It("LabelNodes_set_labels", func() { + It("LabelNodes_set_labels", func(ctx SpecContext) { testName := generateTestName() replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -917,7 +1000,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }) // Annotation tests - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_does_not_exist", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_does_not_exist", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -944,7 +1027,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) 
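Aside on the phase constant swaps above (`controller.Created`/`controller.Failed` → `srv.RSCPhaseCreated`/`srv.RSCPhaseFailed`): gomega's `Equal` matcher is built on `reflect.DeepEqual`, which is type-sensitive, so once `Status.Phase` becomes a defined string type the expected values must carry that type as well (and `updateReplicatedStorageClassStatus` now converts its incoming string with `srv.ReplicatedStorageClassPhase(phase)`). A minimal sketch; the type and constant names here are assumptions standing in for the real API definitions:

```go
package main

import (
	"fmt"
	"reflect"
)

// Illustrative stand-in for srv.ReplicatedStorageClassPhase / RSCPhaseCreated.
type Phase string

const PhaseCreated Phase = "Created"

func main() {
	var got Phase = "Created"
	fmt.Println(reflect.DeepEqual(got, "Created"))    // false: Phase vs plain string
	fmt.Println(reflect.DeepEqual(got, PhaseCreated)) // true: same defined type
}
```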
Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) @@ -969,7 +1052,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_does_not_exist", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_does_not_exist", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -996,7 +1079,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) @@ -1021,7 +1104,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_without_data", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_without_data", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -1054,7 +1137,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) @@ -1086,7 +1169,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_exist_without_data", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_exist_without_data", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -1119,7 +1202,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) @@ -1151,7 +1234,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { 
Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_false", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_false", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -1185,13 +1268,13 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) }) - It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_false_to_true", func() { + It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_false_to_true", func(ctx SpecContext) { testName := testNameForAnnotationTests request := reconcile.Request{ @@ -1201,6 +1284,18 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }, } + // Arrange initial state (this spec is "already exists", so we must create it within the spec). 
+ replicatedSCSeed := validSpecReplicatedSCTemplate + replicatedSCSeed.Name = testName + replicatedSCSeed.Spec.VolumeAccess = controller.VolumeAccessPreferablyLocal + err := createConfigMap(ctx, cl, validCFG.ControllerNamespace, map[string]string{controller.VirtualizationModuleEnabledKey: "false"}) + Expect(err).NotTo(HaveOccurred()) + err = cl.Create(ctx, &replicatedSCSeed) + Expect(err).NotTo(HaveOccurred()) + shouldRequeueInit, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) + Expect(err).NotTo(HaveOccurred()) + Expect(shouldRequeueInit).To(BeFalse()) + configMap, err := getConfigMap(ctx, cl, validCFG.ControllerNamespace) Expect(err).NotTo(HaveOccurred()) Expect(configMap).NotTo(BeNil()) @@ -1222,15 +1317,15 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(configMap.Data[controller.VirtualizationModuleEnabledKey]).To(Equal("true")) replicatedSC := getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Spec.VolumeAccess).To(Equal(controller.VolumeAccessPreferablyLocal)) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Spec.VolumeAccess).To(Equal(srv.VolumeAccessPreferablyLocal)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) shouldRequeue, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) @@ -1262,7 +1357,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_false", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_false", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -1296,14 +1391,14 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) }) - It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_false_to_true", func() { + It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_false_to_true", func(ctx SpecContext) { testName := testNameForAnnotationTests request := reconcile.Request{ @@ -1313,6 
+1408,18 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }, } + // Arrange initial state (this spec is "already exists", so we must create it within the spec). + replicatedSCSeed := validSpecReplicatedSCTemplate + replicatedSCSeed.Name = testName + replicatedSCSeed.Spec.VolumeAccess = controller.VolumeAccessLocal + err := createConfigMap(ctx, cl, validCFG.ControllerNamespace, map[string]string{controller.VirtualizationModuleEnabledKey: "false"}) + Expect(err).NotTo(HaveOccurred()) + err = cl.Create(ctx, &replicatedSCSeed) + Expect(err).NotTo(HaveOccurred()) + shouldRequeueInit, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) + Expect(err).NotTo(HaveOccurred()) + Expect(shouldRequeueInit).To(BeFalse()) + configMap, err := getConfigMap(ctx, cl, validCFG.ControllerNamespace) Expect(err).NotTo(HaveOccurred()) Expect(configMap).NotTo(BeNil()) @@ -1334,15 +1441,15 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(configMap.Data[controller.VirtualizationModuleEnabledKey]).To(Equal("true")) replicatedSC := getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Spec.VolumeAccess).To(Equal(controller.VolumeAccessLocal)) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Spec.VolumeAccess).To(Equal(srv.VolumeAccessLocal)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) shouldRequeue, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).NotTo(BeNil()) @@ -1376,7 +1483,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_true", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_true", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -1410,13 +1517,13 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) }) - It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_true_to_false", func() { + It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessPreferablyLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_true_to_false", func(ctx SpecContext) { testName := 
testNameForAnnotationTests request := reconcile.Request{ @@ -1426,6 +1533,18 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }, } + // Arrange initial state (this spec is "already exists", so we must create it within the spec). + replicatedSCSeed := validSpecReplicatedSCTemplate + replicatedSCSeed.Name = testName + replicatedSCSeed.Spec.VolumeAccess = controller.VolumeAccessPreferablyLocal + err := createConfigMap(ctx, cl, validCFG.ControllerNamespace, map[string]string{controller.VirtualizationModuleEnabledKey: "true"}) + Expect(err).NotTo(HaveOccurred()) + err = cl.Create(ctx, &replicatedSCSeed) + Expect(err).NotTo(HaveOccurred()) + shouldRequeueInit, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) + Expect(err).NotTo(HaveOccurred()) + Expect(shouldRequeueInit).To(BeFalse()) + configMap, err := getConfigMap(ctx, cl, validCFG.ControllerNamespace) Expect(err).NotTo(HaveOccurred()) Expect(configMap).NotTo(BeNil()) @@ -1447,15 +1566,15 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(configMap.Data[controller.VirtualizationModuleEnabledKey]).To(Equal("false")) replicatedSC := getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Spec.VolumeAccess).To(Equal(controller.VolumeAccessPreferablyLocal)) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Spec.VolumeAccess).To(Equal(srv.VolumeAccessPreferablyLocal)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) shouldRequeue, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) @@ -1487,7 +1606,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_true", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_true", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -1530,7 +1649,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).NotTo(BeNil()) @@ -1538,7 +1657,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(storageClass.Annotations[controller.RSCStorageClassVolumeSnapshotClassAnnotationKey]).To(Equal(controller.RSCStorageClassVolumeSnapshotClassAnnotationValue)) }) - 
It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_true_to_false", func() { + It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessLocal_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_true_to_false", func(ctx SpecContext) { testName := testNameForAnnotationTests request := reconcile.Request{ @@ -1548,6 +1667,18 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }, } + // Arrange initial state (this spec is "already exists", so we must create it within the spec). + replicatedSCSeed := validSpecReplicatedSCTemplate + replicatedSCSeed.Name = testName + replicatedSCSeed.Spec.VolumeAccess = controller.VolumeAccessLocal + err := createConfigMap(ctx, cl, validCFG.ControllerNamespace, map[string]string{controller.VirtualizationModuleEnabledKey: "true"}) + Expect(err).NotTo(HaveOccurred()) + err = cl.Create(ctx, &replicatedSCSeed) + Expect(err).NotTo(HaveOccurred()) + shouldRequeueInit, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) + Expect(err).NotTo(HaveOccurred()) + Expect(shouldRequeueInit).To(BeFalse()) + configMap, err := getConfigMap(ctx, cl, validCFG.ControllerNamespace) Expect(err).NotTo(HaveOccurred()) Expect(configMap).NotTo(BeNil()) @@ -1569,8 +1700,8 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(configMap.Data[controller.VirtualizationModuleEnabledKey]).To(Equal("false")) replicatedSC := getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Spec.VolumeAccess).To(Equal(controller.VolumeAccessLocal)) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Spec.VolumeAccess).To(Equal(srv.VolumeAccessLocal)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) virtualizationEnabled, err := controller.GetVirtualizationModuleEnabled(ctx, cl, log, types.NamespacedName{Name: controller.ControllerConfigMapName, Namespace: validCFG.ControllerNamespace}) Expect(err).NotTo(HaveOccurred()) @@ -1591,7 +1722,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass = getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).To(Equal(map[string]string{controller.RSCStorageClassVolumeSnapshotClassAnnotationKey: controller.RSCStorageClassVolumeSnapshotClassAnnotationValue})) @@ -1624,7 +1755,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(errors.IsNotFound(err)).To(BeTrue()) }) - It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_StorageClass_already_exists_with_default_annotation_only_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_true", func() { + It("ReconcileReplicatedStorageClass_new_with_valid_config_VolumeAccessLocal_StorageClass_already_exists_with_default_annotation_only_ConfigMap_exist_with_virtualization_key_and_virtualization_value_is_true", func(ctx SpecContext) { testName := testNameForAnnotationTests replicatedSC := validSpecReplicatedSCTemplate replicatedSC.Name = testName @@ -1690,7 +1821,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { 
Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass = getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).NotTo(BeNil()) @@ -1700,7 +1831,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(storageClass.Annotations[controller.RSCStorageClassVolumeSnapshotClassAnnotationKey]).To(Equal(controller.RSCStorageClassVolumeSnapshotClassAnnotationValue)) }) - It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessLocal_StorageClass_already_exists_with_default_and_vritualization_annotations_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_true_to_false", func() { + It("ReconcileReplicatedStorageClass_already_exists_with_valid_config_VolumeAccessLocal_StorageClass_already_exists_with_default_and_vritualization_annotations_ConfigMap_exist_with_virtualization_key_and_virtualization_value_updated_from_true_to_false", func(ctx SpecContext) { testName := testNameForAnnotationTests request := reconcile.Request{ @@ -1710,6 +1841,27 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }, } + // Arrange initial state (this spec is "already exists with default+virtualization annotations"). + replicatedSCSeed := validSpecReplicatedSCTemplate + replicatedSCSeed.Name = testName + replicatedSCSeed.Spec.VolumeAccess = controller.VolumeAccessLocal + err := createConfigMap(ctx, cl, validCFG.ControllerNamespace, map[string]string{controller.VirtualizationModuleEnabledKey: "true"}) + Expect(err).NotTo(HaveOccurred()) + // Pre-create StorageClass with default + virtualization + snapshot annotations. 
+ storageClassSeed := controller.GetNewStorageClass(&replicatedSCSeed, true) + Expect(storageClassSeed).NotTo(BeNil()) + if storageClassSeed.Annotations == nil { + storageClassSeed.Annotations = map[string]string{} + } + storageClassSeed.Annotations[controller.DefaultStorageClassAnnotationKey] = "true" + err = cl.Create(ctx, storageClassSeed) + Expect(err).NotTo(HaveOccurred()) + err = cl.Create(ctx, &replicatedSCSeed) + Expect(err).NotTo(HaveOccurred()) + shouldRequeueInit, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request) + Expect(err).NotTo(HaveOccurred()) + Expect(shouldRequeueInit).To(BeFalse()) + configMap, err := getConfigMap(ctx, cl, validCFG.ControllerNamespace) Expect(err).NotTo(HaveOccurred()) Expect(configMap).NotTo(BeNil()) @@ -1731,8 +1883,8 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(configMap.Data[controller.VirtualizationModuleEnabledKey]).To(Equal("false")) replicatedSC := getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Spec.VolumeAccess).To(Equal(controller.VolumeAccessLocal)) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Spec.VolumeAccess).To(Equal(srv.VolumeAccessLocal)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass := getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).NotTo(BeNil()) @@ -1757,7 +1909,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(shouldRequeue).To(BeFalse()) replicatedSC = getAndValidateReconciledRSC(ctx, cl, testName) - Expect(replicatedSC.Status.Phase).To(Equal(controller.Created)) + Expect(replicatedSC.Status.Phase).To(Equal(srv.RSCPhaseCreated)) storageClass = getAndValidateSC(ctx, cl, replicatedSC) Expect(storageClass.Annotations).NotTo(BeNil()) diff --git a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher.go b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher.go index 67f127a82..e53196c70 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher.go +++ b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher.go @@ -30,7 +30,7 @@ import ( "sigs.k8s.io/controller-runtime/pkg/manager" srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) const ( diff --git a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher_test.go b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher_test.go index 20ae0342a..e477f395e 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher_test.go +++ b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_class_watcher_test.go @@ -17,7 +17,6 @@ limitations under the License. 
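Aside on the seeded "already_exists_*" specs above: they previously relied on objects left behind by earlier "new_*" specs, and with a fresh fake client per spec each one now arranges its own ConfigMap, ReplicatedStorageClass (and, where needed, StorageClass), then runs an initial reconcile before exercising the update path. A condensed illustrative sketch of that arrange step; helper names are taken from the hunks above, everything else is assumed:

```go
It("already_exists sketch: virtualization flag flips", func(ctx SpecContext) {
	request := reconcile.Request{NamespacedName: types.NamespacedName{
		Namespace: testNamespaceConst, Name: testNameForAnnotationTests,
	}}

	// Arrange: seed the ConfigMap and the resource, then reconcile once so the
	// StorageClass genuinely "already exists" before the behaviour under test.
	seed := validSpecReplicatedSCTemplate
	seed.Name = testNameForAnnotationTests
	seed.Spec.VolumeAccess = controller.VolumeAccessLocal
	Expect(createConfigMap(ctx, cl, validCFG.ControllerNamespace,
		map[string]string{controller.VirtualizationModuleEnabledKey: "true"})).To(Succeed())
	Expect(cl.Create(ctx, &seed)).To(Succeed())
	requeue, err := controller.ReconcileReplicatedStorageClassEvent(ctx, cl, log, validCFG, request)
	Expect(err).NotTo(HaveOccurred())
	Expect(requeue).To(BeFalse())

	// Act/assert: flip the ConfigMap value and reconcile again, as the real specs above do.
})
```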
package controller import ( - "context" "testing" client2 "github.com/LINBIT/golinstor/client" @@ -31,13 +30,13 @@ import ( "sigs.k8s.io/controller-runtime/pkg/client/fake" srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) func TestReplicatedStorageClassWatcher(t *testing.T) { var ( cl = newFakeClient() - ctx = context.Background() + ctx = t.Context() log = logger.Logger{} namespace = "test_namespace" ) diff --git a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool.go b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool.go index 0cdb0dd03..f9e37cbcf 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool.go +++ b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool.go @@ -1,5 +1,5 @@ /* -Copyright 2025 Flant JSC +Copyright 2026 Flant JSC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -44,7 +44,7 @@ import ( snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) const ( @@ -196,7 +196,7 @@ func ReconcileReplicatedStoragePoolEvent(ctx context.Context, cl client.Client, } func ReconcileReplicatedStoragePool(ctx context.Context, cl client.Client, lc *lapi.Client, log logger.Logger, replicatedSP *srv.ReplicatedStoragePool) error { // TODO: add shouldRequeue as returned value - ok, msg, lvmVolumeGroups := GetAndValidateVolumeGroups(ctx, cl, replicatedSP.Spec.Type, replicatedSP.Spec.LVMVolumeGroups) + ok, msg, lvmVolumeGroups := GetAndValidateVolumeGroups(ctx, cl, string(replicatedSP.Spec.Type), replicatedSP.Spec.LVMVolumeGroups) if !ok { replicatedSP.Status.Phase = "Failed" replicatedSP.Status.Reason = msg diff --git a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool_test.go b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool_test.go index 3c7954056..bffc26abd 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool_test.go +++ b/images/sds-replicated-volume-controller/pkg/controller/replicated_storage_pool_test.go @@ -1,5 +1,5 @@ /* -Copyright 2025 Flant JSC +Copyright 2026 Flant JSC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
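Aside on the `string(replicatedSP.Spec.Type)` conversion above: with `Spec.Type` now a defined string type, calls into helpers that still accept a plain `string` (such as `GetAndValidateVolumeGroups`) need an explicit conversion. A tiny illustrative sketch; the type and helper here are assumptions, not the repo's actual definitions:

```go
package main

import "fmt"

// Illustrative stand-in for srv.ReplicatedStoragePoolType.
type PoolType string

func validate(kind string) bool { return kind == "LVM" || kind == "LVMThin" }

func main() {
	var t PoolType = "LVMThin"
	// validate(t)                    // would not compile: PoolType is not string
	fmt.Println(validate(string(t))) // explicit conversion at the boundary
}
```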
@@ -30,7 +30,7 @@ import ( snc "github.com/deckhouse/sds-node-configurator/api/v1alpha1" srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/controller" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { @@ -40,10 +40,9 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { ) var ( - ctx = context.Background() - cl = newFakeClient() - log, _ = logger.NewLogger("2") - lc, _ = lapi.NewClient(lapi.Log(log)) + cl = newFakeClient() + log = logger.WrapLorg(GinkgoLogr) + lc, _ = lapi.NewClient(lapi.Log(log)) testReplicatedSP = &srv.ReplicatedStoragePool{ ObjectMeta: metav1.ObjectMeta{ @@ -53,7 +52,7 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { } ) - It("GetReplicatedStoragePool", func() { + It("GetReplicatedStoragePool", func(ctx SpecContext) { err := cl.Create(ctx, testReplicatedSP) Expect(err).NotTo(HaveOccurred()) @@ -63,7 +62,7 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { Expect(replicatedSP.Namespace).To(Equal(testNameSpace)) }) - It("UpdateReplicatedStoragePool", func() { + It("UpdateReplicatedStoragePool", func(ctx SpecContext) { const ( testLblKey = "test_label_key" testLblValue = "test_label_value" @@ -109,7 +108,7 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { Expect(m[""]).To(Equal("value3")) }) - It("GetLVMVolumeGroup", func() { + It("GetLVMVolumeGroup", func(ctx SpecContext) { testLvm := &snc.LVMVolumeGroup{ ObjectMeta: metav1.ObjectMeta{ Name: testName, @@ -124,7 +123,7 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { Expect(lvm.Name).To(Equal(testName)) }) - It("Validations", func() { + It("Validations", func(ctx SpecContext) { const ( LVMVGOneOnFirstNodeName = "lvmVG-1-on-FirstNode" ActualVGOneOnFirstNodeName = "actualVG-1-on-FirstNode" @@ -146,8 +145,8 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { GoodReplicatedStoragePoolName = "goodreplicatedoperatorstoragepool" BadReplicatedStoragePoolName = "badreplicatedoperatorstoragepool" - TypeLVMThin = "LVMThin" - TypeLVM = "LVM" + TypeLVMThin = srv.ReplicatedStoragePoolType("LVMThin") + TypeLVM = srv.ReplicatedStoragePoolType("LVM") LVMVGTypeLocal = "Local" LVMVGTypeShared = "Shared" ) @@ -178,13 +177,13 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { Expect(err).NotTo(HaveOccurred()) goodReplicatedStoragePoolrequest := reconcile.Request{NamespacedName: types.NamespacedName{Namespace: goodReplicatedStoragePool.ObjectMeta.Namespace, Name: goodReplicatedStoragePool.ObjectMeta.Name}} - shouldRequeue, err := controller.ReconcileReplicatedStoragePoolEvent(ctx, cl, goodReplicatedStoragePoolrequest, *log, lc) + shouldRequeue, err := controller.ReconcileReplicatedStoragePoolEvent(ctx, cl, goodReplicatedStoragePoolrequest, log, lc) Expect(err).To(HaveOccurred()) Expect(shouldRequeue).To(BeTrue()) reconciledGoodReplicatedStoragePool, err := controller.GetReplicatedStoragePool(ctx, cl, testNameSpace, GoodReplicatedStoragePoolName) Expect(err).NotTo(HaveOccurred()) - Expect(reconciledGoodReplicatedStoragePool.Status.Phase).To(Equal("Failed")) + Expect(reconciledGoodReplicatedStoragePool.Status.Phase).To(Equal(srv.RSPPhaseFailed)) 
Expect(reconciledGoodReplicatedStoragePool.Status.Reason).To(Equal("lvmVG-1-on-FirstNode: Error getting LVMVolumeGroup: lvmvolumegroups.storage.deckhouse.io \"lvmVG-1-on-FirstNode\" not found\nlvmVG-1-on-SecondNode: Error getting LVMVolumeGroup: lvmvolumegroups.storage.deckhouse.io \"lvmVG-1-on-SecondNode\" not found\n")) // Negative test with bad LVMVolumeGroups. @@ -197,13 +196,13 @@ var _ = Describe(controller.ReplicatedStoragePoolControllerName, func() { Expect(err).NotTo(HaveOccurred()) badReplicatedStoragePoolrequest := reconcile.Request{NamespacedName: types.NamespacedName{Namespace: badReplicatedStoragePool.ObjectMeta.Namespace, Name: badReplicatedStoragePool.ObjectMeta.Name}} - shouldRequeue, err = controller.ReconcileReplicatedStoragePoolEvent(ctx, cl, badReplicatedStoragePoolrequest, *log, lc) + shouldRequeue, err = controller.ReconcileReplicatedStoragePoolEvent(ctx, cl, badReplicatedStoragePoolrequest, log, lc) Expect(err).To(HaveOccurred()) Expect(shouldRequeue).To(BeTrue()) reconciledBadReplicatedStoragePool, err := controller.GetReplicatedStoragePool(ctx, cl, testNameSpace, BadReplicatedStoragePoolName) Expect(err).NotTo(HaveOccurred()) - Expect(reconciledBadReplicatedStoragePool.Status.Phase).To(Equal("Failed")) + Expect(reconciledBadReplicatedStoragePool.Status.Phase).To(Equal(srv.RSPPhaseFailed)) }) }) @@ -236,7 +235,7 @@ func CreateLVMVolumeGroup(ctx context.Context, cl client.WithWatch, lvmVolumeGro return err } -func CreateReplicatedStoragePool(ctx context.Context, cl client.WithWatch, replicatedStoragePoolName, namespace, lvmType string, lvmVolumeGroups []map[string]string) error { +func CreateReplicatedStoragePool(ctx context.Context, cl client.WithWatch, replicatedStoragePoolName, namespace string, lvmType srv.ReplicatedStoragePoolType, lvmVolumeGroups []map[string]string) error { volumeGroups := make([]srv.ReplicatedStoragePoolLVMVolumeGroups, 0) for i := range lvmVolumeGroups { for key, value := range lvmVolumeGroups[i] { diff --git a/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations.go b/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations.go index 0395a3bf0..9ca02b599 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations.go +++ b/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations.go @@ -32,7 +32,7 @@ import ( "sigs.k8s.io/controller-runtime/pkg/reconcile" "sigs.k8s.io/controller-runtime/pkg/source" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) const ( diff --git a/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_func.go b/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_func.go index f8a84a7b2..47217bc68 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_func.go +++ b/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_func.go @@ -28,7 +28,7 @@ import ( "sigs.k8s.io/controller-runtime/pkg/reconcile" srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) const ( diff --git a/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_test.go 
b/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_test.go index 41f8a3188..d6822f9a0 100644 --- a/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_test.go +++ b/images/sds-replicated-volume-controller/pkg/controller/storage_class_annotations_test.go @@ -17,7 +17,6 @@ limitations under the License. package controller_test import ( - "context" "fmt" "maps" @@ -34,7 +33,7 @@ import ( srv "github.com/deckhouse/sds-replicated-volume/api/v1alpha1" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/config" "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/controller" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { @@ -45,7 +44,6 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { ) var ( - ctx context.Context cl client.WithWatch log logger.Logger @@ -100,9 +98,8 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { ) BeforeEach(func() { - ctx = context.Background() cl = newFakeClient() - log = logger.Logger{} + log = logger.WrapLorg(GinkgoLogr) storageClassResource = nil configMap = nil replicatedStorageClassResource = nil @@ -124,7 +121,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { }, } }) - JustBeforeEach(func() { + JustBeforeEach(func(ctx SpecContext) { err := cl.Create(ctx, storageClassResource) Expect(err).NotTo(HaveOccurred()) if storageClassResource.Annotations != nil { @@ -134,7 +131,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { err = cl.Create(ctx, replicatedStorageClassResource) Expect(err).NotTo(HaveOccurred()) }) - JustAfterEach(func() { + JustAfterEach(func(ctx SpecContext) { storageClass, err := getSC(ctx, cl, storageClassResource.Name, storageClassResource.Namespace) Expect(err).NotTo(HaveOccurred()) Expect(storageClass).NotTo(BeNil()) @@ -179,11 +176,11 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { }, } }) - JustBeforeEach(func() { + JustBeforeEach(func(ctx SpecContext) { err := cl.Create(ctx, configMap) Expect(err).NotTo(HaveOccurred()) }) - JustAfterEach(func() { + JustAfterEach(func(ctx SpecContext) { err := cl.Delete(ctx, configMap) Expect(err).NotTo(HaveOccurred()) @@ -196,7 +193,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { }) } else { When("ConfigMap does not exist", func() { - JustBeforeEach(func() { + JustBeforeEach(func(ctx SpecContext) { var err error configMap, err := getConfigMap(ctx, cl, validCFG.ControllerNamespace) @@ -225,7 +222,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { storageClassResource.Parameters[controller.StorageClassParamAllowRemoteVolumeAccessKey] = "true" }) foo() - JustAfterEach(func() { + JustAfterEach(func(ctx SpecContext) { storageClass, err := getSC(ctx, cl, storageClassResource.Name, storageClassResource.Namespace) Expect(err).NotTo(HaveOccurred()) Expect(storageClass.Parameters).To(HaveKeyWithValue(controller.StorageClassParamAllowRemoteVolumeAccessKey, "true")) @@ -246,7 +243,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { Expect(storageClassResource.Parameters).To(HaveKeyWithValue(controller.StorageClassParamAllowRemoteVolumeAccessKey, "false")) }) foo() - JustAfterEach(func() { + JustAfterEach(func(ctx SpecContext) 
{ storageClass, err := getSC(ctx, cl, storageClassResource.Name, storageClassResource.Namespace) Expect(err).NotTo(HaveOccurred()) Expect(storageClass.Parameters).To(HaveKeyWithValue(controller.StorageClassParamAllowRemoteVolumeAccessKey, "false")) @@ -300,7 +297,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { } configMap.Data[controller.VirtualizationModuleEnabledKey] = strValue }) - JustBeforeEach(func() { + JustBeforeEach(func(ctx SpecContext) { virtualizationEnabled, err := controller.GetVirtualizationModuleEnabled(ctx, cl, log, request.NamespacedName) Expect(err).NotTo(HaveOccurred()) Expect(virtualizationEnabled).To(BeEquivalentTo(value)) @@ -310,7 +307,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { } itHasNoAnnotations := func() { - It("has no annotations", func() { + It("has no annotations", func(ctx SpecContext) { shouldRequeue, err := controller.ReconcileControllerConfigMapEvent(ctx, cl, log, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) @@ -323,7 +320,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { } itHasOnlyDefaultStorageClassAnnotationKey := func() { - It("has only default storage class annotation", func() { + It("has only default storage class annotation", func(ctx SpecContext) { shouldRequeue, err := controller.ReconcileControllerConfigMapEvent(ctx, cl, log, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) @@ -370,7 +367,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { whenVirtualizationIs(true, func() { whenDefaultAnnotationExistsIs(false, func() { whenAllowRemoteVolumeAccessKeyIs(false, func() { - It("has only access mode annotation", func() { + It("has only access mode annotation", func(ctx SpecContext) { shouldRequeue, err := controller.ReconcileControllerConfigMapEvent(ctx, cl, log, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) @@ -389,7 +386,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { }) whenDefaultAnnotationExistsIs(true, func() { whenAllowRemoteVolumeAccessKeyIs(false, func() { - It("has default storage class and access mode annotations", func() { + It("has default storage class and access mode annotations", func(ctx SpecContext) { shouldRequeue, err := controller.ReconcileControllerConfigMapEvent(ctx, cl, log, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) @@ -419,7 +416,7 @@ var _ = Describe(controller.StorageClassAnnotationsCtrlName, func() { itHasOnlyDefaultStorageClassAnnotationKey() - It("parameter StorageClassParamAllowRemoteVolumeAccessKey set to false and another provisioner", func() { + It("parameter StorageClassParamAllowRemoteVolumeAccessKey set to false and another provisioner", func(ctx SpecContext) { shouldRequeue, err := controller.ReconcileControllerConfigMapEvent(ctx, cl, log, request) Expect(err).NotTo(HaveOccurred()) Expect(shouldRequeue).To(BeFalse()) diff --git a/images/sds-replicated-volume-controller/pkg/logger/logger.go b/images/sds-replicated-volume-controller/pkg/logger/logger.go deleted file mode 100644 index ce8489723..000000000 --- a/images/sds-replicated-volume-controller/pkg/logger/logger.go +++ /dev/null @@ -1,87 +0,0 @@ -/* -Copyright 2025 Flant JSC - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package logger - -import ( - "fmt" - "strconv" - - "github.com/go-logr/logr" - "k8s.io/klog/v2/textlogger" -) - -const ( - ErrorLevel Verbosity = "0" - WarningLevel Verbosity = "1" - InfoLevel Verbosity = "2" - DebugLevel Verbosity = "3" - TraceLevel Verbosity = "4" -) - -const ( - warnLvl = iota + 1 - infoLvl - debugLvl - traceLvl -) - -type ( - Verbosity string -) - -type Logger struct { - log logr.Logger -} - -func NewLogger(level Verbosity) (*Logger, error) { - v, err := strconv.Atoi(string(level)) - if err != nil { - return nil, err - } - - log := textlogger.NewLogger(textlogger.NewConfig(textlogger.Verbosity(v))).WithCallDepth(1) - - return &Logger{log: log}, nil -} - -func (l Logger) GetLogger() logr.Logger { - return l.log -} - -func (l Logger) Error(err error, message string, keysAndValues ...interface{}) { - l.log.Error(err, fmt.Sprintf("ERROR %s", message), keysAndValues...) -} - -func (l Logger) Warning(message string, keysAndValues ...interface{}) { - l.log.V(warnLvl).Info(fmt.Sprintf("WARNING %s", message), keysAndValues...) -} - -func (l Logger) Info(message string, keysAndValues ...interface{}) { - l.log.V(infoLvl).Info(fmt.Sprintf("INFO %s", message), keysAndValues...) -} - -func (l Logger) Debug(message string, keysAndValues ...interface{}) { - l.log.V(debugLvl).Info(fmt.Sprintf("DEBUG %s", message), keysAndValues...) -} - -func (l Logger) Trace(message string, keysAndValues ...interface{}) { - l.log.V(traceLvl).Info(fmt.Sprintf("TRACE %s", message), keysAndValues...) 
-} - -func (l *Logger) Printf(format string, args ...interface{}) { - l.log.V(traceLvl).Info("%s", fmt.Sprintf(format, args...)) -} diff --git a/images/sds-replicated-volume-controller/pkg/sdk/framework/reconcile_helper/reconciler_core.go b/images/sds-replicated-volume-controller/pkg/sdk/framework/reconcile_helper/reconciler_core.go index ca013bf72..f2103f7fc 100644 --- a/images/sds-replicated-volume-controller/pkg/sdk/framework/reconcile_helper/reconciler_core.go +++ b/images/sds-replicated-volume-controller/pkg/sdk/framework/reconcile_helper/reconciler_core.go @@ -22,7 +22,7 @@ import ( "sigs.k8s.io/controller-runtime/pkg/cache" "sigs.k8s.io/controller-runtime/pkg/client" - "github.com/deckhouse/sds-replicated-volume/images/sds-replicated-volume-controller/pkg/logger" + "github.com/deckhouse/sds-replicated-volume/lib/go/common/logger" ) type ReconcilerOptions struct { diff --git a/images/sds-replicated-volume-controller/werf.inc.yaml b/images/sds-replicated-volume-controller/werf.inc.yaml index 31dc6e9c4..c5583abe6 100644 --- a/images/sds-replicated-volume-controller/werf.inc.yaml +++ b/images/sds-replicated-volume-controller/werf.inc.yaml @@ -9,6 +9,7 @@ git: includePaths: - api - images/{{ $.ImageName }} + - lib/go stageDependencies: install: - '**/*' diff --git a/images/webhooks/go.mod b/images/webhooks/go.mod index e41ea599d..6ea8b924e 100644 --- a/images/webhooks/go.mod +++ b/images/webhooks/go.mod @@ -3,70 +3,250 @@ module github.com/deckhouse/sds-replicated-volume/images/webhooks go 1.24.11 require ( - github.com/deckhouse/sds-common-lib v0.5.0 - github.com/deckhouse/sds-node-configurator/api v0.0.0-20250424082358-e271071c2a57 - github.com/deckhouse/sds-replicated-volume/api v0.0.0-20240812165341-a73e664454b9 - github.com/go-logr/logr v1.4.2 + github.com/deckhouse/sds-common-lib v0.6.3 + github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da + github.com/deckhouse/sds-replicated-volume/api v0.0.0-20250907192450-6e1330e9e380 + github.com/go-logr/logr v1.4.3 github.com/sirupsen/logrus v1.9.3 - github.com/slok/kubewebhook/v2 v2.6.0 - k8s.io/api v0.32.3 - k8s.io/apiextensions-apiserver v0.32.3 - k8s.io/apimachinery v0.32.3 - k8s.io/client-go v0.32.3 + github.com/slok/kubewebhook/v2 v2.7.0 + k8s.io/api v0.34.3 + k8s.io/apiextensions-apiserver v0.34.3 + k8s.io/apimachinery v0.34.3 + k8s.io/client-go v0.34.3 k8s.io/klog/v2 v2.130.1 - sigs.k8s.io/controller-runtime v0.20.4 + sigs.k8s.io/controller-runtime v0.22.4 ) -replace github.com/deckhouse/sds-replicated-volume/api => ../../api - require ( + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect + github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + 
github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect - github.com/emicklei/go-restful/v3 v3.12.1 // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/emicklei/go-restful/v3 v3.13.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/evanphx/json-patch v5.9.0+incompatible // indirect github.com/evanphx/json-patch/v5 v5.9.11 // indirect - github.com/fsnotify/fsnotify v1.8.0 // indirect - github.com/fxamacker/cbor/v2 v2.7.0 // indirect - github.com/go-openapi/jsonpointer v0.21.0 // indirect - github.com/go-openapi/jsonreference v0.21.0 // indirect - github.com/go-openapi/swag v0.23.0 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect github.com/gogo/protobuf v1.3.2 // indirect - github.com/golang/protobuf v1.5.4 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // 
indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect github.com/google/btree v1.1.3 // indirect - github.com/google/gnostic-models v0.6.9 // indirect + github.com/google/gnostic-models v0.7.0 // indirect github.com/google/go-cmp v0.7.0 // indirect - github.com/google/gofuzz v1.2.0 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect github.com/google/uuid v1.6.0 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect - github.com/klauspost/compress v1.17.11 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect github.com/mailru/easyjson v0.9.0 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect - github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/moricho/tparallel v0.3.2 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect - github.com/pkg/errors v0.9.1 // indirect - github.com/prometheus/client_golang v1.20.5 // indirect - github.com/prometheus/client_model v0.6.1 // indirect - github.com/prometheus/common v0.61.0 // indirect - github.com/prometheus/procfs v0.15.1 // indirect - github.com/rogpeppe/go-internal v1.13.1 // indirect - github.com/spf13/pflag v1.0.5 // indirect + 
github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/onsi/ginkgo/v2 v2.27.2 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect github.com/x448/float16 v0.8.4 // indirect - golang.org/x/net v0.40.0 // indirect - golang.org/x/oauth2 v0.27.0 // indirect - golang.org/x/sync v0.14.0 // indirect - golang.org/x/sys v0.33.0 // indirect - golang.org/x/term v0.32.0 // indirect - golang.org/x/text v0.25.0 // indirect - golang.org/x/time v0.11.0 // indirect - gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect - google.golang.org/protobuf v1.36.5 // indirect - gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter 
v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + golang.org/x/tools v0.38.0 // indirect + gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 // indirect - k8s.io/utils v0.0.0-20241210054802-24370beab758 // indirect - sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect - sigs.k8s.io/structured-merge-diff/v4 v4.5.0 // indirect - sigs.k8s.io/yaml v1.4.0 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +replace github.com/deckhouse/sds-replicated-volume/api => ../../api + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + github.com/onsi/ginkgo/v2/ginkgo ) diff --git a/images/webhooks/go.sum b/images/webhooks/go.sum index 2b645201d..e1064ac47 100644 --- a/images/webhooks/go.sum +++ b/images/webhooks/go.sum @@ -1,195 +1,705 @@ +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 
h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 
h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/deckhouse/sds-common-lib v0.5.0 h1:dDERy3iKz4UsP2dLFCmoJivaAlUX4+gpdqsQ5l2XnD4= -github.com/deckhouse/sds-common-lib v0.5.0/go.mod h1:tAZI7ZaVeJi5/Fe5Mebw3d6NC4nTHUOOTwZFnHHzxFU= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20250424082358-e271071c2a57 h1:13GafAaD2xfKtklUnNoNkMtYhYSWwC7wOCAChB7yH1w= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20250424082358-e271071c2a57/go.mod h1:asf5aASltd0t84HVMO95dgrZlLwYO7VJbfLsrL2NjsI= -github.com/emicklei/go-restful/v3 v3.12.1 h1:PJMDIM/ak7btuL8Ex0iYET9hxM3CI2sjZtzpL63nKAU= -github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= -github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84= -github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/deckhouse/sds-common-lib v0.6.3 h1:k0OotLuQaKuZt8iyph9IusDixjAE0MQRKyuTe2wZP3I= 
+github.com/deckhouse/sds-common-lib v0.6.3/go.mod h1:UHZMKkqEh6RAO+vtA7dFTwn/2m5lzfPn0kfULBmDf2o= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da h1:LFk9OC/+EVWfYDRe54Hip4kVKwjNcPhHZTftlm5DCpg= +github.com/deckhouse/sds-node-configurator/api v0.0.0-20251112082451-591b11c7b2da/go.mod h1:X5ftUa4MrSXMKiwQYa4lwFuGtrs+HoCNa8Zl6TPrGo8= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= +github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= -github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M= -github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= -github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= -github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= -github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= -github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod 
h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= -github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= -github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= -github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ= -github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4= -github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE= -github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= +github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= +github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod 
h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= +github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock 
v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= -github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= -github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw= -github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw= -github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/google/pprof 
v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo= -github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 
h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc= -github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 
h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod 
h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= -github.com/onsi/ginkgo/v2 v2.22.0 h1:Yed107/8DjTr0lKCNt7Dn8yQ6ybuDRQoMGrNFKzMfHg= -github.com/onsi/ginkgo/v2 v2.22.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo= -github.com/onsi/gomega v1.36.1 h1:bJDPBO7ibjxcbHMgSCoo4Yj18UWbKDlLwX1x9sybDcw= -github.com/onsi/gomega v1.36.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod 
h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y= -github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE= -github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E= -github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY= -github.com/prometheus/common v0.61.0 h1:3gv/GThfX0cV2lpO7gkTUwZru38mxevy90Bj8YFSRQQ= -github.com/prometheus/common v0.61.0/go.mod h1:zr29OCN/2BsJRaFwG8QOBr41D6kkchKbpeNH7pAjb/s= -github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= -github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= -github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII= -github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod 
h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= -github.com/slok/kubewebhook/v2 v2.6.0 h1:NMDDXx219OcNDc17ZYpqGXW81/jkBNmkdEwFDcZDVcA= -github.com/slok/kubewebhook/v2 v2.6.0/go.mod h1:EoPfBo8lzgU1lmI1DSY/Fpwu+cdr4lZnzY4Tmg5sHe0= -github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/slok/kubewebhook/v2 v2.7.0 h1:0Wq3IVBAKDQROiB4ugxzypKUKN4FI50Wd+nyKGNiH1w= +github.com/slok/kubewebhook/v2 v2.7.0/go.mod h1:H9QZ1Z+0RpuE50y4aZZr85rr6d/4LSYX+hbvK6Oe+T4= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod 
h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA= -github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 
h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod 
h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= 
+golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY= -golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds= -golang.org/x/oauth2 v0.27.0 h1:da9Vo7/tDv5RH/7nZDz1eMGS/q1Vv1N/7FCrBhI9I3M= -golang.org/x/oauth2 v0.27.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ= -golang.org/x/sync 
v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw= -golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= -golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg= -golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= 
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4= -golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA= -golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0= -golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ= -golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools 
v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw= -gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= -google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM= -google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= +gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0= +gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= -gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= -gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 
v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -k8s.io/api v0.32.3 h1:Hw7KqxRusq+6QSplE3NYG4MBxZw1BZnq4aP4cJVINls= -k8s.io/api v0.32.3/go.mod h1:2wEDTXADtm/HA7CCMD8D8bK4yuBUptzaRhYcYEEYA3k= -k8s.io/apiextensions-apiserver v0.32.3 h1:4D8vy+9GWerlErCwVIbcQjsWunF9SUGNu7O7hiQTyPY= -k8s.io/apiextensions-apiserver v0.32.3/go.mod h1:8YwcvVRMVzw0r1Stc7XfGAzB/SIVLunqApySV5V7Dss= -k8s.io/apimachinery v0.32.3 h1:JmDuDarhDmA/Li7j3aPrwhpNBA94Nvk5zLeOge9HH1U= -k8s.io/apimachinery v0.32.3/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE= -k8s.io/client-go v0.32.3 h1:RKPVltzopkSgHS7aS98QdscAgtgah/+zmpAogooIqVU= -k8s.io/client-go v0.32.3/go.mod h1:3v0+3k4IcT9bXTc4V2rt+d2ZPPG700Xy6Oi0Gdl2PaY= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= +k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= +k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= +k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= -k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 h1:hcha5B1kVACrLujCKLbr8XWMxCxzQx42DY8QKYJrDLg= -k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7/go.mod h1:GewRfANuJ70iYzvn+i4lezLDAFzvjxZYK1gn1lWcfas= -k8s.io/utils v0.0.0-20241210054802-24370beab758 h1:sdbE21q2nlQtFh65saZY+rRM6x6aJJI8IUa1AmH/qa0= -k8s.io/utils v0.0.0-20241210054802-24370beab758/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -sigs.k8s.io/controller-runtime v0.20.4 h1:X3c+Odnxz+iPTRobG4tp092+CvBU9UK0t/bRf+n0DGU= -sigs.k8s.io/controller-runtime v0.20.4/go.mod h1:xg2XB0K5ShQzAgsoujxuKN4LNXR2LfwwHsPj7Iaw+XY= -sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 h1:gBQPwqORJ8d8/YNZWEjoZs7npUVDpVXUUOFfW6CgAqE= -sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= -sigs.k8s.io/structured-merge-diff/v4 v4.5.0 h1:nbCitCK2hfnhyiKo6uf2HxUPTCodY6Qaf85SbDIaMBk= -sigs.k8s.io/structured-merge-diff/v4 v4.5.0/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4= -sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= -sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod 
h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A= +sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/images/webhooks/handlers/rspValidator.go b/images/webhooks/handlers/rspValidator.go index 39bdce7d9..d7b89d670 100644 --- a/images/webhooks/handlers/rspValidator.go +++ b/images/webhooks/handlers/rspValidator.go @@ -124,7 +124,7 @@ func RSPValidate(ctx context.Context, _ *model.AdmissionReview, obj metav1.Objec } if thinPoolExists { - ctx := context.Background() + ctx := context.Background() // TODO: can't we use previous context or derive from it? cl, err := NewKubeClient("") if err != nil { klog.Fatal(err.Error()) diff --git a/lib/go/common/api/patch.go b/lib/go/common/api/patch.go new file mode 100644 index 000000000..0902a2278 --- /dev/null +++ b/lib/go/common/api/patch.go @@ -0,0 +1,149 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package api + +import ( + "context" + "errors" + "fmt" + "reflect" + "time" + + kerrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/client-go/util/retry" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// ConflictRetryBackoff is the backoff policy used by PatchWithConflictRetry and +// PatchStatusWithConflictRetry to retry conditional patches on transient conflicts. +var ConflictRetryBackoff = wait.Backoff{ + Steps: 6, + Duration: 1 * time.Millisecond, + Cap: 50 * time.Millisecond, + Factor: 2.0, + Jitter: 0.25, +} + +var errReloadDidNotHappen = errors.New("resource reload did not happen") + +// PatchStatusWithConflictRetry applies a conditional, retriable merge-patch to the Status subresource. +// +// The patch is conditional via optimistic locking: it uses MergeFrom with +// a resourceVersion precondition, so the update only succeeds if the current +// resourceVersion matches. On 409 Conflict, the operation is retried using the +// ConflictRetryBackoff policy. If a conflict is detected and reloading the +// resource yields the same resourceVersion, the condition is treated as +// transient and retried as well; no special error is returned for this case. +// +// The provided patchFn must mutate the given object. 
If patchFn returns an +// error, no patch is sent and that error is returned. +// +// The resource must be a non-nil pointer to a struct; otherwise this function panics. +func PatchStatusWithConflictRetry[T client.Object]( + ctx context.Context, + cl client.Client, + resource T, + patchFn func(resource T) error, +) error { + return patch(ctx, cl, true, resource, patchFn) +} + +// PatchWithConflictRetry applies a conditional, retriable merge-patch to the main resource (spec/metadata). +// +// The patch is conditional via optimistic locking: it uses MergeFrom with +// a resourceVersion precondition, so the update only succeeds if the current +// resourceVersion matches. On 409 Conflict, the operation is retried using the +// ConflictRetryBackoff policy. If a conflict is detected and reloading the +// resource yields the same resourceVersion, the condition is treated as +// transient and retried as well; no special error is returned for this case. +// +// The provided patchFn must mutate the given object. If patchFn returns an +// error, no patch is sent and that error is returned. +// +// The resource must be a non-nil pointer to a struct; otherwise this function panics. +func PatchWithConflictRetry[T client.Object]( + ctx context.Context, + cl client.Client, + resource T, + patchFn func(resource T) error, +) error { + return patch(ctx, cl, false, resource, patchFn) +} + +func patch[T client.Object]( + ctx context.Context, + cl client.Client, + status bool, + resource T, + patchFn func(resource T) error, +) error { + assertNonNilPtrToStruct(resource) + + var conflictedResourceVersion string + + return retry.OnError( + ConflictRetryBackoff, + func(err error) bool { + return kerrors.IsConflict(err) || err == errReloadDidNotHappen + }, + func() error { + resourceVersion := resource.GetResourceVersion() + + if resourceVersion == conflictedResourceVersion { + err := cl.Get(ctx, client.ObjectKeyFromObject(resource), resource) + if err != nil { + return err + } + if resource.GetResourceVersion() == conflictedResourceVersion { + return errReloadDidNotHappen + } + } + + patch := client.MergeFromWithOptions( + resource.DeepCopyObject().(client.Object), + client.MergeFromWithOptimisticLock{}, + ) + + if err := patchFn(resource); err != nil { + return err + } + + var err error + if status { + err = cl.Status().Patch(ctx, resource, patch) + } else { + err = cl.Patch(ctx, resource, patch) + } + if kerrors.IsConflict(err) { + conflictedResourceVersion = resourceVersion + } + + return err + }, + ) +} + +func assertNonNilPtrToStruct[T any](obj T) { + rt := reflect.TypeFor[T]() + if rt.Kind() != reflect.Pointer || rt.Elem().Kind() != reflect.Struct { + panic(fmt.Sprintf("T must be a pointer to a struct; got %s", rt)) + } + if reflect.ValueOf(obj).IsNil() { + panic("obj must not be nil") + } +} diff --git a/lib/go/common/go.mod b/lib/go/common/go.mod new file mode 100644 index 000000000..25bbbd685 --- /dev/null +++ b/lib/go/common/go.mod @@ -0,0 +1,254 @@ +module github.com/deckhouse/sds-replicated-volume/lib/go/common + +go 1.24.11 + +require ( + github.com/go-logr/zapr v1.3.0 + go.uber.org/zap v1.27.0 + k8s.io/apimachinery v0.34.3 + k8s.io/client-go v0.34.3 + sigs.k8s.io/controller-runtime v0.22.4 +) + +require ( + 4d63.com/gocheckcompilerdirectives v1.3.0 // indirect + 4d63.com/gochecknoglobals v0.2.2 // indirect + github.com/4meepo/tagalign v1.4.2 // indirect + github.com/Abirdcfly/dupword v0.1.3 // indirect + github.com/Antonboom/errname v1.0.0 // indirect + github.com/Antonboom/nilnil v1.0.1 // indirect 
+ github.com/Antonboom/testifylint v1.5.2 // indirect + github.com/BurntSushi/toml v1.5.0 // indirect + github.com/Crocmagnon/fatcontext v0.7.1 // indirect + github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect + github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 // indirect + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.2.1 // indirect + github.com/alecthomas/go-check-sumtype v0.3.1 // indirect + github.com/alexkohler/nakedret/v2 v2.0.5 // indirect + github.com/alexkohler/prealloc v1.0.0 // indirect + github.com/alingse/asasalint v0.0.11 // indirect + github.com/alingse/nilnesserr v0.1.2 // indirect + github.com/ashanbrown/forbidigo v1.6.0 // indirect + github.com/ashanbrown/makezero v1.2.0 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/bkielbasa/cyclop v1.2.3 // indirect + github.com/blizzy78/varnamelen v0.8.0 // indirect + github.com/bombsimon/wsl/v4 v4.5.0 // indirect + github.com/breml/bidichk v0.3.2 // indirect + github.com/breml/errchkjson v0.4.0 // indirect + github.com/butuzov/ireturn v0.3.1 // indirect + github.com/butuzov/mirror v1.3.0 // indirect + github.com/catenacyber/perfsprint v0.8.2 // indirect + github.com/ccojocar/zxcvbn-go v1.0.2 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/charithe/durationcheck v0.0.10 // indirect + github.com/chavacava/garif v0.1.0 // indirect + github.com/ckaznocha/intrange v0.3.0 // indirect + github.com/curioswitch/go-reassign v0.3.0 // indirect + github.com/daixiang0/gci v0.13.5 // indirect + github.com/denis-tingaikin/go-header v0.5.0 // indirect + github.com/ettle/strcase v0.2.0 // indirect + github.com/evanphx/json-patch v5.9.0+incompatible // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structtag v1.2.0 // indirect + github.com/firefart/nonamedreturns v1.0.5 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect + github.com/fzipp/gocyclo v0.6.0 // indirect + github.com/ghostiam/protogetter v0.3.9 // indirect + github.com/go-critic/go-critic v0.12.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/go-toolsmith/astcast v1.1.0 // indirect + github.com/go-toolsmith/astcopy v1.1.0 // indirect + github.com/go-toolsmith/astequal v1.2.0 // indirect + github.com/go-toolsmith/astfmt v1.1.0 // indirect + github.com/go-toolsmith/astp v1.1.0 // indirect + github.com/go-toolsmith/strparse v1.1.0 // indirect + github.com/go-toolsmith/typep v1.1.0 // indirect + github.com/go-viper/mapstructure/v2 v2.4.0 // indirect + github.com/go-xmlfmt/xmlfmt v1.1.3 // indirect + github.com/gobwas/glob v0.2.3 // indirect + github.com/gofrs/flock v0.12.1 // indirect + github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 // indirect + github.com/golangci/go-printf-func-name v0.1.0 // indirect + github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d // indirect + github.com/golangci/golangci-lint v1.64.8 // indirect + github.com/golangci/misspell v0.6.0 // indirect + github.com/golangci/plugin-module-register v0.1.1 // indirect + github.com/golangci/revgrep v0.8.0 // indirect + github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed // indirect + github.com/google/btree v1.1.3 // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 // indirect + github.com/gordonklaus/ineffassign v0.1.0 // indirect + github.com/gostaticanalysis/analysisutil v0.7.1 // indirect + github.com/gostaticanalysis/comment 
v1.5.0 // indirect + github.com/gostaticanalysis/forcetypeassert v0.2.0 // indirect + github.com/gostaticanalysis/nilerr v0.1.1 // indirect + github.com/hashicorp/go-immutable-radix/v2 v2.1.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect + github.com/hexops/gotextdiff v1.0.3 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect + github.com/jgautheron/goconst v1.7.1 // indirect + github.com/jingyugao/rowserrcheck v1.1.1 // indirect + github.com/jjti/go-spancheck v0.6.4 // indirect + github.com/julz/importas v0.2.0 // indirect + github.com/karamaru-alpha/copyloopvar v1.2.1 // indirect + github.com/kisielk/errcheck v1.9.0 // indirect + github.com/kkHAIKE/contextcheck v1.1.6 // indirect + github.com/kulti/thelper v0.6.3 // indirect + github.com/kunwardeep/paralleltest v1.0.10 // indirect + github.com/lasiar/canonicalheader v1.1.2 // indirect + github.com/ldez/exptostd v0.4.2 // indirect + github.com/ldez/gomoddirectives v0.6.1 // indirect + github.com/ldez/grignotin v0.9.0 // indirect + github.com/ldez/tagliatelle v0.7.1 // indirect + github.com/ldez/usetesting v0.4.2 // indirect + github.com/leonklingele/grouper v1.1.2 // indirect + github.com/macabu/inamedparam v0.1.3 // indirect + github.com/maratori/testableexamples v1.0.0 // indirect + github.com/maratori/testpackage v1.1.1 // indirect + github.com/matoous/godox v1.1.0 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-runewidth v0.0.16 // indirect + github.com/mgechev/revive v1.7.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/moricho/tparallel v0.3.2 // indirect + github.com/nakabonne/nestif v0.3.1 // indirect + github.com/nishanths/exhaustive v0.12.0 // indirect + github.com/nishanths/predeclared v0.2.2 // indirect + github.com/nunnatsa/ginkgolinter v0.19.1 // indirect + github.com/olekukonko/tablewriter v0.0.5 // indirect + github.com/onsi/ginkgo/v2 v2.27.2 // indirect + github.com/onsi/gomega v1.38.3 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/polyfloyd/go-errorlint v1.7.1 // indirect + github.com/prometheus/client_golang v1.23.2 // indirect + github.com/prometheus/client_model v0.6.2 // indirect + github.com/prometheus/common v0.66.1 // indirect + github.com/prometheus/procfs v0.17.0 // indirect + github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 // indirect + github.com/quasilyte/go-ruleguard/dsl v0.3.22 // indirect + github.com/quasilyte/gogrep v0.5.0 // indirect + github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 // indirect + github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect + github.com/raeperd/recvcheck v0.2.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect + github.com/ryancurrah/gomodguard v1.3.5 // indirect + github.com/ryanrolds/sqlclosecheck v0.5.1 // indirect + github.com/sagikazarmark/locafero v0.7.0 // indirect + github.com/sanposhiho/wastedassign/v2 v2.1.0 // indirect + github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect + github.com/sashamelentyev/interfacebloat v1.1.0 // indirect + github.com/sashamelentyev/usestdlibvars v1.28.0 // indirect + github.com/securego/gosec/v2 v2.22.2 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + 
github.com/sivchari/containedctx v1.0.3 // indirect + github.com/sivchari/tenv v1.12.1 // indirect + github.com/sonatard/noctx v0.1.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/sourcegraph/go-diff v0.7.0 // indirect + github.com/spf13/afero v1.12.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.10.2 // indirect + github.com/spf13/pflag v1.0.10 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect + github.com/stbenjam/no-sprintf-host-port v0.2.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect + github.com/stretchr/testify v1.11.1 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tdakkota/asciicheck v0.4.1 // indirect + github.com/tetafro/godot v1.5.0 // indirect + github.com/tidwall/match v1.2.0 // indirect + github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 // indirect + github.com/timonwong/loggercheck v0.10.1 // indirect + github.com/tomarrell/wrapcheck/v2 v2.10.0 // indirect + github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect + github.com/ultraware/funlen v0.2.0 // indirect + github.com/ultraware/whitespace v0.2.0 // indirect + github.com/uudashr/gocognit v1.2.0 // indirect + github.com/uudashr/iface v1.3.1 // indirect + github.com/xen0n/gosmopolitan v1.2.2 // indirect + github.com/yagipy/maintidx v1.0.0 // indirect + github.com/yeya24/promlinter v0.3.0 // indirect + github.com/ykadowak/zerologlint v0.1.5 // indirect + gitlab.com/bosi/decorder v0.4.2 // indirect + go-simpler.org/musttag v0.13.0 // indirect + go-simpler.org/sloglint v0.9.0 // indirect + go.uber.org/automaxprocs v1.6.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac // indirect + golang.org/x/mod v0.29.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/tools v0.38.0 // indirect + gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect + gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + honnef.co/go/tools v0.6.1 // indirect + k8s.io/apiextensions-apiserver v0.34.3 // indirect + mvdan.cc/gofumpt v0.7.0 // indirect + mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f // indirect +) + +require ( + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/deckhouse/sds-common-lib v0.6.3 + github.com/emicklei/go-restful/v3 v3.13.0 // indirect + github.com/evanphx/json-patch/v5 v5.9.11 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/go-logr/logr v1.4.3 + github.com/go-openapi/jsonpointer v0.22.0 // indirect + github.com/go-openapi/jsonreference v0.21.1 // indirect + github.com/go-openapi/swag v0.24.1 // indirect + github.com/go-openapi/swag/cmdutils v0.24.0 // indirect + github.com/go-openapi/swag/conv v0.24.0 // indirect + github.com/go-openapi/swag/fileutils v0.24.0 // indirect + github.com/go-openapi/swag/jsonname v0.24.0 // indirect + github.com/go-openapi/swag/jsonutils v0.24.0 // indirect + github.com/go-openapi/swag/loading v0.24.0 // indirect + github.com/go-openapi/swag/mangling v0.24.0 // indirect + github.com/go-openapi/swag/netutils v0.24.0 // indirect + github.com/go-openapi/swag/stringutils v0.24.0 // indirect + github.com/go-openapi/swag/typeutils v0.24.0 // indirect + github.com/go-openapi/swag/yamlutils v0.24.0 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/uuid v1.6.0 // indirect + 
github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/mailru/easyjson v0.9.0 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/x448/float16 v0.8.4 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 + golang.org/x/net v0.46.0 // indirect + golang.org/x/oauth2 v0.31.0 // indirect + golang.org/x/sys v0.39.0 // indirect + golang.org/x/term v0.36.0 // indirect + golang.org/x/text v0.30.0 // indirect + golang.org/x/time v0.13.0 // indirect + google.golang.org/protobuf v1.36.9 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + k8s.io/api v0.34.3 // indirect + k8s.io/klog/v2 v2.130.1 + k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 // indirect + k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect + sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) + +tool ( + github.com/golangci/golangci-lint/cmd/golangci-lint + github.com/onsi/ginkgo/v2/ginkgo +) diff --git a/lib/go/common/go.sum b/lib/go/common/go.sum new file mode 100644 index 000000000..c4f3edc11 --- /dev/null +++ b/lib/go/common/go.sum @@ -0,0 +1,701 @@ +4d63.com/gocheckcompilerdirectives v1.3.0 h1:Ew5y5CtcAAQeTVKUVFrE7EwHMrTO6BggtEj8BZSjZ3A= +4d63.com/gocheckcompilerdirectives v1.3.0/go.mod h1:ofsJ4zx2QAuIP/NO/NAh1ig6R1Fb18/GI7RVMwz7kAY= +4d63.com/gochecknoglobals v0.2.2 h1:H1vdnwnMaZdQW/N+NrkT1SZMTBmcwHe9Vq8lJcYYTtU= +4d63.com/gochecknoglobals v0.2.2/go.mod h1:lLxwTQjL5eIesRbvnzIP3jZtG140FnTdz+AlMa+ogt0= +github.com/4meepo/tagalign v1.4.2 h1:0hcLHPGMjDyM1gHG58cS73aQF8J4TdVR96TZViorO9E= +github.com/4meepo/tagalign v1.4.2/go.mod h1:+p4aMyFM+ra7nb41CnFG6aSDXqRxU/w1VQqScKqDARI= +github.com/Abirdcfly/dupword v0.1.3 h1:9Pa1NuAsZvpFPi9Pqkd93I7LIYRURj+A//dFd5tgBeE= +github.com/Abirdcfly/dupword v0.1.3/go.mod h1:8VbB2t7e10KRNdwTVoxdBaxla6avbhGzb8sCTygUMhw= +github.com/Antonboom/errname v1.0.0 h1:oJOOWR07vS1kRusl6YRSlat7HFnb3mSfMl6sDMRoTBA= +github.com/Antonboom/errname v1.0.0/go.mod h1:gMOBFzK/vrTiXN9Oh+HFs+e6Ndl0eTFbtsRTSRdXyGI= +github.com/Antonboom/nilnil v1.0.1 h1:C3Tkm0KUxgfO4Duk3PM+ztPncTFlOf0b2qadmS0s4xs= +github.com/Antonboom/nilnil v1.0.1/go.mod h1:CH7pW2JsRNFgEh8B2UaPZTEPhCMuFowP/e8Udp9Nnb0= +github.com/Antonboom/testifylint v1.5.2 h1:4s3Xhuv5AvdIgbd8wOOEeo0uZG7PbDKQyKY5lGoQazk= +github.com/Antonboom/testifylint v1.5.2/go.mod h1:vxy8VJ0bc6NavlYqjZfmp6EfqXMtBgQ4+mhCojwC1P8= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/Crocmagnon/fatcontext v0.7.1 h1:SC/VIbRRZQeQWj/TcQBS6JmrXcfA+BU4OGSVUt54PjM= +github.com/Crocmagnon/fatcontext v0.7.1/go.mod h1:1wMvv3NXEBJucFGfwOJBxSVWcoIO6emV215SMkW9MFU= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= +github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= +github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1 h1:Sz1JIXEcSfhz7fUi7xHnhpIE0thVASYjvosApmHuD2k= 
+github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1/go.mod h1:n/LSCXNuIYqVfBlVXyHfMQkZDdp1/mmxfSjADd3z1Zg= +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1 h1:vckeWVESWp6Qog7UZSARNqfu/cZqvki8zsuj3piCMx4= +github.com/OpenPeeDeeP/depguard/v2 v2.2.1/go.mod h1:q4DKzC4UcVaAvcfd41CZh0PWpGgzrVxUYBlgKNGquUo= +github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0= +github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k= +github.com/alecthomas/go-check-sumtype v0.3.1 h1:u9aUvbGINJxLVXiFvHUlPEaD7VDULsrxJb4Aq31NLkU= +github.com/alecthomas/go-check-sumtype v0.3.1/go.mod h1:A8TSiN3UPRw3laIgWEUOHHLPa6/r9MtoigdlP5h3K/E= +github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc= +github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= +github.com/alexkohler/nakedret/v2 v2.0.5 h1:fP5qLgtwbx9EJE8dGEERT02YwS8En4r9nnZ71RK+EVU= +github.com/alexkohler/nakedret/v2 v2.0.5/go.mod h1:bF5i0zF2Wo2o4X4USt9ntUWve6JbFv02Ff4vlkmS/VU= +github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= +github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= +github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= +github.com/alingse/asasalint v0.0.11/go.mod h1:nCaoMhw7a9kSJObvQyVzNTPBDbNpdocqrSP7t/cW5+I= +github.com/alingse/nilnesserr v0.1.2 h1:Yf8Iwm3z2hUUrP4muWfW83DF4nE3r1xZ26fGWUKCZlo= +github.com/alingse/nilnesserr v0.1.2/go.mod h1:1xJPrXonEtX7wyTq8Dytns5P2hNzoWymVUIaKm4HNFg= +github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY= +github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/ashanbrown/makezero v1.2.0 h1:/2Lp1bypdmK9wDIq7uWBlDF1iMUpIIS4A+pF6C9IEUU= +github.com/ashanbrown/makezero v1.2.0/go.mod h1:dxlPhHbDMC6N6xICzFBSK+4njQDdK8euNO0qjQMtGY4= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bkielbasa/cyclop v1.2.3 h1:faIVMIGDIANuGPWH031CZJTi2ymOQBULs9H21HSMa5w= +github.com/bkielbasa/cyclop v1.2.3/go.mod h1:kHTwA9Q0uZqOADdupvcFJQtp/ksSnytRMe8ztxG8Fuo= +github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= +github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= +github.com/bombsimon/wsl/v4 v4.5.0 h1:iZRsEvDdyhd2La0FVi5k6tYehpOR/R7qIUjmKk7N74A= +github.com/bombsimon/wsl/v4 v4.5.0/go.mod h1:NOQ3aLF4nD7N5YPXMruR6ZXDOAqLoM0GEpLwTdvmOSc= +github.com/breml/bidichk v0.3.2 h1:xV4flJ9V5xWTqxL+/PMFF6dtJPvZLPsyixAoPe8BGJs= +github.com/breml/bidichk v0.3.2/go.mod h1:VzFLBxuYtT23z5+iVkamXO386OB+/sVwZOpIj6zXGos= +github.com/breml/errchkjson v0.4.0 h1:gftf6uWZMtIa/Is3XJgibewBm2ksAQSY/kABDNFTAdk= +github.com/breml/errchkjson v0.4.0/go.mod h1:AuBOSTHyLSaaAFlWsRSuRBIroCh3eh7ZHh5YeelDIk8= +github.com/butuzov/ireturn v0.3.1 h1:mFgbEI6m+9W8oP/oDdfA34dLisRFCj2G6o/yiI1yZrY= +github.com/butuzov/ireturn v0.3.1/go.mod h1:ZfRp+E7eJLC0NQmk1Nrm1LOrn/gQlOykv+cVPdiXH5M= +github.com/butuzov/mirror v1.3.0 h1:HdWCXzmwlQHdVhwvsfBb2Au0r3HyINry3bDWLYXiKoc= +github.com/butuzov/mirror v1.3.0/go.mod h1:AEij0Z8YMALaq4yQj9CPPVYOyJQyiexpQEQgihajRfI= +github.com/catenacyber/perfsprint 
v0.8.2 h1:+o9zVmCSVa7M4MvabsWvESEhpsMkhfE7k0sHNGL95yw= +github.com/catenacyber/perfsprint v0.8.2/go.mod h1:q//VWC2fWbcdSLEY1R3l8n0zQCDPdE4IjZwyY1HMunM= +github.com/ccojocar/zxcvbn-go v1.0.2 h1:na/czXU8RrhXO4EZme6eQJLR4PzcGsahsBOAwU6I3Vg= +github.com/ccojocar/zxcvbn-go v1.0.2/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/charithe/durationcheck v0.0.10 h1:wgw73BiocdBDQPik+zcEoBG/ob8uyBHf2iyoHGPf5w4= +github.com/charithe/durationcheck v0.0.10/go.mod h1:bCWXb7gYRysD1CU3C+u4ceO49LoGOY1C1L6uouGNreQ= +github.com/chavacava/garif v0.1.0 h1:2JHa3hbYf5D9dsgseMKAmc/MZ109otzgNFk5s87H9Pc= +github.com/chavacava/garif v0.1.0/go.mod h1:XMyYCkEL58DF0oyW4qDjjnPWONs2HBqYKI+UIPD+Gww= +github.com/ckaznocha/intrange v0.3.0 h1:VqnxtK32pxgkhJgYQEeOArVidIPg+ahLP7WBOXZd5ZY= +github.com/ckaznocha/intrange v0.3.0/go.mod h1:+I/o2d2A1FBHgGELbGxzIcyd3/9l9DuwjM8FsbSS3Lo= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/curioswitch/go-reassign v0.3.0 h1:dh3kpQHuADL3cobV/sSGETA8DOv457dwl+fbBAhrQPs= +github.com/curioswitch/go-reassign v0.3.0/go.mod h1:nApPCCTtqLJN/s8HfItCcKV0jIPwluBOvZP+dsJGA88= +github.com/daixiang0/gci v0.13.5 h1:kThgmH1yBmZSBCh1EJVxQ7JsHpm5Oms0AMed/0LaH4c= +github.com/daixiang0/gci v0.13.5/go.mod h1:12etP2OniiIdP4q+kjUGrC/rUagga7ODbqsom5Eo5Yk= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/deckhouse/sds-common-lib v0.6.3 h1:k0OotLuQaKuZt8iyph9IusDixjAE0MQRKyuTe2wZP3I= +github.com/deckhouse/sds-common-lib v0.6.3/go.mod h1:UHZMKkqEh6RAO+vtA7dFTwn/2m5lzfPn0kfULBmDf2o= +github.com/denis-tingaikin/go-header v0.5.0 h1:SRdnP5ZKvcO9KKRP1KJrhFR3RrlGuD+42t4429eC9k8= +github.com/denis-tingaikin/go-header v0.5.0/go.mod h1:mMenU5bWrok6Wl2UsZjy+1okegmwQ3UgWl4V1D8gjlY= +github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI= +github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= +github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes= +github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/ettle/strcase v0.2.0 h1:fGNiVF21fHXpX1niBgk0aROov1LagYsOwV/xqKDKR/Q= +github.com/ettle/strcase v0.2.0/go.mod h1:DajmHElDSaX76ITe3/VHVyMin4LWSJN5Z909Wp+ED1A= +github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= +github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU= +github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= +github.com/fatih/structtag v1.2.0/go.mod 
h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= +github.com/firefart/nonamedreturns v1.0.5 h1:tM+Me2ZaXs8tfdDw3X6DOX++wMCOqzYUho6tUTYIdRA= +github.com/firefart/nonamedreturns v1.0.5/go.mod h1:gHJjDqhGM4WyPt639SOZs+G89Ko7QKH5R5BhnO6xJhw= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= +github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA= +github.com/ghostiam/protogetter v0.3.9 h1:j+zlLLWzqLay22Cz/aYwTHKQ88GE2DQ6GkWSYFOI4lQ= +github.com/ghostiam/protogetter v0.3.9/go.mod h1:WZ0nw9pfzsgxuRsPOFQomgDVSWtDLJRfQJEhsGbmQMA= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-critic/go-critic v0.12.0 h1:iLosHZuye812wnkEz1Xu3aBwn5ocCPfc9yqmFG9pa6w= +github.com/go-critic/go-critic v0.12.0/go.mod h1:DpE0P6OVc6JzVYzmM5gq5jMU31zLr4am5mB/VfFK64w= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= +github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= +github.com/go-openapi/jsonpointer v0.22.0 h1:TmMhghgNef9YXxTu1tOopo+0BGEytxA+okbry0HjZsM= +github.com/go-openapi/jsonpointer v0.22.0/go.mod h1:xt3jV88UtExdIkkL7NloURjRQjbeUgcxFblMjq2iaiU= +github.com/go-openapi/jsonreference v0.21.1 h1:bSKrcl8819zKiOgxkbVNRUBIr6Wwj9KYrDbMjRs0cDA= +github.com/go-openapi/jsonreference v0.21.1/go.mod h1:PWs8rO4xxTUqKGu+lEvvCxD5k2X7QYkKAepJyCmSTT8= +github.com/go-openapi/swag v0.24.1 h1:DPdYTZKo6AQCRqzwr/kGkxJzHhpKxZ9i/oX0zag+MF8= +github.com/go-openapi/swag v0.24.1/go.mod h1:sm8I3lCPlspsBBwUm1t5oZeWZS0s7m/A+Psg0ooRU0A= +github.com/go-openapi/swag/cmdutils v0.24.0 h1:KlRCffHwXFI6E5MV9n8o8zBRElpY4uK4yWyAMWETo9I= +github.com/go-openapi/swag/cmdutils v0.24.0/go.mod h1:uxib2FAeQMByyHomTlsP8h1TtPd54Msu2ZDU/H5Vuf8= +github.com/go-openapi/swag/conv v0.24.0 h1:ejB9+7yogkWly6pnruRX45D1/6J+ZxRu92YFivx54ik= +github.com/go-openapi/swag/conv v0.24.0/go.mod h1:jbn140mZd7EW2g8a8Y5bwm8/Wy1slLySQQ0ND6DPc2c= +github.com/go-openapi/swag/fileutils v0.24.0 h1:U9pCpqp4RUytnD689Ek/N1d2N/a//XCeqoH508H5oak= +github.com/go-openapi/swag/fileutils v0.24.0/go.mod h1:3SCrCSBHyP1/N+3oErQ1gP+OX1GV2QYFSnrTbzwli90= +github.com/go-openapi/swag/jsonname v0.24.0 h1:2wKS9bgRV/xB8c62Qg16w4AUiIrqqiniJFtZGi3dg5k= +github.com/go-openapi/swag/jsonname v0.24.0/go.mod h1:GXqrPzGJe611P7LG4QB9JKPtUZ7flE4DOVechNaDd7Q= +github.com/go-openapi/swag/jsonutils v0.24.0 
h1:F1vE1q4pg1xtO3HTyJYRmEuJ4jmIp2iZ30bzW5XgZts= +github.com/go-openapi/swag/jsonutils v0.24.0/go.mod h1:vBowZtF5Z4DDApIoxcIVfR8v0l9oq5PpYRUuteVu6f0= +github.com/go-openapi/swag/loading v0.24.0 h1:ln/fWTwJp2Zkj5DdaX4JPiddFC5CHQpvaBKycOlceYc= +github.com/go-openapi/swag/loading v0.24.0/go.mod h1:gShCN4woKZYIxPxbfbyHgjXAhO61m88tmjy0lp/LkJk= +github.com/go-openapi/swag/mangling v0.24.0 h1:PGOQpViCOUroIeak/Uj/sjGAq9LADS3mOyjznmHy2pk= +github.com/go-openapi/swag/mangling v0.24.0/go.mod h1:Jm5Go9LHkycsz0wfoaBDkdc4CkpuSnIEf62brzyCbhc= +github.com/go-openapi/swag/netutils v0.24.0 h1:Bz02HRjYv8046Ycg/w80q3g9QCWeIqTvlyOjQPDjD8w= +github.com/go-openapi/swag/netutils v0.24.0/go.mod h1:WRgiHcYTnx+IqfMCtu0hy9oOaPR0HnPbmArSRN1SkZM= +github.com/go-openapi/swag/stringutils v0.24.0 h1:i4Z/Jawf9EvXOLUbT97O0HbPUja18VdBxeadyAqS1FM= +github.com/go-openapi/swag/stringutils v0.24.0/go.mod h1:5nUXB4xA0kw2df5PRipZDslPJgJut+NjL7D25zPZ/4w= +github.com/go-openapi/swag/typeutils v0.24.0 h1:d3szEGzGDf4L2y1gYOSSLeK6h46F+zibnEas2Jm/wIw= +github.com/go-openapi/swag/typeutils v0.24.0/go.mod h1:q8C3Kmk/vh2VhpCLaoR2MVWOGP8y7Jc8l82qCTd1DYI= +github.com/go-openapi/swag/yamlutils v0.24.0 h1:bhw4894A7Iw6ne+639hsBNRHg9iZg/ISrOVr+sJGp4c= +github.com/go-openapi/swag/yamlutils v0.24.0/go.mod h1:DpKv5aYuaGm/sULePoeiG8uwMpZSfReo1HR3Ik0yaG8= +github.com/go-quicktest/qt v1.101.0 h1:O1K29Txy5P2OK0dGo59b7b0LR6wKfIhttaAhHUyn7eI= +github.com/go-quicktest/qt v1.101.0/go.mod h1:14Bz/f7NwaXPtdYEgzsx46kqSxVwTbzVZsDC26tQJow= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/go-toolsmith/astcast v1.1.0 h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= +github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= +github.com/go-toolsmith/astcopy v1.1.0 h1:YGwBN0WM+ekI/6SS6+52zLDEf8Yvp3n2seZITCUBt5s= +github.com/go-toolsmith/astcopy v1.1.0/go.mod h1:hXM6gan18VA1T/daUEHCFcYiW8Ai1tIwIzHY6srfEAw= +github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4= +github.com/go-toolsmith/astequal v1.1.0/go.mod h1:sedf7VIdCL22LD8qIvv7Nn9MuWJruQA/ysswh64lffQ= +github.com/go-toolsmith/astequal v1.2.0 h1:3Fs3CYZ1k9Vo4FzFhwwewC3CHISHDnVUPC4x0bI2+Cw= +github.com/go-toolsmith/astequal v1.2.0/go.mod h1:c8NZ3+kSFtFY/8lPso4v8LuJjdJiUFVnSuU3s0qrrDY= +github.com/go-toolsmith/astfmt v1.1.0 h1:iJVPDPp6/7AaeLJEruMsBUlOYCmvg0MoCfJprsOmcco= +github.com/go-toolsmith/astfmt v1.1.0/go.mod h1:OrcLlRwu0CuiIBp/8b5PYF9ktGVZUjlNMV634mhwuQ4= +github.com/go-toolsmith/astp v1.1.0 h1:dXPuCl6u2llURjdPLLDxJeZInAeZ0/eZwFJmqZMnpQA= +github.com/go-toolsmith/astp v1.1.0/go.mod h1:0T1xFGz9hicKs8Z5MfAqSUitoUYS30pDMsRVIDHs8CA= +github.com/go-toolsmith/pkgload v1.2.2 h1:0CtmHq/02QhxcF7E9N5LIFcYFsMR5rdovfqTtRKkgIk= +github.com/go-toolsmith/pkgload v1.2.2/go.mod h1:R2hxLNRKuAsiXCo2i5J6ZQPhnPMOVtU+f0arbFPWCus= +github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslWct4wyljWhSRy8= +github.com/go-toolsmith/strparse v1.1.0 h1:GAioeZUK9TGxnLS+qfdqNbA4z0SSm5zVNtCQiyP2Bvw= +github.com/go-toolsmith/strparse v1.1.0/go.mod h1:7ksGy58fsaQkGQlY8WVoBFNyEPMGuJin1rfoPS4lBSQ= +github.com/go-toolsmith/typep v1.1.0 h1:fIRYDyF+JywLfqzyhdiHzRop/GQDxxNhLGQ6gFUNHus= +github.com/go-toolsmith/typep v1.1.0/go.mod h1:fVIw+7zjdsMxDA3ITWnH1yOiw1rnTQKCsF/sk2H/qig= +github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs= +github.com/go-viper/mapstructure/v2 
v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/go-xmlfmt/xmlfmt v1.1.3 h1:t8Ey3Uy7jDSEisW2K3somuMKIpzktkWptA0iFCnRUWY= +github.com/go-xmlfmt/xmlfmt v1.1.3/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= +github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E= +github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32 h1:WUvBfQL6EW/40l6OmeSBYQJNSif4O11+bmWEz+C7FYw= +github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32/go.mod h1:NUw9Zr2Sy7+HxzdjIULge71wI6yEg1lWQr7Evcu8K0E= +github.com/golangci/go-printf-func-name v0.1.0 h1:dVokQP+NMTO7jwO4bwsRwLWeudOVUPPyAKJuzv8pEJU= +github.com/golangci/go-printf-func-name v0.1.0/go.mod h1:wqhWFH5mUdJQhweRnldEywnR5021wTdZSNgwYceV14s= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d h1:viFft9sS/dxoYY0aiOTsLKO2aZQAPT4nlQCsimGcSGE= +github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d/go.mod h1:ivJ9QDg0XucIkmwhzCDsqcnxxlDStoTl89jDMIoNxKY= +github.com/golangci/golangci-lint v1.64.8 h1:y5TdeVidMtBGG32zgSC7ZXTFNHrsJkDnpO4ItB3Am+I= +github.com/golangci/golangci-lint v1.64.8/go.mod h1:5cEsUQBSr6zi8XI8OjmcY2Xmliqc4iYL7YoPrL+zLJ4= +github.com/golangci/misspell v0.6.0 h1:JCle2HUTNWirNlDIAUO44hUsKhOFqGPoC4LZxlaSXDs= +github.com/golangci/misspell v0.6.0/go.mod h1:keMNyY6R9isGaSAu+4Q8NMBwMPkh15Gtc8UCVoDtAWo= +github.com/golangci/plugin-module-register v0.1.1 h1:TCmesur25LnyJkpsVrupv1Cdzo+2f7zX0H6Jkw1Ol6c= +github.com/golangci/plugin-module-register v0.1.1/go.mod h1:TTpqoB6KkwOJMV8u7+NyXMrkwwESJLOkfl9TxR1DGFc= +github.com/golangci/revgrep v0.8.0 h1:EZBctwbVd0aMeRnNUsFogoyayvKHyxlV3CdUA46FX2s= +github.com/golangci/revgrep v0.8.0/go.mod h1:U4R/s9dlXZsg8uJmaR1GrloUr14D7qDl8gi2iPXJH8k= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed h1:IURFTjxeTfNFP0hTEi1YKjB/ub8zkpaOqFFMApi2EAs= +github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed/go.mod h1:XLXN8bNw4CGRPaqgl3bv/lhz7bsGPh4/xSaMTbo2vkQ= +github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= +github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/gofuzz v1.0.0/go.mod 
h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= +github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pIsMjahcegHh6rmhqxzIRQIyepY= +github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gordonklaus/ineffassign v0.1.0 h1:y2Gd/9I7MdY1oEIt+n+rowjBNDcLQq3RsH5hwJd0f9s= +github.com/gordonklaus/ineffassign v0.1.0/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= +github.com/gostaticanalysis/analysisutil v0.7.1/go.mod h1:v21E3hY37WKMGSnbsw2S/ojApNWb6C1//mXO48CXbVc= +github.com/gostaticanalysis/comment v1.4.1/go.mod h1:ih6ZxzTHLdadaiSnF5WY3dxUoXfXAlTaRzuaNDlSado= +github.com/gostaticanalysis/comment v1.4.2/go.mod h1:KLUTGDv6HOCotCH8h2erHKmpci2ZoR8VPu34YA2uzdM= +github.com/gostaticanalysis/comment v1.5.0 h1:X82FLl+TswsUMpMh17srGRuKaaXprTaytmEpgnKIDu8= +github.com/gostaticanalysis/comment v1.5.0/go.mod h1:V6eb3gpCv9GNVqb6amXzEUX3jXLVK/AdA+IrAMSqvEc= +github.com/gostaticanalysis/forcetypeassert v0.2.0 h1:uSnWrrUEYDr86OCxWa4/Tp2jeYDlogZiZHzGkWFefTk= +github.com/gostaticanalysis/forcetypeassert v0.2.0/go.mod h1:M5iPavzE9pPqWyeiVXSFghQjljW1+l/Uke3PXHS6ILY= +github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3Uqrmrcpk= +github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= +github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= +github.com/gostaticanalysis/testutil v0.5.0 h1:Dq4wT1DdTwTGCQQv3rl3IvD5Ld0E6HiY+3Zh0sUGqw8= +github.com/gostaticanalysis/testutil v0.5.0/go.mod h1:OLQSbuM6zw2EvCcXTz1lVq5unyoNft372msDY0nY5Hs= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo= +github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw= +github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= +github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= +github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jgautheron/goconst v1.7.1 h1:VpdAG7Ca7yvvJk5n8dMwQhfEZJh95kl/Hl9S1OI5Jkk= +github.com/jgautheron/goconst v1.7.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4= +github.com/jingyugao/rowserrcheck v1.1.1 
h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs= +github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c= +github.com/jjti/go-spancheck v0.6.4 h1:Tl7gQpYf4/TMU7AT84MN83/6PutY21Nb9fuQjFTpRRc= +github.com/jjti/go-spancheck v0.6.4/go.mod h1:yAEYdKJ2lRkDA8g7X+oKUHXOWVAXSBJRv04OhF+QUjk= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/julz/importas v0.2.0 h1:y+MJN/UdL63QbFJHws9BVC5RpA2iq0kpjrFajTGivjQ= +github.com/julz/importas v0.2.0/go.mod h1:pThlt589EnCYtMnmhmRYY/qn9lCf/frPOK+WMx3xiJY= +github.com/karamaru-alpha/copyloopvar v1.2.1 h1:wmZaZYIjnJ0b5UoKDjUHrikcV0zuPyyxI4SVplLd2CI= +github.com/karamaru-alpha/copyloopvar v1.2.1/go.mod h1:nFmMlFNlClC2BPvNaHMdkirmTJxVCY0lhxBtlfOypMM= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/errcheck v1.9.0 h1:9xt1zI9EBfcYBvdU1nVrzMzzUPUtPKs9bVSIM3TAb3M= +github.com/kisielk/errcheck v1.9.0/go.mod h1:kQxWMMVZgIkDq7U8xtG/n2juOjbLgZtedi0D+/VL/i8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kkHAIKE/contextcheck v1.1.6 h1:7HIyRcnyzxL9Lz06NGhiKvenXq7Zw6Q0UQu/ttjfJCE= +github.com/kkHAIKE/contextcheck v1.1.6/go.mod h1:3dDbMRNBFaq8HFXWC1JyvDSPm43CmE6IuHam8Wr0rkg= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= +github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= +github.com/kunwardeep/paralleltest v1.0.10 h1:wrodoaKYzS2mdNVnc4/w31YaXFtsc21PCTdvWJ/lDDs= +github.com/kunwardeep/paralleltest v1.0.10/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/lasiar/canonicalheader v1.1.2 h1:vZ5uqwvDbyJCnMhmFYimgMZnJMjwljN5VGY0VKbMXb4= +github.com/lasiar/canonicalheader v1.1.2/go.mod h1:qJCeLFS0G/QlLQ506T+Fk/fWMa2VmBUiEI2cuMK4djI= +github.com/ldez/exptostd v0.4.2 h1:l5pOzHBz8mFOlbcifTxzfyYbgEmoUqjxLFHZkjlbHXs= +github.com/ldez/exptostd v0.4.2/go.mod h1:iZBRYaUmcW5jwCR3KROEZ1KivQQp6PHXbDPk9hqJKCQ= +github.com/ldez/gomoddirectives v0.6.1 h1:Z+PxGAY+217f/bSGjNZr/b2KTXcyYLgiWI6geMBN2Qc= +github.com/ldez/gomoddirectives v0.6.1/go.mod h1:cVBiu3AHR9V31em9u2kwfMKD43ayN5/XDgr+cdaFaKs= +github.com/ldez/grignotin v0.9.0 h1:MgOEmjZIVNn6p5wPaGp/0OKWyvq42KnzAt/DAb8O4Ow= +github.com/ldez/grignotin v0.9.0/go.mod h1:uaVTr0SoZ1KBii33c47O1M8Jp3OP3YDwhZCmzT9GHEk= +github.com/ldez/tagliatelle v0.7.1 
h1:bTgKjjc2sQcsgPiT902+aadvMjCeMHrY7ly2XKFORIk= +github.com/ldez/tagliatelle v0.7.1/go.mod h1:3zjxUpsNB2aEZScWiZTHrAXOl1x25t3cRmzfK1mlo2I= +github.com/ldez/usetesting v0.4.2 h1:J2WwbrFGk3wx4cZwSMiCQQ00kjGR0+tuuyW0Lqm4lwA= +github.com/ldez/usetesting v0.4.2/go.mod h1:eEs46T3PpQ+9RgN9VjpY6qWdiw2/QmfiDeWmdZdrjIQ= +github.com/leonklingele/grouper v1.1.2 h1:o1ARBDLOmmasUaNDesWqWCIFH3u7hoFlM84YrjT3mIY= +github.com/leonklingele/grouper v1.1.2/go.mod h1:6D0M/HVkhs2yRKRFZUoGjeDy7EZTfFBE9gl4kjmIGkA= +github.com/macabu/inamedparam v0.1.3 h1:2tk/phHkMlEL/1GNe/Yf6kkR/hkcUdAEY3L0hjYV1Mk= +github.com/macabu/inamedparam v0.1.3/go.mod h1:93FLICAIk/quk7eaPPQvbzihUdn/QkGDwIZEoLtpH6I= +github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4= +github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU= +github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI= +github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE= +github.com/maratori/testpackage v1.1.1 h1:S58XVV5AD7HADMmD0fNnziNHqKvSdDuEKdPD1rNTU04= +github.com/maratori/testpackage v1.1.1/go.mod h1:s4gRK/ym6AMrqpOa/kEbQTV4Q4jb7WeLZzVhVVVOQMc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/matoous/godox v1.1.0 h1:W5mqwbyWrwZv6OQ5Z1a/DHGMOvXYCBP3+Ht7KMoJhq4= +github.com/matoous/godox v1.1.0/go.mod h1:jgE/3fUXiTurkdHOLT5WEkThTSuE7yxHv5iWPa80afs= +github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= +github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc= +github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/mgechev/revive v1.7.0 h1:JyeQ4yO5K8aZhIKf5rec56u0376h8AlKNQEmjfkjKlY= +github.com/mgechev/revive v1.7.0/go.mod h1:qZnwcNhoguE58dfi96IJeSTPeZQejNeoMQLUZGi4SW4= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod 
h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/moricho/tparallel v0.3.2 h1:odr8aZVFA3NZrNybggMkYO3rgPRcqjeQUlBBFVxKHTI= +github.com/moricho/tparallel v0.3.2/go.mod h1:OQ+K3b4Ln3l2TZveGCywybl68glfLEwFGqvnjok8b+U= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= +github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4Ngq6aY7OE= +github.com/nishanths/exhaustive v0.12.0 h1:vIY9sALmw6T/yxiASewa4TQcFsVYZQQRUQJhKRf3Swg= +github.com/nishanths/exhaustive v0.12.0/go.mod h1:mEZ95wPIZW+x8kC4TgC+9YCUgiST7ecevsVDTgc2obs= +github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= +github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= +github.com/nunnatsa/ginkgolinter v0.19.1 h1:mjwbOlDQxZi9Cal+KfbEJTCz327OLNfwNvoZ70NJ+c4= +github.com/nunnatsa/ginkgolinter v0.19.1/go.mod h1:jkQ3naZDmxaZMXPWaS9rblH+i+GWXQCaS/JFIWcOH2s= +github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= +github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= +github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns= +github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM= +github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4= +github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= +github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU= +github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/curr v1.0.0/go.mod h1:LskTG5wDwr8Rs+nNQ+1LlxRjAtTZZjtJW4rMXl6j4vs= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= +github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= +github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/polyfloyd/go-errorlint v1.7.1 h1:RyLVXIbosq1gBdk/pChWA8zWYLsq9UEw7a1L5TVMCnA= +github.com/polyfloyd/go-errorlint v1.7.1/go.mod h1:aXjNb1x2TNhoLsk26iv1yl7a+zTnXPhwEMtEXukiLR8= +github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= +github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o= +github.com/prometheus/client_golang v1.23.2/go.mod 
h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= +github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= +github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= +github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= +github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0= +github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1 h1:+Wl/0aFp0hpuHM3H//KMft64WQ1yX9LdJY64Qm/gFCo= +github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1/go.mod h1:GJLgqsLeo4qgavUoL8JeGFNS7qcisx3awV/w9eWTmNI= +github.com/quasilyte/go-ruleguard/dsl v0.3.22 h1:wd8zkOhSNr+I+8Qeciml08ivDt1pSXe60+5DqOpCjPE= +github.com/quasilyte/go-ruleguard/dsl v0.3.22/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU= +github.com/quasilyte/gogrep v0.5.0 h1:eTKODPXbI8ffJMN+W2aE0+oL0z/nh8/5eNdiO34SOAo= +github.com/quasilyte/gogrep v0.5.0/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl980XxGFEZSS6KlBGIV0diGdySzxATTWoqaU= +github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= +github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= +github.com/raeperd/recvcheck v0.2.0 h1:GnU+NsbiCqdC2XX5+vMZzP+jAJC5fht7rcVTAhX74UI= +github.com/raeperd/recvcheck v0.2.0/go.mod h1:n04eYkwIR0JbgD73wT8wL4JjPC3wm0nFtzBnWNocnYU= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= +github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryancurrah/gomodguard v1.3.5 h1:cShyguSwUEeC0jS7ylOiG/idnd1TpJ1LfHGpV3oJmPU= +github.com/ryancurrah/gomodguard v1.3.5/go.mod h1:MXlEPQRxgfPQa62O8wzK3Ozbkv9Rkqr+wKjSxTdsNJE= +github.com/ryanrolds/sqlclosecheck v0.5.1 h1:dibWW826u0P8jNLsLN+En7+RqWWTYrjCB9fJfSfdyCU= +github.com/ryanrolds/sqlclosecheck v0.5.1/go.mod h1:2g3dUjoS6AL4huFdv6wn55WpLIDjY7ZgUR4J8HOO/XQ= +github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo= +github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k= +github.com/sanposhiho/wastedassign/v2 v2.1.0 h1:crurBF7fJKIORrV85u9UUpePDYGWnwvv3+A96WvwXT0= +github.com/sanposhiho/wastedassign/v2 v2.1.0/go.mod h1:+oSmSC+9bQ+VUAxA66nBb0Z7N8CK7mscKTDYC6aIek4= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw= +github.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU= +github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= +github.com/sashamelentyev/interfacebloat v1.1.0/go.mod 
h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= +github.com/sashamelentyev/usestdlibvars v1.28.0 h1:jZnudE2zKCtYlGzLVreNp5pmCdOxXUzwsMDBkR21cyQ= +github.com/sashamelentyev/usestdlibvars v1.28.0/go.mod h1:9nl0jgOfHKWNFS43Ojw0i7aRoS4j6EBye3YBhmAIRF8= +github.com/securego/gosec/v2 v2.22.2 h1:IXbuI7cJninj0nRpZSLCUlotsj8jGusohfONMrHoF6g= +github.com/securego/gosec/v2 v2.22.2/go.mod h1:UEBGA+dSKb+VqM6TdehR7lnQtIIMorYJ4/9CW1KVQBE= +github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= +github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= +github.com/sivchari/tenv v1.12.1 h1:+E0QzjktdnExv/wwsnnyk4oqZBUfuh89YMQT1cyuvSY= +github.com/sivchari/tenv v1.12.1/go.mod h1:1LjSOUCc25snIr5n3DtGGrENhX3LuWefcplwVGC24mw= +github.com/sonatard/noctx v0.1.0 h1:JjqOc2WN16ISWAjAk8M5ej0RfExEXtkEyExl2hLW+OM= +github.com/sonatard/noctx v0.1.0/go.mod h1:0RvBxqY8D4j9cTTTWE8ylt2vqj2EPI8fHmrxHdsaZ2c= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= +github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/sourcegraph/go-diff v0.7.0 h1:9uLlrd5T46OXs5qpp8L/MTltk0zikUGi0sNNyCpA8G0= +github.com/sourcegraph/go-diff v0.7.0/go.mod h1:iBszgVvyxdc8SFZ7gm69go2KDdt3ag071iBaWPF6cjs= +github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs= +github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= +github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0= +github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I= +github.com/stbenjam/no-sprintf-host-port v0.2.0 h1:i8pxvGrt1+4G0czLr/WnmyH7zbZ8Bg8etvARQ1rpyl4= +github.com/stbenjam/no-sprintf-host-port v0.2.0/go.mod h1:eL0bQ9PasS0hsyTyfTjjG+E80QIyPnBVQbYZyv20Jfk= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= 
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U= +github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/tdakkota/asciicheck v0.4.1 h1:bm0tbcmi0jezRA2b5kg4ozmMuGAFotKI3RZfrhfovg8= +github.com/tdakkota/asciicheck v0.4.1/go.mod h1:0k7M3rCfRXb0Z6bwgvkEIMleKH3kXNz9UqJ9Xuqopr8= +github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA= +github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= +github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= +github.com/tetafro/godot v1.5.0 h1:aNwfVI4I3+gdxjMgYPus9eHmoBeJIbnajOyqZYStzuw= +github.com/tetafro/godot v1.5.0/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM= +github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3 h1:y4mJRFlM6fUyPhoXuFg/Yu02fg/nIPFMOY8tOqppoFg= +github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3/go.mod h1:mkjARE7Yr8qU23YcGMSALbIxTQ9r9QBVahQOBRfU460= +github.com/timonwong/loggercheck v0.10.1 h1:uVZYClxQFpw55eh+PIoqM7uAOHMrhVcDoWDery9R8Lg= +github.com/timonwong/loggercheck v0.10.1/go.mod h1:HEAWU8djynujaAVX7QI65Myb8qgfcZ1uKbdpg3ZzKl8= +github.com/tomarrell/wrapcheck/v2 v2.10.0 h1:SzRCryzy4IrAH7bVGG4cK40tNUhmVmMDuJujy4XwYDg= +github.com/tomarrell/wrapcheck/v2 v2.10.0/go.mod h1:g9vNIyhb5/9TQgumxQyOEqDHsmGYcGsVMOx/xGkqdMo= +github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw= +github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw= +github.com/ultraware/funlen v0.2.0 h1:gCHmCn+d2/1SemTdYMiKLAHFYxTYz7z9VIDRaTGyLkI= +github.com/ultraware/funlen v0.2.0/go.mod h1:ZE0q4TsJ8T1SQcjmkhN/w+MceuatI6pBFSxxyteHIJA= +github.com/ultraware/whitespace v0.2.0 h1:TYowo2m9Nfj1baEQBjuHzvMRbp19i+RCcRYrSWoFa+g= +github.com/ultraware/whitespace v0.2.0/go.mod 
h1:XcP1RLD81eV4BW8UhQlpaR+SDc2givTvyI8a586WjW8= +github.com/uudashr/gocognit v1.2.0 h1:3BU9aMr1xbhPlvJLSydKwdLN3tEUUrzPSSM8S4hDYRA= +github.com/uudashr/gocognit v1.2.0/go.mod h1:k/DdKPI6XBZO1q7HgoV2juESI2/Ofj9AcHPZhBBdrTU= +github.com/uudashr/iface v1.3.1 h1:bA51vmVx1UIhiIsQFSNq6GZ6VPTk3WNMZgRiCe9R29U= +github.com/uudashr/iface v1.3.1/go.mod h1:4QvspiRd3JLPAEXBQ9AiZpLbJlrWWgRChOKDJEuQTdg= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/xen0n/gosmopolitan v1.2.2 h1:/p2KTnMzwRexIW8GlKawsTWOxn7UHA+jCMF/V8HHtvU= +github.com/xen0n/gosmopolitan v1.2.2/go.mod h1:7XX7Mj61uLYrj0qmeN0zi7XDon9JRAEhYQqAPLVNTeg= +github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= +github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= +github.com/yeya24/promlinter v0.3.0 h1:JVDbMp08lVCP7Y6NP3qHroGAO6z2yGKQtS5JsjqtoFs= +github.com/yeya24/promlinter v0.3.0/go.mod h1:cDfJQQYv9uYciW60QT0eeHlFodotkYZlL+YcPQN+mW4= +github.com/ykadowak/zerologlint v0.1.5 h1:Gy/fMz1dFQN9JZTPjv1hxEk+sRWm05row04Yoolgdiw= +github.com/ykadowak/zerologlint v0.1.5/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +gitlab.com/bosi/decorder v0.4.2 h1:qbQaV3zgwnBZ4zPMhGLW4KZe7A7NwxEhJx39R3shffo= +gitlab.com/bosi/decorder v0.4.2/go.mod h1:muuhHoaJkA9QLcYHq4Mj8FJUwDZ+EirSHRiaTcTf6T8= +go-simpler.org/assert v0.9.0 h1:PfpmcSvL7yAnWyChSjOz6Sp6m9j5lyK8Ok9pEL31YkQ= +go-simpler.org/assert v0.9.0/go.mod h1:74Eqh5eI6vCK6Y5l3PI8ZYFXG4Sa+tkr70OIPJAUr28= +go-simpler.org/musttag v0.13.0 h1:Q/YAW0AHvaoaIbsPj3bvEI5/QFP7w696IMUpnKXQfCE= +go-simpler.org/musttag v0.13.0/go.mod h1:FTzIGeK6OkKlUDVpj0iQUXZLUO1Js9+mvykDQy9C5yM= +go-simpler.org/sloglint v0.9.0 h1:/40NQtjRx9txvsB/RN022KsUJU+zaaSb/9q9BSefSrE= +go-simpler.org/sloglint v0.9.0/go.mod h1:G/OrAF6uxj48sHahCzrbarVMptL2kjWTaUeC8+fOGww= +go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= +go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/crypto 
v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac h1:TSSpLIG4v+p0rPv1pNOQtl1I8knsO4S9trOxNMOLVP4= +golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= +golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod 
h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= +golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= +golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= +golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo= +golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= +golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= +golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= +golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= +golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI= +golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200324003944-a576cf524670/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200329025819-fd4102a86c65/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20200820010801-b793a1359eac/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= +golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1-0.20210302220138-2ac05c832e1a/go.mod 
h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s= +golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= +golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= +golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= +golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= +golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM= +golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0= +gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= +google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= +google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo= +gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= +honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +k8s.io/api v0.34.3 h1:D12sTP257/jSH2vHV2EDYrb16bS7ULlHpdNdNhEw2S4= 
+k8s.io/api v0.34.3/go.mod h1:PyVQBF886Q5RSQZOim7DybQjAbVs8g7gwJNhGtY5MBk= +k8s.io/apiextensions-apiserver v0.34.3 h1:p10fGlkDY09eWKOTeUSioxwLukJnm+KuDZdrW71y40g= +k8s.io/apiextensions-apiserver v0.34.3/go.mod h1:aujxvqGFRdb/cmXYfcRTeppN7S2XV/t7WMEc64zB5A0= +k8s.io/apimachinery v0.34.3 h1:/TB+SFEiQvN9HPldtlWOTp0hWbJ+fjU+wkxysf/aQnE= +k8s.io/apimachinery v0.34.3/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.3 h1:wtYtpzy/OPNYf7WyNBTj3iUA0XaBHVqhv4Iv3tbrF5A= +k8s.io/client-go v0.34.3/go.mod h1:OxxeYagaP9Kdf78UrKLa3YZixMCfP6bgPwPwNBQBzpM= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372 h1:6n2yF16Z5B+r+iKN6yL6/0cRj7lI5omG5F0wuI9ZHhw= +k8s.io/kube-openapi v0.0.0-20250909170358-d67c058d9372/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck= +k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +mvdan.cc/gofumpt v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU= +mvdan.cc/gofumpt v0.7.0/go.mod h1:txVFJy/Sc/mvaycET54pV8SW8gWxTlUuGHVEcncmNUo= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f h1:lMpcwN6GxNbWtbpI1+xzFLSW8XzX0u72NttUGVFjO3U= +mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f/go.mod h1:RSLa7mKKCNeTTMHBw5Hsy2rfJmd6O2ivt9Dw9ZqCQpQ= +sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327UfMq9A= +sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg= +sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git a/images/sds-replicated-volume-controller/pkg/kubeutils/kubernetes.go b/lib/go/common/kubutils/kubernetes.go similarity index 89% rename from images/sds-replicated-volume-controller/pkg/kubeutils/kubernetes.go rename to lib/go/common/kubutils/kubernetes.go index a73ff936b..7f4e86651 100644 --- a/images/sds-replicated-volume-controller/pkg/kubeutils/kubernetes.go +++ b/lib/go/common/kubutils/kubernetes.go @@ -24,12 +24,18 @@ import ( ) func KubernetesDefaultConfigCreate() (*rest.Config, error) { + config, err := rest.InClusterConfig() + if err == nil { + return config, nil + } + clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig( clientcmd.NewDefaultClientConfigLoadingRules(), &clientcmd.ConfigOverrides{}, ) + // Get a config to talk to API server - config, err := clientConfig.ClientConfig() + config, err = clientConfig.ClientConfig() if err != nil { return nil, fmt.Errorf("config kubernetes error %w", err) } diff --git a/images/linstor-drbd-wait/pkg/logger/logger.go b/lib/go/common/logger/logger.go similarity index 96% rename from images/linstor-drbd-wait/pkg/logger/logger.go rename to 
lib/go/common/logger/logger.go index ce8489723..b94de11f1 100644 --- a/images/linstor-drbd-wait/pkg/logger/logger.go +++ b/lib/go/common/logger/logger.go @@ -58,6 +58,10 @@ func NewLogger(level Verbosity) (*Logger, error) { return &Logger{log: log}, nil } +func WrapLorg(log logr.Logger) Logger { + return Logger{log: log} +} + func (l Logger) GetLogger() logr.Logger { return l.log } diff --git a/lib/go/common/maps/maps.go b/lib/go/common/maps/maps.go new file mode 100644 index 000000000..edf6ab6e4 --- /dev/null +++ b/lib/go/common/maps/maps.go @@ -0,0 +1,44 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package maps + +import ( + "fmt" + + "golang.org/x/exp/constraints" +) + +func SetUnique[K comparable, V any](m map[K]V, key K, value V) (map[K]V, bool) { + if m == nil { + return map[K]V{key: value}, true + } + if _, ok := m[key]; !ok { + m[key] = value + return m, true + } + + return m, false +} + +func SetLowestUnused[T constraints.Integer](used map[T]struct{}, minVal, maxVal T) (map[T]struct{}, T, error) { + for v := minVal; v <= maxVal; v++ { + if usedUpd, added := SetUnique(used, v, struct{}{}); added { + return usedUpd, v, nil + } + } + return used, 0, fmt.Errorf("unable to find unused number in range [%d;%d]", minVal, maxVal) +} diff --git a/lib/go/common/reconciliation/flow/flow.go b/lib/go/common/reconciliation/flow/flow.go new file mode 100644 index 000000000..227792e38 --- /dev/null +++ b/lib/go/common/reconciliation/flow/flow.go @@ -0,0 +1,836 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package flow + +import ( + "context" + "errors" + "fmt" + "time" + + "github.com/go-logr/logr" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/log" +) + +// Package flow provides small “phase scopes” that standardize: +// - phase-scoped logging (`phase start` / `phase end` + duration), +// - panic logging + re-panic, +// - and (for reconciliation) a tiny outcome type with `ShouldReturn()` + `ToCtrl()`. +// +// There are three scopes: +// +// - ReconcileFlow: used by Reconcile methods, returns ReconcileOutcome (flow-control + error). +// - EnsureFlow: used by ensure helpers, returns EnsureOutcome (error + change tracking + optimistic lock intent). +// - StepFlow: used by “steps” that should return plain `error` (idiomatic Go). +// +// Typical usage patterns: +// +// Root reconcile (no phase logging, no OnEnd): +// +// rf := flow.BeginRootReconcile(ctx) +// // ... 
+// return rf.Done().ToCtrl() +// +// Non-root reconcile method: +// +// func (r *Reconciler) reconcileX(ctx context.Context) (outcome flow.ReconcileOutcome) { +// rf := flow.BeginReconcile(ctx, "x") +// defer rf.OnEnd(&outcome) +// // ... +// return rf.Continue() +// } +// +// Ensure helper: +// +// func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { +// ef := flow.BeginEnsure(ctx, "ensure-foo") +// defer ef.OnEnd(&outcome) +// // mutate obj ... +// return ef.Ok().ReportChangedIf(changed).RequireOptimisticLock() +// } +// +// Step helper returning error: +// +// func computeBar(ctx context.Context) (err error) { +// sf := flow.BeginStep(ctx, "compute-bar") +// defer sf.OnEnd(&err) +// // ... +// return sf.Errf("bad input: %s", x) +// } +// +// ============================================================================= +// Common utilities +// ============================================================================= + +// Wrapf wraps err with formatted context. +// +// It returns nil if err is nil. +// +// Example: +// +// return flow.Wrapf(err, "patching Foo") +func Wrapf(err error, format string, args ...any) error { + if err == nil { + return nil + } + msg := fmt.Sprintf(format, args...) + return fmt.Errorf("%s: %w", msg, err) +} + +// phaseContextKey is a private context key for phase metadata. +type phaseContextKey struct{} + +// phaseContextValue is the minimal metadata OnEnd needs for consistent logging. +type phaseContextValue struct { + name string + kv []string + start time.Time +} + +// panicToError converts a recovered panic value to an error. +func panicToError(r any) error { + if err, ok := r.(error); ok { + return Wrapf(err, "panic") + } + return fmt.Errorf("panic: %v", r) +} + +// mustBeValidPhaseName validates phaseName used by Begin* and panics on invalid input. +// +// This is treated as a programmer error (hence panic), not a runtime failure. +func mustBeValidPhaseName(name string) { + if name == "" { + panic("flow: phaseName must be non-empty") + } + + segLen := 0 + for i := 0; i < len(name); i++ { + c := name[i] + + // Disallow whitespace and control chars. + if c <= ' ' || c == 0x7f { + panic("flow: phaseName contains whitespace/control characters: " + name) + } + + if c == '/' { + // Empty segments and trailing '/' are not allowed. + if segLen == 0 { + panic("flow: phaseName must not contain empty segments (e.g. leading '//' or trailing '/'): " + name) + } + segLen = 0 + continue + } + + // Recommended: ascii identifiers with separators. + isLetter := (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') + isDigit := c >= '0' && c <= '9' + isAllowedPunct := c == '-' || c == '_' || c == '.' + if !isLetter && !isDigit && !isAllowedPunct { + panic("flow: phaseName contains unsupported character '" + string([]byte{c}) + "': " + name) + } + + segLen++ + } + + if segLen == 0 { + panic("flow: phaseName must not end with '/': " + name) + } +} + +// mustBeValidKV validates that kv has an even number of elements (key/value pairs). +// Panics on invalid input to surface programmer errors early. +func mustBeValidKV(kv []string) { + if len(kv)%2 != 0 { + panic("flow: kv must contain even number of elements (key/value pairs)") + } +} + +// buildPhaseLogger builds a phase-scoped logger: `WithName(phaseName)` + `WithValues(kv...)`. 
+func buildPhaseLogger(ctx context.Context, phaseName string, kv []string) logr.Logger { + l := log.FromContext(ctx).WithName(phaseName) + if len(kv) > 0 { + anyKV := make([]any, 0, len(kv)) + for _, v := range kv { + anyKV = append(anyKV, v) + } + l = l.WithValues(anyKV...) + } + return l +} + +// storePhaseContext attaches the logger to ctx and stores metadata needed by OnEnd. +func storePhaseContext(ctx context.Context, l logr.Logger, phaseName string, kv []string) context.Context { + ctx = log.IntoContext(ctx, l) + kvCopy := append([]string(nil), kv...) + ctx = context.WithValue(ctx, phaseContextKey{}, phaseContextValue{ + name: phaseName, + kv: kvCopy, + start: time.Now(), + }) + return ctx +} + +// getPhaseContext reads metadata stored by Begin* (if any). +func getPhaseContext(ctx context.Context) (phaseContextValue, bool) { + v, ok := ctx.Value(phaseContextKey{}).(phaseContextValue) + return v, ok && v.name != "" +} + +// ============================================================================= +// ReconcileFlow and ReconcileOutcome +// ============================================================================= + +// ReconcileFlow is a phase scope for Reconcile methods. +// +// Use it to: +// - get a phase-scoped ctx/logger (`Ctx()`/`Log()`), +// - construct ReconcileOutcome values (`Continue/Done/Requeue/RequeueAfter/Fail`), +// - and to standardize phase end handling via `defer rf.OnEnd(&outcome)` in non-root reconciles. +type ReconcileFlow struct { + ctx context.Context + log logr.Logger +} + +// Ctx returns a context with a phase-scoped logger attached. +func (rf ReconcileFlow) Ctx() context.Context { return rf.ctx } + +// Log returns the phase-scoped logger. +func (rf ReconcileFlow) Log() logr.Logger { return rf.log } + +// BeginRootReconcile starts the root reconcile scope. +// +// This is intentionally minimal: it does not log `phase start/end` and it does not use `OnEnd`. +// Root reconcile is expected to return via `outcome.ToCtrl()`. +func BeginRootReconcile(ctx context.Context) ReconcileFlow { + l := log.FromContext(ctx) + return ReconcileFlow{ctx: ctx, log: l} +} + +// BeginReconcile starts a non-root reconciliation phase. +// +// Intended usage: +// +// func (...) (outcome flow.ReconcileOutcome) { +// rf := flow.BeginReconcile(ctx, "my-phase", "k", "v") +// defer rf.OnEnd(&outcome) +// // ... +// } +func BeginReconcile(ctx context.Context, phaseName string, kv ...string) ReconcileFlow { + mustBeValidPhaseName(phaseName) + mustBeValidKV(kv) + + l := buildPhaseLogger(ctx, phaseName, kv) + l.V(1).Info("phase start") + + ctx = storePhaseContext(ctx, l, phaseName, kv) + return ReconcileFlow{ctx: ctx, log: l} +} + +// OnEnd is the deferred “phase end handler” for non-root reconciles. +// +// What it does: +// - logs `phase end` (and duration if available), +// - if the outcome has an error, logs it at Error level exactly once across nested phases, +// - if the phase panics, logs `phase panic` and re-panics. 
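+//
+// The outcome's error is logged here exactly once; call sites only need to propagate the outcome.
+// Illustrative call-site sketch (reconcileChild is a hypothetical sub-reconcile method):
+//
+//	if outcome := r.reconcileChild(rf.Ctx()); outcome.ShouldReturn() {
+//		return outcome
+//	}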
+func (rf ReconcileFlow) OnEnd(out *ReconcileOutcome) { + if r := recover(); r != nil { + err := panicToError(r) + rf.log.Error(err, "phase panic") + panic(r) + } + + v, ok := getPhaseContext(rf.ctx) + if !ok { + return + } + + if out == nil { + panic("flow: ReconcileFlow.OnEnd: outcome is nil") + } + + kind, requeueAfter := reconcileOutcomeKind(out) + + fields := []any{ + "result", kind, + "hasError", out.err != nil, + } + if requeueAfter > 0 { + fields = append(fields, "requeueAfter", requeueAfter) + } + if !v.start.IsZero() { + fields = append(fields, "duration", time.Since(v.start)) + } + + // Emit exactly one log record per phase end. + // Error is logged exactly once: at the first phase that encounters it. + if out.err != nil && !out.errorLogged { + rf.log.Error(out.err, "phase end", fields...) + out.errorLogged = true + return + } + rf.log.V(1).Info("phase end", fields...) +} + +// Continue indicates “keep executing” within the current Reconcile method. +// `ShouldReturn()` is false. +func (rf ReconcileFlow) Continue() ReconcileOutcome { + return ReconcileOutcome{} +} + +// Done indicates “stop and return; do not requeue”. +// `ShouldReturn()` is true. +func (rf ReconcileFlow) Done() ReconcileOutcome { + return ReconcileOutcome{result: &ctrl.Result{}} +} + +// Requeue indicates “stop and return; requeue immediately”. +// `ShouldReturn()` is true. +func (rf ReconcileFlow) Requeue() ReconcileOutcome { + return ReconcileOutcome{result: &ctrl.Result{Requeue: true}} +} + +// RequeueAfter indicates “stop and return; requeue after d”. +// `ShouldReturn()` is true. +func (rf ReconcileFlow) RequeueAfter(d time.Duration) ReconcileOutcome { + if d <= 0 { + panic("flow: RequeueAfter: duration must be > 0") + } + return ReconcileOutcome{result: &ctrl.Result{RequeueAfter: d}} +} + +// Fail indicates “stop and return with error”. +// `ShouldReturn()` is true. +func (rf ReconcileFlow) Fail(err error) ReconcileOutcome { + if err == nil { + panic("flow: Fail: nil error") + } + return ReconcileOutcome{result: &ctrl.Result{}, err: err} +} + +// Failf is a convenience wrapper around `Fail(Wrapf(...))`. +func (rf ReconcileFlow) Failf(err error, format string, args ...any) ReconcileOutcome { + return rf.Fail(Wrapf(err, format, args...)) +} + +// DoneOrFail returns Done() if err is nil, or Fail(err) otherwise. +// Useful for propagating errors from final operations like patches. +func (rf ReconcileFlow) DoneOrFail(err error) ReconcileOutcome { + if err != nil { + return rf.Fail(err) + } + return rf.Done() +} + +// ReconcileOutcome is the return value for Reconcile methods. +// +// Typical usage is: +// - declare `outcome flow.ReconcileOutcome` as a named return, +// - return `rf.Continue()/Done()/Requeue.../Fail...`, +// - and use `outcome.ShouldReturn()` at intermediate boundaries to early-exit. +type ReconcileOutcome struct { + result *ctrl.Result + err error + errorLogged bool +} + +// ShouldReturn reports whether the caller should return from the current Reconcile method. +func (o ReconcileOutcome) ShouldReturn() bool { return o.result != nil } + +// Error returns the error carried by the outcome, if any. +func (o ReconcileOutcome) Error() error { return o.err } + +// Enrichf adds local context to an existing error (no-op if there is no error). +// +// Example: +// +// return rf.Fail(err).Enrichf("patching ReplicatedVolume") +func (o ReconcileOutcome) Enrichf(format string, args ...any) ReconcileOutcome { + if o.err == nil { + return o + } + o.err = Wrapf(o.err, format, args...) 
+ return o +} + +// ToCtrl converts ReconcileOutcome to controller-runtime return values. +// +// For Continue (result=nil), this returns `(ctrl.Result{}, nil)` (or `(ctrl.Result{}, err)` if you built an invalid outcome). +// For non-Continue outcomes, this returns the explicit ctrl.Result + error. +func (o ReconcileOutcome) ToCtrl() (ctrl.Result, error) { + if o.result == nil { + return ctrl.Result{}, o.err + } + return *o.result, o.err +} + +// MustToCtrl converts ReconcileOutcome to controller-runtime return values. +// It panics if called on Continue. +func (o ReconcileOutcome) MustToCtrl() (ctrl.Result, error) { + if o.result == nil { + panic("flow: ReconcileOutcome.MustToCtrl: result is nil (Continue)") + } + return *o.result, o.err +} + +// Merge combines this outcome with others and returns the merged result. +// +// This is a convenience method for chaining: outcome = outcome.Merge(a, b). +func (o ReconcileOutcome) Merge(others ...ReconcileOutcome) ReconcileOutcome { + return MergeReconciles(append([]ReconcileOutcome{o}, others...)...) +} + +// MergeReconciles combines multiple ReconcileOutcome values into one. +// +// Use this when you intentionally want to run multiple independent steps and then aggregate the decision. +// +// Rules (high-level): +// - Errors are joined via errors.Join (any error makes the merged outcome a Fail). +// - Requeue/RequeueAfter: treat Requeue as delay=0, RequeueAfter(d) as delay=d, pick minimum delay. +// - Done wins over Continue. +// +// Example: +// +// outcome := MergeReconciles(stepA(...), stepB(...)) +// if outcome.ShouldReturn() { return outcome } +func MergeReconciles(outcomes ...ReconcileOutcome) ReconcileOutcome { + if len(outcomes) == 0 { + return ReconcileOutcome{} + } + + const ( + noDelay time.Duration = -1 // sentinel: no requeue requested + immediateDelay time.Duration = 0 // Requeue() means delay=0 + ) + + var ( + hasReconcileResult bool + minDelay = noDelay + errs []error + allErrorsLogged = true + ) + + for _, o := range outcomes { + if o.err != nil { + errs = append(errs, o.err) + allErrorsLogged = allErrorsLogged && o.errorLogged + } + + if o.result == nil { + continue + } + hasReconcileResult = true + + // Compute delay for this outcome: Requeue → 0, RequeueAfter(d) → d + delay := noDelay + if o.result.Requeue { //nolint:staticcheck // handling deprecated Requeue field for backward compatibility + delay = immediateDelay + } else if o.result.RequeueAfter > 0 { + delay = o.result.RequeueAfter + } + + // Pick minimum delay (noDelay means "no requeue requested") + if delay != noDelay { + if minDelay == noDelay || delay < minDelay { + minDelay = delay + } + } + } + + combinedErr := errors.Join(errs...) + + // 1) Fail: if there are errors. + if combinedErr != nil { + return ReconcileOutcome{ + result: &ctrl.Result{}, + err: combinedErr, + errorLogged: allErrorsLogged, + } + } + + // 2) Requeue/RequeueAfter: minDelay wins. + if minDelay == immediateDelay { + return ReconcileOutcome{result: &ctrl.Result{Requeue: true}} + } + if minDelay > immediateDelay { + return ReconcileOutcome{result: &ctrl.Result{RequeueAfter: minDelay}} + } + + // 3) Done: at least one non-nil result (no requeue requested). + if hasReconcileResult { + return ReconcileOutcome{result: &ctrl.Result{}} + } + + // 4) Continue. + return ReconcileOutcome{} +} + +// reconcileOutcomeKind classifies the outcome for phase-end logging. 
+func reconcileOutcomeKind(o *ReconcileOutcome) (kind string, requeueAfter time.Duration) { + if o == nil { + panic("flow: reconcileOutcomeKind: outcome is nil") + } + + if o.result == nil { + if o.err != nil { + return "invalid", 0 + } + return "continue", 0 + } + + if o.result.Requeue { //nolint:staticcheck // handling deprecated Requeue field for backward compatibility + return "requeue", 0 + } + + if o.result.RequeueAfter > 0 { + return "requeueAfter", o.result.RequeueAfter + } + + if o.err != nil { + return "fail", 0 + } + + return "done", 0 +} + +// ============================================================================= +// EnsureFlow and EnsureOutcome +// ============================================================================= + +// changeState is internal ordering for EnsureOutcome merge semantics. +type changeState uint8 + +const ( + unchangedState changeState = iota + changedState + changedAndOptimisticLockRequiredState +) + +// EnsureFlow is a phase scope for ensure helpers. +// +// Ensure helpers typically mutate an object in-memory (one patch domain) and must report: +// - whether they changed the object (DidChange), +// - whether the subsequent save should use optimistic locking, +// - and whether they encountered an error. +type EnsureFlow struct { + ctx context.Context + log logr.Logger +} + +// Ctx returns a context with a phase-scoped logger attached. +func (ef EnsureFlow) Ctx() context.Context { return ef.ctx } + +// Log returns the phase-scoped logger. +func (ef EnsureFlow) Log() logr.Logger { return ef.log } + +// BeginEnsure starts an ensure phase. +// +// Intended usage: +// +// func ensureFoo(ctx context.Context, obj *v1alpha1.Foo) (outcome flow.EnsureOutcome) { +// ef := flow.BeginEnsure(ctx, "ensure-foo") +// defer ef.OnEnd(&outcome) +// // mutate obj ... +// return ef.Ok().ReportChangedIf(changed) +// } +func BeginEnsure(ctx context.Context, phaseName string, kv ...string) EnsureFlow { + mustBeValidPhaseName(phaseName) + mustBeValidKV(kv) + + l := buildPhaseLogger(ctx, phaseName, kv) + l.V(1).Info("phase start") + + ctx = storePhaseContext(ctx, l, phaseName, kv) + return EnsureFlow{ctx: ctx, log: l} +} + +// OnEnd is the deferred “phase end handler” for ensure helpers. +// +// What it does: +// - logs `phase end` with `changed`, `optimisticLock`, `hasError`, and duration, +// - if the phase panics, logs `phase panic` and re-panics. +func (ef EnsureFlow) OnEnd(out *EnsureOutcome) { + if r := recover(); r != nil { + err := panicToError(r) + ef.log.Error(err, "phase panic") + panic(r) + } + + v, ok := getPhaseContext(ef.ctx) + if !ok { + return + } + + if out == nil { + panic("flow: EnsureFlow.OnEnd: outcome is nil") + } + + fields := []any{ + "changed", out.DidChange(), + "optimisticLock", out.OptimisticLockRequired(), + "hasError", out.err != nil, + } + if !v.start.IsZero() { + fields = append(fields, "duration", time.Since(v.start)) + } + + if out.err != nil { + ef.log.Error(out.err, "phase end", fields...) + return + } + ef.log.V(1).Info("phase end", fields...) +} + +// Ok returns an EnsureOutcome indicating success (no error, no change). +func (ef EnsureFlow) Ok() EnsureOutcome { + return EnsureOutcome{} +} + +// Err returns an EnsureOutcome with an error. +func (ef EnsureFlow) Err(err error) EnsureOutcome { + return EnsureOutcome{err: err} +} + +// Errf returns an EnsureOutcome with a formatted error. 
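+//
+// Example (illustrative; the message and value are arbitrary):
+//
+//	return ef.Errf("unexpected replica count: %d", count)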
+func (ef EnsureFlow) Errf(format string, args ...any) EnsureOutcome { + return EnsureOutcome{err: fmt.Errorf(format, args...)} +} + +// EnsureOutcome is the return value for ensure helpers. +// +// It reports: +// - Error(): whether the helper failed, +// - DidChange(): whether the helper mutated the object, +// - OptimisticLockRequired(): whether the subsequent save should use optimistic locking. +// +// Typical pattern: +// +// changed := false +// // mutate obj; set changed=true if needed +// return ef.Ok().ReportChangedIf(changed).RequireOptimisticLock() +type EnsureOutcome struct { + err error + changeState changeState + changeReported bool +} + +// Error returns the error carried by the outcome, if any. +func (o EnsureOutcome) Error() error { return o.err } + +// Enrichf adds local context to an existing error (no-op if there is no error). +func (o EnsureOutcome) Enrichf(format string, args ...any) EnsureOutcome { + if o.err == nil { + return o + } + o.err = Wrapf(o.err, format, args...) + return o +} + +// ReportChanged marks that the helper changed the object. +func (o EnsureOutcome) ReportChanged() EnsureOutcome { + o.changeReported = true + if o.changeState == unchangedState { + o.changeState = changedState + } + return o +} + +// ReportChangedIf is like ReportChanged, but records a change only when cond is true. +// +// Call this even for “no change” paths to make subsequent use of RequireOptimisticLock explicit and safe: +// +// return ef.Ok().ReportChangedIf(changed).RequireOptimisticLock() +func (o EnsureOutcome) ReportChangedIf(cond bool) EnsureOutcome { + o.changeReported = true + if cond && o.changeState == unchangedState { + o.changeState = changedState + } + return o +} + +// DidChange reports whether the outcome records a change. +func (o EnsureOutcome) DidChange() bool { return o.changeState >= changedState } + +// RequireOptimisticLock returns a copy of EnsureOutcome that requires optimistic locking. +// +// Contract: it must be called only after ReportChanged/ReportChangedIf; otherwise it panics +// (this is a guard against forgetting change reporting in ensure helpers). +func (o EnsureOutcome) RequireOptimisticLock() EnsureOutcome { + if !o.changeReported { + panic("flow: EnsureOutcome.RequireOptimisticLock called before ReportChanged/ReportChangedIf") + } + if o.changeState == changedState { + o.changeState = changedAndOptimisticLockRequiredState + } + return o +} + +// OptimisticLockRequired reports whether the outcome requires optimistic locking. +func (o EnsureOutcome) OptimisticLockRequired() bool { + return o.changeState >= changedAndOptimisticLockRequiredState +} + +// Merge combines this outcome with others and returns the merged result. +// +// This is a convenience method for chaining: eo = eo.Merge(a, b). +func (o EnsureOutcome) Merge(others ...EnsureOutcome) EnsureOutcome { + return MergeEnsures(append([]EnsureOutcome{o}, others...)...) +} + +// MergeEnsures combines multiple EnsureOutcome values into one. +// +// Use this to aggregate outcomes of multiple sub-ensures within the same ensure helper. +// +// - Errors are joined via errors.Join. +// - Change/lock intent is merged deterministically (strongest wins). 
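+//
+// Example (illustrative; ensureFinalizer and ensureSpec are hypothetical sub-ensures):
+//
+//	eo := MergeEnsures(ensureFinalizer(obj), ensureSpec(obj))
+//	if eo.Error() != nil { /* propagate the error */ }
+//	if eo.DidChange() { /* save obj, honoring eo.OptimisticLockRequired() */ }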
+func MergeEnsures(outcomes ...EnsureOutcome) EnsureOutcome { + if len(outcomes) == 0 { + return EnsureOutcome{} + } + + var ( + errs []error + maxChangeState changeState + anyChangeReported bool + ) + + for _, o := range outcomes { + if o.err != nil { + errs = append(errs, o.err) + } + + anyChangeReported = anyChangeReported || o.changeReported + + if o.changeState > maxChangeState { + maxChangeState = o.changeState + } + } + + return EnsureOutcome{ + err: errors.Join(errs...), + changeState: maxChangeState, + changeReported: anyChangeReported, + } +} + +// ============================================================================= +// StepFlow +// ============================================================================= + +// StepFlow is a phase scope for steps that should return plain `error`. +// +// This is useful when you want phase logging/panic handling but do not want flow-control outcomes. +type StepFlow struct { + ctx context.Context + log logr.Logger +} + +// Ctx returns a context with a phase-scoped logger attached. +func (sf StepFlow) Ctx() context.Context { return sf.ctx } + +// Log returns the phase-scoped logger. +func (sf StepFlow) Log() logr.Logger { return sf.log } + +// BeginStep starts a step phase. +// +// Intended usage: +// +// func computeFoo(ctx context.Context) (err error) { +// sf := flow.BeginStep(ctx, "compute-foo") +// defer sf.OnEnd(&err) +// // ... +// return nil +// } +func BeginStep(ctx context.Context, phaseName string, kv ...string) StepFlow { + mustBeValidPhaseName(phaseName) + mustBeValidKV(kv) + + l := buildPhaseLogger(ctx, phaseName, kv) + l.V(1).Info("phase start") + + ctx = storePhaseContext(ctx, l, phaseName, kv) + return StepFlow{ctx: ctx, log: l} +} + +// OnEnd is the deferred “phase end handler” for step functions that return `error`. +// +// What it does: +// - logs `phase end` with `hasError` and duration, +// - if the phase panics, logs `phase panic` and re-panics. +func (sf StepFlow) OnEnd(err *error) { + if r := recover(); r != nil { + panicErr := panicToError(r) + sf.log.Error(panicErr, "phase panic") + panic(r) + } + + v, ok := getPhaseContext(sf.ctx) + if !ok { + return + } + + if err == nil { + panic("flow: StepFlow.OnEnd: err is nil") + } + + fields := []any{ + "hasError", *err != nil, + } + if !v.start.IsZero() { + fields = append(fields, "duration", time.Since(v.start)) + } + + if *err != nil { + sf.log.Error(*err, "phase end", fields...) + return + } + sf.log.V(1).Info("phase end", fields...) +} + +// Ok returns nil (success). +func (sf StepFlow) Ok() error { return nil } + +// Err returns the error as-is. Panics if err is nil. +func (sf StepFlow) Err(err error) error { + if err == nil { + panic("flow: StepFlow.Err: nil error") + } + return err +} + +// Errf returns a formatted error. +func (sf StepFlow) Errf(format string, args ...any) error { + return fmt.Errorf(format, args...) +} + +// Enrichf wraps err with formatted context. Returns nil if err is nil. +// +// Example: +// +// return sf.Enrichf(err, "doing something") +func (sf StepFlow) Enrichf(err error, format string, args ...any) error { + return Wrapf(err, format, args...) +} + +// MergeSteps combines multiple errors into one via errors.Join. +// +// This is useful when you want to run multiple independent sub-steps and return a single error: +// +// return MergeSteps(errA, errB, errC) +func MergeSteps(errs ...error) error { + return errors.Join(errs...) 
+} diff --git a/lib/go/common/reconciliation/flow/flow_test.go b/lib/go/common/reconciliation/flow/flow_test.go new file mode 100644 index 000000000..3bf01cbe0 --- /dev/null +++ b/lib/go/common/reconciliation/flow/flow_test.go @@ -0,0 +1,901 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package flow_test + +import ( + "context" + "errors" + "strings" + "testing" + "time" + + "github.com/go-logr/zapr" + "go.uber.org/zap" + "go.uber.org/zap/zapcore" + "go.uber.org/zap/zaptest/observer" + "sigs.k8s.io/controller-runtime/pkg/log" + + "github.com/deckhouse/sds-replicated-volume/lib/go/common/reconciliation/flow" +) + +func mustPanic(t *testing.T, fn func()) { + t.Helper() + defer func() { + if r := recover(); r == nil { + t.Fatalf("expected panic") + } + }() + fn() +} + +func mustNotPanic(t *testing.T, fn func()) { + t.Helper() + defer func() { + if r := recover(); r != nil { + t.Fatalf("unexpected panic: %v", r) + } + }() + fn() +} + +// ============================================================================= +// Wrapf tests +// ============================================================================= + +func TestWrapf_NilError(t *testing.T) { + if got := flow.Wrapf(nil, "x %d", 1); got != nil { + t.Fatalf("expected nil, got %v", got) + } +} + +func TestWrapf_Unwrap(t *testing.T) { + base := errors.New("base") + wrapped := flow.Wrapf(base, "x") + if !errors.Is(wrapped, base) { + t.Fatalf("expected errors.Is(wrapped, base) == true; wrapped=%v", wrapped) + } +} + +func TestWrapf_Formatting(t *testing.T) { + base := errors.New("base") + wrapped := flow.Wrapf(base, "hello %s %d", "a", 1) + + s := wrapped.Error() + if !strings.Contains(s, "hello a 1") { + t.Fatalf("expected wrapped error string to contain formatted prefix; got %q", s) + } + if !strings.Contains(s, base.Error()) { + t.Fatalf("expected wrapped error string to contain base error string; got %q", s) + } +} + +// ============================================================================= +// ReconcileFlow and ReconcileOutcome tests +// ============================================================================= + +func TestReconcileFlow_Fail_NilPanics(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + mustPanic(t, func() { _ = rf.Fail(nil) }) +} + +func TestReconcileFlow_RequeueAfter_ZeroPanics(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + mustPanic(t, func() { _ = rf.RequeueAfter(0) }) +} + +func TestReconcileFlow_RequeueAfter_NegativePanics(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + mustPanic(t, func() { _ = rf.RequeueAfter(-1 * time.Second) }) +} + +func TestReconcileFlow_RequeueAfter_Positive(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + outcome := rf.RequeueAfter(1 * time.Second) + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + + res, err := outcome.ToCtrl() + if err != nil { + t.Fatalf("expected err to be nil, got %v", err) + } + if res.RequeueAfter != 1*time.Second { 
+ t.Fatalf("expected RequeueAfter to be %v, got %v", 1*time.Second, res.RequeueAfter) + } +} + +func TestReconcileFlow_Requeue(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + outcome := rf.Requeue() + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + + res, err := outcome.ToCtrl() + if err != nil { + t.Fatalf("expected err to be nil, got %v", err) + } + if !res.Requeue { //nolint:staticcheck // testing deprecated Requeue field + t.Fatalf("expected Requeue to be true") + } +} + +func TestMergeReconciles_DoneWinsOverContinue(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + outcome := flow.MergeReconciles(rf.Done(), rf.Continue()) + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + if outcome.Error() != nil { + t.Fatalf("expected Error() == nil, got %v", outcome.Error()) + } +} + +func TestMergeReconciles_RequeueAfterChoosesSmallest(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + outcome := flow.MergeReconciles(rf.RequeueAfter(5*time.Second), rf.RequeueAfter(1*time.Second)) + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + res, err := outcome.ToCtrl() + if err != nil { + t.Fatalf("expected err to be nil, got %v", err) + } + if res.RequeueAfter != 1*time.Second { + t.Fatalf("expected RequeueAfter to be %v, got %v", 1*time.Second, res.RequeueAfter) + } +} + +func TestMergeReconciles_FailAndDoneBecomesFail(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + e := errors.New("e") + outcome := flow.MergeReconciles(rf.Fail(e), rf.Done()) + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + + _, err := outcome.ToCtrl() + if err == nil { + t.Fatalf("expected err to be non-nil") + } + if !errors.Is(err, e) { + t.Fatalf("expected errors.Is(err, e) == true; err=%v", err) + } +} + +func TestMergeReconciles_FailOnlyStaysFail(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + e := errors.New("e") + outcome := flow.MergeReconciles(rf.Fail(e)) + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + + _, err := outcome.ToCtrl() + if !errors.Is(err, e) { + t.Fatalf("expected errors.Is(err, e) == true; err=%v", err) + } +} + +func TestReconcileOutcome_Error(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + if rf.Continue().Error() != nil { + t.Fatalf("expected Error() == nil for Continue()") + } + + e := errors.New("e") + if got := rf.Fail(e).Error(); got == nil || !errors.Is(got, e) { + t.Fatalf("expected Error() to contain %v, got %v", e, got) + } +} + +func TestReconcileOutcome_Enrichf_IsNoOpWhenNil(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + outcome := rf.Continue().Enrichf("hello %s %d", "a", 1) + if outcome.Error() != nil { + t.Fatalf("expected Error() to stay nil, got %v", outcome.Error()) + } +} + +func TestReconcileOutcome_Enrichf_WrapsExistingError(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + base := errors.New("base") + + outcome := rf.Fail(base).Enrichf("ctx %s", "x") + if outcome.Error() == nil { + t.Fatalf("expected Error() to be non-nil") + } + if !errors.Is(outcome.Error(), base) { + t.Fatalf("expected errors.Is(outcome.Error(), base) == true; err=%v", outcome.Error()) + } + if got := outcome.Error().Error(); !strings.Contains(got, "ctx x") { + t.Fatalf("expected wrapped error to contain formatted prefix; got %q", got) + } +} + +func 
TestReconcileOutcome_Enrichf_DoesNotAlterReturnDecision(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + outcome := rf.RequeueAfter(1 * time.Second).Enrichf("x") + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + res, _ := outcome.MustToCtrl() + if res.RequeueAfter != 1*time.Second { + t.Fatalf("expected RequeueAfter to be preserved, got %v", res.RequeueAfter) + } +} + +func TestReconcileOutcome_MustToCtrl_PanicsOnContinue(t *testing.T) { + rf := flow.BeginRootReconcile(context.Background()) + mustPanic(t, func() { _, _ = rf.Continue().MustToCtrl() }) +} + +// ============================================================================= +// EnsureFlow and EnsureOutcome tests +// ============================================================================= + +func TestEnsureOutcome_DidChange(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + if ef.Ok().DidChange() { + t.Fatalf("expected DidChange() == false for Ok()") + } + if !ef.Ok().ReportChanged().DidChange() { + t.Fatalf("expected DidChange() == true after ReportChanged()") + } + if ef.Ok().ReportChangedIf(false).DidChange() { + t.Fatalf("expected DidChange() == false for ReportChangedIf(false)") + } +} + +func TestEnsureOutcome_OptimisticLockRequired(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + if ef.Ok().OptimisticLockRequired() { + t.Fatalf("expected OptimisticLockRequired() == false for Ok()") + } + + if ef.Ok().ReportChanged().OptimisticLockRequired() { + t.Fatalf("expected OptimisticLockRequired() == false after ReportChanged()") + } + + o := ef.Ok().ReportChanged().RequireOptimisticLock() + if !o.OptimisticLockRequired() { + t.Fatalf("expected OptimisticLockRequired() == true after ReportChanged().RequireOptimisticLock()") + } +} + +func TestEnsureOutcome_RequireOptimisticLock_PanicsWithoutChangeReported(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + mustPanic(t, func() { _ = ef.Ok().RequireOptimisticLock() }) +} + +func TestEnsureOutcome_RequireOptimisticLock_DoesNotPanicAfterReportChangedIfFalse(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + mustNotPanic(t, func() { _ = ef.Ok().ReportChangedIf(false).RequireOptimisticLock() }) + + o := ef.Ok().ReportChangedIf(false).RequireOptimisticLock() + if o.OptimisticLockRequired() { + t.Fatalf("expected OptimisticLockRequired() == false when no change was reported") + } + if o.DidChange() { + t.Fatalf("expected DidChange() == false when no change was reported") + } +} + +func TestEnsureOutcome_Error(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + if ef.Ok().Error() != nil { + t.Fatalf("expected Error() == nil for Ok()") + } + + e := errors.New("e") + if got := ef.Err(e).Error(); got == nil || !errors.Is(got, e) { + t.Fatalf("expected Error() to contain %v, got %v", e, got) + } +} + +func TestEnsureFlow_Err_NilIsAllowed(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + // Unlike ReconcileFlow.Fail, EnsureFlow.Err(nil) is allowed and equivalent to Ok() + o := ef.Err(nil) + if o.Error() != nil { + t.Fatalf("expected Error() 
== nil for Err(nil), got %v", o.Error()) + } +} + +func TestEnsureOutcome_Enrichf(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + base := errors.New("base") + o := ef.Err(base).Enrichf("ctx %s", "x") + if o.Error() == nil { + t.Fatalf("expected Error() to be non-nil") + } + if !errors.Is(o.Error(), base) { + t.Fatalf("expected errors.Is(o.Error(), base) == true; err=%v", o.Error()) + } + if got := o.Error().Error(); !strings.Contains(got, "ctx x") { + t.Fatalf("expected wrapped error to contain formatted prefix; got %q", got) + } +} + +func TestMergeEnsures_ChangeTracking_DidChange(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + o := flow.MergeEnsures(ef.Ok(), ef.Ok().ReportChanged()) + if !o.DidChange() { + t.Fatalf("expected merged outcome to report DidChange() == true") + } + if o.OptimisticLockRequired() { + t.Fatalf("expected merged outcome to not require optimistic lock") + } +} + +func TestMergeEnsures_ChangeTracking_OptimisticLockRequired(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + o := flow.MergeEnsures( + ef.Ok().ReportChanged(), + ef.Ok().ReportChanged().RequireOptimisticLock(), + ) + if !o.DidChange() { + t.Fatalf("expected merged outcome to report DidChange() == true") + } + if !o.OptimisticLockRequired() { + t.Fatalf("expected merged outcome to require optimistic lock") + } +} + +func TestMergeEnsures_ChangeTracking_ChangeReportedOr(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + o := flow.MergeEnsures(ef.Ok(), ef.Ok().ReportChangedIf(false)) + + // ReportChangedIf(false) does not report a semantic change, but it does report that change tracking was used. + if o.DidChange() { + t.Fatalf("expected merged outcome DidChange() == false") + } + + // This call should not panic because MergeEnsures ORs the changeReported flag, even if no semantic change happened. 
+ mustNotPanic(t, func() { _ = o.RequireOptimisticLock() }) + + o = o.RequireOptimisticLock() + if o.OptimisticLockRequired() { + t.Fatalf("expected OptimisticLockRequired() == false when no change was reported") + } +} + +func TestMergeEnsures_ErrorsJoined(t *testing.T) { + ef := flow.BeginEnsure(context.Background(), "test") + var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + e1 := errors.New("e1") + e2 := errors.New("e2") + o := flow.MergeEnsures(ef.Err(e1), ef.Err(e2)) + + if o.Error() == nil { + t.Fatalf("expected Error() to be non-nil") + } + if !errors.Is(o.Error(), e1) { + t.Fatalf("expected errors.Is(o.Error(), e1) == true; err=%v", o.Error()) + } + if !errors.Is(o.Error(), e2) { + t.Fatalf("expected errors.Is(o.Error(), e2) == true; err=%v", o.Error()) + } +} + +// ============================================================================= +// StepFlow tests +// ============================================================================= + +func TestStepFlow_Ok(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + if sf.Ok() != nil { + t.Fatalf("expected Ok() to return nil") + } +} + +func TestStepFlow_Err(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + e := errors.New("e") + if got := sf.Err(e); got != e { + t.Fatalf("expected Err(e) to return e, got %v", got) + } +} + +func TestStepFlow_Errf(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + got := sf.Errf("hello %s %d", "a", 1) + if got == nil { + t.Fatalf("expected Errf() to return non-nil") + } + if !strings.Contains(got.Error(), "hello a 1") { + t.Fatalf("expected error string to contain formatted message; got %q", got.Error()) + } +} + +func TestMergeSteps(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + e1 := errors.New("e1") + e2 := errors.New("e2") + got := flow.MergeSteps(e1, e2) + + if got == nil { + t.Fatalf("expected MergeSteps() to return non-nil") + } + if !errors.Is(got, e1) { + t.Fatalf("expected errors.Is(got, e1) == true; got=%v", got) + } + if !errors.Is(got, e2) { + t.Fatalf("expected errors.Is(got, e2) == true; got=%v", got) + } +} + +func TestMergeSteps_AllNil(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + got := flow.MergeSteps(nil, nil) + if got != nil { + t.Fatalf("expected MergeSteps(nil, nil) to return nil, got %v", got) + } +} + +func TestMergeSteps_SomeNil(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + e := errors.New("e") + got := flow.MergeSteps(nil, e, nil) + + if got == nil { + t.Fatalf("expected MergeSteps() to return non-nil") + } + if !errors.Is(got, e) { + t.Fatalf("expected errors.Is(got, e) == true; got=%v", got) + } +} + +func TestStepFlow_Err_NilPanics(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + defer func() { + if r := recover(); r == nil { + t.Fatalf("expected panic on Err(nil)") + } + }() + + _ = sf.Err(nil) +} + +func TestStepFlow_Enrichf(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + e := errors.New("original") + got := sf.Enrichf(e, "context %d", 42) + + if got == nil { + t.Fatalf("expected Enrichf() to return non-nil") + } + if !errors.Is(got, e) { + 
t.Fatalf("expected errors.Is(got, e) == true; got=%v", got) + } + if !strings.Contains(got.Error(), "context 42") { + t.Fatalf("expected error string to contain 'context 42'; got %q", got.Error()) + } +} + +func TestStepFlow_Enrichf_NilIsNoOp(t *testing.T) { + sf := flow.BeginStep(context.Background(), "test") + var err error + defer sf.OnEnd(&err) + + got := sf.Enrichf(nil, "context") + if got != nil { + t.Fatalf("expected Enrichf(nil, ...) to return nil, got %v", got) + } +} + +// ============================================================================= +// Phase validation tests +// ============================================================================= + +func TestMustBeValidPhaseName_Valid(t *testing.T) { + valid := []string{ + "a", + "a/b", + "a-b.c_d", + "A1/B2", + } + for _, name := range valid { + t.Run(name, func(t *testing.T) { + mustNotPanic(t, func() { _ = flow.BeginReconcile(context.Background(), name) }) + }) + } +} + +func TestMustBeValidPhaseName_Invalid(t *testing.T) { + invalid := []string{ + "", + "/a", + "a/", + "a//b", + "a b", + "a\tb", + "a:b", + } + for _, name := range invalid { + t.Run(strings.ReplaceAll(name, "\t", "\\t"), func(t *testing.T) { + mustPanic(t, func() { _ = flow.BeginReconcile(context.Background(), name) }) + }) + } +} + +func TestBeginReconcile_KVOddLengthPanics(t *testing.T) { + //nolint:staticcheck // testing panic for odd kv length + mustPanic(t, func() { _ = flow.BeginReconcile(context.Background(), "p", "k") }) +} + +func TestBeginEnsure_KVOddLengthPanics(t *testing.T) { + //nolint:staticcheck // testing panic for odd kv length + mustPanic(t, func() { _ = flow.BeginEnsure(context.Background(), "p", "k") }) +} + +func TestBeginStep_KVOddLengthPanics(t *testing.T) { + //nolint:staticcheck // testing panic for odd kv length + mustPanic(t, func() { _ = flow.BeginStep(context.Background(), "p", "k") }) +} + +// ============================================================================= +// End logging tests +// ============================================================================= + +func TestReconcileFlow_OnEnd_LogsFailAsError_OnceAndMarksLogged(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + rf := flow.BeginReconcile(ctx, "p") + + outcome := rf.Failf(errors.New("e"), "step") + rf.OnEnd(&outcome) + + // Should log exactly one Error-level "phase end" record (Fail*), with summary fields. 
+ var matches []observer.LoggedEntry + for _, e := range observed.All() { + if e.Message == "phase end" && e.Level == zapcore.ErrorLevel { + matches = append(matches, e) + } + } + if len(matches) != 1 { + t.Fatalf("expected exactly 1 error 'phase end' log entry, got %d; entries=%v", len(matches), observed.All()) + } + + m := matches[0].ContextMap() + if got := m["result"]; got != "fail" { + t.Fatalf("expected result=fail, got %v", got) + } + if got := m["hasError"]; got != true { + t.Fatalf("expected hasError=true, got %v", got) + } + if _, ok := m["duration"]; !ok { + t.Fatalf("expected duration to be present; got %v", m) + } +} + +func TestReconcileFlow_OnEnd_NestedPhases_DoNotDoubleLogSameError(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + parentRf := flow.BeginReconcile(ctx, "parent") + childRf := flow.BeginReconcile(parentRf.Ctx(), "child") + + outcome := childRf.Failf(errors.New("e"), "step") + childRf.OnEnd(&outcome) + parentRf.OnEnd(&outcome) + + // Only the first End should emit an Error-level "phase end" with error details. + count := 0 + for _, e := range observed.All() { + if e.Message == "phase end" && e.Level == zapcore.ErrorLevel { + count++ + } + } + if count != 1 { + t.Fatalf("expected exactly 1 error 'phase end' log entry, got %d; entries=%v", count, observed.All()) + } + + // Error chain should not be wrapped with phase context. + if outcome.Error() == nil { + t.Fatalf("expected error to be non-nil") + } + s := outcome.Error().Error() + if strings.Contains(s, "phase child") || strings.Contains(s, "phase parent") { + t.Fatalf("expected error to not contain phase wrappers; got %q", s) + } +} + +func TestEnsureFlow_OnEnd_LogsErrorAsError(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + ef := flow.BeginEnsure(ctx, "ensure-test") + + outcome := ef.Err(errors.New("e")) + ef.OnEnd(&outcome) + + var matches []observer.LoggedEntry + for _, e := range observed.All() { + if e.Message == "phase end" && e.Level == zapcore.ErrorLevel { + matches = append(matches, e) + } + } + if len(matches) != 1 { + t.Fatalf("expected exactly 1 error 'phase end' log entry, got %d; entries=%v", len(matches), observed.All()) + } + + m := matches[0].ContextMap() + if got := m["hasError"]; got != true { + t.Fatalf("expected hasError=true, got %v", got) + } +} + +func TestStepFlow_OnEnd_LogsErrorAsError(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + sf := flow.BeginStep(ctx, "step-test") + + err := errors.New("e") + sf.OnEnd(&err) + + var matches []observer.LoggedEntry + for _, e := range observed.All() { + if e.Message == "phase end" && e.Level == zapcore.ErrorLevel { + matches = append(matches, e) + } + } + if len(matches) != 1 { + t.Fatalf("expected exactly 1 error 'phase end' log entry, got %d; entries=%v", len(matches), observed.All()) + } + + m := matches[0].ContextMap() + if got := m["hasError"]; got != true { + t.Fatalf("expected hasError=true, got %v", got) + } +} + +func TestEnsureFlow_OnEnd_LogsChangeTrackingFields(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + ef := flow.BeginEnsure(ctx, "ensure-test") + + 
outcome := ef.Ok().ReportChanged().RequireOptimisticLock() + ef.OnEnd(&outcome) + + // Find V(1) "phase end" log (no error, so Debug level in zap for V(1).Info) + var matches []observer.LoggedEntry + for _, e := range observed.All() { + if e.Message == "phase end" && e.Level == zapcore.DebugLevel { + matches = append(matches, e) + } + } + if len(matches) != 1 { + t.Fatalf("expected exactly 1 debug 'phase end' log entry, got %d; entries=%v", len(matches), observed.All()) + } + + m := matches[0].ContextMap() + if got := m["changed"]; got != true { + t.Fatalf("expected changed=true, got %v", got) + } + if got := m["optimisticLock"]; got != true { + t.Fatalf("expected optimisticLock=true, got %v", got) + } + if got := m["hasError"]; got != false { + t.Fatalf("expected hasError=false, got %v", got) + } +} + +func TestReconcileFlow_OnEnd_NestedPhases_SecondOnEndLogsAtDebugLevel(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + parentRf := flow.BeginReconcile(ctx, "parent") + childRf := flow.BeginReconcile(parentRf.Ctx(), "child") + + outcome := childRf.Fail(errors.New("e")) + childRf.OnEnd(&outcome) + parentRf.OnEnd(&outcome) + + // Count error-level and debug-level "phase end" logs + // (V(1).Info logs at debug level in zap) + errorCount := 0 + debugCount := 0 + for _, e := range observed.All() { + if e.Message == "phase end" { + switch e.Level { + case zapcore.ErrorLevel: + errorCount++ + case zapcore.DebugLevel: + debugCount++ + } + } + } + + // First End logs at Error level, second End logs at Debug level (V(1).Info) + if errorCount != 1 { + t.Fatalf("expected exactly 1 error 'phase end' log entry, got %d", errorCount) + } + if debugCount != 1 { + t.Fatalf("expected exactly 1 debug 'phase end' log entry (for parent after error already logged), got %d", debugCount) + } +} + +func TestReconcileFlow_OnEnd_PanicIsLoggedAndReraised(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + + defer func() { + r := recover() + if r == nil { + t.Fatalf("expected panic to be re-raised") + } + if r != "test panic" { + t.Fatalf("expected panic value 'test panic', got %v", r) + } + + // Verify "phase panic" was logged + var matches []observer.LoggedEntry + for _, e := range observed.All() { + if e.Message == "phase panic" && e.Level == zapcore.ErrorLevel { + matches = append(matches, e) + } + } + if len(matches) != 1 { + t.Fatalf("expected exactly 1 'phase panic' log entry, got %d; entries=%v", len(matches), observed.All()) + } + }() + + rf := flow.BeginReconcile(ctx, "test") + var outcome flow.ReconcileOutcome + defer rf.OnEnd(&outcome) + + panic("test panic") +} + +func TestEnsureFlow_OnEnd_PanicIsLoggedAndReraised(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + + defer func() { + r := recover() + if r == nil { + t.Fatalf("expected panic to be re-raised") + } + + // Verify "phase panic" was logged + var matches []observer.LoggedEntry + for _, e := range observed.All() { + if e.Message == "phase panic" && e.Level == zapcore.ErrorLevel { + matches = append(matches, e) + } + } + if len(matches) != 1 { + t.Fatalf("expected exactly 1 'phase panic' log entry, got %d; entries=%v", len(matches), observed.All()) + } + }() + + ef := flow.BeginEnsure(ctx, "test") 
+ var outcome flow.EnsureOutcome + defer ef.OnEnd(&outcome) + + panic("test panic") +} + +func TestStepFlow_OnEnd_PanicIsLoggedAndReraised(t *testing.T) { + core, observed := observer.New(zapcore.DebugLevel) + zl := zap.New(core) + l := zapr.NewLogger(zl) + + ctx := log.IntoContext(context.Background(), l) + + defer func() { + r := recover() + if r == nil { + t.Fatalf("expected panic to be re-raised") + } + + // Verify "phase panic" was logged + var matches []observer.LoggedEntry + for _, e := range observed.All() { + if e.Message == "phase panic" && e.Level == zapcore.ErrorLevel { + matches = append(matches, e) + } + } + if len(matches) != 1 { + t.Fatalf("expected exactly 1 'phase panic' log entry, got %d; entries=%v", len(matches), observed.All()) + } + }() + + sf := flow.BeginStep(ctx, "test") + var err error + defer sf.OnEnd(&err) + + panic("test panic") +} diff --git a/lib/go/common/reconciliation/flow/merge_internal_test.go b/lib/go/common/reconciliation/flow/merge_internal_test.go new file mode 100644 index 000000000..be685f854 --- /dev/null +++ b/lib/go/common/reconciliation/flow/merge_internal_test.go @@ -0,0 +1,75 @@ +/* +Copyright 2026 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package flow + +import ( + "context" + "errors" + "testing" +) + +func TestReconcileOutcome_ErrWithoutResult_IsClassifiedAsInvalidKind(t *testing.T) { + kind, _ := reconcileOutcomeKind(&ReconcileOutcome{err: errors.New("e")}) + if kind != "invalid" { + t.Fatalf("expected kind=invalid, got %q", kind) + } +} + +func TestReconcileFlow_OnEnd_ErrWithoutResult_DoesNotPanic(_ *testing.T) { + rf := BeginReconcile(context.Background(), "p") + o := ReconcileOutcome{err: errors.New("e")} + rf.OnEnd(&o) +} + +func TestMergeReconciles_RequeueIsSupported(t *testing.T) { + rf := BeginRootReconcile(context.Background()) + outcome := MergeReconciles(rf.Requeue(), rf.Continue()) + + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + + res, err := outcome.ToCtrl() + if err != nil { + t.Fatalf("expected err to be nil, got %v", err) + } + if !res.Requeue { //nolint:staticcheck // testing deprecated Requeue field + t.Fatalf("expected Requeue to be true") + } +} + +func TestMergeReconciles_RequeueWinsOverRequeueAfter(t *testing.T) { + rf := BeginRootReconcile(context.Background()) + // Requeue() = delay 0, RequeueAfter(5) = delay 5. + // Minimum delay wins, so Requeue() wins. 
+ outcome := MergeReconciles(rf.Requeue(), rf.RequeueAfter(5)) + + if !outcome.ShouldReturn() { + t.Fatalf("expected ShouldReturn() == true") + } + + res, err := outcome.ToCtrl() + if err != nil { + t.Fatalf("expected err to be nil, got %v", err) + } + if !res.Requeue { //nolint:staticcheck // testing deprecated Requeue field + t.Fatalf("expected Requeue to be true (delay=0 wins)") + } + if res.RequeueAfter != 0 { + t.Fatalf("expected RequeueAfter to be 0 when Requeue is set, got %v", res.RequeueAfter) + } +} diff --git a/lib/go/common/strings/join.go b/lib/go/common/strings/join.go new file mode 100644 index 000000000..565ffcdf3 --- /dev/null +++ b/lib/go/common/strings/join.go @@ -0,0 +1,53 @@ +/* +Copyright 2025 Flant JSC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package strings + +import ( + "slices" + "strings" + + uiter "github.com/deckhouse/sds-common-lib/utils/iter" +) + +type GetNamer interface { + GetName() string +} + +func JoinNames[T GetNamer](items []T, sep string) string { + return strings.Join( + slices.Collect( + uiter.Map( + slices.Values(items), + func(item T) string { + return item.GetName() + }, + ), + ), + sep, + ) +} + +func JoinNonEmpty(sep string, elems ...string) string { + return strings.Join( + slices.Collect( + uiter.Filter( + slices.Values(elems), + func(s string) bool { return s != "" }), + ), + sep, + ) +} diff --git a/templates/agent/configmap.yaml b/templates/agent/configmap.yaml new file mode 100644 index 000000000..a29721daf --- /dev/null +++ b/templates/agent/configmap.yaml @@ -0,0 +1,22 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: agent-config + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "agent")) | nindent 2 }} +data: + slogh.cfg: | + # those are all keys with default values: + + # any slog level, or just a number + level=INFO + + # also supported: "text" + format=json + + # for each log print "source" property with information about callsite + callsite=false + + render=true + stringValues=true diff --git a/templates/agent/daemonset.yaml b/templates/agent/daemonset.yaml new file mode 100644 index 000000000..a370601d0 --- /dev/null +++ b/templates/agent/daemonset.yaml @@ -0,0 +1,142 @@ +{{- define "sds_utils_installer_resources" }} +cpu: 10m +memory: 25Mi +{{- end }} + +{{- define "sds_replicated_volume_agent_resources" }} +cpu: 50m +memory: 50Mi +{{- end }} + +{{- if (.Values.global.enabledModules | has "vertical-pod-autoscaler-crd") }} +--- +apiVersion: autoscaling.k8s.io/v1 +kind: VerticalPodAutoscaler +metadata: + name: agent + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "agent")) | nindent 2 }} +spec: + targetRef: + apiVersion: "apps/v1" + kind: DaemonSet + name: agent + updatePolicy: + updateMode: "Auto" + resourcePolicy: + containerPolicies: + - containerName: "agent" + minAllowed: + {{- include "sds_replicated_volume_agent_resources" . 
| nindent 8 }} + maxAllowed: + cpu: 200m + memory: 100Mi +{{- end }} + +{{- if not .Values.sdsReplicatedVolume.disableDs }} +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: agent + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "agent")) | nindent 2 }} +spec: + selector: + matchLabels: + app: agent + template: + metadata: + name: agent + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "agent")) | nindent 6 }} + spec: + {{- include "helm_lib_priority_class" (tuple . "cluster-medium") | nindent 6 }} + {{- include "helm_lib_tolerations" (tuple . "any-node" "storage-problems") | nindent 6 }} + affinity: {} + nodeSelector: + storage.deckhouse.io/sds-replicated-volume-node: "" + dnsPolicy: ClusterFirstWithHostNet + imagePullSecrets: + - name: {{ .Chart.Name }}-module-registry + serviceAccountName: agent + hostNetwork: true + # We need root privileges to perform drbd operations on the node. + securityContext: + runAsUser: 0 + runAsNonRoot: false + runAsGroup: 0 + # readOnlyRootFilesystem: true + seLinuxOptions: + level: s0 + type: spc_t + containers: + - name: agent + image: {{ include "helm_lib_module_image" (list . "agent") }} + imagePullPolicy: IfNotPresent + readinessProbe: + httpGet: + path: /readyz + host: 127.0.0.1 + port: 4269 + scheme: HTTP + initialDelaySeconds: 5 + failureThreshold: 2 + periodSeconds: 1 + livenessProbe: + httpGet: + path: /healthz + host: 127.0.0.1 + port: 4269 + scheme: HTTP + periodSeconds: 1 + failureThreshold: 3 + env: + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: SLOGH_CONFIG_PATH + value: "/etc/config/slogh.cfg" + securityContext: + privileged: true + readOnlyRootFilesystem: true + volumeMounts: + - mountPath: /dev/ + name: host-device-dir + - mountPath: /etc/config/ + name: config + - mountPath: /var/lib/sds-replicated-volume-agent.d/ + name: sds-replicated-volume-agent-d + - mountPath: /var/lib/drbd/ + name: var-lib-drbd + - mountPath: /var/run/drbd/ + name: var-run-drbd + - mountPath: /var/lock/ + name: var-lock + resources: + requests: + {{- include "helm_lib_module_ephemeral_storage_only_logs" . | nindent 14 }} +{{- if not ( .Values.global.enabledModules | has "vertical-pod-autoscaler-crd") }} + {{- include "sds_replicated_volume_agent_resources" . | nindent 14 }} +{{- end }} + volumes: + - hostPath: + path: /dev/ + type: "" + name: host-device-dir + - name: config + configMap: + name: agent-config + items: + - key: slogh.cfg + path: slogh.cfg + - name: sds-replicated-volume-agent-d + emptyDir: {} + - name: var-lib-drbd + emptyDir: {} + - name: var-run-drbd + emptyDir: {} + - name: var-lock + emptyDir: {} +{{- end }} \ No newline at end of file diff --git a/templates/agent/rbac-for-us.yaml b/templates/agent/rbac-for-us.yaml new file mode 100644 index 000000000..2254d4443 --- /dev/null +++ b/templates/agent/rbac-for-us.yaml @@ -0,0 +1,30 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: agent + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list .) | nindent 2 }} +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: d8:{{ .Chart.Name }}:sds-replicated-volume + {{- include "helm_lib_module_labels" (list .) | nindent 2 }} +rules: + - apiGroups: ["*"] + resources: ["*"] + verbs: ["*"] +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: d8:{{ .Chart.Name }}:sds-replicated-volume + {{- include "helm_lib_module_labels" (list .) 
| nindent 2 }} +subjects: + - kind: ServiceAccount + name: agent + namespace: d8-{{ .Chart.Name }} +roleRef: + kind: ClusterRole + name: d8:{{ .Chart.Name }}:sds-replicated-volume + apiGroup: rbac.authorization.k8s.io diff --git a/templates/controller/configmap.yaml b/templates/controller/configmap.yaml new file mode 100644 index 000000000..844f62e2d --- /dev/null +++ b/templates/controller/configmap.yaml @@ -0,0 +1,24 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: controller-config + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "controller")) | nindent 2 }} +data: + drbdMinPort: "{{ $.Values.sdsReplicatedVolume.drbdPortRange.minPort }}" + drbdMaxPort: "{{ $.Values.sdsReplicatedVolume.drbdPortRange.maxPort }}" + slogh.cfg: | + # those are all keys with default values: + + # any slog level, or just a number + level=INFO + + # also supported: "text" + format=json + + # for each log print "source" property with information about callsite + callsite=true + + render=true + stringValues=true diff --git a/templates/controller/deployment.yaml b/templates/controller/deployment.yaml new file mode 100644 index 000000000..1ab2cdc41 --- /dev/null +++ b/templates/controller/deployment.yaml @@ -0,0 +1,131 @@ +{{- define "sds_drbd_controller_resources" }} +cpu: 10m +memory: 25Mi +{{- end }} + +{{- if (.Values.global.enabledModules | has "vertical-pod-autoscaler-crd") }} +--- +apiVersion: autoscaling.k8s.io/v1 +kind: VerticalPodAutoscaler +metadata: + name: controller + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "controller")) | nindent 2 }} +spec: + targetRef: + apiVersion: "apps/v1" + kind: Deployment + name: controller + updatePolicy: + updateMode: "Auto" + resourcePolicy: + containerPolicies: + - containerName: "controller" + minAllowed: + {{- include "sds_drbd_controller_resources" . | nindent 8 }} + maxAllowed: + cpu: 200m + memory: 100Mi +{{- end }} +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: controller + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "controller" )) | nindent 2 }} +spec: + minAvailable: {{ include "helm_lib_is_ha_to_value" (list . 1 0) }} + selector: + matchLabels: + app: controller +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: controller + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "controller")) | nindent 2 }} +spec: + revisionHistoryLimit: 2 + {{- include "helm_lib_deployment_strategy_and_replicas_for_ha" . | nindent 2 }} + selector: + matchLabels: + app: controller + template: + metadata: + labels: + app: controller + spec: + {{- include "helm_lib_priority_class" (tuple . "cluster-medium") | nindent 6 }} + {{- include "helm_lib_node_selector" (tuple . "system") | nindent 6 }} + {{- include "helm_lib_tolerations" (tuple . "system") | nindent 6 }} + {{- include "helm_lib_module_pod_security_context_run_as_user_nobody" . | nindent 6 }} + {{- include "helm_lib_pod_anti_affinity_for_ha" (list . (dict "app" "controller")) | nindent 6 }} + imagePullSecrets: + - name: {{ .Chart.Name }}-module-registry + serviceAccountName: controller + containers: + - name: controller + image: {{ include "helm_lib_module_image" (list . 
"controller") }} + imagePullPolicy: IfNotPresent + readinessProbe: + httpGet: + path: /readyz + port: 4271 + scheme: HTTP + initialDelaySeconds: 5 + failureThreshold: 3 + periodSeconds: 1 + livenessProbe: + httpGet: + path: /healthz + port: 4271 + scheme: HTTP + periodSeconds: 1 + failureThreshold: 3 + resources: + requests: + {{- include "helm_lib_module_ephemeral_storage_only_logs" . | nindent 14 }} +{{- if not ( .Values.global.enabledModules | has "vertical-pod-autoscaler-crd") }} + {{- include "sds_drbd_controller_resources" . | nindent 14 }} +{{- end }} + securityContext: + privileged: true + readOnlyRootFilesystem: true + seLinuxOptions: + level: s0 + type: spc_t + env: + - name: SLOGH_CONFIG_PATH + value: "/etc/config/slogh.cfg" + - name: SCHEDULER_EXTENDER_URL + value: "https://sds-common-scheduler-extender.d8-sds-node-configurator.svc:8099/api/v1/volumes/filter-prioritize" + volumeMounts: + - name: host-device-dir + mountPath: /dev/ + - name: host-sys-dir + mountPath: /sys/ + - name: host-root + mountPath: /host-root/ + mountPropagation: HostToContainer + - mountPath: /etc/config/ + name: config + volumes: + - name: host-device-dir + hostPath: + path: /dev + type: "" + - name: host-sys-dir + hostPath: + path: /sys/ + type: Directory + - name: host-root + hostPath: + path: / + - name: config + configMap: + name: controller-config + items: + - key: slogh.cfg + path: slogh.cfg diff --git a/templates/controller/rbac-for-us.yaml b/templates/controller/rbac-for-us.yaml new file mode 100644 index 000000000..b8085a58d --- /dev/null +++ b/templates/controller/rbac-for-us.yaml @@ -0,0 +1,35 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: controller + namespace: d8-{{ .Chart.Name }} + {{- include "helm_lib_module_labels" (list . (dict "app" "sds-replicated-volume-controller")) | nindent 2 }} + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: d8:{{ .Chart.Name }}:controller + {{- include "helm_lib_module_labels" (list . (dict "app" "sds-replicated-volume-controller")) | nindent 2 }} +rules: + - apiGroups: ["*"] + resources: ["*"] + verbs: ["*"] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: d8:{{ .Chart.Name }}:controller + {{- include "helm_lib_module_labels" (list . 
(dict "app" "sds-replicated-volume-controller")) | nindent 2 }} +subjects: + - kind: ServiceAccount + name: controller + namespace: d8-{{ .Chart.Name }} +roleRef: + kind: ClusterRole + name: d8:{{ .Chart.Name }}:controller + apiGroup: rbac.authorization.k8s.io + + diff --git a/templates/csi/controller.yaml b/templates/csi-driver/controller.yaml similarity index 51% rename from templates/csi/controller.yaml rename to templates/csi-driver/controller.yaml index 6504daa94..5f515fa1c 100644 --- a/templates/csi/controller.yaml +++ b/templates/csi-driver/controller.yaml @@ -1,11 +1,3 @@ -### -### common -### - -{{- define "additional_pull_secrets" }} -- name: {{ .Chart.Name }}-module-registry -{{- end }} - ### ### controller ### @@ -13,9 +5,10 @@ {{- define "csi_controller_args" }} - --csi-endpoint=unix://$(ADDRESS) -- --node=$(KUBE_NODE_NAME) -- --linstor-endpoint=$(LS_CONTROLLERS) -- --log-level=info +{{- end }} + +{{- define "additional_pull_secrets" }} +- name: {{ .Chart.Name }}-module-registry {{- end }} {{- define "csi_controller_envs" }} @@ -26,57 +19,52 @@ fieldRef: apiVersion: v1 fieldPath: spec.nodeName -- name: LS_CONTROLLERS - value: https://linstor.d8-sds-replicated-volume.svc:3371 -- name: LS_ROOT_CA - valueFrom: - secretKeyRef: - key: ca.crt - name: linstor-client-https-cert -- name: LS_USER_CERTIFICATE - valueFrom: - secretKeyRef: - key: tls.crt - name: linstor-client-https-cert -- name: LS_USER_KEY - valueFrom: - secretKeyRef: - key: tls.key - name: linstor-client-https-cert +- name: LOG_LEVEL +{{- if eq .Values.sdsReplicatedVolume.logLevel "ERROR" }} + value: "0" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "WARN" }} + value: "1" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "INFO" }} + value: "2" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "DEBUG" }} + value: "3" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "TRACE" }} + value: "4" +{{- end }} {{- include "helm_lib_envs_for_proxy" . }} {{- end }} -{{- define "csi_controller_init_containers" }} -- command: - - /linstor-wait-until - - api-online - env: - - name: LS_CONTROLLERS - value: https://linstor.d8-sds-replicated-volume.svc:3371 - - name: LS_ROOT_CA - valueFrom: - secretKeyRef: - key: ca.crt - name: linstor-client-https-cert - - name: LS_USER_CERTIFICATE - valueFrom: - secretKeyRef: - key: tls.crt - name: linstor-client-https-cert - - name: LS_USER_KEY - valueFrom: - secretKeyRef: - key: tls.key - name: linstor-client-https-cert - image: {{ include "helm_lib_module_image" (list . "linstorWaitUntil") }} - imagePullPolicy: IfNotPresent - name: linstor-wait-api-online - securityContext: - readOnlyRootFilesystem: true - allowPrivilegeEscalation: false - seccompProfile: - type: RuntimeDefault -{{- end }} +# {{- define "csi_controller_init_containers" }} +# - command: +# - /linstor-wait-until +# - api-online +# env: +# - name: LS_CONTROLLERS +# value: https://linstor.d8-sds-replicated-volume.svc:3371 +# - name: LS_ROOT_CA +# valueFrom: +# secretKeyRef: +# key: ca.crt +# name: linstor-client-https-cert +# - name: LS_USER_CERTIFICATE +# valueFrom: +# secretKeyRef: +# key: tls.crt +# name: linstor-client-https-cert +# - name: LS_USER_KEY +# valueFrom: +# secretKeyRef: +# key: tls.key +# name: linstor-client-https-cert +# image: {{ include "helm_lib_module_image" (list . 
"linstorWaitUntil") }} +# imagePullPolicy: IfNotPresent +# name: linstor-wait-api-online +# securityContext: +# readOnlyRootFilesystem: true +# allowPrivilegeEscalation: false +# seccompProfile: +# type: RuntimeDefault +# {{- end }} {{- define "csi_additional_controller_volumes" }} {{- end }} @@ -88,17 +76,17 @@ storage.deckhouse.io/sds-replicated-volume-node: "" {{- end }} -{{- $csiControllerImage := include "helm_lib_module_image" (list . "linstorCsi") }} +{{- $csiControllerImage := include "helm_lib_module_image" (list . "csiDriver") }} {{- $csiControllerConfig := dict }} {{- $_ := set $csiControllerConfig "controllerImage" $csiControllerImage }} -{{- $_ := set $csiControllerConfig "snapshotterEnabled" true }} +{{- $_ := set $csiControllerConfig "snapshotterEnabled" false }} {{- $_ := set $csiControllerConfig "resizerEnabled" true }} {{- $_ := set $csiControllerConfig "csiControllerHaMode" true }} {{- $_ := set $csiControllerConfig "csiControllerHostNetwork" "false" }} {{- $_ := set $csiControllerConfig "provisionerTimeout" "1200s" }} -{{- $_ := set $csiControllerConfig "snapshotterTimeout" "1200s" }} +# {{- $_ := set $csiControllerConfig "snapshotterTimeout" "1200s" }} {{- $_ := set $csiControllerConfig "extraCreateMetadataEnabled" true }} {{- $_ := set $csiControllerConfig "livenessProbePort" 4261 }} {{- $_ := set $csiControllerConfig "additionalControllerArgs" (include "csi_controller_args" . | fromYamlArray) }} @@ -106,7 +94,6 @@ storage.deckhouse.io/sds-replicated-volume-node: "" {{- $_ := set $csiControllerConfig "additionalControllerVolumes" (include "csi_additional_controller_volumes" . | fromYamlArray) }} {{- $_ := set $csiControllerConfig "additionalControllerVolumeMounts" (include "csi_additional_controller_volume_mounts" . | fromYamlArray) }} {{- $_ := set $csiControllerConfig "initContainers" (include "csi_controller_init_containers" . | fromYamlArray) }} -{{- $_ := set $csiControllerConfig "additionalPullSecrets" (include "additional_pull_secrets" . | fromYamlArray) }} {{- include "helm_lib_csi_controller_manifests" (list . 
$csiControllerConfig) }} @@ -115,81 +102,72 @@ storage.deckhouse.io/sds-replicated-volume-node: "" ### {{- define "csi_node_args" }} -- --csi-endpoint=unix://$(CSI_ENDPOINT) -- --node=$(KUBE_NODE_NAME) -- --linstor-endpoint=$(LS_CONTROLLERS) -- --log-level=info +- --csi-endpoint=unix://$(CSI_ADDRESS) {{- end }} {{- define "csi_node_envs" }} -- name: CSI_ENDPOINT +- name: CSI_ADDRESS value: /csi/csi.sock - name: DRIVER_REG_SOCK_PATH value: /var/lib/kubelet/plugins/replicated.csi.storage.deckhouse.io/csi.sock - name: KUBE_NODE_NAME valueFrom: fieldRef: - apiVersion: v1 fieldPath: spec.nodeName -- name: LS_CONTROLLERS - value: https://linstor.d8-sds-replicated-volume.svc:3371 -- name: LS_ROOT_CA - valueFrom: - secretKeyRef: - key: ca.crt - name: linstor-client-https-cert -- name: LS_USER_CERTIFICATE - valueFrom: - secretKeyRef: - key: tls.crt - name: linstor-client-https-cert -- name: LS_USER_KEY - valueFrom: - secretKeyRef: - key: tls.key - name: linstor-client-https-cert +- name: LOG_LEVEL +{{- if eq .Values.sdsReplicatedVolume.logLevel "ERROR" }} + value: "0" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "WARN" }} + value: "1" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "INFO" }} + value: "2" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "DEBUG" }} + value: "3" +{{- else if eq .Values.sdsReplicatedVolume.logLevel "TRACE" }} + value: "4" +{{- end }} {{- end }} {{- define "csi_additional_node_selector_terms" }} {{- end }} -{{- define "csi_node_init_containers" }} -- command: - - /linstor-wait-until - - satellite-online - - $(KUBE_NODE_NAME) - env: - - name: KUBE_NODE_NAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: spec.nodeName - - name: LS_CONTROLLERS - value: https://linstor.d8-{{ .Chart.Name }}.svc:3371 - - name: LS_ROOT_CA - valueFrom: - secretKeyRef: - key: ca.crt - name: linstor-client-https-cert - - name: LS_USER_CERTIFICATE - valueFrom: - secretKeyRef: - key: tls.crt - name: linstor-client-https-cert - - name: LS_USER_KEY - valueFrom: - secretKeyRef: - key: tls.key - name: linstor-client-https-cert - image: {{ include "helm_lib_module_image" (list . "linstorWaitUntil") }} - imagePullPolicy: IfNotPresent - name: linstor-wait-node-online - securityContext: - readOnlyRootFilesystem: true - allowPrivilegeEscalation: false - seccompProfile: - type: RuntimeDefault -{{- end }} +# {{- define "csi_node_init_containers" }} +# - command: +# - /linstor-wait-until +# - satellite-online +# - $(KUBE_NODE_NAME) +# env: +# - name: KUBE_NODE_NAME +# valueFrom: +# fieldRef: +# apiVersion: v1 +# fieldPath: spec.nodeName +# - name: LS_CONTROLLERS +# value: https://linstor.d8-{{ .Chart.Name }}.svc:3371 +# - name: LS_ROOT_CA +# valueFrom: +# secretKeyRef: +# key: ca.crt +# name: linstor-client-https-cert +# - name: LS_USER_CERTIFICATE +# valueFrom: +# secretKeyRef: +# key: tls.crt +# name: linstor-client-https-cert +# - name: LS_USER_KEY +# valueFrom: +# secretKeyRef: +# key: tls.key +# name: linstor-client-https-cert +# image: {{ include "helm_lib_module_image" (list . "linstorWaitUntil") }} +# imagePullPolicy: IfNotPresent +# name: linstor-wait-node-online +# securityContext: +# readOnlyRootFilesystem: true +# allowPrivilegeEscalation: false +# seccompProfile: +# type: RuntimeDefault +# {{- end }} {{- define "csi_additional_node_volumes" }} @@ -198,7 +176,7 @@ storage.deckhouse.io/sds-replicated-volume-node: "" {{- define "csi_additional_node_volume_mounts" }} {{- end }} -{{- $csiNodeImage := include "helm_lib_module_image" (list . 
"linstorCsi") }} +{{- $csiNodeImage := include "helm_lib_module_image" (list . "csiDriver") }} {{- $csiNodeConfig := dict }} {{- $_ := set $csiNodeConfig "nodeImage" $csiNodeImage }} @@ -213,6 +191,5 @@ storage.deckhouse.io/sds-replicated-volume-node: "" {{- $_ := set $csiNodeConfig "additionalNodeVolumeMounts" (include "csi_additional_node_volume_mounts" . | fromYamlArray) }} {{- $_ := set $csiNodeConfig "customNodeSelector" (include "csi_custom_node_selector" . | fromYaml) }} {{- $_ := set $csiNodeConfig "forceCsiNodeAndStaticNodesDepoloy" true }} -{{- $_ := set $csiNodeConfig "additionalPullSecrets" (include "additional_pull_secrets" . | fromYamlArray) }} {{- include "helm_lib_csi_node_manifests" (list . $csiNodeConfig) }} diff --git a/templates/csi/csidriver.yaml b/templates/csi-driver/csidriver.yaml similarity index 86% rename from templates/csi/csidriver.yaml rename to templates/csi-driver/csidriver.yaml index 96e33d972..9befeee81 100644 --- a/templates/csi/csidriver.yaml +++ b/templates/csi-driver/csidriver.yaml @@ -7,8 +7,8 @@ spec: attachRequired: true fsGroupPolicy: ReadWriteOnceWithFSType podInfoOnMount: true - requiresRepublish: false + requiresRepublish: true seLinuxMount: true - storageCapacity: true + storageCapacity: false volumeLifecycleModes: - Persistent diff --git a/templates/csi-driver/rbac-for-us.yaml b/templates/csi-driver/rbac-for-us.yaml new file mode 100644 index 000000000..c9f7db843 --- /dev/null +++ b/templates/csi-driver/rbac-for-us.yaml @@ -0,0 +1,97 @@ +{{- include "helm_lib_csi_controller_rbac" . }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: d8:{{ .Chart.Name }}:storagepool-reader + {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} +rules: + - apiGroups: ["storage.deckhouse.io"] + resources: ["replicatedstoragepools", "lvmvolumegroups"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: d8:{{ .Chart.Name }}:storagepool-reader-binding + {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} +subjects: + - kind: ServiceAccount + name: csi + namespace: d8-{{ .Chart.Name }} +roleRef: + kind: ClusterRole + name: d8:{{ .Chart.Name }}:storagepool-reader + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: d8:{{ .Chart.Name }}:rsc-watcher + {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} +rules: + - apiGroups: ["storage.deckhouse.io"] + resources: ["replicatedstorageclasses"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: d8:{{ .Chart.Name }}:rsc-read-access + {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} +subjects: + - kind: ServiceAccount + name: csi + namespace: d8-{{ .Chart.Name }} +roleRef: + kind: ClusterRole + name: d8:{{ .Chart.Name }}:rsc-watcher + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: d8:{{ .Chart.Name }}:replicatedvolume-manager + {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} +rules: + - apiGroups: ["storage.deckhouse.io"] + resources: ["replicatedvolumes"] + verbs: ["create", "get", "update", "delete", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: d8:{{ .Chart.Name }}:replicatedvolume-manager-binding + {{- include "helm_lib_module_labels" (list . 
) | nindent 2 }} +subjects: + - kind: ServiceAccount + name: csi + namespace: d8-{{ .Chart.Name }} +roleRef: + kind: ClusterRole + name: d8:{{ .Chart.Name }}:replicatedvolume-manager + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: d8:{{ .Chart.Name }}:replicatedvolumereplica-reader + {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} +rules: + - apiGroups: ["storage.deckhouse.io"] + resources: ["replicatedvolumereplicas"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: d8:{{ .Chart.Name }}:replicatedvolumereplica-reader-binding + {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} +subjects: + - kind: ServiceAccount + name: csi + namespace: d8-{{ .Chart.Name }} +roleRef: + kind: ClusterRole + name: d8:{{ .Chart.Name }}:replicatedvolumereplica-reader + apiGroup: rbac.authorization.k8s.io diff --git a/templates/csi/rbac-for-us.yaml b/templates/csi/rbac-for-us.yaml deleted file mode 100644 index e1b3e1fb6..000000000 --- a/templates/csi/rbac-for-us.yaml +++ /dev/null @@ -1,49 +0,0 @@ -{{- include "helm_lib_csi_controller_rbac" . }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: d8:{{ .Chart.Name }}:storagepool-reader - {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} -rules: - - apiGroups: ["storage.deckhouse.io"] - resources: ["replicatedstoragepools", "lvmvolumegroups"] - verbs: ["get", "list", "watch"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: d8:{{ .Chart.Name }}:storagepool-reader-binding - {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} -subjects: - - kind: ServiceAccount - name: csi - namespace: d8-sds-replicated-volume -roleRef: - kind: ClusterRole - name: d8:{{ .Chart.Name }}:storagepool-reader - apiGroup: rbac.authorization.k8s.io ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: d8:{{ .Chart.Name }}:rsc-watcher - {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} -rules: - - apiGroups: ["storage.deckhouse.io"] - resources: ["replicatedstorageclasses"] - verbs: ["get", "list", "watch"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: d8:{{ .Chart.Name }}:rsc-read-access - {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} -subjects: - - kind: ServiceAccount - name: csi - namespace: d8-{{ .Chart.Name }} -roleRef: - kind: ClusterRole - name: d8:{{ .Chart.Name }}:rsc-watcher - apiGroup: rbac.authorization.k8s.io diff --git a/templates/csi/volume-snapshot-class.yaml b/templates/csi/volume-snapshot-class.yaml deleted file mode 100644 index 116e382ca..000000000 --- a/templates/csi/volume-snapshot-class.yaml +++ /dev/null @@ -1,10 +0,0 @@ -{{- if (.Values.global.enabledModules | has "snapshot-controller") }} ---- -apiVersion: snapshot.storage.k8s.io/v1beta1 -kind: VolumeSnapshotClass -metadata: - {{- include "helm_lib_module_labels" (list . 
(dict "app" "linstor-csi-controller")) | nindent 2 }} - name: {{ .Chart.Name }} -driver: replicated.csi.storage.deckhouse.io -deletionPolicy: Delete -{{- end }} diff --git a/templates/linstor-scheduler-extender/deployment.yaml b/templates/linstor-scheduler-extender/deployment.yaml deleted file mode 100644 index 81afae3a4..000000000 --- a/templates/linstor-scheduler-extender/deployment.yaml +++ /dev/null @@ -1,117 +0,0 @@ -# Source https://github.com/kvaps/linstor-scheduler-extender/blob/master/deploy/all.yaml -{{- define "kube_scheduler_resources" }} -cpu: 10m -memory: 30Mi -{{- end }} - -{{- define "linstor_scheduler_extender_resources" }} -cpu: 10m -memory: 25Mi -{{- end }} - -{{- if (.Values.global.enabledModules | has "vertical-pod-autoscaler-crd") }} ---- -apiVersion: autoscaling.k8s.io/v1 -kind: VerticalPodAutoscaler -metadata: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler-extender")) | nindent 2 }} -spec: - targetRef: - apiVersion: "apps/v1" - kind: Deployment - name: linstor-scheduler-extender - updatePolicy: - updateMode: "Auto" - resourcePolicy: - containerPolicies: - - containerName: linstor-scheduler-extender - minAllowed: - {{- include "linstor_scheduler_extender_resources" . | nindent 8 }} - maxAllowed: - memory: 40Mi - cpu: 20m -{{- end }} ---- -apiVersion: policy/v1 -kind: PodDisruptionBudget -metadata: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler-extender" )) | nindent 2 }} -spec: - minAvailable: {{ include "helm_lib_is_ha_to_value" (list . 1 0) }} - selector: - matchLabels: - app: linstor-scheduler-extender ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler" )) | nindent 2 }} -spec: - {{- include "helm_lib_deployment_strategy_and_replicas_for_ha" . | nindent 2 }} - revisionHistoryLimit: 2 - selector: - matchLabels: - app: linstor-scheduler-extender - template: - metadata: - labels: - app: linstor-scheduler-extender - spec: - {{- include "helm_lib_priority_class" (tuple . "system-cluster-critical") | nindent 6 }} - {{- include "helm_lib_node_selector" (tuple . "system") | nindent 6 }} - {{- include "helm_lib_tolerations" (tuple . "system") | nindent 6 }} - {{- include "helm_lib_module_pod_security_context_run_as_user_nobody" . | nindent 6 }} - {{- include "helm_lib_pod_anti_affinity_for_ha" (list . (dict "app" "linstor-scheduler-extender")) | nindent 6 }} - imagePullSecrets: - - name: {{ .Chart.Name }}-module-registry - containers: - - name: linstor-scheduler-extender - {{- include "helm_lib_module_container_security_context_pss_restricted_flexible" (dict "ro" true "seccompProfile" true) | nindent 10 }} - image: {{ include "helm_lib_module_image" (list . 
"linstorSchedulerExtender") }} - imagePullPolicy: IfNotPresent - args: - - --verbose=true - env: - - name: LS_CONTROLLERS - value: https://linstor.d8-{{ .Chart.Name }}.svc:3371 - - name: LS_USER_CERTIFICATE - valueFrom: - secretKeyRef: - name: linstor-client-https-cert - key: tls.crt - - name: LS_USER_KEY - valueFrom: - secretKeyRef: - name: linstor-client-https-cert - key: tls.key - - name: LS_ROOT_CA - valueFrom: - secretKeyRef: - name: linstor-client-https-cert - key: ca.crt - volumeMounts: - - name: scheduler-extender-certs - mountPath: /etc/sds-replicated-volume-scheduler-extender/certs - readOnly: true - resources: - requests: - {{- include "helm_lib_module_ephemeral_storage_only_logs" . | nindent 14 }} - {{- if not ( .Values.global.enabledModules | has "vertical-pod-autoscaler-crd") }} - {{- include "linstor_scheduler_extender_resources" . | nindent 14 }} - {{- end }} - ports: - - containerPort: 8099 - protocol: TCP - name: scheduler - - volumes: - - name: scheduler-extender-certs - secret: - secretName: linstor-scheduler-extender-https-certs - serviceAccountName: linstor-scheduler-extender diff --git a/templates/linstor-scheduler-extender/kube-scheduler-webhook-configuration.yaml b/templates/linstor-scheduler-extender/kube-scheduler-webhook-configuration.yaml deleted file mode 100644 index 6977799e7..000000000 --- a/templates/linstor-scheduler-extender/kube-scheduler-webhook-configuration.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: deckhouse.io/v1alpha1 -kind: KubeSchedulerWebhookConfiguration -metadata: - name: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . ) | nindent 2 }} -webhooks: -- weight: 5 - failurePolicy: Ignore - clientConfig: - service: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - port: 8099 - path: / - caBundle: {{ .Values.sdsReplicatedVolume.internal.customSchedulerExtenderCert.ca | b64enc }} - timeoutSeconds: 5 diff --git a/templates/linstor-scheduler-extender/rbac-for-us.yaml b/templates/linstor-scheduler-extender/rbac-for-us.yaml deleted file mode 100644 index 59ea31f46..000000000 --- a/templates/linstor-scheduler-extender/rbac-for-us.yaml +++ /dev/null @@ -1,76 +0,0 @@ ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler-extender")) | nindent 2 }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: d8:{{ .Chart.Name }}:linstor-scheduler-extender-kube-scheduler - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler-extender")) | nindent 2 }} -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:kube-scheduler -subjects: - - kind: ServiceAccount - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: d8:{{ .Chart.Name }}:linstor-scheduler-extender-volume-scheduler - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler-extender")) | nindent 2 }} -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:volume-scheduler -subjects: - - kind: ServiceAccount - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . 
(dict "app" "linstor-scheduler-extender")) | nindent 2 }} -rules: - - apiGroups: ["coordination.k8s.io"] - resources: ["leases"] - verbs: ["create", "get", "update"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler-extender")) | nindent 2 }} -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: linstor-scheduler-extender -subjects: - - kind: ServiceAccount - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: d8:{{ .Chart.Name }}:linstor-scheduler-extender:extension-apiserver-authentication-reader - namespace: kube-system - {{- include "helm_lib_module_labels" (list . (dict "app" "linstor-scheduler-extender" )) | nindent 2 }} -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: extension-apiserver-authentication-reader -subjects: - - kind: ServiceAccount - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} diff --git a/templates/linstor-scheduler-extender/secret.yaml b/templates/linstor-scheduler-extender/secret.yaml deleted file mode 100644 index fd6ce929b..000000000 --- a/templates/linstor-scheduler-extender/secret.yaml +++ /dev/null @@ -1,12 +0,0 @@ ---- -apiVersion: v1 -kind: Secret -metadata: - name: linstor-scheduler-extender-https-certs - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . (dict "app" "sds-replicated-volume-scheduler-extender")) | nindent 2 }} -type: kubernetes.io/tls -data: - ca.crt: {{ .Values.sdsReplicatedVolume.internal.customSchedulerExtenderCert.ca | b64enc }} - tls.crt: {{ .Values.sdsReplicatedVolume.internal.customSchedulerExtenderCert.crt | b64enc }} - tls.key: {{ .Values.sdsReplicatedVolume.internal.customSchedulerExtenderCert.key | b64enc }} \ No newline at end of file diff --git a/templates/linstor-scheduler-extender/service.yaml b/templates/linstor-scheduler-extender/service.yaml deleted file mode 100644 index 1ef6d34f9..000000000 --- a/templates/linstor-scheduler-extender/service.yaml +++ /dev/null @@ -1,16 +0,0 @@ ---- -apiVersion: v1 -kind: Service -metadata: - name: linstor-scheduler-extender - namespace: d8-{{ .Chart.Name }} - {{- include "helm_lib_module_labels" (list . (dict "app" "sds-replicated-volume-scheduler-extender" )) | nindent 2 }} -spec: - type: ClusterIP - ports: - - port: 8099 - targetPort: scheduler - protocol: TCP - name: http - selector: - app: linstor-scheduler-extender