diff --git a/.gitignore b/.gitignore index ada68ff..f080086 100644 --- a/.gitignore +++ b/.gitignore @@ -25,3 +25,5 @@ go.work *.swp *.swo *~ + +prod-manifests/ diff --git a/.golangci.yml b/.golangci.yml index 6b29746..8931405 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -1,47 +1,113 @@ +--- run: + # Timeout for analysis. timeout: 5m - allow-parallel-runners: true + + # Modules download mode (do not modify go.mod) + module-download-mode: readonly + + # Include test files (see below to exclude certain linters) + tests: true issues: - # don't skip warning about doc comments - # don't exclude the default set of lint - exclude-use-default: false - # restore some of the defaults - # (fill in the rest as needed) exclude-rules: - - path: "api/*" - linters: - - lll - - path: "internal/*" + # Exclude certain linters for test code + - path: "_test\\.go" linters: + - bodyclose - dupl - - lll -linters: - disable-all: true - enable: - - dupl - - errcheck - - copyloopvar - - ginkgolinter - - goconst - - gocyclo - - gofmt - - goimports - - gosimple - - govet - - ineffassign - - lll - - misspell - - nakedret - - prealloc - - revive - - staticcheck - - typecheck - - unconvert - - unparam - - unused + - dogsled + - funlen + +output: + formats: colored-line-number + print-issued-lines: true + print-linter-name: true linters-settings: - revive: + depguard: rules: - - name: comment-spacings + main: + # Packages that are not allowed where the value is a suggestion. + deny: + - pkg: "github.com/pkg/errors" + desc: Should be replaced by standard lib errors package + cyclop: + # The maximal code complexity to report. + max-complexity: 15 + skip-tests: true + funlen: + lines: 100 + nestif: + min-complexity: 6 + forbidigo: + forbid: + - http\.NotFound.* # return RFC 7807 problem details instead + - http\.Error.* # return RFC 7807 problem details instead + gomoddirectives: + replace-allow-list: + - github.com/abbot/go-http-auth # https://github.com/traefik/traefik/issues/6873#issuecomment-637654361 + +linters: + disable-all: true + enable: + # enabled by default by golangci-lint + - errcheck # checking for unchecked errors, these unchecked errors can be critical bugs in some cases + - gosimple # specializes in simplifying a code + - govet # reports suspicious constructs, such as Printf calls whose arguments do not align with the format string + - ineffassign # detects when assignments to existing variables are not used + - staticcheck # is a go vet on steroids, applying a ton of static analysis checks + - typecheck # like the front-end of a Go compiler, parses and type-checks Go code + - unused # checks for unused constants, variables, functions and types + # extra enabled by us + - asasalint # checks for pass []any as any in variadic func(...any) + - asciicheck # checks that your code does not contain non-ASCII identifiers + - bidichk # checks for dangerous unicode character sequences + - bodyclose # checks whether HTTP response body is closed successfully + - cyclop # checks function and package cyclomatic complexity + - dupl # tool for code clone detection + - durationcheck # checks for two durations multiplied together + - dogsled # find assignments/declarations with too many blank identifiers + - errname # checks that sentinel errors are prefixed with the Err and error types are suffixed with the Error + - errorlint # finds code that will cause problems with the error wrapping scheme introduced in Go 1.13 + - exhaustive # checks exhaustiveness of enum switch statements + - exptostd # detects functions from 
golang.org/x/exp/ that can be replaced by std functions + - copyloopvar # checks for pointers to enclosing loop variables + - fatcontext # detects nested contexts in loops and function literals + - forbidigo # forbids identifiers + - funlen # tool for detection of long functions + - gocheckcompilerdirectives # validates go compiler directive comments (//go:) + - goconst # finds repeated strings that could be replaced by a constant + - gocritic # provides diagnostics that check for bugs, performance and style issues + - gofmt # checks if the code is formatted according to 'gofmt' command + - goimports # in addition to fixing imports, goimports also formats your code in the same style as gofmt + - gomoddirectives # manages the use of 'replace', 'retract', and 'excludes' directives in go.mod + - gomodguard # allow and block lists linter for direct Go module dependencies. This is different from depguard where there are different block types for example version constraints and module recommendations + - goprintffuncname # checks that printf-like functions are named with f at the end + - gosec # inspects source code for security problems + - loggercheck # checks key value pairs for common logger libraries (kitlog,klog,logr,zap) + - makezero # finds slice declarations with non-zero initial length + - mirror # reports wrong mirror patterns of bytes/strings usage + - misspell # finds commonly misspelled English words + - nakedret # finds naked returns in functions greater than a specified function length + - nestif # reports deeply nested if statements + - nilerr # finds the code that returns nil even if it checks that the error is not nil + - nolintlint # reports ill-formed or insufficient nolint directives + - nosprintfhostport # checks for misuse of Sprintf to construct a host with port in a URL + - perfsprint # Golang linter for performance, aiming at usages of fmt.Sprintf which have faster alternatives + - predeclared # finds code that shadows one of Go's predeclared identifiers + - promlinter # checks Prometheus metrics naming via promlint + - reassign # checks that package variables are not reassigned + - revive # fast, configurable, extensible, flexible, and beautiful linter for Go, drop-in replacement of golint + - rowserrcheck # checks whether Err of rows is checked successfully + - sqlclosecheck # checks that sql.Rows and sql.Stmt are closed + - sloglint # A Go linter that ensures consistent code style when using log/slog + - tagliatelle # checks the struct tags. 
+ - testableexamples # checks if examples are testable (have an expected output) + - tparallel # detects inappropriate usage of t.Parallel() method in your Go test codes + - usetesting # detects using os.Setenv instead of t.Setenv since Go1.17 + - unconvert # removes unnecessary type conversions + - unparam # reports unused function parameters + - usestdlibvars # detects the possibility to use variables/constants from the Go standard library + - wastedassign # finds wasted assignment statements + fast: false \ No newline at end of file diff --git a/Dockerfile b/Dockerfile index 348b837..d3339f5 100644 --- a/Dockerfile +++ b/Dockerfile @@ -7,6 +7,9 @@ WORKDIR /workspace # Copy the Go Modules manifests COPY go.mod go.mod COPY go.sum go.sum + +COPY --from=repos ./smooth-operator /smooth-operator + # cache deps before building and copying source so that we don't need to re-download as much # and so that source changes don't invalidate our downloaded layer RUN go mod download diff --git a/PROJECT b/PROJECT index c53aa6d..c400b20 100644 --- a/PROJECT +++ b/PROJECT @@ -16,6 +16,11 @@ resources: kind: WMS path: github.com/pdok/mapserver-operator/api/v3 version: v3 + webhooks: + conversion: true + spoke: + - v2beta1 + webhookVersion: v1 - api: crdVersion: v1 namespaced: true @@ -24,4 +29,23 @@ resources: kind: WFS path: github.com/pdok/mapserver-operator/api/v3 version: v3 + webhooks: + conversion: true + spoke: + - v2beta1 + webhookVersion: v1 +- api: + crdVersion: v1 + namespaced: true + domain: pdok.nl + kind: WFS + path: github.com/pdok/mapserver-operator/api/v2beta1 + version: v2beta1 +- api: + crdVersion: v1 + namespaced: true + domain: pdok.nl + kind: WMS + path: github.com/pdok/mapserver-operator/api/v2beta1 + version: v2beta1 version: "3" diff --git a/api/v2beta1/groupversion_info.go b/api/v2beta1/groupversion_info.go new file mode 100644 index 0000000..7033d98 --- /dev/null +++ b/api/v2beta1/groupversion_info.go @@ -0,0 +1,44 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +// Package v2beta1 contains API Schema definitions for the v2beta1 API group. +// +kubebuilder:object:generate=true +// +groupName=pdok.nl +package v2beta1 + +import ( + "k8s.io/apimachinery/pkg/runtime/schema" + "sigs.k8s.io/controller-runtime/pkg/scheme" +) + +var ( + // GroupVersion is group version used to register these objects. 
+ GroupVersion = schema.GroupVersion{Group: "pdok.nl", Version: "v2beta1"} + + // SchemeBuilder is used to add go types to the GroupVersionKind scheme. + SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion} + + // AddToScheme adds the types in this group-version to the given scheme. + AddToScheme = SchemeBuilder.AddToScheme +) diff --git a/api/v2beta1/shared_conversion.go b/api/v2beta1/shared_conversion.go new file mode 100644 index 0000000..789c075 --- /dev/null +++ b/api/v2beta1/shared_conversion.go @@ -0,0 +1,91 @@ +package v2beta1 + +import ( + pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" + autoscalingv2 "k8s.io/api/autoscaling/v2beta1" + corev1 "k8s.io/api/core/v1" +) + +func Pointer[T interface{}](val T) *T { + return &val +} + +func PointerValWithDefault[T interface{}](ptr *T, defaultValue T) T { + if ptr == nil { + return defaultValue + } + + return *ptr +} + +func ConverseAutoscaling(src Autoscaling) *autoscalingv2.HorizontalPodAutoscalerSpec { + var minReplicas *int32 + if src.MinReplicas != nil { + minReplicas = Pointer(int32(*src.MinReplicas)) + } + + var maxReplicas int32 + if src.MaxReplicas != nil { + maxReplicas = int32(*src.MaxReplicas) + } + + metrics := make([]autoscalingv2.MetricSpec, 0) + if src.AverageCPUUtilization != nil { + metrics = append(metrics, autoscalingv2.MetricSpec{ + Type: autoscalingv2.ResourceMetricSourceType, + Resource: &autoscalingv2.ResourceMetricSource{ + Name: corev1.ResourceCPU, + TargetAverageUtilization: Pointer(int32(*src.AverageCPUUtilization)), + }, + }) + } + + return &autoscalingv2.HorizontalPodAutoscalerSpec{ + MinReplicas: minReplicas, + MaxReplicas: maxReplicas, + Metrics: metrics, + } +} + +func ConverseResources(src corev1.ResourceRequirements) *corev1.PodSpec { + return &corev1.PodSpec{ + Containers: []corev1.Container{ + { + Resources: src, + }, + }, + } +} + +func ConverseColumnAndAliasesV2ToColumnsWithAliasV3(columns []string, aliases map[string]string) []pdoknlv3.Columns { + v3Columns := make([]pdoknlv3.Columns, 0) + for _, column := range columns { + col := pdoknlv3.Columns{ + Name: column, + } + + // TODO - multiple aliases per column possible? + if alias, ok := aliases[column]; ok { + col.Alias = &alias + } + + v3Columns = append(v3Columns, col) + } + + return v3Columns +} + +func ConverseColumnsWithAliasV3ToColumnsAndAliasesV2(columns []pdoknlv3.Columns) ([]string, map[string]string) { + v2Columns := make([]string, 0) + v2Aliases := make(map[string]string) + + for _, col := range columns { + v2Columns = append(v2Columns, col.Name) + + if col.Alias != nil { + v2Aliases[col.Name] = *col.Alias + } + } + + return v2Columns, v2Aliases +} diff --git a/api/v2beta1/shared_types.go b/api/v2beta1/shared_types.go new file mode 100644 index 0000000..3a5a9e6 --- /dev/null +++ b/api/v2beta1/shared_types.go @@ -0,0 +1,141 @@ +package v2beta1 + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// Status - The status for custom resources managed by the operator-sdk. 
+type Status struct { + Conditions []Condition `json:"conditions,omitempty"` + Deployment *string `json:"deployment,omitempty"` + Resources []Resources `json:"resources,omitempty"` +} + +// Condition - the condition for the ansible operator +// https://github.com/operator-framework/operator-sdk/blob/master/internal/ansible/controller/status/types.go#L101 +type Condition struct { + Type ConditionType `json:"type"` + Status ConditionStatus `json:"status"` + LastTransitionTime metav1.Time `json:"lastTransitionTime"` + AnsibleResult *ResultAnsible `json:"ansibleResult,omitempty"` + Reason string `json:"reason"` + Message string `json:"message"` +} + +// ConditionType specifies a string for field ConditionType +type ConditionType string + +// ConditionStatus specifies a string for field ConditionType +type ConditionStatus string + +// This const specifies allowed fields for Status +const ( + ConditionTrue ConditionStatus = "True" + ConditionFalse ConditionStatus = "False" + ConditionUnknown ConditionStatus = "Unknown" +) + +// ResultAnsible - encapsulation of the ansible result. 'AnsibleResult' is turned around in struct to comply with linting +type ResultAnsible struct { + Ok int `json:"ok"` + Changed int `json:"changed"` + Skipped int `json:"skipped"` + Failures int `json:"failures"` + TimeOfCompletion string `json:"completion"` +} + +// Resources is the struct for the resources field within status +type Resources struct { + APIVersion *string `json:"apiversion,omitempty"` + Kind *string `json:"kind,omitempty"` + Name *string `json:"name,omitempty"` +} + +// General is the struct with all generic fields for the crds +type General struct { + Dataset string `json:"dataset"` + Theme *string `json:"theme,omitempty"` + DatasetOwner string `json:"datasetOwner"` + ServiceVersion *string `json:"serviceVersion,omitempty"` + DataVersion *string `json:"dataVersion,omitempty"` +} + +// Kubernetes is the struct with all fields that can be defined in kubernetes fields in the crds +type Kubernetes struct { + Autoscaling *Autoscaling `json:"autoscaling,omitempty"` + HealthCheck *HealthCheck `json:"healthCheck,omitempty"` + Resources *corev1.ResourceRequirements `json:"resources,omitempty"` + Lifecycle *Lifecycle `json:"lifecycle,omitempty"` +} + +// Autoscaling is the struct with all fields to configure autoscalers for the crs +type Autoscaling struct { + AverageCPUUtilization *int `json:"averageCpuUtilization,omitempty"` + MinReplicas *int `json:"minReplicas,omitempty"` + MaxReplicas *int `json:"maxReplicas,omitempty"` +} + +// HealthCheck is the struct with all fields to configure healthchecks for the crs +type HealthCheck struct { + Querystring *string `json:"querystring,omitempty"` + Mimetype *string `json:"mimetype,omitempty"` + Boundingbox *string `json:"boundingbox,omitempty"` +} + +// Lifecycle is the struct with the fields to configure lifecycle settings for the resources +type Lifecycle struct { + TTLInDays *int `json:"ttlInDays,omitempty"` +} + +// WMSWFSOptions is the struct with options available in the operator +type WMSWFSOptions struct { + IncludeIngress bool `json:"includeIngress"` + AutomaticCasing bool `json:"automaticCasing"` + ValidateRequests *bool `json:"validateRequests,omitempty"` + RewriteGroupToDataLayers *bool `json:"rewriteGroupToDataLayers,omitempty"` + DisableWebserviceProxy *bool `json:"disableWebserviceProxy,omitempty"` + PrefetchData *bool `json:"prefetchData,omitempty"` + ValidateChildStyleNameEqual *bool `json:"validateChildStyleNameEqual,omitempty"` +} + +// Authority is 
a struct for the authority fields in WMS and WFS crds +type Authority struct { + Name string `json:"name"` + URL string `json:"url"` +} + +// Data is a struct for the data field for a WMSLayer or WFS FeatureType +type Data struct { + GPKG *GPKG `json:"gpkg,omitempty"` + Postgis *Postgis `json:"postgis,omitempty"` + Tif *Tif `json:"tif,omitempty"` +} + +// GPKG is a struct for the gpkg field for a WMSLayer or WFS FeatureType +type GPKG struct { + BlobKey string `json:"blobKey"` + Table string `json:"table"` + GeometryType string `json:"geometryType"` + Columns []string `json:"columns"` + // In a new version Aliases should become part of Columns + Aliases map[string]string `json:"aliases,omitempty"` +} + +// Postgis is a struct for the Postgis db config for a WMSLayer or WFS FeatureType +// connection details are passed through the environment +type Postgis struct { + Table string `json:"table"` + GeometryType string `json:"geometryType"` + Columns []string `json:"columns"` + // In a new version Aliases should become part of Columns + Aliases map[string]string `json:"aliases,omitempty"` +} + +// Tif is a struct for the Tif field for a WMSLayer +type Tif struct { + BlobKey string `json:"blobKey"` + GetFeatureInfoIncludesClass *bool `json:"getFeatureInfoIncludesClass,omitempty"` + Offsite *string `json:"offsite,omitempty"` + Resample *string `json:"resample,omitempty"` +} diff --git a/api/v2beta1/wfs_conversion.go b/api/v2beta1/wfs_conversion.go new file mode 100644 index 0000000..6a4ee5b --- /dev/null +++ b/api/v2beta1/wfs_conversion.go @@ -0,0 +1,309 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v2beta1 + +import ( + sharedModel "github.com/pdok/smooth-operator/model" + "log" + + "sigs.k8s.io/controller-runtime/pkg/conversion" + + pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" +) + +// ConvertTo converts this WFS (v2beta1) to the Hub version (v3). 
+func (src *WFS) ConvertTo(dstRaw conversion.Hub) error { + dst := dstRaw.(*pdoknlv3.WFS) + log.Printf("ConvertTo: Converting WFS from Spoke version v2beta1 to Hub version v3;"+ + "source: %s/%s, target: %s/%s", src.Namespace, src.Name, dst.Namespace, dst.Name) + + dst.ObjectMeta = src.ObjectMeta + + // Set LifeCycle if defined + if src.Spec.Kubernetes.Lifecycle != nil && src.Spec.Kubernetes.Lifecycle.TTLInDays != nil { + dst.Spec.Lifecycle = &sharedModel.Lifecycle{ + TTLInDays: Pointer(int32(*src.Spec.Kubernetes.Lifecycle.TTLInDays)), + } + } + + if src.Spec.Kubernetes.Autoscaling != nil { + dst.Spec.HorizontalPodAutoscalerPatch = ConverseAutoscaling(*src.Spec.Kubernetes.Autoscaling) + } + + // TODO converse src.Spec.Kubernetes.HealthCheck when we know what the implementation in v3 will be + if src.Spec.Kubernetes.Resources != nil { + dst.Spec.PodSpecPatch = ConverseResources(*src.Spec.Kubernetes.Resources) + } + + dst.Spec.Options = &pdoknlv3.Options{ + AutomaticCasing: src.Spec.Options.AutomaticCasing, + PrefetchData: PointerValWithDefault(src.Spec.Options.PrefetchData, false), + IncludeIngress: src.Spec.Options.IncludeIngress, + } + + service := pdoknlv3.Service{ + Prefix: "", + BaseURL: "https://service.pdok.nl", + OwnerInfoRef: "pdok", + Title: src.Spec.Service.Title, + Abstract: src.Spec.Service.Abstract, + Keywords: src.Spec.Service.Keywords, + Fees: nil, + AccessConstraints: src.Spec.Service.AccessConstraints, + DefaultCrs: src.Spec.Service.DataEPSG, + OtherCrs: []string{}, + CountDefault: src.Spec.Service.Maxfeatures, + FeatureTypes: make([]pdoknlv3.FeatureType, 0), + } + + if src.Spec.Service.Mapfile != nil { + service.Mapfile = &pdoknlv3.Mapfile{ + ConfigMapKeyRef: src.Spec.Service.Mapfile.ConfigMapKeyRef, + } + } + + if src.Spec.Service.Extent != nil && *src.Spec.Service.Extent != "" { + service.Bbox = &pdoknlv3.Bbox{ + DefaultCRS: sharedModel.ExtentToBBox(*src.Spec.Service.Extent), + } + } else { + service.Bbox = &pdoknlv3.Bbox{ + DefaultCRS: sharedModel.BBox{ + MinX: "-25000", + MaxX: "280000", + MinY: "250000", + MaxY: "860000", + }, + } + } + + // TODO - where to place the MetadataIdentifier and FeatureTypes[0].SourceMetadataIdentifier if the service is not inspire? 
+    if src.Spec.Service.Inspire {
+        service.Inspire = &pdoknlv3.Inspire{
+            ServiceMetadataURL: pdoknlv3.MetadataURL{
+                CSW: &pdoknlv3.Metadata{
+                    MetadataIdentifier: src.Spec.Service.MetadataIdentifier,
+                },
+            },
+            SpatialDatasetIdentifier: src.Spec.Service.FeatureTypes[0].SourceMetadataIdentifier,
+            Language: "nl",
+        }
+    }
+
+    for _, featureType := range src.Spec.Service.FeatureTypes {
+        service.FeatureTypes = append(service.FeatureTypes, convertV2FeatureTypeToV3(featureType))
+    }
+
+    dst.Spec.Service = service
+
+    return nil
+}
+
+func convertV2FeatureTypeToV3(src FeatureType) pdoknlv3.FeatureType {
+    featureTypeV3 := pdoknlv3.FeatureType{
+        Name: src.Name,
+        Title: src.Title,
+        Abstract: src.Abstract,
+        Keywords: src.Keywords,
+        DatasetMetadataURL: pdoknlv3.MetadataURL{
+            CSW: &pdoknlv3.Metadata{
+                MetadataIdentifier: src.DatasetMetadataIdentifier,
+            },
+        },
+        Data: pdoknlv3.Data{},
+    }
+
+    if src.Extent != nil {
+        featureTypeV3.Bbox = &pdoknlv3.FeatureBbox{
+            DefaultCRS: sharedModel.ExtentToBBox(*src.Extent),
+        }
+    }
+
+    if src.Data.GPKG != nil {
+        featureTypeV3.Data.Gpkg = &pdoknlv3.Gpkg{
+            BlobKey: src.Data.GPKG.BlobKey,
+            TableName: src.Data.GPKG.Table,
+            GeometryType: src.Data.GPKG.GeometryType,
+            Columns: ConverseColumnAndAliasesV2ToColumnsWithAliasV3(
+                src.Data.GPKG.Columns,
+                src.Data.GPKG.Aliases,
+            ),
+        }
+    }
+
+    if src.Data.Postgis != nil {
+        featureTypeV3.Data.Postgis = &pdoknlv3.Postgis{
+            TableName: src.Data.Postgis.Table,
+            GeometryType: src.Data.Postgis.GeometryType,
+            Columns: ConverseColumnAndAliasesV2ToColumnsWithAliasV3(
+                src.Data.Postgis.Columns,
+                src.Data.Postgis.Aliases,
+            ),
+        }
+    }
+
+    return featureTypeV3
+}
+
+// ConvertFrom converts the Hub version (v3) to this WFS (v2beta1).
+//
+//nolint:revive
+func (dst *WFS) ConvertFrom(srcRaw conversion.Hub) error {
+    src := srcRaw.(*pdoknlv3.WFS)
+    log.Printf("ConvertFrom: Converting WFS from Hub version v3 to Spoke version v2beta1;"+
+        "source: %s/%s, target: %s/%s", src.Namespace, src.Name, dst.Namespace, dst.Name)
+
+    dst.ObjectMeta = src.ObjectMeta
+
+    dst.Spec.General = General{
+        Dataset: src.ObjectMeta.Labels["dataset"],
+        DatasetOwner: src.ObjectMeta.Labels["dataset-owner"],
+        DataVersion: nil,
+    }
+
+    if serviceVersion, ok := src.ObjectMeta.Labels["service-version"]; ok {
+        dst.Spec.General.ServiceVersion = &serviceVersion
+    }
+
+    if theme, ok := src.ObjectMeta.Labels["theme"]; ok {
+        dst.Spec.General.Theme = &theme
+    }
+
+    dst.Spec.Kubernetes = Kubernetes{}
+
+    if src.Spec.Lifecycle != nil && src.Spec.Lifecycle.TTLInDays != nil {
+        dst.Spec.Kubernetes.Lifecycle = &Lifecycle{
+            TTLInDays: Pointer(int(*src.Spec.Lifecycle.TTLInDays)),
+        }
+    }
+
+    // TODO - healthcheck
+    if src.Spec.PodSpecPatch != nil {
+        dst.Spec.Kubernetes.Resources = &src.Spec.PodSpecPatch.Containers[0].Resources
+    }
+
+    if src.Spec.HorizontalPodAutoscalerPatch != nil {
+        dst.Spec.Kubernetes.Autoscaling = &Autoscaling{
+            MaxReplicas: Pointer(int(src.Spec.HorizontalPodAutoscalerPatch.MaxReplicas)),
+        }
+
+        if src.Spec.HorizontalPodAutoscalerPatch.MinReplicas != nil {
+            dst.Spec.Kubernetes.Autoscaling.MinReplicas = Pointer(int(*src.Spec.HorizontalPodAutoscalerPatch.MinReplicas))
+        }
+
+        if src.Spec.HorizontalPodAutoscalerPatch.Metrics != nil {
+            dst.Spec.Kubernetes.Autoscaling.AverageCPUUtilization = Pointer(
+                int(*src.Spec.HorizontalPodAutoscalerPatch.Metrics[0].Resource.TargetAverageUtilization),
+            )
+        }
+    }
+
+    if src.Spec.Options != nil {
+        dst.Spec.Options = WMSWFSOptions{
+            AutomaticCasing: src.Spec.Options.AutomaticCasing,
+
PrefetchData: &src.Spec.Options.PrefetchData, + IncludeIngress: src.Spec.Options.IncludeIngress, + } + } + + service := WFSService{ + Title: src.Spec.Service.Title, + Abstract: src.Spec.Service.Abstract, + Keywords: src.Spec.Service.Keywords, + AccessConstraints: src.Spec.Service.AccessConstraints, + DataEPSG: src.Spec.Service.DefaultCrs, + Maxfeatures: src.Spec.Service.CountDefault, + Authority: Authority{ + Name: "", + URL: "", + }, + } + + if src.Spec.Service.Bbox != nil { + service.Extent = Pointer(src.Spec.Service.Bbox.DefaultCRS.ToExtent()) + } else { + service.Extent = Pointer("-25000 250000 280000 860000") + } + + if src.Spec.Service.Mapfile != nil { + service.Mapfile = &Mapfile{ + ConfigMapKeyRef: src.Spec.Service.Mapfile.ConfigMapKeyRef, + } + } + + if src.Spec.Service.Inspire != nil { + service.Inspire = true + service.MetadataIdentifier = src.Spec.Service.Inspire.ServiceMetadataURL.CSW.MetadataIdentifier + } else { + service.Inspire = false + } + + for _, featureType := range src.Spec.Service.FeatureTypes { + featureTypeV2 := FeatureType{ + Name: featureType.Name, + Title: featureType.Title, + Abstract: featureType.Abstract, + Keywords: featureType.Keywords, + DatasetMetadataIdentifier: featureType.DatasetMetadataURL.CSW.MetadataIdentifier, + SourceMetadataIdentifier: "", + Data: Data{}, + } + + if src.Spec.Service.Inspire != nil { + featureTypeV2.SourceMetadataIdentifier = src.Spec.Service.Inspire.SpatialDatasetIdentifier + } + + if featureType.Bbox != nil { + featureTypeV2.Extent = Pointer(featureType.Bbox.DefaultCRS.ToExtent()) + } + + if featureType.Data.Gpkg != nil { + columns, aliases := ConverseColumnsWithAliasV3ToColumnsAndAliasesV2(featureType.Data.Gpkg.Columns) + featureTypeV2.Data.GPKG = &GPKG{ + BlobKey: featureType.Data.Gpkg.BlobKey, + Table: featureType.Data.Gpkg.TableName, + GeometryType: featureType.Data.Gpkg.GeometryType, + Columns: columns, + Aliases: aliases, + } + } + + if featureType.Data.Postgis != nil { + columns, aliases := ConverseColumnsWithAliasV3ToColumnsAndAliasesV2(featureType.Data.Postgis.Columns) + featureTypeV2.Data.Postgis = &Postgis{ + Table: featureType.Data.Postgis.TableName, + GeometryType: featureType.Data.Postgis.GeometryType, + Columns: columns, + Aliases: aliases, + } + } + + service.FeatureTypes = append(service.FeatureTypes, featureTypeV2) + } + + dst.Spec.Service = service + + return nil +} diff --git a/api/v2beta1/wfs_types.go b/api/v2beta1/wfs_types.go new file mode 100644 index 0000000..0063843 --- /dev/null +++ b/api/v2beta1/wfs_types.go @@ -0,0 +1,94 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v2beta1 + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN! +// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized. + +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status + +// WFS is the Schema for the wfs API. +type WFS struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec WFSSpec `json:"spec,omitempty"` + Status *Status `json:"status,omitempty"` +} + +// +kubebuilder:object:root=true + +// WFSList contains a list of WFS. +type WFSList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []WFS `json:"items"` +} + +// WFSSpec is the struct for all fields defined in the WFS CRD +type WFSSpec struct { + General General `json:"general"` + Service WFSService `json:"service"` + Kubernetes Kubernetes `json:"kubernetes"` + Options WMSWFSOptions `json:"options"` +} + +// WFSService is the struct with all service specific options +type WFSService struct { + Title string `json:"title"` + Inspire bool `json:"inspire"` + Abstract string `json:"abstract"` + AccessConstraints string `json:"accessConstraints"` + Keywords []string `json:"keywords"` + MetadataIdentifier string `json:"metadataIdentifier"` + Authority Authority `json:"authority"` + Extent *string `json:"extent,omitempty"` + Maxfeatures *string `json:"maxfeatures,omitempty"` + //nolint:tagliatelle + DataEPSG string `json:"dataEPSG"` + FeatureTypes []FeatureType `json:"featureTypes"` + Mapfile *Mapfile `json:"mapfile,omitempty"` +} + +// FeatureType is the struct for all feature type level fields +type FeatureType struct { + Name string `json:"name"` + Title string `json:"title"` + Abstract string `json:"abstract"` + Keywords []string `json:"keywords"` + DatasetMetadataIdentifier string `json:"datasetMetadataIdentifier"` + SourceMetadataIdentifier string `json:"sourceMetadataIdentifier"` + Extent *string `json:"extent,omitempty"` + Data Data `json:"data"` +} + +func init() { + SchemeBuilder.Register(&WFS{}, &WFSList{}) +} diff --git a/api/v2beta1/wms_conversion.go b/api/v2beta1/wms_conversion.go new file mode 100644 index 0000000..9698344 --- /dev/null +++ b/api/v2beta1/wms_conversion.go @@ -0,0 +1,55 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v2beta1 + +import ( + "log" + + "sigs.k8s.io/controller-runtime/pkg/conversion" + + pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" +) + +// ConvertTo converts this WMS (v2beta1) to the Hub version (v3). +func (src *WMS) ConvertTo(dstRaw conversion.Hub) error { + dst := dstRaw.(*pdoknlv3.WMS) + log.Printf("ConvertTo: Converting WMS from Spoke version v2beta1 to Hub version v3;"+ + "source: %s/%s, target: %s/%s", src.Namespace, src.Name, dst.Namespace, dst.Name) + + // TODO(user): Implement conversion logic from v2beta1 to v3 + return nil +} + +// ConvertFrom converts the Hub version (v3) to this WMS (v2beta1). +// +//nolint:revive +func (dst *WMS) ConvertFrom(srcRaw conversion.Hub) error { + src := srcRaw.(*pdoknlv3.WMS) + log.Printf("ConvertFrom: Converting WMS from Hub version v3 to Spoke version v2beta1;"+ + "source: %s/%s, target: %s/%s", src.Namespace, src.Name, dst.Namespace, dst.Name) + + // TODO(user): Implement conversion logic from v3 to v2beta1 + return nil +} diff --git a/api/v2beta1/wms_types.go b/api/v2beta1/wms_types.go new file mode 100644 index 0000000..c44f9d8 --- /dev/null +++ b/api/v2beta1/wms_types.go @@ -0,0 +1,136 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v2beta1 + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN! +// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized. + +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status + +// WMS is the Schema for the wms API. 
+type WMS struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec WMSSpec `json:"spec,omitempty"` + Status *Status `json:"status,omitempty"` +} + +// WMSSpec is the struct for all fields defined in the WMS CRD +type WMSSpec struct { + General General `json:"general"` + Service WMSService `json:"service"` + Options WMSWFSOptions `json:"options"` + Kubernetes Kubernetes `json:"kubernetes"` +} + +// WMSService is the struct for all service level fields +type WMSService struct { + Inspire bool `json:"inspire"` + Title string `json:"title"` + Abstract string `json:"abstract"` + AccessConstraints string `json:"accessConstraints"` + Keywords []string `json:"keywords"` + MetadataIdentifier string `json:"metadataIdentifier"` + Authority Authority `json:"authority"` + Layers []WMSLayer `json:"layers"` + //nolint:tagliatelle + DataEPSG string `json:"dataEPSG"` + Extent *string `json:"extent,omitempty"` + Maxsize *string `json:"maxSize,omitempty"` + Resolution *int `json:"resolution,omitempty"` + DefResolution *int `json:"defResolution,omitempty"` + StylingAssets *StylingAssets `json:"stylingAssets,omitempty"` + Mapfile *Mapfile `json:"mapfile,omitempty"` +} + +// WMSLayer is the struct for all layer level fields +type WMSLayer struct { + Name string `json:"name"` + Group *string `json:"group,omitempty"` + Visible bool `json:"visible"` + Title *string `json:"title,omitempty"` + Abstract *string `json:"abstract,omitempty"` + Keywords []string `json:"keywords,omitempty"` + DatasetMetadataIdentifier *string `json:"datasetMetadataIdentifier,omitempty"` + SourceMetadataIdentifier *string `json:"sourceMetadataIdentifier,omitempty"` + Styles []Style `json:"styles"` + Extent *string `json:"extent,omitempty"` + MinScale *string `json:"minScale,omitempty"` + MaxScale *string `json:"maxScale,omitempty"` + LabelNoClip bool `json:"labelNoClip,omitempty"` + Data *Data `json:"data,omitempty"` +} + +// Style is the struct for all style level fields +type Style struct { + Name string `json:"name"` + Title *string `json:"title,omitempty"` + Abstract *string `json:"abstract,omitempty"` + Visualization *string `json:"visualization,omitempty"` + LegendFile *LegendFile `json:"legendfile,omitempty"` +} + +// LegendFile is the struct containing the location of the legendfile +type LegendFile struct { + BlobKey string `json:"blobKey"` +} + +// StylingAssets is the struct containing the location of styling assets +type StylingAssets struct { + ConfigMapRefs []ConfigMapRef `json:"configMapRefs,omitempty"` + BlobKeys []string `json:"blobKeys"` +} + +// ConfigMapRef contains all the config map name and all keys in that configmap that are relevant +// the Keys can be empty, so that the v1 WMS can convert to the v2beta1 WMS +type ConfigMapRef struct { + Name string `json:"name"` + Keys []string `json:"keys,omitempty"` +} + +// Mapfile contains the ConfigMapKeyRef containing a mapfile +type Mapfile struct { + ConfigMapKeyRef corev1.ConfigMapKeySelector `json:"configMapKeyRef"` +} + +// +kubebuilder:object:root=true + +// WMSList contains a list of WMS. 
+type WMSList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []WMS `json:"items"` +} + +func init() { + SchemeBuilder.Register(&WMS{}, &WMSList{}) +} diff --git a/api/v2beta1/zz_generated.deepcopy.go b/api/v2beta1/zz_generated.deepcopy.go new file mode 100644 index 0000000..8991593 --- /dev/null +++ b/api/v2beta1/zz_generated.deepcopy.go @@ -0,0 +1,924 @@ +//go:build !ignore_autogenerated + +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +// Code generated by controller-gen. DO NOT EDIT. + +package v2beta1 + +import ( + "k8s.io/api/core/v1" + runtime "k8s.io/apimachinery/pkg/runtime" +) + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Authority) DeepCopyInto(out *Authority) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Authority. +func (in *Authority) DeepCopy() *Authority { + if in == nil { + return nil + } + out := new(Authority) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Autoscaling) DeepCopyInto(out *Autoscaling) { + *out = *in + if in.AverageCPUUtilization != nil { + in, out := &in.AverageCPUUtilization, &out.AverageCPUUtilization + *out = new(int) + **out = **in + } + if in.MinReplicas != nil { + in, out := &in.MinReplicas, &out.MinReplicas + *out = new(int) + **out = **in + } + if in.MaxReplicas != nil { + in, out := &in.MaxReplicas, &out.MaxReplicas + *out = new(int) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Autoscaling. +func (in *Autoscaling) DeepCopy() *Autoscaling { + if in == nil { + return nil + } + out := new(Autoscaling) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Condition) DeepCopyInto(out *Condition) { + *out = *in + in.LastTransitionTime.DeepCopyInto(&out.LastTransitionTime) + if in.AnsibleResult != nil { + in, out := &in.AnsibleResult, &out.AnsibleResult + *out = new(ResultAnsible) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Condition. 
+func (in *Condition) DeepCopy() *Condition { + if in == nil { + return nil + } + out := new(Condition) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ConfigMapRef) DeepCopyInto(out *ConfigMapRef) { + *out = *in + if in.Keys != nil { + in, out := &in.Keys, &out.Keys + *out = make([]string, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ConfigMapRef. +func (in *ConfigMapRef) DeepCopy() *ConfigMapRef { + if in == nil { + return nil + } + out := new(ConfigMapRef) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Data) DeepCopyInto(out *Data) { + *out = *in + if in.GPKG != nil { + in, out := &in.GPKG, &out.GPKG + *out = new(GPKG) + (*in).DeepCopyInto(*out) + } + if in.Postgis != nil { + in, out := &in.Postgis, &out.Postgis + *out = new(Postgis) + (*in).DeepCopyInto(*out) + } + if in.Tif != nil { + in, out := &in.Tif, &out.Tif + *out = new(Tif) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Data. +func (in *Data) DeepCopy() *Data { + if in == nil { + return nil + } + out := new(Data) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *FeatureType) DeepCopyInto(out *FeatureType) { + *out = *in + if in.Keywords != nil { + in, out := &in.Keywords, &out.Keywords + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.Extent != nil { + in, out := &in.Extent, &out.Extent + *out = new(string) + **out = **in + } + in.Data.DeepCopyInto(&out.Data) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FeatureType. +func (in *FeatureType) DeepCopy() *FeatureType { + if in == nil { + return nil + } + out := new(FeatureType) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *GPKG) DeepCopyInto(out *GPKG) { + *out = *in + if in.Columns != nil { + in, out := &in.Columns, &out.Columns + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.Aliases != nil { + in, out := &in.Aliases, &out.Aliases + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GPKG. +func (in *GPKG) DeepCopy() *GPKG { + if in == nil { + return nil + } + out := new(GPKG) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *General) DeepCopyInto(out *General) { + *out = *in + if in.Theme != nil { + in, out := &in.Theme, &out.Theme + *out = new(string) + **out = **in + } + if in.ServiceVersion != nil { + in, out := &in.ServiceVersion, &out.ServiceVersion + *out = new(string) + **out = **in + } + if in.DataVersion != nil { + in, out := &in.DataVersion, &out.DataVersion + *out = new(string) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new General. 
+func (in *General) DeepCopy() *General { + if in == nil { + return nil + } + out := new(General) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *HealthCheck) DeepCopyInto(out *HealthCheck) { + *out = *in + if in.Querystring != nil { + in, out := &in.Querystring, &out.Querystring + *out = new(string) + **out = **in + } + if in.Mimetype != nil { + in, out := &in.Mimetype, &out.Mimetype + *out = new(string) + **out = **in + } + if in.Boundingbox != nil { + in, out := &in.Boundingbox, &out.Boundingbox + *out = new(string) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HealthCheck. +func (in *HealthCheck) DeepCopy() *HealthCheck { + if in == nil { + return nil + } + out := new(HealthCheck) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Kubernetes) DeepCopyInto(out *Kubernetes) { + *out = *in + if in.Autoscaling != nil { + in, out := &in.Autoscaling, &out.Autoscaling + *out = new(Autoscaling) + (*in).DeepCopyInto(*out) + } + if in.HealthCheck != nil { + in, out := &in.HealthCheck, &out.HealthCheck + *out = new(HealthCheck) + (*in).DeepCopyInto(*out) + } + if in.Resources != nil { + in, out := &in.Resources, &out.Resources + *out = new(v1.ResourceRequirements) + (*in).DeepCopyInto(*out) + } + if in.Lifecycle != nil { + in, out := &in.Lifecycle, &out.Lifecycle + *out = new(Lifecycle) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Kubernetes. +func (in *Kubernetes) DeepCopy() *Kubernetes { + if in == nil { + return nil + } + out := new(Kubernetes) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *LegendFile) DeepCopyInto(out *LegendFile) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LegendFile. +func (in *LegendFile) DeepCopy() *LegendFile { + if in == nil { + return nil + } + out := new(LegendFile) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Lifecycle) DeepCopyInto(out *Lifecycle) { + *out = *in + if in.TTLInDays != nil { + in, out := &in.TTLInDays, &out.TTLInDays + *out = new(int) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Lifecycle. +func (in *Lifecycle) DeepCopy() *Lifecycle { + if in == nil { + return nil + } + out := new(Lifecycle) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Mapfile) DeepCopyInto(out *Mapfile) { + *out = *in + in.ConfigMapKeyRef.DeepCopyInto(&out.ConfigMapKeyRef) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Mapfile. +func (in *Mapfile) DeepCopy() *Mapfile { + if in == nil { + return nil + } + out := new(Mapfile) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *Postgis) DeepCopyInto(out *Postgis) { + *out = *in + if in.Columns != nil { + in, out := &in.Columns, &out.Columns + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.Aliases != nil { + in, out := &in.Aliases, &out.Aliases + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Postgis. +func (in *Postgis) DeepCopy() *Postgis { + if in == nil { + return nil + } + out := new(Postgis) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Resources) DeepCopyInto(out *Resources) { + *out = *in + if in.APIVersion != nil { + in, out := &in.APIVersion, &out.APIVersion + *out = new(string) + **out = **in + } + if in.Kind != nil { + in, out := &in.Kind, &out.Kind + *out = new(string) + **out = **in + } + if in.Name != nil { + in, out := &in.Name, &out.Name + *out = new(string) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Resources. +func (in *Resources) DeepCopy() *Resources { + if in == nil { + return nil + } + out := new(Resources) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ResultAnsible) DeepCopyInto(out *ResultAnsible) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResultAnsible. +func (in *ResultAnsible) DeepCopy() *ResultAnsible { + if in == nil { + return nil + } + out := new(ResultAnsible) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Status) DeepCopyInto(out *Status) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Deployment != nil { + in, out := &in.Deployment, &out.Deployment + *out = new(string) + **out = **in + } + if in.Resources != nil { + in, out := &in.Resources, &out.Resources + *out = make([]Resources, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Status. +func (in *Status) DeepCopy() *Status { + if in == nil { + return nil + } + out := new(Status) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Style) DeepCopyInto(out *Style) { + *out = *in + if in.Title != nil { + in, out := &in.Title, &out.Title + *out = new(string) + **out = **in + } + if in.Abstract != nil { + in, out := &in.Abstract, &out.Abstract + *out = new(string) + **out = **in + } + if in.Visualization != nil { + in, out := &in.Visualization, &out.Visualization + *out = new(string) + **out = **in + } + if in.LegendFile != nil { + in, out := &in.LegendFile, &out.LegendFile + *out = new(LegendFile) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Style. 
+func (in *Style) DeepCopy() *Style { + if in == nil { + return nil + } + out := new(Style) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *StylingAssets) DeepCopyInto(out *StylingAssets) { + *out = *in + if in.ConfigMapRefs != nil { + in, out := &in.ConfigMapRefs, &out.ConfigMapRefs + *out = make([]ConfigMapRef, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.BlobKeys != nil { + in, out := &in.BlobKeys, &out.BlobKeys + *out = make([]string, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StylingAssets. +func (in *StylingAssets) DeepCopy() *StylingAssets { + if in == nil { + return nil + } + out := new(StylingAssets) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Tif) DeepCopyInto(out *Tif) { + *out = *in + if in.GetFeatureInfoIncludesClass != nil { + in, out := &in.GetFeatureInfoIncludesClass, &out.GetFeatureInfoIncludesClass + *out = new(bool) + **out = **in + } + if in.Offsite != nil { + in, out := &in.Offsite, &out.Offsite + *out = new(string) + **out = **in + } + if in.Resample != nil { + in, out := &in.Resample, &out.Resample + *out = new(string) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Tif. +func (in *Tif) DeepCopy() *Tif { + if in == nil { + return nil + } + out := new(Tif) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *WFS) DeepCopyInto(out *WFS) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + if in.Status != nil { + in, out := &in.Status, &out.Status + *out = new(Status) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WFS. +func (in *WFS) DeepCopy() *WFS { + if in == nil { + return nil + } + out := new(WFS) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *WFS) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *WFSList) DeepCopyInto(out *WFSList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]WFS, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WFSList. +func (in *WFSList) DeepCopy() *WFSList { + if in == nil { + return nil + } + out := new(WFSList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *WFSList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *WFSService) DeepCopyInto(out *WFSService) { + *out = *in + if in.Keywords != nil { + in, out := &in.Keywords, &out.Keywords + *out = make([]string, len(*in)) + copy(*out, *in) + } + out.Authority = in.Authority + if in.Extent != nil { + in, out := &in.Extent, &out.Extent + *out = new(string) + **out = **in + } + if in.Maxfeatures != nil { + in, out := &in.Maxfeatures, &out.Maxfeatures + *out = new(string) + **out = **in + } + if in.FeatureTypes != nil { + in, out := &in.FeatureTypes, &out.FeatureTypes + *out = make([]FeatureType, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Mapfile != nil { + in, out := &in.Mapfile, &out.Mapfile + *out = new(Mapfile) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WFSService. +func (in *WFSService) DeepCopy() *WFSService { + if in == nil { + return nil + } + out := new(WFSService) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *WFSSpec) DeepCopyInto(out *WFSSpec) { + *out = *in + in.General.DeepCopyInto(&out.General) + in.Service.DeepCopyInto(&out.Service) + in.Kubernetes.DeepCopyInto(&out.Kubernetes) + in.Options.DeepCopyInto(&out.Options) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WFSSpec. +func (in *WFSSpec) DeepCopy() *WFSSpec { + if in == nil { + return nil + } + out := new(WFSSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *WMS) DeepCopyInto(out *WMS) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + if in.Status != nil { + in, out := &in.Status, &out.Status + *out = new(Status) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WMS. +func (in *WMS) DeepCopy() *WMS { + if in == nil { + return nil + } + out := new(WMS) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *WMS) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *WMSLayer) DeepCopyInto(out *WMSLayer) { + *out = *in + if in.Group != nil { + in, out := &in.Group, &out.Group + *out = new(string) + **out = **in + } + if in.Title != nil { + in, out := &in.Title, &out.Title + *out = new(string) + **out = **in + } + if in.Abstract != nil { + in, out := &in.Abstract, &out.Abstract + *out = new(string) + **out = **in + } + if in.Keywords != nil { + in, out := &in.Keywords, &out.Keywords + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.DatasetMetadataIdentifier != nil { + in, out := &in.DatasetMetadataIdentifier, &out.DatasetMetadataIdentifier + *out = new(string) + **out = **in + } + if in.SourceMetadataIdentifier != nil { + in, out := &in.SourceMetadataIdentifier, &out.SourceMetadataIdentifier + *out = new(string) + **out = **in + } + if in.Styles != nil { + in, out := &in.Styles, &out.Styles + *out = make([]Style, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Extent != nil { + in, out := &in.Extent, &out.Extent + *out = new(string) + **out = **in + } + if in.MinScale != nil { + in, out := &in.MinScale, &out.MinScale + *out = new(string) + **out = **in + } + if in.MaxScale != nil { + in, out := &in.MaxScale, &out.MaxScale + *out = new(string) + **out = **in + } + if in.Data != nil { + in, out := &in.Data, &out.Data + *out = new(Data) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WMSLayer. +func (in *WMSLayer) DeepCopy() *WMSLayer { + if in == nil { + return nil + } + out := new(WMSLayer) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *WMSList) DeepCopyInto(out *WMSList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]WMS, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WMSList. +func (in *WMSList) DeepCopy() *WMSList { + if in == nil { + return nil + } + out := new(WMSList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *WMSList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *WMSService) DeepCopyInto(out *WMSService) { + *out = *in + if in.Keywords != nil { + in, out := &in.Keywords, &out.Keywords + *out = make([]string, len(*in)) + copy(*out, *in) + } + out.Authority = in.Authority + if in.Layers != nil { + in, out := &in.Layers, &out.Layers + *out = make([]WMSLayer, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Extent != nil { + in, out := &in.Extent, &out.Extent + *out = new(string) + **out = **in + } + if in.Maxsize != nil { + in, out := &in.Maxsize, &out.Maxsize + *out = new(string) + **out = **in + } + if in.Resolution != nil { + in, out := &in.Resolution, &out.Resolution + *out = new(int) + **out = **in + } + if in.DefResolution != nil { + in, out := &in.DefResolution, &out.DefResolution + *out = new(int) + **out = **in + } + if in.StylingAssets != nil { + in, out := &in.StylingAssets, &out.StylingAssets + *out = new(StylingAssets) + (*in).DeepCopyInto(*out) + } + if in.Mapfile != nil { + in, out := &in.Mapfile, &out.Mapfile + *out = new(Mapfile) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WMSService. +func (in *WMSService) DeepCopy() *WMSService { + if in == nil { + return nil + } + out := new(WMSService) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *WMSSpec) DeepCopyInto(out *WMSSpec) { + *out = *in + in.General.DeepCopyInto(&out.General) + in.Service.DeepCopyInto(&out.Service) + in.Options.DeepCopyInto(&out.Options) + in.Kubernetes.DeepCopyInto(&out.Kubernetes) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WMSSpec. +func (in *WMSSpec) DeepCopy() *WMSSpec { + if in == nil { + return nil + } + out := new(WMSSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *WMSWFSOptions) DeepCopyInto(out *WMSWFSOptions) { + *out = *in + if in.ValidateRequests != nil { + in, out := &in.ValidateRequests, &out.ValidateRequests + *out = new(bool) + **out = **in + } + if in.RewriteGroupToDataLayers != nil { + in, out := &in.RewriteGroupToDataLayers, &out.RewriteGroupToDataLayers + *out = new(bool) + **out = **in + } + if in.DisableWebserviceProxy != nil { + in, out := &in.DisableWebserviceProxy, &out.DisableWebserviceProxy + *out = new(bool) + **out = **in + } + if in.PrefetchData != nil { + in, out := &in.PrefetchData, &out.PrefetchData + *out = new(bool) + **out = **in + } + if in.ValidateChildStyleNameEqual != nil { + in, out := &in.ValidateChildStyleNameEqual, &out.ValidateChildStyleNameEqual + *out = new(bool) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WMSWFSOptions. 
+func (in *WMSWFSOptions) DeepCopy() *WMSWFSOptions { + if in == nil { + return nil + } + out := new(WMSWFSOptions) + in.DeepCopyInto(out) + return out +} diff --git a/api/v3/shared_types.go b/api/v3/shared_types.go new file mode 100644 index 0000000..a13f261 --- /dev/null +++ b/api/v3/shared_types.go @@ -0,0 +1,51 @@ +package v3 + +type Options struct { + AutomaticCasing bool `json:"automaticCasing"` + PrefetchData bool `json:"prefetchData"` + IncludeIngress bool `json:"includeIngress"` +} + +type Inspire struct { + ServiceMetadataURL MetadataURL `json:"serviceMetadataUrl"` + SpatialDatasetIdentifier string `json:"spatialDatasetIdentifier"` + Language string `json:"language"` +} + +type MetadataURL struct { + CSW *Metadata `json:"csw"` + Custom *Custom `json:"custom,omitempty"` +} + +type Metadata struct { + MetadataIdentifier string `json:"metadataIdentifier"` +} + +type Custom struct { + Href string `json:"href"` + Type string `json:"type"` +} + +type Data struct { + Gpkg *Gpkg `json:"gpkg,omitempty"` + Postgis *Postgis `json:"postgis,omitempty"` +} + +type Gpkg struct { + BlobKey string `json:"blobKey"` + TableName string `json:"tableName"` + GeometryType string `json:"geometryType"` + Columns []Columns `json:"columns"` +} + +// Postgis - reference to table in a Postgres database +type Postgis struct { + TableName string `json:"tableName"` + GeometryType string `json:"geometryType"` + Columns []Columns `json:"columns"` +} + +type Columns struct { + Name string `json:"name"` + Alias *string `json:"alias,omitempty"` +} diff --git a/api/v3/wfs_conversion.go b/api/v3/wfs_conversion.go new file mode 100644 index 0000000..372bdb3 --- /dev/null +++ b/api/v3/wfs_conversion.go @@ -0,0 +1,30 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v3 + +// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN! + +// Hub marks this type as a conversion hub. +func (*WFS) Hub() {} diff --git a/api/v3/wfs_types.go b/api/v3/wfs_types.go index 0a06c59..0d483e9 100644 --- a/api/v3/wfs_types.go +++ b/api/v3/wfs_types.go @@ -25,37 +25,30 @@ SOFTWARE. package v3 import ( + shared_model "github.com/pdok/smooth-operator/model" + autoscalingv2 "k8s.io/api/autoscaling/v2beta1" + corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN! // NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized. 
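Editor's note: the following is an illustrative sketch, not part of the diff. It shows how the new shared v3 types added in api/v3/shared_types.go above (Data, Gpkg, Columns) might be composed for a GeoPackage-backed feature type; the blob key, table name and column names are hypothetical placeholders, not values taken from this change.

    // exampleGpkgData is a hypothetical helper (package v3) composing the new shared types.
    func exampleGpkgData() Data {
        alias := "type" // Alias is optional in Columns, hence a pointer
        return Data{
            Gpkg: &Gpkg{
                BlobKey:      "deliveries/example/features.gpkg", // hypothetical blob key
                TableName:    "roads",                            // hypothetical table name
                GeometryType: "MultiLineString",
                Columns: []Columns{
                    {Name: "name"},
                    {Name: "road_type", Alias: &alias},
                },
            },
        }
    }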
-// WFSSpec defines the desired state of WFS. -type WFSSpec struct { - // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster - // Important: Run "make" to regenerate code after modifying this file - - // Foo is an example field of WFS. Edit wfs_types.go to remove/update - Foo string `json:"foo,omitempty"` -} - -// WFSStatus defines the observed state of WFS. -type WFSStatus struct { - // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster - // Important: Run "make" to regenerate code after modifying this file -} - // +kubebuilder:object:root=true +// +kubebuilder:storageversion +// +kubebuilder:conversion:hub // +kubebuilder:subresource:status +// versionName=v3 +// +kubebuilder:resource:categories=pdok +// +kubebuilder:resource:path=wfs // WFS is the Schema for the wfs API. type WFS struct { metav1.TypeMeta `json:",inline"` metav1.ObjectMeta `json:"metadata,omitempty"` - Spec WFSSpec `json:"spec,omitempty"` - Status WFSStatus `json:"status,omitempty"` + Spec WFSSpec `json:"spec,omitempty"` + Status shared_model.OperatorStatus `json:"status,omitempty"` } // +kubebuilder:object:root=true @@ -70,3 +63,61 @@ type WFSList struct { func init() { SchemeBuilder.Register(&WFS{}, &WFSList{}) } + +// WFSSpec vertegenwoordigt de hoofdstruct voor de YAML-configuratie +type WFSSpec struct { + Lifecycle *shared_model.Lifecycle `json:"lifecycle"` + // +kubebuilder:validation:Type=object + // +kubebuilder:validation:Schemaless + // +kubebuilder:pruning:PreserveUnknownFields + // Optional strategic merge patch for the pod in the deployment. E.g. to patch the resources or add extra env vars. + PodSpecPatch *corev1.PodSpec `json:"podSpecPatch,omitempty"` + HorizontalPodAutoscalerPatch *autoscalingv2.HorizontalPodAutoscalerSpec `json:"horizontalPodAutoscalerPatch"` + Options *Options `json:"options"` + Service Service `json:"service"` +} + +type Service struct { + Prefix string `json:"prefix"` + BaseURL string `json:"baseUrl"` + Inspire *Inspire `json:"inspire,omitempty"` + Mapfile *Mapfile `json:"mapfile,omitempty"` + OwnerInfoRef string `json:"ownerInfoRef"` + Title string `json:"title"` + Abstract string `json:"abstract"` + Keywords []string `json:"keywords"` + Fees *string `json:"fees"` + AccessConstraints string `json:"accessConstraints"` + DefaultCrs string `json:"defaultCrs"` + OtherCrs []string `json:"otherCrs,omitempty"` + Bbox *Bbox `json:"bbox"` + // CountDefault -> wfs_maxfeatures in mapfile + CountDefault *string `json:"countDefault"` + FeatureTypes []FeatureType `json:"featureTypes"` +} + +type Mapfile struct { + ConfigMapKeyRef corev1.ConfigMapKeySelector `json:"configMapKeyRef"` +} + +type Bbox struct { + // EXTENT/wfs_extent in mapfile + //nolint:tagliatelle + DefaultCRS shared_model.BBox `json:"defaultCRS"` +} + +type FeatureType struct { + Name string `json:"name"` + Title string `json:"title"` + Abstract string `json:"abstract"` + Keywords []string `json:"keywords"` + DatasetMetadataURL MetadataURL `json:"datasetMetadataUrl"` + Bbox *FeatureBbox `json:"bbox,omitempty"` + Data Data `json:"data"` +} + +type FeatureBbox struct { + //nolint:tagliatelle + DefaultCRS shared_model.BBox `json:"defaultCRS"` + WGS84 *shared_model.BBox `json:"wgs84,omitempty"` +} diff --git a/api/v3/wms_conversion.go b/api/v3/wms_conversion.go new file mode 100644 index 0000000..c262d39 --- /dev/null +++ b/api/v3/wms_conversion.go @@ -0,0 +1,30 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a 
copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v3 + +// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN! + +// Hub marks this type as a conversion hub. +func (*WMS) Hub() {} diff --git a/api/v3/wms_types.go b/api/v3/wms_types.go index 0bfd4e9..a550a29 100644 --- a/api/v3/wms_types.go +++ b/api/v3/wms_types.go @@ -47,7 +47,12 @@ type WMSStatus struct { } // +kubebuilder:object:root=true +// +kubebuilder:storageversion +// +kubebuilder:conversion:hub // +kubebuilder:subresource:status +// versionName=v3 +// +kubebuilder:resource:categories=pdok +// +kubebuilder:resource:path=wms // WMS is the Schema for the wms API. type WMS struct { diff --git a/api/v3/zz_generated.deepcopy.go b/api/v3/zz_generated.deepcopy.go index 8831ce8..9a9af1f 100644 --- a/api/v3/zz_generated.deepcopy.go +++ b/api/v3/zz_generated.deepcopy.go @@ -29,16 +29,331 @@ SOFTWARE. package v3 import ( + "github.com/pdok/smooth-operator/model" + "k8s.io/api/autoscaling/v2beta1" + "k8s.io/api/core/v1" runtime "k8s.io/apimachinery/pkg/runtime" ) +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Bbox) DeepCopyInto(out *Bbox) { + *out = *in + out.DefaultCRS = in.DefaultCRS +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Bbox. +func (in *Bbox) DeepCopy() *Bbox { + if in == nil { + return nil + } + out := new(Bbox) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Columns) DeepCopyInto(out *Columns) { + *out = *in + if in.Alias != nil { + in, out := &in.Alias, &out.Alias + *out = new(string) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Columns. +func (in *Columns) DeepCopy() *Columns { + if in == nil { + return nil + } + out := new(Columns) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Custom) DeepCopyInto(out *Custom) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Custom. +func (in *Custom) DeepCopy() *Custom { + if in == nil { + return nil + } + out := new(Custom) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *Data) DeepCopyInto(out *Data) { + *out = *in + if in.Gpkg != nil { + in, out := &in.Gpkg, &out.Gpkg + *out = new(Gpkg) + (*in).DeepCopyInto(*out) + } + if in.Postgis != nil { + in, out := &in.Postgis, &out.Postgis + *out = new(Postgis) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Data. +func (in *Data) DeepCopy() *Data { + if in == nil { + return nil + } + out := new(Data) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *FeatureBbox) DeepCopyInto(out *FeatureBbox) { + *out = *in + out.DefaultCRS = in.DefaultCRS + if in.WGS84 != nil { + in, out := &in.WGS84, &out.WGS84 + *out = new(model.BBox) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FeatureBbox. +func (in *FeatureBbox) DeepCopy() *FeatureBbox { + if in == nil { + return nil + } + out := new(FeatureBbox) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *FeatureType) DeepCopyInto(out *FeatureType) { + *out = *in + if in.Keywords != nil { + in, out := &in.Keywords, &out.Keywords + *out = make([]string, len(*in)) + copy(*out, *in) + } + in.DatasetMetadataURL.DeepCopyInto(&out.DatasetMetadataURL) + if in.Bbox != nil { + in, out := &in.Bbox, &out.Bbox + *out = new(FeatureBbox) + (*in).DeepCopyInto(*out) + } + in.Data.DeepCopyInto(&out.Data) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FeatureType. +func (in *FeatureType) DeepCopy() *FeatureType { + if in == nil { + return nil + } + out := new(FeatureType) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Gpkg) DeepCopyInto(out *Gpkg) { + *out = *in + if in.Columns != nil { + in, out := &in.Columns, &out.Columns + *out = make([]Columns, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Gpkg. +func (in *Gpkg) DeepCopy() *Gpkg { + if in == nil { + return nil + } + out := new(Gpkg) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Inspire) DeepCopyInto(out *Inspire) { + *out = *in + in.ServiceMetadataURL.DeepCopyInto(&out.ServiceMetadataURL) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Inspire. +func (in *Inspire) DeepCopy() *Inspire { + if in == nil { + return nil + } + out := new(Inspire) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Mapfile) DeepCopyInto(out *Mapfile) { + *out = *in + in.ConfigMapKeyRef.DeepCopyInto(&out.ConfigMapKeyRef) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Mapfile. +func (in *Mapfile) DeepCopy() *Mapfile { + if in == nil { + return nil + } + out := new(Mapfile) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *Metadata) DeepCopyInto(out *Metadata) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Metadata. +func (in *Metadata) DeepCopy() *Metadata { + if in == nil { + return nil + } + out := new(Metadata) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *MetadataURL) DeepCopyInto(out *MetadataURL) { + *out = *in + if in.CSW != nil { + in, out := &in.CSW, &out.CSW + *out = new(Metadata) + **out = **in + } + if in.Custom != nil { + in, out := &in.Custom, &out.Custom + *out = new(Custom) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MetadataURL. +func (in *MetadataURL) DeepCopy() *MetadataURL { + if in == nil { + return nil + } + out := new(MetadataURL) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Options) DeepCopyInto(out *Options) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Options. +func (in *Options) DeepCopy() *Options { + if in == nil { + return nil + } + out := new(Options) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Postgis) DeepCopyInto(out *Postgis) { + *out = *in + if in.Columns != nil { + in, out := &in.Columns, &out.Columns + *out = make([]Columns, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Postgis. +func (in *Postgis) DeepCopy() *Postgis { + if in == nil { + return nil + } + out := new(Postgis) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Service) DeepCopyInto(out *Service) { + *out = *in + if in.Inspire != nil { + in, out := &in.Inspire, &out.Inspire + *out = new(Inspire) + (*in).DeepCopyInto(*out) + } + if in.Mapfile != nil { + in, out := &in.Mapfile, &out.Mapfile + *out = new(Mapfile) + (*in).DeepCopyInto(*out) + } + if in.Keywords != nil { + in, out := &in.Keywords, &out.Keywords + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.Fees != nil { + in, out := &in.Fees, &out.Fees + *out = new(string) + **out = **in + } + if in.OtherCrs != nil { + in, out := &in.OtherCrs, &out.OtherCrs + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.Bbox != nil { + in, out := &in.Bbox, &out.Bbox + *out = new(Bbox) + **out = **in + } + if in.CountDefault != nil { + in, out := &in.CountDefault, &out.CountDefault + *out = new(string) + **out = **in + } + if in.FeatureTypes != nil { + in, out := &in.FeatureTypes, &out.FeatureTypes + *out = make([]FeatureType, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Service. +func (in *Service) DeepCopy() *Service { + if in == nil { + return nil + } + out := new(Service) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *WFS) DeepCopyInto(out *WFS) { *out = *in out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) - out.Spec = in.Spec - out.Status = in.Status + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WFS. @@ -94,6 +409,27 @@ func (in *WFSList) DeepCopyObject() runtime.Object { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *WFSSpec) DeepCopyInto(out *WFSSpec) { *out = *in + if in.Lifecycle != nil { + in, out := &in.Lifecycle, &out.Lifecycle + *out = new(model.Lifecycle) + (*in).DeepCopyInto(*out) + } + if in.PodSpecPatch != nil { + in, out := &in.PodSpecPatch, &out.PodSpecPatch + *out = new(v1.PodSpec) + (*in).DeepCopyInto(*out) + } + if in.HorizontalPodAutoscalerPatch != nil { + in, out := &in.HorizontalPodAutoscalerPatch, &out.HorizontalPodAutoscalerPatch + *out = new(v2beta1.HorizontalPodAutoscalerSpec) + (*in).DeepCopyInto(*out) + } + if in.Options != nil { + in, out := &in.Options, &out.Options + *out = new(Options) + **out = **in + } + in.Service.DeepCopyInto(&out.Service) } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WFSSpec. @@ -106,21 +442,6 @@ func (in *WFSSpec) DeepCopy() *WFSSpec { return out } -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *WFSStatus) DeepCopyInto(out *WFSStatus) { - *out = *in -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WFSStatus. -func (in *WFSStatus) DeepCopy() *WFSStatus { - if in == nil { - return nil - } - out := new(WFSStatus) - in.DeepCopyInto(out) - return out -} - // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *WMS) DeepCopyInto(out *WMS) { *out = *in diff --git a/apply-prod-manifests-locally.sh b/apply-prod-manifests-locally.sh new file mode 100755 index 0000000..5759c9a --- /dev/null +++ b/apply-prod-manifests-locally.sh @@ -0,0 +1,19 @@ +#!/bin/bash + +KUBECTX=$(kubectx) +if [[ "$KUBECTX" != "default" ]]; then + echo "You need to be connected with the local cluster." + exit 1 +fi + +SERVICE_TYPE=${1:-wfs} + +for MANIFEST in "./prod-manifests/$SERVICE_TYPE/"*.yaml; do + kubectl apply -f $MANIFEST + + if [ $? -eq 0 ]; then + kubectl delete -f $MANIFEST + else + break + fi +done \ No newline at end of file diff --git a/build-push-deploy-locally.sh b/build-push-deploy-locally.sh new file mode 100755 index 0000000..977ec5b --- /dev/null +++ b/build-push-deploy-locally.sh @@ -0,0 +1,26 @@ +#!/bin/bash + +TAG=$1 + +echo "Running: make generate" +make generate + +echo "" +echo "Running: build -t local-registry:5000/wfs-operator:$TAG --build-context repos=./.. ." +docker build -t "local-registry:5000/wfs-operator:$TAG" --build-context repos=./.. . 
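Editor's note: an illustrative sketch, not part of the diff, stepping back to the regenerated deepcopy code in api/v3/zz_generated.deepcopy.go above. It shows what the new WFSSpec deepcopy buys: pointer-valued fields such as Options are copied rather than aliased, so mutating the copy leaves the original untouched. The function name is hypothetical.

    // demonstrateDeepCopy is a hypothetical snippet (package v3) showing deep-copy semantics.
    func demonstrateDeepCopy() {
        original := &WFSSpec{Options: &Options{PrefetchData: true}}
        clone := original.DeepCopy()
        clone.Options.PrefetchData = false
        // original.Options.PrefetchData is still true: DeepCopy allocated a new
        // Options value for the clone instead of sharing the original pointer.
    }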
+ +echo "" +echo "Running: push local-registry:5000/wfs-operator:$TAG" +docker push "local-registry:5000/wfs-operator:$TAG" + +echo "" +echo "Installing cert-manager" +kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.0/cert-manager.yaml + +echo "" +echo "Running: make install" +make install + +echo "" +echo "Running: deploy IMG=local-registry:5000/wfs-operator:$TAG" +make deploy "IMG=local-registry:5000/wfs-operator:$TAG" \ No newline at end of file diff --git a/cmd/main.go b/cmd/main.go index e39da37..d863094 100644 --- a/cmd/main.go +++ b/cmd/main.go @@ -37,8 +37,10 @@ import ( metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server" "sigs.k8s.io/controller-runtime/pkg/webhook" + pdoknlv2beta1 "github.com/pdok/mapserver-operator/api/v2beta1" pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" "github.com/pdok/mapserver-operator/internal/controller" + webhookpdoknlv3 "github.com/pdok/mapserver-operator/internal/webhook/v3" // +kubebuilder:scaffold:imports ) @@ -51,10 +53,11 @@ func init() { utilruntime.Must(clientgoscheme.AddToScheme(scheme)) utilruntime.Must(pdoknlv3.AddToScheme(scheme)) + utilruntime.Must(pdoknlv2beta1.AddToScheme(scheme)) // +kubebuilder:scaffold:scheme } -// nolint:gocyclo +//nolint:gocyclo func main() { var metricsAddr string var metricsCertPath, metricsCertName, metricsCertKey string @@ -216,6 +219,20 @@ func main() { setupLog.Error(err, "unable to create controller", "controller", "WFS") os.Exit(1) } + + if os.Getenv("ENABLE_WEBHOOKS") != "false" { + if err = webhookpdoknlv3.SetupWFSWebhookWithManager(mgr); err != nil { + setupLog.Error(err, "unable to create webhook", "webhook", "WFS") + os.Exit(1) + } + } + + if os.Getenv("ENABLE_WEBHOOKS") != "false" { + if err = webhookpdoknlv3.SetupWMSWebhookWithManager(mgr); err != nil { + setupLog.Error(err, "unable to create webhook", "webhook", "WMS") + os.Exit(1) + } + } // +kubebuilder:scaffold:builder if metricsCertWatcher != nil { diff --git a/config/certmanager/certificate-metrics.yaml b/config/certmanager/certificate-metrics.yaml new file mode 100644 index 0000000..10df7dd --- /dev/null +++ b/config/certmanager/certificate-metrics.yaml @@ -0,0 +1,20 @@ +# The following manifests contain a self-signed issuer CR and a metrics certificate CR. +# More document can be found at https://docs.cert-manager.io +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + labels: + app.kubernetes.io/name: mapserver-operator + app.kubernetes.io/managed-by: kustomize + name: metrics-certs # this name should match the one appeared in kustomizeconfig.yaml + namespace: system +spec: + dnsNames: + # SERVICE_NAME and SERVICE_NAMESPACE will be substituted by kustomize + # replacements in the config/default/kustomization.yaml file. + - SERVICE_NAME.SERVICE_NAMESPACE.svc + - SERVICE_NAME.SERVICE_NAMESPACE.svc.cluster.local + issuerRef: + kind: Issuer + name: selfsigned-issuer + secretName: metrics-server-cert diff --git a/config/certmanager/certificate-webhook.yaml b/config/certmanager/certificate-webhook.yaml new file mode 100644 index 0000000..85bdd1c --- /dev/null +++ b/config/certmanager/certificate-webhook.yaml @@ -0,0 +1,20 @@ +# The following manifests contain a self-signed issuer CR and a certificate CR. 
+# More document can be found at https://docs.cert-manager.io +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + labels: + app.kubernetes.io/name: mapserver-operator + app.kubernetes.io/managed-by: kustomize + name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml + namespace: system +spec: + # SERVICE_NAME and SERVICE_NAMESPACE will be substituted by kustomize + # replacements in the config/default/kustomization.yaml file. + dnsNames: + - SERVICE_NAME.SERVICE_NAMESPACE.svc + - SERVICE_NAME.SERVICE_NAMESPACE.svc.cluster.local + issuerRef: + kind: Issuer + name: selfsigned-issuer + secretName: webhook-server-cert diff --git a/config/certmanager/issuer.yaml b/config/certmanager/issuer.yaml new file mode 100644 index 0000000..554da94 --- /dev/null +++ b/config/certmanager/issuer.yaml @@ -0,0 +1,13 @@ +# The following manifest contains a self-signed issuer CR. +# More information can be found at https://docs.cert-manager.io +# WARNING: Targets CertManager v1.0. Check https://cert-manager.io/docs/installation/upgrading/ for breaking changes. +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + labels: + app.kubernetes.io/name: mapserver-operator + app.kubernetes.io/managed-by: kustomize + name: selfsigned-issuer + namespace: system +spec: + selfSigned: {} diff --git a/config/certmanager/kustomization.yaml b/config/certmanager/kustomization.yaml new file mode 100644 index 0000000..fcb7498 --- /dev/null +++ b/config/certmanager/kustomization.yaml @@ -0,0 +1,7 @@ +resources: +- issuer.yaml +- certificate-webhook.yaml +- certificate-metrics.yaml + +configurations: +- kustomizeconfig.yaml diff --git a/config/certmanager/kustomizeconfig.yaml b/config/certmanager/kustomizeconfig.yaml new file mode 100644 index 0000000..cf6f89e --- /dev/null +++ b/config/certmanager/kustomizeconfig.yaml @@ -0,0 +1,8 @@ +# This configuration is for teaching kustomize how to update name ref substitution +nameReference: +- kind: Issuer + group: cert-manager.io + fieldSpecs: + - kind: Certificate + group: cert-manager.io + path: spec/issuerRef/name diff --git a/config/crd/bases/pdok.nl_wfs.yaml b/config/crd/bases/pdok.nl_wfs.yaml new file mode 100644 index 0000000..d79234e --- /dev/null +++ b/config/crd/bases/pdok.nl_wfs.yaml @@ -0,0 +1,1230 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.17.1 + name: wfs.pdok.nl +spec: + group: pdok.nl + names: + kind: WFS + listKind: WFSList + plural: wfs + singular: wfs + scope: Namespaced + versions: + - name: v2beta1 + schema: + openAPIV3Schema: + description: WFS is the Schema for the wfs API. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: WFSSpec is the struct for all fields defined in the WFS CRD + properties: + general: + description: General is the struct with all generic fields for the + crds + properties: + dataVersion: + type: string + dataset: + type: string + datasetOwner: + type: string + serviceVersion: + type: string + theme: + type: string + required: + - dataset + - datasetOwner + type: object + kubernetes: + description: Kubernetes is the struct with all fields that can be + defined in kubernetes fields in the crds + properties: + autoscaling: + description: Autoscaling is the struct with all fields to configure + autoscalers for the crs + properties: + averageCpuUtilization: + type: integer + maxReplicas: + type: integer + minReplicas: + type: integer + type: object + healthCheck: + description: HealthCheck is the struct with all fields to configure + healthchecks for the crs + properties: + boundingbox: + type: string + mimetype: + type: string + querystring: + type: string + type: object + lifecycle: + description: Lifecycle is the struct with the fields to configure + lifecycle settings for the resources + properties: + ttlInDays: + type: integer + type: object + resources: + description: ResourceRequirements describes the compute resource + requirements. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + type: object + options: + description: WMSWFSOptions is the struct with options available in + the operator + properties: + automaticCasing: + type: boolean + disableWebserviceProxy: + type: boolean + includeIngress: + type: boolean + prefetchData: + type: boolean + rewriteGroupToDataLayers: + type: boolean + validateChildStyleNameEqual: + type: boolean + validateRequests: + type: boolean + required: + - automaticCasing + - includeIngress + type: object + service: + description: WFSService is the struct with all service specific options + properties: + abstract: + type: string + accessConstraints: + type: string + authority: + description: Authority is a struct for the authority fields in + WMS and WFS crds + properties: + name: + type: string + url: + type: string + required: + - name + - url + type: object + dataEPSG: + type: string + extent: + type: string + featureTypes: + items: + description: FeatureType is the struct for all feature type + level fields + properties: + abstract: + type: string + data: + description: Data is a struct for the data field for a WMSLayer + or WFS FeatureType + properties: + gpkg: + description: GPKG is a struct for the gpkg field for + a WMSLayer or WFS FeatureType + properties: + aliases: + additionalProperties: + type: string + description: In a new version Aliases should become + part of Columns + type: object + blobKey: + type: string + columns: + items: + type: string + type: array + geometryType: + type: string + table: + type: string + required: + - blobKey + - columns + - geometryType + - table + type: object + postgis: + description: |- + Postgis is a struct for the Postgis db config for a WMSLayer or WFS FeatureType + connection details are passed through the environment + properties: + aliases: + additionalProperties: + type: string + description: In a new version Aliases should become + part of Columns + type: object + columns: + items: + type: string + type: array + geometryType: + type: string + table: + type: string + required: + - columns + - geometryType + - table + type: object + tif: + description: Tif is a struct for the Tif field for a + WMSLayer + properties: + blobKey: + type: string + getFeatureInfoIncludesClass: + type: boolean + offsite: + type: string + resample: + type: string + required: + - blobKey + type: object + type: object + datasetMetadataIdentifier: + type: string + extent: + type: string + keywords: + items: + type: string + type: array + name: + type: string + sourceMetadataIdentifier: + type: string + title: + type: string + required: + - abstract + - data + - datasetMetadataIdentifier + - keywords + - name + - sourceMetadataIdentifier + - title + type: object + type: array + inspire: + type: boolean + keywords: + items: + type: string + type: array + mapfile: + description: Mapfile contains the ConfigMapKeyRef containing a + mapfile + properties: + configMapKeyRef: + description: Selects a key from a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + required: + - configMapKeyRef + type: object + maxfeatures: + type: string + metadataIdentifier: + type: string + title: + type: string + required: + - abstract + - accessConstraints + - authority + - dataEPSG + - featureTypes + - inspire + - keywords + - metadataIdentifier + - title + type: object + required: + - general + - kubernetes + - options + - service + type: object + status: + description: Status - The status for custom resources managed by the operator-sdk. + properties: + conditions: + items: + description: |- + Condition - the condition for the ansible operator + https://github.com/operator-framework/operator-sdk/blob/master/internal/ansible/controller/status/types.go#L101 + properties: + ansibleResult: + description: ResultAnsible - encapsulation of the ansible result. + 'AnsibleResult' is turned around in struct to comply with + linting + properties: + changed: + type: integer + completion: + type: string + failures: + type: integer + ok: + type: integer + skipped: + type: integer + required: + - changed + - completion + - failures + - ok + - skipped + type: object + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + description: ConditionStatus specifies a string for field ConditionType + type: string + type: + description: ConditionType specifies a string for field ConditionType + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + deployment: + type: string + resources: + items: + description: Resources is the struct for the resources field within + status + properties: + apiversion: + type: string + kind: + type: string + name: + type: string + type: object + type: array + type: object + type: object + served: true + storage: false + subresources: + status: {} + - name: v3 + schema: + openAPIV3Schema: + description: WFS is the Schema for the wfs API. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: WFSSpec vertegenwoordigt de hoofdstruct voor de YAML-configuratie + properties: + horizontalPodAutoscalerPatch: + description: HorizontalPodAutoscalerSpec describes the desired functionality + of the HorizontalPodAutoscaler. + properties: + maxReplicas: + description: |- + maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. + It cannot be less that minReplicas. 
+ format: int32 + type: integer + metrics: + description: |- + metrics contains the specifications for which to use to calculate the + desired replica count (the maximum replica count across all metrics will + be used). The desired replica count is calculated multiplying the + ratio between the target value and the current value by the current + number of pods. Ergo, metrics used must decrease as the pod count is + increased, and vice-versa. See the individual metric source types for + more information about how each type of metric must respond. + items: + description: |- + MetricSpec specifies how to scale based on a single metric + (only `type` and one other matching field should be set at once). + properties: + containerResource: + description: |- + container resource refers to a resource metric (such as those specified in + requests and limits) known to Kubernetes describing a single container in + each pod of the current scale target (e.g. CPU or memory). Such metrics are + built in to Kubernetes, and have special scaling options on top of those + available to normal per-pod metrics using the "pods" source. + properties: + container: + description: container is the name of the container + in the pods of the scaling target + type: string + name: + description: name is the name of the resource in question. + type: string + targetAverageUtilization: + description: |- + targetAverageUtilization is the target value of the average of the + resource metric across all relevant pods, represented as a percentage of + the requested value of the resource for the pods. + format: int32 + type: integer + targetAverageValue: + anyOf: + - type: integer + - type: string + description: |- + targetAverageValue is the target value of the average of the + resource metric across all relevant pods, as a raw value (instead of as + a percentage of the request), similar to the "pods" metric source type. + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + required: + - container + - name + type: object + external: + description: |- + external refers to a global metric that is not associated + with any Kubernetes object. It allows autoscaling based on information + coming from components running outside of cluster + (for example length of queue in cloud messaging service, or + QPS from loadbalancer running outside of cluster). + properties: + metricName: + description: metricName is the name of the metric in + question. + type: string + metricSelector: + description: |- + metricSelector is used to identify a specific time series + within a given metric. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + targetAverageValue: + anyOf: + - type: integer + - type: string + description: |- + targetAverageValue is the target per-pod value of global metric (as a quantity). + Mutually exclusive with TargetValue. + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + targetValue: + anyOf: + - type: integer + - type: string + description: |- + targetValue is the target value of the metric (as a quantity). + Mutually exclusive with TargetAverageValue. + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + required: + - metricName + type: object + object: + description: |- + object refers to a metric describing a single kubernetes object + (for example, hits-per-second on an Ingress object). + properties: + averageValue: + anyOf: + - type: integer + - type: string + description: |- + averageValue is the target value of the average of the + metric across all relevant pods (as a quantity) + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + metricName: + description: metricName is the name of the metric in + question. + type: string + selector: + description: |- + selector is the string-encoded form of a standard kubernetes label selector for the given metric + When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping + When unset, just the metricName will be used to gather metrics. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". 
The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + target: + description: target is the described Kubernetes object. + properties: + apiVersion: + description: API version of the referent + type: string + kind: + description: 'Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + required: + - kind + - name + type: object + targetValue: + anyOf: + - type: integer + - type: string + description: targetValue is the target value of the + metric (as a quantity). + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + required: + - metricName + - target + - targetValue + type: object + pods: + description: |- + pods refers to a metric describing each pod in the current scale target + (for example, transactions-processed-per-second). The values will be + averaged together before being compared to the target value. + properties: + metricName: + description: metricName is the name of the metric in + question + type: string + selector: + description: |- + selector is the string-encoded form of a standard kubernetes label selector for the given metric + When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping + When unset, just the metricName will be used to gather metrics. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. 
+ type: object + type: object + x-kubernetes-map-type: atomic + targetAverageValue: + anyOf: + - type: integer + - type: string + description: |- + targetAverageValue is the target value of the average of the + metric across all relevant pods (as a quantity) + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + required: + - metricName + - targetAverageValue + type: object + resource: + description: |- + resource refers to a resource metric (such as those specified in + requests and limits) known to Kubernetes describing each pod in the + current scale target (e.g. CPU or memory). Such metrics are built in to + Kubernetes, and have special scaling options on top of those available + to normal per-pod metrics using the "pods" source. + properties: + name: + description: name is the name of the resource in question. + type: string + targetAverageUtilization: + description: |- + targetAverageUtilization is the target value of the average of the + resource metric across all relevant pods, represented as a percentage of + the requested value of the resource for the pods. + format: int32 + type: integer + targetAverageValue: + anyOf: + - type: integer + - type: string + description: |- + targetAverageValue is the target value of the average of the + resource metric across all relevant pods, as a raw value (instead of as + a percentage of the request), similar to the "pods" metric source type. + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + required: + - name + type: object + type: + description: |- + type is the type of metric source. It should be one of "ContainerResource", + "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. + type: string + required: + - type + type: object + type: array + x-kubernetes-list-type: atomic + minReplicas: + description: |- + minReplicas is the lower limit for the number of replicas to which the autoscaler + can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the + alpha feature gate HPAScaleToZero is enabled and at least one Object or External + metric is configured. Scaling is active as long as at least one metric value is + available. + format: int32 + type: integer + scaleTargetRef: + description: |- + scaleTargetRef points to the target resource to scale, and is used to the pods for which metrics + should be collected, as well as to actually change the replica count. + properties: + apiVersion: + description: API version of the referent + type: string + kind: + description: 'Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + required: + - kind + - name + type: object + required: + - maxReplicas + - scaleTargetRef + type: object + lifecycle: + properties: + ttlInDays: + format: int32 + type: integer + required: + - ttlInDays + type: object + options: + properties: + automaticCasing: + type: boolean + includeIngress: + type: boolean + prefetchData: + type: boolean + required: + - automaticCasing + - includeIngress + - prefetchData + type: object + podSpecPatch: + description: Optional strategic merge patch for the pod in the deployment. 
+ E.g. to patch the resources or add extra env vars. + type: object + x-kubernetes-preserve-unknown-fields: true + service: + properties: + abstract: + type: string + accessConstraints: + type: string + baseUrl: + type: string + bbox: + properties: + defaultCRS: + description: EXTENT/wfs_extent in mapfile + properties: + maxx: + description: Rechtsonder X coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + maxy: + description: Rechtsonder Y coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + minx: + description: Linksboven X coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + miny: + description: Linksboven Y coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + required: + - maxx + - maxy + - minx + - miny + type: object + required: + - defaultCRS + type: object + countDefault: + description: CountDefault -> wfs_maxfeatures in mapfile + type: string + defaultCrs: + type: string + featureTypes: + items: + properties: + abstract: + type: string + bbox: + properties: + defaultCRS: + description: BBox defines a bounding box with coordinates + properties: + maxx: + description: Rechtsonder X coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + maxy: + description: Rechtsonder Y coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + minx: + description: Linksboven X coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + miny: + description: Linksboven Y coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + required: + - maxx + - maxy + - minx + - miny + type: object + wgs84: + description: BBox defines a bounding box with coordinates + properties: + maxx: + description: Rechtsonder X coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + maxy: + description: Rechtsonder Y coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + minx: + description: Linksboven X coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + miny: + description: Linksboven Y coördinaat + pattern: ^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$ + type: string + required: + - maxx + - maxy + - minx + - miny + type: object + required: + - defaultCRS + type: object + data: + properties: + gpkg: + properties: + blobKey: + type: string + columns: + items: + properties: + alias: + type: string + name: + type: string + required: + - name + type: object + type: array + geometryType: + type: string + tableName: + type: string + required: + - blobKey + - columns + - geometryType + - tableName + type: object + postgis: + description: Postgis - reference to table in a Postgres + database + properties: + columns: + items: + properties: + alias: + type: string + name: + type: string + required: + - name + type: object + type: array + geometryType: + type: string + tableName: + type: string + required: + - columns + - geometryType + - tableName + type: object + type: object + datasetMetadataUrl: + properties: + csw: + properties: + metadataIdentifier: + type: string + required: + - metadataIdentifier + type: object + custom: + properties: + href: + type: string + type: + type: string + required: + - href + - type + type: object + required: + - csw + type: object + keywords: + items: + type: string + type: array + name: + type: string + title: + type: string + required: + - abstract + - data + - datasetMetadataUrl + - keywords + - name + - title + type: object + type: array + fees: + type: string + 
inspire: + properties: + language: + type: string + serviceMetadataUrl: + properties: + csw: + properties: + metadataIdentifier: + type: string + required: + - metadataIdentifier + type: object + custom: + properties: + href: + type: string + type: + type: string + required: + - href + - type + type: object + required: + - csw + type: object + spatialDatasetIdentifier: + type: string + required: + - language + - serviceMetadataUrl + - spatialDatasetIdentifier + type: object + keywords: + items: + type: string + type: array + mapfile: + properties: + configMapKeyRef: + description: Selects a key from a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + required: + - configMapKeyRef + type: object + otherCrs: + items: + type: string + type: array + ownerInfoRef: + type: string + prefix: + type: string + title: + type: string + required: + - abstract + - accessConstraints + - baseUrl + - bbox + - countDefault + - defaultCrs + - featureTypes + - fees + - keywords + - ownerInfoRef + - prefix + - title + type: object + required: + - horizontalPodAutoscalerPatch + - lifecycle + - options + - service + type: object + status: + description: OperatorStatus defines the observed state of an Atom/WFS/WMS/.... + properties: + conditions: + description: |- + Each condition contains details for one aspect of the current state of this Atom. + Known .status.conditions.type are: "Reconciled" + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. 
+ enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + operationResults: + additionalProperties: + description: OperationResult is the action result of a CreateOrUpdate + call. + type: string + description: The result of creating or updating of each derived resource + for this Atom. + type: object + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/config/crd/bases/pdok.nl_wms.yaml b/config/crd/bases/pdok.nl_wms.yaml new file mode 100644 index 0000000..b0bcadb --- /dev/null +++ b/config/crd/bases/pdok.nl_wms.yaml @@ -0,0 +1,530 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.17.1 + name: wms.pdok.nl +spec: + group: pdok.nl + names: + kind: WMS + listKind: WMSList + plural: wms + singular: wms + scope: Namespaced + versions: + - name: v2beta1 + schema: + openAPIV3Schema: + description: WMS is the Schema for the wms API. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: WMSSpec is the struct for all fields defined in the WMS CRD + properties: + general: + description: General is the struct with all generic fields for the + crds + properties: + dataVersion: + type: string + dataset: + type: string + datasetOwner: + type: string + serviceVersion: + type: string + theme: + type: string + required: + - dataset + - datasetOwner + type: object + kubernetes: + description: Kubernetes is the struct with all fields that can be + defined in kubernetes fields in the crds + properties: + autoscaling: + description: Autoscaling is the struct with all fields to configure + autoscalers for the crs + properties: + averageCpuUtilization: + type: integer + maxReplicas: + type: integer + minReplicas: + type: integer + type: object + healthCheck: + description: HealthCheck is the struct with all fields to configure + healthchecks for the crs + properties: + boundingbox: + type: string + mimetype: + type: string + querystring: + type: string + type: object + lifecycle: + description: Lifecycle is the struct with the fields to configure + lifecycle settings for the resources + properties: + ttlInDays: + type: integer + type: object + resources: + description: ResourceRequirements describes the compute resource + requirements. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. 
+ + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + type: object + options: + description: WMSWFSOptions is the struct with options available in + the operator + properties: + automaticCasing: + type: boolean + disableWebserviceProxy: + type: boolean + includeIngress: + type: boolean + prefetchData: + type: boolean + rewriteGroupToDataLayers: + type: boolean + validateChildStyleNameEqual: + type: boolean + validateRequests: + type: boolean + required: + - automaticCasing + - includeIngress + type: object + service: + description: WMSService is the struct for all service level fields + properties: + abstract: + type: string + accessConstraints: + type: string + authority: + description: Authority is a struct for the authority fields in + WMS and WFS crds + properties: + name: + type: string + url: + type: string + required: + - name + - url + type: object + dataEPSG: + type: string + defResolution: + type: integer + extent: + type: string + inspire: + type: boolean + keywords: + items: + type: string + type: array + layers: + items: + description: WMSLayer is the struct for all layer level fields + properties: + abstract: + type: string + data: + description: Data is a struct for the data field for a WMSLayer + or WFS FeatureType + properties: + gpkg: + description: GPKG is a struct for the gpkg field for + a WMSLayer or WFS FeatureType + properties: + aliases: + additionalProperties: + type: string + description: In a new version Aliases should become + part of Columns + type: object + blobKey: + type: string + columns: + items: + type: string + type: array + geometryType: + type: string + table: + type: string + required: + - blobKey + - columns + - geometryType + - table + type: object + postgis: + description: |- + 
Postgis is a struct for the Postgis db config for a WMSLayer or WFS FeatureType + connection details are passed through the environment + properties: + aliases: + additionalProperties: + type: string + description: In a new version Aliases should become + part of Columns + type: object + columns: + items: + type: string + type: array + geometryType: + type: string + table: + type: string + required: + - columns + - geometryType + - table + type: object + tif: + description: Tif is a struct for the Tif field for a + WMSLayer + properties: + blobKey: + type: string + getFeatureInfoIncludesClass: + type: boolean + offsite: + type: string + resample: + type: string + required: + - blobKey + type: object + type: object + datasetMetadataIdentifier: + type: string + extent: + type: string + group: + type: string + keywords: + items: + type: string + type: array + labelNoClip: + type: boolean + maxScale: + type: string + minScale: + type: string + name: + type: string + sourceMetadataIdentifier: + type: string + styles: + items: + description: Style is the struct for all style level fields + properties: + abstract: + type: string + legendfile: + description: LegendFile is the struct containing the + location of the legendfile + properties: + blobKey: + type: string + required: + - blobKey + type: object + name: + type: string + title: + type: string + visualization: + type: string + required: + - name + type: object + type: array + title: + type: string + visible: + type: boolean + required: + - name + - styles + - visible + type: object + type: array + mapfile: + description: Mapfile contains the ConfigMapKeyRef containing a + mapfile + properties: + configMapKeyRef: + description: Selects a key from a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + required: + - configMapKeyRef + type: object + maxSize: + type: string + metadataIdentifier: + type: string + resolution: + type: integer + stylingAssets: + description: StylingAssets is the struct containing the location + of styling assets + properties: + blobKeys: + items: + type: string + type: array + configMapRefs: + items: + description: |- + ConfigMapRef contains all the config map name and all keys in that configmap that are relevant + the Keys can be empty, so that the v1 WMS can convert to the v2beta1 WMS + properties: + keys: + items: + type: string + type: array + name: + type: string + required: + - name + type: object + type: array + required: + - blobKeys + type: object + title: + type: string + required: + - abstract + - accessConstraints + - authority + - dataEPSG + - inspire + - keywords + - layers + - metadataIdentifier + - title + type: object + required: + - general + - kubernetes + - options + - service + type: object + status: + description: Status - The status for custom resources managed by the operator-sdk. 
+ properties: + conditions: + items: + description: |- + Condition - the condition for the ansible operator + https://github.com/operator-framework/operator-sdk/blob/master/internal/ansible/controller/status/types.go#L101 + properties: + ansibleResult: + description: ResultAnsible - encapsulation of the ansible result. + 'AnsibleResult' is turned around in struct to comply with + linting + properties: + changed: + type: integer + completion: + type: string + failures: + type: integer + ok: + type: integer + skipped: + type: integer + required: + - changed + - completion + - failures + - ok + - skipped + type: object + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + description: ConditionStatus specifies a string for field ConditionType + type: string + type: + description: ConditionType specifies a string for field ConditionType + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + deployment: + type: string + resources: + items: + description: Resources is the struct for the resources field within + status + properties: + apiversion: + type: string + kind: + type: string + name: + type: string + type: object + type: array + type: object + type: object + served: true + storage: false + subresources: + status: {} + - name: v3 + schema: + openAPIV3Schema: + description: WMS is the Schema for the wms API. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: WMSSpec defines the desired state of WMS. + properties: + foo: + description: Foo is an example field of WMS. Edit wms_types.go to + remove/update + type: string + type: object + status: + description: WMSStatus defines the observed state of WMS. + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/config/crd/kustomization.yaml b/config/crd/kustomization.yaml index 53c910e..63fb94c 100644 --- a/config/crd/kustomization.yaml +++ b/config/crd/kustomization.yaml @@ -9,9 +9,11 @@ resources: patches: # [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix. # patches here are for enabling the conversion webhook for each CRD +- path: patches/webhook_in_wfs.yaml +- path: patches/webhook_in_wms.yaml # +kubebuilder:scaffold:crdkustomizewebhookpatch # [WEBHOOK] To enable webhook, uncomment the following section # the following config is for teaching kustomize how to do kustomization for CRDs. 
-#configurations: -#- kustomizeconfig.yaml +configurations: +- kustomizeconfig.yaml diff --git a/config/crd/patches/webhook_in_wfs.yaml b/config/crd/patches/webhook_in_wfs.yaml new file mode 100644 index 0000000..487afb0 --- /dev/null +++ b/config/crd/patches/webhook_in_wfs.yaml @@ -0,0 +1,16 @@ +# The following patch enables a conversion webhook for the CRD +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: wfs.pdok.nl +spec: + conversion: + strategy: Webhook + webhook: + clientConfig: + service: + namespace: system + name: webhook-service + path: /convert + conversionReviewVersions: + - v1 diff --git a/config/crd/patches/webhook_in_wms.yaml b/config/crd/patches/webhook_in_wms.yaml new file mode 100644 index 0000000..aaf2745 --- /dev/null +++ b/config/crd/patches/webhook_in_wms.yaml @@ -0,0 +1,16 @@ +# The following patch enables a conversion webhook for the CRD +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: wms.pdok.nl +spec: + conversion: + strategy: Webhook + webhook: + clientConfig: + service: + namespace: system + name: webhook-service + path: /convert + conversionReviewVersions: + - v1 diff --git a/config/default/kustomization.yaml b/config/default/kustomization.yaml index 67b6d96..bc079b4 100644 --- a/config/default/kustomization.yaml +++ b/config/default/kustomization.yaml @@ -1,5 +1,5 @@ # Adds namespace to all resources. -namespace: mapserver-operator-system +namespace: services # Value of this field is prepended to the # names of all resources, e.g. a deployment named @@ -20,9 +20,9 @@ resources: - ../manager # [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in # crd/kustomization.yaml -#- ../webhook +- ../webhook # [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required. -#- ../certmanager +- ../certmanager # [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus # [METRICS] Expose the controller manager metrics service. @@ -50,163 +50,199 @@ patches: # [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in # crd/kustomization.yaml -#- path: manager_webhook_patch.yaml -# target: -# kind: Deployment +- path: manager_webhook_patch.yaml + target: + kind: Deployment # [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix. # Uncomment the following replacements to add the cert-manager CA injection annotations -#replacements: -# - source: # Uncomment the following block to enable certificates for metrics -# kind: Service -# version: v1 -# name: controller-manager-metrics-service -# fieldPath: metadata.name -# targets: -# - select: -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: metrics-certs -# fieldPaths: -# - spec.dnsNames.0 -# - spec.dnsNames.1 -# options: -# delimiter: '.' -# index: 0 -# create: true -# -# - source: -# kind: Service -# version: v1 -# name: controller-manager-metrics-service -# fieldPath: metadata.namespace -# targets: -# - select: -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: metrics-certs -# fieldPaths: -# - spec.dnsNames.0 -# - spec.dnsNames.1 -# options: -# delimiter: '.' 
-# index: 1 -# create: true -# -# - source: # Uncomment the following block if you have any webhook -# kind: Service -# version: v1 -# name: webhook-service -# fieldPath: .metadata.name # Name of the service -# targets: -# - select: -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert -# fieldPaths: -# - .spec.dnsNames.0 -# - .spec.dnsNames.1 -# options: -# delimiter: '.' -# index: 0 -# create: true -# - source: -# kind: Service -# version: v1 -# name: webhook-service -# fieldPath: .metadata.namespace # Namespace of the service -# targets: -# - select: -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert -# fieldPaths: -# - .spec.dnsNames.0 -# - .spec.dnsNames.1 -# options: -# delimiter: '.' -# index: 1 -# create: true -# -# - source: # Uncomment the following block if you have a ValidatingWebhook (--programmatic-validation) -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert # This name should match the one in certificate.yaml -# fieldPath: .metadata.namespace # Namespace of the certificate CR -# targets: -# - select: -# kind: ValidatingWebhookConfiguration -# fieldPaths: -# - .metadata.annotations.[cert-manager.io/inject-ca-from] -# options: -# delimiter: '/' -# index: 0 -# create: true -# - source: -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert -# fieldPath: .metadata.name -# targets: -# - select: -# kind: ValidatingWebhookConfiguration -# fieldPaths: -# - .metadata.annotations.[cert-manager.io/inject-ca-from] -# options: -# delimiter: '/' -# index: 1 -# create: true -# -# - source: # Uncomment the following block if you have a DefaultingWebhook (--defaulting ) -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert -# fieldPath: .metadata.namespace # Namespace of the certificate CR -# targets: -# - select: -# kind: MutatingWebhookConfiguration -# fieldPaths: -# - .metadata.annotations.[cert-manager.io/inject-ca-from] -# options: -# delimiter: '/' -# index: 0 -# create: true -# - source: -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert -# fieldPath: .metadata.name -# targets: -# - select: -# kind: MutatingWebhookConfiguration -# fieldPaths: -# - .metadata.annotations.[cert-manager.io/inject-ca-from] -# options: -# delimiter: '/' -# index: 1 -# create: true -# -# - source: # Uncomment the following block if you have a ConversionWebhook (--conversion) -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert -# fieldPath: .metadata.namespace # Namespace of the certificate CR -# targets: # Do not remove or uncomment the following scaffold marker; required to generate code for target CRD. +replacements: + - source: # Uncomment the following block to enable certificates for metrics + kind: Service + version: v1 + name: controller-manager-metrics-service + fieldPath: metadata.name + targets: + - select: + kind: Certificate + group: cert-manager.io + version: v1 + name: metrics-certs + fieldPaths: + - spec.dnsNames.0 + - spec.dnsNames.1 + options: + delimiter: '.' + index: 0 + create: true + + - source: + kind: Service + version: v1 + name: controller-manager-metrics-service + fieldPath: metadata.namespace + targets: + - select: + kind: Certificate + group: cert-manager.io + version: v1 + name: metrics-certs + fieldPaths: + - spec.dnsNames.0 + - spec.dnsNames.1 + options: + delimiter: '.' 
+ index: 1 + create: true + + - source: # Uncomment the following block if you have any webhook + kind: Service + version: v1 + name: webhook-service + fieldPath: .metadata.name # Name of the service + targets: + - select: + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert + fieldPaths: + - .spec.dnsNames.0 + - .spec.dnsNames.1 + options: + delimiter: '.' + index: 0 + create: true + - source: + kind: Service + version: v1 + name: webhook-service + fieldPath: .metadata.namespace # Namespace of the service + targets: + - select: + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert + fieldPaths: + - .spec.dnsNames.0 + - .spec.dnsNames.1 + options: + delimiter: '.' + index: 1 + create: true + + - source: # Uncomment the following block if you have a ValidatingWebhook (--programmatic-validation) + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert # This name should match the one in certificate.yaml + fieldPath: .metadata.namespace # Namespace of the certificate CR + targets: + - select: + kind: ValidatingWebhookConfiguration + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 0 + create: true + - source: + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert + fieldPath: .metadata.name + targets: + - select: + kind: ValidatingWebhookConfiguration + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 1 + create: true + + - source: # Uncomment the following block if you have a DefaultingWebhook (--defaulting ) + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert + fieldPath: .metadata.namespace # Namespace of the certificate CR + targets: + - select: + kind: MutatingWebhookConfiguration + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 0 + create: true + - source: + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert + fieldPath: .metadata.name + targets: + - select: + kind: MutatingWebhookConfiguration + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 1 + create: true + + - source: # Uncomment the following block if you have a ConversionWebhook (--conversion) + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert + fieldPath: .metadata.namespace # Namespace of the certificate CR + targets: # Do not remove or uncomment the following scaffold marker; required to generate code for target CRD. + - select: + kind: CustomResourceDefinition + name: wfs.pdok.nl + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 0 + create: true + - select: + kind: CustomResourceDefinition + name: wms.pdok.nl + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 0 + create: true # +kubebuilder:scaffold:crdkustomizecainjectionns -# - source: -# kind: Certificate -# group: cert-manager.io -# version: v1 -# name: serving-cert -# fieldPath: .metadata.name -# targets: # Do not remove or uncomment the following scaffold marker; required to generate code for target CRD. + - source: + kind: Certificate + group: cert-manager.io + version: v1 + name: serving-cert + fieldPath: .metadata.name + targets: # Do not remove or uncomment the following scaffold marker; required to generate code for target CRD. 
+ - select: + kind: CustomResourceDefinition + name: wfs.pdok.nl + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 1 + create: true + - select: + kind: CustomResourceDefinition + name: wms.pdok.nl + fieldPaths: + - .metadata.annotations.[cert-manager.io/inject-ca-from] + options: + delimiter: '/' + index: 1 + create: true # +kubebuilder:scaffold:crdkustomizecainjectionname diff --git a/config/default/manager_webhook_patch.yaml b/config/default/manager_webhook_patch.yaml new file mode 100644 index 0000000..963c8a4 --- /dev/null +++ b/config/default/manager_webhook_patch.yaml @@ -0,0 +1,31 @@ +# This patch ensures the webhook certificates are properly mounted in the manager container. +# It configures the necessary arguments, volumes, volume mounts, and container ports. + +# Add the --webhook-cert-path argument for configuring the webhook certificate path +- op: add + path: /spec/template/spec/containers/0/args/- + value: --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs + +# Add the volumeMount for the webhook certificates +- op: add + path: /spec/template/spec/containers/0/volumeMounts/- + value: + mountPath: /tmp/k8s-webhook-server/serving-certs + name: webhook-certs + readOnly: true + +# Add the port configuration for the webhook server +- op: add + path: /spec/template/spec/containers/0/ports/- + value: + containerPort: 9443 + name: webhook-server + protocol: TCP + +# Add the volume configuration for the webhook certificates +- op: add + path: /spec/template/spec/volumes/- + value: + name: webhook-certs + secret: + secretName: webhook-server-cert diff --git a/config/manager/kustomization.yaml b/config/manager/kustomization.yaml index 5c5f0b8..a27dd37 100644 --- a/config/manager/kustomization.yaml +++ b/config/manager/kustomization.yaml @@ -1,2 +1,8 @@ resources: - manager.yaml +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +images: +- name: controller + newName: local-registry:5000/wfs-operator + newTag: v3.0.25 diff --git a/config/network-policy/allow-webhook-traffic.yaml b/config/network-policy/allow-webhook-traffic.yaml new file mode 100644 index 0000000..6fd52c8 --- /dev/null +++ b/config/network-policy/allow-webhook-traffic.yaml @@ -0,0 +1,27 @@ +# This NetworkPolicy allows ingress traffic to your webhook server running +# as part of the controller-manager from specific namespaces and pods. 
CR(s) which uses webhooks +# will only work when applied in namespaces labeled with 'webhook: enabled' +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + labels: + app.kubernetes.io/name: mapserver-operator + app.kubernetes.io/managed-by: kustomize + name: allow-webhook-traffic + namespace: system +spec: + podSelector: + matchLabels: + control-plane: controller-manager + app.kubernetes.io/name: mapserver-operator + policyTypes: + - Ingress + ingress: + # This allows ingress traffic from any namespace with the label webhook: enabled + - from: + - namespaceSelector: + matchLabels: + webhook: enabled # Only from namespaces with this label + ports: + - port: 443 + protocol: TCP diff --git a/config/network-policy/kustomization.yaml b/config/network-policy/kustomization.yaml index ec0fb5e..0872bee 100644 --- a/config/network-policy/kustomization.yaml +++ b/config/network-policy/kustomization.yaml @@ -1,2 +1,3 @@ resources: +- allow-webhook-traffic.yaml - allow-metrics-traffic.yaml diff --git a/config/rbac/role.yaml b/config/rbac/role.yaml index 27cc985..c087f0f 100644 --- a/config/rbac/role.yaml +++ b/config/rbac/role.yaml @@ -1,11 +1,35 @@ +--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - labels: - app.kubernetes.io/name: mapserver-operator - app.kubernetes.io/managed-by: kustomize name: manager-role rules: -- apiGroups: [""] - resources: ["pods"] - verbs: ["get", "list", "watch"] +- apiGroups: + - pdok.nl + resources: + - wfs + - wms + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - pdok.nl + resources: + - wfs/finalizers + - wms/finalizers + verbs: + - update +- apiGroups: + - pdok.nl + resources: + - wfs/status + - wms/status + verbs: + - get + - patch + - update diff --git a/config/samples/kustomization.yaml b/config/samples/kustomization.yaml index 51eab95..4bc4cbd 100644 --- a/config/samples/kustomization.yaml +++ b/config/samples/kustomization.yaml @@ -2,4 +2,6 @@ resources: - v3_wms.yaml - v3_wfs.yaml +- v2beta1_wfs.yaml +- v2beta1_wms.yaml # +kubebuilder:scaffold:manifestskustomizesamples diff --git a/config/samples/v2beta1_wfs.yaml b/config/samples/v2beta1_wfs.yaml new file mode 100644 index 0000000..2e3191d --- /dev/null +++ b/config/samples/v2beta1_wfs.yaml @@ -0,0 +1,63 @@ +apiVersion: pdok.nl/v2beta1 +kind: WFS +metadata: + name: sample-v2 + labels: + app.kubernetes.io/name: mapserver-operator + app.kubernetes.io/managed-by: kustomize + dataset: dataset + dataset-owner: eigenaar + service-version: v1_0 + service-type: wfs + annotations: + lifecycle-phase: prod + service-bundle-id: e9f89184-d8c3-5600-8502-08e8e9bc9d2f +spec: + general: + datasetOwner: eigenaar + serviceVersion: v1_0 + dataset: dataset + kubernetes: + resources: + limits: + ephemeralStorage: 20Mi + options: + automaticCasing: true + includeIngress: true + service: + inspire: true + title: Dataset + abstract: "Dataset beschrijving..." + keywords: + - keyword1 + - keyword2 + accessConstraints: none + metadataIdentifier: 68a42961-ed55-436b-a412-cc7424fd2a6e + authority: + name: eigenaar + url: https://www.rijksoverheid.nl/ministeries/ministerie-van-economische-zaken-en-klimaat + dataEPSG: "EPSG:28992" + extent: "0 300000 280000 625000" + featureTypes: + - name: "feature1" + title: "feature1" + abstract: "Feature 1 beschrijving..." 
+ keywords: + - keyword1 + - keyword2 + datasetMetadataIdentifier: "07d73b60-dfd6-4c54-9c82-9fac70c6c48e" + sourceMetadataIdentifier: "07d73b60-dfd6-4c54-9c82-9fac70c6c48e" # TODO + data: + gpkg: + blobKey: eigenaar/dataset/data.gpkg + table: "table1" + geometryType: "MultiPolygon" + columns: + - "naam" + - "gebiedsnum" + - "besluitnum" + - "besluitdat" + aliases: + gebiedsnum: gebiedsnummer + besluitdat: datum + diff --git a/config/samples/v2beta1_wms.yaml b/config/samples/v2beta1_wms.yaml new file mode 100644 index 0000000..a464178 --- /dev/null +++ b/config/samples/v2beta1_wms.yaml @@ -0,0 +1,9 @@ +apiVersion: pdok.nl/v2beta1 +kind: WMS +metadata: + labels: + app.kubernetes.io/name: mapserver-operator + app.kubernetes.io/managed-by: kustomize + name: wms-sample +spec: + # TODO(user): Add fields here diff --git a/config/samples/v3_wfs.yaml b/config/samples/v3_wfs.yaml index 4b3691b..7fdb44d 100644 --- a/config/samples/v3_wfs.yaml +++ b/config/samples/v3_wfs.yaml @@ -4,6 +4,87 @@ metadata: labels: app.kubernetes.io/name: mapserver-operator app.kubernetes.io/managed-by: kustomize - name: wfs-sample + dataset: dataset + dataset-owner: eigenaar + service-type: wfs + service-version: 1.0.0 + name: sample-v3 spec: - # TODO(user): Add fields here + lifecycle: + ttlInDays: 21 + podSpecPatch: + containers: + - name: mapserver + resources: + limits: + memory: 12M + ephemeral-storage: 2G + horizontalPodAutoscalerPatch: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: wfs-sample-v3 + maxReplicas: 5 + minReplicas: 2 + metrics: + - type: "resource" + resource: + name: cpu + targetAverageUtilization: 60 + options: + automaticCasing: true + prefetchData: false + includeIngress: false + service: + prefix: "" + baseUrl: https://service.pdok.nl + inspire: + serviceMetadataUrl: + csw: + metadataIdentifier: 68a42961-ed55-436b-a412-cc7424fd2a6e + spatialDatasetIdentifier: "" + language: "nl" + ownerInfoRef: pdok + title: "Dataset" + abstract: "Dataset beschrijving ..." + keywords: + - keyword1 + - keyword2 + fees: "" + accessConstraints: "" + defaultCrs: "EPSG:28992" + bbox: + defaultCRS: + minx: "0" + maxx: "280000" + miny: "300000" + maxy: "625000" + countDefault: "12" + featureTypes: + - name: Feature1 + title: Feature1 + abstract: "Feature 1 beschrijving..." + keywords: + - keyword1 + - keyword2 + datasetMetadataUrl: + csw: + metadataIdentifier: 07d73b60-dfd6-4c54-9c82-9fac70c6c48e + bbox: + defaultCRS: # EXTENT/wfs_extent in mapfile + minx: "0" + maxx: "280000" + miny: "300000" + maxy: "625000" + data: + gpkg: + blobKey: eigenaar/dataset/data.gpkg + tableName: table1 + geometryType: "MultiPolygon" + columns: + - name: naam + - name: gebiedsnum + alias: gebiedsnummer + - name: besluitnum + - name: besluitdat + alias: datum diff --git a/config/webhook/kustomization.yaml b/config/webhook/kustomization.yaml new file mode 100644 index 0000000..8bf748c --- /dev/null +++ b/config/webhook/kustomization.yaml @@ -0,0 +1,6 @@ +resources: +#- manifests.yaml see https://github.com/kubernetes-sigs/kubebuilder/issues/2231 +- service.yaml + +configurations: +- kustomizeconfig.yaml diff --git a/config/webhook/kustomizeconfig.yaml b/config/webhook/kustomizeconfig.yaml new file mode 100644 index 0000000..206316e --- /dev/null +++ b/config/webhook/kustomizeconfig.yaml @@ -0,0 +1,22 @@ +# the following config is for teaching kustomize where to look at when substituting nameReference. +# It requires kustomize v2.1.0 or newer to work properly. 
+nameReference: +- kind: Service + version: v1 + fieldSpecs: + - kind: MutatingWebhookConfiguration + group: admissionregistration.k8s.io + path: webhooks/clientConfig/service/name + - kind: ValidatingWebhookConfiguration + group: admissionregistration.k8s.io + path: webhooks/clientConfig/service/name + +namespace: +- kind: MutatingWebhookConfiguration + group: admissionregistration.k8s.io + path: webhooks/clientConfig/service/namespace + create: true +- kind: ValidatingWebhookConfiguration + group: admissionregistration.k8s.io + path: webhooks/clientConfig/service/namespace + create: true diff --git a/config/webhook/service.yaml b/config/webhook/service.yaml new file mode 100644 index 0000000..f072a47 --- /dev/null +++ b/config/webhook/service.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/name: mapserver-operator + app.kubernetes.io/managed-by: kustomize + name: webhook-service + namespace: system +spec: + ports: + - port: 443 + protocol: TCP + targetPort: 9443 + selector: + control-plane: controller-manager + app.kubernetes.io/name: mapserver-operator diff --git a/extract-manifests-from-prod.sh b/extract-manifests-from-prod.sh new file mode 100755 index 0000000..310a031 --- /dev/null +++ b/extract-manifests-from-prod.sh @@ -0,0 +1,45 @@ +#!/bin/bash + +SERVICE_TYPE=${1:-wfs} + +ORIGINAL_KUBECONFIG=$(echo $KUBECONFIG) +export KUBECONFIG=/Users/jelledijkstra/.kube/aks_config_prod +kubectx aks-services-oostwoud +SERVICES=$(kubectl get $SERVICE_TYPE -n services) + +MANIFESTS_DIR=prod-manifests/$SERVICE_TYPE +mkdir -p $MANIFESTS_DIR +rm "$MANIFESTS_DIR/"*.json >/dev/null 2>&1 +rm "$MANIFESTS_DIR/"*.yaml >/dev/null 2>&1 + +python3 -m pip install pyyaml + +REMOVE_KEYS=('.metadata.annotations."kubectl.kubernetes.io/last-applied-configuration"' ".status" ".metadata.creationTimestamp" ".metadata.generation" ".metadata.uid" ".metadata.resourceVersion" ".metadata.namespace") + +IFS=$'\n' +LINENUM=-1 +for SERVICE in $SERVICES; do + LINENUM=$(expr $LINENUM + 1) + + if [[ $LINENUM -eq 0 ]]; then + continue + fi + + SERVICE=$(echo $SERVICE | awk '{print $1}') + + JSON="$MANIFESTS_DIR/$SERVICE.json" + kubectl get $SERVICE_TYPE/$SERVICE -n services -o json > "$JSON" + + for KEY in "${REMOVE_KEYS[@]}"; do + jq "del($KEY)" "$JSON" > "$JSON.tmp" && mv "$JSON.tmp" "$JSON" + done + + YAML="$MANIFESTS_DIR/$SERVICE.yaml" + cat "$JSON" | python3 -c 'import sys, yaml, json; print(yaml.dump(json.loads(sys.stdin.read())))' > "$YAML" + rm "$JSON" + + # Replace column y with "y" - otherwise the admission controller thinks its a boolean + sed 's/- y$/- "y"/g' "$YAML" > "$YAML.tmp" && mv "$YAML.tmp" "$YAML" +done + +export KUBECONFIG=$ORIGINAL_KUBECONFIG \ No newline at end of file diff --git a/go.mod b/go.mod index 3e92965..2719412 100644 --- a/go.mod +++ b/go.mod @@ -4,9 +4,13 @@ go 1.23.0 godebug default=go1.23 +replace github.com/pdok/smooth-operator => ../smooth-operator + require ( github.com/onsi/ginkgo/v2 v2.21.0 github.com/onsi/gomega v1.35.1 + github.com/pdok/smooth-operator v1.0.0 + k8s.io/api v0.32.0 k8s.io/apimachinery v0.32.0 k8s.io/client-go v0.32.0 sigs.k8s.io/controller-runtime v0.20.0 @@ -86,7 +90,6 @@ require ( gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/api v0.32.0 // indirect k8s.io/apiextensions-apiserver v0.32.0 // indirect k8s.io/apiserver v0.32.0 // indirect k8s.io/component-base v0.32.0 // indirect diff --git a/internal/controller/wfs_controller_test.go 
b/internal/controller/wfs_controller_test.go index 1b2e430..2967cbd 100644 --- a/internal/controller/wfs_controller_test.go +++ b/internal/controller/wfs_controller_test.go @@ -26,6 +26,8 @@ package controller import ( "context" + "k8s.io/apimachinery/pkg/util/yaml" + "os" . "github.com/onsi/ginkgo/v2" . "github.com/onsi/gomega" @@ -48,7 +50,7 @@ var _ = Describe("WFS Controller", func() { Name: resourceName, Namespace: "default", // TODO(user):Modify as needed } - wfs := &pdoknlv3.WFS{} + wfs := readV3Sample() BeforeEach(func() { By("creating the custom resource for the Kind WFS") @@ -59,6 +61,7 @@ var _ = Describe("WFS Controller", func() { Name: resourceName, Namespace: "default", }, + Spec: wfs.Spec, // TODO(user): Specify other spec details if needed. } Expect(k8sClient.Create(ctx, resource)).To(Succeed()) @@ -67,7 +70,9 @@ var _ = Describe("WFS Controller", func() { AfterEach(func() { // TODO(user): Cleanup logic after each test, like removing the resource instance. - resource := &pdoknlv3.WFS{} + resource := &pdoknlv3.WFS{ + Spec: wfs.Spec, + } err := k8sClient.Get(ctx, typeNamespacedName, resource) Expect(err).NotTo(HaveOccurred()) @@ -90,3 +95,18 @@ var _ = Describe("WFS Controller", func() { }) }) }) + +func readV3Sample() *pdoknlv3.WFS { + yamlFile, err := os.ReadFile("../../config/samples/v3_wfs.yaml") + if err != nil { + panic(err) + } + + wfs := &pdoknlv3.WFS{} + err = yaml.Unmarshal(yamlFile, wfs) + if err != nil { + panic(err) + } + + return wfs +} diff --git a/internal/webhook/v3/wfs_webhook.go b/internal/webhook/v3/wfs_webhook.go new file mode 100644 index 0000000..3b8277d --- /dev/null +++ b/internal/webhook/v3/wfs_webhook.go @@ -0,0 +1,45 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v3 + +import ( + ctrl "sigs.k8s.io/controller-runtime" + logf "sigs.k8s.io/controller-runtime/pkg/log" + + pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" +) + +// log is for logging in this package. +// +//nolint:unused +var wfslog = logf.Log.WithName("wfs-resource") + +// SetupWFSWebhookWithManager registers the webhook for WFS in the manager. +func SetupWFSWebhookWithManager(mgr ctrl.Manager) error { + return ctrl.NewWebhookManagedBy(mgr).For(&pdoknlv3.WFS{}). + Complete() +} + +// TODO(user): EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN! 
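Note: the conversion webhook patches above route /convert traffic to the manager, but controller-runtime can only convert between v2beta1 and v3 once the API types implement its hub-and-spoke conversion interfaces, and that code is not part of this diff. Below is a minimal sketch of what that could look like, assuming v3 is the hub (storage) version, that the v2beta1 types live under github.com/pdok/mapserver-operator/api/v2beta1, and that both Specs expose Service.Title and Service.Abstract fields; all of these are assumptions for illustration, not the repo's actual mapping.

// In api/v3, the storage version only needs the marker method:
//
//	func (*WFS) Hub() {}
//
// In api/v2beta1, the spoke version implements conversion.Convertible:
package v2beta1

import (
	"sigs.k8s.io/controller-runtime/pkg/conversion"

	pdoknlv3 "github.com/pdok/mapserver-operator/api/v3"
)

// ConvertTo converts this (v2beta1) WFS to the v3 hub version.
// The field mapping here is illustrative only; a real implementation must cover the full spec.
func (src *WFS) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*pdoknlv3.WFS)
	dst.ObjectMeta = src.ObjectMeta
	dst.Spec.Service.Title = src.Spec.Service.Title
	dst.Spec.Service.Abstract = src.Spec.Service.Abstract
	return nil
}

// ConvertFrom converts the v3 hub version into this (v2beta1) WFS.
func (dst *WFS) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*pdoknlv3.WFS)
	dst.ObjectMeta = src.ObjectMeta
	dst.Spec.Service.Title = src.Spec.Service.Title
	dst.Spec.Service.Abstract = src.Spec.Service.Abstract
	return nil
}

The same pattern would apply to the WMS kinds; once the interfaces are in place, the /convert endpoint served by the manager's webhook server can answer ConversionReview requests for both CRDs.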
diff --git a/internal/webhook/v3/wfs_webhook_test.go b/internal/webhook/v3/wfs_webhook_test.go new file mode 100644 index 0000000..f50ff08 --- /dev/null +++ b/internal/webhook/v3/wfs_webhook_test.go @@ -0,0 +1,63 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v3 + +import ( + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + + pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" + // TODO (user): Add any additional imports if needed +) + +var _ = Describe("WFS Webhook", func() { + var ( + obj *pdoknlv3.WFS + oldObj *pdoknlv3.WFS + ) + + BeforeEach(func() { + obj = &pdoknlv3.WFS{} + oldObj = &pdoknlv3.WFS{} + Expect(oldObj).NotTo(BeNil(), "Expected oldObj to be initialized") + Expect(obj).NotTo(BeNil(), "Expected obj to be initialized") + // TODO (user): Add any setup logic common to all tests + }) + + AfterEach(func() { + // TODO (user): Add any teardown logic common to all tests + }) + + Context("When creating WFS under Conversion Webhook", func() { + // TODO (user): Add logic to convert the object to the desired version and verify the conversion + // Example: + // It("Should convert the object correctly", func() { + // convertedObj := &pdoknlv3.WFS{} + // Expect(obj.ConvertTo(convertedObj)).To(Succeed()) + // Expect(convertedObj).ToNot(BeNil()) + // }) + }) + +}) diff --git a/internal/webhook/v3/wms_webhook.go b/internal/webhook/v3/wms_webhook.go new file mode 100644 index 0000000..5ed7b23 --- /dev/null +++ b/internal/webhook/v3/wms_webhook.go @@ -0,0 +1,45 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v3 + +import ( + ctrl "sigs.k8s.io/controller-runtime" + logf "sigs.k8s.io/controller-runtime/pkg/log" + + pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" +) + +// log is for logging in this package. +// +//nolint:unused +var wmslog = logf.Log.WithName("wms-resource") + +// SetupWMSWebhookWithManager registers the webhook for WMS in the manager. +func SetupWMSWebhookWithManager(mgr ctrl.Manager) error { + return ctrl.NewWebhookManagedBy(mgr).For(&pdoknlv3.WMS{}). + Complete() +} + +// TODO(user): EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN! diff --git a/internal/webhook/v3/wms_webhook_test.go b/internal/webhook/v3/wms_webhook_test.go new file mode 100644 index 0000000..6e4c396 --- /dev/null +++ b/internal/webhook/v3/wms_webhook_test.go @@ -0,0 +1,63 @@ +/* +MIT License + +Copyright (c) 2024 Publieke Dienstverlening op de Kaart + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +package v3 + +import ( + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + + pdoknlv3 "github.com/pdok/mapserver-operator/api/v3" + // TODO (user): Add any additional imports if needed +) + +var _ = Describe("WMS Webhook", func() { + var ( + obj *pdoknlv3.WMS + oldObj *pdoknlv3.WMS + ) + + BeforeEach(func() { + obj = &pdoknlv3.WMS{} + oldObj = &pdoknlv3.WMS{} + Expect(oldObj).NotTo(BeNil(), "Expected oldObj to be initialized") + Expect(obj).NotTo(BeNil(), "Expected obj to be initialized") + // TODO (user): Add any setup logic common to all tests + }) + + AfterEach(func() { + // TODO (user): Add any teardown logic common to all tests + }) + + Context("When creating WMS under Conversion Webhook", func() { + // TODO (user): Add logic to convert the object to the desired version and verify the conversion + // Example: + // It("Should convert the object correctly", func() { + // convertedObj := &pdoknlv3.WMS{} + // Expect(obj.ConvertTo(convertedObj)).To(Succeed()) + // Expect(convertedObj).ToNot(BeNil()) + // }) + }) + +}) diff --git a/test/e2e/e2e_test.go b/test/e2e/e2e_test.go index 5d35410..304653d 100644 --- a/test/e2e/e2e_test.go +++ b/test/e2e/e2e_test.go @@ -172,6 +172,7 @@ var _ = Describe("Manager", Ordered, func() { It("should ensure the metrics endpoint is serving metrics", func() { By("creating a ClusterRoleBinding for the service account to allow access to metrics") + //nolint:gosec cmd := exec.Command("kubectl", "create", "clusterrolebinding", metricsRoleBindingName, "--clusterrole=mapserver-operator-metrics-reader", fmt.Sprintf("--serviceaccount=%s:%s", namespace, serviceAccountName), @@ -261,6 +262,44 @@ var _ = Describe("Manager", Ordered, func() { )) }) + It("should provisioned cert-manager", func() { + By("validating that cert-manager has the certificate Secret") + verifyCertManager := func(g Gomega) { + cmd := exec.Command("kubectl", "get", "secrets", "webhook-server-cert", "-n", namespace) + _, err := utils.Run(cmd) + g.Expect(err).NotTo(HaveOccurred()) + } + Eventually(verifyCertManager).Should(Succeed()) + }) + + It("should have CA injection for WFS conversion webhook", func() { + By("checking CA injection for WFS conversion webhook") + verifyCAInjection := func(g Gomega) { + cmd := exec.Command("kubectl", "get", + "customresourcedefinitions.apiextensions.k8s.io", + "wfs..pdok.nl", + "-o", "go-template={{ .spec.conversion.webhook.clientConfig.caBundle }}") + vwhOutput, err := utils.Run(cmd) + g.Expect(err).NotTo(HaveOccurred()) + g.Expect(len(vwhOutput)).To(BeNumerically(">", 10)) + } + Eventually(verifyCAInjection).Should(Succeed()) + }) + + It("should have CA injection for WMS conversion webhook", func() { + By("checking CA injection for WMS conversion webhook") + verifyCAInjection := func(g Gomega) { + cmd := exec.Command("kubectl", "get", + "customresourcedefinitions.apiextensions.k8s.io", + "wms..pdok.nl", + "-o", "go-template={{ .spec.conversion.webhook.clientConfig.caBundle }}") + vwhOutput, err := utils.Run(cmd) + g.Expect(err).NotTo(HaveOccurred()) + g.Expect(len(vwhOutput)).To(BeNumerically(">", 10)) + } + Eventually(verifyCAInjection).Should(Succeed()) + }) + // +kubebuilder:scaffold:e2e-webhooks-checks // TODO: Customize the e2e test suite with scenarios specific to your project. @@ -278,13 +317,14 @@ var _ = Describe("Manager", Ordered, func() { // It uses the Kubernetes TokenRequest API to generate a token by directly sending a request // and parsing the resulting token from the API response. 
func serviceAccountToken() (string, error) { + //nolint:gosec const tokenRequestRawString = `{ "apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest" }` // Temporary file to store the token request - secretName := fmt.Sprintf("%s-token-request", serviceAccountName) + secretName := serviceAccountName + "-token-request" tokenRequestFile := filepath.Join("/tmp", secretName) err := os.WriteFile(tokenRequestFile, []byte(tokenRequestRawString), os.FileMode(0o644)) if err != nil { diff --git a/test/utils/utils.go b/test/utils/utils.go index 04a5141..a934556 100644 --- a/test/utils/utils.go +++ b/test/utils/utils.go @@ -54,7 +54,7 @@ func Run(cmd *exec.Cmd) (string, error) { _, _ = fmt.Fprintf(GinkgoWriter, "running: %s\n", command) output, err := cmd.CombinedOutput() if err != nil { - return string(output), fmt.Errorf("%s failed with error: (%v) %s", command, err, string(output)) + return string(output), fmt.Errorf("%s failed with error: (%w) %s", command, err, string(output)) } return string(output), nil @@ -205,7 +205,6 @@ func GetProjectDir() (string, error) { // of the target content. The target content may span multiple lines. func UncommentCode(filename, target, prefix string) error { // false positive - // nolint:gosec content, err := os.ReadFile(filename) if err != nil { return err @@ -246,6 +245,6 @@ func UncommentCode(filename, target, prefix string) error { return err } // false positive - // nolint:gosec + //nolint:gosec return os.WriteFile(filename, out.Bytes(), 0644) }
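One change above that is easy to miss: test/utils Run now wraps the command error with %w instead of formatting it with %v, so callers can still match the underlying error with errors.Is or errors.As. A small standalone illustration of the difference, not code from this repo:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run mirrors the wrapping style now used in utils.Run: %w keeps the original
// error in the chain instead of flattening it into a string.
func run(cmd *exec.Cmd) error {
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s failed with error: (%w) %s", cmd.String(), err, string(output))
	}
	return nil
}

func main() {
	err := run(exec.Command("some-binary-that-does-not-exist"))
	// Prints "true": errors.Is sees exec.ErrNotFound through the %w wrap.
	// With the old %v formatting this would print "false".
	fmt.Println(errors.Is(err, exec.ErrNotFound))
}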