
CRD cannot be created successfully during installation of OLM v0.24.0 (same bug as #2778: Applying OLM CRDs fails due to last-applied-configuration annotation) #2968

@shaojini

Description


Bug Report

What did you do?
root@K8s-master:~# docker login
root@K8s-master:~# export olm_release=v0.24.0
root@K8s-master:~# kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/${olm_release}/crds.yaml
What did you expect to see?
Successful installation.
What did you see instead? Under which circumstances?
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
The CustomResourceDefinition "clusterserviceversions.operators.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
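As tracked in #2778, this failure comes from client-side `kubectl apply`: it stores the entire applied manifest in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and the apiserver caps total annotation size at 262144 bytes (256 KiB) per object. The `clusterserviceversions` CRD schema alone is large enough to blow that limit. The workarounds reported on #2778 are `kubectl create -f crds.yaml` or server-side apply (`kubectl apply --server-side=true -f crds.yaml`), neither of which writes that annotation. A minimal sketch of the size check involved (the helper name and byte counts are illustrative, not taken from the OLM code):

```python
# Sketch: why client-side `kubectl apply` fails for the CSV CRD.
# Client-side apply stores the full manifest in the
# kubectl.kubernetes.io/last-applied-configuration annotation, and the
# apiserver rejects objects whose annotations exceed 256 KiB in total.
ANNOTATION_SIZE_LIMIT = 256 * 1024  # 262144 bytes, matching the error above

def fits_in_annotation(manifest_bytes: bytes) -> bool:
    """Return True if the serialized manifest could be stored as the
    last-applied-configuration annotation without exceeding the limit."""
    return len(manifest_bytes) <= ANNOTATION_SIZE_LIMIT

# Stand-ins for manifest sizes (illustrative byte counts, not measured):
big_crd = b"x" * 300_000   # a CRD schema larger than the annotation cap
small_crd = b"x" * 10_000  # a typical smaller CRD

print(fits_in_annotation(big_crd))    # False -> "Too long: must have at most 262144 bytes"
print(fits_in_annotation(small_crd))  # True  -> applies cleanly
```

`kubectl create` and server-side apply skip the annotation entirely, which is why they succeed where `kubectl apply` fails.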

When I switched to operator-sdk to install OLM, the installation itself reported no errors; however, the packageserver APIService remains "False":

root@K8s-master:~# operator-sdk olm install

INFO[0000] Fetching CRDs for version "latest"
INFO[0000] Fetching resources for resolved version "latest"
I0519 20:39:28.599145 64106 request.go:690] Waited for 1.01264153s due to client-side throttling, not priority and fairness, request: GET:https://192.168.0.7:6443/apis/cilium.io/v2?timeout=32s
INFO[0008] Creating CRDs and resources
INFO[0008] Creating CustomResourceDefinition "catalogsources.operators.coreos.com"
INFO[0008] Creating CustomResourceDefinition "clusterserviceversions.operators.coreos.com"
INFO[0008] Creating CustomResourceDefinition "installplans.operators.coreos.com"
INFO[0008] Creating CustomResourceDefinition "olmconfigs.operators.coreos.com"
INFO[0008] Creating CustomResourceDefinition "operatorconditions.operators.coreos.com"
INFO[0008] Creating CustomResourceDefinition "operatorgroups.operators.coreos.com"
INFO[0008] Creating CustomResourceDefinition "operators.operators.coreos.com"
INFO[0008] Creating CustomResourceDefinition "subscriptions.operators.coreos.com"
INFO[0009] Creating Namespace "olm"
INFO[0009] Creating Namespace "operators"
INFO[0009] Creating ServiceAccount "olm/olm-operator-serviceaccount"
INFO[0009] Creating ClusterRole "system:controller:operator-lifecycle-manager"
INFO[0009] Creating ClusterRoleBinding "olm-operator-binding-olm"
INFO[0009] Creating OLMConfig "cluster"
I0519 20:39:38.648832 64106 request.go:690] Waited for 1.447733128s due to client-side throttling, not priority and fairness, request: GET:https://192.168.0.7:6443/apis/operators.coreos.com/v1alpha2?timeout=32s
INFO[0012] Creating Deployment "olm/olm-operator"
INFO[0012] Creating Deployment "olm/catalog-operator"
INFO[0012] Creating ClusterRole "aggregate-olm-edit"
INFO[0012] Creating ClusterRole "aggregate-olm-view"
INFO[0012] Creating OperatorGroup "operators/global-operators"
INFO[0012] Creating OperatorGroup "olm/olm-operators"
INFO[0012] Creating ClusterServiceVersion "olm/packageserver"
INFO[0012] Creating CatalogSource "olm/operatorhubio-catalog"
INFO[0012] Waiting for deployment/olm-operator rollout to complete
INFO[0012] Waiting for Deployment "olm/olm-operator" to rollout: 0 of 1 updated replicas are available
INFO[0014] Deployment "olm/olm-operator" successfully rolled out
INFO[0014] Waiting for deployment/catalog-operator rollout to complete
INFO[0014] Deployment "olm/catalog-operator" successfully rolled out
INFO[0014] Waiting for deployment/packageserver rollout to complete
INFO[0014] Waiting for Deployment "olm/packageserver" to appear
INFO[0015] Waiting for Deployment "olm/packageserver" to rollout: 0 of 2 updated replicas are available
INFO[0028] Deployment "olm/packageserver" successfully rolled out
INFO[0028] Successfully installed OLM version "latest"

NAME                                           NAMESPACE   KIND                       STATUS
catalogsources.operators.coreos.com                        CustomResourceDefinition   Installed
clusterserviceversions.operators.coreos.com                CustomResourceDefinition   Installed
installplans.operators.coreos.com                          CustomResourceDefinition   Installed
olmconfigs.operators.coreos.com                            CustomResourceDefinition   Installed
operatorconditions.operators.coreos.com                    CustomResourceDefinition   Installed
operatorgroups.operators.coreos.com                        CustomResourceDefinition   Installed
operators.operators.coreos.com                             CustomResourceDefinition   Installed
subscriptions.operators.coreos.com                         CustomResourceDefinition   Installed
olm                                                        Namespace                  Installed
operators                                                  Namespace                  Installed
olm-operator-serviceaccount                    olm         ServiceAccount             Installed
system:controller:operator-lifecycle-manager               ClusterRole                Installed
olm-operator-binding-olm                                   ClusterRoleBinding         Installed
cluster                                                    OLMConfig                  Installed
olm-operator                                   olm         Deployment                 Installed
catalog-operator                               olm         Deployment                 Installed
aggregate-olm-edit                                         ClusterRole                Installed
aggregate-olm-view                                         ClusterRole                Installed
global-operators                               operators   OperatorGroup              Installed
olm-operators                                  olm         OperatorGroup              Installed
packageserver                                  olm         ClusterServiceVersion      Installed
operatorhubio-catalog                          olm         CatalogSource              Installed

However,

root@K8s-master:~# kubectl get apiservices.apiregistration.k8s.io v1.packages.operators.coreos.com

E0519 21:20:02.374158 88834 memcache.go:287] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:20:02.375416 88834 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:20:02.377921 88834 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:20:02.380392 88834 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:20:02.405270 88834 memcache.go:287] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:20:02.427846 88834 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:20:02.433485 88834 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:20:02.436921 88834 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
NAME                               SERVICE                     AVAILABLE                      AGE
v1.packages.operators.coreos.com   olm/packageserver-service   False (FailedDiscoveryCheck)   40m
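`False (FailedDiscoveryCheck)` means the kube-apiserver's discovery probe against the aggregated packageserver endpoint is failing, even though the Deployment rolled out; the AVAILABLE column shown above is derived from the APIService's `Available` condition. A minimal sketch of that derivation (field names follow the `apiregistration.k8s.io/v1` APIService status schema; the helper itself is illustrative, not kubectl's actual code):

```python
# Sketch: how the AVAILABLE column is derived from an APIService's
# status.conditions. Field names follow the apiregistration.k8s.io/v1
# schema; the helper is an illustration, not kubectl's implementation.
def available_column(status: dict) -> str:
    for cond in status.get("conditions", []):
        if cond.get("type") == "Available":
            if cond.get("status") == "True":
                return "True"
            # Unavailable: show the status together with the reason
            return f'{cond.get("status")} ({cond.get("reason")})'
    return "Unknown"

# The condition reported for v1.packages.operators.coreos.com above:
status = {
    "conditions": [
        {
            "type": "Available",
            "status": "False",
            "reason": "FailedDiscoveryCheck",
        }
    ]
}
print(available_column(status))  # False (FailedDiscoveryCheck)
```

In practice this condition usually points at networking between the kube-apiserver and the packageserver pods (service endpoints, CNI policy, firewalls) rather than at OLM itself.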

The "packages.operators.coreos.com" API is not functioning:

root@K8s-master:~# kubectl get crd | grep operators.coreos.com

E0519 21:28:58.773330 94129 memcache.go:287] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:28:58.774865 94129 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:28:58.777895 94129 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0519 21:28:58.781929 94129 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
catalogsources.operators.coreos.com 2023-05-19T17:39:35Z
clusterserviceversions.operators.coreos.com 2023-05-19T17:39:35Z
installplans.operators.coreos.com 2023-05-19T17:39:35Z
olmconfigs.operators.coreos.com 2023-05-19T17:39:35Z
operatorconditions.operators.coreos.com 2023-05-19T17:39:35Z
operatorgroups.operators.coreos.com 2023-05-19T17:39:35Z
operators.operators.coreos.com 2023-05-19T17:39:35Z
subscriptions.operators.coreos.com 2023-05-19T17:39:35Z

Environment

  • operator-lifecycle-manager version: v0.24.0
  • Kubernetes version information:

root@K8s-master:~# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:40:17Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.4", GitCommit:"f89670c3aa4059d6999cb42e23ccb4f0b9a03979", GitTreeState:"clean", BuildDate:"2023-04-12T12:05:35Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:
    root@K8s-master:~# kubectl cluster-info
    E0519 21:39:38.159210 100380 memcache.go:287] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
    E0519 21:39:38.166851 100380 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
    E0519 21:39:38.169478 100380 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
    E0519 21:39:38.172656 100380 memcache.go:121] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
    Kubernetes control plane is running at https://192.168.0.7:6443
    CoreDNS is running at https://192.168.0.7:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Possible Solution

Additional context

Labels

kind/bug (Categorizes issue or PR as related to a bug), lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale)
