Replies: 7 comments
-
I tried it on regular OCP 4.5 (4.5.17) and it seems to work fine for me. You didn't share the whole subscription or any more details, but everything seems to be deployed fine for me, including the service account:

- dependents:
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","create","delete","patch","update"],"apiGroups":[""],"resources":["serviceaccounts"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","create","delete","patch","update"],"apiGroups":["rbac.authorization.k8s.io"],"resources":["rolebindings"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","create","delete","patch","update"],"apiGroups":[""],"resources":["configmaps","services","secrets","persistentvolumeclaims"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","create","delete","patch","update"],"apiGroups":["kafka.strimzi.io"],"resources":["kafkas","kafkas/status","kafkaconnects","kafkaconnects/status","kafkaconnects2is","kafkaconnects2is/status","kafkaconnectors","kafkaconnectors/status","kafkamirrormakers","kafkamirrormakers/status","kafkabridges","kafkabridges/status","kafkamirrormaker2s","kafkamirrormaker2s/status","kafkarebalances","kafkarebalances/status","kafkatopics","kafkatopics/status","kafkausers","kafkausers/status"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","delete"],"apiGroups":[""],"resources":["pods"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch"],"apiGroups":[""],"resources":["endpoints"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","create","delete","patch","update"],"apiGroups":["extensions"],"resources":["deployments","deployments/scale","replicasets","replicationcontrollers","networkpolicies","ingresses"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","create","delete","patch","update"],"apiGroups":["apps"],"resources":["deployments","deployments/scale","deployments/status","statefulsets","replicasets"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["create"],"apiGroups":[""],"resources":["events"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","create","delete","patch","update"],"apiGroups":["apps.openshift.io"],"resources":["deploymentconfigs","deploymentconfigs/scale","deploymentconfigs/status","deploymentconfigs/finalizers"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["create","delete","get","list","patch","watch","update"],"apiGroups":["build.openshift.io"],"resources":["buildconfigs","builds"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["create","delete","get","list","watch","patch","update"],"apiGroups":["image.openshift.io"],"resources":["imagestreams","imagestreams/status"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","create","delete","patch","update"],"apiGroups":["networking.k8s.io"],"resources":["networkpolicies"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","create","delete","patch","update"],"apiGroups":["route.openshift.io"],"resources":["routes","routes/custom-host"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        namespaced
        rule:{"verbs":["get","list","watch","create","delete","patch","update"],"apiGroups":["policy"],"resources":["poddisruptionbudgets"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        cluster
        rule:{"verbs":["get","create","delete","patch","update","watch"],"apiGroups":["rbac.authorization.k8s.io"],"resources":["clusterrolebindings"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        cluster
        rule:{"verbs":["get"],"apiGroups":["storage.k8s.io"],"resources":["storageclasses"]}
      status: Satisfied
      version: v1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: >-
        cluster
        rule:{"verbs":["get","list"],"apiGroups":[""],"resources":["nodes"]}
      status: Satisfied
      version: v1
  group: ''
  kind: ServiceAccount
  message: ''
  name: strimzi-cluster-operator
  status: Present
  version: v1
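If it helps to compare, this is roughly the kind of Subscription I'd expect — a minimal sketch, where the channel and catalog source values are assumptions based on the usual community-operators entry, so adjust them to whatever your catalog actually offers:

```sh
# Minimal Subscription sketch for the operator; the channel and source
# values are assumptions -- adjust them to match your catalog.
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-kafka-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: strimzi-kafka-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
```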
-
Thanks for the reply. I will fire up another test and try to get you more details. It has failed consistently for us since the update to v0.20.0.
-
I know nothing about openshift-ci clusters and how they might differ from regular OCP. If you were able to reproduce it on regular OCP, it would be much easier for me to debug. The service account for the operator is really created by the OperatorHub / OLM, and I do not think 0.20.0 changed anything around that, so I'm not sure why the behaviour changed.
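To see what OLM is actually doing with the install, the install plan and the OLM operator logs are the usual places to look — a sketch, assuming the operator was subscribed into the default openshift-operators namespace:

```sh
# The InstallPlan that OLM generated for the subscription
oc get installplan -n openshift-operators

# Logs of the OLM operator itself (on OCP 4.x it runs in the
# openshift-operator-lifecycle-manager namespace)
oc logs deployment/olm-operator -n openshift-operator-lifecycle-manager
```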
-
PS: There is a dedicated channel with 0.19 which might get you unblocked if this is in any way related to 0.20.0. But obviously we should ideally make sure 0.20 works as well.
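Switching an existing subscription over is a one-line patch — a sketch, where both the subscription name and the 0.19 channel name are assumptions, since channel names vary by catalog:

```sh
# List the channels the catalog actually offers for the package
oc get packagemanifest strimzi-kafka-operator \
  -o jsonpath='{.status.channels[*].name}'

# Point the subscription at the 0.19 channel; "strimzi-0.19.x" is an
# assumed channel name -- use one from the list printed above.
oc patch subscription strimzi-kafka-operator -n openshift-operators \
  --type merge -p '{"spec":{"channel":"strimzi-0.19.x"}}'
```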
-
Yes...according to the openshift-ci people, there is nothing special about their clusters. I will work to get as solid a reproducer as I can. I know how annoying things like this can be.
-
We are having issues installing 0.20.x on an OCP 4.4 cluster. Is it compatible with that OCP version? Switching to the 0.19.x channel made it work on both OCP 4.4 and 4.5. Thanks.
-
We're also having an issue getting it to work on our OCP 4.6.21 cluster. We're seeing the exact same behavior as @crobby and @kvijai82. It worked until last week; since then, any fresh install or upgrade we attempted has been blocked by the service account not being created. If we try to create the ServiceAccount manually, it says that it doesn't have the required policies. Any help appreciated, thanks!
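One way to see which requirement OLM considers unsatisfied is to dump the CSV's requirement status — a sketch, assuming the default openshift-operators namespace; the CSV name below is a placeholder, so take the real one from the first command:

```sh
# Find the CSV name and phase for the operator
oc get csv -n openshift-operators

# Dump its requirement status; the CSV name here is a placeholder
oc get csv strimzi-cluster-operator.v0.20.0 -n openshift-operators \
  -o jsonpath='{.status.requirementStatus}'
```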
-
Describe the bug
Since updating to Strimzi 0.20.0, installing the operator from OperatorHub no longer works for us: OLM never creates the strimzi-cluster-operator ServiceAccount, so the operator deployment never comes up.
To Reproduce
Steps to reproduce the behavior:
1. Subscribe to the Strimzi operator (0.20.0) from OperatorHub on an OCP cluster provisioned by openshift-ci.
2. Wait for the install to complete; the ServiceAccount is never created and the operator pod never starts.
Expected behavior
The strimzi operator should be up and running in the openshift-operators namespace/project.
Actual behavior
The Strimzi operator is NOT up and running.
The following snippet comes from the CSV in the openshift-operators project.
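A quick way to check both states — a sketch, assuming the default openshift-operators namespace and Strimzi's usual name=strimzi-cluster-operator label:

```sh
# The operator deployment and its pod should be Ready
oc get deployment,pods -n openshift-operators -l name=strimzi-cluster-operator

# The CSV should report phase "Succeeded"
oc get csv -n openshift-operators
```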
Environment (please complete the following information):
- Strimzi version: 0.20.0
- Installation method: OperatorHub / OLM
- Platform: OCP clusters provisioned by openshift-ci
YAML files and logs
Attach or copy and paste the custom resources you used to deploy the Kafka cluster and the relevant YAMLs created by the Cluster Operator.
Also attach or copy and paste the relevant logs.
To easily collect all YAMLs and logs, you can use our report script which will automatically collect all files and prepare a ZIP archive which can be easily attached to this issue.
The usage of this script is:
./report.sh [--namespace <string>] [--cluster <string>]
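For example (the namespace and cluster names here are placeholders):

```sh
./report.sh --namespace kafka --cluster my-cluster
```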
Additional context
This problem just started for us during our nightly openshift-ci run on Friday, October 30. Prior to that, it was working well.