
@Boston01 Boston01 commented Feb 21, 2025

What this PR does / why we need it: Supporting multiple regions and clouds requires creating additional secrets for the OpenStack cloud provider. This ensures the correct identification of the region from which a requested volume originates, allowing it to be properly attached to an instance within the same region or zone.

To bring OpenStack to the same level as AWS, GCP, and AKS in terms of high availability and seamless storage management with a single storage class, this pull request introduces enhancements that enable OpenStack to handle storage provisioning in a similar manner.
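To illustrate the idea of one credential set per region, a minimal sketch of per-region cloud-config secrets is shown below. The secret names, namespace, and file layout here are assumptions for illustration only; the actual schema is defined by this PR's code, so refer to the diff for the real format.

```yaml
# Hypothetical per-region cloud.conf secret (one such secret per region,
# e.g. cloud-config-par1, cloud-config-par2, cloud-config-par3).
# All names below are illustrative, not the PR's actual schema.
apiVersion: v1
kind: Secret
metadata:
  name: cloud-config-par1
  namespace: kube-system
stringData:
  cloud.conf: |
    [Global]
    auth-url=https://keystone.par1.example.com/v3
    region=par1
```

With one secret per region, the driver can resolve which OpenStack endpoint a volume belongs to and attach it to an instance in the same region.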

Which issue this PR fixes(if applicable):
fixes #

Special notes for reviewers:

Define a single storage class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: perf1-delete
parameters:
  type: perf1
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Define a StatefulSet with 3 replicas and a pod anti-affinity so that there is one pod per zone

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
  namespace: nginx
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/region
                    operator: In
                    values:
                      - par1
                      - par2
                      - par3
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: nginx
              topologyKey: topology.kubernetes.io/region
      containers:
      - name: nginx-app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: perf1-delete
      resources:
        requests:
          storage: 12Gi

List of storage classes

(⎈|kubernetes-admin@platform-preprod-k8s:N/A)ansoufall ~/.kube  $ kubectl get storageclasses.storage.k8s.io
NAME           PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
perf1-delete   cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   3h19m

List of pods spread across the cluster over zones par1, par2 and par3

(⎈|kubernetes-admin@platform-preprod-k8s:N/A)ansoufall ~/.kube  $ kubectl  -n nginx get po -owide
NAME      READY   STATUS    RESTARTS   AGE     IP              NODE                                 NOMINATED NODE   READINESS GATES
nginx-0   1/1     Running   0          3h16m   192.168.6.177   par3-platform-preprod-k8s-worker-3   <none>           <none>
nginx-1   1/1     Running   0          3h16m   192.168.4.96    par1-platform-preprod-k8s-worker-1   <none>           <none>
nginx-2   1/1     Running   0          3h16m   192.168.7.6     par2-platform-preprod-k8s-worker-3   <none>           <none>

List of persistent volumes

(⎈|kubernetes-admin@platform-preprod-k8s:N/A)ansoufall ~/.kube  $ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-17f3547e-6bca-4914-bdd6-bacedf994556   12Gi       RWO            Delete           Bound    nginx/data-nginx-0   perf1-delete   <unset>                          3h24m
pvc-bc4e5b68-2cca-4614-8bac-45687e0694e7   12Gi       RWO            Delete           Bound    nginx/data-nginx-2   perf1-delete   <unset>                          3h23m
pvc-ea2c54b8-483a-42aa-9c86-6fd1ebc39791   12Gi       RWO            Delete           Bound    nginx/data-nginx-1   perf1-delete   <unset>                          3h23m

List of persistent volume claims

(⎈|kubernetes-admin@platform-preprod-k8s:N/A)ansoufall ~/.kube  $ kubectl  -n nginx get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-nginx-0   Bound    pvc-17f3547e-6bca-4914-bdd6-bacedf994556   12Gi       RWO            perf1-delete   <unset>                 3h28m
data-nginx-1   Bound    pvc-ea2c54b8-483a-42aa-9c86-6fd1ebc39791   12Gi       RWO            perf1-delete   <unset>                 3h25m
data-nginx-2   Bound    pvc-bc4e5b68-2cca-4614-8bac-45687e0694e7   12Gi       RWO            perf1-delete   <unset>                 3h25m

Release note:

NONE

@k8s-ci-robot k8s-ci-robot added the release-note-none Denotes a PR that doesn't merit a release note. label Feb 21, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign kayrus for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


linux-foundation-easycla bot commented Feb 21, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: Boston01 / name: Ansou FALL (e051421)

@k8s-ci-robot
Contributor

Welcome @Boston01!

It looks like this is your first PR to kubernetes/cloud-provider-openstack 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/cloud-provider-openstack has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 21, 2025
@k8s-ci-robot
Contributor

Hi @Boston01. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 21, 2025
@Boston01 Boston01 force-pushed the feat-support-of-one-storageClass branch from 13f0d6c to c6a8f80 Compare February 21, 2025 15:11
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Feb 21, 2025
@pabclsn

pabclsn commented Mar 21, 2025

Much needed thanks a lot

Member

@zetaab zetaab left a comment


/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 24, 2025

Example of configuration with 3 regions (The default is backward compatible with mono cluster configuration but not mandatory).
Example of configuration with 3 zones (The default is backward compatible with mono cluster configuration but not mandatory).
Member

@zetaab zetaab Mar 24, 2025


well, I do not like that this PR is now mixing terminology everywhere. If you are using 3 different API endpoints, it usually means that you are using 3 different REGIONS, not zones (as the PR title says).


@zetaab In OpenStack, a region is like a zone if you use the general cloud-provider definition, where a region is composed of multiple zones.
What do you think?

Member

@stephenfin stephenfin Aug 5, 2025


Region seems more appropriate here. Regions are effectively separate clouds, sharing only a common Keystone deployment. Does your deployment do this? If not, perhaps Cloud would be a more appropriate term. I don't like using the term Zone, since it overloads a term more commonly used for Availability Zones.

Contributor Author


hello @stephenfin, I get what you mean. I will refactor this pull request and replace zone with region; we use different terminology, but we're talking about the same thing. Thank you for your reply. Hi @zetaab, I saw that you approved this pull request, thanks; I will update it as you mentioned before regarding the mixing of zones and regions.

Member


Great, thanks. If possible, could you also rebase on master, since this is relatively old now and it'd be good to pull in the latest changes.

Contributor Author


hi @stephenfin, do not worry, I will do it this weekend when I have some free time.

Contributor


Hi,
Indeed, I'm afraid you mixed the concepts of AZs/zones and regions/clouds. My previous contribution introduced the multi-region/cloud notion, but not AZs.

In my understanding of the current codebase, cinder-csi-plugin should already be able to consume volumes on an OpenStack cluster with multiple AZs.

However, you would probably need a backend distributed across your AZs, such as Ceph with cross-AZ replication, to be able to share the same volume across different zones.

I have no access to a multi-AZ OpenStack cluster for now, but if you simply want a single StorageClass that can consume volumes across different OpenStack regions/clusters, I can share this project, which fully and simply met my needs => https://github.com/sergelogvinov/hybrid-csi-plugin

Contributor Author


Hi @MatthieuFin, you're totally right, I mixed the concepts of AZs/zones and regions/clouds.
I work for a company that has 3 OpenStack regions/clouds, and each region has its own Pure Storage.
When I started working there, their logic was that a zone is equivalent to a region, and I wrote this pull request based on their experience. I have since learned more about OpenStack and discovered that, although my pull request works for them, I mixed up zones and regions.
I am going to refactor this pull request to make it work and be robust for OpenStack.

Contributor Author


hi @stephenfin and @zetaab, I did the refactoring of the pull request, taking the region into account as you suggested. I have already redeployed it on our staging, preprod and prod environments and it is working fine.
You can have a look at the pull request and let me know if I can improve anything else.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 25, 2025
@kayrus
Copy link
Contributor

kayrus commented Jun 25, 2025

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 25, 2025
driverName = "cinder.csi.openstack.org"
topologyKey = "topology." + driverName + "/zone"
withTopologyKey = "topology.kubernetes.io/zone"
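The quoted lines can be read as the following minimal Go sketch; the identifiers come from the quoted diff, while the package layout is an assumption for illustration only.

```go
package main

import "fmt"

// driverName is the CSI driver name from the quoted diff.
const driverName = "cinder.csi.openstack.org"

// topologyKey is the driver-specific topology key, as hardcoded
// in the quoted lines: "topology.<driverName>/zone".
var topologyKey = "topology." + driverName + "/zone"

// withTopologyKey is the well-known Kubernetes zone label.
const withTopologyKey = "topology.kubernetes.io/zone"

func main() {
	fmt.Println(topologyKey)     // topology.cinder.csi.openstack.org/zone
	fmt.Println(withTopologyKey) // topology.kubernetes.io/zone
}
```

This makes the reviewer's concern concrete: the driver-specific key is built by string concatenation at compile time, rather than being configurable.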
Contributor


Why not simply use the cinder-csi-plugin arg --additional-topology=topology.kubernetes.io/zone instead of hardcoding one? As explained here

Contributor Author


hi @MatthieuFin, your comment is very important; I will update the PR when I have free time. Thank you again.

Contributor Author


hello @MatthieuFin, "--additional-topology" is not available on the controller service; that's why I do not want to use it. Using it on the controller service would require a big refactor with a lot of changes, which is why I made the minimum code change needed to make this pull request work for your three regions (par1, par2, par3).

@Boston01 Boston01 force-pushed the feat-support-of-one-storageClass branch from d99c979 to a49d61d Compare August 19, 2025 12:41
@Boston01 Boston01 force-pushed the feat-support-of-one-storageClass branch 5 times, most recently from 21e5dcb to 7a3277e Compare August 19, 2025 13:45
@Boston01 Boston01 force-pushed the feat-support-of-one-storageClass branch from 7a3277e to e051421 Compare August 19, 2025 13:48
@k8s-ci-robot
Contributor

@Boston01: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
openstack-cloud-csi-cinder-e2e-test e051421 link true /test openstack-cloud-csi-cinder-e2e-test

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 18, 2025
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
