diff --git a/README.md b/README.md index 2e48bf0f..a94ced9f 100644 --- a/README.md +++ b/README.md @@ -61,8 +61,8 @@ This project contains Ansible code that creates a baseline cluster in an existin - Manage SAS Viya Platform Deployments - Organize and persist configuration for any number of SAS Viya platform deployments across namespaces, clusters, or cloud providers. -- SAS Viya with SingleStore Deployment - - SingleStore is a cloud-native database designed for data-intensive applications. See the [SAS Viya with SingleStore Documentation](./docs/user/SingleStore.md) for details. +- SAS SpeedyStore Deployment + - SingleStore is a cloud-native database designed for data-intensive applications. See the [SAS SpeedyStore Documentation](./docs/user/SingleStore.md) for details. ## Prerequisites diff --git a/docs/CONFIG-VARS.md b/docs/CONFIG-VARS.md index cca0c772..88d7cf9c 100644 --- a/docs/CONFIG-VARS.md +++ b/docs/CONFIG-VARS.md @@ -145,6 +145,7 @@ When V4_CFG_MANAGE_STORAGE is set to `true`, the `sas` and `pg-storage` storage | V4_CFG_ORDER_NUMBER | SAS software order ID | string | | true | | viya | | V4_CFG_CADENCE_NAME | Cadence name | string | lts | false | [stable,lts] | viya | | V4_CFG_CADENCE_VERSION | Cadence version | string | "2022.09" | true | This value must be surrounded by quotation marks to accommodate the updated SAS Cadence Version format. If the value is not quoted the deployment will fail. | viya | +| V4_CFG_CADENCE_RELEASE | Cadence release | string | | false | This value accepts a custom SAS Cadence release. It must be provided as a string enclosed in single quotes (e.g.,
'20250909.1757454425315') | viya | | V4_CFG_DEPLOYMENT_ASSETS | Path to pre-downloaded deployment assets | string | | false | Leave blank to download [deployment assets](https://documentation.sas.com/?cdcId=sasadmincdc&cdcVersion=default&docsetId=itopscon&docsetTarget=n08bpieatgmfd8n192cnnbqc7m5c.htm#n1x7yoeafv23xan1gew0gfipt9e9) | viya | | V4_CFG_LICENSE | Path to pre-downloaded license file | string | | false| Leave blank to download the [license file](https://documentation.sas.com/?cdcId=sasadmincdc&cdcVersion=default&docsetId=itopscon&docsetTarget=n08bpieatgmfd8n192cnnbqc7m5c.htm#p1odbfo85cz4r5n1j2tzx9zz9sbi) | viya | | V4_CFG_CERTS | Path to pre-downloaded certificates file | string | | false| Leave blank to download the [certificates file](https://documentation.sas.com/?cdcId=sasadmincdc&cdcVersion=default&docsetId=itopscon&docsetTarget=n08bpieatgmfd8n192cnnbqc7m5c.htm#n0pj0ewyle0gfkn1psri3kw5ghha) | viya | @@ -373,7 +374,7 @@ The EBS CSI driver is only used for kubernetes v1.23 or later AWS EKS clusters. 
| EBS_CSI_DRIVER_CHART_NAME| aws ebs csi driver helm chart name | string | aws-ebs-csi-driver | false | | baseline | | EBS_CSI_DRIVER_CHART_VERSION | aws ebs csi driver helm chart version | string | 2.38.1 | false | | baseline | | EBS_CSI_DRIVER_CONFIG | aws ebs csi driver helm values | string | see [here](../roles/baseline/defaults/main.yml) | false | | baseline | -| EBS_CSI_DRIVER_ACCOUNT | cluster autoscaler aws role arn | string | | false | Required to enable the aws ebs csi driver on AWS | baseline | +| EBS_CSI_DRIVER_ACCOUNT | aws ebs csi driver IAM role ARN | string | | false | Required to enable the aws ebs csi driver on AWS | baseline | | EBS_CSI_DRIVER_LOCATION | aws region where kubernetes cluster resides | string | us-east-1 | false | | baseline | |EBS_CSI_RABBITMQ_STORAGE_CLASS_NAME| The EBS CSI storage class name for RabbitMQ | string | io2-vol-mq | false | | baseline | |EBS_CSI_RABBITMQ_STORAGE_CLASS_VOLUME_TYPE| The EBS CSI volume type to use for RabbitMQ persistent volumes| string | io2 | false | Supported values: [`io2`, `io1`, `gp3`] | baseline | diff --git a/docs/user/MigrationSteps-v9.md b/docs/user/MigrationSteps-v9.md deleted file mode 100644 index 6e8af421..00000000 --- a/docs/user/MigrationSteps-v9.md +++ /dev/null @@ -1,86 +0,0 @@ - -# Migration Guide: v9.0.0 - -This guide assumes you are migrating a Viya deployment (e.g., `v8.2.1`) to a newer version (e.g., `v9.0.0`) using the latest DaC baseline that includes the `csi-driver-nfs`. - -## Prerequisites - -- Ensure you have **cluster admin access**. -- The **NFS server** used in the existing setup must be retained and accessible. -- All **PVs and PVCs** should be backed up as a precaution. 
- -## Migration Steps - -### Backup Existing Viya Environment (Manual Trigger) - -Trigger a manual backup of your running Viya deployment: - -```bash -kubectl create job --from=cronjob/sas-scheduled-backup-all-sources manual-backup-$(date +%s) -n -```` - -Fatch the backup ID - -```bash -kubectl describe job -n va-viya | grep "sas.com/sas-backup-id" -```` - -### Verify the backup job has completed successfully: - -```bash -kubectl get jobs \ - -l "sas.com/sas-backup-id=" \ - -L "sas.com/sas-backup-id,sas.com/backup-job-type,sas.com/sas-backup-job-status,sas.com/backup-persistence-status" -``` -### Stop the Viya Deployment - -Stop the SAS Viya environment using the cron job: - -```bash -kubectl -n create job --from=cronjob/sas-stop-all stopdep- -``` - -**Example:** - -```bash -kubectl -n viya4 create job --from=cronjob/sas-stop-all stopdep-22072025 -``` -### Delete Old NFS Provisioner Components - -Remove the `sas` StorageClass: -For SAS Viya environments deployed on Google Cloud Platform (GCP), the legacy `pg-storage` StorageClass must be deleted. - -```bash -kubectl delete storageclass sas -``` - -Delete the namespace used by the legacy provisioner (typically `nfs-client`): - -```bash -kubectl delete namespace nfs-client -``` - -### Deploy New Viya Environment with CSI Driver - -If you have redeployed SAS Viya using the updated DaC baseline that includes CSI NFS driver support, no additional action is required. - -However, if you have only updated the DaC baseline without redeploying Viya, you will need to manually start the Viya environment using the following command: - -```bash -kubectl -n create job --from=cronjob/sas-start-all startdep- -``` - -> **Important Note:** You do **not** need to restore from backup, as the NFS server path to the PVs remains the same. The CSI driver will reuse existing PVs and directories automatically. - -### Post-Migration Steps - -* Confirm all PVCs are **bound and mounted correctly** in the new Viya deployment. 
-* Validate **data availability** and application functionality. - ---- - -### Notes - -* The **CSI NFS driver** offers improved compatibility with newer Kubernetes versions and is the **recommended** provisioner going forward. -* Avoid reusing the old Helm release metadata (`meta.helm.sh/*`) to prevent installation or upgrade conflicts. - diff --git a/docs/user/NFSProvisionerUpdate-v9.md b/docs/user/NFSProvisionerUpdate-v9.md new file mode 100644 index 00000000..f66d56df --- /dev/null +++ b/docs/user/NFSProvisionerUpdate-v9.md @@ -0,0 +1,110 @@ + +# Migration Guide: v9.0.0 + +This guide assumes you are migrating a viya4-deployment installation (e.g., `v8.2.1`) to a newer version (e.g., `v9.0.0`) using the latest viya4-deployment baseline that includes the `csi-driver-nfs`. + +## Prerequisites + +- Ensure you have **cluster admin access**. +- The **NFS server** used in the existing setup must be retained and accessible. +- All **PVs and PVCs** should be backed up as a precaution. + +## Migration Steps + +### Back Up the Existing Environment (Manual Execution) + +Trigger a manual backup of your running deployment: + +```bash +kubectl create job --from=cronjob/sas-scheduled-backup-all-sources manual-backup-$(date +%s) -n <namespace> +``` + +Fetch the backup ID: + +```bash +kubectl describe job <job-name> -n <namespace> | grep "sas.com/sas-backup-id" +``` + +### Verify That the Backup Job Completed Successfully + +```bash +kubectl get jobs \ + -L "sas.com/sas-backup-id,sas.com/backup-job-type,sas.com/sas-backup-job-status,sas.com/backup-persistence-status" -n <namespace> +``` + +### Stop the SAS Viya Environment + +Stop the SAS Viya environment using the cron job: + +```bash +kubectl -n <namespace> create job --from=cronjob/sas-stop-all stopdep-<date> +``` + +**Example:** + +```bash +kubectl -n viya4 create job --from=cronjob/sas-stop-all stopdep-22072025 +``` + +### Delete Old NFS Provisioner Components + +Remove the `sas` StorageClass. For SAS Viya environments deployed on Google Cloud Platform (GCP), the legacy `pg-storage` StorageClass must also be deleted. + +```bash +kubectl delete storageclass sas +# GCP only: +kubectl delete storageclass pg-storage +``` + +Delete the namespace used by the legacy provisioner (typically `nfs-client`): + +```bash +kubectl delete namespace nfs-client +``` + +### Deploy the New Viya Environment with the CSI Driver + +Update your viya4-deployment baseline to install the CSI NFS driver. + +To install or upgrade only the baseline dependencies using Docker: + + ```bash + docker run --rm \ + --group-add root \ + --user $(id -u):$(id -g) \ + --volume $HOME/deployments:/data \ + --volume $HOME/deployments/dev-cluster/.kube/config:/config/kubeconfig \ + --volume $HOME/deployments/dev-cluster/dev-namespace/ansible-vars.yaml:/config/config \ + --volume $HOME/.ssh/id_rsa:/config/jump_svr_private_key \ + viya4-deployment --tags "baseline,install" + ``` + +To install or upgrade only the baseline dependencies using Ansible: + + ```bash + ansible-playbook \ + -e BASE_DIR=$HOME/deployments \ + -e KUBECONFIG=$HOME/deployments/.kube/config \ + -e CONFIG=$HOME/deployments/dev-cluster/dev-namespace/ansible-vars.yaml \ + -e JUMP_SVR_PRIVATE_KEY=$HOME/.ssh/id_rsa \ + playbooks/playbook.yaml --tags "baseline,install" + ``` +If you have redeployed **viya4-deployment** using the [9.0.0 release](https://github.com/sassoftware/viya4-deployment/releases/tag/v9.0.0), which includes CSI NFS driver support, no additional action is required. + +However, if you have only updated the viya4-deployment baseline without redeploying SAS Viya, you will need to manually start the Viya environment using the following command: + +```bash +kubectl -n <namespace> create job --from=cronjob/sas-start-all startdep-<date> +``` + +> **Important Note:** You do **not** need to restore from backup, as the NFS server path to the PVs remains the same. The CSI driver will reuse existing PVs and directories automatically. + +### Post-Migration Steps + +* Confirm all PVCs are **bound and mounted correctly** in the new deployment. +* Validate **data availability** and application functionality.
+ +--- + +### Notes + +* The **CSI NFS driver** offers improved compatibility with newer Kubernetes versions and is the **recommended** provisioner going forward. +* Avoid reusing the old Helm release metadata (`meta.helm.sh/*`) to prevent installation or upgrade conflicts. + diff --git a/docs/user/SingleStore.md b/docs/user/SingleStore.md index 514b39aa..9d130955 100644 --- a/docs/user/SingleStore.md +++ b/docs/user/SingleStore.md @@ -2,45 +2,45 @@ The SAS Viya platform provides an optional integration with SingleStore. SingleStore is a cloud-native database that is designed for data-intensive applications. A distributed, relational SQL database management system that features ANSI SQL support, SingleStore is known for speed in data ingest, transaction processing, and query processing. -## Requirements for SAS with SingleStore +## Requirements for SAS SpeedyStore -If your SAS software order includes SAS with SingleStore, additional requirements apply to your deployment. The [_SAS Viya Platform Operations Guide_](https://documentation.sas.com/?cdcId=itopscdc&cdcVersion=default&docsetId=itopssr&docsetTarget=n0jq6u1duu7sqnn13cwzecyt475u.htm#n0qs42c42o8jjzn12ib4276fk7pb) provides detailed information about requirements for a SingleStore-enabled deployment of the SAS Viya platform. +If your SAS software order includes SAS SpeedyStore, additional requirements apply to your deployment. The [_SAS Viya Platform Operations Guide_](https://documentation.sas.com/?cdcId=itopscdc&cdcVersion=default&docsetId=itopssr&docsetTarget=n0jq6u1duu7sqnn13cwzecyt475u.htm#n0qs42c42o8jjzn12ib4276fk7pb) provides detailed information about requirements for a SingleStore-enabled deployment of the SAS Viya platform. 
-## Deploying SAS with SingleStore Using SAS Viya 4 Deployment +## Deploying SAS SpeedyStore Using SAS Viya 4 Deployment -You can deploy SAS with SingleStore into a Kubernetes cluster in the following environments: +You can deploy SAS SpeedyStore into a Kubernetes cluster in the following environments: - Azure Kubernetes Service (AKS) in Microsoft Azure - Elastic Kubernetes Service (EKS) in Amazon Web Services (AWS) - Open Source Kubernetes on your own machines -## Cluster Provisioning for SAS with SingleStore +## Cluster Provisioning for SAS SpeedyStore ### Azure Kubernetes Service (AKS) Cluster in Microsoft Azure -The [SAS Viya 4 IaC for Microsoft Azure](https://github.com/sassoftware/viya4-iac-azure) GitHub project can automatically provision the required infrastructure components that support SAS with SingleStore deployments. +The [SAS Viya 4 IaC for Microsoft Azure](https://github.com/sassoftware/viya4-iac-azure) GitHub project can automatically provision the required infrastructure components that support SAS SpeedyStore deployments. Refer to the [SingleStore sample input file](https://github.com/sassoftware/viya4-iac-azure/blob/main/examples/sample-input-singlestore.tfvars) for Terraform configuration values that create an AKS cluster that is suitable for deploying the SAS Viya platform and SingleStore. ### EKS Cluster in AWS -The [SAS Viya 4 IaC for AWS](https://github.com/sassoftware/viya4-iac-aws) GitHub project can automatically provision the required infrastructure components that support SAS with SingleStore deployments. +The [SAS Viya 4 IaC for AWS](https://github.com/sassoftware/viya4-iac-aws) GitHub project can automatically provision the required infrastructure components that support SAS SpeedyStore deployments. 
Refer to the [SingleStore sample input file](https://github.com/sassoftware/viya4-iac-aws/blob/main/examples/sample-input-singlestore.tfvars) for Terraform configuration values that create an EKS cluster that is suitable for deploying the SAS Viya platform and SingleStore. ### Open Source Kubernetes Cluster -The [SAS Viya 4 Infrastructure as Code (IaC) for Open Source Kubernetes](https://github.com/sassoftware/viya4-iac-k8s) GitHub project can automatically provision the required infrastructure components that support SAS with SingleStore deployments. +The [SAS Viya 4 Infrastructure as Code (IaC) for Open Source Kubernetes](https://github.com/sassoftware/viya4-iac-k8s) GitHub project can automatically provision the required infrastructure components that support SAS SpeedyStore deployments. Refer to the [SingleStore sample input file](https://github.com/sassoftware/viya4-iac-k8s/blob/main/examples/vsphere/sample-terraform-static-singlestore.tfvars) for Terraform configuration values that create an Open Source Kubernetes cluster that is suitable for deploying the SAS Viya platform and SingleStore. ## Customizing SingleStore Deployment Overlays Choose the appropriate section below based on the cadence version of the SAS Viya platform and SingleStore that you are deploying. -### SAS Viya and SingleStore orders at stable:2023.10 and later +### SAS SpeedyStore orders at stable:2023.10 and later Refer to the viya4-deployment [Getting Started](https://github.com/sassoftware/viya4-deployment#getting-started) and [SAS Viya Platform Customizations](https://github.com/sassoftware/viya4-deployment#sas-viya-platform-customizations) documentation if you need information about how to make changes to your deployment by adding custom overlays into subdirectories under the `site-config` directory. 
After running viya4-deployment with the setting `DEPLOY=false` in your ansible-vars.yaml file, locate the `sas-bases` directory, which is a peer to the `site-config` directory underneath your SAS Viya platform deployment's . -Complete each step under the "SingleStore Cluster Definition" heading in the "SAS SingleStore Cluster Operator" README file in order to configure your SAS with SingleStore deployment, noting the following exceptions. The README file is located at `$deploy/sas-bases/examples/sas-singlestore/README.md` (for Markdown format) or at `$deploy/sas-bases/docs/sas_singlestore_cluster_operator.htm` (for HTML format). +Complete each step under the "SingleStore Cluster Definition" heading in the "SAS SingleStore Cluster Operator" README file in order to configure your SAS SpeedyStore deployment, noting the following exceptions. The README file is located at `$deploy/sas-bases/examples/sas-singlestore/README.md` (for Markdown format) or at `$deploy/sas-bases/docs/sas_singlestore_cluster_operator.htm` (for HTML format). - Complete steps 1 and 2 in the "SAS SingleStore Cluster Operator" README file. @@ -80,15 +80,15 @@ Complete each step under the "SingleStore Cluster Definition" heading in the "SA - Set `DEPLOY=true` in your ansible-vars.yaml file. -- Run viya4-deployment with the "viya, install" tags to deploy SAS with SingleStore into your cluster. +- Run viya4-deployment with the "viya, install" tags to deploy SAS SpeedyStore into your cluster. -### SAS Viya and SingleStore orders at LTS:2023.03 and earlier +### SAS SpeedyStore orders at LTS:2023.03 and earlier Refer to the viya4-deployment [Getting Started](https://github.com/sassoftware/viya4-deployment#getting-started) and [SAS Viya Platform Customizations](https://github.com/sassoftware/viya4-deployment#sas-viya-platform-customizations) documentation if you need information about how to make changes to your deployment by adding custom overlays into subdirectories under the `/site-config` directory. 
After running viya4-deployment with the setting `DEPLOY=false` in your ansible-vars.yaml file, locate the `sas-bases` directory, which is a peer to the `site-config` directory underneath your SAS Viya platform deployment's . -Complete each step under the "SingleStore Cluster Definition" heading in the "SAS SingleStore Cluster Operator" README file in order to configure your SAS with SingleStore deployment, noting the following exceptions. The README file is located at `$deploy/sas-bases/examples/sas-singlestore/README.md` (for Markdown format) or at `$deploy/sas-bases/docs/sas_singlestore_cluster_operator.htm` (for HTML format). +Complete each step under the "SingleStore Cluster Definition" heading in the "SAS SingleStore Cluster Operator" README file in order to configure your SAS SpeedyStore deployment, noting the following exceptions. The README file is located at `$deploy/sas-bases/examples/sas-singlestore/README.md` (for Markdown format) or at `$deploy/sas-bases/docs/sas_singlestore_cluster_operator.htm` (for HTML format). - Complete steps 1 and 2 in the `sas-bases/examples/sas-singlestore/README.md` file. @@ -102,4 +102,4 @@ Complete each step under the "SingleStore Cluster Definition" heading in the "SA - Complete the remaining steps from the "SAS SingleStore Cluster Operator" README file. Then set `DEPLOY=true` in your ansible-vars.yaml file. -- Run viya4-deployment with the "viya, install" tags to deploy SAS with SingleStore into your cluster. +- Run viya4-deployment with the "viya, install" tags to deploy SAS SpeedyStore into your cluster. 
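The `roles/baseline/defaults/main.yml` hunk that follows reworks how the CSI NFS `share` parameter is selected, adding a NetApp branch ahead of the provider defaults. A minimal Python sketch of the resulting precedence — the function name and sample paths are illustrative stand-ins for the Jinja expression, and `rstrip("/")` models the intended trailing-slash strip:

```python
def nfs_share(storage_type_backend: str, netapp_volume_path: str,
              provider: str, rwx_filestore_path: str) -> str:
    """Illustrative model of the CSI NFS share-selection precedence."""
    if storage_type_backend == "ontap":
        return "/ontap"
    if storage_type_backend == "netapp" and len(netapp_volume_path) > 0:
        # NetApp volume: the filestore path is exported as-is
        return rwx_filestore_path
    if provider != "azure":
        return "/pvs"
    # Azure default: strip any trailing slash, then append /pvs
    return rwx_filestore_path.rstrip("/") + "/pvs"

print(nfs_share("netapp", "/netapp-vol", "gcp", "/netapp-vol"))  # -> /netapp-vol
print(nfs_share("", "", "azure", "/export/"))                    # -> /export/pvs
```

The `ontap` check runs first, so it wins even when a NetApp volume path is also set; the Azure branch is only reached when neither special backend applies.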
diff --git a/roles/baseline/defaults/main.yml b/roles/baseline/defaults/main.yml index 5dcf9657..be1d925d 100644 --- a/roles/baseline/defaults/main.yml +++ b/roles/baseline/defaults/main.yml @@ -9,6 +9,7 @@ V4_CFG_INGRESS_MODE: public V4_CFG_MANAGE_STORAGE: true V4_CFG_AWS_LB_SUBNETS: "" STORAGE_TYPE_BACKEND: "" +NETAPP_VOLUME_PATH: "" ## Cert-manager CERT_MANAGER_NAME: cert-manager @@ -121,6 +122,7 @@ CSI_DRIVER_NFS_CHART_VERSION: 4.11.0 CSI_DRIVER_NFS_CONFIG: driver: mountPermissions: "0777" + fsGroupPolicy: ReadWriteOnceWithFSType storageClass: create: true name: sas @@ -129,11 +131,12 @@ CSI_DRIVER_NFS_CONFIG: volumeBindingMode: Immediate parameters: server: "{{ V4_CFG_RWX_FILESTORE_ENDPOINT }}" - share: "{{ '/ontap' if STORAGE_TYPE_BACKEND == 'ontap' else ('/pvs' if PROVIDER != 'azure' else (V4_CFG_RWX_FILESTORE_PATH | replace('/$', '') ~ '/pvs')) }}" + share: "{{ '/ontap' if STORAGE_TYPE_BACKEND == 'ontap' else (V4_CFG_RWX_FILESTORE_PATH if (STORAGE_TYPE_BACKEND == 'netapp' and NETAPP_VOLUME_PATH | length > 0) else ('/pvs' if PROVIDER != 'azure' else (V4_CFG_RWX_FILESTORE_PATH | regex_replace('/$', '') ~ '/pvs')) ) }}" subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}/${pv.metadata.name} mountPermissions: "0777" mountOptions: - - vers=4.1 + - "{{ 'vers=3' if (PROVIDER == 'gcp' and STORAGE_TYPE_BACKEND == 'netapp') else 'vers=4.1' }}" + - nolock - noatime - nodiratime - rsize=262144 diff --git a/roles/baseline/tasks/nfs-csi-provisioner.yaml b/roles/baseline/tasks/nfs-csi-provisioner.yaml index d3d644f0..ec501f82 100644 --- a/roles/baseline/tasks/nfs-csi-provisioner.yaml +++ b/roles/baseline/tasks/nfs-csi-provisioner.yaml @@ -159,12 +159,13 @@ provisioner: nfs.csi.k8s.io parameters: server: "{{ V4_CFG_RWX_FILESTORE_ENDPOINT }}" - share: "{{ V4_CFG_RWX_FILESTORE_PATH if '-export' in V4_CFG_RWX_FILESTORE_PATH else ('/pvs' if V4_CFG_RWX_FILESTORE_PATH != '/volumes' else '/volumes/pvs') }}" + share: "{{ V4_CFG_RWX_FILESTORE_PATH if '-export' in
V4_CFG_RWX_FILESTORE_PATH else (V4_CFG_RWX_FILESTORE_PATH if (STORAGE_TYPE_BACKEND == 'netapp' and NETAPP_VOLUME_PATH | length > 0) else ('/pvs' if V4_CFG_RWX_FILESTORE_PATH != '/volumes' else '/volumes/pvs')) }}" reclaimPolicy: Delete volumeBindingMode: Immediate allowVolumeExpansion: true mountOptions: - - "{{ 'nfsvers=4.1' if (V4_CFG_RWX_FILESTORE_PATH != '/volumes' and '-export' not in V4_CFG_RWX_FILESTORE_PATH) else 'nolock' }}" + - "{{ 'nfsvers=3' if (PROVIDER == 'gcp' and STORAGE_TYPE_BACKEND == 'netapp') else 'nfsvers=4.1' }}" + - nolock - noatime - nodiratime - rsize=262144 diff --git a/roles/common/tasks/main.yaml b/roles/common/tasks/main.yaml index 5f05ebed..362d0714 100644 --- a/roles/common/tasks/main.yaml +++ b/roles/common/tasks/main.yaml @@ -166,19 +166,20 @@ - tfstate.cluster_node_pool_mode is defined - tfstate.cluster_node_pool_mode.value|length > 0 # Set jump server facts from tfstate if present - - name: tfstate - jump server # noqa: name[casing] + - name: tfstate - jump server public IP # noqa: name[casing] set_fact: JUMP_SVR_HOST: "{{ tfstate.jump_public_ip.value }}" when: + - JUMP_SVR_HOST is not defined - tfstate.jump_public_ip is defined - tfstate.jump_public_ip.value|length > 0 - name: tfstate - jump server private # noqa: name[casing] set_fact: JUMP_SVR_HOST: "{{ tfstate.jump_private_ip.value }}" when: + - JUMP_SVR_HOST is not defined - tfstate.jump_private_ip is defined - tfstate.jump_private_ip.value|length > 0 - - JUMP_SVR_HOST is not defined - name: tfstate - jump user # noqa: name[casing] set_fact: JUMP_SVR_USER: "{{ tfstate.jump_admin_username.value }}" @@ -213,6 +214,12 @@ when: - tfstate.storage_type_backend is defined - tfstate.storage_type_backend.value|length > 0 + # Set NetApp volume path from tfstate if STORAGE_TYPE_BACKEND is 'netapp' + - name: "Set NetApp volume path fact" + set_fact: + NETAPP_VOLUME_PATH: "{{
tfstate.netapp_volume_path.value }}" + when: + - STORAGE_TYPE_BACKEND is defined + - STORAGE_TYPE_BACKEND == 'netapp' + - tfstate.netapp_volume_path is defined + ### Deprecations - name: tfstate - postgres admin # noqa: name[casing] set_fact: diff --git a/roles/vdm/tasks/sasdeployment_custom_resource.yaml b/roles/vdm/tasks/sasdeployment_custom_resource.yaml index eb6fd3f8..8740cfda 100644 --- a/roles/vdm/tasks/sasdeployment_custom_resource.yaml +++ b/roles/vdm/tasks/sasdeployment_custom_resource.yaml @@ -33,12 +33,20 @@ - cas-onboard - offboard block: + # If explicitly passed from Ansible, use that value + - name: Set cadence release from ansible var if provided + set_fact: + V4_CFG_CADENCE_RELEASE: "{{ V4_CFG_CADENCE_RELEASE }}" + when: + - V4_CFG_CADENCE_RELEASE is defined + - V4_CFG_CADENCE_RELEASE | length > 0 # For cadence version > 2021.1 or 'fast', use cadence.yaml - name: sasdeployment custom resource - Find order cadence release from cadence.yaml # noqa: name[casing] set_fact: V4_CFG_CADENCE_RELEASE: "{{ (lookup('file', '{{ DEPLOY_DIR }}/sas-bases/.orchestration/cadence.yaml') | from_yaml).spec.release }}" when: - V4_CFG_CADENCE_VERSION is version('2021.1', ">") or V4_CFG_CADENCE_NAME|lower == "fast" + - V4_CFG_CADENCE_RELEASE is not defined or V4_CFG_CADENCE_RELEASE | length == 0 # For cadence version <= 2021.1 and not 'fast', use configmaps.yaml - name: sasdeployment custom resource - Find order cadence release from configmaps.yaml # noqa: name[casing] set_fact: @@ -46,6 +54,7 @@ when: - V4_CFG_CADENCE_VERSION is version('2021.1', "<=") - V4_CFG_CADENCE_NAME|lower != "fast" + - V4_CFG_CADENCE_RELEASE is not defined or V4_CFG_CADENCE_RELEASE | length == 0 # Prepare orchestration tooling directory and copy required files - name: sasdeployment custom resource - Setup orchestration tooling directory # noqa: name[casing] diff --git a/roles/vdm/templates/resources/openssl-generated-ingress-certificate.yaml b/roles/vdm/templates/resources/openssl-generated-ingress-certificate.yaml index 1ac24a7a..8ae7443d
100644 --- a/roles/vdm/templates/resources/openssl-generated-ingress-certificate.yaml +++ b/roles/vdm/templates/resources/openssl-generated-ingress-certificate.yaml @@ -1,11 +1,15 @@ apiVersion: batch/v1 kind: Job metadata: + annotations: {} labels: sas.com/admin: namespace name: sas-create-openssl-ingress-certificate spec: + ttlSecondsAfterFinished: 0 template: + metadata: + annotations: {} spec: imagePullSecrets: [] containers: @@ -58,6 +62,7 @@ spec: securityContext: allowPrivilegeEscalation: false capabilities: + add: [] drop: - ALL privileged: false @@ -68,6 +73,10 @@ spec: - mountPath: /security name: security restartPolicy: OnFailure + securityContext: + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault volumes: - name: certframe-token secret:
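The `sasdeployment_custom_resource.yaml` changes above make an explicitly supplied `V4_CFG_CADENCE_RELEASE` take precedence over the release discovered in `cadence.yaml` or `configmaps.yaml`. A minimal sketch of that precedence, with plain string comparison standing in for Ansible's `version()` test and illustrative argument names:

```python
def resolve_cadence_release(explicit: str, cadence_version: str, cadence_name: str,
                            cadence_yaml_release: str, configmaps_release: str) -> str:
    """Illustrative model of the cadence-release resolution order."""
    # An explicitly provided V4_CFG_CADENCE_RELEASE short-circuits both lookups
    if explicit:
        return explicit
    # Cadence version > 2021.1, or the 'fast' cadence: read sas-bases cadence.yaml
    if cadence_version > "2021.1" or cadence_name.lower() == "fast":
        return cadence_yaml_release
    # Otherwise fall back to configmaps.yaml
    return configmaps_release
```

In practice this means a value set in ansible-vars.yaml (or passed with `-e`) pins the release, while deployments that leave `V4_CFG_CADENCE_RELEASE` empty behave exactly as before.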