
Commit 53498d1

Merge pull request #30 from NetApp/release-v2.5
Merge v2.5 Release Branch
2 parents: 6723750 + 51982c8

20 files changed: +457 −1004 lines changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ The NetApp DataOps Toolkit is a Python-based tool that simplifies the management
 
 ## Getting Started
 
-The latest stable release of the NetApp DataOps Toolkit is version 2.4.0. It is recommended to always use the latest stable release. You can access the documentation for the latest stable release [here](https://github.com/NetApp/netapp-dataops-toolkit/tree/v2.4.0)
+The latest stable release of the NetApp DataOps Toolkit is version 2.5.0. It is recommended to always use the latest stable release. You can access the documentation for the latest stable release [here](https://github.com/NetApp/netapp-dataops-toolkit/tree/v2.5.0)
 
 
 The NetApp DataOps Toolkit comes in two different flavors. For access to the most capabilities, we recommend using the [NetApp DataOps Toolkit for Kubernetes](netapp_dataops_k8s/). This flavor supports the full functionality of the toolkit, including JupyterLab workspace and NVIDIA Triton Inference Server management capabilities, but requires access to a Kubernetes cluster.

netapp_dataops_k8s/Examples/Airflow/ai-training-run.py

Lines changed: 2 additions & 2 deletions
@@ -109,7 +109,7 @@
     # Define step to take a snapshot of the dataset volume for traceability
     dataset_snapshot = KubernetesPodOperator(
         namespace=namespace,
-        image="python:3",
+        image="python:3.11",
         cmds=["/bin/bash", "-c"],
         arguments=["\
             python3 -m pip install netapp-dataops-k8s && \
@@ -144,7 +144,7 @@
     # Define step to take a snapshot of the model volume for versioning/baselining
     model_snapshot = KubernetesPodOperator(
         namespace=namespace,
-        image="python:3",
+        image="python:3.11",
         cmds=["/bin/bash", "-c"],
         arguments=["\
             python3 -m pip install netapp-dataops-k8s && \
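
In context, this change pins the task container to a specific Python minor release instead of the floating `python:3` tag, which would otherwise drift onto interpreters outside the toolkit's supported range. A minimal self-contained sketch of the pattern, assuming a recent `apache-airflow-providers-cncf-kubernetes` provider; the DAG name, namespace, PVC name, snapshot name, and CLI invocation below are illustrative placeholders, not taken verbatim from this commit:

```python
# Minimal sketch of a pinned snapshot step (illustrative names throughout).
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

with DAG(dag_id="snapshot-sketch", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    dataset_snapshot = KubernetesPodOperator(
        namespace="airflow",  # assumption: the namespace the task pods run in
        image="python:3.11",  # pinned minor release instead of the floating "python:3" tag
        cmds=["/bin/bash", "-c"],
        arguments=["\
            python3 -m pip install netapp-dataops-k8s && \
            netapp_dataops_k8s_cli.py create volume-snapshot --pvc-name=dataset-vol --snapshot-name=dataset-snap"],
        name="dataset-snapshot",
        task_id="dataset-snapshot",
        is_delete_operator_pod=True,  # clean up the pod once the step finishes
        hostnetwork=False,
    )
```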

netapp_dataops_k8s/Examples/Airflow/clone-volume.py

Lines changed: 2 additions & 2 deletions
@@ -53,11 +53,11 @@
     # Define step to clone source volume
     clone_volume = KubernetesPodOperator(
         namespace=namespace,
-        image="python:3",
+        image="python:3.11",
         cmds=["/bin/bash", "-c"],
         arguments=[arg],
         name="clone-volume-clone-volume",
         task_id="clone-volume",
         is_delete_operator_pod=True,
         hostnetwork=False
-    )
+    )

netapp_dataops_k8s/Examples/Kubeflow/Pipelines/ai-training-run.py

Lines changed: 2 additions & 2 deletions
@@ -49,7 +49,7 @@ def ai_training_run(
     volume_snapshot_name = "dataset-{{workflow.uid}}"
     dataset_snapshot = dsl.ContainerOp(
         name="dataset-snapshot",
-        image="python:3",
+        image="python:3.11",
         command=["/bin/bash", "-c"],
         arguments=["\
             python3 -m pip install netapp-dataops-k8s && \
@@ -85,7 +85,7 @@ def ai_training_run(
     volume_snapshot_name = "kfp-model-{{workflow.uid}}"
     model_snapshot = dsl.ContainerOp(
         name="model-snapshot",
-        image="python:3",
+        image="python:3.11",
         command=["/bin/bash", "-c"],
         arguments=["\
             python3 -m pip install netapp-dataops-k8s && \
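
The same image pin applies on the Kubeflow side. A small sketch of the pattern under kfp v1 (where `dsl.ContainerOp` exists); the pipeline name, PVC parameter, and CLI call are illustrative, not copied from this commit:

```python
# Sketch of a pinned kfp v1 snapshot step (illustrative names).
import kfp.dsl as dsl

@dsl.pipeline(name="snapshot-sketch", description="Illustrative snapshot step")
def snapshot_pipeline(dataset_volume_pvc: str = "dataset-vol"):
    volume_snapshot_name = "dataset-{{workflow.uid}}"  # resolved by Argo at run time
    dataset_snapshot = dsl.ContainerOp(
        name="dataset-snapshot",
        image="python:3.11",  # pinned minor release instead of "python:3"
        command=["/bin/bash", "-c"],
        arguments=["\
            python3 -m pip install netapp-dataops-k8s && \
            netapp_dataops_k8s_cli.py create volume-snapshot --pvc-name=" + str(dataset_volume_pvc) + " --snapshot-name=" + volume_snapshot_name],
    )
```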

netapp_dataops_k8s/Examples/Kubeflow/Pipelines/clone-volume.py

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ def clone_volume(
 
     # Create a clone of the source volume
     name = "clone-volume"
-    image = "python:3"
+    image = "python:3.11"
     command = ["/bin/bash", "-c"]
     file_outputs = {"new_volume_pvc_name": "/new_volume_pvc_name.txt"}
     args = "\

netapp_dataops_k8s/Examples/Kubeflow/Pipelines/delete-snapshot.py

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ def delete_volume(
     # Delete Snapshot
     delete_snapshot = dsl.ContainerOp(
         name="delete-snapshot",
-        image="python:3",
+        image="python:3.11",
         command=["/bin/bash", "-c"],
         arguments=["\
             python3 -m pip install netapp-dataops-k8s && \

netapp_dataops_k8s/Examples/Kubeflow/Pipelines/delete-volume.py

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ def delete_volume(
     # Delete Volume
     delete_volume = dsl.ContainerOp(
         name="delete-volume",
-        image="python:3",
+        image="python:3.11",
         command=["/bin/bash", "-c"],
         arguments=["\
             python3 -m pip install netapp-dataops-k8s && \

netapp_dataops_k8s/Examples/Kubeflow/Pipelines/replicate-data-cloud-sync.py

Lines changed: 2 additions & 2 deletions
@@ -30,7 +30,7 @@ def netappCloudSyncUpdate(relationshipId: str, printResponse: bool = True, keepC
     syncCloudSyncRelationship(relationshipID=relationshipId, waitUntilComplete=keepCheckingUntilComplete, printOutput=printResponse)
 
 # Convert netappCloudSyncUpdate function to Kubeflow Pipeline ContainerOp named 'NetappCloudSyncUpdateOp'
-NetappCloudSyncUpdateOp = comp.func_to_container_op(netappCloudSyncUpdate, base_image='python:3')
+NetappCloudSyncUpdateOp = comp.func_to_container_op(netappCloudSyncUpdate, base_image='python:3.11')
 
 
 # Define Kubeflow Pipeline
@@ -61,4 +61,4 @@ def replicate_data_cloud_sync(
 
 if __name__ == '__main__' :
     import kfp.compiler as compiler
-    compiler.Compiler().compile(replicate_data_cloud_sync, __file__ + '.yaml')
+    compiler.Compiler().compile(replicate_data_cloud_sync, __file__ + '.yaml')
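
Both replication examples rely on the same kfp v1 lightweight-component pattern: write a plain Python function, then convert it with `func_to_container_op`, which is where the base image gets pinned. A sketch under that assumption; the function name, body, and pipeline name are hypothetical stand-ins for the toolkit calls made in the real examples:

```python
# Sketch of the lightweight-component pattern shared by the replication
# examples (kfp v1). Only the base_image pin reflects this commit; the
# rest is a placeholder.
import kfp.components as comp
import kfp.dsl as dsl

def sync_step(relationship_id: str, wait_until_complete: bool = True):
    # Hypothetical stand-in for the toolkit sync call.
    print(f"Syncing relationship {relationship_id} (wait={wait_until_complete})")

# Pinning base_image keeps the generated component on a supported interpreter.
SyncStepOp = comp.func_to_container_op(sync_step, base_image="python:3.11")

@dsl.pipeline(name="replicate-data-sketch")
def replicate_data(relationship_id: str = "example-id"):
    SyncStepOp(relationship_id)

if __name__ == "__main__":
    import kfp.compiler as compiler
    compiler.Compiler().compile(replicate_data, __file__ + ".yaml")
```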

netapp_dataops_k8s/Examples/Kubeflow/Pipelines/replicate-data-snapmirror.py

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ def netappSnapMirrorUpdate(
     syncSnapMirrorRelationship(uuid=uuid, waitUntilComplete=waitUntilComplete, printOutput=True)
 
 # Convert netappSnapMirrorUpdate function to Kubeflow Pipeline ContainerOp named 'NetappSnapMirrorUpdateOp'
-NetappSnapMirrorUpdateOp = comp.func_to_container_op(netappSnapMirrorUpdate, base_image='python:3')
+NetappSnapMirrorUpdateOp = comp.func_to_container_op(netappSnapMirrorUpdate, base_image='python:3.11')
 
 
 # Define Kubeflow Pipeline

netapp_dataops_k8s/README.md

Lines changed: 2 additions & 19 deletions
@@ -9,7 +9,7 @@ The NetApp DataOps Toolkit for Kubernetes supports Linux and macOS hosts.
 
 The toolkit must be used in conjunction with a Kubernetes cluster in order to be useful. Additionally, [Trident](https://netapp.io/persistent-storage-provisioner-for-kubernetes/), NetApp's dynamic storage orchestrator for Kubernetes, and/or the [BeeGFS CSI driver](https://github.com/NetApp/beegfs-csi-driver/) must be installed within the Kubernetes cluster. The toolkit simplifies performing of various data management tasks that are actually executed by a NetApp maintained CSI driver. In order to facilitate this, the toolkit communicates with the appropriate driver via the Kubernetes API.
 
-The toolkit is currently compatible with Kubernetes versions 1.17 and above, and OpenShift versions 4.4 and above.
+The toolkit is currently compatible with Kubernetes versions 1.20 and above, and OpenShift versions 4.7 and above.
 
 The toolkit is currently compatible with Trident versions 20.07 and above. Additionally, the toolkit is compatible with the following Trident backend types:
 
@@ -24,7 +24,7 @@ The toolkit is currently compatible with all versions of the BeeGFS CSI driver,
 
 ### Prerequisites
 
-The NetApp DataOps Toolkit for Kubernetes requires that Python 3.8 or above be installed on the local host. Additionally, the toolkit requires that pip for Python3 be installed on the local host. For more details regarding pip, including installation instructions, refer to the [pip documentation](https://pip.pypa.io/en/stable/installing/).
+The NetApp DataOps Toolkit for Kubernetes requires that Python 3.8, 3.9, 3.10, or 3.11 be installed on the local host. Additionally, the toolkit requires that pip for Python3 be installed on the local host. For more details regarding pip, including installation instructions, refer to the [pip documentation](https://pip.pypa.io/en/stable/installing/).
 
 ### Installation Instructions
 
@@ -67,23 +67,6 @@ In the [Examples](Examples/) directory, you will find the following examples per
 
 Refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/) for more information on accessing the Kubernetes API from within a pod.
 
-## Extended Functionality with Astra Control
-
-The NetApp DataOps Toolkit provides several extended capabilities that require [Astra Control](https://cloud.netapp.com/astra). Any operation that requires Astra Control is specifically noted within the documentation as requiring Astra Control. The prerequisites outlined in this section are required in order to perform any operation that requires Astra Control.
-
-The toolkit uses the Astra Control Python SDK to interface with the Astra Control API. The Astra Control Python SDK is installed automatically when you install the NetApp DataOps Toolkit using pip.
-
-In order for the Astra Control Python SDK to be able to communicate with the Astra Control API, you must create a 'config.yaml' file containing your Astra Control API connection details. Refer to the [Astra Control Python SDK README](https://github.com/NetApp/netapp-astra-toolkits/tree/v2.1.3) for formatting details. Note that you do not need to follow the installation instructions outlined in the Astra Control Python SDK README; you only need to create the 'config.yaml' file. Once you have created the 'config.yaml' file, you must store it in one of the following locations:
-- ~/.config/astra-toolkits/
-- /etc/astra-toolkits/
-- The directory pointed to by the shell environment variable 'ASTRATOOLKITS_CONF'
-
-Additionally, you must set the shell environment variable 'ASTRA_K8S_CLUSTER_NAME' to the name of your specific Kubernetes cluster in Astra Control.
-
-```sh
-export ASTRA_K8S_CLUSTER_NAME="<Kubernetes_cluster_name_in_Astra_Control"
-```
-
 ## Capabilities
 
 The NetApp DataOps Toolkit for Kubernetes provides the following capabilities.
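
Since the prerequisites now name an explicit interpreter range (3.8 through 3.11) rather than "3.8 or above", a preflight check before installing can fail fast on an unsupported Python. A small sketch, not part of the toolkit itself:

```python
# Preflight sketch (not from the toolkit): verify the interpreter falls in
# the documented 3.8-3.11 range before "python3 -m pip install netapp-dataops-k8s".
import sys

if not ((3, 8) <= sys.version_info[:2] <= (3, 11)):
    sys.exit(
        f"Python {sys.version_info.major}.{sys.version_info.minor} detected; "
        "the NetApp DataOps Toolkit for Kubernetes supports Python 3.8-3.11."
    )
print("Python version OK; install with: python3 -m pip install netapp-dataops-k8s")
```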
