This module creates a Google Cloud NetApp Volumes volume.
NetApp Volumes is a first-party Google service that provides NFS and/or SMB shared file systems to VMs. It offers advanced data management capabilities and highly scalable capacity and performance. NetApp Volumes provides:
- robust support for NFSv3, NFSv4.x and SMB 2.1 and 3.x
- a rich feature set
- scalable performance
- FlexCache: caching of ONTAP-based volumes to provide compute clusters with high-throughput, low-latency read access to on-premises data
- auto-tiering of unused data to optimize cost
Support for NetApp Volumes is split into two modules.
- netapp-storage-pool provisions a storage pool. Storage pools are pre-provisioned storage capacity containers which host volumes. A pool also defines fundamental properties of all the volumes within, like the region, the attached network, the service level, CMEK encryption, Active Directory and LDAP settings.
- netapp-volume provisions a volume inside an existing storage pool. A volume is a file-system container which is shared using NFS or SMB and provides advanced data management capabilities.
For more information on this and other network storage options in the Cluster Toolkit, see the extended Network Storage documentation.
The netapp-volume module currently doesn't implement volume deletion protection. If you create a volume with Cluster Toolkit by using this module, Cluster Toolkit will also delete it when you run gcluster destroy. All the data in the volume will be gone. If you want to retain the volume instead, it is advised to use existing volumes not created by Cluster Toolkit.
Volumes are filesystem containers which can be shared using NFS or SMB filesharing protocols. Volumes live inside of storage pools, which can be provisioned using the netapp-storage-pool module. Volumes inherit fundamental settings from the pool. They consume capacity provided by the pool. You can create one or multiple volumes inside a pool.
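As a minimal sketch of the pool side, a blueprint entry such as the following could provision the pool that the volume examples below reference via `use`. The settings shown here (pool name, service level, capacity, network id) are illustrative assumptions; check the netapp-storage-pool module's inputs for the authoritative list.

```yaml
  - id: netapp_pool
    source: modules/file-system/netapp-storage-pool
    use: [network]  # assumes a VPC module with id "network" exists in the blueprint
    settings:
      pool_name: "eda-pool"      # illustrative name
      service_level: "PREMIUM"   # assumed value; see the pool module's documentation
      capacity_gib: 30720        # pool capacity that its volumes consume
      region: $(vars.region)
```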
The following examples show the use of netapp-volume. They build on top of a storage pool which can be provisioned using the netapp-storage-pool module.
```yaml
  - id: home_volume
    source: modules/file-system/netapp-volume
    use: [netapp_pool] # Create this pool using the netapp-storage-pool module
    settings:
      volume_name: "eda-home"
      capacity_gib: 1024 # Size up to available capacity in the pool
      local_mount: "/eda-home" # Mount point at client when client uses USE directive
      protocols: ["NFSV3"]
      region: $(vars.region)
      # Default export policy exports to "10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" with no_root_squash
```

```yaml
  - id: shared_volume
    source: modules/file-system/netapp-volume
    use: [netapp_pool] # Create this pool using the netapp-storage-pool module
    settings:
      volume_name: "eda-shared"
      capacity_gib: 25000 # Size up to available capacity in the pool
      large_capacity: true
      local_mount: "/shared" # Mount point at client when client uses USE directive
      mount_options: "rw" # Allows customizing mount options for special workloads
      protocols: ["NFSV3", "NFSV4"] # List of protocols: ["NFSV3"], ["NFSV4"] or ["NFSV3", "NFSV4"]
      region: $(vars.region)
      unix_permissions: "0777" # Default permissions for the root inode, owned by root:root
      # If no export policy is specified, a permissive default policy is applied:
      #   allowed_clients = "10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" # RFC1918
      #   has_root_access = true # no_root_squash enabled
      #   access_type = "READ_WRITE"
      export_policy_rules:
        - allowed_clients: "10.10.20.8,10.10.20.9"
          has_root_access: true # no_root_squash enabled
          access_type: "READ_WRITE"
          nfsv3: false # allow only NFSv4 for these hosts
          nfsv4: true
        - allowed_clients: "10.0.0.0/8"
          has_root_access: false # root access is squashed for these hosts
          access_type: "READ_WRITE"
          nfsv3: true # allow only NFSv3 for these hosts
          nfsv4: false
      tiering_policy: # Enable auto-tiering. Requires an auto-tiering enabled storage pool
        tier_action: "ENABLED"
        cooling_threshold_days: 31 # Tier data blocks which have not been touched for 31 days
      description: "Shared volume for EDA job"
      labels:
        owner: bob
```

Since Cluster Toolkit is currently built to provision Linux-based compute clusters, this module supports NFSv3 and NFSv4.1 only. SMB is blocked.
Volumes larger than 15 TiB can be created as Large Volumes. Such volumes can grow up to 3 PiB and can scale read performance up to 29 GiBps. Large volumes provide six IP addresses, which are exported via the server_ips output. When connecting a large volume to a client using the USE directive, Cluster Toolkit currently uses the first IP only. This will be improved in the future.
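Until the toolkit spreads mounts itself, one way to work around the first-IP limitation is to have each client pick a deterministic IP from the server_ips list before mounting. This is a hedged sketch, not part of the module: the IP list below is a placeholder standing in for the module's server_ips output.

```shell
# Sketch: spread client mounts across a large volume's six export IPs.
# The IP list is a placeholder; in practice it comes from the netapp-volume
# module's server_ips output.
pick_ip() {
  # $1 = client name, remaining args = candidate IPs
  name=$1; shift
  count=$#
  # cksum yields a stable numeric hash, so a given client always picks the same IP
  idx=$(( $(printf '%s' "$name" | cksum | cut -d' ' -f1) % count + 1 ))
  eval "printf '%s\n' \"\${$idx}\""
}

pick_ip "$(hostname)" 10.0.0.10 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14 10.0.0.15
```

Each client would then mount the chosen IP instead of the first one, spreading NFS traffic across the volume's interfaces.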
This feature is allow-listed GA. To request allow-listing, see Large Volumes.
For auto-tiering enabled storage pools you can enable auto-tiering on the volume. For more information, see manage auto-tiering.
NetApp Volumes volumes are regular NFS exports. You can use the pre-existing-network-storage module to integrate them into Cluster Toolkit.
Example code:
```yaml
  - id: homefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      server_ip: ## Set server IP here ##
      remote_mount: nfsshare
      local_mount: /home
      fs_type: nfs
```

This creates a resource in Cluster Toolkit which references the specified NFS export, which will be mounted at /home by clients that mount it via the USE directive.
Note that the server_ip must be known before deployment, and this module does not allow specifying a list of IPs for large volumes.
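Since the server_ip must be known up front, it can be looked up from the volume's export path before writing the blueprint. The snippet below is a sketch: the export path is a placeholder value, and the gcloud lookup in the comment is an assumption to verify against your gcloud version's output fields.

```shell
# Sketch: derive server_ip and remote_mount for pre-existing-network-storage
# from a volume's export path. In practice the path could come from, e.g.:
#   gcloud netapp volumes describe <volume> --location=<region>
# (inspect its output for the export path; exact field names vary by version).
EXPORT="10.0.0.10:/eda-home"   # placeholder export path in ip:/share form
SERVER_IP=${EXPORT%%:*}        # part before the first ':'
REMOTE_MOUNT=/${EXPORT#*:/}    # part after 'ip:/', with the leading '/' restored
echo "$SERVER_IP $REMOTE_MOUNT"
```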
NetApp FlexCache technology accelerates data access, reduces WAN latency and lowers WAN bandwidth costs for read-intensive workloads, especially where clients need to access the same data repeatedly. When you create a FlexCache volume, you create a remote cache of an already existing (origin) volume that contains only the actively accessed data (hot data) of the origin volume.
The FlexCache support in Google Cloud NetApp Volumes allows you to provision a cache volume in your Google network to improve performance for hybrid cloud environments. A FlexCache volume can help you transition workloads to the hybrid cloud by caching data from an on-premises data center to cloud.
Deploying FlexCache volumes requires manual steps on the ONTAP origin side, which are not automated. Therefore this module has no support for deploying FlexCache volumes today. Deploy them manually and use the pre-existing-network-storage module instead.
Copyright 2026 Google LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| Name | Version |
|---|---|
| terraform | = 1.12.2 |
| google | >= 6.45.0 |

| Name | Version |
|---|---|
| google | >= 6.45.0 |
No modules.
| Name | Type |
|---|---|
| google_netapp_volume.netapp_volume | resource |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| capacity_gib | The capacity of the volume in GiB. | `number` | `1024` | no |
| description | A description of the NetApp volume. | `string` | `""` | no |
| export_policy_rules | Define NFS export policy. | `list(object({…}))` | `[…]` | no |
| labels | Labels to add to the NetApp volume. Key-value pairs. | `map(string)` | n/a | yes |
| large_capacity | If true, the volume will be created with large capacity. Large capacity volumes have 6 IP addresses and a minimum size of 15 TiB. | `bool` | `false` | no |
| local_mount | Mountpoint for this volume. | `string` | `"/shared"` | no |
| mount_options | NFS mount options to mount file system. | `string` | `"rw,hard,rsize=65536,wsize=65536,tcp"` | no |
| netapp_storage_pool_id | The ID of the NetApp storage pool to use for the volume. | `string` | n/a | yes |
| project_id | ID of project in which the NetApp storage pool will be created. | `string` | n/a | yes |
| protocols | The protocols that the volume supports. Currently, only NFSv3 and NFSv4 are supported. | `list(string)` | `[…]` | no |
| region | Location for NetApp storage pool. | `string` | n/a | yes |
| tiering_policy | Define the tiering policy for the NetApp volume. | `object({…})` | `null` | no |
| unix_permissions | UNIX permissions for root inode in the volume. | `string` | `"0777"` | no |
| volume_name | The name of the volume. Needs to be unique within the storage pool. | `string` | `null` | no |
| Name | Description |
|---|---|
| capacity_gb | Volume capacity in GiB. |
| install_nfs_client | Script for installing NFS client |
| install_nfs_client_runner | Runner to install NFS client using the startup-script module |
| mount_runner | Runner to mount the file-system using an Ansible playbook. The startup-script module will automatically handle installation of Ansible. Example: `- id: example-startup-script source: modules/scripts/startup-script settings: runners: - $(your-fs-id.mount_runner) ...` |
| netapp_volume_id | An identifier for the resource with format projects/{{project}}/locations/{{location}}/volumes/{{name}} |
| network_storage | Describes a NetApp Volumes volume. |
| server_ips | List of IP addresses of the volume. |