Merged
28 commits, all by yoshi-automation on Sep 2, 2025:

0992ab1  chore: update docs/dyn/index.md
505fe57  feat(aiplatform): update the api
35758c6  feat(androidmanagement): update the api
7af3687  feat(backupdr): update the api
c9bb6ea  feat(bigquery): update the api
8866a0a  feat(contactcenteraiplatform): update the api
5ba402c  feat(contactcenterinsights): update the api
bbaa573  feat(container): update the api
a6bcada  feat(dataplex): update the api
4550300  feat(deploymentmanager): update the api
3a419d6  feat(dfareporting): update the api
643b0de  feat(discoveryengine): update the api
aa058d0  fix(dlp): update the api
a39534d  feat(file): update the api
922fbb5  feat(firebaseappdistribution): update the api
4b3e905  feat(logging): update the api
b918b97  feat(looker): update the api
9dd4525  feat(memcache): update the api
c646b13  feat(merchantapi): update the api
e9b345d  feat(networkconnectivity): update the api
692c1d4  feat(networksecurity): update the api
0496df5  feat(observability): update the api
b49ae19  feat(playintegrity): update the api
d114f55  feat(redis): update the api
047a2e6  feat(sqladmin): update the api
1c03523  feat(vmmigration): update the api
a0d3161  feat(workloadmanager): update the api
76da8fc  chore(docs): Add new discovery artifacts and artifacts with minor upd…
405 changes: 404 additions & 1 deletion docs/dyn/aiplatform_v1.projects.locations.html

Large diffs are not rendered by default.

4 changes: 4 additions & 0 deletions docs/dyn/aiplatform_v1beta1.batchPredictionJobs.html
@@ -118,6 +118,7 @@ Method Details
  "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine.
  "acceleratorCount": 42, # The number of accelerators to attach to the machine.
  "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
+ "gpuPartitionSize": "A String", # Optional. Immutable. The Nvidia GPU partition size. When specified, the requested accelerators will be partitioned into smaller GPU partitions. For example, if the request is for 8 units of NVIDIA A100 GPUs, and gpu_partition_size="1g.10gb", the service will create 8 * 7 = 56 partitioned MIG instances. The partition size must be a value supported by the requested accelerator. Refer to [Nvidia GPU Partitioning](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi#multi-instance_gpu_partitions) for the available partition sizes. If set, the accelerator_count should be set to 1.
  "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
  "multihostGpuNodeCount": 42, # Optional. Immutable. The number of nodes per replica for multihost GPU deployments.
  "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
@@ -612,6 +613,7 @@ Method Details
@@ -1112,6 +1114,7 @@ Method Details
@@ -1618,6 +1621,7 @@ Method Details
(each of these hunks repeats the machineSpec block above verbatim, with the same one-line "gpuPartitionSize" addition)
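The arithmetic in the new gpuPartitionSize description can be sketched in Python. This is a minimal illustration, not a verified request builder: the machine type is an assumed 8xA100 shape, and the 7-partitions-per-GPU figure for "1g.10gb" is taken solely from the field's own 8 * 7 = 56 worked example.

```python
# Sketch of the partition arithmetic described by the new "gpuPartitionSize"
# field. The 7-partitions-per-GPU figure for "1g.10gb" comes from the field's
# own example (8 A100s -> 8 * 7 = 56 MIG instances); other sizes are omitted.

PARTITIONS_PER_A100 = {
    "1g.10gb": 7,  # per the field documentation's worked example
}


def mig_instance_count(accelerator_count: int, partition_size: str) -> int:
    """Total partitioned MIG instances the service would create."""
    return accelerator_count * PARTITIONS_PER_A100[partition_size]


# A machineSpec payload shaped like the documented fields. Note the field doc
# also says accelerator_count "should be set to 1" when gpuPartitionSize is
# set; the count of 8 here only mirrors the doc's worked example.
machine_spec = {
    "machineType": "a2-highgpu-8g",         # assumption: an 8xA100 shape
    "acceleratorType": "NVIDIA_TESLA_A100",
    "acceleratorCount": 8,
    "gpuPartitionSize": "1g.10gb",
}

print(mig_instance_count(machine_spec["acceleratorCount"],
                         machine_spec["gpuPartitionSize"]))  # 56
```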
Second changed file:
@@ -149,6 +149,7 @@ Method Details
@@ -642,6 +643,7 @@ Method Details
@@ -1177,6 +1179,7 @@ Method Details
@@ -1683,6 +1686,7 @@ Method Details
(each of these hunks repeats the same machineSpec block verbatim, with the same one-line "gpuPartitionSize" addition)