Commit 0413210

chore: Update discovery artifacts (#2650)
## Deleted keys were detected in the following stable discovery artifacts:
- dataplex v1 https://togithub.com/googleapis/google-api-python-client/commit/a6bcadaf0d07562fa0dba8bc7bf407bb688e1632

## Discovery Artifact Change Summary:
- feat(aiplatform): update the api https://togithub.com/googleapis/google-api-python-client/commit/505fe57787446556cfb3ed666bdeb3008fd03c9d
- feat(androidmanagement): update the api https://togithub.com/googleapis/google-api-python-client/commit/35758c61c6d4883be38be4522f95c3eaf82b0c32
- feat(backupdr): update the api https://togithub.com/googleapis/google-api-python-client/commit/7af368733d2866829d5026ac6d6bcb98ef65b327
- feat(bigquery): update the api https://togithub.com/googleapis/google-api-python-client/commit/c9bb6eadb4d2c751240a8240e0b2053f7c5290ec
- feat(contactcenteraiplatform): update the api https://togithub.com/googleapis/google-api-python-client/commit/8866a0a1ae026e5b23ca83e992030409b5ba4063
- feat(contactcenterinsights): update the api https://togithub.com/googleapis/google-api-python-client/commit/5ba402c0a2c2e31a25052bed515b69cf0f49e53e
- feat(container): update the api https://togithub.com/googleapis/google-api-python-client/commit/bbaa57303ad058dadea99a9aae59a24e96e6402f
- feat(dataplex): update the api https://togithub.com/googleapis/google-api-python-client/commit/a6bcadaf0d07562fa0dba8bc7bf407bb688e1632
- feat(deploymentmanager): update the api https://togithub.com/googleapis/google-api-python-client/commit/4550300cedba44ded570e8ade828bca50d5845b3
- feat(dfareporting): update the api https://togithub.com/googleapis/google-api-python-client/commit/3a419d62d93a31955538012dda1baea0e05c3ce1
- feat(discoveryengine): update the api https://togithub.com/googleapis/google-api-python-client/commit/643b0de675d3820c4e36b1ad712f9d268bbc84e3
- fix(dlp): update the api https://togithub.com/googleapis/google-api-python-client/commit/aa058d047d773e69c5a653b01d3c4c7e930950ee
- feat(file): update the api https://togithub.com/googleapis/google-api-python-client/commit/a39534da0b1cbc309f63b0800dd8bb245fff6e62
- feat(firebaseappdistribution): update the api https://togithub.com/googleapis/google-api-python-client/commit/922fbb59b0e5f500d190f18c2754ad91c68dc785
- feat(logging): update the api https://togithub.com/googleapis/google-api-python-client/commit/4b3e905666a30bb0ca78866d41eddd6a8f8f6d9c
- feat(looker): update the api https://togithub.com/googleapis/google-api-python-client/commit/b918b97e651dbae81afa99995cd8a5d00d3dbe3e
- feat(memcache): update the api https://togithub.com/googleapis/google-api-python-client/commit/9dd45257736ca5d8c394286c813fd77454da1515
- feat(merchantapi): update the api https://togithub.com/googleapis/google-api-python-client/commit/c646b1385f56e129d508d4ddc6934eb60dc49a5b
- feat(networkconnectivity): update the api https://togithub.com/googleapis/google-api-python-client/commit/e9b345d5e6111f27b41ac7c4880d8daff64511eb
- feat(networksecurity): update the api https://togithub.com/googleapis/google-api-python-client/commit/692c1d4e072b4137bd5e555406d565cc4bd18050
- feat(observability): update the api https://togithub.com/googleapis/google-api-python-client/commit/0496df5aa2d32a052c1702690f95d3c7c5868079
- feat(playintegrity): update the api https://togithub.com/googleapis/google-api-python-client/commit/b49ae191b65d7b92d98be38ae254626d50a14500
- feat(redis): update the api https://togithub.com/googleapis/google-api-python-client/commit/d114f55a24dc586dfa05e956a597e9f332a62a99
- feat(sqladmin): update the api https://togithub.com/googleapis/google-api-python-client/commit/047a2e6658b84e4770bb8f614434a3fd936e45f4
- feat(vmmigration): update the api https://togithub.com/googleapis/google-api-python-client/commit/1c035237a129585541283fad990538fafeca4fad
- feat(workloadmanager): update the api https://togithub.com/googleapis/google-api-python-client/commit/a0d316154630f46b50d59c63bb5dc664680fcf52
1 parent 9354074 commit 0413210

File tree

302 files changed (+70303 / -1678 lines)


docs/dyn/aiplatform_v1.projects.locations.html

Lines changed: 404 additions & 1 deletion
Large diffs are not rendered by default.

docs/dyn/aiplatform_v1beta1.batchPredictionJobs.html

Lines changed: 4 additions & 0 deletions
@@ -118,6 +118,7 @@ <h3>Method Details</h3>
 "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine.
 "acceleratorCount": 42, # The number of accelerators to attach to the machine.
 "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
+"gpuPartitionSize": "A String", # Optional. Immutable. The Nvidia GPU partition size. When specified, the requested accelerators will be partitioned into smaller GPU partitions. For example, if the request is for 8 units of NVIDIA A100 GPUs, and gpu_partition_size="1g.10gb", the service will create 8 * 7 = 56 partitioned MIG instances. The partition size must be a value supported by the requested accelerator. Refer to [Nvidia GPU Partitioning](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi#multi-instance_gpu_partitions) for the available partition sizes. If set, the accelerator_count should be set to 1.
 "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
 "multihostGpuNodeCount": 42, # Optional. Immutable. The number of nodes per replica for multihost GPU deployments.
 "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.

The same gpuPartitionSize line is added in three further, otherwise identical hunks: @@ -612,6 +613,7 @@, @@ -1112,6 +1114,7 @@, and @@ -1618,6 +1621,7 @@.
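The arithmetic in the new field's description can be sketched as follows. This is only an illustration of the 8 * 7 = 56 example given in the field docs; the partition-count table is an assumption derived from that example, not an exhaustive list of supported sizes.

```python
# Illustrative MIG partition counts per full GPU. Only "1g.10gb" is taken
# from the field description (8 GPUs * 7 partitions = 56 MIG instances);
# consult the linked Nvidia GPU Partitioning page for real supported sizes.
PARTITIONS_PER_GPU = {"1g.10gb": 7}


def mig_instance_count(accelerator_count: int, gpu_partition_size: str) -> int:
    """Number of partitioned MIG instances the service would create."""
    return accelerator_count * PARTITIONS_PER_GPU[gpu_partition_size]


print(mig_instance_count(8, "1g.10gb"))  # 56, matching the example in the docs
```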

docs/dyn/aiplatform_v1beta1.projects.locations.batchPredictionJobs.html

Lines changed: 4 additions & 0 deletions
@@ -149,6 +149,7 @@ <h3>Method Details</h3>
 "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine.
 "acceleratorCount": 42, # The number of accelerators to attach to the machine.
 "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
+"gpuPartitionSize": "A String", # Optional. Immutable. The Nvidia GPU partition size. When specified, the requested accelerators will be partitioned into smaller GPU partitions. For example, if the request is for 8 units of NVIDIA A100 GPUs, and gpu_partition_size="1g.10gb", the service will create 8 * 7 = 56 partitioned MIG instances. The partition size must be a value supported by the requested accelerator. Refer to [Nvidia GPU Partitioning](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi#multi-instance_gpu_partitions) for the available partition sizes. If set, the accelerator_count should be set to 1.
 "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
 "multihostGpuNodeCount": 42, # Optional. Immutable. The number of nodes per replica for multihost GPU deployments.
 "reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.

The same gpuPartitionSize line is added in three further, otherwise identical hunks: @@ -642,6 +643,7 @@, @@ -1177,6 +1179,7 @@, and @@ -1683,6 +1686,7 @@.
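A minimal sketch of how the new field might appear in a BatchPredictionJob request body built for google-api-python-client. The project, model path, display name, and machine/accelerator types are placeholder assumptions; only the field names and the accelerator_count-must-be-1 constraint come from the diff above, and the actual service call is indicated in a comment rather than executed.

```python
# Hypothetical BatchPredictionJob dedicatedResources using gpuPartitionSize.
# All resource names and machine/accelerator choices below are placeholders.
machine_spec = {
    "machineType": "a2-highgpu-1g",          # assumption: an A100-based machine type
    "acceleratorType": "NVIDIA_TESLA_A100",  # assumption: matching accelerator
    "acceleratorCount": 1,                   # per the field docs, 1 when a partition size is set
    "gpuPartitionSize": "1g.10gb",           # the new field introduced in this commit
}

body = {
    "displayName": "partitioned-batch-prediction",  # placeholder name
    "model": "projects/my-project/locations/us-central1/models/my-model",
    "dedicatedResources": {
        "machineSpec": machine_spec,
        "startingReplicaCount": 1,
    },
}

# Sanity check mirroring the documented constraint: if gpuPartitionSize is set,
# accelerator_count should be set to 1.
if machine_spec.get("gpuPartitionSize"):
    assert machine_spec["acceleratorCount"] == 1

# With google-api-python-client, this body would then be passed to something like:
#   service = googleapiclient.discovery.build("aiplatform", "v1")
#   service.projects().locations().batchPredictionJobs().create(
#       parent="projects/my-project/locations/us-central1", body=body).execute()
```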
