diff --git a/.codegen/_openapi_sha b/.codegen/_openapi_sha index 12fb465ab..a7b80d538 100644 --- a/.codegen/_openapi_sha +++ b/.codegen/_openapi_sha @@ -1 +1 @@ -94dc3e7289a19a90b167adf27316bd703a86f0eb \ No newline at end of file +cd641c9dd4febe334b339dd7878d099dcf0eeab5 \ No newline at end of file diff --git a/NEXT_CHANGELOG.md b/NEXT_CHANGELOG.md index 395a1a32c..804d1c8bf 100644 --- a/NEXT_CHANGELOG.md +++ b/NEXT_CHANGELOG.md @@ -11,3 +11,26 @@ ### Internal Changes ### API Changes +* Added `execution_details` and `script` fields for `databricks.sdk.service.compute.InitScriptInfoAndExecutionDetails`. +* Added `supports_elastic_disk` field for `databricks.sdk.service.compute.NodeType`. +* Added `data_granularity_quantity` field for `databricks.sdk.service.ml.CreateForecastingExperimentRequest`. +* [Breaking] Added `data_granularity_unit` field for `databricks.sdk.service.ml.CreateForecastingExperimentRequest`. +* Added `aliases`, `comment`, `data_type`, `dependency_list`, `full_data_type`, `id`, `input_params`, `name`, `properties`, `routine_definition`, `schema`, `securable_kind`, `share`, `share_id`, `storage_location` and `tags` fields for `databricks.sdk.service.sharing.Function`. +* [Breaking] Changed `create_experiment()` method for [w.forecasting](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/ml/forecasting.html) workspace-level service with new required argument order. +* [Breaking] Changed `instance_type_id` field for `databricks.sdk.service.compute.NodeInstanceType` to no longer be required. +* [Breaking] Changed `category` field for `databricks.sdk.service.compute.NodeType` to no longer be required. +* [Breaking] Changed `functions` field for `databricks.sdk.service.sharing.ListProviderShareAssetsResponse` to type `databricks.sdk.service.sharing.FunctionList` dataclass. 
+* [Breaking] Removed `abfss`, `dbfs`, `error_message`, `execution_duration_seconds`, `file`, `gcs`, `s3`, `status`, `volumes` and `workspace` fields for `databricks.sdk.service.compute.InitScriptInfoAndExecutionDetails`. +* [Breaking] Removed `forecast_granularity` field for `databricks.sdk.service.ml.CreateForecastingExperimentRequest`. +* [Breaking] Removed `jwks_uri` field for `databricks.sdk.service.oauth2.OidcFederationPolicy`. +* [Breaking] Removed `fallback_config` field for `databricks.sdk.service.serving.AiGatewayConfig`. +* [Breaking] Removed `custom_provider_config` field for `databricks.sdk.service.serving.ExternalModel`. +* [Breaking] Removed `fallback_config` field for `databricks.sdk.service.serving.PutAiGatewayRequest`. +* [Breaking] Removed `fallback_config` field for `databricks.sdk.service.serving.PutAiGatewayResponse`. +* [Breaking] Removed `aliases`, `comment`, `data_type`, `dependency_list`, `full_data_type`, `id`, `input_params`, `name`, `properties`, `routine_definition`, `schema`, `securable_kind`, `share`, `share_id`, `storage_location` and `tags` fields for `databricks.sdk.service.sharing.DeltaSharingFunction`. 
+* [Breaking] Removed `access_token_failure`, `allocation_timeout`, `allocation_timeout_node_daemon_not_ready`, `allocation_timeout_no_healthy_clusters`, `allocation_timeout_no_matched_clusters`, `allocation_timeout_no_ready_clusters`, `allocation_timeout_no_unallocated_clusters`, `allocation_timeout_no_warmed_up_clusters`, `aws_inaccessible_kms_key_failure`, `aws_instance_profile_update_failure`, `aws_invalid_key_pair`, `aws_invalid_kms_key_state`, `aws_resource_quota_exceeded`, `azure_packed_deployment_partial_failure`, `bootstrap_timeout_due_to_misconfig`, `budget_policy_limit_enforcement_activated`, `budget_policy_resolution_failure`, `cloud_account_setup_failure`, `cloud_operation_cancelled`, `cloud_provider_instance_not_launched`, `cloud_provider_launch_failure_due_to_misconfig`, `cloud_provider_resource_stockout_due_to_misconfig`, `cluster_operation_throttled`, `cluster_operation_timeout`, `control_plane_request_failure_due_to_misconfig`, `data_access_config_changed`, `disaster_recovery_replication`, `driver_eviction`, `driver_launch_timeout`, `driver_node_unreachable`, `driver_out_of_disk`, `driver_out_of_memory`, `driver_pod_creation_failure`, `driver_unexpected_failure`, `dynamic_spark_conf_size_exceeded`, `eos_spark_image`, `executor_pod_unscheduled`, `gcp_api_rate_quota_exceeded`, `gcp_forbidden`, `gcp_iam_timeout`, `gcp_inaccessible_kms_key_failure`, `gcp_insufficient_capacity`, `gcp_ip_space_exhausted`, `gcp_kms_key_permission_denied`, `gcp_not_found`, `gcp_resource_quota_exceeded`, `gcp_service_account_access_denied`, `gcp_service_account_not_found`, `gcp_subnet_not_ready`, `gcp_trusted_image_projects_violated`, `gke_based_cluster_termination`, `init_container_not_finished`, `instance_pool_max_capacity_reached`, `instance_pool_not_found`, `instance_unreachable_due_to_misconfig`, `internal_capacity_failure`, `invalid_aws_parameter`, `invalid_instance_placement_protocol`, `invalid_worker_image_failure`, `in_penalty_box`, `lazy_allocation_timeout`, 
`maintenance_mode`, `netvisor_setup_timeout`, `no_matched_k8s`, `no_matched_k8s_testing_tag`, `pod_assignment_failure`, `pod_scheduling_failure`, `resource_usage_blocked`, `secret_creation_failure`, `serverless_long_running_terminated`, `spark_image_download_throttled`, `spark_image_not_found`, `ssh_bootstrap_failure`, `storage_download_failure_due_to_misconfig`, `storage_download_failure_slow`, `storage_download_failure_throttled`, `unexpected_pod_recreation`, `user_initiated_vm_termination` and `workspace_update` enum values for `databricks.sdk.service.compute.TerminationReasonCode`. +* [Breaking] Removed `generated_sql_query_too_long_exception` and `missing_sql_query_exception` enum values for `databricks.sdk.service.dashboards.MessageErrorType`. +* [Breaking] Removed `balanced` enum value for `databricks.sdk.service.jobs.PerformanceTarget`. +* [Breaking] Removed `listing_resource` enum value for `databricks.sdk.service.marketplace.FileParentType`. +* [Breaking] Removed `app` enum value for `databricks.sdk.service.marketplace.MarketplaceFileType`. +* [Breaking] Removed `custom` enum value for `databricks.sdk.service.serving.ExternalModelProvider`. diff --git a/databricks/sdk/service/catalog.py b/databricks/sdk/service/catalog.py index 99bae81dd..a790b1b6e 100755 --- a/databricks/sdk/service/catalog.py +++ b/databricks/sdk/service/catalog.py @@ -9471,8 +9471,6 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateWorkspaceBindingsParameters: @dataclass class ValidateCredentialRequest: - """Next ID: 17""" - aws_iam_role: Optional[AwsIamRole] = None """The AWS IAM role configuration""" diff --git a/databricks/sdk/service/compute.py b/databricks/sdk/service/compute.py index b4d0b3394..3810d631c 100755 --- a/databricks/sdk/service/compute.py +++ b/databricks/sdk/service/compute.py @@ -103,8 +103,6 @@ def from_dict(cls, d: Dict[str, Any]) -> AddResponse: @dataclass class Adlsgen2Info: - """A storage location in Adls Gen2""" - destination: str """abfss destination, e.g. 
`abfss://@.dfs.core.windows.net/`.""" @@ -165,8 +163,6 @@ def from_dict(cls, d: Dict[str, Any]) -> AutoScale: @dataclass class AwsAttributes: - """Attributes set during cluster creation which are related to Amazon Web Services.""" - availability: Optional[AwsAvailability] = None """Availability type used for all subsequent nodes past the `first_on_demand` ones. @@ -220,7 +216,9 @@ class AwsAttributes: profile must have previously been added to the Databricks environment by an account administrator. - This feature may only be available to certain customer plans. + This feature may only be available to certain customer plans. + + If this field is omitted, we will pull in the default from the conf if it exists.""" spot_bid_price_percent: Optional[int] = None """The bid price for AWS spot instances, as a percentage of the corresponding instance type's @@ -229,7 +227,10 @@ class AwsAttributes: instances. Similarly, if this field is set to 200, the bid price is twice the price of on-demand `r3.xlarge` instances. If not specified, the default value is 100. When spot instances are requested for this cluster, only spot instances whose bid price percentage matches this field - will be considered. Note that, for safety, we enforce this field to be no more than 10000. + will be considered. Note that, for safety, we enforce this field to be no more than 10000. + + The default value and documentation here should be kept consistent with + CommonConf.defaultSpotBidPricePercent and CommonConf.maxSpotBidPricePercent.""" zone_id: Optional[str] = None """Identifier for the availability zone/datacenter in which the cluster resides. This string will @@ -238,10 +239,8 @@ class AwsAttributes: deployment resides in the "us-east-1" region. This is an optional field at cluster creation, and if not specified, a default zone will be used. 
If the zone specified is "auto", will try to place cluster in a zone with high availability, and will retry placement in a different AZ if - there is not enough capacity. - - The list of available zones as well as the default value can be found by using the `List Zones` - method.""" + there is not enough capacity. The list of available zones as well as the default value can be + found by using the `List Zones` method.""" def as_dict(self) -> dict: """Serializes the AwsAttributes into a dictionary suitable for use as a JSON request body.""" @@ -322,11 +321,10 @@ class AwsAvailability(Enum): @dataclass class AzureAttributes: - """Attributes set during cluster creation which are related to Microsoft Azure.""" - availability: Optional[AzureAvailability] = None """Availability type used for all subsequent nodes past the `first_on_demand` ones. Note: If - `first_on_demand` is zero, this availability type will be used for the entire cluster.""" + `first_on_demand` is zero (which only happens on pool clusters), this availability type will be + used for the entire cluster.""" first_on_demand: Optional[int] = None """The first `first_on_demand` nodes of the cluster will be placed on on-demand instances. This @@ -385,7 +383,8 @@ def from_dict(cls, d: Dict[str, Any]) -> AzureAttributes: class AzureAvailability(Enum): """Availability type used for all subsequent nodes past the `first_on_demand` ones. 
Note: If - `first_on_demand` is zero, this availability type will be used for the entire cluster.""" + `first_on_demand` is zero (which only happens on pool clusters), this availability type will be + used for the entire cluster.""" ON_DEMAND_AZURE = "ON_DEMAND_AZURE" SPOT_AZURE = "SPOT_AZURE" @@ -453,6 +452,7 @@ def from_dict(cls, d: Dict[str, Any]) -> CancelResponse: @dataclass class ChangeClusterOwner: cluster_id: str + """""" owner_username: str """New owner of the cluster_id after this RPC.""" @@ -559,7 +559,6 @@ def from_dict(cls, d: Dict[str, Any]) -> CloneCluster: @dataclass class CloudProviderNodeInfo: status: Optional[List[CloudProviderNodeStatus]] = None - """Status as reported by the cloud provider""" def as_dict(self) -> dict: """Serializes the CloudProviderNodeInfo into a dictionary suitable for use as a JSON request body.""" @@ -699,9 +698,6 @@ def from_dict(cls, d: Dict[str, Any]) -> ClusterAccessControlResponse: @dataclass class ClusterAttributes: - """Common set of attributes set during cluster creation. These attributes cannot be changed over - the lifetime of a cluster.""" - spark_version: str """The Spark version of the cluster, e.g. `3.3.x-scala2.11`. A list of available Spark versions can be retrieved by using the :method:clusters/sparkVersions API call.""" @@ -767,7 +763,6 @@ class ClusterAttributes: doesn’t have UC nor passthrough enabled.""" docker_image: Optional[DockerImage] = None - """Custom docker image BYOC""" driver_instance_pool_id: Optional[str] = None """The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster @@ -775,11 +770,7 @@ class ClusterAttributes: driver_node_type_id: Optional[str] = None """The node type of the Spark driver. Note that this field is optional; if unset, the driver node - type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. 
If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id - and node_type_id take precedence.""" + type will be set as the same value as `node_type_id` defined above.""" enable_elastic_disk: Optional[bool] = None """Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk @@ -873,7 +864,6 @@ class ClusterAttributes: `use_ml_runtime`, and whether `node_type_id` is gpu node or not.""" workload_type: Optional[WorkloadType] = None - """Cluster Attributes showing for clusters workload types.""" def as_dict(self) -> dict: """Serializes the ClusterAttributes into a dictionary suitable for use as a JSON request body.""" @@ -1074,8 +1064,6 @@ def from_dict(cls, d: Dict[str, Any]) -> ClusterCompliance: @dataclass class ClusterDetails: - """Describes all of the metadata about a single Spark cluster in Databricks.""" - autoscale: Optional[AutoScale] = None """Parameters needed in order to automatically scale clusters up and down based on load. Note: autoscaling works best with DB runtime versions 3.0 or later.""" @@ -1122,7 +1110,7 @@ class ClusterDetails: cluster_source: Optional[ClusterSource] = None """Determines whether the cluster was created by a user through the UI, created by the Databricks - Jobs Scheduler, or through an API request.""" + Jobs Scheduler, or through an API request. This is the same as cluster_creator, but read only.""" creator_user_name: Optional[str] = None """Creator user name. The field won't be included in the response if the user has already been @@ -1177,7 +1165,6 @@ class ClusterDetails: - Name: """ docker_image: Optional[DockerImage] = None - """Custom docker image BYOC""" driver: Optional[SparkNode] = None """Node on which the Spark driver resides. The driver node contains the Spark master and the @@ -1189,11 +1176,7 @@ class ClusterDetails: driver_node_type_id: Optional[str] = None """The node type of the Spark driver. 
Note that this field is optional; if unset, the driver node - type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id - and node_type_id take precedence.""" + type will be set as the same value as `node_type_id` defined above.""" enable_elastic_disk: Optional[bool] = None """Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk @@ -1308,8 +1291,9 @@ class ClusterDetails: be retrieved by using the :method:clusters/sparkVersions API call.""" spec: Optional[ClusterSpec] = None - """The spec contains a snapshot of the latest user specified settings that were used to create/edit - the cluster. Note: not included in the response of the ListClusters API.""" + """`spec` contains a snapshot of the field values that were used to create or edit this cluster. + The contents of `spec` can be used in the body of a create cluster request. This field might not + be populated for older clusters. Note: not included in the response of the ListClusters API.""" ssh_public_keys: Optional[List[str]] = None """SSH public key contents that will be added to each Spark node in this cluster. 
The corresponding @@ -1341,7 +1325,6 @@ class ClusterDetails: `use_ml_runtime`, and whether `node_type_id` is gpu node or not.""" workload_type: Optional[WorkloadType] = None - """Cluster Attributes showing for clusters workload types.""" def as_dict(self) -> dict: """Serializes the ClusterDetails into a dictionary suitable for use as a JSON request body.""" @@ -1603,10 +1586,13 @@ def from_dict(cls, d: Dict[str, Any]) -> ClusterDetails: @dataclass class ClusterEvent: cluster_id: str + """""" data_plane_event_details: Optional[DataPlaneEventDetails] = None + """""" details: Optional[EventDetails] = None + """""" timestamp: Optional[int] = None """The timestamp when the event occurred, stored as the number of milliseconds since the Unix @@ -1693,8 +1679,6 @@ def from_dict(cls, d: Dict[str, Any]) -> ClusterLibraryStatuses: @dataclass class ClusterLogConf: - """Cluster log delivery config""" - dbfs: Optional[DbfsStorageInfo] = None """destination needs to be provided. e.g. `{ "dbfs" : { "destination" : "dbfs:/home/cluster_log" } }`""" @@ -1706,7 +1690,7 @@ class ClusterLogConf: write data to the s3 destination.""" volumes: Optional[VolumesStorageInfo] = None - """destination needs to be provided, e.g. `{ "volumes": { "destination": + """destination needs to be provided. e.g. `{ "volumes" : { "destination" : "/Volumes/catalog/schema/volume/cluster_log" } }`""" def as_dict(self) -> dict: @@ -2266,9 +2250,6 @@ class ClusterSource(Enum): @dataclass class ClusterSpec: - """Contains a snapshot of the latest user specified settings that were used to create/edit the - cluster.""" - apply_policy_default_values: Optional[bool] = None """When set to true, fixed and default values from the policy will be used for fields that are omitted. 
When set to false, only fixed values from the policy will be applied.""" @@ -2338,7 +2319,6 @@ class ClusterSpec: doesn’t have UC nor passthrough enabled.""" docker_image: Optional[DockerImage] = None - """Custom docker image BYOC""" driver_instance_pool_id: Optional[str] = None """The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster @@ -2346,11 +2326,7 @@ class ClusterSpec: driver_node_type_id: Optional[str] = None """The node type of the Spark driver. Note that this field is optional; if unset, the driver node - type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id - and node_type_id take precedence.""" + type will be set as the same value as `node_type_id` defined above.""" enable_elastic_disk: Optional[bool] = None """Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk @@ -2458,7 +2434,6 @@ class ClusterSpec: `use_ml_runtime`, and whether `node_type_id` is gpu node or not.""" workload_type: Optional[WorkloadType] = None - """Cluster Attributes showing for clusters workload types.""" def as_dict(self) -> dict: """Serializes the ClusterSpec into a dictionary suitable for use as a JSON request body.""" @@ -2841,7 +2816,6 @@ class CreateCluster: doesn’t have UC nor passthrough enabled.""" docker_image: Optional[DockerImage] = None - """Custom docker image BYOC""" driver_instance_pool_id: Optional[str] = None """The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster @@ -2849,11 +2823,7 @@ class CreateCluster: driver_node_type_id: Optional[str] = None """The node type of the Spark driver. Note that this field is optional; if unset, the driver node - type will be set as the same value as `node_type_id` defined above. 
- - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id - and node_type_id take precedence.""" + type will be set as the same value as `node_type_id` defined above.""" enable_elastic_disk: Optional[bool] = None """Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk @@ -2957,7 +2927,6 @@ class CreateCluster: `use_ml_runtime`, and whether `node_type_id` is gpu node or not.""" workload_type: Optional[WorkloadType] = None - """Cluster Attributes showing for clusters workload types.""" def as_dict(self) -> dict: """Serializes the CreateCluster into a dictionary suitable for use as a JSON request body.""" @@ -3562,12 +3531,16 @@ def from_dict(cls, d: Dict[str, Any]) -> CustomPolicyTag: @dataclass class DataPlaneEventDetails: event_type: Optional[DataPlaneEventDetailsEventType] = None + """""" executor_failures: Optional[int] = None + """""" host_id: Optional[str] = None + """""" timestamp: Optional[int] = None + """""" def as_dict(self) -> dict: """Serializes the DataPlaneEventDetails into a dictionary suitable for use as a JSON request body.""" @@ -3607,6 +3580,7 @@ def from_dict(cls, d: Dict[str, Any]) -> DataPlaneEventDetails: class DataPlaneEventDetailsEventType(Enum): + """""" NODE_BLACKLISTED = "NODE_BLACKLISTED" NODE_EXCLUDED_DECOMMISSIONED = "NODE_EXCLUDED_DECOMMISSIONED" @@ -3652,8 +3626,6 @@ class DataSecurityMode(Enum): @dataclass class DbfsStorageInfo: - """A storage location in DBFS""" - destination: str """dbfs destination, e.g. `dbfs:/my/path`""" @@ -4070,8 +4042,7 @@ def from_dict(cls, d: Dict[str, Any]) -> DockerImage: class EbsVolumeType(Enum): - """All EBS volume types that Databricks supports. 
See https://aws.amazon.com/ebs/details/ for - details.""" + """The type of EBS volumes that will be launched with this cluster.""" GENERAL_PURPOSE_SSD = "GENERAL_PURPOSE_SSD" THROUGHPUT_OPTIMIZED_HDD = "THROUGHPUT_OPTIMIZED_HDD" @@ -4155,7 +4126,6 @@ class EditCluster: doesn’t have UC nor passthrough enabled.""" docker_image: Optional[DockerImage] = None - """Custom docker image BYOC""" driver_instance_pool_id: Optional[str] = None """The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster @@ -4163,11 +4133,7 @@ class EditCluster: driver_node_type_id: Optional[str] = None """The node type of the Spark driver. Note that this field is optional; if unset, the driver node - type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id - and node_type_id take precedence.""" + type will be set as the same value as `node_type_id` defined above.""" enable_elastic_disk: Optional[bool] = None """Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk @@ -4271,7 +4237,6 @@ class EditCluster: `use_ml_runtime`, and whether `node_type_id` is gpu node or not.""" workload_type: Optional[WorkloadType] = None - """Cluster Attributes showing for clusters workload types.""" def as_dict(self) -> dict: """Serializes the EditCluster into a dictionary suitable for use as a JSON request body.""" @@ -4831,6 +4796,7 @@ class EventDetails: """The current number of nodes in the cluster.""" did_not_expand_reason: Optional[str] = None + """""" disk_size: Optional[int] = None """Current disk size in bytes""" @@ -4842,6 +4808,7 @@ class EventDetails: """Whether or not a blocklisted node should be terminated. 
For ClusterEventType NODE_BLACKLISTED.""" free_space: Optional[int] = None + """""" init_scripts: Optional[InitScriptEventDetails] = None """List of global and cluster init scripts associated with this cluster event.""" @@ -5036,14 +5003,12 @@ class EventType(Enum): @dataclass class GcpAttributes: - """Attributes set during cluster creation which are related to GCP.""" - availability: Optional[GcpAvailability] = None - """This field determines whether the spark executors will be scheduled to run on preemptible VMs, - on-demand VMs, or preemptible VMs with a fallback to on-demand VMs if the former is unavailable.""" + """This field determines whether the instance pool will contain preemptible VMs, on-demand VMs, or + preemptible VMs with a fallback to on-demand VMs if the former is unavailable.""" boot_disk_size: Optional[int] = None - """Boot disk size in GB""" + """boot disk size in GB""" google_service_account: Optional[str] = None """If provided, the cluster will impersonate the google service account when accessing gcloud @@ -5060,12 +5025,12 @@ class GcpAttributes: use_preemptible_executors: Optional[bool] = None """This field determines whether the spark executors will be scheduled to run on preemptible VMs (when set to true) versus standard compute engine VMs (when set to false; default). Note: Soon - to be deprecated, use the 'availability' field instead.""" + to be deprecated, use the availability field instead.""" zone_id: Optional[str] = None """Identifier for the availability zone in which the cluster resides. This can be one of the following: - "HA" => High availability, spread nodes across availability zones for a Databricks - deployment region [default]. - "AUTO" => Databricks picks an availability zone to schedule the + deployment region [default] - "AUTO" => Databricks picks an availability zone to schedule the cluster on. 
- A GCP availability zone => Pick One of the available zones for (machine type + region) from https://cloud.google.com/compute/docs/regions-zones.""" @@ -5127,8 +5092,6 @@ class GcpAvailability(Enum): @dataclass class GcsStorageInfo: - """A storage location in Google Cloud Platform's GCS""" - destination: str """GCS destination/URI, e.g. `gs://my-bucket/some-prefix`""" @@ -5316,6 +5279,7 @@ def from_dict(cls, d: Dict[str, Any]) -> GetEvents: class GetEventsOrder(Enum): + """The order to list events in; either "ASC" or "DESC". Defaults to "DESC".""" ASC = "ASC" DESC = "DESC" @@ -5324,6 +5288,7 @@ class GetEventsOrder(Enum): @dataclass class GetEventsResponse: events: Optional[List[ClusterEvent]] = None + """""" next_page: Optional[GetEvents] = None """The parameters required to retrieve the next page of events. Omitted if there are no more events @@ -5911,17 +5876,13 @@ def from_dict(cls, d: Dict[str, Any]) -> GlobalInitScriptUpdateRequest: @dataclass class InitScriptEventDetails: cluster: Optional[List[InitScriptInfoAndExecutionDetails]] = None - """The cluster scoped init scripts associated with this cluster event.""" + """The cluster scoped init scripts associated with this cluster event""" global_: Optional[List[InitScriptInfoAndExecutionDetails]] = None - """The global init scripts associated with this cluster event.""" + """The global init scripts associated with this cluster event""" reported_for_node: Optional[str] = None - """The private ip of the node we are reporting init script execution details for (we will select - the execution details from only one node rather than reporting the execution details from every - node to keep these event details small) - - This should only be defined for the INIT_SCRIPTS_FINISHED event""" + """The private ip address of the node where the init scripts were run.""" def as_dict(self) -> dict: """Serializes the InitScriptEventDetails into a dictionary suitable for use as a JSON request body.""" @@ -5955,12 +5916,54 @@ def 
from_dict(cls, d: Dict[str, Any]) -> InitScriptEventDetails: ) -class InitScriptExecutionDetailsInitScriptExecutionStatus(Enum): - """Result of attempted script execution""" +@dataclass +class InitScriptExecutionDetails: + error_message: Optional[str] = None + """Additional details regarding errors.""" + + execution_duration_seconds: Optional[int] = None + """The duration of the script execution in seconds.""" + + status: Optional[InitScriptExecutionDetailsStatus] = None + """The current status of the script""" + + def as_dict(self) -> dict: + """Serializes the InitScriptExecutionDetails into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.error_message is not None: + body["error_message"] = self.error_message + if self.execution_duration_seconds is not None: + body["execution_duration_seconds"] = self.execution_duration_seconds + if self.status is not None: + body["status"] = self.status.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the InitScriptExecutionDetails into a shallow dictionary of its immediate attributes.""" + body = {} + if self.error_message is not None: + body["error_message"] = self.error_message + if self.execution_duration_seconds is not None: + body["execution_duration_seconds"] = self.execution_duration_seconds + if self.status is not None: + body["status"] = self.status + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> InitScriptExecutionDetails: + """Deserializes the InitScriptExecutionDetails from a dictionary.""" + return cls( + error_message=d.get("error_message", None), + execution_duration_seconds=d.get("execution_duration_seconds", None), + status=_enum(d, "status", InitScriptExecutionDetailsStatus), + ) + + +class InitScriptExecutionDetailsStatus(Enum): + """The current status of the script""" FAILED_EXECUTION = "FAILED_EXECUTION" FAILED_FETCH = "FAILED_FETCH" - FUSE_MOUNT_FAILED = "FUSE_MOUNT_FAILED" NOT_EXECUTED = "NOT_EXECUTED" SKIPPED = "SKIPPED" 
SUCCEEDED = "SUCCEEDED" @@ -5969,35 +5972,34 @@ class InitScriptExecutionDetailsInitScriptExecutionStatus(Enum): @dataclass class InitScriptInfo: - """Config for an individual init script Next ID: 11""" - abfss: Optional[Adlsgen2Info] = None - """destination needs to be provided, e.g. - `abfss://@.dfs.core.windows.net/`""" + """destination needs to be provided. e.g. `{ "abfss" : { "destination" : + "abfss://@.dfs.core.windows.net/" } }""" dbfs: Optional[DbfsStorageInfo] = None - """destination needs to be provided. e.g. `{ "dbfs": { "destination" : "dbfs:/home/cluster_log" } + """destination needs to be provided. e.g. `{ "dbfs" : { "destination" : "dbfs:/home/cluster_log" } }`""" file: Optional[LocalFileInfo] = None - """destination needs to be provided, e.g. `{ "file": { "destination": "file:/my/local/file.sh" } }`""" + """destination needs to be provided. e.g. `{ "file" : { "destination" : "file:/my/local/file.sh" } + }`""" gcs: Optional[GcsStorageInfo] = None - """destination needs to be provided, e.g. `{ "gcs": { "destination": "gs://my-bucket/file.sh" } }`""" + """destination needs to be provided. e.g. `{ "gcs": { "destination": "gs://my-bucket/file.sh" } }`""" s3: Optional[S3StorageInfo] = None - """destination and either the region or endpoint need to be provided. e.g. `{ \"s3\": { - \"destination\": \"s3://cluster_log_bucket/prefix\", \"region\": \"us-west-2\" } }` Cluster iam - role is used to access s3, please make sure the cluster iam role in `instance_profile_arn` has - permission to write data to the s3 destination.""" + """destination and either the region or endpoint need to be provided. e.g. `{ "s3": { "destination" + : "s3://cluster_log_bucket/prefix", "region" : "us-west-2" } }` Cluster iam role is used to + access s3, please make sure the cluster iam role in `instance_profile_arn` has permission to + write data to the s3 destination.""" volumes: Optional[VolumesStorageInfo] = None - """destination needs to be provided. e.g. 
`{ \"volumes\" : { \"destination\" : - \"/Volumes/my-init.sh\" } }`""" + """destination needs to be provided. e.g. `{ "volumes" : { "destination" : "/Volumes/my-init.sh" } + }`""" workspace: Optional[WorkspaceStorageInfo] = None - """destination needs to be provided, e.g. `{ "workspace": { "destination": - "/cluster-init-scripts/setup-datadog.sh" } }`""" + """destination needs to be provided. e.g. `{ "workspace" : { "destination" : + "/Users/user1@databricks.com/my-init.sh" } }`""" def as_dict(self) -> dict: """Serializes the InitScriptInfo into a dictionary suitable for use as a JSON request body.""" @@ -6053,109 +6055,36 @@ def from_dict(cls, d: Dict[str, Any]) -> InitScriptInfo: @dataclass class InitScriptInfoAndExecutionDetails: - abfss: Optional[Adlsgen2Info] = None - """destination needs to be provided, e.g. - `abfss://@.dfs.core.windows.net/`""" - - dbfs: Optional[DbfsStorageInfo] = None - """destination needs to be provided. e.g. `{ "dbfs": { "destination" : "dbfs:/home/cluster_log" } - }`""" - - error_message: Optional[str] = None - """Additional details regarding errors (such as a file not found message if the status is - FAILED_FETCH). This field should only be used to provide *additional* information to the status - field, not duplicate it.""" - - execution_duration_seconds: Optional[int] = None - """The number duration of the script execution in seconds""" - - file: Optional[LocalFileInfo] = None - """destination needs to be provided, e.g. `{ "file": { "destination": "file:/my/local/file.sh" } }`""" - - gcs: Optional[GcsStorageInfo] = None - """destination needs to be provided, e.g. `{ "gcs": { "destination": "gs://my-bucket/file.sh" } }`""" - - s3: Optional[S3StorageInfo] = None - """destination and either the region or endpoint need to be provided. e.g. 
`{ \"s3\": { - \"destination\": \"s3://cluster_log_bucket/prefix\", \"region\": \"us-west-2\" } }` Cluster iam - role is used to access s3, please make sure the cluster iam role in `instance_profile_arn` has - permission to write data to the s3 destination.""" - - status: Optional[InitScriptExecutionDetailsInitScriptExecutionStatus] = None - """The current status of the script""" + execution_details: Optional[InitScriptExecutionDetails] = None + """Details about the script""" - volumes: Optional[VolumesStorageInfo] = None - """destination needs to be provided. e.g. `{ \"volumes\" : { \"destination\" : - \"/Volumes/my-init.sh\" } }`""" - - workspace: Optional[WorkspaceStorageInfo] = None - """destination needs to be provided, e.g. `{ "workspace": { "destination": - "/cluster-init-scripts/setup-datadog.sh" } }`""" + script: Optional[InitScriptInfo] = None + """The script""" def as_dict(self) -> dict: """Serializes the InitScriptInfoAndExecutionDetails into a dictionary suitable for use as a JSON request body.""" body = {} - if self.abfss: - body["abfss"] = self.abfss.as_dict() - if self.dbfs: - body["dbfs"] = self.dbfs.as_dict() - if self.error_message is not None: - body["error_message"] = self.error_message - if self.execution_duration_seconds is not None: - body["execution_duration_seconds"] = self.execution_duration_seconds - if self.file: - body["file"] = self.file.as_dict() - if self.gcs: - body["gcs"] = self.gcs.as_dict() - if self.s3: - body["s3"] = self.s3.as_dict() - if self.status is not None: - body["status"] = self.status.value - if self.volumes: - body["volumes"] = self.volumes.as_dict() - if self.workspace: - body["workspace"] = self.workspace.as_dict() + if self.execution_details: + body["execution_details"] = self.execution_details.as_dict() + if self.script: + body["script"] = self.script.as_dict() return body def as_shallow_dict(self) -> dict: """Serializes the InitScriptInfoAndExecutionDetails into a shallow dictionary of its immediate 
attributes.""" body = {} - if self.abfss: - body["abfss"] = self.abfss - if self.dbfs: - body["dbfs"] = self.dbfs - if self.error_message is not None: - body["error_message"] = self.error_message - if self.execution_duration_seconds is not None: - body["execution_duration_seconds"] = self.execution_duration_seconds - if self.file: - body["file"] = self.file - if self.gcs: - body["gcs"] = self.gcs - if self.s3: - body["s3"] = self.s3 - if self.status is not None: - body["status"] = self.status - if self.volumes: - body["volumes"] = self.volumes - if self.workspace: - body["workspace"] = self.workspace + if self.execution_details: + body["execution_details"] = self.execution_details + if self.script: + body["script"] = self.script return body @classmethod def from_dict(cls, d: Dict[str, Any]) -> InitScriptInfoAndExecutionDetails: """Deserializes the InitScriptInfoAndExecutionDetails from a dictionary.""" return cls( - abfss=_from_dict(d, "abfss", Adlsgen2Info), - dbfs=_from_dict(d, "dbfs", DbfsStorageInfo), - error_message=d.get("error_message", None), - execution_duration_seconds=d.get("execution_duration_seconds", None), - file=_from_dict(d, "file", LocalFileInfo), - gcs=_from_dict(d, "gcs", GcsStorageInfo), - s3=_from_dict(d, "s3", S3StorageInfo), - status=_enum(d, "status", InitScriptExecutionDetailsInitScriptExecutionStatus), - volumes=_from_dict(d, "volumes", VolumesStorageInfo), - workspace=_from_dict(d, "workspace", WorkspaceStorageInfo), + execution_details=_from_dict(d, "execution_details", InitScriptExecutionDetails), + script=_from_dict(d, "script", InitScriptInfo), ) @@ -7185,7 +7114,7 @@ def from_dict(cls, d: Dict[str, Any]) -> ListAllClusterLibraryStatusesResponse: @dataclass class ListAvailableZonesResponse: default_zone: Optional[str] = None - """The availability zone if no ``zone_id`` is provided in the cluster creation request.""" + """The availability zone if no `zone_id` is provided in the cluster creation request.""" zones: Optional[List[str]] = 
None """The list of available zones (e.g., ['us-west-2c', 'us-east-2']).""" @@ -7313,6 +7242,7 @@ def from_dict(cls, d: Dict[str, Any]) -> ListClustersFilterBy: @dataclass class ListClustersResponse: clusters: Optional[List[ClusterDetails]] = None + """""" next_page_token: Optional[str] = None """This field represents the pagination token to retrieve the next page of results. If the value is @@ -7391,12 +7321,15 @@ def from_dict(cls, d: Dict[str, Any]) -> ListClustersSortBy: class ListClustersSortByDirection(Enum): + """The direction to sort by.""" ASC = "ASC" DESC = "DESC" class ListClustersSortByField(Enum): + """The sorting criteria. By default, clusters are sorted by 3 columns from highest to lowest + precedence: cluster state, pinned or unpinned, then cluster name.""" CLUSTER_NAME = "CLUSTER_NAME" DEFAULT = "DEFAULT" @@ -7568,6 +7501,7 @@ class ListSortColumn(Enum): class ListSortOrder(Enum): + """A generic ordering enum for list-based queries.""" ASC = "ASC" DESC = "DESC" @@ -7601,8 +7535,10 @@ def from_dict(cls, d: Dict[str, Any]) -> LocalFileInfo: @dataclass class LogAnalyticsInfo: log_analytics_primary_key: Optional[str] = None + """""" log_analytics_workspace_id: Optional[str] = None + """""" def as_dict(self) -> dict: """Serializes the LogAnalyticsInfo into a dictionary suitable for use as a JSON request body.""" @@ -7633,8 +7569,6 @@ def from_dict(cls, d: Dict[str, Any]) -> LogAnalyticsInfo: @dataclass class LogSyncStatus: - """The log delivery status""" - last_attempted: Optional[int] = None """The timestamp of last attempt. 
If the last attempt fails, `last_exception` will contain the exception in the last attempt.""" @@ -7714,24 +7648,15 @@ def from_dict(cls, d: Dict[str, Any]) -> MavenLibrary: @dataclass class NodeInstanceType: - """This structure embodies the machine type that hosts spark containers Note: this should be an - internal data structure for now It is defined in proto in case we want to send it over the wire - in the future (which is likely)""" - - instance_type_id: str - """Unique identifier across instance types""" + instance_type_id: Optional[str] = None local_disk_size_gb: Optional[int] = None - """Size of the individual local disks attached to this instance (i.e. per local disk).""" local_disks: Optional[int] = None - """Number of local disks that are present on this instance.""" local_nvme_disk_size_gb: Optional[int] = None - """Size of the individual local nvme disks attached to this instance (i.e. per local disk).""" local_nvme_disks: Optional[int] = None - """Number of local nvme disks that are present on this instance.""" def as_dict(self) -> dict: """Serializes the NodeInstanceType into a dictionary suitable for use as a JSON request body.""" @@ -7777,9 +7702,6 @@ def from_dict(cls, d: Dict[str, Any]) -> NodeInstanceType: @dataclass class NodeType: - """A description of a Spark node type including both the dimensions of the node and the instance - type on which it will be hosted.""" - node_type_id: str """Unique identifier for this node type.""" @@ -7797,13 +7719,9 @@ class NodeType: instance_type_id: str """An identifier for the type of hardware that this node runs on, e.g., "r3.2xlarge" in AWS.""" - category: str - """A descriptive category for this node type. Examples include "Memory Optimized" and "Compute - Optimized".""" + category: Optional[str] = None display_order: Optional[int] = None - """An optional hint at the display order of node types in the UI. 
Within a node type category, - lowest numbers come first.""" is_deprecated: Optional[bool] = None """Whether the node type is deprecated. Non-deprecated node types offer greater performance.""" @@ -7813,36 +7731,30 @@ class NodeType: workloads.""" is_graviton: Optional[bool] = None - """Whether this is an Arm-based instance.""" is_hidden: Optional[bool] = None - """Whether this node is hidden from presentation in the UI.""" is_io_cache_enabled: Optional[bool] = None - """Whether this node comes with IO cache enabled by default.""" node_info: Optional[CloudProviderNodeInfo] = None - """A collection of node type info reported by the cloud provider""" node_instance_type: Optional[NodeInstanceType] = None - """The NodeInstanceType object corresponding to instance_type_id""" num_gpus: Optional[int] = None - """Number of GPUs available for this node type.""" photon_driver_capable: Optional[bool] = None photon_worker_capable: Optional[bool] = None support_cluster_tags: Optional[bool] = None - """Whether this node type support cluster tags.""" support_ebs_volumes: Optional[bool] = None - """Whether this node type support EBS volumes. EBS volumes is disabled for node types that we could - place multiple corresponding containers on the same hosting instance.""" support_port_forwarding: Optional[bool] = None - """Whether this node type supports port forwarding.""" + + supports_elastic_disk: Optional[bool] = None + """Indicates if this node type can be used for an instance pool or cluster with elastic disk + enabled. 
This is true for most node types.""" def as_dict(self) -> dict: """Serializes the NodeType into a dictionary suitable for use as a JSON request body.""" @@ -7887,6 +7799,8 @@ def as_dict(self) -> dict: body["support_ebs_volumes"] = self.support_ebs_volumes if self.support_port_forwarding is not None: body["support_port_forwarding"] = self.support_port_forwarding + if self.supports_elastic_disk is not None: + body["supports_elastic_disk"] = self.supports_elastic_disk return body def as_shallow_dict(self) -> dict: @@ -7932,6 +7846,8 @@ def as_shallow_dict(self) -> dict: body["support_ebs_volumes"] = self.support_ebs_volumes if self.support_port_forwarding is not None: body["support_port_forwarding"] = self.support_port_forwarding + if self.supports_elastic_disk is not None: + body["supports_elastic_disk"] = self.supports_elastic_disk return body @classmethod @@ -7958,6 +7874,7 @@ def from_dict(cls, d: Dict[str, Any]) -> NodeType: support_cluster_tags=d.get("support_cluster_tags", None), support_ebs_volumes=d.get("support_ebs_volumes", None), support_port_forwarding=d.get("support_port_forwarding", None), + supports_elastic_disk=d.get("supports_elastic_disk", None), ) @@ -8039,6 +7956,7 @@ def from_dict(cls, d: Dict[str, Any]) -> PermanentDeleteClusterResponse: @dataclass class PinCluster: cluster_id: str + """""" def as_dict(self) -> dict: """Serializes the PinCluster into a dictionary suitable for use as a JSON request body.""" @@ -8440,6 +8358,7 @@ class RestartCluster: """The cluster to be started.""" restart_user: Optional[str] = None + """""" def as_dict(self) -> dict: """Serializes the RestartCluster into a dictionary suitable for use as a JSON request body.""" @@ -8589,6 +8508,13 @@ def from_dict(cls, d: Dict[str, Any]) -> Results: class RuntimeEngine(Enum): + """Determines the cluster's runtime engine, either standard or Photon. + + This field is not compatible with legacy `spark_version` values that contain `-photon-`. 
Remove + `-photon-` from the `spark_version` and set `runtime_engine` to `PHOTON`. + + If left unspecified, the runtime engine defaults to standard unless the spark_version contains + -photon-, in which case Photon will be used.""" NULL = "NULL" PHOTON = "PHOTON" @@ -8597,8 +8523,6 @@ class RuntimeEngine(Enum): @dataclass class S3StorageInfo: - """A storage location in Amazon S3""" - destination: str """S3 destination, e.g. `s3://my-bucket/some-prefix` Note that logs will be delivered using cluster iam role, please make sure you set cluster iam role and the role has write access to the @@ -8686,8 +8610,6 @@ def from_dict(cls, d: Dict[str, Any]) -> S3StorageInfo: @dataclass class SparkNode: - """Describes a specific Spark driver or executor.""" - host_private_ip: Optional[str] = None """The private IP address of the host instance.""" @@ -8707,10 +8629,16 @@ class SparkNode: public_dns: Optional[str] = None """Public DNS address of this node. This address can be used to access the Spark JDBC server on the driver node. To communicate with the JDBC server, traffic must be manually authorized by adding - security group rules to the "worker-unmanaged" security group via the AWS console.""" + security group rules to the "worker-unmanaged" security group via the AWS console. + + Actually it's the public DNS address of the host instance.""" start_timestamp: Optional[int] = None - """The timestamp (in millisecond) when the Spark node is launched.""" + """The timestamp (in millisecond) when the Spark node is launched. + + The start_timestamp is set right before the container is being launched. The timestamp when the + container is placed on the ResourceManager, before its launch and setup by the NodeDaemon. 
This + timestamp is the same as the creation timestamp in the database.""" def as_dict(self) -> dict: """Serializes the SparkNode into a dictionary suitable for use as a JSON request body.""" @@ -8766,8 +8694,6 @@ def from_dict(cls, d: Dict[str, Any]) -> SparkNode: @dataclass class SparkNodeAwsAttributes: - """Attributes specific to AWS for a Spark node.""" - is_spot: Optional[bool] = None """Whether this node is on an Amazon spot instance.""" @@ -8870,12 +8796,7 @@ def from_dict(cls, d: Dict[str, Any]) -> StartClusterResponse: class State(Enum): - """The state of a Cluster. The current allowable state transitions are as follows: - - - `PENDING` -> `RUNNING` - `PENDING` -> `TERMINATING` - `RUNNING` -> `RESIZING` - `RUNNING` -> - `RESTARTING` - `RUNNING` -> `TERMINATING` - `RESTARTING` -> `RUNNING` - `RESTARTING` -> - `TERMINATING` - `RESIZING` -> `RUNNING` - `RESIZING` -> `TERMINATING` - `TERMINATING` -> - `TERMINATED`""" + """Current state of the cluster.""" ERROR = "ERROR" PENDING = "PENDING" @@ -8931,34 +8852,20 @@ def from_dict(cls, d: Dict[str, Any]) -> TerminationReason: class TerminationReasonCode(Enum): - """The status code indicating why the cluster was terminated""" + """status code indicating why the cluster was terminated""" ABUSE_DETECTED = "ABUSE_DETECTED" - ACCESS_TOKEN_FAILURE = "ACCESS_TOKEN_FAILURE" - ALLOCATION_TIMEOUT = "ALLOCATION_TIMEOUT" - ALLOCATION_TIMEOUT_NODE_DAEMON_NOT_READY = "ALLOCATION_TIMEOUT_NODE_DAEMON_NOT_READY" - ALLOCATION_TIMEOUT_NO_HEALTHY_CLUSTERS = "ALLOCATION_TIMEOUT_NO_HEALTHY_CLUSTERS" - ALLOCATION_TIMEOUT_NO_MATCHED_CLUSTERS = "ALLOCATION_TIMEOUT_NO_MATCHED_CLUSTERS" - ALLOCATION_TIMEOUT_NO_READY_CLUSTERS = "ALLOCATION_TIMEOUT_NO_READY_CLUSTERS" - ALLOCATION_TIMEOUT_NO_UNALLOCATED_CLUSTERS = "ALLOCATION_TIMEOUT_NO_UNALLOCATED_CLUSTERS" - ALLOCATION_TIMEOUT_NO_WARMED_UP_CLUSTERS = "ALLOCATION_TIMEOUT_NO_WARMED_UP_CLUSTERS" ATTACH_PROJECT_FAILURE = "ATTACH_PROJECT_FAILURE" AWS_AUTHORIZATION_FAILURE = 
"AWS_AUTHORIZATION_FAILURE" - AWS_INACCESSIBLE_KMS_KEY_FAILURE = "AWS_INACCESSIBLE_KMS_KEY_FAILURE" - AWS_INSTANCE_PROFILE_UPDATE_FAILURE = "AWS_INSTANCE_PROFILE_UPDATE_FAILURE" AWS_INSUFFICIENT_FREE_ADDRESSES_IN_SUBNET_FAILURE = "AWS_INSUFFICIENT_FREE_ADDRESSES_IN_SUBNET_FAILURE" AWS_INSUFFICIENT_INSTANCE_CAPACITY_FAILURE = "AWS_INSUFFICIENT_INSTANCE_CAPACITY_FAILURE" - AWS_INVALID_KEY_PAIR = "AWS_INVALID_KEY_PAIR" - AWS_INVALID_KMS_KEY_STATE = "AWS_INVALID_KMS_KEY_STATE" AWS_MAX_SPOT_INSTANCE_COUNT_EXCEEDED_FAILURE = "AWS_MAX_SPOT_INSTANCE_COUNT_EXCEEDED_FAILURE" AWS_REQUEST_LIMIT_EXCEEDED = "AWS_REQUEST_LIMIT_EXCEEDED" - AWS_RESOURCE_QUOTA_EXCEEDED = "AWS_RESOURCE_QUOTA_EXCEEDED" AWS_UNSUPPORTED_FAILURE = "AWS_UNSUPPORTED_FAILURE" AZURE_BYOK_KEY_PERMISSION_FAILURE = "AZURE_BYOK_KEY_PERMISSION_FAILURE" AZURE_EPHEMERAL_DISK_FAILURE = "AZURE_EPHEMERAL_DISK_FAILURE" AZURE_INVALID_DEPLOYMENT_TEMPLATE = "AZURE_INVALID_DEPLOYMENT_TEMPLATE" AZURE_OPERATION_NOT_ALLOWED_EXCEPTION = "AZURE_OPERATION_NOT_ALLOWED_EXCEPTION" - AZURE_PACKED_DEPLOYMENT_PARTIAL_FAILURE = "AZURE_PACKED_DEPLOYMENT_PARTIAL_FAILURE" AZURE_QUOTA_EXCEEDED_EXCEPTION = "AZURE_QUOTA_EXCEEDED_EXCEPTION" AZURE_RESOURCE_MANAGER_THROTTLING = "AZURE_RESOURCE_MANAGER_THROTTLING" AZURE_RESOURCE_PROVIDER_THROTTLING = "AZURE_RESOURCE_PROVIDER_THROTTLING" @@ -8967,130 +8874,65 @@ class TerminationReasonCode(Enum): AZURE_VNET_CONFIGURATION_FAILURE = "AZURE_VNET_CONFIGURATION_FAILURE" BOOTSTRAP_TIMEOUT = "BOOTSTRAP_TIMEOUT" BOOTSTRAP_TIMEOUT_CLOUD_PROVIDER_EXCEPTION = "BOOTSTRAP_TIMEOUT_CLOUD_PROVIDER_EXCEPTION" - BOOTSTRAP_TIMEOUT_DUE_TO_MISCONFIG = "BOOTSTRAP_TIMEOUT_DUE_TO_MISCONFIG" - BUDGET_POLICY_LIMIT_ENFORCEMENT_ACTIVATED = "BUDGET_POLICY_LIMIT_ENFORCEMENT_ACTIVATED" - BUDGET_POLICY_RESOLUTION_FAILURE = "BUDGET_POLICY_RESOLUTION_FAILURE" - CLOUD_ACCOUNT_SETUP_FAILURE = "CLOUD_ACCOUNT_SETUP_FAILURE" - CLOUD_OPERATION_CANCELLED = "CLOUD_OPERATION_CANCELLED" CLOUD_PROVIDER_DISK_SETUP_FAILURE = 
"CLOUD_PROVIDER_DISK_SETUP_FAILURE" - CLOUD_PROVIDER_INSTANCE_NOT_LAUNCHED = "CLOUD_PROVIDER_INSTANCE_NOT_LAUNCHED" CLOUD_PROVIDER_LAUNCH_FAILURE = "CLOUD_PROVIDER_LAUNCH_FAILURE" - CLOUD_PROVIDER_LAUNCH_FAILURE_DUE_TO_MISCONFIG = "CLOUD_PROVIDER_LAUNCH_FAILURE_DUE_TO_MISCONFIG" CLOUD_PROVIDER_RESOURCE_STOCKOUT = "CLOUD_PROVIDER_RESOURCE_STOCKOUT" - CLOUD_PROVIDER_RESOURCE_STOCKOUT_DUE_TO_MISCONFIG = "CLOUD_PROVIDER_RESOURCE_STOCKOUT_DUE_TO_MISCONFIG" CLOUD_PROVIDER_SHUTDOWN = "CLOUD_PROVIDER_SHUTDOWN" - CLUSTER_OPERATION_THROTTLED = "CLUSTER_OPERATION_THROTTLED" - CLUSTER_OPERATION_TIMEOUT = "CLUSTER_OPERATION_TIMEOUT" COMMUNICATION_LOST = "COMMUNICATION_LOST" CONTAINER_LAUNCH_FAILURE = "CONTAINER_LAUNCH_FAILURE" CONTROL_PLANE_REQUEST_FAILURE = "CONTROL_PLANE_REQUEST_FAILURE" - CONTROL_PLANE_REQUEST_FAILURE_DUE_TO_MISCONFIG = "CONTROL_PLANE_REQUEST_FAILURE_DUE_TO_MISCONFIG" DATABASE_CONNECTION_FAILURE = "DATABASE_CONNECTION_FAILURE" - DATA_ACCESS_CONFIG_CHANGED = "DATA_ACCESS_CONFIG_CHANGED" DBFS_COMPONENT_UNHEALTHY = "DBFS_COMPONENT_UNHEALTHY" - DISASTER_RECOVERY_REPLICATION = "DISASTER_RECOVERY_REPLICATION" DOCKER_IMAGE_PULL_FAILURE = "DOCKER_IMAGE_PULL_FAILURE" - DRIVER_EVICTION = "DRIVER_EVICTION" - DRIVER_LAUNCH_TIMEOUT = "DRIVER_LAUNCH_TIMEOUT" - DRIVER_NODE_UNREACHABLE = "DRIVER_NODE_UNREACHABLE" - DRIVER_OUT_OF_DISK = "DRIVER_OUT_OF_DISK" - DRIVER_OUT_OF_MEMORY = "DRIVER_OUT_OF_MEMORY" - DRIVER_POD_CREATION_FAILURE = "DRIVER_POD_CREATION_FAILURE" - DRIVER_UNEXPECTED_FAILURE = "DRIVER_UNEXPECTED_FAILURE" DRIVER_UNREACHABLE = "DRIVER_UNREACHABLE" DRIVER_UNRESPONSIVE = "DRIVER_UNRESPONSIVE" - DYNAMIC_SPARK_CONF_SIZE_EXCEEDED = "DYNAMIC_SPARK_CONF_SIZE_EXCEEDED" - EOS_SPARK_IMAGE = "EOS_SPARK_IMAGE" EXECUTION_COMPONENT_UNHEALTHY = "EXECUTION_COMPONENT_UNHEALTHY" - EXECUTOR_POD_UNSCHEDULED = "EXECUTOR_POD_UNSCHEDULED" - GCP_API_RATE_QUOTA_EXCEEDED = "GCP_API_RATE_QUOTA_EXCEEDED" - GCP_FORBIDDEN = "GCP_FORBIDDEN" - GCP_IAM_TIMEOUT = "GCP_IAM_TIMEOUT" - 
GCP_INACCESSIBLE_KMS_KEY_FAILURE = "GCP_INACCESSIBLE_KMS_KEY_FAILURE" - GCP_INSUFFICIENT_CAPACITY = "GCP_INSUFFICIENT_CAPACITY" - GCP_IP_SPACE_EXHAUSTED = "GCP_IP_SPACE_EXHAUSTED" - GCP_KMS_KEY_PERMISSION_DENIED = "GCP_KMS_KEY_PERMISSION_DENIED" - GCP_NOT_FOUND = "GCP_NOT_FOUND" GCP_QUOTA_EXCEEDED = "GCP_QUOTA_EXCEEDED" - GCP_RESOURCE_QUOTA_EXCEEDED = "GCP_RESOURCE_QUOTA_EXCEEDED" - GCP_SERVICE_ACCOUNT_ACCESS_DENIED = "GCP_SERVICE_ACCOUNT_ACCESS_DENIED" GCP_SERVICE_ACCOUNT_DELETED = "GCP_SERVICE_ACCOUNT_DELETED" - GCP_SERVICE_ACCOUNT_NOT_FOUND = "GCP_SERVICE_ACCOUNT_NOT_FOUND" - GCP_SUBNET_NOT_READY = "GCP_SUBNET_NOT_READY" - GCP_TRUSTED_IMAGE_PROJECTS_VIOLATED = "GCP_TRUSTED_IMAGE_PROJECTS_VIOLATED" - GKE_BASED_CLUSTER_TERMINATION = "GKE_BASED_CLUSTER_TERMINATION" GLOBAL_INIT_SCRIPT_FAILURE = "GLOBAL_INIT_SCRIPT_FAILURE" HIVE_METASTORE_PROVISIONING_FAILURE = "HIVE_METASTORE_PROVISIONING_FAILURE" IMAGE_PULL_PERMISSION_DENIED = "IMAGE_PULL_PERMISSION_DENIED" INACTIVITY = "INACTIVITY" - INIT_CONTAINER_NOT_FINISHED = "INIT_CONTAINER_NOT_FINISHED" INIT_SCRIPT_FAILURE = "INIT_SCRIPT_FAILURE" INSTANCE_POOL_CLUSTER_FAILURE = "INSTANCE_POOL_CLUSTER_FAILURE" - INSTANCE_POOL_MAX_CAPACITY_REACHED = "INSTANCE_POOL_MAX_CAPACITY_REACHED" - INSTANCE_POOL_NOT_FOUND = "INSTANCE_POOL_NOT_FOUND" INSTANCE_UNREACHABLE = "INSTANCE_UNREACHABLE" - INSTANCE_UNREACHABLE_DUE_TO_MISCONFIG = "INSTANCE_UNREACHABLE_DUE_TO_MISCONFIG" - INTERNAL_CAPACITY_FAILURE = "INTERNAL_CAPACITY_FAILURE" INTERNAL_ERROR = "INTERNAL_ERROR" INVALID_ARGUMENT = "INVALID_ARGUMENT" - INVALID_AWS_PARAMETER = "INVALID_AWS_PARAMETER" - INVALID_INSTANCE_PLACEMENT_PROTOCOL = "INVALID_INSTANCE_PLACEMENT_PROTOCOL" INVALID_SPARK_IMAGE = "INVALID_SPARK_IMAGE" - INVALID_WORKER_IMAGE_FAILURE = "INVALID_WORKER_IMAGE_FAILURE" - IN_PENALTY_BOX = "IN_PENALTY_BOX" IP_EXHAUSTION_FAILURE = "IP_EXHAUSTION_FAILURE" JOB_FINISHED = "JOB_FINISHED" K8S_AUTOSCALING_FAILURE = "K8S_AUTOSCALING_FAILURE" K8S_DBR_CLUSTER_LAUNCH_TIMEOUT = 
"K8S_DBR_CLUSTER_LAUNCH_TIMEOUT" - LAZY_ALLOCATION_TIMEOUT = "LAZY_ALLOCATION_TIMEOUT" - MAINTENANCE_MODE = "MAINTENANCE_MODE" METASTORE_COMPONENT_UNHEALTHY = "METASTORE_COMPONENT_UNHEALTHY" NEPHOS_RESOURCE_MANAGEMENT = "NEPHOS_RESOURCE_MANAGEMENT" - NETVISOR_SETUP_TIMEOUT = "NETVISOR_SETUP_TIMEOUT" NETWORK_CONFIGURATION_FAILURE = "NETWORK_CONFIGURATION_FAILURE" NFS_MOUNT_FAILURE = "NFS_MOUNT_FAILURE" - NO_MATCHED_K8S = "NO_MATCHED_K8S" - NO_MATCHED_K8S_TESTING_TAG = "NO_MATCHED_K8S_TESTING_TAG" NPIP_TUNNEL_SETUP_FAILURE = "NPIP_TUNNEL_SETUP_FAILURE" NPIP_TUNNEL_TOKEN_FAILURE = "NPIP_TUNNEL_TOKEN_FAILURE" - POD_ASSIGNMENT_FAILURE = "POD_ASSIGNMENT_FAILURE" - POD_SCHEDULING_FAILURE = "POD_SCHEDULING_FAILURE" REQUEST_REJECTED = "REQUEST_REJECTED" REQUEST_THROTTLED = "REQUEST_THROTTLED" - RESOURCE_USAGE_BLOCKED = "RESOURCE_USAGE_BLOCKED" - SECRET_CREATION_FAILURE = "SECRET_CREATION_FAILURE" SECRET_RESOLUTION_ERROR = "SECRET_RESOLUTION_ERROR" SECURITY_DAEMON_REGISTRATION_EXCEPTION = "SECURITY_DAEMON_REGISTRATION_EXCEPTION" SELF_BOOTSTRAP_FAILURE = "SELF_BOOTSTRAP_FAILURE" - SERVERLESS_LONG_RUNNING_TERMINATED = "SERVERLESS_LONG_RUNNING_TERMINATED" SKIPPED_SLOW_NODES = "SKIPPED_SLOW_NODES" SLOW_IMAGE_DOWNLOAD = "SLOW_IMAGE_DOWNLOAD" SPARK_ERROR = "SPARK_ERROR" SPARK_IMAGE_DOWNLOAD_FAILURE = "SPARK_IMAGE_DOWNLOAD_FAILURE" - SPARK_IMAGE_DOWNLOAD_THROTTLED = "SPARK_IMAGE_DOWNLOAD_THROTTLED" - SPARK_IMAGE_NOT_FOUND = "SPARK_IMAGE_NOT_FOUND" SPARK_STARTUP_FAILURE = "SPARK_STARTUP_FAILURE" SPOT_INSTANCE_TERMINATION = "SPOT_INSTANCE_TERMINATION" - SSH_BOOTSTRAP_FAILURE = "SSH_BOOTSTRAP_FAILURE" STORAGE_DOWNLOAD_FAILURE = "STORAGE_DOWNLOAD_FAILURE" - STORAGE_DOWNLOAD_FAILURE_DUE_TO_MISCONFIG = "STORAGE_DOWNLOAD_FAILURE_DUE_TO_MISCONFIG" - STORAGE_DOWNLOAD_FAILURE_SLOW = "STORAGE_DOWNLOAD_FAILURE_SLOW" - STORAGE_DOWNLOAD_FAILURE_THROTTLED = "STORAGE_DOWNLOAD_FAILURE_THROTTLED" STS_CLIENT_SETUP_FAILURE = "STS_CLIENT_SETUP_FAILURE" SUBNET_EXHAUSTED_FAILURE = 
"SUBNET_EXHAUSTED_FAILURE" TEMPORARILY_UNAVAILABLE = "TEMPORARILY_UNAVAILABLE" TRIAL_EXPIRED = "TRIAL_EXPIRED" UNEXPECTED_LAUNCH_FAILURE = "UNEXPECTED_LAUNCH_FAILURE" - UNEXPECTED_POD_RECREATION = "UNEXPECTED_POD_RECREATION" UNKNOWN = "UNKNOWN" UNSUPPORTED_INSTANCE_TYPE = "UNSUPPORTED_INSTANCE_TYPE" UPDATE_INSTANCE_PROFILE_FAILURE = "UPDATE_INSTANCE_PROFILE_FAILURE" - USER_INITIATED_VM_TERMINATION = "USER_INITIATED_VM_TERMINATION" USER_REQUEST = "USER_REQUEST" WORKER_SETUP_FAILURE = "WORKER_SETUP_FAILURE" WORKSPACE_CANCELLED_ERROR = "WORKSPACE_CANCELLED_ERROR" WORKSPACE_CONFIGURATION_ERROR = "WORKSPACE_CONFIGURATION_ERROR" - WORKSPACE_UPDATE = "WORKSPACE_UPDATE" class TerminationReasonType(Enum): @@ -9155,6 +8997,7 @@ def from_dict(cls, d: Dict[str, Any]) -> UninstallLibrariesResponse: @dataclass class UnpinCluster: cluster_id: str + """""" def as_dict(self) -> dict: """Serializes the UnpinCluster into a dictionary suitable for use as a JSON request body.""" @@ -9200,18 +9043,10 @@ class UpdateCluster: """ID of the cluster.""" update_mask: str - """Used to specify which cluster attributes and size fields to update. See - https://google.aip.dev/161 for more details. - - The field mask must be a single string, with multiple fields separated by commas (no spaces). - The field path is relative to the resource object, using a dot (`.`) to navigate sub-fields - (e.g., `author.given_name`). Specification of elements in sequence or map fields is not allowed, - as only the entire collection field can be specified. Field names must exactly match the - resource field names. - - A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the - fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the - API changes in the future.""" + """Specifies which fields of the cluster will be updated. This is required in the POST request. The + update mask should be supplied as a single string. 
To specify multiple fields, separate them + with commas (no spaces). To delete a field from a cluster configuration, add it to the + `update_mask` string but omit it from the `cluster` object.""" cluster: Optional[UpdateClusterResource] = None """The cluster to be updated.""" @@ -9315,7 +9150,6 @@ class UpdateClusterResource: doesn’t have UC nor passthrough enabled.""" docker_image: Optional[DockerImage] = None - """Custom docker image BYOC""" driver_instance_pool_id: Optional[str] = None """The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster @@ -9323,11 +9157,7 @@ class UpdateClusterResource: driver_node_type_id: Optional[str] = None """The node type of the Spark driver. Note that this field is optional; if unset, the driver node - type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id - and node_type_id take precedence.""" + type will be set as the same value as `node_type_id` defined above.""" enable_elastic_disk: Optional[bool] = None """Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk @@ -9435,7 +9265,6 @@ class UpdateClusterResource: `use_ml_runtime`, and whether `node_type_id` is gpu node or not.""" workload_type: Optional[WorkloadType] = None - """Cluster Attributes showing for clusters workload types.""" def as_dict(self) -> dict: """Serializes the UpdateClusterResource into a dictionary suitable for use as a JSON request body.""" @@ -9637,11 +9466,8 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateResponse: @dataclass class VolumesStorageInfo: - """A storage location back by UC Volumes.""" - destination: str - """UC Volumes destination, e.g. 
`/Volumes/catalog/schema/vol1/init-scripts/setup-datadog.sh` or - `dbfs:/Volumes/catalog/schema/vol1/init-scripts/setup-datadog.sh`""" + """Unity Catalog volumes file destination, e.g. `/Volumes/catalog/schema/volume/dir/file`""" def as_dict(self) -> dict: """Serializes the VolumesStorageInfo into a dictionary suitable for use as a JSON request body.""" @@ -9665,8 +9491,6 @@ def from_dict(cls, d: Dict[str, Any]) -> VolumesStorageInfo: @dataclass class WorkloadType: - """Cluster Attributes showing for clusters workload types.""" - clients: ClientsTypes """defined what type of clients can use the cluster. E.g. Notebooks, Jobs""" @@ -9692,10 +9516,8 @@ def from_dict(cls, d: Dict[str, Any]) -> WorkloadType: @dataclass class WorkspaceStorageInfo: - """A storage location in Workspace Filesystem (WSFS)""" - destination: str - """wsfs destination, e.g. `workspace:/cluster-init-scripts/setup-datadog.sh`""" + """workspace files destination, e.g. `/Users/user1@databricks.com/my-init.sh`""" def as_dict(self) -> dict: """Serializes the WorkspaceStorageInfo into a dictionary suitable for use as a JSON request body.""" @@ -10149,6 +9971,7 @@ def change_owner(self, cluster_id: str, owner_username: str): `owner_username`. :param cluster_id: str + :param owner_username: str New owner of the cluster_id after this RPC. @@ -10204,11 +10027,8 @@ def create( """Create new cluster. Creates a new Spark cluster. This method will acquire new instances from the cloud provider if - necessary. This method is asynchronous; the returned ``cluster_id`` can be used to poll the cluster - status. When this method returns, the cluster will be in a ``PENDING`` state. The cluster will be - usable once it enters a ``RUNNING`` state. Note: Databricks may not be able to acquire some of the - requested nodes, due to cloud provider limitations (account limits, spot price, etc.) or transient - network issues. + necessary. 
Note: Databricks may not be able to acquire some of the requested nodes, due to cloud + provider limitations (account limits, spot price, etc.) or transient network issues. If Databricks acquires at least 85% of the requested on-demand nodes, cluster creation will succeed. Otherwise the cluster will terminate with an informative error message. @@ -10281,17 +10101,12 @@ def create( standard clusters. * `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesn’t have UC nor passthrough enabled. :param docker_image: :class:`DockerImage` (optional) - Custom docker image BYOC :param driver_instance_pool_id: str (optional) The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not assigned. :param driver_node_type_id: str (optional) The node type of the Spark driver. Note that this field is optional; if unset, the driver node type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id and - node_type_id take precedence. :param enable_elastic_disk: bool (optional) Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk space when its Spark workers are running low on disk space. This feature requires specific AWS permissions @@ -10378,7 +10193,6 @@ def create( `effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not. :param workload_type: :class:`WorkloadType` (optional) - Cluster Attributes showing for clusters workload types. :returns: Long-running operation waiter for :class:`ClusterDetails`. @@ -10673,17 +10487,12 @@ def edit( standard clusters. 
* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesn’t have UC nor passthrough enabled. :param docker_image: :class:`DockerImage` (optional) - Custom docker image BYOC :param driver_instance_pool_id: str (optional) The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not assigned. :param driver_node_type_id: str (optional) The node type of the Spark driver. Note that this field is optional; if unset, the driver node type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id and - node_type_id take precedence. :param enable_elastic_disk: bool (optional) Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk space when its Spark workers are running low on disk space. This feature requires specific AWS permissions @@ -10770,7 +10579,6 @@ def edit( `effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not. :param workload_type: :class:`WorkloadType` (optional) - Cluster Attributes showing for clusters workload types. :returns: Long-running operation waiter for :class:`ClusterDetails`. @@ -10933,7 +10741,8 @@ def events( """List cluster activity events. Retrieves a list of events about the activity of a cluster. This API is paginated. If there are more - events to read, the response includes all the parameters necessary to request the next page of events. + events to read, the response includes all the nparameters necessary to request the next page of + events. :param cluster_id: str The ID of the cluster to retrieve events about. 
@@ -11152,6 +10961,7 @@ def pin(self, cluster_id: str): cluster that is already pinned will have no effect. This API can only be called by workspace admins. :param cluster_id: str + """ @@ -11228,6 +11038,7 @@ def restart(self, cluster_id: str, *, restart_user: Optional[str] = None) -> Wai :param cluster_id: str The cluster to be started. :param restart_user: str (optional) + :returns: Long-running operation waiter for :class:`ClusterDetails`. @@ -11297,10 +11108,11 @@ def start(self, cluster_id: str) -> Wait[ClusterDetails]: """Start terminated cluster. Starts a terminated Spark cluster with the supplied ID. This works similar to `createCluster` except: - - The previous cluster id and attributes are preserved. - The cluster starts with the last specified - cluster size. - If the previous cluster was an autoscaling cluster, the current cluster starts with - the minimum number of nodes. - If the cluster is not currently in a ``TERMINATED`` state, nothing will - happen. - Clusters launched to run a job cannot be started. + + * The previous cluster id and attributes are preserved. * The cluster starts with the last specified + cluster size. * If the previous cluster was an autoscaling cluster, the current cluster starts with + the minimum number of nodes. * If the cluster is not currently in a `TERMINATED` state, nothing will + happen. * Clusters launched to run a job cannot be started. :param cluster_id: str The cluster to be started. @@ -11333,6 +11145,7 @@ def unpin(self, cluster_id: str): admins. :param cluster_id: str + """ @@ -11363,18 +11176,10 @@ def update( :param cluster_id: str ID of the cluster. :param update_mask: str - Used to specify which cluster attributes and size fields to update. See https://google.aip.dev/161 - for more details. - - The field mask must be a single string, with multiple fields separated by commas (no spaces). The - field path is relative to the resource object, using a dot (`.`) to navigate sub-fields (e.g., - `author.given_name`). 
Specification of elements in sequence or map fields is not allowed, as only - the entire collection field can be specified. Field names must exactly match the resource field - names. - - A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the - fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the API - changes in the future. + Specifies which fields of the cluster will be updated. This is required in the POST request. The + update mask should be supplied as a single string. To specify multiple fields, separate them with + commas (no spaces). To delete a field from a cluster configuration, add it to the `update_mask` + string but omit it from the `cluster` object. :param cluster: :class:`UpdateClusterResource` (optional) The cluster to be updated. diff --git a/databricks/sdk/service/dashboards.py b/databricks/sdk/service/dashboards.py index a85c0269b..be50febad 100755 --- a/databricks/sdk/service/dashboards.py +++ b/databricks/sdk/service/dashboards.py @@ -1082,7 +1082,6 @@ class MessageErrorType(Enum): FUNCTION_ARGUMENTS_INVALID_JSON_EXCEPTION = "FUNCTION_ARGUMENTS_INVALID_JSON_EXCEPTION" FUNCTION_ARGUMENTS_INVALID_TYPE_EXCEPTION = "FUNCTION_ARGUMENTS_INVALID_TYPE_EXCEPTION" FUNCTION_CALL_MISSING_PARAMETER_EXCEPTION = "FUNCTION_CALL_MISSING_PARAMETER_EXCEPTION" - GENERATED_SQL_QUERY_TOO_LONG_EXCEPTION = "GENERATED_SQL_QUERY_TOO_LONG_EXCEPTION" GENERIC_CHAT_COMPLETION_EXCEPTION = "GENERIC_CHAT_COMPLETION_EXCEPTION" GENERIC_CHAT_COMPLETION_SERVICE_EXCEPTION = "GENERIC_CHAT_COMPLETION_SERVICE_EXCEPTION" GENERIC_SQL_EXEC_API_CALL_EXCEPTION = "GENERIC_SQL_EXEC_API_CALL_EXCEPTION" @@ -1097,7 +1096,6 @@ class MessageErrorType(Enum): MESSAGE_CANCELLED_WHILE_EXECUTING_EXCEPTION = "MESSAGE_CANCELLED_WHILE_EXECUTING_EXCEPTION" MESSAGE_DELETED_WHILE_EXECUTING_EXCEPTION = "MESSAGE_DELETED_WHILE_EXECUTING_EXCEPTION" MESSAGE_UPDATED_WHILE_EXECUTING_EXCEPTION = 
"MESSAGE_UPDATED_WHILE_EXECUTING_EXCEPTION" - MISSING_SQL_QUERY_EXCEPTION = "MISSING_SQL_QUERY_EXCEPTION" NO_DEPLOYMENTS_AVAILABLE_TO_WORKSPACE = "NO_DEPLOYMENTS_AVAILABLE_TO_WORKSPACE" NO_QUERY_TO_VISUALIZE_EXCEPTION = "NO_QUERY_TO_VISUALIZE_EXCEPTION" NO_TABLES_TO_QUERY_EXCEPTION = "NO_TABLES_TO_QUERY_EXCEPTION" diff --git a/databricks/sdk/service/iam.py b/databricks/sdk/service/iam.py index d5fe5645e..1dd81aaed 100755 --- a/databricks/sdk/service/iam.py +++ b/databricks/sdk/service/iam.py @@ -846,7 +846,7 @@ def from_dict(cls, d: Dict[str, Any]) -> ObjectPermissions: @dataclass class PartialUpdate: id: Optional[str] = None - """Unique ID in the Databricks workspace.""" + """Unique ID for a user in the Databricks workspace.""" operations: Optional[List[Patch]] = None @@ -1918,7 +1918,8 @@ class User: groups: Optional[List[ComplexValue]] = None id: Optional[str] = None - """Databricks user ID.""" + """Databricks user ID. This is automatically set by Databricks. Any value provided by the client + will be ignored.""" name: Optional[Name] = None @@ -2479,7 +2480,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O Partially updates the details of a group. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a group in the Databricks account. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. 
@@ -2492,6 +2493,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -2555,6 +2557,7 @@ def update( if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -2762,7 +2765,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O Partially updates the details of a single service principal in the Databricks account. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a service principal in the Databricks account. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. @@ -2775,6 +2778,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -2844,6 +2848,7 @@ def update( if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -2907,7 +2912,8 @@ def create( External ID is not currently supported. It is reserved for future use. :param groups: List[:class:`ComplexValue`] (optional) :param id: str (optional) - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. :param name: :class:`Name` (optional) :param roles: List[:class:`ComplexValue`] (optional) Corresponds to AWS instance profile/arn role. 
@@ -3117,7 +3123,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O Partially updates a user resource by applying the supplied operations on specific user attributes. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a user in the Databricks account. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. @@ -3130,6 +3136,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -3157,7 +3164,8 @@ def update( Replaces a user's information with the data supplied in request. :param id: str - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. :param active: bool (optional) If this user is active :param display_name: str (optional) @@ -3207,6 +3215,7 @@ def update( if user_name is not None: body["userName"] = user_name headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -3425,7 +3434,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O Partially updates the details of a group. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a group in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. 
@@ -3438,6 +3447,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -3499,6 +3509,7 @@ def update( if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -3911,7 +3922,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O Partially updates the details of a single service principal in the Databricks workspace. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a service principal in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. @@ -3924,6 +3935,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -3988,6 +4000,7 @@ def update( if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -4046,7 +4059,8 @@ def create( External ID is not currently supported. It is reserved for future use. :param groups: List[:class:`ComplexValue`] (optional) :param id: str (optional) - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. :param name: :class:`Name` (optional) :param roles: List[:class:`ComplexValue`] (optional) Corresponds to AWS instance profile/arn role. 
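The `patch()` hunks above all assemble the same SCIM PatchOp body and now send an explicit `Accept` header. A hypothetical sketch of that body construction (the `Patch` dataclass and `build_patch_body` helper are illustrative, not the SDK's own serialization):

```python
# Illustrative assembly of a SCIM 2.0 PatchOp request body, mirroring the
# pattern in the patch() methods above. Names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class Patch:
    op: str    # e.g. "add", "remove", "replace"
    path: str  # attribute being patched
    value: Any


def build_patch_body(operations: List[Patch]) -> Dict[str, Any]:
    """Wrap operations in the SCIM PatchOp envelope the API requires."""
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": p.op, "path": p.path, "value": p.value} for p in operations],
    }


# Headers now declare both the request and the expected response content type.
headers = {"Accept": "application/json", "Content-Type": "application/json"}
body = build_patch_body([Patch("replace", "active", False)])
```

The added `"Accept": "application/json"` lines in the diff make the previously implicit response format explicit on every patch/update call.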
@@ -4280,7 +4294,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O Partially updates a user resource by applying the supplied operations on specific user attributes. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a user in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. @@ -4293,6 +4307,7 @@ def patch(self, id: str, *, operations: Optional[List[Patch]] = None, schemas: O if schemas is not None: body["schemas"] = [v.value for v in schemas] headers = { + "Accept": "application/json", "Content-Type": "application/json", } @@ -4341,7 +4356,8 @@ def update( Replaces a user's information with the data supplied in request. :param id: str - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. :param active: bool (optional) If this user is active :param display_name: str (optional) @@ -4391,6 +4407,7 @@ def update( if user_name is not None: body["userName"] = user_name headers = { + "Accept": "application/json", "Content-Type": "application/json", } diff --git a/databricks/sdk/service/jobs.py b/databricks/sdk/service/jobs.py index 5be08ce72..6a19b8980 100755 --- a/databricks/sdk/service/jobs.py +++ b/databricks/sdk/service/jobs.py @@ -3659,7 +3659,6 @@ class PerformanceTarget(Enum): on serverless compute should be. 
The performance mode on the job or pipeline should map to a performance setting that is passed to Cluster Manager (see cluster-common PerformanceTarget).""" - BALANCED = "BALANCED" COST_OPTIMIZED = "COST_OPTIMIZED" PERFORMANCE_OPTIMIZED = "PERFORMANCE_OPTIMIZED" diff --git a/databricks/sdk/service/marketplace.py b/databricks/sdk/service/marketplace.py index 41992fd69..1851bf1d6 100755 --- a/databricks/sdk/service/marketplace.py +++ b/databricks/sdk/service/marketplace.py @@ -1192,7 +1192,6 @@ def from_dict(cls, d: Dict[str, Any]) -> FileParent: class FileParentType(Enum): LISTING = "LISTING" - LISTING_RESOURCE = "LISTING_RESOURCE" PROVIDER = "PROVIDER" @@ -2453,7 +2452,6 @@ class ListingType(Enum): class MarketplaceFileType(Enum): - APP = "APP" EMBEDDED_NOTEBOOK = "EMBEDDED_NOTEBOOK" PROVIDER_ICON = "PROVIDER_ICON" diff --git a/databricks/sdk/service/ml.py b/databricks/sdk/service/ml.py index 88a807ee4..b978b45c6 100755 --- a/databricks/sdk/service/ml.py +++ b/databricks/sdk/service/ml.py @@ -499,19 +499,27 @@ class CreateForecastingExperimentRequest: time_column: str """Name of the column in the input training table that represents the timestamp of each row.""" - forecast_granularity: str - """The granularity of the forecast. This defines the time interval between consecutive rows in the - time series data. Possible values: '1 second', '1 minute', '5 minutes', '10 minutes', '15 - minutes', '30 minutes', 'Hourly', 'Daily', 'Weekly', 'Monthly', 'Quarterly', 'Yearly'.""" + data_granularity_unit: str + """The time unit of the input data granularity. Together with data_granularity_quantity field, this + defines the time interval between consecutive rows in the time series data. 
Possible values: * + 'W' (weeks) * 'D' / 'days' / 'day' * 'hours' / 'hour' / 'hr' / 'h' * 'm' / 'minute' / 'min' / + 'minutes' / 'T' * 'S' / 'seconds' / 'sec' / 'second' * 'M' / 'month' / 'months' * 'Q' / + 'quarter' / 'quarters' * 'Y' / 'year' / 'years'""" forecast_horizon: int """The number of time steps into the future for which predictions should be made. This value - represents a multiple of forecast_granularity determining how far ahead the model will forecast.""" + represents a multiple of data_granularity_unit and data_granularity_quantity determining how far + ahead the model will forecast.""" custom_weights_column: Optional[str] = None """Name of the column in the input training table used to customize the weight for each time series to calculate weighted metrics.""" + data_granularity_quantity: Optional[int] = None + """The quantity of the input data granularity. Together with data_granularity_unit field, this + defines the time interval between consecutive rows in the time series data. For now, only 1 + second, 1/5/10/15/30 minutes, 1 hour, 1 day, 1 week, 1 month, 1 quarter, 1 year are supported.""" + experiment_path: Optional[str] = None """The path to the created experiment. 
This is the path where the experiment will be stored in the workspace.""" @@ -552,10 +560,12 @@ def as_dict(self) -> dict: body = {} if self.custom_weights_column is not None: body["custom_weights_column"] = self.custom_weights_column + if self.data_granularity_quantity is not None: + body["data_granularity_quantity"] = self.data_granularity_quantity + if self.data_granularity_unit is not None: + body["data_granularity_unit"] = self.data_granularity_unit if self.experiment_path is not None: body["experiment_path"] = self.experiment_path - if self.forecast_granularity is not None: - body["forecast_granularity"] = self.forecast_granularity if self.forecast_horizon is not None: body["forecast_horizon"] = self.forecast_horizon if self.holiday_regions: @@ -587,10 +597,12 @@ def as_shallow_dict(self) -> dict: body = {} if self.custom_weights_column is not None: body["custom_weights_column"] = self.custom_weights_column + if self.data_granularity_quantity is not None: + body["data_granularity_quantity"] = self.data_granularity_quantity + if self.data_granularity_unit is not None: + body["data_granularity_unit"] = self.data_granularity_unit if self.experiment_path is not None: body["experiment_path"] = self.experiment_path - if self.forecast_granularity is not None: - body["forecast_granularity"] = self.forecast_granularity if self.forecast_horizon is not None: body["forecast_horizon"] = self.forecast_horizon if self.holiday_regions: @@ -622,8 +634,9 @@ def from_dict(cls, d: Dict[str, Any]) -> CreateForecastingExperimentRequest: """Deserializes the CreateForecastingExperimentRequest from a dictionary.""" return cls( custom_weights_column=d.get("custom_weights_column", None), + data_granularity_quantity=d.get("data_granularity_quantity", None), + data_granularity_unit=d.get("data_granularity_unit", None), experiment_path=d.get("experiment_path", None), - forecast_granularity=d.get("forecast_granularity", None), forecast_horizon=d.get("forecast_horizon", None), 
holiday_regions=d.get("holiday_regions", None), max_runtime=d.get("max_runtime", None), @@ -6987,10 +7000,11 @@ def create_experiment( train_data_path: str, target_column: str, time_column: str, - forecast_granularity: str, + data_granularity_unit: str, forecast_horizon: int, *, custom_weights_column: Optional[str] = None, + data_granularity_quantity: Optional[int] = None, experiment_path: Optional[str] = None, holiday_regions: Optional[List[str]] = None, max_runtime: Optional[int] = None, @@ -7013,16 +7027,23 @@ def create_experiment( this column will be used as the ground truth for model training. :param time_column: str Name of the column in the input training table that represents the timestamp of each row. - :param forecast_granularity: str - The granularity of the forecast. This defines the time interval between consecutive rows in the time - series data. Possible values: '1 second', '1 minute', '5 minutes', '10 minutes', '15 minutes', '30 - minutes', 'Hourly', 'Daily', 'Weekly', 'Monthly', 'Quarterly', 'Yearly'. + :param data_granularity_unit: str + The time unit of the input data granularity. Together with data_granularity_quantity field, this + defines the time interval between consecutive rows in the time series data. Possible values: * 'W' + (weeks) * 'D' / 'days' / 'day' * 'hours' / 'hour' / 'hr' / 'h' * 'm' / 'minute' / 'min' / 'minutes' + / 'T' * 'S' / 'seconds' / 'sec' / 'second' * 'M' / 'month' / 'months' * 'Q' / 'quarter' / 'quarters' + * 'Y' / 'year' / 'years' :param forecast_horizon: int The number of time steps into the future for which predictions should be made. This value represents - a multiple of forecast_granularity determining how far ahead the model will forecast. + a multiple of data_granularity_unit and data_granularity_quantity determining how far ahead the + model will forecast. 
:param custom_weights_column: str (optional) Name of the column in the input training table used to customize the weight for each time series to calculate weighted metrics. + :param data_granularity_quantity: int (optional) + The quantity of the input data granularity. Together with data_granularity_unit field, this defines + the time interval between consecutive rows in the time series data. For now, only 1 second, + 1/5/10/15/30 minutes, 1 hour, 1 day, 1 week, 1 month, 1 quarter, 1 year are supported. :param experiment_path: str (optional) The path to the created experiment. This is the path where the experiment will be stored in the workspace. @@ -7057,10 +7078,12 @@ def create_experiment( body = {} if custom_weights_column is not None: body["custom_weights_column"] = custom_weights_column + if data_granularity_quantity is not None: + body["data_granularity_quantity"] = data_granularity_quantity + if data_granularity_unit is not None: + body["data_granularity_unit"] = data_granularity_unit if experiment_path is not None: body["experiment_path"] = experiment_path - if forecast_granularity is not None: - body["forecast_granularity"] = forecast_granularity if forecast_horizon is not None: body["forecast_horizon"] = forecast_horizon if holiday_regions is not None: @@ -7102,10 +7125,11 @@ def create_experiment_and_wait( train_data_path: str, target_column: str, time_column: str, - forecast_granularity: str, + data_granularity_unit: str, forecast_horizon: int, *, custom_weights_column: Optional[str] = None, + data_granularity_quantity: Optional[int] = None, experiment_path: Optional[str] = None, holiday_regions: Optional[List[str]] = None, max_runtime: Optional[int] = None, @@ -7119,8 +7143,9 @@ def create_experiment_and_wait( ) -> ForecastingExperiment: return self.create_experiment( custom_weights_column=custom_weights_column, + data_granularity_quantity=data_granularity_quantity, + data_granularity_unit=data_granularity_unit, experiment_path=experiment_path, - 
forecast_granularity=forecast_granularity, forecast_horizon=forecast_horizon, holiday_regions=holiday_regions, max_runtime=max_runtime, diff --git a/databricks/sdk/service/oauth2.py b/databricks/sdk/service/oauth2.py index 366f282f4..928610d04 100755 --- a/databricks/sdk/service/oauth2.py +++ b/databricks/sdk/service/oauth2.py @@ -776,13 +776,6 @@ class OidcFederationPolicy: endpoint. Databricks strongly recommends relying on your issuer’s well known endpoint for discovering public keys.""" - jwks_uri: Optional[str] = None - """URL of the public keys used to validate the signature of federated tokens, in JWKS format. Most - use cases should not need to specify this field. If jwks_uri and jwks_json are both unspecified - (recommended), Databricks automatically fetches the public keys from your issuer’s well known - endpoint. Databricks strongly recommends relying on your issuer’s well known endpoint for - discovering public keys.""" - subject: Optional[str] = None """The required token subject, as specified in the subject claim of federated tokens. Must be specified for service principal federation policies. 
Must not be specified for account @@ -800,8 +793,6 @@ def as_dict(self) -> dict: body["issuer"] = self.issuer if self.jwks_json is not None: body["jwks_json"] = self.jwks_json - if self.jwks_uri is not None: - body["jwks_uri"] = self.jwks_uri if self.subject is not None: body["subject"] = self.subject if self.subject_claim is not None: @@ -817,8 +808,6 @@ def as_shallow_dict(self) -> dict: body["issuer"] = self.issuer if self.jwks_json is not None: body["jwks_json"] = self.jwks_json - if self.jwks_uri is not None: - body["jwks_uri"] = self.jwks_uri if self.subject is not None: body["subject"] = self.subject if self.subject_claim is not None: @@ -832,7 +821,6 @@ def from_dict(cls, d: Dict[str, Any]) -> OidcFederationPolicy: audiences=d.get("audiences", None), issuer=d.get("issuer", None), jwks_json=d.get("jwks_json", None), - jwks_uri=d.get("jwks_uri", None), subject=d.get("subject", None), subject_claim=d.get("subject_claim", None), ) diff --git a/databricks/sdk/service/pipelines.py b/databricks/sdk/service/pipelines.py index 5f0cc834a..36e74b8fd 100755 --- a/databricks/sdk/service/pipelines.py +++ b/databricks/sdk/service/pipelines.py @@ -69,7 +69,7 @@ class CreatePipeline: ingestion_definition: Optional[IngestionPipelineDefinition] = None """The configuration for a managed ingestion pipeline. These settings cannot be used with the - 'libraries', 'schema', 'target', or 'catalog' settings.""" + 'libraries', 'target' or 'catalog' settings.""" libraries: Optional[List[PipelineLibrary]] = None """Libraries or code needed by this deployment.""" @@ -95,7 +95,8 @@ class CreatePipeline: is thrown.""" schema: Optional[str] = None - """The default schema (database) where tables are read from or published to.""" + """The default schema (database) where tables are read from or published to. 
The presence of this + field implies that the pipeline is in direct publishing mode.""" serverless: Optional[bool] = None """Whether serverless compute is enabled for this pipeline.""" @@ -104,9 +105,9 @@ class CreatePipeline: """DBFS root directory for storing checkpoints and tables.""" target: Optional[str] = None - """Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` - must be specified. To publish to Unity Catalog, also specify `catalog`. This legacy field is - deprecated for pipeline creation in favor of the `schema` field.""" + """Target schema (database) to add tables in this pipeline to. If not specified, no data is + published to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify + `catalog`.""" trigger: Optional[PipelineTrigger] = None """Which pipeline trigger to use. Deprecated: Use `continuous` instead.""" @@ -442,7 +443,7 @@ class EditPipeline: ingestion_definition: Optional[IngestionPipelineDefinition] = None """The configuration for a managed ingestion pipeline. These settings cannot be used with the - 'libraries', 'schema', 'target', or 'catalog' settings.""" + 'libraries', 'target' or 'catalog' settings.""" libraries: Optional[List[PipelineLibrary]] = None """Libraries or code needed by this deployment.""" @@ -471,7 +472,8 @@ class EditPipeline: is thrown.""" schema: Optional[str] = None - """The default schema (database) where tables are read from or published to.""" + """The default schema (database) where tables are read from or published to. The presence of this + field implies that the pipeline is in direct publishing mode.""" serverless: Optional[bool] = None """Whether serverless compute is enabled for this pipeline.""" @@ -480,9 +482,9 @@ class EditPipeline: """DBFS root directory for storing checkpoints and tables.""" target: Optional[str] = None - """Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` - must be specified. 
To publish to Unity Catalog, also specify `catalog`. This legacy field is - deprecated for pipeline creation in favor of the `schema` field.""" + """Target schema (database) to add tables in this pipeline to. If not specified, no data is + published to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify + `catalog`.""" trigger: Optional[PipelineTrigger] = None """Which pipeline trigger to use. Deprecated: Use `continuous` instead.""" @@ -2216,7 +2218,7 @@ class PipelineSpec: ingestion_definition: Optional[IngestionPipelineDefinition] = None """The configuration for a managed ingestion pipeline. These settings cannot be used with the - 'libraries', 'schema', 'target', or 'catalog' settings.""" + 'libraries', 'target' or 'catalog' settings.""" libraries: Optional[List[PipelineLibrary]] = None """Libraries or code needed by this deployment.""" @@ -2234,7 +2236,8 @@ class PipelineSpec: """Restart window of this pipeline.""" schema: Optional[str] = None - """The default schema (database) where tables are read from or published to.""" + """The default schema (database) where tables are read from or published to. The presence of this + field implies that the pipeline is in direct publishing mode.""" serverless: Optional[bool] = None """Whether serverless compute is enabled for this pipeline.""" @@ -2243,9 +2246,9 @@ class PipelineSpec: """DBFS root directory for storing checkpoints and tables.""" target: Optional[str] = None - """Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` - must be specified. To publish to Unity Catalog, also specify `catalog`. This legacy field is - deprecated for pipeline creation in favor of the `schema` field.""" + """Target schema (database) to add tables in this pipeline to. If not specified, no data is + published to the Hive metastore or Unity Catalog. 
To publish to Unity Catalog, also specify + `catalog`.""" trigger: Optional[PipelineTrigger] = None """Which pipeline trigger to use. Deprecated: Use `continuous` instead.""" @@ -3455,7 +3458,7 @@ def create( Unique identifier for this pipeline. :param ingestion_definition: :class:`IngestionPipelineDefinition` (optional) The configuration for a managed ingestion pipeline. These settings cannot be used with the - 'libraries', 'schema', 'target', or 'catalog' settings. + 'libraries', 'target' or 'catalog' settings. :param libraries: List[:class:`PipelineLibrary`] (optional) Libraries or code needed by this deployment. :param name: str (optional) @@ -3473,15 +3476,15 @@ def create( Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown. :param schema: str (optional) - The default schema (database) where tables are read from or published to. + The default schema (database) where tables are read from or published to. The presence of this field + implies that the pipeline is in direct publishing mode. :param serverless: bool (optional) Whether serverless compute is enabled for this pipeline. :param storage: str (optional) DBFS root directory for storing checkpoints and tables. :param target: str (optional) - Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` must - be specified. To publish to Unity Catalog, also specify `catalog`. This legacy field is deprecated - for pipeline creation in favor of the `schema` field. + Target schema (database) to add tables in this pipeline to. If not specified, no data is published + to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify `catalog`. :param trigger: :class:`PipelineTrigger` (optional) Which pipeline trigger to use. Deprecated: Use `continuous` instead. @@ -3959,7 +3962,7 @@ def update( Unique identifier for this pipeline. 
:param ingestion_definition: :class:`IngestionPipelineDefinition` (optional) The configuration for a managed ingestion pipeline. These settings cannot be used with the - 'libraries', 'schema', 'target', or 'catalog' settings. + 'libraries', 'target' or 'catalog' settings. :param libraries: List[:class:`PipelineLibrary`] (optional) Libraries or code needed by this deployment. :param name: str (optional) @@ -3977,15 +3980,15 @@ def update( Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown. :param schema: str (optional) - The default schema (database) where tables are read from or published to. + The default schema (database) where tables are read from or published to. The presence of this field + implies that the pipeline is in direct publishing mode. :param serverless: bool (optional) Whether serverless compute is enabled for this pipeline. :param storage: str (optional) DBFS root directory for storing checkpoints and tables. :param target: str (optional) - Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` must - be specified. To publish to Unity Catalog, also specify `catalog`. This legacy field is deprecated - for pipeline creation in favor of the `schema` field. + Target schema (database) to add tables in this pipeline to. If not specified, no data is published + to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify `catalog`. :param trigger: :class:`PipelineTrigger` (optional) Which pipeline trigger to use. Deprecated: Use `continuous` instead. 
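The reworded `schema`/`target` docstrings above distinguish two publishing modes. A sketch of the two request shapes, using plain dicts as hypothetical stand-ins for `CreatePipeline` (field names match the diff; values are illustrative):

```python
# Direct publishing mode: presence of `schema` implies the pipeline publishes
# directly to the given default schema.
direct_publishing = {
    "name": "sales_pipeline",
    "catalog": "main",       # also required to publish to Unity Catalog
    "schema": "sales",       # presence of this field implies direct publishing mode
}

# Legacy mode: tables are added to `target`; if neither field is specified,
# no data is published to the Hive metastore or Unity Catalog.
legacy_target = {
    "name": "sales_pipeline",
    "catalog": "main",
    "target": "sales",
}
```

Note the two fields are alternatives, not companions; a single request would carry one or the other.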
diff --git a/databricks/sdk/service/serving.py b/databricks/sdk/service/serving.py index 1dff8e1e9..ce65a795f 100755 --- a/databricks/sdk/service/serving.py +++ b/databricks/sdk/service/serving.py @@ -63,10 +63,6 @@ def from_dict(cls, d: Dict[str, Any]) -> Ai21LabsConfig: @dataclass class AiGatewayConfig: - fallback_config: Optional[FallbackConfig] = None - """Configuration for traffic fallback which auto fallbacks to other served entities if the request - to a served entity fails with certain error codes, to increase availability.""" - guardrails: Optional[AiGatewayGuardrails] = None """Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses.""" @@ -85,8 +81,6 @@ class AiGatewayConfig: def as_dict(self) -> dict: """Serializes the AiGatewayConfig into a dictionary suitable for use as a JSON request body.""" body = {} - if self.fallback_config: - body["fallback_config"] = self.fallback_config.as_dict() if self.guardrails: body["guardrails"] = self.guardrails.as_dict() if self.inference_table_config: @@ -100,8 +94,6 @@ def as_dict(self) -> dict: def as_shallow_dict(self) -> dict: """Serializes the AiGatewayConfig into a shallow dictionary of its immediate attributes.""" body = {} - if self.fallback_config: - body["fallback_config"] = self.fallback_config if self.guardrails: body["guardrails"] = self.guardrails if self.inference_table_config: @@ -116,7 +108,6 @@ def as_shallow_dict(self) -> dict: def from_dict(cls, d: Dict[str, Any]) -> AiGatewayConfig: """Deserializes the AiGatewayConfig from a dictionary.""" return cls( - fallback_config=_from_dict(d, "fallback_config", FallbackConfig), guardrails=_from_dict(d, "guardrails", AiGatewayGuardrails), inference_table_config=_from_dict(d, "inference_table_config", AiGatewayInferenceTableConfig), rate_limits=_repeated_dict(d, "rate_limits", AiGatewayRateLimit), @@ -515,47 +506,6 @@ def from_dict(cls, d: Dict[str, Any]) -> AnthropicConfig: ) -@dataclass -class ApiKeyAuth: - key: 
str - """The name of the API key parameter used for authentication.""" - - value: Optional[str] = None - """The Databricks secret key reference for an API Key. If you prefer to paste your token directly, - see `value_plaintext`.""" - - value_plaintext: Optional[str] = None - """The API Key provided as a plaintext string. If you prefer to reference your token using - Databricks Secrets, see `value`.""" - - def as_dict(self) -> dict: - """Serializes the ApiKeyAuth into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.key is not None: - body["key"] = self.key - if self.value is not None: - body["value"] = self.value - if self.value_plaintext is not None: - body["value_plaintext"] = self.value_plaintext - return body - - def as_shallow_dict(self) -> dict: - """Serializes the ApiKeyAuth into a shallow dictionary of its immediate attributes.""" - body = {} - if self.key is not None: - body["key"] = self.key - if self.value is not None: - body["value"] = self.value - if self.value_plaintext is not None: - body["value_plaintext"] = self.value_plaintext - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> ApiKeyAuth: - """Deserializes the ApiKeyAuth from a dictionary.""" - return cls(key=d.get("key", None), value=d.get("value", None), value_plaintext=d.get("value_plaintext", None)) - - @dataclass class AutoCaptureConfigInput: catalog_name: Optional[str] = None @@ -695,40 +645,6 @@ def from_dict(cls, d: Dict[str, Any]) -> AutoCaptureState: return cls(payload_table=_from_dict(d, "payload_table", PayloadTable)) -@dataclass -class BearerTokenAuth: - token: Optional[str] = None - """The Databricks secret key reference for a token. If you prefer to paste your token directly, see - `token_plaintext`.""" - - token_plaintext: Optional[str] = None - """The token provided as a plaintext string. 
If you prefer to reference your token using Databricks - Secrets, see `token`.""" - - def as_dict(self) -> dict: - """Serializes the BearerTokenAuth into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.token is not None: - body["token"] = self.token - if self.token_plaintext is not None: - body["token_plaintext"] = self.token_plaintext - return body - - def as_shallow_dict(self) -> dict: - """Serializes the BearerTokenAuth into a shallow dictionary of its immediate attributes.""" - body = {} - if self.token is not None: - body["token"] = self.token - if self.token_plaintext is not None: - body["token_plaintext"] = self.token_plaintext - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> BearerTokenAuth: - """Deserializes the BearerTokenAuth from a dictionary.""" - return cls(token=d.get("token", None), token_plaintext=d.get("token_plaintext", None)) - - @dataclass class BuildLogsResponse: logs: str @@ -920,53 +836,6 @@ def from_dict(cls, d: Dict[str, Any]) -> CreateServingEndpoint: ) -@dataclass -class CustomProviderConfig: - """Configs needed to create a custom provider model route.""" - - custom_provider_url: str - """This is a field to provide the URL of the custom provider API.""" - - api_key_auth: Optional[ApiKeyAuth] = None - """This is a field to provide API key authentication for the custom provider API. You can only - specify one authentication method.""" - - bearer_token_auth: Optional[BearerTokenAuth] = None - """This is a field to provide bearer token authentication for the custom provider API. 
You can only - specify one authentication method.""" - - def as_dict(self) -> dict: - """Serializes the CustomProviderConfig into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.api_key_auth: - body["api_key_auth"] = self.api_key_auth.as_dict() - if self.bearer_token_auth: - body["bearer_token_auth"] = self.bearer_token_auth.as_dict() - if self.custom_provider_url is not None: - body["custom_provider_url"] = self.custom_provider_url - return body - - def as_shallow_dict(self) -> dict: - """Serializes the CustomProviderConfig into a shallow dictionary of its immediate attributes.""" - body = {} - if self.api_key_auth: - body["api_key_auth"] = self.api_key_auth - if self.bearer_token_auth: - body["bearer_token_auth"] = self.bearer_token_auth - if self.custom_provider_url is not None: - body["custom_provider_url"] = self.custom_provider_url - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> CustomProviderConfig: - """Deserializes the CustomProviderConfig from a dictionary.""" - return cls( - api_key_auth=_from_dict(d, "api_key_auth", ApiKeyAuth), - bearer_token_auth=_from_dict(d, "bearer_token_auth", BearerTokenAuth), - custom_provider_url=d.get("custom_provider_url", None), - ) - - @dataclass class DataPlaneInfo: """Details necessary to query this object's API through the DataPlane APIs.""" @@ -1626,9 +1495,6 @@ class ExternalModel: cohere_config: Optional[CohereConfig] = None """Cohere Config. Only required if the provider is 'cohere'.""" - custom_provider_config: Optional[CustomProviderConfig] = None - """Custom Provider Config. Only required if the provider is 'custom'.""" - databricks_model_serving_config: Optional[DatabricksModelServingConfig] = None """Databricks Model Serving Config. 
Only required if the provider is 'databricks-model-serving'.""" @@ -1652,8 +1518,6 @@ def as_dict(self) -> dict: body["anthropic_config"] = self.anthropic_config.as_dict() if self.cohere_config: body["cohere_config"] = self.cohere_config.as_dict() - if self.custom_provider_config: - body["custom_provider_config"] = self.custom_provider_config.as_dict() if self.databricks_model_serving_config: body["databricks_model_serving_config"] = self.databricks_model_serving_config.as_dict() if self.google_cloud_vertex_ai_config: @@ -1681,8 +1545,6 @@ def as_shallow_dict(self) -> dict: body["anthropic_config"] = self.anthropic_config if self.cohere_config: body["cohere_config"] = self.cohere_config - if self.custom_provider_config: - body["custom_provider_config"] = self.custom_provider_config if self.databricks_model_serving_config: body["databricks_model_serving_config"] = self.databricks_model_serving_config if self.google_cloud_vertex_ai_config: @@ -1707,7 +1569,6 @@ def from_dict(cls, d: Dict[str, Any]) -> ExternalModel: amazon_bedrock_config=_from_dict(d, "amazon_bedrock_config", AmazonBedrockConfig), anthropic_config=_from_dict(d, "anthropic_config", AnthropicConfig), cohere_config=_from_dict(d, "cohere_config", CohereConfig), - custom_provider_config=_from_dict(d, "custom_provider_config", CustomProviderConfig), databricks_model_serving_config=_from_dict( d, "databricks_model_serving_config", DatabricksModelServingConfig ), @@ -1726,7 +1587,6 @@ class ExternalModelProvider(Enum): AMAZON_BEDROCK = "amazon-bedrock" ANTHROPIC = "anthropic" COHERE = "cohere" - CUSTOM = "custom" DATABRICKS_MODEL_SERVING = "databricks-model-serving" GOOGLE_CLOUD_VERTEX_AI = "google-cloud-vertex-ai" OPENAI = "openai" @@ -1776,35 +1636,6 @@ def from_dict(cls, d: Dict[str, Any]) -> ExternalModelUsageElement: ) -@dataclass -class FallbackConfig: - enabled: bool - """Whether to enable traffic fallback. When a served entity in the serving endpoint returns - specific error codes (e.g. 
500), the request will automatically be round-robin attempted with - other served entities in the same endpoint, following the order of served entity list, until a - successful response is returned. If all attempts fail, return the last response with the error - code.""" - - def as_dict(self) -> dict: - """Serializes the FallbackConfig into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.enabled is not None: - body["enabled"] = self.enabled - return body - - def as_shallow_dict(self) -> dict: - """Serializes the FallbackConfig into a shallow dictionary of its immediate attributes.""" - body = {} - if self.enabled is not None: - body["enabled"] = self.enabled - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> FallbackConfig: - """Deserializes the FallbackConfig from a dictionary.""" - return cls(enabled=d.get("enabled", None)) - - @dataclass class FoundationModel: """All fields are not sensitive as they are hard-coded in the system and made available to @@ -2293,10 +2124,6 @@ def from_dict(cls, d: Dict[str, Any]) -> PayloadTable: @dataclass class PutAiGatewayRequest: - fallback_config: Optional[FallbackConfig] = None - """Configuration for traffic fallback which auto fallbacks to other served entities if the request - to a served entity fails with certain error codes, to increase availability.""" - guardrails: Optional[AiGatewayGuardrails] = None """Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses.""" @@ -2318,8 +2145,6 @@ class PutAiGatewayRequest: def as_dict(self) -> dict: """Serializes the PutAiGatewayRequest into a dictionary suitable for use as a JSON request body.""" body = {} - if self.fallback_config: - body["fallback_config"] = self.fallback_config.as_dict() if self.guardrails: body["guardrails"] = self.guardrails.as_dict() if self.inference_table_config: @@ -2335,8 +2160,6 @@ def as_dict(self) -> dict: def as_shallow_dict(self) -> dict: """Serializes 
the PutAiGatewayRequest into a shallow dictionary of its immediate attributes.""" body = {} - if self.fallback_config: - body["fallback_config"] = self.fallback_config if self.guardrails: body["guardrails"] = self.guardrails if self.inference_table_config: @@ -2353,7 +2176,6 @@ def as_shallow_dict(self) -> dict: def from_dict(cls, d: Dict[str, Any]) -> PutAiGatewayRequest: """Deserializes the PutAiGatewayRequest from a dictionary.""" return cls( - fallback_config=_from_dict(d, "fallback_config", FallbackConfig), guardrails=_from_dict(d, "guardrails", AiGatewayGuardrails), inference_table_config=_from_dict(d, "inference_table_config", AiGatewayInferenceTableConfig), name=d.get("name", None), @@ -2364,10 +2186,6 @@ def from_dict(cls, d: Dict[str, Any]) -> PutAiGatewayRequest: @dataclass class PutAiGatewayResponse: - fallback_config: Optional[FallbackConfig] = None - """Configuration for traffic fallback which auto fallbacks to other served entities if the request - to a served entity fails with certain error codes, to increase availability.""" - guardrails: Optional[AiGatewayGuardrails] = None """Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses.""" @@ -2386,8 +2204,6 @@ class PutAiGatewayResponse: def as_dict(self) -> dict: """Serializes the PutAiGatewayResponse into a dictionary suitable for use as a JSON request body.""" body = {} - if self.fallback_config: - body["fallback_config"] = self.fallback_config.as_dict() if self.guardrails: body["guardrails"] = self.guardrails.as_dict() if self.inference_table_config: @@ -2401,8 +2217,6 @@ def as_dict(self) -> dict: def as_shallow_dict(self) -> dict: """Serializes the PutAiGatewayResponse into a shallow dictionary of its immediate attributes.""" body = {} - if self.fallback_config: - body["fallback_config"] = self.fallback_config if self.guardrails: body["guardrails"] = self.guardrails if self.inference_table_config: @@ -2417,7 +2231,6 @@ def as_shallow_dict(self) -> 
dict: def from_dict(cls, d: Dict[str, Any]) -> PutAiGatewayResponse: """Deserializes the PutAiGatewayResponse from a dictionary.""" return cls( - fallback_config=_from_dict(d, "fallback_config", FallbackConfig), guardrails=_from_dict(d, "guardrails", AiGatewayGuardrails), inference_table_config=_from_dict(d, "inference_table_config", AiGatewayInferenceTableConfig), rate_limits=_repeated_dict(d, "rate_limits", AiGatewayRateLimit), @@ -4556,7 +4369,6 @@ def put_ai_gateway( self, name: str, *, - fallback_config: Optional[FallbackConfig] = None, guardrails: Optional[AiGatewayGuardrails] = None, inference_table_config: Optional[AiGatewayInferenceTableConfig] = None, rate_limits: Optional[List[AiGatewayRateLimit]] = None, @@ -4569,9 +4381,6 @@ def put_ai_gateway( :param name: str The name of the serving endpoint whose AI Gateway is being updated. This field is required. - :param fallback_config: :class:`FallbackConfig` (optional) - Configuration for traffic fallback which auto fallbacks to other served entities if the request to a - served entity fails with certain error codes, to increase availability. :param guardrails: :class:`AiGatewayGuardrails` (optional) Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses. 
:param inference_table_config: :class:`AiGatewayInferenceTableConfig` (optional) @@ -4586,8 +4395,6 @@ def put_ai_gateway( :returns: :class:`PutAiGatewayResponse` """ body = {} - if fallback_config is not None: - body["fallback_config"] = fallback_config.as_dict() if guardrails is not None: body["guardrails"] = guardrails.as_dict() if inference_table_config is not None: diff --git a/databricks/sdk/service/sharing.py b/databricks/sdk/service/sharing.py index 7325e5fdd..ab6360b41 100755 --- a/databricks/sdk/service/sharing.py +++ b/databricks/sdk/service/sharing.py @@ -324,7 +324,71 @@ def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingDependencyList: @dataclass -class DeltaSharingFunction: +class DeltaSharingFunctionDependency: + """A Function in UC as a dependency.""" + + function_name: Optional[str] = None + + schema_name: Optional[str] = None + + def as_dict(self) -> dict: + """Serializes the DeltaSharingFunctionDependency into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.function_name is not None: + body["function_name"] = self.function_name + if self.schema_name is not None: + body["schema_name"] = self.schema_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeltaSharingFunctionDependency into a shallow dictionary of its immediate attributes.""" + body = {} + if self.function_name is not None: + body["function_name"] = self.function_name + if self.schema_name is not None: + body["schema_name"] = self.schema_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingFunctionDependency: + """Deserializes the DeltaSharingFunctionDependency from a dictionary.""" + return cls(function_name=d.get("function_name", None), schema_name=d.get("schema_name", None)) + + +@dataclass +class DeltaSharingTableDependency: + """A Table in UC as a dependency.""" + + schema_name: Optional[str] = None + + table_name: Optional[str] = None + + def as_dict(self) -> dict: + """Serializes the 
DeltaSharingTableDependency into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.schema_name is not None: + body["schema_name"] = self.schema_name + if self.table_name is not None: + body["table_name"] = self.table_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeltaSharingTableDependency into a shallow dictionary of its immediate attributes.""" + body = {} + if self.schema_name is not None: + body["schema_name"] = self.schema_name + if self.table_name is not None: + body["table_name"] = self.table_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingTableDependency: + """Deserializes the DeltaSharingTableDependency from a dictionary.""" + return cls(schema_name=d.get("schema_name", None), table_name=d.get("table_name", None)) + + +@dataclass +class Function: aliases: Optional[List[RegisteredModelAlias]] = None """The aliases of the registered model.""" @@ -374,7 +438,7 @@ class DeltaSharingFunction: """The tags of the function.""" def as_dict(self) -> dict: - """Serializes the DeltaSharingFunction into a dictionary suitable for use as a JSON request body.""" + """Serializes the Function into a dictionary suitable for use as a JSON request body.""" body = {} if self.aliases: body["aliases"] = [v.as_dict() for v in self.aliases] @@ -411,7 +475,7 @@ def as_dict(self) -> dict: return body def as_shallow_dict(self) -> dict: - """Serializes the DeltaSharingFunction into a shallow dictionary of its immediate attributes.""" + """Serializes the Function into a shallow dictionary of its immediate attributes.""" body = {} if self.aliases: body["aliases"] = self.aliases @@ -448,8 +512,8 @@ def as_shallow_dict(self) -> dict: return body @classmethod - def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingFunction: - """Deserializes the DeltaSharingFunction from a dictionary.""" + def from_dict(cls, d: Dict[str, Any]) -> Function: + """Deserializes the Function from a dictionary."""
return cls( aliases=_repeated_dict(d, "aliases", RegisteredModelAlias), comment=d.get("comment", None), @@ -470,70 +534,6 @@ def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingFunction: ) -@dataclass -class DeltaSharingFunctionDependency: - """A Function in UC as a dependency.""" - - function_name: Optional[str] = None - - schema_name: Optional[str] = None - - def as_dict(self) -> dict: - """Serializes the DeltaSharingFunctionDependency into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.function_name is not None: - body["function_name"] = self.function_name - if self.schema_name is not None: - body["schema_name"] = self.schema_name - return body - - def as_shallow_dict(self) -> dict: - """Serializes the DeltaSharingFunctionDependency into a shallow dictionary of its immediate attributes.""" - body = {} - if self.function_name is not None: - body["function_name"] = self.function_name - if self.schema_name is not None: - body["schema_name"] = self.schema_name - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingFunctionDependency: - """Deserializes the DeltaSharingFunctionDependency from a dictionary.""" - return cls(function_name=d.get("function_name", None), schema_name=d.get("schema_name", None)) - - -@dataclass -class DeltaSharingTableDependency: - """A Table in UC as a dependency.""" - - schema_name: Optional[str] = None - - table_name: Optional[str] = None - - def as_dict(self) -> dict: - """Serializes the DeltaSharingTableDependency into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.schema_name is not None: - body["schema_name"] = self.schema_name - if self.table_name is not None: - body["table_name"] = self.table_name - return body - - def as_shallow_dict(self) -> dict: - """Serializes the DeltaSharingTableDependency into a shallow dictionary of its immediate attributes.""" - body = {} - if self.schema_name is not None: - body["schema_name"] = self.schema_name 
- if self.table_name is not None: - body["table_name"] = self.table_name - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingTableDependency: - """Deserializes the DeltaSharingTableDependency from a dictionary.""" - return cls(schema_name=d.get("schema_name", None), table_name=d.get("table_name", None)) - - @dataclass class FunctionParameterInfo: """Represents a parameter of a function. The same message is used for both input and output @@ -809,7 +809,7 @@ def from_dict(cls, d: Dict[str, Any]) -> IpAccessList: class ListProviderShareAssetsResponse: """Response to ListProviderShareAssets, which contains the list of assets of a share.""" - functions: Optional[List[DeltaSharingFunction]] = None + functions: Optional[List[Function]] = None """The list of functions in the share.""" notebooks: Optional[List[NotebookFile]] = None @@ -851,7 +851,7 @@ def as_shallow_dict(self) -> dict: def from_dict(cls, d: Dict[str, Any]) -> ListProviderShareAssetsResponse: """Deserializes the ListProviderShareAssetsResponse from a dictionary.""" return cls( - functions=_repeated_dict(d, "functions", DeltaSharingFunction), + functions=_repeated_dict(d, "functions", Function), notebooks=_repeated_dict(d, "notebooks", NotebookFile), tables=_repeated_dict(d, "tables", Table), volumes=_repeated_dict(d, "volumes", Volume), diff --git a/docs/account/iam/groups.rst b/docs/account/iam/groups.rst index d005f7930..adb23f7d7 100644 --- a/docs/account/iam/groups.rst +++ b/docs/account/iam/groups.rst @@ -99,7 +99,7 @@ Partially updates the details of a group. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a group in the Databricks account. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. 
diff --git a/docs/account/iam/service_principals.rst b/docs/account/iam/service_principals.rst index e0fd8577a..2823c8d31 100644 --- a/docs/account/iam/service_principals.rst +++ b/docs/account/iam/service_principals.rst @@ -178,7 +178,7 @@ Partially updates the details of a single service principal in the Databricks account. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a service principal in the Databricks account. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/account/iam/users.rst b/docs/account/iam/users.rst index 7e527ec45..54a9f1af8 100644 --- a/docs/account/iam/users.rst +++ b/docs/account/iam/users.rst @@ -58,7 +58,8 @@ External ID is not currently supported. It is reserved for future use. :param groups: List[:class:`ComplexValue`] (optional) :param id: str (optional) - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. :param name: :class:`Name` (optional) :param roles: List[:class:`ComplexValue`] (optional) Corresponds to AWS instance profile/arn role. @@ -222,7 +223,7 @@ Partially updates a user resource by applying the supplied operations on specific user attributes. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a user in the Databricks account. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. @@ -237,7 +238,8 @@ Replaces a user's information with the data supplied in request. :param id: str - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. 
:param active: bool (optional) If this user is active :param display_name: str (optional) diff --git a/docs/dbdataclasses/compute.rst b/docs/dbdataclasses/compute.rst index 2424cf4cf..81fc85e30 100644 --- a/docs/dbdataclasses/compute.rst +++ b/docs/dbdataclasses/compute.rst @@ -44,7 +44,7 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: AzureAvailability - Availability type used for all subsequent nodes past the `first_on_demand` ones. Note: If `first_on_demand` is zero, this availability type will be used for the entire cluster. + Availability type used for all subsequent nodes past the `first_on_demand` ones. Note: If `first_on_demand` is zero (which only happens on pool clusters), this availability type will be used for the entire cluster. .. py:attribute:: ON_DEMAND_AZURE :value: "ON_DEMAND_AZURE" @@ -309,6 +309,8 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: DataPlaneEventDetailsEventType + + .. py:attribute:: NODE_BLACKLISTED :value: "NODE_BLACKLISTED" @@ -431,7 +433,7 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: EbsVolumeType - All EBS volume types that Databricks supports. See https://aws.amazon.com/ebs/details/ for details. + The type of EBS volumes that will be launched with this cluster. .. py:attribute:: GENERAL_PURPOSE_SSD :value: "GENERAL_PURPOSE_SSD" @@ -627,6 +629,8 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: GetEventsOrder + The order to list events in; either "ASC" or "DESC". Defaults to "DESC". + .. py:attribute:: ASC :value: "ASC" @@ -669,9 +673,13 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: -.. py:class:: InitScriptExecutionDetailsInitScriptExecutionStatus +.. autoclass:: InitScriptExecutionDetails + :members: + :undoc-members: + +.. 
py:class:: InitScriptExecutionDetailsStatus - Result of attempted script execution + The current status of the script .. py:attribute:: FAILED_EXECUTION :value: "FAILED_EXECUTION" @@ -679,9 +687,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: FAILED_FETCH :value: "FAILED_FETCH" - .. py:attribute:: FUSE_MOUNT_FAILED - :value: "FUSE_MOUNT_FAILED" - .. py:attribute:: NOT_EXECUTED :value: "NOT_EXECUTED" @@ -890,6 +895,8 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: ListClustersSortByDirection + The direction to sort by. + .. py:attribute:: ASC :value: "ASC" @@ -898,6 +905,8 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: ListClustersSortByField + The sorting criteria. By default, clusters are sorted by 3 columns from highest to lowest precedence: cluster state, pinned or unpinned, then cluster name. + .. py:attribute:: CLUSTER_NAME :value: "CLUSTER_NAME" @@ -938,6 +947,8 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: ListSortOrder + A generic ordering enum for list-based queries. + .. py:attribute:: ASC :value: "ASC" @@ -1051,6 +1062,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: RuntimeEngine + Determines the cluster's runtime engine, either standard or Photon. + This field is not compatible with legacy `spark_version` values that contain `-photon-`. Remove `-photon-` from the `spark_version` and set `runtime_engine` to `PHOTON`. + If left unspecified, the runtime engine defaults to standard unless the spark_version contains -photon-, in which case Photon will be used. + .. py:attribute:: NULL :value: "NULL" @@ -1086,8 +1101,7 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: State - The state of a Cluster. 
The current allowable state transitions are as follows: - - `PENDING` -> `RUNNING` - `PENDING` -> `TERMINATING` - `RUNNING` -> `RESIZING` - `RUNNING` -> `RESTARTING` - `RUNNING` -> `TERMINATING` - `RESTARTING` -> `RUNNING` - `RESTARTING` -> `TERMINATING` - `RESIZING` -> `RUNNING` - `RESIZING` -> `TERMINATING` - `TERMINATING` -> `TERMINATED` + Current state of the cluster. .. py:attribute:: ERROR :value: "ERROR" @@ -1119,68 +1133,29 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: TerminationReasonCode - The status code indicating why the cluster was terminated + status code indicating why the cluster was terminated .. py:attribute:: ABUSE_DETECTED :value: "ABUSE_DETECTED" - .. py:attribute:: ACCESS_TOKEN_FAILURE - :value: "ACCESS_TOKEN_FAILURE" - - .. py:attribute:: ALLOCATION_TIMEOUT - :value: "ALLOCATION_TIMEOUT" - - .. py:attribute:: ALLOCATION_TIMEOUT_NODE_DAEMON_NOT_READY - :value: "ALLOCATION_TIMEOUT_NODE_DAEMON_NOT_READY" - - .. py:attribute:: ALLOCATION_TIMEOUT_NO_HEALTHY_CLUSTERS - :value: "ALLOCATION_TIMEOUT_NO_HEALTHY_CLUSTERS" - - .. py:attribute:: ALLOCATION_TIMEOUT_NO_MATCHED_CLUSTERS - :value: "ALLOCATION_TIMEOUT_NO_MATCHED_CLUSTERS" - - .. py:attribute:: ALLOCATION_TIMEOUT_NO_READY_CLUSTERS - :value: "ALLOCATION_TIMEOUT_NO_READY_CLUSTERS" - - .. py:attribute:: ALLOCATION_TIMEOUT_NO_UNALLOCATED_CLUSTERS - :value: "ALLOCATION_TIMEOUT_NO_UNALLOCATED_CLUSTERS" - - .. py:attribute:: ALLOCATION_TIMEOUT_NO_WARMED_UP_CLUSTERS - :value: "ALLOCATION_TIMEOUT_NO_WARMED_UP_CLUSTERS" - .. py:attribute:: ATTACH_PROJECT_FAILURE :value: "ATTACH_PROJECT_FAILURE" .. py:attribute:: AWS_AUTHORIZATION_FAILURE :value: "AWS_AUTHORIZATION_FAILURE" - .. py:attribute:: AWS_INACCESSIBLE_KMS_KEY_FAILURE - :value: "AWS_INACCESSIBLE_KMS_KEY_FAILURE" - - .. py:attribute:: AWS_INSTANCE_PROFILE_UPDATE_FAILURE - :value: "AWS_INSTANCE_PROFILE_UPDATE_FAILURE" - .. 
py:attribute:: AWS_INSUFFICIENT_FREE_ADDRESSES_IN_SUBNET_FAILURE :value: "AWS_INSUFFICIENT_FREE_ADDRESSES_IN_SUBNET_FAILURE" .. py:attribute:: AWS_INSUFFICIENT_INSTANCE_CAPACITY_FAILURE :value: "AWS_INSUFFICIENT_INSTANCE_CAPACITY_FAILURE" - .. py:attribute:: AWS_INVALID_KEY_PAIR - :value: "AWS_INVALID_KEY_PAIR" - - .. py:attribute:: AWS_INVALID_KMS_KEY_STATE - :value: "AWS_INVALID_KMS_KEY_STATE" - .. py:attribute:: AWS_MAX_SPOT_INSTANCE_COUNT_EXCEEDED_FAILURE :value: "AWS_MAX_SPOT_INSTANCE_COUNT_EXCEEDED_FAILURE" .. py:attribute:: AWS_REQUEST_LIMIT_EXCEEDED :value: "AWS_REQUEST_LIMIT_EXCEEDED" - .. py:attribute:: AWS_RESOURCE_QUOTA_EXCEEDED - :value: "AWS_RESOURCE_QUOTA_EXCEEDED" - .. py:attribute:: AWS_UNSUPPORTED_FAILURE :value: "AWS_UNSUPPORTED_FAILURE" @@ -1196,9 +1171,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: AZURE_OPERATION_NOT_ALLOWED_EXCEPTION :value: "AZURE_OPERATION_NOT_ALLOWED_EXCEPTION" - .. py:attribute:: AZURE_PACKED_DEPLOYMENT_PARTIAL_FAILURE - :value: "AZURE_PACKED_DEPLOYMENT_PARTIAL_FAILURE" - .. py:attribute:: AZURE_QUOTA_EXCEEDED_EXCEPTION :value: "AZURE_QUOTA_EXCEEDED_EXCEPTION" @@ -1223,48 +1195,18 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: BOOTSTRAP_TIMEOUT_CLOUD_PROVIDER_EXCEPTION :value: "BOOTSTRAP_TIMEOUT_CLOUD_PROVIDER_EXCEPTION" - .. py:attribute:: BOOTSTRAP_TIMEOUT_DUE_TO_MISCONFIG - :value: "BOOTSTRAP_TIMEOUT_DUE_TO_MISCONFIG" - - .. py:attribute:: BUDGET_POLICY_LIMIT_ENFORCEMENT_ACTIVATED - :value: "BUDGET_POLICY_LIMIT_ENFORCEMENT_ACTIVATED" - - .. py:attribute:: BUDGET_POLICY_RESOLUTION_FAILURE - :value: "BUDGET_POLICY_RESOLUTION_FAILURE" - - .. py:attribute:: CLOUD_ACCOUNT_SETUP_FAILURE - :value: "CLOUD_ACCOUNT_SETUP_FAILURE" - - .. py:attribute:: CLOUD_OPERATION_CANCELLED - :value: "CLOUD_OPERATION_CANCELLED" - .. py:attribute:: CLOUD_PROVIDER_DISK_SETUP_FAILURE :value: "CLOUD_PROVIDER_DISK_SETUP_FAILURE" - .. 
py:attribute:: CLOUD_PROVIDER_INSTANCE_NOT_LAUNCHED - :value: "CLOUD_PROVIDER_INSTANCE_NOT_LAUNCHED" - .. py:attribute:: CLOUD_PROVIDER_LAUNCH_FAILURE :value: "CLOUD_PROVIDER_LAUNCH_FAILURE" - .. py:attribute:: CLOUD_PROVIDER_LAUNCH_FAILURE_DUE_TO_MISCONFIG - :value: "CLOUD_PROVIDER_LAUNCH_FAILURE_DUE_TO_MISCONFIG" - .. py:attribute:: CLOUD_PROVIDER_RESOURCE_STOCKOUT :value: "CLOUD_PROVIDER_RESOURCE_STOCKOUT" - .. py:attribute:: CLOUD_PROVIDER_RESOURCE_STOCKOUT_DUE_TO_MISCONFIG - :value: "CLOUD_PROVIDER_RESOURCE_STOCKOUT_DUE_TO_MISCONFIG" - .. py:attribute:: CLOUD_PROVIDER_SHUTDOWN :value: "CLOUD_PROVIDER_SHUTDOWN" - .. py:attribute:: CLUSTER_OPERATION_THROTTLED - :value: "CLUSTER_OPERATION_THROTTLED" - - .. py:attribute:: CLUSTER_OPERATION_TIMEOUT - :value: "CLUSTER_OPERATION_TIMEOUT" - .. py:attribute:: COMMUNICATION_LOST :value: "COMMUNICATION_LOST" @@ -1274,111 +1216,30 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: CONTROL_PLANE_REQUEST_FAILURE :value: "CONTROL_PLANE_REQUEST_FAILURE" - .. py:attribute:: CONTROL_PLANE_REQUEST_FAILURE_DUE_TO_MISCONFIG - :value: "CONTROL_PLANE_REQUEST_FAILURE_DUE_TO_MISCONFIG" - .. py:attribute:: DATABASE_CONNECTION_FAILURE :value: "DATABASE_CONNECTION_FAILURE" - .. py:attribute:: DATA_ACCESS_CONFIG_CHANGED - :value: "DATA_ACCESS_CONFIG_CHANGED" - .. py:attribute:: DBFS_COMPONENT_UNHEALTHY :value: "DBFS_COMPONENT_UNHEALTHY" - .. py:attribute:: DISASTER_RECOVERY_REPLICATION - :value: "DISASTER_RECOVERY_REPLICATION" - .. py:attribute:: DOCKER_IMAGE_PULL_FAILURE :value: "DOCKER_IMAGE_PULL_FAILURE" - .. py:attribute:: DRIVER_EVICTION - :value: "DRIVER_EVICTION" - - .. py:attribute:: DRIVER_LAUNCH_TIMEOUT - :value: "DRIVER_LAUNCH_TIMEOUT" - - .. py:attribute:: DRIVER_NODE_UNREACHABLE - :value: "DRIVER_NODE_UNREACHABLE" - - .. py:attribute:: DRIVER_OUT_OF_DISK - :value: "DRIVER_OUT_OF_DISK" - - .. py:attribute:: DRIVER_OUT_OF_MEMORY - :value: "DRIVER_OUT_OF_MEMORY" - - .. 
py:attribute:: DRIVER_POD_CREATION_FAILURE - :value: "DRIVER_POD_CREATION_FAILURE" - - .. py:attribute:: DRIVER_UNEXPECTED_FAILURE - :value: "DRIVER_UNEXPECTED_FAILURE" - .. py:attribute:: DRIVER_UNREACHABLE :value: "DRIVER_UNREACHABLE" .. py:attribute:: DRIVER_UNRESPONSIVE :value: "DRIVER_UNRESPONSIVE" - .. py:attribute:: DYNAMIC_SPARK_CONF_SIZE_EXCEEDED - :value: "DYNAMIC_SPARK_CONF_SIZE_EXCEEDED" - - .. py:attribute:: EOS_SPARK_IMAGE - :value: "EOS_SPARK_IMAGE" - .. py:attribute:: EXECUTION_COMPONENT_UNHEALTHY :value: "EXECUTION_COMPONENT_UNHEALTHY" - .. py:attribute:: EXECUTOR_POD_UNSCHEDULED - :value: "EXECUTOR_POD_UNSCHEDULED" - - .. py:attribute:: GCP_API_RATE_QUOTA_EXCEEDED - :value: "GCP_API_RATE_QUOTA_EXCEEDED" - - .. py:attribute:: GCP_FORBIDDEN - :value: "GCP_FORBIDDEN" - - .. py:attribute:: GCP_IAM_TIMEOUT - :value: "GCP_IAM_TIMEOUT" - - .. py:attribute:: GCP_INACCESSIBLE_KMS_KEY_FAILURE - :value: "GCP_INACCESSIBLE_KMS_KEY_FAILURE" - - .. py:attribute:: GCP_INSUFFICIENT_CAPACITY - :value: "GCP_INSUFFICIENT_CAPACITY" - - .. py:attribute:: GCP_IP_SPACE_EXHAUSTED - :value: "GCP_IP_SPACE_EXHAUSTED" - - .. py:attribute:: GCP_KMS_KEY_PERMISSION_DENIED - :value: "GCP_KMS_KEY_PERMISSION_DENIED" - - .. py:attribute:: GCP_NOT_FOUND - :value: "GCP_NOT_FOUND" - .. py:attribute:: GCP_QUOTA_EXCEEDED :value: "GCP_QUOTA_EXCEEDED" - .. py:attribute:: GCP_RESOURCE_QUOTA_EXCEEDED - :value: "GCP_RESOURCE_QUOTA_EXCEEDED" - - .. py:attribute:: GCP_SERVICE_ACCOUNT_ACCESS_DENIED - :value: "GCP_SERVICE_ACCOUNT_ACCESS_DENIED" - .. py:attribute:: GCP_SERVICE_ACCOUNT_DELETED :value: "GCP_SERVICE_ACCOUNT_DELETED" - .. py:attribute:: GCP_SERVICE_ACCOUNT_NOT_FOUND - :value: "GCP_SERVICE_ACCOUNT_NOT_FOUND" - - .. py:attribute:: GCP_SUBNET_NOT_READY - :value: "GCP_SUBNET_NOT_READY" - - .. py:attribute:: GCP_TRUSTED_IMAGE_PROJECTS_VIOLATED - :value: "GCP_TRUSTED_IMAGE_PROJECTS_VIOLATED" - - .. py:attribute:: GKE_BASED_CLUSTER_TERMINATION - :value: "GKE_BASED_CLUSTER_TERMINATION" - .. 
py:attribute:: GLOBAL_INIT_SCRIPT_FAILURE :value: "GLOBAL_INIT_SCRIPT_FAILURE" @@ -1391,51 +1252,24 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: INACTIVITY :value: "INACTIVITY" - .. py:attribute:: INIT_CONTAINER_NOT_FINISHED - :value: "INIT_CONTAINER_NOT_FINISHED" - .. py:attribute:: INIT_SCRIPT_FAILURE :value: "INIT_SCRIPT_FAILURE" .. py:attribute:: INSTANCE_POOL_CLUSTER_FAILURE :value: "INSTANCE_POOL_CLUSTER_FAILURE" - .. py:attribute:: INSTANCE_POOL_MAX_CAPACITY_REACHED - :value: "INSTANCE_POOL_MAX_CAPACITY_REACHED" - - .. py:attribute:: INSTANCE_POOL_NOT_FOUND - :value: "INSTANCE_POOL_NOT_FOUND" - .. py:attribute:: INSTANCE_UNREACHABLE :value: "INSTANCE_UNREACHABLE" - .. py:attribute:: INSTANCE_UNREACHABLE_DUE_TO_MISCONFIG - :value: "INSTANCE_UNREACHABLE_DUE_TO_MISCONFIG" - - .. py:attribute:: INTERNAL_CAPACITY_FAILURE - :value: "INTERNAL_CAPACITY_FAILURE" - .. py:attribute:: INTERNAL_ERROR :value: "INTERNAL_ERROR" .. py:attribute:: INVALID_ARGUMENT :value: "INVALID_ARGUMENT" - .. py:attribute:: INVALID_AWS_PARAMETER - :value: "INVALID_AWS_PARAMETER" - - .. py:attribute:: INVALID_INSTANCE_PLACEMENT_PROTOCOL - :value: "INVALID_INSTANCE_PLACEMENT_PROTOCOL" - .. py:attribute:: INVALID_SPARK_IMAGE :value: "INVALID_SPARK_IMAGE" - .. py:attribute:: INVALID_WORKER_IMAGE_FAILURE - :value: "INVALID_WORKER_IMAGE_FAILURE" - - .. py:attribute:: IN_PENALTY_BOX - :value: "IN_PENALTY_BOX" - .. py:attribute:: IP_EXHAUSTION_FAILURE :value: "IP_EXHAUSTION_FAILURE" @@ -1448,57 +1282,30 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: K8S_DBR_CLUSTER_LAUNCH_TIMEOUT :value: "K8S_DBR_CLUSTER_LAUNCH_TIMEOUT" - .. py:attribute:: LAZY_ALLOCATION_TIMEOUT - :value: "LAZY_ALLOCATION_TIMEOUT" - - .. py:attribute:: MAINTENANCE_MODE - :value: "MAINTENANCE_MODE" - .. py:attribute:: METASTORE_COMPONENT_UNHEALTHY :value: "METASTORE_COMPONENT_UNHEALTHY" .. 
py:attribute:: NEPHOS_RESOURCE_MANAGEMENT :value: "NEPHOS_RESOURCE_MANAGEMENT" - .. py:attribute:: NETVISOR_SETUP_TIMEOUT - :value: "NETVISOR_SETUP_TIMEOUT" - .. py:attribute:: NETWORK_CONFIGURATION_FAILURE :value: "NETWORK_CONFIGURATION_FAILURE" .. py:attribute:: NFS_MOUNT_FAILURE :value: "NFS_MOUNT_FAILURE" - .. py:attribute:: NO_MATCHED_K8S - :value: "NO_MATCHED_K8S" - - .. py:attribute:: NO_MATCHED_K8S_TESTING_TAG - :value: "NO_MATCHED_K8S_TESTING_TAG" - .. py:attribute:: NPIP_TUNNEL_SETUP_FAILURE :value: "NPIP_TUNNEL_SETUP_FAILURE" .. py:attribute:: NPIP_TUNNEL_TOKEN_FAILURE :value: "NPIP_TUNNEL_TOKEN_FAILURE" - .. py:attribute:: POD_ASSIGNMENT_FAILURE - :value: "POD_ASSIGNMENT_FAILURE" - - .. py:attribute:: POD_SCHEDULING_FAILURE - :value: "POD_SCHEDULING_FAILURE" - .. py:attribute:: REQUEST_REJECTED :value: "REQUEST_REJECTED" .. py:attribute:: REQUEST_THROTTLED :value: "REQUEST_THROTTLED" - .. py:attribute:: RESOURCE_USAGE_BLOCKED - :value: "RESOURCE_USAGE_BLOCKED" - - .. py:attribute:: SECRET_CREATION_FAILURE - :value: "SECRET_CREATION_FAILURE" - .. py:attribute:: SECRET_RESOLUTION_ERROR :value: "SECRET_RESOLUTION_ERROR" @@ -1508,9 +1315,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: SELF_BOOTSTRAP_FAILURE :value: "SELF_BOOTSTRAP_FAILURE" - .. py:attribute:: SERVERLESS_LONG_RUNNING_TERMINATED - :value: "SERVERLESS_LONG_RUNNING_TERMINATED" - .. py:attribute:: SKIPPED_SLOW_NODES :value: "SKIPPED_SLOW_NODES" @@ -1523,33 +1327,15 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: SPARK_IMAGE_DOWNLOAD_FAILURE :value: "SPARK_IMAGE_DOWNLOAD_FAILURE" - .. py:attribute:: SPARK_IMAGE_DOWNLOAD_THROTTLED - :value: "SPARK_IMAGE_DOWNLOAD_THROTTLED" - - .. py:attribute:: SPARK_IMAGE_NOT_FOUND - :value: "SPARK_IMAGE_NOT_FOUND" - .. py:attribute:: SPARK_STARTUP_FAILURE :value: "SPARK_STARTUP_FAILURE" .. 
py:attribute:: SPOT_INSTANCE_TERMINATION :value: "SPOT_INSTANCE_TERMINATION" - .. py:attribute:: SSH_BOOTSTRAP_FAILURE - :value: "SSH_BOOTSTRAP_FAILURE" - .. py:attribute:: STORAGE_DOWNLOAD_FAILURE :value: "STORAGE_DOWNLOAD_FAILURE" - .. py:attribute:: STORAGE_DOWNLOAD_FAILURE_DUE_TO_MISCONFIG - :value: "STORAGE_DOWNLOAD_FAILURE_DUE_TO_MISCONFIG" - - .. py:attribute:: STORAGE_DOWNLOAD_FAILURE_SLOW - :value: "STORAGE_DOWNLOAD_FAILURE_SLOW" - - .. py:attribute:: STORAGE_DOWNLOAD_FAILURE_THROTTLED - :value: "STORAGE_DOWNLOAD_FAILURE_THROTTLED" - .. py:attribute:: STS_CLIENT_SETUP_FAILURE :value: "STS_CLIENT_SETUP_FAILURE" @@ -1565,9 +1351,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: UNEXPECTED_LAUNCH_FAILURE :value: "UNEXPECTED_LAUNCH_FAILURE" - .. py:attribute:: UNEXPECTED_POD_RECREATION - :value: "UNEXPECTED_POD_RECREATION" - .. py:attribute:: UNKNOWN :value: "UNKNOWN" @@ -1577,9 +1360,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: UPDATE_INSTANCE_PROFILE_FAILURE :value: "UPDATE_INSTANCE_PROFILE_FAILURE" - .. py:attribute:: USER_INITIATED_VM_TERMINATION - :value: "USER_INITIATED_VM_TERMINATION" - .. py:attribute:: USER_REQUEST :value: "USER_REQUEST" @@ -1592,9 +1372,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: WORKSPACE_CONFIGURATION_ERROR :value: "WORKSPACE_CONFIGURATION_ERROR" - .. py:attribute:: WORKSPACE_UPDATE - :value: "WORKSPACE_UPDATE" - .. py:class:: TerminationReasonType type of the termination diff --git a/docs/dbdataclasses/dashboards.rst b/docs/dbdataclasses/dashboards.rst index 776aac603..d1639d266 100644 --- a/docs/dbdataclasses/dashboards.rst +++ b/docs/dbdataclasses/dashboards.rst @@ -157,9 +157,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. 
py:attribute:: FUNCTION_CALL_MISSING_PARAMETER_EXCEPTION :value: "FUNCTION_CALL_MISSING_PARAMETER_EXCEPTION" - .. py:attribute:: GENERATED_SQL_QUERY_TOO_LONG_EXCEPTION - :value: "GENERATED_SQL_QUERY_TOO_LONG_EXCEPTION" - .. py:attribute:: GENERIC_CHAT_COMPLETION_EXCEPTION :value: "GENERIC_CHAT_COMPLETION_EXCEPTION" @@ -202,9 +199,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: MESSAGE_UPDATED_WHILE_EXECUTING_EXCEPTION :value: "MESSAGE_UPDATED_WHILE_EXECUTING_EXCEPTION" - .. py:attribute:: MISSING_SQL_QUERY_EXCEPTION - :value: "MISSING_SQL_QUERY_EXCEPTION" - .. py:attribute:: NO_DEPLOYMENTS_AVAILABLE_TO_WORKSPACE :value: "NO_DEPLOYMENTS_AVAILABLE_TO_WORKSPACE" diff --git a/docs/dbdataclasses/jobs.rst b/docs/dbdataclasses/jobs.rst index fa5af4189..19f1a2208 100644 --- a/docs/dbdataclasses/jobs.rst +++ b/docs/dbdataclasses/jobs.rst @@ -482,9 +482,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo PerformanceTarget defines how performant (lower latency) or cost efficient the execution of run on serverless compute should be. The performance mode on the job or pipeline should map to a performance setting that is passed to Cluster Manager (see cluster-common PerformanceTarget). - .. py:attribute:: BALANCED - :value: "BALANCED" - .. py:attribute:: COST_OPTIMIZED :value: "COST_OPTIMIZED" diff --git a/docs/dbdataclasses/marketplace.rst b/docs/dbdataclasses/marketplace.rst index 02e48c381..222c5065c 100644 --- a/docs/dbdataclasses/marketplace.rst +++ b/docs/dbdataclasses/marketplace.rst @@ -274,9 +274,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: LISTING :value: "LISTING" - .. py:attribute:: LISTING_RESOURCE - :value: "LISTING_RESOURCE" - .. py:attribute:: PROVIDER :value: "PROVIDER" @@ -464,9 +461,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: MarketplaceFileType - .. 
py:attribute:: APP - :value: "APP" - .. py:attribute:: EMBEDDED_NOTEBOOK :value: "EMBEDDED_NOTEBOOK" diff --git a/docs/dbdataclasses/serving.rst b/docs/dbdataclasses/serving.rst index 367f41b90..abaeb5355 100644 --- a/docs/dbdataclasses/serving.rst +++ b/docs/dbdataclasses/serving.rst @@ -79,10 +79,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: -.. autoclass:: ApiKeyAuth - :members: - :undoc-members: - .. autoclass:: AutoCaptureConfigInput :members: :undoc-members: @@ -95,10 +91,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: -.. autoclass:: BearerTokenAuth - :members: - :undoc-members: - .. autoclass:: BuildLogsResponse :members: :undoc-members: @@ -128,10 +120,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: -.. autoclass:: CustomProviderConfig - :members: - :undoc-members: - .. autoclass:: DataPlaneInfo :members: :undoc-members: @@ -252,9 +240,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: COHERE :value: "COHERE" - .. py:attribute:: CUSTOM - :value: "CUSTOM" - .. py:attribute:: DATABRICKS_MODEL_SERVING :value: "DATABRICKS_MODEL_SERVING" @@ -271,10 +256,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: -.. autoclass:: FallbackConfig - :members: - :undoc-members: - .. autoclass:: FoundationModel :members: :undoc-members: diff --git a/docs/dbdataclasses/sharing.rst b/docs/dbdataclasses/sharing.rst index f72c59b21..2e4437ef6 100644 --- a/docs/dbdataclasses/sharing.rst +++ b/docs/dbdataclasses/sharing.rst @@ -111,15 +111,15 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: -.. autoclass:: DeltaSharingFunction +.. autoclass:: DeltaSharingFunctionDependency :members: :undoc-members: -.. 
autoclass:: DeltaSharingFunctionDependency +.. autoclass:: DeltaSharingTableDependency :members: :undoc-members: -.. autoclass:: DeltaSharingTableDependency +.. autoclass:: Function :members: :undoc-members: diff --git a/docs/workspace/compute/clusters.rst b/docs/workspace/compute/clusters.rst index e4423bc98..528cff321 100644 --- a/docs/workspace/compute/clusters.rst +++ b/docs/workspace/compute/clusters.rst @@ -66,6 +66,7 @@ `owner_username`. :param cluster_id: str + :param owner_username: str New owner of the cluster_id after this RPC. @@ -104,11 +105,8 @@ Create new cluster. Creates a new Spark cluster. This method will acquire new instances from the cloud provider if - necessary. This method is asynchronous; the returned ``cluster_id`` can be used to poll the cluster - status. When this method returns, the cluster will be in a ``PENDING`` state. The cluster will be - usable once it enters a ``RUNNING`` state. Note: Databricks may not be able to acquire some of the - requested nodes, due to cloud provider limitations (account limits, spot price, etc.) or transient - network issues. + necessary. Note: Databricks may not be able to acquire some of the requested nodes, due to cloud + provider limitations (account limits, spot price, etc.) or transient network issues. If Databricks acquires at least 85% of the requested on-demand nodes, cluster creation will succeed. Otherwise the cluster will terminate with an informative error message. @@ -181,17 +179,12 @@ standard clusters. * `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesn’t have UC nor passthrough enabled. :param docker_image: :class:`DockerImage` (optional) - Custom docker image BYOC :param driver_instance_pool_id: str (optional) The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not assigned. :param driver_node_type_id: str (optional) The node type of the Spark driver. 
Note that this field is optional; if unset, the driver node type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id and - node_type_id take precedence. :param enable_elastic_disk: bool (optional) Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk space when its Spark workers are running low on disk space. This feature requires specific AWS permissions @@ -278,7 +271,6 @@ `effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not. :param workload_type: :class:`WorkloadType` (optional) - Cluster Attributes showing for clusters workload types. :returns: Long-running operation waiter for :class:`ClusterDetails`. @@ -451,17 +443,12 @@ standard clusters. * `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesn’t have UC nor passthrough enabled. :param docker_image: :class:`DockerImage` (optional) - Custom docker image BYOC :param driver_instance_pool_id: str (optional) The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not assigned. :param driver_node_type_id: str (optional) The node type of the Spark driver. Note that this field is optional; if unset, the driver node type will be set as the same value as `node_type_id` defined above. - - This field, along with node_type_id, should not be set if virtual_cluster_size is set. If both - driver_node_type_id, node_type_id, and virtual_cluster_size are specified, driver_node_type_id and - node_type_id take precedence. 
:param enable_elastic_disk: bool (optional) Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk space when its Spark workers are running low on disk space. This feature requires specific AWS permissions @@ -548,7 +535,6 @@ `effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not. :param workload_type: :class:`WorkloadType` (optional) - Cluster Attributes showing for clusters workload types. :returns: Long-running operation waiter for :class:`ClusterDetails`. @@ -617,7 +603,8 @@ List cluster activity events. Retrieves a list of events about the activity of a cluster. This API is paginated. If there are more - events to read, the response includes all the parameters necessary to request the next page of events. + events to read, the response includes all the parameters necessary to request the next page of + events. :param cluster_id: str The ID of the cluster to retrieve events about. @@ -821,6 +808,7 @@ cluster that is already pinned will have no effect. This API can only be called by workspace admins. :param cluster_id: str + @@ -923,6 +911,7 @@ :param cluster_id: str The cluster to be started. :param restart_user: str (optional) + :returns: Long-running operation waiter for :class:`ClusterDetails`. @@ -1050,10 +1039,11 @@ Start terminated cluster. Starts a terminated Spark cluster with the supplied ID. This works similar to `createCluster` except: - - The previous cluster id and attributes are preserved. - The cluster starts with the last specified - cluster size. - If the previous cluster was an autoscaling cluster, the current cluster starts with - the minimum number of nodes. - If the cluster is not currently in a ``TERMINATED`` state, nothing will - happen. - Clusters launched to run a job cannot be started. + + * The previous cluster id and attributes are preserved. 
* The cluster starts with the last specified + cluster size. * If the previous cluster was an autoscaling cluster, the current cluster starts with + the minimum number of nodes. * If the cluster is not currently in a `TERMINATED` state, nothing will + happen. * Clusters launched to run a job cannot be started. :param cluster_id: str The cluster to be started. @@ -1104,6 +1094,7 @@ admins. :param cluster_id: str + @@ -1124,18 +1115,10 @@ :param cluster_id: str ID of the cluster. :param update_mask: str - Used to specify which cluster attributes and size fields to update. See https://google.aip.dev/161 - for more details. - - The field mask must be a single string, with multiple fields separated by commas (no spaces). The - field path is relative to the resource object, using a dot (`.`) to navigate sub-fields (e.g., - `author.given_name`). Specification of elements in sequence or map fields is not allowed, as only - the entire collection field can be specified. Field names must exactly match the resource field - names. - - A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the - fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the API - changes in the future. + Specifies which fields of the cluster will be updated. This is required in the POST request. The + update mask should be supplied as a single string. To specify multiple fields, separate them with + commas (no spaces). To delete a field from a cluster configuration, add it to the `update_mask` + string but omit it from the `cluster` object. :param cluster: :class:`UpdateClusterResource` (optional) The cluster to be updated. diff --git a/docs/workspace/iam/groups.rst b/docs/workspace/iam/groups.rst index 0b62b675a..8eb4ccbe2 100644 --- a/docs/workspace/iam/groups.rst +++ b/docs/workspace/iam/groups.rst @@ -187,7 +187,7 @@ Partially updates the details of a group. :param id: str - Unique ID in the Databricks workspace. 
+ Unique ID for a group in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/workspace/iam/service_principals.rst b/docs/workspace/iam/service_principals.rst index 74a498b00..ec893c807 100644 --- a/docs/workspace/iam/service_principals.rst +++ b/docs/workspace/iam/service_principals.rst @@ -176,7 +176,7 @@ Partially updates the details of a single service principal in the Databricks workspace. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a service principal in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/workspace/iam/users.rst b/docs/workspace/iam/users.rst index 76837ac54..5edacca5f 100644 --- a/docs/workspace/iam/users.rst +++ b/docs/workspace/iam/users.rst @@ -55,7 +55,8 @@ External ID is not currently supported. It is reserved for future use. :param groups: List[:class:`ComplexValue`] (optional) :param id: str (optional) - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. :param name: :class:`Name` (optional) :param roles: List[:class:`ComplexValue`] (optional) Corresponds to AWS instance profile/arn role. @@ -239,7 +240,7 @@ Partially updates a user resource by applying the supplied operations on specific user attributes. :param id: str - Unique ID in the Databricks workspace. + Unique ID for a user in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) :param schemas: List[:class:`PatchSchema`] (optional) The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. 
@@ -284,7 +285,8 @@ Replaces a user's information with the data supplied in request. :param id: str - Databricks user ID. + Databricks user ID. This is automatically set by Databricks. Any value provided by the client will + be ignored. :param active: bool (optional) If this user is active :param display_name: str (optional) diff --git a/docs/workspace/ml/forecasting.rst b/docs/workspace/ml/forecasting.rst index bb667b3fc..5f5bddd8a 100644 --- a/docs/workspace/ml/forecasting.rst +++ b/docs/workspace/ml/forecasting.rst @@ -6,7 +6,7 @@ The Forecasting API allows you to create and get serverless forecasting experiments - .. py:method:: create_experiment(train_data_path: str, target_column: str, time_column: str, forecast_granularity: str, forecast_horizon: int [, custom_weights_column: Optional[str], experiment_path: Optional[str], holiday_regions: Optional[List[str]], max_runtime: Optional[int], prediction_data_path: Optional[str], primary_metric: Optional[str], register_to: Optional[str], split_column: Optional[str], timeseries_identifier_columns: Optional[List[str]], training_frameworks: Optional[List[str]]]) -> Wait[ForecastingExperiment] + .. py:method:: create_experiment(train_data_path: str, target_column: str, time_column: str, data_granularity_unit: str, forecast_horizon: int [, custom_weights_column: Optional[str], data_granularity_quantity: Optional[int], experiment_path: Optional[str], holiday_regions: Optional[List[str]], max_runtime: Optional[int], prediction_data_path: Optional[str], primary_metric: Optional[str], register_to: Optional[str], split_column: Optional[str], timeseries_identifier_columns: Optional[List[str]], training_frameworks: Optional[List[str]]]) -> Wait[ForecastingExperiment] Create a forecasting experiment. @@ -20,16 +20,23 @@ this column will be used as the ground truth for model training. :param time_column: str Name of the column in the input training table that represents the timestamp of each row. 
- :param forecast_granularity: str - The granularity of the forecast. This defines the time interval between consecutive rows in the time - series data. Possible values: '1 second', '1 minute', '5 minutes', '10 minutes', '15 minutes', '30 - minutes', 'Hourly', 'Daily', 'Weekly', 'Monthly', 'Quarterly', 'Yearly'. + :param data_granularity_unit: str + The time unit of the input data granularity. Together with data_granularity_quantity field, this + defines the time interval between consecutive rows in the time series data. Possible values: * 'W' + (weeks) * 'D' / 'days' / 'day' * 'hours' / 'hour' / 'hr' / 'h' * 'm' / 'minute' / 'min' / 'minutes' + / 'T' * 'S' / 'seconds' / 'sec' / 'second' * 'M' / 'month' / 'months' * 'Q' / 'quarter' / 'quarters' + * 'Y' / 'year' / 'years' :param forecast_horizon: int The number of time steps into the future for which predictions should be made. This value represents - a multiple of forecast_granularity determining how far ahead the model will forecast. + a multiple of data_granularity_unit and data_granularity_quantity determining how far ahead the + model will forecast. :param custom_weights_column: str (optional) Name of the column in the input training table used to customize the weight for each time series to calculate weighted metrics. + :param data_granularity_quantity: int (optional) + The quantity of the input data granularity. Together with data_granularity_unit field, this defines + the time interval between consecutive rows in the time series data. For now, only 1 second, + 1/5/10/15/30 minutes, 1 hour, 1 day, 1 week, 1 month, 1 quarter, 1 year are supported. :param experiment_path: str (optional) The path to the created experiment. This is the path where the experiment will be stored in the workspace. @@ -62,7 +69,7 @@ See :method:wait_get_experiment_forecasting_succeeded for more details. - .. 
py:method:: create_experiment_and_wait(train_data_path: str, target_column: str, time_column: str, forecast_granularity: str, forecast_horizon: int [, custom_weights_column: Optional[str], experiment_path: Optional[str], holiday_regions: Optional[List[str]], max_runtime: Optional[int], prediction_data_path: Optional[str], primary_metric: Optional[str], register_to: Optional[str], split_column: Optional[str], timeseries_identifier_columns: Optional[List[str]], training_frameworks: Optional[List[str]], timeout: datetime.timedelta = 2:00:00]) -> ForecastingExperiment + .. py:method:: create_experiment_and_wait(train_data_path: str, target_column: str, time_column: str, data_granularity_unit: str, forecast_horizon: int [, custom_weights_column: Optional[str], data_granularity_quantity: Optional[int], experiment_path: Optional[str], holiday_regions: Optional[List[str]], max_runtime: Optional[int], prediction_data_path: Optional[str], primary_metric: Optional[str], register_to: Optional[str], split_column: Optional[str], timeseries_identifier_columns: Optional[List[str]], training_frameworks: Optional[List[str]], timeout: datetime.timedelta = 2:00:00]) -> ForecastingExperiment .. py:method:: get_experiment(experiment_id: str) -> ForecastingExperiment diff --git a/docs/workspace/pipelines/pipelines.rst b/docs/workspace/pipelines/pipelines.rst index 935724d82..38f440147 100644 --- a/docs/workspace/pipelines/pipelines.rst +++ b/docs/workspace/pipelines/pipelines.rst @@ -87,7 +87,7 @@ Unique identifier for this pipeline. :param ingestion_definition: :class:`IngestionPipelineDefinition` (optional) The configuration for a managed ingestion pipeline. These settings cannot be used with the - 'libraries', 'schema', 'target', or 'catalog' settings. + 'libraries', 'target' or 'catalog' settings. :param libraries: List[:class:`PipelineLibrary`] (optional) Libraries or code needed by this deployment. 
:param name: str (optional) @@ -105,15 +105,15 @@ Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown. :param schema: str (optional) - The default schema (database) where tables are read from or published to. + The default schema (database) where tables are read from or published to. The presence of this field + implies that the pipeline is in direct publishing mode. :param serverless: bool (optional) Whether serverless compute is enabled for this pipeline. :param storage: str (optional) DBFS root directory for storing checkpoints and tables. :param target: str (optional) - Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` must - be specified. To publish to Unity Catalog, also specify `catalog`. This legacy field is deprecated - for pipeline creation in favor of the `schema` field. + Target schema (database) to add tables in this pipeline to. If not specified, no data is published + to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify `catalog`. :param trigger: :class:`PipelineTrigger` (optional) Which pipeline trigger to use. Deprecated: Use `continuous` instead. @@ -485,7 +485,7 @@ Unique identifier for this pipeline. :param ingestion_definition: :class:`IngestionPipelineDefinition` (optional) The configuration for a managed ingestion pipeline. These settings cannot be used with the - 'libraries', 'schema', 'target', or 'catalog' settings. + 'libraries', 'target' or 'catalog' settings. :param libraries: List[:class:`PipelineLibrary`] (optional) Libraries or code needed by this deployment. :param name: str (optional) @@ -503,15 +503,15 @@ Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown. :param schema: str (optional) - The default schema (database) where tables are read from or published to. + The default schema (database) where tables are read from or published to. 
The presence of this field + implies that the pipeline is in direct publishing mode. :param serverless: bool (optional) Whether serverless compute is enabled for this pipeline. :param storage: str (optional) DBFS root directory for storing checkpoints and tables. :param target: str (optional) - Target schema (database) to add tables in this pipeline to. Exactly one of `schema` or `target` must - be specified. To publish to Unity Catalog, also specify `catalog`. This legacy field is deprecated - for pipeline creation in favor of the `schema` field. + Target schema (database) to add tables in this pipeline to. If not specified, no data is published + to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify `catalog`. :param trigger: :class:`PipelineTrigger` (optional) Which pipeline trigger to use. Deprecated: Use `continuous` instead. diff --git a/docs/workspace/serving/serving_endpoints.rst b/docs/workspace/serving/serving_endpoints.rst index ad99bfc30..83609fc09 100644 --- a/docs/workspace/serving/serving_endpoints.rst +++ b/docs/workspace/serving/serving_endpoints.rst @@ -209,7 +209,7 @@ :returns: :class:`PutResponse` - .. py:method:: put_ai_gateway(name: str [, fallback_config: Optional[FallbackConfig], guardrails: Optional[AiGatewayGuardrails], inference_table_config: Optional[AiGatewayInferenceTableConfig], rate_limits: Optional[List[AiGatewayRateLimit]], usage_tracking_config: Optional[AiGatewayUsageTrackingConfig]]) -> PutAiGatewayResponse + .. py:method:: put_ai_gateway(name: str [, guardrails: Optional[AiGatewayGuardrails], inference_table_config: Optional[AiGatewayInferenceTableConfig], rate_limits: Optional[List[AiGatewayRateLimit]], usage_tracking_config: Optional[AiGatewayUsageTrackingConfig]]) -> PutAiGatewayResponse Update AI Gateway of a serving endpoint. @@ -218,9 +218,6 @@ :param name: str The name of the serving endpoint whose AI Gateway is being updated. This field is required. 
- :param fallback_config: :class:`FallbackConfig` (optional) - Configuration for traffic fallback which auto fallbacks to other served entities if the request to a - served entity fails with certain error codes, to increase availability. :param guardrails: :class:`AiGatewayGuardrails` (optional) Configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses. :param inference_table_config: :class:`AiGatewayInferenceTableConfig` (optional)