@@ -70,6 +70,12 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
     If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
   instance_pool_id:
     description: The optional ID of the instance pool to which the cluster belongs.
+  is_single_node:
+    description: |
+      This field can only be used with `kind`.
+
+      When set to true, Databricks will automatically set single-node-related `custom_tags`, `spark_conf`, and `num_workers`.
+  kind: {}
   node_type_id:
     description: |
       This field encodes, through a single value, the resources available to each of
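The two new fields above only take effect together: `is_single_node` requires `kind` to be set. A hypothetical bundle fragment (resource name, node type, and the `CLASSIC_PREVIEW` kind value are assumptions for this sketch, not taken from the diff):

```yaml
# Hypothetical databricks.yml fragment illustrating the new fields.
resources:
  clusters:
    single_node_cluster:
      cluster_name: single-node-example
      spark_version: 15.4.x-scala2.12
      node_type_id: i3.xlarge
      kind: CLASSIC_PREVIEW
      # With kind set, Databricks fills in the single-node
      # custom_tags, spark_conf, and num_workers automatically.
      is_single_node: true
```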
@@ -119,6 +125,11 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
       SSH public key contents that will be added to each Spark node in this cluster. The
       corresponding private keys can be used to login with the user name `ubuntu` on port `2200`.
       Up to 10 keys can be specified.
+  use_ml_runtime:
+    description: |
+      This field can only be used with `kind`.
+
+      `effective_spark_version` is determined by `spark_version` (the DBR release), this field (`use_ml_runtime`), and whether `node_type_id` is a GPU node.
   workload_type: {}
 github.com/databricks/cli/bundle/config/resources.Dashboard:
   create_time:
@@ -759,6 +770,12 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
     If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
   instance_pool_id:
     description: The optional ID of the instance pool to which the cluster belongs.
+  is_single_node:
+    description: |
+      This field can only be used with `kind`.
+
+      When set to true, Databricks will automatically set single-node-related `custom_tags`, `spark_conf`, and `num_workers`.
+  kind: {}
   node_type_id:
     description: |
       This field encodes, through a single value, the resources available to each of
@@ -808,13 +825,24 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
       SSH public key contents that will be added to each Spark node in this cluster. The
       corresponding private keys can be used to login with the user name `ubuntu` on port `2200`.
       Up to 10 keys can be specified.
+  use_ml_runtime:
+    description: |
+      This field can only be used with `kind`.
+
+      `effective_spark_version` is determined by `spark_version` (the DBR release), this field (`use_ml_runtime`), and whether `node_type_id` is a GPU node.
   workload_type: {}
 github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode:
   _:
     description: |
       Data security mode decides what data governance model to use when accessing data
       from a cluster.
 
+      The following modes can only be used when `kind` is set.
+      * `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.
+      * `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.
+      * `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.
+
+      The following modes can be used regardless of `kind`.
       * `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.
       * `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.
       * `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.
@@ -827,6 +855,9 @@ github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode:
       * `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.
       * `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesn’t have UC nor passthrough enabled.
     enum:
+      - DATA_SECURITY_MODE_AUTO
+      - DATA_SECURITY_MODE_STANDARD
+      - DATA_SECURITY_MODE_DEDICATED
       - NONE
       - SINGLE_USER
       - USER_ISOLATION
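The new `kind`-gated access modes might be used from a bundle like this (cluster name and node type are invented for the sketch; `DATA_SECURITY_MODE_AUTO` is only valid when `kind` is set):

```yaml
# Hypothetical cluster fragment using the new auto access mode.
resources:
  clusters:
    shared_cluster:
      cluster_name: auto-access-mode-example
      spark_version: 15.4.x-scala2.12
      node_type_id: i3.xlarge
      num_workers: 2
      kind: CLASSIC_PREVIEW
      # Databricks resolves this to the most appropriate access
      # mode for the compute configuration.
      data_security_mode: DATA_SECURITY_MODE_AUTO
```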
@@ -1068,6 +1099,17 @@ github.com/databricks/databricks-sdk-go/service/dashboards.LifecycleState:
     enum:
       - ACTIVE
       - TRASHED
+github.com/databricks/databricks-sdk-go/service/jobs.CleanRoomsNotebookTask:
+  clean_room_name:
+    description: The clean room that the notebook belongs to.
+  etag:
+    description: |-
+      Checksum to validate the freshness of the notebook resource (i.e. the notebook being run is the latest version).
+      It can be fetched by calling the :method:cleanroomassets/get API.
+  notebook_base_parameters:
+    description: Base parameters to be used for the clean room notebook job.
+  notebook_name:
+    description: Name of the notebook being run.
 github.com/databricks/databricks-sdk-go/service/jobs.Condition:
   _:
     enum:
@@ -1346,10 +1388,10 @@ github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthMetric:
       Specifies the health metric that is being evaluated for a particular health rule.
 
       * `RUN_DURATION_SECONDS`: Expected total time for a run in seconds.
-      * `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Private Preview.
-      * `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Private Preview.
-      * `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Private Preview.
-      * `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Private Preview.
+      * `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Public Preview.
+      * `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Public Preview.
+      * `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Public Preview.
+      * `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Public Preview.
     enum:
       - RUN_DURATION_SECONDS
       - STREAMING_BACKLOG_BYTES
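A streaming backlog metric from this enum could appear in a job health rule roughly like this (job and rule values are invented for illustration):

```yaml
# Hypothetical job fragment with a health rule on a
# Public Preview streaming backlog metric.
resources:
  jobs:
    ingest_job:
      name: ingest-job
      health:
        rules:
          - metric: STREAMING_BACKLOG_SECONDS
            op: GREATER_THAN
            value: 600  # alert if consumer delay exceeds 10 minutes
```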
@@ -1651,6 +1693,10 @@ github.com/databricks/databricks-sdk-go/service/jobs.TableUpdateTriggerConfigura
       and can be used to wait for a series of table updates before triggering a run. The
       minimum allowed value is 60 seconds.
 github.com/databricks/databricks-sdk-go/service/jobs.Task:
+  clean_rooms_notebook_task:
+    description: |-
+      The task runs a [clean rooms](https://docs.databricks.com/en/clean-rooms/index.html) notebook
+      when the `clean_rooms_notebook_task` field is present.
   condition_task:
     description: |-
       The task evaluates a condition that can be used to control the execution of other tasks when the `condition_task` field is present.
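Putting the new task type and the `CleanRoomsNotebookTask` fields together, a task might look like this (the clean room name, notebook name, and parameters are invented for the sketch):

```yaml
# Hypothetical job task fragment using the new task type.
tasks:
  - task_key: clean_room_analysis
    clean_rooms_notebook_task:
      clean_room_name: my_clean_room
      notebook_name: shared_analysis
      notebook_base_parameters:
        date: "2024-01-01"
      # etag is omitted here; it can be fetched via the
      # cleanroomassets/get API to pin the notebook version.
```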