## Changes
Upgrade Go SDK to 0.75.0
The biggest difference is that `genkit` no longer infers a field's description from its custom type's description (for example, fields like `data_security_mode`), so we now add those descriptions to the annotations ourselves.
The result is that descriptions are currently missing for some of the enum flags.
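
For context, a rough Go sketch of the behavior change described above. The `annotation` struct and the `DataSecurityMode` doc comment here are hypothetical stand-ins, not this repository's actual generator code:

```go
// Minimal sketch (hypothetical types): a field's description used to be
// inferred from the doc comment on its custom enum type; after the SDK
// upgrade it has to be written on the field's annotation explicitly.
package main

import "fmt"

// DataSecurityMode is a custom enum-like type. Its doc comment is no longer
// picked up as the description of fields that use it.
type DataSecurityMode string

const DataSecurityModeSingleUser DataSecurityMode = "SINGLE_USER"

// annotation is a hypothetical stand-in for a generated field annotation.
type annotation struct {
	Field       string
	Description string
}

func main() {
	// The description is attached to the field annotation directly instead of
	// being copied from the DataSecurityMode type's documentation.
	a := annotation{
		Field:       "data_security_mode",
		Description: "Placeholder: description maintained by hand in the annotations file.",
	}
	fmt.Printf("%s: %s\n", a.Field, a.Description)
}
```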
## Tests
Existing tests pass
`acceptance/help/output.txt` (6 additions, 1 deletion)
```diff
@@ -57,7 +57,9 @@ Unity Catalog
   catalogs             A catalog is the first layer of Unity Catalog’s three-level namespace.
   connections          Connections allow for creating a connection to an external data source.
   credentials          A credential represents an authentication and authorization mechanism for accessing services on your cloud tenant.
+  external-lineage     External Lineage APIs enable defining and managing lineage relationships between Databricks objects and external systems.
   external-locations   An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path.
+  external-metadata    External Metadata objects enable customers to register and manage metadata about external systems within Unity Catalog.
   functions            Functions implement User-Defined Functions (UDFs) in Unity Catalog.
   grants               In Unity Catalog, data is secure by default.
   metastores           A metastore is the top-level container of objects in Unity Catalog.
```
```diff
@@ -122,14 +124,17 @@ Apps
 Clean Rooms
   clean-room-assets      Clean room assets are data and code objects — Tables, volumes, and notebooks that are shared with the clean room.
   clean-room-task-runs   Clean room task runs are the executions of notebooks in a clean room.
-  clean-rooms            A clean room uses Delta Sharing and serverless compute to provide a secure and privacy-protecting environment where multiple parties can work together on sensitive enterprise data without direct access to each other’s data.
+  clean-rooms            A clean room uses Delta Sharing and serverless compute to provide a secure and privacy-protecting environment where multiple parties can work together on sensitive enterprise data without direct access to each other's data.

 Database
   database               Database Instances provide access to a database via REST API or direct SQL.

 Quality Monitor v2
   quality-monitor-v2     Manage data quality of UC objects (currently support schema).

+OAuth
+  service-principal-secrets-proxy   These APIs enable administrators to manage service principal secrets at the workspace level.
```
```diff
-        Write-only setting. Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who created the job.
-
-        Either `user_name` or `service_principal_name` should be specified. If not, an error is thrown.
+        The user or service principal that the job runs as, if specified in the request.
+        This field indicates the explicit configuration of `run_as` for the job.
+        To find the value in all cases, explicit or implicit, use `run_as_user_name`.
     "schedule":
       "description": |-
         An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
```
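
The removed lines above also describe the write-side rule for `run_as`: either `user_name` or `service_principal_name` should be specified, otherwise the API throws an error. A minimal Go sketch of that check, using an illustrative `runAs` struct rather than the SDK's generated type:

```go
package main

import (
	"errors"
	"fmt"
)

// runAs is an illustrative stand-in for a job's run_as block, not the SDK's
// generated type.
type runAs struct {
	UserName             string
	ServicePrincipalName string
}

// validate enforces the rule from the removed description text: either
// user_name or service_principal_name should be specified; if neither is set,
// an error is returned.
func (r runAs) validate() error {
	if r.UserName == "" && r.ServicePrincipalName == "" {
		return errors.New("either user_name or service_principal_name must be specified")
	}
	return nil
}

func main() {
	fmt.Println(runAs{UserName: "someone@example.com"}.validate()) // <nil>
	fmt.Println(runAs{}.validate())                                // error: neither identity set
}
```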
```diff
         This is used as the root directory when editing the pipeline in the Databricks user interface and it is
         added to sys.path when executing Python sources during pipeline execution.
-      "x-databricks-preview": |-
-        PRIVATE
     "run_as":
       "description": |-
         Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created the pipeline.
```
```diff
         Used to specify how many calls are allowed for a key within the renewal_period.
     "key":
       "description": |-
-        Key field for a rate limit. Currently, only 'user' and 'endpoint' are supported,
+        Key field for a rate limit. Currently, 'user', 'user_group, 'service_principal', and 'endpoint' are supported,
         with 'endpoint' being the default if not specified.
+    "principal":
+      "description": |-
+        Principal field for a user, user group, or service principal to apply rate limiting to. Accepts a user email, group name, or service principal application ID.
     "renewal_period":
       "description": |-
         Renewal period field for a rate limit. Currently, only 'minute' is supported.
```
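
Putting the documented fields together, a rate limit entry combines `calls`, `renewal_period`, `key`, and now optionally `principal`. A minimal Go sketch with an illustrative `rateLimit` struct (not the SDK's generated type); the application ID below is made up:

```go
package main

import "fmt"

// rateLimit is an illustrative stand-in for a rate limit entry as described
// above; field names mirror the documented keys, not the SDK types.
type rateLimit struct {
	Calls         int    // how many calls are allowed within the renewal period
	RenewalPeriod string // currently only "minute" is supported
	Key           string // "user", "user_group", "service_principal", or "endpoint"
	Principal     string // user email, group name, or service principal application ID
}

func main() {
	// Limit one service principal (identified by its application ID) to
	// 100 calls per minute.
	rl := rateLimit{
		Calls:         100,
		RenewalPeriod: "minute",
		Key:           "service_principal",
		Principal:     "0a1b2c3d-ffff-4444-aaaa-9e8f7a6b5c4d",
	}
	fmt.Printf("%+v\n", rl)
}
```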