|Property|Description|Required|
|---|---|---|
|description|Text describing what the activity does.|No|
|type|For the Databricks Job activity, the activity type is DatabricksJob.|Yes|
|linkedServiceName|Name of the Databricks linked service on which the Databricks job runs. To learn about this linked service, see the [Compute linked services](compute-linked-services.md) article.|Yes|
|jobId|The ID of the job to be run in the Databricks workspace.|Yes|
|jobParameters|An array of key-value pairs. Job parameters can be used for each activity run. If the job takes a parameter that isn't specified, the default value from the job is used. Find more on parameters in [Databricks Jobs](https://docs.databricks.com/api/latest/jobs.html#jobsparampair).|No|
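
For reference, here's a minimal sketch of what a Databricks Job activity definition with these properties might look like. The activity name, linked service name, job ID, and parameter names are placeholders, so check the exact schema against your service version.

```json
{
    "activity": {
        "name": "MyDatabricksJobActivity",
        "description": "Runs an existing Databricks job",
        "type": "DatabricksJob",
        "linkedServiceName": {
            "referenceName": "AzureDatabricksLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "jobId": "1234567890",
            "jobParameters": {
                "inputPath": "/data/in",
                "outputPath": "/data/out"
            }
        }
    }
}
```
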
## Passing parameters between jobs and pipelines

You can pass parameters to jobs by using the *jobParameters* property in the Databricks activity.
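
For example, a pipeline parameter can be forwarded to the job through *jobParameters* with an expression. This is a sketch, and the parameter name `inputPath` is hypothetical:

```json
"typeProperties": {
    "jobId": "1234567890",
    "jobParameters": {
        "inputPath": "@pipeline().parameters.inputPath"
    }
}
```
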
In certain cases, you might need to pass values back from the job to the service, where they can be used for control flow (conditional checks) in the service or consumed by downstream activities (the size limit is 2 MB).

1. In your job, you can call `dbutils.job.exit("returnValue")` and the corresponding "returnValue" is returned to the service.

1. You can consume the output in the service by using an expression such as `@{activity('databricks job activity name').output.runOutput}`.

> [!IMPORTANT]
> If you're passing a JSON object, you can retrieve values by appending property names. Example: `@{activity('databricks job activity name').output.runOutput.PropertyName}`
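
As an illustration, a downstream activity can capture the returned value. The sketch below uses a hypothetical Set Variable activity and the activity name from the expression above:

```json
{
    "name": "CaptureJobOutput",
    "type": "SetVariable",
    "dependsOn": [
        {
            "activity": "databricks job activity name",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    "typeProperties": {
        "variableName": "jobResult",
        "value": "@{activity('databricks job activity name').output.runOutput}"
    }
}
```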