Commit 6049e8a

Better inline documentation for bundle commands (#3353)
## Changes

I had Cursor go through the bundle-examples repo and refer to those examples to improve the docs we have for our bundle commands, then edited the results to be correct.

## Why

Better docs.
1 parent 0a7bd1a commit 6049e8a

31 files changed (+451 −28 lines)

acceptance/bundle/help/bundle-deploy/output.txt

Lines changed: 8 additions & 1 deletion
@@ -1,6 +1,13 @@
 
 >>> [CLI] bundle deploy --help
-Deploy bundle
+Deploy bundle.
+
+Common patterns:
+  databricks bundle deploy                 # Deploy to default target (dev)
+  databricks bundle deploy --target dev    # Deploy to development
+  databricks bundle deploy --target prod   # Deploy to production
+
+See https://docs.databricks.com/en/dev-tools/bundles/index.html for more information.
 
 Usage:
   databricks bundle deploy [flags]

acceptance/bundle/help/bundle-deployment/output.txt

Lines changed: 15 additions & 1 deletion
@@ -1,6 +1,20 @@
 
 >>> [CLI] bundle deployment --help
-Deployment related commands
+Deployment related commands for managing bundle resource bindings.
+
+Use these commands to bind / unbind bundle definitions to existing workspace resources.
+
+Common workflow:
+  1. Generate configuration from existing resource:
+     databricks bundle generate job --existing-job-id 12345 --key my_job
+
+  2. Bind the bundle resource to the existing workspace resource:
+     databricks bundle deployment bind my_job 12345
+
+  3. Deploy updates - the bound resource will be updated in the workspace:
+     databricks bundle deploy
+
+After binding, the existing workspace resource will be managed by your bundle.
 
 Usage:
   databricks bundle deployment [command]
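For context, the bind example in this help text presumes the bundle already declares a resource under the key `my_job`. A minimal sketch of such a `databricks.yml` (the bundle name and job name here are illustrative, not taken from this commit):

```yaml
# Hypothetical databricks.yml sketch: declares the resource key `my_job`
# that `databricks bundle deployment bind my_job 12345` would bind to.
bundle:
  name: my_bundle

targets:
  dev:
    default: true

resources:
  jobs:
    my_job:
      name: My existing job
```

Once bound, `databricks bundle deploy` updates job 12345 in place rather than creating a new job.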

acceptance/bundle/help/bundle-destroy/output.txt

Lines changed: 12 additions & 1 deletion
@@ -1,6 +1,17 @@
 
 >>> [CLI] bundle destroy --help
-Destroy deployed bundle resources
+Destroy all resources deployed by this bundle from the workspace.
+
+This command removes all Databricks resources that were created by deploying
+this bundle.
+
+Examples:
+  databricks bundle destroy                # Destroy resources in default target
+  databricks bundle destroy --target prod  # Destroy resources in production target
+
+Typical use cases:
+  - Cleaning up development or testing targets
+  - Removing resources during environment decommissioning
 
 Usage:
   databricks bundle destroy [flags]

acceptance/bundle/help/bundle-generate-dashboard/output.txt

Lines changed: 31 additions & 1 deletion
@@ -1,6 +1,36 @@
 
 >>> [CLI] bundle generate dashboard --help
-Generate configuration for a dashboard
+Generate bundle configuration for an existing Databricks dashboard.
+
+This command downloads an existing AI/BI dashboard and creates bundle files
+that you can use to deploy the dashboard to other environments or manage it as code.
+
+Examples:
+  # Import dashboard by workspace path
+  databricks bundle generate dashboard --existing-path /Users/me/sales-dashboard \
+    --key sales_dash
+
+  # Import dashboard by ID
+  databricks bundle generate dashboard --existing-id abc123 --key analytics_dashboard
+
+  # Watch for changes to keep bundle in sync with UI modifications
+  databricks bundle generate dashboard --resource my_dashboard --watch --force
+
+What gets generated:
+  - Dashboard configuration YAML file with settings and a reference to the dashboard definition
+  - Dashboard definition (.lvdash.json) file with layout and queries
+
+Sync workflow for dashboard development:
+  When developing dashboards, you can modify them in the Databricks UI and sync
+  changes back to your bundle:
+
+  1. Make changes to dashboard in the Databricks UI
+  2. Run: databricks bundle generate dashboard --resource my_dashboard --force
+  3. Commit changes to version control
+  4. Deploy to other environments with: databricks bundle deploy --target prod
+
+The --watch flag continuously polls for remote changes and updates your local
+bundle files automatically, useful during active dashboard development.
 
 Usage:
   databricks bundle generate dashboard [flags]
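The help text above says generation produces a YAML file that references a separate `.lvdash.json` definition. A rough, hypothetical sketch of that pairing (field names and paths are assumptions for illustration, not taken from this commit):

```yaml
# Hypothetical sketch of a generated dashboard resource (illustrative only).
# file_path points at the downloaded .lvdash.json dashboard definition.
resources:
  dashboards:
    sales_dash:
      display_name: Sales Dashboard
      file_path: ../src/sales_dash.lvdash.json
```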

acceptance/bundle/help/bundle-generate-job/output.txt

Lines changed: 20 additions & 1 deletion
@@ -1,6 +1,25 @@
 
 >>> [CLI] bundle generate job --help
-Generate bundle configuration for a job
+Generate bundle configuration for an existing Databricks job.
+
+This command downloads an existing job's configuration and creates bundle files
+that you can use to deploy the job to other environments or manage it as code.
+
+Examples:
+  # Import a production job for version control
+  databricks bundle generate job --existing-job-id 12345 --key my_etl_job
+
+  # Specify custom directories for organization
+  databricks bundle generate job --existing-job-id 67890 \
+    --key data_pipeline --config-dir resources --source-dir src
+
+What gets generated:
+  - Job configuration YAML file in the resources directory
+  - Any associated notebook or Python files in the source directory
+
+After generation, you can deploy this job to other targets using:
+  databricks bundle deploy --target staging
+  databricks bundle deploy --target prod
 
 Usage:
   databricks bundle generate job [flags]
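For a sense of what "Job configuration YAML file in the resources directory" means in practice, here is a hypothetical sketch of the kind of file `generate job` emits (resource key, task, and notebook path are illustrative; the real output mirrors the source job):

```yaml
# Hypothetical sketch of a generated job configuration (illustrative only).
resources:
  jobs:
    my_etl_job:
      name: my_etl_job
      tasks:
        - task_key: main
          notebook_task:
            # Notebook downloaded into the source directory by the generate command
            notebook_path: ../src/etl_notebook.ipynb
```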

acceptance/bundle/help/bundle-generate-pipeline/output.txt

Lines changed: 20 additions & 1 deletion
@@ -1,6 +1,25 @@
 
 >>> [CLI] bundle generate pipeline --help
-Generate bundle configuration for a pipeline
+Generate bundle configuration for an existing Delta Live Tables pipeline.
+
+This command downloads an existing DLT pipeline's configuration and any associated
+notebooks, creating bundle files that you can use to deploy the pipeline to other
+environments or manage it as code.
+
+Examples:
+  # Import a production DLT pipeline
+  databricks bundle generate pipeline --existing-pipeline-id abc123 --key etl_pipeline
+
+  # Organize files in custom directories
+  databricks bundle generate pipeline --existing-pipeline-id def456 \
+    --key data_transformation --config-dir resources --source-dir src
+
+What gets generated:
+  - Pipeline configuration YAML file with settings and libraries
+  - Pipeline notebooks downloaded to the source directory
+
+After generation, you can deploy to other environments and modify settings
+like catalogs, schemas, and compute configurations per target.
 
 Usage:
   databricks bundle generate pipeline [flags]

acceptance/bundle/help/bundle-generate/output.txt

Lines changed: 14 additions & 1 deletion
@@ -1,6 +1,19 @@
 
 >>> [CLI] bundle generate --help
-Generate bundle configuration
+Generate bundle configuration from existing Databricks resources.
+
+Common patterns:
+  databricks bundle generate job --existing-job-id 123 --key my_job
+  databricks bundle generate dashboard --existing-path /my-dashboard --key sales_dash
+  databricks bundle generate dashboard --resource my_dashboard --watch --force  # Keep local copy in sync. Useful for development.
+  databricks bundle generate dashboard --resource my_dashboard --force          # Do a one-time sync.
+
+Complete migration workflow:
+  1. Generate: databricks bundle generate job --existing-job-id 123 --key my_job
+  2. Bind:     databricks bundle deployment bind my_job 123
+  3. Deploy:   databricks bundle deploy
+
+Use --key to specify the resource name in your bundle configuration.
 
 Usage:
   databricks bundle generate [command]

acceptance/bundle/help/bundle-init/output.txt

Lines changed: 10 additions & 1 deletion
@@ -1,6 +1,6 @@
 
 >>> [CLI] bundle init --help
-Initialize using a bundle template.
+Initialize using a bundle template to get started quickly.
 
 TEMPLATE_PATH optionally specifies which template to use. It can be one of the following:
 - default-python: The default Python template for Notebooks / Delta Live Tables / Workflows
@@ -11,6 +11,15 @@ TEMPLATE_PATH optionally specifies which template to use. It can be one of the f
 - a local file system path with a template directory
 - a Git repository URL, e.g. https://github.com/my/repository
 
+Examples:
+  databricks bundle init                    # Choose from built-in templates
+  databricks bundle init default-python     # Python jobs and notebooks
+  databricks bundle init dbt-sql            # dbt + SQL warehouse project
+  databricks bundle init --output-dir ./my-project
+
+After initialization:
+  databricks bundle deploy --target dev
+
 See https://docs.databricks.com/en/dev-tools/bundles/templates.html for more information on templates.
 
 Usage:

acceptance/bundle/help/bundle-open/output.txt

Lines changed: 8 additions & 1 deletion
@@ -1,6 +1,13 @@
 
 >>> [CLI] bundle open --help
-Open a resource in the browser
+Open a deployed bundle resource in the Databricks workspace.
+
+Examples:
+  databricks bundle open               # Prompts to select a resource to open
+  databricks bundle open my_job        # Open specific job in Workflows UI
+  databricks bundle open my_dashboard  # Open dashboard in browser
+
+Use after deployment to quickly navigate to your resources in the workspace.
 
 Usage:
   databricks bundle open [flags]

acceptance/bundle/help/bundle-schema/output.txt

Lines changed: 8 additions & 1 deletion
@@ -1,6 +1,13 @@
 
 >>> [CLI] bundle schema --help
-Generate JSON Schema for bundle configuration
+Generate JSON Schema for bundle configuration to enable validation and autocomplete.
+
+This command outputs the JSON Schema that describes the structure and validation
+rules for Databricks Asset Bundle configuration files.
+
+Common use cases:
+  - Configure IDE/editor validation for databricks.yml files
+  - Set up autocomplete and IntelliSense for bundle configuration
 
 Usage:
   databricks bundle schema [flags]
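One common way to wire the emitted schema into an editor (an assumption based on the yaml-language-server modeline convention used by the Red Hat YAML extension, not something this commit documents) is to save the schema to a file and reference it from the top of the config:

```yaml
# yaml-language-server: $schema=bundle_config_schema.json
# Assumes the schema was saved first with:
#   databricks bundle schema > bundle_config_schema.json
bundle:
  name: my_bundle
```

With this in place, the editor validates keys and offers completions as you edit databricks.yml.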
