**CHANGELOG.md** (7 additions, 1 deletion)

````diff
@@ -1,4 +1,10 @@
-## dbt-databricks 1.10.6 (TBD)
+## dbt-databricks 1.10.7 (TBD)
+
+## dbt-databricks 1.10.6 (July 30, 2025)
+
+### Fixes
+
+- Fix bug introduced by the fix for https://github.com/databricks/dbt-databricks/issues/1083. `DESCRIBE TABLE EXTENDED .. AS JSON` is now only used for DBR versions 16.2 and above
````
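For context, the fix above amounts to gating the JSON variant of the metadata query on the Databricks Runtime version. A minimal illustrative sketch, not the adapter's actual code (`describe_table_sql` and the `(major, minor)` tuple are hypothetical names):

```python
# Sketch of the version gate described in the changelog entry above; not the
# adapter's real implementation. dbr_version is assumed to be a (major, minor)
# tuple parsed from the cluster's Databricks Runtime version string.
def describe_table_sql(relation: str, dbr_version: tuple[int, int]) -> str:
    # DESCRIBE TABLE EXTENDED ... AS JSON is only supported on DBR 16.2 and
    # above, so older runtimes fall back to the plain-text form.
    if dbr_version >= (16, 2):
        return f"DESCRIBE TABLE EXTENDED {relation} AS JSON"
    return f"DESCRIBE TABLE EXTENDED {relation}"
```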
**README.md** (17 additions, 4 deletions)

````diff
@@ -21,10 +21,11 @@ The `dbt-databricks` adapter contains all of the code enabling dbt to work with
 
 - **Easy setup**. No need to install an ODBC driver as the adapter uses pure Python APIs.
 - **Open by default**. For example, it uses the open and performant [Delta](https://delta.io/) table format by default. This has many benefits, including letting you use `MERGE` as the default incremental materialization strategy.
-- **Support for Unity Catalog**. dbt-databricks>=1.1.1 supports the 3-level namespace of Unity Catalog (catalog / schema / relations) so you can organize and secure your data the way you like.
+- **Support for Unity Catalog**. dbt-databricks supports the 3-level namespace of Unity Catalog (catalog / schema / relations) so you can organize and secure your data the way you like.
 - **Performance**. The adapter generates SQL expressions that are automatically accelerated by the native, vectorized [Photon](https://databricks.com/product/photon) execution engine.
 
 ## Choosing between dbt-databricks and dbt-spark
+
 If you are developing a dbt project on Databricks, we recommend using `dbt-databricks` for the reasons noted above.
 
 `dbt-spark` is an actively developed adapter which works with Databricks as well as Apache Spark anywhere it is hosted e.g. on AWS EMR.
````
````diff
@@ -34,11 +35,13 @@ If you are developing a dbt project on Databricks, we recommend using `dbt-datab
 ### Installation
 
 Install using pip:
+
 ```nofmt
 pip install dbt-databricks
 ```
 
 Upgrade to the latest version
+
 ```nofmt
 pip install --upgrade dbt-databricks
 ```
````
````diff
@@ -51,21 +54,29 @@ your_profile_name:
   outputs:
     dev:
       type: databricks
-      catalog: [optional catalog name, if you are using Unity Catalog, only available in dbt-databricks>=1.1.1]
+      catalog: [optional catalog name, if you are using Unity Catalog]
       schema: [database/schema name]
       host: [your.databrickshost.com]
       http_path: [/sql/your/http/path]
       token: [dapiXXXXXXXXXXXXXXXXXXXXXXX]
 ```
 
+### Documentation
+
+For comprehensive documentation on Databricks-specific features, configurations, and capabilities:
+
+- **[Databricks configurations](https://docs.getdbt.com/reference/resource-configs/databricks-configs)** - Complete reference for all Databricks-specific model configurations, materializations, and incremental strategies
+- **[Connect to Databricks](https://docs.getdbt.com/docs/core/connect-data-platform/databricks-setup)** - Setup and authentication guide
+
 ### Quick Starts
 
 The following quick starts will get you up and running with the `dbt-databricks` adapter:
-- [Developing your first dbt project](https://github.com/databricks/dbt-databricks/blob/main/docs/local-dev.md)
+
+- [Set up your dbt project with Databricks](https://docs.getdbt.com/guides/set-up-your-databricks-dbt-project)
 - Using dbt Cloud with Databricks ([Azure](https://docs.microsoft.com/en-us/azure/databricks/integrations/prep/dbt-cloud) | [AWS](https://docs.databricks.com/integrations/prep/dbt-cloud.html))
 - [Running dbt production jobs on Databricks Workflows](https://github.com/databricks/dbt-databricks/blob/main/docs/databricks-workflows.md)
 - [Using Unity Catalog with dbt-databricks](https://github.com/databricks/dbt-databricks/blob/main/docs/uc.md)
-- [Using GitHub Actions for dbt CI/CD on Databricks](https://github.com/databricks/dbt-databricks/blob/main/docs/github-actions.md)
+- [Continuous integration in dbt](https://docs.getdbt.com/docs/deploy/continuous-integration)
 - [Loading data from S3 into Delta using the databricks_copy_into macro](https://github.com/databricks/dbt-databricks/blob/main/docs/databricks-copy-into-macro-aws.md)
 - [Contribute to this repository](CONTRIBUTING.MD)
 
````
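An aside on these connection fields: the adapter talks to Databricks through the pure-Python `databricks-sql-connector` rather than ODBC, so the same `host`, `http_path`, and `token` values can be used to sanity-check connectivity outside dbt. A minimal sketch under that assumption, with placeholder credentials (dbt itself only ever reads them from `profiles.yml`):

```python
# Standalone connectivity check using databricks-sql-connector, the pure
# Python client underlying dbt-databricks. All three values below are
# placeholders mirroring the profiles.yml fields above.
from databricks import sql

connection = sql.connect(
    server_hostname="your.databrickshost.com",   # profiles.yml: host
    http_path="/sql/your/http/path",             # profiles.yml: http_path
    access_token="dapiXXXXXXXXXXXXXXXXXXXXXXX",  # profiles.yml: token
)
cursor = connection.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchall())  # a single row on success
cursor.close()
connection.close()
```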
````diff
@@ -77,7 +88,9 @@ The `dbt-databricks` adapter has been tested:
 - against `Databricks SQL` and `Databricks runtime releases 9.1 LTS` and later.
 
 ### Tips and Tricks
+
 ## Choosing compute for a Python model
+
 You can override the compute used for a specific Python model by setting the `http_path` property in model configuration. This can be useful if, for example, you want to run a Python model on an All Purpose cluster, while running SQL models on a SQL Warehouse. Note that this capability is only available for Python models.
````
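For illustration, a sketch of what that override looks like inside a Python model, assuming the standard dbt Python model entry point; the `http_path` value is a placeholder for an All Purpose cluster's path:

```python
# models/my_python_model.py -- sketch of routing one Python model to
# different compute via the http_path model config.
def model(dbt, session):
    dbt.config(
        materialized="table",
        # Placeholder All Purpose cluster path; SQL models keep using the
        # SQL Warehouse configured in profiles.yml.
        http_path="sql/protocolv1/o/<workspace-id>/<cluster-id>",
    )
    # session is a SparkSession on the chosen cluster; the returned
    # DataFrame is materialized by dbt.
    return session.sql("SELECT 1 AS id")
```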