Commit 5f686f2

Merge branch 'main' into 1.10.latest
2 parents: 4cc5413 + b6261f2

13 files changed: +354 additions, -28 deletions
Lines changed: 68 additions & 0 deletions

@@ -0,0 +1,68 @@
name: Check PR title format
description: Check if PR title follows conventional-commits format

permissions:
  pull-requests: write

on:
  pull_request:
    types:
      - opened # A pull request was created
      - edited # The title or body of a pull request was edited.
      - synchronize # A pull request's head branch was updated. For example, the head branch was updated from the base branch or new commits were pushed to the head branch.

jobs:
  pr-title:
    runs-on: linux-ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup node
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install conventional commit parser
        shell: bash
        run: npm install --global conventional-commits-parser

      - name: Validate PR title
        id: pr-format
        shell: bash
        env:
          PR_TITLE: ${{ github.event.pull_request.title }}
        # language=bash
        run: |
          echo "PR title: ${PR_TITLE}"

          # check if PR title follows conventional commits format
          # issue on parser does not support "!" for breaking change (https://github.com/conventional-changelog/conventional-changelog/issues/648)
          # so we override the regex to support it
          conventionalCommitResult=$(echo "${PR_TITLE}" | conventional-commits-parser -p "^(\w*)!?(?:\(([\w\$\.\-\* ]*)\))?\: (.*)$" | jq ".[].type")
          if [[ "${conventionalCommitResult}" != "null" ]]; then
            echo "Conventional commit type: ${conventionalCommitResult}"
            exit 0
          fi

          echo "Invalid PR title"
          exit 1

      - name: Add comment to warn user
        uses: marocchino/sticky-pull-request-comment@efaaab3fd41a9c3de579aba759d2552635e590fd # v2.8.0
        if: failure()
        with:
          header: pr-title-lint-error
          message: |
            Hey there and thank you for opening this pull request! :wave:
            We require pull request titles to follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/).

            Examples:
            - `feat(JIRA-123): My awesome feature`
            - `fix: My awesome fix`
            - `fix(name-of-impacted-package): My awesome fix`

      - name: Delete a previous comment when the issue has been resolved
        uses: marocchino/sticky-pull-request-comment@efaaab3fd41a9c3de579aba759d2552635e590fd # v2.8.0
        if: success()
        with:
          header: pr-title-lint-error
          delete: true
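To illustrate the check, here is a minimal Python sketch (not part of the commit; the helper name and sample titles are illustrative) that applies the same override regex the workflow passes to conventional-commits-parser:

```python
import re
from typing import Optional

# Same pattern the workflow supplies via `-p`, which also accepts "!" after the type
# so breaking-change titles like "fix!: ..." pass the check.
PATTERN = re.compile(r"^(\w*)!?(?:\(([\w\$\.\-\* ]*)\))?\: (.*)$")


def commit_type(title: str) -> Optional[str]:
    """Return the conventional-commit type of a PR title, or None if it does not match."""
    match = PATTERN.match(title)
    return match.group(1) if match else None


print(commit_type("feat(JIRA-123): My awesome feature"))  # feat
print(commit_type("fix!: correct a breaking edge case"))  # fix
print(commit_type("My awesome feature"))                  # None -> the workflow would exit 1
```

Titles that fail to parse leave the sticky warning comment in place; once a compliant title is pushed, the final step deletes it.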

CHANGELOG.md

Lines changed: 22 additions & 1 deletion

@@ -1,29 +1,49 @@
-## dbt-databricks 1.10.10 (TBD)
+## dbt-databricks 1.10.11 (TBD)
+
+## dbt-databricks 1.10.10 (August 20, 2025)
+
+### Fixes
+
+- Gate column comment syntax on DBR version for better compatibility ([1151](https://github.com/databricks/dbt-databricks/pull/1151))
+
+### Documentation
+
+- Update Databricks Job documentation to match current terminology ([1145](https://github.com/databricks/dbt-databricks/pull/1145))
 
 ## dbt-databricks 1.10.9 (August 7, 2025)
 
 ### Features
+
 - Support column tags for views using `ALTER TABLE`
 
 ### Under the hood
+
 - Revert `REPLACE USING` syntax being used for insert overwrite ([1025](https://github.com/databricks/dbt-databricks/issues/1025))
 
 ## dbt-databricks 1.10.8 (August 4, 2025)
 
 ### Features
+
 - Support insert_overwrite incremental strategy for SQL warehouses ([1025](https://github.com/databricks/dbt-databricks/issues/1025))
 
 ### Fixes
+
 - Add fallback logic for known error types for `DESCRIBE TABLE EXTENDED .. AS JSON` for better reliability ([1128](https://github.com/databricks/dbt-databricks/issues/1128))
 - Fix no-op logic for views that is causing some incremental materializations to be skipped ([1122](https://github.com/databricks/dbt-databricks/issues/1122))
 - Fix check constraints keep getting replaced [issue-1109](https://github.com/databricks/dbt-databricks/issues/1109)
 
 ### Under the Hood
+
 - Simplify connection management to align with base adapter. Connections are no longer cached per-thread
 
 ## dbt-databricks 1.10.7 (July 31, 2025)
 
+### Features
+
+- feat: add pr linting to enforce conventional commits [issue-1111](https://github.com/databricks/dbt-databricks/issues/1083)
+
 ### Fixes
+
 - Do not use `DESCRIBE TABLE EXTENDED .. AS JSON` for STs when DBR version < 17.1. Do not use at all for MVs (not yet supported)
 
 ## dbt-databricks 1.10.6 (July 30, 2025)
@@ -71,6 +91,7 @@
 - Fix column comments for streaming tables and materialized views ([1049](https://github.com/databricks/dbt-databricks/issues/1049))
 
 ### Under the Hood
+
 - Update to dbt-core 1.10.1
 - Update to dbt-common 1.24.0

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-version = "1.10.9"
+version = "1.10.10"

dbt/adapters/databricks/catalogs/_hive_metastore.py

Lines changed: 4 additions & 2 deletions

@@ -12,7 +12,7 @@ class HiveMetastoreCatalogIntegration(CatalogIntegration):
 
     def __init__(self, config: CatalogIntegrationConfig) -> None:
         super().__init__(config)
-        self.file_format: str = config.file_format
+        self.file_format: Optional[str] = config.file_format
 
     @property
     def location_root(self) -> Optional[str]:
@@ -34,7 +34,9 @@ def build_relation(self, model: RelationConfig) -> DatabricksCatalogRelation:
         """
         return DatabricksCatalogRelation(
             catalog_type=self.catalog_type,
-            catalog_name=self.catalog_name,
+            catalog_name=self.catalog_name
+            if self.catalog_name != constants.DEFAULT_HIVE_METASTORE_CATALOG.catalog_name
+            else model.database,
             table_format=parse_model.table_format(model) or self.table_format,
             file_format=parse_model.file_format(model) or self.file_format,
             external_volume=parse_model.location_root(model) or self.external_volume,

dbt/adapters/databricks/catalogs/_unity.py

Lines changed: 4 additions & 2 deletions

@@ -14,7 +14,7 @@ def __init__(self, config: CatalogIntegrationConfig) -> None:
         super().__init__(config)
         if location_root := config.adapter_properties.get("location_root"):
             self.external_volume: Optional[str] = location_root
-        self.file_format: str = config.file_format
+        self.file_format: Optional[str] = config.file_format
 
     @property
     def location_root(self) -> Optional[str]:
@@ -36,7 +36,9 @@ def build_relation(self, model: RelationConfig) -> DatabricksCatalogRelation:
         """
        return DatabricksCatalogRelation(
             catalog_type=self.catalog_type,
-            catalog_name=self.catalog_name,
+            catalog_name=self.catalog_name
+            if self.catalog_name != constants.DEFAULT_CATALOG.catalog_name
+            else model.database,
             table_format=parse_model.table_format(model) or self.table_format,
             file_format=parse_model.file_format(model) or self.file_format,
             external_volume=parse_model.location_root(model) or self.external_volume,
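Both catalog integrations now resolve the catalog name the same way. A minimal Python sketch of that fallback (with a stand-in default value, since the real constant lives in `dbt.adapters.databricks.constants`) might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for constants.DEFAULT_CATALOG.catalog_name (or DEFAULT_HIVE_METASTORE_CATALOG);
# the actual value is defined by the adapter, not here.
DEFAULT_CATALOG_NAME = "default_catalog"


@dataclass
class Model:
    database: Optional[str]


def resolve_catalog_name(integration_catalog_name: Optional[str], model: Model) -> Optional[str]:
    # Mirrors the new expression: keep the integration's catalog name unless it is the
    # default placeholder, in which case defer to the model's database.
    if integration_catalog_name != DEFAULT_CATALOG_NAME:
        return integration_catalog_name
    return model.database


print(resolve_catalog_name("dev_catalog", Model(database="analytics")))      # dev_catalog
print(resolve_catalog_name("default_catalog", Model(database="analytics")))  # analytics
```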

dbt/adapters/databricks/parse_model.py

Lines changed: 2 additions & 1 deletion

@@ -1,6 +1,7 @@
 import posixpath
 from typing import Optional
 
+from dbt.adapters.catalogs import CATALOG_INTEGRATION_MODEL_CONFIG_NAME
 from dbt.adapters.contracts.relation import RelationConfig
 from dbt.adapters.databricks import constants
 
@@ -12,7 +13,7 @@ def catalog_name(model: RelationConfig) -> Optional[str]:
     Luckily, all catalog attribution in the legacy behavior is on the model.
     This means we can take a default catalog and rely on the model overrides to supply the rest.
     """
-    return _get(model, "catalog") or constants.DEFAULT_CATALOG.name
+    return _get(model, CATALOG_INTEGRATION_MODEL_CONFIG_NAME) or constants.DEFAULT_CATALOG.name
 
 
 def file_format(model: RelationConfig) -> Optional[str]:

dbt/include/databricks/macros/adapters/persist_docs.sql

Lines changed: 16 additions & 1 deletion

@@ -13,7 +13,22 @@
 {% endmacro %}
 
 {% macro comment_on_column_sql(column_path, escaped_comment) %}
-  COMMENT ON COLUMN {{ column_path }} IS '{{ escaped_comment }}'
+  {%- if adapter.compare_dbr_version(16, 1) >= 0 -%}
+    COMMENT ON COLUMN {{ column_path }} IS '{{ escaped_comment }}'
+  {%- else -%}
+    {{ alter_table_change_column_comment_sql(column_path, escaped_comment) }}
+  {%- endif -%}
+{% endmacro %}
+
+{% macro alter_table_change_column_comment_sql(column_path, escaped_comment) %}
+  {%- set parts = column_path.split('.') -%}
+  {%- if parts|length >= 4 -%}
+    {%- set table_path = parts[:-1] | join('.') -%}
+    {%- set column_name = parts[-1] -%}
+    ALTER TABLE {{ table_path }} ALTER COLUMN {{ column_name }} COMMENT '{{ escaped_comment }}'
+  {%- else -%}
+    {{ exceptions.raise_compiler_error("Invalid column path: " ~ column_path ~ ". Expected format: database.schema.table.column") }}
+  {%- endif -%}
 {% endmacro %}
 
 {% macro databricks__persist_docs(relation, model, for_relation, for_columns) -%}
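A minimal Python sketch of the same branching (illustrative only; the shipped logic is the Jinja macro above): on DBR 16.1+ the `COMMENT ON COLUMN` form is used, otherwise the column path is split so the older `ALTER TABLE ... ALTER COLUMN ... COMMENT` syntax can be emitted.

```python
def comment_on_column_sql(column_path: str, escaped_comment: str, dbr_at_least_16_1: bool) -> str:
    # Mirrors comment_on_column_sql / alter_table_change_column_comment_sql above.
    if dbr_at_least_16_1:
        return f"COMMENT ON COLUMN {column_path} IS '{escaped_comment}'"
    parts = column_path.split(".")
    if len(parts) >= 4:
        table_path = ".".join(parts[:-1])
        column_name = parts[-1]
        return f"ALTER TABLE {table_path} ALTER COLUMN {column_name} COMMENT '{escaped_comment}'"
    raise ValueError(f"Invalid column path: {column_path}. Expected format: database.schema.table.column")


# Hypothetical catalog/schema/table/column names:
print(comment_on_column_sql("main.sales.orders.order_id", "Primary key", dbr_at_least_16_1=False))
# ALTER TABLE main.sales.orders ALTER COLUMN order_id COMMENT 'Primary key'
```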

dbt/include/databricks/macros/get_custom_name/get_custom_database.sql

Lines changed: 10 additions & 3 deletions

@@ -4,10 +4,17 @@
 #}
 
 {% macro databricks__generate_database_name(custom_database_name=none, node=none) -%}
-    {%- set default_database = target.database -%}
     {%- if custom_database_name is none -%}
-        {{ return(default_database) }}
+        {%- if node is not none -%}
+            {%- set catalog_relation = adapter.build_catalog_relation(node) -%}
+            {{ return(catalog_relation.catalog_name) }}
+        {%- elif 'config' in target -%}
+            {%- set catalog_relation = adapter.build_catalog_relation(target) -%}
+            {{ return(catalog_relation.catalog_name) }}
+        {%- else -%}
+            {{ return(target.database) }}
+        {%- endif -%}
     {%- else -%}
-        {{ return(custom_database_name) }}
+        {{ return(custom_database_name) }}
     {%- endif -%}
 {%- endmacro %}
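The resolution order the updated macro follows can be sketched in Python (illustrative names and a stubbed `build_catalog_relation`, not the macro itself): an explicit custom name wins, then the node's catalog relation, then the target's, then `target.database`.

```python
from typing import Any, Callable, Optional


def generate_database_name(
    custom_database_name: Optional[str],
    node: Optional[Any],
    target: dict,
    build_catalog_relation: Callable[[Any], Any],  # stand-in for adapter.build_catalog_relation
) -> Optional[str]:
    if custom_database_name is not None:
        return custom_database_name
    if node is not None:
        # Prefer the node's catalog integration when a node is available.
        return build_catalog_relation(node).catalog_name
    if "config" in target:
        # Otherwise resolve through the target's catalog integration, if it carries config.
        return build_catalog_relation(target).catalog_name
    return target.get("database")
```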

docs/databricks-workflows.md renamed to docs/databricks-jobs.md

Lines changed: 3 additions & 3 deletions

@@ -1,6 +1,6 @@
-# Running a dbt project as a job in Databricks Workflows
+# Running a dbt project as a job in Databricks Jobs
 
-Databricks Workflows is a highly-reliable, managed orchestrator that lets you author and schedule DAGs of notebooks, Python scripts as well as dbt projects as production jobs.
+Databricks Lakeflow Jobs is a highly-reliable, managed orchestrator that lets you author and schedule DAGs of notebooks, Python scripts as well as dbt projects as production jobs.
 
 In this guide, you will learn how to update an existing dbt project to run as a job, retrieve dbt run artifacts using the Jobs API and debug common issues.
 
@@ -18,7 +18,7 @@ When you run a dbt project as a Databricks Job, the dbt CLI runs on a single-nod
 - Install and configure the [Databricks CLI](https://docs.databricks.com/dev-tools/cli/index.html)
 - Install [jq](https://stedolan.github.io/jq/download/), a popular open source tool for parsing JSON from the command line
 
-Note: previously dbt tasks on Databricks Workflows could target jobs clusters for compute.
+Note: previously dbt tasks on Databricks Jobs could target jobs clusters for compute.
 That is [no longer supported](https://docs.databricks.com/en/workflows/jobs/how-to/use-dbt-in-workflows.html#advanced-run-dbt-with-a-custom-profile).
 Job clusters can only be used for running the dbt-cli.

pyproject.toml

Lines changed: 9 additions & 9 deletions

@@ -22,12 +22,12 @@ classifiers = [
     "Programming Language :: Python :: 3.12",
 ]
 dependencies = [
-    "databricks-sdk>=0.41, <0.48.0",
-    "databricks-sql-connector[pyarrow]>=4.0.0, <5.0.0",
-    "dbt-adapters>=1.16.0, <2.0",
-    "dbt-common>=1.24.0, <2.0",
-    "dbt-core>=1.10.1, <2.0",
-    "dbt-spark>=1.9.0, <2.0",
+    "databricks-sdk>=0.41, <0.64.0",
+    "databricks-sql-connector[pyarrow]>=4.0.0, <4.0.6",
+    "dbt-adapters>=1.16.0, <1.16.5",
+    "dbt-common>=1.24.0, <1.29.0",
+    "dbt-core>=1.10.1, <1.10.10",
+    "dbt-spark>=1.9.0, <1.9.4",
     "keyring>=23.13.0",
     "pydantic>=1.10.0",
 ]
@@ -64,12 +64,12 @@ check-sdist = [
 
 [tool.hatch.envs.default]
 pre-install-commands = [
-    "pip install git+https://github.com/dbt-labs/dbt-adapters.git#subdirectory=dbt-adapters",
+    "pip install git+https://github.com/dbt-labs/dbt-common.git",
+    "pip install git+https://github.com/dbt-labs/dbt-adapters.git@main#subdirectory=dbt-adapters",
     "pip install git+https://github.com/dbt-labs/dbt-adapters.git@main#subdirectory=dbt-tests-adapter",
+    "pip install git+https://github.com/dbt-labs/dbt-core.git@main#subdirectory=core",
 ]
 dependencies = [
-    "dbt_common @ git+https://github.com/dbt-labs/dbt-common.git",
-    "dbt-core @ git+https://github.com/dbt-labs/dbt-core.git@main#subdirectory=core",
     "dbt-spark @ git+https://github.com/dbt-labs/dbt-adapters.git#subdirectory=dbt-spark",
     "pytest",
     "pytest-xdist",
