
Commit 2f7070d (1 parent 78bc2e1): Add 480 release notes
File tree: 2 files changed, +151 -0 lines changed

2 files changed

+151
-0
lines changed

docs/src/main/sphinx/release.md (1 addition & 0 deletions)

````diff
@@ -6,6 +6,7 @@
 ```{toctree}
 :maxdepth: 1

+release/release-480
 release/release-479
 release/release-478
 release/release-477
````
New file (150 additions & 0 deletions):
# Release 480 (7 Jan 2026)

## General

* {{breaking}} Remove the `enable-large-dynamic-filters` configuration property and the
  corresponding `enable_large_dynamic_filters` system session property. Large dynamic
  filters are used by default. ({issue}`27637`)
* {{breaking}} Remove the `dynamic-filtering.small*` configuration properties. ({issue}`27637`)
* {{breaking}} Remove the `dynamic-filtering.large-broadcast*` configuration properties. ({issue}`27637`)
* Extend experimental performance improvements for remote data exchanges to newer CPU
  architectures. ({issue}`27586`)
* Enable experimental performance improvements for remote data exchanges on Graviton 4
  CPUs. ({issue}`27586`)
* Improve performance of queries with data exchanges or aggregations. ({issue}`27657`)
* Fix double rounding of sub-microsecond precision values in `localtimestamp`. ({issue}`27806`)
* Fix `localtimestamp` failure for precisions 7 and 8. ({issue}`27807`)

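The double-rounding fix can be illustrated outside of Trino: rounding a nanosecond-precision fraction to an intermediate precision first, and then to microseconds, can give a different answer than rounding once. A minimal Python sketch of the effect (illustrative only, not Trino's implementation):

```python
from decimal import Decimal, ROUND_HALF_UP

def rnd(value: Decimal, digits: int) -> Decimal:
    # Round a fractional-seconds value to `digits` decimal places, half up.
    return value.quantize(Decimal(1).scaleb(-digits), rounding=ROUND_HALF_UP)

nanos = Decimal("0.123456499")  # nine fractional digits (nanosecond precision)

direct = rnd(nanos, 6)          # single rounding straight to microseconds
double = rnd(rnd(nanos, 7), 6)  # intermediate rounding first, then microseconds

print(direct, double)  # 0.123456 0.123457
```

The intermediate step turns the `499` tail into a `5`, which then rounds up; rounding once keeps the correct `0.123456`.
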
## Web UI

* Fix numeric ordering of stages in the UI. ({issue}`27655`)

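Ordering bugs of this kind typically come from sorting dotted stage identifiers as strings rather than numbers. A small Python sketch of the difference (the identifiers are illustrative):

```python
def stage_key(stage_id: str) -> tuple[int, ...]:
    # Split a dotted stage id into integers so that "10" sorts after "2".
    return tuple(int(part) for part in stage_id.split("."))

stages = ["1.0", "1.10", "1.2", "1.1"]
print(sorted(stages))                 # lexicographic: ['1.0', '1.1', '1.10', '1.2']
print(sorted(stages, key=stage_key))  # numeric:       ['1.0', '1.1', '1.2', '1.10']
```
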
## ClickHouse connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

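This fix, which recurs for the other JDBC connectors below, is about rolling back only what the failed operation created. An abstract, hedged sketch of the idea in Python (the catalog and names are hypothetical, not connector code):

```python
class Catalog:
    """Toy table catalog standing in for a remote database."""

    def __init__(self) -> None:
        self.tables: set[str] = set()

    def create(self, name: str) -> None:
        self.tables.add(name)

    def drop(self, name: str) -> None:
        self.tables.discard(name)

def create_table_as(catalog: Catalog, table: str, load_rows) -> None:
    # Create the target table, then load the query results into it.
    catalog.create(table)
    try:
        load_rows()
    except Exception:
        # On failure, clean up only the table this CTAS created,
        # never pre-existing tables.
        catalog.drop(table)
        raise

catalog = Catalog()
catalog.create("orders")  # pre-existing table that must survive

def failing_load() -> None:
    raise RuntimeError("query failed")

try:
    create_table_as(catalog, "orders_copy", failing_load)
except RuntimeError:
    pass

print(catalog.tables)  # {'orders'}: only the half-created table was dropped
```
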
## Delta Lake connector

* {{breaking}} Remove the live files table metadata cache. The
  `metadata.live-files.cache-size`, `metadata.live-files.cache-ttl`, and
  `checkpoint-filtering.enabled` configuration properties are now defunct and must be
  removed from server configurations. ({issue}`27618`)
* {{breaking}} Remove the `hive.write-validation-threads` configuration property. ({issue}`27729`)
* {{breaking}} Remove the `parquet.optimized-writer.validation-percentage` configuration
  property; use `parquet.writer.validation-percentage` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.block-size` configuration property; use
  `parquet.writer.block-size` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.page-size` configuration property; use
  `parquet.writer.page-size` instead. ({issue}`27729`)
* Improve effectiveness of Bloom filters written to Parquet files for high-cardinality
  columns. ({issue}`27656`)
* Do not require the `PutObjectTagging` AWS S3 permission when writing to Delta Lake
  tables on S3. ({issue}`27701`)

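The Bloom filter item is at heart about sizing: a filter with too few bits for a column's distinct-value count degrades to a near-useless false-positive rate. A toy Python Bloom filter (not Parquet's split-block design, which uses different hashing) makes the effect visible:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hashed bit positions per key in an m-bit array."""

    def __init__(self, m_bits: int, k_hashes: int) -> None:
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)  # one byte per bit, for clarity

    def _positions(self, key: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key: str) -> None:
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(key))

def false_positive_rate(n_keys: int, m_bits: int = 4096, k: int = 4) -> float:
    bf = BloomFilter(m_bits, k)
    for i in range(n_keys):
        bf.add(f"key-{i}")
    # Probe with keys that were never inserted; any hit is a false positive.
    probes = 2000
    hits = sum(bf.might_contain(f"absent-{j}") for j in range(probes))
    return hits / probes

low = false_positive_rate(200)    # comfortably under capacity
high = false_positive_rate(5000)  # far too many distinct values for 4096 bits
print(low, high)
```

With 200 keys the false-positive rate stays near 0.1%; with 5000 keys nearly every bit is set and almost every probe is a (useless) hit.
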
## DuckDB connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## Hive connector

* Add support for reading Parquet files with timestamps stored in nanosecond units as a
  `timestamp with time zone` column. ({issue}`27861`)
* {{breaking}} Remove the `hive.write-validation-threads` configuration property. ({issue}`27729`)
* {{breaking}} Remove the `parquet.optimized-writer.validation-percentage` configuration
  property; use `parquet.writer.validation-percentage` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.block-size` configuration property; use
  `parquet.writer.block-size` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.page-size` configuration property; use
  `parquet.writer.page-size` instead. ({issue}`27729`)
* Improve effectiveness of Bloom filters written to Parquet files for high-cardinality
  columns. ({issue}`27656`)

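Mapping a nanosecond-unit timestamp onto a coarser-precision type amounts to splitting the epoch value into whole seconds, microseconds, and a sub-microsecond remainder. A small Python sketch of that conversion for non-negative epoch values (illustrative only, not the connector's code path):

```python
from datetime import datetime, timezone

def nanos_to_utc(epoch_nanos: int) -> tuple[datetime, int]:
    """Convert non-negative epoch nanoseconds to a microsecond-precision UTC
    datetime, returning the leftover sub-microsecond nanos separately."""
    seconds, sub_nanos = divmod(epoch_nanos, 1_000_000_000)
    micros, leftover = divmod(sub_nanos, 1_000)
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=micros)
    return dt, leftover

dt, rest = nanos_to_utc(1_700_000_000_123_456_789)
print(dt, rest)  # 2023-11-14 22:13:20.123456+00:00 789
```
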
## Hudi connector

* {{breaking}} Remove the `hive.write-validation-threads` configuration property. ({issue}`27729`)
* {{breaking}} Remove the `parquet.optimized-writer.validation-percentage` configuration
  property; use `parquet.writer.validation-percentage` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.block-size` configuration property; use
  `parquet.writer.block-size` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.page-size` configuration property; use
  `parquet.writer.page-size` instead. ({issue}`27729`)
* Improve effectiveness of Bloom filters written to Parquet files for high-cardinality
  columns. ({issue}`27656`)

## Iceberg connector

* Add support for the BigLake metastore in the Iceberg REST catalog. ({issue}`26219`)
* Add `delete_after_commit_enabled` and `max_previous_versions` table properties. ({issue}`14128`)
* {{breaking}} Remove the `hive.write-validation-threads` configuration property. ({issue}`27729`)
* {{breaking}} Remove the `parquet.optimized-writer.validation-percentage` configuration
  property; use `parquet.writer.validation-percentage` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.block-size` configuration property; use
  `parquet.writer.block-size` instead. ({issue}`27729`)
* {{breaking}} Remove the `hive.parquet.writer.page-size` configuration property; use
  `parquet.writer.page-size` instead. ({issue}`27729`)
* Fix failure when reading the `$files` metadata table for tables with partition evolution
  that uses `truncate` or `bucket` on the same column. ({issue}`26109`)
* Optimize materialized view freshness checks based on the grace period. ({issue}`27608`)

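The grace-period optimization rests on a simple decision: a materialized view whose last refresh is within the grace period can be treated as usable without consulting the base tables. A hedged Python sketch of that check (the names are hypothetical, not Trino's SPI):

```python
from datetime import datetime, timedelta, timezone

def within_grace_period(last_refresh: datetime,
                        grace_period: timedelta,
                        now: datetime) -> bool:
    # Recent enough to serve directly; otherwise the base tables
    # must be consulted to determine staleness.
    return now - last_refresh <= grace_period

now = datetime(2026, 1, 7, 12, 0, tzinfo=timezone.utc)
recent = now - timedelta(minutes=10)
old = now - timedelta(hours=5)

print(within_grace_period(recent, timedelta(hours=1), now))  # True
print(within_grace_period(old, timedelta(hours=1), now))     # False
```
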
## Ignite connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## Lakehouse

* Fix failure when reading the Iceberg `$files` table. ({issue}`26751`)

## MariaDB connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## MySQL connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## Oracle connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## PostgreSQL connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## Redshift connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## SingleStore connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## Snowflake connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## SQL Server connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## Vertica connector

* Fix failure when creating a table caused by incorrect cleanup of tables after a failed
  CTAS operation. ({issue}`27702`)

## SPI

* Remove support for `TypeSignatureParameter`. Use `TypeParameter` instead. ({issue}`27574`)
* Remove support for `ParameterKind`. Use `TypeParameter.Type`, `TypeParameter.Numeric`, or
  `TypeParameter.Variable` instead. ({issue}`27574`)
* Remove support for `NamedType`, `NamedTypeSignature`, and `NamedTypeParameter`. Use
  `TypeParameter.Type` instead. ({issue}`27574`)
* Deprecate `MaterializedViewFreshness#getLastFreshTime`. ({issue}`27803`)
