Commit fd2e843

fix: roll back h2 upgrade and ineffective reset workarounds (#19264)
* Revert "chore(query): bump ast to 0.2.4 and optimize gRPC configs (#19253)". This reverts commit 84f7338.
* Revert "fix(test): force HTTP/1.1 for sqllogictests client (#19243)". This reverts commit 5451e1a.
* Revert "fix(ci): increase pool_max_idle_per_host (#19241)". This reverts commit f885633.
* Revert "feat(iceberg): bump iceberg-rust to v0.8.0 and add write support (#19200)". This reverts commit 05caa2e.
* fix: update geometry golden file after revert. Update the st_geomfromgeohash polygon coordinate order in the golden file to match the reverted behavior. The revert of PR #19200 changed the polygon ring starting point back to the original order, but the unit-test golden file was not updated; this commit fixes the geometry.txt golden file to match the actual output after the revert. Related to the CI failure in the test_unit job.
* fix: update golden files for geometry and cast tests.
  - geometry.txt: fix the Row 1 (u4pruydqqvj0) polygon coordinate order
  - geometry.txt: fix the internal Output hex representation
  - cast.txt: update the date-parsing error message to match the new jiff library error format. The message changed from 'failed to parse year in date: failed to parse four digit integer as year' to 'failed to parse year in date "X": failed to parse "Y" as year (a four digit integer)'.
1 parent: 53a911d · commit: fd2e843
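The golden-file fixes described above follow the standard compare-or-update pattern: run the test, diff the actual output against the stored expectation, and rewrite the expectation when the behavior change is intentional. A minimal sketch of that pattern (the `check_golden` helper and its paths are illustrative, not Databend's actual test harness):

```python
from pathlib import Path


def check_golden(actual: str, golden_path: Path, update: bool = False) -> bool:
    """Compare actual output against a stored golden file.

    With update=True the golden file is rewritten instead of compared,
    which is what this commit does by hand for geometry.txt and cast.txt
    after the reverts changed the expected output.
    """
    if update or not golden_path.exists():
        golden_path.parent.mkdir(parents=True, exist_ok=True)
        golden_path.write_text(actual)
        return True
    return golden_path.read_text() == actual
```

A mismatch (the function returning False) is exactly the test_unit failure mode the commit message mentions: the code's output changed after the revert while the stored golden text did not.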

File tree: 49 files changed (+2291 additions, −3208 deletions)


.github/actions/test_sqllogic_iceberg_tpch/action.yml

Lines changed: 3 additions & 7 deletions
```diff
@@ -16,9 +16,6 @@ runs:
       with:
         artifacts: sqllogictests,meta,query
 
-    - name: Setup uv
-      uses: astral-sh/setup-uv@v7
-
     - name: Iceberg Setup for (ubuntu-latest only)
       shell: bash
       run: |
@@ -32,12 +29,11 @@ runs:
           fi
           tar -zxf ${data_dir}/tpch.tar.gz -C $data_dir
-          cwd=$(pwd)
-          cd tests/sqllogictests/scripts/ && uv sync && cd $cwd
+          uv sync
           echo "Running prepare_iceberg_tpch_data.py..."
-          uv run tests/sqllogictests/scripts/prepare_iceberg_tpch_data.py
+          uv run python tests/sqllogictests/scripts/prepare_iceberg_tpch_data.py
           echo "Running prepare_iceberg_test_data.py..."
-          uv run tests/sqllogictests/scripts/prepare_iceberg_test_data.py
+          uv run python tests/sqllogictests/scripts/prepare_iceberg_test_data.py
 
     - name: Run sqllogic Tests with Standalone lib
```
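The setup logic after this change (sync the uv environment once from the repo root, with no cd dance, then run each prepare script through the project interpreter with an explicit `python`) can be sketched in Python. The `prepare_iceberg_data` function and its injectable `run` parameter are illustrative, not part of the repo:

```python
import subprocess
import tarfile
from pathlib import Path


def prepare_iceberg_data(data_dir: Path, run=subprocess.run) -> None:
    """Sketch of the workflow step after this commit: extract the TPCH
    data archive, sync the uv project environment from the repo root,
    then invoke each prepare script via `uv run python ...`."""
    with tarfile.open(data_dir / "tpch.tar.gz", "r:gz") as tar:
        tar.extractall(data_dir)
    run(["uv", "sync"], check=True)
    for script in ("prepare_iceberg_tpch_data.py", "prepare_iceberg_test_data.py"):
        print(f"Running {script}...")
        run(["uv", "run", "python", f"tests/sqllogictests/scripts/{script}"],
            check=True)
```

Passing the script to an explicit `python` makes uv treat it as a plain file run under the project environment, rather than resolving the script path itself.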

.github/workflows/reuse.sqllogic.yml

Lines changed: 24 additions & 23 deletions
```diff
@@ -228,29 +228,30 @@ jobs:
       with:
         name: test-sqllogic-cluster-minio-${{ matrix.dirs }}-${{ matrix.handler }}
 
-  standalone_iceberg_tpch:
-    runs-on:
-      - self-hosted
-      - ${{ inputs.runner_arch }}
-      - Linux
-      - 4c
-      - "${{ inputs.runner_provider }}"
-    steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-java@v4
-        with:
-          distribution: "temurin"
-          java-version: "17"
-      - uses: ./.github/actions/test_sqllogic_iceberg_tpch
-        timeout-minutes: 15
-        with:
-          dirs: tpch_iceberg
-          handlers: http,hybrid
-      - name: Upload failure
-        if: failure()
-        uses: ./.github/actions/artifact_failure
-        with:
-          name: test-sqllogic-standalone-iceberg-tpch
+  # TODO: tmp disable since iceberg image not running on arm64
+  # standalone_iceberg_tpch:
+  #   runs-on:
+  #     - self-hosted
+  #     - ${{ inputs.runner_arch }}
+  #     - Linux
+  #     - 4c
+  #     - "${{ inputs.runner_provider }}"
+  #   steps:
+  #     - uses: actions/checkout@v4
+  #     - uses: actions/setup-java@v4
+  #       with:
+  #         distribution: "temurin"
+  #         java-version: "17"
+  #     - uses: ./.github/actions/test_sqllogic_iceberg_tpch
+  #       timeout-minutes: 15
+  #       with:
+  #         dirs: tpch_iceberg
+  #         handlers: http,hybrid
+  #     - name: Upload failure
+  #       if: failure()
+  #       uses: ./.github/actions/artifact_failure
+  #       with:
+  #         name: test-sqllogic-standalone-iceberg-tpch
 
   cluster:
     runs-on:
```

AGENTS.md

Lines changed: 0 additions & 20 deletions
This file was deleted.

CLAUDE.md

Lines changed: 164 additions & 0 deletions
New file (@@ -0,0 +1,164 @@):

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

Databend is an open-source, Rust-based cloud data warehouse with near 100% Snowflake compatibility. It features MPP architecture, S3-native storage, and supports structured, semi-structured, and unstructured data processing with vector embeddings and AI capabilities.

## Architecture

The codebase follows a modular workspace structure with clear separation between:

- **Meta Service** (`src/meta/`): Distributed metadata management using Raft consensus
- **Query Service** (`src/query/`): SQL processing engine with vectorized execution
- **Common Libraries** (`src/common/`): Shared utilities for storage, networking, authentication
- **Binaries** (`src/binaries/`): Main executables for databend-query, databend-meta, databend-metactl

Key architectural patterns:

- Compute-storage separation with S3-native design
- Async Rust throughout with tokio runtime
- Arrow-based columnar processing
- Plugin architecture for storage backends and file formats

## Development Commands

### Building

```bash
# Debug build (fast compilation)
make build
# or: cargo build --bin=databend-query --bin=databend-meta --bin=databend-metactl

# Release build (optimized)
make build-release
# or: bash ./scripts/build/build-release.sh

# Native optimized build
make build-native
```

### Testing

```bash
# Unit tests
make unit-test

# Integration tests
make stateless-test    # Stateless integration tests
make sqllogic-test     # SQL logic tests
make metactl-test      # Meta control tests
make meta-kvapi-test   # Meta KV API tests

# Cluster tests
make stateless-cluster-test
make stateless-cluster-test-tls

# All tests
make test
```

### Development Environment

```bash
# Setup development environment (installs tools)
make setup

# Run debug build locally
make run-debug

# Run in management mode
make run-debug-management

# Stop all services
make kill
```

### Code Quality

```bash
# Format code
make fmt

# Lint and format all code
make lint
# Includes: cargo clippy, cargo machete, typos, taplo fmt, ruff format, shfmt

# YAML linting
make lint-yaml

# License checking
make check-license
```

### Cleanup

```bash
make clean  # Clean build artifacts and test data
```

## Core Components

### Meta Service (`src/meta/`)
- **raft-store**: Raft-based distributed consensus
- **kvapi**: Key-value API layer for metadata operations
- **api**: High-level metadata APIs (schema, table, user management)
- **client**: gRPC client for meta service communication
- **protos**: Protocol buffer definitions

### Query Service (`src/query/`)
- **sql**: SQL parser and planner using recursive descent parser
- **expression**: Vectorized expression evaluation engine
- **functions**: Scalar and aggregate function implementations
- **pipeline**: Query execution pipeline (sources → transforms → sinks)
- **storages**: Storage engine integrations (Fuse, Iceberg, Delta, Hive)
- **catalog**: Database/table catalog management

### Storage Systems
- **Fuse** (`src/query/storages/fuse/`): Native columnar storage format
- **External**: Iceberg, Delta Lake, Hive, Parquet integrations
- **Stage** (`src/query/storages/stage/`): External stage management

### Common Libraries (`src/common/`)
- **storage**: S3/cloud storage abstractions using OpenDAL
- **hashtable**: Optimized hash tables for joins and aggregations
- **expression**: Column-oriented data processing
- **exception**: Error handling and backtraces
- **metrics**: Prometheus metrics collection

## Testing Architecture

- **Unit tests**: Located in `tests/` subdirectories within each crate
- **Stateless tests**: `tests/suites/0_stateless/` - SQL script based tests
- **Stateful tests**: `tests/suites/1_stateful/` - Long-running integration tests
- **SQL Logic Tests**: `tests/sqllogictests/` - SQL compatibility verification
- **Enterprise tests**: `tests/suites/5_ee/` - Enterprise feature tests

## Configuration

- Default configs: `distro/configs/`
- Test configs: `scripts/ci/deploy/config/`
- Service configuration uses TOML format
- Environment-based configuration supported

## Performance and Profiling

```bash
# Performance profiling
make profile

# Memory profiling with jemalloc
# Built-in profiling endpoints available in debug builds
```

## Development Tips

- Use `make setup` to install all required development tools
- Rust toolchain version is pinned in `rust-toolchain.toml`
- The project uses custom memory allocator (jemalloc) for performance
- Vector/SIMD optimizations are extensive - check CPU feature compatibility
- S3/cloud storage tests require proper credentials configuration
- Always run `make lint` before committing to catch formatting issues

## Binary Outputs

After building, key binaries are available in `target/debug/` or `target/release/`:
- `databend-query`: Main query engine
- `databend-meta`: Metadata service
- `databend-metactl`: Meta service administration tool
- `databend-sqllogictests`: SQL logic test runner
