
feat(rust): implement Connection metadata methods#232

Merged
vikrantpuppala merged 5 commits into adbc-drivers:main from
vikrantpuppala:feat/rust-connection-metadata
Feb 13, 2026

Conversation

@vikrantpuppala
Collaborator

Summary

  • Implement get_objects(), get_table_schema(), and get_table_types() ADBC Connection interface methods for the Databricks Rust driver
  • Add metadata/ module with SQL command builder (SHOW SQL), Databricks-to-Arrow type mapping, result parsing, and nested Arrow struct builder for GET_OBJECTS_SCHEMA
  • Add list_catalogs, list_schemas, list_tables, list_columns, list_table_types to DatabricksClient trait with SEA implementation
  • get_objects supports all depth levels (Catalogs, Schemas, Tables, Columns/All) with parallel per-catalog column fetching
  • get_table_schema resolves catalog automatically when not provided
  • 42 new unit tests covering SQL generation, type mapping, parsing, and Arrow builder validation
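The parallel per-catalog column fetching mentioned above can be sketched with scoped threads. This is a simplified stand-in: `fetch_columns` is hypothetical, and the real driver presumably fans out through its async client rather than raw OS threads.

```rust
use std::thread;

/// Hypothetical stand-in for the client's per-catalog column listing.
/// In the real driver this would issue a SHOW COLUMNS query per catalog.
fn fetch_columns(catalog: &str) -> Vec<String> {
    vec![format!("{catalog}.col_a"), format!("{catalog}.col_b")]
}

/// Fetch columns for each catalog in parallel, preserving catalog order
/// in the returned results.
fn fetch_all_columns(catalogs: &[&str]) -> Vec<Vec<String>> {
    thread::scope(|s| {
        let handles: Vec<_> = catalogs
            .iter()
            .map(|cat| s.spawn(move || fetch_columns(cat)))
            .collect();
        // Joining in spawn order keeps results aligned with `catalogs`.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let results = fetch_all_columns(&["samples", "main"]);
    assert_eq!(results[0][0], "samples.col_a");
    assert_eq!(results[1][1], "main.col_b");
}
```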

Design

Based on rust/spec/connection-metadata-design.md. Key decisions:

  • Single-query-per-depth strategy (no fan-out) matching JDBC driver behavior
  • No metadata caching (per ADBC spec and JDBC pattern)
  • Client-side pattern filtering using LIKE-style wildcards
  • Bottom-up Arrow array construction for deeply nested GET_OBJECTS_SCHEMA
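The client-side pattern filtering decision can be illustrated with a minimal LIKE-style matcher, where `%` matches any run of characters and `_` matches exactly one. This is a sketch of the general technique only; the driver's actual filter may also handle escape sequences.

```rust
/// Match `input` against a SQL LIKE-style pattern: `%` matches zero or
/// more characters, `_` matches exactly one. (Illustrative sketch; no
/// escape handling.)
fn like_match(pattern: &str, input: &str) -> bool {
    fn go(p: &[char], s: &[char]) -> bool {
        match p.split_first() {
            None => s.is_empty(),
            // `%` consumes zero or more characters of the input.
            Some(('%', rest)) => (0..=s.len()).any(|i| go(rest, &s[i..])),
            // `_` consumes exactly one character.
            Some(('_', rest)) => !s.is_empty() && go(rest, &s[1..]),
            // Any other pattern character must match literally.
            Some((c, rest)) => s.first() == Some(c) && go(rest, &s[1..]),
        }
    }
    let p: Vec<char> = pattern.chars().collect();
    let s: Vec<char> = input.chars().collect();
    go(&p, &s)
}

fn main() {
    assert!(like_match("tpc%", "tpch"));
    assert!(like_match("line____", "lineitem"));
    assert!(!like_match("tpc%", "samples"));
}
```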

Test plan

  • All 104 unit tests pass (cargo test)
  • cargo clippy -- -D warnings passes clean
  • cargo fmt applied
  • Integration testing with live Databricks endpoint (requires DATABRICKS_HOST/DATABRICKS_TOKEN)

🤖 Generated with Claude Code

…able_schema, get_table_types)

Implement the three ADBC Connection metadata interface methods for the
Databricks driver, enabling catalog/schema/table/column introspection.

Key changes:
- Add metadata module with SQL builder, type mapping, result parsing,
  and Arrow nested struct builder for GET_OBJECTS_SCHEMA
- Add list_catalogs, list_schemas, list_tables, list_columns, and
  list_table_types to DatabricksClient trait with SEA implementation
  using SHOW SQL commands
- Implement get_objects at all depth levels (Catalogs, Schemas, Tables,
  Columns/All) with parallel column fetching per catalog
- Implement get_table_schema with automatic catalog discovery fallback
- Implement get_table_types returning static table type list
- Add 42 unit tests covering SQL generation, type mapping, parsing,
  and Arrow builder output validation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@lidavidm lidavidm left a comment


Again, driverbase has a utility for building the GetObjects response: https://github.com/adbc-drivers/driverbase-rs/blob/main/driverbase/src/get_objects.rs

Can we clarify why this doesn't work?

Add metadata_test example exercising get_info, get_table_types,
get_objects (at all depth levels), and get_table_schema including
auto catalog resolution.

Add debug! logging to SeaClient metadata methods (shows SQL sent)
and Connection metadata methods (shows depth, filter params, and
result counts for troubleshooting).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@vikrantpuppala
Collaborator Author

@lidavidm I tried to highlight some of the reasons in the design spec, but please refer to these points specifically: https://github.com/adbc-drivers/databricks/pull/232/changes#diff-cc06c0b2052d8b8c72c56a77fe0195e06e71d839604614e9296ca4600c670d2fR162-R163

@lidavidm lidavidm dismissed their stale review February 9, 2026 09:04

Out of date

… metadata tests

- parse_schemas now accepts fallback_catalog parameter to handle SHOW
  SCHEMAS IN `catalog` returning only databaseName (no catalog column),
  matching JDBC driver behavior
- Fix get_info to use u32::from(&InfoCode) instead of as u32 for correct
  ADBC constant values (100/101 vs enum discriminants 7/8)
- Rewrite metadata_test example to use stable samples.tpch dataset with
  deterministic assertions on table names, column names, and ordinal order
- Update connection-metadata-design.md with schema variance documentation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
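The fallback-catalog change above can be sketched as follows. The row type here is hypothetical (the real parser presumably works on Arrow record batches): when SHOW SCHEMAS IN `catalog` returns only a databaseName column, the catalog the caller asked about is substituted.

```rust
/// Hypothetical parsed row from SHOW SCHEMAS; the catalog column may be
/// absent, matching the behavior described in the commit message.
struct SchemaRow {
    catalog: Option<String>,
    database_name: String,
}

/// Pair each schema with its catalog, falling back to the catalog the
/// query was issued against when the result omits a catalog column.
fn parse_schemas(rows: Vec<SchemaRow>, fallback_catalog: &str) -> Vec<(String, String)> {
    rows.into_iter()
        .map(|r| {
            let catalog = r.catalog.unwrap_or_else(|| fallback_catalog.to_string());
            (catalog, r.database_name)
        })
        .collect()
}

fn main() {
    let rows = vec![SchemaRow { catalog: None, database_name: "tpch".into() }];
    let parsed = parse_schemas(rows, "samples");
    assert_eq!(parsed, vec![("samples".to_string(), "tpch".to_string())]);
}
```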
println!("\n--- Test 6: get_objects(All) - samples.tpch.lineitem ---");
test_get_objects_all(&conn);

println!("\n--- Test 7: get_table_schema - samples.tpch.lineitem ---");
Collaborator


Doesn't this have a columns test?

Collaborator Author


Test 6 (test_get_objects_all) uses ObjectDepth::All which is the depth that includes columns — it validates all 16 lineitem column names and ordinals. All is the deepest level in the ADBC spec and covers columns.

.get_objects(
ObjectDepth::Tables,
Some("samples"),
Some("tpch"),
Collaborator


Does it support patterns in names?

Collaborator Author


Added pattern tests (tests 9-11) that pass ADBC-style % wildcards through get_objects:

  • Test 9: schema='tpc%' → matches tpch, tpcds_sf1, tpcds_sf1000
  • Test 10: table='line%' → matches lineitem
  • Test 11: catalog='sam%' → matches samples (client-side filtered)

The patterns are converted from JDBC/ADBC format (%/_) to Hive format (*/.) before being sent to the server — see the sql.rs fix below.

fn list_table_types(&self) -> Vec<String> {
vec![
"SYSTEM TABLE".to_string(),
"TABLE".to_string(),
Collaborator


In the test we have not included metric views.

Collaborator Author


Added test_list_table_types unit test in sea.rs that verifies all four types including METRIC_VIEW.
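For illustration, the static list might look like the sketch below. "SYSTEM TABLE", "TABLE", and "METRIC_VIEW" are named in the snippet and discussion above; "VIEW" is an assumption for the fourth type, which the diff does not show.

```rust
/// Static table-type list returned by get_table_types. "VIEW" is an
/// assumed entry; the other three are confirmed in the PR discussion.
fn list_table_types() -> Vec<String> {
    vec![
        "SYSTEM TABLE".to_string(),
        "TABLE".to_string(),
        "VIEW".to_string(),
        "METRIC_VIEW".to_string(),
    ]
}

fn main() {
    let types = list_table_types();
    assert_eq!(types.len(), 4);
    assert!(types.contains(&"METRIC_VIEW".to_string()));
}
```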


/// Escape pattern for use in LIKE clause.
/// ADBC/JDBC uses % for multi-char wildcard and _ for single char,
/// which match SQL LIKE syntax. We only need to escape single quotes.
Collaborator


The problem is that we follow Hive format for SHOW commands, which uses . and *.

Collaborator Author


Fixed. Renamed escape_like_pattern to jdbc_pattern_to_hive, which converts ADBC/JDBC patterns to Hive format, matching JDBC's WildcardUtil.jdbcPatternToHive:

  • % → * (multi-char wildcard)
  • _ → . (single-char wildcard)
  • \% → literal %, \_ → literal _, \\ → literal \

Verified against live server — % returned 0 results, * works correctly.
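The conversion rules described here can be sketched as a single pass over the pattern. This is an illustrative reconstruction, not the driver's actual jdbc_pattern_to_hive, which may differ in detail (for example, in how it treats literal * or . in the input).

```rust
/// Convert an ADBC/JDBC LIKE pattern to Hive SHOW-command wildcard
/// syntax: `%` -> `*`, `_` -> `.`, while `\%`, `\_`, and `\\` become
/// the literal `%`, `_`, and `\`.
fn jdbc_pattern_to_hive(pattern: &str) -> String {
    let mut out = String::with_capacity(pattern.len());
    let mut chars = pattern.chars();
    while let Some(c) = chars.next() {
        match c {
            '%' => out.push('*'),
            '_' => out.push('.'),
            '\\' => match chars.next() {
                // An escaped wildcard or backslash becomes the literal char.
                Some(esc @ ('%' | '_' | '\\')) => out.push(esc),
                // Unknown escapes are passed through unchanged.
                Some(other) => {
                    out.push('\\');
                    out.push(other);
                }
                None => out.push('\\'),
            },
            other => out.push(other),
        }
    }
    out
}

fn main() {
    assert_eq!(jdbc_pattern_to_hive("tpc%"), "tpc*");
    assert_eq!(jdbc_pattern_to_hive("line_tem"), "line.tem");
    assert_eq!(jdbc_pattern_to_hive(r"100\%"), "100%");
}
```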

… address PR feedback

- Rename escape_like_pattern to jdbc_pattern_to_hive matching JDBC WildcardUtil:
  % → *, _ → ., with proper escape handling for \%, \_, \\
- Add pattern wildcard integration tests (schemas, tables, catalogs)
- Add list_table_types unit test covering METRIC_VIEW
- Add license header to connection-metadata-design.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@vikrantpuppala vikrantpuppala added this pull request to the merge queue Feb 13, 2026
Merged via the queue into adbc-drivers:main with commit 2d1f9ca Feb 13, 2026
10 of 17 checks passed
@vikrantpuppala vikrantpuppala deleted the feat/rust-connection-metadata branch February 13, 2026 15:49