diff --git a/docs/en/guides/00-products/index.md b/docs/en/guides/00-products/index.md
index 7e7a1ef74b..6f0b301610 100644
--- a/docs/en/guides/00-products/index.md
+++ b/docs/en/guides/00-products/index.md
@@ -47,4 +47,4 @@ Built-in Datastore, Vector Database, Analytics, Search, and Geospatial engines c
**Performance & Scale**
- **[Performance Optimization](/guides/performance)**: Enhance query performance with various strategies.
- **[Benchmarks](/guides/benchmark)**: Compare Databend performance with other data warehouses.
-- **[Data Lakehouse](/guides/access-data-lake)**: Seamless integration with Hive, Iceberg, and Delta Lake.
+- **[Data Lakehouse](/sql/sql-reference/table-engines)**: Seamless integration with Hive, Iceberg, and Delta Lake.
diff --git a/docs/en/guides/51-access-data-lake/01-hive.md b/docs/en/guides/51-access-data-lake/01-hive.md
deleted file mode 100644
index 03649c8a1d..0000000000
--- a/docs/en/guides/51-access-data-lake/01-hive.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: Apache Hive
----
-import FunctionDescription from '@site/src/components/FunctionDescription';
-
-
-
-Databend supports the integration of an [Apache Hive](https://hive.apache.org/) catalog, enhancing its compatibility and versatility for data management and analytics. This extends Databend's capabilities by seamlessly incorporating the powerful metadata and storage management capabilities of Apache Hive into the platform.
-
-## Datatype Mapping
-
-This table maps data types between Apache Hive and Databend. Please note that Databend does not currently support Hive data types that are not listed in the table.
-
-| Apache Hive | Databend |
-| ------------------- | -------------------- |
-| BOOLEAN | [BOOLEAN](/sql/sql-reference/data-types/boolean) |
-| TINYINT | [TINYINT (INT8)](/sql/sql-reference/data-types/numeric#integer-data-types) |
-| SMALLINT | [SMALLINT (INT16)](/sql/sql-reference/data-types/numeric#integer-data-types) |
-| INT | [INT (INT32)](/sql/sql-reference/data-types/numeric#integer-data-types) |
-| BIGINT | [BIGINT (INT64)](/sql/sql-reference/data-types/numeric#integer-data-types) |
-| DATE | [DATE](/sql/sql-reference/data-types/datetime) |
-| TIMESTAMP | [TIMESTAMP](/sql/sql-reference/data-types/datetime) |
-| FLOAT | [FLOAT (FLOAT32)](/sql/sql-reference/data-types/numeric#floating-point-data-types) |
-| DOUBLE | [DOUBLE (FLOAT64)](/sql/sql-reference/data-types/numeric#floating-point-data-types) |
-| VARCHAR | [VARCHAR (STRING)](/sql/sql-reference/data-types/string) |
-| DECIMAL | [DECIMAL](/sql/sql-reference/data-types/decimal) |
-| ARRAY<TYPE> | [ARRAY](/sql/sql-reference/data-types/array), supports nesting |
-| MAP<KEYTYPE, VALUETYPE> | [MAP](/sql/sql-reference/data-types/map) |
-
-## Managing Catalogs
-
-Databend provides you the following commands to manage catalogs:
-
-- [CREATE CATALOG](#create-catalog)
-- [SHOW CREATE CATALOG](#show-create-catalog)
-- [SHOW CATALOGS](#show-catalogs)
-- [USE CATALOG](#use-catalog)
-
-### CREATE CATALOG
-
-Defines and establishes a new catalog in the Databend query engine.
-
-#### Syntax
-
-```sql
-CREATE CATALOG <catalog_name>
-TYPE = <catalog_type>
-CONNECTION = (
-    METASTORE_ADDRESS = '<hive_metastore_address>'
-    URL = '<external_storage_location>'
-    <connection_parameter> = '<connection_parameter_value>'
-    <connection_parameter> = '<connection_parameter_value>'
-    ...
-)
-```
-
-| Parameter | Required? | Description |
-|-----------------------|-----------|---------------------------------------------------------------------------------------------------------------------------|
-| TYPE | Yes | Type of the catalog: 'HIVE' for Hive catalog or 'ICEBERG' for Iceberg catalog. |
-| METASTORE_ADDRESS | No | Hive Metastore address. Required for Hive catalog only.|
-| URL | Yes | Location of the external storage linked to this catalog. This could be a bucket or a folder within a bucket. For example, 's3://databend-toronto/'. |
-| connection_parameter | Yes | Connection parameters to establish connections with external storage. The required parameters vary based on the specific storage service and authentication methods. Refer to [Connection Parameters](/sql/sql-reference/connect-parameters) for detailed information. |
-
-### SHOW CREATE CATALOG
-
-Returns the detailed configuration of a specified catalog, including its type and storage parameters.
-
-#### Syntax
-
-```sql
-SHOW CREATE CATALOG <catalog_name>;
-```
-
-### SHOW CATALOGS
-
-Shows all the created catalogs.
-
-#### Syntax
-
-```sql
-SHOW CATALOGS [LIKE '<pattern>']
-```
-
-### USE CATALOG
-
-Switches the current session to the specified catalog.
-
-#### Syntax
-
-```sql
-USE CATALOG <catalog_name>
-```
-
-## Usage Examples
-
-This example demonstrates the creation of a catalog configured to interact with the Hive Metastore and access data stored on Amazon S3, located at 's3://databend-toronto/'.
-
-```sql
-CREATE CATALOG hive_ctl
-TYPE = HIVE
-CONNECTION =(
- METASTORE_ADDRESS = '127.0.0.1:9083'
- URL = 's3://databend-toronto/'
- ACCESS_KEY_ID = '<your_key_id>'
- SECRET_ACCESS_KEY = '<your_secret_key>'
-);
-
-SHOW CREATE CATALOG hive_ctl;
-
-┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
-│ Catalog │ Type │ Option │
-├──────────┼────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
-│ hive_ctl │ hive │ METASTORE ADDRESS\n127.0.0.1:9083\nSTORAGE PARAMS\ns3 | bucket=databend-toronto,root=/,endpoint=https://s3.amazonaws.com │
-└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
-```
\ No newline at end of file
diff --git a/docs/en/guides/51-access-data-lake/02-iceberg.md b/docs/en/guides/51-access-data-lake/02-iceberg.md
deleted file mode 100644
index e5a8684c72..0000000000
--- a/docs/en/guides/51-access-data-lake/02-iceberg.md
+++ /dev/null
@@ -1,437 +0,0 @@
----
-title: Apache Iceberg™
----
-
-import FunctionDescription from '@site/src/components/FunctionDescription';
-
-
-
-Databend supports the integration of an [Apache Iceberg™](https://iceberg.apache.org/) catalog, enhancing its compatibility and versatility for data management and analytics. This extends Databend's capabilities by seamlessly incorporating the powerful metadata and storage management capabilities of Apache Iceberg™ into the platform.
-
-## Quick Start with Iceberg
-
-If you want to quickly try out Iceberg and experiment with table operations locally, a [Docker-based starter project](https://github.com/databendlabs/iceberg-quick-start) is available. This setup allows you to:
-
-- Run Spark with Iceberg support
-- Use a REST catalog (Iceberg REST Fixture)
-- Simulate an S3-compatible object store using MinIO
-- Load sample TPC-H data into Iceberg tables for query testing
-
-### Prerequisites
-
-Before you start, make sure Docker and Docker Compose are installed on your system.
-
-### Start Iceberg Environment
-
-```bash
-git clone https://github.com/databendlabs/iceberg-quick-start.git
-cd iceberg-quick-start
-docker compose up -d
-```
-
-This will start the following services:
-
-- `spark-iceberg`: Spark 3.4 with Iceberg
-- `rest`: Iceberg REST Catalog
-- `minio`: S3-compatible object store
-- `mc`: MinIO client (for setting up the bucket)
-
-```bash
-WARN[0000] /Users/eric/iceberg-quick-start/docker-compose.yml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
-[+] Running 5/5
- ✔ Network iceberg-quick-start_iceberg_net Created 0.0s
- ✔ Container iceberg-rest-test Started 0.4s
- ✔ Container minio Started 0.4s
- ✔ Container mc Started 0.6s
- ✔ Container spark-iceberg S... 0.7s
-```
-
-### Load TPC-H Data via Spark Shell
-
-Run the following command to generate and load sample TPC-H data into the Iceberg tables:
-
-```bash
-docker exec spark-iceberg bash /home/iceberg/load_tpch.sh
-```
-
-```bash
-Collecting duckdb
- Downloading duckdb-1.2.2-cp310-cp310-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl (18.7 MB)
- ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.7/18.7 MB 5.8 MB/s eta 0:00:00
-Requirement already satisfied: pyspark in /opt/spark/python (3.5.5)
-Collecting py4j==0.10.9.7
- Downloading py4j-0.10.9.7-py2.py3-none-any.whl (200 kB)
- ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 200.5/200.5 kB 5.9 MB/s eta 0:00:00
-Installing collected packages: py4j, duckdb
-Successfully installed duckdb-1.2.2 py4j-0.10.9.7
-
-[notice] A new release of pip is available: 23.0.1 -> 25.1.1
-[notice] To update, run: pip install --upgrade pip
-Setting default log level to "WARN".
-To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
-25/05/07 12:17:27 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-25/05/07 12:17:28 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
-[2025-05-07 12:17:18] [INFO] Starting TPC-H data generation and loading process
-[2025-05-07 12:17:18] [INFO] Configuration: Scale Factor=1, Data Dir=/home/iceberg/data/tpch_1
-[2025-05-07 12:17:18] [INFO] Generating TPC-H data with DuckDB (Scale Factor: 1)
-[2025-05-07 12:17:27] [INFO] Generated 8 Parquet files in /home/iceberg/data/tpch_1
-[2025-05-07 12:17:28] [INFO] Loading data into Iceberg catalog
-[2025-05-07 12:17:33] [INFO] Created Iceberg table: demo.tpch.part from part.parquet
-[2025-05-07 12:17:33] [INFO] Created Iceberg table: demo.tpch.region from region.parquet
-[2025-05-07 12:17:33] [INFO] Created Iceberg table: demo.tpch.supplier from supplier.parquet
-[2025-05-07 12:17:35] [INFO] Created Iceberg table: demo.tpch.orders from orders.parquet
-[2025-05-07 12:17:35] [INFO] Created Iceberg table: demo.tpch.nation from nation.parquet
-[2025-05-07 12:17:40] [INFO] Created Iceberg table: demo.tpch.lineitem from lineitem.parquet
-[2025-05-07 12:17:40] [INFO] Created Iceberg table: demo.tpch.partsupp from partsupp.parquet
-[2025-05-07 12:17:41] [INFO] Created Iceberg table: demo.tpch.customer from customer.parquet
-+---------+---------+-----------+
-|namespace|tableName|isTemporary|
-+---------+---------+-----------+
-| tpch| customer| false|
-| tpch| lineitem| false|
-| tpch| nation| false|
-| tpch| orders| false|
-| tpch| part| false|
-| tpch| partsupp| false|
-| tpch| region| false|
-| tpch| supplier| false|
-+---------+---------+-----------+
-
-[2025-05-07 12:17:42] [SUCCESS] TPCH data generation and loading completed successfully
-```
-
-### Query Data in Databend
-
-Once the TPC-H tables are loaded, you can query the data in Databend:
-
-1. Launch Databend in Docker:
-
-```bash
-docker network create iceberg_net
-```
-
-```bash
-docker run -d \
- --name databend \
- --network iceberg_net \
- -p 3307:3307 \
- -p 8000:8000 \
- -p 8124:8124 \
- -p 8900:8900 \
- datafuselabs/databend
-```
-
-2. Connect to Databend using BendSQL first, and then create an Iceberg catalog:
-
-```bash
-bendsql
-```
-
-```bash
-Welcome to BendSQL 0.24.1-f1f7de0(2024-12-04T12:31:18.526234000Z).
-Connecting to localhost:8000 as user root.
-Connected to Databend Query v1.2.725-8d073f6b7a(rust-1.88.0-nightly-2025-04-21T11:49:03.577976082Z)
-Loaded 1436 auto complete keywords from server.
-Started web server at 127.0.0.1:8080
-```
-
-```sql
-CREATE CATALOG iceberg TYPE = ICEBERG CONNECTION = (
- TYPE = 'rest'
- ADDRESS = 'http://host.docker.internal:8181'
- warehouse = 's3://warehouse/wh/'
- "s3.endpoint" = 'http://host.docker.internal:9000'
- "s3.access-key-id" = 'admin'
- "s3.secret-access-key" = 'password'
- "s3.region" = 'us-east-1'
-);
-```
-
-3. Use the newly created catalog:
-
-```sql
-USE CATALOG iceberg;
-```
-
-4. Show available databases:
-
-```sql
-SHOW DATABASES;
-```
-
-```sql
-╭──────────────────────╮
-│ databases_in_iceberg │
-│ String │
-├──────────────────────┤
-│ tpch │
-╰──────────────────────╯
-```
-
-5. Run a sample query to aggregate TPC-H data:
-
-```bash
-SELECT
- l_returnflag,
- l_linestatus,
- SUM(l_quantity) AS sum_qty,
- SUM(l_extendedprice) AS sum_base_price,
- SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
- SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
- AVG(l_quantity) AS avg_qty,
- AVG(l_extendedprice) AS avg_price,
- AVG(l_discount) AS avg_disc,
- COUNT(*) AS count_order
-FROM
- iceberg.tpch.lineitem
-GROUP BY
- l_returnflag,
- l_linestatus
-ORDER BY
- l_returnflag,
- l_linestatus;
-```
-
-```sql
-┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
-│ l_returnflag │ l_linestatus │ sum_qty │ sum_base_price │ sum_disc_price │ sum_charge │ avg_qty │ avg_price │ avg_disc │ count_order │
-│ Nullable(String) │ Nullable(String) │ Nullable(Decimal(38, 2)) │ Nullable(Decimal(38, 2)) │ Nullable(Decimal(38, 4)) │ Nullable(Decimal(38, 6)) │ Nullable(Decimal(38, 8)) │ Nullable(Decimal(38, 8)) │ Nullable(Decimal(38, 8)) │ UInt64 │
-├──────────────────┼──────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼─────────────┤
-│ A │ F │ 37734107.00 │ 56586554400.73 │ 53758257134.8700 │ 55909065222.827692 │ 25.52200585 │ 38273.12973462 │ 0.04998530 │ 1478493 │
-│ N │ F │ 991417.00 │ 1487504710.38 │ 1413082168.0541 │ 1469649223.194375 │ 25.51647192 │ 38284.46776085 │ 0.05009343 │ 38854 │
-│ N │ O │ 76633518.00 │ 114935210409.19 │ 109189591897.4720 │ 113561024263.013782 │ 25.50201964 │ 38248.01560906 │ 0.05000026 │ 3004998 │
-│ R │ F │ 37719753.00 │ 56568041380.90 │ 53741292684.6040 │ 55889619119.831932 │ 25.50579361 │ 38250.85462610 │ 0.05000941 │ 1478870 │
-└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
-```
-
-## Datatype Mapping
-
-This table maps data types between Apache Iceberg™ and Databend. Please note that Databend does not currently support Iceberg data types that are not listed in the table.
-
-| Apache Iceberg™ | Databend |
-| ------------------------------------------- | ------------------------------------------------------------------------ |
-| BOOLEAN | [BOOLEAN](/sql/sql-reference/data-types/boolean) |
-| INT | [INT32](/sql/sql-reference/data-types/numeric#integer-data-types) |
-| LONG | [INT64](/sql/sql-reference/data-types/numeric#integer-data-types) |
-| DATE | [DATE](/sql/sql-reference/data-types/datetime) |
-| TIMESTAMP/TIMESTAMPZ | [TIMESTAMP](/sql/sql-reference/data-types/datetime) |
-| FLOAT | [FLOAT](/sql/sql-reference/data-types/numeric#floating-point-data-types) |
-| DOUBLE | [DOUBLE](/sql/sql-reference/data-types/numeric#floating-point-data-type) |
-| STRING/BINARY | [STRING](/sql/sql-reference/data-types/string) |
-| DECIMAL | [DECIMAL](/sql/sql-reference/data-types/decimal) |
-| ARRAY<TYPE> | [ARRAY](/sql/sql-reference/data-types/array), supports nesting |
-| MAP<KEYTYPE, VALUETYPE> | [MAP](/sql/sql-reference/data-types/map) |
-| STRUCT<COL1: TYPE1, COL2: TYPE2, ...> | [TUPLE](/sql/sql-reference/data-types/tuple) |
-| LIST | [ARRAY](/sql/sql-reference/data-types/array) |
-
-## Managing Catalogs
-
-Databend provides you the following commands to manage catalogs:
-
-- [CREATE CATALOG](#create-catalog)
-- [SHOW CREATE CATALOG](#show-create-catalog)
-- [SHOW CATALOGS](#show-catalogs)
-- [USE CATALOG](#use-catalog)
-
-### CREATE CATALOG
-
-Defines and establishes a new catalog in the Databend query engine.
-
-#### Syntax
-
-```sql
-CREATE CATALOG <catalog_name>
-TYPE=ICEBERG
-CONNECTION=(
-    TYPE='<connection_type>'
-    ADDRESS='<address>'
-    WAREHOUSE='<warehouse_location>'
-    "<connection_parameter>"='<connection_parameter_value>'
-    "<connection_parameter>"='<connection_parameter_value>'
-    ...
-);
-```
-
-| Parameter | Required? | Description |
-| ---------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `<catalog_name>` | Yes | The name of the catalog you want to create. |
-| `TYPE` | Yes | Specifies the catalog type. For Apache Iceberg™, set to `ICEBERG`. |
-| `CONNECTION` | Yes | The connection parameters for the Iceberg catalog. |
-| `TYPE` (inside `CONNECTION`) | Yes | The connection type. For Iceberg, it is typically set to `rest` for REST-based connection. |
-| `ADDRESS` | Yes | The address or URL of the Iceberg service (e.g., `http://127.0.0.1:8181`). |
-| `WAREHOUSE` | Yes | The location of the Iceberg warehouse, usually an S3 bucket or compatible object storage system. |
-| `<connection_parameter>` | Yes | Connection parameters to establish connections with external storage. The required parameters vary based on the specific storage service and authentication methods. See the table below for a full list of the available parameters. |
-
-| Connection Parameter | Description |
-| --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
-| `s3.endpoint` | S3 endpoint. |
-| `s3.access-key-id` | S3 access key ID. |
-| `s3.secret-access-key` | S3 secret access key. |
-| `s3.session-token` | S3 session token, required when using temporary credentials. |
-| `s3.region` | S3 region. |
-| `client.region` | Region to use for the S3 client, takes precedence over `s3.region`. |
-| `s3.path-style-access` | S3 Path Style Access. |
-| `s3.sse.type` | S3 Server-Side Encryption (SSE) type. |
-| `s3.sse.key` | S3 SSE key. If encryption type is `kms`, this is a KMS Key ID. If encryption type is `custom`, this is a base-64 AES256 symmetric key. |
-| `s3.sse.md5` | S3 SSE MD5 checksum. |
-| `client.assume-role.arn` | ARN of the IAM role to assume instead of using the default credential chain. |
-| `client.assume-role.external-id` | Optional external ID used to assume an IAM role. |
-| `client.assume-role.session-name` | Optional session name used to assume an IAM role. |
-| `s3.allow-anonymous` | Option to allow anonymous access (e.g., for public buckets/folders). |
-| `s3.disable-ec2-metadata` | Option to disable loading credentials from EC2 metadata (typically used with `s3.allow-anonymous`). |
-| `s3.disable-config-load` | Option to disable loading configuration from config files and environment variables. |
-
-### Catalog Types
-
-Databend supports four types of Iceberg catalogs:
-
-- REST Catalog
-
-REST catalog uses a RESTful API approach to interact with Iceberg tables.
-
-```sql
-CREATE CATALOG iceberg_rest TYPE = ICEBERG CONNECTION = (
- TYPE = 'rest'
- ADDRESS = 'http://localhost:8181'
- warehouse = 's3://warehouse/demo/'
- "s3.endpoint" = 'http://localhost:9000'
- "s3.access-key-id" = 'admin'
- "s3.secret-access-key" = 'password'
- "s3.region" = 'us-east-1'
-)
-
-- AWS Glue Catalog
-For Glue catalogs, the configuration includes both Glue service parameters and storage (S3) parameters. The Glue service parameters appear first, followed by the S3 storage parameters (prefixed with "s3.").
-
-```sql
-CREATE CATALOG iceberg_glue TYPE = ICEBERG CONNECTION = (
- TYPE = 'glue'
- ADDRESS = 'http://localhost:5000'
- warehouse = 's3a://warehouse/glue/'
- "aws_access_key_id" = 'my_access_id'
- "aws_secret_access_key" = 'my_secret_key'
- "region_name" = 'us-east-1'
- "s3.endpoint" = 'http://localhost:9000'
- "s3.access-key-id" = 'admin'
- "s3.secret-access-key" = 'password'
- "s3.region" = 'us-east-1'
-)
-```
-
-- Storage Catalog (S3Tables Catalog)
-
-The Storage catalog requires a table_bucket_arn parameter. Unlike other buckets, S3Tables bucket is not a physical bucket, but a virtual bucket that is managed by S3Tables. You cannot directly access the bucket with a path like `s3://{bucket_name}/{file_path}`. All operations are performed with respect to the bucket ARN.
-
-Properties Parameters
-The following properties are available for the catalog:
-
-```
-profile_name: The name of the AWS profile to use.
-region_name: The AWS region to use.
-aws_access_key_id: The AWS access key ID to use.
-aws_secret_access_key: The AWS secret access key to use.
-aws_session_token: The AWS session token to use.
-```
-
-```sql
-CREATE CATALOG iceberg_storage TYPE = ICEBERG CONNECTION = (
- TYPE = 'storage'
- ADDRESS = 'http://localhost:9111'
- "table_bucket_arn" = 'my-bucket'
- -- Additional properties as needed
-)
-```
-
-- Hive Catalog (HMS Catalog)
-
-The Hive catalog requires an ADDRESS parameter, which is the address of the Hive metastore. It also requires a warehouse parameter, which is the location of the Iceberg warehouse, usually an S3 bucket or compatible object storage system.
-
-```sql
-CREATE CATALOG iceberg_hms TYPE = ICEBERG CONNECTION = (
- TYPE = 'hive'
- ADDRESS = '192.168.10.111:9083'
- warehouse = 's3a://warehouse/hive/'
- "s3.endpoint" = 'http://localhost:9000'
- "s3.access-key-id" = 'admin'
- "s3.secret-access-key" = 'password'
- "s3.region" = 'us-east-1'
-)
-```
-
-### SHOW CREATE CATALOG
-
-Returns the detailed configuration of a specified catalog, including its type and storage parameters.
-
-#### Syntax
-
-```sql
-SHOW CREATE CATALOG <catalog_name>;
-```
-
-### SHOW CATALOGS
-
-Shows all the created catalogs.
-
-#### Syntax
-
-```sql
-SHOW CATALOGS [LIKE '<pattern>']
-```
-
-### USE CATALOG
-
-Switches the current session to the specified catalog.
-
-#### Syntax
-
-```sql
-USE CATALOG <catalog_name>
-```
-
-## Caching Iceberg Catalog
-
-Databend offers a Catalog Metadata Cache specifically designed for Iceberg catalogs. When a query is executed on an Iceberg table for the first time, the metadata is cached in memory. By default, this cache remains valid for 10 minutes, after which it is asynchronously refreshed. This ensures that queries on Iceberg tables are faster by avoiding repeated metadata retrieval.
-
-If you need fresh metadata, you can manually refresh the cache using the following commands:
-
-```sql
-USE CATALOG iceberg;
-ALTER DATABASE tpch REFRESH CACHE; -- Refresh metadata cache for the tpch database
-ALTER TABLE tpch.lineitem REFRESH CACHE; -- Refresh metadata cache for the lineitem table
-```
-
-If you prefer not to use the metadata cache, you can disable it entirely by configuring the `iceberg_table_meta_count` setting to `0` in the [databend-query.toml](https://github.com/databendlabs/databend/blob/main/scripts/distribution/configs/databend-query.toml) configuration file:
-
-```toml
-...
-# Cache config.
-[cache]
-...
-iceberg_table_meta_count = 0
-...
-```
-
-In addition to metadata caching, Databend also supports table data caching for Iceberg catalog tables, similar to Fuse tables. For more information on data caching, refer to the `[cache] Section` in the [Query Configurations](../10-deploy/04-references/02-node-config/02-query-config.md) reference.
-
-## Apache Iceberg™ Table Functions
-
-Databend provides the following table functions for querying Iceberg metadata, allowing users to inspect snapshots and manifests efficiently:
-
-- [ICEBERG_MANIFEST](/sql/sql-functions/table-functions/iceberg-manifest)
-- [ICEBERG_SNAPSHOT](/sql/sql-functions/table-functions/iceberg-snapshot)
-
-## Usage Examples
-
-This example shows how to create an Iceberg catalog using a REST-based connection, specifying the service address, warehouse location (S3), and optional parameters like AWS region and custom endpoint:
-
-```sql
-CREATE CATALOG ctl
-TYPE=ICEBERG
-CONNECTION=(
- TYPE='rest'
- ADDRESS='http://127.0.0.1:8181'
- WAREHOUSE='s3://iceberg-tpch'
- "s3.region"='us-east-1'
- "s3.endpoint"='http://127.0.0.1:9000'
-);
-```
diff --git a/docs/en/guides/51-access-data-lake/_category_.json b/docs/en/guides/51-access-data-lake/_category_.json
deleted file mode 100644
index c6d3b2bcec..0000000000
--- a/docs/en/guides/51-access-data-lake/_category_.json
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "label": "Data Lakehouse"
-}
\ No newline at end of file
diff --git a/docs/en/guides/51-access-data-lake/index.md b/docs/en/guides/51-access-data-lake/index.md
deleted file mode 100644
index 6e857e8bd9..0000000000
--- a/docs/en/guides/51-access-data-lake/index.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Databend for Data Lakehouse
----
-
-Databend integrates with popular data lake technologies to provide a unified lakehouse architecture that combines data lake flexibility with data warehouse performance.
-
-| Technology | Integration Type | Key Features | Documentation |
-|------------|-----------------|--------------|---------------|
-| Apache Hive | Catalog-level | Legacy data lake support, schema registry | [Apache Hive Catalog](01-hive.md) |
-| Apache Iceberg™ | Catalog-level | ACID transactions, schema evolution, time travel | [Apache Iceberg™ Catalog](02-iceberg.md) |
-| Delta Lake | Table engine-level | ACID transactions, data versioning, schema enforcement | [Delta Lake Table Engine](03-delta.md) |
-
-These integrations enable Databend users to efficiently query, analyze, and manage diverse datasets across both data lake and data warehouse environments without data duplication.
\ No newline at end of file
diff --git a/docs/en/sql-reference/00-sql-reference/30-table-engines/02-iceberg.md b/docs/en/sql-reference/00-sql-reference/30-table-engines/02-iceberg.md
index 250b4626f6..1918d74a90 100644
--- a/docs/en/sql-reference/00-sql-reference/30-table-engines/02-iceberg.md
+++ b/docs/en/sql-reference/00-sql-reference/30-table-engines/02-iceberg.md
@@ -2,8 +2,439 @@
id: iceberg
title: Apache Iceberg™ Tables
sidebar_label: Apache Iceberg™ Tables
+slug: /sql-reference/table-engines/iceberg
---
-import Content from '../../../guides/51-access-data-lake/02-iceberg.md';
+import FunctionDescription from '@site/src/components/FunctionDescription';
-<Content />
+
+
+Databend supports the integration of an [Apache Iceberg™](https://iceberg.apache.org/) catalog, seamlessly bringing Iceberg's metadata and storage management into the platform for data management and analytics.
+
+## Quick Start with Iceberg
+
+If you want to quickly try out Iceberg and experiment with table operations locally, a [Docker-based starter project](https://github.com/databendlabs/iceberg-quick-start) is available. This setup allows you to:
+
+- Run Spark with Iceberg support
+- Use a REST catalog (Iceberg REST Fixture)
+- Simulate an S3-compatible object store using MinIO
+- Load sample TPC-H data into Iceberg tables for query testing
+
+### Prerequisites
+
+Before you start, make sure Docker and Docker Compose are installed on your system.
+
+### Start Iceberg Environment
+
+```bash
+git clone https://github.com/databendlabs/iceberg-quick-start.git
+cd iceberg-quick-start
+docker compose up -d
+```
+
+This will start the following services:
+
+- `spark-iceberg`: Spark 3.4 with Iceberg
+- `rest`: Iceberg REST Catalog
+- `minio`: S3-compatible object store
+- `mc`: MinIO client (for setting up the bucket)
+
+```bash
+WARN[0000] /Users/eric/iceberg-quick-start/docker-compose.yml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
+[+] Running 5/5
+ ✔ Network iceberg-quick-start_iceberg_net Created 0.0s
+ ✔ Container iceberg-rest-test Started 0.4s
+ ✔ Container minio Started 0.4s
+ ✔ Container mc Started 0.6s
+ ✔ Container spark-iceberg S... 0.7s
+```
+
+### Load TPC-H Data via Spark Shell
+
+Run the following command to generate and load sample TPC-H data into the Iceberg tables:
+
+```bash
+docker exec spark-iceberg bash /home/iceberg/load_tpch.sh
+```
+
+```bash
+Collecting duckdb
+ Downloading duckdb-1.2.2-cp310-cp310-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl (18.7 MB)
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.7/18.7 MB 5.8 MB/s eta 0:00:00
+Requirement already satisfied: pyspark in /opt/spark/python (3.5.5)
+Collecting py4j==0.10.9.7
+ Downloading py4j-0.10.9.7-py2.py3-none-any.whl (200 kB)
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 200.5/200.5 kB 5.9 MB/s eta 0:00:00
+Installing collected packages: py4j, duckdb
+Successfully installed duckdb-1.2.2 py4j-0.10.9.7
+
+[notice] A new release of pip is available: 23.0.1 -> 25.1.1
+[notice] To update, run: pip install --upgrade pip
+Setting default log level to "WARN".
+To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
+25/05/07 12:17:27 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+25/05/07 12:17:28 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
+[2025-05-07 12:17:18] [INFO] Starting TPC-H data generation and loading process
+[2025-05-07 12:17:18] [INFO] Configuration: Scale Factor=1, Data Dir=/home/iceberg/data/tpch_1
+[2025-05-07 12:17:18] [INFO] Generating TPC-H data with DuckDB (Scale Factor: 1)
+[2025-05-07 12:17:27] [INFO] Generated 8 Parquet files in /home/iceberg/data/tpch_1
+[2025-05-07 12:17:28] [INFO] Loading data into Iceberg catalog
+[2025-05-07 12:17:33] [INFO] Created Iceberg table: demo.tpch.part from part.parquet
+[2025-05-07 12:17:33] [INFO] Created Iceberg table: demo.tpch.region from region.parquet
+[2025-05-07 12:17:33] [INFO] Created Iceberg table: demo.tpch.supplier from supplier.parquet
+[2025-05-07 12:17:35] [INFO] Created Iceberg table: demo.tpch.orders from orders.parquet
+[2025-05-07 12:17:35] [INFO] Created Iceberg table: demo.tpch.nation from nation.parquet
+[2025-05-07 12:17:40] [INFO] Created Iceberg table: demo.tpch.lineitem from lineitem.parquet
+[2025-05-07 12:17:40] [INFO] Created Iceberg table: demo.tpch.partsupp from partsupp.parquet
+[2025-05-07 12:17:41] [INFO] Created Iceberg table: demo.tpch.customer from customer.parquet
++---------+---------+-----------+
+|namespace|tableName|isTemporary|
++---------+---------+-----------+
+| tpch| customer| false|
+| tpch| lineitem| false|
+| tpch| nation| false|
+| tpch| orders| false|
+| tpch| part| false|
+| tpch| partsupp| false|
+| tpch| region| false|
+| tpch| supplier| false|
++---------+---------+-----------+
+
+[2025-05-07 12:17:42] [SUCCESS] TPCH data generation and loading completed successfully
+```
+
+### Query Data in Databend
+
+Once the TPC-H tables are loaded, you can query the data in Databend:
+
+1. Launch Databend in Docker:
+
+```bash
+docker network create iceberg_net
+```
+
+```bash
+docker run -d \
+ --name databend \
+ --network iceberg_net \
+ -p 3307:3307 \
+ -p 8000:8000 \
+ -p 8124:8124 \
+ -p 8900:8900 \
+ datafuselabs/databend
+```
+
+2. Connect to Databend using BendSQL first, and then create an Iceberg catalog:
+
+```bash
+bendsql
+```
+
+```bash
+Welcome to BendSQL 0.24.1-f1f7de0(2024-12-04T12:31:18.526234000Z).
+Connecting to localhost:8000 as user root.
+Connected to Databend Query v1.2.725-8d073f6b7a(rust-1.88.0-nightly-2025-04-21T11:49:03.577976082Z)
+Loaded 1436 auto complete keywords from server.
+Started web server at 127.0.0.1:8080
+```
+
+```sql
+CREATE CATALOG iceberg TYPE = ICEBERG CONNECTION = (
+ TYPE = 'rest'
+ ADDRESS = 'http://host.docker.internal:8181'
+ warehouse = 's3://warehouse/wh/'
+ "s3.endpoint" = 'http://host.docker.internal:9000'
+ "s3.access-key-id" = 'admin'
+ "s3.secret-access-key" = 'password'
+ "s3.region" = 'us-east-1'
+);
+```
+
+3. Use the newly created catalog:
+
+```sql
+USE CATALOG iceberg;
+```
+
+4. Show available databases:
+
+```sql
+SHOW DATABASES;
+```
+
+```sql
+╭──────────────────────╮
+│ databases_in_iceberg │
+│ String │
+├──────────────────────┤
+│ tpch │
+╰──────────────────────╯
+```
+
+5. Run a sample query to aggregate TPC-H data:
+
+```sql
+SELECT
+ l_returnflag,
+ l_linestatus,
+ SUM(l_quantity) AS sum_qty,
+ SUM(l_extendedprice) AS sum_base_price,
+ SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
+ SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
+ AVG(l_quantity) AS avg_qty,
+ AVG(l_extendedprice) AS avg_price,
+ AVG(l_discount) AS avg_disc,
+ COUNT(*) AS count_order
+FROM
+ iceberg.tpch.lineitem
+GROUP BY
+ l_returnflag,
+ l_linestatus
+ORDER BY
+ l_returnflag,
+ l_linestatus;
+```
+
+```sql
+┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
+│ l_returnflag │ l_linestatus │ sum_qty │ sum_base_price │ sum_disc_price │ sum_charge │ avg_qty │ avg_price │ avg_disc │ count_order │
+│ Nullable(String) │ Nullable(String) │ Nullable(Decimal(38, 2)) │ Nullable(Decimal(38, 2)) │ Nullable(Decimal(38, 4)) │ Nullable(Decimal(38, 6)) │ Nullable(Decimal(38, 8)) │ Nullable(Decimal(38, 8)) │ Nullable(Decimal(38, 8)) │ UInt64 │
+├──────────────────┼──────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼──────────────────────────┼─────────────┤
+│ A │ F │ 37734107.00 │ 56586554400.73 │ 53758257134.8700 │ 55909065222.827692 │ 25.52200585 │ 38273.12973462 │ 0.04998530 │ 1478493 │
+│ N │ F │ 991417.00 │ 1487504710.38 │ 1413082168.0541 │ 1469649223.194375 │ 25.51647192 │ 38284.46776085 │ 0.05009343 │ 38854 │
+│ N │ O │ 76633518.00 │ 114935210409.19 │ 109189591897.4720 │ 113561024263.013782 │ 25.50201964 │ 38248.01560906 │ 0.05000026 │ 3004998 │
+│ R │ F │ 37719753.00 │ 56568041380.90 │ 53741292684.6040 │ 55889619119.831932 │ 25.50579361 │ 38250.85462610 │ 0.05000941 │ 1478870 │
+└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+## Datatype Mapping
+
+This table maps data types between Apache Iceberg™ and Databend. Iceberg data types not listed in the table are currently unsupported.
+
+| Apache Iceberg™ | Databend |
+| ------------------------------------------- | ------------------------------------------------------------------------ |
+| BOOLEAN | [BOOLEAN](/sql/sql-reference/data-types/boolean) |
+| INT | [INT32](/sql/sql-reference/data-types/numeric#integer-data-types) |
+| LONG | [INT64](/sql/sql-reference/data-types/numeric#integer-data-types) |
+| DATE | [DATE](/sql/sql-reference/data-types/datetime) |
+| TIMESTAMP/TIMESTAMPTZ                        | [TIMESTAMP](/sql/sql-reference/data-types/datetime)                       |
+| FLOAT | [FLOAT](/sql/sql-reference/data-types/numeric#floating-point-data-types) |
+| DOUBLE                                       | [DOUBLE](/sql/sql-reference/data-types/numeric#floating-point-data-types) |
+| STRING/BINARY | [STRING](/sql/sql-reference/data-types/string) |
+| DECIMAL | [DECIMAL](/sql/sql-reference/data-types/decimal) |
+| ARRAY<TYPE> | [ARRAY](/sql/sql-reference/data-types/array), supports nesting |
+| MAP<KEYTYPE, VALUETYPE> | [MAP](/sql/sql-reference/data-types/map) |
+| STRUCT<COL1: TYPE1, COL2: TYPE2, ...> | [TUPLE](/sql/sql-reference/data-types/tuple) |
+| LIST | [ARRAY](/sql/sql-reference/data-types/array) |
+
+## Managing Catalogs
+
+Databend provides the following commands to manage catalogs:
+
+- [CREATE CATALOG](#create-catalog)
+- [SHOW CREATE CATALOG](#show-create-catalog)
+- [SHOW CATALOGS](#show-catalogs)
+- [USE CATALOG](#use-catalog)
+
+### CREATE CATALOG
+
+Defines and establishes a new catalog in the Databend query engine.
+
+#### Syntax
+
+```sql
+CREATE CATALOG <catalog_name>
+TYPE=ICEBERG
+CONNECTION=(
+    TYPE='<connection_type>'
+    ADDRESS='<address>'
+    WAREHOUSE='<warehouse_location>'
+    "<connection_parameter>"='<connection_parameter_value>'
+    "<connection_parameter>"='<connection_parameter_value>'
+    ...
+);
+```
+
+| Parameter | Required? | Description |
+| ---------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `<catalog_name>` | Yes | The name of the catalog you want to create. |
+| `TYPE` | Yes | Specifies the catalog type. For Apache Iceberg™, set to `ICEBERG`. |
+| `CONNECTION` | Yes | The connection parameters for the Iceberg catalog. |
+| `TYPE` (inside `CONNECTION`) | Yes | The connection type. For Iceberg, it is typically set to `rest` for a REST-based connection. |
+| `ADDRESS` | Yes | The address or URL of the Iceberg service (e.g., `http://127.0.0.1:8181`). |
+| `WAREHOUSE` | Yes | The location of the Iceberg warehouse, usually an S3 bucket or compatible object storage system. |
+| `<connection_parameter>` | Yes | Connection parameters to establish connections with external storage. The required parameters vary based on the specific storage service and authentication methods. See the table below for a full list of the available parameters. |
+
+| Connection Parameter | Description |
+| --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
+| `s3.endpoint` | S3 endpoint. |
+| `s3.access-key-id` | S3 access key ID. |
+| `s3.secret-access-key` | S3 secret access key. |
+| `s3.session-token` | S3 session token, required when using temporary credentials. |
+| `s3.region` | S3 region. |
+| `client.region` | Region to use for the S3 client; takes precedence over `s3.region`. |
+| `s3.path-style-access` | S3 Path Style Access. |
+| `s3.sse.type` | S3 Server-Side Encryption (SSE) type. |
+| `s3.sse.key` | S3 SSE key. If encryption type is `kms`, this is a KMS Key ID. If encryption type is `custom`, this is a base-64 AES256 symmetric key. |
+| `s3.sse.md5` | S3 SSE MD5 checksum. |
+| `client.assume-role.arn` | ARN of the IAM role to assume instead of using the default credential chain. |
+| `client.assume-role.external-id` | Optional external ID used to assume an IAM role. |
+| `client.assume-role.session-name` | Optional session name used to assume an IAM role. |
+| `s3.allow-anonymous` | Option to allow anonymous access (e.g., for public buckets/folders). |
+| `s3.disable-ec2-metadata` | Option to disable loading credentials from EC2 metadata (typically used with `s3.allow-anonymous`). |
+| `s3.disable-config-load` | Option to disable loading configuration from config files and environment variables. |
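+
+For instance, a catalog that authenticates through an assumed IAM role rather than static keys can combine the `client.assume-role.*` parameters above with the usual REST settings. This is a hypothetical sketch; the role ARN, addresses, and warehouse location are placeholders:
+
+```sql
+CREATE CATALOG iceberg_role TYPE = ICEBERG CONNECTION = (
+    TYPE = 'rest'
+    ADDRESS = 'http://127.0.0.1:8181'
+    WAREHOUSE = 's3://warehouse/wh/'
+    "client.assume-role.arn" = 'arn:aws:iam::123456789012:role/databend-reader'
+    "client.assume-role.session-name" = 'databend-session'
+    "s3.region" = 'us-east-1'
+);
+```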
+
+### Catalog Types
+
+Databend supports four types of Iceberg catalogs:
+
+- REST Catalog
+
+REST catalog uses a RESTful API approach to interact with Iceberg tables.
+
+```sql
+CREATE CATALOG iceberg_rest TYPE = ICEBERG CONNECTION = (
+ TYPE = 'rest'
+ ADDRESS = 'http://localhost:8181'
+ warehouse = 's3://warehouse/demo/'
+ "s3.endpoint" = 'http://localhost:9000'
+ "s3.access-key-id" = 'admin'
+ "s3.secret-access-key" = 'password'
+ "s3.region" = 'us-east-1'
+)
+```
+
+- AWS Glue Catalog
+
+For Glue catalogs, the configuration includes both Glue service parameters and storage (S3) parameters. The Glue service parameters appear first, followed by the S3 storage parameters (prefixed with `s3.`).
+
+```sql
+CREATE CATALOG iceberg_glue TYPE = ICEBERG CONNECTION = (
+ TYPE = 'glue'
+ ADDRESS = 'http://localhost:5000'
+ warehouse = 's3a://warehouse/glue/'
+ "aws_access_key_id" = 'my_access_id'
+ "aws_secret_access_key" = 'my_secret_key'
+ "region_name" = 'us-east-1'
+ "s3.endpoint" = 'http://localhost:9000'
+ "s3.access-key-id" = 'admin'
+ "s3.secret-access-key" = 'password'
+ "s3.region" = 'us-east-1'
+)
+```
+
+- Storage Catalog (S3Tables Catalog)
+
+The Storage catalog requires a `table_bucket_arn` parameter. Unlike other buckets, an S3Tables bucket is not a physical bucket but a virtual bucket managed by S3Tables. You cannot access it directly with a path like `s3://{bucket_name}/{file_path}`; all operations are performed against the bucket ARN.
+
+The following properties are available for the catalog:
+
+```
+profile_name: The name of the AWS profile to use.
+region_name: The AWS region to use.
+aws_access_key_id: The AWS access key ID to use.
+aws_secret_access_key: The AWS secret access key to use.
+aws_session_token: The AWS session token to use.
+```
+
+```sql
+CREATE CATALOG iceberg_storage TYPE = ICEBERG CONNECTION = (
+ TYPE = 'storage'
+ ADDRESS = 'http://localhost:9111'
+ "table_bucket_arn" = 'my-bucket'
+ -- Additional properties as needed
+)
+```
+
+- Hive Catalog (HMS Catalog)
+
+The Hive catalog requires an ADDRESS parameter, which is the address of the Hive metastore. It also requires a warehouse parameter, which is the location of the Iceberg warehouse, usually an S3 bucket or compatible object storage system.
+
+```sql
+CREATE CATALOG iceberg_hms TYPE = ICEBERG CONNECTION = (
+ TYPE = 'hive'
+ ADDRESS = '192.168.10.111:9083'
+ warehouse = 's3a://warehouse/hive/'
+ "s3.endpoint" = 'http://localhost:9000'
+ "s3.access-key-id" = 'admin'
+ "s3.secret-access-key" = 'password'
+ "s3.region" = 'us-east-1'
+)
+```
+
+### SHOW CREATE CATALOG
+
+Returns the detailed configuration of a specified catalog, including its type and storage parameters.
+
+#### Syntax
+
+```sql
+SHOW CREATE CATALOG <catalog_name>;
+```
+
+### SHOW CATALOGS
+
+Shows all the created catalogs.
+
+#### Syntax
+
+```sql
+SHOW CATALOGS [LIKE '<pattern>']
+```
+
+### USE CATALOG
+
+Switches the current session to the specified catalog.
+
+#### Syntax
+
+```sql
+USE CATALOG <catalog_name>
+```
+
+## Caching Iceberg Catalog
+
+Databend offers a Catalog Metadata Cache specifically designed for Iceberg catalogs. When a query is executed on an Iceberg table for the first time, the metadata is cached in memory. By default, this cache remains valid for 10 minutes, after which it is asynchronously refreshed. This ensures that queries on Iceberg tables are faster by avoiding repeated metadata retrieval.
+
+If you need fresh metadata, you can manually refresh the cache using the following commands:
+
+```sql
+USE CATALOG iceberg;
+ALTER DATABASE tpch REFRESH CACHE; -- Refresh metadata cache for the tpch database
+ALTER TABLE tpch.lineitem REFRESH CACHE; -- Refresh metadata cache for the lineitem table
+```
+
+If you prefer not to use the metadata cache, you can disable it entirely by configuring the `iceberg_table_meta_count` setting to `0` in the [databend-query.toml](https://github.com/databendlabs/databend/blob/main/scripts/distribution/configs/databend-query.toml) configuration file:
+
+```toml
+...
+# Cache config.
+[cache]
+...
+iceberg_table_meta_count = 0
+...
+```
+
+In addition to metadata caching, Databend also supports table data caching for Iceberg catalog tables, similar to Fuse tables. For more information on data caching, refer to the [cache section](/guides/deploy/references/node-config/query-config#cache-section) in the Query Configurations reference.
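+
+As a minimal sketch, enabling the disk-based table data cache can look like the following. The key names follow the sample databend-query.toml linked above and may vary across versions, so treat them as an assumption to verify against your release:
+
+```toml
+[cache]
+# Cache table data on local disk ("none" disables data caching).
+data_cache_storage = "disk"
+
+[cache.disk]
+path = "./.databend/_cache"  # where cached table data is stored
+max_bytes = 21474836480      # cache budget in bytes (20 GiB)
+```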
+
+## Apache Iceberg™ Table Functions
+
+Databend provides the following table functions for querying Iceberg metadata, allowing users to inspect snapshots and manifests efficiently:
+
+- [ICEBERG_MANIFEST](/sql/sql-functions/table-functions/iceberg-manifest)
+- [ICEBERG_SNAPSHOT](/sql/sql-functions/table-functions/iceberg-snapshot)
+
+## Usage Examples
+
+This example shows how to create an Iceberg catalog using a REST-based connection, specifying the service address, warehouse location (S3), and optional parameters like AWS region and custom endpoint:
+
+```sql
+CREATE CATALOG ctl
+TYPE=ICEBERG
+CONNECTION=(
+ TYPE='rest'
+ ADDRESS='http://127.0.0.1:8181'
+ WAREHOUSE='s3://iceberg-tpch'
+ "s3.region"='us-east-1'
+ "s3.endpoint"='http://127.0.0.1:9000'
+);
+```
diff --git a/docs/en/sql-reference/00-sql-reference/30-table-engines/03-hive.md b/docs/en/sql-reference/00-sql-reference/30-table-engines/03-hive.md
index 997dcdbe08..424aa14984 100644
--- a/docs/en/sql-reference/00-sql-reference/30-table-engines/03-hive.md
+++ b/docs/en/sql-reference/00-sql-reference/30-table-engines/03-hive.md
@@ -2,8 +2,74 @@
id: hive
title: Apache Hive Tables
sidebar_label: Apache Hive Tables
+slug: /sql-reference/table-engines/hive
---
+import FunctionDescription from '@site/src/components/FunctionDescription';
-import Content from '../../../guides/51-access-data-lake/01-hive.md';
+
-<Content />
+Databend can query data that is cataloged by Apache Hive without copying it. Register the Hive Metastore as a Databend catalog, point to the object storage that holds the table data, and then query the tables as if they were native Databend objects.
+
+## Quick Start
+
+1. **Register the Hive Metastore**
+
+ ```sql
+ CREATE CATALOG hive_prod
+ TYPE = HIVE
+ CONNECTION = (
+ METASTORE_ADDRESS = '127.0.0.1:9083'
+ URL = 's3://lakehouse/'
+ ACCESS_KEY_ID = '<your_key_id>'
+ SECRET_ACCESS_KEY = '<your_secret_key>'
+ );
+ ```
+
+2. **Explore the catalog**
+
+ ```sql
+ USE CATALOG hive_prod;
+ SHOW DATABASES;
+ SHOW TABLES FROM tpch;
+ ```
+
+3. **Query Hive tables**
+
+ ```sql
+ SELECT l_orderkey, SUM(l_extendedprice) AS revenue
+ FROM tpch.lineitem
+ GROUP BY l_orderkey
+ ORDER BY revenue DESC
+ LIMIT 10;
+ ```
+
+## Keep Metadata Fresh
+
+Hive schemas or partitions can change outside of Databend. Refresh Databend’s cached metadata when that happens:
+
+```sql
+ALTER TABLE tpch.lineitem REFRESH CACHE;
+```
+
+## Data Type Mapping
+
+Databend automatically converts Hive primitive types to their closest native equivalents when queries run:
+
+| Hive Type | Databend Type |
+| --------- | ------------- |
+| `BOOLEAN` | [BOOLEAN](/sql/sql-reference/data-types/boolean) |
+| `TINYINT`, `SMALLINT`, `INT`, `BIGINT` | [Integer types](/sql/sql-reference/data-types/numeric#integer-data-types) |
+| `FLOAT`, `DOUBLE` | [Floating-point types](/sql/sql-reference/data-types/numeric#floating-point-data-types) |
+| `DECIMAL(p,s)` | [DECIMAL](/sql/sql-reference/data-types/decimal) |
+| `STRING`, `VARCHAR`, `CHAR` | [STRING](/sql/sql-reference/data-types/string) |
+| `DATE`, `TIMESTAMP` | [DATETIME](/sql/sql-reference/data-types/datetime) |
+| `ARRAY` | [ARRAY](/sql/sql-reference/data-types/array) |
+| `MAP` | [MAP](/sql/sql-reference/data-types/map) |
+
+Nested structures such as `STRUCT` are surfaced through the [VARIANT](/sql/sql-reference/data-types/variant) type.
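+
+For example, assuming a hypothetical Hive table `weblogs.events` whose `payload` column is a Hive `STRUCT`, the struct fields can be reached with Databend's VARIANT path syntax (a sketch, not output from a real deployment):
+
+```sql
+-- payload arrives as VARIANT; drill into struct fields with the : operator
+SELECT payload:user_id, payload:referrer
+FROM hive_prod.weblogs.events
+LIMIT 5;
+```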
+
+## Notes and Limitations
+
+- Hive catalogs are **read-only** in Databend (writes must happen through Hive-compatible engines).
+- Access to the underlying object storage is required; configure credentials by using [connection parameters](/sql/sql-reference/connect-parameters).
+- Use `ALTER TABLE ... REFRESH CACHE` whenever table layout changes (for example, new partitions) to keep query results up to date.
diff --git a/docs/en/guides/51-access-data-lake/03-delta.md b/docs/en/sql-reference/00-sql-reference/30-table-engines/04-delta.md
similarity index 94%
rename from docs/en/guides/51-access-data-lake/03-delta.md
rename to docs/en/sql-reference/00-sql-reference/30-table-engines/04-delta.md
index 917b324649..6202af2bec 100644
--- a/docs/en/guides/51-access-data-lake/03-delta.md
+++ b/docs/en/sql-reference/00-sql-reference/30-table-engines/04-delta.md
@@ -1,6 +1,10 @@
---
-title: Delta Lake
+id: delta
+title: Delta Lake Engine
+sidebar_label: Delta Lake Engine
+slug: /sql-reference/table-engines/delta
---
+
import FunctionDescription from '@site/src/components/FunctionDescription';
diff --git a/docs/en/sql-reference/00-sql-reference/30-table-engines/_04-delta.md b/docs/en/sql-reference/00-sql-reference/30-table-engines/_04-delta.md
deleted file mode 100644
index 14e2789266..0000000000
--- a/docs/en/sql-reference/00-sql-reference/30-table-engines/_04-delta.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: delta
-title: Delta Lake Engine
-sidebar_label: Delta Lake Engine
----
-
-import Content from '../../../guides/51-access-data-lake/03-delta.md';
-
-<Content />
diff --git a/docs/en/sql-reference/00-sql-reference/30-table-engines/_category_.json b/docs/en/sql-reference/00-sql-reference/30-table-engines/_category_.json
index e43e5670b2..aa9e4d17a3 100644
--- a/docs/en/sql-reference/00-sql-reference/30-table-engines/_category_.json
+++ b/docs/en/sql-reference/00-sql-reference/30-table-engines/_category_.json
@@ -1,3 +1,3 @@
{
- "label": "Tables"
-}
\ No newline at end of file
+ "label": "Table Engines"
+}
diff --git a/docs/en/sql-reference/00-sql-reference/30-table-engines/index.md b/docs/en/sql-reference/00-sql-reference/30-table-engines/index.md
new file mode 100644
index 0000000000..7e625b1809
--- /dev/null
+++ b/docs/en/sql-reference/00-sql-reference/30-table-engines/index.md
@@ -0,0 +1,22 @@
+---
+title: Table Engines
+---
+
+Databend provides several table engines so that you can balance performance and interoperability needs without moving data. Each engine is optimized for a specific scenario—ranging from Databend’s native Fuse storage to external data lake formats.
+
+## Available Engines
+
+| Engine | Best For | Highlights |
+| ------ | -------- | ---------- |
+| [Fuse Engine Tables](fuse) | Native Databend tables | Snapshot-based storage, automatic clustering, change tracking |
+| [Apache Iceberg™ Tables](iceberg) | Lakehouse catalogs | Time-travel, schema evolution, REST/Hive/Storage catalogs |
+| [Apache Hive Tables](hive) | Hive metastore data | Query Hive-managed data stores through external tables |
+| [Delta Lake Engine](delta) | Delta Lake datasets | Read Delta tables in object storage with ACID guarantees |
+
+## Choosing an Engine
+
+- Use **Fuse** when you manage data directly inside Databend and want the best storage and query performance.
+- Choose **Iceberg** when you already manage datasets through Iceberg catalogs and need tight lakehouse integration.
+- Configure **Hive** when you rely on an existing Hive Metastore but want Databend’s query engine.
+- Select **Delta** to analyze Delta Lake tables in place without ingesting them into Fuse (see the sketch below).
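+
+As a minimal sketch of how the choice plays out in SQL (addresses and names are placeholders; the catalog example follows the Apache Iceberg™ engine page):
+
+```sql
+-- Native Fuse table: the default when no engine is specified
+CREATE TABLE sales (id INT, amount DOUBLE);
+
+-- Lakehouse data stays external: register a catalog instead of ingesting
+CREATE CATALOG my_lake TYPE = ICEBERG CONNECTION = (
+    TYPE = 'rest'
+    ADDRESS = 'http://127.0.0.1:8181'
+    WAREHOUSE = 's3://warehouse/wh/'
+);
+
+SELECT COUNT(*) FROM my_lake.tpch.lineitem;
+```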
+
diff --git a/pdf/docs.databend.en-sql.txt b/pdf/docs.databend.en-sql.txt
index ae26108305..4dc8aa6622 100644
--- a/pdf/docs.databend.en-sql.txt
+++ b/pdf/docs.databend.en-sql.txt
@@ -1,8 +1,8 @@
https://docs.databend.com/guides/
-https://docs.databend.com/guides/access-data-lake/
-https://docs.databend.com/guides/access-data-lake/delta
-https://docs.databend.com/guides/access-data-lake/hive
-https://docs.databend.com/guides/access-data-lake/iceberg
+https://docs.databend.com/sql/sql-reference/table-engines/
+https://docs.databend.com/sql/sql-reference/table-engines/delta
+https://docs.databend.com/sql/sql-reference/table-engines/hive
+https://docs.databend.com/sql/sql-reference/table-engines/iceberg
https://docs.databend.com/guides/ai-functions/
https://docs.databend.com/guides/ai-functions/external-functions
https://docs.databend.com/guides/ai-functions/mcp
@@ -1157,4 +1157,4 @@ https://docs.databend.com/tutorials/migrate/migrating-from-snowflake
https://docs.databend.com/tutorials/programming/python/integrating-with-databend-cloud-using-databend-driver
https://docs.databend.com/tutorials/programming/python/integrating-with-databend-cloud-using-databend-sqlalchemy
https://docs.databend.com/tutorials/programming/python/integrating-with-self-hosted-databend
-https://docs.databend.com/tutorials/recovery/bendsave
\ No newline at end of file
+https://docs.databend.com/tutorials/recovery/bendsave
diff --git a/scripts/sitemap-en.xml b/scripts/sitemap-en.xml
index 8d32ba8496..bc9ea7f8f6 100644
--- a/scripts/sitemap-en.xml
+++ b/scripts/sitemap-en.xml
@@ -286,22 +286,22 @@
     <priority>0.5</priority>
   </url>
   <url>
-    <loc>https://docs.databend.com/guides/access-data-lake/</loc>
+    <loc>https://docs.databend.com/sql/sql-reference/table-engines/</loc>
     <changefreq>daily</changefreq>
     <priority>0.5</priority>
   </url>
   <url>
-    <loc>https://docs.databend.com/guides/access-data-lake/delta</loc>
+    <loc>https://docs.databend.com/sql/sql-reference/table-engines/delta</loc>
     <changefreq>daily</changefreq>
     <priority>0.5</priority>
   </url>
   <url>
-    <loc>https://docs.databend.com/guides/access-data-lake/hive</loc>
+    <loc>https://docs.databend.com/sql/sql-reference/table-engines/hive</loc>
     <changefreq>daily</changefreq>
     <priority>0.5</priority>
   </url>
   <url>
-    <loc>https://docs.databend.com/guides/access-data-lake/iceberg</loc>
+    <loc>https://docs.databend.com/sql/sql-reference/table-engines/iceberg</loc>
     <changefreq>daily</changefreq>
     <priority>0.5</priority>
   </url>
@@ -5856,4 +5856,4 @@
     <changefreq>daily</changefreq>
     <priority>0.5</priority>
   </url>
-</urlset>
\ No newline at end of file
+</urlset>
diff --git a/site-redirects.ts b/site-redirects.ts
index 7917953216..cd342023e5 100644
--- a/site-redirects.ts
+++ b/site-redirects.ts
@@ -4,8 +4,20 @@ const siteRedirects = [
to: '/guides/'
},
{
- from: '/sql/sql-reference/table-engines/iceberg',
- to: '/guides/access-data-lake/iceberg/'
+ from: '/guides/access-data-lake',
+ to: '/sql/sql-reference/table-engines/'
+ },
+ {
+ from: '/guides/access-data-lake/iceberg',
+ to: '/sql/sql-reference/table-engines/iceberg'
+ },
+ {
+ from: '/guides/access-data-lake/hive',
+ to: '/sql/sql-reference/table-engines/hive'
+ },
+ {
+ from: '/guides/access-data-lake/delta',
+ to: '/sql/sql-reference/table-engines/delta'
},
// AI Functions redirects - functions moved to external implementation
{