diff --git a/docs/en/guides/40-load-data/01-load/00-stage.md b/docs/en/guides/40-load-data/01-load/00-stage.md
index eaf2416883..d836d0829d 100644
--- a/docs/en/guides/40-load-data/01-load/00-stage.md
+++ b/docs/en/guides/40-load-data/01-load/00-stage.md
@@ -1,5 +1,6 @@
---
title: Loading from Stage
+sidebar_label: Stage
---
Databend enables you to easily import data from files uploaded to either the user stage or an internal/external stage. To do so, you can first upload the files to a stage using [BendSQL](../../30-sql-clients/00-bendsql/index.md), and then employ the [COPY INTO](/sql/sql-commands/dml/dml-copy-into-table) command to load the data from the staged file. Please note that the files must be in a format supported by Databend, otherwise the data cannot be imported. For more information on the file formats supported by Databend, see [Input & Output File Formats](/sql/sql-reference/file-format-options).
diff --git a/docs/en/guides/40-load-data/01-load/01-s3.md b/docs/en/guides/40-load-data/01-load/01-s3.md
index a3fcddecae..21cd16615a 100644
--- a/docs/en/guides/40-load-data/01-load/01-s3.md
+++ b/docs/en/guides/40-load-data/01-load/01-s3.md
@@ -1,5 +1,6 @@
---
title: Loading from Bucket
+sidebar_label: Bucket
---
When data files are stored in an object storage bucket, such as Amazon S3, it is possible to load them directly into Databend using the [COPY INTO](/sql/sql-commands/dml/dml-copy-into-table) command. Please note that the files must be in a format supported by Databend, otherwise the data cannot be imported. For more information on the file formats supported by Databend, see [Input & Output File Formats](/sql/sql-reference/file-format-options).
diff --git a/docs/en/guides/40-load-data/01-load/02-local.md b/docs/en/guides/40-load-data/01-load/02-local.md
index 44ed1322e4..98e5f04ecf 100644
--- a/docs/en/guides/40-load-data/01-load/02-local.md
+++ b/docs/en/guides/40-load-data/01-load/02-local.md
@@ -1,5 +1,6 @@
---
title: Loading from Local File
+sidebar_label: Local
---
Uploading your local data files to a stage or bucket before loading them into Databend can be unnecessary. Instead, you can use [BendSQL](../../30-sql-clients/00-bendsql/index.md), the Databend native CLI tool, to directly import the data. This simplifies the workflow and can save you storage fees.
diff --git a/docs/en/guides/40-load-data/01-load/03-http.md b/docs/en/guides/40-load-data/01-load/03-http.md
index 212f6c05ca..631cdc0f49 100644
--- a/docs/en/guides/40-load-data/01-load/03-http.md
+++ b/docs/en/guides/40-load-data/01-load/03-http.md
@@ -1,5 +1,6 @@
---
title: Loading from Remote File
+sidebar_label: Remote
---
To load data from remote files into Databend, the [COPY INTO](/sql/sql-commands/dml/dml-copy-into-table) command can be used. This command allows you to copy data from a variety of sources, including remote files, into Databend with ease. With COPY INTO, you can specify the source file location, file format, and other relevant parameters to tailor the import process to your needs. Please note that the files must be in a format supported by Databend, otherwise the data cannot be imported. For more information on the file formats supported by Databend, see [Input & Output File Formats](/sql/sql-reference/file-format-options).
diff --git a/docs/en/guides/40-load-data/01-load/_category_.json b/docs/en/guides/40-load-data/01-load/_category_.json
index 4f8a45e91e..b2419053e4 100644
--- a/docs/en/guides/40-load-data/01-load/_category_.json
+++ b/docs/en/guides/40-load-data/01-load/_category_.json
@@ -1,3 +1,3 @@
{
- "label": "Loading Data from Files"
+ "label": "Loading Files"
}
\ No newline at end of file
diff --git a/docs/en/guides/40-load-data/01-load/index.md b/docs/en/guides/40-load-data/01-load/index.md
index 2f07d7d1b3..d6a3a1898e 100644
--- a/docs/en/guides/40-load-data/01-load/index.md
+++ b/docs/en/guides/40-load-data/01-load/index.md
@@ -1,5 +1,5 @@
---
-title: Loading Data from Files
+title: Loading from Files
---
import DetailsWrap from '@site/src/components/DetailsWrap';
diff --git a/docs/en/guides/40-load-data/02-load-db/_category_.json b/docs/en/guides/40-load-data/02-load-db/_category_.json
index 2a128497ea..1e231c9ce2 100644
--- a/docs/en/guides/40-load-data/02-load-db/_category_.json
+++ b/docs/en/guides/40-load-data/02-load-db/_category_.json
@@ -1,3 +1,3 @@
{
- "label": "Loading Data with Tools"
+ "label": "Loading with Platforms"
}
\ No newline at end of file
diff --git a/docs/en/guides/40-load-data/02-load-db/addax.md b/docs/en/guides/40-load-data/02-load-db/addax.md
index 47f82a8022..20c6fc4333 100644
--- a/docs/en/guides/40-load-data/02-load-db/addax.md
+++ b/docs/en/guides/40-load-data/02-load-db/addax.md
@@ -2,120 +2,4 @@
title: Addax
---
-import FunctionDescription from '@site/src/components/FunctionDescription';
-
-
-
-[Addax](https://github.com/wgzhao/Addax), originally derived from Alibaba's [DataX](https://github.com/alibaba/DataX), is a versatile open-source ETL (Extract, Transform, Load) tool. It excels at seamlessly transferring data between diverse RDBMS (Relational Database Management Systems) and NoSQL databases, making it an optimal solution for efficient data migration.
-
-For information about the system requirements, download, and deployment steps for Addax, refer to Addax's [Getting Started Guide](https://github.com/wgzhao/Addax#getting-started). The guide provides detailed instructions and guidelines for setting up and using Addax.
-
-See also: [DataX](datax.md)
-
-## DatabendReader & DatabendWriter
-
-DatabendReader and DatabendWriter are integrated plugins of Addax, allowing seamless integration with Databend.
-
-The DatabendReader plugin enables reading data from Databend. Databend provides compatibility with the MySQL client protocol, so you can also use the [MySQLReader](https://wgzhao.github.io/Addax/develop/reader/mysqlreader/) plugin to retrieve data from Databend. For more information about DatabendReader, see https://wgzhao.github.io/Addax/develop/reader/databendreader/
-
-## Tutorial: Data Loading from MySQL
-
-In this tutorial, you will load data from MySQL to Databend with Addax. Before you start, make sure you have successfully set up Databend, MySQL, and Addax in your environment.
-
-1. In MySQL, create a SQL user that you will use for data loading and then create a table and populate it with sample data.
-
-```sql title='In MySQL:'
-mysql> create user 'mysqlu1'@'%' identified by '123';
-mysql> grant all on *.* to 'mysqlu1'@'%';
-mysql> create database db;
-mysql> create table db.tb01(id int, col1 varchar(10));
-mysql> insert into db.tb01 values(1, 'test1'), (2, 'test2'), (3, 'test3');
-```
-
-2. In Databend, create a corresponding target table.
-
-```sql title='In Databend:'
-databend> create database migrated_db;
-databend> create table migrated_db.tb01(id int null, col1 String null);
-```
-
-3. Copy and paste the following code to a file, and name the file as *mysql_demo.json*:
-
-:::note
-For the available parameters and their descriptions, refer to the documentation provided at the following link: https://wgzhao.github.io/Addax/develop/writer/databendwriter/#_2
-:::
-
-```json title='mysql_demo.json'
-{
- "job": {
- "setting": {
- "speed": {
- "channel": 4
- }
- },
- "content": {
- "writer": {
- "name": "databendwriter",
- "parameter": {
- "preSql": [
- "truncate table @table"
- ],
- "postSql": [
- ],
- "username": "u1",
- "password": "123",
- "database": "migrate_db",
- "table": "tb01",
- "jdbcUrl": "jdbc:mysql://127.0.0.1:3307/migrated_db",
- "loadUrl": ["127.0.0.1:8000","127.0.0.1:8000"],
- "fieldDelimiter": "\\x01",
- "lineDelimiter": "\\x02",
- "column": ["*"],
- "format": "csv"
- }
- },
- "reader": {
- "name": "mysqlreader",
- "parameter": {
- "username": "mysqlu1",
- "password": "123",
- "column": [
- "*"
- ],
- "connection": [
- {
- "jdbcUrl": [
- "jdbc:mysql://127.0.0.1:3306/db"
- ],
- "driver": "com.mysql.jdbc.Driver",
- "table": [
- "tb01"
- ]
- }
- ]
- }
- }
- }
- }
-}
-```
-
-4. Run Addax:
-
-```shell
-cd {YOUR_ADDAX_DIR_BIN}
-./addax.sh -L debug ./mysql_demo.json
-```
-
-You're all set! To verify the data loading, execute the query in Databend:
-
-```sql
-databend> select * from migrated_db.tb01;
-+------+-------+
-| id | col1 |
-+------+-------+
-| 1 | test1 |
-| 2 | test2 |
-| 3 | test3 |
-+------+-------+
-```
\ No newline at end of file
+See [Addax](/guides/migrate/mysql#addax).
\ No newline at end of file
diff --git a/docs/en/guides/40-load-data/02-load-db/datax.md b/docs/en/guides/40-load-data/02-load-db/datax.md
index 7fb582ec13..0480a5ef75 100644
--- a/docs/en/guides/40-load-data/02-load-db/datax.md
+++ b/docs/en/guides/40-load-data/02-load-db/datax.md
@@ -2,144 +2,4 @@
title: DataX
---
-import FunctionDescription from '@site/src/components/FunctionDescription';
-
-
-
-[DataX](https://github.com/alibaba/DataX) is an open-source data integration tool developed by Alibaba. It is designed to efficiently and reliably transfer data between various data storage systems and platforms, such as relational databases, big data platforms, and cloud storage services. DataX supports a wide range of data sources and data sinks, including but not limited to MySQL, Oracle, SQL Server, PostgreSQL, HDFS, Hive, HBase, MongoDB, and more.
-
-:::tip
-[Apache DolphinScheduler](https://dolphinscheduler.apache.org/) now has added support for Databend as a data source. This enhancement enables you to leverage DolphinScheduler for managing DataX tasks and effortlessly load data from MySQL to Databend.
-:::
-
-For information about the system requirements, download, and deployment steps for DataX, refer to DataX's [Quick Start Guide](https://github.com/alibaba/DataX/blob/master/userGuid.md). The guide provides detailed instructions and guidelines for setting up and using DataX.
-
-See also: [Addax](addax.md)
-
-## DatabendWriter
-
-DatabendWriter is an integrated plugin of DataX, which means it comes pre-installed and does not require any manual installation. It acts as a seamless connector that enables the effortless transfer of data from other databases to Databend. With DatabendWriter, you can leverage the capabilities of DataX to efficiently load data from various databases into Databend.
-
-DatabendWriter supports two operational modes: INSERT (default) and REPLACE. In INSERT Mode, new data is added while conflicts with existing records are prevented to maintain data integrity. On the other hand, the REPLACE Mode prioritizes data consistency by replacing existing records with newer data in case of conflicts.
-
-If you need more information about DatabendWriter and its functionalities, you can refer to the documentation available at https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md
-
-## Tutorial: Data Loading from MySQL
-
-In this tutorial, you will load data from MySQL to Databend with DataX. Before you start, make sure you have successfully set up Databend, MySQL, and DataX in your environment.
-
-1. In MySQL, create a SQL user that you will use for data loading and then create a table and populate it with sample data.
-
-```sql title='In MySQL:'
-mysql> create user 'mysqlu1'@'%' identified by 'databend';
-mysql> grant all on *.* to 'mysqlu1'@'%';
-mysql> create database db;
-mysql> create table db.tb01(id int, d double, t TIMESTAMP, col1 varchar(10));
-mysql> insert into db.tb01 values(1, 3.1,now(), 'test1'), (1, 4.1,now(), 'test2'), (1, 4.1,now(), 'test2');
-```
-
-2. In Databend, create a corresponding target table.
-
-:::note
-DataX data types can be converted to Databend's data types when loaded into Databend. For the specific correspondences between DataX data types and Databend's data types, refer to the documentation provided at the following link: https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#33-type-convert
-:::
-
-```sql title='In Databend:'
-databend> create database migrated_db;
-databend> create table migrated_db.tb01(id int null, d double null, t TIMESTAMP null, col1 varchar(10) null);
-```
-
-3. Copy and paste the following code to a file, and name the file as *mysql_demo.json*. For the available parameters and their descriptions, refer to the documentation provided at the following link: https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#32-configuration-description
-
-```json title='mysql_demo.json'
-{
- "job": {
- "content": [
- {
- "reader": {
- "name": "mysqlreader",
- "parameter": {
- "username": "mysqlu1",
- "password": "databend",
- "column": [
- "id", "d", "t", "col1"
- ],
- "connection": [
- {
- "jdbcUrl": [
- "jdbc:mysql://127.0.0.1:3307/db"
- ],
- "driver": "com.mysql.jdbc.Driver",
- "table": [
- "tb01"
- ]
- }
- ]
- }
- },
- "writer": {
- "name": "databendwriter",
- "parameter": {
- "username": "databend",
- "password": "databend",
- "column": [
- "id", "d", "t", "col1"
- ],
- "preSql": [
- ],
- "postSql": [
- ],
- "connection": [
- {
- "jdbcUrl": "jdbc:databend://localhost:8000/migrated_db",
- "table": [
- "tb01"
- ]
- }
- ]
- }
- }
- }
- ],
- "setting": {
- "speed": {
- "channel": 1
- }
- }
- }
-}
-```
-
-:::tip
-The provided code above configures DatabendWriter to operate in the INSERT mode. To switch to the REPLACE mode, you must include the writeMode and onConflictColumn parameters. For example:
-
-```json title='mysql_demo.json'
-...
-"writer": {
- "name": "databendwriter",
- "parameter": {
- "writeMode": "replace",
- "onConflictColumn":["id"],
- "username": ...
-```
-:::
-
-4. Run DataX:
-
-```shell
-cd {YOUR_DATAX_DIR_BIN}
-python datax.py ./mysql_demo.json
-```
-
-You're all set! To verify the data loading, execute the query in Databend:
-
-```sql
-databend> select * from migrated_db.tb01;
-+------+------+----------------------------+-------+
-| id | d | t | col1 |
-+------+------+----------------------------+-------+
-| 1 | 3.1 | 2023-02-01 07:11:08.500000 | test1 |
-| 1 | 4.1 | 2023-02-01 07:11:08.501000 | test2 |
-| 1 | 4.1 | 2023-02-01 07:11:08.501000 | test2 |
-+------+------+----------------------------+-------+
-```
\ No newline at end of file
+See [DataX](/guides/migrate/mysql#datax).
\ No newline at end of file
diff --git a/docs/en/guides/40-load-data/02-load-db/debezium.md b/docs/en/guides/40-load-data/02-load-db/debezium.md
index 56a230e5a1..f0bdf06beb 100644
--- a/docs/en/guides/40-load-data/02-load-db/debezium.md
+++ b/docs/en/guides/40-load-data/02-load-db/debezium.md
@@ -2,179 +2,4 @@
title: Debezium
---
-[Debezium](https://debezium.io/) is a set of distributed services to capture changes in your databases so that your applications can see those changes and respond to them. Debezium records all row-level changes within each database table in a change event stream, and applications simply read these streams to see the change events in the same order in which they occurred.
-
-[debezium-server-databend](https://github.com/databendcloud/debezium-server-databend) is a lightweight CDC tool developed by Databend, based on Debezium Engine. Its purpose is to capture real-time changes in relational databases and deliver them as event streams to ultimately write the data into the target database Databend. This tool provides a simple way to monitor and capture database changes, transforming them into consumable events without the need for large data infrastructures like Flink, Kafka, or Spark.
-
-## Installing debezium-server-databend
-
-debezium-server-databend can be installed independently without the need for installing Debezium beforehand. Once you have decided to install debezium-server-databend, you have two options available. The first one is to install it from source by downloading the source code and building it yourself. Alternatively, you can opt for a more straightforward installation process using Docker.
-
-### Installing from Source
-
-Before you start, make sure JDK 11 and Maven are installed on your system.
-
-1. Clone the project:
-
-```bash
-git clone https://github.com/databendcloud/debezium-server-databend.git
-```
-
-2. Change into the project's root directory:
-
-```bash
-cd debezium-server-databend
-```
-
-3. Build and package debezium server:
-
-```go
-mvn -Passembly -Dmaven.test.skip package
-```
-
-4. Once the build is completed, unzip the server distribution package:
-
-```bash
-unzip debezium-server-databend-dist/target/debezium-server-databend-dist*.zip -d databendDist
-```
-
-5. Enter the extracted folder:
-
-```bash
-cd databendDist
-```
-
-6. Create a file named _application.properties_ in the _conf_ folder with the content in the sample [here](https://github.com/databendcloud/debezium-server-databend/blob/main/debezium-server-databend-dist/src/main/resources/distro/conf/application.properties.example), and modify the configurations according to your specific requirements. For description of the available parameters, see this [page](https://github.com/databendcloud/debezium-server-databend/blob/main/docs/docs.md).
-
-```bash
-nano conf/application.properties
-```
-
-7. Use the provided script to start the tool:
-
-```bash
-bash run.sh
-```
-
-### Installing with Docker
-
-Before you start, make sure Docker and Docker Compose are installed on your system.
-
-1. Create a file named _application.properties_ in the _conf_ folder with the content in the sample [here](https://github.com/databendcloud/debezium-server-databend/blob/main/debezium-server-databend-dist/src/main/resources/distro/conf/application.properties.example), and modify the configurations according to your specific requirements. For description of the available Databend parameters, see this [page](https://github.com/databendcloud/debezium-server-databend/blob/main/docs/docs.md).
-
-```bash
-nano conf/application.properties
-```
-
-2. Create a file named _docker-compose.yml_ with the following content:
-
-```dockerfile
-version: '2.1'
-services:
- debezium:
- image: ghcr.io/databendcloud/debezium-server-databend:pr-2
- ports:
- - "8080:8080"
- - "8083:8083"
- volumes:
- - $PWD/conf:/app/conf
- - $PWD/data:/app/data
-```
-
-3. Open a terminal or command-line interface and navigate to the directory containing the _docker-compose.yml_ file.
-
-4. Use the following command to start the tool:
-
-```bash
-docker-compose up -d
-```
-
-## Usage Example
-
-This section demonstrates the general steps to load data from MySQL into Databend and assumes that you already have a local MySQL instance running.
-
-### Step 1. Prepare Data in MySQL
-
-Create a database and a table in MySQL, and insert sample data into the table.
-
-```sql
-CREATE DATABASE mydb;
-USE mydb;
-
-CREATE TABLE products (id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,name VARCHAR(255) NOT NULL,description VARCHAR(512));
-ALTER TABLE products AUTO_INCREMENT = 10;
-
-INSERT INTO products VALUES (default,"scooter","Small 2-wheel scooter"),
-(default,"car battery","12V car battery"),
-(default,"12-pack drill bits","12-pack of drill bits with sizes ranging from #40 to #3"),
-(default,"hammer","12oz carpenter's hammer"),
-(default,"hammer","14oz carpenter's hammer"),
-(default,"hammer","16oz carpenter's hammer"),
-(default,"rocks","box of assorted rocks"),
-(default,"jacket","water-proof black wind breaker"),
-(default,"cloud","test for databend"),
-(default,"spare tire","24 inch spare tire");
-```
-
-### Step 2. Create database in Databend
-
-Create the corresponding database in Databend. Please note that you don't need to create a table that corresponds to the one in MySQL.
-
-```sql
-CREATE DATABASE debezium;
-```
-
-### Step 3. Create application.properties
-
-Create the file _application.properties_, then start debezium-server-databend. For how to install and start the tool, see [Installing debezium-server-databend](#installing-debezium-server-databend).
-
-When started for the first time, the tool performs a full synchronization of data from MySQL to Databend using the specified Batch Size. As a result, the data from MySQL is now visible in Databend after successful replication.
-
-```text title='application.properties'
-debezium.sink.type=databend
-debezium.sink.databend.upsert=true
-debezium.sink.databend.upsert-keep-deletes=false
-debezium.sink.databend.database.databaseName=debezium
-debezium.sink.databend.database.url=jdbc:databend://:
-debezium.sink.databend.database.username=
-debezium.sink.databend.database.password=
-debezium.sink.databend.database.primaryKey=id
-debezium.sink.databend.database.tableName=products
-debezium.sink.databend.database.param.ssl=true
-
-# enable event schemas
-debezium.format.value.schemas.enable=true
-debezium.format.key.schemas.enable=true
-debezium.format.value=json
-debezium.format.key=json
-
-# mysql source
-debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
-debezium.source.offset.storage.file.filename=data/offsets.dat
-debezium.source.offset.flush.interval.ms=60000
-
-debezium.source.database.hostname=127.0.0.1
-debezium.source.database.port=3306
-debezium.source.database.user=root
-debezium.source.database.password=123456
-debezium.source.database.dbname=mydb
-debezium.source.database.server.name=from_mysql
-debezium.source.include.schema.changes=false
-debezium.source.table.include.list=mydb.products
-# debezium.source.database.ssl.mode=required
-# Run without Kafka, use local file to store checkpoints
-debezium.source.database.history=io.debezium.relational.history.FileDatabaseHistory
-debezium.source.database.history.file.filename=data/status.dat
-# do event flattening. unwrap message!
-debezium.transforms=unwrap
-debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
-debezium.transforms.unwrap.delete.handling.mode=rewrite
-debezium.transforms.unwrap.drop.tombstones=true
-
-# ############ SET LOG LEVELS ############
-quarkus.log.level=INFO
-# Ignore messages below warning level from Jetty, because it's a bit verbose
-quarkus.log.category."org.eclipse.jetty".level=WARN
-```
-
-You're all set! If you query the products table in Databend, you will see that the data from MySQL has been successfully synchronized. Feel free to perform insertions, updates, or deletions in MySQL, and you will observe the corresponding changes reflected in Databend as well.
+See [Debezium](/guides/migrate/mysql#debezium).
\ No newline at end of file
diff --git a/docs/en/guides/40-load-data/02-load-db/flink-cdc.md b/docs/en/guides/40-load-data/02-load-db/flink-cdc.md
index 037be78241..6d70f2245f 100644
--- a/docs/en/guides/40-load-data/02-load-db/flink-cdc.md
+++ b/docs/en/guides/40-load-data/02-load-db/flink-cdc.md
@@ -2,165 +2,4 @@
title: Flink CDC
---
-import FunctionDescription from '@site/src/components/FunctionDescription';
-
-
-
-[Apache Flink](https://github.com/apache/flink) CDC (Change Data Capture) refers to the capability of Apache Flink to capture and process real-time data changes from various sources using SQL-based queries. CDC allows you to monitor and capture data modifications (inserts, updates, and deletes) happening in a database or streaming system and react to those changes in real time. You can utilize the [Flink SQL connector for Databend](https://github.com/databendcloud/flink-connector-databend) to load data from other databases in real-time into Databend. The Flink SQL connector for Databend offers a connector that integrates Flink's stream processing capabilities with Databend. By configuring this connector, you can capture data changes from various databases as streams and load them into Databend for processing and analysis in real-time.
-
-## Downloading & Installing Connector
-
-To download and install the Flink SQL connector for Databend, follow these steps:
-
-1. Download and set up Flink: Before installing the Flink SQL connector for Databend, ensure that you have downloaded and set up Flink on your system. You can download Flink from the official website: https://flink.apache.org/downloads/
-
-2. Download the connector: Visit the releases page of the Flink SQL connector for Databend on GitHub: [https://github.com/databendcloud/flink-connector-databend/releases](https://github.com/databendcloud/flink-connector-databend/releases). Download the latest version of the connector (e.g., flink-connector-databend-0.0.2.jar).
-
- Please note that you can also compile the Flink SQL connector for Databend from source:
-
- ```shell
- git clone https://github.com/databendcloud/flink-connector-databend
- cd flink-connector-databend
- mvn clean install -DskipTests
- ```
-
-3. Move the JAR file: Once you have downloaded the connector, move the JAR file to the lib folder in your Flink installation directory. For example, if you have Flink version 1.16.0 installed, move the JAR file to the flink-1.16.0/lib/ directory.
-
-## Tutorial: Real-time Data Loading from MySQL
-
-In this tutorial, you will set up a real-time data loading from MySQL to Databend with the Flink SQL connector for Databend. Before you start, make sure you have successfully set up Databend and MySQL in your environment.
-
-1. Create a table in MySQL and populate it with sample data. Then, create a corresponding target table in Databend.
-
-```sql title='In MySQL:'
-CREATE DATABASE mydb;
-USE mydb;
-
-CREATE TABLE products (id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,name VARCHAR(255) NOT NULL,description VARCHAR(512));
-ALTER TABLE products AUTO_INCREMENT = 10;
-
-INSERT INTO products VALUES (default,"scooter","Small 2-wheel scooter"),
-(default,"car battery","12V car battery"),
-(default,"12-pack drill bits","12-pack of drill bits with sizes ranging from #40 to #3"),
-(default,"hammer","12oz carpenter's hammer"),
-(default,"hammer","14oz carpenter's hammer"),
-(default,"hammer","16oz carpenter's hammer"),
-(default,"rocks","box of assorted rocks"),
-(default,"jacket","black wind breaker"),
-(default,"cloud","test for databend"),
-(default,"spare tire","24 inch spare tire");
-```
-
-```sql title='In Databend:'
-CREATE TABLE products (id INT NOT NULL, name VARCHAR(255) NOT NULL, description VARCHAR(512) );
-```
-
-2. Download [Flink](https://flink.apache.org/downloads/) and the following SQL connectors to your system:
- - Flink SQL connector for Databend: [https://github.com/databendcloud/flink-connector-databend/releases](https://github.com/databendcloud/flink-connector-databend/releases)
- - Flink SQL connector for MySQL: [https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3.0/flink-sql-connector-mysql-cdc-2.3.0.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3.0/flink-sql-connector-mysql-cdc-2.3.0.jar)
-3. Move the both connector JAR files to the _lib_ folder in your Flink installation directory.
-4. Start Flink:
-
-```shell
-cd flink-16.0
-./bin/start-cluster.sh
-```
-
-You can now open the Apache Flink Dashboard if you go to [http://localhost:8081](http://localhost:8081) in your browser:
-
-
-
-5. Start the Flink SQL Client:
-
-```shell
-./bin/sql-client.sh
-
- ▒▓██▓██▒
- ▓████▒▒█▓▒▓███▓▒
- ▓███▓░░ ▒▒▒▓██▒ ▒
- ░██▒ ▒▒▓▓█▓▓▒░ ▒████
- ██▒ ░▒▓███▒ ▒█▒█▒
- ░▓█ ███ ▓░▒██
- ▓█ ▒▒▒▒▒▓██▓░▒░▓▓█
- █░ █ ▒▒░ ███▓▓█ ▒█▒▒▒
- ████░ ▒▓█▓ ██▒▒▒ ▓███▒
- ░▒█▓▓██ ▓█▒ ▓█▒▓██▓ ░█░
- ▓░▒▓████▒ ██ ▒█ █▓░▒█▒░▒█▒
- ███▓░██▓ ▓█ █ █▓ ▒▓█▓▓█▒
- ░██▓ ░█░ █ █▒ ▒█████▓▒ ██▓░▒
- ███░ ░ █░ ▓ ░█ █████▒░░ ░█░▓ ▓░
- ██▓█ ▒▒▓▒ ▓███████▓░ ▒█▒ ▒▓ ▓██▓
- ▒██▓ ▓█ █▓█ ░▒█████▓▓▒░ ██▒▒ █ ▒ ▓█▒
- ▓█▓ ▓█ ██▓ ░▓▓▓▓▓▓▓▒ ▒██▓ ░█▒
- ▓█ █ ▓███▓▒░ ░▓▓▓███▓ ░▒░ ▓█
- ██▓ ██▒ ░▒▓▓███▓▓▓▓▓██████▓▒ ▓███ █
- ▓███▒ ███ ░▓▓▒░░ ░▓████▓░ ░▒▓▒ █▓
- █▓▒▒▓▓██ ░▒▒░░░▒▒▒▒▓██▓░ █▓
- ██ ▓░▒█ ▓▓▓▓▒░░ ▒█▓ ▒▓▓██▓ ▓▒ ▒▒▓
- ▓█▓ ▓▒█ █▓░ ░▒▓▓██▒ ░▓█▒ ▒▒▒░▒▒▓█████▒
- ██░ ▓█▒█▒ ▒▓▓▒ ▓█ █░ ░░░░ ░█▒
- ▓█ ▒█▓ ░ █░ ▒█ █▓
- █▓ ██ █░ ▓▓ ▒█▓▓▓▒█░
- █▓ ░▓██░ ▓▒ ▓█▓▒░░░▒▓█░ ▒█
- ██ ▓█▓░ ▒ ░▒█▒██▒ ▓▓
- ▓█▒ ▒█▓▒░ ▒▒ █▒█▓▒▒░░▒██
- ░██▒ ▒▓▓▒ ▓██▓▒█▒ ░▓▓▓▓▒█▓
- ░▓██▒ ▓░ ▒█▓█ ░░▒▒▒
- ▒▓▓▓▓▓▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░▓▓ ▓░▒█░
-
- ______ _ _ _ _____ ____ _ _____ _ _ _ BETA
- | ____| (_) | | / ____|/ __ \| | / ____| (_) | |
- | |__ | |_ _ __ | | __ | (___ | | | | | | | | |_ ___ _ __ | |_
- | __| | | | '_ \| |/ / \___ \| | | | | | | | | |/ _ \ '_ \| __|
- | | | | | | | | < ____) | |__| | |____ | |____| | | __/ | | | |_
- |_| |_|_|_| |_|_|\_\ |_____/ \___\_\______| \_____|_|_|\___|_| |_|\__|
-
- Welcome! Enter 'HELP;' to list all available commands. 'QUIT;' to exit.
-```
-
-6. Set the checkpointing interval to 3 seconds, and create corresponding tables with MySQL and Databend connectors in the Flink SQL Client. For the available connection parameters, see [https://github.com/databendcloud/flink-connector-databend#connector-options](https://github.com/databendcloud/flink-connector-databend#connector-options):
-
-```sql
-Flink SQL> SET execution.checkpointing.interval = 3s;
-[INFO] Session property has been set.
-
-Flink SQL> CREATE TABLE mysql_products (id INT,name STRING,description STRING,PRIMARY KEY (id) NOT ENFORCED)
-WITH ('connector' = 'mysql-cdc',
-'hostname' = 'localhost',
-'port' = '3306',
-'username' = 'root',
-'password' = '123456',
-'database-name' = 'mydb',
-'table-name' = 'products',
-'server-time-zone' = 'UTC'
-);
-[INFO] Execute statement succeed.
-
-Flink SQL> CREATE TABLE databend_products (id INT,name String,description String, PRIMARY KEY (`id`) NOT ENFORCED)
-WITH ('connector' = 'databend',
-'url'='databend://localhost:8000',
-'username'='databend',
-'password'='databend',
-'database-name'='default',
-'table-name'='products',
-'sink.batch-size' = '5',
-'sink.flush-interval' = '1000',
-'sink.ignore-delete' = 'false',
-'sink.max-retries' = '3');
-[INFO] Execute statement succeed.
-```
-
-7. In the Flink SQL Client, synchronize the data from the _mysql_products_ table to the _databend_products_ table:
-
-```sql
-Flink SQL> INSERT INTO databend_products SELECT * FROM mysql_products;
-[INFO] Submitting SQL update statement to the cluster...
-[INFO] SQL update statement has been successfully submitted to the cluster:
-Job ID: b14645f34937c7cf3672ffba35733734
-```
-
-You can now see a running job in the Apache Flink Dashboard:
-
-
-
-You're all set! If you query the _products_ table in Databend, you will see that the data from MySQL has been successfully synchronized. Feel free to perform insertions, updates, or deletions in MySQL, and you will observe the corresponding changes reflected in Databend as well.
+See [Flink CDC](/guides/migrate/mysql#flink-cdc).
\ No newline at end of file
diff --git a/docs/en/guides/40-load-data/02-load-db/index.md b/docs/en/guides/40-load-data/02-load-db/index.md
index 49e70e1fe1..0389638284 100644
--- a/docs/en/guides/40-load-data/02-load-db/index.md
+++ b/docs/en/guides/40-load-data/02-load-db/index.md
@@ -1,20 +1,9 @@
---
-title: Loading Data with Tools
+title: Loading with Platforms
---
-Databend offers connectors and plugins for integrating with major data import tools, ensuring efficient data synchronization. See the below table for supported tools and their Databend connectors.
+import IndexOverviewList from '@site/src/components/IndexOverviewList';
-:::info
-These connectors also support Databend Cloud. For setup instructions, visit: [Connecting to a Warehouse](/guides/cloud/using-databend-cloud/warehouses/#connecting)
-:::
+This guide introduces platforms for loading data into Databend, including:
-| Tool | Plugin / Connector |
-|----------- |-------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Addax | [DatabendReader](https://wgzhao.github.io/Addax/develop/reader/databendreader/) & [DatabendWriter](https://wgzhao.github.io/Addax/develop/writer/databendwriter/) |
-| Airbyte | [datafuselabs/destination-databend:alpha](https://hub.docker.com/r/airbyte/destination-databend) |
-| DataX | [DatabendWriter](https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md) |
-| dbt | [dbt-databend-cloud](https://github.com/databendcloud/dbt-databend) |
-| Debezium | [debezium-server-databend](https://github.com/databendcloud/debezium-server-databend) |
-| Flink CDC | [Flink SQL connector for Databend](https://github.com/databendcloud/flink-connector-databend) |
-| Kafka | [bend-ingest-kafka](https://github.com/databendcloud/bend-ingest-kafka) |
-| Vector | [Databend sink](https://vector.dev/docs/reference/configuration/sinks/databend/) |
+<IndexOverviewList />
diff --git a/docs/en/guides/40-load-data/03-load-semistructured/00-load-parquet.md b/docs/en/guides/40-load-data/03-load-semistructured/00-load-parquet.md
index 550bdefb9f..2699c1376d 100644
--- a/docs/en/guides/40-load-data/03-load-semistructured/00-load-parquet.md
+++ b/docs/en/guides/40-load-data/03-load-semistructured/00-load-parquet.md
@@ -1,6 +1,6 @@
---
title: Loading Parquet File into Databend
-sidebar_label: Loading Parquet File
+sidebar_label: Parquet
---
## What is Parquet?
diff --git a/docs/en/guides/40-load-data/03-load-semistructured/01-load-csv.md b/docs/en/guides/40-load-data/03-load-semistructured/01-load-csv.md
index e0558d39fd..03d84981ad 100644
--- a/docs/en/guides/40-load-data/03-load-semistructured/01-load-csv.md
+++ b/docs/en/guides/40-load-data/03-load-semistructured/01-load-csv.md
@@ -1,6 +1,6 @@
---
title: Loading CSV File into Databend
-sidebar_label: Loading CSV File
+sidebar_label: CSV
---
## What is CSV?
diff --git a/docs/en/guides/40-load-data/03-load-semistructured/02-load-tsv.md b/docs/en/guides/40-load-data/03-load-semistructured/02-load-tsv.md
index 321b6fb893..2e4631854c 100644
--- a/docs/en/guides/40-load-data/03-load-semistructured/02-load-tsv.md
+++ b/docs/en/guides/40-load-data/03-load-semistructured/02-load-tsv.md
@@ -1,6 +1,6 @@
---
title: Loading TSV File into Databend
-sidebar_label: Loading TSV File
+sidebar_label: TSV
---
## What is TSV?
diff --git a/docs/en/guides/40-load-data/03-load-semistructured/03-load-ndjson.md b/docs/en/guides/40-load-data/03-load-semistructured/03-load-ndjson.md
index a274e9dee5..d7361a9b5b 100644
--- a/docs/en/guides/40-load-data/03-load-semistructured/03-load-ndjson.md
+++ b/docs/en/guides/40-load-data/03-load-semistructured/03-load-ndjson.md
@@ -1,6 +1,6 @@
---
title: Loading NDJSON File into Databend
-sidebar_label: Loading NDJSON File
+sidebar_label: NDJSON
---
## What is NDJSON?
diff --git a/docs/en/guides/40-load-data/03-load-semistructured/04-load-orc.md b/docs/en/guides/40-load-data/03-load-semistructured/04-load-orc.md
index af39095319..7a196a0ff0 100644
--- a/docs/en/guides/40-load-data/03-load-semistructured/04-load-orc.md
+++ b/docs/en/guides/40-load-data/03-load-semistructured/04-load-orc.md
@@ -1,6 +1,6 @@
---
title: Loading ORC File into Databend
-sidebar_label: Loading ORC File
+sidebar_label: ORC
---
## What is ORC?
diff --git a/docs/en/guides/40-load-data/03-load-semistructured/_category_.json b/docs/en/guides/40-load-data/03-load-semistructured/_category_.json
index e277154890..de293952c4 100644
--- a/docs/en/guides/40-load-data/03-load-semistructured/_category_.json
+++ b/docs/en/guides/40-load-data/03-load-semistructured/_category_.json
@@ -1,3 +1,3 @@
{
- "label": "Loading Semi-structured Data"
+ "label": "Loading Semi-structured Formats"
}
\ No newline at end of file
diff --git a/docs/en/guides/40-load-data/03-load-semistructured/index.md b/docs/en/guides/40-load-data/03-load-semistructured/index.md
index 468b01c0fe..dfd533a159 100644
--- a/docs/en/guides/40-load-data/03-load-semistructured/index.md
+++ b/docs/en/guides/40-load-data/03-load-semistructured/index.md
@@ -1,5 +1,5 @@
---
-title: Loading Semi-structured Data
+title: Loading Semi-structured Formats
---
import IndexOverviewList from '@site/src/components/IndexOverviewList';
diff --git a/docs/en/guides/40-load-data/04-transform/00-querying-parquet.md b/docs/en/guides/40-load-data/04-transform/00-querying-parquet.md
index 56a355ac61..4182665693 100644
--- a/docs/en/guides/40-load-data/04-transform/00-querying-parquet.md
+++ b/docs/en/guides/40-load-data/04-transform/00-querying-parquet.md
@@ -1,6 +1,6 @@
---
title: Querying Parquet Files in Stage
-sidebar_label: Querying Parquet File
+sidebar_label: Parquet
---
## Query Parquet Files in Stage
diff --git a/docs/en/guides/40-load-data/04-transform/01-querying-csv.md b/docs/en/guides/40-load-data/04-transform/01-querying-csv.md
index 1141beb8a5..23a6bb0db5 100644
--- a/docs/en/guides/40-load-data/04-transform/01-querying-csv.md
+++ b/docs/en/guides/40-load-data/04-transform/01-querying-csv.md
@@ -1,6 +1,6 @@
---
title: Querying CSV Files in Stage
-sidebar_label: Querying CSV File
+sidebar_label: CSV
---
## Query CSV Files in Stage
diff --git a/docs/en/guides/40-load-data/04-transform/02-querying-tsv.md b/docs/en/guides/40-load-data/04-transform/02-querying-tsv.md
index 10df2fa6bc..d26afc1b23 100644
--- a/docs/en/guides/40-load-data/04-transform/02-querying-tsv.md
+++ b/docs/en/guides/40-load-data/04-transform/02-querying-tsv.md
@@ -1,6 +1,6 @@
---
title: Querying TSV Files in Stage
-sidebar_label: Querying TSV File
+sidebar_label: TSV
---
## Query TSV Files in Stage
diff --git a/docs/en/guides/40-load-data/04-transform/03-querying-ndjson.md b/docs/en/guides/40-load-data/04-transform/03-querying-ndjson.md
index 76f987a962..e134f9bc96 100644
--- a/docs/en/guides/40-load-data/04-transform/03-querying-ndjson.md
+++ b/docs/en/guides/40-load-data/04-transform/03-querying-ndjson.md
@@ -1,6 +1,6 @@
---
title: Querying NDJSON Files in Stage
-sidebar_label: Querying NDJSON File
+sidebar_label: NDJSON
---
## Query NDJSON Files in Stage
diff --git a/docs/en/guides/40-load-data/04-transform/03-querying-orc.md b/docs/en/guides/40-load-data/04-transform/03-querying-orc.md
index b4205cbf8e..b249738608 100644
--- a/docs/en/guides/40-load-data/04-transform/03-querying-orc.md
+++ b/docs/en/guides/40-load-data/04-transform/03-querying-orc.md
@@ -1,6 +1,6 @@
---
title: Querying Staged ORC Files in Stage
-sidebar_label: Querying ORC File
+sidebar_label: ORC
---
import StepsWrap from '@site/src/components/StepsWrap';
import StepContent from '@site/src/components/Steps/step-content';
diff --git a/docs/en/guides/40-load-data/04-transform/04-querying-metadata.md b/docs/en/guides/40-load-data/04-transform/04-querying-metadata.md
index ad1897fd6e..ee1c499bec 100644
--- a/docs/en/guides/40-load-data/04-transform/04-querying-metadata.md
+++ b/docs/en/guides/40-load-data/04-transform/04-querying-metadata.md
@@ -1,6 +1,6 @@
---
title: Query Metadata for Staged Files
-sidebar_label: Querying Metadata
+sidebar_label: Metadata
---
## Why and What is Metadata?
diff --git a/docs/en/guides/41-migrate/_category_.json b/docs/en/guides/41-migrate/_category_.json
new file mode 100644
index 0000000000..5224f7ca97
--- /dev/null
+++ b/docs/en/guides/41-migrate/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Migrating from Databases"
+}
\ No newline at end of file
diff --git a/docs/en/guides/41-migrate/index.md b/docs/en/guides/41-migrate/index.md
new file mode 100644
index 0000000000..417ef5d612
--- /dev/null
+++ b/docs/en/guides/41-migrate/index.md
@@ -0,0 +1,8 @@
+---
+title: Migrating from Databases
+---
+import IndexOverviewList from '@site/src/components/IndexOverviewList';
+
+This guide introduces how to migrate your data from different databases to Databend:
+
+<IndexOverviewList />
\ No newline at end of file
diff --git a/docs/en/guides/41-migrate/mysql.md b/docs/en/guides/41-migrate/mysql.md
new file mode 100644
index 0000000000..1d97150ee5
--- /dev/null
+++ b/docs/en/guides/41-migrate/mysql.md
@@ -0,0 +1,157 @@
+---
+title: MySQL
+---
+
+This guide introduces how to migrate data from MySQL to Databend. Databend supports two main migration approaches: batch loading and continuous data sync.
+
+## Batch Loading
+
+To migrate data from MySQL to Databend in batches, you can use tools such as Addax or DataX.
+
+### Addax
+
+[Addax](https://github.com/wgzhao/Addax), originally derived from Alibaba's [DataX](https://github.com/alibaba/DataX), is a versatile open-source ETL (Extract, Transform, Load) tool. It transfers data seamlessly between diverse relational (RDBMS) and NoSQL databases, making it well suited for efficient data migration.
+
+For information about the system requirements, download, and deployment steps for Addax, refer to Addax's [Getting Started Guide](https://github.com/wgzhao/Addax#getting-started). The guide provides detailed instructions and guidelines for setting up and using Addax.
+
+#### DatabendReader & DatabendWriter
+
+DatabendReader and DatabendWriter are Addax plugins that integrate seamlessly with Databend. The DatabendReader plugin reads data from Databend. Because Databend is compatible with the MySQL client protocol, you can also use the [MySQLReader](https://wgzhao.github.io/Addax/develop/reader/mysqlreader/) plugin to retrieve data from Databend. For more information, see the [DatabendReader documentation](https://wgzhao.github.io/Addax/develop/reader/databendreader/).
+
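+Because Databend speaks the MySQL client protocol, an Addax `mysqlreader` job can also point directly at Databend's MySQL-compatible port (3307 by default). The fragment below is illustrative only — host, port, credentials, and table names are placeholders you must replace:
+
+```json
+"reader": {
+  "name": "mysqlreader",
+  "parameter": {
+    "username": "root",
+    "password": "",
+    "column": ["*"],
+    "connection": [
+      {
+        "jdbcUrl": ["jdbc:mysql://127.0.0.1:3307/db"],
+        "table": ["tb01"]
+      }
+    ]
+  }
+}
+```
+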
+### DataX
+
+[DataX](https://github.com/alibaba/DataX) is an open-source data integration tool developed by Alibaba. It is designed to efficiently and reliably transfer data between various data storage systems and platforms, such as relational databases, big data platforms, and cloud storage services. DataX supports a wide range of data sources and data sinks, including but not limited to MySQL, Oracle, SQL Server, PostgreSQL, HDFS, Hive, HBase, MongoDB, and more.
+
+:::tip
+[Apache DolphinScheduler](https://dolphinscheduler.apache.org/) now supports Databend as a data source. This enables you to leverage DolphinScheduler for managing DataX tasks and effortlessly load data from MySQL to Databend.
+:::
+
+For information about the system requirements, download, and deployment steps for DataX, refer to DataX's [Quick Start Guide](https://github.com/alibaba/DataX/blob/master/userGuid.md). The guide provides detailed instructions and guidelines for setting up and using DataX.
+
+#### DatabendWriter
+
+DatabendWriter is an integrated DataX plugin, so it comes pre-installed and requires no manual installation. It acts as a seamless connector for transferring data from other databases into Databend, letting you leverage DataX's capabilities to load data from various sources efficiently.
+
+DatabendWriter supports two write modes: INSERT (default) and REPLACE. In INSERT mode, new data is added while conflicts with existing records are rejected, preserving data integrity. In REPLACE mode, existing records are replaced with newer data when conflicts occur, prioritizing data consistency.
+
+For more information about DatabendWriter and its functionalities, refer to the documentation at https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md
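+
+In Databend terms, the REPLACE mode behaves like a `REPLACE INTO ... ON (<conflict column>)` upsert. A minimal sketch of that semantics, assuming a hypothetical table `t` with conflict key `id`:
+
+```sql
+CREATE TABLE t(id INT, c VARCHAR);
+INSERT INTO t VALUES (1, 'old');
+
+-- Rows whose id already exists are replaced; new ids are inserted
+REPLACE INTO t ON (id) VALUES (1, 'new'), (2, 'fresh');
+```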
+
+## Continuous Sync with CDC
+
+To migrate data from MySQL to Databend in real-time, you can use Change Data Capture (CDC) tools such as Debezium or Flink CDC.
+
+### Debezium
+
+[Debezium](https://debezium.io/) is a set of distributed services to capture changes in your databases so that your applications can see those changes and respond to them. Debezium records all row-level changes within each database table in a change event stream, and applications simply read these streams to see the change events in the same order in which they occurred.
+
+[debezium-server-databend](https://github.com/databendcloud/debezium-server-databend) is a lightweight CDC tool developed by Databend, based on the Debezium Engine. It captures real-time changes in relational databases and delivers them as event streams, ultimately writing the data into the target database, Databend. The tool provides a simple way to monitor and capture database changes, transforming them into consumable events without requiring large data infrastructures like Flink, Kafka, or Spark.
+
+debezium-server-databend can be installed independently, without installing Debezium first. Once you decide to install it, you have two options: build it from source by downloading and compiling the code yourself, or use Docker for a more straightforward installation.
+
+#### Installing debezium-server-databend from Source
+
+Before you start, make sure JDK 11 and Maven are installed on your system.
+
+1. Clone the project:
+
+```bash
+git clone https://github.com/databendcloud/debezium-server-databend.git
+```
+
+2. Change into the project's root directory:
+
+```bash
+cd debezium-server-databend
+```
+
+3. Build and package the Debezium server:
+
+```bash
+mvn -Passembly -Dmaven.test.skip package
+```
+
+4. Once the build is completed, unzip the server distribution package:
+
+```bash
+unzip debezium-server-databend-dist/target/debezium-server-databend-dist*.zip -d databendDist
+```
+
+5. Enter the extracted folder:
+
+```bash
+cd databendDist
+```
+
+6. Create a file named _application.properties_ in the _conf_ folder with the content in the sample [here](https://github.com/databendcloud/debezium-server-databend/blob/main/debezium-server-databend-dist/src/main/resources/distro/conf/application.properties.example), and modify the configurations according to your specific requirements. For descriptions of the available parameters, see this [page](https://github.com/databendcloud/debezium-server-databend/blob/main/docs/docs.md).
+
+```bash
+nano conf/application.properties
+```
+
+7. Use the provided script to start the tool:
+
+```bash
+bash run.sh
+```
+
+#### Installing debezium-server-databend with Docker
+
+Before you start, make sure Docker and Docker Compose are installed on your system.
+
+1. Create a file named _application.properties_ in the _conf_ folder with the content in the sample [here](https://github.com/databendcloud/debezium-server-databend/blob/main/debezium-server-databend-dist/src/main/resources/distro/conf/application.properties.example), and modify the configurations according to your specific requirements. For descriptions of the available Databend parameters, see this [page](https://github.com/databendcloud/debezium-server-databend/blob/main/docs/docs.md).
+
+```bash
+nano conf/application.properties
+```
+
+2. Create a file named _docker-compose.yml_ with the following content:
+
+```yaml
+version: '2.1'
+services:
+ debezium:
+ image: ghcr.io/databendcloud/debezium-server-databend:pr-2
+ ports:
+ - "8080:8080"
+ - "8083:8083"
+ volumes:
+ - $PWD/conf:/app/conf
+ - $PWD/data:/app/data
+```
+
+3. Open a terminal or command-line interface and navigate to the directory containing the _docker-compose.yml_ file.
+
+4. Use the following command to start the tool:
+
+```bash
+docker-compose up -d
+```
+
+### Flink CDC
+
+[Apache Flink](https://github.com/apache/flink) CDC (Change Data Capture) refers to Apache Flink's capability to capture and process real-time data changes from various sources using SQL-based queries. CDC lets you monitor data modifications (inserts, updates, and deletes) happening in a database or streaming system and react to those changes in real time. With the [Flink SQL connector for Databend](https://github.com/databendcloud/flink-connector-databend), you can integrate Flink's stream processing capabilities with Databend: configure the connector to capture data changes from various databases as streams and load them into Databend for real-time processing and analysis.
+
+#### Downloading & Installing Connector
+
+To download and install the Flink SQL connector for Databend, follow these steps:
+
+1. Download and set up Flink: Before installing the Flink SQL connector for Databend, ensure that you have downloaded and set up Flink on your system. You can download Flink from the official website: https://flink.apache.org/downloads/
+
+2. Download the connector: Visit the releases page of the Flink SQL connector for Databend on GitHub: [https://github.com/databendcloud/flink-connector-databend/releases](https://github.com/databendcloud/flink-connector-databend/releases). Download the latest version of the connector (e.g., flink-connector-databend-0.0.2.jar).
+
+ Please note that you can also compile the Flink SQL connector for Databend from source:
+
+ ```shell
+ git clone https://github.com/databendcloud/flink-connector-databend
+ cd flink-connector-databend
+ mvn clean install -DskipTests
+ ```
+
+3. Move the JAR file: Once you have downloaded the connector, move the JAR file to the lib folder in your Flink installation directory. For example, if you have Flink version 1.16.0 installed, move the JAR file to the flink-1.16.0/lib/ directory.
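+
+Once the JAR is in place, a Databend sink is declared in Flink SQL with `CREATE TABLE ... WITH (...)`. The option names below are assumptions based on common Flink connector conventions, not a verified list — consult the connector's README for the authoritative set:
+
+```sql
+CREATE TABLE products_sink (
+  id INT,
+  name STRING,
+  description STRING,
+  PRIMARY KEY (id) NOT ENFORCED
+) WITH (
+  'connector' = 'databend',              -- assumed connector identifier
+  'url'       = 'databend://localhost:8000',
+  'username'  = 'databend',
+  'password'  = 'databend',
+  'database-name' = 'default',
+  'table-name'    = 'products'
+);
+```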
+
+## Tutorials
+
+- [Migrating from MySQL with Addax](/tutorials/migrate/migrating-from-mysql-with-addax)
+- [Migrating from MySQL with DataX](/tutorials/migrate/migrating-from-mysql-with-datax)
+- [Migrating from MySQL with Debezium](/tutorials/migrate/migrating-from-mysql-with-debezium)
+- [Migrating from MySQL with Flink CDC](/tutorials/migrate/migrating-from-mysql-with-flink-cdc)
\ No newline at end of file
diff --git a/docs/en/guides/41-migrate/snowflake.md b/docs/en/guides/41-migrate/snowflake.md
new file mode 100644
index 0000000000..4e88c97f97
--- /dev/null
+++ b/docs/en/guides/41-migrate/snowflake.md
@@ -0,0 +1,41 @@
+---
+title: Snowflake
+---
+
+This guide provides a high-level overview of the process to migrate your data from Snowflake to Databend. The migration involves exporting data from Snowflake to an Amazon S3 bucket and then loading it into Databend. The process is broken down into three main steps:
+
+
+
+## Step 1: Configuring Snowflake Storage Integration for Amazon S3
+
+Before exporting your data, you need to establish a connection between Snowflake and Amazon S3. This is achieved by configuring a storage integration that allows Snowflake to securely access and interact with the S3 bucket where your data will be staged.
+
+1. Create IAM Role & Policy: Start by creating an AWS IAM role with permissions to read from and write to your S3 bucket. This role ensures that Snowflake can interact with the S3 bucket securely.
+
+2. Snowflake Storage Integration: In Snowflake, you will configure a storage integration using the IAM role. This integration will allow Snowflake to securely access the designated S3 bucket and perform data export operations.
+
+3. Update Trust Relationships: After creating the storage integration, you will update the Trust Relationships of the IAM role in AWS to ensure that Snowflake can assume the IAM role and gain the necessary permissions for data access.
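+
+The storage integration in step 2 can be sketched in Snowflake SQL as follows (the integration name, role ARN, and bucket path are placeholders you must replace):
+
+```sql
+CREATE STORAGE INTEGRATION databend_migration
+  TYPE = EXTERNAL_STAGE
+  STORAGE_PROVIDER = 'S3'
+  ENABLED = TRUE
+  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::<account-id>:role/<role-name>'
+  STORAGE_ALLOWED_LOCATIONS = ('s3://<your-bucket>/migration/');
+
+-- Retrieve the IAM user and external ID to paste into the role's trust policy (step 3)
+DESC STORAGE INTEGRATION databend_migration;
+```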
+
+## Step 2: Preparing & Exporting Data to Amazon S3
+
+Once the integration is set up, the next step is to prepare the data within Snowflake and export it to the S3 bucket.
+
+1. Create Stage: You need to create an external stage in Snowflake that points to the S3 bucket. This stage will serve as a temporary location for the data before it's transferred to Databend.
+
+2. Prepare Data: Create the necessary tables and populate them with data in Snowflake. Once the data is ready, you can export it to the S3 bucket in a desired format, such as Parquet.
+
+3. Export Data: Use Snowflake’s COPY INTO command to export the data from Snowflake tables into the S3 bucket, specifying the file format and location. This process will save the data in the S3 bucket, making it ready for the next step.
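+
+The stage creation and export described above might look like this in Snowflake SQL (assuming the storage integration configured in Step 1 is named `databend_migration`; stage, table, and path names are illustrative):
+
+```sql
+CREATE STAGE migration_stage
+  STORAGE_INTEGRATION = databend_migration
+  URL = 's3://<your-bucket>/migration/'
+  FILE_FORMAT = (TYPE = PARQUET);
+
+-- Unload a table to the S3 bucket as Parquet files
+COPY INTO @migration_stage/orders/
+FROM orders
+FILE_FORMAT = (TYPE = PARQUET)
+HEADER = TRUE;
+```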
+
+## Step 3: Loading Data into Databend
+
+Now that your data is exported to an S3 bucket, the final step is to load it into Databend.
+
+1. Create Target Table: In Databend, create a target table that matches the structure of the data you exported from Snowflake.
+
+2. Load Data: Use the COPY INTO command in Databend to load the data from the S3 bucket into the target table. Provide your AWS credentials to ensure Databend can access the S3 bucket. You can also define the file format (such as Parquet) to match the format of the exported data.
+
+3. Verify Data: After loading the data, perform a simple query in Databend to verify that the data has been successfully imported and is available for further processing.
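+
+A Databend-side sketch of the load (bucket path, credentials, and table schema are placeholders to adapt to your data):
+
+```sql
+CREATE TABLE orders (
+  id INT,
+  amount DOUBLE,
+  created_at TIMESTAMP
+);
+
+COPY INTO orders
+FROM 's3://<your-bucket>/migration/orders/'
+CONNECTION = (
+  ACCESS_KEY_ID = '<your-access-key>',
+  SECRET_ACCESS_KEY = '<your-secret-key>'
+)
+PATTERN = '.*[.]parquet'
+FILE_FORMAT = (TYPE = PARQUET);
+
+-- Verify the import
+SELECT COUNT(*) FROM orders;
+```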
+
+## Tutorials
+
+- [Migrating from Snowflake](/tutorials/migrate/migrating-from-snowflake)
\ No newline at end of file
diff --git a/docs/en/tutorials/migrate/_category_.json b/docs/en/tutorials/migrate/_category_.json
new file mode 100644
index 0000000000..5224f7ca97
--- /dev/null
+++ b/docs/en/tutorials/migrate/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Migrating from Databases"
+}
\ No newline at end of file
diff --git a/docs/en/tutorials/migrate/migrating-from-mysql-with-addax.md b/docs/en/tutorials/migrate/migrating-from-mysql-with-addax.md
new file mode 100644
index 0000000000..910e2efcd1
--- /dev/null
+++ b/docs/en/tutorials/migrate/migrating-from-mysql-with-addax.md
@@ -0,0 +1,103 @@
+---
+title: Migrating from MySQL with Addax
+---
+
+In this tutorial, you will load data from MySQL to Databend with Addax. Before you start, make sure you have successfully set up Databend, MySQL, and Addax in your environment.
+
+1. In MySQL, create a SQL user that you will use for data loading and then create a table and populate it with sample data.
+
+```sql title='In MySQL:'
+mysql> create user 'mysqlu1'@'%' identified by '123';
+mysql> grant all on *.* to 'mysqlu1'@'%';
+mysql> create database db;
+mysql> create table db.tb01(id int, col1 varchar(10));
+mysql> insert into db.tb01 values(1, 'test1'), (2, 'test2'), (3, 'test3');
+```
+
+2. In Databend, create a corresponding target table.
+
+```sql title='In Databend:'
+databend> create database migrated_db;
+databend> create table migrated_db.tb01(id int null, col1 String null);
+```
+
+3. Copy and paste the following code into a file named *mysql_demo.json*:
+
+:::note
+For the available parameters and their descriptions, refer to the documentation provided at the following link: https://wgzhao.github.io/Addax/develop/writer/databendwriter/#_2
+:::
+
+```json title='mysql_demo.json'
+{
+ "job": {
+ "setting": {
+ "speed": {
+ "channel": 4
+ }
+ },
+ "content": {
+ "writer": {
+ "name": "databendwriter",
+ "parameter": {
+ "preSql": [
+ "truncate table @table"
+ ],
+ "postSql": [
+ ],
+ "username": "u1",
+ "password": "123",
+        "database": "migrated_db",
+ "table": "tb01",
+ "jdbcUrl": "jdbc:mysql://127.0.0.1:3307/migrated_db",
+ "loadUrl": ["127.0.0.1:8000","127.0.0.1:8000"],
+ "fieldDelimiter": "\\x01",
+ "lineDelimiter": "\\x02",
+ "column": ["*"],
+ "format": "csv"
+ }
+ },
+ "reader": {
+ "name": "mysqlreader",
+ "parameter": {
+ "username": "mysqlu1",
+ "password": "123",
+ "column": [
+ "*"
+ ],
+ "connection": [
+ {
+ "jdbcUrl": [
+ "jdbc:mysql://127.0.0.1:3306/db"
+ ],
+ "driver": "com.mysql.jdbc.Driver",
+ "table": [
+ "tb01"
+ ]
+ }
+ ]
+ }
+ }
+ }
+ }
+}
+```
+
+4. Run Addax:
+
+```shell
+cd {YOUR_ADDAX_DIR_BIN}
+./addax.sh -L debug ./mysql_demo.json
+```
+
+You're all set! To verify the data loading, execute the following query in Databend:
+
+```sql
+databend> select * from migrated_db.tb01;
++------+-------+
+| id | col1 |
++------+-------+
+| 1 | test1 |
+| 2 | test2 |
+| 3 | test3 |
++------+-------+
+```
\ No newline at end of file
diff --git a/docs/en/tutorials/migrate/migrating-from-mysql-with-datax.md b/docs/en/tutorials/migrate/migrating-from-mysql-with-datax.md
new file mode 100644
index 0000000000..d3f14f8223
--- /dev/null
+++ b/docs/en/tutorials/migrate/migrating-from-mysql-with-datax.md
@@ -0,0 +1,121 @@
+---
+title: Migrating from MySQL with DataX
+---
+
+In this tutorial, you will load data from MySQL to Databend with DataX. Before you start, make sure you have successfully set up Databend, MySQL, and DataX in your environment.
+
+1. In MySQL, create a SQL user that you will use for data loading and then create a table and populate it with sample data.
+
+```sql title='In MySQL:'
+mysql> create user 'mysqlu1'@'%' identified by 'databend';
+mysql> grant all on *.* to 'mysqlu1'@'%';
+mysql> create database db;
+mysql> create table db.tb01(id int, d double, t TIMESTAMP, col1 varchar(10));
+mysql> insert into db.tb01 values(1, 3.1,now(), 'test1'), (1, 4.1,now(), 'test2'), (1, 4.1,now(), 'test2');
+```
+
+2. In Databend, create a corresponding target table.
+
+:::note
+DataX data types can be converted to Databend's data types when loaded into Databend. For the specific correspondences between DataX data types and Databend's data types, refer to the documentation provided at the following link: https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#33-type-convert
+:::
+
+```sql title='In Databend:'
+databend> create database migrated_db;
+databend> create table migrated_db.tb01(id int null, d double null, t TIMESTAMP null, col1 varchar(10) null);
+```
+
+3. Copy and paste the following code into a file named *mysql_demo.json*. For the available parameters and their descriptions, refer to the documentation at https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md#32-configuration-description
+
+```json title='mysql_demo.json'
+{
+ "job": {
+ "content": [
+ {
+ "reader": {
+ "name": "mysqlreader",
+ "parameter": {
+ "username": "mysqlu1",
+ "password": "databend",
+ "column": [
+ "id", "d", "t", "col1"
+ ],
+ "connection": [
+ {
+ "jdbcUrl": [
+ "jdbc:mysql://127.0.0.1:3307/db"
+ ],
+ "driver": "com.mysql.jdbc.Driver",
+ "table": [
+ "tb01"
+ ]
+ }
+ ]
+ }
+ },
+ "writer": {
+ "name": "databendwriter",
+ "parameter": {
+ "username": "databend",
+ "password": "databend",
+ "column": [
+ "id", "d", "t", "col1"
+ ],
+ "preSql": [
+ ],
+ "postSql": [
+ ],
+ "connection": [
+ {
+ "jdbcUrl": "jdbc:databend://localhost:8000/migrated_db",
+ "table": [
+ "tb01"
+ ]
+ }
+ ]
+ }
+ }
+ }
+ ],
+ "setting": {
+ "speed": {
+ "channel": 1
+ }
+ }
+ }
+}
+```
+
+:::tip
+The provided code above configures DatabendWriter to operate in the INSERT mode. To switch to the REPLACE mode, you must include the writeMode and onConflictColumn parameters. For example:
+
+```json title='mysql_demo.json'
+...
+"writer": {
+ "name": "databendwriter",
+ "parameter": {
+ "writeMode": "replace",
+ "onConflictColumn":["id"],
+ "username": ...
+```
+:::
+
+4. Run DataX:
+
+```shell
+cd {YOUR_DATAX_DIR_BIN}
+python datax.py ./mysql_demo.json
+```
+
+You're all set! To verify the data loading, execute the following query in Databend:
+
+```sql
+databend> select * from migrated_db.tb01;
++------+------+----------------------------+-------+
+| id | d | t | col1 |
++------+------+----------------------------+-------+
+| 1 | 3.1 | 2023-02-01 07:11:08.500000 | test1 |
+| 1 | 4.1 | 2023-02-01 07:11:08.501000 | test2 |
+| 1 | 4.1 | 2023-02-01 07:11:08.501000 | test2 |
++------+------+----------------------------+-------+
+```
\ No newline at end of file
diff --git a/docs/en/tutorials/migrate/migrating-from-mysql-with-debezium.md b/docs/en/tutorials/migrate/migrating-from-mysql-with-debezium.md
new file mode 100644
index 0000000000..3cb2caa569
--- /dev/null
+++ b/docs/en/tutorials/migrate/migrating-from-mysql-with-debezium.md
@@ -0,0 +1,91 @@
+---
+title: Migrating from MySQL with Debezium
+---
+
+In this tutorial, you will load data from MySQL to Databend with Debezium. Before you start, make sure you have successfully set up Databend, MySQL, and Debezium in your environment.
+
+## Step 1. Prepare Data in MySQL
+
+Create a database and a table in MySQL, and insert sample data into the table.
+
+```sql
+CREATE DATABASE mydb;
+USE mydb;
+
+CREATE TABLE products (id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,name VARCHAR(255) NOT NULL,description VARCHAR(512));
+ALTER TABLE products AUTO_INCREMENT = 10;
+
+INSERT INTO products VALUES (default,"scooter","Small 2-wheel scooter"),
+(default,"car battery","12V car battery"),
+(default,"12-pack drill bits","12-pack of drill bits with sizes ranging from #40 to #3"),
+(default,"hammer","12oz carpenter's hammer"),
+(default,"hammer","14oz carpenter's hammer"),
+(default,"hammer","16oz carpenter's hammer"),
+(default,"rocks","box of assorted rocks"),
+(default,"jacket","water-proof black wind breaker"),
+(default,"cloud","test for databend"),
+(default,"spare tire","24 inch spare tire");
+```
+
+## Step 2. Create database in Databend
+
+Create the corresponding database in Databend. Please note that you don't need to create a table that corresponds to the one in MySQL.
+
+```sql
+CREATE DATABASE debezium;
+```
+
+## Step 3. Create application.properties
+
+Create the file _application.properties_, then start debezium-server-databend. For how to install and start the tool, see [Debezium](/guides/migrate/mysql#debezium).
+
+When started for the first time, the tool performs a full synchronization from MySQL to Databend using the specified batch size. Once replication succeeds, the MySQL data becomes visible in Databend.
+
+```text title='application.properties'
+debezium.sink.type=databend
+debezium.sink.databend.upsert=true
+debezium.sink.databend.upsert-keep-deletes=false
+debezium.sink.databend.database.databaseName=debezium
+debezium.sink.databend.database.url=jdbc:databend://:
+debezium.sink.databend.database.username=
+debezium.sink.databend.database.password=
+debezium.sink.databend.database.primaryKey=id
+debezium.sink.databend.database.tableName=products
+debezium.sink.databend.database.param.ssl=true
+
+# enable event schemas
+debezium.format.value.schemas.enable=true
+debezium.format.key.schemas.enable=true
+debezium.format.value=json
+debezium.format.key=json
+
+# mysql source
+debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
+debezium.source.offset.storage.file.filename=data/offsets.dat
+debezium.source.offset.flush.interval.ms=60000
+
+debezium.source.database.hostname=127.0.0.1
+debezium.source.database.port=3306
+debezium.source.database.user=root
+debezium.source.database.password=123456
+debezium.source.database.dbname=mydb
+debezium.source.database.server.name=from_mysql
+debezium.source.include.schema.changes=false
+debezium.source.table.include.list=mydb.products
+# debezium.source.database.ssl.mode=required
+# Run without Kafka, use local file to store checkpoints
+debezium.source.database.history=io.debezium.relational.history.FileDatabaseHistory
+debezium.source.database.history.file.filename=data/status.dat
+# do event flattening. unwrap message!
+debezium.transforms=unwrap
+debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
+debezium.transforms.unwrap.delete.handling.mode=rewrite
+debezium.transforms.unwrap.drop.tombstones=true
+
+# ############ SET LOG LEVELS ############
+quarkus.log.level=INFO
+# Ignore messages below warning level from Jetty, because it's a bit verbose
+quarkus.log.category."org.eclipse.jetty".level=WARN
+```
+
+You're all set! If you query the products table in Databend, you will see that the data from MySQL has been successfully synchronized. Feel free to perform insertions, updates, or deletions in MySQL, and you will observe the corresponding changes reflected in Databend as well.
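+
+To check that changes propagate, you might run something like the following (the `id` value is just an example row from the sample data inserted earlier):
+
+```sql
+-- In MySQL: modify a row
+UPDATE mydb.products SET description = '18oz carpenter hammer' WHERE id = 15;
+
+-- In Databend: after the sync interval, the change should be visible
+SELECT * FROM debezium.products WHERE id = 15;
+```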
\ No newline at end of file
diff --git a/docs/en/tutorials/migrate/migrating-from-mysql-with-flink-cdc.md b/docs/en/tutorials/migrate/migrating-from-mysql-with-flink-cdc.md
new file mode 100644
index 0000000000..59a673a387
--- /dev/null
+++ b/docs/en/tutorials/migrate/migrating-from-mysql-with-flink-cdc.md
@@ -0,0 +1,140 @@
+---
+title: Migrating from MySQL with Flink CDC
+---
+
+In this tutorial, you will set up real-time data loading from MySQL to Databend with the Flink SQL connector for Databend. Before you start, make sure you have successfully set up Databend and MySQL in your environment.
+
+1. Create a table in MySQL and populate it with sample data. Then, create a corresponding target table in Databend.
+
+```sql title='In MySQL:'
+CREATE DATABASE mydb;
+USE mydb;
+
+CREATE TABLE products (id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) NOT NULL, description VARCHAR(512));
+ALTER TABLE products AUTO_INCREMENT = 10;
+
+INSERT INTO products VALUES (default,"scooter","Small 2-wheel scooter"),
+(default,"car battery","12V car battery"),
+(default,"12-pack drill bits","12-pack of drill bits with sizes ranging from #40 to #3"),
+(default,"hammer","12oz carpenter's hammer"),
+(default,"hammer","14oz carpenter's hammer"),
+(default,"hammer","16oz carpenter's hammer"),
+(default,"rocks","box of assorted rocks"),
+(default,"jacket","black wind breaker"),
+(default,"cloud","test for databend"),
+(default,"spare tire","24 inch spare tire");
+```
+
+```sql title='In Databend:'
+CREATE TABLE products (id INT NOT NULL, name VARCHAR(255) NOT NULL, description VARCHAR(512));
+```
+
+2. Download [Flink](https://flink.apache.org/downloads/) and the following SQL connectors to your system:
+ - Flink SQL connector for Databend: [https://github.com/databendcloud/flink-connector-databend/releases](https://github.com/databendcloud/flink-connector-databend/releases)
+ - Flink SQL connector for MySQL: [https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3.0/flink-sql-connector-mysql-cdc-2.3.0.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3.0/flink-sql-connector-mysql-cdc-2.3.0.jar)
+3. Move both connector JAR files to the _lib_ folder in your Flink installation directory.
+4. Start Flink:
+
+```shell
+cd flink-1.16.0
+./bin/start-cluster.sh
+```
+
+You can now open the Apache Flink Dashboard by going to [http://localhost:8081](http://localhost:8081) in your browser.
+
+5. Start the Flink SQL Client:
+
+```shell
+./bin/sql-client.sh
+
+ ▒▓██▓██▒
+ ▓████▒▒█▓▒▓███▓▒
+ ▓███▓░░ ▒▒▒▓██▒ ▒
+ ░██▒ ▒▒▓▓█▓▓▒░ ▒████
+ ██▒ ░▒▓███▒ ▒█▒█▒
+ ░▓█ ███ ▓░▒██
+ ▓█ ▒▒▒▒▒▓██▓░▒░▓▓█
+ █░ █ ▒▒░ ███▓▓█ ▒█▒▒▒
+ ████░ ▒▓█▓ ██▒▒▒ ▓███▒
+ ░▒█▓▓██ ▓█▒ ▓█▒▓██▓ ░█░
+ ▓░▒▓████▒ ██ ▒█ █▓░▒█▒░▒█▒
+ ███▓░██▓ ▓█ █ █▓ ▒▓█▓▓█▒
+ ░██▓ ░█░ █ █▒ ▒█████▓▒ ██▓░▒
+ ███░ ░ █░ ▓ ░█ █████▒░░ ░█░▓ ▓░
+ ██▓█ ▒▒▓▒ ▓███████▓░ ▒█▒ ▒▓ ▓██▓
+ ▒██▓ ▓█ █▓█ ░▒█████▓▓▒░ ██▒▒ █ ▒ ▓█▒
+ ▓█▓ ▓█ ██▓ ░▓▓▓▓▓▓▓▒ ▒██▓ ░█▒
+ ▓█ █ ▓███▓▒░ ░▓▓▓███▓ ░▒░ ▓█
+ ██▓ ██▒ ░▒▓▓███▓▓▓▓▓██████▓▒ ▓███ █
+ ▓███▒ ███ ░▓▓▒░░ ░▓████▓░ ░▒▓▒ █▓
+ █▓▒▒▓▓██ ░▒▒░░░▒▒▒▒▓██▓░ █▓
+ ██ ▓░▒█ ▓▓▓▓▒░░ ▒█▓ ▒▓▓██▓ ▓▒ ▒▒▓
+ ▓█▓ ▓▒█ █▓░ ░▒▓▓██▒ ░▓█▒ ▒▒▒░▒▒▓█████▒
+ ██░ ▓█▒█▒ ▒▓▓▒ ▓█ █░ ░░░░ ░█▒
+ ▓█ ▒█▓ ░ █░ ▒█ █▓
+ █▓ ██ █░ ▓▓ ▒█▓▓▓▒█░
+ █▓ ░▓██░ ▓▒ ▓█▓▒░░░▒▓█░ ▒█
+ ██ ▓█▓░ ▒ ░▒█▒██▒ ▓▓
+ ▓█▒ ▒█▓▒░ ▒▒ █▒█▓▒▒░░▒██
+ ░██▒ ▒▓▓▒ ▓██▓▒█▒ ░▓▓▓▓▒█▓
+ ░▓██▒ ▓░ ▒█▓█ ░░▒▒▒
+ ▒▓▓▓▓▓▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░▓▓ ▓░▒█░
+
+ ______ _ _ _ _____ ____ _ _____ _ _ _ BETA
+ | ____| (_) | | / ____|/ __ \| | / ____| (_) | |
+ | |__ | |_ _ __ | | __ | (___ | | | | | | | | |_ ___ _ __ | |_
+ | __| | | | '_ \| |/ / \___ \| | | | | | | | | |/ _ \ '_ \| __|
+ | | | | | | | | < ____) | |__| | |____ | |____| | | __/ | | | |_
+ |_| |_|_|_| |_|_|\_\ |_____/ \___\_\______| \_____|_|_|\___|_| |_|\__|
+
+ Welcome! Enter 'HELP;' to list all available commands. 'QUIT;' to exit.
+```
+
+6. In the Flink SQL Client, set the checkpointing interval to 3 seconds, then create tables with the MySQL CDC and Databend connectors. For the available connection parameters, see [https://github.com/databendcloud/flink-connector-databend#connector-options](https://github.com/databendcloud/flink-connector-databend#connector-options):
+
+```sql
+Flink SQL> SET execution.checkpointing.interval = 3s;
+[INFO] Session property has been set.
+
+Flink SQL> CREATE TABLE mysql_products (id INT, name STRING, description STRING, PRIMARY KEY (id) NOT ENFORCED)
+WITH ('connector' = 'mysql-cdc',
+'hostname' = 'localhost',
+'port' = '3306',
+'username' = 'root',
+'password' = '123456',
+'database-name' = 'mydb',
+'table-name' = 'products',
+'server-time-zone' = 'UTC'
+);
+[INFO] Execute statement succeed.
+
+Flink SQL> CREATE TABLE databend_products (id INT, name STRING, description STRING, PRIMARY KEY (`id`) NOT ENFORCED)
+WITH ('connector' = 'databend',
+'url'='databend://localhost:8000',
+'username'='databend',
+'password'='databend',
+'database-name'='default',
+'table-name'='products',
+'sink.batch-size' = '5',
+'sink.flush-interval' = '1000',
+'sink.ignore-delete' = 'false',
+'sink.max-retries' = '3');
+[INFO] Execute statement succeed.
+```
+
+7. In the Flink SQL Client, synchronize the data from the _mysql_products_ table to the _databend_products_ table:
+
+```sql
+Flink SQL> INSERT INTO databend_products SELECT * FROM mysql_products;
+[INFO] Submitting SQL update statement to the cluster...
+[INFO] SQL update statement has been successfully submitted to the cluster:
+Job ID: b14645f34937c7cf3672ffba35733734
+```
+
+You can now see a running job in the Apache Flink Dashboard.
+
+You're all set! If you query the _products_ table in Databend, you will see that the data from MySQL has been successfully synchronized. Feel free to perform insertions, updates, or deletions in MySQL, and you will observe the corresponding changes reflected in Databend as well.
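+
+For example, here is a quick check (the `id` value assumes the sample data above, where `AUTO_INCREMENT` starts at 10, so the 16oz hammer gets id 15):
+
+```sql
+-- In MySQL: modify a row
+UPDATE products SET description = '18oz carpenter hammer' WHERE id = 15;
+
+-- In Databend: the change appears after the next sink flush
+-- (sink.flush-interval = 1000 ms in the connector options above)
+SELECT * FROM products WHERE id = 15;
+```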
\ No newline at end of file
diff --git a/docs/en/tutorials/load/migrating-from-snowflake.md b/docs/en/tutorials/migrate/migrating-from-snowflake.md
similarity index 100%
rename from docs/en/tutorials/load/migrating-from-snowflake.md
rename to docs/en/tutorials/migrate/migrating-from-snowflake.md
diff --git a/static/img/load/snowflake-databend.png b/static/img/load/snowflake-databend.png
new file mode 100644
index 0000000000..1ed705c697
Binary files /dev/null and b/static/img/load/snowflake-databend.png differ