diff --git a/TOC-tidb-cloud-essential.md b/TOC-tidb-cloud-essential.md
index 247091a305303..9b55ae6b56739 100644
--- a/TOC-tidb-cloud-essential.md
+++ b/TOC-tidb-cloud-essential.md
@@ -216,6 +216,8 @@
- Migrate or Import Data
- [Overview](/tidb-cloud/tidb-cloud-migration-overview.md)
- Migrate Data into TiDB Cloud
+ - [Migrate Existing and Incremental Data Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md)
+ - [Migrate Incremental Data Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md)
- [Migrate from TiDB Self-Managed to TiDB Cloud](/tidb-cloud/migrate-from-op-tidb.md)
- [Migrate and Merge MySQL Shards of Large Datasets](/tidb-cloud/migrate-sql-shards.md)
- [Migrate from Amazon RDS for Oracle Using AWS DMS](/tidb-cloud/migrate-from-oracle-using-aws-dms.md)
diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md
index 6eba799bbe63b..32f62386e4aaa 100644
--- a/tidb-cloud/migrate-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md
@@ -6,7 +6,15 @@ aliases: ['/tidbcloud/migrate-data-into-tidb','/tidbcloud/migrate-incremental-da
# Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration
-This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to TiDB Cloud using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/).
+This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/).
+
+
+
+> **Note:**
+>
+> Currently, the Data Migration feature is in public preview for {{{ .essential }}}.
+
+
This feature enables you to migrate your existing MySQL data and continuously replicate ongoing changes (binlog) from your MySQL-compatible source databases directly to TiDB Cloud, maintaining data consistency whether in the same region or across different regions. The streamlined process eliminates the need for separate dump and load operations, reducing downtime and simplifying your migration from MySQL to a more scalable platform.
@@ -16,38 +24,80 @@ If you only want to replicate ongoing binlog changes from your MySQL-compatible
### Availability
-- The Data Migration feature is available only for **TiDB Cloud Dedicated** clusters.
+- Currently, the Data Migration feature is not available for {{{ .starter }}}.
-- If you don't see the [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#step-1-go-to-the-data-migration-page) entry for your TiDB Cloud Dedicated cluster in the [TiDB Cloud console](https://tidbcloud.com/), the feature might not be available in your region. To request support for your region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
+
+- If you don't see the [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#step-1-go-to-the-data-migration-page) entry for your {{{ .dedicated }}} cluster in the [TiDB Cloud console](https://tidbcloud.com/), the feature might not be available in your region. To request support for your region, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
+
- Amazon Aurora MySQL writer instances support both existing data and incremental data migration. Amazon Aurora MySQL reader instances only support existing data migration and do not support incremental data migration.
### Maximum number of migration jobs
-You can create up to 200 migration jobs for each organization. To create more migration jobs, you need to [file a support ticket](/tidb-cloud/tidb-cloud-support.md).
+
+
+You can create up to 200 migration jobs on {{{ .dedicated }}} clusters for each organization. To create more migration jobs, you need to [file a support ticket](/tidb-cloud/tidb-cloud-support.md).
+
+
+
+
+You can create up to 100 migration jobs on {{{ .essential }}} clusters for each organization. To create more migration jobs, you need to [file a support ticket](/tidb-cloud/tidb-cloud-support.md).
+
+
### Filtered out and deleted databases
- The system databases will be filtered out and not migrated to TiDB Cloud even if you select all of the databases to migrate. That is, `mysql`, `information_schema`, `performance_schema`, and `sys` will not be migrated using this feature.
+
+
- When you delete a cluster in TiDB Cloud, all migration jobs in that cluster are automatically deleted and not recoverable.
+
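+
+To preview which schemas a migration job can actually pick up, you can list the non-system schemas on the source instance (a quick check; the exclusion list mirrors the system databases described above):
+
+```sql
+-- List candidate schemas; system schemas are always filtered out by Data Migration.
+SELECT SCHEMA_NAME
+FROM information_schema.SCHEMATA
+WHERE SCHEMA_NAME NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
+```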
+
+
+
+### Limitations of Alibaba Cloud RDS
+
+When using Alibaba Cloud RDS as a data source, every table must have an explicit primary key. For tables without one, RDS appends a hidden primary key to the binlog, which leads to a schema mismatch with the source table and causes the migration to fail.
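+
+Before starting a migration job from Alibaba Cloud RDS, you can locate tables that lack an explicit primary key with a query such as the following (a quick check run on the source instance; adjust the schema exclusion list as needed):
+
+```sql
+-- Find base tables that have no explicit PRIMARY KEY.
+SELECT t.TABLE_SCHEMA, t.TABLE_NAME
+FROM information_schema.TABLES t
+LEFT JOIN information_schema.TABLE_CONSTRAINTS c
+  ON  c.TABLE_SCHEMA = t.TABLE_SCHEMA
+  AND c.TABLE_NAME = t.TABLE_NAME
+  AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
+WHERE t.TABLE_TYPE = 'BASE TABLE'
+  AND t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
+  AND c.CONSTRAINT_NAME IS NULL;
+```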
+
+### Limitations of Alibaba Cloud PolarDB-X
+
+During full data migration, PolarDB-X schemas might contain keywords that are incompatible with the downstream database, causing the import to fail.
+
+To prevent this, create the target tables in the downstream database before starting the migration process.
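+
+For example, assuming a hypothetical PolarDB-X table `app_db.orders` that uses a reserved word as a column name, you might pre-create a compatible version on the target cluster before starting the job:
+
+```sql
+-- Hypothetical example: pre-create the target table with the
+-- problematic column name quoted so the import does not fail.
+CREATE DATABASE IF NOT EXISTS app_db;
+CREATE TABLE IF NOT EXISTS app_db.orders (
+  id BIGINT PRIMARY KEY,
+  `order` VARCHAR(64) NOT NULL, -- reserved word, kept quoted
+  created_at DATETIME NOT NULL
+);
+```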
+
+
+
### Limitations of existing data migration
- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys will be replaced.
-- If your dataset size is smaller than 1 TiB, it is recommended that you use logical mode (the default mode). If your dataset size is larger than 1 TiB, or you want to migrate existing data faster, you can use physical mode. For more information, see [Migrate existing data and incremental data](#migrate-existing-data-and-incremental-data).
+
+
+- For {{{ .dedicated }}}, if your dataset size is smaller than 1 TiB, it is recommended that you use logical mode (the default mode). If your dataset size is larger than 1 TiB, or you want to migrate existing data faster, you can use physical mode. For more information, see [Migrate existing data and incremental data](#migrate-existing-data-and-incremental-data).
+
+
+
+
+- For {{{ .essential }}}, only logical mode is supported for data migration currently. This mode exports data from MySQL source databases as SQL statements and then executes them on TiDB. In this mode, the target tables before migration can be either empty or non-empty.
+
+
### Limitations of incremental data migration
-- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to make sure whether the MySQL source data is accurate. If yes, click the "Restart" button of the migration job, and the migration job will replace the target TiDB Cloud cluster's conflicting records with the MySQL source records.
+- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target cluster with the MySQL source records.
+
+- During incremental data migration (migrating ongoing changes to your cluster), if the migration job recovers from an abrupt error, it might enable safe mode for 60 seconds. In safe mode, `INSERT` statements are migrated as `REPLACE`, and `UPDATE` statements as `DELETE` plus `REPLACE`, which ensures that all data changed during the abrupt error is migrated smoothly to the target TiDB Cloud cluster. In this scenario, for MySQL source tables without primary keys or non-null unique indexes, some data might be duplicated in the target cluster because the same rows might be inserted repeatedly.
-- During incremental replication (migrating ongoing changes to your cluster), if the migration job recovers from an abrupt error, it might open the safe mode for 60 seconds. During the safe mode, `INSERT` statements are migrated as `REPLACE`, `UPDATE` statements as `DELETE` and `REPLACE`, and then these transactions are migrated to the target TiDB Cloud cluster to make sure that all the data during the abrupt error has been migrated smoothly to the target TiDB Cloud cluster. In this scenario, for MySQL source tables without primary keys or non-null unique indexes, some data might be duplicated in the target TiDB Cloud cluster because the data might be inserted repeatedly into the target TiDB Cloud cluster.
+
-- In the following scenarios, if the migration job takes longer than 24 hours, do not purge binary logs in the source database to ensure that Data Migration can get consecutive binary logs for incremental replication:
+- In the following scenarios, if the migration job takes longer than 24 hours, do not purge binary logs in the source database. This allows Data Migration to get consecutive binary logs for incremental data migration:
- During the existing data migration.
- - After the existing data migration is completed and when incremental data migration is started for the first time, the latency is not 0ms.
+ - After the existing data migration is completed and when incremental data migration is started for the first time, the latency is not 0 ms.
+
+
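+
+As an illustration of the safe-mode rewriting described above, consider a hypothetical table `t` with primary key `id`:
+
+```sql
+-- Source executes: INSERT INTO t VALUES (1, 'a');
+-- In safe mode, Data Migration replicates it as:
+REPLACE INTO t VALUES (1, 'a');
+
+-- Source executes: UPDATE t SET c = 'b' WHERE id = 1;
+-- In safe mode, Data Migration replicates it as:
+DELETE FROM t WHERE id = 1;
+REPLACE INTO t VALUES (1, 'b');
+```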
## Prerequisites
@@ -55,7 +105,9 @@ Before migrating, check whether your data source is supported, enable binary log
### Make sure your data source and version are supported
-Data Migration supports the following data sources and versions:
+
+
+For {{{ .dedicated }}}, the Data Migration feature supports the following data sources and versions:
| Data source | Supported versions |
|:------------|:-------------------|
@@ -64,6 +116,23 @@ Data Migration supports the following data sources and versions:
| Amazon RDS MySQL | 8.0, 5.7 |
| Azure Database for MySQL - Flexible Server | 8.0, 5.7 |
| Google Cloud SQL for MySQL | 8.0, 5.7, 5.6 |
+| Alibaba Cloud RDS MySQL | 8.0, 5.7 |
+
+
+
+
+For {{{ .essential }}}, the Data Migration feature supports the following data sources and versions:
+
+| Data source | Supported versions |
+|:-------------------------------------------------|:-------------------|
+| Self-managed MySQL (on-premises or public cloud) | 8.0, 5.7 |
+| Amazon Aurora MySQL | 8.0, 5.7 |
+| Amazon RDS MySQL | 8.0, 5.7 |
+| Alibaba Cloud RDS MySQL | 8.0, 5.7 |
+| Azure Database for MySQL - Flexible Server | 8.0, 5.7 |
+| Google Cloud SQL for MySQL | 8.0, 5.7 |
+
+
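+
+Before creating a migration job, you can confirm that the source version is supported by checking it directly on the source instance:
+
+```sql
+-- Check the source MySQL version against the supported versions above.
+SELECT VERSION();
+```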
### Enable binary logs in the source MySQL-compatible database for replication
@@ -89,7 +158,7 @@ SHOW VARIABLES WHERE Variable_name IN
If necessary, change the source MySQL instance configurations to match the required values.
- Configure a self‑managed MySQL instance
+ Configure a self-managed MySQL instance
1. Open `/etc/my.cnf` and add the following:
@@ -152,23 +221,60 @@ For detailed instructions, see [Configure database flags](https://cloud.google.c
+
+ Configure Alibaba Cloud RDS MySQL
+
+1. In the [ApsaraDB RDS console](https://rds.console.aliyun.com/), select the region of your instance, and then click the ID of your RDS for MySQL instance.
+
+2. In the left navigation pane, click **Parameters**, search for the following parameter, and then set it to the specified value:
+
+ - `binlog_row_image`: `FULL`
+
+3. In the left navigation pane, click **Backup and Restoration**, and then select **Backup Strategy**. To ensure DM can access consecutive binlog files during migration, configure the backup strategy with the following constraints:
+
+ - Retention Period: Set to at least 3 days (7 days recommended).
+
+    - Retained Files: Ensure that the "Max number of files" value is sufficient to prevent older binlogs from being overwritten prematurely.
+
+    - Storage Safeguard: Monitor storage usage closely. RDS automatically purges the earliest binlogs when disk usage reaches the system threshold, regardless of the retention period setting.
+
+4. After applying the changes (and restarting if needed), connect to the instance and run the `SHOW VARIABLES` statement in this section to verify the configuration.
+
+For detailed instructions, see [Set instance parameters](https://www.alibabacloud.com/help/en/rds/apsaradb-rds-for-mysql/modify-the-parameters-of-an-apsaradb-rds-for-mysql-instance).
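+
+The verification in step 4 can look like the following (the expected values assume the configuration described above):
+
+```sql
+-- Run on the RDS instance after the parameter changes take effect.
+SHOW VARIABLES WHERE Variable_name IN
+  ('log_bin', 'binlog_format', 'binlog_row_image');
+-- Expected: log_bin = ON, binlog_format = ROW, binlog_row_image = FULL
+```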
+
+
+
### Ensure network connectivity
Before creating a migration job, you need to plan and set up proper network connectivity between your source MySQL instance, the TiDB Cloud Data Migration (DM) service, and your target TiDB Cloud cluster.
-The available connection methods are as follows:
+
+
+For {{{ .dedicated }}}, the available connection methods are as follows:
| Connection method | Availability | Recommended for |
|:---------------------|:-------------|:----------------|
| Public endpoints or IP addresses | All cloud providers supported by TiDB Cloud | Quick proof-of-concept migrations, testing, or when private connectivity is unavailable |
-| Private links or private endpoints | AWS and Azure only | Production workloads without exposing data to the public internet |
+| Private links or private endpoints | AWS and Azure only | Production workloads without exposing data to the public internet |
| VPC peering | AWS and Google Cloud only | Production workloads that need low-latency, intra-region connections and have non-overlapping VPC/VNet CIDRs |
+
+
+
+For {{{ .essential }}}, the available connection methods are as follows:
+
+| Connection method | Availability | Recommended for |
+|:---------------------|:-------------|:----------------|
+| Public endpoints or IP addresses | All cloud providers supported by TiDB Cloud | Quick proof-of-concept migrations, testing, or when private connectivity is unavailable |
+| Private links or private endpoints | AWS and Alibaba Cloud only | Production workloads without exposing data to the public internet |
+
+
+
Choose a connection method that best fits your cloud provider, network topology, and security requirements, and then follow the setup instructions for that method.
#### End-to-end encryption over TLS/SSL
-Regardless of the connection method, it is strongly recommended to use TLS/SSL for end-to-end encryption. While private endpoints and VPC peering secure the network path, TLS/SSL secures the data itself and helps meet compliance requirements.
+Regardless of the connection method, it is strongly recommended to use TLS/SSL for end-to-end encryption. While private endpoints and VPC peering secure the network path, TLS/SSL secures the data itself and helps meet compliance requirements.
Download and store the cloud provider's certificates for TLS/SSL encrypted connections
@@ -176,6 +282,7 @@ Regardless of the connection method, it is strongly recommended to use TLS/SSL f
- Amazon Aurora MySQL or Amazon RDS MySQL: [Using SSL/TLS to encrypt a connection to a DB instance or cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.SSL.html)
- Azure Database for MySQL - Flexible Server: [Connect with encrypted connections](https://learn.microsoft.com/en-us/azure/mysql/flexible-server/how-to-connect-tls-ssl)
- Google Cloud SQL for MySQL: [Manage SSL/TLS certificates](https://cloud.google.com/sql/docs/mysql/manage-ssl-instance)
+- Alibaba Cloud RDS MySQL: [Configure the SSL encryption feature](https://www.alibabacloud.com/help/en/rds/apsaradb-rds-for-mysql/configure-a-cloud-certificate-to-enable-ssl-encryption)
@@ -183,6 +290,8 @@ Regardless of the connection method, it is strongly recommended to use TLS/SSL f
When using public endpoints, you can verify network connectivity and access both now and later during the DM job creation process. TiDB Cloud will provide specific egress IP addresses and prompt instructions at that time.
+
+
> **Note**:
>
> The egress IP range for your firewall is available only during Data Migration task creation. You cannot obtain this IP range in advance. Before you begin, ensure that you:
@@ -191,12 +300,14 @@ When using public endpoints, you can verify network connectivity and access both
> - Can access your cloud provider's console during the setup process.
> - Can pause the task creation workflow to configure your firewall.
+
+
1. Identify and record the source MySQL instance's endpoint hostname (FQDN) or public IP address.
-2. Ensure you have the required permissions to modify the firewall or security group rules for your database. Refer to your cloud provider's documentation for guidance as follows:
+2. Ensure you have the required permissions to modify the firewall or security group rules for your database. Refer to your cloud provider's documentation for guidance.
- Amazon Aurora MySQL or Amazon RDS MySQL: [Controlling access with security groups](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.RDSSecurityGroups.html).
- Azure Database for MySQL - Flexible Server: [Public Network Access](https://learn.microsoft.com/en-us/azure/mysql/flexible-server/concepts-networking-public)
- - Google Cloud SQL for MySQL: [Authorized Networks](https://cloud.google.com/sql/docs/mysql/configure-ip#authorized-networks).
+    - Google Cloud SQL for MySQL: [Authorized Networks](https://cloud.google.com/sql/docs/mysql/configure-ip#authorized-networks).
3. Optional: Verify connectivity to your source database from a machine with public internet access using the appropriate certificate for in-transit encryption:
@@ -204,9 +315,11 @@ When using public endpoints, you can verify network connectivity and access both
    mysql -h <host> -P <port> -u <user> -p --ssl-ca=<ca-certificate-file> -e "SELECT version();"
```
-4. Later, during the Data Migration job setup, TiDB Cloud will provide an egress IP range. At that time, you need to add this IP range to your database's firewall or security‑group rules following the same procedure above.
+4. Later, during the Data Migration job setup, TiDB Cloud will provide an egress IP range. At that time, you need to add this IP range to your database's firewall or security-group rules following the same procedure above.
+
+#### Private link or private endpoint
-#### Private link or private endpoint
+
If you use a provider-native private link or private endpoint, create a private endpoint for your source MySQL instance (RDS, Aurora, or Azure Database for MySQL).
@@ -215,11 +328,11 @@ If you use a provider-native private link or private endpoint, create a private
AWS does not support direct PrivateLink access to RDS or Aurora. Therefore, you need to create a Network Load Balancer (NLB) and publish it as an endpoint service associated with your source MySQL instance.
-1. In the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), create an NLB in the same subnet(s) as your RDS or Aurora writer. Configure the NLB with a TCP listener on port `3306` that forwards traffic to the database endpoint.
+1. In the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), create an NLB in the same subnet(s) as your RDS or Aurora writer. Configure the NLB with a TCP listener on port `3306` that forwards traffic to the database endpoint.
For detailed instructions, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) in AWS documentation.
-2. In the [Amazon VPC console](https://console.aws.amazon.com/vpc/), click **Endpoint Services** in the left navigation pane, and then create an endpoint service. During the setup, select the NLB created in the previous step as the backing load balancer, and enable the **Require acceptance for endpoint** option. After the endpoint service is created, copy the service name (in the `com.amazonaws.vpce-svc-xxxxxxxxxxxxxxxxx` format) for later use.
+2. In the [Amazon VPC console](https://console.aws.amazon.com/vpc/), click **Endpoint Services** in the left navigation pane, and then create an endpoint service. During the setup, select the NLB created in the previous step as the backing load balancer, and enable the **Require acceptance for endpoint** option. After the endpoint service is created, copy the service name (in the `com.amazonaws.vpce-svc-xxxxxxxxxxxxxxxxx` format) for later use.
For detailed instructions, see [Create an endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) in AWS documentation.
@@ -253,12 +366,21 @@ To add a new private endpoint, take the following steps:
    mysql -h <private-endpoint-hostname> -P 3306 -u <user> -p --ssl-ca=<ca-certificate-file> -e "SELECT version();"
```
-4. In the [Azure portal](https://portal.azure.com/), return to the overview page of your MySQL Flexible Server instance (not the private endpoint object), click **JSON View** for the **Essentials** section, and then copy the resource ID for later use. The resource ID is in the `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DBforMySQL/flexibleServers/<server-name>` format. You will use this resource ID (not the private endpoint ID) to configure TiDB Cloud DM.
+4. In the [Azure portal](https://portal.azure.com/), return to the overview page of your MySQL Flexible Server instance (not the private endpoint object), click **JSON View** for the **Essentials** section, and then copy the resource ID for later use. The resource ID is in the `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DBforMySQL/flexibleServers/<server-name>` format. You will use this resource ID (not the private endpoint ID) to configure TiDB Cloud DM.
5. Later, when configuring TiDB Cloud DM to connect via PrivateLink, you will need to return to the Azure portal and approve the pending connection request from TiDB Cloud to this private endpoint.
+
+
+
+If you use a provider-native private link or private endpoint, create a [Private Link Connection](/tidb-cloud/serverless-private-link-connection.md) for your source MySQL instance.
+
+
+
+
+
#### VPC peering
If you use AWS VPC peering or Google Cloud VPC network peering, see the following instructions to configure the network.
@@ -296,6 +418,8 @@ If your MySQL service is in a Google Cloud VPC, take the following steps:
+
+
### Grant required privileges for migration
Before starting migration, you need to set up appropriate database users with the required privileges on both the source and target databases. These privileges enable TiDB Cloud DM to read data from MySQL, replicate changes, and write to your TiDB Cloud cluster securely. Because the migration involves both full data dumps for existing data and binlog replication for incremental changes, your migration user requires specific permissions beyond basic read access.
@@ -310,7 +434,7 @@ For production workloads, it is recommended to have a dedicated user for data du
|:----------|:------|:--------|
| `SELECT` | Tables | Allows reading data from all tables |
| `RELOAD` | Global | Ensures consistent snapshots during full dump |
-| `REPLICATION SLAVE` | Global | Enables binlog streaming for incremental replication |
+| `REPLICATION SLAVE` | Global | Enables binlog streaming for incremental data migration |
| `REPLICATION CLIENT` | Global | Provides access to binlog position and server status |
For example, you can use the following `GRANT` statement in your source MySQL instance to grant corresponding privileges:
@@ -330,12 +454,12 @@ For production workloads, it is recommended to have a dedicated user for replica
| `CREATE` | Databases, Tables | Creates schema objects in the target |
| `SELECT` | Tables | Verifies data during migration |
| `INSERT` | Tables | Writes migrated data |
-| `UPDATE` | Tables | Modifies existing rows during incremental replication |
+| `UPDATE` | Tables | Modifies existing rows during incremental data migration |
| `DELETE` | Tables | Removes rows during replication or updates |
| `ALTER` | Tables | Modifies table definitions when schema changes |
| `DROP` | Databases, Tables | Removes objects during schema sync |
| `INDEX` | Tables | Creates and modifies indexes |
-| `CREATE VIEW` | View | Create views used by migration |
+| `CREATE VIEW` | Views | Creates views used by migration |
For example, you can execute the following `GRANT` statement in your target TiDB Cloud cluster to grant corresponding privileges:
@@ -351,7 +475,7 @@ GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX ON *.* TO 'dm_t
>
> You can use the combo box in the upper-left corner to switch between organizations, projects, and clusters.
-2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Migration** in the left navigation pane.
+2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed.
@@ -365,14 +489,38 @@ On the **Create Migration Job** page, configure the source and target connection
- **Data source**: the data source type.
- **Connectivity method**: select a connection method for your data source based on your security requirements and cloud provider:
+
+
+
- **Public IP**: available for all cloud providers (recommended for testing and proof-of-concept migrations).
- **Private Link**: available for AWS and Azure only (recommended for production workloads requiring private connectivity).
- **VPC Peering**: available for AWS and Google Cloud only (recommended for production workloads needing low-latency, intra-region connections with non-overlapping VPC/VNet CIDRs).
+
+
+
+
+ - **Public**: available for all cloud providers (recommended for testing and proof-of-concept migrations).
+ - **Private Link**: available for AWS and Alibaba Cloud only (recommended for production workloads requiring private connectivity).
+
+
+
- Based on the selected **Connectivity method**, do the following:
+
+
+
- If **Public IP** or **VPC Peering** is selected, fill in the **Hostname or IP address** field with the hostname or IP address of the data source.
- If **Private Link** is selected, fill in the following information:
- **Endpoint Service Name** (available if **Data source** is from AWS): enter the VPC endpoint service name (format: `com.amazonaws.vpce-svc-xxxxxxxxxxxxxxxxx`) that you created for your RDS or Aurora instance.
    - **Private Endpoint Resource ID** (available if **Data source** is from Azure): enter the resource ID of your MySQL Flexible Server instance (format: `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DBforMySQL/flexibleServers/<server-name>`).
+
+
+
+
+ - If **Public IP** is selected, fill in the **Hostname or IP address** field with the hostname or IP address of the data source.
+ - If **Private Link** is selected, select the private link connection that you created in the [Private link or private endpoint](#private-link-or-private-endpoint) section.
+
+
+
- **Port**: the port of the data source.
- **User Name**: the username of the data source.
- **Password**: the password of the username.
@@ -388,7 +536,7 @@ On the **Create Migration Job** page, configure the source and target connection
- Option 2: Client certificate authentication
- - If your MySQL server is configured for client certificate authentication, upload **Client Certificate** and **Client private key**.
+ - If your MySQL server is configured for client certificate authentication, upload **Client Certificate** and **Client private key**.
- In this option, TiDB Cloud presents its certificate to the MySQL server for authentication, but TiDB Cloud does not verify the MySQL server's certificate.
- This option is typically used when the MySQL server is configured with options such as `REQUIRE SUBJECT '...'` or `REQUIRE ISSUER '...'` without `REQUIRE X509`, allowing it to check specific attributes of the client certificate without full CA validation of that client certificate.
- This option is often used when the MySQL server accepts client certificates in self-signed or custom PKI environments. Note that this configuration is vulnerable to man-in-the-middle attacks and is not recommended for production environments unless other network-level controls guarantee server authenticity.
@@ -413,17 +561,38 @@ On the **Create Migration Job** page, configure the source and target connection
5. Take action according to the message you see:
+
+
- If you use **Public IP** or **VPC Peering** as the connectivity method, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
- If you use **Private Link** as the connectivity method, you are prompted to accept the endpoint request:
- For AWS: go to the [AWS VPC console](https://us-west-2.console.aws.amazon.com/vpc/home), click **Endpoint services**, and accept the endpoint request from TiDB Cloud.
- For Azure: go to the [Azure portal](https://portal.azure.com), search for your MySQL Flexible Server by name, click **Setting** > **Networking** in the left navigation pane, locate the **Private endpoint** section on the right side, and then approve the pending connection request from TiDB Cloud.
+
+
+
+    If you use **Public IP** as the connectivity method, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
+
+
+
## Step 3: Choose migration job type
-In the **Choose the objects to be migrated** step, you can choose existing data migration, incremental data migration, or both.
+
+
+In the **Choose migration job type** step, you can choose to migrate both existing data and incremental data, migrate only existing data, or migrate only incremental data.
+
+
+
+
+
+In the **Choose migration job type** step, you can choose to migrate both existing data and incremental data, or migrate only incremental data.
+
+
### Migrate existing data and incremental data
+
+
To migrate data to TiDB Cloud once and for all, choose both **Existing data migration** and **Incremental data migration**, which ensures data consistency between the source and target databases.
You can use **physical mode** or **logical mode** to migrate **existing data** and **incremental data**.
@@ -446,11 +615,24 @@ Physical mode exports the MySQL source data as fast as possible, so [different s
| 8 RCUs | 365.5 MiB/s | 28.9% |
| 16 RCUs | 424.6 MiB/s | 46.7% |
+
+
+
+To migrate data to TiDB Cloud once and for all, choose to migrate both existing data and incremental data, which ensures data consistency between the source and target databases.
+
+Currently you can only use **logical mode** to migrate **existing data**. This mode exports data from MySQL source databases as SQL statements and then executes them on TiDB. In this mode, the target tables before migration can be either empty or non-empty.
+
+
+
+
+
### Migrate only existing data
To migrate only existing data of the source database to TiDB Cloud, choose **Existing data migration**.
-You can only use logical mode to migrate existing data. For more information, see [Migrate existing data and incremental data](#migrate-existing-data-and-incremental-data).
+You can use physical mode or logical mode to migrate existing data. For more information, see [Migrate existing data and incremental data](#migrate-existing-data-and-incremental-data).
+
+
### Migrate only incremental data
@@ -464,7 +646,7 @@ For detailed instructions about incremental data migration, see [Migrate Only In
  - If you click **All**, the migration job will migrate the existing data of the whole source database instance to TiDB Cloud and then migrate ongoing changes after the full migration. This applies only if you have selected the **Existing data migration** and **Incremental data migration** checkboxes in the previous step.
  - If you click **Customize** and select some databases, the migration job will migrate the existing data and ongoing changes of the selected databases to TiDB Cloud. This applies only if you have selected the **Existing data migration** and **Incremental data migration** checkboxes in the previous step.
- - If you click **Customize** and select some tables under a dataset name, the migration job will only migrate the existing data and migrate ongoing changes of the selected tables. Tables created afterwards in the same database will not be migrated.
+ - If you click **Customize** and select some tables under a database name, the migration job will only migrate the existing data and migrate ongoing changes of the selected tables. Tables created afterwards in the same database will not be migrated.
2. Click **Next**.
@@ -472,7 +654,7 @@ For detailed instructions about incremental data migration, see [Migrate Only In
On the **Precheck** page, you can view the precheck results. If the precheck fails, you need to resolve the issues according to the **Failed** or **Warning** details, and then click **Check again** to recheck.
-If there are only warnings on some check items, you can evaluate the risk and consider whether to ignore the warnings. If all warnings are ignored, the migration job will automatically go on to the next step.
+If there are only warnings on some check items, you can evaluate the risk and consider whether to ignore the warnings. If all warnings are ignored, the migration job will automatically proceed to the next step.
For more information about errors and solutions, see [Precheck errors and solutions](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md#precheck-errors-and-solutions).
@@ -480,6 +662,24 @@ For more information about precheck items, see [Migration Task Precheck](https:/
If all check items show **Pass**, click **Next**.
+
+
+## Step 6: View the migration progress
+
+After the migration job is created, you can view the migration progress on the **Migration Job Details** page. The migration progress is displayed in the **Stage and Status** area.
+
+You can pause a migration job while it is running.
+
+If a migration job has failed, you can resume it after solving the problem.
+
+You can delete a migration job in any status.
+
+If you encounter any problems during the migration, see [Migration errors and solutions](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md#migration-errors-and-solutions).
+
+
+
+
+
## Step 6: Choose a spec and start migration
On the **Choose a Spec and Start Migration** page, select an appropriate migration specification according to your performance requirements. For more information about the specifications, see [Specifications for Data Migration](/tidb-cloud/tidb-cloud-billing-dm.md#specifications-for-data-migration).
@@ -500,7 +700,7 @@ If you encounter any problems during the migration, see [Migration errors and so
## Scale a migration job specification
-TiDB Cloud supports scaling up or down a migration job specification to meet your performance and cost requirements in different scenarios.
+TiDB Cloud Dedicated supports scaling up or down a migration job specification to meet your performance and cost requirements in different scenarios.
Different migration specifications provide different levels of performance, and your performance requirements might vary at different stages. For example, during the existing data migration, you want the performance to be as fast as possible, so you choose a migration job with a large specification, such as 8 RCUs. Once the existing data migration is completed, the incremental migration does not require such a high performance, so you can scale down the job specification, for example, from 8 RCUs to 2 RCUs, to save cost.
@@ -520,8 +720,10 @@ When scaling a migration job specification, note the following:
1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project.
-2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Migration** in the left navigation pane.
+2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, locate the migration job you want to scale. In the **Action** column, click **...** > **Scale Up/Down**.
4. In the **Scale Up/Down** window, select the new specification you want to use, and then click **Submit**. You can view the new price of the specification at the bottom of the window.
+
+
diff --git a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
index b911077512e88..d121b5545c12f 100644
--- a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
@@ -1,11 +1,19 @@
---
title: Migrate Only Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration
-summary: Learn how to migrate incremental data from MySQL-compatible databases hosted in Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or a local MySQL instance to TiDB Cloud using Data Migration.
+summary: Learn how to migrate incremental data from MySQL-compatible databases hosted on a cloud provider or a local MySQL instance to TiDB Cloud using Data Migration.
---
# Migrate Only Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration
-This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, or Azure Database for MySQL) or self-hosted source database to TiDB Cloud using the Data Migration feature of the TiDB Cloud console.
+This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or Alibaba Cloud RDS) or a self-hosted source database to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature of the TiDB Cloud console.
+
+
+
+> **Note:**
+>
+> Currently, the Data Migration feature is in public preview for {{{ .essential }}}.
+
+
For instructions about how to migrate existing data or both existing data and incremental data, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md).
@@ -22,7 +30,7 @@ For instructions about how to migrate existing data or both existing data and in
00000000-0000-0000-0000-00000000000000000], endLocation:
[position: (mysql_bin.000016, 5162), gtid-set: 0000000-0000-0000
0000-0000000000000:0]: cannot fetch downstream table schema of
- zm`.'table1' to initialize upstream schema 'zm'.'table1' in sschema
+ zm`.'table1' to initialize upstream schema 'zm'.'table1' in schema
tracker Raw Cause: Error 1146: Table 'zm.table1' doesn't exist
```
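As the error above shows, an incremental-only migration does not create missing tables on the target; the downstream table must exist before replication starts. The sketch below pre-creates the table named in the error message; the column definitions are placeholders, so copy the real definition from the source (for example, via `SHOW CREATE TABLE`):

```sql
-- Run on the TiDB Cloud target before starting the incremental job.
-- Placeholder columns: replace them with the output of
-- SHOW CREATE TABLE `zm`.`table1`; executed on the source.
CREATE DATABASE IF NOT EXISTS `zm`;
CREATE TABLE IF NOT EXISTS `zm`.`table1` (
    `id` BIGINT NOT NULL PRIMARY KEY,
    `data` VARCHAR(255)
);
```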
@@ -44,7 +52,7 @@ If you want to use GTID to specify the start position, make sure that the GTID i
### For Amazon RDS and Amazon Aurora MySQL
-For Amazon RDS and Amazon Aurora MySQL, you need to create a new modifiable parameter group (that is, not the default parameter group) and then modify the following parameters in the parameter group and restart the instance application.
+For Amazon RDS and Amazon Aurora MySQL, you need to create a new modifiable parameter group (that is, not the default parameter group), modify the following parameters in the parameter group, and then restart the instance to apply the changes.
- `gtid_mode`
- `enforce_gtid_consistency`
@@ -71,7 +79,19 @@ If the result is `ON` or `ON_PERMISSIVE`, the GTID mode is successfully enabled.
### For Azure Database for MySQL
-The GTID mode is enabled by default for Azure Database for MySQL (versions 5.7 and later). You can check if the GTID mode has been successfully enabled by executing the following SQL statement:
+The GTID mode is enabled by default for Azure Database for MySQL (versions 5.7 and later) and cannot be disabled.
+
+In addition, ensure that the `binlog_row_image` server parameter is set to `FULL`. You can check this by executing the following SQL statement:
+
+```sql
+SHOW VARIABLES LIKE 'binlog_row_image';
+```
+
+If the result is not `FULL`, you need to configure this parameter for your Azure Database for MySQL instance using the [Azure portal](https://portal.azure.com/) or [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/).
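If you want to confirm both settings in one query, the following standard MySQL statement also works on Azure Database for MySQL (`gtid_mode` should report `ON`, and `binlog_row_image` should report `FULL`):

```sql
-- Check the GTID mode and the row image format together:
SHOW VARIABLES WHERE Variable_name IN ('gtid_mode', 'binlog_row_image');
```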
+
+### For Alibaba Cloud RDS MySQL
+
+The GTID mode is enabled by default for Alibaba Cloud RDS MySQL. You can check if the GTID mode has been successfully enabled by executing the following SQL statement:
```sql
SHOW VARIABLES LIKE 'gtid_mode';
@@ -85,7 +105,7 @@ In addition, ensure that the `binlog_row_image` server parameter is set to `FULL
SHOW VARIABLES LIKE 'binlog_row_image';
```
-If the result is not `FULL`, you need to configure this parameter for your Azure Database for MySQL instance using the [Azure portal](https://portal.azure.com/) or [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/).
+If the result is not `FULL`, you need to configure this parameter for your Alibaba Cloud RDS MySQL instance using the [RDS console](https://rds.console.aliyun.com/).
### For a self-hosted MySQL instance
@@ -128,7 +148,7 @@ To enable the GTID mode for a self-hosted MySQL instance, follow these steps:
>
> You can use the combo box in the upper-left corner to switch between organizations, projects, and clusters.
-2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Migration** in the left navigation pane.
+2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed.
@@ -142,16 +162,28 @@ On the **Create Migration Job** page, configure the source and target connection
- **Data source**: the data source type.
- **Region**: the region of the data source, which is required for cloud databases only.
- - **Connectivity method**: the connection method for the data source. Currently, you can choose public IP, VPC Peering, or Private Link according to your connection method.
+    - **Connectivity method**: the connection method for the data source. For {{{ .dedicated }}}, you can choose public IP, VPC Peering, or Private Link according to your connection method. For {{{ .essential }}}, you can choose public IP or Private Link.
+
+
+
- **Hostname or IP address** (for public IP and VPC Peering): the hostname or IP address of the data source.
- **Service Name** (for Private Link): the endpoint service name.
+
+
+
+
+ - **Hostname or IP address** (for public IP): the hostname or IP address of the data source.
+ - **Private Link Connection** (for Private Link): the private link connection that you created in the [Private Link Connections](/tidb-cloud/serverless-private-link-connection.md) section.
+
+
+
- **Port**: the port of the data source.
- **Username**: the username of the data source.
- **Password**: the password of the username.
- **SSL/TLS**: if you enable SSL/TLS, you need to upload the certificates of the data source, including any of the following:
- only the CA certificate
- the client certificate and client key
- - the CA certificate, client certificate and client key
+ - the CA certificate, client certificate, and client key
3. Fill in the target connection profile.
@@ -162,9 +194,18 @@ On the **Create Migration Job** page, configure the source and target connection
5. Take action according to the message you see:
+
+
- If you use Public IP or VPC Peering, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
- If you use AWS Private Link, you are prompted to accept the endpoint request. Go to the [AWS VPC console](https://us-west-2.console.aws.amazon.com/vpc/home), and click **Endpoint services** to accept the endpoint request.
+
+
+
+ If you use Public IP, you need to add the Data Migration service's IP addresses to the IP Access List of your source database and firewall (if any).
+
+
+
## Step 3: Choose migration job type
To migrate only the incremental data of the source database to TiDB Cloud, select **Incremental data migration** and do not select **Existing data migration**. In this way, the migration job only migrates ongoing changes of the source database to TiDB Cloud.
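Before configuring an incremental-only job, it helps to record where replication should start. You can read the current binlog position and GTID set on the source with standard MySQL statements (run these on the source database, not on TiDB):

```sql
-- Run on the MySQL source to capture a starting point.
-- Binlog file name and position (use SHOW BINARY LOG STATUS in MySQL 8.4 and later):
SHOW MASTER STATUS;
-- Executed GTID set, available when GTID mode is enabled:
SELECT @@GLOBAL.gtid_executed;
```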
@@ -215,7 +256,7 @@ If there is data in the target database, make sure the binlog position is correc
On the **Precheck** page, you can view the precheck results. If the precheck fails, you need to resolve the issues according to the **Failed** or **Warning** details, and then click **Check again** to recheck.
-If there are only warnings on some check items, you can evaluate the risk and consider whether to ignore the warnings. If all warnings are ignored, the migration job will automatically go on to the next step.
+If there are only warnings on some check items, you can evaluate the risk and consider whether to ignore the warnings. If all warnings are ignored, the migration job will automatically proceed to the next step.
For more information about errors and solutions, see [Precheck errors and solutions](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md#precheck-errors-and-solutions).
@@ -223,6 +264,24 @@ For more information about precheck items, see [Migration Task Precheck](https:/
If all check items show **Pass**, click **Next**.
+
+
+## Step 6: View the migration progress
+
+After the migration job is created, you can view the migration progress on the **Migration Job Details** page. The migration progress is displayed in the **Stage and Status** area.
+
+You can pause a migration job while it is running.
+
+If a migration job has failed, you can resume it after solving the problem.
+
+You can delete a migration job in any status.
+
+If you encounter any problems during the migration, see [Migration errors and solutions](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md#migration-errors-and-solutions).
+
+
+
+
+
## Step 6: Choose a spec and start migration
On the **Choose a Spec and Start Migration** page, select an appropriate migration specification according to your performance requirements. For more information about the specifications, see [Specifications for Data Migration](/tidb-cloud/tidb-cloud-billing-dm.md#specifications-for-data-migration).
@@ -240,3 +299,5 @@ If a migration job has failed, you can resume it after solving the problem.
You can delete a migration job in any status.
If you encounter any problems during the migration, see [Migration errors and solutions](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md#migration-errors-and-solutions).
+
+
\ No newline at end of file