diff --git a/docs/content.zh/docs/connectors/flink-sources/vitess-cdc.md b/docs/content.zh/docs/connectors/flink-sources/vitess-cdc.md
index cdc975de175..0ac1f9b724f 100644
--- a/docs/content.zh/docs/connectors/flink-sources/vitess-cdc.md
+++ b/docs/content.zh/docs/connectors/flink-sources/vitess-cdc.md
@@ -49,7 +49,7 @@ more released versions will be available in the Maven central warehouse.
Setup Vitess server
----------------
-You can follow the Local Install via [Docker guide](https://vitess.io/docs/get-started/local-docker/), or the Vitess Operator for [Kubernetes guide](https://vitess.io/docs/get-started/operator/) to install Vitess. No special setup is needed to support Vitess connector.
+You can follow the Local Install via [Docker guide](https://vitess.io/docs/get-started/vttestserver-docker-image/), or the Vitess Operator for [Kubernetes guide](https://vitess.io/docs/get-started/operator/) to install Vitess. No special setup is needed to support the Vitess connector.
### Checklist
* Make sure that the VTGate host and its gRPC port (default is 15991) is accessible from the machine where the Vitess connector is installed
diff --git a/docs/content.zh/docs/connectors/pipeline-connectors/elasticsearch.md b/docs/content.zh/docs/connectors/pipeline-connectors/elasticsearch.md
new file mode 100644
index 00000000000..939cfc6d8bb
--- /dev/null
+++ b/docs/content.zh/docs/connectors/pipeline-connectors/elasticsearch.md
@@ -0,0 +1,275 @@
+---
+title: "Elasticsearch"
+weight: 7
+type: docs
+aliases:
+- /connectors/pipeline-connectors/elasticsearch
+---
+
+
+# Elasticsearch Pipeline Connector
+
+The Elasticsearch Pipeline connector can be used as the Data Sink of a pipeline to write data to Elasticsearch. This document describes how to set up the Elasticsearch Pipeline connector.
+
+
+How to create Pipeline
+----------------
+
+A pipeline that reads data from MySQL and writes it to Elasticsearch can be defined as follows:
+
+```yaml
+source:
+ type: mysql
+ name: MySQL Source
+ hostname: 127.0.0.1
+ port: 3306
+ username: admin
+ password: pass
+ tables: adb.\.*, bdb.user_table_[0-9]+, [app|web].order_\.*
+ server-id: 5401-5404
+
+sink:
+ type: elasticsearch
+ name: Elasticsearch Sink
+ hosts: http://127.0.0.1:9200,http://127.0.0.1:9201
+
+route:
+ - source-table: adb.\.*
+ sink-table: default_index
+ description: sync adb.\.* table to default_index
+
+pipeline:
+ name: MySQL to Elasticsearch Pipeline
+ parallelism: 2
+```
+
+Pipeline Connector Options
+----------------
+
+
+
+
+| Option | Required | Default | Type | Description |
+|--------|----------|---------|------|-------------|
+| type | required | (none) | String | Specify the connector to use; this should be 'elasticsearch'. |
+| name | optional | (none) | String | The name of the sink. |
+| hosts | required | (none) | String | One or more Elasticsearch hosts to connect to, e.g. 'http://host_name:9200,http://host_name:9201'. |
+| version | optional | 7 | Integer | Specify the Elasticsearch version of the target cluster. Valid values: 6 (Elasticsearch 6.x), 7 (Elasticsearch 7.x), 8 (Elasticsearch 8.x). |
+| username | optional | (none) | String | The username used to authenticate against the Elasticsearch instance. |
+| password | optional | (none) | String | The password used to authenticate against the Elasticsearch instance. |
+| batch.size.max | optional | 500 | Integer | Maximum number of buffered actions per bulk request. Can be set to '0' to disable batching. |
+| inflight.requests.max | optional | 5 | Integer | Maximum number of concurrent bulk requests the connector will try to execute. |
+| buffered.requests.max | optional | 1000 | Integer | Maximum number of requests kept in the in-memory buffer per bulk request. |
+| batch.size.max.bytes | optional | 5242880 | Long | Maximum size, in bytes, of buffered actions per bulk request. |
+| buffer.time.max.ms | optional | 5000 | Long | Interval, in milliseconds, between buffer flushes for bulk requests. |
+| record.size.max.bytes | optional | 10485760 | Long | Maximum size of a single record, in bytes. |
+
+
+Usage Notes
+--------
+
+* By default, data is written to the Elasticsearch index with the same name as the upstream table; this can be changed with the route feature of the pipeline.
+
+* If the target Elasticsearch index does not exist, it will not be created automatically.
+
+Data Type Mapping
+----------------
+Elasticsearch stores documents as JSON strings; the mapping between data types is shown in the following table:
+
+
+
+
+| CDC type | JSON type | NOTE |
+|----------|-----------|------|
+| TINYINT | NUMBER | |
+| SMALLINT | NUMBER | |
+| INT | NUMBER | |
+| BIGINT | NUMBER | |
+| FLOAT | NUMBER | |
+| DOUBLE | NUMBER | |
+| DECIMAL(p, s) | STRING | |
+| BOOLEAN | BOOLEAN | |
+| DATE | STRING | with format: date (yyyy-MM-dd), example: 2024-10-21 |
+| TIMESTAMP | STRING | with format: date-time (yyyy-MM-dd HH:mm:ss.SSSSSS, in UTC time zone), example: 2024-10-21 14:10:56.000000 |
+| TIMESTAMP_LTZ | STRING | with format: date-time (yyyy-MM-dd HH:mm:ss.SSSSSS, in UTC time zone), example: 2024-10-21 14:10:56.000000 |
+| CHAR(n) | STRING | |
+| VARCHAR(n) | STRING | |
+| ARRAY | ARRAY | |
+| MAP | STRING | |
+| ROW | STRING | |
+
+
+
+
+
+{{< top >}}
\ No newline at end of file
diff --git a/docs/content/docs/connectors/flink-sources/vitess-cdc.md b/docs/content/docs/connectors/flink-sources/vitess-cdc.md
index cdc975de175..0ac1f9b724f 100644
--- a/docs/content/docs/connectors/flink-sources/vitess-cdc.md
+++ b/docs/content/docs/connectors/flink-sources/vitess-cdc.md
@@ -49,7 +49,7 @@ more released versions will be available in the Maven central warehouse.
Setup Vitess server
----------------
-You can follow the Local Install via [Docker guide](https://vitess.io/docs/get-started/local-docker/), or the Vitess Operator for [Kubernetes guide](https://vitess.io/docs/get-started/operator/) to install Vitess. No special setup is needed to support Vitess connector.
+You can follow the Local Install via [Docker guide](https://vitess.io/docs/get-started/vttestserver-docker-image/), or the Vitess Operator for [Kubernetes guide](https://vitess.io/docs/get-started/operator/) to install Vitess. No special setup is needed to support the Vitess connector.
### Checklist
* Make sure that the VTGate host and its gRPC port (default is 15991) is accessible from the machine where the Vitess connector is installed
diff --git a/docs/content/docs/connectors/pipeline-connectors/elasticsearch.md b/docs/content/docs/connectors/pipeline-connectors/elasticsearch.md
new file mode 100644
index 00000000000..579d7015e9f
--- /dev/null
+++ b/docs/content/docs/connectors/pipeline-connectors/elasticsearch.md
@@ -0,0 +1,275 @@
+---
+title: "Elasticsearch"
+weight: 7
+type: docs
+aliases:
+- /connectors/pipeline-connectors/elasticsearch
+---
+
+
+# Elasticsearch Pipeline Connector
+
+The Elasticsearch Pipeline connector can be used as the *Data Sink* of a pipeline, writing data to Elasticsearch. This document describes how to set up the Elasticsearch Pipeline connector.
+
+
+How to create Pipeline
+----------------
+
+A pipeline that reads data from MySQL and writes it to Elasticsearch can be defined as follows:
+
+```yaml
+source:
+ type: mysql
+ name: MySQL Source
+ hostname: 127.0.0.1
+ port: 3306
+ username: admin
+ password: pass
+ tables: adb.\.*, bdb.user_table_[0-9]+, [app|web].order_\.*
+ server-id: 5401-5404
+
+sink:
+ type: elasticsearch
+ name: Elasticsearch Sink
+ hosts: http://127.0.0.1:9200,http://127.0.0.1:9201
+
+route:
+ - source-table: adb.\.*
+ sink-table: default_index
+ description: sync adb.\.* table to default_index
+
+pipeline:
+ name: MySQL to Elasticsearch Pipeline
+ parallelism: 2
+```
+
+Pipeline Connector Options
+----------------
+
+
+
+
+| Option | Required | Default | Type | Description |
+|--------|----------|---------|------|-------------|
+| type | required | (none) | String | Specify what connector to use; here it should be 'elasticsearch'. |
+| name | optional | (none) | String | The name of the sink. |
+| hosts | required | (none) | String | One or more Elasticsearch hosts to connect to, e.g. 'http://host_name:9200,http://host_name:9201'. |
+| version | optional | 7 | Integer | Specify the Elasticsearch version of the target cluster. Valid values: 6 (connect to an Elasticsearch 6.x cluster), 7 (Elasticsearch 7.x), 8 (Elasticsearch 8.x). |
+| username | optional | (none) | String | The username for Elasticsearch authentication. |
+| password | optional | (none) | String | The password for Elasticsearch authentication. |
+| batch.size.max | optional | 500 | Integer | Maximum number of buffered actions per bulk request. Can be set to '0' to disable batching. |
+| inflight.requests.max | optional | 5 | Integer | The maximum number of concurrent bulk requests that the sink will try to execute. |
+| buffered.requests.max | optional | 1000 | Integer | The maximum number of requests to keep in the in-memory buffer. |
+| batch.size.max.bytes | optional | 5242880 | Long | The maximum size of buffered actions per bulk request, in bytes. |
+| buffer.time.max.ms | optional | 5000 | Long | The maximum time to wait for incomplete batches before flushing, in milliseconds. |
+| record.size.max.bytes | optional | 10485760 | Long | The maximum size of a single record, in bytes. |
+
+
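As a sketch of how the batching options above fit together, the sink block below tunes a few of them; the host, credentials, and values are placeholders chosen for illustration, not recommended settings:

```yaml
sink:
  type: elasticsearch
  name: Elasticsearch Sink
  hosts: http://127.0.0.1:9200
  version: 8
  username: elastic          # placeholder credential
  password: changeme         # placeholder credential
  batch.size.max: 1000       # flush after 1000 buffered actions
  buffer.time.max.ms: 1000   # or after 1 second, whichever comes first
```

Larger batches reduce request overhead at the cost of higher latency and memory use; `buffer.time.max.ms` bounds how stale a buffered record can get.
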
+Usage Notes
+--------
+
+* The Elasticsearch index written to is, by default, the `namespace.schemaName.tableName` string of the TableId; this can be changed using the route function of the pipeline.
+
+* Automatic Elasticsearch index creation is not supported by the connector.
+
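For instance, the default `namespace.schemaName.tableName` index name can be overridden with a route rule, as in this sketch (the table and index names are hypothetical):

```yaml
route:
  - source-table: adb.orders        # hypothetical upstream table
    sink-table: orders_index        # documents are written to this index instead
    description: route adb.orders to a custom index
```
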
+Data Type Mapping
+----------------
+Elasticsearch stores documents as JSON strings, so the mapping below is between Flink CDC data types and JSON data types.
+
+
+
+
+| CDC type | JSON type | NOTE |
+|----------|-----------|------|
+| TINYINT | NUMBER | |
+| SMALLINT | NUMBER | |
+| INT | NUMBER | |
+| BIGINT | NUMBER | |
+| FLOAT | NUMBER | |
+| DOUBLE | NUMBER | |
+| DECIMAL(p, s) | STRING | |
+| BOOLEAN | BOOLEAN | |
+| DATE | STRING | with format: date (yyyy-MM-dd), example: 2024-10-21 |
+| TIMESTAMP | STRING | with format: date-time (yyyy-MM-dd HH:mm:ss.SSSSSS, in UTC time zone), example: 2024-10-21 14:10:56.000000 |
+| TIMESTAMP_LTZ | STRING | with format: date-time (yyyy-MM-dd HH:mm:ss.SSSSSS, in UTC time zone), example: 2024-10-21 14:10:56.000000 |
+| CHAR(n) | STRING | |
+| VARCHAR(n) | STRING | |
+| ARRAY | ARRAY | |
+| MAP | STRING | |
+| ROW | STRING | |
+
+
+
+
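To make the mapping above concrete, the following standalone Python sketch (illustrative only, not connector code) serializes a sample row the way the table describes: numeric types stay JSON numbers, while DECIMAL and timestamp values become formatted strings:

```python
import json
from datetime import datetime, timezone
from decimal import Decimal

def to_es_document(row: dict) -> str:
    """Serialize a sample CDC row per the mapping table:
    DECIMAL(p, s) -> STRING, TIMESTAMP -> STRING in UTC with
    microsecond precision, numeric types -> JSON NUMBER."""
    doc = {}
    for key, value in row.items():
        if isinstance(value, Decimal):
            doc[key] = str(value)                             # DECIMAL -> STRING
        elif isinstance(value, datetime):
            utc = value.astimezone(timezone.utc)
            doc[key] = utc.strftime("%Y-%m-%d %H:%M:%S.%f")   # TIMESTAMP -> STRING (UTC)
        else:
            doc[key] = value                                  # INT/BIGINT/... -> NUMBER
    return json.dumps(doc)

row = {
    "id": 42,                                                 # BIGINT
    "price": Decimal("19.99"),                                # DECIMAL(10, 2)
    "created_at": datetime(2024, 10, 21, 14, 10, 56, tzinfo=timezone.utc),
}
print(to_es_document(row))
# → {"id": 42, "price": "19.99", "created_at": "2024-10-21 14:10:56.000000"}
```

Keeping DECIMAL as a string avoids the precision loss a JSON number would introduce, which is why the table maps it to STRING rather than NUMBER.
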
+
+{{< top >}}
\ No newline at end of file
diff --git a/flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-elasticsearch/pom.xml b/flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-elasticsearch/pom.xml
index 896019c8168..f634ec7a791 100644
--- a/flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-elasticsearch/pom.xml
+++ b/flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-elasticsearch/pom.xml
@@ -197,7 +197,7 @@ limitations under the License.
org.apache.flink
flink-cdc-composer
- ${revision}
+ ${project.version}
test