
Commit 690cab8

[lake/paimon] Bump Paimon version to 1.3.1 (#2035)
1 parent: a0abed0

7 files changed: +15 −11 lines


fluss-lake/fluss-lake-paimon/pom.xml (1 addition, 1 deletion)

```diff
@@ -33,7 +33,7 @@
     <packaging>jar</packaging>

     <properties>
-        <paimon.version>1.2.0</paimon.version>
+        <paimon.version>1.3.1</paimon.version>
     </properties>

     <dependencies>
```

fluss-lake/fluss-lake-paimon/src/main/java/org/apache/fluss/lake/paimon/tiering/PaimonLakeCommitter.java (5 additions, 1 deletion)

```diff
@@ -31,6 +31,7 @@
 import org.apache.paimon.manifest.IndexManifestEntry;
 import org.apache.paimon.manifest.ManifestCommittable;
 import org.apache.paimon.manifest.ManifestEntry;
+import org.apache.paimon.manifest.SimpleFileEntry;
 import org.apache.paimon.operation.FileStoreCommit;
 import org.apache.paimon.table.FileStoreTable;
 import org.apache.paimon.table.sink.CommitCallback;
@@ -224,7 +225,10 @@ public static class PaimonCommitCallback implements CommitCallback {

         @Override
         public void call(
-                List<ManifestEntry> list, List<IndexManifestEntry> indexFiles, Snapshot snapshot) {
+                List<SimpleFileEntry> baseFiles,
+                List<ManifestEntry> deltaFiles,
+                List<IndexManifestEntry> indexFiles,
+                Snapshot snapshot) {
             currentCommitSnapshotId.set(snapshot.id());
         }

```
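Per the diff above, Paimon 1.3's `CommitCallback.call` now receives the committed base files and delta files as separate lists, so the override gains a parameter while Fluss's implementation still only records the snapshot id. A minimal, self-contained sketch of the new shape — the entry and snapshot types below are stand-ins for Paimon's real `SimpleFileEntry`, `ManifestEntry`, `IndexManifestEntry`, and `Snapshot` classes, not the actual API:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Stand-in types for Paimon's manifest/snapshot classes (illustrative only).
class SimpleFileEntry {}
class ManifestEntry {}
class IndexManifestEntry {}
record Snapshot(long id) {}

// Shape of the Paimon 1.3-style callback: base files and delta files
// arrive as separate lists instead of one combined list.
interface CommitCallback {
    void call(
            List<SimpleFileEntry> baseFiles,
            List<ManifestEntry> deltaFiles,
            List<IndexManifestEntry> indexFiles,
            Snapshot snapshot);
}

public class PaimonCommitCallbackSketch implements CommitCallback {
    private final AtomicLong currentCommitSnapshotId = new AtomicLong(-1);

    @Override
    public void call(
            List<SimpleFileEntry> baseFiles,
            List<ManifestEntry> deltaFiles,
            List<IndexManifestEntry> indexFiles,
            Snapshot snapshot) {
        // As in PaimonLakeCommitter, only the id of the committed snapshot
        // is remembered; the file lists are ignored here.
        currentCommitSnapshotId.set(snapshot.id());
    }

    public long lastSnapshotId() {
        return currentCommitSnapshotId.get();
    }

    public static void main(String[] args) {
        PaimonCommitCallbackSketch cb = new PaimonCommitCallbackSketch();
        cb.call(List.of(), List.of(), List.of(), new Snapshot(42L));
        System.out.println(cb.lastSnapshotId());
    }
}
```

Because the old three-argument method no longer matches the interface, recompiling against Paimon 1.3 fails until the override is updated, which is why this source change accompanies the version bump.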

fluss-lake/fluss-lake-paimon/src/main/resources/META-INF/NOTICE (1 addition, 1 deletion)

```diff
@@ -6,4 +6,4 @@ The Apache Software Foundation (http://www.apache.org/).

 This project bundles the following dependencies under the Apache Software License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)

-- org.apache.paimon:paimon-bundle:1.2.0
+- org.apache.paimon:paimon-bundle:1.3.1
```

pom.xml (1 addition, 1 deletion)

```diff
@@ -88,7 +88,7 @@
     <curator.version>5.4.0</curator.version>
     <netty.version>4.1.104.Final</netty.version>
     <arrow.version>15.0.0</arrow.version>
-    <paimon.version>1.2.0</paimon.version>
+    <paimon.version>1.3.1</paimon.version>
     <iceberg.version>1.9.1</iceberg.version>

     <fluss.hadoop.version>2.10.2</fluss.hadoop.version>
```
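The version is pinned once as a Maven property in the root pom and in the module pom, so the dependency declarations themselves stay unchanged. For illustration, a dependency resolved against that property would look like the following (the `paimon-bundle` coordinates come from the NOTICE file in this commit; the exact declaration in the Fluss poms may differ):

```xml
<!-- Illustrative only: a dependency picking up the shared version property. -->
<dependency>
    <groupId>org.apache.paimon</groupId>
    <artifactId>paimon-bundle</artifactId>
    <version>${paimon.version}</version>
</dependency>
```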

website/docs/maintenance/tiered-storage/lakehouse-storage.md (2 additions, 2 deletions)

````diff
@@ -35,7 +35,7 @@ datalake.paimon.metastore: filesystem
 datalake.paimon.warehouse: /tmp/paimon
 ```

-Fluss processes Paimon configurations by removing the `datalake.paimon.` prefix and then use the remaining configuration (without the prefix `datalake.paimon.`) to create the Paimon catalog. Checkout the [Paimon documentation](https://paimon.apache.org/docs/1.1/maintenance/configurations/) for more details on the available configurations.
+Fluss processes Paimon configurations by removing the `datalake.paimon.` prefix and then use the remaining configuration (without the prefix `datalake.paimon.`) to create the Paimon catalog. Checkout the [Paimon documentation](https://paimon.apache.org/docs/1.3/maintenance/configurations/) for more details on the available configurations.

 For example, if you want to configure to use Hive catalog, you can configure like following:
 ```yaml
@@ -65,7 +65,7 @@ Then, you must start the datalake tiering service to tier Fluss's data to the la
   you should download the corresponding [Fluss filesystem jar](/downloads#filesystem-jars) and also put it into `${FLINK_HOME}/lib`
 - Put [fluss-lake-paimon jar](https://repo1.maven.org/maven2/org/apache/fluss/fluss-lake-paimon/$FLUSS_VERSION$/fluss-lake-paimon-$FLUSS_VERSION$.jar) into `${FLINK_HOME}/lib`
 - [Download](https://flink.apache.org/downloads/) pre-bundled Hadoop jar `flink-shaded-hadoop-2-uber-*.jar` and put into `${FLINK_HOME}/lib`
-- Put Paimon's [filesystem jar](https://paimon.apache.org/docs/1.1/project/download/) into `${FLINK_HOME}/lib`, if you use s3 to store paimon data, please put `paimon-s3` jar into `${FLINK_HOME}/lib`
+- Put Paimon's [filesystem jar](https://paimon.apache.org/docs/1.3/project/download/) into `${FLINK_HOME}/lib`, if you use s3 to store paimon data, please put `paimon-s3` jar into `${FLINK_HOME}/lib`
 - The other jars that Paimon may require, for example, if you use HiveCatalog, you will need to put hive related jars

````

website/docs/quickstart/lakehouse.md (2 additions, 2 deletions)

```diff
@@ -117,7 +117,7 @@ The Docker Compose environment consists of the following containers:
 - **Flink Cluster**: a Flink `JobManager` and a Flink `TaskManager` container to execute queries.

 **Note:** The `apache/fluss-quickstart-flink` image is based on [flink:1.20.3-java17](https://hub.docker.com/layers/library/flink/1.20-java17/images/sha256:296c7c23fa40a9a3547771b08fc65e25f06bc4cfd3549eee243c99890778cafc) and
-includes the [fluss-flink](engine-flink/getting-started.md), [paimon-flink](https://paimon.apache.org/docs/1.0/flink/quick-start/) and
+includes the [fluss-flink](engine-flink/getting-started.md), [paimon-flink](https://paimon.apache.org/docs/1.3/flink/quick-start/) and
 [flink-connector-faker](https://flink-packages.org/packages/flink-faker) to simplify this guide.

 3. To start all containers, run:
@@ -136,7 +136,7 @@ You can also visit http://localhost:8083/ to see if Flink is running normally.

 :::note
 - If you want to additionally use an observability stack, follow one of the provided quickstart guides [here](maintenance/observability/quickstart.md) and then continue with this guide.
-- If you want to run with your own Flink environment, remember to download the [fluss-flink connector jar](/downloads), [flink-connector-faker](https://github.com/knaufk/flink-faker/releases), [paimon-flink connector jar](https://paimon.apache.org/docs/1.0/flink/quick-start/) and then put them to `FLINK_HOME/lib/`.
+- If you want to run with your own Flink environment, remember to download the [fluss-flink connector jar](/downloads), [flink-connector-faker](https://github.com/knaufk/flink-faker/releases), [paimon-flink connector jar](https://paimon.apache.org/docs/1.3/flink/quick-start/) and then put them to `FLINK_HOME/lib/`.
 - All the following commands involving `docker compose` should be executed in the created working directory that contains the `docker-compose.yml` file.
 :::
```

website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md (3 additions, 3 deletions)

```diff
@@ -73,7 +73,7 @@ You can choose between two views of the table:
 #### Read Data Only in Paimon

 ##### Prerequisites
-Download the [paimon-flink.jar](https://paimon.apache.org/docs/1.2/) that matches your Flink version, and place it in the `FLINK_HOME/lib` directory
+Download the [paimon-flink.jar](https://paimon.apache.org/docs/1.3/) that matches your Flink version, and place it in the `FLINK_HOME/lib` directory

 ##### Read Paimon Data
 To read only data stored in Paimon, use the `$lake` suffix in the table name. The following example demonstrates this:
@@ -92,7 +92,7 @@ SELECT * FROM orders$lake$snapshots;

 When you specify the `$lake` suffix in a query, the table behaves like a standard Paimon table and inherits all its capabilities.
 This allows you to take full advantage of Flink's query support and optimizations on Paimon, such as querying system tables, time travel, and more.
-For further information, refer to Paimon’s [SQL Query documentation](https://paimon.apache.org/docs/0.9/flink/sql-query/#sql-query).
+For further information, refer to Paimon’s [SQL Query documentation](https://paimon.apache.org/docs/1.3/flink/sql-query/#sql-query).

 #### Union Read of Data in Fluss and Paimon

@@ -125,7 +125,7 @@ Key behavior for data retention:

 ### Reading with other Engines

-Since the data tiered to Paimon from Fluss is stored as a standard Paimon table, you can use any engine that supports Paimon to read it. Below is an example using [StarRocks](https://paimon.apache.org/docs/1.2/ecosystem/starrocks/):
+Since the data tiered to Paimon from Fluss is stored as a standard Paimon table, you can use any engine that supports Paimon to read it. Below is an example using [StarRocks](https://paimon.apache.org/docs/1.3/ecosystem/starrocks/):

 First, create a Paimon catalog in StarRocks:
```
