* change PrimaryKey table to Primary Key Table across pages
* syntactic fixes
* make some more minor fixes
* fix broken link
* address yuxia's comments
- | type | required | (none) | Catalog type, must to be 'fluss' here. |
+ | type | required | (none) | Catalog type, must be 'fluss' here. |
| bootstrap.servers | required | (none) | Comma separated list of Fluss servers. |
| default-database | optional | fluss | The default database to use when switching to this catalog. |
| client.security.protocol | optional | PLAINTEXT | The security protocol used to communicate with brokers. Currently, only `PLAINTEXT` and `SASL` are supported, the configuration value is case insensitive. |
- |`client.security.{protocol}.*`| optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., client.security.sasl.jaas.config. More Details in [authentication](../security/authentication.md)| (none) |
+ |`client.security.{protocol}.*`| optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., client.security.sasl.jaas.config. More Details in [authentication](../security/authentication.md)|
- The following introduced statements assuming the current catalog is switched to the Fluss catalog using `USE CATALOG <catalog_name>` statement.
+ The following statements assume that the current catalog has been switched to the Fluss catalog using the `USE CATALOG <catalog_name>` statement.
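For context, a minimal sketch of creating and switching to such a catalog, using the `type` and `bootstrap.servers` options from the table above (the catalog name and server address are placeholders, not values from this diff):

```sql title="Flink SQL"
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'fluss-server-1:9123'
);
USE CATALOG fluss_catalog;
```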
## Create Database

- By default, FlussCatalog will use the `fluss` database in Flink. Using the following example to create a separate database in order to avoid creating tables under the default `fluss` database:
+ By default, FlussCatalog will use the `fluss` database in Flink. You can use the following example to create a separate database to avoid creating tables under the default `fluss` database:
```sql title="Flink SQL"
CREATE DATABASE my_db;
```

@@ -75,9 +75,9 @@ DROP DATABASE my_db;
## Create Table

- ### PrimaryKey Table
+ ### Primary Key Table

- The following SQL statement will create a [PrimaryKey Table](table-design/table-types/pk-table/index.md) with a primary key consisting of shop_id and user_id.
+ The following SQL statement will create a [Primary Key Table](table-design/table-types/pk-table/index.md) with a primary key consisting of shop_id and user_id.
```sql title="Flink SQL"
CREATE TABLE my_pk_table (
  shop_id BIGINT,
@@ -105,14 +105,14 @@ CREATE TABLE my_log_table (
);
```
- ### Partitioned (PrimaryKey/Log) Table
+ ### Partitioned (Primary Key/Log) Table

:::note
1. Currently, Fluss only supports partitioned field with `STRING` type
- 2. For the Partitioned PrimaryKey Table, the partitioned field (`dt` in this case) must be a subset of the primary key (`dt, shop_id, user_id` in this case)
+ 2. For the Partitioned Primary Key Table, the partitioned field (`dt` in this case) must be a subset of the primary key (`dt, shop_id, user_id` in this case)
:::

- The following SQL statement creates a Partitioned PrimaryKey Table in Fluss.
+ The following SQL statement creates a Partitioned Primary Key Table in Fluss.

```sql title="Flink SQL"
CREATE TABLE my_part_pk_table (
```

@@ -145,7 +145,7 @@ But you can still use the [Add Partition](engine-flink/ddl.md#add-partition) sta
#### Multi-Fields Partitioned Table

- Fluss also support[Multi-Fields Partitioning](table-design/data-distribution/partitioning.md#multi-field-partitioned-tables), the following SQL statement creates a Multi-Fields Partitioned Log Table in Fluss:
+ Fluss also supports [Multi-Fields Partitioning](table-design/data-distribution/partitioning.md#multi-field-partitioned-tables); the following SQL statement creates a Multi-Fields Partitioned Log Table in Fluss:

- Fluss also support creat Auto Partitioned (PrimaryKey/Log) Table. The following SQL statement creates an Auto Partitioned PrimaryKey Table in Fluss.
+ Fluss also supports creating Auto Partitioned (Primary Key/Log) Tables. The following SQL statement creates an Auto Partitioned Primary Key Table in Fluss.
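The accompanying statement is elided by the diff. As a hedged sketch, an auto-partitioned Primary Key Table might look like the following; the `table.auto-partition.*` option names are assumptions based on Fluss's configuration naming and should be verified against the configuration reference:

```sql title="Flink SQL"
CREATE TABLE my_auto_part_pk_table (
  dt STRING,
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT,
  PRIMARY KEY (dt, shop_id, user_id) NOT ENFORCED
) PARTITIONED BY (dt) WITH (
  'table.auto-partition.enabled' = 'true',
  'table.auto-partition.time-unit' = 'day'
);
```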
- For more details about Auto Partitioned (PrimaryKey/Log) Table, refer to [Auto Partitioning](table-design/data-distribution/partitioning.md#auto-partitioning).
+ For more details about Auto Partitioned (Primary Key/Log) Tables, refer to [Auto Partitioning](table-design/data-distribution/partitioning.md#auto-partitioning).

### Options
@@ -238,8 +238,8 @@ This will entirely remove all the data of the table in the Fluss cluster.

## Add Partition

- Fluss support manually add partitions to an exists partitioned table by Fluss Catalog. If the specified partition
- not exists, Fluss will create the partition. If the specified partition already exists, Fluss will ignore the request
+ Fluss supports manually adding partitions to an existing partitioned table through the Fluss Catalog. If the specified partition
+ does not exist, Fluss will create the partition. If the specified partition already exists, Fluss will ignore the request
or throw an exception.

To add partitions, run:
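The statement itself falls outside this hunk. A minimal sketch using the standard Flink `ALTER TABLE ... ADD PARTITION` syntax and the partitioned table from earlier (the partition value is a placeholder):

```sql title="Flink SQL"
ALTER TABLE my_part_pk_table ADD PARTITION (dt = '2025-03-05');
```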
@@ -275,8 +275,8 @@ For more details, refer to the [Flink SHOW PARTITIONS](https://nightlies.apache.

## Drop Partition

- Fluss also support manually drop partitions from an exists partitioned table by Fluss Catalog. If the specified partition
- not exists, Fluss will ignore the request or throw an exception.
+ Fluss also supports manually dropping partitions from an existing partitioned table through the Fluss Catalog. If the specified partition
+ does not exist, Fluss will ignore the request or throw an exception.

To drop partitions, run:
@@ -289,5 +289,3 @@ ALTER TABLE my_multi_fields_part_log_table DROP PARTITION (dt = '2025-03-05', na

For more details, refer to the [Flink ALTER TABLE(DROP)](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/table/sql/alter/#drop) documentation.
website/docs/intro.md (2 additions & 2 deletions)
@@ -26,7 +26,7 @@ Fluss is a streaming storage built for real-time analytics which can serve as th

- It bridges the gap between **streaming data** and the data **Lakehouse** by enabling low-latency, high-throughput data ingestion and processing while seamlessly integrating with popular compute engines like **Apache Flink**, while**Apache Spark**, and **StarRocks** are coming soon.
+ It bridges the gap between **streaming data** and the data **Lakehouse** by enabling low-latency, high-throughput data ingestion and processing while seamlessly integrating with popular compute engines like **Apache Flink**, with **Apache Spark** and **StarRocks** coming soon.

Fluss supports `streaming reads` and `writes` with sub-second latency and stores data in a columnar format, enhancing query performance and reducing storage costs.
It offers flexible table types, including append-only **Log Tables** and updatable **PrimaryKey Tables**, to accommodate diverse real-time analytics and processing needs.

@@ -44,7 +44,7 @@ The following is a list of (but not limited to) use-cases that Fluss shines ✨:
website/docs/table-design/overview.md (7 additions & 9 deletions)
@@ -32,13 +32,13 @@ Tables are classified into two types based on the presence of a primary key:

- **Log Tables:**
  - Designed for append-only scenarios.
  - Support only INSERT operations.
- - **PrimaryKey Tables:**
+ - **Primary Key Tables:**
  - Used for updating and managing data in business databases.
  - Support INSERT, UPDATE, and DELETE operations based on the defined primary key.

- A Table becomes a [Partitioned Table](table-design/data-distribution/partitioning.md) when a partition column is defined. Data with the same partition value is stored in the same partition. Partition columns can be applied to both Log Tables and PrimaryKey Tables, but with specific considerations:
+ A Table becomes a [Partitioned Table](data-distribution/partitioning.md) when a partition column is defined. Data with the same partition value is stored in the same partition. Partition columns can be applied to both Log Tables and Primary Key Tables, but with specific considerations:

- **For Log Tables**, partitioning is commonly used for log data, typically based on date columns, to facilitate data separation and cleaning.
- - **For PrimaryKey Tables**, the partition column must be a subset of the primary key to ensure uniqueness.
+ - **For Primary Key Tables**, the partition column must be a subset of the primary key to ensure uniqueness.

This design ensures efficient data organization, flexibility in handling different use cases, and adherence to data integrity constraints.

@@ -58,14 +58,12 @@ The number of buckets `N` can be configured per table. A bucket is the smallest

The data of a bucket consists of a LogTablet and an (optional) KvTablet.

### LogTablet

- A **LogTablet** needs to be generated for each bucket of Log and PrimaryKey tables.
- For Log Tables, the LogTablet is both the primary table data and the log data. For PrimaryKey tables, the LogTablet acts
+ A **LogTablet** needs to be generated for each bucket of Log and Primary Key Tables.
+ For Log Tables, the LogTablet is both the primary table data and the log data. For Primary Key Tables, the LogTablet acts
as the log data for the primary table data.

- **Segment:** The smallest unit of log storage in the **LogTablet**. A segment consists of an **.index** file and a **.log** data file.
- - **.index:** An `offset sparse index` that stores the mappings between the physical byte address in the message relative offset -> .log file.
+ - **.index:** An `offset sparse index` that maps message relative offsets to their corresponding physical byte addresses in the .log file.
- **.log:** Compact arrangement of log data.

### KvTablet

- Each bucket of the PrimaryKey table needs to generate a KvTablet. Underlying, each KvTablet corresponds to an embedded RocksDB instance. RocksDB is an LSM (log structured merge) engine which helps KvTablet supports high-performance updates and lookup query.
+ Each bucket of the Primary Key Table needs to generate a KvTablet. Under the hood, each KvTablet corresponds to an embedded RocksDB instance. RocksDB is an LSM (log-structured merge) engine, which helps the KvTablet support high-performance updates and lookup queries.
website/docs/table-design/table-types/log-table.md (3 additions & 3 deletions)
@@ -60,7 +60,7 @@ Log Tables in Fluss allow real-time data consumption, preserving the order of da

## Column Pruning

Column pruning is a technique used to reduce the amount of data that needs to be read from storage by eliminating unnecessary columns from the query.
- Fluss supports column pruning for Log Tables and the changelog of PrimaryKey Tables, which can significantly improve query performance by reducing the amount of data that needs to be read from storage and lowering networking costs.
+ Fluss supports column pruning for Log Tables and the changelog of Primary Key Tables, which can significantly improve query performance by reducing the amount of data that needs to be read from storage and lowering networking costs.

What sets Fluss apart is its ability to apply **column pruning during streaming reads**, a capability that is both unique and industry-leading. This ensures that even in real-time streaming scenarios, only the required columns are processed, minimizing resource usage and maximizing efficiency.
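For illustration, pruning is driven simply by the query's projection. A sketch with hypothetical table and column names:

```sql title="Flink SQL"
-- Only order_id and amount are read from storage; all other columns are pruned.
-- my_log_table and its columns are hypothetical placeholders.
SELECT order_id, amount FROM my_log_table;
```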
@@ -88,7 +88,7 @@ Additionally, compression is applied to each column independently, preserving th

When compression is enabled:
- For **Log Tables**, data is compressed by the writer on the client side, written in a compressed format, and decompressed by the log scanner on the client side.
- - For **PrimaryKey Table changelogs**, compression is performed server-side since the changelog is generated on the server.
+ - For **Primary Key Table changelogs**, compression is performed server-side since the changelog is generated on the server.

Log compression significantly reduces networking and storage costs. Benchmark results demonstrate that using ZSTD compression at level 3 achieves a compression ratio of approximately **5x** (e.g., reducing 5GB of data to 1GB).
Furthermore, read/write throughput improves substantially due to reduced networking overhead.
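The configuration example referenced by the next hunk is elided here. A hedged sketch of enabling a codec on a Log Table; the option key is an assumption based on Fluss's configuration naming and is not confirmed by this diff:

```sql title="Flink SQL"
-- Table and column names are placeholders.
CREATE TABLE my_compressed_log_table (
  order_id BIGINT,
  item STRING
) WITH (
  'table.log.arrow.compression.type' = 'LZ4_FRAME'
);
```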
@@ -131,4 +131,4 @@ In the above example, we set the compression codec to `LZ4_FRAME` and the compre
:::

## Log Tiering

- Log Table supports tiering data to different storage tiers. See more details about [Remote Log](maintenance/tiered-storage/remote-storage.md).
+ Log Table supports tiering data to different storage tiers. See more details about [Remote Log](maintenance/tiered-storage/remote-storage.md).
website/docs/table-design/table-types/pk-table/index.md (17 additions & 17 deletions)
@@ -1,5 +1,5 @@
---
- title: PrimaryKey Table
+ title: Primary Key Table
sidebar_position: 1
---

@@ -19,15 +19,15 @@ sidebar_position: 1
limitations under the License.
-->

- # PrimaryKey Table
+ # Primary Key Table

## Basic Concept

- PrimaryKey Table in Fluss ensure the uniqueness of the specified primary key and supports `INSERT`, `UPDATE`,
+ Primary Key Table in Fluss ensures the uniqueness of the specified primary key and supports `INSERT`, `UPDATE`,
and `DELETE` operations.

- A PrimaryKey Table is created by specifying a `PRIMARY KEY` clause in the `CREATE TABLE` statement. For example, the
- following Flink SQL statement creates a PrimaryKey Table with `shop_id` and `user_id` as the primary key and distributes
+ A Primary Key Table is created by specifying a `PRIMARY KEY` clause in the `CREATE TABLE` statement. For example, the
+ following Flink SQL statement creates a Primary Key Table with `shop_id` and `user_id` as the primary key and distributes
the data into 4 buckets:

```sql title="Flink SQL"
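-- The statement body is elided by the diff hunk below; this is a hedged sketch.
-- Column names beyond shop_id and user_id, and the 'bucket.num' option, are assumptions.
CREATE TABLE my_pk_table (
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT,
  total_amount INT,
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
) WITH (
  'bucket.num' = '4'
);
```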
@@ -47,13 +47,13 @@ In Fluss primary key table, each row of data has a unique primary key.

If multiple entries with the same primary key are written to the Fluss primary key table, only the last entry will be
retained.

- For [Partitioned PrimaryKey Table](table-design/data-distribution/partitioning.md), the primary key must contain the
+ For [Partitioned Primary Key Table](table-design/data-distribution/partitioning.md), the primary key must contain the
partition key.

## Bucket Assigning

For primary key tables, Fluss always determines which bucket the data belongs to based on the hash value of the bucket
- key (It must be a subset of the primary keys excluding partition keys of the primary key table) for each record. If the bucket key is not specified, the bucket key will used as the primary key (excluding the partition key).
+ key (it must be a subset of the primary key, excluding any partition keys) for each record. If the bucket key is not specified, the primary key (excluding the partition key) is used as the bucket key.
Data with the same hash value will be distributed to the same bucket.
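As an illustration, a sketch of pinning the bucket key explicitly; the `bucket.key` and `bucket.num` option names are assumptions based on Fluss's configuration naming:

```sql title="Flink SQL"
-- Rows with the same shop_id hash to the same bucket.
CREATE TABLE my_bucketed_pk_table (
  shop_id BIGINT,
  user_id BIGINT,
  total_amount INT,
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
) WITH (
  'bucket.num' = '4',
  'bucket.key' = 'shop_id'
);
```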
## Partial Update

@@ -92,20 +92,20 @@ follows:

## Merge Engines

- The **Merge Engine** in Fluss is a core component designed to efficiently handle and consolidate data updates for PrimaryKey Tables.
+ The **Merge Engine** in Fluss is a core component designed to efficiently handle and consolidate data updates for Primary Key Tables.
It offers users the flexibility to define how incoming data records are merged with existing records sharing the same primary key.
- However, users can specify a different merge engine to customize the merging behavior according to their specific use cases
+ However, users can specify a different merge engine to customize the merging behavior according to their specific use cases.
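As a hedged sketch of how such an option might be set; the `table.merge-engine` key and `first_row` value are assumptions based on Fluss's option naming and should be verified against the configuration reference:

```sql title="Flink SQL"
-- first_row keeps the first record per primary key instead of the last.
CREATE TABLE my_first_row_table (
  user_id BIGINT,
  behavior STRING,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'table.merge-engine' = 'first_row'
);
```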
- Fluss will capture the changes when inserting, updating, deleting records on the primary-key table, which is known as
+ Fluss will capture the changes when inserting, updating, and deleting records on the Primary Key Table, which is known as
the changelog. Downstream consumers can directly consume the changelog to obtain the changes in the table. For example,
consider the following primary key table in Fluss:

@@ -119,7 +119,7 @@ CREATE TABLE T
);

- If the data written to the primary-key table is
+ If the data written to the Primary Key Table is
sequentially `+I(1, 2.0, 'apple')`, `+I(1, 4.0, 'banana')`, `-D(1, 4.0, 'banana')`, then the following change data will
be generated. For example, the following Flink SQL statements illustrate this behavior:
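The statements themselves are cut off by this diff. As a hedged sketch, standard changelog semantics for those three writes would yield:

```text
+I(1, 2.0, 'apple')     -- insert
-U(1, 2.0, 'apple')     -- update-before for key 1
+U(1, 4.0, 'banana')    -- update-after for key 1
-D(1, 4.0, 'banana')    -- delete
```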
@@ -162,13 +162,13 @@ For primary key tables, Fluss supports various kinds of querying abilities.

For a primary key table, the default read method is a full snapshot followed by incremental data. First, the
snapshot data of the table is consumed, followed by the changelog data of the table.

- It is also possible to only consume the changelog data of the table. For more details, please refer to the [Flink Reads](engine-flink/reads.md)
+ It is also possible to only consume the changelog data of the table. For more details, please refer to [Flink Reads](../../../engine-flink/reads.md).

### Lookup

- Fluss primary key table can lookup data by the primary keys. If the key exists in Fluss, lookup will return a unique row. it always used in [Flink Lookup Join](engine-flink/lookups.md#lookup).
+ Fluss primary key table can look up data by the primary keys. If the key exists in Fluss, the lookup will return a unique row. It is typically used in [Flink Lookup Join](../../../engine-flink/lookups.md#lookup).
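A sketch of such a lookup join, using standard Flink temporal-join syntax; the `orders` and `customers` tables and their columns are hypothetical placeholders:

```sql title="Flink SQL"
-- customers is a Fluss Primary Key Table keyed by customer_id.
SELECT o.order_id, c.address
FROM orders AS o
JOIN customers FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.customer_id = c.customer_id;
```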
### Prefix Lookup

Fluss primary key table can also do prefix lookup by a prefix subset of the primary keys. Unlike lookup, prefix lookup
- will scan data based on the prefix of primary keys and may return multiple rows. It always used in [Flink Prefix Lookup Join](engine-flink/lookups.md#prefix-lookup).
+ will scan data based on the prefix of primary keys and may return multiple rows. It is typically used in [Flink Prefix Lookup Join](../../../engine-flink/lookups.md#prefix-lookup).
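A sketch, assuming a table keyed by `(shop_id, user_id)` and joining only on the prefix `shop_id`; table and column names are placeholders:

```sql title="Flink SQL"
-- Joining on shop_id alone (a prefix of the primary key) may return multiple rows per key.
SELECT o.order_id, t.user_id, t.total_amount
FROM orders AS o
JOIN my_pk_table FOR SYSTEM_TIME AS OF o.proc_time AS t
  ON o.shop_id = t.shop_id;
```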