Commit c75a085

improve docs

1 parent 534bf68

2 files changed (+4, -12 lines)

website/docs/maintenance/filesystems/hdfs.md (2 additions, 2 deletions)

@@ -61,8 +61,8 @@ For most use cases, these work perfectly. However, you should configure your mac
 1. Your HDFS uses kerberos security
 2. You need to avoid version conflicts between Fluss's bundled hadoop libraries and your HDFS cluster

-Fluss automatically loads HDFS dependencies on the machine via the HADOOP_CLASSPATH environment variable.
-Make sure that the HADOOP_CLASSPATH environment variable is set up (it can be checked by running echo $HADOOP_CLASSPATH).
+Fluss automatically loads HDFS dependencies on the machine via the `HADOOP_CLASSPATH` environment variable.
+Make sure that the `HADOOP_CLASSPATH` environment variable is set up (it can be checked by running `echo $HADOOP_CLASSPATH`).
 If not, set it up using
 ```bash
 export HADOOP_CLASSPATH=`hadoop classpath`
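As a quick illustration of the check-then-set flow described in the hunk above, a minimal shell sketch (assuming the `hadoop` CLI is installed and on `PATH` on the Fluss server host):

```bash
# Check whether the Hadoop classpath is already exported; an empty result
# means Fluss cannot pick up the HDFS dependencies from the environment.
echo $HADOOP_CLASSPATH

# If empty, derive the classpath from the local Hadoop installation and
# export it before starting the Fluss server processes.
export HADOOP_CLASSPATH=`hadoop classpath`
```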

website/docs/maintenance/tiered-storage/lakehouse-storage.md (2 additions, 10 deletions)

@@ -49,17 +49,9 @@ datalake.paimon.warehouse: hdfs:///path/to/warehouse
 While Fluss includes the core Paimon library, additional jars may still need to be manually added to `${FLUSS_HOME}/plugins/paimon/` according to your needs.
 For example:
 - If you are using Paimon filesystem catalog with OSS filesystem, you need to put `paimon-oss-<paimon_version>.jar` into directory `${FLUSS_HOME}/plugins/paimon/`.
-- If you are using Hive catalog, you need to put [the flink sql hive connector jar](https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/table/hive/overview/#using-bundled-hive-jar) into directory `${FLUSS_HOME}/plugins/paimon/`.
+- If you are using Paimon Hive catalog, you need to put [the flink sql hive connector jar](https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/table/hive/overview/#using-bundled-hive-jar) into directory `${FLUSS_HOME}/plugins/paimon/`.

-#### Hadoop Environment Configuration(required for kerberos-secured HDFS)
-Other usage scenarios can skip this section.
-
-Fluss automatically loads HDFS dependencies on the machine via the HADOOP_CLASSPATH environment variable.
-Make sure that the HADOOP_CLASSPATH environment variable is set up (it can be checked by running echo $HADOOP_CLASSPATH).
-If not, set it up using
-```bash
-export HADOOP_CLASSPATH=`hadoop classpath`
-```
+Additionally, when using Paimon with HDFS, you must also configure the Fluss server with the Hadoop environment. See the [HDFS setup guide](/docs/maintenance/filesystems/hdfs.md) for detailed instructions.

 ### Start The Datalake Tiering Service
 Then, you must start the datalake tiering service to tier Fluss's data to the lakehouse storage.
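To illustrate the plugin step from the hunk above, a hypothetical shell sketch of staging the extra Paimon jars into the Fluss plugin directory (the jar file names are illustrative placeholders, not names prescribed by this commit):

```bash
# Hypothetical example: stage the Paimon OSS filesystem jar so Fluss can
# load it; substitute the real <paimon_version> for your deployment.
cp paimon-oss-<paimon_version>.jar ${FLUSS_HOME}/plugins/paimon/

# For a Paimon Hive catalog, the bundled Flink SQL Hive connector jar
# (see the linked Flink docs for the exact artifact) goes into the same
# plugin directory.
cp flink-sql-connector-hive-*.jar ${FLUSS_HOME}/plugins/paimon/
```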
