Fluss processes Paimon configurations by stripping the `datalake.paimon.` prefix from each key and then uses the remaining configuration (without the `datalake.paimon.` prefix) to create the Paimon catalog. Check out the [Paimon documentation](https://paimon.apache.org/docs/1.1/maintenance/configurations/) for more details on the available configurations.
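Conceptually, the prefix handling works like the following sketch (illustrative Python, not Fluss source code; the option names are just examples):

```python
# Minimal sketch of the datalake.paimon. prefix stripping Fluss applies
# before handing options to the Paimon catalog factory.
PREFIX = "datalake.paimon."

def to_paimon_options(fluss_conf: dict) -> dict:
    """Keep only keys carrying the datalake.paimon. prefix and strip it."""
    return {k[len(PREFIX):]: v for k, v in fluss_conf.items() if k.startswith(PREFIX)}

conf = {
    "datalake.format": "paimon",
    "datalake.paimon.metastore": "filesystem",
    "datalake.paimon.warehouse": "/tmp/paimon",
}
print(to_paimon_options(conf))
# → {'metastore': 'filesystem', 'warehouse': '/tmp/paimon'}
```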
For example, if you want to use a Hive catalog, you can configure it as follows:
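A `server.yaml` fragment along these lines (a sketch only: the metastore URI and warehouse path are placeholders, and `metastore`, `uri`, and `warehouse` are standard Paimon catalog options passed through after the prefix is stripped):

```yaml
datalake.format: paimon
# Everything after the datalake.paimon. prefix is handed to Paimon as-is:
datalake.paimon.metastore: hive
datalake.paimon.uri: thrift://<hive-metastore-host>:<port>
datalake.paimon.warehouse: hdfs:///path/to/warehouse
```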
While Fluss includes the core Paimon library, additional jars may still need to be added manually to `${FLUSS_HOME}/plugins/paimon/`, depending on your setup.
For example:
- If you are using the Paimon filesystem catalog with the OSS filesystem, you need to put `paimon-oss-<paimon_version>.jar` into the directory `${FLUSS_HOME}/plugins/paimon/`.
- If you are using a Hive catalog, you need to put [the Flink SQL Hive connector jar](https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/table/hive/overview/#using-bundled-hive-jar) into the directory `${FLUSS_HOME}/plugins/paimon/`.
#### Hadoop Environment Configuration (required for Kerberos-secured HDFS)
Other usage scenarios can skip this section.
**Step 1: Set Hadoop Classpath**
```bash
export HADOOP_CLASSPATH=`hadoop classpath`
```
**Step 2: Add the following to your `server.yaml` file**