Commit 0d87af1

removed hdfs (#1843)
1 parent 3fc7012 commit 0d87af1

13 files changed (+6, -175 lines changed)

docs/en/guides/00-products/01-dee/10-enterprise-features.md

Lines changed: 1 addition & 1 deletion

@@ -76,7 +76,7 @@ width={['70%', '15%', '15%']}
 thead={['Functionality', 'Databend Community', 'Databend Enterprise']}
 tbody={[
 ['Deployment Support: K8s, Baremetal, Installer', '✓', '✓'],
-['Backend Storage Support: S3, Azblob, GCS, OSS, COS, HDFS', '✓', '✓'],
+['Backend Storage Support: S3, Azblob, GCS, OSS, COS', '✓', '✓'],
 ['x86_64 & ARM64 Architecture', '✓', '✓'],
 ['Compatible with LoongArch, openEuler, etc.', '✓', '✓'],
 ['Monitoring and Alerting APIs', '✓', '✓'],

docs/en/guides/10-deploy/04-references/02-node-config/02-query-config.md

Lines changed: 1 addition & 20 deletions

@@ -120,7 +120,7 @@ The following is a list of the parameters available within the [storage] section
 
 | Parameter      | Description |
 | -------------- | ----------- |
-| type           | The type of storage used. It can be one of the following: fs, s3, azblob, gcs, oss, cos, hdfs, webhdfs. |
+| type           | The type of storage used. It can be one of the following: fs, s3, azblob, gcs, oss, cos. |
 | allow_insecure | Defaults to false. Set it to true when deploying Databend on MinIO or loading data via a URL prefixed by `http://`, otherwise, you may encounter the following error: "copy from insecure storage is not allowed. Please set `allow_insecure=true`". |
 
 ### [storage.fs] Section

@@ -221,25 +221,6 @@ The following is a list of the parameters available within the [storage.cos] sec
 | secret_key | The secret key for authenticating with Tencent COS. |
 | root       | Specifies a directory within the bucket from which Databend will operate. Example: if a bucket's root directory has a folder called `myroot`, then `root = "myroot/"`. |
 
-### [storage.hdfs] Section
-
-The following is a list of the parameters available within the [storage.hdfs] section:
-
-| Parameter | Description |
-| --------- | ----------- |
-| name_node | The name node address for Hadoop Distributed File System (HDFS). |
-| root      | Specifies a directory from which Databend will operate. |
-
-### [storage.webhdfs] Section
-
-The following is a list of the parameters available within the [storage.webhdfs] section:
-
-| Parameter    | Description |
-| ------------ | ----------- |
-| endpoint_url | The URL endpoint for WebHDFS (Hadoop Distributed File System). |
-| root         | Specifies a directory from which Databend will operate. |
-| delegation   | Delegation token for authentication and authorization. |
-
 ## [cache] Section
 
 The following is a list of the parameters available within the [cache] section:

docs/en/guides/51-access-data-lake/01-hive.md

Lines changed: 0 additions & 10 deletions

@@ -61,16 +61,6 @@ CONNECTION = (
 | URL | Yes | Location of the external storage linked to this catalog. This could be a bucket or a folder within a bucket. For example, 's3://databend-toronto/'. |
 | connection_parameter | Yes | Connection parameters to establish connections with external storage. The required parameters vary based on the specific storage service and authentication methods. Refer to [Connection Parameters](/sql/sql-reference/connect-parameters) for detailed information. |
 
-:::note
-To read data from HDFS, you need to set the following environment variables before starting Databend. These environment variables ensure that Databend can access the necessary Java and Hadoop dependencies to interact with HDFS effectively. Make sure to replace "/path/to/java" and "/path/to/hadoop" with the actual paths to your Java and Hadoop installations, and adjust the CLASSPATH to include all the required Hadoop JAR files.
-```shell
-export JAVA_HOME=/path/to/java
-export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}
-export HADOOP_HOME=/path/to/hadoop
-export CLASSPATH=/all/hadoop/jar/files
-```
-:::
-
 ### SHOW CREATE CATALOG
 
 Returns the detailed configuration of a specified catalog, including its type and storage parameters.
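
For orientation, the `URL` and connection parameters documented in the diff above slot into a catalog definition along these lines. This is a sketch only: the metastore address and the credential parameter names (`METASTORE_ADDRESS`, `AWS_KEY_ID`, `AWS_SECRET_KEY`) are illustrative assumptions, not values taken from this commit.

```sql
-- Hedged sketch: a Hive catalog backed by S3 object storage.
-- All addresses, bucket names, and keys below are placeholders.
CREATE CATALOG my_hive_catalog
TYPE = HIVE
CONNECTION = (
    METASTORE_ADDRESS = '127.0.0.1:9083'   -- Hive metastore host:port (assumed parameter name)
    URL = 's3://databend-toronto/'         -- external storage location, as documented above
    AWS_KEY_ID = '<your-key-id>'           -- credential parameters vary by storage service
    AWS_SECRET_KEY = '<your-secret-key>'
);
```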

docs/en/guides/51-access-data-lake/02-iceberg.md

Lines changed: 0 additions & 9 deletions

@@ -84,15 +84,6 @@ CONNECTION=(
 | `s3.disable-ec2-metadata` | Option to disable loading credentials from EC2 metadata (typically used with `s3.allow-anonymous`). |
 | `s3.disable-config-load`  | Option to disable loading configuration from config files and environment variables. |
 
-:::note
-To read data from HDFS, you need to set the following environment variables before starting Databend. These environment variables ensure that Databend can access the necessary Java and Hadoop dependencies to interact with HDFS effectively. Make sure to replace "/path/to/java" and "/path/to/hadoop" with the actual paths to your Java and Hadoop installations, and adjust the CLASSPATH to include all the required Hadoop JAR files.
-```shell
-export JAVA_HOME=/path/to/java
-export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}
-export HADOOP_HOME=/path/to/hadoop
-export CLASSPATH=/all/hadoop/jar/files
-```
-:::
 
 ### SHOW CREATE CATALOG
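
A matching sketch for the Iceberg side, assuming a REST-style catalog fronting S3: the `TYPE`, `ADDRESS`, and `WAREHOUSE` parameter names and the option-key quoting are assumptions, while the `s3.*` option names mirror the table in the diff above.

```sql
-- Hedged sketch: an Iceberg catalog over S3, using two of the s3.* options
-- documented above. Endpoint and warehouse values are placeholders.
CREATE CATALOG my_iceberg_catalog
TYPE = ICEBERG
CONNECTION = (
    TYPE = 'rest'                              -- assumed catalog flavor
    ADDRESS = 'http://127.0.0.1:8181'          -- REST catalog endpoint (placeholder)
    WAREHOUSE = 's3://my-iceberg-warehouse/'   -- placeholder bucket
    "s3.allow-anonymous" = 'true'              -- documented option: skip credential lookup
    "s3.disable-ec2-metadata" = 'true'         -- documented option: no EC2 metadata probing
);
```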

docs/en/release-notes/96-v1.0.0.md

Lines changed: 0 additions & 2 deletions

@@ -21,13 +21,11 @@ To read more about this release, please refer to our detailed [blog post](https:
 - feat: add internal_merge_on_read_mutation config option by @dantengsky
 - feat: Iceberg/create-catalog by @ClSlaid
 - feat(result cache): better the setting name and the desc by @BohuTANG
-- feat: Add support for copying from webhdfs by @ClSlaid
 - feat(website): update website index styles by @Carlosfengv
 - feat(query): use decimal to store u128 u256 keys and support group by decimal by @sundy-li
 - feat(planner): Introduce bitmap to record applied rules by @dusx1981
 - feat(query): support aggregate spill to object storage by @zhang2014
 - feat: Adopt OpenDAL's native write retry support by @Xuanwo
-- feat: backend webhdfs by @ClSlaid
 - feat(query): Support Map data type create table and insert values by @b41sh
 - feat(query): support decimal256 select insert by @TCeason

docs/en/sql-reference/00-sql-reference/20-system-tables/system-configs.md

Lines changed: 0 additions & 2 deletions

@@ -109,8 +109,6 @@ mysql> SELECT * FROM system.configs;
 | storage | azblob.container      | | |
 | storage | azblob.endpoint_url   | | |
 | storage | azblob.root           | | |
-| storage | hdfs.name_node        | | |
-| storage | hdfs.root             | | |
 | storage | obs.access_key_id     | | |
 | storage | obs.secret_access_key | | |
 | storage | obs.bucket            | | |
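
As the hunk context shows, these settings surface through a plain query against `system.configs`; filtering on the `name` column is a quick way to confirm which storage backends a running node still exposes (the `LIKE` pattern below is just an example).

```sql
-- List every config row; after this commit the hdfs.* rows are gone.
SELECT * FROM system.configs;

-- Or narrow to one backend's settings by key prefix:
SELECT * FROM system.configs WHERE name LIKE 'azblob.%';
```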

docs/en/sql-reference/00-sql-reference/51-connect-parameters.md

Lines changed: 0 additions & 37 deletions

@@ -165,43 +165,6 @@ CREATE STAGE my_cos_stage
 
 </TabItem>
 
-<TabItem value="HDFS" label="HDFS">
-
-The following table lists connection parameters for accessing Hadoop Distributed File System (HDFS):
-
-| Parameter | Required? | Description |
-|-----------|-----------|-------------------------------------------------------|
-| name_node | Yes       | HDFS NameNode address for connecting to the cluster.  |
-
-```sql title='Examples'
-CREATE STAGE my_hdfs_stage
-    'hdfs://my-bucket'
-    CONNECTION = (
-        NAME_NODE = 'hdfs://<namenode-host>:<port>'
-    );
-```
-
-</TabItem>
-
-<TabItem value="WebHDFS" label="WebHDFS">
-
-The following table lists connection parameters for accessing WebHDFS:
-
-| Parameter    | Required? | Description                             |
-|--------------|-----------|-----------------------------------------|
-| endpoint_url | Yes       | Endpoint URL for WebHDFS.               |
-| delegation   | No        | Delegation token for accessing WebHDFS. |
-
-```sql title='Examples'
-CREATE STAGE my_webhdfs_stage
-    'webhdfs://my-bucket'
-    CONNECTION = (
-        ENDPOINT_URL = 'http://<namenode-host>:<port>'
-    );
-```
-
-</TabItem>
-
 <TabItem value="Hugging Face" label="HuggingFace">
 
 The following table lists connection parameters for accessing Hugging Face:

docs/en/sql-reference/10-sql-commands/00-ddl/03-stage/01-ddl-create-stage.md

Lines changed: 0 additions & 39 deletions

@@ -106,45 +106,6 @@ externalLocation ::=
 
 For the connection parameters available for accessing Tencent Cloud Object Storage, see [Connection Parameters](/00-sql-reference/51-connect-parameters.md).
 </TabItem>
-
-<TabItem value="HDFS" label="HDFS">
-
-```sql
-externalLocation ::=
-  "hdfs://<endpoint_url>[<path>]"
-  CONNECTION = (
-    <connection_parameters>
-  )
-```
-
-For the connection parameters available for accessing HDFS, see [Connection Parameters](/00-sql-reference/51-connect-parameters.md).
-</TabItem>
-
-<TabItem value="WebHDFS" label="WebHDFS">
-
-```sql
-externalLocation ::=
-  "webhdfs://<endpoint_url>[<path>]"
-  CONNECTION = (
-    <connection_parameters>
-  )
-```
-
-For the connection parameters available for accessing WebHDFS, see [Connection Parameters](/00-sql-reference/51-connect-parameters.md).
-</TabItem>
-
-<TabItem value="Hugging Face" label="Hugging Face">
-
-```sql
-externalLocation ::=
-  "hf://<repo_id>[<path>]"
-  CONNECTION = (
-    <connection_parameters>
-  )
-```
-
-For the connection parameters available for accessing Hugging Face, see [Connection Parameters](/00-sql-reference/51-connect-parameters.md).
-</TabItem>
 </Tabs>
 
 ### FILE_FORMAT
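
The `externalLocation` grammar for the remaining backends follows the same shape as the tabs deleted above, so a minimal sketch of an external stage on S3 may help. The bucket and credential values are placeholders, and the parameter names follow the Connection Parameters page rather than anything shown in this diff.

```sql
-- Hedged sketch: an external stage over S3; bucket and keys are placeholders.
CREATE STAGE my_s3_stage
    's3://my-bucket/files/'
    CONNECTION = (
        ACCESS_KEY_ID = '<your-access-key-id>'
        SECRET_ACCESS_KEY = '<your-secret-access-key>'
    );
```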

docs/en/sql-reference/10-sql-commands/00-ddl/13-connection/create-connection.md

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ CREATE [ OR REPLACE ] CONNECTION [ IF NOT EXISTS ] <connection_name>
 
 | Parameter      | Description |
 |----------------|-------------|
-| STORAGE_TYPE   | Type of storage service. Possible values include: `s3`, `azblob`, `gcs`, `oss`, `cos`, `hdfs`, and `webhdfs`. |
+| STORAGE_TYPE   | Type of storage service. Possible values include: `s3`, `azblob`, `gcs`, `oss`, and `cos`. |
 | storage_params | Vary based on storage type and authentication method. See [Connection Parameters](../../../00-sql-reference/51-connect-parameters.md) for details. |
 
 ## Examples
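
Following the syntax in the hunk header, a connection for one of the remaining `STORAGE_TYPE` values looks roughly like the sketch below; the credential parameter names are illustrative and depend on the storage type, per the linked Connection Parameters page.

```sql
-- Hedged sketch: a named S3 connection; key values are placeholders.
CREATE CONNECTION my_s3_conn
    STORAGE_TYPE = 's3'
    ACCESS_KEY_ID = '<your-access-key-id>'
    SECRET_ACCESS_KEY = '<your-secret-access-key>';
```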

docs/en/sql-reference/10-sql-commands/10-dml/dml-copy-into-location.md

Lines changed: 0 additions & 25 deletions

@@ -112,31 +112,6 @@ externalLocation ::=
 For the connection parameters available for accessing Tencent Cloud Object Storage, see [Connection Parameters](/00-sql-reference/51-connect-parameters.md).
 </TabItem>
 
-<TabItem value="Hadoop Distributed File System (HDFS)" label="HDFS">
-
-```sql
-externalLocation ::=
-  'hdfs://<endpoint_url>[<path>]'
-  CONNECTION = (
-    <connection_parameters>
-  )
-```
-
-For the connection parameters available for accessing HDFS, see [Connection Parameters](/00-sql-reference/51-connect-parameters.md).
-</TabItem>
-
-<TabItem value="WebHDFS" label="WebHDFS">
-
-```sql
-externalLocation ::=
-  'webhdfs://<endpoint_url>[<path>]'
-  CONNECTION = (
-    <connection_parameters>
-  )
-```
-
-For the connection parameters available for accessing WebHDFS, see [Connection Parameters](/00-sql-reference/51-connect-parameters.md).
-</TabItem>
 </Tabs>
 
 ### FILE_FORMAT
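
With the HDFS and WebHDFS tabs removed, an unload targets one of the remaining external locations. A sketch using the S3 form of `externalLocation` follows; the bucket, credentials, and source table name are placeholders, and the `FILE_FORMAT` options are documented in that file's FILE_FORMAT section.

```sql
-- Hedged sketch: unload a table to S3 as Parquet; all names are placeholders.
COPY INTO 's3://my-bucket/unload/'
    CONNECTION = (
        ACCESS_KEY_ID = '<your-access-key-id>'
        SECRET_ACCESS_KEY = '<your-secret-access-key>'
    )
    FROM my_table
    FILE_FORMAT = (TYPE = PARQUET);
```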
