@@ -129,13 +129,13 @@ copyOptions ::=
[ USE_RAW_PATH = true | false ]
```

| Parameter        | Default                | Description                                                                                                                                                                    |
| ---------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| SINGLE           | false                  | When `true`, the command unloads data into one single file.                                                                                                                    |
| MAX_FILE_SIZE    | 67108864 bytes (64 MB) | The maximum size (in bytes) of each file to be created. Effective when `SINGLE` is false.                                                                                      |
| OVERWRITE        | false                  | When `true`, existing files with the same name at the target path will be overwritten. Note: `OVERWRITE = true` requires `USE_RAW_PATH = true` and `INCLUDE_QUERY_ID = false`. |
| INCLUDE_QUERY_ID | true                   | When `true`, a unique UUID will be included in the exported file names.                                                                                                        |
| USE_RAW_PATH     | false                  | When `true`, the exact user-provided path (including the full file name) will be used for exporting the data. If set to `false`, the user must provide a directory path.       |
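
To see how these options interact, here is a minimal sketch, added for illustration rather than taken from the page itself: it unloads into a single file at an exact, caller-supplied path, which per the note above means `USE_RAW_PATH = true` and `INCLUDE_QUERY_ID = false` must accompany `OVERWRITE = true`. The file path under the stage is hypothetical; the stage, table, and options come from this page.

```sql
-- Hypothetical file path under the stage; the copy options are those documented above.
COPY INTO @my_internal_stage/population/canada.csv
FROM canadian_city_population
FILE_FORMAT = (TYPE = CSV)
SINGLE = true              -- write one file instead of splitting by MAX_FILE_SIZE
USE_RAW_PATH = true        -- honor the exact path, including the file name
INCLUDE_QUERY_ID = false   -- required when OVERWRITE = true
OVERWRITE = true;          -- replace the file if it already exists
```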

### DETAILED_OUTPUT

@@ -145,19 +145,19 @@ copyOptions ::=

COPY INTO provides a summary of the data unloading results with the following columns:

| Column        | Description                                                                                    |
| ------------- | ---------------------------------------------------------------------------------------------- |
| rows_unloaded | The number of rows successfully unloaded to the destination.                                   |
| input_bytes   | The total size, in bytes, of the data read from the source table during the unload operation.  |
| output_bytes  | The total size, in bytes, of the data written to the destination.                              |

When `DETAILED_OUTPUT` is set to `true`, COPY INTO returns a result with the following columns. This helps locate the unloaded files, especially when `MAX_FILE_SIZE` is used to split the unloaded data into multiple files.

| Column    | Description                                         |
| --------- | ---------------------------------------------------- |
| file_name | The name of the unloaded file.                       |
| file_size | The size, in bytes, of the unloaded file.            |
| row_count | The number of rows contained in the unloaded file.   |
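
As a hedged illustration (not part of the original page), the sketch below assumes `DETAILED_OUTPUT` is written alongside the other copy options, switching the result from the three summary columns to one row per unloaded file; the 10 MB threshold is an arbitrary value chosen for the example.

```sql
-- Sketch: split the unload into files of at most ~10 MB and return per-file details.
COPY INTO @my_internal_stage
FROM canadian_city_population
FILE_FORMAT = (TYPE = CSV)
MAX_FILE_SIZE = 10485760   -- 10 MB per file (illustrative value)
DETAILED_OUTPUT = true;    -- result columns: file_name, file_size, row_count
```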

## Examples

@@ -241,10 +241,10 @@ LIST @my_internal_stage;
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

-- COPY INTO also works with custom file formats. See below:
-- Create a custom file format named my_cs_gzip with CSV format and gzip compression
-- Create a custom file format named my_csv_gzip with CSV format and gzip compression
CREATE FILE FORMAT my_csv_gzip TYPE = CSV COMPRESSION = gzip;

-- Unload data from the table to the stage using the custom file format my_cs_gzip
-- Unload data from the table to the stage using the custom file format my_csv_gzip
COPY INTO @my_internal_stage
FROM canadian_city_population
FILE_FORMAT = (FORMAT_NAME = 'my_csv_gzip');
@@ -257,14 +257,14 @@ COPY INTO @my_internal_stage

LIST @my_internal_stage;


```
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ name │ size │ md5 │ last_modified │ creator │
├────────────────────────────────────────────────────────────────┼────────┼──────────────────┼───────────────────────────────┼──────────────────┤
│ data_d006ba1c-0609-46d7-a67b-75c7078d86ff_0000_00000000.csv.gz │ 168 │ NULL │ 2024-01-18 16:29:29.721 +0000 │ NULL │
│ data_7970afa5-32e3-4e7d-b793-e42a2a82a8e6_0000_00000000.csv.gz │ 168 │ NULL │ 2024-01-18 16:27:01.663 +0000 │ NULL │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘


```

### Example 3: Unloading to a Bucket
@@ -129,13 +129,13 @@ copyOptions ::=
[ USE_RAW_PATH = true | false ]
```

| Parameter        | Default                | Description                                                                                                                                                                    |
| ---------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| SINGLE           | false                  | When `true`, the command unloads data into one single file.                                                                                                                    |
| MAX_FILE_SIZE    | 67108864 bytes (64 MB) | The maximum size (in bytes) of each file to be created. Effective when `SINGLE` is false.                                                                                      |
| OVERWRITE        | false                  | When `true`, existing files with the same name at the target path will be overwritten. Note: `OVERWRITE = true` requires `USE_RAW_PATH = true` and `INCLUDE_QUERY_ID = false`. |
| INCLUDE_QUERY_ID | true                   | When `true`, a unique UUID will be included in the exported file names.                                                                                                        |
| USE_RAW_PATH     | false                  | When `true`, the exact user-provided path (including the full file name) will be used for exporting the data. If set to `false`, the user must provide a directory path.       |
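
For contrast, a minimal sketch of the default multi-file behavior, added for illustration and not taken from the page: `SINGLE` stays false so `MAX_FILE_SIZE` splits the output, and `INCLUDE_QUERY_ID` stays true so each file name carries a query UUID; the 32 MB limit is an arbitrary choice.

```sql
-- Sketch: default multi-file unload to a directory path (USE_RAW_PATH remains false).
COPY INTO @my_internal_stage
FROM canadian_city_population
FILE_FORMAT = (TYPE = CSV)
MAX_FILE_SIZE = 33554432;  -- about 32 MB per output file; names include a query UUID
```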

### DETAILED_OUTPUT

@@ -241,10 +241,10 @@ LIST @my_internal_stage;
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

-- COPY INTO also works with custom file formats. See below:
-- Create a custom file format named my_cs_gzip with CSV format and gzip compression
-- Create a custom file format named my_csv_gzip with CSV format and gzip compression
CREATE FILE FORMAT my_csv_gzip TYPE = CSV COMPRESSION = gzip;

-- Unload data from the table to the stage using the custom file format my_cs_gzip
-- Unload data from the table to the stage using the custom file format my_csv_gzip
COPY INTO @my_internal_stage
FROM canadian_city_population
FILE_FORMAT = (FORMAT_NAME = 'my_csv_gzip');
8 changes: 4 additions & 4 deletions src/components/Config/CookieConsentConfig.ts
@@ -19,18 +19,18 @@ const pluginConfig: CookieConsentConfig = {
},
},
onFirstConsent: function () {
console.log('onFirstAction fired');
// console.log('onFirstAction fired');
},

onConsent: function ({ cookie }) {
console.log('onConsent fired ...');
// console.log('onConsent fired ...');
},

onChange: function ({ changedCategories, cookie }) {
console.log('onChange fired ...');
// console.log('onChange fired ...');
},
onModalReady: ({ modalName, modal }) => {
console.log('onModalReady fired ...', modalName, modal);
// console.log('onModalReady fired ...', modalName, modal);
},

categories: {
7 changes: 5 additions & 2 deletions src/components/LanguageDocs/index.tsx
@@ -1,5 +1,6 @@
// Copyright 2023 DatabendLabs.
import useDocusaurusContext from "@docusaurus/useDocusaurusContext";
import MDXA from "@site/src/theme/MDXComponents/A";
import React, { FC } from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
@@ -13,9 +14,11 @@ const LanguageDocs: FC<IProps> = ({ cn = "", en = "" }): any => {
customFields: { isChina },
},
} = useDocusaurusContext() as any;

const components = {
a: MDXA,
};
return (
<ReactMarkdown remarkPlugins={[remarkGfm]}>
<ReactMarkdown remarkPlugins={[remarkGfm]} components={components}>
{isChina ? cn : en}
</ReactMarkdown>
);