
Commit 4f15cc3

Update develop-openrowset.md
1 parent 6fa3b6e commit 4f15cc3

File tree

1 file changed: +5 -5 lines changed

articles/synapse-analytics/sql/develop-openrowset.md

Lines changed: 5 additions & 5 deletions
@@ -239,19 +239,19 @@ CSV parser version 2.0 specifics:

HEADER_ROW = { TRUE | FALSE }

-Specifies whether CSV file contains header row. Default is FALSE. Supported in PARSER_VERSION='2.0'. If TRUE, column names will be read from first row according to FIRSTROW argument. If TRUE and schema is specified using WITH, binding of column names will be done by column name, not ordinal positions.
+Specifies whether a CSV file contains header row. Default is `FALSE.` Supported in PARSER_VERSION='2.0'. If TRUE, the column names will be read from the first row according to FIRSTROW argument. If TRUE and schema is specified using WITH, binding of column names will be done by column name, not ordinal positions.

DATAFILETYPE = { 'char' | 'widechar' }

-Specifies encoding: char is used for UTF8, widechar is used for UTF16 files.
+Specifies encoding: `char` is used for UTF8, `widechar` is used for UTF16 files.

CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' }

Specifies the code page of the data in the data file. The default value is 65001 (UTF-8 encoding). See more details about this option [here](/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver15&preserve-view=true#codepage).

ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'

-This option will disable the file modification check during the query execution, and read the files that are updated while the query is running. This is usefull option when you need to read append-only files that are appended while the query is running. In the appendable files, the existing content is not updated, and only new rows are added. Therefore, the probability of wrong results is minimized compared to the updateable files. This option might enable you to read the frequently appended files without handling the errors. See more information in [querying appendable CSV files](query-single-csv-file.md#querying-appendable-files) section.
+This option will disable the file modification check during the query execution, and read the files that are updated while the query is running. This is useful option when you need to read append-only files that are appended while the query is running. In the appendable files, the existing content is not updated, and only new rows are added. Therefore, the probability of wrong results is minimized compared to the updateable files. This option might enable you to read the frequently appended files without handling the errors. See more information in [querying appendable CSV files](query-single-csv-file.md#querying-appendable-files) section.

## Fast delimited text parsing
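The options touched by this hunk (HEADER_ROW, DATAFILETYPE, CODEPAGE, ROWSET_OPTIONS) can be illustrated with a few serverless SQL pool queries. The sketch below is illustrative only: the storage account, containers, file names (`contosostorage`, `population*.csv`, `logs/*.csv`) and the column list are placeholders, not paths or schemas from the article.

```sql
-- HEADER_ROW requires PARSER_VERSION '2.0'; column names come from the first row.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://contosostorage.dfs.core.windows.net/csv/population.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS p1;

-- UTF16 file: DATAFILETYPE = 'widechar' with the version 1.0 parser.
-- For single-byte encodings you would keep DATAFILETYPE = 'char' and set
-- CODEPAGE instead (the default is 65001, UTF-8).
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://contosostorage.dfs.core.windows.net/csv/population-utf16.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '1.0',
    DATAFILETYPE = 'widechar'
) WITH (
    country_name VARCHAR(100),
    population   BIGINT
) AS src;

-- ROWSET_OPTIONS: skip the file modification check so append-only files can be
-- read while new rows are still being added to them.
SELECT COUNT(*) AS row_count
FROM OPENROWSET(
    BULK 'https://contosostorage.dfs.core.windows.net/logs/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE,
    ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'
) AS log_rows;
```

Note that when HEADER_ROW = TRUE and a WITH clause are used together, columns are bound by name rather than by ordinal position, as the updated text above describes.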
@@ -261,9 +261,9 @@ There are two delimited text parser versions you can use. CSV parser version 1.0

You can easily query both CSV and Parquet files without knowing or specifying schema by omitting WITH clause. Column names and data types will be inferred from files.

-Parquet files contain column metadata which will be read, type mappings can be found in [type mappings for Parquet](#type-mapping-for-parquet). Check [reading Parquet files without specifying schema](#read-parquet-files-without-specifying-schema) for samples.
+Parquet files contain column metadata, which will be read, type mappings can be found in [type mappings for Parquet](#type-mapping-for-parquet). Check [reading Parquet files without specifying schema](#read-parquet-files-without-specifying-schema) for samples.

-For CSV files column names can be read from header row. You can specify whether header row exists using HEADER_ROW argument. If HEADER_ROW = FALSE, generic column names will be used: C1, C2, ... Cn where n is number of columns in file. Data types will be inferred from first 100 data rows. Check [reading CSV files without specifying schema](#read-csv-files-without-specifying-schema) for samples.
+For the CSV files, column names can be read from header row. You can specify whether header row exists using HEADER_ROW argument. If HEADER_ROW = FALSE, generic column names will be used: C1, C2, ... Cn where n is number of columns in file. Data types will be inferred from first 100 data rows. Check [reading CSV files without specifying schema](#read-csv-files-without-specifying-schema) for samples.

> [!IMPORTANT]
> There are cases when appropriate data type cannot be inferred due to lack of information and larger data type will be used instead. This brings performance overhead and is particularly important for character columns which will be inferred as varchar(8000). For optimal performance, please [check inferred data types](./best-practices-serverless-sql-pool.md#check-inferred-data-types) and [use appropriate data types](./best-practices-serverless-sql-pool.md#use-appropriate-data-types).
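To make the schema-inference paragraphs in this hunk concrete, here is a minimal sketch of querying files without a WITH clause, plus one way to inspect the inferred types mentioned in the IMPORTANT note. The paths are placeholders, and calling `sp_describe_first_result_set` is offered as one possible way to check the inferred schema, not a step prescribed by this article.

```sql
-- Parquet: no WITH clause, so column names and types come from the file metadata.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://contosostorage.dfs.core.windows.net/parquet/population/*.parquet',
    FORMAT = 'PARQUET'
) AS parquet_rows;

-- CSV with HEADER_ROW = FALSE: columns are named C1, C2, ... Cn and data types
-- are inferred from the first 100 data rows.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://contosostorage.dfs.core.windows.net/csv/population.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = FALSE
) AS csv_rows;

-- Inspect the inferred schema; character columns reported as varchar(8000) are
-- good candidates for an explicit WITH clause with tighter types.
EXEC sp_describe_first_result_set N'
    SELECT TOP 0 *
    FROM OPENROWSET(
        BULK ''https://contosostorage.dfs.core.windows.net/csv/population.csv'',
        FORMAT = ''CSV'',
        PARSER_VERSION = ''2.0'',
        HEADER_ROW = TRUE
    ) AS csv_rows';
```

Per the IMPORTANT note above, supplying appropriate types (for example, through an explicit WITH clause) avoids the overhead of columns inferred as varchar(8000).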
