docs/hub/datasets-adding.md (+2 -2)
@@ -98,9 +98,9 @@ Other formats and structures may not be recognized by the Hub.

For most types of datasets, **Parquet** is the recommended format thanks to its efficient compression, rich typing, and broad tool support with optimized reads and batched operations. Alternatively, CSV or JSON Lines/JSON can be used for tabular data (prefer JSON Lines for nested data); although easier to parse than Parquet, these formats are not recommended for data larger than several GBs. For image and audio datasets, uploading raw files is the most practical choice for most use cases, since it makes individual files easy to access. For streaming large-scale image and audio datasets, [WebDataset](https://github.com/webdataset/webdataset) should be preferred over raw image and audio files to avoid the overhead of accessing individual files. For more general use cases involving analytics, data filtering, or metadata parsing, Parquet is the recommended option for large-scale image and audio datasets.
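Because Parquet is the recommended target format, it can be convenient to convert tabular files locally before uploading. A minimal sketch with DuckDB, where `data.csv` and `data.parquet` are placeholder file names rather than anything from this PR:

```sql
-- Sketch: convert a local CSV file to Parquet with DuckDB before uploading.
-- File names are placeholders.
COPY (SELECT * FROM read_csv_auto('data.csv'))
TO 'data.parquet' (FORMAT PARQUET);
```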

- ### Dataset Viewer
+ ### Data Studio

- The [Dataset Viewer](./datasets-viewer) is useful to know how the data actually looks like before you download it.
+ The [Data Studio](./datasets-viewer) is useful for seeing what the data actually looks like before you download it.

It is enabled by default for all public datasets. It is also available for private datasets owned by a [PRO user](https://huggingface.co/pricing) or an [Enterprise Hub organization](https://huggingface.co/enterprise).

After uploading your dataset, make sure the Dataset Viewer correctly shows your data, or [Configure the Dataset Viewer](./datasets-viewer-configure).

docs/hub/datasets-viewer-sql-console.md (+23 -26)

@@ -1,10 +1,10 @@

# SQL Console: Query Hugging Face datasets in your browser

- You can run SQL queries on the dataset in the browser using the SQL Console. The SQL Console is powered by [DuckDB](https://duckdb.org/) WASM and runs entirely in the browser. You can access the SQL Console from the dataset page by clicking on the **SQL Console** badge.
+ You can run SQL queries on the dataset in the browser using the SQL Console. The SQL Console is powered by [DuckDB](https://duckdb.org/) WASM and runs entirely in the browser. You can access the SQL Console from the Data Studio.

@@ -16,8 +16,9 @@ Through the SQL Console, you can:

- Run [DuckDB SQL queries](https://duckdb.org/docs/sql/query_syntax/select) on the dataset (_check out [SQL Snippets](https://huggingface.co/spaces/cfahlgren1/sql-snippets) for useful queries_)
- Share results of the query with others via a link (_check out [this example](https://huggingface.co/datasets/gretelai/synthetic-gsm8k-reflection-405b?sql_console=true&sql=FROM+histogram%28%0A++train%2C%0A++topic%2C%0A++bin_count+%3A%3D+10%0A%29)_)
- - Download the results of the query to a parquet file
+ - Download the results of the query to a Parquet or CSV file
- Embed the results of the query in your own webpage using an iframe
+ - Query datasets with natural language

<Tip>
You can also use DuckDB locally through the CLI to query the dataset via the `hf://` protocol. See the <a href="https://huggingface.co/docs/hub/en/datasets-duckdb" target="_blank" rel="noopener noreferrer">DuckDB Datasets documentation</a> for more information. The SQL Console provides a convenient `Copy to DuckDB CLI` button that generates the SQL query for creating views and executing your query in the DuckDB CLI.
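As a rough illustration of that workflow, a query like the following can be run in a local DuckDB CLI; the repository path and glob pattern are assumptions for the sketch, not a layout taken from this PR:

```sql
-- Sketch: read a Hub dataset from the local DuckDB CLI over the hf:// protocol.
-- The dataset path and glob are illustrative.
SELECT *
FROM 'hf://datasets/SkunkworksAI/reasoning-0.01/**/*.parquet'
LIMIT 10;
```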

@@ -31,59 +32,55 @@ You can also use the DuckDB locally through the CLI to query the dataset via the

The SQL Console makes filtering datasets really easy. For example, if you want to filter the `SkunkworksAI/reasoning-0.01` dataset for instructions and responses with a reasoning length of more than 10, you can use the following query:

- In the query, we can use the `len` function to get the length of the `reasoning_chains` column and the `bar` function to create a bar chart of the reasoning lengths.
+ Here's the SQL to filter by the length of the reasoning:

```sql
- SELECT len(reasoning_chains) AS reason_len, bar(reason_len, 0, 100), *
+ SELECT *
FROM train
- WHERE reason_len > 10
- ORDER BY reason_len DESC
+ WHERE LENGTH(reasoning_chains) > 10;
```

- The [bar](https://duckdb.org/docs/sql/functions/char.html#barx-min-max-width) function is a neat built-in DuckDB function that creates a bar chart of the reasoning lengths.
-
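For reference, a minimal sketch of the kind of query the removed sentence describes, assuming the `train` view and `reasoning_chains` column from the old example:

```sql
-- Sketch: sort by reasoning length and render an inline bar chart with
-- DuckDB's built-in bar(value, min, max) function. Column and view names
-- follow the old SkunkworksAI/reasoning-0.01 example.
SELECT len(reasoning_chains) AS reason_len,
       bar(reason_len, 0, 100) AS length_chart
FROM train
ORDER BY reason_len DESC
LIMIT 20;
```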

### Histogram

Many dataset authors choose to include statistics about the distribution of the data in the dataset. Using the DuckDB `histogram` function, we can plot a histogram of a column's values.

- For example, to plot a histogram of the `reason_len` column in the `SkunkworksAI/reasoning-0.01` dataset, you can use the following query:
+ For example, to plot a histogram of the `Rating` column in the [Lichess/chess-puzzles](https://huggingface.co/datasets/Lichess/chess-puzzles) dataset, you can use the following query:

Learn more about the `histogram` function and parameters <a href="https://cfahlgren1-sql-snippets.hf.space/histogram" target="_blank" rel="noopener noreferrer">here</a>.
</p>

```sql
- FROM histogram(train, len(reasoning_chains))
+ FROM histogram(train, Rating)
```
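The `histogram` table function also accepts named parameters; the shared-link example earlier in this diff uses `bin_count`, so a variation on the new `Rating` example might look like this sketch:

```sql
-- Sketch: fix the number of buckets with the bin_count named parameter,
-- mirroring the shared-link example above.
FROM histogram(train, Rating, bin_count := 10);
```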

### Regex Matching

One of the most powerful features of DuckDB is its deep support for regular expressions. You can use the `regexp` functions to match patterns in your data.

- Using the [regexp_matches](https://duckdb.org/docs/sql/functions/char.html#regexp_matchesstring-pattern) function, we can filter the `SkunkworksAI/reasoning-0.01` dataset for instructions that contain markdown code blocks.
+ Using the [regexp_matches](https://duckdb.org/docs/sql/functions/char.html#regexp_matchesstring-pattern) function, we can filter the [GeneralReasoning/GeneralThought-195k](https://huggingface.co/datasets/GeneralReasoning/GeneralThought-195K) dataset for model answers that contain markdown code blocks.

Learn more about the DuckDB regex functions <a href="https://duckdb.org/docs/sql/functions/regular_expressions.html" target="_blank" rel="noopener noreferrer">here</a>.
</p>

```sql
- SELECT *
+ SELECT *
FROM train
- WHERE regexp_matches(instruction, '```[a-z]*\n')
- limit 100
+ WHERE regexp_matches(model_answer, '```')
+ LIMIT 10;
```
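Beyond boolean matching, DuckDB's `regexp_extract` can pull out capture groups. A sketch under the same `model_answer` column assumption, tallying the language tags of fenced code blocks:

```sql
-- Sketch: count fenced-code-block language tags in model answers;
-- regexp_extract's third argument selects capture group 1.
SELECT regexp_extract(model_answer, '```([a-z]+)', 1) AS code_lang,
       count(*) AS n
FROM train
WHERE regexp_matches(model_answer, '```[a-z]+')
GROUP BY code_lang
ORDER BY n DESC;
```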

@@ -92,8 +89,8 @@ limit 100

Leakage detection is the process of identifying whether data in a dataset is present in multiple splits, for example, whether the test set is present in the training set.
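As an illustration of the idea, a split-overlap check can be written as a set intersection. The `instruction` column and the `train`/`test` split names here are assumptions for the sketch, not part of the diff:

```sql
-- Sketch: count identical instruction texts that appear in both splits.
SELECT count(*) AS leaked_rows
FROM (
    SELECT instruction FROM train
    INTERSECT
    SELECT instruction FROM test
);
```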

docs/hub/datasets-viewer.md (+23 -7)

@@ -1,10 +1,10 @@

- # Dataset viewer
+ # Data Studio

Each dataset page includes a table with the contents of the dataset, arranged by pages of 100 rows. You can navigate between pages using the buttons at the bottom of the table.

@@ -16,18 +16,34 @@ At the top of the columns you can see the graphs representing the distribution o

If you click on a bar of a histogram from a numerical column, the dataset viewer will filter the data and show only the rows with values that fall in the selected range.
Similarly, if you select one class from a categorical column, it will show only the rows from the selected category.

You can search for a word in the dataset by typing it in the search bar at the top of the table. The search is case-insensitive and will match any row containing the word. The text is searched in columns of type `string`, even if the values are nested in a dictionary or a list.

## Run SQL queries on the dataset

You can run SQL queries on the dataset in the browser using the SQL Console. This feature also leverages our [auto-conversion to Parquet](datasets-viewer#access-the-parquet-files).
- For more information see our guide on [SQL Console](./datasets-viewer-sql-console).
+ For more information, see our guide on [SQL Console](./datasets-viewer-sql-console).

## Share a specific row

- You can share a specific row by clicking on it, and then copying the URL in the address bar of your browser. For example https://huggingface.co/datasets/nyu-mll/glue/viewer/mrpc/test?p=2&row=241 will open the dataset viewer on the MRPC dataset, on the test split, and on the 241st row.
+ You can share a specific row by clicking on it, and then copying the URL in the address bar of your browser. For example https://huggingface.co/datasets/nyu-mll/glue/viewer/mrpc/test?p=2&row=241 will open the Data Studio on the MRPC dataset, on the test split, and on the 241st row.

@@ -53,8 +69,8 @@ Parquet is a columnar storage format optimized for querying and processing large

When you create a new dataset, the [`parquet-converter` bot](https://huggingface.co/parquet-converter) notifies you once it converts the dataset to Parquet. The [discussion](./repositories-pull-requests-discussions) it opens in the repository provides details about the Parquet format and links to the Parquet files.
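Because the converted files live on a dedicated revision (`refs/convert/parquet`), they can be queried directly. A sketch assuming DuckDB's `@~parquet` revision shorthand and a `mrpc/test` directory layout for this dataset:

```sql
-- Sketch: count rows in the auto-converted Parquet files. The @~parquet
-- shorthand targets the refs/convert/parquet revision; the config/split
-- directory layout is an assumption.
SELECT count(*) AS n_rows
FROM 'hf://datasets/nyu-mll/glue@~parquet/mrpc/test/*.parquet';
```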