Merged

27 commits
d216612
added option to check for pdfplumber library
yabramuvdi Dec 12, 2024
f29215b
added pdf to init
yabramuvdi Dec 12, 2024
a403a71
added pdf to features.py
yabramuvdi Dec 12, 2024
642992d
added pdf to init
yabramuvdi Dec 12, 2024
ff04ec1
added pdf to features.py
yabramuvdi Dec 12, 2024
ee912fc
first version of the Pdf feature
yabramuvdi Dec 12, 2024
3280b84
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
84ebf32
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
59e103e
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
ce98806
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
debdd2e
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
3c214a7
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
dfef76b
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
c85b4f3
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
944d12e
Update src/datasets/features/pdf.py
yabramuvdi Dec 19, 2024
753621b
added packages required for PDF support
yabramuvdi Dec 19, 2024
7f0710e
created decorator for requirement of pdfplumber
yabramuvdi Dec 19, 2024
c5b8ada
added a simple pdf with images and plots for testing pdf support
yabramuvdi Dec 19, 2024
0dcbdf9
first version of tests for pdf
yabramuvdi Dec 19, 2024
219f3dc
update to pdf feature
yabramuvdi Dec 19, 2024
b6bc313
Merge branch 'main' into introduce-pdf-support
lhoestq Jan 3, 2025
ad06aba
Merge branch 'main' into introduce-pdf-support
lhoestq Mar 18, 2025
3e17ce5
fix Pdf feature
lhoestq Mar 18, 2025
f781357
add PdfFolder
lhoestq Mar 18, 2025
90f1dda
docs
lhoestq Mar 18, 2025
aebcc54
fix docs
lhoestq Mar 18, 2025
3def6f0
a bit more docs
lhoestq Mar 18, 2025
4 changes: 4 additions & 0 deletions docs/source/_toctree.yml
@@ -84,6 +84,10 @@
title: Load video data
- local: video_dataset
title: Create a video dataset
- local: document_load
title: Load document data
- local: document_dataset
title: Create a document dataset
title: "Vision"
- sections:
- local: nlp_load
141 changes: 141 additions & 0 deletions docs/source/document_dataset.mdx
@@ -0,0 +1,141 @@
# Create a document dataset

This guide will show you how to create a document dataset with `PdfFolder` and some metadata. This is a no-code solution for quickly creating a document dataset with several thousand PDFs.

<Tip>

You can control access to your dataset by requiring users to share their contact information first. Check out the [Gated datasets](https://huggingface.co/docs/hub/datasets-gated) guide for more information about how to enable this feature on the Hub.

</Tip>

## PdfFolder

The `PdfFolder` is a dataset builder designed to quickly load a document dataset with several thousand PDFs without requiring you to write any code.

<Tip>

💡 Take a look at the [Split pattern hierarchy](repository_structure#split-pattern-hierarchy) to learn more about how `PdfFolder` creates dataset splits based on your dataset repository structure.

</Tip>

`PdfFolder` automatically infers the class labels of your dataset based on the directory name. Store your dataset in a directory structure like:

```
folder/train/resume/0001.pdf
folder/train/resume/0002.pdf
folder/train/resume/0003.pdf

folder/train/invoice/0001.pdf
folder/train/invoice/0002.pdf
folder/train/invoice/0003.pdf
```

If the dataset follows the `PdfFolder` structure, then you can load it directly with [`load_dataset`]:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("path/to/folder")
```

This is equivalent to passing `pdffolder` manually in [`load_dataset`] and the directory in `data_dir`:

```py
>>> dataset = load_dataset("pdffolder", data_dir="/path/to/folder")
```

You can also use `pdffolder` to load datasets involving multiple splits. To do so, your dataset directory should have the following structure:

```
folder/train/resume/0001.pdf
folder/train/resume/0002.pdf
folder/test/invoice/0001.pdf
folder/test/invoice/0002.pdf
```

<Tip warning={true}>

If all PDF files are contained in a single directory or if they are not all at the same level of the directory structure, the `label` column won't be added automatically. If you need it, set `drop_labels=False` explicitly (see the example after this tip).

</Tip>
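For example, you could pass `drop_labels=False` directly to [`load_dataset`] (a minimal sketch reusing the `pdffolder` loader shown above):

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("pdffolder", data_dir="/path/to/folder", drop_labels=False)
```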


If there is additional information you'd like to include about your dataset, like text content or bounding boxes, add it as a `metadata.csv` file in your folder. This lets you quickly create datasets for different document processing tasks like text extraction or document classification. You can also use a JSONL file `metadata.jsonl` or a Parquet file `metadata.parquet`.

```
folder/train/metadata.csv
folder/train/0001.pdf
folder/train/0002.pdf
folder/train/0003.pdf
```

Your `metadata.csv` file must have a `file_name` or `*_file_name` field which links PDF files with their metadata:

```csv
file_name,additional_feature
0001.pdf,This is a first value of a text feature you added to your pdfs
0002.pdf,This is a second value of a text feature you added to your pdfs
0003.pdf,This is a third value of a text feature you added to your pdfs
```

or using `metadata.jsonl`:

```jsonl
{"file_name": "0001.pdf", "additional_feature": "This is a first value of a text feature you added to your pdfs"}
{"file_name": "0002.pdf", "additional_feature": "This is a second value of a text feature you added to your pdfs"}
{"file_name": "0003.pdf", "additional_feature": "This is a third value of a text feature you added to your pdfs"}
```

Here the `file_name` must be the name of the PDF file next to the metadata file. More generally, it must be the relative path from the directory containing the metadata to the PDF file.
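For example, with a layout where the PDFs sit in a `data/` subdirectory next to `metadata.csv` (a hypothetical layout for illustration), the `file_name` values include that subdirectory:

```
folder/train/metadata.csv
folder/train/data/0001.pdf
folder/train/data/0002.pdf
```

```csv
file_name,additional_feature
data/0001.pdf,This is a first value of a text feature you added to your pdfs
data/0002.pdf,This is a second value of a text feature you added to your pdfs
```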

It's possible to point to more than one pdf in each row in your dataset, for example if both your input and output are pdfs:

```jsonl
{"input_file_name": "0001.pdf", "output_file_name": "0001_output.pdf"}
{"input_file_name": "0002.pdf", "output_file_name": "0002_output.pdf"}
{"input_file_name": "0003.pdf", "output_file_name": "0003_output.pdf"}
```

You can also define lists of pdfs. In that case you need to name the field `file_names` or `*_file_names`. Here is an example:

```jsonl
{"pdfs_file_names": ["0001_part1.pdf", "0001_part2.pdf"], "label": "urgent"}
{"pdfs_file_names": ["0002_part1.pdf", "0002_part2.pdf"], "label": "urgent"}
{"pdfs_file_names": ["0003_part1.pdf", "0002_part2.pdf"], "label": "normal"}
```

### OCR (Optical character recognition)

OCR datasets contain the text extracted from each PDF. An example `metadata.csv` may look like:

```csv
file_name,text
0001.pdf,Invoice 1234 from 01/01/1970...
0002.pdf,Software Engineer Resume. Education: ...
0003.pdf,Attention is all you need. Abstract. The ...
```

Load the dataset with `PdfFolder`, and it will create a `text` column for the text extracted from each PDF:

```py
>>> dataset = load_dataset("pdffolder", data_dir="/path/to/folder", split="train")
>>> dataset[0]["text"]
"Invoice 1234 from 01/01/1970..."
```

### Upload dataset to the Hub

Once you've created a dataset, you can share it to the Hub using `huggingface_hub`, for example. Make sure you have the [huggingface_hub](https://huggingface.co/docs/huggingface_hub/index) library installed and that you're logged in to your Hugging Face account (see the [Upload with Python tutorial](upload_dataset#upload-with-python) for more details).

Upload your dataset with `huggingface_hub.HfApi.upload_folder`:

```py
from huggingface_hub import HfApi
api = HfApi()

api.upload_folder(
    folder_path="/path/to/local/dataset",
    repo_id="username/my-cool-dataset",
    repo_type="dataset",
)
```
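Anyone with access can then load it from the Hub by repository id (a short sketch, assuming the repository id used above):

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("username/my-cool-dataset")
```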
216 changes: 216 additions & 0 deletions docs/source/document_load.mdx
@@ -0,0 +1,216 @@
# Load pdf data

<Tip warning={true}>

Pdf support is experimental and is subject to change.

</Tip>

Pdf datasets have [`Pdf`] type columns, which contain `pdfplumber` objects.

<Tip>

To work with pdf datasets, you need to have the `pdfplumber` package installed. Check out the [installation](https://github.com/jsvine/pdfplumber#installation) guide to learn how to install it.

</Tip>

When you load a pdf dataset and call the pdf column, the pdfs are decoded as `pdfplumber` PDFs:

```py
>>> from datasets import load_dataset, Pdf

>>> dataset = load_dataset("path/to/pdf/folder", split="train")
>>> dataset[0]["pdf"]
<pdfplumber.pdf.PDF at 0x1075bc320>
```

<Tip warning={true}>

Index into a pdf dataset using the row index first and then the `pdf` column - `dataset[0]["pdf"]` - to avoid decoding all the pdf objects in the dataset. Otherwise, this can be a slow and time-consuming process if you have a large dataset (see the example after this tip).

</Tip>
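For example, the first pattern below decodes a single pdf, while the second decodes every pdf in the column before indexing (a minimal sketch illustrating the tip above):

```py
>>> pdf = dataset[0]["pdf"]   # decodes only the first pdf
>>> pdfs = dataset["pdf"][0]  # decodes the whole column first, then indexes
```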

For a guide on how to load any type of dataset, take a look at the <a class="underline decoration-sky-400 decoration-2 font-semibold" href="./loading">general loading guide</a>.

## Read pages

Access pages directly from a `PDF` object using `.pages`.

Then you can use the `pdfplumber` functions to read text, tables and images, e.g.:

```python
>>> pdf = dataset[0]["pdf"]
>>> first_page = pdf.pages[0]
>>> first_page
<Page:1>
>>> first_page.extract_text()
Docling Technical Report
Version1.0
ChristophAuer MaksymLysak AhmedNassar MicheleDolfi NikolaosLivathinos
PanosVagenas CesarBerrospiRamis MatteoOmenetti FabianLindlbauer
KasperDinkla LokeshMishra YusikKim ShubhamGupta RafaelTeixeiradeLima
ValeryWeber LucasMorin IngmarMeijer ViktorKuropiatnyk PeterW.J.Staar
AI4KGroup,IBMResearch
Ru¨schlikon,Switzerland
Abstract
This technical report introduces Docling, an easy to use, self-contained, MIT-
licensed open-source package for PDF document conversion.
...
>>> first_page.images
[{'x0': 256.5,
'y0': 621.0,
'x1': 355.49519999999995,
'y1': 719.9952,
'width': 98.99519999999995,
'height': 98.99519999999995,
'name': 'Im1',
'stream': <PDFStream(44): raw=88980, {'Type': /'XObject', 'Subtype': /'Image', 'BitsPerComponent': 8, 'ColorSpace': /'DeviceRGB', 'Filter': /'DCTDecode', 'Height': 1024, 'Length': 88980, 'Width': 1024}>,
'srcsize': (1024, 1024),
'imagemask': None,
'bits': 8,
'colorspace': [/'DeviceRGB'],
'mcid': None,
'tag': None,
'object_type': 'image',
'page_number': 1,
'top': 72.00480000000005,
'bottom': 171.0,
'doctop': 72.00480000000005}]
>>> first_page.extract_tables()
[]
```

You can also load each page as a `PIL.Image`:

```python
>>> import PIL.Image
>>> import io
>>> first_page.to_image()
<pdfplumber.display.PageImage at 0x107d68dd0>
>>> buffer = io.BytesIO()
>>> first_page.to_image().save(buffer)
>>> img = PIL.Image.open(buffer)
>>> img
<PIL.PngImagePlugin.PngImageFile image mode=P size=612x792>
```

Note that you can pass `resolution=` to `.to_image()` to render the image at a higher resolution than the default (72 ppi).
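For example (a short sketch reusing `first_page` from above; `resolution` and `save` are assumed from the `pdfplumber` image API):

```python
>>> im = first_page.to_image(resolution=150)  # render at 150 ppi instead of 72
>>> im.save("page_1.png")
```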

## Local files

You can load a dataset from pdf file paths. Use the [`~Dataset.cast_column`] function to accept a column of pdf file paths and decode them into `pdfplumber` PDFs with the [`Pdf`] feature:

```py
>>> from datasets import Dataset, Pdf

>>> dataset = Dataset.from_dict({"pdf": ["path/to/pdf_1", "path/to/pdf_2", ..., "path/to/pdf_n"]}).cast_column("pdf", Pdf())
>>> dataset[0]["pdf"]
<pdfplumber.pdf.PDF at 0x1657d0280>
```

If you only want to load the underlying path to the pdf dataset without decoding the pdf object, set `decode=False` in the [`Pdf`] feature:

```py
>>> dataset = dataset.cast_column("pdf", Pdf(decode=False))
>>> dataset[0]["pdf"]
{'bytes': None,
'path': 'path/to/pdf/folder/pdf0.pdf'}
```

## PdfFolder

You can also load a dataset with the `PdfFolder` dataset builder, which does not require writing a custom dataloader. This makes `PdfFolder` ideal for quickly creating and loading pdf datasets with several thousand pdfs for different document processing tasks. Your pdf dataset structure should look like this:

```
folder/train/resume/0001.pdf
folder/train/resume/0002.pdf
folder/train/resume/0003.pdf

folder/train/invoice/0001.pdf
folder/train/invoice/0002.pdf
folder/train/invoice/0003.pdf
```

If the dataset follows the `PdfFolder` structure, then you can load it directly with [`load_dataset`]:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("username/dataset_name")
>>> # OR locally:
>>> dataset = load_dataset("/path/to/folder")
```

For local datasets, this is equivalent to passing `pdffolder` manually in [`load_dataset`] and the directory in `data_dir`:

```py
>>> dataset = load_dataset("pdffolder", data_dir="/path/to/folder")
```

Then you can access the pdfs as `pdfplumber.pdf.PDF` objects:

```py
>>> dataset["train"][0]
{"pdf": <pdfplumber.pdf.PDF at 0x161715e50>, "label": 0}

>>> dataset["train"][-1]
{"pdf": <pdfplumber.pdf.PDF at 0x16170bd90>, "label": 1}
```

To ignore the information in the metadata file, set `drop_metadata=True` in [`load_dataset`]:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("username/dataset_with_metadata", drop_metadata=True)
```

If you don't have a metadata file, `PdfFolder` automatically infers the label name from the directory name.
If you want to drop automatically created labels, set `drop_labels=True`.
In this case, your dataset will only contain a pdf column:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("username/dataset_without_metadata", drop_labels=True)
```

Finally, the `filters` argument lets you load only a subset of the dataset, based on a condition on the label or the metadata. This is especially useful if the metadata is in Parquet format, since this format enables fast filtering. It is also recommended to use this argument with `streaming=True`, because by default the dataset is fully downloaded before filtering.

```python
>>> filters = [("label", "=", 0)]
>>> dataset = load_dataset("username/dataset_name", streaming=True, filters=filters)
```

<Tip>

For more information about creating your own `PdfFolder` dataset, take a look at the [Create a pdf dataset](./pdf_dataset) guide.

</Tip>

## Pdf decoding

By default, pdfs are decoded sequentially as pdfplumber `PDFs` when you iterate over a dataset. Only the metadata of each pdf is decoded; the pdf pages aren't read until you access them.

However, it is possible to speed up iterating over the dataset significantly using multithreaded decoding:

```python
>>> import os
>>> num_threads = min(32, (os.cpu_count() or 1) + 4)
>>> dataset = dataset.decode(num_threads=num_threads)
>>> for example in dataset: # up to 20 times faster !
... ...
```

You can enable multithreading using `num_threads`. This is especially useful to speed up remote data streaming.
However, it can be slower than `num_threads=0` for local data on fast disks.

If you are not interested in the documents decoded as pdfplumber `PDFs` and would like to access the path/bytes instead, you can disable decoding:

```python
>>> dataset = dataset.decode(False)
```

Note: [`IterableDataset.decode`] is only available for streaming datasets at the moment.
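If you loaded a regular [`Dataset`], one option may be to convert it first (a sketch assuming [`Dataset.to_iterable_dataset`] and the `num_threads` value from above):

```python
>>> iterable_dataset = dataset.to_iterable_dataset()  # get an IterableDataset
>>> iterable_dataset = iterable_dataset.decode(num_threads=num_threads)
```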
6 changes: 6 additions & 0 deletions docs/source/package_reference/loading_methods.mdx
@@ -91,6 +91,12 @@ load_dataset("csv", data_dir="path/to/data/dir", sep="\t")

[[autodoc]] datasets.packaged_modules.videofolder.VideoFolder

### Pdf

[[autodoc]] datasets.packaged_modules.pdffolder.PdfFolderConfig

[[autodoc]] datasets.packaged_modules.pdffolder.PdfFolder

### WebDataset

[[autodoc]] datasets.packaged_modules.webdataset.WebDataset
4 changes: 4 additions & 0 deletions docs/source/package_reference/main_classes.mdx
@@ -253,6 +253,10 @@ Dictionary with split names as keys ('train', 'test' for example), and `Iterable

[[autodoc]] datasets.Video

### Pdf

[[autodoc]] datasets.Pdf

## Filesystems

[[autodoc]] datasets.filesystems.is_remote_filesystem