Commit daec402

Merge pull request #282 from visual-layer/dnth/nb-labelbox
Add notebook to pull data from Labelbox
2 parents 709f17a + d742d4b commit daec402

8 files changed: +1648 -8 lines

README.md

Lines changed: 119 additions & 8 deletions
@@ -362,7 +362,7 @@ Learn the basics of fastdup through interactive examples. View the notebooks on
 </table>
 
 
-## Data Loading
+## Data Sources
 The notebooks in this section show how you can load data from various sources and analyze them with fastdup.
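All of the notebooks below funnel into the same basic call sequence once the images are on disk. A minimal sketch, assuming fastdup's current `fastdup.create(...)` / `run()` API and placeholder paths:

```python
# Minimal sketch of the shared workflow: point fastdup at a folder of images,
# run the analysis, then inspect the generated summary and galleries.
import fastdup

fd = fastdup.create(work_dir="fastdup_workdir", input_dir="images/")  # placeholder paths
fd.run()

fd.summary()                   # dataset-level statistics and detected issues
fd.vis.duplicates_gallery()    # near-duplicate image pairs
fd.vis.outliers_gallery()      # visual outliers
```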
 
 <table>
@@ -477,6 +477,117 @@ The notebooks in this section show how you can load data from various sources
 </td>
 </tr>
 <!-- ------------------------------------------------------------------- -->
+<tr>
+<td rowspan="4" width="160">
+<a href="https://visual-layer.readme.io/docs/analyzing-labelbox-datasets">
+<img src="./gallery/labelbox_thumbnail.jpg" width="200" />
+</a>
+</td>
+<td rowspan="4"><b>📦 Labelbox:</b> Load and analyze vision datasets from <a href="https://labelbox.com/">Labelbox</a>, a data-centric AI platform for building intelligent applications.<br><br>
+<div align="right"><a href="https://visual-layer.readme.io/docs/analyzing-labelbox-datasets">🔗 Learn More.</a></div>
+</td>
+<td align="center" width="80">
+<a href="https://nbviewer.org/github/visual-layer/fastdup/blob/main/examples/analyzing-labelbox-datasets.ipynb">
+<img src="./gallery/nbviewer_logo.png" height="30" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://github.com/visual-layer/fastdup/blob/main/examples/analyzing-labelbox-datasets.ipynb">
+<img src="./gallery/github_logo.png" height="25" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://colab.research.google.com/github/visual-layer/fastdup/blob/main/examples/analyzing-labelbox-datasets.ipynb">
+<img src="./gallery/colab_logo.png" height="20" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://kaggle.com/kernels/welcome?src=https://github.com/visual-layer/fastdup/blob/main/examples/analyzing-labelbox-datasets.ipynb">
+<img src="./gallery/kaggle_logo.png" height="25" />
+</a>
+</td>
+</tr>
+<!-- ------------------------------------------------------------------- -->
+<tr>
+<td rowspan="4" width="160">
+<a href="https://visual-layer.readme.io/docs/analyzing-torchvision-datasets">
+<img src="./gallery/torch_thumbnail.jpg" width="200" />
+</a>
+</td>
+<td rowspan="4"><b>🔦 Torchvision Datasets:</b> Load and analyze vision datasets from <a href="https://pytorch.org/vision/main/datasets.html">Torchvision Datasets</a>.<br><br>
+<div align="right"><a href="https://visual-layer.readme.io/docs/analyzing-torchvision-datasets">🔗 Learn More.</a></div>
+</td>
+<td align="center" width="80">
+<a href="https://nbviewer.org/github/visual-layer/fastdup/blob/main/examples/analyzing-torchvision-datasets.ipynb">
+<img src="./gallery/nbviewer_logo.png" height="30" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://github.com/visual-layer/fastdup/blob/main/examples/analyzing-torchvision-datasets.ipynb">
+<img src="./gallery/github_logo.png" height="25" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://colab.research.google.com/github/visual-layer/fastdup/blob/main/examples/analyzing-torchvision-datasets.ipynb">
+<img src="./gallery/colab_logo.png" height="20" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://kaggle.com/kernels/welcome?src=https://github.com/visual-layer/fastdup/blob/main/examples/analyzing-torchvision-datasets.ipynb">
+<img src="./gallery/kaggle_logo.png" height="25" />
+</a>
+</td>
+</tr>
+<!-- ------------------------------------------------------------------- -->
+<tr>
+<td rowspan="4" width="160">
+<a href="https://visual-layer.readme.io/docs/analyzing-tensorflow-datasets">
+<img src="./gallery/tfds_thumbnail.jpg" width="200" />
+</a>
+</td>
+<td rowspan="4"><b>💦 TensorFlow Datasets:</b> Load and analyze vision datasets from <a href="https://www.tensorflow.org/datasets">TensorFlow Datasets</a>.<br><br>
+<div align="right"><a href="https://visual-layer.readme.io/docs/analyzing-tensorflow-datasets">🔗 Learn More.</a></div>
+</td>
+<td align="center" width="80">
+<a href="https://nbviewer.org/github/visual-layer/fastdup/blob/main/examples/analyzing-tensorflow-datasets.ipynb">
+<img src="./gallery/nbviewer_logo.png" height="30" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://github.com/visual-layer/fastdup/blob/main/examples/analyzing-tensorflow-datasets.ipynb">
+<img src="./gallery/github_logo.png" height="25" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://colab.research.google.com/github/visual-layer/fastdup/blob/main/examples/analyzing-tensorflow-datasets.ipynb">
+<img src="./gallery/colab_logo.png" height="20" />
+</a>
+</td>
+</tr>
+<tr>
+<td align="center">
+<a href="https://kaggle.com/kernels/welcome?src=https://github.com/visual-layer/fastdup/blob/main/examples/analyzing-tensorflow-datasets.ipynb">
+<img src="./gallery/kaggle_logo.png" height="25" />
+</a>
+</td>
+</tr>
+<!-- ------------------------------------------------------------------- -->
 </table>
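For the new Labelbox row above: the linked notebook covers the actual export flow, which varies between SDK versions. A rough sketch only, assuming the `labelbox` Python SDK's `Client` / `get_dataset()` / `data_rows()` interface and placeholder `LB_API_KEY` / `DATASET_ID` values:

```python
# Rough sketch: download a Labelbox dataset's images, then analyze with fastdup.
# LB_API_KEY and DATASET_ID are placeholders; the linked notebook shows the exact export calls.
import os
import requests
import labelbox as lb
import fastdup

client = lb.Client(api_key=os.environ["LB_API_KEY"])
dataset = client.get_dataset("DATASET_ID")

os.makedirs("labelbox_images", exist_ok=True)
for row in dataset.data_rows():                      # each data row exposes a hosted image URL
    img_bytes = requests.get(row.row_data, timeout=30).content
    with open(os.path.join("labelbox_images", f"{row.uid}.jpg"), "wb") as f:
        f.write(img_bytes)

fd = fastdup.create(work_dir="labelbox_workdir", input_dir="labelbox_images")
fd.run()
fd.vis.duplicates_gallery()
```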
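For the Torchvision Datasets row, the only extra step is materializing the dataset to image files, since fastdup reads from disk. A short sketch using CIFAR10 purely as an example; any torchvision dataset that yields PIL images works the same way:

```python
# Sketch: export a torchvision dataset to disk, then run fastdup on the folder.
import os
import fastdup
from torchvision.datasets import CIFAR10

dataset = CIFAR10(root="./data", train=True, download=True)   # yields (PIL.Image, label) pairs

os.makedirs("cifar10_images", exist_ok=True)
for i, (img, label) in enumerate(dataset):
    img.save(f"cifar10_images/{label}_{i}.png")

fd = fastdup.create(work_dir="cifar10_workdir", input_dir="cifar10_images")
fd.run()
```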
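The TensorFlow Datasets row follows the same pattern, with the decoded arrays written out via Pillow first. A sketch with `tf_flowers` as a stand-in dataset:

```python
# Sketch: export a TensorFlow dataset to disk, then run fastdup on the folder.
import os
import fastdup
import tensorflow_datasets as tfds
from PIL import Image

ds = tfds.load("tf_flowers", split="train", as_supervised=True)

os.makedirs("tfds_images", exist_ok=True)
for i, (image, label) in enumerate(tfds.as_numpy(ds)):        # numpy (image, label) pairs
    Image.fromarray(image).save(f"tfds_images/{int(label)}_{i}.jpg")

fd = fastdup.create(work_dir="tfds_workdir", input_dir="tfds_images")
fd.run()
```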


@@ -529,7 +640,7 @@ The notebooks in this section show how you can load data from various sources
 </a>
 </td>
 <td rowspan="4">
-<b>➡️ Use Your Own Feature Vectors:</b> Read fastdup generated feature vectors in Python and use them for downstream processing, or run fastdup on your feature vectors.
+<b>➡️ Use Your Own Feature Vectors:</b> Run fastdup on pre-computed feature vectors and surface data quality issues.
 </td>
 <td align="center" width="80">
 <a href="https://nbviewer.org/github/visual-layer/fastdup/blob/main/examples/feature_vectors.ipynb">
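On the reworded feature-vectors cell above: the keyword for passing a pre-computed matrix has changed across fastdup releases, so the `embeddings=` argument below is an assumption and the linked feature_vectors.ipynb shows the exact signature. The file list and the float32 matrix must line up row for row. A rough sketch:

```python
# Rough sketch: hand fastdup a float32 feature matrix instead of raw images.
# The `embeddings=` keyword is assumed here; check the notebook for the exact API.
import numpy as np
import pandas as pd
import fastdup

n_images, dim = 1000, 576
filenames = [f"images/img_{i}.jpg" for i in range(n_images)]    # placeholder paths
features = np.random.rand(n_images, dim).astype("float32")      # stand-in for your embeddings

fd = fastdup.create(work_dir="embeddings_workdir")
fd.run(annotations=pd.DataFrame({"filename": filenames}), embeddings=features)

fd.vis.similarity_gallery()    # nearest-neighbor pairs in the embedding space
```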
@@ -673,35 +784,35 @@ The notebooks in this section show how you can load data from various sources
 <tr>
 <td rowspan="4" width="160">
 <a href="https://visual-layer.readme.io/docs/running-over-extracted-features">
-<img src="gallery/surveillance_thumbnail.jpg" width="200">
+<img src="gallery/caption_thumbnail.jpg" width="200">
 </a>
 </td>
 <td rowspan="4">
-<b>📑 Captioning with BLIP:</b> Enrich your dataset by captioning them using <a href="https://github.com/salesforce/BLIP">BLIP</a>.
+<b>📑 Image Captioning & Visual Question Answering (VQA):</b> Enrich your dataset by captioning images using the <a href="https://github.com/salesforce/BLIP">BLIP</a>, <a href="https://github.com/salesforce/LAVIS/tree/main/projects/blip2">BLIP-2</a>, or <a href="https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts">ViT-GPT2</a> model. Alternatively, use VQA models and ask questions about the content of your images with the <a href="https://github.com/dandelin/ViLT">ViLT-b32</a> or <a href="https://huggingface.co/nateraw/vit-age-classifier">ViT-Age</a> model.
 </td>
 <td align="center" width="80">
-<a href="https://nbviewer.org/github/visual-layer/fastdup/blob/main/examples/surveillance_videos.ipynb">
+<a href="https://nbviewer.org/github/visual-layer/fastdup/blob/main/examples/caption_generation.ipynb">
 <img src="./gallery/nbviewer_logo.png" height="30">
 </a>
 </td>
 </tr>
 <tr>
 <td align="center">
-<a href="https://github.com/visual-layer/fastdup/blob/main/examples/surveillance_videos.ipynb">
+<a href="https://github.com/visual-layer/fastdup/blob/main/examples/caption_generation.ipynb">
 <img src="./gallery/github_logo.png" height="25">
 </a>
 </td>
 </tr>
 <tr>
 <td align="center">
-<a href="https://colab.research.google.com/github/visual-layer/fastdup/blob/main/examples/surveillance_videos.ipynb">
+<a href="https://colab.research.google.com/github/visual-layer/fastdup/blob/main/examples/caption_generation.ipynb">
 <img src="./gallery/colab_logo.png" height="20">
 </a>
 </td>
 </tr>
 <tr>
 <td align="center">
-<a href="https://kaggle.com/kernels/welcome?src=https://github.com/visual-layer/fastdup/blob/main/examples/surveillance_videos.ipynb">
+<a href="https://kaggle.com/kernels/welcome?src=https://github.com/visual-layer/fastdup/blob/main/examples/caption_generation.ipynb">
 <img src="./gallery/kaggle_logo.png" height="25">
 </a>
 </td>
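The captioning and VQA models named in the updated cell are all available as Hugging Face `transformers` pipelines. A minimal sketch, independent of how caption_generation.ipynb wires the results back into fastdup; the image path is a placeholder:

```python
# Minimal sketch: caption an image with BLIP and ask a question with ViLT,
# both via Hugging Face transformers pipelines. "images/example.jpg" is a placeholder.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner("images/example.jpg"))     # e.g. [{'generated_text': 'a dog lying on the grass'}]

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
print(vqa(image="images/example.jpg", question="How many people are in the photo?"))
```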
