
Commit 0cdc1e3

update readme for v0.1
1 parent ee4664b commit 0cdc1e3

1 file changed: +4 −5 lines changed


README.md

Lines changed: 4 additions & 5 deletions
@@ -25,6 +25,7 @@ MetaCLIP is trained w/ face blurred images.
 ```
 
 ## Updates
+* 08/15/2024: [v0.1](https://github.com/facebookresearch/MetaCLIP/releases/tag/v0.1) released.
 * 04/25/2024: 🔥 paper [MoDE: CLIP Data Experts via Clustering](https://arxiv.org/abs/2404.16030) is accepted by CVPR 2024 with [code](mode/README.md) released.
 * 01/18/2024: 🔥 add [code](metaclip/README_metadata.md) for building metadata.
 * 01/16/2024: 🔥 paper accepted by ICLR as [spotlight presentation](https://openreview.net/group?id=ICLR.cc/2024/Conference#tab-accept-spotlight).
@@ -48,7 +49,7 @@ MetaCLIP is trained w/ face blurred images.
 ## Quick Start
 The pre-trained MetaCLIP models are available in
 <details>
-<summary>[Huggingface](https://huggingface.co/models?other=metaclip)</summary>
+<summary>Huggingface</summary>
 
 ```python
 from PIL import Image
@@ -69,7 +70,7 @@ print("Label probs:", text_probs)
 </details>
 
 <details>
-<summary>[OpenCLIP](https://github.com/mlfoundations/open_clip) (or this repo)</summary>
+<summary>This repo or (OpenCLIP)</summary>
 
 ```python
 import torch
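The `text_probs` printed in the README snippets above are computed the standard CLIP way: cosine similarity between the image embedding and each text embedding, scaled by the model's logit scale (conventionally around 100), then softmaxed over the candidate captions. A minimal sketch of that step, using NumPy and random vectors in place of real model features (the 512 feature dimension and 3 captions are illustrative):

```python
import numpy as np

def label_probs(image_feat, text_feats, scale=100.0):
    """CLIP-style label probabilities: cosine similarity between the image
    embedding and each text embedding, scaled, then softmaxed."""
    img = image_feat / np.linalg.norm(image_feat)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = scale * (txt @ img)        # one logit per candidate caption
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
probs = label_probs(rng.normal(size=512), rng.normal(size=(3, 512)))
print("Label probs:", probs)
```

With actual MetaCLIP features in place of the random vectors, `probs` is the distribution the snippets print as `text_probs`.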
@@ -202,10 +203,8 @@ Please cite our paper (accepted by ICLR2024 as spotlight presentation) if MetaCL
 The training code is developed based on [OpenCLIP](https://github.com/mlfoundations/open_clip), modified to the vanilla CLIP training setup.
 
 ## TODO
-- v0.1 code release;
 - refactor openclip as v0.2;
-- pip installation;
-- (welcome your use cases or suggestions to update this codebase regularly)
+- pip installation of metaclip package;
 
 
 ## License
