README.md
17 additions & 4 deletions
@@ -1,9 +1,8 @@
 # Visual Token Matching
 
-This repository contains official code for [Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching](https://openreview.net/forum?id=88nT0j5jAn) (ICLR 2023 oral).
+(News) Our paper received the [Outstanding Paper Award](https://blog.iclr.cc/2023/03/21/announcing-the-iclr-2023-outstanding-paper-award-recipients/) in ICLR 2023!
 
-Currently, we only include codes for our model architecture and episodic training.
-We will release remaining codes (fine-tuning and evaluation) soon.
+This repository contains official code for [Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching](https://openreview.net/forum?id=88nT0j5jAn) (ICLR 2023 oral).
 
 ## Setup
 1. Download Taskonomy Dataset (tiny split) from the official github page https://github.com/StanfordVL/taskonomy/tree/master/data.
@@ -35,10 +34,24 @@ We will release remaining codes (fine-tuning and evaluation) soon.
 * We used `beit_base_patch16_224_pt22k` checkpoint for our experiment.
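For reference, a minimal sketch of inspecting such a BEiT checkpoint with PyTorch is shown below. It assumes the `beit_base_patch16_224_pt22k.pth` file from the official BEiT release has already been downloaded to a local path; both the path and the loading details are assumptions for illustration, not this repository's actual loading code.

```python
import torch

# Assumed local path to the downloaded BEiT checkpoint (not part of this repository).
ckpt_path = "checkpoints/beit_base_patch16_224_pt22k.pth"

# Load on CPU and unwrap the state dict. BEiT checkpoints typically store the
# weights under a "model" key; fall back to the raw object if that key is absent.
ckpt = torch.load(ckpt_path, map_location="cpu")
state_dict = ckpt.get("model", ckpt)

print(f"Loaded {len(state_dict)} parameter tensors")
```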