docs/dalib/benchmarks/re_identification.rst

We adopt the cross-dataset setting (another option is the cross-camera setting).

For a fair comparison, our model is trained with the standard cross-entropy loss and a triplet loss. We adopt the modified ResNet architecture from `Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification (ICLR 2020) <https://arxiv.org/pdf/2001.01526.pdf>`_.
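
The sketch below illustrates such a training objective in PyTorch: a backbone produces an embedding, a linear head produces identity logits, and the cross-entropy and triplet losses are summed. The model class, the 1:1 loss weighting, and the margin value are illustrative assumptions, not the library's exact implementation.

.. code-block:: python

    import torch.nn as nn

    class ReIDModel(nn.Module):
        """Illustrative re-ID model: a backbone embedding plus an identity classifier."""

        def __init__(self, backbone, num_identities, feature_dim=2048):
            super().__init__()
            self.backbone = backbone                    # e.g. a modified ResNet trunk
            self.head = nn.Linear(feature_dim, num_identities)

        def forward(self, x):
            f = self.backbone(x)                        # (batch, feature_dim) embedding
            return self.head(f), f                      # identity logits and embedding

    ce = nn.CrossEntropyLoss()
    triplet = nn.TripletMarginLoss(margin=0.3)          # margin value is an assumption

    def reid_loss(model, anchor, positive, negative, labels):
        """Cross-entropy on identity logits plus a triplet loss on the embeddings."""
        logits, f_a = model(anchor)
        _, f_p = model(positive)
        _, f_n = model(negative)
        return ce(logits, labels) + triplet(f_a, f_p, f_n)
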
As we are given unlabelled samples from the target domain, we can utilize clustering algorithms to produce pseudo labels on the target domain and then use them as supervision signals to perform self-training. This simple method turns out to be a strong baseline. We use ``Baseline_Cluster`` to represent this baseline in our results.
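
As a rough sketch of this clustering step (an assumption-laden illustration, not the exact ``Baseline_Cluster`` code), one can extract target-domain embeddings, cluster them, and keep the cluster assignments as pseudo labels:

.. code-block:: python

    import numpy as np
    import torch
    import torch.nn.functional as F
    from sklearn.cluster import DBSCAN

    @torch.no_grad()
    def generate_pseudo_labels(model, target_loader, eps=0.6, min_samples=4):
        """Cluster target-domain embeddings and return pseudo identity labels.

        ``eps``, ``min_samples``, and the choice of DBSCAN are illustrative
        assumptions; samples that DBSCAN marks as noise (label -1) are dropped.
        """
        model.eval()
        features = []
        for images, _ in target_loader:                  # target labels are unused
            _, f = model(images)                         # embedding from the model above
            features.append(F.normalize(f, dim=1).cpu().numpy())
        features = np.concatenate(features, axis=0)
        pseudo_labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
        keep = pseudo_labels >= 0                         # discard noise points
        return pseudo_labels, keep

The retained pseudo labels then play the role of identity annotations in the cross-entropy and triplet losses above, and the clustering is typically repeated every few epochs as the features improve.
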
- [Learning Without Forgetting (LWF, ECCV 2016)](https://arxiv.org/abs/1606.09282)
- [Bi-tuning of Pre-trained Representations (Bi-Tuning)](https://arxiv.org/abs/2011.06182)

## Experiment and Results

If you use these methods in your research, please consider citing.

```
@inproceedings{LWF,
    author = {Zhizhong Li and Derek Hoiem},
    title = {Learning without Forgetting},
    booktitle = {ECCV},
    year = {2016}
}
@inproceedings{L2SP,
    title={Explicit inductive bias for transfer learning with convolutional networks},
    author={Xuhong, LI and Grandvalet, Yves and Davoine, Franck},
    booktitle={ICML},
    year={2018}
}
@inproceedings{BSS,
    title={Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning},
    author={Chen, Xinyang and Wang, Sinan and Fu, Bo and Long, Mingsheng and Wang, Jianmin},
    booktitle={NeurIPS},
    year={2019}
}
@inproceedings{DELTA,
    title={Delta: Deep learning transfer using feature map with attention for convolutional networks},
    author={Li, Xingjian and Xiong, Haoyi and Wang, Hanchao and Rao, Yuxuan and Liu, Liping and Chen, Zeyu and Huan, Jun},
    booktitle={ICLR},
    year={2019}
}
@inproceedings{StocNorm,
    title={Stochastic Normalization},
    author={Kou, Zhi and You, Kaichao and Long, Mingsheng and Wang, Jianmin},
    booktitle={NeurIPS},
    year={2020}
}
@inproceedings{CoTuning,
    title={Co-Tuning for Transfer Learning},
    author={You, Kaichao and Kou, Zhi and Long, Mingsheng and Wang, Jianmin},
    booktitle={NeurIPS},
    year={2020}
}
@article{BiTuning,
    title={Bi-tuning of Pre-trained Representations},
    author={Zhong, Jincheng and Wang, Ximei and Kou, Zhi and Wang, Jianmin and Long, Mingsheng},
    journal={arXiv preprint arXiv:2011.06182},
    year={2020}
}
```

Here are a few projects that are built on Trans-Learn. They are examples of how to use Trans-Learn as a library to facilitate your own research.

## Projects by [THUML](https://github.com/thuml)
- [Self-Tuning for Data-Efficient Deep Learning (ICML 2021)](http://ise.thss.tsinghua.edu.cn/~mlong/doc/Self-Tuning-for-Data-Efficient-Deep-Learning-icml21.pdf)

- [Unsupervised Data Augmentation for Consistency Training (uda, NIPS 2020)](https://proceedings.neurips.cc/paper/2020/file/44feb0096faa8326192570788b38c1d1-Paper.pdf)
- [FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence (FixMatch, NIPS 2020)](https://proceedings.neurips.cc/paper/2020/file/f7ac67a9aa8d255282de7d11391e1b69-Paper.pdf)
- [Self-Tuning for Data-Efficient Deep Learning (self-tuning, ICML 2021)](http://ise.thss.tsinghua.edu.cn/~mlong/doc/Self-Tuning-for-Data-Efficient-Deep-Learning-icml21.pdf)
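
For intuition, here is a minimal sketch of the FixMatch-style consistency loss listed above: confident predictions on weakly augmented unlabeled images become hard pseudo labels for their strongly augmented views. The confidence threshold and the assumption that `model` returns plain logits are illustrative choices, not this repository's exact implementation.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    """FixMatch-style consistency loss on a batch of unlabeled images.

    `x_weak` / `x_strong` are weakly / strongly augmented views of the same
    images; only predictions above `threshold` contribute to the loss.
    """
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)          # predict on the weak view
        max_probs, pseudo_labels = probs.max(dim=1)
        mask = (max_probs >= threshold).float()          # keep confident samples only
    logits_strong = model(x_strong)
    loss = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (loss * mask).mean()
```

In training, this term is added to the supervised cross-entropy loss on the labeled subset, usually with a weighting coefficient.
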
## Experiments and Results
### SSL with supervised pre-trained model
The shell files give the scripts to reproduce our [results](benchmark.md) with the specified hyper-parameters. For example, if you want to run the baseline on CUB200 with 15% labeled samples, use the following script:

```shell script
# SSL with ResNet50 backbone on CUB200.
# Assume you have put the datasets under the path `data/cub200`,
# or you are glad to download the datasets automatically from the Internet to this path
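# The exact command is given in the shell files; the line below is only an
# illustrative guess (script name and flags are assumptions, not the repo's CLI).
CUDA_VISIBLE_DEVICES=0 python baseline.py data/cub200 -d CUB200 --label-ratio 15 --seed 0 --log logs/baseline/cub200_15
```
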
If you use these methods in your research, please consider citing.
```
@inproceedings{pi-model,
    title={Temporal ensembling for semi-supervised learning},
    author={Laine, Samuli and Aila, Timo},
    booktitle={ICLR},
    year={2017}
}
@inproceedings{mean-teacher,
    title={Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results},
    author={Tarvainen, Antti and Valpola, Harri},
    booktitle={NIPS},
    year={2017}
}
@inproceedings{uda,
    title={Unsupervised data augmentation for consistency training},
    author={Xie, Qizhe and Dai, Zihang and Hovy, Eduard and Luong, Minh-Thang and Le, Quoc V},
    booktitle={NIPS},
    year={2020}
}
@inproceedings{fixmatch,
    title={Fixmatch: Simplifying semi-supervised learning with consistency and confidence},
    author={Sohn, Kihyuk and Berthelot, David and Li, Chun-Liang and Zhang, Zizhao and Carlini, Nicholas and Cubuk, Ekin D and Kurakin, Alex and Zhang, Han and Raffel, Colin},
    booktitle={NIPS},
    year={2020}
}
@inproceedings{self-tuning,
    title={Self-tuning for data-efficient deep learning},
    author={Wang, Ximei and Gao, Jinghan and Long, Mingsheng and Wang, Jianmin},
    booktitle={ICML},
    year={2021}
}
```