
Commit 2d85a4c

update ckpt links
1 parent 8df4270 commit 2d85a4c

File tree

README.md
detection/README.md
segmentation/README.md

3 files changed: +17, -10 lines changed

README.md

Lines changed: 13 additions & 6 deletions
@@ -15,7 +15,7 @@ Ao Wang, Hui Chen, Zijia Lin, Hengjun Pu, and Guiguang Ding\
 <summary>
 <font size="+1">Abstract</font>
 </summary>
-Recently, lightweight Vision Transformers (ViTs) demonstrate superior performance and lower latency compared with lightweight Convolutional Neural Networks (CNNs) on resource-constrained mobile devices. This improvement is usually attributed to the multi-head self-attention module, which enables the model to learn global representations. However, the architectural disparities between lightweight ViTs and lightweight CNNs have not been adequately examined. In this study, we revisit the efficient design of lightweight CNNs and emphasize their potential for mobile devices. We incrementally enhance the mobile-friendliness of a standard lightweight CNN, specifically MobileNetV3, by integrating the efficient architectural choices of lightweight ViTs. To this end, we present a new family of pure lightweight CNNs, namely RepViT. Extensive experiments show that RepViT outperforms existing state-of-the-art lightweight ViTs and exhibits favorable latency in various vision tasks. On ImageNet, RepViT achieves over 80\% top-1 accuracy with nearly 1ms latency on an iPhone 12, which is the first time for a lightweight model, to the best of our knowledge. Our largest model, RepViT-M3, obtains 81.4\% accuracy with only 1.3ms latency.
+Recently, lightweight Vision Transformers (ViTs) demonstrate superior performance and lower latency compared with lightweight Convolutional Neural Networks (CNNs) on resource-constrained mobile devices. This improvement is usually attributed to the multi-head self-attention module, which enables the model to learn global representations. However, the architectural disparities between lightweight ViTs and lightweight CNNs have not been adequately examined. In this study, we revisit the efficient design of lightweight CNNs and emphasize their potential for mobile devices. We incrementally enhance the mobile-friendliness of a standard lightweight CNN, specifically MobileNetV3, by integrating the efficient architectural choices of lightweight ViTs. This ends up with a new family of pure lightweight CNNs, namely RepViT. Extensive experiments show that RepViT outperforms existing state-of-the-art lightweight ViTs and exhibits favorable latency in various vision tasks. On ImageNet, RepViT achieves over 80\% top-1 accuracy with nearly 1ms latency on an iPhone 12, which is the first time for a lightweight model, to the best of our knowledge. Our largest model, RepViT-M3, obtains 81.4\% accuracy with only 1.3ms latency.
 </details>

 <br>
@@ -26,9 +26,9 @@ Recently, lightweight Vision Transformers (ViTs) demonstrate superior performanc

 | Model | Top-1 (300)| #params | MACs | Latency | Ckpt | Core ML | Log |
 |:---------------|:----:|:---:|:--:|:--:|:--:|:--:|:--:|
-| RepViT-M1 | 78.5 | 5.1M | 0.8G | 0.9ms | [M1](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m1_distill_300.pth) | [M1](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m1_224.mlmodel) | [M1](./logs/repvit_m1_train.log) |
-| RepViT-M2 | 80.6 | 8.8M | 1.4G | 1.1ms | [M2](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m2_distill_300.pth) | [M2](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m2_224.mlmodel) | [M2](./logs/repvit_m2_train.log) |
-| RepViT-M3 | 81.4 | 10.1M | 1.9G | 1.3ms | [M3](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m3_distill_300.pth) | [M3](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m3_224.mlmodel) | [M3](./logs/repvit_m3_train.log) |
+| RepViT-M1 | 78.5 | 5.1M | 0.8G | 0.9ms | [M1](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m1_distill_300.pth) | [M1](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m1_224.mlmodel) | [M1](./logs/repvit_m1_train.log) |
+| RepViT-M2 | 80.6 | 8.2M | 1.3G | 1.1ms | [M2](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m2_distill_300.pth) | [M2](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m2_224.mlmodel) | [M2](./logs/repvit_m2_train.log) |
+| RepViT-M3 | 81.4 | 10.1M | 1.9G | 1.3ms | [M3](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m3_distill_300.pth) | [M3](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m3_224.mlmodel) | [M3](./logs/repvit_m3_train.log) |

 Tips: Convert a training-time RepViT into the inference-time structure
 ```
@@ -102,7 +102,14 @@ Thanks for the great implementations!

 ## Citation

-If our code or models help your work, please cite our papers:
+If our code or models help your work, please cite our paper:
 ```BibTeX
-
+@misc{wang2023repvit,
+      title={RepViT: Revisiting Mobile CNN From ViT Perspective},
+      author={Ao Wang and Hui Chen and Zijia Lin and Hengjun Pu and Guiguang Ding},
+      year={2023},
+      eprint={2307.09283},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV}
+}
```
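
Note: the unchanged diff context above ends at the README's tip about converting a training-time RepViT into the inference-time structure. As a rough illustration of what that kind of conversion involves, here is a minimal Conv + BatchNorm fusion sketch in PyTorch; the function name and module layout are assumptions for illustration, not the repository's actual helper.

```python
# Illustrative sketch only: fold a BatchNorm2d that follows a Conv2d into the
# convolution's weights, the basic "training-time -> inference-time" step used
# by reparameterized models. Not the repository's actual conversion utility.
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a single Conv2d equivalent to `conv` followed by `bn` in eval mode."""
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        stride=conv.stride, padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    # BN scales each output channel by gamma / sqrt(running_var + eps).
    scale = bn.weight.data / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    # y = scale * (Wx + b - mean) + beta  =>  new bias = beta + (b - mean) * scale
    fused.bias.data = bn.bias.data + (bias - bn.running_mean) * scale
    return fused
```

Applying such a fusion layer by layer yields a plain convolutional network with the same outputs as the training-time model but lower inference latency.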

detection/README.md

Lines changed: 2 additions & 2 deletions
@@ -5,8 +5,8 @@ Detection and instance segmentation on MS COCO 2017 is implemented based on [MMD
 ## Models
 | Model | $AP^b$ | $AP_{50}^b$ | $AP_{75}^b$ | $AP^m$ | $AP_{50}^m$ | $AP_{75}^m$ | Latency | Ckpt | Log |
 |:---------------|:----:|:---:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
-| RepViT-M2 | 39.8 | 61.9 | 43.5 | 37.2 | 58.8 | 40.1 | 4.9ms | [M2](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m2_coco.pth) | [M2](./detection/logs/repvit_m2_coco.json) |
-| RepViT-M3 | 41.1 | 63.1 | 45.0 | 38.3 | 60.4 | 41.0 | 5.9ms | [M3](https://github.com/jameslahm/RepViT/releases/download/untagged-17a1aad5598a25485a4e/repvit_m3_coco.pth) | [M3](./detection/logs/repvit_m3_coco.json) |
+| RepViT-M2 | 39.8 | 61.9 | 43.5 | 37.2 | 58.8 | 40.1 | 4.9ms | [M2](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m2_coco.pth) | [M2](./logs/repvit_m2_coco.json) |
+| RepViT-M3 | 41.1 | 63.1 | 45.0 | 38.3 | 60.4 | 41.0 | 5.9ms | [M3](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m3_coco.pth) | [M3](./logs/repvit_m3_coco.json) |

 ## Installation

segmentation/README.md

Lines changed: 2 additions & 2 deletions
@@ -5,8 +5,8 @@ Segmentation on ADE20K is implemented based on [MMSegmentation](https://github.c
 ## Models
 | Model | mIoU | Latency | Ckpt | Log |
 |:---------------|:----:|:---:|:--:|:--:|
-| RepViT-M2 | 40.6 | 4.9ms | [M2](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m2_ade20k.pth) | [M2](./logs/repvit_m2_ade20k.json) |
-| RepViT-M3 | 42.8 | 5.9ms | [M3](https://github.com/jameslahm/RepViT/releases/download/untagged-75eb9e1fea235b938f50/repvit_m3_ade20k.pth) | [M3](./logs/repvit_m3_ade20k.json) |
+| RepViT-M2 | 40.6 | 4.9ms | [M2](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m2_ade20k.pth) | [M2](./logs/repvit_m2_ade20k.json) |
+| RepViT-M3 | 42.8 | 5.9ms | [M3](https://github.com/jameslahm/RepViT/releases/download/v1.0/repvit_m3_ade20k.pth) | [M3](./logs/repvit_m3_ade20k.json) |

 The backbone latency is measured with image crops of 512x512 on iPhone 12 by Core ML Tools.
