
Commit 6db80d1: update readme

1 parent: c716fe5
File tree: 1 file changed (+8, -133)

README.md: 8 additions & 133 deletions
@@ -3,6 +3,7 @@
 Differences between original repository and fork:
 
 * Compatibility with PyTorch >=2.0. (🔥)
+* Original pretrained models from GitHub [releases page](https://github.com/clibdev/MODNet/releases). (🔥)
 * Installation with [requirements.txt](requirements.txt) file.
 
 # Installation
@@ -11,141 +12,15 @@ Differences between original repository and fork:
 pip install -r requirements.txt
 ```
 
+# Pretrained models
+
+| Name                  | Link                                                                                                             |
+|-----------------------|------------------------------------------------------------------------------------------------------------------|
+| MODNet (Photographic) | [PyTorch](https://github.com/clibdev/MODNet/releases/latest/download/modnet_photographic_portrait_matting.ckpt) |
+| MODNet (Webcam)       | [PyTorch](https://github.com/clibdev/MODNet/releases/latest/download/modnet_webcam_portrait_matting.ckpt)       |
+
 # Inference
 
 ```shell
 python -m demo.image_matting.colab.inference --ckpt-path pretrained/modnet_photographic_portrait_matting.ckpt --input-path data/images --output-path .
 ```
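
For reference, a minimal PyTorch (>=2.0) sketch of running one of the checkpoints above outside the demo script. It assumes the upstream layout (`MODNet` in `src/models/modnet.py`, a `forward(img, inference)` returning `(semantic, detail, matte)`, and checkpoints saved from an `nn.DataParallel`-wrapped model, as in the repo's demos); verify these names against the repository before relying on it.

```python
# Minimal sketch (assumptions: MODNet class in src/models/modnet.py,
# forward(img, inference) returning (semantic, detail, matte), and a
# checkpoint saved from an nn.DataParallel-wrapped model, as in the demos).
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

from src.models.modnet import MODNet

modnet = nn.DataParallel(MODNet(backbone_pretrained=False))
modnet.load_state_dict(torch.load(
    'pretrained/modnet_photographic_portrait_matting.ckpt', map_location='cpu'))
modnet.eval()

to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # demo normalization
])
im = to_tensor(Image.open('portrait.jpg').convert('RGB')).unsqueeze(0)

# MODNet expects spatial sizes divisible by 32; 512 is the demos' reference size.
h, w = im.shape[2:]
im_resized = F.interpolate(im, size=(512, 512), mode='area')

with torch.no_grad():
    _, _, matte = modnet(im_resized, True)  # inference mode: only the matte is used

matte = F.interpolate(matte, size=(h, w), mode='area')[0, 0]  # back to input size
Image.fromarray((matte.numpy() * 255).astype('uint8')).save('matte.png')
```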
-
-<h2 align="center">MODNet: Trimap-Free Portrait Matting in Real Time</h2>
-
-<div align="center"><i>MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition (AAAI 2022)</i></div>
-
-<br />
-
-<img src="doc/gif/homepage_demo.gif" width="100%">
-
-<div align="center">MODNet is a model for <b>real-time</b> portrait matting with <b>only RGB image input</b></div>
-<div align="center">MODNet is a <b>real-time</b> portrait matting model that takes <b>only an RGB image</b> as input</div>
-
-<br />
-
-<p align="center">
-<a href="#online-application-在线应用">Online Application (在线应用)</a> |
-<a href="#research-demo">Research Demo</a> |
-<a href="https://arxiv.org/pdf/2011.11961.pdf">AAAI 2022 Paper</a> |
-<a href="https://youtu.be/PqJ3BRHX3Lc">Supplementary Video</a>
-</p>
-
-<p align="center">
-<a href="#community">Community</a> |
-<a href="#code">Code</a> |
-<a href="#ppm-benchmark">PPM Benchmark</a> |
-<a href="#license">License</a> |
-<a href="#acknowledgement">Acknowledgement</a> |
-<a href="#citation">Citation</a> |
-<a href="#contact">Contact</a>
-</p>
-
----
-
-## Online Application (在线应用)
-
-The model used in the online demo (unpublished) is only **7M**! It processes **2K**-resolution images **fast** on common PCs and mobiles, with **better** results than the research demos!
-Please try online portrait image matting on [my personal homepage](https://zhke.io/#/?modnet_demo) for fun!
-
-The model used in the online application (unpublished) is only **7M** in size! It can process **2K**-resolution images **fast** on ordinary PCs or mobile devices, with **better** results than the research demos!
-Please try online image matting via [my homepage](https://zhke.io/#/?modnet_demo)!
-
-## Research Demo
-
-All the models behind the following demos are trained on the datasets mentioned in [our paper](https://arxiv.org/pdf/2011.11961.pdf).
-
-### Portrait Image Matting
-We provide an [online Colab demo](https://colab.research.google.com/drive/1GANpbKT06aEFiW-Ssx0DQnnEADcXwQG6?usp=sharing) for portrait image matting.
-It allows you to upload portrait images and predict/visualize/download the alpha mattes.
-
-<!-- <img src="doc/gif/image_matting_demo.gif" width='40%'> -->
-
-### Portrait Video Matting
-We provide two real-time portrait video matting demos based on WebCam. When using the demos, you can move the WebCam around at will.
-If you have an Ubuntu system, we recommend trying the [offline demo](demo/video_matting/webcam) to get a higher *fps*. Otherwise, you can access the [online Colab demo](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing).
-We also provide an [offline demo](demo/video_matting/custom) that allows you to process custom videos.
-
-<!-- <img src="doc/gif/video_matting_demo.gif" width='60%'> -->
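
As a rough illustration of what such a webcam demo does, here is a hedged OpenCV loop. It assumes a `modnet` loaded as in the earlier sketch, plus the demos' [-1, 1] normalization and 32-divisible frame sizes; the real demos add threading and GPU handling.

```python
# Sketch of a webcam matting loop (assumes `modnet` loaded as in the earlier
# snippet; frame size and normalization mirror the repo's demos, but verify).
import cv2
import numpy as np
import torch

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (672, 512))            # both sides divisible by 32
    rgb = cv2.cvtColor(small, cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0)
    x = x / 127.5 - 1.0                              # [-1, 1], as in the demos
    with torch.no_grad():
        _, _, matte = modnet(x, True)
    m = cv2.resize(matte[0, 0].numpy(), (w, h))[..., None]
    comp = (frame * m + 255 * (1 - m)).astype(np.uint8)  # composite on white
    cv2.imshow('MODNet matte', comp)
    if cv2.waitKey(1) & 0xFF == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```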
-
-## Community
-
-We share some cool applications/extensions of MODNet built by the community.
-
-<!-- - **WebGUI for Portrait Image Matting** -->
-<!-- You can try [this WebGUI](https://www.gradio.app/hub/aliabd/modnet) (hosted on [Gradio](https://www.gradio.app/)) for portrait image matting from your browser without code! -->
-
-- **Colab Demo of Bokeh (Blur Background)**
-  You can try [this Colab demo](https://colab.research.google.com/github/eyaler/avatars4all/blob/master/yarok.ipynb) (built by [@eyaler](https://github.com/eyaler)) to blur the background based on MODNet!
-
-- **ONNX Version of MODNet**
-  You can convert the pre-trained MODNet to an ONNX model by using [this code](onnx) (provided by [@manthan3C273](https://github.com/manthan3C273)). You can also try [this Colab demo](https://colab.research.google.com/drive/1P3cWtg8fnmu9karZHYDAtmm1vj1rgA-f?usp=sharing) for MODNet image matting (ONNX version).
-
-- **TorchScript Version of MODNet**
-  You can convert the pre-trained MODNet to a TorchScript model by using [this code](torchscript) (provided by [@yarkable](https://github.com/yarkable)). A rough sketch of both the ONNX and TorchScript conversions follows this list.
-
-- **TensorRT Version of MODNet**
-  You can access [this GitHub repository](https://github.com/jkjung-avt/tensorrt_demos) to try the TensorRT version of MODNet (provided by [@jkjung-avt](https://github.com/jkjung-avt)).
-
-- **Docker Container for MODNet**
-  You can access [this GitHub repository](https://github.com/nahidalam/modnet_docker) for a containerized version of MODNet with the Docker environment (provided by [@nahidalam](https://github.com/nahidalam)).
-
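
As a rough sketch of what those ONNX and TorchScript conversions involve (the linked community code is the authoritative version), MODNet can be wrapped so tracing sees a single-tensor forward. `MattingWrapper` is a hypothetical helper, and the fixed 512x512 example input is an assumption:

```python
# Illustrative export sketch; the linked onnx/ and torchscript/ code is the
# real reference. `modnet` is the nn.DataParallel-wrapped model loaded in the
# earlier sketch; MattingWrapper (hypothetical) fixes the `inference` flag so
# tracing sees a forward that takes one tensor and returns one tensor.
import torch

class MattingWrapper(torch.nn.Module):
    def __init__(self, modnet):
        super().__init__()
        self.modnet = modnet

    def forward(self, image):
        _, _, matte = self.modnet(image, True)
        return matte

wrapper = MattingWrapper(modnet.module).eval()   # unwrap nn.DataParallel
dummy = torch.randn(1, 3, 512, 512)

# TorchScript: trace with a fixed-size example input.
traced = torch.jit.trace(wrapper, dummy)
traced.save('modnet_ts.pt')

# ONNX: mark batch/height/width dynamic so other resolutions work at runtime.
torch.onnx.export(
    wrapper, dummy, 'modnet.onnx',
    input_names=['image'], output_names=['matte'],
    dynamic_axes={'image': {0: 'batch', 2: 'height', 3: 'width'},
                  'matte': {0: 'batch', 2: 'height', 3: 'width'}},
    opset_version=11,
)
```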
-There are some resources about MODNet from the community.
-- [Video from What's AI YouTube Channel](https://youtu.be/rUo0wuVyefU)
-- [Article from Louis Bouchard's Blog](https://www.louisbouchard.ai/remove-background/)
-
-## Code
-We provide the [code](src/trainer.py) for the MODNet training iterations, including:
-- **Supervised Training**: Train MODNet on a labeled matting dataset
-- **SOC Adaptation**: Adapt a trained MODNet to an unlabeled dataset
-
-In the code comments, we provide examples for using the functions; a condensed sketch follows below.
-
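
A condensed sketch of the supervised loop built on that code. The `supervised_training_iter` signature follows the usage examples in the `src/trainer.py` comments, and the dataloader is an assumption, so verify against the source:

```python
# Sketch of the supervised loop (signature per the comments in src/trainer.py;
# a dataloader yielding (image, trimap, gt_matte) batches is assumed).
import torch
from src.models.modnet import MODNet
from src.trainer import supervised_training_iter

modnet = torch.nn.DataParallel(MODNet()).cuda()
optimizer = torch.optim.SGD(modnet.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(40):
    for image, trimap, gt_matte in dataloader:
        semantic_loss, detail_loss, matte_loss = supervised_training_iter(
            modnet, optimizer, image.cuda(), trimap.cuda(), gt_matte.cuda())
    scheduler.step()

# SOC adaptation is analogous via soc_adaptation_iter(modnet, backup_modnet,
# optimizer, image) on unlabeled images; see that function's comments.
```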
-## PPM Benchmark
-The PPM benchmark is released in a separate repository: [PPM](https://github.com/ZHKKKe/PPM).
-
-## License
-The code, models, and demos in this repository (excluding GIF files under the folder `doc/gif`) are released under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
-
-## Acknowledgement
-We thank [@yzhou0919](https://github.com/yzhou0919), [@eyaler](https://github.com/eyaler), [@manthan3C273](https://github.com/manthan3C273), [@yarkable](https://github.com/yarkable), [@jkjung-avt](https://github.com/jkjung-avt), [@manzke](https://github.com/manzke), [@nahidalam](https://github.com/nahidalam), [the Gradio team](https://github.com/gradio-app/gradio), the [What's AI YouTube Channel](https://www.youtube.com/channel/UCUzGQrN-lyyc0BWTYoJM_Sg), and [Louis Bouchard's Blog](https://www.louisbouchard.ai) for their contributions to this repository and their cool applications/extensions/resources of MODNet.
-
-## Citation
-If this work helps your research, please consider citing:
-
-```bibtex
-@InProceedings{MODNet,
-  author = {Zhanghan Ke and Jiayu Sun and Kaican Li and Qiong Yan and Rynson W.H. Lau},
-  title = {MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition},
-  booktitle = {AAAI},
-  year = {2022},
-}
-```
-
-## Contact
-This repository is maintained by Zhanghan Ke ([@ZHKKKe](https://github.com/ZHKKKe)).
-For questions, please contact `[email protected]`.
-
-<!-- <img src="doc/gif/commercial_image_matting_model_result.gif" width='100%'> -->
