@@ -1,7 +1,7 @@
 <div align="center" markdown="1">
 <p>
   <a href="https://sony.github.io/model_optimization/" target="_blank">
-    <img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/mctHeader1-cropped.svg" width="1000"></a>
+    <img src="https://raw.githubusercontent.com/sony/model_optimization/refs/heads/main/docsrc/images/mctHeader1-cropped.svg" width="1000"></a>
 </p>

 ______________________________________________________________________
@@ -67,7 +67,7 @@ For further details, please see [Supported features and algorithms](#high-level-
 <div align="center">
 <p align="center">

-<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/mctDiagram_clean.svg" width="800">
+<img src="https://raw.githubusercontent.com/sony/model_optimization/refs/heads/main/docsrc/images/mctDiagram_clean.svg" width="800">
 </p>
 </div>

|
@@ -148,16 +148,16 @@ Currently, MCT is being tested on various Python, Pytorch and TensorFlow version
 ## <div align="center">Results</div>

 <p align="center">
-<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/Classification.png" width="200">
-<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/SemSeg.png" width="200">
-<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/PoseEst.png" width="200">
-<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/ObjDet.png" width="200">
+<img src="https://raw.githubusercontent.com/sony/model_optimization/refs/heads/main/docsrc/images/Classification.png" width="200">
+<img src="https://raw.githubusercontent.com/sony/model_optimization/refs/heads/main/docsrc/images/SemSeg.png" width="200">
+<img src="https://raw.githubusercontent.com/sony/model_optimization/refs/heads/main/docsrc/images/PoseEst.png" width="200">
+<img src="https://raw.githubusercontent.com/sony/model_optimization/refs/heads/main/docsrc/images/ObjDet.png" width="200">

 MCT can quantize an existing 32-bit floating-point model to an 8-bit fixed-point (or less) model without compromising accuracy.
 Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs average bit-width of weights (X-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.

 <p align="center">
-<img src="https://github.com/sony/model_optimization/blob/main/docsrc/images/torch_mobilenetv2.png" width="800">
+<img src="https://raw.githubusercontent.com/sony/model_optimization/refs/heads/main/docsrc/images/torch_mobilenetv2.png" width="800">

 For more results, please see [1]

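The README text touched by this diff states that MCT can quantize a 32-bit floating-point model down to 8-bit fixed-point. As a rough illustration of the underlying idea only, here is a minimal NumPy sketch of per-tensor symmetric 8-bit quantization; this is an assumption-laden toy, not MCT's actual API or algorithm.

```python
import numpy as np

def quantize_symmetric_int8(w: np.ndarray):
    """Toy per-tensor symmetric int8 quantization (illustrative only,
    not MCT's implementation).

    The largest weight magnitude is mapped to 127, and all values are
    rounded to the nearest step of size `scale`.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to float32 approximations."""
    return q.astype(np.float32) * scale

# Example: quantize a random float32 weight tensor and check the
# round-trip error, which is bounded by half a quantization step.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_symmetric_int8(w)
w_hat = dequantize(q, scale)
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Schemes like the mixed-precision and GPTQ results plotted above go well beyond this sketch (per-channel scales, bit-width search, gradient-based rounding), but the float-to-fixed-point mapping they start from is the same.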