
Commit e4aa511

Fix incorrect model information in MANIFEST.json and add test data for version-RFB (#553)
* Add missing version-RFB
* updated MANIFEST
* nit
* update
* update .json
* for example
* use string Python 3.10
* add yolov2.tar.gz

Signed-off-by: jcwchen <[email protected]>
1 parent bf28aa8 commit e4aa511

File tree

11 files changed: 55 additions, 28 deletions

.github/workflows/linux_ci.yml

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: [3.8]
+        python-version: ['3.10']
         architecture: ['x64']
 
     steps:

.github/workflows/windows_ci.yml

Lines changed: 1 addition & 3 deletions
@@ -17,7 +17,7 @@ jobs:
     runs-on: windows-latest
     strategy:
       matrix:
-        python-version: [3.8]
+        python-version: ['3.10']
         architecture: ['x64']
 
     steps:
@@ -33,8 +33,6 @@ jobs:
     - name: Install dependencies
       run: |
         python -m pip install --upgrade pip
-        # TODO: now ONNX only supports Protobuf <= 3.20.1
-        python -m pip install protobuf==3.20.1
         python -m pip install onnx onnxruntime requests py-cpuinfo
         # Print CPU info for debugging ONNX Runtime inference difference
         python -m cpuinfo
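The quotes around '3.10' in both workflows are the point of the "use string Python 3.10" change: a YAML parser reads the unquoted scalar 3.10 as a float, so the trailing zero is lost and the runner would be asked for Python "3.1". A minimal sketch of the pitfall, using plain Python in place of a YAML parser:

```python
# A YAML parser treats the unquoted scalar 3.10 as a float,
# so the trailing zero is lost; the quoted form stays a string.
as_float = float("3.10")   # roughly what `python-version: [3.8]`-style unquoted values become
as_string = "3.10"         # what `python-version: ['3.10']` yields

print(as_float)   # 3.1
print(as_string)  # 3.10
```

Versions like 3.8 happen to round-trip through a float, which is why the unquoted form worked until 3.10.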

ONNX_HUB_MANIFEST.json

Lines changed: 22 additions & 7 deletions
@@ -753,7 +753,7 @@
     {
         "model": "T5-decoder-with-lm-head",
         "model_path": "text/machine_comprehension/t5/model/t5-decoder-with-lm-head-12.onnx",
-        "onnx_version": "1.6",
+        "onnx_version": "1.7",
         "opset_version": 12,
         "metadata": {
             "model_sha": "235afca35d3c47a5fd7209844ac11f3157fe7263fd074197a45b7f536e40ea56",
@@ -794,13 +794,16 @@
                     "type": "tensor(float)"
                 }
             ]
-        }
+        },
+        "model_with_data_path": "text/machine_comprehension/t5/model/t5-decoder-with-lm-head-12.tar.gz",
+        "model_with_data_sha": "fa6edefa5951479f7b68bea693950a4ebc188f54c7971bf73f2b036b91bc8066",
+        "model_with_data_bytes": 287840759
     }
 },
 {
     "model": "T5-encoder",
     "model_path": "text/machine_comprehension/t5/model/t5-encoder-12.onnx",
-    "onnx_version": "1.6",
+    "onnx_version": "1.7",
     "opset_version": 12,
     "metadata": {
         "model_sha": "4523122d7cf0f50905694d84995633c8a0cc223da762d3eb2aaffa17251a6f60",
@@ -832,7 +835,10 @@
                 "type": "tensor(float)"
             }
         ]
-    }
+    },
+    "model_with_data_path": "text/machine_comprehension/t5/model/t5-encoder-12.tar.gz",
+    "model_with_data_sha": "434e2f691bd71c838bd4b68be47bdf32c899313fdebfe4b77c7bea0b2f52e831",
+    "model_with_data_bytes": 194535656
 }
 },
 {
@@ -1025,7 +1031,10 @@
                 "type": "tensor(float)"
             }
         ]
-    }
+    },
+    "model_with_data_path": "vision/body_analysis/ultraface/models/version-RFB-320.tar.gz",
+    "model_with_data_sha": "628d0dd3e0288adb821f211e13d4e97f6d6f4527237339606732dffa6f19d381",
+    "model_with_data_bytes": 2015397
 }
 },
 {
@@ -1074,7 +1083,10 @@
                 "type": "tensor(float)"
             }
         ]
-    }
+    },
+    "model_with_data_path": "vision/body_analysis/ultraface/models/version-RFB-640.tar.gz",
+    "model_with_data_sha": "07cc0e284b7924bd89f2ec103254ef3ab4b673fd12341153faaff07f3c1137e3",
+    "model_with_data_bytes": 4818743
 }
 },
 {
@@ -7446,7 +7458,10 @@
                 "type": "tensor(float)"
             }
         ]
-    }
+    },
+    "model_with_data_path": "vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.tar.gz",
+    "model_with_data_sha": "c34966e4e96165f8db6f2549f509b6a8bdfc01ddd978e6fd07daad4e665d5383",
+    "model_with_data_bytes": 191439022
 }
 },
 {
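Each new model_with_data_* triple pairs a .tar.gz archive with its SHA-256 digest and byte size, so a consumer can verify a download before unpacking it. A minimal verification sketch (the entry dict just mirrors one of the manifest additions above; the checks only run if the archive exists locally):

```python
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Mirrors one of the manifest entries added in this commit.
entry = {
    "model_with_data_path": "vision/body_analysis/ultraface/models/version-RFB-320.tar.gz",
    "model_with_data_sha": "628d0dd3e0288adb821f211e13d4e97f6d6f4527237339606732dffa6f19d381",
    "model_with_data_bytes": 2015397,
}

path = entry["model_with_data_path"]
if os.path.exists(path):
    # Cheap size check first, then the full digest.
    assert os.path.getsize(path) == entry["model_with_data_bytes"]
    assert sha256_of(path) == entry["model_with_data_sha"]
```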

text/machine_comprehension/t5/README.md

Lines changed: 4 additions & 4 deletions
@@ -12,10 +12,10 @@ understanding through the training of multiple tasks at once.
 
 ## Model
 
-|Model |Download | Compressed |ONNX version|Opset version|
-|-------------|:--------------|:--------------|:--------------|:--------------|
-|T5-encoder |[650.6 MB](model/t5-encoder-12.onnx) | [205.0 MB](model/t5-encoder-12.tar.gz)| 1.6 | 12
-|T5-decoder-with-lm-head |[304.9 MB](model/t5-decoder-with-lm-head-12.onnx) | [304.9 MB](model/t5-decoder-with-lm-head-12.tar.gz)| 1.6 | 12
+| Model | Download | Download (with sample test data) | ONNX version | Opset version |
+| ----------- | ---------- |--------------| -------------- | -------------- |
+|T5-encoder |[650.6 MB](model/t5-encoder-12.onnx) | [205.0 MB](model/t5-encoder-12.tar.gz)| 1.7 | 12
+|T5-decoder-with-lm-head |[304.9 MB](model/t5-decoder-with-lm-head-12.onnx) | [304.9 MB](model/t5-decoder-with-lm-head-12.tar.gz)| 1.7 | 12
 
 
 ### Source

vision/body_analysis/age_gender/README.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 Automatic age and gender classification has become relevant to an increasing amount of applications, particularly since the rise of social platforms and social media. Nevertheless, performance of existing methods on real-world images is still significantly lacking, especially when compared to the tremendous leaps in performance recently reported for the related task of face recognition.
 
 ## Models
-| Model (Caffe) | Download | ONNX version | Opset version | Dataset |
+| Model | Download | ONNX version | Opset version | Dataset |
 |:-------------|:--------------|:--------------|:--------------|:--------------|
 | [googlenet_age_adience](https://drive.google.com/drive/folders/1GeLTHzHALgTYFj2Q9o5aWdztA9WzoErx?usp=sharing) | [23 MB](models/age_googlenet.onnx) | 1.6 | 11 | Adience |
 | [googlenet_gender_adience](https://drive.google.com/drive/folders/1r0GroTfsF7VpLhcS3IxU-LmAh6rI6vbQ?usp=sharing) | [23 MB](models/gender_googlenet.onnx)| 1.6 | 11 | Adience |

vision/body_analysis/ultraface/README.md

Lines changed: 4 additions & 4 deletions
@@ -6,10 +6,10 @@
 This model is a lightweight facedetection model designed for edge computing devices.
 
 ## Model
-| Model| Download | ONNX version | Opset version |
-|----------------|:-----------|:-----------|:--------|
-|version-RFB-320|[1.21 MB](models/version-RFB-320.onnx)|1.4|9|
-|version-RFB-640|[1.51 MB](models/version-RFB-640.onnx)|1.4|9|
+| Model | Download | Download (with sample test data) | ONNX version | Opset version |
+| ------------- | ------------- | ------------- | ------------- | ------------- |
+|version-RFB-320| [1.21 MB](models/version-RFB-320.onnx) | [1.92 MB](models/version-RFB-320.tar.gz) | 1.4 | 9 |
+|version-RFB-640| [1.51 MB](models/version-RFB-640.onnx) | [4.59 MB](models/version-RFB-640.tar.gz) | 1.4 | 9 |
 
 ### Dataset
 The training set is the VOC format data set generated by using the cleaned widerface labels provided by [Retinaface](https://arxiv.org/pdf/1905.00641.pdf) in conjunction with the widerface [dataset](http://shuoyang1213.me/WIDERFACE/).
vision/body_analysis/ultraface/models/version-RFB-320.tar.gz

Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:628d0dd3e0288adb821f211e13d4e97f6d6f4527237339606732dffa6f19d381
+size 2015397
vision/body_analysis/ultraface/models/version-RFB-640.tar.gz

Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07cc0e284b7924bd89f2ec103254ef3ab4b673fd12341153faaff07f3c1137e3
+size 4818743
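The .tar.gz archives themselves live in Git LFS; what the repository actually tracks is a three-line pointer file holding the spec version, the blob's SHA-256 oid, and its size in bytes. A sketch of how such a pointer is derived from the blob contents (`lfs_pointer` is an illustrative helper, not part of any Git LFS library):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS v1 pointer for the given blob contents."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )
```

Run over the real archives, this would reproduce the oid and size lines shown in the pointer diffs here (e.g. size 2015397 for the version-RFB-320 archive).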

vision/object_detection_segmentation/yolov2-coco/README.md

Lines changed: 3 additions & 3 deletions
@@ -8,9 +8,9 @@ This model aims to detect objects in real time. It detects 80 different classes
 ## Model
 The model was converted to ONNX from PyTorch version of YOLOv2 using [PyTorch-Yolo2](https://github.com/marvis/pytorch-yolo2). The output is fully verified by generating bounding boxes under PyTorch and onnxruntime.
 
-|Model|Download| ONNX version |Opset version|
-|-----|:--------------|:-------------|:------------|
-|YOLOv2|[203.9 MB](model/yolov2-coco-9.onnx) |1.5 |9 |
+| Model | Download | Download (with sample test data) | ONNX version | Opset version |
+| ----- | -------- | -------------------------------- | ------------ | ------------- |
+| YOLOv2 | [203.9 MB](model/yolov2-coco-9.onnx) | [182.6 MB](model/yolov2-coco-9.tar.gz) | 1.5 | 9 |
 
 ## Inference
 ### Input to model
vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.tar.gz

Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c34966e4e96165f8db6f2549f509b6a8bdfc01ddd978e6fd07daad4e665d5383
+size 191439022
