Commit 9b49e4e

Update README.md and add minor fixes
1 parent cbcda76 commit 9b49e4e

33 files changed: +51 −38 lines

gen2/gen2-fatigue-detection/README.zh-CN.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [英文文档](README.md)

-# Gen2 疲劳检测
+# 疲劳检测

 该示例演示了Gen2 Pipeline Builder运行的 [面部检测网络](https://docs.openvinotoolkit.org/2019_R1/_face_detection_retail_0004_description_face_detection_retail_0004.html)和头部检测网络

gen2/gen2-license-plate-recognition/main.py

Lines changed: 3 additions & 0 deletions
@@ -16,6 +16,7 @@
                     help="Path to video file to be used for inference (conflicts with -cam)")
 args = parser.parse_args()

+args.video = 'chinese_traffic.mp4'
 if not args.camera and not args.video:
     raise RuntimeError(
         "No source selected. Use either \"-cam\" to run on RGB camera as a source or \"-vid <path>\" to run on video"
@@ -159,6 +160,7 @@ def lic_thread(det_queue, rec_queue):
                 img.setData(to_planar(cropped_frame, (94, 24)))
                 img.setWidth(94)
                 img.setHeight(24)
+
                 rec_queue.send(img)

                 fps.tick('lic')
@@ -319,6 +321,7 @@ def get_frame():
         lic_frame.setType(dai.RawImgFrame.Type.BGR888p)
         lic_frame.setWidth(300)
         lic_frame.setHeight(300)
+
         det_in.send(lic_frame)
         veh_frame = dai.ImgFrame()
         veh_frame.setData(to_planar(frame, (300, 300)))
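For context, a minimal sketch of the source-selection pattern the first hunk above touches. The flag names (-cam/-vid) and destinations (camera/video) are inferred from the help text and the args.camera/args.video checks visible in the diff rather than copied from the full main.py, so treat them as assumptions. The practical effect of the added line is that args.video is always set after parsing, so the "No source selected" error below it can no longer trigger.

```python
import argparse

# Sketch of the source-selection logic around the added line; flag names and
# destinations are inferred from the diff (the real parser in main.py may differ).
parser = argparse.ArgumentParser()
parser.add_argument('-cam', '--camera', action='store_true',
                    help="Use DepthAI RGB camera for inference (conflicts with -vid)")
parser.add_argument('-vid', '--video', type=str,
                    help="Path to video file to be used for inference (conflicts with -cam)")
args = parser.parse_args()

# Added in this commit: forces the bundled clip as the video source. Because
# args.video is now always truthy, the RuntimeError branch below is unreachable.
args.video = 'chinese_traffic.mp4'
if not args.camera and not args.video:
    raise RuntimeError(
        "No source selected. Use either \"-cam\" to run on RGB camera as a source "
        "or \"-vid <path>\" to run on video"
    )
```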

sdk/sdk-class-saver-jpeg/README.md

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ timestamp,label,left,top,right,bottom,raw_frame,overlay_frame,cropped_frame

 ## Demo

-[![Gen2 Class Saver (JPEG)](https://user-images.githubusercontent.com/5244214/106964520-83b34b00-6742-11eb-8729-eff0a7584a46.gif)](https://youtu.be/gKawPaUcTi4 "Class Saver (JPEG) on DepthAI")
+[![Class Saver (JPEG)](https://user-images.githubusercontent.com/5244214/106964520-83b34b00-6742-11eb-8729-eff0a7584a46.gif)](https://youtu.be/gKawPaUcTi4 "Class Saver (JPEG) on DepthAI")

 ## Install requirements

sdk/sdk-class-saver-jpeg/README.zh-CN.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 [英文文档](README.md)

-# SDK Class Saver (JPEG)
+# Class Saver (JPEG)

 本示例演示如何运行MobilenetSSD并收集按检测标签分组的检测对象的图像。运行此脚本后,DepthAI将启动MobilenetSSD,并且每当检测到对象时,它将添加一个数据集条目。

@@ -42,7 +42,7 @@ timestamp,label,left,top,right,bottom,raw_frame,overlay_frame,cropped_frame

 ## 演示

-[![Gen2 Class Saver (JPEG)](https://user-images.githubusercontent.com/5244214/106964520-83b34b00-6742-11eb-8729-eff0a7584a46.gif)](https://youtu.be/gKawPaUcTi4 "Class Saver (JPEG) on DepthAI")
+[![Class Saver (JPEG)](https://user-images.githubusercontent.com/5244214/106964520-83b34b00-6742-11eb-8729-eff0a7584a46.gif)](https://youtu.be/gKawPaUcTi4 "Class Saver (JPEG) on DepthAI")

 ## 先决条件

sdk/sdk-crowdcounting/README.md

Lines changed: 7 additions & 5 deletions
@@ -1,17 +1,19 @@
-## [Gen2] Crowd Counting with density maps on DepthAI
+## Crowd Counting with density maps

-This example shows an implementation of Crowd Counting with density maps on DepthAI in the Gen2 API system. We use [DM-Count](https://github.com/cvlab-stonybrook/DM-Count) ([LICENSE](https://github.com/cvlab-stonybrook/DM-Count/blob/master/LICENSE)) model, which has a VGG-19 backbone and is trained on Shanghai B data set.
+This example shows an implementation of Crowd Counting with density maps using DepthAI SDK. We
+use [DM-Count](https://github.com/cvlab-stonybrook/DM-Count) ([LICENSE](https://github.com/cvlab-stonybrook/DM-Count/blob/master/LICENSE))
+model, which has a VGG-19 backbone and is trained on Shanghai B data set.

 The model produces density map from which predicted count can be computed.

-Input video is resized to 426 x 240 (W x H). Due to a relatively heavy model, the inference speed is around 1 FPS.
+Input video is resized to 426x240 (WxH). Due to a relatively heavy model, the inference speed is around 1 FPS.

 ![Image example](imgs/example.gif)

 ![image](https://user-images.githubusercontent.com/32992551/171780142-5cd4f2a4-6c51-4dbc-9e3e-17062a9c6c6c.png)

-
-Example shows input video with overlay density map input. Example video taken from [VIRAT](https://viratdata.org/) dataset.
+Example shows input video with overlay density map input. Example video taken from [VIRAT](https://viratdata.org/)
+dataset.

 ## Pre-requisites
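As a brief aside on the README text above: for density-map models such as DM-Count, the predicted count is obtained by summing the map. A minimal sketch, with an illustrative array shape and variable names that are not taken from the example code:

```python
import numpy as np

# Hypothetical 240x426 density map (H x W) standing in for the model output;
# each pixel holds an estimated person density, so the sum is the head count.
density_map = np.random.rand(240, 426).astype(np.float32) * 1e-3
predicted_count = float(density_map.sum())
print(f"Predicted count: {predicted_count:.1f}")
```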

sdk/sdk-deeplabv3-person/README.zh-CN.md

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 [英文文档](README.md)

-## [Gen2] DepthAI上的Deeplabv3
+## DepthAI上的DeepLabV3

-此示例显示了如何在Gen2 API系统中的DepthAI上运行Deeplabv3+。
+此示例显示了如何在 SDK 系统中的DepthAI上运行DeepLabV3+。

 [![Semantic Segmentation on DepthAI](https://user-images.githubusercontent.com/32992551/109359126-25a9ed00-7842-11eb-9071-cddc7439e3ca.png)](https://www.youtube.com/watch?v=zjcUChyyNgI "Deeplabv3+ Custom Training for DepthAI")

sdk/sdk-fast-depth/README.md

Lines changed: 5 additions & 2 deletions
@@ -1,6 +1,8 @@
-## [Gen2] FastDepth on DepthAI
+## FastDepth

-This example shows an implementation of [FastDepth](https://github.com/dwofk/fast-depth) on DepthAI in the Gen2 API system. Blob is created with ONNX from [PINTO's Model ZOO](https://github.com/PINTO0309/PINTO_model_zoo/tree/main/146_FastDepth), which is then converted to OpenVINO IR with required flags and converted to blob.
+This example shows an implementation of [FastDepth](https://github.com/dwofk/fast-depth) using DepthAI SDK. Blob is
+created with ONNX from [PINTO's Model ZOO](https://github.com/PINTO0309/PINTO_model_zoo/tree/main/146_FastDepth), which
+is then converted to OpenVINO IR with required flags and converted to blob.

 There are two available blob's for different input sizes:

@@ -12,6 +14,7 @@ There are two available blob's for different input sizes:
 ## Pre-requisites

 Install requirements:
+
 ```
 python3 -m pip install -r requirements.txt
 ```

sdk/sdk-fatigue-detection/README.zh-CN.md

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 [英文文档](README.md)

-# Gen2 疲劳检测
+# 疲劳检测

-该示例演示了Gen2 Pipeline Builder运行的 [面部检测网络](https://docs.openvinotoolkit.org/2019_R1/_face_detection_retail_0004_description_face_detection_retail_0004.html)和头部检测网络
+该示例演示了 运行的 [面部检测网络](https://docs.openvinotoolkit.org/2019_R1/_face_detection_retail_0004_description_face_detection_retail_0004.html)和头部检测网络

 ## 演示:

sdk/sdk-flower-classification/README.zh-CN.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [英文文档](README.md)

-# Gen2 Tensorflow 图像分类示例
+# Tensorflow 图像分类示例

 该示例演示了如何运行使用 [TensorFlow Image Classification tutorial](https://colab.research.google.com/drive/1oNxfvx5jOfcmk1Nx0qavjLN8KtWcLRn6?usp=sharing)创建的神经网络 (即使将OpenVINO转换为.blob,我们的社区成员之一也将其整合到了一个Colab Notebook中)

sdk/sdk-human-pose/README.md

Lines changed: 2 additions & 2 deletions
@@ -10,11 +10,11 @@ using DepthAI SDK.

 ### Camera

-[![Gen2 Age & Gender recognition](https://user-images.githubusercontent.com/5244214/107493701-35f97100-6b8e-11eb-8b13-02a7a8dbec21.gif)](https://www.youtube.com/watch?v=Py3-dHQymko "Human pose estimation on DepthAI")
+[![Age & Gender recognition](https://user-images.githubusercontent.com/5244214/107493701-35f97100-6b8e-11eb-8b13-02a7a8dbec21.gif)](https://www.youtube.com/watch?v=Py3-dHQymko "Human pose estimation on DepthAI")

 ### Video file

-[![Gen2 Age & Gender recognition](https://user-images.githubusercontent.com/5244214/110801736-d3bf8900-827d-11eb-934b-9755978f80d9.gif)](https://www.youtube.com/watch?v=1dp2wJ_OqxI "Human pose estimation on DepthAI")
+[![Age & Gender recognition](https://user-images.githubusercontent.com/5244214/110801736-d3bf8900-827d-11eb-934b-9755978f80d9.gif)](https://www.youtube.com/watch?v=1dp2wJ_OqxI "Human pose estimation on DepthAI")

 ## Pre-requisites
