### My platform

* raspberry pi 3b
* 2022-04-04-raspios-bullseye-armhf-lite.img
* cpu: 4 core armv8, memory: 1G

### Install ncnn

Just follow the ncnn official tutorial of [build-for-linux](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux) to install ncnn. The following steps are all carried out on my raspberry pi:

**step 1:** install dependencies
```
$ sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler l...
```

**step 2:** (optional) install vulkan. The build command below passes `-DNCNN_VULKAN=OFF`, so you can skip this step.

**step 3:** build
I am using commit `6869c81ed3e7170dc0`, and I have not tested other commits.
```
$ git clone https://github.com/Tencent/ncnn.git
$ cd ncnn
$ git reset --hard 6869c81ed3e7170dc0
$ git submodule update --init
$ mkdir -p build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=OFF -DNCNN_BUILD_TOOLS=ON -DCMAKE_TOOLCHAIN_FILE=../toolchains/pi3.toolchain.cmake ..
$ make -j2
$ make install
```
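
If you want to sanity-check the build, the converter and optimizer binaries used in the following steps should now exist under the build tree (these are the same paths referenced later in this guide):

```
$ ls /path/to/ncnn/build/tools/onnx/onnx2ncnn
$ ls /path/to/ncnn/build/tools/ncnnoptimize
```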

### Convert pytorch model to ncnn model

#### 1. dependencies
```
$ python -m pip install onnx-simplifier
```

#### 2. convert pytorch model to ncnn model via onnx
On your training platform:
```
$ cd BiSeNet/
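# Export to onnx and simplify. A rough sketch only: the export script name
# and arguments below are assumptions, adapt them to your setup.
$ python tools/export_onnx.py ...   # hypothetical export script producing model_v2.onnx
$ python -m onnxsim model_v2.onnx model_v2_sim.onnx   # onnx-simplifier installed above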
```

Then copy your `model_v2_sim.onnx` from the training platform to the raspberry device.

On raspberry device:
```
$ /path/to/ncnn/build/tools/onnx/onnx2ncnn model_v2_sim.onnx model_v2_sim.param model_v2_sim.bin
```

You can optimize the ncnn model by fusing layers and saving the weights in fp16 format. The trailing `65536` flag passed to `ncnnoptimize` below requests fp16 weight storage (use `0` to keep fp32).
On raspberry device:
```
$ /path/to/ncnn/build/tools/ncnnoptimize model_v2_sim.param model_v2_sim.bin model_v2_sim_opt.param model_v2_sim_opt.bin 65536
$ mv model_v2_sim_opt.param model_v2_sim.param
$ mv model_v2_sim_opt.bin model_v2_sim.bin
```

You can also quantize the model for int8 inference, following this [tutorial](https://github.com/Tencent/ncnn/wiki/quantized-int8-inference). Make sure your device supports int8 inference.
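
Roughly, the tutorial's flow is: generate a calibration table from sample images with `ncnn2table`, then convert the model with `ncnn2int8`. A minimal sketch, assuming the quantization tools under the ncnn build tree; the image list, mean/norm values, and input shape below are placeholders that must be adapted to this model:

```
# build a calibration table from a list of representative images (placeholders!)
$ /path/to/ncnn/build/tools/quantize/ncnn2table model_v2_sim.param model_v2_sim.bin imagelist.txt model_v2_sim.table mean=[123.675,116.28,103.53] norm=[0.0171,0.0175,0.0174] shape=[1024,1024,3] pixel=BGR thread=2 method=kl
# produce the int8 param/bin pair using that table
$ /path/to/ncnn/build/tools/quantize/ncnn2int8 model_v2_sim.param model_v2_sim.bin model_v2_sim_int8.param model_v2_sim_int8.bin model_v2_sim.table
```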

### Build and run the demo
#### 1. compile demo code
On raspberry device:
```
$ mkdir -p BiSeNet/ncnn/build
$ cd BiSeNet/ncnn/build
$ cmake .. -DNCNN_ROOT=/path/to/ncnn/build/install
$ make
```

#### 2. run demo
```
$ ./segment
```