Adding VGG that works for neon v2.3 with MKL backend #25
base: master
Conversation
…ference), and GPU backend inference
> @@ -1,14 +1,19 @@
> `#Overview`
Change to `# Overview` (a space after `#` is needed for the heading to render).
> This example VGG directory contains scripts to perform VGG training and inference using MKL backend and GPU backend
> `##Model`
Change to `## Model` (a space after `##` is needed for the heading to render).
> ### Model script
> The model run script is included here [vgg_neon.py](./vgg_neon.py). This script can easily be adapted for fine tuning this network but we have focused on inference here because a successful training protocol may require details beyond what is available from the Caffe model zoo.
> The model run scripts included here [vgg_neon_train.py] (./vgg_neon_train.py) and [vgg_neon_inference.py] (./vgg_neon_inference.py) perform training and inference respectively. We are providing both the training and the inference script, they can be adapted for fine tuning this network but we have yet to test the training script because a successful training protocol may require details beyond what is available from the Caffe model zoo. The inference script will take the trained weight file as input: supply it with the VGG_D_fused_conv_bias.p or VGG_E_fused_conv_bias.p or trained models from running VGG training.
Change `[vgg_neon_train.py] (./vgg_neon_train.py)` to `[vgg_neon_train.py](./vgg_neon_train.py)` (no space before the parenthesis), so the hyperlink will work.
Change `[vgg_neon_inference.py] (./vgg_neon_inference.py)` to `[vgg_neon_inference.py](./vgg_neon_inference.py)` (no space before the parenthesis), so the hyperlink will work.
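For context, an inference run as described above might look like the sketch below. The weight file names come from the PR text, but the `--model_file` flag is an assumption based on neon's standard argument parser, not something this diff confirms:

```shell
# Hypothetical invocation sketch. VGG_D/VGG_E weight file names are from the PR;
# --model_file is an assumed flag (neon's common argparser), not verified here.
WEIGHTS=VGG_D_fused_conv_bias.p
if [ -f VGG_E_fused_conv_bias.p ]; then
  WEIGHTS=VGG_E_fused_conv_bias.p
fi
echo python -u vgg_neon_inference.py -c vgg_mkl.cfg --model_file "$WEIGHTS"
```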
````
| Total | 1152 ms |
----------------------
```
python -u vgg_neon_train.py -c vgg_mkl.cfg -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe
````
Move this line inside the code fences so it renders as a code block.
> "numactl -i all" is our recommendation to get as much performance as possible for Intel architecture-based servers which
> feature multiple sockets and when NUMA is enabled. On such systems, please run the following:
>
> numactl -i all python -u vgg_neon_train.py -c vgg_mkl.cfg -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe
Move this line inside the code fences so it renders as a code block.
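The README text quoted above recommends `numactl -i all` only on multi-socket NUMA systems. A minimal launcher sketch of that advice follows; the node-count check is an assumption of ours, not part of the PR, and it falls back to a plain launch when `numactl` is absent:

```shell
# Interleave memory across all NUMA nodes only when the machine actually has
# more than one node (per the README's advice); otherwise launch plainly.
LAUNCHER=""
if command -v numactl >/dev/null 2>&1; then
  NODES=$(numactl --hardware 2>/dev/null | awk '/^available:/ {print $2}')
  if [ "${NODES:-1}" -gt 1 ] 2>/dev/null; then
    LAUNCHER="numactl -i all"
  fi
fi
echo $LAUNCHER python -u vgg_neon_train.py -c vgg_mkl.cfg -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe
```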
> modify the above vgg_mkl.cfg 'backend' entry or simply using the following command:
>
> If neon is installed into a `virtualenv`, make sure that it is activated before running the commands below.
> python -u vgg_neon_train.py -c vgg_mkl.cfg -b gpu -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe
Move this line inside the code fences so it renders as a code block.
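For reference, the edit the README text means might look like the fragment below. The actual contents of vgg_mkl.cfg are not shown in this diff, so the entry name and layout here are assumptions based on the prose:

```
# vgg_mkl.cfg (sketch -- the real file is not shown in the PR)
backend = mkl    # change to "backend = gpu", or pass -b gpu on the command line
```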
Training and inference work fine with the MKL backend.
GPU training occasionally works.
GPU inference works fine.
An updated README covering the new weight format is included.