This repository was archived by the owner on Jan 3, 2023. It is now read-only.

Conversation

@wei-v-wang

Training and inference work fine with MKL.
GPU training occasionally works.
GPU inference works fine.

Includes an updated README describing the new weight-file format.

@@ -1,14 +1,19 @@
#Overview


Change to # Overview


This VGG example directory contains scripts to perform VGG training and inference using the MKL backend and the GPU backend.

##Model


Change to ## Model


### Model script
The model run script is included here [vgg_neon.py](./vgg_neon.py). This script can easily be adapted for fine tuning this network but we have focused on inference here because a successful training protocol may require details beyond what is available from the Caffe model zoo.
The model run scripts included here, [vgg_neon_train.py] (./vgg_neon_train.py) and [vgg_neon_inference.py] (./vgg_neon_inference.py), perform training and inference respectively. We provide both scripts; they can be adapted for fine-tuning this network, but we have not yet tested the training script because a successful training protocol may require details beyond what is available from the Caffe model zoo. The inference script takes a trained weight file as input: supply it with VGG_D_fused_conv_bias.p, VGG_E_fused_conv_bias.p, or a model trained by running VGG training.

@wsokolow wsokolow Nov 20, 2017


change "[vgg_neon_train.py] (./vgg_neon_train.py)" to "[vgg_neon_train.py](./vgg_neon_train.py)" (no space before the parenthesis), so the hyperlink will work

change "[vgg_neon_inference.py] (./vgg_neon_inference.py)" to "[vgg_neon_inference.py](./vgg_neon_inference.py)" (no space before the parenthesis), so the hyperlink will work
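A stray space between `]` and `(` makes GitHub render a Markdown link as literal text. As a side note, this class of mistake is easy to catch (and fix) mechanically; a small stand-alone sketch, not part of this repo:

```python
import re

# Matches a Markdown link whose "](" pair is split by whitespace,
# e.g. "[vgg_neon_train.py] (./vgg_neon_train.py)", which renders
# as literal text instead of a hyperlink.
BROKEN_LINK = re.compile(r"\[([^\]]+)\]\s+\(([^)]+)\)")

def fix_broken_links(text: str) -> str:
    """Remove the stray whitespace so the link renders."""
    return BROKEN_LINK.sub(r"[\1](\2)", text)

sample = "See [vgg_neon_train.py] (./vgg_neon_train.py) for training."
print(fix_broken_links(sample))
# -> See [vgg_neon_train.py](./vgg_neon_train.py) for training.
```

Running this over a README before committing surfaces any link that would render incorrectly.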

| Total | 1152 ms |
----------------------
```
python -u vgg_neon_train.py -c vgg_mkl.cfg -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe


move this line into ``` (code marks)

"numactl -i all" is our recommendation for getting the best possible performance on Intel architecture-based servers that
have multiple sockets and NUMA enabled. On such systems, please run the following:

numactl -i all python -u vgg_neon_train.py -c vgg_mkl.cfg -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe


move this line into ``` (code marks)
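For context on the `numactl -i all` recommendation above: the `-i all` flag interleaves memory allocations across all NUMA nodes, which helps on multi-socket servers. A quick way to check how many NUMA nodes a Linux machine exposes (a stand-alone helper reading sysfs, not part of this repo):

```python
import glob

def numa_node_count() -> int:
    """Count NUMA nodes via Linux sysfs; returns 0 if the path is absent."""
    return len(glob.glob("/sys/devices/system/node/node[0-9]*"))

print(numa_node_count())  # e.g. 2 on a dual-socket server
```

If this reports only one node, `numactl -i all` is unlikely to change performance.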

modify the 'backend' entry in the above vgg_mkl.cfg, or simply use the following command:

If neon is installed into a `virtualenv`, make sure that it is activated before running the commands below.
python -u vgg_neon_train.py -c vgg_mkl.cfg -b gpu -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe


move this line into ``` (code marks)
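For readers following along: the 'backend' entry mentioned above would look roughly like this. This is a hypothetical sketch assuming neon's configargparse-style `key = value` config-file format; the actual vgg_mkl.cfg is not shown in this diff.

```ini
# vgg_mkl.cfg (hypothetical sketch; the real file is not shown here)
backend = mkl    ; change to "backend = gpu" to use the GPU backend
```

Passing `-b gpu` on the command line, as in the example above, overrides whatever the config file sets.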
