This repository was archived by the owner on Jan 3, 2023. It is now read-only.
2 changes: 1 addition & 1 deletion ImageClassification/CIFAR10/All_CNN/readme.md
@@ -13,7 +13,7 @@ The trained weights file can be downloaded from AWS


### neon version
The model weight file above has been generated using neon version tag [v1.4.0](https://github.com/NervanaSystems/neon/releases/tag/v1.4.0).
The model weight file above has been generated using neon version tag [v2.3.0](https://github.com/NervanaSystems/neon/releases/tag/v2.3.0).


"The model weight file" will probably need to be converted to the new format. Have you already obtained trained weights in the neon v2.2/v2.3 format?


Assuming yes.

It may not work with other versions.

### Performance
2 changes: 1 addition & 1 deletion ImageClassification/CIFAR10/All_CNN/test.sh
@@ -24,7 +24,7 @@ echo "Downloading weights file from ${WEIGHTS_URL}"
curl -o $WEIGHTS_FILE $WEIGHTS_URL 2> /dev/null

python -u $TEST_SCRIPT -i ${EXECUTOR_NUMBER} -vvv \
--model_file $WEIGHTS_FILE --no_progress_bar | tee output.dat 2>&1
--model_file $WEIGHTS_FILE --no_progress_bar 2>&1 | tee output.dat
rc=$?
if [ $rc -ne 0 ];then
exit $rc
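The redirection fix in this hunk matters: in the old form `| tee output.dat 2>&1`, the `2>&1` applies to `tee` itself, so the Python script's stderr never reaches the log file. Placing `2>&1` before the pipe merges stderr into the piped stream first. A minimal sketch with a stand-in command:

```shell
# A stand-in command that writes one line to stdout and one to stderr
emit() { echo out; echo err 1>&2; }

# Old form: 2>&1 binds to tee, so 'err' bypasses the pipe (it goes to the terminal)
emit | tee broken.log 2>&1 > /dev/null
# Fixed form: stderr is merged into stdout before the pipe, so tee logs both lines
emit 2>&1 | tee fixed.log > /dev/null
```

Here `broken.log` ends up containing only `out`, while `fixed.log` captures both lines.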
4 changes: 2 additions & 2 deletions ImageClassification/CIFAR10/DeepResNet/readme.md
@@ -24,10 +24,10 @@ Training this model with the options described below should be able to achieve a
accuracy using only mean subtraction, random cropping, and random flips.

## Instructions
This script was tested with [neon version 1.5.0](https://github.com/NervanaSystems/neon/tree/v1.5.0).
This script was tested with [neon version 2.2.0](https://github.com/NervanaSystems/neon/tree/v2.2.0).
Make sure that your local repo is synced to this commit and run the [installation
procedure](http://neon.nervanasys.com/docs/latest/installation.html) before proceeding.
Commit SHA for v1.5.0 is `8a5b2c45784499cd3aba3c322ea10b3661c2a2a9`
Commit SHA for v2.2.0 is `5843e7116d880dfc59c8fb558beb58dd2ef421d0`

This example uses the `DataLoader` module to load the images for consumption while applying random
cropping, flipping, and shuffling. To use the DataLoader, the script will generate PNG files from
89 changes: 43 additions & 46 deletions ImageClassification/CIFAR10/DeepResNet/resnet_cifar10.py
@@ -13,70 +13,67 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ----------------------------------------------------------------------------
import numpy as np
from neon.util.argparser import NeonArgparser
from neon.initializers import Kaiming, IdentityInit
from neon.layers import Conv, Pooling, GeneralizedCost, Affine, Activation
from neon.layers import MergeSum, SkipNode
from neon.optimizers import GradientDescentMomentum, Schedule
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti, Misclassification
from neon.models import Model
from neon.data import ImageLoader, ImageParams, DataLoader
from neon.callbacks.callbacks import Callbacks

import os

from neon.data.dataloader_transformers import OneHot, TypeCast, BGRMeanSubtract
from neon.data.aeon_shim import AeonDataLoader

def wrap_dataloader(dl, dtype=np.float32):
dl = OneHot(dl, index=1, nclasses=10)
dl = TypeCast(dl, index=0, dtype=dtype)
dl = BGRMeanSubtract(dl, index=0)
return dl

def config(manifest_filename, manifest_root, batch_size, subset_pct):
image_config = {"type": "image",
"height": 32,
"width": 32}
label_config = {"type": "label",
"binary": False}
augmentation = {"type": "image",
"crop_enable": True}

return {'manifest_filename': manifest_filename,
'manifest_root': manifest_root,
'batch_size': batch_size,
'subset_fraction': float(subset_pct/100.0),
'etl': [image_config, label_config],
'augmentation': [augmentation]}


def make_train_config(manifest_filename, manifest_root, batch_size, subset_pct=100):
train_config = config(manifest_filename, manifest_root, batch_size, subset_pct)
train_config['augmentation'][0]['center'] = False
train_config['augmentation'][0]['flip_enable'] = True
train_config['shuffle_enable'] = True
train_config['shuffle_manifest'] = True

return wrap_dataloader(AeonDataLoader(train_config))


def make_val_config(manifest_filename, manifest_root, batch_size, subset_pct=100):
val_config = config(manifest_filename, manifest_root, batch_size, subset_pct)
return wrap_dataloader(AeonDataLoader(val_config))

# parse the command line arguments (generates the backend)
parser = NeonArgparser(__doc__)
parser.add_argument('--depth', type=int, default=9,
help='depth of each stage (network depth will be 6n+2)')
args = parser.parse_args()


def extract_images(out_dir, padded_size):
'''
Save CIFAR-10 dataset as PNG files
'''
import numpy as np
from neon.data import load_cifar10
from PIL import Image
dataset = dict()
dataset['train'], dataset['val'], _ = load_cifar10(out_dir, normalize=False)
pad_size = (padded_size - 32) // 2 if padded_size > 32 else 0
pad_width = ((0, 0), (pad_size, pad_size), (pad_size, pad_size))

for setn in ('train', 'val'):
data, labels = dataset[setn]

img_dir = os.path.join(out_dir, setn)
ulabels = np.unique(labels)
for ulabel in ulabels:
subdir = os.path.join(img_dir, str(ulabel))
if not os.path.exists(subdir):
os.makedirs(subdir)

for idx in range(data.shape[0]):
im = np.pad(data[idx].reshape((3, 32, 32)), pad_width, mode='mean')
im = np.uint8(np.transpose(im, axes=[1, 2, 0]).copy())
im = Image.fromarray(im)
path = os.path.join(img_dir, str(labels[idx][0]), str(idx) + '.png')
im.save(path, format='PNG')

# setup data provider
train_dir = os.path.join(args.data_dir, 'train')
test_dir = os.path.join(args.data_dir, 'val')
if not (os.path.exists(train_dir) and os.path.exists(test_dir)):
extract_images(args.data_dir, 40)

# setup data provider
shape = dict(channel_count=3, height=32, width=32)
train_params = ImageParams(center=False, flip=True, **shape)
test_params = ImageParams(**shape)
common = dict(target_size=1, nclasses=10)

train = DataLoader(set_name='train', repo_dir=train_dir, media_params=train_params,
shuffle=True, **common)
test = DataLoader(set_name='val', repo_dir=test_dir, media_params=test_params, **common)

train = make_train_config(args.manifest['train'], args.manifest_root, args.batch_size)
test = make_val_config(args.manifest['val'], args.manifest_root, args.batch_size)

def conv_params(fsize, nfm, stride=1, relu=True):
return dict(fshape=(fsize, fsize, nfm), strides=stride, padding=(1 if fsize > 1 else 0),
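The `extract_images` helper in this file pads each 32×32 CIFAR-10 image to 40×40 (the `extract_images(args.data_dir, 40)` call), giving the loader's random 32×32 crop four pixels of slack per side, with the border filled by `np.pad(..., mode='mean')`. The padding arithmetic can be checked in isolation on a dummy image:

```python
import numpy as np

padded_size = 40
# 4 pixels on each side when growing a 32x32 image to 40x40
pad_size = (padded_size - 32) // 2 if padded_size > 32 else 0
# pad height and width only, never the channel axis
pad_width = ((0, 0), (pad_size, pad_size), (pad_size, pad_size))

img = np.zeros((3, 32, 32), dtype=np.uint8)  # dummy CHW image
padded = np.pad(img, pad_width, mode='mean')
print(padded.shape)  # (3, 40, 40)
```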
40 changes: 33 additions & 7 deletions ImageClassification/CIFAR10/DeepResNet/resnet_eval.py
@@ -15,24 +15,50 @@
# ----------------------------------------------------------------------------
import os

import numpy as np
from neon.util.argparser import NeonArgparser
from neon.util.persist import load_obj
from neon.transforms import Misclassification, CrossEntropyMulti
from neon.optimizers import GradientDescentMomentum
from neon.layers import GeneralizedCost
from neon.models import Model
from neon.data import DataLoader, ImageParams

from neon.data.dataloader_transformers import OneHot, TypeCast, BGRMeanSubtract
from neon.data.aeon_shim import AeonDataLoader

# parse the command line arguments (generates the backend)
parser = NeonArgparser(__doc__)
args = parser.parse_args()

# setup data provider
test_dir = os.path.join(args.data_dir, 'val')
shape = dict(channel_count=3, height=32, width=32)
test_params = ImageParams(center=True, flip=False, **shape)
common = dict(target_size=1, nclasses=10)
test_set = DataLoader(set_name='val', repo_dir=test_dir, media_params=test_params, **common)
def wrap_dataloader(dl, dtype=np.float32):
dl = OneHot(dl, index=1, nclasses=10)
dl = TypeCast(dl, index=0, dtype=dtype)
dl = BGRMeanSubtract(dl, index=0)
return dl

def config(manifest_filename, manifest_root, batch_size, subset_pct):
image_config = {"type": "image",
"height": 32,
"width": 32}
label_config = {"type": "label",
"binary": False}
augmentation = {"type": "image",
"crop_enable": True,
"center": True,
"flip_enable": False}

return {'manifest_filename': manifest_filename,
'manifest_root': manifest_root,
'batch_size': batch_size,
'subset_fraction': float(subset_pct/100.0),
'etl': [image_config, label_config],
'augmentation': [augmentation]}

def make_val_config(manifest_filename, manifest_root, batch_size, subset_pct=100):
val_config = config(manifest_filename, manifest_root, batch_size, subset_pct)
return wrap_dataloader(AeonDataLoader(val_config))

test_set = make_val_config(args.manifest["val"], args.manifest_root, batch_size=args.batch_size)

model = Model(load_obj(args.model_file))


`model = Model(load_obj(args.model_file))` will break the code, since neon v2.2 and v2.3 were not designed to initialize a model from old-format weights; `Model(load_obj(args.model_file))` works only with new-format weights. So how do we get the new format of an old `args.model_file`? Can we try `model = Model(layers=cifar10_layers)` and then `model.load_params(args.model_file, load_states=False)`?

When we see an assert failure, it usually means only a subset of the CIFAR-10 layers was involved in saving the weights. In that case we need to print the layer information from the old weight file, see which layers are missing, and temporarily delete those layers from the CIFAR-10 layer list.


Is resnet_eval.py still functional/successful in loading weights?

cost = GeneralizedCost(costfunc=CrossEntropyMulti())
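For reference, the `wrap_dataloader` chain above applies three per-minibatch transforms: `OneHot` expands integer labels into one-hot vectors, `TypeCast` converts the images to float32, and `BGRMeanSubtract` removes per-channel means. A plain-numpy sketch of the idea — illustrative only, not the neon implementation, and the channel means below are placeholders:

```python
import numpy as np

def one_hot(labels, nclasses=10):
    # integer class ids -> one-hot rows, as the OneHot transformer produces
    out = np.zeros((labels.size, nclasses), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0
    return out

labels = np.array([3, 0, 9])
onehot = one_hot(labels)                      # shape (3, 10), one 1.0 per row

batch = np.random.randint(0, 256, size=(3, 3, 32, 32))
batch = batch.astype(np.float32)              # the TypeCast step
bgr_means = np.array([127.0, 119.0, 104.0])   # placeholder values, not neon's
batch -= bgr_means.reshape(1, 3, 1, 1)        # per-channel mean subtraction
```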
4 changes: 2 additions & 2 deletions ImageClassification/CIFAR10/DeepResNet/test.sh
@@ -23,14 +23,14 @@ WEIGHTS_FILE=${WEIGHTS_URL##*/}
echo "Downloading weights file from ${WEIGHTS_URL}"
curl -o $WEIGHTS_FILE $WEIGHTS_URL 2> /dev/null

python -u $TEST_SCRIPT -i ${EXECUTOR_NUMBER} -vvv --model_file $WEIGHTS_FILE --no_progress_bar -w /usr/local/data/CIFAR10/macrobatches | tee output.dat 2>&1
python -u $TEST_SCRIPT -i ${EXECUTOR_NUMBER} -vvv --model_file $WEIGHTS_FILE --manifest val:/data/CIFAR/val-index.csv --manifest_root /data/CIFAR -b gpu -z 32 --no_progress_bar 2>&1 | tee output.dat


can you use "/dataset/aeon/CIFAR10" instead of "/data/CIFAR"?


@baojun-nervana Out of curiosity, is there any particular reason why these paths should be changed?


@tpatejko that is the validation dataset directory, so we don't have to update those after the PR.


Changed

rc=$?
if [ $rc -ne 0 ];then
exit $rc
fi

# get the top-1 misclass
top1=`tail -n 1 output.dat | sed "s/.*Accuracy: //" | sed "s/ \% (Top-1).*//"`
top1=`cat output.dat | sed -n "s/.*Accuracy: \(.*\) \% (Top-1).*/\1/p"`

top1pass=0
top1pass=`echo $top1'>'85 | bc -l`
56 changes: 43 additions & 13 deletions ImageClassification/ILSVRC2012/Alexnet/alexnet_neon.py
@@ -27,16 +27,52 @@
alexnet_neon.py -w <path-to-saved-batches> --test_only \
--model_file <saved weights file>
"""

import numpy as np
from neon.util.argparser import NeonArgparser
from neon.initializers import Constant, Gaussian
from neon.layers import Conv, Dropout, Pooling, GeneralizedCost, Affine, LRN
from neon.optimizers import GradientDescentMomentum, MultiOptimizer, Schedule
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti, TopKMisclassification
from neon.models import Model
from neon.data import ImageLoader
from neon.callbacks.callbacks import Callbacks

from neon.data.dataloader_transformers import OneHot, TypeCast, BGRMeanSubtract
from neon.data.aeon_shim import AeonDataLoader

def wrap_dataloader(dl, dtype=np.float32):
dl = OneHot(dl, index=1, nclasses=1000)
dl = TypeCast(dl, index=0, dtype=dtype)
dl = BGRMeanSubtract(dl, index=0)
return dl

def common_config(subset_pct, manifest_filename, manifest_root, batch_size):
# cache_root = get_data_cache_or_nothing('i1k-cache/')
image_config = {"type": "image",
"height": 224,
"width": 224}
label_config = {"type": "label",
"binary": False}
augmentation = {"type": "image",
"scale": [0.875, 0.875],
"crop_enable": True}

return {'manifest_filename': manifest_filename,
'manifest_root': manifest_root,
'batch_size': batch_size,
'subset_fraction': float(subset_pct/100.0),
'etl': [image_config, label_config],
'augmentation': [augmentation]}

def make_train_config(subset_pct, manifest_filename, manifest_root, batch_size):
train_config = common_config(subset_pct, manifest_filename, manifest_root, batch_size)
train_config['shuffle_enable'] = True
train_config['shuffle_manifest'] = True
return wrap_dataloader(AeonDataLoader(train_config))

def make_val_config(subset_pct, manifest_filename, manifest_root, batch_size):
val_config = common_config(subset_pct, manifest_filename, manifest_root, batch_size)
return wrap_dataloader(AeonDataLoader(val_config))

# parse the command line arguments (generates the backend)
parser = NeonArgparser(__doc__)
parser.add_argument('--subset_pct', type=float, default=100,
@@ -49,14 +49,8 @@
if args.model_file is None:
raise ValueError('To test model, trained weights need to be provided')

# setup data provider
img_set_options = dict(repo_dir=args.data_dir,
inner_size=224,
subset_pct=args.subset_pct)
train = ImageLoader(set_name='train', scale_range=(256, 256),
shuffle=True, **img_set_options)
test = ImageLoader(set_name='validation', scale_range=(256, 256),
do_transforms=False, **img_set_options)
train = make_train_config(args.subset_pct, args.manifest["train"], args.manifest_root, batch_size=args.batch_size)
val = make_val_config(args.subset_pct, args.manifest["val"], args.manifest_root, batch_size=args.batch_size)

init_g1 = Gaussian(scale=0.01)
init_g2 = Gaussian(scale=0.005)
@@ -105,15 +135,15 @@

# configure callbacks
valmetric = TopKMisclassification(k=5)
callbacks = Callbacks(model, eval_set=test, metric=valmetric, **args.callback_args)
callbacks = Callbacks(model, eval_set=val, metric=valmetric, **args.callback_args)

if args.model_file is not None:
model.load_params(args.model_file)
model.load_params(args.model_file, load_states=False)
if not args.test_only:
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
model.fit(train, optimizer=opt, num_epochs=args.epochs, cost=cost, callbacks=callbacks)

mets = model.eval(test, metric=valmetric)
mets = model.eval(val, metric=valmetric)
print 'Validation set metrics:'
print 'LogLoss: %.2f, Accuracy: %.1f %% (Top-1), %.1f %% (Top-5)' % (mets[0],
(1.0-mets[1])*100,
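The closing print converts the `TopKMisclassification` outputs — log loss, top-1 error, and top-5 error as fractions — into accuracy percentages. Plugging in the readme's reported numbers as a hypothetical `model.eval` result (the log-loss value here is made up):

```python
# Hypothetical eval result: [logloss, top-1 misclass fraction, top-5 misclass fraction]
mets = [2.02, 0.414, 0.189]

top1_acc = (1.0 - mets[1]) * 100   # 58.6
top5_acc = (1.0 - mets[2]) * 100   # 81.1
print('LogLoss: %.2f, Accuracy: %.1f %% (Top-1), %.1f %% (Top-5)'
      % (mets[0], top1_acc, top5_acc))
# LogLoss: 2.02, Accuracy: 58.6 % (Top-1), 81.1 % (Top-5)
```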
5 changes: 2 additions & 3 deletions ImageClassification/ILSVRC2012/Alexnet/readme.md
@@ -16,7 +16,7 @@ The model run script is included below [alexnet_neon.py](./alexnet_neon.py).
### Trained weights
The trained weights file can be downloaded from AWS using the following link:
[trained Alexnet model weights][S3_WEIGHTS_FILE]
[S3_WEIGHTS_FILE]: https://s3-us-west-1.amazonaws.com/nervana-modelzoo/alexnet/alexnet.p
[S3_WEIGHTS_FILE]: https://s3-us-west-1.amazonaws.com/nervana-modelzoo/alexnet/alexnet_fused_conv_bias.p

### Performance
This model achieves 58.6% top-1 and 81.1% top-5 accuracy on the validation
@@ -33,8 +33,7 @@ Note there have been some changes to the format of the mean data subtraction;
users with the old format may be prompted to run an update script before proceeding.


This script was tested with the [neon release v1.4.0](https://github.com/NervanaSystems/neon/tree/v1.4.0)
(commit SHA bc196cb).
This script was tested with the [neon release v2.3.0](https://github.com/NervanaSystems/neon/tree/v2.3.0).
Make sure that your local repo is synced to this commit and run the
[installation procedure](http://neon.nervanasys.com/docs/latest/installation.html)
before proceeding.
2 changes: 1 addition & 1 deletion ImageClassification/ILSVRC2012/Alexnet/test.sh
@@ -22,7 +22,7 @@ WEIGHTS_FILE=${WEIGHTS_URL##*/}
echo "Downloading weights file from ${WEIGHTS_URL}"
curl -o $WEIGHTS_FILE $WEIGHTS_URL 2> /dev/null

python -u alexnet_neon.py --test_only -i ${EXECUTOR_NUMBER} -w /usr/local/data/I1K/macrobatches/ -vvv --model_file $WEIGHTS_FILE --no_progress_bar | tee output.dat 2>&1
python -u alexnet_neon.py --test_only -i ${EXECUTOR_NUMBER} -w /data/i1k-extracted/ --manifest_root /data/i1k-extracted --manifest train:/data/i1k-extracted/train-index.csv --manifest val:/data/i1k-extracted/val-index.csv -vvv --model_file $WEIGHTS_FILE --no_progress_bar -z 256 2>&1 | tee output.dat


can you use "/dataset/aeon/I1K/i1k-extracted" instead of "/data/i1k-extracted"?


Changed

rc=$?
if [ $rc -ne 0 ];then
exit $rc