Notes
- Why is our SCNN implementation better than the original, yet appears worse on TuSimple?
If you look at the VGG16-SCNN results on CULane, our re-implementation is undoubtedly superior (over 1.5% improvement), mainly because the original SCNN is written in the old Torch7 framework, while ours is based on modern torchvision backbone implementations and ImageNet pre-training. We also ran a simple grid search for the learning rate on the validation set (see the sketch below). However, the original SCNN is often reported in the literature to achieve 96.53% on TuSimple, which is much higher than our result. That is mainly because it was a competition entry; there is no way we (using only the train set) can compete with that. Note that the original SCNN paper never claimed that a TuSimple experiment without competition tricks could reach that performance.
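For reference, here is a minimal sketch of such a learning-rate grid search. This is not the repo's actual tooling; the candidate values and the `train_and_validate` helper are hypothetical placeholders.

```python
# Hypothetical sketch: pick the learning rate with the best validation metric.
candidate_lrs = [0.005, 0.01, 0.02, 0.05]  # assumed candidate values


def train_and_validate(lr):
    # Placeholder: train on the train set with this lr, then return the
    # metric measured on the validation set.
    return 0.0


best_lr, best_score = None, float("-inf")
for lr in candidate_lrs:
    score = train_and_validate(lr)
    if score > best_score:
        best_lr, best_score = lr, score

print(f"Best learning rate: {best_lr} (validation score {best_score:.4f})")
```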
- Why is our ENet implementation better?
Same as above, we have a better implementation. However, the improvement should mainly be attributed to using the same advanced augmentation scheme and learning rate scheduler from ERFNet for ENet (a minimal scheduler sketch follows).
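As an illustration, here is a minimal sketch of a polynomial ("poly") learning-rate decay schedule of the kind commonly used in such training setups, built with PyTorch's `LambdaLR`. The model, iteration count, and decay power below are placeholder assumptions, not the repo's exact ENet configuration.

```python
import torch

# Placeholder model and optimizer; replace with the actual ENet model and
# its training configuration.
model = torch.nn.Conv2d(3, 16, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

total_iters = 30000  # assumed number of training iterations
power = 0.9          # assumed decay exponent

# Poly decay: lr(it) = base_lr * (1 - it / total_iters) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda it: (1 - it / total_iters) ** power,
)

for it in range(total_iters):
    # ... forward pass, loss, backward pass ...
    optimizer.step()
    scheduler.step()
```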