SqueezeNet

Description
SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), and ends with a final conv layer (conv10). The number of filters per Fire module increases gradually from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10.
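For reference, here is a minimal PyTorch sketch of the Fire module and this layer ordering. It is an illustration, not the authors' reference code: filter counts follow Table 1 of the paper, and the global average pooling placed after conv10 also follows that table.

```python
# Minimal SqueezeNet v1.0 sketch (illustrative; filter counts per Table 1 of the paper).
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze, expand1x1, expand3x3):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the 1x1 and 3x3 expand branches along the channel axis.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

class SqueezeNet(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=7, stride=2), nn.ReLU(inplace=True),  # conv1
            nn.MaxPool2d(kernel_size=3, stride=2),      # max-pool after conv1
            Fire(96, 16, 64, 64),                       # fire2
            Fire(128, 16, 64, 64),                      # fire3
            Fire(128, 32, 128, 128),                    # fire4
            nn.MaxPool2d(kernel_size=3, stride=2),      # max-pool after fire4
            Fire(256, 32, 128, 128),                    # fire5
            Fire(256, 48, 192, 192),                    # fire6
            Fire(384, 48, 192, 192),                    # fire7
            Fire(384, 64, 256, 256),                    # fire8
            nn.MaxPool2d(kernel_size=3, stride=2),      # max-pool after fire8
            Fire(512, 64, 256, 256),                    # fire9
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Conv2d(512, num_classes, kernel_size=1), # conv10
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                    # global average pooling (Table 1)
        )

    def forward(self, x):
        x = self.classifier(self.features(x))
        return torch.flatten(x, 1)

# Example: a 224x224 RGB input yields logits over 1000 ImageNet classes.
logits = SqueezeNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```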
Application
ImageNet
Task
ImageNet classification: Compute probabilities for all ImageNet categories on a given input image
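A usage sketch for this task with the pretrained SqueezeNet weights shipped in torchvision (assumes torchvision >= 0.13; the input path `dog.jpg` is a placeholder, not part of this model card):

```python
# Compute ImageNet class probabilities with a pretrained SqueezeNet from torchvision.
import torch
from PIL import Image
from torchvision import models

weights = models.SqueezeNet1_1_Weights.DEFAULT
model = models.squeezenet1_1(weights=weights).eval()
preprocess = weights.transforms()  # resize, center-crop, and ImageNet normalization

image = Image.open("dog.jpg").convert("RGB")  # placeholder input image
batch = preprocess(image).unsqueeze(0)        # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]    # probabilities over 1000 categories

# Print the top-5 predicted categories with their probabilities.
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p:.3f}")
```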
Type
Supervised learning
Architecture
Convolutional Neural Network (CNN)
Data
Image/Photo: source
Title
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
Authors
Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer
Abstract
Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
Year
2016
bibtex
@article{iandola2016squeezenet,
  title={SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size},
  author={Iandola, Forrest N and Han, Song and Moskewicz, Matthew W and Ashraf, Khalid and Dally, William J and Keutzer, Kurt},
  journal={arXiv preprint arXiv:1602.07360},
  year={2016}
}

License

Acknowledgements

Model License

Sample Data License