Authors: Shah, Prasham; El-Sharkawy, Mohamed
Date available: 2021-08-23
Date issued: 2020-08
Citation: Shah, P., & El-Sharkawy, M. (2020). A-MnasNet: Augmented MnasNet for Computer Vision. 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), 1044–1047. https://doi.org/10.1109/MWSCAS48704.2020.9184619
Handle: https://hdl.handle.net/1805/26483
Abstract: Convolutional Neural Networks (CNNs) play an essential role in Deep Learning and are used extensively in Computer Vision. They are complex but very effective at extracting features from an image or a video stream. After AlexNet [5] won the ILSVRC [8] in 2012, there was a drastic increase in research related to CNNs, and many state-of-the-art architectures were introduced, including VGG Net [12], GoogLeNet [13], ResNet [18], Inception-v4 [14], Inception-ResNet-v2 [14], ShuffleNet [23], Xception [24], MobileNet [6], MobileNetV2 [7], SqueezeNet [16], and SqueezeNext [17]. The trend in this research was to increase the number of layers in a CNN to make it more effective, but the model size increased as well. This problem was addressed by new algorithms that reduced model size, and as a result CNN models can now be implemented on mobile devices. These mobile models are small and fast, which in turn reduces the computational cost of the embedded system. This paper follows the same idea: it proposes a new model, Augmented MnasNet (A-MnasNet), derived from MnasNet [1]. The model is trained on the CIFAR-10 [4] dataset and achieves a validation accuracy of 96.89% with a model size of 11.6 MB. It outperforms its baseline architecture, MnasNet, which has a validation accuracy of 80.8% and a model size of 12.7 MB when trained on CIFAR-10.
Language: en
Rights: Publisher Policy
Keywords: convolutional neural networks; computer vision; feature extraction
Title: A-MnasNet: Augmented MnasNet for Computer Vision
Type: Conference proceedings
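
The abstract compares A-MnasNet against its MnasNet baseline on CIFAR-10 in terms of validation accuracy and model size. A-MnasNet itself is not described here and is not publicly packaged, so the sketch below is purely illustrative: assuming PyTorch and torchvision, it trains the stock MnasNet (torchvision.models.mnasnet1_0) on CIFAR-10 and reports an approximate parameter-based model size. It is not the authors' training setup; hyperparameters and preprocessing are placeholder choices.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision import datasets, transforms

# Illustrative baseline only: stock MnasNet from torchvision, not A-MnasNet.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# CIFAR-10 images upscaled to the ImageNet-style resolution MnasNet expects.
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2470, 0.2435, 0.2616)),
])
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                           shuffle=True, num_workers=2)

# MnasNet head resized for CIFAR-10's 10 classes.
model = torchvision.models.mnasnet1_0(num_classes=10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(1):  # a single epoch shown for brevity
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Approximate model size in MB (float32 parameters only), the kind of
# size metric compared against accuracy in the abstract.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params}, approx size: {n_params * 4 / 1e6:.1f} MB")
```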