
You searched for subject:(Coarse pruning). One record found.


No search limiters apply to these results.


1. Gaikwad, Akash S. Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment.

Degree: 2018, IUPUI

Indiana University-Purdue University Indianapolis (IUPUI)

In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment is complicated by limited resources such as memory, computational power, and energy. Recent research in deep learning therefore focuses on reducing the model size of the convolutional neural network (CNN) through compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the most promising of these techniques. This thesis proposes three methods to prune a CNN (SqueezeNet) that reduce the model size without introducing sparsity into the pruned model and without a significant drop in accuracy:

1. Pruning based on a Taylor expansion of the change in the cost function, ΔC.
2. Pruning based on the L2 normalization of activation maps.
3. Pruning based on a combination of methods 1 and 2.

Each method ranks the convolution kernels, prunes the lowest-ranked filters, and then fine-tunes the SqueezeNet model by backpropagation. Transfer learning is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model size by 72% without a significant drop in accuracy (the optimal pruning-efficiency result). They also show that pruning based on the combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than either criterion alone, and that most of the pruned kernels come from the mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software, and its performance was evaluated.
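
The ranking criteria above can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the thesis' implementation: it scores the filters of a torchvision SqueezeNet with the first-order Taylor criterion (|activation × ∂C/∂activation|), with the L2 norm of the activation maps, and with a per-layer normalized combination of the two. Names such as rank_filters and cifar10_loader are assumptions introduced for illustration.

```python
# A minimal PyTorch sketch (not the thesis' own code) of the three filter-ranking
# criteria described in the abstract.
import torch
import torch.nn as nn
from torchvision import models


def rank_filters(model, data_loader, loss_fn, device="cpu"):
    """Score every filter of every Conv2d layer under three criteria:
    taylor   - first-order Taylor estimate of the change in cost if the filter's
               activation map is removed: |activation * d(cost)/d(activation)|
    l2       - L2 norm of the filter's activation map
    combined - per-layer normalized sum of the two scores (method 3)
    """
    taylor_scores, l2_scores, handles = {}, {}, []

    def make_hook(name):
        def forward_hook(module, inputs, output):
            # L2 criterion: per-channel norm of the activation map, accumulated over batches.
            l2 = output.detach().pow(2).sum(dim=(0, 2, 3)).sqrt()
            l2_scores[name] = l2_scores.get(name, 0) + l2

            # Taylor criterion needs activation * gradient, so capture the gradient
            # of the cost w.r.t. this activation with a tensor hook.
            act = output.detach()

            def grad_hook(grad):
                score = (act * grad).mean(dim=(0, 2, 3)).abs()
                taylor_scores[name] = taylor_scores.get(name, 0) + score

            output.register_hook(grad_hook)

        return forward_hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            handles.append(module.register_forward_hook(make_hook(name)))

    model.to(device).train()
    for images, labels in data_loader:
        images, labels = images.to(device), labels.to(device)
        model.zero_grad()
        loss_fn(model(images), labels).backward()  # triggers the gradient hooks

    for h in handles:
        h.remove()

    # Method 3: normalize each layer's score vectors and add them.
    combined = {
        name: taylor_scores[name] / (taylor_scores[name].norm() + 1e-8)
        + l2_scores[name] / (l2_scores[name].norm() + 1e-8)
        for name in taylor_scores
    }
    return taylor_scores, l2_scores, combined


# Transfer-learning setup along the lines sketched in the abstract: start from an
# ImageNet-pretrained SqueezeNet and replace the 1x1 classifier convolution with a
# 10-class head for CIFAR-10. The data loader (cifar10_loader) is assumed to exist.
model = models.squeezenet1_1(weights="IMAGENET1K_V1")  # torchvision >= 0.13 API
model.classifier[1] = nn.Conv2d(512, 10, kernel_size=1)
# taylor, l2, combined = rank_filters(model, cifar10_loader, nn.CrossEntropyLoss())
# The lowest-ranked filters would then be removed and the model fine-tuned by
# backpropagation, as the abstract describes.
```

In this sketch the scores are accumulated with forward and tensor gradient hooks so that no changes to the network definition are needed; the thesis may organize the ranking and pruning steps differently.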

Advisors/Committee Members: El-Sharkawy, Mohamed, Rizkalla, Maher, King, Brian.

Subjects/Keywords: Convolution neural network; CNN; SqueezeNet; Pruning; L2 Normalization; CIFAR-10; Transfer learning; Coarse pruning; S32V234; Taylor expansion; RTMaps; BlueBox; Fine pruning; Model compression; Activation maps


Note: the record does not specify whether this work is a master's thesis or a doctoral dissertation, so the citations below may lack information required by their formats.

APA (6th Edition):

Gaikwad, A. S. (2018). Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment. (Thesis). IUPUI. Retrieved from http://hdl.handle.net/1805/17923


Chicago Manual of Style (16th Edition):

Gaikwad, Akash S. “Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment.” 2018. Thesis, IUPUI. Accessed May 27, 2019. http://hdl.handle.net/1805/17923.


MLA Handbook (7th Edition):

Gaikwad, Akash S. “Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment.” 2018. Web. 27 May 2019.

Vancouver:

Gaikwad AS. Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment. [Internet] [Thesis]. IUPUI; 2018. [cited 2019 May 27]. Available from: http://hdl.handle.net/1805/17923.


Council of Science Editors:

Gaikwad AS. Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment. [Thesis]. IUPUI; 2018. Available from: http://hdl.handle.net/1805/17923

