Identifying and Pruning Redundant Structures for Deep Neural Networks


Deep convolutional neural networks have achieved considerable success in the field of computer vision. However, it is difficult to deploy state-of-the-art models on resource-constrained platforms due to their high storage, memory bandwidth, and computational costs. In this paper, we propose a structured pruning method that employs a three-step process to reduce the resource consumption of neural networks. First, we train an initial network on the training set and evaluate it on the validation set. Next, we introduce an iterative pruning and fine-tuning algorithm to identify and prune redundant structures, yielding a pruned network with a compact architecture. Finally, we train the pruned network from scratch on both the training set and the validation set to obtain the final accuracy on the test set. In our experiments, the pruning method significantly reduces model size (by 87.2% on CIFAR-10), saves inference time (53.3% on CIFAR-10), and outperforms recent state-of-the-art methods.
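The abstract does not specify the redundancy criterion used to select structures for pruning, so the following is only an illustrative sketch of structured (filter-level) pruning, assuming a simple L1-norm ranking of convolutional filters; the function names and the pruning ratio are hypothetical, not taken from the paper.

```python
import numpy as np

def select_redundant_filters(conv_weight, prune_ratio):
    """Rank conv filters by L1 norm and mark the smallest fraction as redundant.

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    Returns the indices of the filters to prune (hypothetical criterion).
    """
    norms = np.abs(conv_weight).sum(axis=(1, 2, 3))   # one L1 norm per filter
    n_prune = int(len(norms) * prune_ratio)
    return np.argsort(norms)[:n_prune]                # smallest-norm filters

def prune_filters(conv_weight, redundant):
    """Remove the selected filters, producing a layer with fewer output channels."""
    keep = np.setdiff1d(np.arange(conv_weight.shape[0]), redundant)
    return conv_weight[keep]

# Toy layer: 8 filters over 3 input channels with 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
idx = select_redundant_filters(w, prune_ratio=0.25)
w_pruned = prune_filters(w, idx)
print(w_pruned.shape)  # (6, 3, 3, 3): two filters pruned
```

In an iterative prune-and-fine-tune loop of the kind the abstract describes, a step like this would alternate with a few epochs of fine-tuning until the target compression is reached; because whole filters are removed, the resulting network has a genuinely smaller dense architecture rather than sparse weight matrices.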

2019 IEEE Visual Communications and Image Processing (VCIP)
Li Song
Professor, IEEE Senior Member