Slide 33
[6] Liu, Z., Sun, M., Zhou, T., Huang, G., and Darrell, T. Rethinking the value of network pruning. In 7th
International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
[7] Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained
quantization and Huffman coding. In Bengio, Y. and LeCun, Y. (eds.), 4th International Conference on Learning
Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
[8] Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient ConvNets. arXiv
preprint arXiv:1608.08710, 2016.
[9] He, Y., Zhang, X., and Sun, J. Channel pruning for accelerating very deep neural networks. In Proceedings of
the IEEE International Conference on Computer Vision, pp. 1389–1397, 2017.
[10] Kim, W., Kim, S., Park, M., and Jeon, G. Neuron merging: Compensating for pruned neurons. In Advances in
Neural Information Processing Systems 33, 2020.