Sparsification of Deep Neural Networks: Layer-Wise and Structured Pruning

Date: 29th Nov 2021

Time: 10:00 AM

Venue: Google Meet link: meet.google.com/huf-ndxb-nts

PAST EVENT

Details

Deep Neural Networks (DNNs) have huge computational and memory requirements. One way to reduce these requirements is to prune unwanted connections (sparsification). Training procedures that induce sparsity should be accompanied by effective pruning algorithms that enhance sparsity levels, and such DNN pruning algorithms are the major focus of this talk. Various aspects of DNN pruning algorithms will be discussed systematically, and the need for layer-wise heuristics will be motivated. An "energy-aware" layer-wise pruning algorithm will be proposed, in which each layer is pruned according to an individually allocated accuracy-loss budget, determined by estimates of the reduction in multiply-accumulate operations (in convolutional layers) and in the number of weights (in fully connected layers). The final part of the talk is on structured pruning: the various types of groupings (in convolutional and fully connected layers, respectively) and the tradeoffs involved will be discussed. Results obtained on various neural network architectures trained on the MNIST, SVHN, CIFAR-10, and Imagenette datasets will be presented.
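For readers unfamiliar with the two pruning styles mentioned above, the following minimal NumPy sketch contrasts unstructured (layer-wise magnitude) pruning with structured (whole-filter) pruning. It is an illustration only, not the algorithm presented in the talk: the sparsity targets here are hand-picked, whereas the proposed method would derive each layer's budget from estimated MAC and weight-count savings.

```python
import numpy as np

def unstructured_prune(w, sparsity):
    """Layer-wise magnitude pruning: zero out the smallest-|w| entries."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

def structured_prune_filters(w, sparsity):
    """Structured pruning of a conv layer (out_ch, in_ch, kH, kW):
    zero whole output filters with the smallest L2 norms, so the
    surviving computation stays dense."""
    norms = np.linalg.norm(w.reshape(w.shape[0], -1), axis=1)
    n_drop = int(sparsity * w.shape[0])
    pruned = w.copy()
    pruned[np.argsort(norms)[:n_drop]] = 0.0
    return pruned

# Hand-picked sparsity targets (hypothetical; an accuracy-loss budget
# per layer would set these in the talk's setting).
conv_w = np.random.randn(32, 16, 3, 3)   # convolutional layer
fc_w = np.random.randn(256, 128)         # fully connected layer

conv_pruned = structured_prune_filters(conv_w, sparsity=0.5)
fc_pruned = unstructured_prune(fc_w, sparsity=0.9)

kept_filters = np.count_nonzero(np.abs(conv_pruned).reshape(32, -1).sum(axis=1))
print(f"conv filters kept: {kept_filters} / 32")
print(f"fc weight density: {np.count_nonzero(fc_pruned) / fc_pruned.size:.0%}")
```

The design difference the sketch highlights is that structured pruning removes entire filter groups, which hardware can exploit directly, while unstructured pruning reaches higher sparsity but leaves an irregular pattern of zeros.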

Speakers

K.B.N. Girish (EE16D400)

Electrical Engineering