Pixel-Adaptive Attention-Guided Network for Motion Deblurring.

Date: 21st Dec 2021

Time: 11:00 AM

Venue: Google Meet, link: https://meet.google.com/aah-wwfu-ufy

PAST EVENT

Details

End-to-end fully convolutional designs have recently advanced the state of the art in non-uniform motion deblurring, but their performance suffers from a lack of adaptiveness to the spatially varying nature of blur. Existing approaches for single-image deblurring achieve a large receptive field by increasing the number of generic convolution layers and the kernel size, at the expense of larger model size and slower inference. We propose a pixel-adaptive approach that handles large blur variations across different spatial locations while adapting to each test image. We design a content-aware global-local filtering module that significantly improves performance by considering global dependencies while dynamically exploiting neighboring pixel information.

When dealing with blurred videos, most existing approaches simply stack a fixed set of adjacent frames to exploit temporal information. They depend on implicit or explicit alignment for temporal fusion, which either increases the computational cost or results in suboptimal performance due to misalignment. We investigate two key factors that govern video deblurring quality: how to fuse spatio-temporal information and where to collect it from. We propose a factorized gated spatio-temporal attention module that performs non-local operations across space and time to utilize the available information without relying on alignment. To complement the attention module, we propose a reinforcement-learning-based framework that selects the keyframes in the neighborhood carrying the most complementary and useful information.

Extensive qualitative and quantitative comparisons with prior art on both single-image and video deblurring benchmarks demonstrate that our design offers significant improvements over the state of the art.
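The core idea of pixel-adaptive filtering can be illustrated with a minimal NumPy sketch: instead of one shared convolution kernel, each spatial location gets its own small kernel (in the proposed network these would be predicted from the image content; here they are simply passed in). The function name and interface below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pixel_adaptive_filter(feat, kernels):
    """Apply a distinct k x k kernel at every spatial location.

    feat:    (H, W) feature map
    kernels: (H, W, k, k) per-pixel kernels; in a content-aware module
             these would be predicted by a small network from the input,
             here they are supplied directly for illustration.
    """
    H, W = feat.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")  # replicate border pixels
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]          # local neighborhood
            out[i, j] = np.sum(patch * kernels[i, j])  # location-specific kernel
    return out
```

With uniform averaging kernels this reduces to an ordinary box filter; the adaptiveness comes from letting the kernels differ per pixel, so regions with different blur can be treated differently.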

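The factorization in the spatio-temporal attention module can likewise be sketched in NumPy: attention is applied over spatial positions within each frame, then over frames at each position, and a gate blends the two. The fixed gate value and function names below are illustrative assumptions; in the actual module the gate would be learned.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # scaled dot-product attention over the second-to-last axis
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def factorized_st_attention(x):
    """x: (T, N, C) -- T frames, N spatial positions, C channels.

    Spatial and temporal attention are factorized into two cheaper
    passes instead of one attention over all T*N positions.
    """
    # spatial pass: attend over the N positions within each frame
    spatial = attend(x, x, x)                        # (T, N, C)
    # temporal pass: attend over the T frames at each position
    xt = spatial.swapaxes(0, 1)                      # (N, T, C)
    temporal = attend(xt, xt, xt).swapaxes(0, 1)     # (T, N, C)
    gate = 0.5  # placeholder for a learned gating value
    return gate * temporal + (1 - gate) * spatial
```

Because no frame alignment is performed, the temporal pass can pull information from any frame at a given position, which is the property the abstract highlights.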
Speakers

Maitreya Suin (EE17D201)

Electrical Engineering