Knowledge-Distillation-Guided Spatially Varying Image Restoration


Date: 9th Jun 2022

Time: 03:00 PM

Venue: Google Meet



Image restoration is the task of recovering a clean image from a degraded observation. Most bad-weather-caused degradations, such as rain streaks, haze, and raindrops, are spatially varying, requiring the restoration network to implicitly localize and restore the affected regions. Unlike existing methods that directly learn a mapping between the degraded and clean images, we decompose the restoration task into two stages: degradation localization and degraded-region-guided restoration. We demonstrate that a model trained for the degraded-region detection task contains vital region knowledge, which can be utilized to guide the restoration network's training using the knowledge-distillation technique. Further, we propose mask-guided gated convolution and global context aggregation modules that leverage the extra guidance from the predicted mask while focusing on restoring the degraded regions.

Unlike bad-weather-caused degradations, where pixel information is degraded but still present, image inpainting is an even more ill-posed restoration task, with pixels completely absent in specific areas. We demonstrate that knowledge-distillation-based guidance is equally crucial for inpainting, as it provides intermediate supervision throughout the network. Many existing solutions propose coarse-to-fine processing, recurrent refinement, structural guidance, etc., to handle the complexity and inherent ill-posedness. These methods suffer from large computational overheads owing to multiple generator networks, the limited ability of handcrafted features, and sub-optimal utilization of the information present in the ground truth. We propose a distillation-based approach for inpainting that provides direct feature-level supervision to the encoder layers, producing a more accurate encoding of the holes. Next, we introduce a distillation-based attention transfer technique and further enhance coherence by using pixel-adaptive global-local feature fusion in the decoder.
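The mask-guided gating idea can be illustrated with a minimal NumPy sketch. This is not the seminar's exact module: it uses 1x1 convolutions and random weights purely for illustration, but it shows the core mechanism of a content branch modulated by a sigmoid gate that also sees the predicted degradation mask.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mask_guided_gated_conv(features, mask, w_feat, w_gate):
    """Modulate features by a gate computed from the features and the
    predicted degradation mask (1x1 convolutions for simplicity).

    features: (C, H, W) input feature map
    mask:     (1, H, W) predicted degradation mask in [0, 1]
    w_feat:   (C_out, C) content-branch weights
    w_gate:   (C_out, C + 1) gate-branch weights (sees features + mask)
    """
    C, H, W = features.shape
    x = features.reshape(C, -1)                    # (C, H*W)
    xm = np.concatenate([x, mask.reshape(1, -1)])  # (C+1, H*W): append mask
    feat = w_feat @ x                              # content branch
    gate = sigmoid(w_gate @ xm)                    # per-pixel gate in (0, 1)
    out = feat * gate                              # soft spatial gating
    return out.reshape(-1, H, W)

# toy example: 4-channel 8x8 features and a random predicted mask
rng = np.random.default_rng(0)
f = rng.standard_normal((4, 8, 8))
m = rng.uniform(size=(1, 8, 8))
out = mask_guided_gated_conv(f, m, rng.standard_normal((4, 4)),
                             rng.standard_normal((4, 5)))
```

Because the gate lies in (0, 1), the module can suppress responses in clean regions and pass information through where the mask indicates degradation.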
We conduct an extensive evaluation on multiple datasets for both bad-weather-caused degradation and inpainting tasks to validate our method.
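The two forms of distillation guidance described above can be written as simple losses. The sketch below is generic, not the thesis's exact objectives: it shows direct feature-level supervision as an L2 match between student and teacher features, and attention transfer using the standard activation-based attention map (channel-wise sum of squared activations, L2-normalized).

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat):
    """Direct feature-level supervision: mean squared distance between
    the student's and teacher's intermediate feature maps."""
    return np.mean((student_feat - teacher_feat) ** 2)

def attention_map(feat, p=2, eps=1e-8):
    """Spatial attention map: channel-wise sum of |F|^p, flattened and
    L2-normalized (activation-based attention transfer)."""
    a = np.sum(np.abs(feat) ** p, axis=0).ravel()
    return a / (np.linalg.norm(a) + eps)

def attention_transfer_loss(student_feat, teacher_feat):
    """Match the student's spatial attention to the teacher's."""
    diff = attention_map(student_feat) - attention_map(teacher_feat)
    return np.sum(diff ** 2)

# toy example: (channels, height, width) feature maps
rng = np.random.default_rng(1)
t = rng.standard_normal((8, 16, 16))  # teacher features
s = rng.standard_normal((8, 16, 16))  # student features
fd = feature_distillation_loss(s, t)
at_same = attention_transfer_loss(t, t)  # identical features -> zero loss
```

In the distillation setup described in the abstract, the teacher would be the degraded-region detection model (or a network with access to ground-truth information), and these losses would supervise the restoration network's intermediate layers.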


Maitreya Suin (EE17D201)

Electrical Engineering