High Frame-Rate Video Reconstruction from Neuromorphic Event Sensors

Date: 11th May 2021

Time: 11:00 AM

Venue: https://meet.google.com/qby-jwbx-etc

PAST EVENT

Details

Neuromorphic event sensors are a new generation of sensors that asynchronously capture changes in brightness at individual pixels at the instant they occur. Compared to traditional frame-based cameras, event sensors offer advantages such as a 120 dB dynamic range, microsecond temporal resolution, and very low-power operation. A pixel outputs a positive or a negative event whenever it sees an increase or a decrease in intensity, unlike traditional image sensors, which measure absolute intensity values at all pixels at a fixed frame rate. The low-latency, high-temporal-resolution operation of these sensors makes them attractive for several applications such as high-speed imaging and Augmented/Virtual Reality.
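To make the event-generation model concrete, here is a minimal sketch that simulates events from a stack of intensity frames, assuming a simple per-pixel log-intensity threshold model; the function name and threshold value are illustrative and not part of the talk.

```python
import numpy as np

def generate_events(frames, timestamps, threshold=0.2):
    """Simulate event generation from a stack of intensity frames.

    frames: (T, H, W) array of intensity images in [0, 1]
    timestamps: (T,) array of frame times in seconds
    threshold: log-intensity change needed to fire an event (illustrative value)
    Returns a list of (t, x, y, polarity) tuples.
    """
    log_ref = np.log(frames[0] + 1e-6)   # per-pixel reference log intensity
    events = []
    for k in range(1, len(frames)):
        log_cur = np.log(frames[k] + 1e-6)
        diff = log_cur - log_ref
        # positive events where brightness increased past the threshold
        ys, xs = np.nonzero(diff >= threshold)
        events += [(timestamps[k], x, y, +1) for x, y in zip(xs, ys)]
        # negative events where brightness decreased past the threshold
        ys, xs = np.nonzero(diff <= -threshold)
        events += [(timestamps[k], x, y, -1) for x, y in zip(xs, ys)]
        # update the reference only at pixels that fired
        fired = np.abs(diff) >= threshold
        log_ref[fired] = log_cur[fired]
    return events
```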

In this talk, we look at reconstructing high-speed videos using the high-temporal-resolution motion information captured by the event sensor. We first propose a hybrid setup consisting of a low-frame-rate conventional image sensor and the event sensor. The texture-rich information from the image sensor is combined with the motion-rich information from the event sensor to reconstruct high-frame-rate photorealistic video. To accomplish this, the low-frame-rate intensity images are warped to the temporally dense locations of the event data. The results obtained from the proposed algorithm are more photorealistic than those of previous state-of-the-art algorithms. The algorithm's robustness to abrupt camera motion and noise in the event sensor data is also demonstrated.
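As an illustration of the warping step, the following sketch backward-warps a key frame to an intermediate timestamp under a linear-motion assumption; the function, and the use of a precomputed dense optical-flow field, are assumptions for illustration rather than the talk's actual pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_frame(frame, flow, alpha):
    """Warp an intensity key frame part of the way along a dense optical-flow field.

    frame: (H, W) intensity image taken at time t0
    flow:  (H, W, 2) displacement (dx, dy) from t0 to the next key frame at t1
    alpha: fraction in [0, 1] giving the intermediate timestamp
    Returns the frame resampled at t0 + alpha * (t1 - t0).
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # backward warping: sample the source frame at positions shifted by -alpha * flow,
    # i.e. approximating the intermediate-to-source flow as -alpha times the forward flow
    src_x = xs - alpha * flow[..., 0]
    src_y = ys - alpha * flow[..., 1]
    return map_coordinates(frame, [src_y, src_x], order=1, mode='nearest')
```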

Next, we consider high-speed video reconstruction when only event sensor data is available, with no access to a conventional image sensor. For this, we design a recurrent neural network, trained with a self-supervised brightness-constancy loss, to simultaneously predict intensity images and optical flow. A novel loss function is proposed that lets the network produce high-dynamic-range output even though only low-dynamic-range ground-truth images are used during training. We demonstrate the algorithm's robustness in challenging cases of abrupt motion and high-dynamic-range scenes.
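Below is a minimal sketch of what a self-supervised brightness-constancy loss could look like in PyTorch, assuming forward optical flow from the previous to the next predicted frame; the exact formulation used in the talk may differ.

```python
import torch
import torch.nn.functional as F

def brightness_constancy_loss(pred_prev, pred_next, flow):
    """Self-supervised photometric loss: the next predicted frame, warped back by the
    predicted flow, should match the previous predicted frame (brightness constancy).

    pred_prev, pred_next: (B, 1, H, W) predicted intensity images
    flow: (B, 2, H, W) predicted optical flow from the previous to the next frame, in pixels
    """
    b, _, h, w = pred_prev.shape
    # build a normalized sampling grid shifted by the flow, as required by grid_sample
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device, dtype=flow.dtype),
        torch.arange(w, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    src_x = xs.unsqueeze(0) + flow[:, 0]          # where each previous-frame pixel moved to
    src_y = ys.unsqueeze(0) + flow[:, 1]
    grid = torch.stack(
        (2.0 * src_x / (w - 1) - 1.0, 2.0 * src_y / (h - 1) - 1.0), dim=-1
    )                                             # (B, H, W, 2), normalized to [-1, 1]
    warped_next = F.grid_sample(pred_next, grid, align_corners=True)
    return (warped_next - pred_prev).abs().mean() # L1 photometric error
```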

Speakers

Prasan Shedligeri (EE16D409)

Electrical Engineering