TY - JOUR
T1 - Asynchronous Spatial Image Convolutions for Event Cameras
AU - Scheerlinck, Cedric
AU - Barnes, Nick
AU - Mahony, Robert
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/4
Y1 - 2019/4
AB - Spatial convolution is arguably the most fundamental of two-dimensional image processing operations. Conventional spatial image convolution can only be applied to a conventional image, that is, an array of pixel values (or similar image representation) that are associated with a single instant in time. Event cameras have serial, asynchronous output with no natural notion of an image frame, and each event arrives with a different timestamp. In this letter, we propose a method to compute the convolution of a linear spatial kernel with the output of an event camera. The approach operates on the event stream output of the camera directly without synthesising pseudoimage frames as is common in the literature. The key idea is the introduction of an internal state that directly encodes the convolved image information, which is updated asynchronously as each event arrives from the camera. The state can be read off as often as and whenever required for use in higher level vision algorithms for real-time robotic systems. We demonstrate the application of our method to corner detection, providing an implementation of a Harris corner-response 'state' that can be used in real time for feature detection and tracking on robotic systems.
KW - Computer vision for automation
KW - visual tracking
UR - http://www.scopus.com/inward/record.url?scp=85063311904&partnerID=8YFLogxK
U2 - 10.1109/LRA.2019.2893427
DO - 10.1109/LRA.2019.2893427
M3 - Article
SN - 2377-3766
VL - 4
SP - 816
EP - 822
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 8613800
ER -