A major challenge in computer vision stems from the fact that visual data is highly ambiguous: for any given two-dimensional image, there are infinitely many hypotheses of real-world scenes that would explain what we see. For the most part, this ambiguity can be traced back to the very mechanism of image capture: every pixel value recorded by a camera is an integral of the so-called plenoptic function over space (the extent of the pixel), angle (the aperture of the camera), wavelength (the spectral response) and time (the shutter period). Light contributions that reached the same sensor location along different paths or at different times are thus mixed irreversibly. Consequently, many common problems such as deblurring, motion tracking, and the estimation of geometry, material reflectance and illumination are still considered unsolved even after decades of active investigation.
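This image-formation process can be written compactly as a nested integral over the four sampled dimensions. The notation below is illustrative rather than canonical: $P$ denotes the plenoptic function, and $s$, $a$, $c$ stand for assumed spatial, angular and spectral response weights of the sensor.

```latex
I_{\text{pixel}} \;=\;
  \int_{T} \int_{\Lambda} \int_{\Omega} \int_{A}
    P(\mathbf{x}, \boldsymbol{\omega}, \lambda, t)\,
    s(\mathbf{x})\, a(\boldsymbol{\omega})\, c(\lambda)\;
    \mathrm{d}\mathbf{x}\;\mathrm{d}\boldsymbol{\omega}\;
    \mathrm{d}\lambda\;\mathrm{d}t
```

Here $A$ is the pixel footprint, $\Omega$ the solid angle subtended by the aperture, $\Lambda$ the spectral sensitivity range, and $T$ the shutter interval. Because the integral collapses all four dimensions into a single scalar, no inverse exists in general, which is precisely the irreversible mixing described above.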
To our rescue come new imaging modalities that sample the temporal, angular and spectral dimensions to produce high-dimensional plenoptic images, making some of those hard tasks tractable. A recent research direction that I find particularly fascinating is transient imaging, a relatively new technique that introduces time resolution fast enough to resolve non-stationary light distributions in real-world scenes, effectively “filming” light in flight. In signal-processing terms, a transient image contains the spatio-temporal response of a scene to illumination by an ultrashort pulse of light emitted from a point A and observed by a camera at a point B. Impulse responses have proven a valuable tool for the analysis of systems in virtually all fields of science and engineering. Likewise, I expect transient imaging to open new paths towards the solution of many hard problems in computer vision, graphics and related areas.
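The impulse-response view can be made concrete with a minimal numerical sketch. All numbers below are illustrative: a toy pixel receives light along two paths (a direct reflection and a later indirect bounce). A conventional camera integrates over the whole shutter period and mixes the two contributions into one value, whereas a transient measurement with a short (but not ideal) illumination pulse records the scene response convolved with the pulse, keeping the two arrivals separable in time.

```python
import numpy as np

# Toy scene response h(t) to an ideal light impulse, in 1 ps time bins.
# Two light paths arrive at different delays and remain distinct in time.
dt = 1e-12                          # bin width: 1 ps
t = np.arange(0, 5e-9, dt)          # 5 ns observation window (5000 bins)
h = np.zeros_like(t)
h[1000] = 1.0                       # direct path, arrives at 1.0 ns
h[3500] = 0.4                       # indirect bounce, arrives at 3.5 ns

# Conventional camera: one integral over the shutter period.
# Both paths collapse into a single scalar -- irreversibly mixed.
conventional_pixel = h.sum() * dt

# Transient camera: illuminate with a short Gaussian pulse p(t)
# (~10 ps wide, peaking at 50 ps) and record the convolution p * h,
# i.e. the impulse response blurred by the pulse shape.
pulse = np.exp(-0.5 * ((t - 50e-12) / 10e-12) ** 2)
transient = np.convolve(pulse, h)[: len(t)] * dt

# The two arrivals show up as two separate peaks (near 1.05 ns and
# 3.55 ns, each shifted by the 50 ps pulse delay).
first_arrival_bin = int(np.argmax(transient))
```

The convolution here is the standard linear-systems identity: since the scene is (to a good approximation) a linear time-invariant system for light transport, the measurement under any illumination pulse is the pulse convolved with the scene's impulse response, which is what makes impulse responses such a useful characterization.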
In November 2013, Matthias Hullin started his faculty position at the University of Bonn, Germany.