Personal Homepage: http://web.stanford.edu/~harilaks/
Group Homepage: http://web.stanford.edu/~harilaks/
The goal of our research group is to create systems for life-like video experiences. In particular, we focus on novel video processing algorithms for the (a) capture & postproduction, (b) compression & delivery, and (c) rendering stages of an immersive video pipeline.
In the content creation stage, we investigate various aspects of producing omnidirectional videos that capture a 3D scene. By its nature, this entails very high data volumes. Some of the questions we would like to address are: What representations of a 3D scene should we use? How do we minimize the data transfer — can we use receiver-driven communication schemes? What bandwidths do we need for live transmission over the Internet to achieve immersion?

At the receiver side, we would like to develop video-based realistic rendering of the 3D scene and enable content adaptation for different output devices and viewer preferences. Overall, such a pipeline would let us transform the viewing experience from passive onlooking to active engagement, with the audience choosing what and where they want to view within the scene. Converging trends in acquisition and display devices suggest that such immersive experiences may become realizable in the near future.
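To give a feel for the bandwidth question, here is a back-of-envelope sketch comparing full-sphere delivery of an omnidirectional video with a receiver-driven scheme that sends only the viewer's current viewport. The resolution, frame rate, compression ratio, and viewport fraction are illustrative assumptions, not figures from our work.

```python
# Back-of-envelope bandwidth estimate for omnidirectional video delivery.
# All numbers (resolution, fps, compression ratio, viewport fraction) are
# illustrative assumptions chosen for this sketch.

def raw_bitrate_bps(width, height, fps, bits_per_pixel=12):
    """Uncompressed bitrate; 4:2:0 chroma subsampling gives ~12 bits/pixel."""
    return width * height * fps * bits_per_pixel

def delivered_bitrate_mbps(width, height, fps,
                           compression_ratio=200.0,
                           viewport_fraction=1.0):
    """Compressed bitrate in Mb/s, optionally restricted to the fraction of
    the sphere actually visible to the viewer (receiver-driven delivery)."""
    raw = raw_bitrate_bps(width, height, fps) * viewport_fraction
    return raw / compression_ratio / 1e6

# An 8K equirectangular sphere at 60 fps, full vs. a ~15% viewport.
full = delivered_bitrate_mbps(7680, 3840, 60)
viewport = delivered_bitrate_mbps(7680, 3840, 60, viewport_fraction=0.15)
print(f"full sphere: {full:.1f} Mb/s, viewport only: {viewport:.1f} Mb/s")
```

Even under these rough assumptions, viewport-only delivery cuts the required bitrate by roughly the viewport fraction, which is one motivation for receiver-driven communication schemes.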