
First Name | Last Name | Position
Mykhaylo | Andriluka | People Detection and Tracking
Roland | Angst | Vision, Geometry, and Computational Perception
Tamay | Aykut |
Vahid | Babaei |
Pierpaolo | Baccichet | Distributed Media Systems
Volker | Blanz | Learning-Based Modeling of Objects
Martin | Bokeloh | Inverse Procedural Modeling
Adrian | Butscher | Geometry Processing and Discrete Differential Geometry
Renjie | Chen | Images and Geometry

Researcher


Dr. Michael Zollhöfer


Visual Computing, Deep Learning and Optimization

Name of Research Group: Visual Computing, Deep Learning and Optimization
Homepage Research Group: web.stanford.edu/~zollhoef
Personal Homepage: zollhoefer.com
Mentor Saarbrücken: Hans-Peter Seidel
Mentor Stanford: Pat Hanrahan
Research Mission: The primary focus of my research is to teach computers to reconstruct and analyze our world at frame rate from visual input. The extracted knowledge is the foundation for a broad range of applications, not only in visual effects, computer animation, autonomous driving, and man-machine interaction, but also in related fields such as medicine and biomechanics. In particular, with the increasing popularity of virtual, augmented, and mixed reality comes a rising demand for real-time, low-latency solutions to the underlying core problems.

My research tackles these challenges with novel mathematical models and algorithms that enable computers to first reconstruct and subsequently analyze our world. The main focus is on fast and robust algorithms that approach the underlying reconstruction and machine learning problems for static as well as dynamic scenes. To this end, I develop key technology to invert the image formation models of computer graphics based on data-parallel optimization and state-of-the-art deep learning techniques.

The extraction of 3D and 4D information from visual data is highly challenging and under-constrained, since image formation convolves multiple physical dimensions into flat color measurements. Reconstruction at real-time rates poses additional challenges, since it requires solving problems at the intersection of several research fields: computer graphics, computer vision, machine learning, optimization, and high-performance computing. However, a solution to these problems provides strong cues for the extraction of higher-order semantic knowledge. Solving the underlying core problems is therefore highly important: it will have impact across multiple research fields and provide key technological insights with the potential to transform the visual computing industry.

In summer 2019, Michael Zollhöfer joined Facebook.
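The idea of inverting an image formation model can be illustrated with a toy analysis-by-synthesis loop. The sketch below is only a minimal illustration of the general principle, not the group's actual method: the rendering function, its two parameters, and the synthetic observation are made-up placeholders, and plain gradient descent on a photometric loss stands in for the data-parallel optimizers and learned components mentioned above.

```python
# Illustrative sketch only: fit scene parameters by minimizing the photometric
# error between a rendered image and an observation (analysis by synthesis).
# The "renderer" is a hypothetical 1D toy model, not a real graphics pipeline.
import numpy as np

def render(params, xs):
    """Toy image formation: a 1D Gaussian blob with unknown center and width."""
    center, log_width = params
    width = np.exp(log_width)
    return np.exp(-((xs - center) ** 2) / (2.0 * width ** 2))

def photometric_loss(params, xs, observed):
    residual = render(params, xs) - observed
    return 0.5 * np.mean(residual ** 2)

def numerical_gradient(f, params, eps=1e-5):
    """Central finite differences; a stand-in for analytic or automatic gradients."""
    grad = np.zeros_like(params)
    for i in range(len(params)):
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (f(params + step) - f(params - step)) / (2.0 * eps)
    return grad

xs = np.linspace(-1.0, 1.0, 128)
observed = render(np.array([0.3, np.log(0.2)]), xs)  # synthetic "measurement"
params = np.array([0.0, np.log(0.5)])                # initial guess

for _ in range(500):                                 # plain gradient descent
    grad = numerical_gradient(lambda p: photometric_loss(p, xs, observed), params)
    params -= 1.0 * grad

print("recovered center and width:", params[0], np.exp(params[1]))
```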

Researcher

Haricharan Lakshman

Immersive Video

Name of Research Group: Immersive Video
Foto: wp.mpi-inf.mpg.de/mpc-vcc/files/2014/11/lakshman-e1415351999764.jpg
Research Mission: The goal of our research group is to create systems for a life-like video experience. In particular, we focus on novel video processing algorithms for the (a) capture and post-production, (b) compression and delivery, and (c) rendering stages of an immersive video pipeline. In the content creation stage, we investigate various aspects of producing omnidirectional videos that capture a 3D scene. By its nature, this entails a very high data volume. Some of the questions we would like to address are: What representations of a 3D scene do we use? How do we minimize the data transfer – can we use receiver-driven communication schemes? What bandwidths do we need for live transmission over the Internet to achieve immersion? At the receiver side, we would like to develop video-based realistic rendering of the 3D scene and enable content adaptation for different output devices or viewer preferences. Overall, such a pipeline would let us move the viewing experience from passive onlooking to active engagement, with the audience choosing what or where they want to view within the scene. We are at the confluence of tech trends in acquisition and display devices where such immersive experiences may become realizable in the near future.
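To make the bandwidth question concrete, the back-of-envelope sketch below compares the raw data rate of a full omnidirectional video with a receiver-driven scheme that transmits only the currently viewed region. All resolutions, frame rates, and the compression ratio are assumed example values for illustration, not figures from the group's work.

```python
# Back-of-envelope sketch with assumed numbers: compare the data rate of a full
# omnidirectional video with a viewport-only, receiver-driven delivery scheme.

def raw_rate_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed data rate in Gbit/s for a given resolution and frame rate."""
    return width * height * fps * bits_per_pixel / 1e9

full_sphere = raw_rate_gbps(7680, 3840, 60)   # assumed 8K equirectangular, 60 fps
viewport    = raw_rate_gbps(1920, 1920, 60)   # assumed ~90-degree field-of-view crop

compression_ratio = 200                        # assumed typical video-codec ratio

print(f"full sphere, uncompressed: {full_sphere:6.2f} Gbit/s")
print(f"full sphere, compressed:   {full_sphere / compression_ratio * 1000:6.1f} Mbit/s")
print(f"viewport-only, compressed: {viewport / compression_ratio * 1000:6.1f} Mbit/s")
```

Under these assumptions, streaming the full sphere remains costly even after compression, while a receiver-driven, viewport-adaptive scheme reduces the rate by roughly an order of magnitude, which is why such schemes are attractive for live transmission over the Internet.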
