
Researchers and Groups:
Klaus Hildebrandt: Applied Geometry
Matthias Hullin: Computational Transient Imaging
Ivo Ihrke: Generalized Image Acquisition and Analysis
Andreas Karrenbauer: Discrete Optimization
Michael Kerber: Topological and Geometric Computing
Haricharan Lakshman: Immersive Video
Hendrik Lensch: General Appearance Acquisition
Yangyan Li: Semantic Reconstruction from Point Cloud
Markus Magnor: Graphics - Optics - Vision

Researcher


Dr. Michael Zollhöfer



Name of Research Group: Visual Computing, Deep Learning and Optimization
Homepage Research Group: web.stanford.edu/~zollhoef
Personal Homepage: zollhoefer.com
Mentor Saarbrücken: Hans-Peter Seidel
Mentor Stanford: Pat Hanrahan
Research Mission: The primary focus of my research is to teach computers to reconstruct and analyze our world at frame rate based on visual input. The extracted knowledge is the foundation for a broad range of applications, not only in visual effects, computer animation, autonomous driving, and human-machine interaction, but also in related fields such as medicine and biomechanics. In particular, with the increasing popularity of virtual, augmented, and mixed reality, there is a rising demand for real-time, low-latency solutions to the underlying core problems.

My research tackles these challenges with novel mathematical models and algorithms that enable computers to first reconstruct and subsequently analyze our world. The main focus is on fast and robust algorithms that address the underlying reconstruction and machine learning problems for static as well as dynamic scenes. To this end, I develop key technology to invert the image formation models of computer graphics based on data-parallel optimization and state-of-the-art deep learning techniques.

The extraction of 3D and 4D information from visual data is highly challenging and under-constrained, since image formation convolves multiple physical dimensions into flat color measurements. Reconstruction at real-time rates poses additional, unique challenges at the intersection of several research fields, namely computer graphics, computer vision, machine learning, optimization, and high-performance computing. A solution to these problems, however, provides strong cues for the extraction of higher-order semantic knowledge. Solving these core problems is highly important, since it will have high impact in multiple research fields and provide key technological insights with the potential to transform the visual computing industry.

In summer 2019, Michael Zollhöfer joined Facebook.
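The core idea of inverting an image formation model can be illustrated with a toy analysis-by-synthesis loop (a minimal sketch for illustration only, not the group's actual pipeline): a forward model "renders" an observation from a scene parameter, and an optimizer recovers the parameter by minimizing the photometric error against the observed image.

```python
import numpy as np

# Toy "image formation model": render a 1-D image as a Gaussian blob
# centered at the scene parameter x (a hypothetical stand-in for the
# far richer parameter sets used in real reconstruction pipelines).
def render(x, width=64):
    px = np.arange(width)
    return np.exp(-0.5 * ((px - x) / 8.0) ** 2)

# Observation produced by the unknown ground-truth parameter.
x_true = 40.0
observed = render(x_true)

# Analysis-by-synthesis: invert the forward model by minimizing the
# photometric error ||render(x) - observed||^2 with gradient descent
# (finite-difference gradient for simplicity).
def loss(p):
    return np.sum((render(p) - observed) ** 2)

x, lr, eps = 25.0, 2.0, 1e-3
for _ in range(200):
    grad = (loss(x + eps) - loss(x - eps)) / (2.0 * eps)
    x -= lr * grad

print(round(x, 1))  # recovers a value close to x_true
```

Real systems replace the toy renderer with a differentiable model of geometry, reflectance, and lighting, and solve for millions of parameters with data-parallel solvers, but the fitting loop follows the same pattern.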

Researcher

Name of Researcher: Markus Flierl
Name of Research Group: Visual Sensor Networks
Research Mission:
Visual information plays a prominent role in our daily lives. This is not surprising, as human beings are ocular-centric. We aim to use our visual sense most efficiently, and we make use of visual information to communicate our messages. Images and video have changed the way we see the world. An image is capable of capturing the impression of a moment. Video adds another dimension, capturing the constant change of the expression of our world. Still, images and video let us perceive the world as "one-dimensional". Capable of binocular vision, the human being benefits from more than one view of the world. Multi-view imagery adds another dimension, capturing the constant change of the world from various perspectives.

The research on Visual Sensor Networks investigates distributed visual communication, with emphasis on both source coding and transmission over networks. In particular, this research project considers visual communication of natural dynamic 3D scenes. Spatially distributed video sensors capture a dynamic 3D scene from multiple viewpoints. The video sensors encode their signal and transmit their data via the network to the central decoder, which shall be able to reconstruct the dynamic 3D scene. The sensor network shall exploit the correlation among the many observations of the scene. In addition, communication among the visual sensors shall enhance the efficiency of the sensor network. This project addresses interesting problems such as how to sample the dynamic 3D scene efficiently, and which messages have to be exchanged among the video sensors to maximize their efficiency.

Apart from communication tasks, Visual Sensor Networks may be helpful for other applications: as the views of the cameras overlap, multi-view image sequence data may be used to track objects in 3D space or to estimate the motion field of voxels. Centralized algorithms for these problems are known, but due to the large data volume generated by dense camera arrays, such algorithms may not be feasible. To conclude, the signal that is desired at the fusion center of the Visual Sensor Network will also shape its design. Efficient reconstruction for driving a holographic display with all camera signals will impose different constraints than rendering a single novel view.
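The benefit of exploiting inter-view correlation can be sketched with a toy example (an assumed setup for illustration, not the project's actual coder): one camera's signal predicts another's after disparity compensation, so only a small residual must be coded instead of the full second view.

```python
import numpy as np

# Toy setup: two spatially separated sensors observe the same 1-D scene
# line, offset by an unknown disparity, with a little sensor noise.
rng = np.random.default_rng(0)
scene = np.cumsum(rng.standard_normal(300))        # smooth random "scene"
true_disparity = 5
view_a = scene[0:250]
view_b = scene[true_disparity:250 + true_disparity]
view_b = view_b + 0.05 * rng.standard_normal(250)  # sensor noise

# Exploiting inter-view correlation: predict view_b from a shifted copy
# of view_a (disparity compensation); only the residual must be coded.
def residual(shift):
    return view_b[:200] - view_a[shift:shift + 200]

best = min(range(11), key=lambda s: float(np.sum(residual(s) ** 2)))
res = residual(best)

print(best)                                  # the search recovers the disparity
print(np.var(res) / np.var(view_b) < 0.01)   # residual energy is far below raw
```

In a distributed setting the sensors cannot simply exchange full views, which is exactly why the project asks which compact messages must flow between sensors to realize this correlation gain.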