
First Name   Last Name     Position
Mike         Sips          Visual Exploration of Space-Time Pattern in Multi-Dimensional and Heterogeneous Data Spaces
Michael      Stark         Visual Object Recognition and Scene Interpretation
Jürgen       Steimle       Embodied Interaction
Markus       Steinberger   GPU Scheduling and Parallel Computing
Carsten      Stoll         Optical Performance Capture
Robert       Strzodka      Integrative Scientific Computing
Holger       Theisel       Topological Methods for Vector Field Processing

Researcher


Dr. Michael Zollhöfer


Visual Computing, Deep Learning and Optimization

Name of Research Group: Visual Computing, Deep Learning and Optimization
Homepage Research Group: web.stanford.edu/~zollhoef
Personal Homepage: zollhoefer.com
Mentor Saarbrücken: Hans-Peter Seidel
Mentor Stanford: Pat Hanrahan
Research Mission: The primary focus of my research is to teach computers to reconstruct and analyze our world at frame rate based on visual input. The extracted knowledge is the foundation for a broad range of applications not only in visual effects, computer animation, autonomous driving, and man-machine interaction, but also in related fields such as medicine and biomechanics. In particular, the increasing popularity of virtual, augmented, and mixed reality creates a rising demand for real-time, low-latency solutions to the underlying core problems.

My research tackles these challenges with novel mathematical models and algorithms that enable computers to first reconstruct and subsequently analyze our world. The main focus is on fast and robust algorithms for the underlying reconstruction and machine learning problems, for static as well as dynamic scenes. To this end, I develop key technology to invert the image formation models of computer graphics based on data-parallel optimization and state-of-the-art deep learning techniques (a toy illustration of this inversion idea appears below).

The extraction of 3D and 4D information from visual data is highly challenging and under-constrained, since image formation convolves multiple physical dimensions into flat color measurements. Reconstruction in 3D and 4D at real-time rates poses additional challenges, since it involves unique problems at the intersection of several research fields, namely computer graphics, computer vision, machine learning, optimization, and high-performance computing. A solution to these problems, however, provides strong cues for the extraction of higher-order semantic knowledge. Solving the underlying core problems is therefore highly important: it will have high impact across these fields and provide key technological insights with the potential to transform the visual computing industry. In summer 2019 Michael Zollhöfer joined Facebook.
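As a rough, hypothetical illustration of what inverting an image formation model via optimization can look like in the simplest possible setting, the sketch below fits the parameters of a toy renderer (a single 2D Gaussian blob) to an observed image by gradient descent. The renderer, parameter values, step size, and iteration count are illustrative assumptions and are not tied to the group's actual models or systems.

import numpy as np

H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]

def render(params):
    # Forward image formation model: parameters (center, size) -> image.
    cx, cy, sigma = params
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def loss_and_grad(params, observed):
    # Photometric loss ||render(p) - observed||^2 with a finite-difference gradient.
    eps = 1e-3
    base = np.sum((render(params) - observed) ** 2)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        grad[i] = (np.sum((render(p) - observed) ** 2) - base) / eps
    return base, grad

# "Observed" image produced by the true, unknown parameters.
true_params = np.array([40.0, 22.0, 6.0])
observed = render(true_params)

# Invert the image formation model by gradient descent from a rough initial guess.
params = np.array([30.0, 30.0, 10.0])
for _ in range(500):
    _, grad = loss_and_grad(params, observed)
    params -= 1e-2 * grad

print("recovered parameters:", params)   # approaches [40, 22, 6]

Real systems of this kind replace the toy renderer with full graphics image formation models and run the optimization data-parallel on many pixels at once, but the analysis-by-synthesis loop (render, compare to the observation, update the parameters) has the same structure.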

Researcher

Martin Wicke


Methods for Large-Scale Physical Modeling and Animation

Name of Research Group: Methods for Large-Scale Physical Modeling and Animation
Photo: wp.mpi-inf.mpg.de/mpc-vcc/files/2012/01/wicke.jpg
Research Mission:
General Reduced-Order Models: Dimensionality reduction is a powerful technique that makes very high-dimensional simulations tractable. As an instance of data-driven simulation, creating a reduced-order model requires examples that have to be generated using traditional simulation techniques. Furthermore, the reduction process is computationally expensive, imposing severe limits on the size of reduced models. Each reduced model can only be used with the exact boundary conditions that were specified when the model was created. In this project, we aim to relax these restrictions. By computing reduced models in a modular fashion, more complex simulations can be set up without repeating the expensive model reduction step. Combining several of these tiles requires a coupling mechanism that has to be fast and flexible, while guaranteeing important physical invariants and providing useful error bounds. (A minimal sketch of the reduction step appears after the project descriptions below.)

Measuring Fluid Flow: Fluid flows are complex and hard to measure. One of the most commonly used techniques injects probes into the fluid whose positions are then tracked over time. The recorded trajectories can then be used to reconstruct the (time-dependent) flow field. In this project, traditional flow reconstruction methods are augmented with knowledge about the physical processes. Since we can compute the time-dependent behavior of a fluid given an initial condition, measurements at one timestep can be validated by subsequent measurements. This mechanism makes it possible to use far fewer sample points to reconstruct time-varying flow fields. To validate this method, we design and build an experimental setup that can be used to measure small-scale fluid flows using optical tracking of small objects, similar to traditional particle image velocimetry approaches.

Handling Mobility in Sensor Networks: Most algorithms for routing and data management are designed for static or slowly changing networks. While they adapt well to gradual changes in network topology, and mechanisms exist that help recover from temporary failures, mobility of some nodes poses significant challenges. This is especially true for sensor networks, whose nodes are typically small, low-power devices, placing severe constraints on admissible computation and power consumption. This project aims to gracefully handle node mobility in networks, in particular sensor networks. Exploiting typical user behavior can help implement proactive handoff procedures to prevent link failures. Intermediate goals include a robust and efficient streaming mechanism, as well as reliable predictive routing.
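The following is a minimal, hypothetical sketch of the snapshot-based reduction step referenced in the reduced-order modeling project above: simulation snapshots are compressed into a small linear basis via an SVD (proper orthogonal decomposition), and new states are approximated in that reduced space. The snapshot data, dimensions, and basis size are fabricated for illustration and are not the project's actual pipeline.

import numpy as np

n_dofs, n_snapshots, r = 10_000, 200, 20   # full dimension, number of examples, reduced size
rng = np.random.default_rng(0)

# Fabricate "simulation snapshots": each column stands in for a full state that a
# traditional solver would produce. Their energy decays quickly across 30 latent modes.
latent = rng.standard_normal((n_dofs, 30))
scales = 0.5 ** np.arange(30)
snapshots = latent @ (scales[:, None] * rng.standard_normal((30, n_snapshots)))

# Reduced basis: leading left singular vectors of the snapshot matrix (POD).
# The singular values S show how quickly the snapshot energy decays,
# i.e. how many modes are worth keeping.
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                           # n_dofs x r

# Project a new full state into the reduced space and reconstruct it.
x_full = latent @ (scales * rng.standard_normal(30))
q = basis.T @ x_full                       # r reduced coordinates
x_approx = basis @ q                       # lift back to the full dimension

rel_err = np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full)
print(f"relative reconstruction error with {r} modes: {rel_err:.3e}")

Running the sketch prints a small relative reconstruction error because the fabricated snapshots concentrate their energy in a few modes; real simulation data behaves similarly whenever a low-dimensional structure is present, which is what makes the reduced model cheap to evaluate.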