My research interests focus on developing methods for realistic image generation. From its outset, the field of computer graphics rendering has experienced a continuous increase not only in scientific recognition but also in economic relevance. Computer graphics-based special effects and entirely computer-animated feature films have become established business segments of the movie industry, and computer games today constitute a multi-billion-dollar business. In recent years, however, progress in rendering algorithms and graphics hardware has led to the insight that, despite faster and more complex rendering calculations, the modeling techniques traditionally employed in computer graphics often fundamentally limit attainable realism. I therefore concentrate on investigating suitable algorithms to import the realism of real-world recordings into computer graphics. Such image- and video-based modeling and rendering techniques employ conventionally taken photographs or video recordings of the real world to achieve photo-realistic rendering results.
The goal of my work is to combine the versatility and freedom of computer-based image synthesis with the natural realism and ease of acquisition of a camcorder recording, making use of today’s PC graphics card capabilities and consumer-market imaging technology.
During my post-doctoral appointment as a Research Associate at Stanford University, I participated in the Stanford Immersive Television Project and worked on the acquisition, analysis, and rendering of dynamic light fields. A real-time rendering algorithm exploiting conventional PC graphics hardware enabled interactive viewing of the dynamic scene. For optimal rendering results, I developed a novel algorithm to robustly estimate dense depth maps from the MPEG-compressed video streams.
At the Max Planck Institute for Computer Science, I currently focus on developing efficient analysis and rendering methods for sparse recording setups with only a handful of cameras. In our studio, we record a stage area with eight synchronized video cameras distributed all around the stage. My group has developed methods for online visual hull reconstruction and enhanced-quality real-time rendering. Recently, we succeeded in using a generic geometry model to automatically analyze the complex movements of a human dancer, enabling photo-realistic, interactive rendering of the dancer from arbitrary viewpoints.
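To give a rough flavor of the visual-hull idea mentioned above (not the actual multi-camera studio pipeline, which uses calibrated perspective cameras), the sketch below carves a voxel volume against silhouettes from three hypothetical orthographic cameras along the coordinate axes: a voxel is kept only if it projects inside every silhouette. The grid resolution and the spherical test object are illustrative assumptions.

```python
import numpy as np

# Toy voxel grid and a synthetic object (a sphere) to reconstruct.
N = 32
ax = np.arange(N)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
c = (N - 1) / 2.0
true_object = (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= (N // 4) ** 2

# Silhouettes: binary masks as seen by three orthographic cameras,
# one looking along each coordinate axis (a stand-in for the studio's
# eight calibrated perspective cameras).
sil_x = true_object.any(axis=0)  # camera looking along the x axis
sil_y = true_object.any(axis=1)  # along the y axis
sil_z = true_object.any(axis=2)  # along the z axis

# Carve: a voxel (i, j, k) survives only if it falls inside every
# camera's silhouette. Broadcasting intersects the three back-projections.
hull = (sil_x[np.newaxis, :, :]
        & sil_y[:, np.newaxis, :]
        & sil_z[:, :, np.newaxis])

# The visual hull is a conservative superset of the true object:
# it contains every true voxel, plus some extra volume the sparse
# camera set cannot carve away.
print(hull.sum() >= true_object.sum())        # True
print((true_object & ~hull).sum() == 0)       # True: no true voxel lost
```

With only a handful of views, the hull overestimates the object, which is why the actual work pairs it with enhanced-quality rendering and, more recently, a generic geometry model fitted to the recorded silhouettes.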