Publications

2017

1. RaTrace: Simple and Efficient Abstractions for Ray Traversal Algorithms
Arsène Pérard-Gayot, Martin Weier, Richard Membarth, Philipp Slusallek, Roland Leißa, and Sebastian Hack
In submission to IPDPS 2017.

2. Perception-driven Accelerated Rendering (State-of-the-Art Report)
Martin Weier, Michael Stengel, Steve Grogorick, Thorsten Roth, Martin Eisemann, Elmar Eisemann, Ernst Kruijff, Marcus Magnor, Yongmin Li, André Hinkenjann, Philipp Slusallek, Piotr Didyk, and Karol Myszkowski
Submitted to Eurographics 2017.

2016

1. Light Harmonization for Virtual Production
Farshad Einabadi and Oliver Grau
Short Paper at CVMP 2016

Einabadi-CVMP-2016-Short.pdf (Adobe PDF - 507Kb)

2. VPET – A Toolset for Collaborative Virtual Filmmaking
Simon Spielmann, Andreas Schuster, Kai Götz, and Volker Helzle
To be presented at SIGGRAPH Asia 2016 Technical Briefs, 5 December 2016, 14:15, The Venetian Macao, Meeting Room Sicily 2505, Level 1.

Abstract
Over the last decades, the process of filmmaking has been subject to constant virtualization. Empty green screen stages leave the entire on-set crew clueless as real props are often replaced with virtual elements in later stages of production. With the development of virtual production workflows, solutions have been introduced that enable the decision-makers to explore the virtually augmented reality. However, current environments are either proprietary or lack usability, particularly when used by filmmakers without specialized knowledge of computer graphics and 3D software. As part of the EU-funded project Dreamspace, we have developed VPET (Virtual Production Editing Tool), a holistic approach for established film pipelines that allows on-set light, asset and animation editing via an intuitive interface. VPET is a tablet-based on-set editing application that works within a real-time virtual production environment. It is designed to run on mobile and head-mounted devices (HMDs) and communicates through a network interface with Digital Content Creation (DCC) tools and other VPET clients. Moreover, the tool provides functionality to interact with digital assets during a film production and synchronises changes within the film pipeline. This work represents a novel approach to interact collaboratively with film assets in real-time while maintaining fundamental parts of production pipelines. Our vision is to establish an on-set situation comparable to the early days of filmmaking, where all creative decisions were made directly on set. Additionally, this will contribute to the democratisation of virtual production.

SA16_VPET_Filmakademie.pdf (Adobe PDF - 267Kb)
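
The abstract does not specify VPET's wire format, but the synchronisation idea it describes can be sketched as follows: each client broadcasts its edits as small messages, and every peer (including the DCC tools) applies them to its local copy of the scene. All names and fields below are hypothetical, for illustration only:

import json
import time

def make_update_message(client_id, object_id, parameter, value):
    """Build a JSON message announcing an on-set edit to other clients."""
    return json.dumps({
        "sender": client_id,       # tablet/HMD client that made the edit
        "object": object_id,       # scene-object identifier
        "parameter": parameter,    # e.g. "translation" or "lightIntensity"
        "value": value,            # new parameter value
        "timestamp": time.time(),  # used to order concurrent edits
    })

def apply_update_message(scene, message):
    """Apply a received edit to the local scene copy (last write wins)."""
    update = json.loads(message)
    scene.setdefault(update["object"], {})[update["parameter"]] = update["value"]

# Example: a tablet client moves a light; every peer applies the same message.
scene = {}
msg = make_update_message("tablet-1", "keyLight", "translation", [1.0, 2.5, 0.0])
apply_update_message(scene, msg)
print(scene)  # {'keyLight': {'translation': [1.0, 2.5, 0.0]}}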

3. Dreamspace: A Platform and Tools for Collaborative Virtual Production
O. Grau, V. Helzle, E. Joris, T. Knop, B. Michoud, P. Slusallek, P. Bekaert, and J. Starck
In IBC 2016 – selected as one of the Best 8 Papers of the Year

Abstract
This paper describes the concepts and results implemented by the European FP7 Dreamspace project. Dreamspace develops a new platform and tools for collaborative virtual production of visual effects in film and TV and for new immersive experiences. The aim of the project is to enable creative professionals to combine live performances, video and computer-generated imagery in real-time. In particular, the project has developed tools allowing on-set manipulation of 3D assets, live integration of video feeds from tracked cameras and live compositing of either CGI content or background plates from panoramic video captured by omnidirectional video rigs. The CGI content is lit by automatically captured studio lighting, using a new real-time global illumination rendering system. Furthermore, Dreamspace investigates the use of omnidirectional video and 3D assets in new immersive user experiences.

IBC2016_Dreamspace.pdf (Adobe PDF - 17.27Mb)

4. Roto++: Accelerating Professional Rotoscoping using Shape Manifolds
W. Li, F. Viola, J. Starck, G. Brostow, N. Campbell
ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2016

Abstract
Rotoscoping (cutting out different characters/objects/layers in raw video footage) is a ubiquitous task in modern post-production and represents a significant investment in person-hours. In this work, we study the particular task of professional rotoscoping for high-end, live action movies and propose a new framework that works with roto-artists to accelerate the workflow and improve their productivity. Working with the existing keyframing paradigm, our first contribution is the development of a shape model that is updated as artists add successive keyframes. This model is used to improve the output of traditional interpolation and tracking techniques, reducing the number of keyframes that need to be specified by the artist. Our second contribution is to use the same shape model to provide a new interactive tool that allows an artist to reduce the time spent editing each keyframe. The more keyframes that are edited, the better the interactive tool becomes, accelerating the process and making the artist more efficient without compromising their control. Finally, we also provide a new, professionally rotoscoped dataset that enables truly representative, real-world evaluation of rotoscoping methods. We used this dataset to perform a number of experiments, including an expert study with professional roto-artists, to show, quantitatively, the advantages of our approach.
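
The shape model at the heart of this workflow can be illustrated with a deliberately simplified stand-in: Roto++ itself learns a Gaussian-process latent variable model from the keyframes, but the core idea of projecting a rough interpolated contour onto a low-dimensional shape space learned from the artist's keyframes already shows up with plain PCA. The sketch below assumes contours are resampled to a fixed number of points:

import numpy as np

class ShapeModel:
    def __init__(self, n_components=2):
        self.n_components = n_components
        self.mean = None
        self.basis = None

    def fit(self, keyframes):
        """keyframes: (K, 2N) array, each row a flattened contour of N points."""
        X = np.asarray(keyframes, dtype=float)
        self.mean = X.mean(axis=0)
        # Principal directions of shape variation across the keyframes.
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[: self.n_components]

    def project(self, shape):
        """Snap a rough (tracked/interpolated) contour onto the manifold."""
        coeffs = self.basis @ (shape - self.mean)
        return self.mean + self.basis.T @ coeffs

# As the artist adds keyframes, refit the model and use it to clean up the
# interpolated contours between keyframes.
rng = np.random.default_rng(0)
keyframes = rng.normal(size=(5, 20))   # 5 keyframes, 10-point contours
model = ShapeModel(n_components=2)
model.fit(keyframes)
rough = keyframes[0] + 0.1 * rng.normal(size=20)
print(model.project(rough).shape)      # (20,)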

5. Nonuniform Depth Distribution Selection with Discrete Fourier Transform
Lode Jorissen, Patrik Goorts, Gauthier Lafruit, Philippe Bekaert
SIGGRAPH 2016

iMinds-SIGGRAPH-2016-poster.pdf (Adobe PDF - 2.17Mb)

6. Omnidirectional Free Viewpoint Video using Panoramic Light Fields
Steven Maesen, Patrik Goorts, Philippe Bekaert
In 3DTV 2016

Abstract
In this paper, we describe a system to create an omnidirectional free viewpoint experience using only a small number of input cameras. The input cameras are placed on a circle and we create a large number of novel virtual viewpoints on that circle. Next, we choose a position within that circle and compute the omnidirectional image that is visible from that position by considering the collection of virtual images as a light field. The corresponding pixels in the virtual images are selected by tracing rays from the desired viewing position. Changing your position inside the circle results in an adapted, view-dependent rendering. This creates a free viewpoint 3D VR experience. We demonstrate our method using the game engine Unity combined with the Oculus Rift.
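
The ray-selection step described above reduces to simple circle geometry. The sketch below is a simplified reading of the abstract rather than the paper's implementation: it intersects a viewing ray with the camera circle, picks the nearest virtual camera, and samples the panorama direction looking along the same world direction (radiance is constant along the ray; blending between neighbouring cameras is omitted):

import math

def select_camera(p, theta, radius, n_cameras):
    """p: (x, y) viewpoint inside the circle; theta: ray direction in radians.
    Returns the index of the nearest virtual camera on the circle and the
    world direction to sample in that camera's panorama."""
    dx, dy = math.cos(theta), math.sin(theta)
    # Solve |p + t*d|^2 = radius^2 for the forward intersection (t > 0).
    b = p[0] * dx + p[1] * dy
    c = p[0] ** 2 + p[1] ** 2 - radius ** 2
    t = -b + math.sqrt(b * b - c)   # c < 0 for viewpoints inside the circle
    hit = (p[0] + t * dx, p[1] + t * dy)
    # Nearest of the n_cameras virtual cameras spaced evenly on the circle.
    phi = math.atan2(hit[1], hit[0])
    cam = round(phi / (2 * math.pi / n_cameras)) % n_cameras
    # Radiance is constant along the ray, so fetch the pixel of that
    # camera's panorama that looks along the same world direction theta.
    return cam, theta

print(select_camera((0.2, 0.1), 0.5, 1.0, 360))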

7. Multi-view Wide Baseline Depth Estimation Robust to Sparse Input Sampling
Lode Jorissen, Patrik Goorts, Gauthier Lafruit, Philippe Bekaert
In 3DTV 2016

Abstract
In this paper, we propose a depth map estimation algorithm, based on Epipolar Plane Image (EPI) line extraction, that is able to correctly handle partially occluded objects in wide baseline camera setups. Furthermore, we introduce a descriptor matching technique to reduce the negative influence of inaccurate color correction and similarly textured objects on the depth maps. A visual comparison between an existing EPI-line extraction algorithm and our method is provided, showing that our method provides more accurate and consistent depth maps in most cases.

paper.pdf (Adobe PDF - 9.10Mb)
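
For context, the basic EPI geometry the paper builds on: with cameras spaced a baseline B apart along a line, a scene point at depth Z shifts by f·B/Z pixels between adjacent views, so it traces a line in the EPI whose slope encodes depth. A minimal sketch of that standard relation (the paper's occlusion handling and descriptor matching are not reproduced here):

def depth_from_epi_slope(disparity_per_camera, focal_px, baseline):
    """disparity_per_camera: horizontal pixel shift between adjacent views;
    focal_px: focal length in pixels; baseline: camera spacing in metres."""
    return focal_px * baseline / disparity_per_camera

# A point shifting 8 px between neighbouring cameras (f = 1000 px, B = 0.1 m)
# lies at 12.5 m depth.
print(depth_from_epi_slope(8.0, 1000.0, 0.1))  # 12.5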

8. Foveated Real-Time Ray Tracing for Head-Mounted Displays
Martin Weier, Thorsten Roth, Ernst Kruijff, André Hinkenjann, Arsène Pérard-Gayot, Philipp Slusallek, Yongmin Li
In Pacific Graphics 2016

Abstract
Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, refined smoothly by more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 pixels per eye within the VSync limits without perceived visual differences.
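
The gaze-driven sampling can be illustrated with a toy scheduler: the probability of retracing a pixel falls off with its distance to the tracked gaze point, and pixels that are not retraced are filled by reprojection (not shown). The Gaussian falloff and its parameters below are assumptions for illustration, not the paper's actual sampling pattern:

import numpy as np

def sample_mask(width, height, gaze, fovea_px, rng):
    """Boolean mask of pixels to (re)trace this frame."""
    ys, xs = np.mgrid[0:height, 0:width]
    r = np.hypot(xs - gaze[0], ys - gaze[1])   # eccentricity in pixels
    p = np.exp(-((r / fovea_px) ** 2))         # smooth foveal falloff
    p = np.maximum(p, 0.05)                    # sampling floor in the periphery
    return rng.random((height, width)) < p

rng = np.random.default_rng(1)
mask = sample_mask(640, 480, gaze=(320, 240), fovea_px=80, rng=rng)
print(mask.mean())  # fraction of pixels traced this frame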

9. Web-enabled Server-based and Distributed Real-time Ray Tracing
Georg Tamm and Philipp Slusallek
In Proceedings of the 16th Eurographics Symposium on Parallel Graphics and Visualization (EGPGV 2016), Groningen, Netherlands, June 6–7, 2016.

Abstract
As browsers expand their functionality, they continuously act as a platform for portable application development within a web page. To bring interactive 3D graphics closer to the web developer, frameworks exist that allow a declarative scene description in line with the HTML markup. However, these approaches utilize client-side rendering and are thus limited in the scene complexity and rendering algorithms they can provide on a given device. We present the extension of the declarative 3D framework XML3D to support server-based rendering. The server back-end enables distributed rendering with an arbitrary hierarchy of cluster nodes. In the back-end, we deploy a custom real-time ray-tracer. To distribute the ray-tracer, we present a load balancing method which exploits frame-to-frame coherence in a real-time context. The load balancer achieves strong scalability without inducing communication overhead to coordinate the workers during rendering.
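
The load-balancing idea from the abstract, exploiting frame-to-frame coherence, can be sketched as follows: per-row (or per-tile) costs measured in the previous frame predict the next one, so the image can be pre-partitioned into contiguous spans of roughly equal predicted cost, one per worker, with no coordination traffic while the frame renders. The greedy row partitioner below is a hypothetical stand-in for the paper's scheme:

def partition_rows(row_costs, n_workers):
    """Split rows into n_workers contiguous spans of roughly equal cost."""
    total = sum(row_costs)
    target = total / n_workers
    spans, start, acc = [], 0, 0.0
    for i, cost in enumerate(row_costs):
        acc += cost
        if acc >= target and len(spans) < n_workers - 1:
            spans.append((start, i + 1))   # half-open row range
            start, acc = i + 1, 0.0
    spans.append((start, len(row_costs)))
    return spans

# Previous frame: rows near the bottom were expensive, so the last workers
# get fewer rows this frame.
costs = [1.0] * 50 + [4.0] * 14
print(partition_rows(costs, 4))  # [(0, 27), (27, 51), (51, 58), (58, 64)]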

10. Plugin-free Remote Visualization in the Browser
Georg Tamm and Philipp Slusallek
In Proceedings of SPIE 9397, SPIE Conference on Visualization and Data Analysis, San Francisco, CA, USA, February 8–12, 2015.

Abstract
Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets or smartphones. But the web is evolving beyond media delivery. Interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state of the art of technologies which enable plugin-free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.

2015

1. Intuitive Virtual Production Tools for Set and Light Editing
Jonas Trottnow, Kai Götz, Stefan Seibert, Simon Spielmann, Volker Helzle, Farshad Einabadi, Clemens K. H. Sielaff and Oliver Grau
In Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015)

Abstract
This contribution describes a set of newly developed tools for virtual production. Virtual production aims to bring the creative production aspects together in one real-time environment, to overcome the bottlenecks of offline processing in digital content production. This paper introduces tools and an architecture to edit set assets and adjust the lighting set-up. A set of tools was designed, implemented and tested on tablet PCs, an augmented reality device and a virtual reality device. These tools are designed to be used on a movie set by staff not necessarily familiar with 3D software. Further, an approach to harmonize light set-ups in virtual and real scenes is introduced. This approach uses an automated image-based light capture process, which models the dominant lights as discrete light sources with fall-off characteristics to give the fine details required for close-range light set-ups, and overcomes limitations of traditional image-based light probes. The paper describes initial results of a user evaluation using the developed tools in production-like environments.

2. Shallow Embedding of DSLs via Online Partial Evaluation
Roland Leißa, Klaas Boesche, Sebastian Hack, Richard Membarth and Philipp Slusallek
– Winner of the Best Paper Award –
In Proceedings of the 14th International Conference on Generative Programming: Concepts & Experiences (GPCE), pp. 11-20, Pittsburgh, PA, USA, October 26-27, 2015.

Abstract
This paper investigates shallow embedding of DSLs by means of online partial evaluation. To this end, we present a novel online partial evaluator for continuation-passing style languages. We argue that it has, in contrast to prior work, a predictable termination policy that works well in practice. We present our approach formally using a continuation-passing variant of PCF and prove its termination properties. We evaluate our technique experimentally in the field of visual and high-performance computing and show that our evaluator produces highly specialized and efficient code for CPUs as well as GPUs that matches the performance of hand-tuned expert code.

gpce15.pdf (Adobe PDF - 1.30Mb)
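
The effect of online partial evaluation on a shallowly embedded DSL can be conveyed with a toy example. The paper's evaluator operates on a CPS-based compiler IR with a proven termination policy; the sketch below is only a conceptual analogue on a tiny expression AST, showing how a generic DSL program collapses into straight-line residual code when its configuration inputs are static:

Const = lambda v: ("const", v)
Var   = lambda n: ("var", n)
Mul   = lambda a, b: ("mul", a, b)

def specialize(expr, env):
    """Reduce expr given static bindings in env; residualize the rest."""
    tag = expr[0]
    if tag == "const":
        return expr
    if tag == "var":
        return Const(env[expr[1]]) if expr[1] in env else expr
    a, b = specialize(expr[1], env), specialize(expr[2], env)
    if a[0] == "const" and b[0] == "const":
        return Const(a[1] * b[1])   # both operands static: fold now
    return Mul(a, b)                # residual code left for runtime

def power(x, n):
    """Generic DSL code: x**n via repeated multiplication. The host-level
    loop runs at specialization time because n is a plain Python value."""
    acc = Const(1)
    for _ in range(n):
        acc = Mul(acc, x)
    return acc

print(specialize(power(Var("x"), 3), env={}))        # residual multiplications
print(specialize(power(Var("x"), 3), env={"x": 2}))  # fully folded: ('const', 8)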

3. Discrete Light Source Estimation from Light Probes for Photorealistic Rendering
F. Einabadi and O. Grau
In Proceedings of the British Machine Vision Conference 2015

Abstract
This contribution describes a new technique for the estimation of discrete spot light sources. The method uses a consumer-grade DSLR camera equipped with a fisheye lens to capture light probe images registered to the scene. From these probe images the geometric and radiometric properties of the dominant light sources in the scene are estimated. The first step is a robust approach to identify light sources in the light probes and to find their exact positions by triangulation. Then the light direction and radiometric fall-off properties are formulated and estimated in a least-squares minimization approach.
The new method shows quantitatively accurate estimates compared to ground truth measurements. We also tested the results in an augmented reality context by rendering a synthetic reference object, scanned with a 3D scanner, into an image of the scene with the estimated light properties. The rendered images give photorealistic results for shadows and shading compared to images of the real reference object.
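
The least-squares step can be illustrated for a common spot-light fall-off model. The cosine-exponent model below is an assumption for the sketch (the paper formulates its own radiometric model); given the triangulated light position, the angle of each observed sample to the spot axis is known, and the fall-off parameters follow from a linear fit in log space:

import numpy as np

def fit_spot_falloff(angles, intensities):
    """Fit I(a) = I0 * cos(a)**k by linear least squares in log space.
    angles: radians from the spot axis; intensities: observed values."""
    A = np.column_stack([np.ones_like(angles), np.log(np.cos(angles))])
    y = np.log(intensities)
    (log_i0, k), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.exp(log_i0), k

# Synthetic check: samples generated with I0 = 2.0, k = 4.0 are recovered.
angles = np.linspace(0.05, 0.6, 20)
intensities = 2.0 * np.cos(angles) ** 4.0
print(fit_spot_falloff(angles, intensities))  # ~(2.0, 4.0)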

4. Multi-Camera Epipolar Plane Image Feature Detection for Robust View Synthesis
Lode Jorissen, Patrik Goorts, Sammy Rogmans, Gauthier Lafruit, Philippe Bekaert
In Proceedings of the 3D Television Conference (3D-TV), July 2015

P17.pdf (Adobe PDF - 4.03Mb)

2014

1. Multi-camera Based Automated Rotoscoping and Depth Map Estimation Using PatchMatch
Jonas Trottnow and Volker Helzle
Short Paper at CVMP 2014

Abstract
This short paper proposes a method for computing depth maps from multiple satellite cameras and fusing them into an enhanced depth map for the principal camera of a movie production. Additional precision is gained by optionally adding a Time-of-Flight camera. The proposed algorithm is a generalization of the PatchMatch Stereo algorithm [1]. It calculates multiple stereo depth maps without the need for a common baseline and fuses them into one global depth map.

CVMP2014_short-paper-Trottnow.pdf (Adobe PDF - 241Kb)
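
The fusion step can be sketched independently of the PatchMatch core: once each satellite camera's depth map has been reprojected into the principal view, a per-pixel robust consensus (here a median, with the optional Time-of-Flight map replicated to weight it) yields the enhanced depth map. The weighting scheme is an assumption for illustration, not the paper's:

import numpy as np

def fuse_depths(depth_stack, tof=None, tof_weight=3):
    """depth_stack: (C, H, W) depths from C satellite cameras, already
    reprojected into the principal view (NaN where unknown)."""
    stack = list(depth_stack)
    if tof is not None:
        stack += [tof] * tof_weight   # replicate ToF to weight the median
    return np.nanmedian(np.stack(stack), axis=0)

# Three noisy satellite estimates plus a ToF map for a 2x2 image.
est = np.array([[[2.0, 2.1], [3.0, 2.9]],
                [[2.2, 2.0], [np.nan, 3.1]],
                [[1.0, 2.2], [3.1, 3.0]]])   # one outlier, one hole
tof = np.array([[2.1, 2.1], [3.0, 3.0]])
print(fuse_depths(est, tof))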

2. Specialization through Dynamic Staging
P. Danilewski, M. Köster, R. Leißa, R. Membarth and P. Slusallek
In Proceedings of the 13th International Conference on Generative Programming: Concepts & Experiences, pp. 103-112, Västerås, Sweden, September 15-16, 2014.

Abstract
Partial evaluation allows for specialization of program fragments. This can be realized by staging, where one fragment is executed earlier than its surrounding code. However, taking advantage of these capabilities is often a cumbersome endeavor.
In this paper, we present a new metaprogramming concept using staging parameters that are first-class entities and define the order of execution of the program. Staging parameters can be used to define MetaML-like quotations, but also allow stages to be created and resolved dynamically. The programmer can write generic, polyvariant code which can be reused in the context of different stages. We demonstrate how our approach can be used to define and apply domain-specific optimizations. Our implementation of the proposed metaprogramming concept generates code that is on a par with templated C++ code in terms of execution time.
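
A two-stage toy example conveys the basic effect of staging, though the paper's staging parameters are first-class and allow stages to be created and resolved dynamically, which plain string-based code generation cannot express. Here stage 0 runs a loop now and emits the specialized stage 1 function:

def stage_power(n):
    """Stage 0 runs the loop now; stage 1 is the generated function."""
    body = "1.0"
    for _ in range(n):                 # executed at generation time
        body = f"({body} * x)"
    src = f"def power(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)               # compile the residual stage
    return namespace["power"], src

power3, src = stage_power(3)
print(src)        # def power(x): return (((1.0 * x) * x) * x)
print(power3(2))  # 8.0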

3. shade.js: Adaptive Material Descriptions
Sons, Kristian, Klein, Felix, Sutter, Jan and Slusallek, Philipp
Computer Graphics Forum, 33(7):51–60, ISSN: 1467-8659, DOI: 10.1111/cgf.12473, October 2014

CGF14.pdf (Adobe PDF - 5.77Mb)

4. Target-Specific Refinement of Multigrid Codes
R. Membarth, P. Slusallek, M. Köster, R. Leißa, and S. Hack
In Proceedings of the 4th International Workshop on Domain-Specific Languages and High-Level Frameworks for High Performance Computing (WOLFHPC), New Orleans, LA, USA, November 17, 2014.

Abstract
This paper applies partial evaluation to stage a stencil-code Domain-Specific Language (DSL) onto a functional and imperative programming language. Platform-specific primitives such as scheduling or vectorization, and algorithmic variants such as boundary handling, are factored out into a library; these building blocks make up the elements of that DSL. We show how partial evaluation can eliminate all overhead of this separation of concerns and create code that resembles hand-crafted versions for a particular target platform. We evaluate our technique by implementing a DSL for the V-cycle multigrid iteration. Our approach generates code for AMD and NVIDIA GPUs (via SPIR and NVVM) as well as for CPUs using AVX/AVX2 alike from the same high-level DSL program. First results show that we achieve a speedup of up to 3× on the CPU by vectorizing the multigrid components, and a speedup of up to 2× on the GPU by merging the computation of multigrid components.

wolfhpc14.pdf (Adobe PDF - 206Kb)
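
For readers unfamiliar with the V-cycle the DSL targets, a minimal 1D Poisson version shows the components (smoothing, restriction, coarse-grid correction, prolongation) that the library factors out. This NumPy sketch is purely illustrative; the paper generates CPU and GPU code from a high-level DSL program instead:

import numpy as np

def smooth(u, f, h, iters=2, omega=2.0 / 3.0):
    """Weighted-Jacobi relaxation for -u'' = f with u = 0 on the boundary."""
    for _ in range(iters):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)                      # pre-smoothing
    if len(u) <= 3:                          # coarsest grid: solve by smoothing
        return smooth(u, f, h, iters=50)
    r = residual(u, f, h)
    rc = r[::2].copy()                       # restriction (full weighting)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    e = v_cycle(np.zeros_like(rc), rc, 2 * h)
    u[::2] += e                              # prolongation (linear interp.)
    u[1:-1:2] += 0.5 * (e[:-1] + e[1:])
    return smooth(u, f, h)                   # post-smoothing

n = 65
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)                        # exact solution: sin(pi x)/pi^2
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / (n - 1))
print(np.abs(u - np.sin(np.pi * x) / np.pi ** 2).max())  # small, at the h^2 discretization level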

5. Towards a Performance-portable Description of Geometric Multigrid Algorithms using a Domain-specific Language
Membarth, Richard, Reiche, Oliver, Schmitt, Christian, Hannig, Frank, Teich, Jürgen, Stürmer, Markus and Köstler, Harald
Journal of Parallel and Distributed Computing (JPDC), 74(12):3191-3201, December 2014

6. Code Refinement of Stencil Codes
Köster, Marcel, Leißa, Roland, Hack, Sebastian, Membarth, Richard and Slusallek, Philipp
Parallel Processing Letters (PPL), 24(3):1-16. September 2014

7. Platform-Specific Optimization and Mapping of Stencil Codes through Refinement
Köster, Marcel, Leißa, Roland, Hack, Sebastian, Membarth, Richard and Slusallek, Philipp
In Proceedings of the First International Workshop on High-Performance Stencil Computations (HiStencils), pages 1–6, January 2014.

HiStencils14.pdf (Adobe PDF - 412Kb)