Draft Specification and Interface Description

The multitude of current and upcoming hardware and software technologies provides unique opportunities for creative professionals to work in real-time virtual environments, allowing on-set content interaction, visualization and modification with intuitive methods of controlling creative parameters. This can be achieved by combining modules such as accelerated software algorithms for image synthesis, performance capture, sophisticated tracking of cameras and input devices, gesture recognition, virtual/augmented reality display devices, and the capture of additional information such as depth, geometry or omni-directional video, to name a few.

For film and TV productions that involve real and virtual scene elements, such technologies offer many clear benefits in novel production workflows. Most promising is the opportunity to involve all creative decision makers of a film production directly on set. This in turn reduces the amount of work that must be carried out in post-production, where creative changes are often challenging in terms of labor time and budget. Fully digital productions, such as an animated short film or a cinematic sequence for a video game, also open up opportunities for more efficient production scenarios, given the possibilities of designing, directing, navigating, editing and modifying within a collaborative virtual environment.

Another use case is the capture of omni-directional video to immerse spectators in a particular environment. Sport and live events such as concerts have proven to deliver a highly realistic experience when viewed on a head-mounted display with head tracking. Such immersive experiences are also a unique occasion to explore the boundaries between reality and virtual worlds in performance arts and media installations.

A key element in enabling creative personnel to work with these collaborative virtual environments is the ability to edit real and virtual scene elements intuitively. This document introduces various scenarios that demonstrate the challenges and limitations of current Virtual Production pipelines. It then describes tools and workflows that indicate how to research and develop solutions to these challenges. Examples showcase how state-of-the-art methodologies are extended with new input metaphors to directly manipulate the positions of virtual assets, animations, lighting, keying, compositing and timeline-based editing, building on Virtual Production concepts. This includes the experimental evaluation of mobile devices as well as display devices that combine real and virtual imagery.