Yusuke Tomoto, Srinivas Rao, Tobias Bertel, Krunal Chande, Christian Richardt, Stefan Holzer, Rodrigo Ortiz-Cayon

"Casual Real-World VR using Light Fields"

In ACM SIGGRAPH Asia 2020 Posters
Overview of our system:
(a) An AR app guides the user through the casual capturing process.
(b) A neural network promotes a subset of input viewpoints to multiplane images,
from which we extract high-quality geometry per view for faster rendering.
(c) Our scene representation can be rendered in real time on desktop and in VR.
Virtual reality (VR) would benefit from more end-to-end systems centered around a casual capturing procedure, high-quality visual results, and representations that are viewable on multiple platforms. We present an end-to-end system designed for casual creation of real-world VR content using a smartphone. An AR app guides the user in casually capturing a linear light field of a real-world object by recording a video sweep around it. We predict multiplane images for a subset of input viewpoints, from which we extract high-quality textured geometry that is used for real-time image-based rendering suitable for VR. The round-trip time of our system, from guided capture to interactive display, is typically 1–2 minutes per scene. See the submission video for a walkthrough and results.
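The multiplane images in step (b) are stacks of fronto-parallel RGBA layers that are rendered by alpha-compositing the layers from back to front with the standard "over" operator. A minimal sketch of that compositing step (the function name and array shapes are illustrative, not taken from the authors' implementation):

```python
import numpy as np

def composite_mpi(planes):
    """Composite multiplane-image layers back-to-front with the 'over' operator.

    `planes` is a list of HxWx4 float arrays (RGB + alpha in [0, 1]),
    ordered from the farthest plane to the nearest one.
    """
    h, w, _ = planes[0].shape
    rgb = np.zeros((h, w, 3))
    for plane in planes:  # iterate back to front
        alpha = plane[..., 3:4]
        # New layer "over" the accumulated image behind it.
        rgb = plane[..., :3] * alpha + rgb * (1.0 - alpha)
    return rgb

# Tiny example: an opaque dark far plane under a half-transparent bright near plane.
far = np.concatenate([np.full((2, 2, 3), 0.2), np.ones((2, 2, 1))], axis=-1)
near = np.concatenate([np.full((2, 2, 3), 0.8), np.full((2, 2, 1), 0.5)], axis=-1)
out = composite_mpi([far, near])  # each pixel: 0.8 * 0.5 + 0.2 * 0.5 = 0.5
```

The paper's system goes one step further than this per-pixel compositing: it converts each predicted MPI into textured geometry so the view can be rendered by the GPU rasterizer, which is what makes the representation fast enough for VR frame rates.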

Submission video
