Tobias Bertel, Feng Xu and Christian Richardt:
"Image-Based Scene Representations for Head-Motion Parallax in 360°",
Chapter 5 in "Real VR – Immersive Digital Reality: How to Import the Real World into Head-Mounted Immersive Displays", pp. 109–131,
Springer 2020, ISBN 978-3-030-41816-8
Abstract: 
Creation and delivery of "RealVR" experiences essentially consist of four main steps: capture, reconstruction, representation and rendering. In this chapter, we present, compare, and discuss two recent end-to-end approaches, Parallax360 by Luo et al. [2018] and MegaParallax by Bertel et al. [2019]. Both propose complete pipelines for RealVR content generation and novel-view synthesis with head-motion parallax for 360° environments.
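To make the four-stage structure concrete, the following minimal Python sketch outlines how such a pipeline could be organized. All function names and data structures are illustrative placeholders, not the API of either paper.

```python
# Schematic of the four-stage RealVR pipeline (capture, reconstruction,
# representation, rendering). Every name below is a hypothetical placeholder.

def capture():
    """1. Capture: acquire input views, e.g. via a robotic rig or a hand-held sweep."""
    return []  # list of (image, camera_pose) pairs

def reconstruct(views):
    """2. Reconstruction: estimate geometric proxies such as optical flow or disparity."""
    return {"flows": {}}  # e.g. pairwise flow fields between neighbouring views

def represent(views, geometry):
    """3. Representation: bundle views and proxies into a renderable scene structure."""
    return {"views": views, **geometry}

def render(scene, head_pose):
    """4. Rendering: synthesize a novel view with head-motion parallax for `head_pose`."""
    raise NotImplementedError  # method-specific, e.g. flow-based blending

views = capture()
scene = represent(views, reconstruct(views))
```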
Parallax360 uses a robotic arm to capture thousands of input views on the surface of a sphere. Based on precomputed disparity motion fields and pairwise optical flow, novel viewpoints are synthesized on the fly using flow-based blending of the nearest two to three input views, which provides compelling head-motion parallax.
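To illustrate the kind of flow-based blending used for such on-the-fly synthesis, here is a minimal two-view interpolation sketch. It is not the authors' implementation; the array layouts (images as (H, W, 3) floats, flow as (H, W, 2) pixel offsets) and function names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, flow, t):
    """Backward-warp `image` by a fraction t of the optical flow field."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample the source image at positions displaced by t * flow.
    sample_x = xs + t * flow[..., 0]
    sample_y = ys + t * flow[..., 1]
    return np.stack(
        [map_coordinates(image[..., c], [sample_y, sample_x], order=1, mode="nearest")
         for c in range(image.shape[2])],
        axis=-1)

def blend_views(img_a, img_b, flow_ab, flow_ba, t):
    """Flow-based blending of two views for an intermediate position t in [0, 1]."""
    warped_a = warp(img_a, flow_ab, t)        # view A pushed towards the target
    warped_b = warp(img_b, flow_ba, 1.0 - t)  # view B pushed towards the target
    return (1.0 - t) * warped_a + t * warped_b
```

In this spirit, the nearest two to three captured views are combined per frame, with the blending weights derived from the current head position relative to the captured viewpoints.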
MegaParallax proposes a pipeline for RealVR content generation and rendering that emphasizes casual, hand-held capture. The approach introduces view-dependent flow-based blending to enable novel-view synthesis with head-motion parallax within a viewing area that is determined by the field of view of the input cameras and the capture radius.
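"View-dependent" here means that the blending weights vary per ray of the novel view rather than being fixed per image pair. The sketch below shows one common way to derive such a weight from angular ray differences; the helper is hypothetical and simplified, and MegaParallax computes its per-pixel weights in a similar spirit rather than with exactly this code.

```python
import numpy as np

def view_dependent_weight(desired_ray, ray_left, ray_right):
    """Blending weight for one ray of the novel view (hypothetical helper).

    All arguments are unit 3-vectors: the desired viewing ray and the rays
    from the left/right input cameras towards the same scene point. The
    weight favours the camera whose ray is angularly closer to the desired
    ray, so every pixel receives its own, view-dependent weight.
    """
    def angle(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    a_left = angle(desired_ray, ray_left)
    a_right = angle(desired_ray, ray_right)
    return a_left / (a_left + a_right + 1e-12)  # 0 -> left view, 1 -> right view

# A novel-view pixel would then be composited as
#   (1 - w) * I_left[p_left] + w * I_right[p_right],
# where p_left / p_right are correspondences from precomputed optical flow.
w = view_dependent_weight(
    np.array([0.0, 0.0, 1.0]),
    np.array([0.1, 0.0, 0.995]) / np.linalg.norm([0.1, 0.0, 0.995]),
    np.array([-0.2, 0.0, 0.98]) / np.linalg.norm([-0.2, 0.0, 0.98]),
)
```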
We describe both methods, discuss their similarities and differences at the corresponding steps of the RealVR pipeline, and show selected results. The chapter ends by discussing advantages and disadvantages of both approaches, as well as outlining the most important limitations and future work.