Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Rodrigo Ortiz-Cayon, Stefan Holzer and Christian Richardt
"Deferred Neural Rendering for View Extrapolation"
in ACM SIGGRAPH Asia 2020 posters
"Deferred Neural Rendering for View Extrapolation"
in ACM SIGGRAPH Asia 2020 posters
![](https://cdn.myportfolio.com/c71558b0-7cff-4908-8e83-4bcd438ecd5f/c4b63a9e-d999-446b-af14-74048d4958bd_rw_1920.png?h=b3f3c3db98b5e7ad7c32787bbe4db873)
We capture an input video with a consumer camera, estimate camera poses, reconstruct a mesh, and uv-map it.
We extend Deferred Neural Rendering [Thies et al. 2019] (blue) to enable smooth extrapolation of novel viewpoints (orange).
Abstract:
Image-based rendering methods that support visually pleasing specular surface reflections require accurate surface geometry and a large number of input images. Recent advances in neural scene representations show excellent visual quality while requiring only imperfect mesh proxies, or no surface-based proxies at all. However, despite their state-of-the-art visual quality, the inference time of learned models is usually too slow for interactive applications. Using a casually captured circular video sweep as input, we extend Deferred Neural Rendering to smoothly extrapolate novel viewpoints around specular objects such as a car.
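The core idea of the deferred neural rendering pipeline sketched in the teaser — rasterize the uv-mapped proxy mesh at a novel viewpoint, sample a learned neural texture at the resulting uv coordinates, and decode the sampled features to RGB — can be illustrated roughly as follows. This is a simplified NumPy sketch, not the authors' implementation: the texture resolution, feature dimension, nearest-neighbour sampling, and the placeholder linear decoder are all illustrative assumptions (in practice the texture and a convolutional decoder are trained jointly).

```python
import numpy as np

def sample_neural_texture(texture, uv):
    """Nearest-neighbour lookup of per-texel features at uv coordinates in [0, 1)."""
    h, w, _ = texture.shape
    x = np.clip((uv[..., 0] * w).astype(int), 0, w - 1)
    y = np.clip((uv[..., 1] * h).astype(int), 0, h - 1)
    return texture[y, x]  # per-pixel feature map of shape (H_img, W_img, C)

def decode_to_rgb(features):
    """Stand-in for the learned decoder: project C feature channels to RGB."""
    c = features.shape[-1]
    proj = np.full((c, 3), 1.0 / c)  # placeholder weights; learned in practice
    return np.clip(features @ proj, 0.0, 1.0)

# Toy example: a 16-channel neural texture and a rasterized uv-map of the proxy
# mesh at a novel view (here random stand-ins for both).
rng = np.random.default_rng(0)
neural_texture = rng.random((256, 256, 16))
uv_map = rng.random((120, 160, 2))
rgb = decode_to_rgb(sample_neural_texture(neural_texture, uv_map))
print(rgb.shape)  # (120, 160, 3)
```

Because rendering reduces to one texture lookup plus one decoder pass per frame, the geometry proxy can be imperfect; the learned texture compensates for view-dependent effects such as specular reflections.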
Submission video