RRM: Relightable assets using Radiance guided Material extraction

Diego Gomez, Julien Philip, Adrien Kaiser, Élie Michel
École Polytechnique, Adobe Research
CGI 2024

Abstract

Novel view synthesis has seen tremendous improvements over the last few years thanks to the rise of neural representations of light fields, shifting the paradigm for the acquisition of 3D data from photographs. However, these radiance-based approaches typically lack the ability to synthesize views under new lighting conditions. Recent efforts tackle the problem via the extraction of physically-based parameters that can then be rendered under arbitrary lighting, but they are limited in the range of scenes they can handle and usually fail on glossy scenes. We propose RRM, a method that can extract the materials, geometry, and environment lighting of a scene even in the presence of highly reflective objects. We design a physically-aware radiance field representation that supervises the diffuse and view-dependent components of a physically-based module; this module uses Multiple Importance Sampling to capture the complex behavior of glossy indirect lighting and to feed our expressive environment light structure based on Pyramids of Laplacians. We demonstrate that our contributions outperform the state-of-the-art on parameter retrieval tasks, leading to high-fidelity relighting and novel view synthesis on surfacic scenes.
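The abstract refers to two standard rendering building blocks: Multiple Importance Sampling for glossy indirect lighting, and an environment light stored as a Pyramid of Laplacians. As a rough illustration of the latter, below is a minimal NumPy sketch that decomposes an equirectangular environment map into band-pass layers and reconstructs it. The box filter, nearest-neighbour upsampling, level count, and function names are illustrative assumptions and not the authors' implementation, where the pyramid coefficients would presumably be optimized during training rather than computed from a known map.

import numpy as np

def downsample(img):
    # 2x box-filter downsample (assumes even height and width).
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    # Nearest-neighbour 2x upsample, cropped to `shape`.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def build_laplacian_pyramid(envmap, levels=4):
    # Decompose an equirectangular environment map (H x W x 3)
    # into band-pass detail layers plus one coarse base layer.
    bands, current = [], envmap
    for _ in range(levels - 1):
        low = downsample(current)
        bands.append(current - upsample(low, current.shape))
        current = low
    bands.append(current)  # coarsest, low-frequency level
    return bands

def reconstruct(bands):
    # Sum the pyramid back into the full-resolution environment map.
    img = bands[-1]
    for detail in reversed(bands[:-1]):
        img = detail + upsample(img, detail.shape)
    return img

# Round-trip check on a random environment map (hypothetical data).
env = np.random.rand(64, 128, 3).astype(np.float32)
pyr = build_laplacian_pyramid(env, levels=4)
assert np.allclose(reconstruct(pyr), env, atol=1e-5)

The appeal of such a structure for lighting is that the coarse levels can represent smooth, low-frequency illumination while the fine levels add the sharp detail that highly reflective objects reveal.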

Example of a decomposed scene

BibTeX

@misc{gomez2024rrmrelightableassetsusing,
  title={RRM: Relightable assets using Radiance guided Material extraction},
  author={Diego Gomez and Julien Philip and Adrien Kaiser and Élie Michel},
  year={2024},
  eprint={2407.06397},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.06397},
}