Learning Object-Centric Neural Scattering Functions for Free-viewpoint Relighting and Scene Composition

TMLR 2023

Abstract

[Left] Input images of objects. [Right] Relightable neural scene composition.

Photorealistic object appearance modeling from 2D images is a long-standing problem in vision and graphics. While neural implicit methods (such as Neural Radiance Fields) have shown high-fidelity view synthesis results, they cannot relight the captured objects. More recent neural inverse rendering approaches have enabled object relighting, but they represent surface properties as simple BRDFs and therefore cannot handle translucent objects. We propose Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct object appearance from images alone. OSFs not only support free-viewpoint object relighting, but can also model both opaque and translucent objects. While accurately modeling subsurface light transport for translucent objects can be highly complex and even intractable for neural methods, OSFs learn to approximate the radiance transfer from a distant light to an outgoing direction at any spatial location. This approximation avoids explicitly modeling complex subsurface scattering, making learning a neural implicit model tractable. Experiments on real and synthetic data show that OSFs accurately reconstruct appearances for both opaque and translucent objects, allowing faithful free-viewpoint relighting as well as scene composition.
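The core idea above can be sketched in code: a neural field is queried at a 3D point with a distant light direction and an outgoing (view) direction, and returns a volume density plus a learned radiance-transfer factor that folds in all subsurface scattering. The sketch below is our own minimal illustration with NumPy, with assumed names (`osf_query`, `TinyMLP`), a randomly initialized network in place of a trained one, and an assumed positional-encoding setup; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(v, n_freqs=4):
    """NeRF-style sinusoidal encoding of each coordinate of v."""
    freqs = 2.0 ** np.arange(n_freqs)
    angles = v[..., None] * freqs                         # (..., dim, n_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*v.shape[:-1], -1)

class TinyMLP:
    """Randomly initialized 2-layer MLP standing in for the learned OSF."""
    def __init__(self, d_in, d_hidden=64, d_out=4):
        self.W1 = rng.normal(0, 0.1, (d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.normal(0, 0.1, (d_hidden, d_out))
        self.b2 = np.zeros(d_out)
    def __call__(self, h):
        h = np.maximum(h @ self.W1 + self.b1, 0.0)        # ReLU hidden layer
        return h @ self.W2 + self.b2

def osf_query(mlp, x, light_dir, view_dir):
    """Query density sigma and RGB radiance transfer rho at point x.

    rho approximates the fraction of radiance arriving from the distant
    light direction that leaves x along view_dir; subsurface scattering
    is absorbed into this learned function instead of being simulated.
    """
    feat = np.concatenate([positional_encoding(x),
                           positional_encoding(light_dir),
                           positional_encoding(view_dir)], axis=-1)
    out = mlp(feat)
    sigma = np.maximum(out[..., 0], 0.0)                  # volume density >= 0
    rho = 1.0 / (1.0 + np.exp(-out[..., 1:4]))            # RGB transfer in (0, 1)
    return sigma, rho

# One query: a 3D point, a distant light direction, a viewing direction.
d_in = 3 * 2 * 4 * 3          # 3 input vectors x (sin, cos) x 4 freqs x 3 coords
mlp = TinyMLP(d_in)
x = np.array([0.1, 0.2, 0.3])
light_dir = np.array([0.0, 0.0, 1.0])
view_dir = np.array([0.0, 1.0, 0.0])
sigma, rho = osf_query(mlp, x, light_dir, view_dir)

# Relighting then amounts to scaling the transfer by the light's RGB intensity;
# new lights or views only change the query directions, not the network.
light_rgb = np.array([1.0, 0.9, 0.8])
radiance = rho * light_rgb
```

Because the light and view directions are inputs to the function rather than baked into it, the same trained field can be evaluated under novel lighting and viewpoints, and multiple such object-centric fields can be queried along shared camera rays for scene composition.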

Free-viewpoint Relighting: Real Opaque Objects

We use the DiLiGenT-MV [1] dataset.

Cow
Buddha
Pot
Reading
[1] M. Li et al., "Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials", TIP 2020

Free-viewpoint Relighting: Real Translucent Objects

We capture images of translucent objects with a cellphone camera.

Blue soap
Half soap

Comparison on Synthetic Translucent Bunny

Our method can relight translucent objects, which is difficult for BRDF-based methods.

NeRD [1]
PhySG [2]
OSF (ours)
Groundtruth
[1] M. Boss et al., "NeRD: Neural Reflectance Decomposition From Image Collections", ICCV 2021
[2] K. Zhang et al., "PhySG: Inverse Rendering With Spherical Gaussians for Physics-Based Material Editing and Relighting", CVPR 2021

Neural Scene Composition

Object NeRFs
OSFs (ours)
Groundtruth

Video

BibTeX

@article{yu2023osf,
  title={Learning Object-Centric Neural Scattering Functions for Free-viewpoint Relighting and Scene Composition},
  author={Hong-Xing Yu and Michelle Guo and Alireza Fathi and Yen-Yu Chang and Eric Ryan Chan and Ruohan Gao and Thomas Funkhouser and Jiajun Wu},
  journal={Transactions on Machine Learning Research (TMLR)},
  year={2023}
}