
Accidental Light Probes

CVPR 2023

Abstract

[Left] A photo containing a shiny object (a Diet Pepsi can). [Middle] We insert a new soda can using our estimated lighting. [Right] We insert the same can using lighting from a recent light estimator. Note how our method better relights the inserted can, producing an appearance consistent with the environment.

Recovering lighting in a scene from a single image is a fundamental problem in computer vision. While a mirror-ball light probe can capture omnidirectional lighting, light probes are generally unavailable in everyday images. In this work, we study recovering lighting from accidental light probes (ALPs): common, shiny objects like Coke cans, which often accidentally appear in daily scenes. We propose a physically-based approach to model ALPs and estimate lighting from their appearances in single images. The main idea is to model the appearance of ALPs with physically based shading and to invert this process via differentiable rendering to recover the incident illumination. We demonstrate that placing an ALP into a scene enables high-fidelity lighting estimation. Our model can also recover lighting from existing images that happen to contain an ALP.
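The inverse-rendering idea above can be illustrated with a minimal, self-contained sketch (not the authors' implementation): idealize the ALP as a perfect mirror sphere seen orthographically, parameterize a low-resolution HDR environment map, render the sphere with a differentiable shading step, and optimize the map so the rendering matches the observed pixels. All names, resolutions, and simplifications below are illustrative; the actual method handles general ALP shapes and materials.

import torch
import torch.nn.functional as F

H_ENV, W_ENV = 16, 32   # low-resolution lat-long environment map
RES = 64                # resolution of the sphere crop

def sphere_normals(res):
    # Unit normals of a sphere seen under orthographic projection.
    ys, xs = torch.meshgrid(
        torch.linspace(1.0, -1.0, res), torch.linspace(-1.0, 1.0, res), indexing="ij")
    r2 = xs ** 2 + ys ** 2
    mask = r2 <= 1.0
    zs = torch.sqrt(torch.clamp(1.0 - r2, min=0.0))
    return torch.stack([xs, ys, zs], dim=-1), mask

def render_mirror_sphere(env_log, normals, mask):
    # Reflect the camera direction (0, 0, 1) about each normal and look up
    # the reflected direction in the lat-long environment map.
    view = torch.tensor([0.0, 0.0, 1.0])
    ndotv = (normals * view).sum(-1, keepdim=True)
    refl = 2.0 * ndotv * normals - view                 # mirror reflection
    theta = torch.atan2(refl[..., 0], refl[..., 2])     # azimuth in [-pi, pi]
    phi = torch.asin(refl[..., 1].clamp(-1.0, 1.0))     # elevation
    grid = torch.stack([theta / torch.pi, -phi / (torch.pi / 2)], dim=-1)[None]
    env = env_log.exp()[None]                           # HDR radiance, (1, 3, He, We)
    img = F.grid_sample(env, grid, align_corners=True)[0].permute(1, 2, 0)
    return img * mask[..., None]

# "Observed" sphere pixels; here a synthetic target stands in for the photo.
normals, mask = sphere_normals(RES)
gt_env_log = torch.randn(3, H_ENV, W_ENV) * 0.5
observed = render_mirror_sphere(gt_env_log, normals, mask).detach()

# Recover the environment map by gradient descent through the renderer.
env_log = torch.zeros(3, H_ENV, W_ENV, requires_grad=True)
opt = torch.optim.Adam([env_log], lr=0.05)
for step in range(500):
    opt.zero_grad()
    pred = render_mirror_sphere(env_log, normals, mask)
    loss = ((pred - observed) ** 2)[mask].mean()
    loss.backward()
    opt.step()

Optimizing the log of the radiance keeps the recovered environment map positive while still allowing the high dynamic range needed to explain bright highlights.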

I'd rather be Shiny. --- Tamatoa from Moana, 2016

Video

Object Insertion

We collect a dataset of photos that have ALPs in them. We insert virtual objects into these photos. Examples are shown below (insets are estimated environment maps).

Comparison methods (left to right): Garon et al. [1], DeepParam [2], StyleLight [3], Ours, and GT lighting.
[1] M. Garon et al., Fast Spatially-Varying Indoor Lighting Estimation, CVPR 2019.
[2] M.-A. Gardner et al., Deep Parametric Indoor Lighting Estimation, ICCV 2019.
[3] G. Wang et al., StyleLight: HDR Panorama Generation for Lighting Estimation and Editing, ECCV 2022.
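The object insertions above follow the standard image-based-lighting recipe: shade the virtual object under the estimated environment map, then composite it over the photograph. A rough sketch of that step follows, simplified to a mirror-like material, an orthographic view, and no cast shadows; the helper names (lookup_env, insert_mirror_object) and the assumption that the object's normals and alpha mask are already rendered from its 3D model are hypothetical and not the authors' pipeline.

import numpy as np

def lookup_env(env, dirs):
    # Nearest-neighbor lat-long lookup; env is (He, We, 3), dirs is (..., 3).
    He, We, _ = env.shape
    theta = np.arctan2(dirs[..., 0], dirs[..., 2])    # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0)) # elevation in [-pi/2, pi/2]
    u = ((theta / (2 * np.pi) + 0.5) * (We - 1)).round().astype(int)
    v = ((0.5 - phi / np.pi) * (He - 1)).round().astype(int)
    return env[v, u]

def insert_mirror_object(photo, env, normals, mask, view=(0.0, 0.0, 1.0)):
    # Shade a mirror-like object by reflecting the view ray into the
    # estimated environment map, then alpha-composite it over the photo.
    view = np.asarray(view, dtype=np.float32)
    ndotv = (normals * view).sum(-1, keepdims=True)
    refl = 2.0 * ndotv * normals - view               # mirror reflection
    shaded = lookup_env(env, refl)
    alpha = mask[..., None].astype(np.float32)
    return alpha * shaded + (1.0 - alpha) * photo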

BibTeX

@inproceedings{yu2023alp,
  title     = {Accidental Light Probes},
  author    = {Hong-Xing Yu and Samir Agarwala and Charles Herrmann and Richard Szeliski and Noah Snavely and Jiajun Wu and Deqing Sun},
  booktitle = {CVPR},
  year      = {2023}
}

Acknowledgements

We would like to thank William T. Freeman for invaluable discussions and for the photo credit, Varun Jampani for helping us with data collection, and Henrique Weber and Jean-François Lalonde for running their methods as comparisons for us. This work was done in part while Hong-Xing Yu was a student researcher at Google, and was supported by gift funding and GCP credits from Google and Qualcomm.