Inferring Hybrid Neural Fluid Fields from Videos

  • Stanford University

  • Georgia Institute of Technology

  • *Contributed equally

NeurIPS 2023

Abstract

We study recovering fluid density and velocity from sparse multiview videos. Existing neural dynamic reconstruction methods predominantly rely on optical flow, so they cannot accurately estimate the density or uncover the underlying velocity, due to the inherent visual ambiguities of fluid motion: fluids are often shapeless and lack stable visual features. The challenge is further compounded by the turbulent nature of fluid flows, which calls for a properly designed fluid velocity representation. To address these challenges, we propose hybrid neural fluid fields (HyFluid), a neural approach that jointly infers fluid density and velocity fields. Specifically, to deal with the visual ambiguities of fluid velocity, we introduce a set of physics-based losses that enforce a physically plausible velocity field, one that is divergence-free and drives the transport of density. To deal with the turbulent nature of fluid velocity, we design a hybrid neural velocity representation that combines a base neural velocity field, which captures most of the irrotational energy, with a vortex-particle-based velocity that models the residual turbulent velocity. We show that our method enables recovering vortical flow details. Our approach opens up possibilities for various learning and reconstruction applications centered around 3D incompressible flow, including fluid re-simulation and editing, future prediction, and neural dynamic scene composition.
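To make the two physics-based constraints concrete, here is a minimal NumPy sketch, not the paper's implementation (which operates on continuous neural fields via automatic differentiation), of a divergence-free penalty and the residual of the density transport equation ∂ρ/∂t + u·∇ρ = 0, evaluated with central finite differences on a sampled grid. All function names and the discretization are illustrative assumptions.

```python
import numpy as np

def divergence(u, v, w, dx):
    """Central-difference divergence of a velocity field on a 3D grid.

    u, v, w: (N, N, N) arrays holding the x-, y-, z-components.
    Returns an (N-2, N-2, N-2) estimate on interior grid points.
    """
    du_dx = (u[2:, 1:-1, 1:-1] - u[:-2, 1:-1, 1:-1]) / (2 * dx)
    dv_dy = (v[1:-1, 2:, 1:-1] - v[1:-1, :-2, 1:-1]) / (2 * dx)
    dw_dz = (w[1:-1, 1:-1, 2:] - w[1:-1, 1:-1, :-2]) / (2 * dx)
    return du_dx + dv_dy + dw_dz

def divergence_free_loss(u, v, w, dx):
    # Penalize squared divergence, encouraging an incompressible field.
    return np.mean(divergence(u, v, w, dx) ** 2)

def transport_residual(rho_t, rho_t1, u, v, w, dx, dt):
    """Residual of the transport equation d(rho)/dt + u . grad(rho) = 0,
    on interior grid points; zero when the velocity advects the density."""
    drho_dt = (rho_t1 - rho_t) / dt
    drho_dx = (rho_t[2:, 1:-1, 1:-1] - rho_t[:-2, 1:-1, 1:-1]) / (2 * dx)
    drho_dy = (rho_t[1:-1, 2:, 1:-1] - rho_t[1:-1, :-2, 1:-1]) / (2 * dx)
    drho_dz = (rho_t[1:-1, 1:-1, 2:] - rho_t[1:-1, 1:-1, :-2]) / (2 * dx)
    adv = (u[1:-1, 1:-1, 1:-1] * drho_dx
           + v[1:-1, 1:-1, 1:-1] * drho_dy
           + w[1:-1, 1:-1, 1:-1] * drho_dz)
    return drho_dt[1:-1, 1:-1, 1:-1] + adv
```

For example, the rotational field (u, v, w) = (y, -x, 0) is divergence-free, so its loss is zero, while a static density under zero velocity has zero transport residual.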

Inferring 3D Fluid Density and Velocity

From a few multiview videos (three views in this example), HyFluid can infer 3D fluid density and velocity fields.

Visualization of recovered 3D fluid fields

Video

Novel-View Re-Simulation

The recovered 3D fluid density and velocity fields allow re-simulating the fluid dynamics from novel views. Note that the originally captured videos have a black background; we invert the colors for better visualization.

NeRFlow [1]

PINF [2]

HyFluid (Ours)

Ground Truth

Novel-View Future Prediction

The recovered 3D density and velocity allow predicting the future evolution of the fluid by simulating its dynamics. We use a simple Eulerian simulator to predict the future.
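As an illustration of one Eulerian prediction step, here is a minimal 2D semi-Lagrangian advection sketch: each grid cell is traced backward along the velocity field, and the density is sampled there. The function names and the bilinear sampler are assumptions for illustration; the page only states that a simple Eulerian simulator is used.

```python
import numpy as np

def bilinear_sample(field, xs, ys):
    """Bilinearly sample a 2D field at continuous (xs, ys) grid coordinates."""
    h, w = field.shape
    xs = np.clip(xs, 0.0, w - 1.001)
    ys = np.clip(ys, 0.0, h - 1.001)
    x0 = xs.astype(int)
    y0 = ys.astype(int)
    fx = xs - x0
    fy = ys - y0
    top = field[y0, x0] * (1 - fx) + field[y0, x0 + 1] * fx
    bot = field[y0 + 1, x0] * (1 - fx) + field[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def advect(density, u, v, dt):
    """One semi-Lagrangian advection step: trace each cell backward along
    the velocity field (u, v) and sample the density at that point."""
    h, w = density.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return bilinear_sample(density, xs - dt * u, ys - dt * v)
```

Under a uniform rightward velocity, a density spike shifts one cell to the right per unit time, which is the expected behavior of the advection step.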

PINF [2]

HyFluid (Ours)

Ground Truth

Dynamic Neural Scene Composition

The recovered 3D density and appearance of the fluid can be easily composited into any dynamic scene reconstructed by NeRF-like methods.
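One common way to composite two volumetric fields, such as the recovered fluid and a NeRF-reconstructed scene, is to sum their densities at each ray sample and density-weight their colors before alpha compositing. The sketch below shows this for a single ray; all names are illustrative assumptions, not the K-Planes or HyFluid API.

```python
import numpy as np

def composite_render(sigmas_a, colors_a, sigmas_b, colors_b, delta):
    """Volume-render one ray through two overlapping fields.

    sigmas_*: (S,) per-sample densities; colors_*: (S, 3) per-sample RGB;
    delta: distance between adjacent samples. Returns the (3,) ray color.
    """
    sigma = sigmas_a + sigmas_b
    # Density-weighted mixture of the two fields' colors at each sample.
    w_a = np.where(sigma > 0, sigmas_a / np.maximum(sigma, 1e-8), 0.0)
    color = w_a[:, None] * colors_a + (1 - w_a)[:, None] * colors_b
    # Standard alpha compositing with accumulated transmittance.
    alpha = 1 - np.exp(-sigma * delta)
    trans = np.concatenate([[1.0], np.cumprod(1 - alpha)[:-1]])
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)
```

When one field's density vanishes, the result reduces to rendering the other field alone, so the composition leaves the original scene unchanged wherever the fluid is absent.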

Original (by K-Planes [3])

HyFluid (Our composition with K-Planes)

[1] Y. Du et al., Neural Radiance Flow for 4D View Synthesis and Video Processing, ICCV 2021
[2] M. Chu et al., Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data, SIGGRAPH 2022
[3] S. Fridovich-Keil et al., K-Planes: Explicit Radiance Fields in Space, Time, and Appearance, arXiv 2023

BibTeX

@inproceedings{yu2023hyfluid,
  title={Inferring hybrid neural fluid fields from videos},
  author={Hong-Xing Yu and Yang Zheng and Yuan Gao and Yitong Deng and Bo Zhu and Jiajun Wu},
  booktitle={NeurIPS},
  year={2023}
}