Geometry-Aware Diffusion Models for Multiview Scene Inpainting

1York University, 2Vector Institute for AI, 3Samsung AI Centre Toronto, 4Google DeepMind
{ahmadsa,marcus.brubaker,kosta}@yorku.ca, tristan.a@samsung.com

Abstract

In this paper, we focus on 3D scene inpainting, where parts of an input image set, captured from different viewpoints, are masked out. The main challenge lies in generating plausible image completions that are geometrically consistent across views. Most recent work addresses this challenge by combining generative models with a 3D radiance field to fuse information across viewpoints. However, a major drawback of these methods is that they often produce blurry images due to the fusion of inconsistent cross-view images. To avoid blurry inpaintings, we eschew the use of an explicit or implicit radiance field altogether and instead fuse cross-view information in a learned space. In particular, we introduce a geometry-aware conditional generative model, capable of inpainting multi-view consistent images based on both geometric and appearance cues from reference images. A key advantage of our approach over existing methods is its unique ability to inpaint masked scenes with a limited number of views (i.e., few-view inpainting), whereas previous methods require relatively large image sets for their 3D model fitting step. Empirically, we evaluate and compare our scene-centric inpainting method on two datasets, SPIn-NeRF and NeRFiller, which contain images captured at narrow and wide baselines, respectively, and achieve state-of-the-art 3D inpainting performance on both. Additionally, we demonstrate the efficacy of our approach in the few-view setting compared to prior methods.



Qualitative Results for Small-Baseline Object Removal

The examples below show the input and inpainted scenes for the small-baseline object removal task. The first column shows the input scene, the second column shows the scene inpainted by SPIn-NeRF, and the third column shows the scene inpainted by our method. Notice that SPIn-NeRF often produces blurry results, while our method produces sharp and consistent results across views. Note also that our method is NeRF-free; here, a NeRF has been fitted to the inpainted images solely for visualization purposes.

Input Scene

SPIn-NeRF

Ours



Inpainted Test Views

The following visualization shows the images in each scene inpainted by SPIn-NeRF and by our method. Use the slider and animation controls to iterate through the frames. Note that the blurry artifacts along the image boundaries occur because, in the SPIn-NeRF benchmark, a NeRF is first fit on the "training views", and the inpainted NeRF is then rendered at the "test views". Since the training views do not fully cover the regions captured by the test views, artifacts appear near the boundaries, a common limitation of NeRFs.

Scene Selection

Select a scene to view the inpainted images.


Input
Ours
SPIn-NeRF
InFusion
MVIP-NeRF
MALD-NeRF

Frame Control (automated playback or manual slider)




Qualitative Results for Wide-Baseline Scene Completion

The examples below show the input and inpainted scenes for the wide-baseline scene completion task. The first column shows the input scene, the second column shows the scene inpainted by NeRFiller, and the third column shows the scene inpainted by our method. Notice that NeRFiller often produces blurry results, while our method produces sharper results across views.

Input Scene

NeRFiller

Ours



Qualitative Results for Few-View Inpainting

The examples below show the input and inpainted views for the few-view inpainting task. The first column shows all the input views, the second column shows the views inpainted by SPIn-NeRF, the third column shows the views inpainted by NeRFiller, and the fourth column shows the views inpainted by our method. Notice that SPIn-NeRF and NeRFiller often produce blurry and inconsistent results, since NeRFs cannot be reliably fitted with so few views. Our method, on the other hand, produces sharp and consistent results across views.

Input Views

SPIn-NeRF

NeRFiller

Ours


Visualization of SED

Since our inpainting method does not explicitly enforce 3D consistency through a 3D radiance field, we evaluate its geometric consistency using the Thresholded Symmetric Epipolar Distance (TSED) metric (Yu et al., 2023). This visualization showcases how SED is computed between a pair of images.

Select two corresponding points, one in each image below. The epipolar line induced by each point will be drawn in the other image with the same color as the point. When both points have been selected, the shortest line segment between each point and the epipolar line of its corresponding point will be drawn, and the computed SED will be displayed under the images. Notice that the SED is low when the same scene point is selected in both images, and high when different scene points are selected.
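To complement the visualization above, here is a minimal Python sketch of how the SED of a single correspondence can be computed, assuming a known fundamental matrix F between the two views; the function names and the threshold value are illustrative and not part of our released code.

import numpy as np

def point_to_line_distance(pt, line):
    # Perpendicular distance from a 2D point (x, y) to a homogeneous line (a, b, c).
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c) / np.hypot(a, b)

def symmetric_epipolar_distance(x1, x2, F):
    # SED of one correspondence: x1 lies in image 1, x2 in image 2, and F is the
    # 3x3 fundamental matrix satisfying x2^T F x1 = 0 for a perfect correspondence.
    p1 = np.array([x1[0], x1[1], 1.0])
    p2 = np.array([x2[0], x2[1], 1.0])
    line_in_2 = F @ p1    # epipolar line of x1, drawn in image 2
    line_in_1 = F.T @ p2  # epipolar line of x2, drawn in image 1
    return point_to_line_distance(x2, line_in_2) + point_to_line_distance(x1, line_in_1)

def pair_is_consistent(matches, F, threshold=2.0):
    # TSED-style decision: a view pair counts as consistent when the median SED
    # over its matched keypoints falls below a threshold (value shown is illustrative).
    seds = [symmetric_epipolar_distance(x1, x2, F) for x1, x2 in matches]
    return np.median(seds) < threshold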


Method Overview

We use a geometry-aware conditional diffusion model to consistently inpaint multi-view scenes. The scene is inpainted autoregressively, with each view conditioned on the previously inpainted views, while a multi-view depth estimator progressively estimates the scene geometry as the views are inpainted. Instead of feeding the reference images directly to the model, we render a set of appearance and geometric cues from them, informing the model of the constraints that the scene geometry imposes. Concretely, we build a mesh from the estimated geometry and render the appearance (RGB) and geometric cues into the target view. The geometric cues include a front-face mask, a back-face mask, a rendered shadow volume, and depth, as depicted in the figure below. We also use the rendered geometric cues to fuse multiple reference images when inpainting a view.
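A minimal, hypothetical sketch of this autoregressive loop is given below; inpaint_model, estimate_depth, and render_cues are placeholder callables standing in for the conditional diffusion model, the multi-view depth estimator, and the mesh-based cue renderer, and do not correspond to a released API.

def inpaint_scene(views, masks, cameras, inpaint_model, estimate_depth, render_cues):
    # views: list of H x W x 3 images; masks: list of H x W inpainting masks;
    # cameras: per-view poses and intrinsics.
    references = []   # previously inpainted views with their estimated depths
    inpainted = []
    for view, mask, cam in zip(views, masks, cameras):
        # Render appearance (RGB) and geometric cues (front-/back-face masks,
        # shadow volume, depth) from the reference geometry into the target view;
        # with no references yet, the first view is inpainted unconditionally.
        cues = render_cues(references, cam)
        # Condition the diffusion model on the rendered cues rather than on raw reference pixels.
        completed = inpaint_model(view, mask, cues)
        # Estimate depth for the completed view so it can serve as a reference later.
        depth = estimate_depth(completed, references, cam)
        references.append((completed, depth, cam))
        inpainted.append(completed)
    return inpainted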


BibTeX

@article{salimi2025geomvi,
  title={Geometry-Aware Diffusion Models for Multiview Scene Inpainting},
  author={Ahmad Salimi and Tristan Aumentado-Armstrong and Marcus A. Brubaker and Konstantinos G. Derpanis},
  journal={arXiv preprint arXiv:2502.13335},
  year={2025}
}