MultiDiff: Consistent Novel View Synthesis from a Single Image

CVPR 2024

Meta Reality Labs · Technical University of Munich

MultiDiff enables camera-motion control for scene-level novel view synthesis. Given a single RGB image and a camera trajectory of choice, the model generates 3D-consistent views extrapolating from the input image.

Abstract

We introduce MultiDiff, a novel approach for consistent novel view synthesis of scenes from a single RGB image. The task of synthesizing novel views from a single reference image is highly ill-posed by nature, as there exist multiple plausible explanations for unobserved areas. To address this issue, we incorporate strong priors in the form of monocular depth predictors and video-diffusion models. Monocular depth enables us to condition our model on warped reference images for the target views, increasing geometric stability. The video-diffusion prior provides a strong proxy for 3D scenes, allowing the model to learn continuous and pixel-accurate correspondences across generated images. In contrast to approaches relying on autoregressive image generation, which are prone to drift and error accumulation, MultiDiff jointly synthesizes a sequence of frames, yielding high-quality and multi-view consistent results -- even for long-term scene generation with large camera movements, while reducing inference time by an order of magnitude. For additional consistency and image quality improvements, we introduce a novel, structured noise distribution. Our experimental results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging, real-world datasets RealEstate10K and ScanNet. Finally, our model naturally supports multi-view consistent editing without the need for further tuning.

Video

Method

MultiDiff leverages strong depth and video diffusion priors to enable consistent novel view synthesis of scenes from a single RGB image using a novel correspondence attention layer.
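For intuition, the sketch below shows one way a cross-frame attention block of this kind could look in PyTorch. It is a simplified illustration under our own assumptions (module name, feature shapes, and the use of standard multi-head attention), not the paper's exact correspondence attention layer: per-frame U-Net features are reshaped so that each spatial location can exchange information with the same location in all other frames of the jointly denoised sequence.

# Hypothetical sketch of a cross-frame attention block; the paper's actual
# correspondence attention layer may differ in design and placement.
import torch
import torch.nn as nn

class CrossFrameAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) -- per-frame feature maps of the jointly generated sequence
        b, t, c, h, w = x.shape
        # Treat each spatial position as a batch element and the T frames as tokens,
        # so corresponding positions across frames can attend to one another.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        return x + out  # residual connection around the attention block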

Novel-view rendering results following the GT trajectory.

By warping the initial noise into the target novel views according to the estimated depth, we structure the noise so that it carries additional information about the 3D scene layout. Just like Neo in "The Matrix", the model can decode this abstract noise pattern into more consistent views.
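The sketch below illustrates this depth-based noise warping. The function name, camera conventions, and nearest-pixel scattering are illustrative assumptions rather than the paper's implementation: reference-view noise is back-projected with the estimated depth, reprojected into the target camera, and target pixels that receive no warped sample keep freshly drawn Gaussian noise.

# Hedged sketch of depth-based noise warping (illustrative, not the released code).
import torch

def warp_noise_to_target(noise_ref, depth_ref, K, T_ref_to_tgt):
    """Scatter reference-view noise into a target view using estimated depth.

    noise_ref:     (C, H, W) Gaussian noise sampled for the reference view
    depth_ref:     (H, W)    monocular depth estimate for the reference view
    K:             (3, 3)    pinhole intrinsics shared by both views
    T_ref_to_tgt:  (4, 4)    relative camera pose (reference -> target)
    """
    c, h, w = noise_ref.shape
    # Back-project reference pixels to 3D using the depth map.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()   # (H, W, 3)
    rays = pix @ torch.linalg.inv(K).T                                 # (H, W, 3)
    pts = rays * depth_ref.unsqueeze(-1)                               # (H, W, 3)
    # Transform to the target camera and project back to pixels.
    pts_h = torch.cat([pts, torch.ones(h, w, 1)], dim=-1) @ T_ref_to_tgt.T
    proj = pts_h[..., :3] @ K.T
    u = (proj[..., 0] / proj[..., 2].clamp(min=1e-6)).round().long()
    v = (proj[..., 1] / proj[..., 2].clamp(min=1e-6)).round().long()
    # Start from fresh noise; overwrite pixels that receive a warped sample.
    noise_tgt = torch.randn_like(noise_ref)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (proj[..., 2] > 0)
    noise_tgt[:, v[valid], u[valid]] = noise_ref[:, ys[valid], xs[valid]]
    return noise_tgt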

By masking areas in the input image, MultiDiff naturally enables consistent editing without the need for fine-tuning.
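As a rough illustration (hypothetical interface, not the released code), conditioning can be performed on a reference image whose edit region has been blanked out, leaving the diffusion model to synthesize new, view-consistent content in that region.

# Illustrative sketch: blank out the region to edit before conditioning.
import torch

def mask_reference(image, mask):
    """image: (3, H, W) in [0, 1]; mask: (H, W) bool, True = region to edit."""
    edited = image.clone()
    edited[:, mask] = 0.0  # the diffusion model fills in the masked region
    return edited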

BibTeX

@InProceedings{Muller_2024_CVPR,
    author    = {M\"uller, Norman and Schwarz, Katja and R\"ossle, Barbara and Porzi, Lorenzo and Bul\`o, Samuel Rota and Nie{\ss}ner, Matthias and Kontschieder, Peter},
    title     = {MultiDiff: Consistent Novel View Synthesis from a Single Image},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {10258-10268}
}