Marigold-DC: Zero-Shot Monocular Depth Completion with Guided Diffusion
Abstract
Depth completion upgrades sparse depth measurements into dense depth maps guided by a conventional image. Existing methods for this highly ill-posed task operate in tightly constrained settings and tend to struggle when applied to images outside the training domain or when the available depth measurements are sparse, irregularly distributed, or of varying density. Inspired by recent advances in monocular depth estimation, we reframe depth completion as image-conditional depth map generation guided by sparse measurements. Our method, Marigold-DC, builds on a pretrained latent diffusion model for monocular depth estimation and injects the depth observations as test-time guidance via an optimization scheme that runs in tandem with the iterative inference of denoising diffusion. The method exhibits excellent zero-shot generalization across a diverse range of environments and handles even extremely sparse guidance effectively. Our results suggest that contemporary monocular depth priors greatly robustify depth completion: it may be better to view the task as recovering dense depth from (dense) image pixels, guided by sparse depth, rather than as inpainting (sparse) depth, guided by an image. Project website: https://MarigoldDepthCompletion.github.io/
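For readers curious what "test-time guidance that runs in tandem with the iterative inference of denoising diffusion" could look like in code, below is a minimal, purely illustrative PyTorch sketch. It is not the authors' released implementation: `unet`, `decode_depth`, and `scheduler` are hypothetical stand-ins for an image-conditioned latent denoiser, a latent-to-depth decoder, and a DDIM-style scheduler, and the guidance step simply nudges the latent toward agreement with the sparse measurements at observed pixels.

```python
# Illustrative sketch of sparse-depth guidance inside a denoising diffusion loop.
# All component names (unet, decode_depth, scheduler) are assumed placeholders.
import torch

@torch.enable_grad()
def guided_depth_completion(unet, decode_depth, scheduler, image_latent,
                            sparse_depth, sparse_mask, num_steps=50, lr=0.05):
    """DDIM-style denoising of a depth latent; at every step, optimize the
    latent so the decoded depth matches the sparse points (guidance)."""
    device = image_latent.device
    latent = torch.randn_like(image_latent)        # depth latent starts as pure noise
    scheduler.set_timesteps(num_steps, device=device)

    for t in scheduler.timesteps:
        latent = latent.detach().requires_grad_(True)

        # 1) Predict the noise, conditioned on the RGB latent.
        eps = unet(torch.cat([image_latent, latent], dim=1), t)

        # 2) Estimate the clean latent x0 at this step and decode it to depth.
        alpha_bar = scheduler.alphas_cumprod[t]
        x0 = (latent - (1 - alpha_bar).sqrt() * eps) / alpha_bar.sqrt()
        depth_pred = decode_depth(x0)              # dense (relative) depth map

        # 3) Least-squares scale/shift alignment to the sparse measurements,
        #    then an L1 loss evaluated only where measurements exist.
        d, m = sparse_depth[sparse_mask], depth_pred[sparse_mask]
        scale = ((d - d.mean()) * (m - m.mean())).sum() / ((m - m.mean()) ** 2).sum()
        shift = d.mean() - scale * m.mean()
        loss = torch.abs(scale * m + shift - d).mean()

        # 4) Test-time guidance: one gradient step on the latent.
        grad = torch.autograd.grad(loss, latent)[0]
        latent = (latent - lr * grad).detach()

        # 5) Regular scheduler update to the next, less noisy latent
        #    (eps from step 1 is reused here for simplicity).
        latent = scheduler.step(eps, t, latent).prev_sample

    return decode_depth(latent)
```

The key design choice this sketch tries to convey is that the sparse observations never enter the network weights; they act only as an optimization target evaluated against the intermediate clean-depth estimate at every denoising step, which is why the approach can remain training-free and zero-shot.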
Community
Introducing Marigold-DC, our training-free, zero-shot approach to monocular depth completion with guided diffusion! If you have ever wondered how else a long denoising diffusion schedule can be useful, we have an answer for you!
Depth completion addresses sparse, incomplete, or noisy measurements from photogrammetry or sensors like LiDAR. Sparse points aren't just hard for humans to interpret; they also hinder downstream tasks.
Traditionally, depth completion has been framed as image-guided depth interpolation. We leverage Marigold, a diffusion-based monodepth model, to reframe it as sparse-depth-guided depth generation. How the turntables! Check out the paper anyway.
- Website: https://marigolddepthcompletion.github.io/
- Demo: https://huggingface.co/spaces/prs-eth/marigold-dc
- Paper: https://arxiv.org/abs/2412.13389
- Code: https://github.com/prs-eth/marigold-dc
Team ETH Zürich: Massimiliano Viola ( @mviola ), Kevin Qu ( @KevinQu7 ), Nando Metzger ( @nandometzger ), Bingxin Ke ( @Bingxin ), Alexander Becker, Konrad Schindler, and Anton Obukhov ( @toshas ). We thank Hugging Face for their continuous support.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MetricGold: Leveraging Text-To-Image Latent Diffusion Models for Metric Depth Estimation (2024)
- SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation (2024)
- OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration (2024)
- PriorDiffusion: Leverage Language Prior in Diffusion Models for Monocular Depth Estimation (2024)
- FiffDepth: Feed-forward Transformation of Diffusion-Based Generators for Detailed Depth Estimation (2024)
- Align3R: Aligned Monocular Depth Estimation for Dynamic Videos (2024)
- MultiDepth: Multi-Sample Priors for Refining Monocular Metric Depth Estimations in Indoor Scenes (2024)