Abstract
Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and, importantly, allow high-quality real-time (≥ 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows real-time rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
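Since the abstract compresses the whole pipeline into one paragraph, here is a minimal Python sketch of the central operations it describes: building an anisotropic 3D covariance from a scale/rotation parameterization, projecting each Gaussian to the image plane through a local affine approximation of the perspective projection, and alpha-compositing depth-sorted splats. Everything below (function names, tuple layout, camera model) is an illustrative assumption, not the paper's optimized CUDA rasterizer.

```python
# Illustrative sketch only -- NOT the authors' implementation.
import numpy as np

def covariance_from_scale_rotation(scale, quat):
    """Anisotropic 3D covariance Sigma = R S S^T R^T from a per-Gaussian
    scale vector and a unit quaternion (w, x, y, z)."""
    w, x, y, z = quat / np.linalg.norm(quat)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    M = R @ np.diag(scale)
    return M @ M.T

def project_gaussian(mean, cov, W, fx, fy):
    """Transform the mean into camera space and push the covariance through
    the Jacobian J of the perspective projection: Sigma' = J W Sigma W^T J^T."""
    t = W[:3, :3] @ mean + W[:3, 3]                     # camera-space center
    J = np.array([[fx / t[2], 0.0, -fx * t[0] / t[2]**2],
                  [0.0, fy / t[2], -fy * t[1] / t[2]**2]])
    cov2d = J @ W[:3, :3] @ cov @ W[:3, :3].T @ J.T
    uv = np.array([fx * t[0] / t[2], fy * t[1] / t[2]])  # projected 2D mean
    return uv, cov2d, t[2]

def composite_pixel(px, splats):
    """Front-to-back alpha blending of depth-sorted splats at pixel px.
    Each splat is a (uv, cov2d, depth, opacity, rgb) tuple."""
    color, transmittance = np.zeros(3), 1.0
    for uv, cov2d, _, opacity, rgb in sorted(splats, key=lambda s: s[2]):
        d = px - uv
        alpha = opacity * np.exp(-0.5 * d @ np.linalg.inv(cov2d) @ d)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:                         # early termination
            break
    return color

if __name__ == "__main__":
    cov = covariance_from_scale_rotation(np.array([0.2, 0.05, 0.05]),
                                         np.array([1.0, 0.0, 0.0, 0.0]))
    uv, cov2d, depth = project_gaussian(np.array([0.0, 0.0, 4.0]), cov,
                                        np.eye(4), fx=1000.0, fy=1000.0)
    print(composite_pixel(uv, [(uv, cov2d, depth, 0.8,
                                np.array([1.0, 0.5, 0.2]))]))
```

The sketch shades one pixel at a time for clarity; the speed reported in the paper comes from sorting Gaussians per screen tile and rasterizing them in parallel on the GPU, which this example does not attempt to reproduce.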
Community
Where and how can we access the model?
That's great!
This is a vehicle made with MidJourney; here is the prompt I used:
Lots of people in line wander around behind huge very rusty garbage truck with bull bars, big chain in the front, spikes on the bull bars, dirty windows, the truck spits out huge black smoke, Madmax style, giger style. The truck is in the desert and is moving on a muddy road, realistic
I posted it here to turn it into a 3D scene, but I don't know how to do that.
@chabgyver It's only a paper for now; there isn't a Space available yet to test it. Sometimes you can find the GitHub repo before it reaches Hugging Face and test the code with Google Colab, for example, in the form of a Jupyter notebook. If what I'm writing sounds like gibberish, paste my message into ChatGPT and you'll get it in 3 seconds; it's very simple...
I understood your message. No worries, I'll wait until this technology is more mature, but I can't wait to test it on my own paintings.
This looks incredible.
What's the latest with this? There were only a few comments when it was published. Would love to see more.
make an Oculus Quest viewer please!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis (2023)
- 4D Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes (2024)
- EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction (2024)
- Deblurring 3D Gaussian Splatting (2024)
- TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot
recommend
Real-Time Radiance Fields: How 3D Gaussian Splatting is Changing the Game
Links 🔗:
👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/