Details
- Meta has introduced SAM 3D, an expansion of its Segment Anything suite, debuting SAM 3D Objects for reconstructing objects and scenes and SAM 3D Body for human pose and shape estimation, both from a single 2D image.
- The launch delivers model checkpoints, inference code, the SA-3DAO benchmark dataset, and the Meta Momentum Human Rig (MHR) parametric model with a commercial license, enabling broad adoption by developers and researchers.
- SAM 3D Objects reconstructs 3D shapes, textures, and layouts by leveraging a novel annotation approach where annotators rank model-generated outputs, resulting in 3.14 million mesh annotations from nearly 1 million real-world images.
- Meta's annotation technique, combined with vast and diverse training data, enables accurate 3D reconstruction from a single image, setting SAM 3D apart from prior 3D AI systems in both precision and scale.
- The technology is poised to serve sectors including AR, VR, robotics, gaming, and e-commerce, propelling Meta’s push to democratize advanced 3D perception tools and expand use cases across its ecosystems.
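The ranking-based annotation approach described above can be sketched in miniature: the model proposes several candidate meshes for an image, annotators rank them, and the top-ranked candidate becomes the training annotation. The sketch below aggregates rankings with a simple Borda count; all names and the aggregation method are illustrative assumptions, not Meta's actual pipeline.

```python
from collections import defaultdict

def select_annotation(candidates, rankings):
    """Pick the winning candidate mesh from annotator rankings.

    candidates: list of candidate mesh IDs proposed by the model.
    rankings:   one ordered list per annotator, best mesh first.
    Returns the candidate with the highest Borda-count score
    (a hypothetical aggregation rule chosen for this sketch).
    """
    scores = defaultdict(int)
    n = len(candidates)
    for ranking in rankings:                 # one ranking per annotator
        for place, mesh_id in enumerate(ranking):
            scores[mesh_id] += n - place     # earlier place -> more points
    return max(candidates, key=lambda m: scores[m])

# Example: three model-generated candidates, two annotators.
candidates = ["mesh_a", "mesh_b", "mesh_c"]
rankings = [["mesh_b", "mesh_a", "mesh_c"],
            ["mesh_b", "mesh_c", "mesh_a"]]
print(select_annotation(candidates, rankings))  # -> mesh_b
```

Repeating a loop like this over nearly 1 million images is one plausible way such a pipeline could scale human judgment to millions of mesh annotations without requiring annotators to author 3D geometry themselves.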
Impact
Meta’s SAM 3D sets a new benchmark in 3D AI by addressing the field’s chronic shortage of 3D training data through scalable annotation and openly released resources. Its open access and ready-to-use checkpoints should spur innovation in industries hungry for realistic 3D models, intensifying competition among vision AI leaders and shaping future development in immersive technology.
