Introducing Meta 3D Gen – new research from AI researchers at Meta that enables text-to-3D generation with high-quality geometry and textures.

Research paper ➡️ https://go.fb.me/c9g4x6

Meta 3D Gen delivers text-to-mesh generation with high-quality geometry, textures and PBR materials. It can generate high-quality 3D assets end-to-end, with both high-resolution textures and material maps, producing results that surpass previous state-of-the-art solutions, all at 3-10x the speed of previous work.

In addition to the Meta 3D Gen technical report, we're publishing our research on the two individual components of the Meta 3D Gen system: Meta 3D AssetGen, for generating 3D models from text, and Meta 3D TextureGen, a model capable of high-quality texture generation and AI-assisted retexturing of artist-created or generated assets.

Meta 3D AssetGen paper ➡️ https://go.fb.me/87tktg
Meta 3D TextureGen paper ➡️ https://go.fb.me/tvbdf8
Tools like these are probably the reason why so many people in the games industry lost their jobs. But if the computer can do it, it should do it! We don't need any fake jobs!
From the paper: "Recent works thus pivoted into basing such generators on text-to-image models that are trained on billions of captioned images." ...where are these BILLIONS of images coming from? Seriously.
Is the model available open-source?
Amazing! Check this out, Rebecca Wikström Emma Sjögren
WOAH 😮
The ability to generate high-quality 3D assets from text, with such impressive speed and precision, is truly exciting. This technology opens up incredible opportunities for problem-solving and creativity, making it an exhilarating time to be involved in UX design. Kudos to the Meta team for pushing the boundaries of what's possible! #UXDesign #Innovation #3DGeneration #Meta3DGen
Generative AI to another level. Great work 👍
Results of y'all's predatory Instagram scraping, where you had to write a mandatory essay on why AI is bad for your hard work just to keep it from being fed in? Fuck off with it.
I'm eagerly waiting for the code release! I've tried out various tools like TripoSR, Unique3D, and many more. It's exciting to see that tools like Meshy.ai and Rodin allow for mesh retexturing after generation, and that Blender now has ComfyUI and Stable Diffusion integrations. I'm looking forward to seeing papers from Meta, a company I admire. After dominating the advertising space with no close competition in monetization, they're releasing impressive open-source models. Keep the code coming!
This is far from production-ready technology for games companies; it doesn't yet allow the level of control a real production artist needs. It's on its way, but not there yet.

Additionally, people mistake these types of technologies for things that will replace artists. The reason so many games companies have laid people off is that the business model in games is broken. When a AAA project costs $150M-$300M to make, the game needs to make that money back, and those games have been falling short. It's unsustainable, and that is why there have been layoffs. Trust me, a major studio will happily employ 10 teams of 15 instead of 1 team of 150 if every game is making a profit.

The reality is that tools like these will actually help the industry by allowing games companies to reduce the risk on a single project, in turn increasing security for the team building it, because the game needs to make less money to justify the team's existence. On top of that, technology like this has the potential to unleash the indie artist and give them the ability to compete on a more level playing field. I believe we will see a gaming and film renaissance emerge as a result of AI-driven tools that help artists, not replace them.