WonderWorld: Interactive 3D Scene Generation from a Single Image

HX Yu, H Duan, C Herrmann, WT Freeman… - arXiv preprint arXiv:2406.09394, 2024 - arxiv.org
We present WonderWorld, a novel framework for interactive 3D scene extrapolation that enables users to explore and shape virtual environments based on a single input image and user-specified text. While significant improvements have been made to the visual quality of scene generation, existing methods run offline, taking tens of minutes to hours to generate a scene. By leveraging Fast Gaussian Surfels and a guided diffusion-based depth estimation method, WonderWorld generates geometrically consistent extrapolations while significantly reducing computation time. Our framework generates connected and diverse 3D scenes in less than 10 seconds on a single A6000 GPU, enabling real-time user interaction and exploration. We demonstrate the potential of WonderWorld for applications in virtual reality, gaming, and creative design, where users can quickly generate and navigate immersive, potentially infinite virtual worlds from a single image. Our approach represents a significant advancement in interactive 3D scene generation, opening up new possibilities for user-driven content creation and exploration in virtual environments. We will release full code and software for reproducibility. Project website: https://WonderWorld-2024.github.io/
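The abstract names Fast Gaussian Surfels as the scene representation but does not define them. As a rough illustration only, and not the authors' implementation: a surfel can be read as a 3D Gaussian flattened along its normal axis, i.e. an oriented disk, initialized directly from an estimated depth map so that little optimization is needed. The sketch below assumes this reading; the names GaussianSurfel and surfels_from_depth, and the pixel-footprint radius heuristic, are all hypothetical.

# Hypothetical sketch of a "Gaussian surfel": a 3D Gaussian whose scale
# along the normal axis is pinned to ~0, so it behaves like an oriented
# disk. Illustrative only; not the paper's code.
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSurfel:
    position: np.ndarray   # (3,) world-space center
    rotation: np.ndarray   # (4,) unit quaternion giving the disk orientation
    scale: np.ndarray      # (2,) tangent-plane radii; normal axis is flat
    color: np.ndarray      # (3,) RGB
    opacity: float

def surfels_from_depth(depth: np.ndarray, K: np.ndarray) -> list[GaussianSurfel]:
    """Back-project a depth map into one surfel per pixel (assumed init scheme).

    depth: (H, W) metric depth map, K: (3, 3) camera intrinsics.
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    surfels = []
    for v in range(H):
        for u in range(W):
            z = depth[v, u]
            # Pinhole back-projection of pixel (u, v) at depth z.
            p = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
            # The pixel footprint at depth z sets the disk radius, so the
            # surfels tile the visible surface without holes.
            r = 0.5 * z * (1.0 / fx + 1.0 / fy)
            surfels.append(GaussianSurfel(
                position=p,
                rotation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity; refined later
                scale=np.array([r, r]),
                color=np.zeros(3),
                opacity=1.0,
            ))
    return surfels

Under this reading, the speed claim is plausible: initializing every surfel in closed form from depth avoids the long per-scene optimization of standard 3D Gaussian splatting, which is consistent with the sub-10-second generation time the abstract reports.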