ROBOTICS

Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots

March 12, 2024

Abstract

We present Habitat 3.0: a simulation platform for studying collaborative human-robot tasks in home environments. Habitat 3.0 offers contributions across three dimensions: (1) Accurate humanoid simulation: addressing challenges in modeling complex deformable bodies and diversity in appearance and motion, all while ensuring high simulation speed. (2) Human-in-the-loop infrastructure: enabling real human interaction with simulated robots via mouse/keyboard or a VR interface, facilitating evaluation of robot policies with human input. (3) Collaborative tasks: studying two collaborative tasks, Social Navigation and Social Rearrangement. Social Navigation investigates a robot's ability to locate and follow humanoid avatars in unseen environments, whereas Social Rearrangement addresses collaboration between a humanoid and a robot while rearranging a scene. These contributions allow us to study end-to-end learned and heuristic baselines for human-robot collaboration in depth, as well as to evaluate them with humans in the loop. Our experiments demonstrate that learned robot policies lead to efficient task completion when collaborating with unseen humanoid agents and human partners who may exhibit behaviors the robot has not seen before. Additionally, we observe emergent behaviors during collaborative task execution, such as the robot yielding space when obstructing a humanoid agent, thereby allowing the humanoid agent to complete the task effectively. Furthermore, our experiments using the human-in-the-loop tool demonstrate that our automated evaluation with humanoids can indicate the relative ordering of different policies when they are evaluated with real human collaborators. Habitat 3.0 unlocks interesting new features in simulators for Embodied AI, and we hope it paves the way for a new frontier of embodied human-AI interaction capabilities.
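
For readers unfamiliar with the underlying framework, the sketch below illustrates the basic environment loop in habitat-lab, the library Habitat 3.0 builds on. It is a minimal illustration only, assuming habitat-lab is installed; the config path is the PointNav quickstart example from habitat-lab's public documentation, not the paper's Social Navigation or Social Rearrangement setup, which use their own multi-agent configurations.

    import habitat

    # Load an embodied AI task using the PointNav quickstart config shipped
    # with habitat-lab (illustrative only; the Habitat 3.0 social tasks use
    # their own multi-agent configurations).
    env = habitat.Env(
        config=habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
    )

    # Step through the environment with random actions until the episode ends.
    observations = env.reset()
    while not env.episode_over:
        observations = env.step(env.action_space.sample())

    env.close()

In the collaborative settings described above, the random-action loop would instead be driven by a learned or heuristic robot policy acting alongside a humanoid avatar or a human-controlled agent.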

AUTHORS

Xavi Puig

Eric Undersander

Andrew Szot

Mikael Dallaire Cote

Jimmy Yang

Ruslan Partsey

Ruta Desai

Alexander William Clegg

Tiffany Min

Vladimír Vondruš

Theo Gervet

Vincent-Pierre Berges

Oleksandr Maksymets

Zsolt Kira

Mrinal Kalakrishnan

Jitendra Malik

Devendra Singh Chaplot

Unnat Jain

Dhruv Batra

Akshara Rai

Roozbeh Mottaghi

Publisher

ICLR

Research Topics

Robotics

Related Publications

May 06, 2024

ROBOTICS

Bootstrapping Linear Models for Fast Online Adaptation in Human-Agent Collaboration

Ben Newman, Christopher Paxton, Kris Kitani, Henny Admoni

April 02, 2024

ROBOTICS

REINFORCEMENT LEARNING

MoDem-V2: Visuo-Motor World Models for Real-World Robot Manipulation

Patrick Lancaster, Nicklas Hansen, Aravind Rajeswaran, Vikash Kumar

March 26, 2024

ROBOTICS

REINFORCEMENT LEARNING

When should we prefer Decision Transformers for Offline Reinforcement Learning?

Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard, Shagun Sodhani, Amy Zhang

December 10, 2023

ROBOTICS

REINFORCEMENT LEARNING

Accelerating Exploration with Unlabeled Prior Data

Qiyang Li, Jason Zhang, Dibya Ghosh, Amy Zhang, Sergey Levine
