Constructing compact and informative 3D scene representations is essential for effective embodied exploration and reasoning, especially in complex environments over extended periods. Existing representations, such as object-centric 3D scene graphs, oversimplify spatial relationships by modeling scenes as isolated objects with restrictive textual relationships, making it difficult to address queries requiring nuanced spatial understanding. Moreover, these representations lack natural mechanisms for active exploration and memory management, hindering their application to lifelong autonomy. In this work, we propose 3D-Mem, a novel 3D scene memory framework for embodied agents. 3D-Mem employs informative multi-view images, termed Memory Snapshots, to represent the scene and capture rich visual information of explored regions. It further integrates frontier-based exploration by introducing Frontier Snapshots—glimpses of unexplored areas—enabling agents to make informed decisions by considering both known and potential new information. To support lifelong memory in active exploration settings, we present an incremental construction pipeline for 3D-Mem, as well as a memory retrieval technique for memory management. Experimental results on three benchmarks demonstrate that 3D-Mem significantly enhances agents' exploration and reasoning capabilities in 3D environments, highlighting its potential for advancing applications in embodied AI.
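To make the two snapshot types concrete, here is a minimal sketch of how such a scene memory could be organized. All class and field names are illustrative assumptions, not the paper's actual API: Memory Snapshots accumulate as regions are explored, while Frontier Snapshots are transient glimpses that get replaced as the explored area grows.

```python
from dataclasses import dataclass, field


@dataclass
class MemorySnapshot:
    # An informative multi-view image of an explored region,
    # plus the objects visible in it (illustrative fields).
    image_path: str
    object_ids: list[int] = field(default_factory=list)


@dataclass
class FrontierSnapshot:
    # A glimpse toward an unexplored area, anchored at a frontier location.
    image_path: str
    frontier_xy: tuple[float, float] = (0.0, 0.0)


class SceneMemory:
    """Illustrative container: Memory Snapshots are added incrementally
    during exploration; Frontier Snapshots are refreshed as frontiers move."""

    def __init__(self) -> None:
        self.memory: list[MemorySnapshot] = []
        self.frontiers: list[FrontierSnapshot] = []

    def add_memory(self, snap: MemorySnapshot) -> None:
        # Explored regions stay in memory for the lifetime of the episode.
        self.memory.append(snap)

    def update_frontiers(self, snaps: list[FrontierSnapshot]) -> None:
        # Frontier snapshots are transient: replaced as exploration progresses.
        self.frontiers = snaps
```

An agent deciding its next action would then consider both lists together: answer from `memory` if the evidence is already there, or move toward a `frontiers` entry otherwise.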
At query time, the 3D scene memory is first filtered by a prefiltering step that discards snapshots irrelevant to the question. The VLM agent then leverages:
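One simple instance of such a prefiltering step is sketched below. It assumes each snapshot carries a set of object labels (e.g. from open-vocabulary detection during memory construction) and ranks snapshots by keyword overlap with the query; the paper's actual retrieval technique may differ, and the function names here are hypothetical.

```python
def prefilter(snapshots, query_keywords, labels_of, top_k=5):
    """Keep the top_k snapshots most relevant to a query.

    snapshots      -- iterable of snapshot objects (any type)
    query_keywords -- set of keywords extracted from the question
    labels_of      -- callable mapping a snapshot to its set of object labels
    """
    def score(snap):
        # Relevance = number of query keywords matching the snapshot's labels.
        return len(query_keywords & labels_of(snap))

    ranked = sorted(snapshots, key=score, reverse=True)
    return ranked[:top_k]
```

Only the retained snapshots are passed to the VLM, which keeps the prompt compact even as the memory grows over a long episode.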
Each demo represents an Embodied Q&A episode in a Habitat-sim scene, where the agent is required to answer 6-8 questions in sequence.
Top-down map. The notation is explained in the following:
Thanks to its training-free design, 3D-Mem adapts seamlessly to real robots, such as low-height quadrupeds, enabling practical deployment in the real world.
We further illustrate how 3D-Mem is aggregated during exploration with the following videos. Each video shows a task-agnostic exploration episode.
@misc{yang20243dmem3dscenememory,
  title={3D-Mem: 3D Scene Memory for Embodied Exploration and Reasoning},
  author={Yuncong Yang and Han Yang and Jiachen Zhou and Peihao Chen and Hongxin Zhang and Yilun Du and Chuang Gan},
  year={2024},
  eprint={2411.17735},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.17735},
}