Spatial-aware executable 3D point prediction for embodied localization, unifying touchable points and air points under a single representation.
Embodied intelligence fundamentally requires the capability to determine where to act in 3D space. We formalize this requirement as embodied localization—the problem of predicting executable 3D points conditioned on visual observations and language instructions. We instantiate embodied localization with two complementary target types: touchable points, which are surface-grounded 3D points enabling direct physical interaction, and air points, which are free-space 3D points specifying placement and navigation goals, directional constraints, or geometric relations.
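The two target types can be captured by one record that tags each predicted 3D point with its kind. This is a minimal illustrative sketch, not the paper's actual data schema; the class and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class ExecutablePoint:
    """Hypothetical unified record for an executable 3D point (camera frame, meters)."""
    xyz: tuple[float, float, float]
    kind: Literal["touchable", "air"]  # surface-grounded vs. free-space target

# A surface-grounded grasp target and a free-space placement goal
grasp = ExecutablePoint(xyz=(0.12, -0.05, 0.80), kind="touchable")
goal = ExecutablePoint(xyz=(0.40, 0.10, 1.50), kind="air")
```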
Embodied localization is inherently a problem of embodied 3D spatial reasoning—yet most existing vision-language systems rely predominantly on RGB inputs, requiring implicit geometric reconstruction that limits cross-scene generalization, despite the widespread adoption of RGB-D sensors in robotics.
To address this gap, we propose SpatialPoint, a spatial-aware vision-language framework that integrates structured depth into a vision-language model (VLM) and predicts camera-frame 3D coordinates. We construct a 2.6M-sample RGB-D dataset covering both touchable-point and air-point QA pairs for training and evaluation. Extensive experiments demonstrate that incorporating depth into VLMs significantly improves embodied localization performance.
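SpatialPoint outputs coordinates in the camera frame. While the model's exact prediction head is not detailed here, the standard pinhole back-projection that relates a pixel and its metric depth to such coordinates is shown below; the function name and intrinsics values are illustrative, not taken from the paper:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with metric depth (m) to a camera-frame 3D point (pinhole model)."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: pixel at the principal point of a 640x480 image, depth 1.5 m
p = backproject(u=320, v=240, depth=1.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# -> [0.0, 0.0, 1.5] (the point lies on the optical axis)
```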
We further validate SpatialPoint through real-robot deployment across three representative tasks: language-guided robotic arm grasping at specified locations, object placement to target destinations, and mobile robot navigation to goal positions.
The robot arm localizes the target box and places it at the specified destination.
The mobile robot navigates to the target location and retrieves the bottle.
A longer real-robot task with instruction-conditioned target prediction shows the full manipulation sequence.
@article{zhu2026spatialpoint,
  title   = {SpatialPoint: Spatial-aware Point Prediction for Embodied Localization},
  author  = {Qiming Zhu and Zhirui Fang and Tianming Zhang and Chuanxiu Liu and Xiaoke Jiang and Lei Zhang},
  journal = {arXiv preprint arXiv:2603.26690},
  year    = {2026}
}