We present COLA, a proprioception-only reinforcement learning approach that unifies leader and follower behaviors within a single policy. Trained in a closed-loop environment that models the dynamic interactions among the humanoid, the object, and the human, COLA implicitly predicts object motion to enable compliant collaboration and maintain load balance. Experiments in both simulation and real-world scenarios demonstrate effective collaborative carrying across diverse objects and conditions, highlighting COLA’s practical value for human–robot object transportation.
Our training pipeline consists of three stages: (i) we train a base whole-body control policy that serves as a robust low-level controller; (ii) in the closed-loop training environment, we train a residual teacher policy on top of this controller, using privileged information, for human-humanoid collaboration; (iii) we distill the teacher policy into a student policy via behavioral cloning for real-world deployment.
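The distillation stage (iii) can be sketched as supervised regression from student observations to teacher actions. The sketch below is illustrative only: the dimensions, the linear policies, and the dataset are hypothetical stand-ins, not the paper's architecture. It shows the core idea of behavioral cloning a proprioception-only student against a teacher that also sees privileged state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper)
P, V, A = 8, 4, 3   # proprioceptive dim, privileged dim, action dim
N = 2048            # number of (observation, teacher-action) pairs

# Stand-in "teacher": a linear policy acting on proprioception + privileged state
W_teacher = rng.normal(size=(A, P + V))
proprio = rng.normal(size=(N, P))
privileged = rng.normal(size=(N, V))
teacher_actions = np.concatenate([proprio, privileged], axis=1) @ W_teacher.T

# Student: proprioception-only linear policy, fit by behavioral cloning,
# i.e. gradient descent on the MSE to the teacher's actions
W_student = np.zeros((A, P))
lr = 0.05
for _ in range(500):
    pred = proprio @ W_student.T
    grad = (pred - teacher_actions).T @ proprio / N  # gradient of the MSE loss
    W_student -= lr * grad

mse = np.mean((proprio @ W_student.T - teacher_actions) ** 2)
print(f"BC imitation MSE (proprioception-only student): {mse:.3f}")
```

The residual imitation error here reflects exactly what the student cannot recover from proprioception alone; in practice the paper's closed-loop training lets the student infer such quantities (e.g. object motion) implicitly from proprioceptive history rather than a single observation.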
@article{du2025learning,
title={Learning Human-Humanoid Coordination for Collaborative Object Carrying},
author={Yushi Du and Yixuan Li and Baoxiong Jia and Yutang Lin and Pei Zhou and Wei Liang and Yanchao Yang and Siyuan Huang},
journal={arXiv preprint arXiv:2510.14293},
year={2025}
}
📽️ More video demonstrations are coming!
🙏 We are especially grateful to Yuhan Li, Peiyuan Zhi, Le Ma, and Peiyang Li for their invaluable help and support during the filming of our demonstration video.