Humanoid Seoul

Bridging Humanoid Robotics and Foundation Models: Embodied Intelligence and AI Integration

October 2, 2025 · COEX, Seoul

Full-day Workshop

Objective and Scope

The integration of foundation models, such as Large Language Models (LLMs), Vision-Language Models (VLMs), Vision-Language-Action Models (VLAMs), and other multimodal architectures, has the potential to fundamentally reshape how humanoid robots perceive, reason, act, and interact in the world. Until recently, general-purpose humanoid robots were constrained by the limited scalability of traditional planning, control, and perception pipelines. Robots designed to understand instructions, adapt to novel environments, and engage in human-like interaction often lacked the flexibility required for general use.

Foundation models offer a major step forward, enabling abstract reasoning, grounding through multimodal inputs, and rapid generalization across tasks and domains. Integrating these models into humanoid robots, however, raises open challenges: grounding abstract knowledge in sensorimotor data, combining model-based control with learned representations, ensuring safety and interpretability, and enabling embodied models to interact socially and linguistically with humans.

Humanoid platforms, ranging from full-body anthropomorphic robots to more abstract bimanual systems, are especially well suited to this line of research. Their structural similarity to humans and their versatility make them ideal testbeds for exploring foundation model grounding, physical reasoning, and social interaction in human environments.

This workshop is designed to promote collaboration between academic researchers and industry practitioners. The program features keynote talks, a cross-sector panel discussion, and contributed presentations, with a particular focus on emerging research at the intersection of humanoid robotics and foundation models. By fostering this convergence, the workshop aims to build a roadmap toward generalizable, human-aligned, and socially capable humanoid systems powered by foundation models.

Preliminary Program

09:00 - 09:30 Welcome & Introduction
09:30 - 10:00 Keerthana Gopalakrishnan
10:00 - 10:30 Booster Presentations of Accepted Posters
10:30 - 11:00 ☕ Coffee Break | Poster Session
11:00 - 11:30 Rudolf Lioutikov
11:30 - 12:00 Kento Kawaharazuka | Foundation Model-based Recognition and Planning for Humanoid Robots
12:00 - 14:00 🍽️ Lunch break
14:00 - 14:30 Joel Jang | Why do we need Humanoid Robots? Perspective with video world models
14:30 - 15:00 Roger Qiu | Blurring the line between humans and humanoids
15:00 - 16:00 ☕ Coffee Break
16:00 - 16:45 Discussion Panel
16:45 - 17:00 Closing Remarks
17:00 End

Accepted Posters

Can We Detect Failures Without Failure Data? Uncertainty-Aware Runtime Failure Detection for Imitation Learning Policies
Chen Xu1, Tony Khuong Nguyen1, Emma Dixon1, Christopher Rodriguez1, Patrick Miller1, Robert Lee2, Paarth Shah1, Rares Andrei Ambrus1, Haruki Nishimura1, Masha Itkina1
1Toyota Research Institute (TRI), 2Woven by Toyota (WbyT)
Guiding Task and Motion Planning with Large Language Models
Ilyass Taouil1, Michal Ciebelski1, Victor Dhedin1, Angela Dai1, Majid Khadiv1
1Technical University of Munich (TUM)
Preference-Based Long-Horizon Robotic Stacking with Multimodal Large Language Models
Wanming Yu1, Adrian Röfer2, Abhinav Valada2, Sethu Vijayakumar1
1University of Edinburgh, 2University of Freiburg
ActiveGrounder: 3D Visual Grounding with Object-Hull-Guided Active Observation
Dasol Hong1, Juhye Park1, Hyun Myung1
1Urban Robotics Lab, KAIST
BEAST: Efficient Tokenization of B-Splines Encoded Action Sequences for Imitation Learning
Hongyi Zhou1, Weiran Liao1, Xi Huang1, Yucheng Tang1, Fabian Otto1, Xiaogang Jia1, Xinkai Jiang1, Simon Hilber1, Ge Li1, Qian Wang1, Ömer Erdinç Yağmurlu1, Nils Blank1, Moritz Reuss1, Rudolf Lioutikov1
1Intuitive Robots Lab, Karlsruhe Institute of Technology, Germany

Contact

For any inquiries related to the workshop, submissions, or participation, please reach out to us at:

Email: dionis.totsila@inria.fr

We look forward to hearing from you!