Publication

2025

CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision

Gi-Cheon Kang*, Junghyun Kim*, Kyuhwan Shim, Jun Ki Lee, Byoung-Tak Zhang (* co-first authorship)

Robotics: Science and Systems (RSS) 2025
3rd Workshop on Language and Robot Learning (LangRob) @ CoRL 2024

Official Link · Paper · Poster · GitHub

Teaching robots desired skills in real-world environments remains challenging, especially for non-experts. A key bottleneck is that collecting robotic data often requires expertise or specialized hardware, limiting accessibility and scalability. We posit that natural language offers an intuitive and accessible interface for robot learning. To this end, we study two aspects: (1) enabling non-experts to collect robotic data through natural language supervision (e.g., “move the arm to the right”) and (2) training robot policies directly from this supervision. Specifically, we introduce a data collection framework that collects robot demonstrations based on natural language supervision and further augments these demonstrations. We then present CLIP-RT, a new vision-language-action (VLA) model that learns language-conditioned visuomotor policies from this supervision. CLIP-RT adapts the pretrained CLIP model and learns to predict language-based motion primitives via contrastive imitation learning. We train CLIP-RT on the Open X-Embodiment dataset and finetune it on in-domain data collected by our framework. In real-world evaluations, CLIP-RT demonstrates strong capabilities in learning novel manipulation skills, outperforming OpenVLA (7B parameters) by 24% in average success rates, while using 7x fewer parameters (1B). We further assess CLIP-RT’s capabilities in few-shot generalization and collaborative scenarios involving large pretrained models or humans. In simulated environments, CLIP-RT also yields strong performance, achieving a 92.8% average success rate on the LIBERO benchmark with an inference throughput of 163 Hz.
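The contrastive selection step can be pictured with vanilla CLIP: embed the current camera frame on one side and candidate language-based motion primitives on the other, then pick the highest-scoring primitive. The sketch below is a minimal illustration of that idea with the frozen OpenAI CLIP model, not the CLIP-RT implementation (which fine-tunes CLIP via contrastive imitation learning); the image file, instruction, prompt template, and primitive set are hypothetical.

```python
# Minimal sketch: ranking language-based motion primitives with frozen CLIP.
# This is NOT the CLIP-RT model; it only illustrates the contrastive
# image-vs-action-text scoring idea from the abstract.
import clip   # pip install git+https://github.com/openai/CLIP.git
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical inputs: one camera frame and a task instruction.
frame = preprocess(Image.open("frame.png")).unsqueeze(0).to(device)
instruction = "pick up the red block"

# Illustrative set of language-based motion primitives.
primitives = [
    "move the arm to the right",
    "move the arm to the left",
    "lower the arm",
    "close the gripper",
]
prompts = [f"{instruction}. next action: {p}" for p in primitives]
tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_feat = model.encode_image(frame)
    text_feat = model.encode_text(tokens)
    # Normalize so the dot product is cosine similarity.
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    scores = (image_feat @ text_feat.T).squeeze(0)

best = primitives[scores.argmax().item()]
print(f"selected primitive: {best}")
```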


2024

KIRINO: An Interactive Chatbot System for User Persona

Ganghun Kim*, Hyunjae Kim*, Geon Choi*, Kyuhwan Shim*, Myoung-Wan Koo (* co-first authorship)

Korean Computer Congress 2024
Sogang Convergence Technology Competition

Excellence Paper Award

Official Link · Paper · Slides · Poster

This research develops a dialog system that reflects each speaker's persona. We design an architecture based on Retrieval-Augmented Generation (RAG) that builds a persona for each speaker from the content of the conversation and retrieves it to personalize subsequent responses. The persona-based design targets common failures of interactive agents in sustained conversation with humans: inconsistent answers, out-of-context remarks, and uninteresting replies. Experimental results show that the proposed method improves the quality and naturalness of conversations, suggesting it can contribute to better conversational interfaces.
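As a rough illustration of the RAG-style persona flow described above, the sketch below stores persona facts as sentence embeddings, retrieves the facts most relevant to an incoming message, and prepends them to the generation prompt. It is not the KIRINO implementation; the encoder choice, persona facts, and prompt format are assumptions made for the example.

```python
# Minimal sketch of a RAG-style persona pipeline (not the KIRINO system).
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Persona facts previously extracted from the conversation (hypothetical).
persona_facts = [
    "The user enjoys hiking on weekends.",
    "The user works as a graphic designer.",
    "The user has a cat named Mango.",
]
fact_vecs = encoder.encode(persona_facts, normalize_embeddings=True)

def build_prompt(user_message: str, top_k: int = 2) -> str:
    """Retrieve the persona facts most relevant to the message
    and prepend them to the generation prompt."""
    query_vec = encoder.encode([user_message], normalize_embeddings=True)[0]
    scores = fact_vecs @ query_vec            # cosine similarity per fact
    top = np.argsort(-scores)[:top_k]
    context = "\n".join(persona_facts[i] for i in top)
    return f"Known persona:\n{context}\n\nUser: {user_message}\nAssistant:"

print(build_prompt("Any plans for Saturday?"))
```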
