Welcome to the Multimodal Interactive Intelligence Laboratory (MIIL). At MIIL, we are dedicated to creating intelligent systems that can comprehend and engage in meaningful, context-aware conversations while seamlessly interacting with their surroundings. To this end, we recognize the significance of multimodal AI, which combines speech, vision, gesture, and touch to enable a natural understanding of our world. Moreover, we view conversational and embodied AI as critical steps toward Artificial General Intelligence: the ultimate goal of creating machines that possess comprehensive intelligence and interact with the world in a human-like manner.

Join us in this pursuit as we explore the frontiers of multimodal interactive intelligence, advance the capabilities of conversational and embodied AI, and pave the way toward a future where machines possess human-like intelligence and interact with us in a truly natural and embodied manner.

Latest News

Professional Service

Paul will serve as a Finance Chair for KCCV 2024.

Publication

Two papers accepted to CVPR 2024.

Professional Service

Paul will serve as an Area Chair for ICML 2024.

Professional Service

Paul will serve as an Area Chair for CVPR 2024.

Professional Service

Paul will serve as an Area Chair for ICLR 2024.

Professional Service

Paul will serve as an Area Chair for NeurIPS 2023.

Publication

Four papers accepted to CVPR 2023.

Professional Service

Paul will serve as an Area Chair for ICML 2023.