Large Language Models for Autonomous Driving
(LLM4AD)

Purdue Digital Twin Lab

Our Goal

Our work focuses on pioneering research at the intersection of LLMs, VLMs, and autonomous driving. We are investigating how advanced language understanding can improve vehicle decision-making and human-vehicle interaction, thereby enhancing the safety and efficiency of autonomous systems. Our goal is to push the boundaries of AI in automotive technology and lead the way in developing smarter, safer, and more intuitive autonomous vehicles for the future.

February 27, 2024

One Paper Accepted @CVPR 2024!

Our paper "MAPLM: A Real-World Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding" was accepted by CVPR 2024!

January 20, 2024

Talk2Drive Featured Video is Available

We released the featured video of our Talk2Drive framework. This condensed video covers the previously shown parking, intersection, and highway scenarios, providing an overview of the features within our Talk2Drive framework.

December 7, 2023

One New Benchmark (LaMPilot)!

Our new benchmark paper "LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs" is now available!