MACP: Efficient Model Adaptation for Cooperative Perception

1 Purdue University, 2 Tsinghua University,
3 University of Illinois Urbana-Champaign, 4 University of Virginia

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024
*Indicates Equal Contribution

Abstract

[Figure: Overview of the MACP Framework]

Vehicle-to-vehicle (V2V) communications have greatly enhanced the perception capabilities of connected and automated vehicles (CAVs) by enabling information sharing to "see through the occlusions", resulting in significant performance improvements. However, developing and training complex multi-agent perception models from scratch can be expensive and unnecessary when existing single-agent models show remarkable generalization capabilities. In this paper, we propose a new framework termed MACP, which equips a single-agent pre-trained model with cooperation capabilities. We approach this objective by identifying the key challenges of shifting from single-agent to cooperative settings, adapting the model by freezing most of its parameters, and adding a few lightweight modules. We demonstrate in our experiments that the proposed framework can effectively utilize cooperative observations and outperform other state-of-the-art approaches in both simulated and real-world cooperative perception benchmarks, while requiring substantially fewer tunable parameters with reduced communication costs.
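The core idea above, keeping most of the pre-trained single-agent model frozen and training only a few lightweight add-on modules, can be sketched as follows. This is an illustrative stand-in, not the authors' code: the module names and parameter counts are hypothetical, and plain Python parameter tallies stand in for a real deep-learning framework.

```python
# Illustrative sketch (assumptions, not the MACP implementation):
# freeze a pre-trained single-agent detector and tune only small
# cooperation modules, then measure the tunable-parameter budget.

# Hypothetical parameter counts for a pre-trained single-agent model (frozen).
pretrained = {
    "backbone": 30_000_000,
    "neck": 5_000_000,
    "detection_head": 2_000_000,
}

# Lightweight modules added for cooperative perception (names are illustrative).
adapters = {
    "fusion_adapter": 300_000,
    "pose_embedding": 50_000,
}

def tunable_fraction(frozen: dict, trainable: dict) -> float:
    """Fraction of all parameters actually updated during adaptation."""
    total = sum(frozen.values()) + sum(trainable.values())
    return sum(trainable.values()) / total

frac = tunable_fraction(pretrained, adapters)
print(f"tunable parameters: {frac:.2%} of the full model")
```

Under these made-up counts, under 1% of the model is trainable, which illustrates why this style of adaptation needs far fewer tunable parameters than training a cooperative model from scratch.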

Performance Results

Effective Cooperation

BibTeX

@inproceedings{ma2024macp,
  title={MACP: Efficient Model Adaptation for Cooperative Perception},
  author={Ma, Yunsheng and Lu, Juanwu and Cui, Can and Zhao, Sicheng and Cao, Xu and Ye, Wenqian and Wang, Ziran},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={3373--3382},
  year={2024}
}