Abstract:
To address detail blurring and the low target-to-background ratio in vehicle-mounted millimeter-wave radar images, this paper presents an improved object detection and classification network based on YOLOv5s. First, the raw data are processed with frame synchronization and the minimum-bounding-rectangle method, using joint calibration of the radar with camera and lidar, to obtain millimeter-wave radar Range-Azimuth images and their annotations. Second, the upsampling module of YOLOv5s is replaced with CARAFE, enabling the network to integrate features across different scales more comprehensively. In addition, the CIoU loss is adopted as the network loss function to improve the precision of the prediction results. Finally, the detection head is refined into a decoupled head, so that the detection and classification tasks are handled in parallel by separate branches. Experiments on measured data show that the proposed method improves mAP@0.5 by 3.3% and mAP@0.5:0.95 by 2.0% over the original YOLOv5s network. The improved network performs well on small objects, meets the accuracy and real-time requirements of detection and classification, and is therefore suitable for deployment in vehicle-mounted embedded systems.
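As a concrete illustration of the loss mentioned above, the standard CIoU formulation (IoU penalized by normalized center distance and an aspect-ratio consistency term) can be sketched in plain Python. This is a minimal sketch of the published CIoU definition, not the authors' implementation; the function name and corner-coordinate box format are assumptions.

```python
import math

def ciou_loss(box1, box2):
    """CIoU loss between two boxes in (x1, y1, x2, y2) corner format.

    Returns 1 - CIoU; identical boxes give 0, and the value grows with
    center distance and aspect-ratio mismatch.
    """
    x1, y1, x2, y2 = box1
    X1, Y1, X2, Y2 = box2
    w1, h1 = x2 - x1, y2 - y1
    w2, h2 = X2 - X1, Y2 - Y1
    # Intersection-over-union of the two boxes
    iw = max(0.0, min(x2, X2) - max(x1, X1))
    ih = max(0.0, min(y2, Y2) - max(y1, Y1))
    inter = iw * ih
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / union
    # Squared distance between box centers
    rho2 = ((x1 + x2) / 2 - (X1 + X2) / 2) ** 2 \
         + ((y1 + y2) / 2 - (Y1 + Y2) / 2) ** 2
    # Squared diagonal of the smallest enclosing box
    c2 = (max(x2, X2) - min(x1, X1)) ** 2 \
       + (max(y2, Y2) - min(y1, Y1)) ** 2
    # Aspect-ratio consistency term v and its trade-off weight alpha
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - (iou - rho2 / c2 - alpha * v)
```

Relative to plain IoU loss, the distance and aspect-ratio penalties keep the gradient informative even when predicted and ground-truth boxes do not overlap, which is what makes CIoU attractive for the small, sparse targets typical of radar Range-Azimuth images.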