An Improved YOLOv5s-Based Object Detection and Classification Method for Vehicle-Mounted Radar Images

Abstract: To address the blurred detail and small target-to-image ratio of vehicle-mounted millimeter-wave radar images, an improved object detection and classification network based on YOLOv5s is proposed. First, the raw dataset is processed with frame synchronization and the minimum-bounding-rectangle method to obtain millimeter-wave radar range-azimuth images and annotations jointly calibrated with camera and lidar. Next, the upsampling module of YOLOv5s is replaced with CARAFE so that the network fully fuses features at different scales, and the loss function is replaced with the Complete IoU loss (CIoU Loss) to make predictions more precise. Finally, a decoupled head processes detection and classification in parallel through separate branches. Experiments on measured data show that the proposed method improves mAP@0.5 and mAP@0.5:0.95 by 3.3% and 2.0%, respectively, over the original YOLOv5s. It is especially effective for small-target detection, meets both accuracy and real-time requirements for detection and classification, and is therefore well suited for deployment in vehicle-mounted embedded systems.
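The CIoU loss mentioned in the abstract augments plain IoU with a normalized center-distance penalty and an aspect-ratio consistency term. The sketch below is an illustrative, self-contained implementation of the standard CIoU formulation, not the paper's actual code; the function name and box format `(x1, y1, x2, y2)` are assumptions for the example.

```python
import math

def ciou_loss(box1, box2, eps=1e-9):
    """Illustrative CIoU loss between two boxes in (x1, y1, x2, y2) format.

    CIoU = IoU - rho^2/c^2 - alpha * v, and the loss is 1 - CIoU, where
    rho is the distance between box centers, c is the diagonal of the
    smallest enclosing box, and v measures aspect-ratio inconsistency.
    """
    x1, y1, x2, y2 = box1
    X1, Y1, X2, Y2 = box2

    # Intersection over union
    iw = max(0.0, min(x2, X2) - max(x1, X1))
    ih = max(0.0, min(y2, Y2) - max(y1, Y1))
    inter = iw * ih
    w1, h1 = x2 - x1, y2 - y1
    w2, h2 = X2 - X1, Y2 - Y1
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Squared center distance, normalized by the squared diagonal
    # of the smallest box enclosing both inputs
    cw = max(x2, X2) - min(x1, X1)
    ch = max(y2, Y2) - min(y1, Y1)
    c2 = cw * cw + ch * ch + eps
    rho2 = ((x1 + x2 - X1 - X2) ** 2 + (y1 + y2 - Y1 - Y2) ** 2) / 4.0

    # Aspect-ratio consistency term and its trade-off weight
    v = (4.0 / math.pi ** 2) * (
        math.atan(w2 / (h2 + eps)) - math.atan(w1 / (h1 + eps))
    ) ** 2
    alpha = v / (1.0 - iou + v + eps)

    return 1.0 - (iou - rho2 / c2 - alpha * v)
```

Because the distance penalty stays active even when the boxes do not overlap, CIoU provides a gradient for far-apart predictions where plain IoU loss saturates, which is one reason it helps localize small targets.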

       
