Research on a LiDAR Multimodal End-to-End Autonomous Driving Model for Night Scenes
Abstract
In recent years, autonomous driving has developed rapidly and many autonomous driving models have emerged. However, few of these models have been designed specifically for dark scenes such as nighttime. In nighttime scenes, camera performance degrades sharply, whereas LiDAR actively emits laser pulses, is unaffected by ambient lighting, and can effectively capture road information. This paper therefore proposes an improved end-to-end autonomous driving model. Building on the integrated collision prediction-and-decision architecture of the original FusionAD, it adds a feature adaptive sampling layer and a gradient dynamic adjustment layer, and replaces Newton's method with mini-batch gradient descent in the collision-optimization module, forming a three-stage structure of perceptual feature purification, collision risk modeling, and mini-batch gradient descent optimization. The paper also introduces a multimodal fusion input scheme that combines LiDAR point cloud data with camera image data into a joint input system. On the Huawei ONCE dataset, the model was evaluated in performance-comparison experiments against end-to-end autonomous driving models under both multimodal and image-only input conditions. The results show that with the improved collision-optimization structure, the model reduces the collision rate by 50.8% in nighttime environments, lowers the displacement error by 17.3%, significantly improves trajectory planning performance, and increases training efficiency. These improvements provide stronger support for the robustness and adaptability of end-to-end autonomous driving systems, validate the potential of LiDAR in the autonomous driving field, and lay a foundation for future sensor-fusion research.
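The abstract's central algorithmic change, replacing Newton's method with mini-batch gradient descent in the collision-optimization step, can be illustrated with a minimal sketch. The cost function, waypoint representation, and all names below are assumptions for illustration only, not the paper's actual formulation: each planned 2-D waypoint is nudged away from nearby obstacles by descending a simple repulsive collision cost on randomly sampled mini-batches of waypoints.

```python
import random

def collision_grad(point, obstacles, safe_dist=2.0):
    """Gradient of a hinge-squared penalty sum((safe_dist - d)^2 for d < safe_dist)
    with respect to the waypoint position (an assumed illustrative cost)."""
    gx = gy = 0.0
    for ox, oy in obstacles:
        dx, dy = point[0] - ox, point[1] - oy
        d = (dx * dx + dy * dy) ** 0.5
        if 1e-9 < d < safe_dist:
            # d/dp of (safe_dist - d)^2 is -2*(safe_dist - d) * (p - o)/d
            scale = -2.0 * (safe_dist - d) / d
            gx += scale * dx
            gy += scale * dy
    return gx, gy

def minibatch_descent(trajectory, obstacles, lr=0.1, batch=4, iters=200, seed=0):
    """Mini-batch gradient descent: each iteration updates only a random
    subset of waypoints, instead of a full Newton step over the whole cost."""
    rng = random.Random(seed)
    traj = [list(p) for p in trajectory]
    idx = list(range(len(traj)))
    for _ in range(iters):
        for i in rng.sample(idx, min(batch, len(idx))):  # one mini-batch
            gx, gy = collision_grad(traj[i], obstacles)
            traj[i][0] -= lr * gx
            traj[i][1] -= lr * gy
    return traj

# A straight-line trajectory passing close to one obstacle: the waypoints
# within the safety radius are pushed outward until they clear it.
obstacles = [(5.0, 0.5)]
trajectory = [(x, 0.0) for x in range(11)]
optimized = minibatch_descent(trajectory, obstacles)
```

Compared with a Newton step, which needs the Hessian of the collision cost over all waypoints, each mini-batch update touches only a few points and needs only first-order gradients, which is the trade-off the abstract's efficiency claim alludes to.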