Plane-Constrained Registration for Multi-Source Building LiDAR Point Clouds
Abstract
A single scanning platform is limited by its field of view, making it difficult to capture a complete three-dimensional scene. Fusing multi-source point clouds (e.g., from airborne and terrestrial laser scanning) can supplement data coverage and depth, enriching the 3D representation of a scene. However, because the sources differ in scanning perspective, coverage, and resolution, multi-source point cloud registration still faces challenges: strong dependence on the initial pose, sensitivity to noise, and susceptibility to local optima. To address these issues, this paper proposes a registration framework centered on planar features. The method first extracts planar patches from the point cloud via octree voxelization and merges them into larger, more descriptive planes. Geometric constraints based on the angles between plane normal vectors are then used to match corresponding planes in the source and target point clouds, and the matched planes are used to compute the rotation and translation parameters. Finally, the optimal registration matrix is selected from the candidate transformations through clustering-based screening and a similarity scoring mechanism. Experiments were conducted on a self-collected campus multi-source point cloud dataset. Comparisons with Super4PCS, GROR, and DIP show that the proposed method achieves the lowest rotational error while also performing well in translational error and post-registration root mean square error (RMSE), with particularly high accuracy in building scenes with distinct planar structures. The method is robust to noise and can provide high-quality initial poses for subsequent fine registration. This study demonstrates that the proposed planar-feature-guided registration method can effectively handle multi-source point cloud data with large resolution differences and low overlap.
Compared with methods based on low-level features or global optimization, it better exploits high-level structural features. The method provides a reliable solution for multi-source point cloud fusion of buildings, but its generalization to complex scenes (e.g., forests) remains limited. Future work could incorporate additional structural features to broaden its applicability.
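The core step of computing a rotation from matched plane normals can be sketched as follows. This is a minimal illustration of the general idea (a Kabsch/SVD alignment of corresponding normal vectors), not the authors' implementation; the function name and interface are hypothetical.

```python
import numpy as np

def rotation_from_normals(src_normals, tgt_normals):
    """Estimate the rotation aligning matched plane normals via SVD (Kabsch).

    Hypothetical helper illustrating how corresponding plane normals from the
    source and target point clouds can yield rotation parameters; it is not
    the paper's actual algorithm.
    """
    A = np.asarray(src_normals, dtype=float)   # (n, 3) source plane normals
    B = np.asarray(tgt_normals, dtype=float)   # (n, 3) matched target normals
    H = A.T @ B                                # 3x3 cross-covariance of normals
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # rotation mapping src -> tgt
    return R
```

With the rotation fixed, the translation can then be recovered from the plane offsets (or centroids), and candidate transformations from different plane triplets can be clustered and scored as the abstract describes. At least three matched planes with non-parallel normals are needed for the rotation to be uniquely determined.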