Plane-Constrained Registration for Multi-Source Building LiDAR Point Clouds

       

      Abstract:
        A single scanning platform is limited by its field of view, making it difficult to capture a complete three-dimensional scene. Fusing multi-source point clouds (e.g., airborne and terrestrial laser scanning) can effectively supplement data dimensions and depth, enhancing the 3D representation of a scene. However, owing to differences in scanning perspective, coverage, and resolution, multi-source point cloud registration still faces challenges such as strong dependence on the initial pose, sensitivity to noise, and susceptibility to local optima. To address these issues, this paper proposes a registration framework centered on planar features. The method first extracts planar patches from the point cloud through octree voxelization and merges them into larger, more descriptive planes. Geometric constraints based on the angles between plane normal vectors are then used to match corresponding planes in the source and target point clouds, and the matched planes are used to compute the rotation and translation parameters. Finally, the optimal registration matrix is selected from the candidate transformation matrices through clustering-based screening and a similarity scoring mechanism. Experiments were conducted on a self-collected campus multi-source point cloud dataset. Comparisons with Super4PCS, GROR, and DIP show that the proposed method achieves the best rotational error and also performs strongly in translational error and post-registration root mean square error (RMSE), with particularly high registration accuracy in building scenes that exhibit distinct planar structures. The method is robust to noise and can provide high-quality initial poses for subsequent fine registration.
This study demonstrates that the proposed planar-feature-guided registration method can effectively handle multi-source point cloud data with large resolution differences and low overlap. Compared with methods based on low-level features or global optimization, it better exploits high-level structural features. The method provides a reliable solution for multi-source point cloud fusion of buildings, but its generalization to complex scenes (e.g., forests) remains limited; future work could incorporate additional structural features to broaden its applicability.
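The central geometric step described in the abstract, computing a rigid transform from matched plane pairs, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the Kabsch/SVD solution for the rotation and the least-squares solve for the translation are standard choices assumed here, and the function name is hypothetical. Each plane is represented as a unit normal `n` and offset `d` with points satisfying `n · x = d`; under `x' = R x + t`, a source plane maps to normal `R n` and offset `d + (R n) · t`, which yields the linear constraint used below.

```python
import numpy as np

def estimate_transform_from_planes(src_planes, tgt_planes):
    """Estimate rotation R and translation t aligning matched planes.

    Each plane is a pair (n, d): unit normal n and offset d such that
    n . x = d for points x on the plane. Requires >= 3 matched pairs
    whose normals are not all parallel. Illustrative sketch only.
    """
    Ns = np.array([n for n, _ in src_planes])  # source normals, (k, 3)
    Nt = np.array([n for n, _ in tgt_planes])  # target normals, (k, 3)
    ds = np.array([d for _, d in src_planes])  # source offsets, (k,)
    dt = np.array([d for _, d in tgt_planes])  # target offsets, (k,)

    # Rotation via the Kabsch/SVD method on matched unit normals:
    # minimise sum ||R n_s - n_t||^2 over rotations R.
    H = Ns.T @ Nt
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T  # sign correction guards against reflections

    # A source plane n_s . x = d_s maps under x' = R x + t to the
    # plane (R n_s) . x' = d_s + (R n_s) . t; matching it to the
    # target plane gives the constraint n_t . t = d_t - d_s.
    t, *_ = np.linalg.lstsq(Nt, dt - ds, rcond=None)
    return R, t
```

Given three or more matched planes with linearly independent normals (e.g., two walls and a roof), the stacked constraints determine `t` uniquely; with noisy normals the least-squares solve averages out the error, which is consistent with the abstract's emphasis on robustness to noise.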

       
