Citation: Graph feature-enhanced point cloud sampling for object detection[J]. Chinese Journal of Engineering. DOI: 10.13374/j.issn2095-9389.2024.11.19.003

Graph feature-enhanced point cloud sampling for object detection

LiDAR produces large point clouds whose feature extraction demands significant computational resources, so efficient sampling is crucial for processing speed. Most points in a point cloud belong to the background, and point density is typically high near the sensor and decreases with distance. Over-concentrating the sampled points or retaining too many background points loses critical foreground information and degrades object detection performance. Traditional sampling techniques, such as farthest point sampling and random sampling, operate in an unsupervised manner and fail to exploit the rich feature information embedded in the point cloud. Although farthest point sampling has been widely adopted in object detection approaches with good results, it is inherently sequential, with each sampling step depending on the preceding one, which limits overall detection efficiency. To address these limitations, we propose a supervised point cloud sampling method based on graph features. The method samples in parallel and uses foreground/background classification as the supervisory signal, substantially increasing the proportion of foreground points in the sampled set. Compared with methods that supervise directly on point features, the graph-feature-based method captures more local point cloud structure and is therefore well suited to initial-stage sampling. Experiments on the KITTI autonomous driving dataset show that the proposed method achieves a 99% foreground-point ratio in the sampled set, which better preserves feature information in sparse regions such as occluded and distant targets and thereby improves the object detection network. With the proposed sampling, the mean average precision for the car, pedestrian, and cyclist classes under the hard difficulty setting improved by 8.58%, 2.27%, and 3.12%, respectively. The method is also flexible and easy to integrate into a wide range of 3D point cloud applications that depend on effective sampling.
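The abstract describes the method only at a high level. The following minimal PyTorch sketch illustrates one plausible form of graph-feature-based supervised sampling consistent with that description: k-NN edge features feed a per-point foreground classifier, and the top-scoring points are kept in a single parallel top-k step, trained with foreground/background labels. The class name GraphFeatureSampler, the layer widths, k = 16, and the binary cross-entropy supervision are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch (assumed, not the paper's code) of graph-feature-based
# supervised sampling: k-NN edge features -> per-point foreground score ->
# parallel top-k selection, supervised by foreground/background labels.
import torch
import torch.nn as nn

class GraphFeatureSampler(nn.Module):
    def __init__(self, in_dim=3, feat_dim=64, k=16):
        super().__init__()
        self.k = k
        # EdgeConv-style MLP over concatenated [x_i, x_j - x_i] pairs
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.score_head = nn.Linear(feat_dim, 1)  # foreground logit per point

    def forward(self, xyz, num_samples):
        # xyz: (B, N, 3) point coordinates
        B, N, _ = xyz.shape
        dist = torch.cdist(xyz, xyz)                         # (B, N, N)
        knn_idx = dist.topk(self.k, largest=False).indices   # (B, N, k)
        neighbors = torch.gather(
            xyz.unsqueeze(1).expand(B, N, N, 3), 2,
            knn_idx.unsqueeze(-1).expand(B, N, self.k, 3))   # (B, N, k, 3)
        center = xyz.unsqueeze(2).expand_as(neighbors)
        edge_feat = torch.cat([center, neighbors - center], dim=-1)  # (B, N, k, 6)
        point_feat = self.edge_mlp(edge_feat).max(dim=2).values      # (B, N, F)
        logits = self.score_head(point_feat).squeeze(-1)             # (B, N)
        # Parallel sampling: keep the top-scoring points in one shot,
        # unlike farthest point sampling, which is inherently sequential.
        sample_idx = logits.topk(num_samples, dim=1).indices         # (B, S)
        return sample_idx, logits

# Training-time supervision: per-point foreground/background labels
# (e.g., points inside ground-truth boxes), optimized with BCE loss.
if __name__ == "__main__":
    sampler = GraphFeatureSampler()
    xyz = torch.randn(2, 1024, 3)
    fg_labels = (torch.rand(2, 1024) > 0.8).float()  # dummy labels
    idx, logits = sampler(xyz, num_samples=256)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, fg_labels)
    loss.backward()
```

Because selection reduces to a single top-k over the predicted foreground scores, all points are scored in one forward pass, which is the parallelism the abstract contrasts with the step-by-step dependency of farthest point sampling.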