
A Study on the Effective Preprocessing Methods for Accelerating Point Cloud Registration

  • Chungsu, Jang (AI Autonomy Technology Center, Advanced Defense Science & Technology Research Institute, Agency for Defense Development) ;
  • Yongmin, Kim (AI Autonomy Technology Center, Advanced Defense Science & Technology Research Institute, Agency for Defense Development) ;
  • Taehyun, Kim (AI Autonomy Technology Center, Advanced Defense Science & Technology Research Institute, Agency for Defense Development) ;
  • Sunyong, Choi (AI Autonomy Technology Center, Advanced Defense Science & Technology Research Institute, Agency for Defense Development) ;
  • Jinwoo, Koh (AI Autonomy Technology Center, Advanced Defense Science & Technology Research Institute, Agency for Defense Development) ;
  • Seungkeun, Lee (AI Autonomy Technology Center, Advanced Defense Science & Technology Research Institute, Agency for Defense Development)
  • Received : 2023.02.02
  • Accepted : 2023.02.27
  • Published : 2023.02.28

Abstract

In visual SLAM and 3D data modeling, the Iterative Closest Point (ICP) method is a fundamental algorithm used across many technical fields. However, it relies on a nearest-neighbor search with high search time. This paper addresses that problem by applying an effective point cloud refinement method, and it further accelerates the point cloud registration process with an indexing scheme based on spatial decomposition. Experimental results show that the proposed point cloud refinement method yields better registration performance.

1. Introduction

In visual SLAM and 3D urban modeling, the point cloud is the primary data structure for representing the 3D world. However, most scanning sensors can only capture partial scans within their limited field of view. To merge partial 3D data, the Iterative Closest Point (ICP) algorithm (Besl and McKay, 1992) has been widely used in many applications for the registration of 3D rigid bodies. "Iterative Closest Point" means repeatedly finding pairs of nearest points: the ICP method iteratively finds matching pairs under a minimum-distance condition between two sets of points and estimates the optimal transformation matrix between the two rigid bodies. Fig. 1 shows a conceptual diagram of the ICP algorithm on the Stanford Bunny dataset (The Stanford 3D Scanning Repository, 1996).


Fig. 1. The example of ICP for a two-point cloud dataset.

Although ICP achieves a high-quality alignment, it sometimes falls into local minima. The success rate of the ICP method depends on the initial data distribution and the size of the data. ICP also relies on a search method with high search time: it exhaustively compares distances between the two point cloud datasets point by point, which affects both the speed and the accuracy of finding the transformation matrix. This is particularly problematic in applications that acquire 3D data with limited resources on mobile phones; here ICP has inherent limitations. The Light Detection and Ranging (LiDAR) point cloud data from mobile devices such as iPhones and iPads is often sparse and uneven; its density distribution is not uniform, and ICP registration of point cloud data with variable density can slow down the entire process. The purpose of this paper is to accelerate the point cloud registration process. Our approach has two components: a sampling method for point cloud data refinement and an index construction method for faster matching-pair search. We demonstrate the effects of both components through several experiments. This paper proceeds as follows: Chapter 2 introduces research trends and motivations, Chapter 3 describes the proposed methods, Chapter 4 presents experimental results, and Chapter 5 summarizes the contributions of this paper.
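The iterative matching-and-alignment loop described above can be sketched as a minimal point-to-point ICP step. This is an illustrative toy, not the authors' implementation: it uses the exhaustive nearest-neighbor search criticized in the text and the standard SVD-based (Kabsch) rigid-transform estimate.

```python
import numpy as np

def icp_iteration(A, B):
    """One point-to-point ICP step: brute-force matching + SVD alignment.
    A (m,3) is the moving cloud, B (n,3) the target. Returns R, t, and A moved."""
    # Exhaustive nearest-neighbour search: O(m*n) distance comparisons.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    matches = B[d2.argmin(axis=1)]
    # Kabsch: rigid transform minimizing squared distance to the matches.
    ca, cb = A.mean(axis=0), matches.mean(axis=0)
    H = (A - ca).T @ (matches - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t, A @ R.T + t
```

Iterating this step drives the moving cloud onto the target when the initial misalignment is small; larger perturbations are exactly where the local-minimum problem above appears.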

2. Background and Motivation

Research on improving point cloud registration performance has been ongoing for years. Rusinkiewicz and Levoy (2001) classified six factors affecting the performance of the ICP algorithm: selecting point sets, matching point sets, weighting matching point pairs, rejecting specific point pairs, choosing an error metric, and minimizing that error metric. For robustness, Fitzgibbon (2003) proposed LM-ICP, based on the Levenberg-Marquardt algorithm. Globally Optimal (Go)-ICP (Yang et al., 2016) is a well-known method based on traditional optimization techniques. For fast convergence and escape from local minima, Go-ICP derives registration error bounds over the 3D motion geometry space; with upper and lower bounds on the registration error, it supports both speedups and global optimality. Zhang et al. (2022) address the point cloud registration problem for 3D urban scenes using geometric primitives in a structural space. However, this method struggles with shapes close to a plane, and its performance is poor when multiple types of primitives are unavailable. Recently, with the popularity of deep neural networks, data-driven methods such as PointNet (Qi et al., 2017a) and PointNet++ (Qi et al., 2017b) have become popular. Successful applications of PointNet to point cloud registration include PointNetLK (Aoki et al., 2019), DeepVCP (Lu et al., 2019), feature-metric registration (FMR) (Huang et al., 2020), and CorsNet (Kurobe et al., 2020). However, most learning-based methods have been tested only on small datasets of fewer than 200,000 points, because the receptive field size of 3D convolutional networks is inherently limited. We therefore want a way to handle large datasets of more than half a million points, applicable to both optimization-based and data-driven approaches. So, this paper focuses on developing preprocessing steps for large-scale data processing.

3. Materials and Methods

3.1. Overview

Fig. 2 describes the proposed steps to accelerate point cloud registration. The upper diagram in Fig. 2 shows the original ICP pipeline: given two sparse point clouds with uneven distribution, ICP combines them into a single registered point cloud. The lower diagram shows our modification of the general ICP process: given two similar sparse and uneven point clouds, we first apply point cloud resampling to generate resampled point clouds. Data refinement such as resampling must not damage the 3D geometry. This paper also applies ICP with an indexing technique for a faster search. There are thus two primary components. On top of the existing ICP, we apply Lloyd's algorithm (Chen et al., 2018) to resample the sparse point clouds and distribute their points evenly. Zooming into the sparse and resampled point clouds shows that resampling creates a much richer and denser point cloud, and the refinement step using the Voronoi diagram even restores damaged geometry, such as holes. Our indexing scheme relies on spatial decomposition, such as Delaunay Triangulation (DT) (Berg et al., 2008) or the KD-tree (Berg et al., 2008), to accelerate the point cloud registration.


Fig. 2. The overview of the acceleration step for the point cloud registration.


Fig. 3. The two key components for faster point cloud registration.

3.2. Method 1 - Point Cloud Resampling

A sensor's raw point cloud usually suffers from noise, redundancy, incompleteness, and uneven distribution, which leads to poor point cloud registration performance. To improve registration performance, a data refinement step is required; after refinement, the point cloud data should be noise-free and evenly distributed. Downsampling comes to mind first, as it is commonly used to reduce the memory requirements of massive point clouds. However, random downsampling can distort the 3D geometry of the point cloud data, which degrades matching performance. Lloyd sampling using Centroidal Voronoi Tessellation (CVT) is one of the most popular alternatives (Chen et al., 2018). Lloyd sampling repeatedly shrinks the Voronoi cells and makes the point distribution uniform by minimizing a tailored energy function. Given tensor fields such as curvature, it provides a more compact representation of the underlying surface, which is why it is also called the Voronoi quantizer (Klatt et al., 2019). We use a Voronoi quantizer to resample the point cloud data in our task; the Voronoi diagram spreads the data points out evenly.

The rightmost panel of Fig. 4 shows how the algorithm proceeds: it constructs the Voronoi cells for the current sites, computes the centroid of each cell, and then shifts each site toward the centroid of its own cell. The algorithm repeats this process for several iterations until convergence. The total quantization energy, which represents the unevenness of the distribution, decreases over the iterations, so the Voronoi cells and points become uniformly spread out. Lloyd's sampling method turns the raw data into noise-free data that better represents the underlying shape. The leftmost panel of Fig. 5 shows the raw data in 3D space: the raw data obtained from the sensor has a very irregular distribution, and the 3D geometric information is damaged. Lloyd sampling generates point clouds with a uniform distribution, as seen in the rightmost panel of Fig. 5; it helps preserve a continuous geometric representation with a noise-free distribution and without occlusion artifacts.
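The site-to-centroid update above can be illustrated with a discrete Lloyd sketch (our own toy example under stated assumptions, not the paper's implementation): sample the domain densely, assign every sample to its nearest site, and move each site to the centroid of its cell.

```python
import numpy as np

def lloyd_relax(sites, samples, iters=10):
    """Discrete Lloyd relaxation: each iteration assigns every dense sample
    to its nearest site (its Voronoi cell, in the discrete sense), then
    shifts each site to the centroid of its cell."""
    sites = sites.copy()
    for _ in range(iters):
        d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)              # discrete Voronoi membership
        for k in range(len(sites)):
            cell = samples[owner == k]
            if len(cell) > 0:
                sites[k] = cell.mean(axis=0)   # move site to cell centroid
    return sites

def quantization_energy(sites, samples):
    """Mean squared distance from each sample to its nearest site."""
    d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

rng = np.random.default_rng(1)
samples = rng.uniform(0.0, 1.0, (20000, 2))    # dense proxy for the domain
clumped = rng.uniform(0.0, 0.3, (30, 2))       # uneven initial sites
relaxed = lloyd_relax(clumped, samples)
```

After relaxation the quantization energy drops sharply and the sites spread out over the whole domain, mirroring the behavior described for Fig. 4.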


Fig. 4. Example of Centroidal Voronoi Tessellation (CVT) method.


Fig. 5. Sampling result of Stanford Bunny data using Centroidal Voronoi Tessellation (CVT) methods.

3.3. Method 2 – Spatial Decomposition Indexing Scheme

After the data refinement step, we start to find corresponding pairs. To accelerate this step of the ICP algorithm, we can adopt optimization techniques, namely data indexing methods that support a faster matching-pair search.

Table 1 summarizes candidate optimization techniques for finding the corresponding pairs effectively. The brute-force exhaustive search is the baseline used in the original ICP. Given a source point cloud A with m points and a target point cloud B with n points, the brute-force method compares each point in A against all points in B and selects the point in B at the smallest distance. Repeating this for all points in A takes O(mn) time, which is very slow. Alternatively, we can search for point pairs using DT. In this method, we construct the Delaunay Triangulation on the target point cloud B and then solve the point location problem on that triangulation. Generating the Delaunay Triangulation (Berg et al., 2008) of B with n points takes O(n²) time, since the points lie in 3D space. Then, given a query point from A, finding the point in B at the smallest distance takes between O(log n) and O(n) time, so repeating this for all points in A takes between O(m log n) and O(mn) time. To solve this point location problem with the Delaunay Triangulation, we use a variation of the Quickhull algorithm (Berg et al., 2008). First, we project the Delaunay Triangulation onto a paraboloid in a space one dimension higher; the lower convex hull of the lifted points corresponds to the vertices of the Delaunay Triangulation. We then perform a binary search on the convex hull to find the face closest to the query point, which takes O(log n). However, in specific cases the binary search fails, returning a face on the upper hull, in which case we fall back to an exhaustive O(n) search. We can also use a KD-tree (Berg et al., 2008) to search points faster by subdividing the points of cloud B into regions. Constructing a KD-tree takes O(n log n) time, and searching for the closest point to a query from cloud A takes O(log n), so the total search time is O(m log n). Concretely, we first build the KD-tree by dividing the target point cloud with binary space partitioning, storing the points at the leaf nodes. At search time, we traverse the KD-tree to find the closest point at a leaf node; assuming the tree is balanced, each search takes O(log n) time.
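The complexity argument above can be sanity-checked by comparing a KD-tree query against the brute-force baseline. This sketch assumes SciPy's `cKDTree` is available; it is not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (200, 3))    # source cloud: m query points
B = rng.uniform(0.0, 1.0, (2000, 3))   # target cloud: n points

# Baseline: brute-force O(mn) pairwise distance comparison.
d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
brute_idx = d2.argmin(axis=1)

# KD-tree: O(n log n) construction, then roughly O(log n) per query.
tree = cKDTree(B)
_, tree_idx = tree.query(A)
```

Both methods return the same nearest neighbors; only the search cost differs, which is exactly what Table 1 summarizes.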

Table 1. Comparison of nearest neighbor search methods

Method                     Construction time    Search time (m queries against n points)
Brute force                –                    O(mn)
Delaunay Triangulation     O(n²)                O(m log n) to O(mn)
KD-tree                    O(n log n)           O(m log n)

4. Results

We focused on the effects of resampling and fast search methods on the performance of the point cloud matching and alignment problem. To verify the impact of the resampling strategy, this paper compares ICP performance on raw data and on data relaxed by Lloyd sampling, and reviews whether the ICP algorithm improves. The second comparison examines the performance of nearest-neighbor search methods: brute force, KD-tree, and Delaunay Triangulation. This experiment selects the fastest point cloud indexing method. The experiments were conducted on an Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz (up to 2.30 GHz) with 16 GB RAM.

4.1. Dataset

As datasets, this paper uses the Stanford 3D Scanning dataset and a real-world dataset, including a Fetch robot and a Pascucci shopping bag captured by the iPad Pro LiDAR sensor released by Apple in 2020. Although the exact specifications of this LiDAR sensor have not been officially announced, its performance has been analyzed in technical studies such as Spreafico et al. (2021) and is publicly known. From the Stanford 3D Scanning data, the Bunny, Armadillo, and Dragon models were prepared; the number of points in each is shown in Fig. 6. These datasets were selected to examine how the sampling and indexing techniques influence registration performance as the size and complexity of the 3D data gradually increase: the number of points grows in the order Bunny, Armadillo, Dragon. This paper finds the best refinement method by comparing ICP performance on raw data, randomly downsampled data, and data relaxed by Lloyd sampling, and reviews whether the ICP algorithm improves. The second experimental objective is to find the optimal search method by analyzing the search performance of data indexing techniques such as brute force, KD-tree, and Delaunay Triangulation. We created point cloud B through an arbitrary rigid transformation of the original data A, applying a random rotation and translation. We then measured how accurately and quickly ICP estimates the rigid rotation and translation matrix before and after applying the sampling and spatial decomposition methods, thereby verifying their effectiveness through the experiments.
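The arbitrary transformation used to create cloud B from cloud A can be sketched as follows. The rotation construction and the translation range here are illustrative assumptions, since the paper does not specify its exact transform parameters.

```python
import numpy as np

def random_rigid_transform(points, rng):
    """Apply a random rotation and translation to a point cloud, mirroring
    how the target cloud B was generated from the source cloud A."""
    # Random rotation via QR decomposition of a Gaussian matrix
    # (not necessarily Haar-uniform, which is fine for this sketch).
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1                  # keep a proper rotation (det = +1)
    t = rng.uniform(-0.5, 0.5, 3)      # random translation (assumed range)
    return points @ Q.T + t, Q, t

rng = np.random.default_rng(42)
A = rng.standard_normal((1000, 3))     # stand-in for an original dataset
B, R, t = random_rigid_transform(A, rng)
```

Registration then amounts to recovering R and t from A and B alone, which is what the experiments measure before and after preprocessing.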


Fig. 6. Number of points about the Stanford dataset selected for the experiment.

We directly collected the Pascucci bag and Fetch robot data shown in Fig. 7 using an iPad. Note that because the two point clouds were captured from the left and right sides, they intersect only in the central region. The two point cloud datasets (right side and left side) were obtained from different angles; this setting checks effectiveness in extreme cases.


Fig. 7. Number of points about the Pascucci bag dataset and Fetch robot dataset captured for the experiment.

4.2. Analysis of Experimental Results

4.2.1. Experiment 1 - Bunny (35,947 points)

This paper created a second point cloud B through an arbitrary transformation of the original Bunny dataset A, as shown in Fig. 8. The two point clouds were then registered with the ICP algorithm. Fig. 9 compares the convergence behavior of ICP before and after applying the data sampling technique: the elapsed time (s), the translation error, and the rotation error (°). Compared with running ICP for 100 iterations on the raw point cloud data, performing Lloyd sampling while keeping the same number of points as the original data reduced the convergence time of the ICP algorithm by more than half; there was no difference in the convergence of the translation and rotation errors. In Fig. 9, the points of greatest time gain are marked (■, ▲). When the number of points to register is reduced by more than half through Lloyd sampling, the translation error converges to zero in about 0.3 seconds (■) and the rotation error in about 0.6 seconds (▲). The Lloyd-sampling preprocessing step achieves a 100 percent registration success rate while drastically reducing the runtime of the traditional ICP algorithm. Importantly, as shown in Fig. 4, Lloyd sampling does not change the 3D surface curvature of the raw data. When the original dataset is halved to accelerate ICP convergence, the convergence time naturally halves in both cases, whether reducing randomly or with Lloyd sampling. However, since random sampling deforms the 3D data, its translation and rotation errors converge significantly more poorly than with Lloyd sampling. In contrast, Lloyd sampling preserves the surface curvature of the 3D data, so its convergence is better than that of random sampling.


Fig. 8. Example of the ICP about Bunny data.


Fig. 9. Experiment result after Lloyd resampling about Bunny.

Fig. 10 compares ICP performance before and after applying the KD-tree and Delaunay Triangulation to the Bunny dataset. In terms of algorithm speed, the KD-tree is the best indexing scheme, while Delaunay Triangulation shows little improvement. The convergence rates of the translation and rotation errors were the same for all three methods, so the indexing scheme does not affect finding the exact matching pairs.


Fig. 10. Experiment result: comparison of results before and after applying indexing techniques about Bunny.

4.2.2. Experiment 2 – Armadillo (172,974 points)

In Experiment 2, Lloyd sampling and the indexing techniques were applied to the Armadillo dataset to see how effective the data refinement method and indexing scheme are when the dataset size increases fivefold. The experimental settings were the same: one point cloud was transformed to create two point clouds, a left-side point cloud (A) and a right-side point cloud (B).

In Fig. 12, Lloyd resampling boosts performance over raw data, and Lloyd sampling combined with downsampling is faster than plain random downsampling. The markers (■, ▲) in Fig. 12 show that the Lloyd-sampled ICP algorithm takes less time at the same iteration count. This performance gain comes from the Lloyd preprocessing step finding matching points better. In terms of error convergence speed, there was no significant improvement: once the dataset is large enough, as in this second experiment, Lloyd sampling improves speed but does not significantly affect the convergence of the transformation errors. This is because, statistically, the ICP algorithm has no mechanism to significantly improve the transformation error once the correct matching pairs have been found initially.


Fig. 11. Experiment: Armadillo data.


Fig. 12. Experiment result: Lloyd resampling about Armadillo.


Fig. 13. Experiment result: comparison of results before and after applying indexing techniques about Armadillo.

Comparing the indexing techniques on this 3D point cloud, the brute-force method was the slowest, while the KD-tree and Delaunay Triangulation were similar. As in Experiment 1, the indexing schemes did not affect the rotation and translation error rates. This result is presumably because the initial distribution of B, randomly moved from A, facilitates index construction. Considering the time complexity of each indexing scheme in Table 1, the dataset is not yet large enough to reveal the effect of index construction; therefore, the impact of the indexing scheme for 3D data is compared and analyzed in Experiment 3.

4.2.3. Experiment 3 – Dragon (437,645 points)

The Dragon data in Fig. 14 is a large dataset of 437,645 points. Interestingly, on the Dragon data, the brute-force method could not complete even a single iteration: finding the nearest-neighbor pairs just once requires about 191 billion distance computations (O(NM) = 437,645 × 437,645 = 191,533,146,025). Checking the effect of Lloyd resampling, there is again a performance improvement in algorithm speed on the Dragon data; the markers (■, ▲) in Fig. 15 show a trend similar to Fig. 12. Through Experiments 1 and 2, Lloyd sampling was identified as an effective method for accelerating the registration of large datasets. All three experiments involving Lloyd sampling used the KD-tree as the common indexing technique.


Fig. 14. Experiment: dragon data.


Fig. 15. Experiment result: Lloyd resampling about dragon.

Fig. 16 intuitively shows the performance of the indexing schemes when the data size is large. The KD-tree finds matching pairs with stable performance, while Delaunay Triangulation fails to find matching pairs and its convergence diverges; the experiments confirmed that the translation and rotation errors also failed to converge. The Delaunay method does not find the correct correspondences and falls into a local optimum. As the time complexities in Table 1 suggest, the brute-force comparison was so slow that it could not be run on the current equipment, so it could not be visualized in the graph.


Fig. 16. Experiment result: comparison of results before and after applying indexing techniques about dragon.

4.2.4. Experiment 4 – Pascucci Bag (12,106 points)

For the real datasets of Experiments 4 and 5, accuracy was not measured because strict pose estimation is difficult and the overlap is low, below about 10%; pose estimation between two scenes is outside the scope of this study, so only acceleration performance was compared. In Experiment 4, the two objects were captured from the left and right sides with the iPad LiDAR sensor, so only the central part contains common information, as shown in Fig. 17.


Fig. 17. Experiment: Pascucci bag data.

Looking at the experimental results in Fig. 17, after Lloyd resampling the points become more uniformly distributed, so ICP performance improves; even the hole is restored by Lloyd resampling. Although the overlap ratio in Fig. 18 cannot be precisely quantified for data captured at arbitrary angles, visual inspection shows only a minimal overlap. Consequently, the transformation matrix could not be accurately estimated with the basic ICP algorithm. As shown in Fig. 18, performing ICP after the Lloyd preprocessing requires fewer iterations to reach the same matching performance.


Fig. 18. Experiment result: Lloyd resampling about the Pascucci bag.

Fig. 19 illustrates the search acceleration of the indexing schemes on real data. The KD-tree showed the most stable and best performance, while DT was faster than the brute-force method but failed to converge. This shows that in low-overlap situations, where matching pairs are hard to find, a balanced indexing structure like the KD-tree finds matching pairs most effectively.


Fig. 19. Experiment result: comparison of results before and after applying indexing techniques about the Pascucci bag.

4.2.5. Experiment 5 – Fetch Robot (96,446 points)

In the last experiment, the ICP algorithm was tested on point clouds of the Fetch robot. Although accuracy was not measured in this experiment, Fig. 20 confirms that the final matching result is distorted in the low-overlap situation. Referring to Fig. 20, without the Lloyd sampling process the result of repeated ICP iterations is visibly imperfect; after Lloyd sampling, the matching is better.


Fig. 20. Experiment result: Fetch robot data.

In Fig. 21, Lloyd resampling demonstrates the importance of preprocessing by accelerating the ICP algorithm. The Lloyd relaxation algorithm accelerates the matching speed, even if by less than a factor of two, while better finding the overlap area and restoring the deformed geometry in the low-overlap situation.


Fig. 21. Experiment result: Lloyd resampling method about Fetch robot.

For efficient neighbor search, the speed again improves in the order brute force → Delaunay → KD-tree. From Experiments 1 through 5, we conclude that performance differs depending on the sampling technique and the spatial indexing scheme. The experiments showed that Lloyd sampling, which accounts for geometric shape rather than simple random resampling, tends to increase the matching probability, and that the KD-tree's effective handling of 3D data accelerates the matching speed.


Fig. 22. Experiment result: comparison of results before and after applying indexing techniques about the Fetch robot.

5. Conclusions

This paper proposed and implemented preprocessing steps to accelerate point cloud registration. For mobile applications requiring fast processing of large data, there are better options than the brute-force method. This paper used the Lloyd sampling method as the first preprocessing step. Comparing spatial decomposition techniques, namely the brute-force comparison, Delaunay Triangulation, and the KD-tree, we showed that the KD-tree indexing scheme performs best as the second preprocessing step. To verify the effectiveness of the proposed methods, experiments were conducted on several point cloud datasets, including synthetic and real-world data. This work demonstrates the importance of reducing algorithmic complexity: through several experiments, we have shown that large point cloud data can be handled effectively using Lloyd sampling and a KD-tree. Developing algorithms that increase matching accuracy even in small-overlap environments by building on this preprocessing step remains future work.

Acknowledgments

This work was supported by the Agency for Defense Development of the Korean Government (925019302*). It was the result of the Multi-source Imagery Fusion System (MIFS) project (2019–2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aoki, Y., Goforth, H., Srivatsan, R. A., and Lucey, S., 2019. PointNetLK: Robust & efficient point cloud registration using PointNet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, June 15-20, pp. 7163-7172. https://doi.org/10.1109/CVPR.2019.00733
  2. Baker, S. and Matthews, I., 2004. Lucas- Kanade 20 years on: a unifying framework. International Journal of Computer Vision, 56, 221-255. https://doi.org/10.1023/B:VISI.0000011205.11775.fd
  3. Besl, P.J. and McKay, N.D., 1992. A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239-256. https://doi.org/10.1109/34.121791
  4. Chen, Z., Zhang, T., Cao, J., Zhang, Y.J., and Wang, C., 2018. Point cloud resampling using centroidal Voronoi tessellation methods. Computer-Aided Design, 102, 12-21. https://doi.org/10.1016/j.cad.2018.04.010
  5. Fitzgibbon, A.W., 2003. Robust registration of 2D and 3D point sets. Image and Vision Computing, 21(13-14), 1145-1153. https://doi.org/10.1016/j.imavis.2003.09.004
  6. Huang, X., Mei, G., and Zhang, J., 2020. Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, June 13-19, pp. 11366-11374. https://doi.org/10.1109/CVPR42600.2020.01138
  7. Klatt, M.A., Lovric, J., Chen, D., Kapfer, S.C., Schaller, F.M., Schonhofer, P.W., et al., 2019. Universal hidden order in amorphous cellular geometries. Nature Communications, 10(811), 1-9. https://doi.org/10.1038/s41467-019-08360-5
  8. Kurobe, A., Sekikawa, Y., Ishikawa, K., and Saito, H., 2020. Corsnet: 3d point cloud registration by deep neural network. IEEE Robotics and Automation Letters, 5(3), 3960-3966. https://doi.org/10.1109/LRA.2020.2970946.
  9. de Berg, M., Cheong, O., van Kreveld, M., and Overmars, M., 2008. Computational Geometry: Algorithms and Applications (3rd edition), Springer. https://doi.org/10.1007/978-3-540-77974-2
  10. Qi, C.R., Su, H., Mo, K., and Guibas, L.J., 2017a. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, July 21-26, pp. 652-660. https://doi.org/10.1109/CVPR.2017.16
  11. Qi, C.R., Yi, L., Su, H., and Guibas, L.J., 2017b. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, Dec. 4-9, pp. 5105-5114. https://dl.acm.org/doi/10.5555/3295222.3295263
  12. Rusinkiewicz, S. and Levoy, M., 2001. Efficient variants of the ICP algorithm. In Proceedings of the 3rd International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, May 28-June 1, pp. 145-152. https://doi.org/10.1109/IM.2001.924423
  13. The Stanford 3D Scanning Repository, 1996. Available online: http://graphics.stanford.edu/data/3Dscanrep/ (accessed on Aug. 19, 2014).
  14. Spreafico, A., Chiabrando, F., Lose, L.T., and Tonolo, F.G., 2021. The iPad Pro built-in LiDAR sensor: 3D rapid mapping tests and quality assessment. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 43, 63-69. https://doi.org/10.5194/isprs-archivesXLIII-B1-2021-63-2021
  15. Lu, W., Wan, G., Zhou, Y., Fu, X., Yuan, P., and Song, S., 2019. DeepVCP: An End-to-End Deep Neural Network for Point Cloud Registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, Oct. 27-Nov. 2, pp. 12-21. https://doi.org/10.1109/ICCV.2019.00010.
  16. Yang, J., Li, H., Campbell, D., and Jia, Y., 2016. Go-ICP: A globally optimal solution to 3D ICP point-set registration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(11), 2241-2254. https://doi.org/10.1109/TPAMI.2015.2513405
  17. Zhang, L., Guo,J., Cheng, Z., Xiao,J., and Zhang, X., 2022. Efficient Pairwise 3-D Registration of Urban Scenes via Hybrid Structural Descriptors. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-17. https://doi.org/10.1109/TGRS.2021.3091380