RoReg: Pairwise Point Cloud Registration with Oriented Descriptors and Local Rotations

IEEE TPAMI 2023


Haiping Wang*,1, Yuan Liu*,2, Qingyong Hu3, Bing Wang4, Jianguo Chen5, Zhen Dong†,1, Yulan Guo6, Wenping Wang7, Bisheng Yang†,1

1Wuhan University    2The University of Hong Kong    3University of Oxford    4The Hong Kong Polytechnic University    5DiDi Chuxing   
6Sun Yat-sen University    7Texas A&M University   
*The first two authors contributed equally.    †Corresponding authors.   

Abstract


We present RoReg, a novel point cloud registration framework that fully exploits oriented descriptors and estimated local rotations throughout the registration pipeline. Previous methods mainly focus on extracting rotation-invariant descriptors for registration but largely neglect the orientations of descriptors. In this paper, we show that oriented descriptors and estimated local rotations are useful in every stage of the registration pipeline, including feature description, feature detection, feature matching, and transformation estimation. Consequently, we design a novel oriented descriptor, RoReg-Desc, and apply it to estimate local rotations. These estimated local rotations enable us to develop a rotation-guided detector, a rotation coherence matcher, and a one-shot-estimation RANSAC, all of which greatly improve the registration performance. Extensive experiments demonstrate that RoReg achieves state-of-the-art performance on the widely used 3DMatch and 3DLoMatch datasets, and also generalizes well to the outdoor ETH dataset. In particular, we provide an in-depth analysis of each component of RoReg, validating the improvements brought by oriented descriptors and estimated local rotations.


Fig.1 Pipeline of RoReg. We apply oriented descriptors (RoReg-Desc) and estimated local rotations in all four steps of the point cloud registration pipeline, i.e., feature description, feature detection, feature matching, and transformation estimation. Specifically, we first extract RoReg-Desc (Sec.3.2) and demonstrate how to estimate local rotations from RoReg-Descs (Sec.3.3). Then, a set of keypoints is selected from the RoReg-Descs with the rotation-guided detector (Sec.3.4). The detected keypoints are matched into correspondences by the rotation coherence matcher (Sec.3.5). Finally, transformations are estimated by the OSE-RANSAC given the estimated correspondences (Sec.3.6).


Brief Introduction


Feature Description: RoReg-Desc extraction (Sec.3.2)

Fig.2. Rotation-equivariance and rotation-invariance properties of RoReg-Desc. Existing descriptors such as PerfectMatch and SpinNet focus on rotation-invariance. They typically rely on a local reference frame (LRF), which discards local orientation information to achieve rotation-invariance but is sensitive to noise and density variations and suffers from ambiguity. In contrast, our RoReg-Desc consists of a rotation-equivariant part that preserves the orientation information and a rotation-invariant part for matching. RoReg-Desc contains 60 permuted row-vectors: applying a rotation in the icosahedral group induces a specific permutation of the rotation-equivariant part $F$ of RoReg-Desc, and row-wise pooling yields a descriptor $d$ that is invariant to rotations in the icosahedral group. Proofs can be found in the supplementary material. We validate that the rotation-invariant part of RoReg-Desc is more robust and distinctive than LRF-based descriptors, and we further utilize the rotation-equivariant part of RoReg-Desc to improve the subsequent registration steps by introducing local rotations.
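The equivariance/invariance relationship above can be sketched numerically. This is a minimal stand-in, not the learned network: `F` plays the role of the rotation-equivariant part (60 row-vectors, one per icosahedral rotation), a random permutation stands in for the permutation induced by a group rotation, and row-wise max pooling yields the invariant part `d`.

```python
import numpy as np

rng = np.random.default_rng(0)

G = 60   # order of the icosahedral rotation group
C = 32   # feature channels (illustrative choice)

# Stand-in for the rotation-equivariant part F of RoReg-Desc.
F = rng.normal(size=(G, C))

# A rotation g in the icosahedral group acts on F as a row permutation P_g.
perm = rng.permutation(G)      # stand-in for the permutation induced by g
F_rotated = F[perm]            # equivariance: rows are permuted, not changed

# Pooling over the group axis absorbs any row permutation,
# giving the rotation-invariant part d used for matching.
d = F.max(axis=0)
d_rotated = F_rotated.max(axis=0)

assert np.allclose(d, d_rotated)   # invariance under the group action
```

Any permutation-invariant pooling (max, mean) over the group axis gives the same invariance property; the choice only affects descriptor distinctiveness.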


SO(3)-equivariance Usage: local rotation estimation (Sec.3.3)

Fig.3. Local rotation estimation on paired RoReg-Descs. RoReg-Desc encodes local orientation information, as discussed in Fig.2 and shown on the left. Thus, by utilizing the local orientation information of a RoReg-Desc pair, we can decode a local rotation that aligns their local orientations. Once the local rotation is correctly estimated, it exactly coincides with the global rotation of the scan pair. We utilize these local rotations (rotation-equivariance) in the subsequent steps of point cloud registration, i.e., feature detection, feature matching, and transformation estimation.
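A simple way to see how a relative rotation can be decoded from a pair of equivariant descriptors: since a group rotation acts as a known row permutation, the relative rotation can be recovered by scoring all 60 group elements and taking the best-aligning one. This is an exhaustive nearest-group-element sketch with random stand-in features and permutations, not RoReg's learned estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
G, C = 60, 32

# Precomputed permutation table: perms[i] stands in for the row permutation
# induced by the i-th rotation of the icosahedral group.
perms = [rng.permutation(G) for _ in range(G)]

F_a = rng.normal(size=(G, C))            # equivariant descriptor at point a
true_g = 17                              # ground-truth relative rotation index
F_b = F_a[perms[true_g]] + 0.01 * rng.normal(size=(G, C))  # rotated + noise

# Decode the relative rotation by exhaustively scoring all group elements:
# the correct element permutes F_a into near-perfect agreement with F_b.
scores = [np.linalg.norm(F_a[p] - F_b) for p in perms]
g_hat = int(np.argmin(scores))

assert g_hat == true_g
```

The exhaustive scoring costs only 60 comparisons per descriptor pair, which is why restricting the equivariant part to a finite rotation group keeps the decoding cheap.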


Feature Detection: rotation-guided detection (Sec.3.4)

Fig.4. We propose a novel loss for feature (keypoint) detection. Besides the distinctiveness check on rotation-invariant features, we further encourage keypoints to have clear, distinct local orientations so that correct local rotations can be estimated. Such a rotation-guided loss achieves stable convergence of the detector training and thus improves the repeatability and matchability of the detected keypoints.
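One way to turn "this point's local rotation is estimated accurately" into a supervision signal is to score each candidate keypoint by the angular error between its estimated local rotation and the ground-truth relative rotation of the scan pair. The function below is a hypothetical target of that kind, not the paper's exact loss.

```python
import numpy as np

def rotation_guided_score(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    """Hypothetical per-point saliency target: points whose local rotation
    matches the ground-truth relative rotation get a high score."""
    # Geodesic angle between the two rotations via the trace identity:
    # trace(R_gt^T R_est) = 1 + 2 cos(theta).
    cos_theta = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return float(np.exp(-theta))   # 1.0 for a perfect estimate, decays with error

R = np.eye(3)
assert abs(rotation_guided_score(R, R) - 1.0) < 1e-6
```

Training the detector to regress such a score would favor points whose local geometry pins down an unambiguous orientation, which is the intuition behind the rotation-guided loss.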


Feature Matching: rotation coherence matcher (Sec.3.5)

Fig.5. In feature matching, we observe that local rotations are essential for finding true correspondences: the local rotations estimated from true correspondences are correct and consistent with one another, while those estimated from false correspondences are randomly distributed. We therefore introduce this cue into a transformer-based matcher to encourage the established correspondences to share similar local rotations, which overcomes the influence of repetitive patterns to some extent.
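The consistency cue above can be illustrated without the learned matcher: vote for the most supported local rotation among candidate correspondences and keep the ones that agree with it. This is a voting stand-in for the rotation coherence matcher; `coherent_subset` and its threshold are illustrative names, not the paper's API.

```python
import numpy as np

def rotation_angle(Ra: np.ndarray, Rb: np.ndarray) -> float:
    """Geodesic angle between two rotation matrices."""
    cos_t = (np.trace(Ra.T @ Rb) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def coherent_subset(rotations, thresh=0.3):
    """Keep correspondences whose local rotation agrees with the most
    supported one; false matches have randomly scattered rotations and
    therefore collect few votes."""
    n = len(rotations)
    votes = [sum(rotation_angle(rotations[i], rotations[j]) < thresh
                 for j in range(n)) for i in range(n)]
    pivot = int(np.argmax(votes))
    return [i for i in range(n)
            if rotation_angle(rotations[pivot], rotations[i]) < thresh]
```

In this toy form the cue acts as a hard filter; in RoReg it is injected as a soft signal inside the transformer-based matcher, so borderline correspondences can still be rescued by feature similarity.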


Transformation Estimation: OSE-RANSAC (Sec.3.6)

Fig.6. The classical RANSAC samples correspondence triplets and generates a transformation hypothesis from each triplet with the Kabsch algorithm, which yields many spurious hypotheses when the inlier ratio is low. By utilizing local rotations, our OSE-RANSAC generates a hypothesis from only one correspondence, which reduces the search space and improves the registration quality, especially at low inlier ratios.
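The one-correspondence hypothesis is simple to state: given a correspondence (p, q) with an estimated local rotation R, the translation follows immediately as t = q - R p, so each correspondence yields a complete rigid transformation without a 3-point Kabsch sample. A minimal sketch (exhaustive over all correspondences rather than randomized sampling, and with illustrative names):

```python
import numpy as np

def one_shot_hypotheses(P, Q, local_Rs, inlier_thresh=0.1):
    """One-correspondence hypothesis generation: each correspondence
    (p_i, q_i) with estimated local rotation R_i yields a full hypothesis
    (R_i, t_i = q_i - R_i p_i). P, Q: (N, 3) matched points."""
    best, best_inliers = None, -1
    for p, q, R in zip(P, Q, local_Rs):
        t = q - R @ p                              # one-shot translation
        residuals = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = int((residuals < inlier_thresh).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best, best_inliers
```

Because a single correspondence already determines a hypothesis, N correspondences span only N hypotheses instead of O(N^3) triplets, which is why the search space shrinks so sharply and robustness at low inlier ratios improves.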


Qualitative Results


Example results on 3DMatch/3DLoMatch/ETH


Citation


@inproceedings{wang2022you,
  title={You only hypothesize once: Point cloud registration with rotation-equivariant descriptors},
  author={Wang, Haiping and Liu, Yuan and Dong, Zhen and Wang, Wenping},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  pages={1630--1641},
  year={2022}
}
@ARTICLE{wang2023roreg,
  author={Wang, Haiping and Liu, Yuan and Hu, Qingyong and Wang, Bing and Chen, Jianguo and Dong, Zhen and Guo, Yulan and Wang, Wenping and Yang, Bisheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={RoReg: Pairwise Point Cloud Registration with Oriented Descriptors and Local Rotations}, 
  year={2023},
  volume={},
  number={},
  pages={1-18},
  doi={10.1109/TPAMI.2023.3244951}}

Other Projects


Welcome to take a look at the homepage of our research group, WHU-USI3DV! We focus on 3D computer vision, particularly 3D reconstruction, scene understanding, and point cloud processing, as well as their applications in intelligent transportation systems, digital twin cities, sustainable urban development, and robotics.