The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However, in some practical uses this assumption is unacceptable. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore make the assumption that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep learning-based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether a feature point pair is close enough to be considered a match even when the feature point registration errors are large, and our model can estimate with higher accuracy than methods such as FPFH or 3DMatch. In addition, we conducted experiments for combinations of input point clouds, including local or global point clouds, both types of point cloud, and encoders.
Kenshiro TAMATA
Osaka University
Tomohiro MASHITA
Osaka University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kenshiro TAMATA, Tomohiro MASHITA, "Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 1, pp. 134-140, January 2022, doi: 10.1587/transinf.2021EDP7082.
Abstract: A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However in some practical uses, this assumption is unacceptable. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore make the assumption that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep learning based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether the feature point pair is close enough to be considered a match or not even when the feature point registration errors are large, and our model can estimate with higher accuracy in comparison to methods such as FPFH or 3DMatch. In addition, we conducted experiments for combinations of input point clouds, including local or global point clouds, both types of point cloud, and encoders.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7082/_p
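The abstract describes training the local and global descriptor encoders with metric learning so that descriptors of matching feature point pairs lie close together even under registration error. The paper's exact loss function is not given in the abstract; a common choice for this kind of pairwise training is the contrastive loss, sketched below. The function name and the use of plain NumPy arrays are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contrastive_loss(desc_a, desc_b, is_match, margin=1.0):
    """Illustrative contrastive loss for descriptor pairs (not the paper's code).

    desc_a, desc_b: (N, D) arrays of descriptors for N feature point pairs.
    is_match:       (N,) array of 1.0 for matching pairs, 0.0 for non-matches.
    Matching pairs are pulled together; non-matching pairs are pushed
    at least `margin` apart in descriptor space.
    """
    d = np.linalg.norm(desc_a - desc_b, axis=1)           # per-pair Euclidean distance
    pos = is_match * d**2                                 # matches: penalize any distance
    neg = (1.0 - is_match) * np.maximum(margin - d, 0)**2 # non-matches: penalize only inside margin
    return np.mean(pos + neg)
```

Trained this way, a descriptor pair can be classified as a match by thresholding the distance between the two descriptors, which is how robustness to large registration errors would be evaluated.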
@ARTICLE{e105-d_1_134,
author={Kenshiro TAMATA and Tomohiro MASHITA},
journal={IEICE TRANSACTIONS on Information},
title={Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders},
year={2022},
volume={E105-D},
number={1},
pages={134-140},
abstract={A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However in some practical uses, this assumption is unacceptable. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore make the assumption that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep learning based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether the feature point pair is close enough to be considered a match or not even when the feature point registration errors are large, and our model can estimate with higher accuracy in comparison to methods such as FPFH or 3DMatch. In addition, we conducted experiments for combinations of input point clouds, including local or global point clouds, both types of point cloud, and encoders.},
keywords={},
doi={10.1587/transinf.2021EDP7082},
ISSN={1745-1361},
month={January},}
TY - JOUR
TI - Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders
T2 - IEICE TRANSACTIONS on Information
SP - 134
EP - 140
AU - Kenshiro TAMATA
AU - Tomohiro MASHITA
PY - 2022
DO - 10.1587/transinf.2021EDP7082
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - January 2022
AB - A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However in some practical uses, this assumption is unacceptable. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore make the assumption that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep learning based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether the feature point pair is close enough to be considered a match or not even when the feature point registration errors are large, and our model can estimate with higher accuracy in comparison to methods such as FPFH or 3DMatch. In addition, we conducted experiments for combinations of input point clouds, including local or global point clouds, both types of point cloud, and encoders.
ER -