Matching correspondence between images and 3D model in a reconstruction process

  • Mau Le Tien, The Quangngai College of Industrial Technology, Vietnam
  • Khoi Nguyen Tan, The University of Danang, Vietnam
  • Romain Raffin, Aix-Marseille University, LSIS UMR 7296

Abstract

3D reconstruction from photographs is an active research trend. Sensor resolution is increasing and data processing is becoming more accurate, no longer restricted to calibrated stereo vision. In archaeological research, it has become a common way to preserve views of an ancient site while describing the artifacts in 3D from the same set of photographs. Archaeologists now face a complex problem in handling these digital data. An important use is the semantic description of artifacts, which is generally done "by hand", relying on the knowledge of the scientists. We propose a solution that performs part of this work automatically, generating descriptions of the obtained geometry by combining image processing, geometry processing, and 3D reconstruction. This paper presents an algorithm for 2D/3D point matching. The 3D model is reconstructed from multiple views using the SIFT algorithm; the matching process then uses a 2D mask pattern to look up the corresponding 3D point. Experimental results show that our matching algorithm is precise, highly flexible, and can be successfully applied to a variety of 3D shapes.
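To make the two stages named in the abstract concrete, the following is a minimal sketch assuming OpenCV's SIFT implementation: the image file names, the projections array, and the lookup_3d helper are hypothetical illustrations of one plausible reading of the pipeline, not the authors' code.

import cv2

# Stage 1: SIFT keypoints detected and matched across two views
# (the front end of the multi-view reconstruction).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Stage 2: a square 2D mask centered on a query pixel selects the 3D point
# whose projection falls inside it (an assumed reading of the paper's
# "2D mask pattern" lookup; the actual pattern may differ).
def lookup_3d(query_xy, projections, points_3d, half_size=3):
    qx, qy = query_xy
    for i, (px, py) in enumerate(projections):
        if abs(px - qx) <= half_size and abs(py - qy) <= half_size:
            return points_3d[i]
    return None  # no reconstructed point projects inside the mask

In practice, the projections list would come from reprojecting the reconstructed point cloud through the camera pose recovered for the query image.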



Published
2016-08-31
How to Cite
TIEN, Mau Le; TAN, Khoi Nguyen; RAFFIN, Romain. Matching correspondence between images and 3D model in a reconstruction process. Journal of Science and Technology: Issue on Information and Communications Technology, [S.l.], v. 2, n. 1, p. 64-69, Aug. 2016. ISSN 1859-1531. Available at: <http://ict.jst.udn.vn/index.php/jst/article/view/29>. doi: https://doi.org/10.31130/jst.2016.29.