A Multi-Strategy Fusion Method for Aerial Image Feature Matching Considering Shadow and Viewing Angle Differences

  • CHEN Chijie 1, 2,
  • WANG Tao 1, 2, *,
  • ZHANG Yan 1,
  • YAN Siwei 1,
  • ZHAO Kangshun 1
  • 1. Information Engineering University, Zhengzhou 450001, China
  • 2. National Key Laboratory of Intelligent Spatial Information, Zhengzhou 450001, China
*WANG Tao, E-mail:

Received date: 2025-03-04

  Revised date: 2025-04-16

  Online published: 2025-06-06

Supported by

National Key Laboratory of Intelligent Spatial Information Fund (a8235)

Abstract

[Objectives] Feature matching is a core step in the 3D reconstruction of aerial images. However, shadows and viewing-angle variations during imaging often leave the matched points few in number and unevenly distributed, significantly degrading reconstruction accuracy. [Methods] This paper proposes a multi-strategy fusion feature matching method that accounts for shadow and viewing angle differences. It combines the traditional SIFT feature extraction algorithm with the LightGlue feature matching neural network and, through multiple optimization strategies, achieves high-quality matching under complex imaging conditions. The main improvements are as follows: (1) An adaptive shadow region enhancement strategy. Shadow regions are extracted from the original image, and an initial brightness enhancement factor is determined from the average brightness ratio of shadow to non-shadow areas. This factor is then adjusted using the gray-level differences within the shadow regions, brightening those regions and restoring ground-object detail, which increases the number of detectable feature points. (2) A multi-view simulated image generation strategy. Simulated images are generated from camera pose information to improve the adaptability of the input features to view changes, enhancing matching accuracy and robustness. (3) A RANSAC matching optimization algorithm based on K-Means clustering. Because aerial scenes contain significant height differences, estimation under a single planar assumption introduces large errors. The number of clusters K is therefore determined dynamically from the image's original color information, matching points are clustered accordingly, and RANSAC is applied within each cluster for local optimization. This reduces planar-assumption errors and improves inlier selection.
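The adaptive shadow enhancement of strategy (1) can be sketched roughly as below. The abstract does not give the exact formulas, so the `enhance_shadow` function, its gray-level damping rule, and the `max_factor` cap are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def enhance_shadow(gray, shadow_mask, max_factor=3.0):
    """Illustrative adaptive shadow brightening (hypothetical formula).

    gray        : 2-D float array of intensities in [0, 255]
    shadow_mask : boolean array, True inside detected shadow regions
    """
    shadow = gray[shadow_mask]
    lit = gray[~shadow_mask]
    if shadow.size == 0 or lit.size == 0:
        return gray.copy()
    # initial factor: ratio of non-shadow to shadow mean brightness
    factor = lit.mean() / max(shadow.mean(), 1e-6)
    # damp the factor where shadow gray levels vary strongly,
    # so textured shadow detail is not blown out
    spread = shadow.std() / max(shadow.mean(), 1e-6)
    factor = min(factor / (1.0 + spread), max_factor)
    out = gray.copy()
    out[shadow_mask] = np.clip(shadow * factor, 0.0, 255.0)
    return out
```

Brightening only the masked pixels, with a factor tied to the shadow/non-shadow brightness ratio, is what lets a detector such as SIFT recover feature points inside formerly dark regions without altering the well-exposed areas.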
[Results] Experiments were conducted on aerial images captured by an A3 camera, testing the strategies both individually and in combination. After applying the adaptive shadow region enhancement and multi-view simulation strategies, the number of matching points nearly tripled relative to the unprocessed data. After K-Means clustering RANSAC optimization, the average pixel distance error decreased by approximately 30% compared with direct RANSAC optimization, and matching accuracy improved by an average of 24.8%. [Conclusions] The proposed method effectively addresses the challenges of aerial image matching under complex imaging conditions, providing more robust and reliable data support for downstream tasks such as 3D reconstruction.
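The clustered RANSAC optimization can be sketched as follows. This is a minimal sketch under stated assumptions: the paper clusters matching points by the image's original color information and determines K dynamically, whereas the sketch clusters by point coordinates with a fixed K, fits a local 2-D affine model per cluster, and uses an arbitrary error threshold:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on point coordinates (the paper clusters by
    the image's colour information; coordinates stand in here)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """Fit a 2-D affine transform with RANSAC; return the inlier mask."""
    rng = np.random.default_rng(seed)
    n = len(src)
    best = np.zeros(n, dtype=bool)
    A_full = np.hstack([src, np.ones((n, 1))])
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)
        A = np.hstack([src[idx], np.ones((3, 1))])
        try:
            M = np.linalg.solve(A, dst[idx])   # exact affine through sample
        except np.linalg.LinAlgError:
            continue                            # degenerate (collinear) sample
        err = np.linalg.norm(A_full @ M - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

def clustered_ransac(src, dst, k=3):
    """Run RANSAC independently inside each cluster, so each local
    model only has to explain roughly coplanar terrain."""
    labels = kmeans(src, k)
    keep = np.zeros(len(src), dtype=bool)
    for j in range(k):
        sel = labels == j
        if sel.sum() >= 3:
            keep[sel] = ransac_affine(src[sel], dst[sel])
    return keep
```

Partitioning before fitting is the point: a single global model forces one planar assumption onto terrain with large height differences, while per-cluster models let each RANSAC run validate correspondences against a locally consistent geometry.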

Cite this article

CHEN Chijie, WANG Tao, ZHANG Yan, YAN Siwei, ZHAO Kangshun. A Multi-Strategy Fusion Method for Aerial Image Feature Matching Considering Shadow and Viewing Angle Differences[J]. Journal of Geo-information Science, 2025, 27(6): 1401-1419. DOI: 10.12082/dqxxkx.2025.250099

Conflicts of Interest: All authors disclose no relevant conflicts of interest.

