Journal of Geo-information Science (地球信息科学学报) ›› 2023, Vol. 25 ›› Issue (5): 1064-1074. doi: 10.12082/dqxxkx.2023.220827

• Remote Sensing Science and Application Technology •

UAV Absolute Positioning Method based on Global and Local Deep Learning Feature Retrieval from Satellite Images

HOU Huitai, LAN Chaozhen, XU Qing

  1. Institute of Geospatial Information, Strategic Support Force Information Engineering University, Zhengzhou 450001, China
  • Received: 2022-10-25 Revised: 2022-12-05 Online: 2023-05-25 Published: 2023-04-27
  • Corresponding author: *LAN Chaozhen (1979– ), male, born in Longyan, Fujian, PhD, associate professor, whose research focuses on photogrammetry and remote sensing. E-mail: lan_cz@163.com
  • About the author: HOU Huitai (1996– ), male, born in Qingzhou, Shandong, PhD candidate, whose research focuses on photogrammetry and remote sensing. E-mail: houhuitai@163.com
  • Supported by:
    Basic Research Strengthening Program of China (173 Program) (2020-JCJQ-ZD-015-00)

UAV Absolute Positioning Method based on Global and Local Deep Learning Feature Retrieval from Satellite Images

HOU Huitai, LAN Chaozhen, XU Qing

  1. Institute of Geospatial Information, Information Engineering University, Zhengzhou 450001, China
  • Received: 2022-10-25 Revised: 2022-12-05 Online: 2023-05-25 Published: 2023-04-27
  • Contact: LAN Chaozhen
  • Supported by:
    Basic Research Strengthening Program of China(173 Program)(2020-JCJQ-ZD-015-00)

Abstract:

As Unmanned Aerial Vehicle (UAV) technology matures, UAVs are being adopted for missions in a growing number of fields. UAVs offer low operating cost and strong environmental adaptability, but successful execution of aerial missions presupposes accurate knowledge of the vehicle's own position. Traditional navigation relies mainly on GNSS, which is unstable and susceptible to interference, so situations arise in which a UAV cannot use GNSS for positioning, known as GNSS-denied environments. To address UAV navigation and positioning in GNSS-denied environments, this paper proposes a UAV visual retrieval positioning method, based on known satellite orthophotos, that jointly exploits local and global deep learning features of the satellite imagery. First, ConvNeXt is used as the backbone network and combined with generalized mean pooling to form a retrieval feature extraction algorithm that extracts global features from satellite and UAV images. A triplet loss function that accounts for the overlapping area between images is designed for the retrieval positioning task and used to train the feature extractor. Satellite images within a given search range are then retrieved according to their global features. Finally, to further improve the accuracy of the retrieved target images, deep learning local features are used for matching and re-ranking. Training and test datasets oriented to the UAV retrieval positioning task were built. Experiments show that, when fully overlapping simulated UAV images are used to retrieve satellite images from different seasons, the proposed method achieves an average accuracy of 90.9% with an average time cost of 2.22 s. On real UAV images the accuracy is 87.5%, which basically meets UAV navigation and positioning requirements.
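The global descriptor described above (a ConvNeXt backbone followed by generalized mean pooling) can be sketched as follows. This is a minimal NumPy illustration of GeM pooling itself, not the paper's implementation: the toy activation map merely stands in for the backbone output, and the parameter values are assumptions.

```python
import numpy as np

def gem_pool(feature_map, p=3.0, eps=1e-6):
    """Generalized mean (GeM) pooling over the spatial dimensions.

    feature_map: (C, H, W) activation tensor from the backbone.
    p = 1 reduces to average pooling; large p approaches max pooling.
    """
    clamped = np.clip(feature_map, eps, None)  # avoid 0**(1/p) issues
    return np.mean(clamped ** p, axis=(1, 2)) ** (1.0 / p)

def l2_normalize(vec, eps=1e-12):
    """Normalize the descriptor so cosine similarity is a dot product."""
    return vec / (np.linalg.norm(vec) + eps)

# Toy 8-channel activation map standing in for ConvNeXt output.
rng = np.random.default_rng(0)
fmap = rng.random((8, 7, 7))
descriptor = l2_normalize(gem_pool(fmap, p=3.0))
print(descriptor.shape)  # (8,)
```

One channel thus yields one descriptor entry, so the descriptor length equals the number of backbone channels regardless of the input image size.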

Key words: GNSS-denied, deep learning features, convolutional neural network, image retrieval, visual localization, UAV, local features, global features
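The abstract mentions a triplet loss that takes the overlapping area between images into account but does not give the formula here. The sketch below shows one plausible reading, in which the margin grows as the anchor-positive overlap shrinks; the actual weighting used in the paper may differ.

```python
import numpy as np

def overlap_triplet_loss(anchor, positive, negative,
                         overlap_ratio, base_margin=0.3):
    """Hedged sketch of an overlap-aware triplet loss.

    anchor/positive/negative: L2-normalized global descriptors.
    overlap_ratio: fraction of the anchor footprint covered by the
    positive image, in [0, 1]. A weakly overlapping positive gets a
    larger margin, so it is pushed less close than a full overlap.
    """
    d_ap = np.sum((anchor - positive) ** 2)  # squared Euclidean distance
    d_an = np.sum((anchor - negative) ** 2)
    margin = base_margin * (2.0 - overlap_ratio)  # in [0.3, 0.6] here
    return max(0.0, float(d_ap - d_an + margin))
```

With fully overlapping, already well-separated descriptors the hinge is inactive and the loss is zero; it becomes positive only when the negative is closer to the anchor than the margin allows.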

Abstract:

With the development of Unmanned Aerial Vehicle (UAV) technology, UAVs have been applied to tasks in a wide range of fields. The prerequisite for a UAV to perform aerial tasks successfully is accurate localization of its own position. Traditional UAV navigation generally relies on the Global Navigation Satellite System (GNSS) for localization. However, GNSS is unstable and susceptible to interference, leading to situations in which a UAV cannot use GNSS for positioning, known as GNSS-denied environments. This study focuses on the navigation and positioning of UAVs in GNSS-denied environments and proposes a UAV visual retrieval and positioning method that jointly exploits local and global deep learning features of known satellite orthophotos. Specifically, ConvNeXt is used as the backbone network, combined with generalized mean pooling, to form a retrieval feature extraction algorithm that extracts global features of satellite and UAV images. A triplet loss function that considers the overlapping area between images is designed for the retrieval and positioning task, and a corresponding training dataset is established to train the feature extraction algorithm. Satellite images within a certain range are then retrieved according to the extracted global features to obtain preliminary retrieval results. To further improve the accuracy of the retrieved target images, the LoFTR algorithm, which is based on deep learning local features, is used for matching and re-ranking; because LoFTR produces many mismatches, RANSAC is used to filter the matching results. Experiments on the test datasets we established demonstrate that the proposed method achieves an average accuracy of 90.9% and an average time cost of 2.22 seconds when retrieving satellite images from different seasons with fully overlapping simulated UAV images. On real UAV images the accuracy is 87.5%, which meets UAV positioning requirements.
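The two-stage pipeline in the abstract, global-feature retrieval followed by local-feature re-ranking, can be sketched as below. The precomputed inlier counts stand in for the LoFTR matching plus RANSAC filtering step, which is far beyond a few lines; all names, shapes, and values here are illustrative assumptions.

```python
import numpy as np

def retrieve_topk(query_desc, gallery_descs, k=5):
    """Stage 1: rank satellite tiles by cosine similarity of global
    descriptors (a dot product, since descriptors are L2-normalized)."""
    sims = gallery_descs @ query_desc
    return list(np.argsort(-sims)[:k])

def rerank_by_inliers(candidates, inlier_counts):
    """Stage 2: re-order the shortlist by the number of geometrically
    verified local-feature matches (here supplied as precomputed counts
    standing in for LoFTR matching + RANSAC filtering)."""
    return sorted(candidates, key=lambda idx: -inlier_counts[idx])

# Toy example: 4 gallery tiles with 3-D global descriptors.
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.8, 0.6, 0.0],
                    [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.0, 0.0])
shortlist = retrieve_topk(query, gallery, k=2)        # tiles 0 and 2
final = rerank_by_inliers(shortlist, {0: 35, 2: 120})  # tile 2 wins
```

The design point is that the cheap global stage prunes the search space so the expensive local-matching stage only runs on a small shortlist, which is how the method keeps the reported average time cost low.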

Key words: GNSS-denied, deep learning features, convolutional neural network, image retrieval, visual localization, Unmanned Aerial Vehicle, local feature, global feature