Journal of Geo-information Science (地球信息科学学报) ›› 2022, Vol. 24 ›› Issue (5): 962-980. DOI: 10.12082/dqxxkx.2022.210572

• Remote Sensing Science and Application Technology •

Urban Vegetation Classification based on Multi-scale Feature Perception Network for UAV Images

KUAI Yu1(), WANG Biao1,*(), WU Yanglan1,2, CHEN Botao1, CHEN Xingdi1, XUE Weibao1

  1. School of Resources and Environmental Engineering, Anhui University, Hefei 230601, China
  2. Anhui Province Geographic Information Intelligent Technology Engineering Research Center, Hefei 230000, China
  • Received: 2021-09-23; Revised: 2021-11-09; Online: 2022-05-25; Published: 2022-07-25
  • Corresponding author: * WANG Biao (1987-), male, from Qufu, Shandong, Associate Professor, engaged in research on photogrammetry and remote sensing. E-mail: wangbiao-rs@ahu.edu.cn
  • About the first author: KUAI Yu (1995-), male, from Hefei, Anhui, Master's student, engaged in deep-learning-based information extraction from remote sensing imagery. E-mail: kuaiyu1020@163.com
  • Supported by:
    National Natural Science Foundation of China (41971311); National Natural Science Foundation of China (41902182); Natural Science Foundation of Anhui Province (2008085QD188)

Abstract:

Urban vegetation classification currently suffers from omission and misclassification caused by vegetation types with similar characteristics and similar spectra. To address this, a Multi-scale Feature Perception Network (MFDN) is designed to classify urban vegetation from high-resolution UAV visible-light imagery. To reduce omission and misclassification, the network introduces coordinate convolution at the input layer to limit the loss of spatial information; it constructs parallel sub-networks to enhance multi-scale feature information and inserts repeated multi-scale fusion modules between them, so that a high-resolution representation is maintained throughout and fewer fine details are lost; and it adds a split-feature module that enlarges the receptive field and captures multi-scale features. Together, these components effectively alleviate the omission and misclassification of urban vegetation. The results show that, using UAV visible-light imagery alone, MFDN improves urban vegetation classification mainly through spatial patterns rather than spectral information, achieving an average overall accuracy of 89.54%, an average F1 score of 75.85%, and an average IoU of 65.45%, with accurate and complete segmentation results. The proposed method is therefore well matched to easy-to-operate, low-cost UAV systems, is suitable for rapid urban vegetation surveys, and can provide technical support and a scientific basis for urban space utilization and ecological resource surveys.

Key words: urban vegetation classification, deep learning, UAV remote sensing, visible light image, multi-scale feature fusion, semantic segmentation, multi-feature perception, urban ecosystem
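The coordinate-convolution input step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' code: in CoordConv-style input layers, two extra channels holding normalized x and y coordinates are concatenated to the RGB image before the first convolution, so that spatial position survives into the feature maps. Assuming a NumPy array in (H, W, C) layout:

```python
import numpy as np

def add_coord_channels(image: np.ndarray) -> np.ndarray:
    """Append normalized x/y coordinate channels to an (H, W, C) image,
    in the style of a CoordConv input layer."""
    h, w, _ = image.shape
    # Coordinates are normalized to [-1, 1] so they are independent of image size.
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    # Stack the two coordinate planes behind the original channels.
    return np.concatenate([image, xs[..., None], ys[..., None]], axis=-1)

rgb = np.zeros((256, 256, 3), dtype=np.float32)
out = add_coord_channels(rgb)
print(out.shape)  # (256, 256, 5)
```

A downstream convolution then sees position as an ordinary input channel, which is one way to counteract the loss of spatial information that the abstract attributes to standard convolutional input layers.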