Journal of Geo-information Science
Faster R-CNN Deep Learning Network Based Object Recognition of Remote Sensing Image
Received date: 2018-05-12
Revised date: 2018-07-25
Online published: 2018-10-17
Supported by
National Key Research and Development Program of China under Grant, No.2017YFB0504202
National Natural Science Foundation of China, No.41871312
Hubei Natural Science Foundation, No.2017CFB433
Hubei Key Laboratory of Intelligent Geo-Information Processing (China University of Geosciences (Wuhan)), No. KLIGIP-2017A09
The Key Laboratory of Spatial Data Mining & Information Sharing of the Ministry of Education, Fuzhou University, No.2016LSDMIS06
Shanghai Aerospace Science and Technology Innovation Fund, No.SAST2016006
Beijing Key Laboratory of Urban Spatial Information Engineering, No.2017209.
Object recognition in remote sensing images is of great theoretical significance and application value in many fields, and faster, more accurate recognition methods are a hot and difficult topic in remote sensing and image processing. This paper applies deep learning to remote sensing image object recognition and proposes a fast and accurate recognition method based on the Faster R-CNN deep learning network. The method combines RPN-based proposal region extraction with a VGG16 convolutional network model to construct a deep convolutional neural network for object recognition in remote sensing images. To verify its accuracy and performance, a GPU-accelerated computing model was used within the Caffe deep learning framework. An aircraft recognition experiment on remote sensing images was designed first, achieving a recognition accuracy of 96.67%. After this experiment succeeded, we extended the study to other target types, selecting high-resolution remote sensing images of oil tanks, playgrounds, and overpasses for verification experiments. Under the same experimental environment, equally good results were obtained: the recognition rate remained high and the recognition time per image was less than 0.2 s, fully verifying the validity and reliability of the proposed model. Analysis and comparison show that the Faster R-CNN-based deep learning method achieves fast and accurate recognition of the selected targets, demonstrating the method's promise for high-resolution remote sensing image target recognition applications. The model therefore has great application value and also provides a useful reference for target recognition research based on other deep learning methods.
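The core of the proposal extraction described above is the RPN, which places a fixed grid of multi-scale, multi-aspect-ratio anchor boxes over the feature map and scores them against ground-truth objects by intersection-over-union. The following is a minimal pure-Python sketch of that anchor/IoU mechanism using the standard Faster R-CNN setting of 3 scales x 3 aspect ratios (9 anchors per position); it is an illustration of the general technique, not the authors' Caffe implementation, and the box coordinates are hypothetical.

```python
def make_anchors(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return (x1, y1, x2, y2) anchor boxes centred at (cx, cy).

    For scale s and aspect ratio r, width = s*sqrt(r) and
    height = s/sqrt(r), so every anchor keeps area s**2.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * (r ** 0.5)
            h = s / (r ** 0.5)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# 9 anchors at one feature-map position, scored against a
# hypothetical ground-truth aircraft bounding box.
anchors = make_anchors(300, 300)
gt = (200, 250, 400, 350)
best = max(anchors, key=lambda a: iou(a, gt))
print(len(anchors), round(iou(best, gt), 3))
```

In the full RPN, anchors whose IoU with a ground-truth box exceeds a threshold (0.7 in the original Faster R-CNN paper) are labelled positive for training, and a small network regresses each positive anchor toward its matched box to produce the proposal regions passed to the detection head.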
WANG Jinchuan, TAN Xicheng, WANG Zhaohai, ZHONG Yanfei, DONG Huaping, ZHOU Songtao, CHENG Buyi. Faster R-CNN Deep Learning Network Based Object Recognition of Remote Sensing Image[J]. Journal of Geo-information Science, 2018, 20(10): 1500-1508. DOI: 10.12082/dqxxkx.2018.180237.
Fig. 1 Faster R-CNN network structure
Fig. 2 Proposal regions generation process
Fig. 3 The VGG16 convolutional neural network
Fig. 4 Data expansion and data sets
Fig. 5 The result of airplane recognition
Tab. 1 Test performance comparison of R-CNN, Fast R-CNN and Faster R-CNN

| Deep learning method | Recognition accuracy /% | Recognition time per image /s |
|---|---|---|
| R-CNN | 77.10 | 13.40 |
| Fast R-CNN | 77.50 | 4.60 |
| Faster R-CNN | 96.67 | 0.14 |
Fig. 6 The oil tank, playground and overpass target recognition results
Tab. 2 Comparison of recognition accuracy and efficiency between our model and the reference models

| Target class | Faster R-CNN + VGG16 accuracy /% | Faster R-CNN + VGG16 time /s | AlexNet-DR & GoogleNet-DR accuracy /% | AlexNet-DR & GoogleNet-DR time /s |
|---|---|---|---|---|
| Airplane | 96.67 | 0.144 | 94.99 | 37.292 |
| Oil tank | 97.46 | 0.184 | 94.47 | 38.283 |
| Playground | 97.41 | 0.143 | 97.18 | 11.928 |
| Overpass | 81.08 | 0.186 | 88.30 | 5.360 |
The authors have declared that no competing interests exist.