Most Downloaded

  • Journal of Geo-information Science. 2025, 27(3): 537-538.
  • HE Guojin, LIU Huichan, YANG Ruiqing, ZHANG Zhaoming, XUE Yuan, AN Shihao, YUAN Mingruo, WANG Guizhou, LONG Tengfei, PENG Yan, YIN Ranyu
    Journal of Geo-information Science. 2025, 27(2): 273-284. https://doi.org/10.12082/dqxxkx.2025.240630

    [Significance] Data resources have become pivotal in modern production, evolving in close synergy with advancements in artificial intelligence (AI) technologies, which continuously cultivate new, high-quality productive forces. Remote sensing data intelligence has naturally emerged as a result of the rapid expansion of remote sensing big data and AI. This integration significantly enhances the efficiency and accuracy of remote sensing data processing while bolstering the ability to address emergencies and adapt to complex environmental changes. Remote sensing data intelligence represents a transformative approach, leveraging state-of-the-art technological advancements and redefining traditional paradigms of remote sensing information engineering and its applications. [Analysis] This paper delves into the technological background and foundations that have facilitated the emergence of remote sensing data intelligence. The rapid development of technology has provided robust support for remote sensing data intelligence, primarily in three areas: the advent of the big data era in remote sensing, significant advancements in remote sensing data processing capabilities, and the flourishing research on remote sensing large models. Furthermore, a comprehensive technical framework is proposed, outlining the critical elements and methodologies required for implementing remote sensing data intelligence effectively. To demonstrate the practical applications of remote sensing data intelligence, the paper presents a case study on applying these techniques to extract ultra-high-resolution centralized and distributed photovoltaic information in China. [Results] By integrating large models with remote sensing data, the study demonstrates how remote sensing data intelligence enables precise identification and mapping of centralized and distributed photovoltaic installations, offering valuable insights for energy management and planning. The effectiveness of remote sensing data intelligence in addressing challenges associated with large-scale photovoltaic extraction underscores its potential for application in critical fields. [Prospect] Finally, the paper provides an outlook on areas requiring further study in remote sensing data intelligence. It emphasizes that high-quality data serves as the foundation for remote sensing data intelligence and highlights the importance of constructing AI-ready knowledge bases and recognizing the value of small datasets. Developing targeted and efficient algorithms is essential for achieving remote sensing intelligence, making the advancement of practical data intelligence methods an urgent research priority. Furthermore, promoting multi-level services for remote sensing data, information, and knowledge through data intelligence should be prioritized. This research provides a comprehensive technical framework and forward-looking insights for remote sensing data intelligence, offering valuable references for further exploration and implementation in critical fields.

  • LI Yansheng, ZHONG Zhenyu, MENG Qingxiang, MAO Zhidian, DANG Bo, WANG Tao, FENG Yuanjun, ZHANG Yongjun
    Journal of Geo-information Science. 2025, 27(2): 350-366. https://doi.org/10.12082/dqxxkx.2025.240571

    [Objectives] With the development of deep learning technology, the ability to monitor changes in natural resource elements using remote sensing images has significantly improved. While deep learning change detection models excel at extracting low-level semantic information from remote sensing images, they face challenges in distinguishing land-use type changes from non-land-use type changes, such as crop rotation, natural fluctuations in water levels, and forest degradation. To ensure a high recall rate in change detection, these models often generate a large number of false positive change polygons, requiring substantial manual effort to eliminate these false alarms. [Methods] To address this issue, this paper proposes a natural resource element change polygon purification algorithm driven by remote sensing spatiotemporal knowledge graph. The algorithm aims to minimize the false positive rate while maintaining a high recall rate, thereby improving the efficiency of natural resource element change monitoring. To support the intelligent construction and effective reasoning of the spatiotemporal knowledge graph, this study designed a remote sensing spatiotemporal knowledge graph ontology model taking into account spatiotemporal characteristics and developed a GraphGIS toolkit that integrates graph database storage and computation. This paper also introduces a vector knowledge extraction method based on the native spatial analysis of the GraphGIS graph database, a remote sensing image knowledge extraction method based on efficient fine-tuning of the SkySense visual large model, and a polygon purification knowledge extraction method based on the SeqGPT large language model. Under the constraints of the spatiotemporal ontology model, vector, image, and text knowledge converge to form a remote sensing spatiotemporal knowledge graph. Inspired by the manual operation methods for change polygon purification, this paper developed an automatic purification method of change polygons based on first-order logical reasoning within the knowledge graph. To improve the concurrent processing and human-computer interaction, this paper developed a remote sensing spatiotemporal knowledge graph management and service system. [Results] For the task of purifying natural resource element change polygons in Guangdong Province from March to June 2024, the proposed method achieved a true-preserved rate of 95.37% and a false-removed rate of 21.82%. [Conclusions] The intelligent purification algorithm and system for natural resource element change polygons proposed in this study effectively reduce false positives while preserving real change polygons. This approach significantly enhances the efficiency of natural resource element change monitoring.
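
    A toy illustration of the kind of first-order-style purification rule described above: a candidate change polygon is dropped when vector, image, and index evidence all point to a within-class fluctuation (e.g., crop rotation). The attribute names, classes, and thresholds are hypothetical and only sketch the reasoning pattern, not the paper's knowledge-graph rules.

```python
# Hypothetical purification rule: keep a change polygon only if the image-derived class
# in the later epoch differs from the land-use class in the vector database, or the
# change cannot be explained by a seasonal vegetation fluctuation.
from dataclasses import dataclass

@dataclass
class ChangePolygon:
    vector_class_t0: str     # land-use class recorded in the vector database (earlier epoch)
    image_class_t1: str      # class predicted from imagery in the later epoch
    ndvi_drop: float         # relative vegetation-index change inside the polygon

SAME_CLASS_PAIRS = {("cultivated", "cultivated"), ("forest", "forest"), ("water", "water")}

def keep_change(p: ChangePolygon) -> bool:
    same_class = (p.vector_class_t0, p.image_class_t1) in SAME_CLASS_PAIRS
    # Same land-use class with only a mild vegetation fluctuation -> purge as a false positive
    return not (same_class and p.ndvi_drop < 0.4)

candidates = [
    ChangePolygon("cultivated", "cultivated", 0.2),    # crop rotation -> removed
    ChangePolygon("cultivated", "construction", 0.7),  # real land-use change -> kept
]
print([keep_change(p) for p in candidates])  # [False, True]
```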

  • Journal of Geo-information Science. 2025, 27(2): 271-272.
  • TANG Junqing, AN Mengqi, ZHAO Pengjun, GONG Zhaoya, GUO Zengjun, LUO Taoran, LYU Wei
    Journal of Geo-information Science. 2025, 27(3): 553-569. https://doi.org/10.12082/dqxxkx.2024.240107

    [Significance] Cities globally face increasingly frequent multi-hazard risks, driving them to pursue more sustainable and resilient urban transportation systems. This paper presents a comprehensive systematic literature review of the application of spatial-temporal data in transportation system resilience studies. It highlights the pivotal role of spatial-temporal big data in understanding and enhancing the resilience of urban transportation systems under various hazard scenarios. Spatial-temporal big data, characterized by high temporal resolution and fine spatial granularity, has been increasingly applied to the field of transportation system resilience, providing essential support for decision-makers. [Progress] This study reveals two significant findings: Firstly, quantitative analysis of transportation system resilience is one of the most widely applied uses of spatial-temporal big data. However, real-time monitoring and early warning explorations are relatively rare. Most studies remain at the modelling and numerical simulation stage, indicating a need for more empirical studies using multi-source spatial-temporal big data. Moreover, compared to English literature, Chinese transportation system resilience studies are primarily qualitative and lack empirical research, indicating divergent research emphases between domestic and international scholars. Secondly, high-quality, multi-source spatial-temporal big data could facilitate more comprehensive spatial analysis in transportation system resilience studies. Improved data quality allows for deeper exploration from a microscopic perspective, focusing on individual behaviors and aligning closely with real-world needs. The concept of resilience has evolved from its previous post-disaster focus to a comprehensive life-cycle perspective encompassing pre-, during-, and post-disaster phases, transforming the study framework for transportation system resilience. [Prospect] As spatial-temporal big data technology advances and new transportation modes emerge, more innovations and breakthroughs in transportation system resilience studies are expected. Future research should further explore and utilize the potential of spatial-temporal big data in this field, amplifying the policy relevance of research on abrupt-onset events. Increased emphasis should be placed on research conducted at the scale of urban agglomerations. Simultaneously, a nuanced examination from a microscopic perspective is imperative to dissect the underlying causes and mechanisms contributing to variations in resilience among distinct groups. Despite the significant progress in transportation system resilience studies, there are still challenges in data collection, processing, and analysis. As technology progresses, researchers should leverage advanced algorithms, platforms, and tools to enhance data processing capabilities and analytical precision, facilitating more complex and detailed studies on transportation system resilience. This will provide a scientific basis for planning and managing urban transportation systems, significantly contributing to the overall resilience and sustainable development of cities.

  • LIU Diyou, KONG Yunlong, CHEN Jingbo, WANG Chenhao, MENG Yu, DENG Ligao, DENG Yupeng, ZHANG Zheng, SONG Ke, WANG Zhihua, CHU Qifeng
    Journal of Geo-information Science. 2025, 27(2): 285-304. https://doi.org/10.12082/dqxxkx.2024.240436

    [Significance] The extraction of Cartographic-Level Vector Elements (CLVE) is a critical prerequisite for the direct application of remote sensing image intelligent interpretation in real-world scenarios. [Analysis] In recent years, the continuous rapid advancement of remote sensing observation technology has provided a rich data foundation for fields such as natural resource surveying, monitoring, and public surveying and mapping data production. However, due to the limitations of intelligent interpretation algorithms, obtaining the necessary vector element data for operational scenarios still heavily relies on manual visual interpretation and human-computer interactive post-processing. Although significant progress has been made in remote sensing image interpretation using deep learning techniques, producing vector data that are directly usable in operational scenarios remains a major challenge. [Progress] This paper, based on the actual data needs of operational scenarios such as public surveying and mapping data production, conducts an in-depth analysis of the rule constraints for different vector elements in remote sensing image interpretation across a wide range of operational contexts. It preliminarily defines "cartographic-level vector elements" as vector element data that complies with certain cartographic standard constraints at a specific scale. Centered on this definition, the content of the rule set for CLVE is summarized and analyzed from nine dimensions, including vector types, object shapes, boundary positioning, area, length, width, angle size, topological constraints, and adjacency constraints. Evaluation methods for CLVE are then outlined in four aspects: class attributes, positional accuracy, topological accuracy, and rationality of generalization and compromise. Subsequently, through literature collection and statistical analysis, it was observed that research on deep learning-based vector extraction, while still in its early stages, has shown a rapid upward trend year by year, indicating increasing attention in the field. The paper then systematically reviews three major methodological frameworks for deep learning-based vector extraction: semantic segmentation & post-processing, iterative methods, and parallel methods. A detailed analysis is provided on their basic principles, characteristics and accuracy of vector extraction, flexibility, and computational efficiency, highlighting their respective strengths, weaknesses, and differences. The paper also summarizes the current limitations of remote sensing intelligent interpretation methods aimed at CLVE in terms of cartographic-level interpretation capabilities, rule coupling, and remote sensing interpretability. [Prospect] Finally, future research directions for intelligent interpretation of CLVE are explored from several perspectives, including the construction of broad and open cartographic-level rule sets, the development and sharing of CLVE datasets, the advancement of multi-element CLVE extraction frameworks, and the exploration of the potential of multimodal coupled semantic rules.

  • HUANG Yi, ZHANG Xueying, SHENG Yehua, XIA Yongqi, YE Peng
    Journal of Geo-information Science. 2025, 27(6): 1249-1262. https://doi.org/10.12082/dqxxkx.2025.250175

    [Objectives] This study addresses the critical challenges in typhoon disaster knowledge services, which are often hindered by "massive data, scarce knowledge, and limited services." The core objective is to rapidly distill actionable knowledge from vast datasets to enhance disaster management efficacy and mitigate typhoon-related impacts. Large Language Models (LLMs), renowned for their superior performance in natural language processing, are leveraged to deeply mine disaster-related information and provide robust support for advanced knowledge services. [Methods] This research establishes a typhoon disaster knowledge service framework encompassing three layers: data, knowledge, and service. [Results] For the data-to-knowledge layer, an LLM-driven (Qwen2.5-Max) automated method for constructing typhoon disaster Knowledge Graphs (KGs) is proposed. This method first introduces a multi-level typhoon disaster knowledge representation model that integrates spatiotemporal characteristics and disaster impact mechanisms. A specialized training dataset is curated, incorporating typhoon-related texts with explicit temporal and spatial attributes. By adopting a "pre-training + fine-tuning" paradigm, the framework efficiently transforms raw disaster data into structured knowledge. For the knowledge-to-service layer, an LLM-based intelligent question-answering system is developed. Utilizing the constructed typhoon disaster KG, this system employs Graph Retrieval-Augmented Generation (GraphRAG) to retrieve contextually relevant knowledge from the graph and generate user-specific disaster prevention and mitigation guidance. This approach ensures seamless conversion of structured knowledge into practical services, such as personalized evacuation plans and resource allocation strategies. [Conclusions] The study highlights the transformative potential of LLMs in typhoon disaster management and lays a foundation for integrating LLMs with geospatial technologies. This interdisciplinary synergy advances Geographic Artificial Intelligence (GeoAI) and paves the way for innovative applications in disaster service.
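
    A minimal sketch of a GraphRAG-style retrieve-then-generate loop over a small typhoon knowledge graph, included to make the knowledge-to-service step concrete. All names (Triple, retrieve_subgraph, the placeholder llm callable) are hypothetical, and the retrieval is naive lexical matching, not the system or prompts described in the paper.

```python
# Illustrative GraphRAG-style loop: retrieve the most relevant KG triples for a question,
# assemble them into a prompt, and hand the prompt to an LLM (stubbed out here).
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    head: str
    relation: str
    tail: str

KG = [
    Triple("Typhoon Mangkhut", "made_landfall_in", "Guangdong"),
    Triple("Typhoon Mangkhut", "max_wind_speed_kmh", "250"),
    Triple("Guangdong", "opened_shelter", "Shenzhen Bay Sports Center"),
]

def retrieve_subgraph(question: str, kg: list, k: int = 5) -> list:
    """Naive lexical retrieval: rank triples by word overlap with the question."""
    words = set(question.lower().split())
    key = lambda t: -len(words & set(f"{t.head} {t.relation} {t.tail}".lower().split()))
    return sorted(kg, key=key)[:k]

def build_prompt(question: str, triples: list) -> str:
    context = "\n".join(f"({t.head}, {t.relation}, {t.tail})" for t in triples)
    return f"Answer using only the knowledge graph facts below.\n{context}\nQuestion: {question}"

def answer(question: str, llm=lambda prompt: "[LLM answer would be generated here]") -> str:
    return llm(build_prompt(question, retrieve_subgraph(question, KG)))

print(answer("Where did Typhoon Mangkhut make landfall?"))
```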

  • SHI Shihao, SHI Qunshan, ZHOU Yang, HU Xiaofei, QI Kai
    Journal of Geo-information Science. 2025, 27(7): 1596-1607. https://doi.org/10.12082/dqxxkx.2025.250015

    [Objectives] Small object detection is of great significance in both military and civil applications. However, due to challenges such as low resolution, high noise environments, target occlusion, and complex backgrounds, traditional detection methods often struggle to achieve the necessary accuracy and robustness. The problem of detecting small objects in complex scenes remains highly challenging. Therefore, this paper proposes a hybrid feature and multi-scale fusion algorithm for small object detection. [Methods] First, a Hybrid Conv and Transformer Block (HCTB) is designed to fully utilize local and global context information, enhancing the network's perception of small objects while optimizing computational efficiency and feature extraction capability. Second, a Multi-Dilated Shared Kernel Conv (MDSKC) module is introduced to extend the receptive field of the backbone network using dilated convolutions with varying expansion rates, thereby enabling efficient multi-scale feature extraction. Finally, the Omni-Kernel Cross Stage Model (OKCSM), constructed based on the concepts of Omni-Kernel and Cross Stage Partial, is integrated to optimize the small target feature pyramid network. This approach helps preserve small object information and significantly improves detection performance. [Results] Ablation and comparison experiments were conducted on the VisDrone2019 and TinyPerson datasets. Compared to the baseline model YOLOv8n, the proposed method improves precision, recall, mAP@50, and mAP@50:95 by 1.3%, 3.1%, 3%, and 1.9%, respectively, on VisDrone2019, and by 3.6%, 1.3%, 2.1%, and 0.7%, respectively, on TinyPerson. Additionally, the model size and GFLOPs are only 6.3 MB and 11.3 G, demonstrating its efficiency. Furthermore, compared with classical algorithms such as HIC-YOLOv5, TPH-YOLOv5, and Drone-YOLO, the proposed algorithm demonstrates significant advantages and superior performance. [Conclusions] The algorithm effectively improves detection accuracy, confirming its strong performance in addressing small object detection in complex scenes.
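
    To make the shared-kernel, multi-dilation idea concrete, here is a minimal PyTorch sketch in which one weight tensor is reused at several dilation rates and the responses are averaged. It illustrates the general mechanism only; the paper's MDSKC module may differ in structure and fusion strategy.

```python
# One kernel, several dilation rates: each rate sees a different receptive field while
# sharing the same parameters, and the responses are averaged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedKernelMultiDilationConv(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilations=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(channels, channels, kernel_size, kernel_size))
        nn.init.kaiming_normal_(self.weight)
        self.kernel_size = kernel_size
        self.dilations = dilations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = 0
        for d in self.dilations:
            pad = d * (self.kernel_size - 1) // 2           # "same" padding at dilation d
            out = out + F.conv2d(x, self.weight, padding=pad, dilation=d)
        return out / len(self.dilations)

x = torch.randn(1, 16, 64, 64)
print(SharedKernelMultiDilationConv(16)(x).shape)           # torch.Size([1, 16, 64, 64])
```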

  • LI Lianfa, GAO Xilin, HE Wei, CHEN Miaomiao, YANG Xiaomei, WANG Zhihua, ZHANG Junyao, LIU Xiaoliang
    Journal of Geo-information Science. 2025, 27(2): 331-349. https://doi.org/10.12082/dqxxkx.2024.240278

    [Objectives] As remote sensing classification and interpretation technologies continue to advance, the intelligent interpretation of natural resources in complex environments has become a critical research focus. The accuracy and reliability of remote sensing data interpretation depend fundamentally on the quality and representativeness of the samples used in the analysis. In China, diverse terrain, complex meteorological conditions, and fragmented land surface structures introduce significant spatiotemporal variability, making the selection and quality of remote sensing samples particularly challenging. Traditional sampling methods often fail to adequately represent the full spectrum of characteristics inherent in these diverse landscapes, leading to substantial biases and inaccuracies in interpretation outcomes. [Methods] To address these challenges, this study offers a comprehensive review of key elements in remote sensing classification, encompassing methods for sampling labeled data, techniques for multi-scale morphological transformations to augment samples, and strategies for evaluating the quality of labeled samples. The research emphasizes the critical importance of optimizing sample selection to reduce bias and improve interpretation accuracy. It explores the theoretical foundations for sample optimization, highlighting the necessity of obtaining representative samples that accurately capture the complexity and variability of the land surface. [Results] One of the primary contributions of this study is the development of a novel sampling optimization method that integrates terrain complexity into the sampling process. By considering the diverse and intricate nature of the landscape, our approach enhances the representativeness of the samples, thereby reducing errors introduced by sampling bias and significantly improving the accuracy of remote sensing interpretation. In particular, we emphasize the role of multi-scale morphological transformations, which allow for the expansion of sample diversity and the generation of more robust and generalizable remote sensing models. This process is crucial for creating high-quality labeled samples that can better support complex interpretation tasks. The effectiveness of this complexity-based sample optimization approach is demonstrated through a series of experiments. These experiments reveal significant improvements in interpretation accuracy when compared to traditional sampling methods. This substantial enhancement underscores the value of incorporating terrain complexity and multi-scale transformations in the sampling and interpretation process. [Conclusions] By following the principles and methodologies outlined in this research, practitioners and researchers can obtain high-quality, representative labeled samples that significantly improve the precision and efficiency of remote sensing classification models. The findings of this study provide a solid theoretical and technical foundation for advancing remote sensing intelligent interpretation technology. Furthermore, the research offers practical insights and guidelines for applying these optimized sampling strategies to the classification of natural resources in complex scenarios, ultimately contributing to more accurate and reliable interpretation outcomes in the field of remote sensing.
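
    A minimal sketch of complexity-weighted sampling, assuming a per-cell terrain-complexity score is already available: sampling probability grows with complexity so that heterogeneous areas contribute more labeled samples. The complexity measure and weighting scheme are placeholders, not the paper's method.

```python
# Weighted sampling of labeled cells proportional to a terrain-complexity score.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-cell complexity scores (e.g., local relief or texture entropy), scaled to [0, 1]
complexity = rng.random(1000)

# Sampling probability proportional to complexity, with a floor so simple cells still appear
weights = 0.1 + complexity
probs = weights / weights.sum()

sample_idx = rng.choice(len(complexity), size=200, replace=False, p=probs)
print("Mean complexity, all cells: %.3f  sampled cells: %.3f"
      % (complexity.mean(), complexity[sample_idx].mean()))
```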

  • WANG Zhihua, YANG Xiaomei, ZHANG Junyao, LIU Xiaoliang, LI Lianfa, DONG Wen, HE Wei
    Journal of Geo-information Science. 2025, 27(2): 305-330. https://doi.org/10.12082/dqxxkx.2024.230729

    [Objectives] Remote Sensing Intelligent Interpretation (RSII) often encounters challenges when applied for practical resource and environmental management, especially for complex scenes. To address this, we start from the explanation of why remote sensing interpretation is needed, and clarify that the mission of RSII is to achieve more rapid interpretation to build the digital twin earth with lower cost compared to manual interpretation. However, most RSII systems operate as a unidirectional process from remote sensing data to geoscience knowledge, lacking the feedback from knowledge to data. As a result, remote sensing information extracted from data often mismatch the knowledge of existing geoscience, creating a trust crisis between RSII researchers and geoscience researchers. And the crisis becomes more severe with the uncertainty of remote sensing information. [Analysis] We believe that an agreed upon representation model of geoscience knowledge between RSII researchers and geoscience researchers is necessary to alleviate the crisis. Based on this analysis, we propose a framework using geo-science zoning as the bridge to connect RSII researchers and geoscience researchers. In this framework, knowledge from geoscience could be transferred into the RSII system through geo-science zoning so that the interpretation results could be more coincided with geoscience knowledge. The framework mainly relies on (a) the scene complexity measurement, (b) the knowledge coupling of geographic regions to form the geological zoning method for remote sensing intelligent interpretation, and (c) the sampling specification of regional samples. The scene complexity measurement provides quantitative features for geoscience zoning and sampling weights assignment. Existing zoning data, such as ecological zoning data, geographic elements, and multisource remote sensing images are the main data inputs for geoscience zoning. The main principles for constructing zoning methods include (a) the geoscience elements type, (b) the scale of geoscience zoning, and (c) the process of information flow from data to knowledge. [Prospects] With these models, we can realize regional RSII guided by the knowledge. Preliminary experiments on complexity and optimization sampling, image segmentation scale optimization, cultivated land type fine classification, etc., reveal that this framework has great potential in improving the geoscience knowledge acquisition by RSII, enhancing the accuracy of the state-of-the-art RSII by 6%~10%, especially for the high-complexity nature scenes. However, the superiority of the framework may disappear if the scene for interpretation is simple, like the first level land use/cover classification, which is mainly caused by the inefficient samples after geoscience zoning. Therefore, more attention is needed in sampling when developing geoscience zoning framework.

  • WANG Xingfeng, CHEN Guoliang
    Journal of Geo-information Science. 2025, 27(2): 367-380. https://doi.org/10.12082/dqxxkx.2025.240597

    [Objectives] The active discovery and scientific assessment of the ecological damage caused by mining disturbances in coal mining areas is a key focus in the research of "intelligent mining and green mines". However, traditional analysis methods, characterized by a reliance on monitoring, limited discovery capabilities, dependence on experts, and retrospective assessments, struggle to meet the regulatory authorities’ actual needs for intelligent identification and rapid early warning. This article aims to verify the adaptability of knowledge graph-based spatial reasoning methods for actively detecting and intelligently identifying ecological damage in coal mining areas, while exploring innovative approaches and technologies for ecological environment governance in the modern era. [Methods] By integrating multi-source monitoring data from "Space-Air-Ground-Human" systems and summarizing knowledge on the location, form, group distribution, distribution patterns, and spatiotemporal evolution of ecological units in coal mining areas, an indicator system for describing these units is designed. Additionally, intelligent identification and reasoning rules for ecological damage are constructed using knowledge graph technology. [Results] Coal mining subsidence is a typical land use/cover change phenomenon caused by coal resource extraction. Remote sensing technology is crucial for extracting subsidence and its variations, yet traditional methods often overestimate the affected area due to misclassification of natural water surfaces as mining-induced subsidence. To address this, new knowledge and spatial reasoning rules were introduced to accurately differentiate between mining subsidence areas and natural water surfaces. Using a coal mining area in Shanxi Province as a case study, spatial reasoning rules were applied to identify subsidence units accurately. Experimental results demonstrated that the proposed method improved the precision and intelligent recognition accuracy of mining disturbance units. Compared with traditional recognition approaches, the proposed method reduced false positives by 21.43%. [Conclusions] Knowledge graph technology proves highly adaptable for analyzing and evaluating ecological environments in coal mining areas. It offers technical support for the proactive discovery and accurate identification of ecological damage caused by mining disturbances. Furthermore, it provides new technological tools and ideas for building advanced ecological governance models.
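
    The kind of spatial reasoning described above can be illustrated with a toy predicate that separates mining-induced subsidence waterlogging from natural water surfaces using distance to the working face and temporal evidence. Attribute names and thresholds are illustrative assumptions, not the rules encoded in the paper's knowledge graph.

```python
# Toy rule: a water polygon near an active working face that is newly appeared or rapidly
# expanding is classified as mining-induced subsidence; otherwise it is treated as natural water.
from dataclasses import dataclass

@dataclass
class WaterPolygon:
    distance_to_working_face_m: float   # distance to the nearest mining working face
    existed_before_mining: bool         # present in pre-mining imagery
    area_change_rate: float             # relative area growth between the two epochs

def is_mining_subsidence(p: WaterPolygon) -> bool:
    return (p.distance_to_working_face_m < 1500
            and (not p.existed_before_mining or p.area_change_rate > 0.3))

candidates = [
    WaterPolygon(400, False, 0.9),   # new pond next to a working face -> subsidence
    WaterPolygon(5000, True, 0.05),  # stable lake far from mining -> natural water
]
print([is_mining_subsidence(p) for p in candidates])  # [True, False]
```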

  • LIAN Peige, LI Yingbing, LIU Bo, FENG Xiaoke
    Journal of Geo-information Science. 2025, 27(3): 636-652. https://doi.org/10.12082/dqxxkx.2025.240641

    [Objectives] With accelerating urbanization and a surge in vehicle numbers, urban traffic systems face immense pressure. Intelligent transportation systems, a vital component of smart cities, are widely employed to improve urban traffic conditions, with traffic speed prediction being a key research focus. However, the complex coupling relationships and dynamically varying characteristics of urban traffic network nodes pose challenges for existing traffic speed prediction methods in accurately capturing dynamic spatio-temporal correlations. Spatio-temporal graph neural networks have proven to be among the most effective models for traffic speed prediction tasks. However, most methods heavily rely on prior knowledge, limiting the flexibility of spatial feature extraction and hindering the dynamic representation of road network topology. Recent approaches, such as adaptive adjacency matrix construction, address the limitations of static graphs. However, they often overlook the synergy between dynamic features and static topology, making it difficult to fully capture the complex fluctuations in traffic flow, which in turn limits prediction accuracy and adaptability. [Methods] To address these challenges, this study formulates urban traffic speed prediction as a multivariate time-series forecasting problem and proposes a traffic speed prediction model based on a Multivariate Time-series Dynamic Graph Neural Network (MTDGNN). Leveraging real-time traffic information and predefined static graph structures, the model adaptively generates dynamic traffic graphs to capture spatial dependencies through a graph learning layer and integrates them with static road network graphs to capture spatial dependencies from multiple perspectives. Meanwhile, the alternating use of graph convolution and temporal convolution modules constructs a multi-level spatial neighborhood and temporal receptive field, fully exploring the spatial and temporal features of traffic data. [Results] The MTDGNN model was tested on real traffic data from 397 road sections in eastern Beijing, collected between April 1, 2017, and May 31, 2017. Its prediction results were compared against nine benchmark models and seven ablation models. Compared to benchmark models, MTDGNN reduced the average MAE by at least 2.24% and the average RMSE by at least 3.98%. [Conclusions] Experimental results demonstrate that the MTDGNN model achieves superior prediction accuracy in MAE, RMSE, and MAPE evaluation metrics, highlighting its robustness and effectiveness in complex traffic scenarios.
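
    A generic sketch of a graph learning layer that derives an adaptive adjacency matrix from learnable node embeddings and blends it with a predefined static road-network graph. This only illustrates the dynamic-plus-static graph idea; it is not the MTDGNN architecture.

```python
# Learned ("dynamic") adjacency from node embeddings, fused with a normalized static graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLearningLayer(nn.Module):
    def __init__(self, num_nodes: int, emb_dim: int = 16):
        super().__init__()
        # Learnable source/target node embeddings from which the dynamic graph is derived
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self, static_adj: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
        dynamic = F.softmax(F.relu(self.e1 @ self.e2.t()), dim=1)               # learned adjacency
        static = static_adj / static_adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return alpha * dynamic + (1 - alpha) * static                           # fused graph

num_nodes = 8
static_adj = ((torch.rand(num_nodes, num_nodes) > 0.7).float() + torch.eye(num_nodes)).clamp(max=1)
adj = GraphLearningLayer(num_nodes)(static_adj)
print(adj.shape, adj.sum(dim=1))   # torch.Size([8, 8]); every row sums to 1
```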

  • WU Ruoling, GUO Danhuai
    Journal of Geo-information Science. 2025, 27(5): 1041-1052. https://doi.org/10.12082/dqxxkx.2025.240694

    [Objectives] Understanding whether Large Language Models (LLMs) possess spatial cognitive abilities and how to quantify them are critical research questions in the fields of large language models and geographic information science. However, there is currently a lack of systematic evaluation methods and standards for assessing the spatial cognitive abilities of LLMs. Based on an analysis of existing LLM characteristics, this study develops a comprehensive evaluation standard for spatial cognition in large language models. Ultimately, it establishes a testing standard framework, SRT4LLM, along with standardized testing processes to evaluate and quantify spatial cognition in LLMs. [Methods] The testing standard is constructed along three dimensions: spatial object types, spatial relations, and prompt engineering strategies in spatial scenarios. It includes three types of spatial objects, three categories of spatial relations, and three prompt engineering strategies, all integrated into a standardized testing process. The effectiveness of the SRT4LLM standard and the stability of the results are verified through multiple rounds of testing on eight large language models with different parameter scales. Using this standard, the performance scores of different LLMs are evaluated under progressively improved prompt engineering strategies. [Results] The geometric complexity of input spatial objects influences the spatial cognition of LLMs. While different LLMs exhibit significant performance variations, the scores of the same model remain stable. As the geometric complexity of spatial objects and the complexity of spatial relations increase, LLMs' accuracy in judging the three spatial relations decreases by only 7.2%, demonstrating the robustness of the test standard across different scenarios. Improved prompt engineering strategies can partially enhance LLMs' spatial cognitive Question-Answering (Q&A) performance, with varying degrees of improvement across different models. This verifies the effectiveness of the standard in analyzing LLMs' spatial cognitive abilities. Additionally, multiple rounds of testing on the same LLM indicate that the results are convergent, and score differences between different LLMs exhibit a stable distribution. [Conclusions] SRT4LLM effectively measures the spatial cognitive abilities of LLMs and serves as a standardized evaluation tool. It can be used to assess LLMs' spatial cognition and support the development of native geographic large models in future research.
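
    The three prompt-engineering strategies could look roughly like the templates below (direct, role-based, and chain-of-thought) applied to WKT geometries. The templates and the containment example are illustrative assumptions, not the exact SRT4LLM prompts.

```python
# Three candidate prompting strategies for a spatial-relation question over WKT geometries.
WKT_A = "POLYGON((0 0, 4 0, 4 4, 0 4, 0 0))"
WKT_B = "POINT(2 2)"

PROMPTS = {
    "direct": "Does geometry A contain geometry B?\nA: {a}\nB: {b}\nAnswer yes or no.",
    "role": ("You are a GIS analyst familiar with the DE-9IM model. "
             "Does geometry A contain geometry B?\nA: {a}\nB: {b}\nAnswer yes or no."),
    "chain_of_thought": ("Does geometry A contain geometry B?\nA: {a}\nB: {b}\n"
                         "First describe each geometry, then reason about their spatial "
                         "relationship step by step, then answer yes or no."),
}

def build_prompts(a: str, b: str) -> dict:
    return {name: template.format(a=a, b=b) for name, template in PROMPTS.items()}

for name, prompt in build_prompts(WKT_A, WKT_B).items():
    print(f"--- {name} ---\n{prompt}\n")
```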

  • ZHAO Pengjun, CHEN Xiaoyi, WANG Yiqing, HOU Yongqi, ZHENG Yu
    Journal of Geo-information Science. 2025, 27(3): 539-552. https://doi.org/10.12082/dqxxkx.2024.240313

    [Objectives] The scale, distribution, travel mode structure, and traffic flow of passenger travel demand are the results of spatial interactions within the human social economy across different locations. The complexity of the social and economic operation systems dictates that travel demand prediction must start from the urban system to address the technical challenges of current travel demand forecasting. This paper analyzes the systematic nature of urban transportation and proposes an integrated simulation technology framework that incorporates land, population, housing, and transportation. It also summarizes traffic demand simulation and prediction technology based on urban systems and develops China's first urban system travel demand forecasting technology platform. [Methods] This technology covers sub-modules such as transportation demand distribution, transportation mode share and path allocation, land use simulation, population and employment distribution, real estate price, and carbon emissions to reflect the complete urban system. It includes a series of sub-module variables, including generalized travel cost, location accessibility, real estate price, job-housing relationship coefficients, and land use mixing degrees, to reflect the interactions among subsystems and the time lag effect. Additionally, core algorithms of sub-modules are designed to achieve urban system simulation and prediction. Using Beijing as a case study, the application of this technology platform is demonstrated. A comparison between the actual and simulated values for 2020 shows that the accuracy of simulated results for travel demand, traffic congestion situation, land use, and population distribution is above 85%. [Results] Applying this platform to Beijing, the travel demand, traffic flow, congestion index, population distribution, and land use projections for 2030 were predicted. According to the forecast results, from 2020 to 2030, the total number of traffic trips in Beijing will show a generally stable and slowly declining trend, with strong centripetal characteristics spatially, and trips within each suburb will become more balanced. There will be a slight decrease in the proportion of public transportation travel, a slight reduction in residents' average travel time, and more severe congestion compared to 2020. The expansion of land for residential areas, roads and transportation facilities, green spaces and squares, and commercial services will be more obvious. Resident population will show steady fluctuations, with finger-like extensions along major transportation corridors. [Conclusion] Overall, this paper advances urban transportation theory, innovates urban transportation simulation forecasting methods, and provides new technical support for urban and rural planning and urban transportation planning.
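
    As one concrete example of how generalized travel cost can drive demand distribution in such a platform, the sketch below runs a production-constrained gravity model on a toy three-zone network. The gravity model is a standard textbook method used here purely for illustration, not the platform's actual algorithm.

```python
# Production-constrained gravity model: trips from zone i are split across destinations j
# in proportion to attractiveness A_j times a cost-deterrence function f(c_ij).
import numpy as np

productions = np.array([1000., 800., 600.])      # trips produced per zone
attractions = np.array([900., 700., 800.])       # zonal attractiveness
cost = np.array([[5., 15., 25.],                 # generalized travel cost (minutes)
                 [15., 5., 10.],
                 [25., 10., 5.]])

beta = 0.1
deterrence = np.exp(-beta * cost)                # f(c_ij) = exp(-beta * c_ij)
weights = attractions * deterrence               # A_j * f(c_ij)
trips = productions[:, None] * weights / weights.sum(axis=1, keepdims=True)

print(np.round(trips, 1))                        # origin-destination trip matrix
print(trips.sum(axis=1))                         # rows reproduce zonal productions
```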

  • XU Wenwen, TANG Xinhua, PAN Shuguo, BAO Yachuan, YU Baoguo
    Journal of Geo-information Science. 2025, 27(3): 612-622. https://doi.org/10.12082/dqxxkx.2024.230532

    [Objectives] The ultra-wideband ranging errors in underground narrow spaces show a significant heavy-tailed distribution. The Gaussian mixture model is closer to the empirical distribution than a simple Gaussian probability envelope. The Protection Level (PL) obtained through traditional bounding distribution models is overconservative, which reduces system usability. [Methods] To enhance system usability, an overbounding framework based on the Gaussian Mixture Model (GMM) is employed to handle the Time of Arrival (TOA)-based distance measurements obtained from the ultra-wideband ranging system. First, a Probability Density Function (PDF) of the ranging errors is determined in the form of a dual-component GMM using the Expectation-Maximization (EM) algorithm, which provides an approximation of the practical noise distribution in underground space. The PDF plays a crucial role in computing the subsequent overbounding PL. To ensure the mathematical tractability of the overbounding model, the bilateral boundaries of the errors are examined through the Cumulative Distribution Function (CDF). Next, in correspondence with the GMM-based PDF model, an asymmetrical CDF is obtained, necessitating separate overbounding operations on both sides of the CDF. A heuristic adjustment of the previous GMM-based PDF is conducted to increase the probability density of the bilateral tail parts, ensuring sufficient but not excessive space to guarantee the validity of the PL. Initially, on the left side of the CDF, the weight and variance of the Gaussian component with the lower mean value in the GMM-based PDF are increased. Subsequently, the updated PDF is shifted to the left to create a new version of the GMM-based PDF, with higher values in the left part of the CDF compared to the traditional one. A similar adjustment is conducted for the right part of the original CDF. After the adjustment operations on both sides, two different GMM-based PDFs are obtained for every single base station, one termed the left-boundary PDF and the other the right-boundary PDF. Finally, the predefined PDFs from different UWB base stations are used to infer the overall PDF in the position domain using a convolution operation. Based on this, the PL can be computed from the inverse operation of the corresponding CDFs. [Results] To verify the methodology, experiments under practical simulated underground scenarios are conducted using six UWB base stations. Error models are constructed using sample data collected within a range of 3 to 93 meters. Evaluation of practical performance shows that the GMM-based bilateral bounding method reduces the PL by more than 20% compared to traditional Gaussian-based calculations. [Conclusions] The GMM-based PDF can tighten the PL at a relatively low computational cost, enhancing system usability.
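
    The GMM-plus-CDF machinery can be sketched as follows: fit a two-component mixture to ranging errors with EM and read a protection level off the mixture CDF at a given integrity risk. This is only a generic illustration; the paper's bilateral left/right-boundary adjustments are not reproduced here.

```python
# Fit a two-component GMM to synthetic heavy-tailed ranging errors and compute a protection
# level PL such that P(|error| > PL) <= risk, using the mixture CDF and root-finding.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Narrow core plus a wide tail component, as a stand-in for heavy-tailed UWB ranging errors
errors = np.concatenate([rng.normal(0.0, 0.10, 9000), rng.normal(0.0, 0.60, 1000)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(errors.reshape(-1, 1))
w = gmm.weights_
mu = gmm.means_.ravel()
sigma = np.sqrt(gmm.covariances_.ravel())

def gmm_cdf(x: float) -> float:
    return float(np.sum(w * norm.cdf(x, loc=mu, scale=sigma)))

risk = 1e-5   # integrity risk split across both tails
pl = brentq(lambda x: (1.0 - gmm_cdf(x)) + gmm_cdf(-x) - risk, 1e-3, 10.0)
print(f"GMM-based protection level: {pl:.2f} m")
```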

  • QI Haoxuan, CAO Yi, ZHAO Bin
    Journal of Geo-information Science. 2025, 27(3): 623-635. https://doi.org/10.12082/dqxxkx.2025.240707

    [Objectives] The primary objective is to enhance the accuracy of vehicle trajectory prediction at intersections and address the challenges in predicting trajectories in multi-vehicle interaction scenarios. This is crucial for improving the safety and efficiency of autonomous driving and traffic management in complex urban intersections. [Methods] An Enhanced Adjacency Graph Convolutional Network-Transformer (EAG-GCN-T) vehicle trajectory prediction model is developed. The INTERACTION public dataset is employed, with data smoothing techniques applied to mitigate noise. Model comparison and validation experiments are conducted to assess performance. The model's accuracy is evaluated by comparing error assessment indicators against different baseline models, analyzing interaction capabilities, generalization ability, and driving behavior recognition. The EAG-GCN-T model combines an Enhanced Adjacency Graph Convolutional Network (EAG-GCN) and a Transformer module. The EAG-GCN module accurately models spatial interactions between vehicles by considering relative speed and distance using an enhanced weighted adjacency matrix. The Transformer module captures temporal dependencies and generates future trajectories, improving spatiotemporal prediction ability. [Results] In long-term single-vehicle trajectory prediction, the Average Displacement Error (ADE) is reduced by 69.4%, 39.8%, and 33.3%, and the Final Displacement Error (FDE) by 71.9%, 32.5%, and 27.4%, compared to the CV, ARIMA, and CNN-LSTM models, respectively. In multi-vehicle interaction prediction, the FDE is reduced by 19.5% and 20.6% compared to the GRIP model. Compared with three interaction mechanisms, EAG-GCN-T achieves the lowest overall error across all time domains, with ADE/FDE values of 0.53 and 0.74, respectively. EAG-GCN-T achieves more reasonable Driving Area Compliance (DAC) and Trajectory Point Loss Rate (MR), demonstrating strong adaptability in ramps and roundabouts. The model accurately predicts driving behaviors such as following, lane-changing, evasion, and their impacts on trajectories, with predicted trajectories highly consistent with actual vehicle movements. [Conclusions] The EAG-GCN-T model effectively addresses vehicle trajectory prediction in multi-vehicle interaction scenarios at intersections. It demonstrates high accuracy, strong interactivity, and excellent generalization ability. This model provides a novel solution for vehicle trajectory prediction in intelligent transportation systems, offering significant potential for advancing autonomous driving and intelligent traffic management.
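
    A toy version of an interaction-weighted adjacency matrix built from inter-vehicle distance and relative speed, to show the kind of weighting such a graph module can use. The specific weighting formula is an assumption, not the EAG-GCN definition from the paper.

```python
# Closer vehicles and larger speed differences receive stronger interaction weights.
import numpy as np

positions = np.array([[0.0, 0.0], [8.0, 1.0], [30.0, -2.0]])    # x, y in metres
velocities = np.array([[10.0, 0.0], [7.0, 0.0], [12.0, 0.5]])   # vx, vy in m/s

n = len(positions)
adj = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        dist = np.linalg.norm(positions[i] - positions[j])
        rel_speed = np.linalg.norm(velocities[i] - velocities[j])
        adj[i, j] = (1.0 + rel_speed) / (1.0 + dist)   # illustrative interaction weight

adj /= adj.sum(axis=1, keepdims=True)                  # row-normalize for graph convolution
print(np.round(adj, 3))
```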

  • ZHANG Peng, LIU Wanyue, LIU Chengbao, BO Zheng, NIU Ran, HAN Dongxu, LIN Qian, ZHANG Ziyi, MA Mingze
    Journal of Geo-information Science. 2025, 27(4): 787-800. https://doi.org/10.12082/dqxxkx.2025.240467

    [Significance] The characteristics of the lunar surface, including its mineral compositions, geological formations, environmental factors, and temperature variations, are essential for advancing our understanding of the Moon. These features provide a wealth of scientific data for lunar research, such as resource distribution, environmental characteristics, and evolutionary history. Spectral imagers, which detect mineral compositions in a nondestructive way, play a crucial role in analyzing the mineral compositions of the lunar surface and have become key payloads in scientific exploration missions. With the increasing demand for high-precision lunar exploration data and advancements in spectral imaging technology, there is a growing trend toward acquiring lunar remote sensing data with higher spatial and spectral resolution across a broad spectral range. This trend is shaping the future of lunar orbit exploration, allowing for unprecedented detail in probing the Moon's surface. However, the higher resolution of spatial and spectral data also introduces significant challenges in data processing. [Progress] This paper begins by summarizing existing lunar spectral orbit data, including payload parameters and associated scientific findings. It then explores specific technical challenges in the data processing chain, such as pre-processing and the calculation of lunar surface parameters. Mapping surface compositions through spectral remote sensing is particularly complex due to the mixing of minerals within rocks, which can obscure clear spectral signatures. To address these challenges, various theoretical and empirical approaches have been developed. This paper proposes technical methods and potential solutions to overcome these obstacles. [Conclusions] Detailed studies of lunar surface characteristics and the acquisition of high-resolution spectral data are vital for advancing lunar science. Lunar hyperspectral data are expected to support manned lunar exploration and scientific research by enabling the identification of various minerals on the Moon's surface and determining their abundance through hyperspectral observations. Advances in spectral imaging technology and the development of solutions for processing high-resolution data will significantly enhance lunar and planetary science capabilities. These efforts will pave the way for deeper insights into the Moon's geology and potential resource utilization.

  • HAO Yuanfei, LIU Zhe, ZHENG Xi, QIAN Yun
    Journal of Geo-information Science. 2025, 27(9): 2070-2085. https://doi.org/10.12082/dqxxkx.2025.250129

    [Objectives] Street space serves as the primary perceptual interface for pedestrians in urban environments, and the visual quality of these spaces plays a crucial role in enhancing their vitality. Traditional evaluation methods often rely on single-objective indicators, making it difficult to effectively link objective environmental features with pedestrians' subjective perceptions. [Methods] This study proposes a novel evaluation framework based on Large Language Models (LLMs), incorporating the style dimension of subjective perception and extending traditional single-indicator quantitative analysis to a comprehensive approach that integrates both quantification and stylization. This framework utilizes Baidu Street View imagery to quantitatively assess two objective indicators, namely green view index and sky view factor, through semantic segmentation techniques. Additionally, it evaluates six subjective indicators, including vegetation diversity, building typology, building continuity, sidewalk usage, roadway usage, and signage usage, by leveraging prompt-optimized LLMs. The study then categorizes street space visual quality features within the research area using the Latent Dirichlet Allocation (LDA) topic model, aiming to explore the spatial characteristics of different streets and identify optimization strategies. [Results] Using Beijing's Xicheng District as the study area, the results reveal spatial distribution patterns of vegetation density and sky openness, along with pedestrians' subjective evaluations of indicators such as vegetation diversity and building type. Cluster analysis identified comprehensive service streets centered around Xidan North Street, characteristic streets centered around Xihuangchenggen South Street, and mixed-type streets centered around Lingjing Hutong. [Conclusions] This study innovatively introduces a large language model with human-like perceptual capabilities, enhancing its performance through prompt engineering. The resulting framework enables efficient and integrated evaluation of street visual quality by combining both objective and subjective factors. This approach provides a practical reference for large-scale, automated analysis of street view imagery.
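
    The two objective indicators reduce to pixel shares over a semantic-segmentation label map: green view index as the vegetation share and sky view factor as the sky share. The class IDs and the random label map below are placeholders for real street-view segmentation output.

```python
# Green view index and sky view factor as pixel shares of a segmentation label map.
import numpy as np

VEGETATION, SKY = 1, 2          # hypothetical class IDs in the label map

rng = np.random.default_rng(42)
label_map = rng.integers(0, 5, size=(512, 1024))   # stand-in for a segmented street-view image

def pixel_share(labels: np.ndarray, class_id: int) -> float:
    return float((labels == class_id).mean())

green_view_index = pixel_share(label_map, VEGETATION)
sky_view_factor = pixel_share(label_map, SKY)
print(f"Green view index: {green_view_index:.3f}, sky view factor: {sky_view_factor:.3f}")
```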

  • SONG Qi, GAO Xiaohong, YIN Chengzhuo, HUANG Yanjun, LI Qiaoli, SONG Yuting, MA Xuyan
    Journal of Geo-information Science. 2025, 27(4): 946-966. https://doi.org/10.12082/dqxxkx.2025.240607

    [Objectives] Unmanned Aerial Vehicle (UAV) and satellite remote sensing technologies have been successfully applied to estimate soil organic carbon and other attributes. However, their application to soil texture estimation remains relatively limited, highlighting the need for further research in this area. This study focuses on three farmland plots located in Zhuozhatan Village (Huzhu County), Nilongkou Village (Lalongkou Town, Huangzhong District), and Baitu Village (Lushar Town, Huangzhong District) within the Huangshui River Basin of Qinghai Province. It explores the potential of UAV and satellite remote sensing technologies for estimating soil texture content at the field scale. [Methods] Using UAV platforms equipped with two hyperspectral cameras, field-scale imaging of farmland soils was conducted. Additionally, a field spectrometer was used to collect in-situ soil spectra, and a total of 838 soil samples were collected from 2022 to 2024. Satellite imagery was also obtained for the same time periods, including GF1/2/7 (Gaofen 1/2/7), Sentinel-2A, and ZY1-02D (Ziyuan 1-02D). Laboratory analyses determined soil particle size distribution and acquired indoor soil spectral data. Based on these datasets, statistical modeling and soil texture content estimation were performed using the XGBoost (Extreme Gradient Boosting) method for laboratory, field in-situ, UAV, GF, ZY1-02D, and Sentinel-2 spectral data. Spatial distribution maps of soil texture content were then generated. [Results] ① Among the XGBoost model results, the highest model accuracy for UAV image spectra achieved an RPD (Ratio of Performance to Deviation) of 2.441, while the optimal RPD values for GF1/2/7, ZY1-02D, and Sentinel-2 satellite imagery were 1.815, 1.601, and 1.561, respectively. ② The estimation accuracy based on UAV and satellite imagery was lower than that derived from field spectrometer measurements. The accuracy ranking was as follows: laboratory spectra > field in-situ spectra > UAV image spectra > GF1/2/7 satellite image spectra > ZY1-02D satellite image spectra > Sentinel-2 satellite image spectra. Among soil texture components, clay content estimation showed the highest accuracy (RPD = 2.70), followed by silt (RPD = 2.24) and sand (RPD = 1.91). ③ Sand and clay content exhibited a negative correlation with soil spectral reflectance, whereas silt content displayed a positive correlation. The sensitive bands for sand, silt, and clay content were primarily concentrated in the near-infrared region (780~2 400 nm). ④ The content of sand, silt, and clay exhibited minor variations over three years, demonstrating relative stability. The mapping results for the three plots showed soil texture contents predominantly in the following ranges: 67% < sand ≤ 83%, 10.6% < silt ≤ 19.1%, and 3.2% < clay ≤ 6.6%. [Conclusions] At the field scale, UAV imagery was identified as the most effective data source for soil texture content mapping, providing strong support for precision agricultural management. While GF1/2/7 and ZY1-02D satellite imagery were found to be sufficient for texture mapping, Sentinel-2 satellite imagery was too coarse for field-scale mapping.
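
    A minimal sketch of the XGBoost-plus-RPD evaluation pattern (RPD = standard deviation of observations divided by RMSE), using synthetic spectra in place of the UAV and satellite data; the hyperparameters are illustrative, not the paper's settings.

```python
# Train an XGBoost regressor on stand-in spectral features and score it with RMSE and RPD.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
X = rng.random((400, 50))                                        # stand-in spectral bands
y = 60 + 20 * X[:, 10] - 15 * X[:, 30] + rng.normal(0, 2, 400)   # e.g., sand content (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = float(np.sqrt(np.mean((y_te - pred) ** 2)))
rpd = float(np.std(y_te) / rmse)
print(f"RMSE = {rmse:.2f}, RPD = {rpd:.2f}")   # RPD > 2 is usually read as a good model
```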

  • QIN Chengzhi, ZHU Liangjun, CHEN Ziyue, WANG Yijie, WANG Yujing, WU Chenglong, FAN Xingchen, ZHAO Fanghe, REN Yingchao, ZHU Axing, ZHOU Chenghu
    Journal of Geo-information Science. 2025, 27(5): 1027-1040. https://doi.org/10.12082/dqxxkx.2025.240706

    [Objectives] Geographic modeling aims to appropriately couple diverse geographic models and their specific algorithmic implementations to form an effective and executable model workflow for solving specific, unsolved application problems. This approach is highly valuable and in high demand in practice. However, traditional geographic modeling is designed with an execution-oriented approach, which places a heavy burden on users, especially non-expert users. [Methods] In this position paper, we advocate not only for the necessity of intelligent geographic modeling but also for achieving it through a so-called recursive geographic modeling approach. This new approach originates from the user's modeling target, which can be formalized as an initial elemental modeling question. It then reasons backward to resolve the current elemental modeling question and iteratively updates new elemental modeling questions in a recursive manner. This process enables the automatic construction of an appropriate geographic workflow model tailored to the application context of the user's modeling problem, thereby addressing the limitations of traditional geographic modeling. [Progress] Building on this foundational concept, this position paper introduces a series of intelligent geographic modeling methods developed by the authors. These methods aim to reduce the geographic modeling burden on non-expert users while ensuring the appropriateness of automatically constructed models. Specifically, each proposed intelligent geographic modeling method is designed to solve a specific type of elemental question within intelligent geographic modeling. The elemental questions include: (1) how to determine the appropriate model algorithm (or its parameter values) within the given application context, (2) how to select the appropriate covariate set as input for a model without a predetermined number of inputs (e.g., a soil mapping model without predetermined environmental covariates as inputs), (3) how to determine the structure of a model that integrates multiple coupled modules (e.g., a watershed system model incorporating diverse process simulation modules), and (4) how to determine the proper spatial extent of input data for a geographic model when a specific area of interest is assigned by the user. The key to solving these elemental questions lies in the effective utilization of geographic modeling knowledge, particularly application-context knowledge. However, since application-context knowledge is typically unsystematic, empirical, and implicit, we developed case formalization and case-based reasoning strategies to integrate this knowledge within the proposed methods. Based on the recursive intelligent geographic modeling approach and the corresponding methods, we propose an application schema for intelligent geographic modeling and computing. This schema is grounded in domain modeling knowledge, particularly case-based application-context knowledge, and leverages the “Data-Knowledge-Model” tripartite collaboration. A prototype of this approach has been implemented in an intelligent geospatial computing system called EGC (EasyGeoComputing). [Prospect] Finally, this position paper discusses the emerging role of large language models in geographic modeling. Their potential applications, relationships with the research presented here, and prospects for future research directions are explored.
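
    A highly simplified sketch of the recursive idea: resolve the target variable by picking a model that produces it, then recurse on that model's unresolved inputs until everything is grounded in available data, yielding an ordered workflow. The model catalogue is hypothetical and the selection step is trivially rule-based, unlike the case-based reasoning used in the paper.

```python
# Backward, recursive resolution of a modeling target into an executable workflow order.
MODEL_CATALOGUE = {
    "soil_erosion": {"model": "RUSLE", "inputs": ["rainfall_erosivity", "slope_length_factor"]},
    "slope_length_factor": {"model": "LS_from_DEM", "inputs": ["dem"]},
    "rainfall_erosivity": {"model": "R_from_precipitation", "inputs": ["precipitation"]},
}
AVAILABLE_DATA = {"dem", "precipitation"}

def build_workflow(target: str, steps=None) -> list:
    steps = [] if steps is None else steps
    if target in AVAILABLE_DATA:
        return steps                          # elemental question answered directly by data
    entry = MODEL_CATALOGUE[target]
    for required_input in entry["inputs"]:    # recurse on every unresolved input
        build_workflow(required_input, steps)
    steps.append(f"{entry['model']} -> {target}")
    return steps

print(build_workflow("soil_erosion"))
# ['R_from_precipitation -> rainfall_erosivity', 'LS_from_DEM -> slope_length_factor', 'RUSLE -> soil_erosion']
```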

  • LI Wangping, WEI Wenbo, LIU Xiaojie, CHAI Chengfu, ZHANG Xueying, ZHOU Zhaoye, ZHANG Xiuxia, HAO Junming, WEI Yuming
    Journal of Geo-information Science. 2025, 27(6): 1448-1461. https://doi.org/10.12082/dqxxkx.2025.250034

    [Objectives] Using deep learning methods for landslide identification can significantly improve efficiency and is of great importance for landslide disaster prevention and mitigation. The DeepLabV3+ algorithm effectively captures multi-scale features, thereby improving image segmentation accuracy, and has been widely used in the segmentation and recognition of remote sensing images. [Methods] We propose an improved model based on DeepLabV3+. First, the Coordinate Attention (CA) mechanism is incorporated into the original model to enhance its feature extraction capabilities. Second, the Atrous Spatial Pyramid Pooling (ASPP) module is replaced with the Dense Atrous Spatial Pyramid Pooling (DenseASPP) module, which helps the network capture more detailed features and expands the receptive field, effectively addressing the limitations of inefficient or ineffective dilated convolution. A Strip Pooling (SP) branch module is added in parallel to allow the backbone network to better leverage long-range dependencies. Finally, the Cascade Feature Fusion (CFF) module is introduced to hierarchically fuse multi-scale features, further improving segmentation accuracy. [Results] Experiments on the Bijie landslide dataset show that, compared with the original model, the improved model achieves a 2.2% increase in MIoU and a 1.2% increase in the F1 score. Compared with other mainstream deep learning models, the proposed model demonstrates higher extraction accuracy. In terms of segmentation quality, it significantly improves the overall accuracy in identifying landslide areas, reduces misclassification and omission, and yields more precise delineation of landslide boundaries. [Conclusions] Based on experiments using the landslide debris flow disaster dataset in Sichuan and surrounding areas, along with practical application verification, the proposed method demonstrates strong recognition capability across landslide images in diverse scenarios and levels of complexity. It performs particularly well in challenging environments such as areas with dense vegetation or proximity to rivers, showing strong generalization ability and broad applicability.
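
    A compact PyTorch sketch of a Coordinate Attention block (pool along height and width separately, encode jointly, split back into two directional attention maps), shown to make the CA idea concrete; it is not necessarily the exact module configuration used in the improved model.

```python
# Coordinate Attention: directional pooling + joint encoding + per-direction attention maps.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                           # (n, c, h, 1): pool over width
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)       # (n, c, w, 1): pool over height
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))      # joint encoding
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w)).permute(0, 1, 3, 2)   # (n, c, 1, w)
        return x * a_h * a_w

x = torch.randn(2, 32, 64, 64)
print(CoordinateAttention(32)(x).shape)   # torch.Size([2, 32, 64, 64])
```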

  • LI Junming, HU Yaxuan, WANG Nannan, WANG Siyaqi, WANG Ruolan, LYU Lin, FANG Ziqing
    Journal of Geo-information Science. 2025, 27(7): 1501-1519. https://doi.org/10.12082/dqxxkx.2025.250161

    [Objectives] Classical statistical inference typically relies on the assumptions of large sample sizes and independent, identically distributed (i.i.d.) observations, conditions that spatio-temporal data frequently violate, leading to inherent theoretical limitations in conventional approaches. In contrast, Bayesian spatio-temporal statistical methods integrate prior knowledge and treat all model parameters as random variables, thereby forming a unified probabilistic inference framework. This enables the incorporation of a broader range of uncertainties and offers robustness in modeling small samples and dependent structures, making Bayesian methods highly advantageous and increasingly influential in spatio-temporal analysis. [Progress] From the perspective of methodological evolution, this paper systematically reviews mainstream Bayesian spatio-temporal statistical models from two complementary perspectives: traditional Bayesian statistics and Bayesian machine learning. The former includes Bayesian Spatio-temporal Evolutionary Hierarchical Models, Bayesian Spatio-temporal Regression Hierarchical Models, Bayesian Spatial Panel Data Models, Bayesian Geographically Weighted Spatio-temporal Regression Models, Bayesian Spatio-temporal Varying Coefficient Models, and Bayesian Spatio-temporal Meshed Gaussian Process Models. The latter includes Bayesian Causal Forest Models, Bayesian Spatio-temporal Neural Networks, and Bayesian Graph Convolutional Neural Networks. In terms of application, the review highlights representative studies across domains such as public health, environmental sciences, socio-economic and public safety, as well as energy and engineering. [Prospect] Bayesian spatio-temporal statistical methods need to achieve breakthroughs in multi-source heterogeneous data modeling, integration with deep learning, incorporation of causal inference mechanisms, and optimization of high-performance computing. These advances are essential to balance theoretical rigor with practical adaptability and to promote the development of a next-generation spatio-temporal modeling paradigm characterized by causal inference, adaptive generalization, and intelligent analysis.
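
    As a reference point for the model family reviewed here, a generic Bayesian spatio-temporal hierarchical regression can be written as below; this is a standard textbook-style formulation, not a model taken from any specific paper in the review.

```latex
% Generic Bayesian spatio-temporal hierarchical regression:
% observation model, latent autoregressive spatio-temporal process, and priors.
\begin{aligned}
y(s_i, t) &= \mathbf{x}(s_i, t)^{\top}\boldsymbol{\beta} + \omega(s_i, t) + \varepsilon(s_i, t),
  & \varepsilon(s_i, t) &\sim \mathcal{N}(0, \sigma^{2}_{\varepsilon}) \\
\omega(s_i, t) &= \rho\,\omega(s_i, t-1) + \eta(s_i, t),
  & \eta(\cdot, t) &\sim \mathcal{GP}\bigl(0, C_{\theta}(\cdot,\cdot)\bigr) \\
\boldsymbol{\beta} &\sim \mathcal{N}(\mathbf{0}, \tau^{2}\mathbf{I}), \qquad
  \sigma^{2}_{\varepsilon},\ \rho,\ \theta \sim \text{priors}
\end{aligned}
```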

  • SUI Xin, HAO Yuting, CHEN Zhijian, WANG Changqiang, SHI Zhengxu, XU Aigong
    Journal of Geo-information Science. 2025, 27(2): 397-410. https://doi.org/10.12082/dqxxkx.2024.230648

    [Objectives] Scene understanding based on 3D laser point clouds plays a core role in many applications such as object detection, 3D reconstruction, cultural relic protection, and autonomous driving. The semantic classification of 3D point clouds is an important task in scene understanding, but the large data volume, diverse targets, large differences in scale, and occlusion by buildings and trees make this task challenging. Existing deep learning models for point cloud classification face several difficulties arising from the unstructured and disordered nature of point clouds, including inadequate extraction of local and global features and the absence of an efficient mechanism for contextual feature integration, which makes fine-grained classification of ground objects difficult. Therefore, this study introduces a novel point cloud classification approach that incorporates a multi-scale convolutional attention network for both local and global features. [Methods] To address the lack of structure in point clouds, we construct a local weighted graph to model the positional relationships between central points and their neighboring points. This graph facilitates dynamic adjustment of kernel weights, enabling the extraction of more representative local features. Simultaneously, we introduce a global graph attention module to account for the overall spatial distribution of points, address the disorder of point clouds, and effectively capture global contextual features, thereby integrating information at different scales. Furthermore, we design an adaptive weighted pooling module to facilitate the seamless fusion of local and global features, thus maximizing the network's classification performance. [Results] The proposed method is evaluated using the publicly available Toronto-3D point cloud dataset and a campus point cloud dataset obtained from real measurements. We compare its performance against several network models, including PointNet++, DGCNN, RandLA-Net, BAAF-Net, and BAF-LAC. On the Toronto-3D dataset, our method achieves an OA of 97.21% and an MIoU of 85.46%, improving OA by 1.99% to 8.21% and MIoU by 3.23% to 35.86% over these models. On the campus dataset, our method achieves an OA of 97.38% and an MIoU of 85.70%, improving OA by 0.58% to 10.53% and MIoU by 2.01% to 32.01%. [Conclusions] These results surpass those achieved by the comparison networks and effectively overcome problems such as large changes in target scale and building occlusion, establishing our method's capability to achieve high-precision and efficient fine classification of ground objects in complex road scenes.
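
    For intuition, the sketch below shows one simple way to build a distance-weighted local neighbourhood aggregation over an unstructured point cloud with NumPy. It only illustrates the general idea of a locally weighted graph; the function names and the exponential weighting scheme are hypothetical and are not the network described in the paper.

        import numpy as np

        def knn_indices(points, k):
            """Brute-force k-nearest neighbours for an (N, 3) point array (excluding self)."""
            d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
            np.fill_diagonal(d2, np.inf)
            return np.argsort(d2, axis=1)[:, :k], d2

        def local_weighted_aggregation(points, feats, k=16):
            """Aggregate neighbour features with weights that decay with distance,
            a simple stand-in for a locally weighted graph."""
            idx, d2 = knn_indices(points, k)                        # (N, k)
            neigh_feats = feats[idx]                                # (N, k, C)
            neigh_d = np.sqrt(np.take_along_axis(d2, idx, axis=1))  # (N, k)
            w = np.exp(-neigh_d)                                    # distance-decay weights
            w = w / (w.sum(axis=1, keepdims=True) + 1e-8)
            return (w[..., None] * neigh_feats).sum(axis=1)         # (N, C)

        # Toy usage on a random cloud
        pts = np.random.rand(1000, 3).astype(np.float32)
        feats = np.random.rand(1000, 8).astype(np.float32)
        agg = local_weighted_aggregation(pts, feats)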

  • LIU Chengbao, BO Zheng, ZHANG Peng, ZHOU Miyu, LIU Wanyue, HUANG Rong, NIU Ran, YE Zhen, YANG Hanzhe, LIU Shijie, HAN Dongxu, LIN Qian
    Journal of Geo-information Science. 2025, 27(4): 801-819. https://doi.org/10.12082/dqxxkx.2025.240466

    [Significance] Lunar remote sensing is a critical method to ensure the safety and success of lunar exploration missions while advancing lunar scientific research. It plays a significant role in understanding the Moon's geological evolution and the formation of the Earth-Moon system. Accurate lunar topographic maps are essential for mission planning, including landing site selection, navigation, and resource identification. These maps also provide valuable data for studying planetary processes and the history of the solar system. [Progress] In recent years, with growing global interest and investment in lunar exploration, remarkable progress has been made in remote sensing technology. These advancements have significantly improved the precision, resolution, and coverage of lunar topographic mapping. Various lunar remote sensing missions, such as China's Chang'e program, NASA's Lunar Reconnaissance Orbiter, and missions by other space agencies, have acquired substantial amounts of multi-source, multi-modal, and multi-scale data. This wealth of data has laid a solid foundation for technological breakthroughs. For instance, high-resolution laser altimetry, optical photogrammetry, and synthetic aperture radar have provided detailed datasets, enabling refined mapping of the Moon's surface. However, the dramatic increase in data volume, complexity, and heterogeneity presents challenges for effective processing, integration, and application in topographic mapping. This paper provides a comprehensive overview of the current state of lunar topographic remote sensing and mapping, focusing on the implementation and data acquisition capabilities of major lunar remote sensing missions during the second wave of lunar exploration. It systematically summarizes the latest research progress in key surveying and mapping technologies, including laser altimetry, which enables precise elevation measurements; optical photogrammetry, which reconstructs surface features using high-resolution imagery; and synthetic aperture radar, which provides unique insights into topographic and subsurface structures. [Prospect] In addition to reviewing recent advancements, the paper discusses future trends and challenges in the field. Key recommendations include enhancing sensor functionality and performance metrics to improve data quality, optimizing the lunar absolute reference framework for consistency and accuracy, leveraging multi-source data fusion for fine-scale modeling, expanding scientific applications of lunar topography, and developing intelligent and efficient methods to process massive amounts of remote sensing data. These efforts will not only support upcoming lunar exploration missions, such as China's manned lunar landing program scheduled for 2030, but also contribute to a deeper understanding of the Moon and its relationship with Earth.

  • LIU Chang, SHI Erpeng, GUO Shiyi, GUO Liang, SUN Xiaoli
    Journal of Geo-information Science. 2025, 27(3): 585-600. https://doi.org/10.12082/dqxxkx.2024.230576

    [Objectives] Urban public transportation service quality is an important factor affecting residents' travel choices and quality of life, but the current development and reform of urban public transportation in China still have shortcomings; it is therefore necessary to incorporate public perception into the decision-making basis and improve service quality from the residents' perspective. Previous studies have two main limitations: first, they rely on traditional analysis methods based on traffic surveys, which fail to capture regional differences in perceived service quality; second, they use big data from social media platforms, which are prone to information bias, polarization, and other issues and may not reflect the public's real needs. Moreover, they mostly focus on public opinion analysis without providing specific, feasible optimization paths. [Methods] To address these gaps, this paper proposes a method that combines public network participation and semantic analysis. It uses internet big data to extract online messages related to urban public transportation from the online interactive platform between government and citizens and analyzes their spatiotemporal features and perceived service quality. It also conducts spatial analysis and explores the service efficiency of the public transportation system in relation to the distribution of transportation facilities, and offers optimization suggestions on this basis. The paper selects Wuhan as a case study, one of the national central cities and an important megacity in the middle reaches of the Yangtze River. The urban development area of Wuhan is a key zone for urbanization and a major hub for public travel activities, covering 15 functional zones. It has a well-developed allocation of public transportation facilities, including all subway lines and stations and most bus lines and stations in the city. [Results] The main findings are as follows: (1) Public network participation data reflect the spatiotemporal patterns of actual travel activities and have high credibility; (2) The emotional expression of the public varies across individuals and regions, and the perceived service quality dimensions can be categorized into five topics: "public transportation planning and construction", "public transportation travel conditions", "residential community bus configuration", "public transportation route setting", and "public transportation operation service". Furthermore, the perceived service quality exhibits spatial imbalance and agglomeration; (3) Corresponding optimization suggestions are made for the road system in the main urban area, subway stations in the outer urban areas, and bus routes at the junction of the main and outer urban areas. [Conclusions] The research results of this paper provide a new method for fine-grained identification and optimization of spatial differences in the perceived service quality of urban public transportation, and also demonstrate the application value of public network participation data in facilitating government decision-making.
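
    Since the abstract reports spatial imbalance and agglomeration of perceived service quality, a standard way to quantify such agglomeration is global Moran's I computed over zone-level scores. The sketch below is a generic NumPy implementation with toy data; it is not the paper's actual analysis, and the weight matrix and scores are hypothetical.

        import numpy as np

        def morans_i(values, weights):
            """Global Moran's I for zone-level scores (e.g., mean perceived service
            quality per functional zone) given a spatial weight matrix (binary
            contiguity or row-standardised)."""
            x = np.asarray(values, dtype=float)
            w = np.asarray(weights, dtype=float)
            n = x.size
            z = x - x.mean()
            num = n * np.sum(w * np.outer(z, z))   # n * sum_ij w_ij z_i z_j
            den = w.sum() * np.sum(z ** 2)         # S0 * sum_i z_i^2
            return num / den

        # Toy example: 15 zones with random scores and symmetric 0/1 contiguity weights
        rng = np.random.default_rng(0)
        scores = rng.normal(size=15)
        W = rng.integers(0, 2, size=(15, 15))
        W = np.triu(W, 1)
        W = W + W.T                                # symmetric, zero diagonal
        print(morans_i(scores, W))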

  • WENG Mingkai, XIAO Guirong
    Journal of Geo-information Science. 2025, 27(5): 1113-1128. https://doi.org/10.12082/dqxxkx.2025.250050

    [Objectives] The quality of training samples significantly impacts model performance and prediction accuracy. In regions with limited sample data, the small number of samples and their uneven spatial distribution may prevent the model from effectively learning the features of disaster-inducing factors. This increases the risk of overfitting and ultimately affects the accuracy of model predictions. Therefore, it is crucial to collect and optimize training samples based on regional characteristics. [Methods] To address this issue, this study proposes a sampling optimization method for training samples. The method combines the Prototype Sampling (PBS) approach for selecting landslide-positive samples with an unsupervised clustering model for negative sample selection. This results in a screened and expanded positive sample dataset and an objectively extracted negative sample dataset, which together form the sampling-optimized (SO) training dataset. Subsequently, the Random Forest (RF) and Support Vector Machine (SVM) models, which are well suited for handling small sample data, were employed to construct a landslide susceptibility evaluation model. Comparative experiments were conducted using the Raw Data (RD), a dataset with only Data Augmentation (DA), and the SO dataset. Model prediction performance was assessed using metrics such as the Area Under the Curve (AUC). Additionally, the frequency ratio method was applied to optimize the results of landslide susceptibility zoning. Finally, a case study was conducted in Putian City, where landslide sample data are relatively scarce, to verify the effectiveness and generalization capability of the proposed sampling optimization method. [Results] The results indicate that models trained on the SO dataset achieved AUC improvements of 10.69% and 18.23% compared to those trained on the RD and DA datasets, respectively, demonstrating a significant enhancement in predictive performance. This suggests that selecting and expanding positive samples while objectively extracting negative samples can improve model accuracy and mitigate the overfitting problem during training. Furthermore, the frequency ratio analysis revealed that the SO-RF model achieved higher frequency ratios in regions with extremely high and high susceptibility than the SO-SVM model, indicating that SO-RF is more suitable for evaluating landslide susceptibility in regions with limited landslide sample data, such as Putian City. [Conclusions] The proposed training sample optimization approach, combined with machine learning evaluation methods, demonstrates high applicability and accuracy. Therefore, the findings of this study provide valuable insights into machine learning-based sampling strategies for landslide susceptibility assessment.
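
    The general workflow of clustering-based negative-sample selection followed by RF training and AUC evaluation can be sketched with scikit-learn as below. The prototype-distance rule, sample sizes, and feature data are hypothetical stand-ins for the paper's PBS and clustering steps, intended only to illustrate the pipeline.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X_pos = rng.normal(loc=1.0, size=(60, 8))          # scarce landslide (positive) samples
        X_unlabeled = rng.normal(loc=0.0, size=(5000, 8))  # candidate non-landslide cells

        # Negative-sample selection via unsupervised clustering: keep clusters whose
        # centroids lie far from the positive-sample prototype (here simply the mean).
        prototype = X_pos.mean(axis=0)
        km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_unlabeled)
        far = np.argsort(np.linalg.norm(km.cluster_centers_ - prototype, axis=1))[-5:]
        X_neg = X_unlabeled[np.isin(km.labels_, far)][:600]

        # Train an RF susceptibility model on the optimized samples and report AUC.
        X = np.vstack([X_pos, X_neg])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))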

  • DING Yan, MA Yaohong, WANG Jiale, LI Yunhao, CHEN Biyu
    Journal of Geo-information Science. 2025, 27(3): 653-667. https://doi.org/10.12082/dqxxkx.2025.240639

    [Objectives] Accurate and reliable traffic state prediction is essential for various applications in Intelligent Transportation Systems (ITS). However, the complexity of urban road networks makes it challenging to effectively model spatial dependencies between road segments, posing a significant obstacle to urban traffic forecasting. Traditional Graph Convolutional Networks (GCNs) are widely used for traffic prediction but fail to account for the unique characteristics of traffic networks, such as driving directions, turning rules, and varying spatial dependencies. This study aims to address these challenges by proposing a novel graph convolutional network model, the Turn-based Graph Convolutional Neural Network (TurnGCN), which better captures the complex spatial relationships in urban traffic networks. [Methods] TurnGCN models the urban road network as a heterogeneous graph, where edges represent turning relationships between road segments. Unlike traditional GCNs that rely on static adjacency matrices, TurnGCN introduces a turning table to label neighboring nodes and map their features into a structured Euclidean feature grid. A Convolutional Neural Network (CNN) is then applied to this grid to aggregate and fuse the spatial features of neighboring nodes. This approach allows TurnGCN to model the heterogeneity of turning relationships and learn their varying impacts on the central road segment. Additionally, the parameter-sharing nature of CNNs ensures that TurnGCN performs efficiently with relatively fewer trainable parameters. [Results] To validate the effectiveness of TurnGCN, extensive experiments were conducted on two real-world traffic datasets: Urban-150 from Seoul, South Korea, and SHSpeed from Shanghai, China. These datasets vary in sampling density and temporal resolution, presenting diverse evaluation challenges. The results demonstrate that TurnGCN consistently outperforms traditional GCNs and GCN variants enhanced with spatial attention mechanisms across multiple evaluation metrics. Specifically, TurnGCN excels in capturing heterogeneous spatial dependencies and modeling turning relationships in urban road networks. [Conclusions] TurnGCN provides a robust, efficient, and scalable solution for urban traffic prediction by explicitly modeling turn-based spatial relationships. It overcomes the limitations of traditional GCNs and attention-based models, achieving significant improvements in predictive performance while maintaining computational efficiency. These advantages highlight TurnGCN’s potential for practical applications in ITS, including traffic flow optimization, congestion management, and intelligent navigation systems.
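
    To illustrate the turning-table idea in code, the following PyTorch sketch gathers each segment's neighbours into fixed turn slots and aggregates them with a shared convolution. The slot layout, masking rule, and residual connection are assumptions made for illustration; this is not the TurnGCN architecture itself.

        import torch
        import torch.nn as nn

        class TurnAwareConv(nn.Module):
            """Gather each segment's neighbours by turn type into a fixed grid and
            aggregate them with a shared CNN (a sketch of the turn-table idea)."""
            def __init__(self, channels, num_slots):
                super().__init__()
                # One slot per turn relation (e.g. left / straight / right / U-turn).
                self.agg = nn.Conv1d(channels, channels, kernel_size=num_slots)

            def forward(self, x, turn_table):
                # x: (N, C) segment features; turn_table: (N, K) neighbour index per
                # slot, with -1 where that turn relation does not exist.
                mask = (turn_table >= 0).float().unsqueeze(1)          # (N, 1, K)
                idx = turn_table.clamp(min=0)                          # (N, K)
                neigh = x[idx].permute(0, 2, 1) * mask                 # (N, C, K)
                return torch.relu(self.agg(neigh).squeeze(-1)) + x     # (N, C)

        # Toy usage with hypothetical sizes: 100 segments, 16 channels, 4 turn slots
        x = torch.randn(100, 16)
        table = torch.randint(-1, 100, (100, 4))
        out = TurnAwareConv(16, 4)(x, table)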

  • ZHAO Jinzhao, WEI Zhicheng
    Journal of Geo-information Science. 2025, 27(3): 682-697. https://doi.org/10.12082/dqxxkx.2025.240621

    [Objectives] City-wide traffic flow prediction plays a crucial role in intelligent transportation systems. Traditional studies partition road networks into grids, represent them as graph structures with grids as nodes, and use graph neural networks for region-level prediction. However, this region-based approach overlooks the relationships between individual roads, making it difficult to reflect traffic flow changes on specific roads. Methods based on road segment data can better capture spatial connections between roads and enable more accurate traffic flow predictions. However, mapping trajectory data to roads presents challenges such as redundant data and trajectory mismatches, and the traffic flow data obtained after mapping are sparse. Existing methods struggle to effectively capture spatial correlation under sparse traffic conditions. [Methods] To address these issues, this study proposes an Attention Spatio-Temporal Neural Network (ASTNN) model for road-level sparse traffic flow prediction. The model first preprocesses trajectory data and applies Hidden Markov Model (HMM)-based map matching to obtain road-level traffic flow data. It then introduces an adaptive compact 2D image representation method to model the road network as a 2D image, where road segments are represented as pixel points. Based on an analysis of the spatial and temporal characteristics of traffic flow, two new attentional spatio-temporal blocks are proposed: the Attentional Spatio-Temporal Memory (ASTM) block for mining temporal correlations and the Attentional Spatio-Temporal Focusing (ASTF) block for extracting sparse spatial features. By integrating these two blocks with external information, ASTNN achieves road-level traffic flow prediction. [Results] This study uses Chengdu taxi trajectory data as a case study. After preprocessing the trajectory data and mapping traffic flow, the proposed model is validated on a five-level road network within Chengdu's third ring area. Results indicate that the proposed data processing method reduces trajectory-to-road network matching time by 73.6%. In comparative experiments with existing models, such as the Convolutional Neural Network (CNN), Convolutional Long Short-Term Memory (ConvLSTM), Gated Recurrent Unit (GRU), and Spatial-Temporal Neural Network (STNN), ASTNN achieves the highest prediction accuracy in terms of Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and R-squared (R²). Furthermore, the study confirms a significant improvement in prediction accuracy when temperature data are incorporated into ASTNN, providing new insights for optimizing model performance. [Conclusions] The ASTNN model proposed in this study provides an effective framework for city-wide, road-level sparse traffic flow prediction, offering valuable insights for intelligent transportation systems.
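
    As a simplified illustration of the "road segments as pixel points" representation, the NumPy sketch below rasterises road-level flows into a compact image stack. The pixel-coordinate assignment is assumed to be precomputed; the paper's adaptive compact mapping (including how it avoids pixel collisions) is not reproduced here.

        import numpy as np

        def build_flow_images(flow, seg_xy, grid_hw):
            """Rasterise road-level flow into compact 2D 'images': each road segment
            occupies one pixel; each time interval becomes one frame.
            flow:   (T, N) traffic counts for N segments over T intervals
            seg_xy: (N, 2) precomputed pixel coordinates (row, col) per segment
            """
            T, N = flow.shape
            H, W = grid_hw
            imgs = np.zeros((T, H, W), dtype=np.float32)
            rows, cols = seg_xy[:, 0], seg_xy[:, 1]
            imgs[:, rows, cols] = flow          # sparse: untouched pixels stay zero
            return imgs

        # Toy usage with hypothetical sizes
        T, N, H, W = 24, 500, 32, 32
        flow = np.random.poisson(3.0, size=(T, N)).astype(np.float32)
        seg_xy = np.stack([np.random.randint(0, H, N), np.random.randint(0, W, N)], axis=1)
        imgs = build_flow_images(flow, seg_xy, (H, W))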

  • QIN Qiming
    Journal of Geo-information Science. 2025, 27(10): 2283-2290. https://doi.org/10.12082/dqxxkx.2025.250426

    [Objectives] With the rapid increase in the number of Earth observation satellites in orbit worldwide, remote sensing data has been accumulating explosively, offering unprecedented opportunities for Earth system science research to dynamically monitor global change. At the same time, it also brings a series of challenges, including multi-source heterogeneity, scarcity of labeled data, insufficient task generalization, and data overload. [Methods] To address these bottlenecks, Google DeepMind has proposed AlphaEarth Foundations (AEF), which integrates multimodal data such as optical imagery, SAR, LiDAR, climate simulations, and textual sources to construct a unified 64-dimensional embedding field. This framework achieves cross-modal and spatiotemporal semantic consistency for data fusion and has been made openly available on platforms such as Google Earth Engine. [Results] The main contributions of AEF can be summarized as follows: (1) Mitigating the long-standing “data silos” problem by establishing globally consistent embedding layers; (2) Enhancing semantic similarity measurement through a von Mises-Fisher (vMF) spherical embedding mechanism, thereby supporting efficient retrieval and change detection; (3) Shifting complex preprocessing and feature engineering tasks into the pre-training stage, enabling downstream applications to become “analysis-ready” and significantly reducing application costs. The paper further highlights the application potential of AEF in three stages: (1) Initially in land cover classification and change detection; (2) Subsequently in deep coupling of embedding vectors with physical models to drive scientific discovery; (3) Ultimately evolving into a spatial intelligence infrastructure, serving as a foundational service for global geospatial intelligence. Nevertheless, AEF still faces several challenges: (1) Limited interpretability of embedding vectors, which constrains scientific attribution and causal analysis; (2) Uncertainties in domain transfer and cross-scenario adaptability, with robustness in extreme environments yet to be verified; (3) Performance advantages that require more empirical validation across regions and independent experiments. [Conclusions] Overall, AEF represents a new direction for research in remote sensing and geospatial artificial intelligence, with breakthroughs in data efficiency and cross-task generalization providing solid support for future Earth science studies. However, its further development will depend on continuous advances in interpretability, robustness, and empirical validation, as well as on transforming the 64-dimensional embedding vectors into widely usable data resources through different pathways.
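
    The retrieval and change-detection uses of a 64-dimensional embedding field can be illustrated with plain cosine similarity, as in the NumPy sketch below. The arrays are random stand-ins rather than actual AEF embeddings, and the sketch does not use the Google Earth Engine API.

        import numpy as np

        def cosine_similarity_map(emb_a, emb_b):
            """Pixel-wise cosine similarity between two (H, W, 64) embedding rasters;
            low similarity flags candidate change between the two epochs."""
            num = np.sum(emb_a * emb_b, axis=-1)
            den = np.linalg.norm(emb_a, axis=-1) * np.linalg.norm(emb_b, axis=-1) + 1e-9
            return num / den

        def nearest_to_query(embeddings, query_vec, top_k=10):
            """Similarity search: rank pixels by cosine similarity to a labelled
            reference embedding (e.g., a known land-cover example)."""
            flat = embeddings.reshape(-1, embeddings.shape[-1])
            sims = flat @ query_vec / (np.linalg.norm(flat, axis=1) * np.linalg.norm(query_vec) + 1e-9)
            order = np.argsort(sims)[::-1][:top_k]
            return order, sims[order]

        # Toy 64-dimensional embedding rasters (random data, not AEF outputs)
        e1 = np.random.randn(50, 50, 64).astype(np.float32)
        e2 = e1 + 0.1 * np.random.randn(50, 50, 64).astype(np.float32)
        change_score = 1.0 - cosine_similarity_map(e1, e2)   # higher -> more likely changed
        top_idx, top_sims = nearest_to_query(e2, e2[0, 0])   # retrieve pixels similar to one reference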

  • ZHANG Jiangyue, SU Shiliang
    Journal of Geo-information Science. 2025, 27(2): 441-460. https://doi.org/10.12082/dqxxkx.2025.240513

    [Background] Chinese Classical Gardens (CCGs), as integral components of world cultural heritage and essential urban recreational spaces, hold profound cultural, historical, and aesthetic value. Renowned for their intricate design, these gardens provide cultural ecosystem services through dynamic interactions between tourists and landscapes. Visual perception plays a pivotal role in these interactions, directly influencing how visitors engage with and interpret the "scenery", a concept central to CCGs. With rapid advancements in 3D real scene reconstruction and digital simulation technologies, a pressing challenge has emerged: developing a 3D data model for CCGs tailored to visual perception computing. Traditional models fail to capture the complex interplay between spatial elements and human perceptual responses. [Objectives] This study aims to address this challenge by tackling three core methodological issues: (1) constructing a visual perception framework to represent the unique "scenery" concept inherent to CCGs; (2) analyzing tourist behavior through the lens of visual perception processes; and (3) organizing a 3D data model that supports robust analysis and visualization. [Methods] To systematically address these challenges, the study elaborates on a visual perception framework for CCGs, integrating four critical stages of visitors' visual experiences: object (what is seen), path (how one navigates), subject (who perceives), and outcome (the resulting impressions and emotions). This framework incorporates spatial narratives, consisting of a narrative symbol system and strategies, and landscape space composition, distinguishing among environmental space, visual perception space, and visual cognition space. Building on this framework, a novel 3D data model tailored to visual perception computing in CCGs is proposed. The model is structured into three interrelated layers: the physical features layer (capturing spatial and structural details), the behavior patterns layer (analyzing tourists' movements and gaze behaviors), and the analytical layer (integrating visual perception metrics). [Results] The feasibility of the proposed approach is demonstrated through a case study of the Humble Administrator's Garden in Suzhou. The implementation process involves acquiring physical data, configuring behavioral data, setting up the storage environment, and computing visual perception. This multi-layered approach provides a theoretical framework for understanding visual perception in CCGs and establishes a methodological pathway for applying 3D technologies to cultural heritage research. [Conclusions] The proposed 3D data model offers a deeper understanding of visual perception within CCGs, facilitating new insights into spatial design and visitor experiences. Furthermore, the methods outlined in this paper have broader implications for studying and preserving other cultural heritage sites, advancing the integration of digital technology in heritage conservation and cultural landscape analysis.
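
    One possible way to organise the three-layer model in code is sketched below with Python dataclasses. All class and field names are hypothetical illustrations of the physical features, behavior patterns, and analytical layers described above, not the authors' schema.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class PhysicalFeature:
            """Physical features layer: a spatial/structural element of the garden scene."""
            feature_id: str
            category: str                                   # e.g. pavilion, rockery, water, planting
            geometry: List[Tuple[float, float, float]]      # simplified 3D vertices

        @dataclass
        class BehaviourRecord:
            """Behavior patterns layer: one visitor's movement and gaze along a path."""
            visitor_id: str
            path: List[Tuple[float, float, float]]          # positions over time
            gaze_targets: List[str]                         # feature_ids looked at

        @dataclass
        class PerceptionMetric:
            """Analytical layer: visual-perception indicators computed from the other layers."""
            feature_id: str
            visibility_ratio: float                         # share of path points from which it is visible
            dwell_time_s: float                             # accumulated gaze duration

        @dataclass
        class GardenSceneModel:
            physical: List[PhysicalFeature] = field(default_factory=list)
            behaviour: List[BehaviourRecord] = field(default_factory=list)
            analytics: List[PerceptionMetric] = field(default_factory=list)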