Most Viewed

  • HE Guojin, LIU Huichan, YANG Ruiqing, ZHANG Zhaoming, XUE Yuan, AN Shihao, YUAN Mingruo, WANG Guizhou, LONG Tengfei, PENG Yan, YIN Ranyu
    Journal of Geo-information Science. 2025, 27(2): 273-284. https://doi.org/10.12082/dqxxkx.2025.240630

    [Significance] Data resources have become pivotal in modern production, evolving in close synergy with advancements in artificial intelligence (AI) technologies, which continuously cultivate new, high-quality productive forces. Remote sensing data intelligence has naturally emerged as a result of the rapid expansion of remote sensing big data and AI. This integration significantly enhances the efficiency and accuracy of remote sensing data processing while bolstering the ability to address emergencies and adapt to complex environmental changes. Remote sensing data intelligence represents a transformative approach, leveraging state-of-the-art technological advancements and redefining traditional paradigms of remote sensing information engineering and its applications. [Analysis] This paper delves into the technological background and foundations that have facilitated the emergence of remote sensing data intelligence. The rapid development of technology has provided robust support for remote sensing data intelligence, primarily in three areas: the advent of the big data era in remote sensing, significant advancements in remote sensing data processing capabilities, and the flourishing research on remote sensing large models. Furthermore, a comprehensive technical framework is proposed, outlining the critical elements and methodologies required for implementing remote sensing data intelligence effectively. To demonstrate the practical applications of remote sensing data intelligence, the paper presents a case study on applying these techniques to extract ultra-high-resolution centralized and distributed photovoltaic information in China. [Results] By integrating large models with remote sensing data, the study demonstrates how remote sensing data intelligence enables precise identification and mapping of centralized and distributed photovoltaic installations, offering valuable insights for energy management and planning. The effectiveness of remote sensing data intelligence in addressing challenges associated with large-scale photovoltaic extraction underscores its potential for application in critical fields. [Prospect] Finally, the paper provides an outlook on areas requiring further study in remote sensing data intelligence. It emphasizes that high-quality data serves as the foundation for remote sensing data intelligence and highlights the importance of constructing AI-ready knowledge bases and recognizing the value of small datasets. Developing targeted and efficient algorithms is essential for achieving remote sensing intelligence, making the advancement of practical data intelligence methods an urgent research priority. Furthermore, promoting multi-level services for remote sensing data, information, and knowledge through data intelligence should be prioritized. This research provides a comprehensive technical framework and forward-looking insights for remote sensing data intelligence, offering valuable references for further exploration and implementation in critical fields.

  • ZHANG Jiangyue, SU Shiliang
    Journal of Geo-information Science. 2025, 27(2): 441-460. https://doi.org/10.12082/dqxxkx.2025.240513

    [Background] Chinese Classical Gardens (CCGs), as integral components of world cultural heritage and essential urban recreational spaces, hold profound cultural, historical, and aesthetic value. Renowned for their intricate design, these gardens provide cultural ecosystem services through dynamic interactions between tourists and landscapes. Visual perception plays a pivotal role in these interactions, directly influencing how visitors engage with and interpret the "scenery"—a concept central to CCGs. With rapid advancements in 3D real scene reconstruction and digital simulation technologies, a pressing challenge has emerged: developing a 3D data model for CCGs tailored to visual perception computing. Traditional models fail to capture the complex interplay between spatial elements and human perceptual responses. [Objectives] This study aims to address this challenge by tackling three core methodological issues: (1) constructing a visual perception framework to represent the unique "scenery" concept inherent to CCGs; (2) analyzing tourist behavior through the lens of visual perception processes; and (3) organizing a 3D data model that supports robust analysis and visualization. [Methods] To systematically address these challenges, the study elaborates on a visual perception framework for CCGs, integrating four critical stages of visitors' visual experiences: object (what is seen), path (how one navigates), subject (who perceives), and outcome (the resulting impressions and emotions). This framework incorporates spatial narratives, consisting of a narrative symbol system and strategies, and landscape space composition, distinguishing among environmental space, visual perception space, and visual cognition space. Building on this framework, a novel 3D data model tailored to visual perception computing in CCGs is proposed. The model is structured into three interrelated layers: the physical features layer (capturing spatial and structural details), the behavior patterns layer (analyzing tourists' movements and gaze behaviors), and the analytical layers (integrating visual perception metrics). [Results] The feasibility of the proposed approach is demonstrated through a case study of the Humble Administrator's Garden in Suzhou. The implementation process involves acquiring physical data, configuring behavioral data, setting up the storage environment, and computing visual perception. This multi-layered approach provides a theoretical framework for understanding visual perception in CCGs and establishes a methodological pathway for applying 3D technologies to cultural heritage research. [Conclusions] The proposed 3D data model offers a deeper understanding of visual perception within CCGs, facilitating new insights into spatial design and visitor experiences. Furthermore, the methods outlined in this paper have broader implications for studying and preserving other cultural heritage sites, advancing the integration of digital technology in heritage conservation and cultural landscape analysis.

  • LI Yansheng, ZHONG Zhenyu, MENG Qingxiang, MAO Zhidian, DANG Bo, WANG Tao, FENG Yuanjun, ZHANG Yongjun
    Journal of Geo-information Science. 2025, 27(2): 350-366. https://doi.org/10.12082/dqxxkx.2025.240571

    [Objectives] With the development of deep learning technology, the ability to monitor changes in natural resource elements using remote sensing images has significantly improved. While deep learning change detection models excel at extracting low-level semantic information from remote sensing images, they face challenges in distinguishing land-use type changes from non-land-use type changes, such as crop rotation, natural fluctuations in water levels, and forest degradation. To ensure a high recall rate in change detection, these models often generate a large number of false positive change polygons, requiring substantial manual effort to eliminate these false alarms. [Methods] To address this issue, this paper proposes a natural resource element change polygon purification algorithm driven by a remote sensing spatiotemporal knowledge graph. The algorithm aims to minimize the false positive rate while maintaining a high recall rate, thereby improving the efficiency of natural resource element change monitoring. To support the intelligent construction and effective reasoning of the spatiotemporal knowledge graph, this study designed a remote sensing spatiotemporal knowledge graph ontology model that takes spatiotemporal characteristics into account and developed a GraphGIS toolkit that integrates graph database storage and computation. This paper also introduces a vector knowledge extraction method based on the native spatial analysis of the GraphGIS graph database, a remote sensing image knowledge extraction method based on efficient fine-tuning of the SkySense visual large model, and a polygon purification knowledge extraction method based on the SeqGPT large language model. Under the constraints of the spatiotemporal ontology model, vector, image, and text knowledge converge to form a remote sensing spatiotemporal knowledge graph. Inspired by the manual procedures for change polygon purification, this paper developed an automatic change polygon purification method based on first-order logical reasoning within the knowledge graph. To improve concurrent processing and human-computer interaction, this paper developed a remote sensing spatiotemporal knowledge graph management and service system. [Results] For the task of purifying natural resource element change polygons in Guangdong Province from March to June 2024, the proposed method achieved a true-preserved rate of 95.37% and a false-removed rate of 21.82%. [Conclusions] The intelligent purification algorithm and system for natural resource element change polygons proposed in this study effectively reduce false positives while preserving real change polygons. This approach significantly enhances the efficiency of natural resource element change monitoring.
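
    A minimal sketch of the rule-based purification idea described above, assuming a hypothetical record layout in which each candidate change polygon already carries attributes retrieved from the knowledge graph (pre- and post-phase land-cover labels and a pseudo-change cause such as crop rotation); the actual ontology, GraphGIS storage, and SkySense/SeqGPT knowledge extraction are not reproduced here.

```python
# Hypothetical sketch: first-order-logic style purification of candidate change polygons.
# Field names below are illustrative, not the paper's knowledge-graph schema.

PSEUDO_CHANGE_CAUSES = {"crop_rotation", "water_level_fluctuation", "forest_phenology"}

def is_true_change(polygon: dict) -> bool:
    """Rule: keep a polygon only if its land-use class actually changed between phases
    and the change is not explained by a known pseudo-change cause."""
    class_changed = polygon["pre_class"] != polygon["post_class"]
    pseudo_change = polygon.get("change_cause") in PSEUDO_CHANGE_CAUSES
    return class_changed and not pseudo_change

def purify(candidates: list) -> list:
    """Remove false-positive change polygons, preserving the rest."""
    return [p for p in candidates if is_true_change(p)]

# Example usage with toy records
candidates = [
    {"id": 1, "pre_class": "cropland", "post_class": "built_up", "change_cause": None},
    {"id": 2, "pre_class": "cropland", "post_class": "cropland", "change_cause": "crop_rotation"},
]
print([p["id"] for p in purify(candidates)])  # -> [1]
```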

  • HOU Yuhao, YANG Weifang, YAN Haowen, LI Jingzhong, ZHU Xinyu, YAN Xiangrong, PENG Yibo
    Journal of Geo-information Science. 2025, 27(2): 461-478. https://doi.org/10.12082/dqxxkx.2025.240327

    [Objectives] Currently, systematic research in content retrieval for We-maps is lacking. To address this gap, this paper proposes an approach for geographic feature extraction and retrieval in hand-drawn map scenes using the YOLOv8l-FMSC-Spatial model (You Only Look Once v8l - Fewer Multi-Scale Convolution-Spatial). [Methods] First, different YOLO models were compared to select the optimal YOLOv8l model. The C2f-FMSC module was introduced to improve this model, resulting in the YOLOv8l-FMSC training model specifically designed for We-maps. This model was applied to extract geographic features from raster maps. Next, to meet the retrieval needs of geographic features, a spatial relationship database for these features was established. A spatial computation and retrieval module, Spatial, was designed to process geographic feature information by transmitting and filtering it. The module further calculates spatial correlations between user queries and the geographic feature information in the database. Based on the degree of spatial relationship association, the model indexes maps containing relevant geographic feature information from the We-maps database, enabling the construction of a spatial relationship-based geographic feature retrieval model. The method was validated using hand-drawn campus map retrieval scenarios. The experimental dataset comprised publicly available maps from schools and maps freely created by students, totaling 493 hand-drawn campus maps. These maps were used to study the retrieval of representative geographical elements such as water bodies, sports fields, and unique architectural structures associated with schools nationwide. The focus was on accurately identifying and retrieving these characteristic elements to ensure the model’s practical applicability. [Results] The experimental results indicate: (1) The trained YOLOv8l model effectively identifies geographical elements in self-made maps, with its effectiveness and robustness verified on the proposed dataset; (2) The YOLOv8l model, enhanced with the FMSC module, achieved a precision of 0.8 and a recall of 0.764, making it the optimal choice among the compared models; (3) The Spatial calculation model effectively captures the spatial information of relevant geographical elements, narrowing the gap with orthographic map retrieval. By applying this method, the retrieval of geographical elements from hand-drawn campus maps, while considering spatial relationships, becomes achievable. [Conclusions] The proposed model can quickly and accurately retrieve content-relevant hand-drawn maps based on geographic feature conditions, effectively filling the research gap in content retrieval for We-maps.
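
    To illustrate the kind of spatial-relationship scoring such a retrieval module performs, the sketch below rates how well a map's detected features satisfy a directional query (e.g. "a sports field north of a water body") using detection centroids; the centroid-angle scoring is an assumption for illustration, not the paper's exact Spatial module.

```python
import math

# Illustrative scoring of a directional spatial relation between detected map features.
# Boxes are (xmin, ymin, xmax, ymax) in image coordinates, with y growing downward.

def centroid(box):
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def direction_score(src_box, dst_box, relation):
    """Return a 0..1 score for how well dst lies in the given direction from src."""
    sx, sy = centroid(src_box)
    dx, dy = centroid(dst_box)
    angle = math.atan2(sy - dy, dx - sx)          # 0 = east, pi/2 = north (image y flipped)
    targets = {"east": 0.0, "north": math.pi / 2, "west": math.pi, "south": -math.pi / 2}
    diff = abs(math.atan2(math.sin(angle - targets[relation]),
                          math.cos(angle - targets[relation])))
    return max(0.0, 1.0 - diff / math.pi)          # 1 when aligned, 0 when opposite

def map_score(detections, query):
    """detections: {class_name: [boxes]}; query: (src_class, relation, dst_class)."""
    src_cls, rel, dst_cls = query
    scores = [direction_score(s, d, rel)
              for s in detections.get(src_cls, [])
              for d in detections.get(dst_cls, [])]
    return max(scores, default=0.0)

dets = {"water_body": [(10, 80, 60, 120)], "sports_field": [(20, 10, 70, 40)]}
# High score: the sports field lies roughly north of the water body on this map.
print(round(map_score(dets, ("water_body", "north", "sports_field")), 2))
```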

  • WANG Zhihua, YANG Xiaomei, ZHANG Junyao, LIU Xiaoliang, LI Lianfa, DONG Wen, HE Wei
    Journal of Geo-information Science. 2025, 27(2): 305-330. https://doi.org/10.12082/dqxxkx.2024.230729

    [Objectives] Remote Sensing Intelligent Interpretation (RSII) often encounters challenges when applied to practical resource and environmental management, especially in complex scenes. To address this, we start by explaining why remote sensing interpretation is needed and clarify that the mission of RSII is to achieve faster interpretation for building the digital twin Earth at lower cost than manual interpretation. However, most RSII systems operate as a unidirectional process from remote sensing data to geoscience knowledge, lacking feedback from knowledge to data. As a result, remote sensing information extracted from data often mismatches existing geoscience knowledge, creating a trust crisis between RSII researchers and geoscience researchers. The crisis becomes more severe with the uncertainty of remote sensing information. [Analysis] We believe that a representation model of geoscience knowledge agreed upon by RSII researchers and geoscience researchers is necessary to alleviate the crisis. Based on this analysis, we propose a framework using geoscience zoning as the bridge to connect RSII researchers and geoscience researchers. In this framework, knowledge from geoscience can be transferred into the RSII system through geoscience zoning so that the interpretation results coincide more closely with geoscience knowledge. The framework mainly relies on (a) scene complexity measurement, (b) knowledge coupling of geographic regions to form the geoscience zoning method for remote sensing intelligent interpretation, and (c) the sampling specification of regional samples. The scene complexity measurement provides quantitative features for geoscience zoning and for assigning sampling weights. Existing zoning data, such as ecological zoning data, geographic elements, and multisource remote sensing images are the main data inputs for geoscience zoning. The main principles for constructing zoning methods include (a) the geoscience element type, (b) the scale of geoscience zoning, and (c) the process of information flow from data to knowledge. [Prospects] With these models, we can realize regional RSII guided by knowledge. Preliminary experiments on complexity and optimized sampling, image segmentation scale optimization, fine classification of cultivated land types, etc., reveal that this framework has great potential in improving geoscience knowledge acquisition by RSII, enhancing the accuracy of state-of-the-art RSII by 6%~10%, especially for high-complexity natural scenes. However, the superiority of the framework may disappear if the interpretation scene is simple, such as first-level land use/cover classification, mainly because samples become inefficient after geoscience zoning. Therefore, more attention should be paid to sampling when developing the geoscience zoning framework.

  • LIAN Peige, LI Yingbing, LIU Bo, FENG Xiaoke
    Journal of Geo-information Science. 2025, 27(3): 636-652. https://doi.org/10.12082/dqxxkx.2025.240641

    [Objectives] With accelerating urbanization and a surge in vehicle numbers, urban traffic systems face immense pressure. Intelligent transportation systems, a vital component of smart cities, are widely employed to improve urban traffic conditions, with traffic speed prediction being a key research focus. However, the complex coupling relationships and dynamically varying characteristics of urban traffic network nodes pose challenges for existing traffic speed prediction methods in accurately capturing dynamic spatio-temporal correlations. Spatio-temporal graph neural networks have proven to be among the most effective models for traffic speed prediction tasks. However, most methods heavily rely on prior knowledge, limiting the flexibility of spatial feature extraction and hindering the dynamic representation of road network topology. Recent approaches, such as adaptive adjacency matrix construction, address the limitations of static graphs. However, they often overlook the synergy between dynamic features and static topology, making it difficult to fully capture the complex fluctuations in traffic flow, which in turn limits prediction accuracy and adaptability. [Methods] To address these challenges, this study formulates urban traffic speed prediction as a multivariate time-series forecasting problem and proposes a traffic speed prediction model based on a Multivariate Time-series Dynamic Graph Neural Network (MTDGNN). Leveraging real-time traffic information and predefined static graph structures, the model adaptively generates dynamic traffic graphs through a graph learning layer and integrates them with static road network graphs to capture spatial dependencies from multiple perspectives. Meanwhile, alternating graph convolution and temporal convolution modules construct a multi-level spatial neighborhood and temporal receptive field, fully exploring the spatial and temporal features of traffic data. [Results] The MTDGNN model was tested on real traffic data from 397 road sections in eastern Beijing, collected between April 1, 2017, and May 31, 2017. Its prediction results were compared against nine benchmark models and seven ablation models. Compared to the benchmark models, MTDGNN reduced the average MAE by at least 2.24% and the average RMSE by at least 3.98%. [Conclusions] Experimental results demonstrate that the MTDGNN model achieves superior prediction accuracy in MAE, RMSE, and MAPE evaluation metrics, highlighting its robustness and effectiveness in complex traffic scenarios.
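
    As a rough illustration of the adaptive graph-learning idea, the sketch below derives a dynamic adjacency matrix from learnable node embeddings (in the spirit of the common Graph WaveNet-style recipe, not MTDGNN's exact layer) and blends it with a predefined static road-network graph before a simple graph-convolution step.

```python
import numpy as np

# Illustrative sketch of adaptive adjacency learning combined with a static road graph.
# The embedding-product construction is an assumption for illustration only.

rng = np.random.default_rng(0)
num_nodes, emb_dim = 5, 8

E1 = rng.normal(size=(num_nodes, emb_dim))   # learnable "source" node embeddings
E2 = rng.normal(size=(num_nodes, emb_dim))   # learnable "target" node embeddings

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Dynamic adjacency from embedding similarity
A_dyn = softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)

# Predefined static road-network adjacency (row-normalized)
A_static = rng.integers(0, 2, size=(num_nodes, num_nodes)).astype(float)
np.fill_diagonal(A_static, 1.0)
A_static = A_static / A_static.sum(axis=1, keepdims=True)

# Blend the two views before graph convolution
alpha = 0.5
A = alpha * A_dyn + (1 - alpha) * A_static

X = rng.normal(size=(num_nodes, 4))          # node features (e.g., recent speeds)
H = A @ X                                    # one simple graph-convolution step
print(H.shape)                               # (5, 4)
```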

  • SHEN Li, XU Zhenfan, AI Mingyao, LU Binbin
    Journal of Geo-information Science. 2025, 27(3): 698-715. https://doi.org/10.12082/dqxxkx.2025.240528

    [Objectives] Cancer is the leading cause of death in most countries worldwide, posing a significant threat to human longevity and public health. This study explores the spatiotemporal distribution characteristics of mortality rates for five major types of cancer worldwide and provides predictions for future trends. [Methods] Focusing on five major cancer types (lung, colorectal, gastric, liver, and pancreatic cancer) in 200 countries from 2011 to 2019, this study used GBD and World Bank data and the MGWR model to extract the spatial heterogeneity of factors affecting cancer mortality. The ARIMA model was used to extract temporal trend characteristics of the mortality rates of each cancer type. This spatial-temporal information was then integrated into a Bayesian spatial-temporal model to predict and evaluate the global mortality risk for the five types of cancer. [Results] Results revealed that the global death rate for all five cancer types increased, with an average rise of 17.2 deaths per 100 000 people in 2019 compared to 2011. Over 72.8% of countries exhibited a high relative risk of cancer death (RR>1), indicating significant spatial clustering. [Conclusions] Regions such as Europe, Central Asia, North America, and East Asia and the Pacific experienced faster increases in cancer death rates compared to Africa and South Asia. Compared to low- and middle-income countries, upper-middle- and high-income countries showed a more pronounced upward trend in cancer mortality and a higher relative risk. Key factors influencing global cancer mortality included the percentage of the population aged 65 years and older, smoking, alcohol consumption, low physical activity, high-sugar diets, GDP per capita, GNI per capita, and health expenditure per capita. By integrating the advantages of different geographical spatial-temporal analysis methods, this study developed an innovative spatiotemporal prediction model of disease risk that integrates spatial-temporal grouping variables and multiple influencing factors. The proposed model is highly flexible, interpretable, and well suited to quantifying non-stationary spatial-temporal relationships. While the structured spatial and temporal effects increase computational demands, the model effectively assesses cancer mortality risk across regions, offering robust insights into the spatiotemporal dynamics of disease. This approach deepens the integration of geospatial modeling technology and epidemiological research, providing significant scientific contributions to global cancer research, prevention, and control planning.
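
    A small example of the temporal-trend step only: fitting a country-level ARIMA model to a short annual mortality series and forecasting ahead, as one ingredient that would feed the Bayesian spatial-temporal model. The toy series and the (1,1,0) order are assumptions, not the study's fitted values.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy country-level series: annual cancer deaths per 100 000 people, 2011-2019 (illustrative).
years = np.arange(2011, 2020)
mortality = np.array([61.3, 62.8, 64.1, 66.0, 67.4, 69.9, 71.2, 73.5, 75.0])

# Fit a simple ARIMA(1,1,0) model and project the trend three years ahead.
fit = ARIMA(mortality, order=(1, 1, 0)).fit()
forecast = fit.forecast(steps=3)
print(np.round(forecast, 1))
```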

  • ZHAO Jinzhao, WEI Zhicheng
    Journal of Geo-information Science. 2025, 27(3): 682-697. https://doi.org/10.12082/dqxxkx.2025.240621

    [Objectives] City-wide traffic flow prediction plays a crucial role in intelligent transportation systems. Traditional studies partition road networks into grids, represent them as graph structures with grids as nodes, and use graph neural networks for region-level prediction. However, this region-based approach overlooks the relationships between individual roads, making it difficult to reflect traffic flow changes on individual roads. Methods based on road segment data can better capture spatial connections between roads and enable more accurate traffic flow predictions. However, mapping trajectory data to roads presents challenges such as redundant data and trajectory mismatches, and the traffic flow data obtained after mapping is sparse. Existing methods struggle to effectively capture spatial correlation under such sparse traffic conditions. [Methods] To address these issues, this study proposes an Attention Spatio-Temporal Neural Network (ASTNN) model for road-level sparse traffic flow prediction. The model first preprocesses trajectory data and applies Hidden Markov Model (HMM)-based map matching to obtain road-level traffic flow data. It then introduces an adaptive compact 2D image representation method to model the road network as a 2D image, where road segments are represented as pixel points. Based on an analysis of the spatial and temporal characteristics of traffic flow, two new attentional spatio-temporal blocks are proposed: the Attentional Spatio-Temporal Memory block (ASTM block) for mining temporal correlations and the Attentional Spatio-Temporal Focusing block (ASTF block) for extracting sparse spatial features. By integrating these two blocks with external information, ASTNN is constructed to achieve road-level traffic flow prediction. [Results] This study uses Chengdu taxi trajectory data as a case study. After preprocessing trajectory data and mapping traffic flow, the proposed model is validated on a five-level road network within Chengdu’s Third Ring area. Results indicate that the proposed data processing method reduces trajectory-to-road network matching time by 73.6%. In comparative experiments with existing models, such as Convolutional Neural Network (CNN), Convolutional Long Short-Term Memory (ConvLSTM), Gated Recurrent Unit (GRU), and Spatial-Temporal Neural Network (STNN), ASTNN achieves the highest prediction accuracy in terms of Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and R-squared (R2). Furthermore, the study confirms the significant improvement in prediction accuracy when incorporating temperature data into ASTNN, providing new insights for optimizing model performance. [Conclusions] The ASTNN model proposed in this study provides an effective framework for city-wide, road-level sparse traffic flow prediction, offering valuable insights for intelligent transportation systems.
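
    A much-simplified sketch of the rasterization idea only: placing road segments into a compact 2D grid so that each pixel carries one segment's flow value, giving convolutional layers a dense input. The paper's adaptive compact representation is more elaborate; the ordering used here is an assumption for illustration.

```python
import numpy as np

# Illustrative mapping of road-segment flows into a compact 2D image.
# Zero-valued pixels reflect the sparsity that the attentional blocks must handle.

def segments_to_image(segment_ids, flows, width):
    """segment_ids: a stable ordering of road segments; flows: flow value per segment id."""
    height = int(np.ceil(len(segment_ids) / width))
    img = np.zeros((height, width), dtype=float)
    for k, seg in enumerate(segment_ids):
        img[k // width, k % width] = flows[seg]
    return img

flows = {"s0": 12.0, "s1": 0.0, "s2": 3.0, "s3": 7.5, "s4": 0.0, "s5": 1.0}
image = segments_to_image(sorted(flows), flows, width=3)
print(image)   # a 2x3 "traffic image" for one time step
```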

  • YU Hanyang, LAN Chaozhen, WANG Longhao, WEI Zijun, GAO Tian, WANG Yiqiao, LIU Ruimeng
    Journal of Geo-information Science. 2025, 27(8): 1896-1919. https://doi.org/10.12082/dqxxkx.2025.250052

    [Significance] Multimodal remote sensing image matching has become a fundamental task in integrated Earth observation, enabling precise spatial alignment across heterogeneous image sources. [Progress] As the diversity of sensing modalities, acquisition geometries, and temporal conditions increases, traditional matching frameworks have proven inadequate for capturing complex variations in radiometric responses, geometric configurations, and semantic representations. This technological gap has driven a significant paradigm shift from handcrafted feature engineering to deep learning-based solutions, which now form the core of current research and application development. This paper provides a comprehensive and structured review of recent advances in deep learning methods for multimodal remote sensing image matching, with an emphasis on the evolution of methodological paradigms and technical frameworks. It establishes a clear dual-path classification: the single-session approach and the end-to-end approach. The former selectively replaces or enhances individual components of traditional pipelines, such as feature encoding or similarity estimation, using neural network modules. The latter integrates the entire matching process into a unified network architecture, enabling joint optimization of feature learning, transformation modeling, and correspondence inference within a closed loop. This progression reflects the field's transition from modular adaptation to holistic modeling, revealing a deeper integration of data-driven representation learning with geometric reasoning. The review further examines the development of architectural strategies supporting this evolution, including attention mechanisms, graph-based structures, hierarchical feature fusion, and modality-bridging transformations. These innovations contribute to improved robustness, semantic consistency, and adaptability across diverse matching scenarios. Recent trends also demonstrate a growing reliance on pretrained vision foundation models, which provide transferable feature spaces and reduce the dependence on large-scale labeled datasets. In addition to summarizing technical advancements, the paper analyzes representative datasets, performance evaluation strategies, and the current challenges that constrain real-world deployment. These include limited data availability, weak cross-scene generalization, computational inefficiency, and insufficient interpretability. [Prospect] By synthesizing methodological progress with practical demands, the review identifies key directions for future research, including the design of modality-invariant representations, physically-informed neural architectures, and lightweight solutions tailored for scalable, real-time image registration in complex operational environments.

  • ZHANG Nuan, WANG Tao, ZHANG Yan, WEI Yibo, LI Liuwen, LIU Yichen
    Journal of Geo-information Science. 2025, 27(8): 1751-1779. https://doi.org/10.12082/dqxxkx.2025.250137

    [Significance] Street View Image-based Visual Place Recognition (SV-VPR) is a geographical location recognition technology that relies on visual feature information. Its core task is to predict and accurately locate unknown locations by analyzing the visual features of street view images. This technology must overcome challenges such as appearance changes under different environmental conditions (e.g., lighting differences between day and night, seasonal variations) and viewpoint differences (e.g., perspective deviations between vehicle-mounted cameras and satellite images). Accurate recognition is achieved through calculating image feature similarity, applying geometric constraints, and related methods. As an interdisciplinary field of computer vision and geographic information science, SV-VPR is closely related to visual positioning, image retrieval, SLAM, and more. It has significant application value in areas such as UAV autonomous navigation, high-precision positioning for autonomous driving, construction of geographical boundaries in cyberspace, and integration of augmented reality environments. It is particularly advantageous in GPS-denied environments. [Analysis] This paper systematically reviews the research progress of visual location recognition based on street view images, covering the following aspects: First, the basic concepts and classifications of visual place recognition technologies are introduced. Second, the foundational principles and categorization methods specific to street view image-based visual place recognition are discussed in depth. Third, the key technologies in this field are analyzed in detail. Furthermore, relevant datasets for street view image-based visual place recognition are comprehensively reviewed. In addition, evaluation methods and index systems used in this domain are summarized. Finally, potential future research directions for SV-VPR are explored. [Purpose] This review aims to provide researchers with a systematic overview of the technological development trajectory of SV-VPR, helping them quickly understand the current research landscape. It also offers a comparative analysis of key technologies and evaluation methods to support algorithm selection, and identifies emerging challenges and potential breakthrough areas to inspire innovative research.

  • TANG Junqing, AN Mengqi, ZHAO Pengjun, GONG Zhaoya, GUO Zengjun, LUO Taoran, LYU Wei
    Journal of Geo-information Science. 2025, 27(3): 553-569. https://doi.org/10.12082/dqxxkx.2024.240107

    [Significance] Cities globally face increasingly frequent multi-hazard risks, driving them to pursue more sustainable and resilient urban transportation systems. This paper presents a comprehensive systematic literature review of the application of spatial-temporal data in transportation system resilience studies. It highlights the pivotal role of spatial-temporal big data in understanding and enhancing the resilience of urban transportation systems under various hazard scenarios. Spatial-temporal big data, characterized by high temporal resolution and fine spatial granularity, has been increasingly applied to the field of transportation system resilience, providing essential support for decision-makers. [Progress] This study reveals two significant findings: Firstly, quantitative analysis of transportation system resilience is one of the most widely applied uses of spatial-temporal big data. However, explorations of real-time monitoring and early warning are relatively rare. Most studies remain at the modelling and numerical simulation stage, indicating a need for more empirical studies using multi-source spatial-temporal big data. Moreover, compared to the English literature, Chinese transportation system resilience studies are primarily qualitative and lack empirical research, indicating divergent research emphases between domestic and international scholars. Secondly, high-quality, multi-source spatial-temporal big data could facilitate more comprehensive spatial analysis in transportation system resilience studies. Improved data quality allows for deeper exploration from a microscopic perspective, focusing on individual behaviors and aligning closely with real-world needs. The concept of resilience has evolved from its previous post-disaster focus to a comprehensive life-cycle perspective encompassing pre-, during-, and post-disaster phases, transforming the study framework for transportation system resilience. [Prospect] As spatial-temporal big data technology advances and new transportation modes emerge, more innovations and breakthroughs in transportation system resilience studies are expected. Future research should further explore and utilize the potential of spatial-temporal big data in this field, strengthening the policy implications of abrupt-onset events. Increased emphasis should be placed on research conducted at the scale of urban agglomerations. Simultaneously, a nuanced examination from a microscopic perspective is imperative to dissect the underlying causes and mechanisms contributing to variations in resilience among distinct groups. Despite the significant progress in transportation system resilience studies, there are still challenges in data collection, processing, and analysis. As technology progresses, researchers should leverage advanced algorithms, platforms, and tools to enhance data processing capabilities and analytical precision, facilitating more complex and detailed studies on transportation system resilience. This will provide a scientific basis for planning and managing urban transportation systems, significantly contributing to the overall resilience and sustainable development of cities.

  • QIN Qiming
    Journal of Geo-information Science. 2025, 27(10): 2283-2290. https://doi.org/10.12082/dqxxkx.2025.250426

    [Objectives] With the rapid increase in the number of Earth observation satellites in orbit worldwide, remote sensing data has been accumulating explosively, offering unprecedented opportunities for Earth system science research to dynamically monitor global change. At the same time, it also brings a series of challenges, including multi-source heterogeneity, scarcity of labeled data, insufficient task generalization, and data overload. [Methods] To address these bottlenecks, Google DeepMind has proposed AlphaEarth Foundations (AEF), which integrates multimodal data such as optical imagery, SAR, LiDAR, climate simulations, and textual sources to construct a unified 64-dimensional embedding field. This framework achieves cross-modal and spatiotemporal semantic consistency for data fusion and has been made openly available on platforms such as Google Earth Engine. [Results] The main contributions of AEF can be summarized as follows: (1) Mitigating the long-standing “data silos” problem by establishing globally consistent embedding layers; (2) Enhancing semantic similarity measurement through a von Mises-Fisher (vMF) spherical embedding mechanism, thereby supporting efficient retrieval and change detection; (3) Shifting complex preprocessing and feature engineering tasks into the pre-training stage, enabling downstream applications to become “analysis-ready” and significantly reducing application costs. The paper further highlights the application potential of AEF in three stages: (1) Initially in land cover classification and change detection; (2) Subsequently in deep coupling of embedding vectors with physical models to drive scientific discovery; (3) Ultimately evolving into a spatial intelligence infrastructure, serving as a foundational service for global geospatial intelligence. Nevertheless, AEF still faces several challenges: (1) Limited interpretability of embedding vectors, which constrains scientific attribution and causal analysis; (2) Uncertainties in domain transfer and cross-scenario adaptability, with robustness in extreme environments yet to be verified; (3) Performance advantages that require more empirical validation across regions and independent experiments. [Conclusions] Overall, AEF represents a new direction for research in remote sensing and geospatial artificial intelligence, with breakthroughs in data efficiency and cross-task generalization providing solid support for future Earth science studies. However, its further development will depend on continuous advances in interpretability, robustness, and empirical validation, as well as on transforming the 64-dimensional embedding vectors into widely usable data resources through different pathways.
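
    To make the embedding-field idea concrete, the sketch below shows how unit-norm 64-dimensional per-location vectors support similarity retrieval and simple year-to-year change scoring via dot products (cosine similarity on the unit sphere, consistent with a vMF-style embedding). The vectors are random stand-ins, not real AEF embeddings, and the training itself is not reproduced.

```python
import numpy as np

# Illustrative use of 64-dimensional embedding vectors for retrieval and change scoring.
rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

emb_year_a = unit(rng.normal(size=(1000, 64)))   # embeddings for 1 000 locations, year A
emb_year_b = unit(rng.normal(size=(1000, 64)))   # same locations, year B

# Similarity retrieval: locations most similar to a query embedding (dot product = cosine).
query = emb_year_a[0]
similarity = emb_year_a @ query
top5_similar = np.argsort(-similarity)[:5]

# Change detection: low year-to-year similarity suggests surface change at that location.
change_score = 1.0 - np.sum(emb_year_a * emb_year_b, axis=1)
top5_changed = np.argsort(-change_score)[:5]

print(top5_similar, top5_changed)
```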

  • QI Haoxuan, CAO Yi, ZHAO Bin
    Journal of Geo-information Science. 2025, 27(3): 623-635. https://doi.org/10.12082/dqxxkx.2025.240707

    [Objectives] The primary objective is to enhance the accuracy of vehicle trajectory prediction at intersections and address the challenges of predicting trajectories in multi-vehicle interaction scenarios. This is crucial for improving the safety and efficiency of autonomous driving and traffic management at complex urban intersections. [Methods] An Enhanced Adjacency Graph Convolutional Network-Transformer (EAG-GCN-T) vehicle trajectory prediction model is developed. The INTERACTION public dataset is employed, with data smoothing techniques applied to mitigate noise. Model comparison and validation experiments are conducted to assess performance. The model’s accuracy is evaluated by comparing error assessment indicators against different baseline models, analyzing interaction capabilities, generalization ability, and driving behavior recognition. The EAG-GCN-T model combines an Enhanced Adjacency Graph Convolutional Network (EAG-GCN) and a Transformer module. The EAG-GCN module accurately models spatial interactions between vehicles by considering relative speed and distance in an enhanced weighted adjacency matrix. The Transformer module captures temporal dependencies and generates future trajectories, improving spatiotemporal prediction ability. [Results] In long-term single-vehicle trajectory prediction, the Average Displacement Error (ADE) is reduced by 69.4%, 39.8%, and 33.3%, and the Final Displacement Error (FDE) by 71.9%, 32.5%, and 27.4%, compared to the CV, ARIMA, and CNN-LSTM models, respectively. In multi-vehicle interaction prediction, the FDE is reduced by 19.5% and 20.6% compared to the GRIP model. Compared with three interaction mechanisms, EAG-GCN-T achieves the lowest overall error across all time domains, with ADE/FDE values of 0.53 and 0.74, respectively. EAG-GCN-T also achieves more reasonable Driving Area Compliance (DAC) and Trajectory Point Loss Rate (MR), demonstrating strong adaptability on ramps and in roundabouts. The model accurately predicts driving behaviors such as following, lane-changing, and evasion, and their impacts on trajectories, with predicted trajectories highly consistent with actual vehicle movements. [Conclusions] The EAG-GCN-T model effectively addresses vehicle trajectory prediction in multi-vehicle interaction scenarios at intersections. It demonstrates high accuracy, strong interactivity, and excellent generalization ability. This model provides a novel solution for vehicle trajectory prediction in intelligent transportation systems, offering significant potential for advancing autonomous driving and intelligent traffic management.
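
    One plausible way to weight vehicle-to-vehicle edges by relative distance and relative speed is sketched below using Gaussian kernels; the kernel form and bandwidths are assumptions for illustration, not the EAG-GCN formula itself.

```python
import numpy as np

# Illustrative "enhanced" weighted adjacency between interacting vehicles,
# combining relative distance and relative speed with Gaussian kernels.

positions = np.array([[0.0, 0.0], [8.0, 1.0], [30.0, -2.0]])   # x, y in metres
speeds = np.array([10.0, 9.0, 14.0])                           # speeds in m/s

sigma_d, sigma_v = 15.0, 5.0                                   # assumed bandwidths
dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
dvel = np.abs(speeds[:, None] - speeds[None, :])

W = np.exp(-(dist / sigma_d) ** 2) * np.exp(-(dvel / sigma_v) ** 2)
np.fill_diagonal(W, 0.0)                                       # no self-interaction
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-8)          # row-normalize

print(np.round(W, 3))   # nearby vehicles with similar speeds interact most strongly
```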

  • LIU Diyou, KONG Yunlong, CHEN Jingbo, WANG Chenhao, MENG Yu, DENG Ligao, DENG Yupeng, ZHANG Zheng, SONG Ke, WANG Zhihua, CHU Qifeng
    Journal of Geo-information Science. 2025, 27(2): 285-304. https://doi.org/10.12082/dqxxkx.2024.240436

    [Significance] The extraction of Cartographic-Level Vector Elements (CLVE) is a critical prerequisite for the direct application of remote sensing image intelligent interpretation in real-world scenarios. [Analysis] In recent years, the continuous rapid advancement of remote sensing observation technology has provided a rich data foundation for fields such as natural resource surveying, monitoring, and public surveying and mapping data production. However, due to the limitations of intelligent interpretation algorithms, obtaining the vector element data needed in operational scenarios still heavily relies on manual visual interpretation and human-computer interactive post-processing. Although significant progress has been made in remote sensing image interpretation using deep learning techniques, producing vector data that are directly usable in operational scenarios remains a major challenge. [Progress] This paper, based on the actual data needs of operational scenarios such as public surveying and mapping data production, conducts an in-depth analysis of the rule constraints for different vector elements in remote sensing image interpretation across a wide range of operational contexts. It preliminarily defines "cartographic-level vector elements" as vector element data that comply with certain cartographic standard constraints at a specific scale. Centered on this definition, the content of the rule set for CLVE is summarized and analyzed from nine dimensions, including vector types, object shapes, boundary positioning, area, length, width, angle size, topological constraints, and adjacency constraints. Evaluation methods for CLVE are then outlined in four aspects: class attributes, positional accuracy, topological accuracy, and rationality of generalization and compromise. Subsequently, through literature collection and statistical analysis, it was observed that research on deep learning-based vector extraction, while still in its early stages, has shown a rapid upward trend year by year, indicating increasing attention in the field. The paper then systematically reviews three major methodological frameworks for deep learning-based vector extraction: semantic segmentation & post-processing, iterative methods, and parallel methods. A detailed analysis is provided on their basic principles, characteristics and accuracy of vector extraction, flexibility, and computational efficiency, highlighting their respective strengths, weaknesses, and differences. The paper also summarizes the current limitations of remote sensing intelligent interpretation methods aimed at CLVE in terms of cartographic-level interpretation capabilities, rule coupling, and remote sensing interpretability. [Prospect] Finally, future research directions for intelligent interpretation of CLVE are explored from several perspectives, including the construction of broad and open cartographic-level rule sets, the development and sharing of CLVE datasets, the advancement of multi-element CLVE extraction frameworks, and the exploration of the potential of multimodal coupled semantic rules.

  • HUANG Yi, ZHANG Xueying, SHENG Yehua, XIA Yongqi, YE Peng
    Journal of Geo-information Science. 2025, 27(6): 1249-1262. https://doi.org/10.12082/dqxxkx.2025.250175

    [Objectives] This study addresses the critical challenges in typhoon disaster knowledge services, which are often hindered by "massive data, scarce knowledge, and limited services." The core objective is to rapidly distill actionable knowledge from vast datasets to enhance disaster management efficacy and mitigate typhoon-related impacts. Large Language Models (LLMs), renowned for their superior performance in natural language processing, are leveraged to deeply mine disaster-related information and provide robust support for advanced knowledge services. [Methods] This research establishes a typhoon disaster knowledge service framework encompassing three layers: data, knowledge, and service. [Results] For the data-to-knowledge layer, an LLM-driven (Qwen2.5-Max) automated method for constructing typhoon disaster Knowledge Graphs (KGs) is proposed. This method first introduces a multi-level typhoon disaster knowledge representation model that integrates spatiotemporal characteristics and disaster impact mechanisms. A specialized training dataset is curated, incorporating typhoon-related texts with explicit temporal and spatial attributes. By adopting a "pre-training + fine-tuning" paradigm, the framework efficiently transforms raw disaster data into structured knowledge. For the knowledge-to-service layer, an LLM-based intelligent question-answering system is developed. Utilizing the constructed typhoon disaster KG, this system employs Graph Retrieval-Augmented Generation (GraphRAG) to retrieve contextually relevant knowledge from the graph and generate user-specific disaster prevention and mitigation guidance. This approach ensures seamless conversion of structured knowledge into practical services, such as personalized evacuation plans and resource allocation strategies. [Conclusions] The study highlights the transformative potential of LLMs in typhoon disaster management and lays a foundation for integrating LLMs with geospatial technologies. This interdisciplinary synergy advances Geographic Artificial Intelligence (GeoAI) and paves the way for innovative applications in disaster service.
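
    A minimal retrieve-then-generate sketch of the GraphRAG idea described above: pull a subgraph relevant to the user's question from a typhoon knowledge graph, linearize it as context, and pass it to a language model. Both the toy triples and call_llm() are placeholders defined locally; the paper's Qwen2.5-Max pipeline and KG schema are not reproduced.

```python
# Placeholder sketch of graph retrieval-augmented generation over a typhoon KG.

KG = [  # (subject, relation, object) triples extracted from typhoon reports (toy data)
    ("Typhoon Doksuri", "made_landfall_in", "Fujian"),
    ("Typhoon Doksuri", "caused", "urban flooding"),
    ("urban flooding", "mitigated_by", "early evacuation of low-lying areas"),
]

def retrieve_subgraph(question, triples, hops=2):
    """Naive retrieval: seed with triples whose subject/object appears in the question,
    then expand along shared entities for a fixed number of hops."""
    selected = {t for t in triples
                if t[0].lower() in question.lower() or t[2].lower() in question.lower()}
    for _ in range(hops):
        entities = {e for t in selected for e in (t[0], t[2])}
        selected |= {t for t in triples if t[0] in entities or t[2] in entities}
    return list(selected)

def call_llm(prompt):
    """Placeholder for an LLM call; a real system would query the deployed model here."""
    return f"[LLM answer grounded in]:\n{prompt}"

question = "What should residents do when Typhoon Doksuri causes flooding?"
context = "\n".join(f"{s} {r} {o}" for s, r, o in retrieve_subgraph(question, KG))
print(call_llm(f"Context:\n{context}\n\nQuestion: {question}"))
```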

  • LIU Chengbao, BO Zheng, ZHANG Peng, ZHOU Miyu, LIU Wanyue, HUANG Rong, NIU Ran, YE Zhen, YANG Hanzhe, LIU Shijie, HAN Dongxu, LIN Qian
    Journal of Geo-information Science. 2025, 27(4): 801-819. https://doi.org/10.12082/dqxxkx.2025.240466

    [Significance] Lunar remote sensing is a critical method to ensure the safety and success of lunar exploration missions while advancing lunar scientific research. It plays a significant role in understanding the Moon's geological evolution and the formation of the Earth-Moon system. Accurate lunar topographic maps are essential for mission planning, including landing site selection, navigation, and resource identification. These maps also provide valuable data for studying planetary processes and the history of the solar system. [Progress] In recent years, with growing global interest and investment in lunar exploration, remarkable progress has been made in remote sensing technology. These advancements have significantly improved the precision, resolution, and coverage of lunar topographic mapping. Various lunar remote sensing missions, such as China's Chang'e program, NASA's Lunar Reconnaissance Orbiter, and missions by other space agencies, have acquired substantial amounts of multi-source, multi-modal, and multi-scale data. This wealth of data has laid a solid foundation for technological breakthroughs. For instance, high-resolution laser altimetry, optical photogrammetry, and synthetic aperture radar have provided detailed datasets, enabling refined mapping of the Moon's surface. However, the dramatic increase in data volume, complexity, and heterogeneity presents challenges for effective processing, integration, and application in topographic mapping. This paper provides a comprehensive overview of the current state of lunar topographic remote sensing and mapping, focusing on the implementation and data acquisition capabilities of major lunar remote sensing missions during the second wave of lunar exploration. It systematically summarizes the latest research progress in key surveying and mapping technologies, including laser altimetry, which enables precise elevation measurements; optical photogrammetry, which reconstructs surface features using high-resolution imagery; and synthetic aperture radar, which provides unique insights into topographic and subsurface structures. [Prospect] In addition to reviewing recent advancements, the paper discusses future trends and challenges in the field. Key recommendations include enhancing sensor functionality and performance metrics to improve data quality, optimizing the lunar absolute reference framework for consistency and accuracy, leveraging multi-source data fusion for fine-scale modeling, expanding scientific applications of lunar topography, and developing intelligent and efficient methods to process massive amounts of remote sensing data. These efforts will not only support upcoming lunar exploration missions, such as China's manned lunar landing program scheduled for 2030, but also contribute to a deeper understanding of the Moon and its relationship with Earth.

  • LI Junming, HU Yaxuan, WANG Nannan, WANG Siyaqi, WANG Ruolan, LYU Lin, FANG Ziqing
    Journal of Geo-information Science. 2025, 27(7): 1501-1519. https://doi.org/10.12082/dqxxkx.2025.250161

    [Objectives] Classical statistical inference typically relies on the assumptions of large sample sizes and independent, identically distributed (i.i.d.) observations, conditions that spatio-temporal data frequently violate, leading to inherent theoretical limitations in conventional approaches. In contrast, Bayesian spatio-temporal statistical methods integrate prior knowledge and treat all model parameters as random variables, thereby forming a unified probabilistic inference framework. This enables the incorporation of a broader range of uncertainties and offers robustness in modelling small samples and dependent structures, making Bayesian methods highly advantageous and increasingly influential in spatio-temporal analysis. [Progress] From the perspective of methodological evolution, this paper systematically reviews mainstream Bayesian spatio-temporal statistical models from two complementary perspectives: traditional Bayesian statistics and Bayesian machine learning. The former includes Bayesian Spatio-temporal Evolutionary Hierarchical Models, Bayesian Spatio-temporal Regression Hierarchical Models, Bayesian Spatial Panel Data Models, Bayesian Geographically Weighted Spatio-temporal Regression Models, Bayesian Spatio-temporal Varying Coefficient Models, and Bayesian Spatio-temporal Meshed Gaussian Process Models. The latter includes Bayesian Causal Forest Models, Bayesian Spatio-temporal Neural Networks, and Bayesian Graph Convolutional Neural Networks. In terms of application, the review highlights representative studies across domains such as public health, environmental sciences, socio-economics and public safety, as well as energy and engineering. [Prospect] Bayesian spatio-temporal statistical methods need to achieve breakthroughs in multi-source heterogeneous data modeling, integration with deep learning, incorporation of causal inference mechanisms, and optimization of high-performance computing. These advances are essential to balance theoretical rigor with practical adaptability and to promote the development of a next-generation spatio-temporal modeling paradigm characterized by causal inference, adaptive generalization, and intelligent analysis.

  • ZHAO Hanxu, WANG Lei, SONG Zhixue, ZHANG Pengfei, ZHANG Zixin, YIN Nan
    Journal of Geo-information Science. 2025, 27(2): 479-490. https://doi.org/10.12082/dqxxkx.2025.240454

    [Objectives] The extraction of watershed hydrological information is crucial for water resource management, flood forecasting, and ecological protection. Traditional hydrological modeling often employs quadrilateral grids for spatial discretization. However, due to issues such as inconsistent adjacency, shape distortion, and inaccurate representation of topological structures, watershed extraction often produces staircase-like and parallel river line artifacts in finer details, especially at curved sections and bifurcation points of rivers. In contrast, hexagonal grids, with their isotropy, improved boundary effects, and uniform spatial distribution, are better at preserving the morphology of curves and bifurcation points, thereby enabling more accurate simulation of hydrological processes and watershed extraction. [Methods] This study adopts the H3 hexagonal grid system, using the Jiuyuangou watershed as the study area. A 30-meter resolution SRTM 1 Digital Elevation Model (DEM) was used to design a hydrological analysis algorithm based on hexagonal grids. The methodology includes hexagonal grid generation, DEM resampling, depression filling, flow direction analysis, and flow accumulation. The quality of flow accumulation and river network extraction was evaluated. Firstly, the study compared the percentage of hexagonal and quadrilateral grid cells contributing to the total across flow values ranging from 1 to 15. Results showed that hexagonal grids demonstrated greater concentration at low flow values and maintained more stable cumulative frequency growth with increasing flow values, avoiding over-concentration in high flow value ranges. Additionally, a higher-resolution Jiuyuangou river network (12.5 m) was used as the reference river network. Points were randomly sampled along it in proportion to river segment length, with sample sizes of 100, 200, 300, 400, and 500 points, and the average distance to the nearest quadrilateral and hexagonal grid cells was then calculated. [Results] The results show that the average offsets for quadrilateral grids were 28.16 m, 30.45 m, 30.57 m, 30.84 m, and 30.79 m, respectively. For hexagonal grids, the average offsets were 24.03 m, 25.63 m, 23.49 m, 23.78 m, and 24.99 m, respectively. Hexagonal grids consistently exhibited smaller average offsets than quadrilateral grids, demonstrating higher precision in river network extraction and better reflection of terrain characteristics. [Conclusions] Compared to traditional quadrilateral grids, hexagonal grids exhibit superior spatial consistency and accuracy in flow accumulation and river network extraction. This provides a more efficient and precise solution for hydrological modeling and watershed analysis.
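
    The offset metric used in the comparison can be illustrated as the mean distance from points sampled along the reference river network to the nearest extracted grid-cell centre, computed separately for the hexagonal and quadrilateral results. The coordinates below are toy projected values in metres, not the Jiuyuangou data.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative version of the average-offset evaluation between a reference river
# network and cell centres of the extracted networks.

rng = np.random.default_rng(2)
reference_points = rng.uniform(0, 1000, size=(300, 2))            # sampled river points

hex_cells = reference_points + rng.normal(0, 24, size=(300, 2))   # extracted hex centres (toy)
quad_cells = reference_points + rng.normal(0, 30, size=(300, 2))  # extracted quad centres (toy)

def mean_offset(points, cells):
    dists, _ = cKDTree(cells).query(points, k=1)                  # nearest-cell distances
    return dists.mean()

print(f"hexagonal mean offset:     {mean_offset(reference_points, hex_cells):.2f} m")
print(f"quadrilateral mean offset: {mean_offset(reference_points, quad_cells):.2f} m")
```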

  • ZHENG Chenglong, SONG Ci, CHEN Jie
    Journal of Geo-information Science. 2025, 27(6): 1317-1331. https://doi.org/10.12082/dqxxkx.2025.250168

    [Objectives] With the deepening of urbanization and intensified market competition, long working hours have become a pervasive social issue, posing challenges to both workers' physical and mental health and to urban sustainable development. Current studies on urban residents' work activities predominantly rely on questionnaire survey data, which suffer from limited sample sizes and a lack of in-depth exploration into long working hours in megacities. [Methods] This research utilized mobile signaling data from Beijing, collected between November and December 2019, to identify stay points using a threshold rule method. Residential and workplace locations were determined through a time-window approach, and users' working hours were extracted. The study then examined the spatial distribution patterns of long-working-hours employees (defined as those working over 40 hours per week) and investigated spatial characteristics across various gender and age groups. Finally, the study also explored the characteristics of long working hours in different employment clusters in Beijing. [Results] The findings reveal that 47.1% of Beijing's workforce engages in long working hours (weekly working hours ≥40 hours), with an average weekly working duration of 48.86 hours. Spatial analysis demonstrates a polycentric agglomeration pattern, concentrated in major employment hubs such as the CBD, Financial Street, Zhongguancun, and Yizhuang. Significant disparities exist across gender and age groups. Male employees work an average of 49.62 hours per week, 1.5 hours more than their female counterparts (48.12 hours). Among male age groups, those aged 20~29 have the longest average weekly working hours at 50.68 hours. In contrast, although women aged 30~39 constitute the largest proportion of the female workforce (22.13%), their average weekly working hours are the lowest, at 47.59 hours. The characteristics of overtime work in different employment clusters show a clear pattern: the CBD and Zhongguancun have a higher number of overtime workers, while Yizhuang stands out with the highest proportion at 58.0%. Wholesale and logistics hubs such as Xinfadi and Majuqiao exhibit the most intensive work schedules, with average weekly working hours exceeding 50 hours. [Conclusions] This study provides rich empirical evidence for understanding the phenomenon of long working hours in Beijing. The results offer data-driven support for optimizing labor time policies, contributing to urban sustainable development and social equity.

  • WANG Chunyan, WANG Zikang
    Journal of Geo-information Science. 2025, 27(2): 522-535. https://doi.org/10.12082/dqxxkx.2025.240549

    [Objectives] High-resolution remote sensing images offer a wealth of detailed spatial information. However, this abundance of detail can blur the boundaries between different land cover types, thereby increasing the ambiguity and uncertainty of segmentation. To address this challenge in remote sensing image segmentation, this paper introduces an innovative segmentation method based on an improved interval type-2 fuzzy neural network. [Methods] By leveraging spatial neighborhood information and a model mixing strategy, a hybrid regression membership function is constructed to enable the precise representation of complex data features, thereby enhancing the model's adaptability and feature extraction capability. The uncertain region of the hybrid regression membership function is designed to map the fuzzy and uncertain features of remote sensing data, improving the model's robustness. The proposed approach utilizes a fully connected neural network architecture to enhance the model's capacity for feature integration and learning, while incorporating a focal loss function to counter the effects of class imbalance. [Results] In land cover segmentation experiments conducted on the WHDLD and Potsdam datasets, the proposed method significantly outperformed DeepLab v3+ and UNet++. Compared with the baseline interval type-2 fuzzy neural network, it achieved average overall accuracy improvements of 8.31% and 10.48%, Kappa coefficient enhancements of 14.07% and 14.59%, and F1 score increases of 16.36% and 12.31%. [Conclusions] The results demonstrate that the proposed method effectively addresses ambiguity and uncertainty in remote sensing image segmentation, significantly mitigating the impact of regional noise on land cover segmentation while achieving high segmentation accuracy and robust generalization capabilities.
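
    The focal loss used to counter class imbalance follows the standard formulation FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), which down-weights easy, well-classified pixels so that rare classes contribute more to training. The sketch below shows the formula itself; the alpha and gamma values are illustrative, not those tuned in the paper.

```python
import numpy as np

def focal_loss(probs, labels, alpha=0.25, gamma=2.0, eps=1e-7):
    """probs: (N, C) predicted class probabilities; labels: (N,) integer class ids."""
    p_t = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t))

probs = np.array([[0.9, 0.05, 0.05],    # easy, correct pixel -> tiny loss contribution
                  [0.3, 0.6, 0.1],      # moderately confident pixel
                  [0.2, 0.1, 0.7]])     # pixel of a rarer class
labels = np.array([0, 1, 2])
print(round(float(focal_loss(probs, labels)), 4))
```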

  • LIU Xuanguang, LI Yujie, ZHANG Zhenchao, DAI Chenguang, ZHANG Hao, MIAO Yuzhe, ZHU Han, LU Jinhao
    Journal of Geo-information Science. 2025, 27(5): 1144-1162. https://doi.org/10.12082/dqxxkx.2025.240668

    [Objectives] Existing semantic change detection methods fail to fully utilize local and global features in very high-resolution images and often overlook the spatial-temporal dependencies between bi-temporal remote sensing images, resulting in inaccurate land cover classification results. Additionally, the detected change regions suffer from boundary ambiguity, leading to low consistency between the detected and actual boundaries. [Methods] To address these issues, inspired by the Vision State Space Model (VSSM) with long-sequence modeling capabilities, we propose a semantic change detection network, CVS-Net, which combines Convolutional Neural Networks (CNNs) and VSSM. CVS-Net effectively leverages the local feature extraction capability of CNNs and the long-distance dependency modeling ability of VSSM. Furthermore, we embed a bi-directional spatial-temporal feature modeling module based on VSSM into CVS-Net to guide the network in capturing spatial-temporal change relations. Finally, we introduce a boundary-aware reinforcement branch to enhance the model's performance in boundary localization. [Results] We validate the proposed method on the SECOND and Fuzhou GF2 (FZ-SCD) datasets and compare it with five state-of-the-art methods: HRSCD.str4, Bi-SRNet, ChangeMamba, ScanNet, and TED. Comparative experiments demonstrate that our method outperforms these existing approaches, achieving a SeK of 23.95% and mIoU of 72.89% on the SECOND dataset, and a SeK of 23.02% and mIoU of 72.60% on the FZ-SCD dataset. In ablation experiments, as the proposed modules were progressively added, the SeK improved to 21.26%, 23.04%, and 23.95%, respectively, demonstrating the effectiveness of each module. Notably, compared with CNN-based, Transformer-based, and Mamba-based feature extractors, the proposed CNN-VSS feature extractor achieved the highest SeK, mIoU, and Fscd, indicating its robust feature extraction capability and effective balance between local and global feature representation. Additionally, ST-SS2D improved the SeK score by 1.19% on average compared to other spatial-temporal modeling methods, effectively capturing the spatial-temporal dependencies of bi-temporal features and enhancing the model's ability to infer potential feature changes. Furthermore, the proposed boundary-aware reinforcement branch improved the consistency between detected and actual boundaries, achieving a consistency degree of 92.97%. [Conclusions] The proposed method significantly improves both the attribute and geometric accuracy of semantic change detection, providing technical references and data support for sustainable urban development and land resource management.
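
    The idea of pairing a convolutional branch (local features) with a long-sequence model over bi-temporal tokens can be caricatured as below. An nn.GRU stands in for the VSSM/Mamba-style module, which is not reproduced here, and the layer sizes, token ordering, and module name are illustrative assumptions rather than the CVS-Net design.

```python
import torch
import torch.nn as nn

class BiTemporalMixer(nn.Module):
    """CNN branch for local features, then a bidirectional sequence model over
    the joint t1/t2 token sequence (nn.GRU used as a stand-in for the VSSM)."""
    def __init__(self, channels=32):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seq = nn.GRU(channels, channels // 2, batch_first=True, bidirectional=True)

    def forward(self, img_t1, img_t2):
        f1, f2 = self.local(img_t1), self.local(img_t2)                   # (B, C, H, W)
        b, c, h, w = f1.shape
        # Time-major token sequence so the sequence model sees both dates jointly.
        tokens = torch.stack([f1, f2], dim=2).flatten(2).transpose(1, 2)  # (B, 2HW, C)
        mixed, _ = self.seq(tokens)                                       # (B, 2HW, C)
        m1 = mixed[:, : h * w].transpose(1, 2).reshape(b, c, h, w)
        m2 = mixed[:, h * w :].transpose(1, 2).reshape(b, c, h, w)
        return m1, m2

y1, y2 = BiTemporalMixer()(torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32))
```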

  • ZHANG Peng, LIU Wanyue, LIU Chengbao, BO Zheng, NIU Ran, HAN Dongxu, LIN Qian, ZHANG Ziyi, MA Mingze
    Journal of Geo-information Science. 2025, 27(4): 787-800. https://doi.org/10.12082/dqxxkx.2025.240467

    [Significance] The characteristics of the lunar surface, including its mineral compositions, geological formations, environmental factors, and temperature variations, are essential for advancing our understanding of the Moon. These features provide a wealth of scientific data for lunar research, such as resource distribution, environmental characteristics, and evolutionary history. Spectral imagers, which detect mineral compositions in a nondestructive way, play a crucial role in analyzing the composition of the lunar surface and have become key payloads in scientific exploration missions. With the increasing demand for high-precision lunar exploration data and advancements in spectral imaging technology, there is a growing trend toward acquiring lunar remote sensing data with higher spatial and spectral resolution across a broad spectral range. This trend is shaping the future of lunar orbit exploration, allowing for unprecedented detail in probing the Moon's surface. However, the higher resolution of spatial and spectral data also introduces significant challenges in data processing. [Progress] This paper begins by summarizing existing lunar spectral orbit data, including payload parameters and associated scientific findings. It then explores specific technical challenges in the data processing chain, such as pre-processing and the calculation of lunar surface parameters. Mapping surface compositions through spectral remote sensing is particularly complex due to the mixing of minerals within rocks, which can obscure clear spectral signatures. To address these challenges, various theoretical and empirical approaches have been developed. This paper proposes technical methods and potential solutions to overcome these obstacles. [Conclusions] In conclusion, detailed studies of lunar surface characteristics and the acquisition of high-resolution spectral data are vital for advancing lunar science. Lunar hyperspectral data are expected to support manned lunar exploration and scientific research by enabling the identification of various minerals on the Moon's surface and determining their abundance through hyperspectral observations. Advances in spectral imaging technology and the development of solutions for processing high-resolution data will significantly enhance lunar and planetary science capabilities. These efforts will pave the way for deeper insights into the Moon's geology and potential resource utilization.
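
    As a first-order illustration of the mixing problem mentioned above, the sketch below performs linear spectral unmixing with non-negative least squares: a measured spectrum is decomposed onto a small endmember library to estimate abundances. The endmember spectra here are synthetic placeholders, and intimately mixed lunar regolith generally calls for nonlinear (e.g., radiative-transfer) models, so this is only an illustration of the simplest empirical approach.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic endmember library: each column is a reflectance spectrum over 50 bands.
bands = np.linspace(0.5, 2.5, 50)                       # wavelengths in micrometres
E = np.column_stack([
    0.30 + 0.10 * np.exp(-(bands - 1.0) ** 2 / 0.02),   # absorption-like feature near 1.0 um
    0.25 + 0.05 * bands,                                 # featureless, sloped spectrum
    0.40 - 0.08 * np.exp(-(bands - 1.9) ** 2 / 0.05),    # absorption-like feature near 1.9 um
])

true_abund = np.array([0.6, 0.3, 0.1])
mixed = E @ true_abund + np.random.default_rng(0).normal(0, 0.002, bands.size)

abund, _ = nnls(E, mixed)            # non-negative least squares
abund /= abund.sum()                 # normalise abundances to sum to one
print(np.round(abund, 3))
```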

  • MENG Yuebo, SU Shilong, HUANG Xinyu, WANG Heng
    Journal of Geo-information Science. 2025, 27(4): 930-945. https://doi.org/10.12082/dqxxkx.2025.240633

    [Objectives] Existing remote sensing building extraction models suffer from poor feature representation caused by feature redundancy, unclear building boundaries, and the loss of small buildings. [Methods] To address these issues, we propose a detail enhancement and cross-scale geometric feature sharing network (DCS-Net). This network consists of an Information Decoupling and Aggregation Module (IRDM), a Local Mutual Similarity Detail Enhancement Module (LMSE), and a Cross-scale Geometric Feature Fusing Module (CGFF), designed to guide small target inference. The IRDM module separates and reconstructs redundant features by assigning weights, thereby suppressing redundancy in both spatial and channel dimensions and promoting effective feature learning. The LMSE module enhances the accuracy and completeness of building edge information by dynamically selecting windows and specifying pixel clustering based on local mutual similarity between encoder-decoder features. The CGFF module computes the feature block relationships between the original image and various semantic-level feature maps to compensate for information loss, thereby improving the extraction performance of small buildings. [Results] The experiments in this paper are based on two public datasets: the WHU aerial dataset and the Massachusetts building detection dataset. The experimental results demonstrate the following: (1) Compared with existing building extraction algorithms such as UNet, PSPNet, Deeplab V3+, MANet, MAPNet, DRNet, Build-Former, MBR-HRNet, SDSNet, HDNet, DFFNet, and UANet, DCS-Net has achieved significant improvements across various evaluation metrics, demonstrating the effectiveness of the proposed method. (2) On the WHU dataset, the Intersection over Union (IoU), F1 score, and 95% Hausdorff Distance (95%HD) reached 92.94%, 96.35%, and 75.79, respectively, outperforming the current best algorithm by 0.79%, 0.44%, and 1.90%. (3) On the Massachusetts dataset, the metrics were 77.13%, 87.06%, and 205.26, with improvements of 0.72%, 0.43%, and 13.84%, respectively. [Conclusions] These results indicate that DCS-Net can more accurately and comprehensively extract buildings from remote sensing images, significantly alleviating the issue of small building loss.
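
    The reported metrics can be reproduced from binary prediction and ground-truth masks along the lines of the generic sketch below; note that the 95% Hausdorff distance is computed here on foreground pixel coordinates rather than extracted boundaries, a common simplification, and none of this is taken from the paper's code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def iou_f1(pred, gt):
    """pred, gt: boolean building masks of identical shape."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn + 1e-12)
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-12)
    return iou, f1

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance between foreground pixel sets."""
    a, b = np.argwhere(pred), np.argwhere(gt)
    if len(a) == 0 or len(b) == 0:
        return np.inf
    d = cdist(a, b)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))
```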

  • LI Wangping, WEI Wenbo, LIU Xiaojie, CHAI Chengfu, ZHANG Xueying, ZHOU Zhaoye, ZHANG Xiuxia, HAO Junming, WEI Yuming
    Journal of Geo-information Science. 2025, 27(6): 1448-1461. https://doi.org/10.12082/dqxxkx.2025.250034

    [Objectives] Using deep learning methods for landslide identification can significantly improve efficiency and is of great importance for landslide disaster prevention and mitigation. The DeepLabV3+ algorithm effectively captures multi-scale features, thereby improving image segmentation accuracy, and has been widely used in the segmentation and recognition of remote sensing images. [Methods] We propose an improved model based on DeepLabV3+. First, the Coordinate Attention (CA) mechanism is incorporated into the original model to enhance its feature extraction capabilities. Second, the Atrous Spatial Pyramid Pooling (ASPP) module is replaced with the Dense Atrous Spatial Pyramid Pooling (DenseASPP) module, which helps the network capture more detailed features and expands the receptive field, effectively addressing cases where dilated convolution becomes inefficient or ineffective. A Strip Pooling (SP) branch module is added in parallel to allow the backbone network to better leverage long-range dependencies. Finally, the Cascade Feature Fusion (CFF) module is introduced to hierarchically fuse multi-scale features, further improving segmentation accuracy. [Results] Experiments on the Bijie landslide dataset show that, compared with the original model, the improved model achieves a 2.2% increase in MIoU and a 1.2% increase in the F1 score. Compared with other mainstream deep learning models, the proposed model demonstrates higher extraction accuracy. In terms of segmentation quality, it significantly improves the overall accuracy in identifying landslide areas, reduces misclassification and omission, and yields more precise delineation of landslide boundaries. [Conclusions] Based on experiments using the landslide debris flow disaster dataset in Sichuan and surrounding areas, along with practical application verification, the proposed method demonstrates strong recognition capability across landslide images in diverse scenarios and levels of complexity. It performs particularly well in challenging environments such as areas with dense vegetation or proximity to rivers, showing strong generalization ability and broad applicability.
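
    Of the added components, strip pooling is the easiest to illustrate compactly: the feature map is averaged along full-height and full-width strips, the two strip descriptors are fused, and the result gates the input so that long-range row and column context is captured. The sketch below is a simplified block in the spirit of the published Strip Pooling module, with illustrative layer sizes, not the authors' exact branch.

```python
import torch
import torch.nn as nn

class StripPooling(nn.Module):
    """Simplified strip pooling: Hx1 and 1xW average pooling, fusion, sigmoid gate."""
    def __init__(self, channels):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))    # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))    # (B, C, 1, W)
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0), bias=False)
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1), bias=False)
        self.fuse = nn.Sequential(nn.Conv2d(channels, channels, 1, bias=False),
                                  nn.BatchNorm2d(channels))

    def forward(self, x):
        b, c, h, w = x.shape
        xh = self.conv_h(self.pool_h(x)).expand(-1, -1, h, w)   # broadcast strip along width
        xw = self.conv_w(self.pool_w(x)).expand(-1, -1, h, w)   # broadcast strip along height
        gate = torch.sigmoid(self.fuse(torch.relu(xh + xw)))
        return x * gate

out = StripPooling(64)(torch.randn(1, 64, 32, 32))   # same shape, gated by strip context
```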

  • QIN Chengzhi, ZHU Liangjun, CHEN Ziyue, WANG Yijie, WANG Yujing, WU Chenglong, FAN Xingchen, ZHAO Fanghe, REN Yingchao, ZHU Axing, ZHOU Chenghu
    Journal of Geo-information Science. 2025, 27(5): 1027-1040. https://doi.org/10.12082/dqxxkx.2025.240706

    [Objectives] Geographic modeling aims to appropriately couple diverse geographic models and their specific algorithmic implementations to form an effective and executable model workflow for solving specific, unsolved application problems. This approach is highly valuable and in high demand in practice. However, traditional geographic modeling is designed with an execution-oriented approach, which places a heavy burden on users, especially non-expert users. [Methods] In this position paper, we advocate not only the necessity of intelligent geographic modeling but also a way of achieving it through a so-called recursive geographic modeling approach. This new approach originates from the user's modeling target, which can be formalized as an initial elemental modeling question. It then reasons backward to resolve the current elemental modeling question and iteratively updates new elemental modeling questions in a recursive manner. This process enables the automatic construction of an appropriate geographic workflow model tailored to the application context of the user's modeling problem, thereby addressing the limitations of traditional geographic modeling. [Progress] Building on this foundational concept, this position paper introduces a series of intelligent geographic modeling methods developed by the authors. These methods aim to reduce the geographic modeling burden on non-expert users while ensuring the appropriateness of automatically constructed models. Specifically, each proposed intelligent geographic modeling method is designed to solve a specific type of elemental question within intelligent geographic modeling. The elemental questions include: (1) how to determine the appropriate model algorithm (or its parameter values) within the given application context, (2) how to select the appropriate covariate set as input for a model without a predetermined number of inputs (e.g., a soil mapping model without predetermined environmental covariates as inputs), (3) how to determine the structure of a model that integrates multiple coupled modules (e.g., a watershed system model incorporating diverse process simulation modules), and (4) how to determine the proper spatial extent of input data for a geographic model when a specific area of interest is assigned by the user. The key to solving these elemental questions lies in the effective utilization of geographic modeling knowledge, particularly application-context knowledge. However, since application-context knowledge is typically unsystematic, empirical, and implicit, we developed case formalization and case-based reasoning strategies to integrate this knowledge within the proposed methods. Based on the recursive intelligent geographic modeling approach and the corresponding methods, we propose an application schema for intelligent geographic modeling and computing. This schema is grounded in domain modeling knowledge, particularly case-based application-context knowledge, and leverages the “Data-Knowledge-Model” tripartite collaboration. A prototype of this approach has been implemented in an intelligent geospatial computing system called EGC (EasyGeoComputing). [Prospect] Finally, this position paper discusses the emerging role of large language models in geographic modeling. Their potential applications, relationships with the research presented here, and prospects for future research directions are explored.
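
    The recursive idea can be caricatured in a few lines: the user's modeling target is posed as an initial elemental question, a method for it is chosen by matching the application context against a case base, and the procedure recurses on whatever inputs that method leaves unresolved. Everything in the sketch below (the case-base structure, the tag-overlap similarity, and the example entries) is an illustrative assumption, not the EGC implementation.

```python
# Illustrative case base: elemental question -> candidate (method, required inputs, context tags).
CASE_BASE = {
    "soil_property_map": [("rf_regression_kriging", ["covariate_set", "samples"], {"hilly"})],
    "covariate_set":     [("terrain_plus_ndvi",     [],                           {"hilly"})],
    "samples":           [("field_survey_points",   [],                           set())],
}

def resolve(question, context):
    """Recursively build a workflow for `question` under an application `context` (set of tags)."""
    candidates = CASE_BASE.get(question, [])
    if not candidates:
        return {"question": question, "method": None}      # unresolved leaf, left to the user
    # Case-based-reasoning stand-in: prefer the candidate whose context tags overlap most.
    method, inputs, _tags = max(candidates, key=lambda c: len(c[2] & context))
    return {
        "question": question,
        "method": method,
        "inputs": [resolve(q, context) for q in inputs],
    }

workflow = resolve("soil_property_map", context={"hilly", "humid"})
print(workflow)
```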

  • SUI Xin, HAO Yuting, CHEN Zhijian, WANG Changqiang, SHI Zhengxu, XU Aigong
    Journal of Geo-information Science. 2025, 27(2): 397-410. https://doi.org/10.12082/dqxxkx.2024.230648

    [Objectives] Scene understanding based on 3D laser point clouds plays a core role in many applications such as object detection, 3D reconstruction, cultural relic protection, and autonomous driving. The semantic classification of 3D point clouds is an important task in scene understanding, but due to the large amount of data, diverse targets, and large-scale differences, as well as the occlusion of buildings and trees, this task still poses challenges. Existing deep learning models for point cloud classification face several challenges due to the unstructured and disordered nature of point clouds. These challenges include inadequate extraction of local and global features and the absence of an efficient mechanism for context feature integration, which makes fine-grained classification of ground objects difficult. Therefore, this study introduces a novel point cloud feature classification approach that incorporates a multi-scale convolutional attention network for both local and global features. [Methods] To address the lack of structure in point clouds, we construct a local weighted graph to model the positional relationships between central points and their neighboring points. This graph facilitates dynamic adjustments of kernel weights, enabling the extraction of more representative local features. Simultaneously, we introduce a global graph attention module to account for the overall spatial distribution of points, address the disorder of point clouds, and effectively capture global contextual features, thereby integrating information at different scales. Furthermore, we design an adaptive weighted pooling module to facilitate the seamless fusion of local and global features, thus maximizing the network's classification performance. [Results] The proposed method is evaluated on the publicly available Toronto-3D point cloud dataset and a campus point cloud dataset obtained from real measurements, and compared against several network models, including PointNet++, DGCNN, RandLA-Net, BAAF-Net, and BAF-LAC. On the Toronto-3D dataset, our method achieves an OA of 97.21% and an MIoU of 85.46%, improvements of 1.99% to 8.21% in OA and 3.23% to 35.86% in MIoU over the compared networks. On the campus dataset, our method achieves an OA of 97.38% and an MIoU of 85.70%, improvements of 0.58% to 10.53% in OA and 2.01% to 32.01% in MIoU. [Conclusions] These results surpass those achieved by the comparison networks and effectively overcome problems such as large changes in target scale and building occlusion, establishing our method's capability to achieve high-precision and efficient fine classification of ground objects in complex road scenes.
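
    The local weighted graph at the core of such methods can be illustrated with a plain k-nearest-neighbour construction in which each neighbour receives a Gaussian distance weight. In the paper the kernel weights are adjusted dynamically by the network; the fixed bandwidth and neighbourhood size below are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_weighted_graph(points, k=16, sigma=0.5):
    """points: (N, 3) array of XYZ coordinates. Returns neighbour indices (N, k)
    and Gaussian distance weights (N, k) normalised per centre point."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)   # nearest neighbour is the point itself
    dists, idx = dists[:, 1:], idx[:, 1:]
    w = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    return idx, w

idx, w = local_weighted_graph(np.random.default_rng(0).random((1000, 3)))
```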

  • LIU Kang
    Journal of Geo-information Science. 2025, 27(7): 1520-1531. https://doi.org/10.12082/dqxxkx.2025.250196

    [Significance] Human mobility is closely tied to transportation, infectious disease spread, and public safety, making trajectory analysis and modeling a long-standing research focus. While numerous specialized trajectory models, such as interpolation, prediction, and classification models, have been developed using machine learning or deep learning, most are task-specific and trained on localized datasets, limiting their generalizability across tasks, regions, or trajectory data. Recent advances in generative AI have demonstrated the potential of foundation models in NLP and computer vision, motivating the need for a trajectory foundation model capable of learning universal patterns from large-scale mobility data to support diverse downstream applications. [Methods] This paper first reviews the research progress of various specialized trajectory models. It then categorizes trajectory modeling tasks into conventional tasks (e.g., trajectory similarity computation, interpolation, prediction, and classification) and the generation task (i.e., trajectory generation), and elaborates on recent advances in trajectory foundation models for these two types of tasks. [Conclusions] The paper argues that trajectory foundation models for conventional tasks should enhance not only task generalization but also spatial and data generalization. Trajectory foundation models for the generation task must address the challenge of spatial generalization, enabling the generation of large-scale trajectory data "from scratch" based on easily obtainable macro-level urban data or features. Furthermore, integrating trajectory data with other data types (e.g., text, maps, and other geospatial data) to construct multimodal geographic foundation models, as well as developing application-oriented trajectory foundation models for fields such as transportation, public health, and public safety, are promising research directions worthy of future exploration.

  • LIU Xiaoqing, REN Fu, YUE Weiting, GAO Yunji
    Journal of Geo-information Science. 2025, 27(5): 1214-1227. https://doi.org/10.12082/dqxxkx.2025.240359

    [Objectives] Forests, as the backbone of terrestrial ecosystems, play crucial roles in climate regulation and soil and water conservation. Among the many threats to forests, the impact of forest fires is becoming increasingly severe. Analyzing the factors influencing forest fires is essential for preventing them and formulating relevant strategies. [Methods] This study focuses on China, using multi-source data related to fires, vegetation, climate, topography, and human activities to analyze the spatial heterogeneity of forest fire driving forces from multiple perspectives. [Results] The findings reveal that: (1) At a global scale, the spatial distribution of forest fires is most influenced by Fractional Vegetation Cover (FVC), with an explanatory power of 0.1302, while climate factors also exert a relatively strong influence. Interactions between driving factors show enhanced explanatory power, indicating that forest fire occurrence results from the combined influence of multiple factors. Moreover, a nonlinear relationship and impact threshold exist between these driving factors and the probability of forest fire occurrence. (2) At a local scale, climate and vegetation serve as key driving factors behind forest fires, significantly explaining their spatial distribution across different zones. Temperature is the most influential factor in the Cold Temperate Needle-leaf Forest region, the Temperate Coniferous and Broad-leaved Mixed Forest region, and the Alpine Vegetation of the Tibetan Plateau region, with explanatory powers of 0.313, 0.41, and 0.052, respectively. In contrast, wind speed is the dominant factor in the Warm Temperate Broad-leaved Forest region, with an explanatory power of 0.279. [Conclusions] The primary driving factors and their interactions vary across different regions, quantitatively confirming the spatial heterogeneity of forest fire driving forces. This research contributes to a national-scale understanding of forest fire drivers and fire hazard distribution in China, assisting policymakers in designing fire management strategies to mitigate potential fire risks.
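
    The "explanatory power" values reported here are consistent with the factor-detector q statistic of the geographical detector, which measures how much of the variance of the response is removed by stratifying it with a factor; the sketch below implements that statistic on that assumption, with placeholder inputs.

```python
import numpy as np

def q_statistic(y, strata):
    """Factor-detector q: y is the response (e.g., fire density per cell),
    strata are the factor classes; q = 1 - sum(N_h * var_h) / (N * var)."""
    y, strata = np.asarray(y, dtype=float), np.asarray(strata)
    sst = y.size * y.var()                         # population variance (ddof=0)
    ssw = sum(y[strata == s].size * y[strata == s].var() for s in np.unique(strata))
    return 1.0 - ssw / sst if sst > 0 else 0.0

rng = np.random.default_rng(0)
fire_density = rng.random(1000)                    # placeholder response
temperature_class = rng.integers(0, 5, 1000)       # placeholder discretised factor
print(round(q_statistic(fire_density, temperature_class), 4))
```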

  • SHI Shihao, SHI Qunshan, ZHOU Yang, HU Xiaofei, QI Kai
    Journal of Geo-information Science. 2025, 27(7): 1596-1607. https://doi.org/10.12082/dqxxkx.2025.250015

    [Objectives] Small object detection is of great significance in both military and civil applications. However, due to challenges such as low resolution, high noise environments, target occlusion, and complex backgrounds, traditional detection methods often struggle to achieve the necessary accuracy and robustness. The problem of detecting small objects in complex scenes remains highly challenging. Therefore, this paper proposes a hybrid feature and multi-scale fusion algorithm for small object detection. [Methods] First, a Hybrid Conv and Transformer Block (HCTB) is designed to fully utilize local and global context information, enhancing the network's perception of small objects while optimizing computational efficiency and feature extraction capability. Second, a Multi-Dilated Shared Kernel Conv (MDSKC) module is introduced to extend the receptive field of the backbone network using dilated convolutions with varying expansion rates, thereby enabling efficient multi-scale feature extraction. Finally, the Omni-Kernel Cross Stage Model (OKCSM), constructed based on the concepts of Omni-Kernel and Cross Stage Partial, is integrated to optimize the small target feature pyramid network. This approach helps preserve small object information and significantly improves detection performance. [Results] Ablation and comparison experiments were conducted on the VisDrone2019 and TinyPerson datasets. Compared to the baseline model YOLOv8n, the proposed method improves precision, recall, mAP@50, and mAP@50:95 by 1.3%, 3.1%, 3.0%, and 1.9%, respectively, on VisDrone2019, and by 3.6%, 1.3%, 2.1%, and 0.7%, respectively, on TinyPerson. Additionally, the model size and GFLOPs are only 6.3 MB and 11.3 G, demonstrating its efficiency. Furthermore, compared with classical algorithms such as HIC-YOLOv5, TPH-YOLOv5, and Drone-YOLO, the proposed algorithm demonstrates significant advantages and superior performance. [Conclusions] The algorithm effectively improves detection accuracy, confirming its strong performance in addressing small object detection in complex scenes.
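
    One way to read the idea of sharing a kernel across dilation rates is sketched below: a single 3x3 weight tensor is applied at several dilation rates and the responses are fused by a 1x1 convolution, enlarging the receptive field without multiplying the 3x3 parameter count. The layer sizes, fusion choice, and module name are assumptions for illustration, not the paper's exact MDSKC design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDilatedSharedConv(nn.Module):
    """One shared 3x3 kernel applied at several dilation rates, fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.rates = rates
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # padding == dilation keeps the spatial size for a 3x3 kernel.
        feats = [F.conv2d(x, self.weight, padding=r, dilation=r) for r in self.rates]
        return self.fuse(torch.cat(feats, dim=1))

y = MultiDilatedSharedConv(3, 16)(torch.randn(1, 3, 64, 64))   # (1, 16, 64, 64)
```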

  • LIU Chang, SHI Erpeng, GUO Shiyi, GUO Liang, SUN Xiaoli
    Journal of Geo-information Science. 2025, 27(3): 585-600. https://doi.org/10.12082/dqxxkx.2024.230576

    [Objectives] Urban public transportation service quality is an important factor affecting residents' travel choices and quality of life. However, the current development and reform of urban public transportation in China still have shortcomings, and it is necessary to incorporate public perception into decision-making and to improve service quality from the residents' perspective. Previous studies have two main limitations: first, they rely on traditional analysis methods based on traffic surveys, which fail to capture regional differences in perceived service quality; second, they use big data from social media platforms, which are prone to information bias, polarization, and other issues, and do not reflect the public's real needs. Moreover, they mostly focus on public opinion analysis without providing specific and feasible optimization paths. [Methods] To address these gaps, this paper proposes a method that combines public network participation and semantic analysis. It uses internet big data to extract online messages related to urban public transportation from the online interactive platform between government and citizens and analyzes their spatiotemporal features and perceived service quality. It also conducts spatial analysis and explores the service efficiency of the public transportation system in relation to the distribution of transportation facilities. Based on this, it offers optimization suggestions. The paper selects Wuhan, one of the national central cities and an important megacity in the middle reaches of the Yangtze River, as a case study. The urban development area in Wuhan is a key zone for urbanization and a major hub for public travel activities, covering 15 functional zones. It has a complete allocation of public transportation facilities, including all the subway lines and stations and most of the bus lines and stations in the city. [Results] The main findings are as follows: (1) Public network participation data can reflect the spatiotemporal patterns of actual travel activities and have high credibility; (2) The emotional expression of the public varies across individuals and regions, and the perceived service quality dimensions can be categorized into five topics: "public transportation planning and construction", "public transportation travel conditions", "residential community bus configuration", "public transportation route setting", and "public transportation operation service". Furthermore, the perceived service quality exhibits spatial imbalance and agglomeration; (3) Corresponding optimization suggestions are made for the road system in the main urban area, subway stations in the far urban area, and bus routes at the junction of the main urban area and far urban area. [Conclusions] The research results of this paper provide a new method for fine-grained identification and optimization of spatial differences in perceived urban public transportation service quality, and also demonstrate the application value of public network participation data in facilitating government decision-making.
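
    The topic grouping in finding (2) can be approximated with a generic text-mining pipeline; the sketch below uses TF-IDF features and non-negative matrix factorization to surface message topics. The abstract does not state which semantic-analysis method the paper used, the example messages are English placeholders, and real Chinese messages would first require word segmentation (e.g., with jieba).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

messages = [                                   # placeholder citizen messages
    "bus route near the new residential community is missing",
    "subway station construction plan for the development zone is delayed",
    "peak hour bus frequency on the cross-river line is too low",
]

tfidf = TfidfVectorizer(max_features=5000)
X = tfidf.fit_transform(messages)

nmf = NMF(n_components=3, init="nndsvda", random_state=0)   # the study groups messages into 5 topics
doc_topic = nmf.fit_transform(X)                            # message-by-topic weights

terms = tfidf.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top_terms = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}: {top_terms}")
```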