Most Downloaded


  • QIN Qiming
    Journal of Geo-information Science. 2025, 27(10): 2283-2290. https://doi.org/10.12082/dqxxkx.2025.250426

    [Objectives] With the rapid increase in the number of Earth observation satellites in orbit worldwide, remote sensing data has been accumulating explosively, offering unprecedented opportunities for Earth system science research to dynamically monitor global change. At the same time, it also brings a series of challenges, including multi-source heterogeneity, scarcity of labeled data, insufficient task generalization, and data overload. [Methods] To address these bottlenecks, Google DeepMind has proposed AlphaEarth Foundations (AEF), which integrates multimodal data such as optical imagery, SAR, LiDAR, climate simulations, and textual sources to construct a unified 64-dimensional embedding field. This framework achieves cross-modal and spatiotemporal semantic consistency for data fusion and has been made openly available on platforms such as Google Earth Engine. [Results] The main contributions of AEF can be summarized as follows: (1) Mitigating the long-standing “data silos” problem by establishing globally consistent embedding layers; (2) Enhancing semantic similarity measurement through a von Mises-Fisher (vMF) spherical embedding mechanism, thereby supporting efficient retrieval and change detection; (3) Shifting complex preprocessing and feature engineering tasks into the pre-training stage, enabling downstream applications to become “analysis-ready” and significantly reducing application costs. The paper further highlights the application potential of AEF in three stages: (1) Initially in land cover classification and change detection; (2) Subsequently in deep coupling of embedding vectors with physical models to drive scientific discovery; (3) Ultimately evolving into a spatial intelligence infrastructure, serving as a foundational service for global geospatial intelligence. 
Nevertheless, AEF still faces several challenges: (1) Limited interpretability of embedding vectors, which constrains scientific attribution and causal analysis; (2) Uncertainties in domain transfer and cross-scenario adaptability, with robustness in extreme environments yet to be verified; (3) Performance advantages that require more empirical validation across regions and independent experiments. [Conclusions] Overall, AEF represents a new direction for research in remote sensing and geospatial artificial intelligence, with breakthroughs in data efficiency and cross-task generalization providing solid support for future Earth science studies. However, its further development will depend on continuous advances in interpretability, robustness, and empirical validation, as well as on transforming the 64-dimensional embedding vectors into widely usable data resources through different pathways.
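The vMF spherical embedding mechanism described above reduces similarity retrieval to dot products of unit vectors. A minimal sketch of such retrieval, with synthetic 64-dimensional vectors standing in for real AEF embeddings (all data here is illustrative):

```python
import numpy as np

def normalize(v):
    # Project embeddings onto the unit hypersphere (vMF-style representation).
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def top_k_similar(query, bank, k=3):
    # For unit vectors, the dot product equals cosine similarity.
    sims = normalize(bank) @ normalize(query)
    order = np.argsort(-sims)[:k]
    return order, sims[order]

rng = np.random.default_rng(0)
bank = rng.normal(size=(100, 64))              # toy embedding bank
query = bank[42] + 0.01 * rng.normal(size=64)  # near-duplicate of entry 42
idx, sims = top_k_similar(query, bank)
print(idx[0])  # nearest neighbour is entry 42
```

The same dot-product machinery underlies both retrieval and change detection: a large drop in similarity between two dates' embeddings at the same location flags a candidate change.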

  • HAO Yuanfei, LIU Zhe, ZHENG Xi, QIAN Yun
    Journal of Geo-information Science. 2025, 27(9): 2070-2085. https://doi.org/10.12082/dqxxkx.2025.250129

    [Objectives] Street space serves as the primary perceptual interface for pedestrians in urban environments, and the visual quality of these spaces plays a crucial role in enhancing their vitality. Traditional evaluation methods often rely on single-objective indicators, making it difficult to effectively link objective environmental features with pedestrians' subjective perceptions. [Methods] This study proposes a novel evaluation framework based on Large Language Models (LLMs), incorporating the style dimension of subjective perception and extending traditional single-indicator quantitative analysis to a comprehensive approach that integrates both quantification and stylization. This framework utilizes Baidu Street View imagery to quantitatively assess two objective indicators, namely green view index and sky view factor, through semantic segmentation techniques. Additionally, it evaluates six subjective indicators, including vegetation diversity, building typology, building continuity, sidewalk usage, roadway usage, and signage usage, by leveraging prompt-optimized LLMs. The study then categorizes street space visual quality features within the research area using the Latent Dirichlet Allocation (LDA) topic model, aiming to explore the spatial characteristics of different streets and identify optimization strategies. [Results] Using Beijing's Xicheng District as the study area, the results reveal spatial distribution patterns of vegetation density and sky openness, along with pedestrians' subjective evaluations of indicators such as vegetation diversity and building type. Cluster analysis identified comprehensive service streets centered around Xidan North Street, characteristic streets centered around Xihuangchenggen South Street, and mixed-type streets centered around Lingjing Hutong. [Conclusions] This study innovatively introduces a large language model with human-like perceptual capabilities, enhancing its performance through prompt engineering. 
The resulting framework enables efficient and integrated evaluation of street visual quality by combining both objective and subjective factors. This approach provides a practical reference for large-scale, automated analysis of street view imagery.
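The two objective indicators are, at bottom, pixel shares in a segmentation output. A toy sketch (the class ids and the 4x4 mask are invented; a real pipeline would take them from a semantic segmentation model run on the street-view image):

```python
import numpy as np

# Hypothetical class ids from a semantic segmentation model.
VEGETATION, SKY = 1, 2

def view_index(seg_mask, class_id):
    # Share of image pixels assigned to the target class.
    return float(np.mean(seg_mask == class_id))

# Toy 4x4 "segmentation" of a street-view image.
mask = np.array([[2, 2, 2, 2],
                 [1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]])
gvi = view_index(mask, VEGETATION)   # green view index
svf = view_index(mask, SKY)          # sky view factor
print(gvi, svf)  # 0.25 0.25
```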

  • ZHU Shan, HOU Xiyong, WANG Xiaoli, ZHANG Xueying, LIU Kai, SONG Jie
    Journal of Geo-information Science. 2025, 27(8): 1952-1964. https://doi.org/10.12082/dqxxkx.2025.240702

[Objectives] Land Use and Land Cover (LULC) plays a crucial role in shaping surface environments and ecological processes. Among various land cover types, built-up land, representing the dominant form of anthropogenic surface modification, has expanded rapidly in recent decades, exerting significant impacts on regional ecosystems while attracting increasing attention from multiple disciplines. This study aims to improve the spatial accuracy of built-up land mapping by evaluating and integrating multiple LULC datasets, thereby supporting research on regional sustainable development. [Methods] Taking the Bohai Rim region as the study area, seven medium- to high-resolution LULC products from domestic and international sources were initially selected. Based on a comparative analysis of total built-up area and spatial distribution patterns, five datasets (ESA2020, CoLUCC2020, GlobeLand2020, CLCD2023, and GLC_FCS2022) were chosen for further evaluation and integration. Consistency analysis was conducted to assess the classification performance of each dataset, and a multi-criteria evaluation combined with threshold-based filtering was employed for multi-source data fusion. [Results] Evaluation results indicated that the ESA2020, CoLUCC2020, GlobeLand2020, and GLC_FCS2022 datasets exhibit relatively high classification accuracy for built-up land, while the CLCD2023 dataset performs less satisfactorily. The fused product achieved an overall accuracy of 93.51% and a Kappa coefficient of 0.7455, demonstrating notable improvements over any individual dataset. [Conclusions] The proposed fusion method effectively overcomes the limitations of single-source data by leveraging the complementary strengths of multiple datasets. It provides a robust methodological foundation for regional LULC data integration and offers valuable data support for sustainable development research in the Bohai Rim and similar regions.
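The paper's multi-criteria evaluation is more elaborate, but its threshold-based core can be sketched as a per-pixel vote among products (the five toy binary masks below merely stand in for the five real datasets):

```python
import numpy as np

def fuse_builtup(masks, threshold=3):
    # A pixel is built-up in the fused map if at least `threshold`
    # of the input products agree (simplified threshold-based vote).
    votes = np.sum(np.stack(masks), axis=0)
    return (votes >= threshold).astype(np.uint8)

# Five toy 2x2 binary built-up masks standing in for the five products.
m = [np.array([[1, 1], [0, 0]]),
     np.array([[1, 1], [0, 0]]),
     np.array([[1, 0], [0, 1]]),
     np.array([[1, 1], [0, 0]]),
     np.array([[0, 1], [1, 0]])]
fused = fuse_builtup(m, threshold=3)
print(fused)  # [[1 1]
              #  [0 0]]
```

A vote like this exploits exactly the complementarity the abstract describes: a pixel misclassified by one product is outvoted by the datasets that classify it correctly.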

  • HUANG Yi, ZHANG Xueying, SHENG Yehua, XIA Yongqi, YE Peng
    Journal of Geo-information Science. 2025, 27(6): 1249-1262. https://doi.org/10.12082/dqxxkx.2025.250175

    [Objectives] This study addresses the critical challenges in typhoon disaster knowledge services, which are often hindered by "massive data, scarce knowledge, and limited services." The core objective is to rapidly distill actionable knowledge from vast datasets to enhance disaster management efficacy and mitigate typhoon-related impacts. Large Language Models (LLMs), renowned for their superior performance in natural language processing, are leveraged to deeply mine disaster-related information and provide robust support for advanced knowledge services. [Methods] This research establishes a typhoon disaster knowledge service framework encompassing three layers: data, knowledge, and service. [Results] For the data-to-knowledge layer, an LLM-driven (Qwen2.5-Max) automated method for constructing typhoon disaster Knowledge Graphs (KGs) is proposed. This method first introduces a multi-level typhoon disaster knowledge representation model that integrates spatiotemporal characteristics and disaster impact mechanisms. A specialized training dataset is curated, incorporating typhoon-related texts with explicit temporal and spatial attributes. By adopting a "pre-training + fine-tuning" paradigm, the framework efficiently transforms raw disaster data into structured knowledge. For the knowledge-to-service layer, an LLM-based intelligent question-answering system is developed. Utilizing the constructed typhoon disaster KG, this system employs Graph Retrieval-Augmented Generation (GraphRAG) to retrieve contextually relevant knowledge from the graph and generate user-specific disaster prevention and mitigation guidance. This approach ensures seamless conversion of structured knowledge into practical services, such as personalized evacuation plans and resource allocation strategies. [Conclusions] The study highlights the transformative potential of LLMs in typhoon disaster management and lays a foundation for integrating LLMs with geospatial technologies. 
This interdisciplinary synergy advances Geographic Artificial Intelligence (GeoAI) and paves the way for innovative applications in disaster service.
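The GraphRAG step retrieves a query-relevant neighbourhood of the knowledge graph before any generation happens. A toy sketch of that retrieval stage only (the triples, entity names, and values below are invented, and the LLM call that would consume the retrieved context is omitted):

```python
# Toy typhoon KG as (subject, relation, object) triples -- invented data.
triples = [
    ("Typhoon Haikui", "landfall_at", "Fujian"),
    ("Typhoon Haikui", "max_wind_speed", "42 m/s"),
    ("Fujian", "issued_alert", "red"),
    ("Typhoon Khanun", "landfall_at", "Liaoning"),
]

def retrieve_subgraph(entity, triples, hops=1):
    # Collect triples touching the entity, then expand by `hops` hops;
    # the result is handed to the LLM as grounding context.
    frontier, found = {entity}, []
    for _ in range(hops + 1):
        hits = [t for t in triples
                if (t[0] in frontier or t[2] in frontier) and t not in found]
        found.extend(hits)
        frontier |= {t[0] for t in hits} | {t[2] for t in hits}
    return found

context = retrieve_subgraph("Typhoon Haikui", triples)
print(len(context))  # 3: two Haikui triples plus the Fujian alert one hop out
```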

  • DU Pei, SHEN Yangjie, LIU Zhenxia, YU Zhaoyuan
    Journal of Geo-information Science. 2025, 27(9): 2106-2116. https://doi.org/10.12082/dqxxkx.2025.250220

    [Objectives] Global climate change, accelerating sea-level rise, and intensifying anthropogenic pressures are rendering the intricate human-land-sea nexus within coastal zones increasingly complex, sensitive, and vulnerable. This growing challenge underscores the urgent need for integrated coastal research frameworks capable of synthesizing environmental sensing, dynamic process simulation, and scenario projection. Addressing this critical gap, Digital Twin (DT) technology emerges as a transformative paradigm. By integrating multi-source data, sophisticated models, and domain knowledge into intelligent systems, DT offers unprecedented potential for creating precise virtual replicas and enabling intelligent management of complex coastal socio-ecological systems. [Analysis] This paper systematically analyzes the state of coastal zone digitalization, highlighting the pressing need for robust digital frameworks that can effectively represent and analyze the strong coupling between natural processes and human activities under multifaceted pressures. Building on this foundation, we propose a novel conceptual framework and implementation pathway for constructing a Digital Twin Coastal Zone (DTCZ). This framework explicitly positions land-sea interface processes as the foundational scenario and centers on human-land-sea feedback mechanisms as the core analytical thread. The proposed DTCZ system architecture is articulated across four pivotal dimensions: (1) Comprehensive information integration and knowledge aggregation; (2) Simulation of natural processes integrated with coupled human-nature decision support; (3) Synergistic short-term forecasting and long-term monitoring capabilities; and (4) Realistic multidimensional representation enabling intelligent interaction. 
We critically discuss the key technological enablers supporting this vision, encompassing coastal data governance and fusion, multi-scale scenario modeling, predictive analytics for critical coastal elements, persistent long-term monitoring strategies, and the development of the integrated DTCZ platform itself. At its core, the envisioned DTCZ leverages spatiotemporally fused multi-source data as its foundation and prioritizes enhanced scenario simulation and intervention capabilities. [Prospects] This framework is designed to overcome the limitations, such as fragmented data and limited predictive power, that constrain traditional coastal digital systems. By significantly advancing the computational tractability and overall manageability of coastal systems, the DTCZ paradigm offers a powerful new methodological tool and operational framework. It holds strong potential for supporting sustainable coastal development and modernizing governance structures in the face of ongoing climate change, providing a robust platform for evidence-based planning and adaptive management.

  • LI Wangping, WEI Wenbo, LIU Xiaojie, CHAI Chengfu, ZHANG Xueying, ZHOU Zhaoye, ZHANG Xiuxia, HAO Junming, WEI Yuming
    Journal of Geo-information Science. 2025, 27(6): 1448-1461. https://doi.org/10.12082/dqxxkx.2025.250034

[Objectives] Using deep learning methods for landslide identification can significantly improve efficiency and is of great importance for landslide disaster prevention and mitigation. The DeepLabV3+ algorithm effectively captures multi-scale features, thereby improving image segmentation accuracy, and has been widely used in the segmentation and recognition of remote sensing images. [Methods] We propose an improved model based on DeepLabV3+. First, the Coordinate Attention (CA) mechanism is incorporated into the original model to enhance its feature extraction capabilities. Second, the Atrous Spatial Pyramid Pooling (ASPP) module is replaced with the Dense Atrous Spatial Pyramid Pooling (DenseASPP) module, which helps the network capture more detailed features and expands the receptive field, effectively addressing the limitations of inefficient or ineffective dilated convolutions. A Strip Pooling (SP) branch module is added in parallel to allow the backbone network to better leverage long-range dependencies. Finally, the Cascade Feature Fusion (CFF) module is introduced to hierarchically fuse multi-scale features, further improving segmentation accuracy. [Results] Experiments on the Bijie landslide dataset show that, compared with the original model, the improved model achieves a 2.2% increase in MIoU and a 1.2% increase in the F1 score. Compared with other mainstream deep learning models, the proposed model demonstrates higher extraction accuracy. In terms of segmentation quality, it significantly improves the overall accuracy in identifying landslide areas, reduces misclassification and omission, and yields more precise delineation of landslide boundaries. [Conclusions] Based on experiments using the landslide debris flow disaster dataset in Sichuan and surrounding areas, along with practical application verification, the proposed method demonstrates strong recognition capability across landslide images in diverse scenarios and levels of complexity.
It performs particularly well in challenging environments such as areas with dense vegetation or proximity to rivers, showing strong generalization ability and broad applicability.
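Of the added modules, strip pooling is the simplest to sketch: pool along every row and every column, then broadcast the two strip summaries back over the feature map, so each position sees context across the full width and height. The numpy sketch below shows only this pooling core; the 1-D convolutions and fusion of the full SP module are omitted:

```python
import numpy as np

def strip_pooling(x):
    # Average over each row (horizontal strips) and each column
    # (vertical strips), then broadcast both back to H x W and sum,
    # giving every position long-range context along both axes.
    row = x.mean(axis=1, keepdims=True)   # shape (H, 1)
    col = x.mean(axis=0, keepdims=True)   # shape (1, W)
    return row + col                      # broadcasts to (H, W)

feat = np.arange(12.0).reshape(3, 4)      # toy single-channel feature map
out = strip_pooling(feat)
print(out.shape)  # (3, 4)
```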

  • CHEN Xiawei, LONG Yi, LIU Xiang, ZHANG Ling, LIU Shaojun
    Journal of Geo-information Science. 2025, 27(5): 1228-1245. https://doi.org/10.12082/dqxxkx.2025.240508

    [Objectives] The quality of the leisure environment is a critical factor influencing residents' leisure experiences and participation, and it is closely related to the vitality of urban areas and economic development. Therefore, exploring how environmental quality influences the vitality of leisure space is crucial for promoting urban development. [Methods] A human-centered approach is adopted to construct a research framework for exploring the relationship between leisure environment quality and leisure space vitality based on image-text fusion perception. Online review texts and street view images are used to comprehensively perceive the leisure environment quality of the city. Natural language processing and semantic segmentation techniques are used to assess the leisure environment quality, while mobile signaling data is utilized to quantitatively measure the vitality of leisure spaces through user trajectory semantic modeling. Finally, using an Optimal Parameter-based Geographical Detector (OPGD), an in-depth analysis is conducted on the impact mechanisms of individual leisure environment quality factors and their interactions with the vitality of leisure spaces at global and local spatial scales in Nanjing. [Results] The findings reveal that: (1) The spatial distribution of leisure space vitality exhibits a "single-core-multi-center" pattern. The vitality in the main urban area is concentrated around the Xinjiekou commercial district, while Jiangbei District forms a "three-point" pattern with interactions between the two ends and the center. In the Xianlin area, high-vitality zones are distributed around the university town, while in the Dongshan area, they are located along the Shuanglong Avenue corridor. (2) On a macro scale, the leisure space vitality of Nanjing is indirectly dominated by economic levels. On a local scale, the influence of 14 leisure environment quality factors on leisure space vitality demonstrates significant regional heterogeneity. 
However, in municipal and district-level core areas with high leisure space vitality, the effects of these environmental quality factors are all significant. (3) The formation mechanism of leisure space vitality in Nanjing is closely related to regional geographical location, population density and composition, and economic income levels. [Conclusions] The analysis of Nanjing indicates that the exploration of leisure environment quality through image-text fusion perception enhances the systematic and comprehensive understanding of the factors influencing leisure space vitality and its mechanisms. This provides a scientific basis for optimizing the quality of the urban leisure environment and enhancing the vitality of leisure space.

  • WU Ruoling, GUO Danhuai
    Journal of Geo-information Science. 2025, 27(5): 1041-1052. https://doi.org/10.12082/dqxxkx.2025.240694

    [Objectives] Understanding whether Large Language Models (LLMs) possess spatial cognitive abilities and how to quantify them are critical research questions in the fields of large language models and geographic information science. However, there is currently a lack of systematic evaluation methods and standards for assessing the spatial cognitive abilities of LLMs. Based on an analysis of existing LLM characteristics, this study develops a comprehensive evaluation standard for spatial cognition in large language models. Ultimately, it establishes a testing standard framework, SRT4LLM, along with standardized testing processes to evaluate and quantify spatial cognition in LLMs. [Methods] The testing standard is constructed along three dimensions: spatial object types, spatial relations, and prompt engineering strategies in spatial scenarios. It includes three types of spatial objects, three categories of spatial relations, and three prompt engineering strategies, all integrated into a standardized testing process. The effectiveness of the SRT4LLM standard and the stability of the results are verified through multiple rounds of testing on eight large language models with different parameter scales. Using this standard, the performance scores of different LLMs are evaluated under progressively improved prompt engineering strategies. [Results] The geometric complexity of input spatial objects influences the spatial cognition of LLMs. While different LLMs exhibit significant performance variations, the scores of the same model remain stable. As the geometric complexity of spatial objects and the complexity of spatial relations increase, LLMs' accuracy in judging three spatial relations decreases by only 7.2%, demonstrating the robustness of the test standard across different scenarios. 
Improved prompt engineering strategies can partially enhance LLMs' spatial cognitive Question-Answering (Q&A) performance, with varying degrees of improvement across different models. This verifies the effectiveness of the standard in analyzing LLMs' spatial cognitive abilities. Additionally, multiple rounds of testing on the same LLM indicate that the results are convergent, and score differences between different LLMs exhibit a stable distribution. [Conclusions] SRT4LLM effectively measures the spatial cognitive abilities of LLMs and serves as a standardized evaluation tool. It can be used to assess LLMs' spatial cognition and support the development of native geographic large models in future research.
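A benchmark of this kind needs a computable ground truth to score a model's free-text answers against. A toy oracle for one simplified relation category (topological relations between axis-aligned rectangles; the category names and the reduction to boxes are our own simplification, not SRT4LLM's actual item design):

```python
def box_relation(a, b):
    # Classify the topological relation of two axis-aligned boxes,
    # each given as (xmin, ymin, xmax, ymax).
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "disjoint"     # no shared area at all
    if ax0 <= bx0 and ay0 <= by0 and ax1 >= bx1 and ay1 >= by1:
        return "contains"     # a fully covers b
    return "overlaps"         # partial intersection

print(box_relation((0, 0, 4, 4), (1, 1, 2, 2)))   # contains
print(box_relation((0, 0, 1, 1), (5, 5, 6, 6)))   # disjoint
```

An LLM's answer to "what is the spatial relation between A and B?" can then be marked correct iff it matches the oracle's label for the same geometry.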

  • SHI Shihao, SHI Qunshan, ZHOU Yang, HU Xiaofei, QI Kai
    Journal of Geo-information Science. 2025, 27(7): 1596-1607. https://doi.org/10.12082/dqxxkx.2025.250015

[Objectives] Small object detection is of great significance in both military and civil applications. However, due to challenges such as low resolution, high noise environments, target occlusion, and complex backgrounds, traditional detection methods often struggle to achieve the necessary accuracy and robustness. The problem of detecting small objects in complex scenes remains highly challenging. Therefore, this paper proposes a hybrid feature and multi-scale fusion algorithm for small object detection. [Methods] First, a Hybrid Conv and Transformer Block (HCTB) is designed to fully utilize local and global context information, enhancing the network's perception of small objects while optimizing computational efficiency and feature extraction capability. Second, a Multi-Dilated Shared Kernel Conv (MDSKC) module is introduced to extend the receptive field of the backbone network using dilated convolutions with varying expansion rates, thereby enabling efficient multi-scale feature extraction. Finally, the Omni-Kernel Cross Stage Model (OKCSM), constructed based on the concepts of Omni-Kernel and Cross Stage Partial, is integrated to optimize the small target feature pyramid network. This approach helps preserve small object information and significantly improves detection performance. [Results] Ablation and comparison experiments were conducted on the VisDrone2019 and TinyPerson datasets. Compared to the baseline model YOLOv8n, the proposed method improves precision, recall, mAP@50, and mAP@50:95 by 1.3%, 3.1%, 3.0%, and 1.9%, respectively, on VisDrone2019, and by 3.6%, 1.3%, 2.1%, and 0.7%, respectively, on TinyPerson. Additionally, the model size and computational cost are only 6.3 MB and 11.3 GFLOPs, demonstrating its efficiency. Furthermore, compared with classical algorithms such as HIC-YOLOv5, TPH-YOLOv5, and Drone-YOLO, the proposed algorithm demonstrates significant advantages and superior performance.
[Conclusions] The algorithm effectively improves detection accuracy, confirming its strong performance in addressing small object detection in complex scenes.
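The reported metrics mAP@50 and mAP@50:95 both rest on one primitive: Intersection over Union between a predicted and a ground-truth box (a detection counts as a match at mAP@50 when IoU ≥ 0.5). A self-contained sketch of that primitive:

```python
def iou(a, b):
    # Intersection over Union of two boxes (xmin, ymin, xmax, ymax) --
    # the overlap criterion behind thresholds like mAP@50.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```

For small objects this criterion is punishing: a localization error of a few pixels on a tiny box swings IoU across the 0.5 threshold, which is why the gains on VisDrone2019 and TinyPerson are meaningful.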

  • HE Li, WANG Rong
    Journal of Geo-information Science. 2025, 27(9): 2151-2164. https://doi.org/10.12082/dqxxkx.2025.250273

    [Significance] Space is not merely a physical place, but a productive arena of social relations. Social phenomena are inherently endowed with spatial attributes, making the spatial perspective a critical pathway for understanding complex social issues. With the deepening "spatial turn" in the social sciences and continuous advancements in Geographic Information Systems (GIS)—particularly in data acquisition, spatial analysis and modeling, and spatial visualization—GIS has become an essential tool for addressing social issues. However, disciplinary differences in theoretical paradigms, methodological logic, and scale cognition between geography and the social sciences constrain their deeper integration. Existing literature lacks a systematic synthesis of integration trends, underlying challenges, and empowerment pathways, necessitating a comprehensive clarification of fusion mechanisms, core obstacles, and emerging opportunities. [Progress] This paper identifies five key advantages of GIS in empowering social science research: expanding spatial analytical thinking, supporting spatiotemporal data, enhancing survey techniques, enriching representational forms, and strengthening analytical capabilities. We review representative GIS applications in economics, political science, and sociology. From dimensions such as spatial cognition, data capacity, methodological adoption, and research hotspots, we distill application characteristics across these disciplines, revealing both commonalities and differences. While all three disciplines recognize spatial effects, their theoretical orientations shape distinct technical approaches—economics emphasizes causal identification, political science focuses on geopolitical structures, and sociology prioritizes contextual representation. 
Through a three-dimensional analysis—data, methodology, and cognition—we examine three major challenges in addressing social issues: the mismatch between data and research questions, the difficulty of integrating methods with causal mechanisms, and the contextual misalignment of place and scale, which reflect deeper issues of data suitability, methodological coherence, and the validity of spatial reasoning. [Prospects] The advancement of artificial intelligence, especially large models, injects new methodological momentum into GIS-based spatial analysis and brings threefold opportunities for addressing social issues. First, large models are driving spatial analysis from correlation-based description toward transparent causal inference. Second, multi-source data fusion and the generation of "silicon-based samples" help overcome the limitations of traditional survey data. Third, an emerging "space-survey" integrated framework is constructing a "spatial cognitive infrastructure" to support social research. Future efforts should establish a synergistic "large model-spatial analysis" paradigm that integrates these three opportunities. By simultaneously addressing challenges of data matching, method integration, and contextual misalignment, this paradigm can elevate GIS from a supportive tool to a core engine for theory generation and mechanism interpretation. This transformation will enhance the scientific value and practical effectiveness of GIS and spatial analysis in addressing complex social issues, fostering a bidirectional interaction between methodological innovation and theoretical advancement.

  • FU Xin, ZHANG Haoran, WANG Yuanbo, HUANG Chong, LIU Xiangye, ZHANG Hengcai, XU Zhenghe
    Journal of Geo-information Science. 2025, 27(9): 2135-2150. https://doi.org/10.12082/dqxxkx.2024.240020

[Objectives] Soil salinity is a major and widespread challenge of the present era, hindering global food security and environmental sustainability. Accurate evaluation and analysis of soil salinization are of great significance for the improvement and management of soil salinization. [Methods] To address the challenge of mapping the three-dimensional spatial distribution of soil salinity, this study selected 819 effective field soil samples within a saline soil region of the Yellow River Delta. These samples, which have vertical stratifications from 0 to 100 cm, were used for comprehensive analysis. The soil sample points were arranged in a grid of 5 km×5 km horizontally, and soil layers were sampled every 10 cm vertically. Following the principle of covering different land cover types and human accessibility, soil samples were collected from the depth range of 0~100 cm in the study area. The three-dimensional spatial differentiation of soil salinity in the coastal saline soil area was revealed from different perspectives using traditional geostatistical methods and 3D Empirical Bayesian Kriging interpolation. The effects of various factors on the spatial differentiation of soil salinity were analyzed using the Geodetector method. [Results] The results showed that the spatial distribution of soil salinity was highly variable, both across the full soil depth range and within individual vertical layers. There were differences in the scale of spatial autocorrelation of soil salt content at different depths. In this study, the 3D Empirical Bayesian Kriging interpolation method was established to spatialize the soil salinity of the soil samples, which effectively revealed the vertical fine-scale three-dimensional spatial characteristics of soil salinity. Soil salinity exhibited significant three-dimensional spatial differentiation, with diverse profile distribution types.
The main types were homogeneous and surface aggregated, with some local areas showing bottom aggregated and fluctuating types. All influencing factors significantly affected the three-dimensional spatial differentiation of soil salinity, but the degree of influence varied for each factor. The order of explanatory power of each influencing factor is as follows: land use/land cover > distance to coastline > groundwater depth > groundwater conductivity > elevation > land surface temperature > soil bulk density > soil clay content. Compared with single factors, the pairwise interaction of any factor had a greater effect on the spatial differentiation of soil salinity, but the interaction strength of different factors varied. In the whole 0~100 cm soil depth range, GWD ∩ LULC had the largest impact (0.443), followed by LST ∩ LULC (0.326). [Conclusions] Although the q values of land surface temperature and soil bulk density were not high, their explanatory power on soil salinity was greatly improved after their interaction with land use/cover, better explaining the changes of soil salinity in the study area. Factors such as land use/cover, groundwater depth, surface temperature, and soil bulk density are closely related to the spatial distribution of soil salinity in the study area. The research results provide a theoretical basis and technical support for the formulation of comprehensive improvement measures and management systems for fine-scale saline-alkali land in the region. These findings have positive implications for promoting the achievement of the Sustainable Development Goal of Land Degradation Neutrality in coastal areas.
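The Geodetector quantities quoted above (e.g. GWD ∩ LULC = 0.443) are q-statistics: the share of total variance explained by a stratification, with an interaction computed on the intersection of two stratifications. A minimal sketch with invented salinity values and factor classes:

```python
import numpy as np

def geodetector_q(y, strata):
    # Geodetector q-statistic: 1 - (within-strata variance / total variance).
    # q near 1 means the factor's strata explain most spatial variation.
    y, strata = np.asarray(y, float), np.asarray(strata)
    sst = len(y) * y.var()
    ssw = sum((strata == s).sum() * y[strata == s].var()
              for s in np.unique(strata))
    return 1.0 - ssw / sst

# Toy salinity values and two hypothetical factors (say, LULC and GWD class).
y    = [1, 1, 3, 3, 5, 5]
lulc = [0, 0, 0, 1, 1, 1]
gwd  = [0, 1, 0, 1, 0, 1]

# Interaction q: stratify by the intersection of the two factors' classes.
q_inter = geodetector_q(y, [f"{a}-{b}" for a, b in zip(lulc, gwd)])
print(q_inter >= geodetector_q(y, lulc))  # True: intersection never explains less
```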

  • LI Junming, HU Yaxuan, WANG Nannan, WANG Siyaqi, WANG Ruolan, LYU Lin, FANG Ziqing
    Journal of Geo-information Science. 2025, 27(7): 1501-1519. https://doi.org/10.12082/dqxxkx.2025.250161

[Objectives] Classical statistical inference typically relies on the assumptions of large sample sizes and independent, identically distributed (i.i.d.) observations, conditions that spatio-temporal data frequently violate, leading to inherent theoretical limitations in conventional approaches. In contrast, Bayesian spatio-temporal statistical methods integrate prior knowledge and treat all model parameters as random variables, thereby forming a unified probabilistic inference framework. This enables the incorporation of a broader range of uncertainties and offers robustness in modelling small samples and dependent structures, making Bayesian methods highly advantageous and increasingly influential in spatio-temporal analysis. [Progress] From the perspective of methodological evolution, this paper systematically reviews mainstream Bayesian spatio-temporal statistical models from two complementary perspectives: traditional Bayesian statistics and Bayesian machine learning. The former includes Bayesian Spatio-temporal Evolutionary Hierarchical Models, Bayesian Spatio-temporal Regression Hierarchical Models, Bayesian Spatial Panel Data Models, Bayesian Geographically Weighted Spatio-temporal Regression Models, Bayesian Spatio-temporal Varying Coefficient Models, and Bayesian Spatio-temporal Meshed Gaussian Process Models. The latter includes Bayesian Causal Forest Models, Bayesian Spatio-temporal Neural Networks, and Bayesian Graph Convolutional Neural Networks. In terms of application, the review highlights representative studies across domains such as public health, environmental sciences, socio-economic and public safety, as well as energy and engineering. [Prospect] Bayesian spatio-temporal statistical methods need to achieve breakthroughs in multi-source heterogeneous data modeling, integration with deep learning, incorporation of causal inference mechanisms, and optimization of high-performance computing.
These advances are essential to balance theoretical rigor with practical adaptability and to promote the development of a next-generation spatio-temporal modeling paradigm characterized by causal inference, adaptive generalization, and intelligent analysis.
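The small-sample robustness described above comes from combining a prior with the data in proportion to their precisions. A minimal sketch of this idea, using a conjugate normal-normal update for a single site's mean (all values and variable names here are illustrative, not from the paper):

```python
def normal_posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update: combine a prior belief about a
    site's mean with a handful of observations, weighting each source
    by its precision (inverse variance)."""
    n = len(obs)
    sample_mean = sum(obs) / n
    post_prec = 1.0 / prior_var + n / obs_var
    post_mean = (prior_mean / prior_var + n * sample_mean / obs_var) / post_prec
    return post_mean, 1.0 / post_prec

# A site with only 3 observations: the prior keeps the estimate stable,
# and the posterior variance shrinks below the prior variance.
mean, var = normal_posterior(prior_mean=20.0, prior_var=4.0,
                             obs=[25.0, 24.0, 26.0], obs_var=9.0)
```

With more observations the data term dominates and the posterior approaches the sample mean, which is exactly the large-sample behavior classical inference assumes from the start.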

  • QIN Chengzhi, ZHU Liangjun, CHEN Ziyue, WANG Yijie, WANG Yujing, WU Chenglong, FAN Xingchen, ZHAO Fanghe, REN Yingchao, ZHU Axing, ZHOU Chenghu
    Journal of Geo-information Science. 2025, 27(5): 1027-1040. https://doi.org/10.12082/dqxxkx.2025.240706

    [Objectives] Geographic modeling aims to appropriately couple diverse geographic models and their specific algorithmic implementations to form an effective and executable model workflow for solving specific, unsolved application problems. This approach is highly valuable and in high demand in practice. However, traditional geographic modeling is designed with an execution-oriented approach, which places a heavy burden on users, especially non-expert users. [Methods] In this position paper, we advocate not only for the necessity of intelligent geographic modeling but also for achieving it through a so-called recursive geographic modeling approach. This new approach originates from the user's modeling target, which can be formalized as an initial elemental modeling question. It then reasons backward to resolve the current elemental modeling question and iteratively updates new elemental modeling questions in a recursive manner. This process enables the automatic construction of an appropriate geographic workflow model tailored to the application context of the user's modeling problem, thereby addressing the limitations of traditional geographic modeling. [Progress] Building on this foundational concept, this position paper introduces a series of intelligent geographic modeling methods developed by the authors. These methods aim to reduce the geographic modeling burden on non-expert users while assuring the appropriateness of automatically constructed models. Specifically, each proposed intelligent geographic modeling method is designed to solve a specific type of elemental question within intelligent geographic modeling. 
The elemental questions include: (1) how to determine the appropriate model algorithm (or its parameter values) within the given application context, (2) how to select the appropriate covariate set as input for a model without a predetermined number of inputs (e.g., a soil mapping model without predetermined environmental covariates as inputs), (3) how to determine the structure of a model that integrates multiple coupled modules (e.g., a watershed system model incorporating diverse process simulation modules), and (4) how to determine the proper spatial extent of input data for a geographic model when a specific area of interest is assigned by the user. The key to solving these elemental questions lies in the effective utilization of geographic modeling knowledge, particularly application-context knowledge. However, since application-context knowledge is typically unsystematic, empirical, and implicit, we developed case formalization and case-based reasoning strategies to integrate this knowledge within the proposed methods. Based on the recursive intelligent geographic modeling approach and the corresponding methods, we propose an application schema for intelligent geographic modeling and computing. This schema is grounded in domain modeling knowledge, particularly case-based application-context knowledge, and leverages the “Data-Knowledge-Model” tripartite collaboration. A prototype of this approach has been implemented in an intelligent geospatial computing system called EGC (EasyGeoComputing). [Prospect] Finally, this position paper discusses the emerging role of large language models in geographic modeling. Their potential applications, relationships with the research presented here, and prospects for future research directions are explored.
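The recursive control flow described above — resolving one elemental question and spawning the new questions it raises — can be sketched as follows. The case lookup stands in for the paper's case-based reasoning; all question and tool names are hypothetical:

```python
def resolve(question, cases, plan=None):
    """Recursively resolve elemental modeling questions: each resolved
    question contributes a workflow step and may raise new elemental
    questions, which are resolved in turn."""
    if plan is None:
        plan = []
    step = cases[question]              # stand-in for case-based reasoning
    plan.append(step["use"])
    for sub in step.get("asks", []):    # reason backward into new questions
        resolve(sub, cases, plan)
    return plan

# Hypothetical case base: the initial target raises two follow-up questions.
cases = {
    "map soil type": {"use": "soil-mapping model",
                      "asks": ["choose covariates", "choose extent"]},
    "choose covariates": {"use": "covariate selector"},
    "choose extent": {"use": "extent estimator"},
}
plan = resolve("map soil type", cases)
```

The recursion terminates when a case raises no further questions, leaving an executable workflow plan ordered from the user's target down to its prerequisites.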

  • YU Hanyang, LAN Chaozhen, WANG Longhao, WEI Zijun, GAO Tian, WANG Yiqiao, LIU Ruimeng
    Journal of Geo-information Science. 2025, 27(8): 1896-1919. https://doi.org/10.12082/dqxxkx.2025.250052

    [Significance] Multimodal remote sensing image matching has become a fundamental task in integrated Earth observation, enabling precise spatial alignment across heterogeneous image sources. [Progress] As the diversity of sensing modalities, acquisition geometries, and temporal conditions increases, traditional matching frameworks have proven inadequate for capturing complex variations in radiometric responses, geometric configurations, and semantic representations. This technological gap has driven a significant paradigm shift from handcrafted feature engineering to deep learning-based solutions, which now form the core of current research and application development. This paper provides a comprehensive and structured review of recent advances in deep learning methods for multimodal remote sensing image matching, with an emphasis on the evolution of methodological paradigms and technical frameworks. It establishes a clear dual-path classification: the single-session approach and the end-to-end approach. The former selectively replaces or enhances individual components of traditional pipelines, such as feature encoding or similarity estimation, using neural network modules. The latter integrates the entire matching process into a unified network architecture, enabling joint optimization of feature learning, transformation modeling, and correspondence inference within a closed loop. This progression reflects the field's transition from modular adaptation to holistic modeling, revealing a deeper integration of data-driven representation learning with geometric reasoning. The review further examines the development of architectural strategies supporting this evolution, including attention mechanisms, graph-based structures, hierarchical feature fusion, and modality-bridging transformations. These innovations contribute to improved robustness, semantic consistency, and adaptability across diverse matching scenarios. 
Recent trends also demonstrate a growing reliance on pretrained vision foundation models, which provide transferable feature spaces and reduce the dependence on large-scale labeled datasets. In addition to summarizing technical advancements, the paper analyzes representative datasets, performance evaluation strategies, and the current challenges that constrain real-world deployment. These include limited data availability, weak cross-scene generalization, computational inefficiency, and insufficient interpretability. [Prospect] By synthesizing methodological progress with practical demands, the review identifies key directions for future research, including the design of modality-invariant representations, physically-informed neural architectures, and lightweight solutions tailored for scalable, real-time image registration in complex operational environments.

  • PING Yifan, LU Jun, GUO Haitao, HOU Qingfeng, ZHU Kun, SANG Zehao, LIU Tong
    Journal of Geo-information Science. 2025, 27(7): 1608-1623. https://doi.org/10.12082/dqxxkx.2025.250051

    [Objectives] Cross-view image geolocation refers to a technology that determines the geographical location of an image by matching it with reference images taken from different perspectives and possessing precise location information. This technology plays a crucial role in real-world applications such as Unmanned Aerial Vehicle (UAV) navigation, environmental monitoring, and target positioning. Currently, most deep learning-based cross-view image retrieval and geolocation methods for drone-satellite tasks rely heavily on supervised learning. However, the scarcity of high-quality labeled data presents a significant limitation, hindering the generalization capability of these models. Moreover, existing methods often fail to effectively model the spatial layout of images, making it difficult to bridge the substantial domain gap between cross-view images, thereby limiting the accuracy and robustness of geolocation tasks. [Methods] To address these challenges, this paper proposes a novel cross-view image retrieval and localization architecture called DINO-MSRA. The architecture first employs the DINOv2 large model framework, fine-tuned by Conv-LoRA, as the feature encoder. This enhances the model's feature extraction capabilities with fewer parameters, improving both efficiency and accuracy. Second, we design a spatial relation-aware feature aggregator based on the Mamba module (MSRA) to more effectively aggregate image features. By embedding spatial configuration features into the global descriptor, this module significantly improves the model's performance in cross-view matching tasks, especially in complex scenarios where spatial relationships between objects are crucial. Finally, the InfoNCE loss function is adopted to train the model, optimizing contrastive learning and ensuring more accurate retrieval and localization results. [Results] Extensive comparative and ablation experiments were conducted on the University-1652 and SUES-200 datasets. 
The experimental results show that for drone-view target localization (drone→satellite) and drone navigation (satellite→drone) tasks, the proposed method achieves R@1 accuracies of 95.14% and 97.29%, respectively, on the University-1652 dataset, representing improvements of 0.68% and 1.14% over the current best algorithm, CAMP. On the SUES-200 dataset at an altitude of 150 meters, R@1 accuracies reach 97.2% and 98.75%, which are 1.8% and 2.5% higher than CAMP, respectively. Moreover, the proposed method requires significantly fewer parameters than existing algorithms, only 19.2% of those used by Sample4Geo. [Conclusions] In summary, the proposed DINO-MSRA architecture outperforms current state-of-the-art methods in cross-view image matching, achieving higher accuracy and faster inference speed. These results demonstrate its robustness and practical application potential in challenging real-world scenarios.
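The InfoNCE objective used to train DINO-MSRA rewards high similarity to the matching cross-view image relative to the negatives. A minimal stdlib sketch for a single query (the similarity values are illustrative; real training uses batched, temperature-scaled embedding similarities):

```python
import math

def info_nce(sim, temperature=0.1):
    """InfoNCE loss for one query: sim[0] is the similarity to the
    positive (matching cross-view image), sim[1:] are negatives.
    Returns -log softmax probability of the positive."""
    logits = [s / temperature for s in sim]
    m = max(logits)  # log-sum-exp with max subtraction for stability
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)

# A well-separated positive yields near-zero loss; ambiguous
# similarities yield a larger loss, driving the embeddings apart.
easy = info_nce([0.9, 0.1, 0.0])
hard = info_nce([0.5, 0.45, 0.4])
```

Lowering the temperature sharpens the softmax, so the loss concentrates on the hardest negatives — a common design choice in cross-view retrieval training.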

  • PAN Jiechen, XING Shuai, CAO Jiayin, DAI Mofan, HUANG Gaoshuang, ZHI Lu
    Journal of Geo-information Science. 2025, 27(9): 1999-2020. https://doi.org/10.12082/dqxxkx.2025.250151

    [Significance] With rapid advances in remote sensing, surveying and mapping, and autonomous driving technologies, 3D point cloud semantic segmentation, a core technology of digital twin systems, is attracting increasing research attention. Airborne point cloud semantic segmentation is regarded as a key technology for enhancing the automation and intelligence of 3D geographic information systems. [Analysis] Driven by deep learning and sensing technologies such as LiDAR, depth cameras, and 3D laser scanners, point cloud semantic segmentation can automatically classify and accurately recognize large-scale point cloud data through precise feature extraction and efficient model training. However, compared with typical high-density, category-balanced point cloud datasets (e.g., those used in indoor scenes, autonomous driving, or robotics), airborne point clouds present significant challenges in areas such as registration and feature extraction. These challenges stem from their unique characteristics, including large-scale 3D terrain coverage, dynamic platform motion errors, considerable variations in ground-object spatial scales, and complex occlusions. Currently, deep-learning-based airborne point cloud semantic segmentation is still in its early stages. Due to heterogeneous data acquisition methods, varying resolutions, and diverse attribute information, there remains a gap between existing research and practical algorithm deployment. [Progress] This paper provides a comprehensive review of the field, covering adaptive algorithms, datasets, performance metrics, and emerging methods along with their advantages and limitations. It also offers quantitative comparisons with existing technologies, evaluating representative methods in terms of precision and applicability. 
[Prospect] A thorough analysis suggests that breakthroughs in airborne point cloud semantic segmentation necessitate systematic research innovations across multiple dimensions, including feature representation, multimodal fusion, few-shot learning, algorithm interpretability, and large-scale model benchmarking. These advancements are essential not only for overcoming current bottlenecks in real-world applications but also for establishing robust technical foundations for critical use cases such as digital twin cities and disaster emergency response.

  • ZHANG Nuan, WANG Tao, ZHANG Yan, WEI Yibo, LI Liuwen, LIU Yichen
    Journal of Geo-information Science. 2025, 27(8): 1751-1779. https://doi.org/10.12082/dqxxkx.2025.250137

    [Significance] Street View Image-based Visual Place Recognition (SV-VPR) is a geographical location recognition technology that relies on visual feature information. Its core task is to predict and accurately locate unknown locations by analyzing the visual features of street view images. This technology must overcome challenges such as appearance changes under different environmental conditions (e.g., lighting differences between day and night, seasonal variations) and viewpoint differences (e.g., perspective deviations between vehicle-mounted cameras and satellite images). Accurate recognition is achieved through calculating image feature similarity, applying geometric constraints, and related methods. As an interdisciplinary field of computer vision and geographic information science, SV-VPR is closely related to visual positioning, image retrieval, SLAM, and more. It has significant application value in areas such as UAV autonomous navigation, high-precision positioning for autonomous driving, construction of geographical boundaries in cyberspace, and integration of augmented reality environments. It is particularly advantageous in GPS-denied environments. [Analysis] This paper systematically reviews the research progress of visual location recognition based on street view images, covering the following aspects: First, the basic concepts and classifications of visual place recognition technologies are introduced. Second, the foundational principles and categorization methods specific to street view image-based visual place recognition are discussed in depth. Third, the key technologies in this field are analyzed in detail. Furthermore, relevant datasets for street view image-based visual place recognition are comprehensively reviewed. In addition, evaluation methods and index systems used in this domain are summarized. Finally, potential future research directions for SV-VPR are explored. 
[Purpose] This review aims to provide researchers with a systematic overview of the technological development trajectory of SV-VPR, helping them quickly understand the current research landscape. It also offers a comparative analysis of key technologies and evaluation methods to support algorithm selection, and identifies emerging challenges and potential breakthrough areas to inspire innovative research.

  • LIU Kang
    Journal of Geo-information Science. 2025, 27(7): 1520-1531. https://doi.org/10.12082/dqxxkx.2025.250196

    [Significance] Human mobility is closely tied to transportation, infectious disease spread, and public safety, making trajectory analysis and modeling a long-standing research focus. While numerous specialized trajectory models, such as interpolation, prediction, and classification models, have been developed using machine learning or deep learning, most are task-specific and trained on localized datasets, limiting their generalizability across tasks, regions, or trajectory data. Recent advances in generative AI have demonstrated the potential of foundation models in NLP and computer vision, motivating the need for a trajectory foundation model capable of learning universal patterns from large-scale mobility data to support diverse downstream applications. [Methods] This paper first reviews the research progress of various specialized trajectory models. It then categorizes trajectory modeling tasks into conventional tasks (e.g., trajectory similarity computation, interpolation, prediction, and classification) and the generation task (i.e., trajectory generation), and elaborates on recent advances in trajectory foundation models for these two types of tasks. [Conclusions] The paper argues that trajectory foundation models for conventional tasks should enhance not only task generalization but also spatial and data generalization. Trajectory foundation models for the generation task must address the challenge of spatial generalization, enabling the generation of large-scale trajectory data "from scratch" based on easily obtainable macro-level urban data or features. Furthermore, integrating trajectory data with other data types (e.g., text, maps, and other geospatial data) to construct multimodal geographic foundation models, as well as developing application-oriented trajectory foundation models for fields such as transportation, public health, and public safety, are promising research directions worthy of future exploration.

  • YUE Zichen, ZHONG Shaobo, MEI Xin
    Journal of Geo-information Science. 2025, 27(6): 1289-1304. https://doi.org/10.12082/dqxxkx.2025.240715

    [Objectives] Knowledge graphs, as a cutting-edge technology for integrating multimodal data sources, have garnered significant attention in the GIS domain. These graphs are typically constructed using graph databases. However, mainstream graph databases still face challenges in effectively organizing and analyzing geospatial-temporal data. [Methods] To address this issue, this paper proposes a spatiotemporal semantic modeling and query optimization approach that bridges graph databases with the spatial data engine implemented within relational databases. In the graph database, geographic entities are stored as lightweight placeholder nodes (storing only mapping IDs) and linked to spatiotemporal index nodes (such as time trees and Geohash encodings) to enhance aggregation capabilities. Meanwhile, complete geospatial-temporal objects are stored in a relational database, and table partitioning strategies are employed to improve retrieval efficiency. This approach uses unified identifiers and JDBC for routing geographic entities across the databases. When users invoke pre-registered spatiotemporal functions in the graph database, a query rewriter transforms the graph queries into SQL statements based on entity identifiers, pushes them to the relational database for processing, and returns the results to the graph query pipeline. Additionally, a two-phase commit protocol ensures data consistency across the heterogeneous databases. [Results] We implemented a prototype system integrating Neo4j and PostGIS and conducted experiments on query and storage efficiency using a multisource spatiotemporal dataset from Shenzhen (including taxi trajectories, bike-sharing trajectories, road networks, POIs, and remote sensing imagery). 
Compared to mainstream graph database systems (e.g., Neo4j and GraphDB), our approach significantly improves performance for geospatial-temporal queries, reducing response times by one to two orders of magnitude in complex computational scenarios and enabling raster computations unsupported by native graph databases. By leveraging lightweight graph nodes and PostGIS data compression, storage space is reduced by a factor of approximately three to five. Compared to virtual knowledge graph systems (e.g., Ontop), our method shows minimal differences in spatial query performance and storage overhead, while achieving notably faster response times for large-scale spatiotemporal queries. [Conclusions] Compared to existing methods, our approach leverages existing graph databases to construct materialized spatiotemporal knowledge graphs, enhancing modeling flexibility and query efficiency for geospatial-temporal data. It also supports user-defined extensions to the geospatial-temporal function library, offering a novel framework for efficiently managing and analyzing such data within knowledge graphs.
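The query-rewriting step described above — turning a registered graph-side function call into SQL pushed down to the relational store — can be sketched as a template lookup. `ST_Within` and `ST_GeomFromText` are real PostGIS functions, but the registry, routing, and table names here are hypothetical simplifications of the paper's mechanism:

```python
# Hypothetical registry of pre-registered spatiotemporal functions,
# each mapped to a SQL template for the relational (PostGIS) side.
FUNCTIONS = {
    "st.within": ("SELECT id FROM {table} "
                  "WHERE ST_Within(geom, ST_GeomFromText('{wkt}'))"),
}

def rewrite(func, table, wkt):
    """Rewrite a graph-side spatiotemporal function call into SQL.
    The graph database keeps only placeholder nodes with mapping IDs;
    the returned SQL is pushed to the relational store and its result
    fed back into the graph query pipeline."""
    return FUNCTIONS[func].format(table=table, wkt=wkt)

sql = rewrite("st.within", "poi", "POLYGON((0 0,1 0,1 1,0 1,0 0))")
```

In the actual system, the result IDs returned by such a query would be joined back to the graph's placeholder nodes via the unified identifiers.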

  • ZHANG Teng, WANG Jingxue, XIE Xiao, ZANG Dongdong
    Journal of Geo-information Science. 2025, 27(5): 1163-1178. https://doi.org/10.12082/dqxxkx.2025.240698

    [Objectives] Model-driven 3D reconstruction of buildings from airborne LiDAR point clouds relies on fitting the building point cloud to predefined geometric primitives. However, due to the uneven density and noise in the building point cloud, errors often arise in structural details during the primitive fitting process, leading to reduced reconstruction accuracy. To address this issue, this study proposes a 3D model reconstruction method for airborne LiDAR building point clouds based on sequential quadratic programming and elevation step correction. [Methods] First, a primitive library containing classical roof structures is established, including simple roofs, complex roofs, and steep roofs. An adjacency matrix is constructed by incorporating the adjacency relationships and ridge properties between roof patches. The best-matching primitives are then selected from the primitive library based on the adjacency matrix. Next, the shape parameters of the selected primitives are optimized using the sequential quadratic programming algorithm to achieve a globally optimal fitting state. The initial 3D model is then generated. To further enhance accuracy, the relative position of the building models and the roof point clouds in 3D space is refined through translation and rotation, reducing the relative distance deviation and improving the fitting precision. Finally, the City Geography Markup Language (CityGML) is used to store the reconstructed 3D building models, ensuring clear structure and correct topology, which facilitates the visual representation of reconstruction results. [Results] Ten sets of classical building point clouds from the 3D Building dataset were selected for the 3D model reconstruction experiment. The proposed method was compared with existing reconstruction approaches based on the same model-driven framework, and classical accuracy evaluation metrics were used for quantitative analysis. 
The average objective function value for the selected experimental data was 0.32 m, an improvement of 0.03 m over the comparison method. The horizontal average deviation between the reconstructed building elements and the building point cloud was 0.10 m, while the vertical average deviation was 0.04 m. [Conclusions] In summary, the optimal shape parameters, obtained through the sequential quadratic programming algorithm, enable the construction of 3D building models with complete topology and regular shapes. Additionally, the elevation step correction, which utilizes the average point spacing of the roof point cloud as the step length, effectively enhances the reconstruction accuracy of 3D building models.
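The elevation step correction uses the cloud's average point spacing as a fixed step length when refining model elevations. A minimal one-dimensional sketch of that idea — not the paper's actual procedure, and with all values illustrative:

```python
def step_correct(model_z, cloud_z, step, max_iter=50):
    """Shift a model elevation toward the mean elevation of the matched
    roof points in fixed increments, where `step` plays the role of the
    average point spacing used by the paper as the step length."""
    target = sum(cloud_z) / len(cloud_z)
    for _ in range(max_iter):
        if abs(target - model_z) <= step / 2:
            break  # within half a step of the target: stop
        model_z += step if target > model_z else -step
    return model_z

# A roof plane modeled 1 m too low is walked up to the point-cloud level.
z = step_correct(model_z=10.0, cloud_z=[10.9, 11.1, 11.0], step=0.2)
```

Tying the step to point spacing keeps the correction resolution consistent with what the data can actually support, avoiding over-fitting to individual noisy returns.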

  • NIU Chaoran, XUE Cunjin, XIANG Zheng, MA Ziyue
    Journal of Geo-information Science. 2025, 27(9): 2117-2134. https://doi.org/10.12082/dqxxkx.2025.240629

    [Objectives] The ocean twin space consists of the real ocean, the virtual ocean, and the bidirectional links between them. Spatiotemporal modeling for twin spaces requires the simultaneous representation and modeling of all ocean phenomena, objects, and their relationships within the study area. However, existing models such as object-oriented models, spatiotemporal field models, event-based models, and process-based models that incorporate dynamic changes, primarily focus on modeling individual ocean phenomena, including objects, fields, events, and processes. The absence of a unified organizational structure makes comprehensive ocean environment modeling challenging. [Methods] Based on four types of ocean spatiotemporal models mentioned above, this study designs a unified spatiotemporal data organization structure and proposes a graph model for the integrated representation of oceanic static and dynamic elements in twin spaces. The core components of the model include: (1) Establishing a unified organization structure of "entity object-data description-data sequence" through hierarchical and attribute design of the entity object, enabling the unified organization of four object types: spatiotemporal objects, spatiotemporal fields, events, and processes; (2) Designing the relationship representation between oceanic static and dynamic elements in the twin space by analyzing the mapping process from the real ocean to the virtual ocean; (3) Integrating the unified structure of the four object types with the representation of relationships between static and dynamic elements, extracting five core components: time, entity object, twin object, twin scene, and relationship. Furthermore, entities and relationships within these core components are then abstracted into nodes and edges, constructing a five-layer graph representation framework: "twin scene-twin object-entity object-data sequence-time." 
[Results] A case study on the organizational management of ocean elements around Yin Island and its surrounding waters in the northeast of the Yongle Atoll, Xisha Islands, Sansha City, Hainan Province, China, validates the feasibility and effectiveness of the proposed graph model for integrating static and dynamic ocean elements in twin spaces. Comparative experiments with the hybrid object-field model, the geographic knowledge graph, and the geographic spatiotemporal process-based knowledge representation model demonstrate that the proposed model successfully unifies static objects and dynamic processes, providing a more comprehensive representation of relationships between ocean objects. [Conclusions] The proposed model resolves the fragmentation of static and dynamic data in twin spaces, enhances the efficiency of ocean data management and utilization, and advances ocean management from digitization to intelligence.
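The five-layer "twin scene - twin object - entity object - data sequence - time" framework abstracts entities and relationships into nodes and edges. A minimal sketch of one such layer chain as plain node/edge structures (all identifiers are illustrative, not from the paper's case study data):

```python
# One chain through the five layers, with layer membership as node labels.
nodes = {
    "scene:YinIsland": "twin scene",
    "twin:eddy_1":     "twin object",
    "entity:eddy_1":   "entity object",
    "seq:sst_2024":    "data sequence",
    "t:2024-06":       "time",
}
edges = [
    ("scene:YinIsland", "contains",     "twin:eddy_1"),
    ("twin:eddy_1",     "maps_to",      "entity:eddy_1"),
    ("entity:eddy_1",   "described_by", "seq:sst_2024"),
    ("seq:sst_2024",    "at",           "t:2024-06"),
]

def layer_path(start, edges):
    """Walk the layer chain from a twin scene down to its time node."""
    nxt = {src: dst for src, _, dst in edges}
    path, cur = [start], start
    while cur in nxt:
        cur = nxt[cur]
        path.append(cur)
    return path

path = layer_path("scene:YinIsland", edges)
```

Because both static objects and dynamic processes hang off the same "entity object - data sequence - time" backbone, queries can traverse from a scene to any element's history without switching data models.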

  • ZHAO Luying, ZHOU Yang, HU Xiaofei, HUANG Gaoshuang, GAN Wenjian, HOU Mingbo
    Journal of Geo-information Science. 2025, 27(10): 2293-2315. https://doi.org/10.12082/dqxxkx.2024.240262

    [Significance] Cross-view geolocalization is the process of using a satellite image with coordinate metadata as reference to determine the geographic coordinates of an unknown ground-view image. This problem is often viewed as an image matching task, where an overhead satellite image is segmented into a number of square blocks of satellite patches, and the ground image is matched with candidate satellite patches to retrieve the most similar satellite patch, using the position of the center pixel in that patch as the query location. [Progress] With the development of cross-view geolocalization, the technique has been extended to fine-grained metric localization of ground imagery, i.e., identifying which image coordinates in a satellite patch correspond to a ground-measured location. Given that satellite images have global coverage and are easy to obtain, their application as reference images in image positioning has significantly broadened the application scope of image geolocation technology. This trend has prompted growing academic interest and attention to cross-view geolocalization research. Along with the development of various algorithmic techniques, cross-view geo-localization has evolved from the manual extraction of features, which was mainly based on the geometric features of buildings, to deep learning approaches that are applicable to richer scenarios, such as suburban and urban areas. The specific localization idea has progressed from the image-level cross-view localization, which uses the retrieval method to directly mark the retrieved center coordinate of the satellite image as the location of the ground image, to pixel-level fine-grained localization, which more accurately assigns the coordinates of the corresponding pixel location of the satellite image to the ground image. 
However, the drastic change in viewing angle between ground and satellite images results in a huge difference in visual content, making cross-view image localization more challenging. To improve the accuracy of cross-view geo-localization, various scholars have made algorithmic improvements, such as representation learning and metric calculation. To cope with the large viewpoint differences, some scholars study specialized viewpoint conversion methods between cross-view images, such as geometric transformation and image generation; others improve localization accuracy with the help of directional information, UAV imagery as an intermediate connecting viewpoint, and related cues. [Purpose] This paper summarizes the development process of cross-view geolocation, the different methods for improving accuracy, the various datasets involved, and the evaluation methods at different stages. On this basis, we discuss the future development trends and provide corresponding summaries.

  • ZHU Ge, ZHANG Zheng, CAO Lianshuai, MA Kunyang, XU Xinyue, CHENG Yi
    Journal of Geo-information Science. 2025, 27(9): 2165-2176. https://doi.org/10.12082/dqxxkx.2025.250207

    [Objectives] Map compilation involves professional operations such as element selection, symbolization, and notation configuration. However, the process is often complex and inefficient. Leveraging Large Language Models (LLMs), text-to-map technology significantly simplifies the mapping process, lowers the barrier to entry for non-experts, and improves mapping efficiency. Nevertheless, challenges remain, including heavy reliance on manual debugging and fragmentation tool invocation. [Methods] This paper proposes a DeepSeek-based method for constructing text-to-map agents, which automates the entire process from user input to visualization output. This is achieved through the decomposition of natural language instructions and autonomous adaptation of tools. Centered on the DeepSeek model, the approach associates cartographic elements with specialized tools and usage descriptions, analyzes module structures and collaboration mechanisms, and organizes tools into five categories. By interpreting user instructions and reasoning through task-oriented chains of thought, the agent invokes appropriate visualization tools to achieve cross-modal mapping from natural language to maps, enabling autonomous task reasoning and automated map generation. [Results] To evaluate the agent's effectiveness, two types of mapping tasks—based on local map data and online map services—were conducted using DeepSeek-V3-0324 and R1 models as decision-making cores. The experiments demonstrated that the agent could autonomously complete mapping tasks from natural language using both local and tile-based data. Local map visualization experiments confirmed the agent's ability to reuse tools effectively in low-complexity scenarios. Tile-based map visualization experiments indicated the agent's capability in handling high-complexity scenarios involving multi-toolchain invocations. 
It accurately decomposed subtasks, assigned appropriate tools, and performed structured string-based input variable transmission or direct invocation without variables, all presented to users in a semi-transparent manner. Across forty repeated experiments, the V3 model outperformed the R1 model, achieving 6.56 times greater execution efficiency with an average processing speed of approximately 6.29 seconds per step, and demonstrated better modular adaptability with the LangChain agent framework. [Conclusions] The proposed construction method validates the feasibility of using DeepSeek-based agents for intelligent cartography. The V3 model exhibits strong potential in this field, with its performance (6.29 s/step) comparable to that of professional cartographers. The text-to-map intelligent agent significantly reduces the entry barrier for map creation, promotes the broader adoption of mapping tools in everyday use, and provides a valuable technical reference for integrating autonomous cartography with professional software platforms such as ArcGIS and QGIS.
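The multi-toolchain invocation pattern described above — decomposing a task into steps, assigning each a registered tool, and piping results forward — can be sketched as follows. The tool registry and its string outputs are hypothetical stand-ins for the agent's actual LangChain-managed tools:

```python
# Hypothetical tool registry for a text-to-map agent: in the real system,
# each tool carries a usage description the LLM matches against subtasks.
TOOLS = {
    "load_layer":  lambda src: f"layer({src})",
    "style_layer": lambda layer, color: f"styled({layer},{color})",
    "render_map":  lambda layer: f"map[{layer}]",
}

def run_plan(plan):
    """Execute a chain of (tool, args) steps, passing each result as the
    first argument of the next tool -- a stand-in for the agent's
    structured variable transmission between toolchain steps."""
    result = None
    for tool, args in plan:
        result = TOOLS[tool](*([result] + args if result else args))
    return result

# A decomposed instruction like "draw the roads in blue":
out = run_plan([("load_layer", ["roads.shp"]),
                ("style_layer", ["blue"]),
                ("render_map", [])])
```

In the actual agent, the plan itself is produced by the LLM's chain-of-thought reasoning over the user instruction rather than written by hand.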

  • WENG Mingkai, XIAO Guirong
    Journal of Geo-information Science. 2025, 27(5): 1113-1128. https://doi.org/10.12082/dqxxkx.2025.250050

    [Objectives] The quality of training samples significantly impacts model performance and prediction accuracy. In regions with limited sample data, the small number of samples and their uneven spatial distribution may prevent the model from effectively learning the features of disaster-inducing factors. This increases the risk of overfitting and ultimately affects the accuracy of model predictions. Therefore, it is crucial to collect and optimize training samples based on regional characteristics. [Methods] To address this issue, this study proposes a sampling optimization method for training samples. The method combines a Prototype-Based Sampling (PBS) approach for selecting landslide-positive samples with an unsupervised clustering model for training sample selection. This results in a screened and expanded positive sample dataset and an objectively extracted negative sample dataset, which together form the Sampling-Optimized (SO) training dataset. Subsequently, the Random Forest (RF) and Support Vector Machine (SVM) models, which are well suited to handling small sample data, were employed to construct a landslide susceptibility evaluation model. Comparative experiments were conducted using the Raw Data (RD), a dataset with only Data Augmentation (DA), and the SO dataset. Model prediction performance was assessed using metrics such as the Area Under the Curve (AUC). Additionally, the frequency ratio method was applied to optimize the results of landslide susceptibility zoning. Finally, a case study was conducted in Putian City, where landslide sample data is relatively scarce, to verify the effectiveness and generalization capability of the proposed sampling optimization method. [Results] The results indicate that models trained on the SO dataset achieved AUC improvements of 10.69% and 18.23% compared to those trained on the RD and DA datasets, respectively, demonstrating a significant enhancement in predictive performance. 
This suggests that selecting and expanding positive samples while objectively extracting negative samples can improve model accuracy and mitigate the overfitting problem during training. Furthermore, the frequency ratio analysis revealed that the SO-RF model achieved higher frequency ratios in regions with extremely high and high susceptibility than the SO-SVM model, indicating that SO-RF is more suitable for evaluating landslide susceptibility in regions with limited landslide sample data, such as Putian City. [Conclusions] The proposed training sample optimization approach, combined with machine learning evaluation methods, demonstrates high applicability and accuracy. Therefore, the findings of this study provide valuable insights into machine learning-based sampling strategies for landslide susceptibility assessment.
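The two-part idea, expanding positives near a class prototype while extracting negatives far from it, can be illustrated in a toy 2-D feature space. The prototype rule, the radii, and the coordinates below are invented for illustration and only stand in for the paper's PBS and clustering steps:

```python
# Toy sketch: a landslide "prototype" is the mean of the known positive
# samples; unlabeled candidates close to it become extra positives, and
# candidates far from it become objectively extracted negatives.
# Feature space, radii, and points are illustrative assumptions.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def prototype(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def optimize_samples(positives, candidates, pos_r=1.0, neg_r=3.0):
    proto = prototype(positives)
    # candidates near the prototype: screened/expanded positives
    extra_pos = [c for c in candidates if dist(c, proto) <= pos_r]
    # candidates far from the prototype: objective negatives
    negatives = [c for c in candidates if dist(c, proto) >= neg_r]
    return positives + extra_pos, negatives

pos = [(0.0, 0.0), (0.4, 0.2)]
cand = [(0.3, 0.1), (2.0, 2.0), (4.0, 0.0)]
p, n = optimize_samples(pos, cand)
```

Candidates in the ambiguous middle band are simply discarded, which mirrors the screening role that the unsupervised clustering plays in the paper.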

  • LI Xiao, WANG Shaohua, LIANG Haojian, ZHOU Liang, LIU Chang, WANG Runqiao, SU Cheng
    Journal of Geo-information Science. 2025, 27(8): 1822-1840. https://doi.org/10.12082/dqxxkx.2025.250144

    [Objectives] Sustainable development is an important issue for countries worldwide, encompassing key aspects such as sustainable transportation systems and inclusive, sustainable urbanization. As a crucial component of urban public service infrastructure, the public transportation network serves as a cornerstone of a city's stable operation, with the distribution of its stops and routes directly influencing residents' travel patterns. However, existing studies mainly focus on accessibility analysis, site selection optimization, and spatial coupling with factors such as population and land use, while lacking in-depth optimization approaches and clear mechanisms that address spatial heterogeneity and facility redundancy. [Methods] Taking Beijing as a case study, with a focus on Dongcheng and Xicheng Districts, this study constructs a system of influencing factors based on multi-source data, including public transportation networks, topography, and economic indicators, and employs the XGBoost machine learning method to reveal the impact weights of these driving factors on the distribution of bus stops. On this basis, a mathematical model incorporating stop redundancy is proposed to optimize the spatial layout of upstream and downstream stops, producing a spatial optimization map of bus stops in Beijing. [Results] The findings indicate that: (1) There is an imbalance in the distribution of public transportation facilities in Beijing, with the proportion of the population having convenient access to public transportation differing by more than 30% between central and peripheral urban areas. (2) Among the 19 influencing factors, population density is the key driving factor, accounting for 27.77%, while the number of scenic spots and parking facilities have minimal impact, with feature importance scores below 0.5%. 
(3) Compared to the p-median model, the proposed redundancy optimization model significantly reduces the redundancy of optimized stops while maintaining performance in minimizing weighted distance. The optimized stop layout is more evenly distributed along existing bus routes. [Conclusions] These findings provide valuable reference and theoretical support for the layout of bus stops and other public service facilities, contributing to the efficient utilization of public resources and promoting sustainable urban development.
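The redundancy-aware siting objective can be pictured on a 1-D bus route: minimize population-weighted distance to the nearest stop, plus a penalty for stop pairs closer than a redundancy radius. The radius, penalty weight, and demand points below are invented placeholders, not the paper's model coefficients:

```python
# Toy objective for redundancy-aware stop siting on a 1-D route.
# demand is a list of (location, population weight) pairs.
from itertools import combinations

def objective(stops, demand, redundancy_r=1.0, penalty=10.0):
    # population-weighted distance of each demand point to its nearest stop
    cost = sum(w * min(abs(d - s) for s in stops) for d, w in demand)
    # add a fixed penalty for every pair of stops within the redundancy radius
    cost += penalty * sum(1 for a, b in combinations(stops, 2)
                          if abs(a - b) < redundancy_r)
    return cost

demand = [(0.0, 5), (2.0, 3), (6.0, 2)]   # (location, population weight)
plain  = objective([0.0, 0.5], demand)    # two near-duplicate stops
spread = objective([0.0, 5.5], demand)    # stops spread along the route
```

Without the penalty term this reduces to a p-median-style objective, which is why the comparison in the abstract is against the p-median model: the extra term is what discourages redundant, near-duplicate stops.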

  • ZHENG Chenglong, SONG Ci, CHEN Jie
    Journal of Geo-information Science. 2025, 27(6): 1317-1331. https://doi.org/10.12082/dqxxkx.2025.250168

    [Objectives] With the deepening of urbanization and intensified market competition, long working hours have become a pervasive social issue, posing challenges to both workers' physical and mental health and to urban sustainable development. Current studies on urban residents' work activities predominantly rely on questionnaire survey data, which suffer from limited sample sizes and a lack of in-depth exploration into long working hours in megacities. [Methods] This research utilized mobile signaling data from Beijing, collected between November and December 2019, to identify stay points using a threshold rule method. Residential and workplace locations were determined through a time-window approach, and users' working hours were extracted. The study then examined the spatial distribution patterns of long-working-hours employees (defined as those working 40 hours or more per week) and investigated spatial characteristics across various gender and age groups. Finally, the study explored the characteristics of long working hours in different employment clusters in Beijing. [Results] The findings reveal that 47.1% of Beijing's workforce engages in long working hours (weekly working hours ≥40 hours), with an average weekly working duration of 48.86 hours. Spatial analysis demonstrates a polycentric agglomeration pattern, concentrated in major employment hubs such as the CBD, Financial Street, Zhongguancun, and Yizhuang. Significant disparities exist across gender and age groups. Male employees work an average of 49.62 hours per week, 1.5 hours more than their female counterparts (48.12 hours). Among male age groups, those aged 20-29 have the longest average weekly working hours at 50.68 hours. In contrast, although women aged 30-39 constitute the largest proportion of the female workforce (22.13%), their average weekly working hours are the lowest, at 47.59 hours. 
The characteristics of overtime work in different employment clusters show a clear pattern: the CBD and Zhongguancun have a higher number of overtime workers, while Yizhuang stands out with the highest proportion at 58.0%. Wholesale and logistics hubs such as Xinfadi and Majuqiao exhibit the most intensive work schedules, with average weekly working hours exceeding 50 hours. [Conclusions] This study provides rich empirical evidence for understanding the phenomenon of long working hours in Beijing. The results offer data-driven support for optimizing labor time policies, contributing to urban sustainable development and social equity.
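The threshold-rule stay-point step can be sketched as follows; the distance and duration thresholds, and the 1-D toy trace of (minute, position) records, are assumptions for illustration rather than the study's actual parameters:

```python
# Minimal threshold-rule stay-point detection on a trace of
# (timestamp_minutes, 1-D position) records: a run of points that stays
# within max_dist of its anchor for at least min_dur minutes is a stay.
def stay_points(records, max_dist=0.5, min_dur=30):
    stays, i = [], 0
    while i < len(records):
        j = i
        # grow the window while successive points remain near the anchor
        while (j + 1 < len(records)
               and abs(records[j + 1][1] - records[i][1]) <= max_dist):
            j += 1
        duration = records[j][0] - records[i][0]
        if duration >= min_dur:                 # long enough: record a stay
            stays.append((records[i][1], duration))
        i = j + 1
    return stays

# toy trace: a 60-minute stay near position 0, then rapid movement
trace = [(0, 0.0), (30, 0.1), (60, 0.2), (65, 5.0), (70, 9.0)]
stays = stay_points(trace)
```

On top of stay points like these, the time-window approach labels nighttime stays as residences and working-hour stays as workplaces, from which weekly working hours are accumulated.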

  • ZHENG Qiangwen, WU Sheng, WEI Jinghui
    Journal of Geo-information Science. 2025, 27(6): 1361-1380. https://doi.org/10.12082/dqxxkx.2025.250122

    [Background] Traditional methods, due to their static receptive field design, struggle to adapt to the significant scale differences among cars, pedestrians, and cyclists in urban autonomous driving scenarios. Moreover, cross-scale feature fusion often leads to hierarchical interference. [Methodology] To address the key challenge of cross-scale representation consistency in 3D object detection for multi-class, multi-scale objects in autonomous driving scenarios, this study proposes a novel method named VoxTNT. VoxTNT leverages an equalized receptive field and a local-global collaborative attention mechanism to enhance detection performance. At the local level, a PointSetFormer module is introduced, incorporating an Induced Set Attention Block (ISAB) to aggregate fine-grained geometric features from high-density point clouds through reduced cross-attention. This design overcomes the information loss typically associated with traditional voxel mean pooling. At the global level, a VoxelFormerFFN module is designed, which abstracts non-empty voxels into a super-point set and applies cross-voxel ISAB interactions to capture long-range contextual dependencies. This approach reduces the computational complexity of global feature learning from O(N²) to O(M²), where M ≪ N is the number of non-empty voxels, avoiding the high computational complexity associated with directly applying complex Transformers to raw point clouds. This dual-domain coupled architecture achieves a dynamic balance between local fine-grained perception and global semantic association, effectively mitigating modeling bias caused by fixed receptive fields and multi-scale fusion. [Results] Experiments demonstrate that the proposed method achieves a single-stage detection Average Precision (AP) of 59.56% for moderate-level pedestrian detection on the KITTI dataset, an improvement of approximately 12.4% over the SECOND baseline. 
For two-stage detection, it achieves a mean Average Precision (mAP) of 66.54%, outperforming the second-best method, BSAODet, which achieves 66.10%. Validation on the WOD dataset further confirms the method’s effectiveness, achieving 66.09% mAP, which outperforms the SECOND and PointPillars baselines by 7.7% and 8.5%, respectively. Ablation studies demonstrate that the proposed equalized local-global receptive field mechanism significantly improves detection accuracy for small objects. For example, on the KITTI dataset, full component ablation resulted in a 10.8% and 10.0% drop in AP for moderate-level pedestrian and cyclist detection, respectively, while maintaining stable performance for large-object detection. [Conclusions] This study presents a novel approach to tackling the challenges of multi-scale object detection in autonomous driving scenarios. Future work will focus on optimizing the model architecture to further enhance efficiency.
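The ISAB idea of routing attention through a small set of inducing points can be sketched in a stripped-down single-head form. The real module adds learned projections, feed-forward layers, and residual connections; the shapes and random data below are illustrative only:

```python
import numpy as np

# Toy single-head induced cross-attention: m inducing points first gather
# from all n inputs, then the inputs read the compact summary back, so the
# score matrices are (m, n) and (n, m) instead of a full (n, n).
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attend(q, kv):
    scores = q @ kv.T / np.sqrt(q.shape[-1])   # (len(q), len(kv))
    return softmax(scores) @ kv

def induced_attention(x, inducing):
    h = cross_attend(inducing, x)   # inducing points summarize all inputs
    return cross_attend(x, h)       # inputs attend to the small summary

rng = np.random.default_rng(0)
x = rng.normal(size=(512, 16))        # n = 512 points, d = 16 features
inducing = rng.normal(size=(8, 16))   # m = 8 inducing points
out = induced_attention(x, inducing)
```

The same trick motivates the paper's global stage: with M non-empty voxels as a super-point set, pairwise cost scales with M² rather than with the far larger N² of raw points.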

  • LIU Xiaoqing, REN Fu, YUE Weiting, GAO Yunji
    Journal of Geo-information Science. 2025, 27(5): 1214-1227. https://doi.org/10.12082/dqxxkx.2025.240359

    [Objectives] Forests, as the backbone of terrestrial ecosystems, play crucial roles in climate regulation and soil and water conservation. Among the many threats to forests, the impact of forest fires is becoming increasingly severe. Analyzing the factors influencing forest fires is essential for preventing forest fires and formulating relevant strategies. [Methods] This study focuses on China, using multi-source data related to fires, vegetation, climate, topography, and human activities to analyze the spatial heterogeneity of forest fire driving forces from multiple perspectives. [Results] The findings reveal that: (1) At a global scale, the spatial distribution of forest fires is most influenced by Fractional Vegetation Cover (FVC), with an explanatory power of 0.1302, while climate factors exert a relatively strong influence. The interaction between driving factors is enhanced, and forest fire occurrence results from the combined influence of multiple factors. Moreover, a nonlinear relationship and impact threshold exist between these driving factors and the probability of forest fire occurrence. (2) At a local scale, climate and vegetation serve as key driving factors behind forest fires, significantly explaining their spatial distribution across different zones. Temperature is the most influential factor in the Cold Temperate Needle-leaf Forest region, the Temperate Coniferous and Broad-leaved Mixed Forest region, and the Alpine Vegetation of the Tibetan Plateau region, with explanatory powers of 0.313, 0.41, and 0.052, respectively. In contrast, wind speed is the dominant factor in the Warm Temperate Broad-leaved Forest region, with an explanatory power of 0.279. [Conclusions] The primary driving factors and their interactions vary across different regions, quantitatively confirming the spatial heterogeneity of forest fire driving forces. 
This research contributes to a national-scale understanding of forest fire drivers and fire hazard distribution in China, assisting policymakers in designing fire management strategies to mitigate potential fire risks.
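"Explanatory power" values of this kind are commonly the geographical detector's factor q-statistic, q = 1 - sum_h(N_h * var_h) / (N * var): the share of the outcome's variance explained by stratifying on a factor. A minimal computation on invented strata (not the paper's data) looks like:

```python
# q-statistic sketch: each stratum is a list of fire-occurrence values for
# cells sharing one level of the factor. A factor whose strata separate
# low- and high-fire cells leaves little within-stratum variance, so q -> 1.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def q_statistic(strata):
    pooled = [x for s in strata for x in s]
    within = sum(len(s) * variance(s) for s in strata)
    return 1 - within / (len(pooled) * variance(pooled))

# illustrative strata: a well-separating factor vs. a poorly separating one
strong = q_statistic([[0.1, 0.2, 0.1], [0.8, 0.9, 0.8]])
weak   = q_statistic([[0.1, 0.9, 0.2], [0.8, 0.1, 0.8]])
```

Under this reading, FVC's 0.1302 means its strata account for about 13% of the spatial variance in fire occurrence at the national scale.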

  • SHAN Huilin, WANG Xingtao, LIU Wenxing, WU Xinyue, GAO Runze, LI Hongxu
    Journal of Geo-information Science. 2025, 27(6): 1381-1400. https://doi.org/10.12082/dqxxkx.2025.250009

    [Objectives] With the enhancement of spatial resolution, remote sensing images contain increasingly intricate information, encompassing a vast array of spatial and semantic features. The effective extraction and integration of these features play a pivotal role in semantic segmentation performance. However, most existing approaches focus solely on feature fusion improvements while neglecting the consistency between spatial and semantic features. Additionally, these methods often overlook the precise extraction of edge information, which significantly impacts segmentation accuracy. [Methods] This paper proposes a semantic segmentation model for high-resolution remote sensing images based on multi-scale deep supervision. First, separate feature extraction branches are designed for spatial and semantic features to fully exploit their respective information. Second, a spatial redundancy reduction residual module is incorporated into the spatial branch, integrating wavelet transformation and coordinate convolution to enhance spatial feature extraction and better capture edge details. Third, a residual attention Mamba module is added to the semantic branch to facilitate global-level semantic feature extraction. Finally, a multi-scale feature fusion mechanism is applied, utilizing a large-kernel grouped feature extraction module to progressively merge spatial, semantic, and deep-level features while suppressing irrelevant information and activating meaningful features. Additionally, a deep supervision mechanism is employed by introducing auxiliary supervision heads at each feature fusion stage to enhance training efficiency. [Results] Comparison and ablation experiments were conducted on the ISPRS Potsdam and Vaihingen datasets with random sampling and data augmentation. The experimental results demonstrate that the proposed algorithm achieves an average Intersection over Union (IoU) of 83.43% on ISPRS Potsdam and 86.49% on the augmented Vaihingen dataset. 
Compared to nine state-of-the-art methods, including CGGLNet and CMLFormer, the proposed approach improves the average IoU by at least 5.00% and 3.00% on the two datasets, respectively. [Conclusions] The results verify that the proposed algorithm effectively extracts and integrates spatial and semantic features, thereby enhancing the accuracy of semantic segmentation in remote sensing images.
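For reference, the average IoU scored above reduces to a per-class intersection-over-union ratio averaged across classes. A toy computation on invented 1-D label arrays:

```python
# Per-class IoU and its class average (mIoU) on flat label sequences.
# pred/truth are toy 1-D label arrays with two classes, 0 and 1.
def iou(pred, truth, cls):
    inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
    union = sum(p == cls or t == cls for p, t in zip(pred, truth))
    return inter / union if union else 0.0

def mean_iou(pred, truth, classes):
    return sum(iou(pred, truth, c) for c in classes) / len(classes)

pred  = [0, 0, 1, 1, 1, 0]
truth = [0, 1, 1, 1, 0, 0]
score = mean_iou(pred, truth, classes=[0, 1])
```

Real evaluations compute the same ratio over 2-D pixel maps and all dataset classes, typically by accumulating a confusion matrix, but the metric itself is this ratio.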

  • GUO Xuan, ZHANG Jinxue, WEI Yibing, YU Shutong, LIU Junnan, LIU Haiyan, XU Daozhu, XU Mingliang
    Journal of Geo-information Science. 2025, 27(12): 2789-2801. https://doi.org/10.12082/dqxxkx.2025.250239

    [Objectives] The trajectory knowledge graph effectively captures the deep semantic relationships between trajectories and geospatial entities, offering significant advantages in revealing complex associated information. However, traditional methods for constructing knowledge graphs from domain-specific data sources rely heavily on expert knowledge, involve extensive data preprocessing and entity-relationship extraction, and require high levels of professional expertise. [Methods] To address these challenges, this paper proposes a trajectory knowledge graph construction method that supports natural language-driven task execution through prompt learning with large language models. First, a prompt strategy for the preprocessing task is designed to guide large language models in automatically generating data processing code for cleaning abnormal trajectories. Second, a two-level system prompt strategy is developed to enable tool invocation by matching and calling the trajectory knowledge extraction tool. This strategy allows non-expert users to complete the graph construction process using simple natural language instructions, significantly reducing reliance on programming skills and deep semantic understanding. [Results] To evaluate the feasibility and effectiveness of the proposed prompt strategies, a set of test sentences was created for trajectory preprocessing and entity-relation extraction tasks. Real-world ship and vehicle trajectory datasets were used to support knowledge graph construction. Experiments conducted on two representative large language models, Tongyi Qianwen and Baidu Qianfan, achieved average accuracy rates exceeding 75% and 80%, respectively, demonstrating strong generalization ability and practical value. 
[Conclusions] This study verifies the effectiveness of combining large language models with prompt learning in constructing trajectory knowledge graphs with low technical barriers, demonstrating the strong generalization and application value of the proposed prompt strategy.
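The two-level prompt strategy can be caricatured as a two-stage lookup: a first level routes an instruction to a task family (preprocessing vs. graph construction), and a second level picks the concrete extraction tool. In the paper the matching is performed by the large language model under the system prompts; the keywords and tool names below are invented placeholders:

```python
# Toy two-level router standing in for the paper's two-level system
# prompt strategy; all keywords and tool names are hypothetical.
LEVEL1 = {"clean": "preprocess", "extract": "graph_build"}
LEVEL2 = {
    "preprocess": {"ship": "clean_ais_track",
                   "vehicle": "clean_gps_track"},
    "graph_build": {"ship": "extract_ship_entities",
                    "vehicle": "extract_vehicle_entities"},
}

def route(instruction):
    # level 1: map the instruction to a task family
    task = next(v for k, v in LEVEL1.items() if k in instruction)
    # level 2: pick the concrete tool for the trajectory type
    mode = "ship" if "ship" in instruction else "vehicle"
    return LEVEL2[task][mode]

tool = route("extract entities from ship trajectories")
```

Replacing the keyword tests with LLM calls is exactly what lets non-expert users drive the pipeline in natural language while the tool registry stays fixed.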