Most Downloaded

  • QIN Qiming
    Journal of Geo-information Science. 2025, 27(10): 2283-2290. https://doi.org/10.12082/dqxxkx.2025.250426

    [Objectives] With the rapid increase in the number of Earth observation satellites in orbit worldwide, remote sensing data has been accumulating explosively, offering unprecedented opportunities for Earth system science research to dynamically monitor global change. At the same time, it also brings a series of challenges, including multi-source heterogeneity, scarcity of labeled data, insufficient task generalization, and data overload. [Methods] To address these bottlenecks, Google DeepMind has proposed AlphaEarth Foundations (AEF), which integrates multimodal data such as optical imagery, SAR, LiDAR, climate simulations, and textual sources to construct a unified 64-dimensional embedding field. This framework achieves cross-modal and spatiotemporal semantic consistency for data fusion and has been made openly available on platforms such as Google Earth Engine. [Results] The main contributions of AEF can be summarized as follows: (1) Mitigating the long-standing “data silos” problem by establishing globally consistent embedding layers; (2) Enhancing semantic similarity measurement through a von Mises-Fisher (vMF) spherical embedding mechanism, thereby supporting efficient retrieval and change detection; (3) Shifting complex preprocessing and feature engineering tasks into the pre-training stage, enabling downstream applications to become “analysis-ready” and significantly reducing application costs. The paper further highlights the application potential of AEF in three stages: (1) Initially in land cover classification and change detection; (2) Subsequently in deep coupling of embedding vectors with physical models to drive scientific discovery; (3) Ultimately evolving into a spatial intelligence infrastructure, serving as a foundational service for global geospatial intelligence. 
Nevertheless, AEF still faces several challenges: (1) Limited interpretability of embedding vectors, which constrains scientific attribution and causal analysis; (2) Uncertainties in domain transfer and cross-scenario adaptability, with robustness in extreme environments yet to be verified; (3) Performance advantages that require more empirical validation across regions and independent experiments. [Conclusions] Overall, AEF represents a new direction for research in remote sensing and geospatial artificial intelligence, with breakthroughs in data efficiency and cross-task generalization providing solid support for future Earth science studies. However, its further development will depend on continuous advances in interpretability, robustness, and empirical validation, as well as on transforming the 64-dimensional embedding vectors into widely usable data resources through different pathways.
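The vMF spherical embedding mechanism described above makes similarity search and change detection cheap: on the unit hypersphere, similarity reduces to a dot product and change to an angular distance. A minimal sketch — only the 64-dimensional embedding size comes from the abstract; the data and function names are illustrative:

```python
import numpy as np

def normalize(v):
    """Project embedding vectors onto the unit hypersphere (vMF support)."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def cosine_similarity(a, b):
    """Similarity between unit-norm embeddings is just a dot product."""
    return float(np.dot(normalize(a), normalize(b)))

def change_score(emb_t1, emb_t2):
    """Angular distance between two dates' embeddings of the same location;
    larger values suggest land-surface change."""
    return float(np.arccos(np.clip(cosine_similarity(emb_t1, emb_t2), -1.0, 1.0)))

rng = np.random.default_rng(0)
e1 = rng.normal(size=64)              # embedding of a location at time t1
e2 = e1 + 0.05 * rng.normal(size=64)  # slightly perturbed: little change
e3 = rng.normal(size=64)              # unrelated embedding: large change
```

A retrieval system built on such embeddings can rank candidates by cosine similarity and flag change wherever the angular distance between dates exceeds a chosen threshold.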

  • HAO Yuanfei, LIU Zhe, ZHENG Xi, QIAN Yun
    Journal of Geo-information Science. 2025, 27(9): 2070-2085. https://doi.org/10.12082/dqxxkx.2025.250129

    [Objectives] Street space serves as the primary perceptual interface for pedestrians in urban environments, and the visual quality of these spaces plays a crucial role in enhancing their vitality. Traditional evaluation methods often rely on single-objective indicators, making it difficult to effectively link objective environmental features with pedestrians' subjective perceptions. [Methods] This study proposes a novel evaluation framework based on Large Language Models (LLMs), incorporating the style dimension of subjective perception and extending traditional single-indicator quantitative analysis to a comprehensive approach that integrates both quantification and stylization. This framework utilizes Baidu Street View imagery to quantitatively assess two objective indicators, namely green view index and sky view factor, through semantic segmentation techniques. Additionally, it evaluates six subjective indicators, including vegetation diversity, building typology, building continuity, sidewalk usage, roadway usage, and signage usage, by leveraging prompt-optimized LLMs. The study then categorizes street space visual quality features within the research area using the Latent Dirichlet Allocation (LDA) topic model, aiming to explore the spatial characteristics of different streets and identify optimization strategies. [Results] Using Beijing's Xicheng District as the study area, the results reveal spatial distribution patterns of vegetation density and sky openness, along with pedestrians' subjective evaluations of indicators such as vegetation diversity and building type. Cluster analysis identified comprehensive service streets centered around Xidan North Street, characteristic streets centered around Xihuangchenggen South Street, and mixed-type streets centered around Lingjing Hutong. [Conclusions] This study innovatively introduces a large language model with human-like perceptual capabilities, enhancing its performance through prompt engineering. 
The resulting framework enables efficient and integrated evaluation of street visual quality by combining both objective and subjective factors. This approach provides a practical reference for large-scale, automated analysis of street view imagery.
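The two objective indicators above — green view index and sky view factor — are both pixel-share statistics over a semantic-segmentation mask. A toy sketch (label ids and the tiny mask are illustrative, not the study's actual segmentation scheme):

```python
import numpy as np

# Hypothetical label ids for a semantic-segmentation output.
VEGETATION, SKY = 1, 2

def view_index(seg_mask, label):
    """Fraction of image pixels assigned to a class, e.g. the green view
    index (vegetation share) or the sky view factor (sky share)."""
    seg = np.asarray(seg_mask)
    return float(np.count_nonzero(seg == label) / seg.size)

# Toy 4x4 "segmentation" of a street-view image.
mask = np.array([[2, 2, 2, 2],
                 [1, 1, 0, 2],
                 [1, 1, 0, 0],
                 [1, 0, 0, 0]])
gvi = view_index(mask, VEGETATION)  # 5/16
svf = view_index(mask, SKY)         # 5/16
```

In the full framework these per-image shares would be averaged per street segment before feeding the LDA-based clustering.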

  • HUANG Yi, ZHANG Xueying, SHENG Yehua, XIA Yongqi, YE Peng
    Journal of Geo-information Science. 2025, 27(6): 1249-1262. https://doi.org/10.12082/dqxxkx.2025.250175

    [Objectives] This study addresses the critical challenges in typhoon disaster knowledge services, which are often hindered by "massive data, scarce knowledge, and limited services." The core objective is to rapidly distill actionable knowledge from vast datasets to enhance disaster management efficacy and mitigate typhoon-related impacts. Large Language Models (LLMs), renowned for their superior performance in natural language processing, are leveraged to deeply mine disaster-related information and provide robust support for advanced knowledge services. [Methods] This research establishes a typhoon disaster knowledge service framework encompassing three layers: data, knowledge, and service. [Results] For the data-to-knowledge layer, an LLM-driven (Qwen2.5-Max) automated method for constructing typhoon disaster Knowledge Graphs (KGs) is proposed. This method first introduces a multi-level typhoon disaster knowledge representation model that integrates spatiotemporal characteristics and disaster impact mechanisms. A specialized training dataset is curated, incorporating typhoon-related texts with explicit temporal and spatial attributes. By adopting a "pre-training + fine-tuning" paradigm, the framework efficiently transforms raw disaster data into structured knowledge. For the knowledge-to-service layer, an LLM-based intelligent question-answering system is developed. Utilizing the constructed typhoon disaster KG, this system employs Graph Retrieval-Augmented Generation (GraphRAG) to retrieve contextually relevant knowledge from the graph and generate user-specific disaster prevention and mitigation guidance. This approach ensures seamless conversion of structured knowledge into practical services, such as personalized evacuation plans and resource allocation strategies. [Conclusions] The study highlights the transformative potential of LLMs in typhoon disaster management and lays a foundation for integrating LLMs with geospatial technologies. 
This interdisciplinary synergy advances Geographic Artificial Intelligence (GeoAI) and paves the way for innovative applications in disaster service.
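The GraphRAG step described above — retrieve contextually relevant triples from the typhoon KG, then condition the LLM's answer on them — can be sketched with a toy graph. All entity names, relations, and the prompt template here are illustrative assumptions, not the paper's actual schema:

```python
# Minimal GraphRAG-style retrieval sketch over a toy typhoon knowledge graph.
KG = [
    ("Typhoon Haikui", "made_landfall_in", "Fujian"),
    ("Typhoon Haikui", "max_wind_speed", "42 m/s"),
    ("Fujian", "evacuation_route", "Route G15"),
    ("Typhoon Muifa", "made_landfall_in", "Zhejiang"),
]

def retrieve_subgraph(kg, entity, hops=2):
    """Collect triples reachable from `entity` within `hops` relation steps."""
    frontier, seen, triples = {entity}, set(), []
    for _ in range(hops):
        nxt = set()
        for s, p, o in kg:
            if s in frontier and (s, p, o) not in seen:
                seen.add((s, p, o))
                triples.append((s, p, o))
                nxt.add(o)
        frontier = nxt
    return triples

def build_prompt(question, triples):
    """Ground the LLM's answer in the retrieved graph context."""
    ctx = "\n".join(f"{s} {p} {o}." for s, p, o in triples)
    return f"Context:\n{ctx}\n\nQuestion: {question}\nAnswer using only the context."

triples = retrieve_subgraph(KG, "Typhoon Haikui")
prompt = build_prompt("Where should residents evacuate?", triples)
```

The prompt would then be sent to the LLM; because retrieval starts from the query entity, unrelated events (here, Typhoon Muifa) never enter the context.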

  • SHI Shihao, SHI Qunshan, ZHOU Yang, HU Xiaofei, QI Kai
    Journal of Geo-information Science. 2025, 27(7): 1596-1607. https://doi.org/10.12082/dqxxkx.2025.250015

[Objectives] Small object detection is of great significance in both military and civil applications. However, due to challenges such as low resolution, high noise environments, target occlusion, and complex backgrounds, traditional detection methods often struggle to achieve the necessary accuracy and robustness. The problem of detecting small objects in complex scenes remains highly challenging. Therefore, this paper proposes a hybrid feature and multi-scale fusion algorithm for small object detection. [Methods] First, a Hybrid Conv and Transformer Block (HCTB) is designed to fully utilize local and global context information, enhancing the network's perception of small objects while optimizing computational efficiency and feature extraction capability. Second, a Multi-Dilated Shared Kernel Conv (MDSKC) module is introduced to extend the receptive field of the backbone network using dilated convolutions with varying expansion rates, thereby enabling efficient multi-scale feature extraction. Finally, the Omni-Kernel Cross Stage Model (OKCSM), constructed based on the concepts of Omni-Kernel and Cross Stage Partial, is integrated to optimize the small target feature pyramid network. This approach helps preserve small object information and significantly improves detection performance. [Results] Ablation and comparison experiments were conducted on the VisDrone2019 and TinyPerson datasets. Compared to the baseline model YOLOv8n, the proposed method improves precision, recall, mAP@50, and mAP@50:95 by 1.3%, 3.1%, 3.0%, and 1.9%, respectively, on VisDrone2019, and by 3.6%, 1.3%, 2.1%, and 0.7%, respectively, on TinyPerson. Additionally, the model size and computational cost are only 6.3 MB and 11.3 GFLOPs, demonstrating its efficiency. Furthermore, compared with classical algorithms such as HIC-YOLOv5, TPH-YOLOv5, and Drone-YOLO, the proposed algorithm demonstrates significant advantages and superior performance.
[Conclusions] The algorithm effectively improves detection accuracy, confirming its strong performance in addressing small object detection in complex scenes.
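The receptive-field expansion that the MDSKC module exploits follows from simple arithmetic: for stacked stride-1 dilated convolutions, RF = 1 + Σ (k − 1)·dᵢ. A sketch — the 3×3 kernel and dilation rates (1, 2, 3) are illustrative assumptions, since the abstract does not specify the module's exact rates:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 dilated convolutions:
    RF = 1 + sum((k - 1) * d_i)."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# A shared 3x3 kernel applied with different dilation rates, in the spirit
# of a multi-dilation design: each branch sees a different spatial scale.
branch_rfs = [receptive_field(3, [d]) for d in (1, 2, 3)]  # [3, 5, 7]

# Stacking the dilated convolutions compounds the coverage.
stacked_rf = receptive_field(3, [1, 2, 3])                 # 13
```

Sharing one kernel across dilation branches keeps the parameter count of a single 3×3 convolution while covering multiple scales — the efficiency argument behind such designs.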

  • ZHU Shan, HOU Xiyong, WANG Xiaoli, ZHANG Xueying, LIU Kai, SONG Jie
    Journal of Geo-information Science. 2025, 27(8): 1952-1964. https://doi.org/10.12082/dqxxkx.2025.240702

[Objectives] Land Use and Land Cover (LULC) plays a crucial role in shaping surface environments and ecological processes. Among various land cover types, built-up land, representing the dominant form of anthropogenic surface modification, has expanded rapidly in recent decades, exerting significant impacts on regional ecosystems while attracting increasing attention from multiple disciplines. This study aims to improve the spatial accuracy of built-up land mapping by evaluating and integrating multiple LULC datasets, thereby supporting research on regional sustainable development. [Methods] Taking the Bohai Rim region as the study area, seven medium- to high-resolution LULC products from domestic and international sources were initially selected. Based on a comparative analysis of total built-up area and spatial distribution patterns, five datasets (ESA2020, CoLUCC2020, GlobeLand2020, CLCD2023, and GLC_FCS2022) were chosen for further evaluation and integration. Consistency analysis was conducted to assess the classification performance of each dataset, and a multi-criteria evaluation combined with threshold-based filtering was employed for multi-source data fusion. [Results] Evaluation results indicated that the ESA2020, CoLUCC2020, GlobeLand2020, and GLC_FCS2022 datasets exhibit relatively high classification accuracy for built-up land, while the CLCD2023 dataset performs less satisfactorily. The fused product achieved an overall accuracy of 93.51% and a Kappa coefficient of 0.7455, demonstrating notable improvements over any individual dataset. [Conclusions] The proposed fusion method effectively overcomes the limitations of single-source data by leveraging the complementary strengths of multiple datasets. It provides a robust methodological foundation for regional LULC data integration and offers valuable data support for sustainable development research in the Bohai Rim and similar regions.
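The two accuracy figures reported above — overall accuracy and the Kappa coefficient — are both derived from a confusion matrix. A minimal sketch with a toy two-class (built-up vs. non-built-up) matrix; the numbers are illustrative, not the study's validation data:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference, columns: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Toy validation: 200 reference samples of built-up vs. non-built-up land.
cm = [[90, 10],
      [ 5, 95]]
oa, kappa = accuracy_and_kappa(cm)  # 0.925 and 0.85
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy when class proportions are unbalanced.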

  • DU Pei, SHEN Yangjie, LIU Zhenxia, YU Zhaoyuan
    Journal of Geo-information Science. 2025, 27(9): 2106-2116. https://doi.org/10.12082/dqxxkx.2025.250220

    [Objectives] Global climate change, accelerating sea-level rise, and intensifying anthropogenic pressures are rendering the intricate human-land-sea nexus within coastal zones increasingly complex, sensitive, and vulnerable. This growing challenge underscores the urgent need for integrated coastal research frameworks capable of synthesizing environmental sensing, dynamic process simulation, and scenario projection. Addressing this critical gap, Digital Twin (DT) technology emerges as a transformative paradigm. By integrating multi-source data, sophisticated models, and domain knowledge into intelligent systems, DT offers unprecedented potential for creating precise virtual replicas and enabling intelligent management of complex coastal socio-ecological systems. [Analysis] This paper systematically analyzes the state of coastal zone digitalization, highlighting the pressing need for robust digital frameworks that can effectively represent and analyze the strong coupling between natural processes and human activities under multifaceted pressures. Building on this foundation, we propose a novel conceptual framework and implementation pathway for constructing a Digital Twin Coastal Zone (DTCZ). This framework explicitly positions land-sea interface processes as the foundational scenario and centers on human-land-sea feedback mechanisms as the core analytical thread. The proposed DTCZ system architecture is articulated across four pivotal dimensions: (1) Comprehensive information integration and knowledge aggregation; (2) Simulation of natural processes integrated with coupled human-nature decision support; (3) Synergistic short-term forecasting and long-term monitoring capabilities; and (4) Realistic multidimensional representation enabling intelligent interaction. 
We critically discuss the key technological enablers supporting this vision, encompassing coastal data governance and fusion, multi-scale scenario modeling, predictive analytics for critical coastal elements, persistent long-term monitoring strategies, and the development of the integrated DTCZ platform itself. At its core, the envisioned DTCZ leverages spatiotemporally fused multi-source data as its foundation and prioritizes enhanced scenario simulation and intervention capabilities. [Prospects] This framework is designed to overcome the limitations, such as fragmented data and limited predictive power, that constrain traditional coastal digital systems. By significantly advancing the computational tractability and overall manageability of coastal systems, the DTCZ paradigm offers a powerful new methodological tool and operational framework. It holds strong potential for supporting sustainable coastal development and modernizing governance structures in the face of ongoing climate change, providing a robust platform for evidence-based planning and adaptive management.

  • WU Ruoling, GUO Danhuai
    Journal of Geo-information Science. 2025, 27(5): 1041-1052. https://doi.org/10.12082/dqxxkx.2025.240694

    [Objectives] Understanding whether Large Language Models (LLMs) possess spatial cognitive abilities and how to quantify them are critical research questions in the fields of large language models and geographic information science. However, there is currently a lack of systematic evaluation methods and standards for assessing the spatial cognitive abilities of LLMs. Based on an analysis of existing LLM characteristics, this study develops a comprehensive evaluation standard for spatial cognition in large language models. Ultimately, it establishes a testing standard framework, SRT4LLM, along with standardized testing processes to evaluate and quantify spatial cognition in LLMs. [Methods] The testing standard is constructed along three dimensions: spatial object types, spatial relations, and prompt engineering strategies in spatial scenarios. It includes three types of spatial objects, three categories of spatial relations, and three prompt engineering strategies, all integrated into a standardized testing process. The effectiveness of the SRT4LLM standard and the stability of the results are verified through multiple rounds of testing on eight large language models with different parameter scales. Using this standard, the performance scores of different LLMs are evaluated under progressively improved prompt engineering strategies. [Results] The geometric complexity of input spatial objects influences the spatial cognition of LLMs. While different LLMs exhibit significant performance variations, the scores of the same model remain stable. As the geometric complexity of spatial objects and the complexity of spatial relations increase, LLMs' accuracy in judging three spatial relations decreases by only 7.2%, demonstrating the robustness of the test standard across different scenarios. 
Improved prompt engineering strategies can partially enhance LLMs' spatial cognitive Question-Answering (Q&A) performance, with varying degrees of improvement across different models. This verifies the effectiveness of the standard in analyzing LLMs' spatial cognitive abilities. Additionally, multiple rounds of testing on the same LLM indicate that the results are convergent, and score differences between different LLMs exhibit a stable distribution. [Conclusions] SRT4LLM effectively measures the spatial cognitive abilities of LLMs and serves as a standardized evaluation tool. It can be used to assess LLMs' spatial cognition and support the development of native geographic large models in future research.
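A benchmark of this kind needs a ground-truth oracle for spatial relations so that model answers can be scored automatically. A toy sketch in that spirit — the relation vocabulary, axis-aligned boxes, and mock model answers are illustrative assumptions, not SRT4LLM's actual test items:

```python
# Score mock LLM answers against a topological-relation oracle.
def relation(a, b):
    """Topological relation between axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "disjoint"
    if ax0 <= bx0 and ay0 <= by0 and ax1 >= bx1 and ay1 >= by1:
        return "contains"
    return "intersects"

cases = [((0, 0, 2, 2), (3, 3, 4, 4)),   # disjoint
         ((0, 0, 5, 5), (1, 1, 2, 2)),   # contains
         ((0, 0, 2, 2), (1, 1, 3, 3))]   # intersects

llm_answers = ["disjoint", "contains", "disjoint"]   # mock model output
score = sum(relation(a, b) == ans
            for (a, b), ans in zip(cases, llm_answers)) / len(cases)  # 2/3
```

Running the same item set repeatedly, and across models, yields the convergent per-model scores and stable between-model differences the abstract describes.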

  • LI Wangping, WEI Wenbo, LIU Xiaojie, CHAI Chengfu, ZHANG Xueying, ZHOU Zhaoye, ZHANG Xiuxia, HAO Junming, WEI Yuming
    Journal of Geo-information Science. 2025, 27(6): 1448-1461. https://doi.org/10.12082/dqxxkx.2025.250034

    [Objectives] Using deep learning methods for landslide identification can significantly improve efficiency and is of great importance for landslide disaster prevention and mitigation. The DeepLabV3+ algorithm effectively captures multi-scale features, thereby improving image segmentation accuracy, and has been widely used in the segmentation and recognition of remote sensing images. [Methods] We propose an improved model based on DeepLabV3+. First, the Coordinate Attention (CA) mechanism is incorporated into the original model to enhance its feature extraction capabilities. Second, the Atrous Spatial Pyramid Pooling (ASPP) module is replaced with the Dense Atrous Spatial Pyramid Pooling (DenseASPP) module, which helps the network capture more detailed features and expands the receptive field, effectively addressing the limitations of inefficient or ineffective dilated convolution. A Strip Pooling (SP) branch module is added in parallel to allow the backbone network to better leverage long-range dependencies. Finally, the Cascade Feature Fusion (CFF) module is introduced to hierarchically fuse multi-scale features, further improving segmentation accuracy. [Results] Experiments on the Bijie landslide dataset show that, compared with the original model, the improved model achieves a 2.2% increase in MIoU and a 1.2% increase in the F1 score. Compared with other mainstream deep learning models, the proposed model demonstrates higher extraction accuracy. In terms of segmentation quality, it significantly improves the overall accuracy in identifying landslide areas, reduces misclassification and omission, and yields more precise delineation of landslide boundaries. [Conclusions] Based on experiments using the landslide debris flow disaster dataset in Sichuan and surrounding areas, along with practical application verification, the proposed method demonstrates strong recognition capability across landslide images in diverse scenarios and levels of complexity. 
It performs particularly well in challenging environments such as areas with dense vegetation or proximity to rivers, showing strong generalization ability and broad applicability.
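The improvement metrics above — MIoU and F1 — are both computed from a segmentation confusion matrix. A short sketch with toy pixel counts (illustrative, not the Bijie dataset's actual numbers), treating class 1 as landslide:

```python
import numpy as np

def miou_and_f1(cm):
    """Mean IoU and binary F1 from a confusion matrix
    (rows: ground truth, columns: prediction; class 1 = landslide)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    iou = tp / (cm.sum(0) + cm.sum(1) - tp)   # per-class intersection / union
    miou = float(iou.mean())
    precision = cm[1, 1] / cm[:, 1].sum()
    recall = cm[1, 1] / cm[1, :].sum()
    f1 = 2 * precision * recall / (precision + recall)
    return miou, float(f1)

# Toy pixel counts: background row, landslide row.
cm = [[900, 20],
      [ 30, 50]]
miou, f1 = miou_and_f1(cm)
```

Because landslide pixels are rare relative to background, MIoU and F1 penalize missed or spurious landslide regions far more sharply than overall pixel accuracy would.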

  • LI Junming, HU Yaxuan, WANG Nannan, WANG Siyaqi, WANG Ruolan, LYU Lin, FANG Ziqing
    Journal of Geo-information Science. 2025, 27(7): 1501-1519. https://doi.org/10.12082/dqxxkx.2025.250161

[Objectives] Classical statistical inference typically relies on the assumptions of large sample sizes and independent, identically distributed (i.i.d.) observations, conditions that spatio-temporal data frequently violate, leading to inherent theoretical limitations in conventional approaches. In contrast, Bayesian spatio-temporal statistical methods integrate prior knowledge and treat all model parameters as random variables, thereby forming a unified probabilistic inference framework. This enables the incorporation of a broader range of uncertainties and offers robustness in modelling small samples and dependent structures, making Bayesian methods highly advantageous and increasingly influential in spatio-temporal analysis. [Progress] From the perspective of methodological evolution, this paper systematically reviews mainstream Bayesian spatio-temporal statistical models from two complementary perspectives: traditional Bayesian statistics and Bayesian machine learning. The former includes Bayesian Spatio-temporal Evolutionary Hierarchical Models, Bayesian Spatio-temporal Regression Hierarchical Models, Bayesian Spatial Panel Data Models, Bayesian Geographically Weighted Spatio-temporal Regression Models, Bayesian Spatio-temporal Varying Coefficient Models, and Bayesian Spatio-temporal Meshed Gaussian Process Models. The latter includes Bayesian Causal Forest Models, Bayesian Spatio-temporal Neural Networks, and Bayesian Graph Convolutional Neural Networks. In terms of application, the review highlights representative studies across domains such as public health, environmental sciences, socio-economic studies and public safety, as well as energy and engineering. [Prospect] Bayesian spatio-temporal statistical methods need to achieve breakthroughs in multi-source heterogeneous data modeling, integration with deep learning, incorporation of causal inference mechanisms, and optimization of high-performance computing.
These advances are essential to balance theoretical rigor with practical adaptability and to promote the development of a next-generation spatio-temporal modeling paradigm characterized by causal inference, adaptive generalization, and intelligent analysis.

  • FU Xin, ZHANG Haoran, WANG Yuanbo, HUANG Chong, LIU Xiangye, ZHANG Hengcai, XU Zhenghe
    Journal of Geo-information Science. 2025, 27(9): 2135-2150. https://doi.org/10.12082/dqxxkx.2024.240020

[Objectives] Soil salinity is one of the major and widespread challenges of the recent era, hindering global food security and environmental sustainability. Accurate evaluation and analysis of soil salinization are of great significance for the improvement and management of saline soils. [Methods] To address the challenge of mapping the three-dimensional spatial distribution of soil salinity, this study selected 819 effective field soil samples within a saline soil region of the Yellow River Delta. These samples, vertically stratified from 0 to 100 cm, were used for comprehensive analysis. The soil sample points were arranged in a grid of 5 km × 5 km horizontally, and sampling layers were set every 10 cm vertically. Following the principle of covering different land cover types and human accessibility, soil samples were collected from the depth range of 0–100 cm in the study area. The three-dimensional spatial differentiation of soil salinity in the coastal saline soil area was revealed from different perspectives using traditional geostatistical methods and 3D Empirical Bayesian Kriging interpolation. The effects of various factors on the spatial differentiation of soil salinity were analyzed using the Geodetector method. [Results] The results showed that the spatial distribution of soil salinity, both across the whole soil profile and within individual vertical layers, was highly variable. There were differences in the scale of spatial autocorrelation of soil salt content at different depths. In this study, the 3D Empirical Bayesian Kriging interpolation method was used to spatialize the soil salinity of soil samples, which effectively revealed the fine-scale three-dimensional vertical spatial characteristics of soil salinity. Soil salinity exhibited significant three-dimensional spatial differentiation, with diverse profile distribution types. 
The main types were homogeneous and surface aggregated, with some local areas showing bottom aggregated and fluctuating types. All influencing factors significantly affected the three-dimensional spatial differentiation of soil salinity, but the degree of influence varied for each factor. The order of explanatory power of the influencing factors was as follows: land use/land cover > distance to coastline > groundwater depth > groundwater conductivity > elevation > land surface temperature > soil bulk density > soil clay content. Compared with single factors, the pairwise interaction of any two factors had a greater effect on the spatial differentiation of soil salinity, but the interaction strength varied between factor pairs. Over the whole 0–100 cm soil depth range, GWD ∩ LULC had the largest impact (0.443), followed by LST ∩ LULC (0.326). [Conclusions] Although the q values of land surface temperature and soil bulk density were not high, their explanatory power for soil salinity was greatly improved after their interaction with land use/cover, better explaining the changes of soil salinity in the study area. Factors such as land use/cover, groundwater depth, surface temperature, and soil bulk density are closely related to the spatial distribution of soil salinity in the study area. The research results provide a theoretical basis and technical support for the formulation of comprehensive improvement measures and management systems for fine-scale saline-alkali land in the region. These findings have positive implications for promoting the achievement of the Sustainable Development Goal of Land Degradation Neutrality in coastal areas.
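The Geodetector statistic behind the factor rankings above is q = 1 − Σ Nₕσₕ² / (Nσ²): the share of total variance not explained within factor strata, and an interaction's q comes from overlaying two factors' strata. A toy sketch — the salinity values and class labels are illustrative, not the study's data:

```python
import numpy as np

def q_statistic(values, strata):
    """Geodetector factor-detector q: 1 - within-strata variance / total variance."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    n, total_var = len(values), values.var()
    within = sum(len(v) * v.var()
                 for s in np.unique(strata)
                 for v in [values[strata == s]])
    return 1.0 - within / (n * total_var)

# Toy salinity samples stratified by a single factor (e.g. land use class).
salinity = np.array([1.0, 1.2, 0.9, 3.0, 3.1, 2.9])
lulc     = np.array([0,   0,   0,   1,   1,   1])
q_single = q_statistic(salinity, lulc)

# Interaction q: overlay two factors' strata (here encoded as lulc*2 + gwd).
gwd = np.array([0, 1, 0, 1, 0, 1])   # e.g. groundwater-depth classes
q_inter = q_statistic(salinity, lulc * 2 + gwd)
```

Because overlaying strata only refines the partition, the interaction q can never fall below the larger single-factor q, which is how enhanced interactions such as GWD ∩ LULC are identified.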

  • ZHANG Peng, LIU Wanyue, LIU Chengbao, BO Zheng, NIU Ran, HAN Dongxu, LIN Qian, ZHANG Ziyi, MA Mingze
    Journal of Geo-information Science. 2025, 27(4): 787-800. https://doi.org/10.12082/dqxxkx.2025.240467

[Significance] The characteristics of the lunar surface, including its mineral compositions, geological formations, environmental factors, and temperature variations, are essential for advancing our understanding of the Moon. These features provide a wealth of scientific data for lunar research, such as resource distribution, environmental characteristics, and evolutionary history. Spectral imagers, which detect mineral compositions in a nondestructive way, play a crucial role in analyzing the mineral compositions of the lunar surface and have become key payloads in scientific exploration missions. With the increasing demand for high-precision lunar exploration data and advancements in spectral imaging technology, there is a growing trend toward acquiring lunar remote sensing data with higher spatial and spectral resolution across a broad spectral range. This trend is shaping the future of lunar orbit exploration, allowing for unprecedented detail in probing the Moon's surface. However, the higher resolution of spatial and spectral data also introduces significant challenges in data processing. [Progress] This paper begins by summarizing existing lunar spectral orbit data, including payload parameters and associated scientific findings. It then explores specific technical challenges in the data processing chain, such as pre-processing and the calculation of lunar surface parameters. Mapping surface compositions through spectral remote sensing is particularly complex due to the mixing of minerals within rocks, which can obscure clear spectral signatures. To address these challenges, various theoretical and empirical approaches have been developed. This paper proposes technical methods and potential solutions to overcome these obstacles. [Conclusions] In conclusion, detailed studies of lunar surface characteristics and the acquisition of high-resolution spectral data are vital for advancing lunar science. 
Lunar hyperspectral data are expected to support manned lunar exploration and scientific research by enabling the identification of various minerals on the Moon's surface and determining their abundance through hyperspectral observations. Advances in spectral imaging technology and the development of solutions for processing high-resolution data will significantly enhance lunar and planetary science capabilities. These efforts will pave the way for deeper insights into the Moon's geology and potential resource utilization.

  • HE Li, WANG Rong
    Journal of Geo-information Science. 2025, 27(9): 2151-2164. https://doi.org/10.12082/dqxxkx.2025.250273

    [Significance] Space is not merely a physical place, but a productive arena of social relations. Social phenomena are inherently endowed with spatial attributes, making the spatial perspective a critical pathway for understanding complex social issues. With the deepening "spatial turn" in the social sciences and continuous advancements in Geographic Information Systems (GIS)—particularly in data acquisition, spatial analysis and modeling, and spatial visualization—GIS has become an essential tool for addressing social issues. However, disciplinary differences in theoretical paradigms, methodological logic, and scale cognition between geography and the social sciences constrain their deeper integration. Existing literature lacks a systematic synthesis of integration trends, underlying challenges, and empowerment pathways, necessitating a comprehensive clarification of fusion mechanisms, core obstacles, and emerging opportunities. [Progress] This paper identifies five key advantages of GIS in empowering social science research: expanding spatial analytical thinking, supporting spatiotemporal data, enhancing survey techniques, enriching representational forms, and strengthening analytical capabilities. We review representative GIS applications in economics, political science, and sociology. From dimensions such as spatial cognition, data capacity, methodological adoption, and research hotspots, we distill application characteristics across these disciplines, revealing both commonalities and differences. While all three disciplines recognize spatial effects, their theoretical orientations shape distinct technical approaches—economics emphasizes causal identification, political science focuses on geopolitical structures, and sociology prioritizes contextual representation. 
Through a three-dimensional analysis—data, methodology, and cognition—we examine three major challenges in addressing social issues: the mismatch between data and research questions, the difficulty of integrating methods with causal mechanisms, and the contextual misalignment of place and scale, which reflect deeper issues of data suitability, methodological coherence, and the validity of spatial reasoning. [Prospects] The advancement of artificial intelligence, especially large models, injects new methodological momentum into GIS-based spatial analysis and brings threefold opportunities for addressing social issues. First, large models are driving spatial analysis from correlation-based description toward transparent causal inference. Second, multi-source data fusion and the generation of "silicon-based samples" help overcome the limitations of traditional survey data. Third, an emerging "space-survey" integrated framework is constructing a "spatial cognitive infrastructure" to support social research. Future efforts should establish a synergistic "large model-spatial analysis" paradigm that integrates these three opportunities. By simultaneously addressing the challenges of data matching, method integration, and contextual misalignment, this paradigm can elevate GIS from a supportive tool to a core engine for theory generation and mechanism interpretation. This transformation will enhance the scientific value and practical effectiveness of GIS and spatial analysis in addressing complex social issues, fostering a bidirectional interaction between methodological innovation and theoretical advancement.

  • YU Hanyang, LAN Chaozhen, WANG Longhao, WEI Zijun, GAO Tian, WANG Yiqiao, LIU Ruimeng
    Journal of Geo-information Science. 2025, 27(8): 1896-1919. https://doi.org/10.12082/dqxxkx.2025.250052

[Significance] Multimodal remote sensing image matching has become a fundamental task in integrated Earth observation, enabling precise spatial alignment across heterogeneous image sources. [Progress] As the diversity of sensing modalities, acquisition geometries, and temporal conditions increases, traditional matching frameworks have proven inadequate for capturing complex variations in radiometric responses, geometric configurations, and semantic representations. This technological gap has driven a significant paradigm shift from handcrafted feature engineering to deep learning-based solutions, which now form the core of current research and application development. This paper provides a comprehensive and structured review of recent advances in deep learning methods for multimodal remote sensing image matching, with an emphasis on the evolution of methodological paradigms and technical frameworks. It establishes a clear dual-path classification: the single-stage approach and the end-to-end approach. The former selectively replaces or enhances individual components of traditional pipelines, such as feature encoding or similarity estimation, using neural network modules. The latter integrates the entire matching process into a unified network architecture, enabling joint optimization of feature learning, transformation modeling, and correspondence inference within a closed loop. This progression reflects the field's transition from modular adaptation to holistic modeling, revealing a deeper integration of data-driven representation learning with geometric reasoning. The review further examines the development of architectural strategies supporting this evolution, including attention mechanisms, graph-based structures, hierarchical feature fusion, and modality-bridging transformations. These innovations contribute to improved robustness, semantic consistency, and adaptability across diverse matching scenarios.
Recent trends also demonstrate a growing reliance on pretrained vision foundation models, which provide transferable feature spaces and reduce the dependence on large-scale labeled datasets. In addition to summarizing technical advancements, the paper analyzes representative datasets, performance evaluation strategies, and the current challenges that constrain real-world deployment. These include limited data availability, weak cross-scene generalization, computational inefficiency, and insufficient interpretability. [Prospect] By synthesizing methodological progress with practical demands, the review identifies key directions for future research, including the design of modality-invariant representations, physically-informed neural architectures, and lightweight solutions tailored for scalable, real-time image registration in complex operational environments.

  • CHEN Xiawei, LONG Yi, LIU Xiang, ZHANG Ling, LIU Shaojun
    Journal of Geo-information Science. 2025, 27(5): 1228-1245. https://doi.org/10.12082/dqxxkx.2025.240508

    [Objectives] The quality of the leisure environment is a critical factor influencing residents' leisure experiences and participation, and it is closely related to the vitality of urban areas and economic development. Therefore, exploring how environmental quality influences the vitality of leisure space is crucial for promoting urban development. [Methods] A human-centered approach is adopted to construct a research framework for exploring the relationship between leisure environment quality and leisure space vitality based on image-text fusion perception. Online review texts and street view images are used to comprehensively perceive the leisure environment quality of the city. Natural language processing and semantic segmentation techniques are used to assess the leisure environment quality, while mobile signaling data is utilized to quantitatively measure the vitality of leisure spaces through user trajectory semantic modeling. Finally, using an Optimal Parameter-based Geographical Detector (OPGD), an in-depth analysis is conducted on the impact mechanisms of individual leisure environment quality factors and their interactions with the vitality of leisure spaces at global and local spatial scales in Nanjing. [Results] The findings reveal that: (1) The spatial distribution of leisure space vitality exhibits a "single-core-multi-center" pattern. The vitality in the main urban area is concentrated around the Xinjiekou commercial district, while Jiangbei District forms a "three-point" pattern with interactions between the two ends and the center. In the Xianlin area, high-vitality zones are distributed around the university town, while in the Dongshan area, they are located along the Shuanglong Avenue corridor. (2) On a macro scale, the leisure space vitality of Nanjing is indirectly dominated by economic levels. On a local scale, the influence of 14 leisure environment quality factors on leisure space vitality demonstrates significant regional heterogeneity. 
However, in municipal and district-level core areas with high leisure space vitality, the effects of these environmental quality factors are all significant. (3) The formation mechanism of leisure space vitality in Nanjing is closely related to regional geographical location, population density and composition, and economic income levels. [Conclusions] The analysis of Nanjing indicates that the exploration of leisure environment quality through image-text fusion perception enhances the systematic and comprehensive understanding of the factors influencing leisure space vitality and its mechanisms. This provides a scientific basis for optimizing the quality of the urban leisure environment and enhancing the vitality of leisure space.
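The factor detector at the core of the geographical detector family (of which OPGD is a parameter-optimized variant) measures how much a stratified factor explains the spatial variance of a response such as leisure-space vitality. A minimal sketch of the q-statistic follows; the function name and toy data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def geodetector_q(values, strata):
    """Factor-detector q-statistic: 1 - (within-strata variance / total variance).

    values: 1-D array of the response (e.g., leisure-space vitality per unit).
    strata: 1-D array of factor classes (e.g., discretized environment quality).
    """
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    n, total_var = len(values), values.var()  # population variance (ddof=0)
    if total_var == 0:
        return 0.0
    within = 0.0
    for h in np.unique(strata):
        group = values[strata == h]
        within += len(group) * group.var()
    return 1.0 - within / (n * total_var)

# A factor whose classes perfectly separate the response explains all
# spatial variance, giving q = 1.
q = geodetector_q([1, 1, 1, 5, 5, 5], [0, 0, 0, 1, 1, 1])
```

OPGD additionally searches over discretization schemes and class numbers to maximize q for continuous factors; the statistic itself is unchanged.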

  • QIN Chengzhi, ZHU Liangjun, CHEN Ziyue, WANG Yijie, WANG Yujing, WU Chenglong, FAN Xingchen, ZHAO Fanghe, REN Yingchao, ZHU Axing, ZHOU Chenghu
    Journal of Geo-information Science. 2025, 27(5): 1027-1040. https://doi.org/10.12082/dqxxkx.2025.240706

[Objectives] Geographic modeling aims to appropriately couple diverse geographic models and their specific algorithmic implementations to form an effective and executable model workflow for solving specific, unsolved application problems. This approach is highly valuable and in high demand in practice. However, traditional geographic modeling is designed with an execution-oriented approach, which places a heavy burden on users, especially non-expert users. [Methods] In this position paper, we advocate not only for the necessity of intelligent geographic modeling but also for achieving it through a so-called recursive geographic modeling approach. This new approach originates from the user's modeling target, which can be formalized as an initial elemental modeling question. It then reasons backward to resolve the current elemental modeling question and iteratively updates new elemental modeling questions in a recursive manner. This process enables the automatic construction of an appropriate geographic workflow model tailored to the application context of the user's modeling problem, thereby addressing the limitations of traditional geographic modeling. [Progress] Building on this foundational concept, this position paper introduces a series of intelligent geographic modeling methods developed by the authors. These methods aim to reduce the geographic modeling burden on non-expert users while assuring the appropriateness of automatically constructed models. Specifically, each proposed intelligent geographic modeling method is designed to solve a specific type of elemental question within intelligent geographic modeling.
The elemental questions include: (1) how to determine the appropriate model algorithm (or its parameter values) within the given application context, (2) how to select the appropriate covariate set as input for a model without a predetermined number of inputs (e.g., a soil mapping model without predetermined environmental covariates as inputs), (3) how to determine the structure of a model that integrates multiple coupled modules (e.g., a watershed system model incorporating diverse process simulation modules), and (4) how to determine the proper spatial extent of input data for a geographic model when a specific area of interest is assigned by the user. The key to solving these elemental questions lies in the effective utilization of geographic modeling knowledge, particularly application-context knowledge. However, since application-context knowledge is typically unsystematic, empirical, and implicit, we developed case formalization and case-based reasoning strategies to integrate this knowledge within the proposed methods. Based on the recursive intelligent geographic modeling approach and the corresponding methods, we propose an application schema for intelligent geographic modeling and computing. This schema is grounded in domain modeling knowledge, particularly case-based application-context knowledge, and leverages the “Data-Knowledge-Model” tripartite collaboration. A prototype of this approach has been implemented in an intelligent geospatial computing system called EGC (EasyGeoComputing). [Prospect] Finally, this position paper discusses the emerging role of large language models in geographic modeling. Their potential applications, relationships with the research presented here, and prospects for future research directions are explored.
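The recursive loop described above—resolve the current elemental question, then push any new elemental questions that its answer raises—can be caricatured as a worklist driven by a case base. Every name and data structure below is a hypothetical illustration, not EGC's actual API:

```python
# Hypothetical sketch of the recursive modeling loop: resolve the current
# elemental question via case-based reasoning (here, a direct lookup standing
# in for similarity-based case retrieval), and queue the new elemental
# questions its answer raises.

def build_workflow(initial_question, case_base):
    """Recursively resolve elemental modeling questions into workflow steps."""
    workflow, pending = [], [initial_question]
    while pending:
        question = pending.pop()
        step = case_base[question]               # reuse the matching solved case
        workflow.append(step["solution"])        # e.g., algorithm + parameters
        pending.extend(step.get("raises", []))   # newly raised elemental questions
    return workflow

# Toy case base: choosing a soil-mapping model raises a covariate-set question
# (elemental question type 2 above).
cases = {
    "map soil type": {"solution": "similarity-based soil mapper",
                      "raises": ["select covariates"]},
    "select covariates": {"solution": "terrain + climate covariates"},
}
steps = build_workflow("map soil type", cases)
```

A real system would replace the dictionary lookup with case formalization and similarity-based retrieval over application-context knowledge, as the abstract describes.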

  • LIU Chengbao, BO Zheng, ZHANG Peng, ZHOU Miyu, LIU Wanyue, HUANG Rong, NIU Ran, YE Zhen, YANG Hanzhe, LIU Shijie, HAN Dongxu, LIN Qian
    Journal of Geo-information Science. 2025, 27(4): 801-819. https://doi.org/10.12082/dqxxkx.2025.240466

    [Significance] Lunar remote sensing is a critical method to ensure the safety and success of lunar exploration missions while advancing lunar scientific research. It plays a significant role in understanding the Moon's geological evolution and the formation of the Earth-Moon system. Accurate lunar topographic maps are essential for mission planning, including landing site selection, navigation, and resource identification. These maps also provide valuable data for studying planetary processes and the history of the solar system. [Progress] In recent years, with growing global interest and investment in lunar exploration, remarkable progress has been made in remote sensing technology. These advancements have significantly improved the precision, resolution, and coverage of lunar topographic mapping. Various lunar remote sensing missions, such as China's Chang'e program, NASA's Lunar Reconnaissance Orbiter, and missions by other space agencies, have acquired substantial amounts of multi-source, multi-modal, and multi-scale data. This wealth of data has laid a solid foundation for technological breakthroughs. For instance, high-resolution laser altimetry, optical photogrammetry, and synthetic aperture radar have provided detailed datasets, enabling refined mapping of the Moon's surface. However, the dramatic increase in data volume, complexity, and heterogeneity presents challenges for effective processing, integration, and application in topographic mapping. This paper provides a comprehensive overview of the current state of lunar topographic remote sensing and mapping, focusing on the implementation and data acquisition capabilities of major lunar remote sensing missions during the second wave of lunar exploration. 
It systematically summarizes the latest research progress in key surveying and mapping technologies, including laser altimetry, which enables precise elevation measurements; optical photogrammetry, which reconstructs surface features using high-resolution imagery; and synthetic aperture radar, which provides unique insights into topographic and subsurface structures. [Prospect] In addition to reviewing recent advancements, the paper discusses future trends and challenges in the field. Key recommendations include enhancing sensor functionality and performance metrics to improve data quality, optimizing the lunar absolute reference framework for consistency and accuracy, leveraging multi-source data fusion for fine-scale modeling, expanding scientific applications of lunar topography, and developing intelligent and efficient methods to process massive amounts of remote sensing data. These efforts will not only support upcoming lunar exploration missions, such as China's manned lunar landing program scheduled for 2030, but also contribute to a deeper understanding of the Moon and its relationship with Earth.

  • PING Yifan, LU Jun, GUO Haitao, HOU Qingfeng, ZHU Kun, SANG Zehao, LIU Tong
    Journal of Geo-information Science. 2025, 27(7): 1608-1623. https://doi.org/10.12082/dqxxkx.2025.250051

    [Objectives] Cross-view image geolocation refers to a technology that determines the geographical location of an image by matching it with reference images taken from different perspectives and possessing precise location information. This technology plays a crucial role in real-world applications such as Unmanned Aerial Vehicle (UAV) navigation, environmental monitoring, and target positioning. Currently, most deep learning-based cross-view image retrieval and geolocation methods for drone-satellite tasks rely heavily on supervised learning. However, the scarcity of high-quality labeled data presents a significant limitation, hindering the generalization capability of these models. Moreover, existing methods often fail to effectively model the spatial layout of images, making it difficult to bridge the substantial domain gap between cross-view images, thereby limiting the accuracy and robustness of geolocation tasks. [Methods] To address these challenges, this paper proposes a novel cross-view image retrieval and localization architecture called DINO-MSRA. The architecture first employs the DINOv2 large model framework, fine-tuned by Conv-LoRA, as the feature encoder. This enhances the model's feature extraction capabilities with fewer parameters, improving both efficiency and accuracy. Second, we design a spatial relation-aware feature aggregator based on the Mamba module (MSRA) to more effectively aggregate image features. By embedding spatial configuration features into the global descriptor, this module significantly improves the model's performance in cross-view matching tasks, especially in complex scenarios where spatial relationships between objects are crucial. Finally, the InfoNCE loss function is adopted to train the model, optimizing contrastive learning and ensuring more accurate retrieval and localization results. [Results] Extensive comparative and ablation experiments were conducted on the University-1652 and SUES-200 datasets. 
The experimental results show that for drone-view target localization (drone→satellite) and drone navigation (satellite→drone) tasks, the proposed method achieves R@1 accuracies of 95.14% and 97.29%, respectively, on the University-1652 dataset, representing improvements of 0.68% and 1.14% over the current best algorithm, CAMP. On the SUES-200 dataset at an altitude of 150 meters, R@1 accuracies reach 97.2% and 98.75%, which are 1.8% and 2.5% higher than CAMP, respectively. Moreover, the proposed method requires significantly fewer parameters than existing algorithms, only 19.2% of those used by Sample4Geo. [Conclusions] In summary, the proposed DINO-MSRA architecture outperforms current state-of-the-art methods in cross-view image matching, achieving higher accuracy and faster inference speed. These results demonstrate its robustness and practical application potential in challenging real-world scenarios.
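The InfoNCE objective used to train DINO-MSRA treats each drone/satellite pair in a batch as a positive and all other batch members as negatives. A minimal NumPy sketch of the one-direction (drone→satellite) loss, with hypothetical names and a standard temperature value, is:

```python
import numpy as np

def info_nce(drone_emb, sat_emb, temperature=0.07):
    """InfoNCE loss over a batch of paired embeddings (drone->satellite direction).

    Row i of drone_emb and row i of sat_emb form a positive pair; every other
    row in the batch serves as an in-batch negative. Embeddings are
    L2-normalized so the dot product is cosine similarity.
    """
    d = drone_emb / np.linalg.norm(drone_emb, axis=1, keepdims=True)
    s = sat_emb / np.linalg.norm(sat_emb, axis=1, keepdims=True)
    logits = d @ s.T / temperature               # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the diagonal (the matching pair) as the target class.
    return -np.mean(np.diag(log_prob))
```

The loss falls as each drone embedding becomes more similar to its own satellite view than to any other view in the batch; a symmetric variant averages both retrieval directions.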

  • LIU Kang
    Journal of Geo-information Science. 2025, 27(7): 1520-1531. https://doi.org/10.12082/dqxxkx.2025.250196

[Significance] Human mobility is closely tied to transportation, infectious disease spread, and public safety, making trajectory analysis and modeling a long-standing research focus. While numerous specialized trajectory models, such as interpolation, prediction, and classification models, have been developed using machine learning or deep learning, most are task-specific and trained on localized datasets, limiting their generalizability across tasks, regions, or trajectory data. Recent advances in generative AI have demonstrated the potential of foundation models in NLP and computer vision, motivating the need for a trajectory foundation model capable of learning universal patterns from large-scale mobility data to support diverse downstream applications. [Methods] This paper first reviews the research progress of various specialized trajectory models. It then categorizes trajectory modeling tasks into conventional tasks (e.g., trajectory similarity computation, interpolation, prediction, and classification) and the generation task (i.e., trajectory generation), and elaborates on recent advances in trajectory foundation models for these two types of tasks. [Conclusions] The paper argues that trajectory foundation models for conventional tasks should enhance not only task generalization but also spatial and data generalization. Trajectory foundation models for the generation task must address the challenge of spatial generalization, enabling the generation of large-scale trajectory data "from scratch" based on easily obtainable macro-level urban data or features. Furthermore, integrating trajectory data with other data types (e.g., text, maps, and other geospatial data) to construct multimodal geographic foundation models, as well as developing application-oriented trajectory foundation models for fields such as transportation, public health, and public safety, are promising research directions worthy of future exploration.

  • PAN Jiechen, XING Shuai, CAO Jiayin, DAI Mofan, HUANG Gaoshuang, ZHI Lu
    Journal of Geo-information Science. 2025, 27(9): 1999-2020. https://doi.org/10.12082/dqxxkx.2025.250151

    [Significance] With rapid advances in remote sensing, surveying and mapping, and autonomous driving technologies, 3D point cloud semantic segmentation, a core technology of digital twin systems, is attracting increasing research attention. Airborne point cloud semantic segmentation is regarded as a key technology for enhancing the automation and intelligence of 3D geographic information systems. [Analysis] Driven by deep learning and sensing technologies such as LiDAR, depth cameras, and 3D laser scanners, point cloud semantic segmentation can automatically classify and accurately recognize large-scale point cloud data through precise feature extraction and efficient model training. However, compared with typical high-density, category-balanced point cloud datasets (e.g., those used in indoor scenes, autonomous driving, or robotics), airborne point clouds present significant challenges in areas such as registration and feature extraction. These challenges stem from their unique characteristics, including large-scale 3D terrain coverage, dynamic platform motion errors, considerable variations in ground-object spatial scales, and complex occlusions. Currently, deep-learning-based airborne point cloud semantic segmentation is still in its early stages. Due to heterogeneous data acquisition methods, varying resolutions, and diverse attribute information, there remains a gap between existing research and practical algorithm deployment. [Progress] This paper provides a comprehensive review of the field, covering adaptive algorithms, datasets, performance metrics, and emerging methods along with their advantages and limitations. It also offers quantitative comparisons with existing technologies, evaluating representative methods in terms of precision and applicability. 
[Prospect] A thorough analysis suggests that breakthroughs in airborne point cloud semantic segmentation necessitate systematic research innovations across multiple dimensions, including feature representation, multimodal fusion, few-shot learning, algorithm interpretability, and large-scale model benchmarking. These advancements are essential not only for overcoming current bottlenecks in real-world applications but also for establishing robust technical foundations for critical use cases such as digital twin cities and disaster emergency response.

  • SONG Qi, GAO Xiaohong, YIN Chengzhuo, HUANG Yanjun, LI Qiaoli, SONG Yuting, MA Xuyan
    Journal of Geo-information Science. 2025, 27(4): 946-966. https://doi.org/10.12082/dqxxkx.2025.240607

    [Objectives] Unmanned Aerial Vehicle (UAV) and satellite remote sensing technologies have been successfully applied to estimate soil organic carbon and other attributes. However, their application to soil texture estimation remains relatively limited, highlighting the need for further research in this area. This study focuses on three farmland plots located in Zhuozhatan Village (Huzhu County), Nilongkou Village (Lalongkou Town, Huangzhong District), and Baitu Village (Lushar Town, Huangzhong District) within the Huangshui River Basin of Qinghai Province. It explores the potential of UAV and satellite remote sensing technologies for estimating soil texture content at the field scale. [Methods] Using UAV platforms equipped with two hyperspectral cameras, field-scale imaging of farmland soils was conducted. Additionally, a field spectrometer was used to collect in-situ soil spectra, and a total of 838 soil samples were collected from 2022 to 2024. Satellite imagery was also obtained for the same time periods, including GF1/2/7 (Gaofen 1/2/7), Sentinel-2A, and ZY1-02D (Ziyuan 1-02D). Laboratory analyses determined soil particle size distribution and acquired indoor soil spectral data. Based on these datasets, statistical modeling and soil texture content estimation were performed using the XGBoost (Extreme Gradient Boosting) method for laboratory, field in-situ, UAV, GF, ZY1-02D, and Sentinel-2 spectral data. Spatial distribution maps of soil texture content were then generated. [Results] ① Among the XGBoost model results, the highest model accuracy for UAV image spectra achieved an RPD (Ratio of Performance to Deviation) of 2.441, while the optimal RPD values for GF1/2/7, ZY1-02D, and Sentinel-2 satellite imagery were 1.815, 1.601, and 1.561, respectively. ② The estimation accuracy based on UAV and satellite imagery was lower than that derived from field spectrometer measurements. 
The accuracy ranking was as follows: laboratory spectra > field in-situ spectra > UAV image spectra > GF1/2/7 satellite image spectra > ZY1-02D satellite image spectra > Sentinel-2 satellite image spectra. Among soil texture components, clay content estimation showed the highest accuracy (RPD = 2.70), followed by silt (RPD = 2.24) and sand (RPD = 1.91). ③ Sand and clay content exhibited a negative correlation with soil spectral reflectance, whereas silt content displayed a positive correlation. The sensitive bands for sand, silt, and clay content were primarily concentrated in the near-infrared region (780–2,400 nm). ④ The content of sand, silt, and clay exhibited minor variations over three years, demonstrating relative stability. The mapping results for the three plots showed soil texture contents predominantly in the following ranges: 67% < sand ≤ 83%, 10.6% < silt ≤ 19.1%, and 3.2% < clay ≤ 6.6%. [Conclusions] At the field scale, UAV imagery was identified as the most effective data source for soil texture content mapping, providing strong support for precision agricultural management. While GF1/2/7 and ZY1-02D satellite imagery were found to be sufficient for texture mapping, Sentinel-2 satellite imagery was too coarse for field-scale mapping.
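The RPD values reported above are the ratio of the standard deviation of the observed soil property to the RMSE of the model's predictions; higher values mean the model resolves more of the natural variability. A minimal sketch (function name is illustrative; note that some studies use the sample rather than the population standard deviation):

```python
import math

def rpd(observed, predicted):
    """Ratio of Performance to Deviation: SD(observed) / RMSE(predictions).

    Values above roughly 2 are commonly read as reliable estimation, 1.4-2 as
    fair, and below 1.4 as unreliable (thresholds vary across the literature).
    Uses the population standard deviation (divisor n).
    """
    n = len(observed)
    mean = sum(observed) / n
    sd = math.sqrt(sum((o - mean) ** 2 for o in observed) / n)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return sd / rmse
```

Under this reading, the UAV model's RPD of 2.441 means the spread of measured texture values was about 2.4 times the prediction error.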

  • WENG Mingkai, XIAO Guirong
    Journal of Geo-information Science. 2025, 27(5): 1113-1128. https://doi.org/10.12082/dqxxkx.2025.250050

[Objectives] The quality of training samples significantly impacts model performance and prediction accuracy. In regions with limited sample data, the small number of samples and their uneven spatial distribution may prevent the model from effectively learning the features of disaster-inducing factors. This increases the risk of overfitting and ultimately affects the accuracy of model predictions. Therefore, it is crucial to collect and optimize training samples based on regional characteristics. [Methods] To address this issue, this study proposes a sampling optimization method for training samples. The method combines the Prototype-Based Sampling (PBS) approach for selecting landslide-positive samples with an unsupervised clustering model for training sample selection. This results in a screened and expanded positive sample dataset and an objectively extracted negative sample dataset, forming a Sampling-Optimized (SO) training sample dataset. Subsequently, the Random Forest (RF) and Support Vector Machine (SVM) models, which are well suited for handling small sample data, were employed to construct a landslide susceptibility evaluation model. Comparative experiments were conducted using Raw Data (RD), a dataset with only Data Augmentation (DA), and the optimized dataset. Model prediction performance was assessed using metrics such as the Area Under the Curve (AUC). Additionally, the frequency ratio method was applied to optimize the results of landslide susceptibility zoning. Finally, a case study was conducted in Putian City, where landslide sample data is relatively scarce, to verify the effectiveness and generalization capability of the proposed sampling optimization method. [Results] The results indicate that models trained on the SO dataset achieved AUC improvements of 10.69% and 18.23% compared to those trained on the RD and DA datasets, respectively, demonstrating a significant enhancement in predictive performance.
This suggests that selecting and expanding positive samples while objectively extracting negative samples can improve model accuracy and mitigate the overfitting problem during training. Furthermore, the frequency ratio analysis revealed that the SO-RF model achieved higher frequency ratios in regions with extremely high and high susceptibility than the SO-SVM model, indicating that SO-RF is more suitable for evaluating landslide susceptibility in regions with limited landslide sample data, such as Putian City. [Conclusions] The proposed training sample optimization approach, combined with machine learning evaluation methods, demonstrates high applicability and accuracy. Therefore, the findings of this study provide valuable insights into machine learning-based sampling strategies for landslide susceptibility assessment.
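The frequency ratio used to evaluate the zoning compares each susceptibility class's share of observed landslides with its share of the study area; a ratio above 1 means the class concentrates more landslides than its areal extent alone would predict. A hedged sketch with hypothetical names and toy counts:

```python
def frequency_ratio(landslide_counts, cell_counts):
    """Frequency ratio per susceptibility class.

    FR = (landslides in class / all landslides) / (cells in class / all cells).
    FR > 1: the class hosts disproportionately many landslides, so a sound
    susceptibility map should show high FR in its high-susceptibility zones.
    """
    total_slides = sum(landslide_counts.values())
    total_cells = sum(cell_counts.values())
    return {cls: (landslide_counts[cls] / total_slides)
                 / (cell_counts[cls] / total_cells)
            for cls in landslide_counts}

# Toy example: the "very high" class covers 10% of cells but 40% of landslides.
fr = frequency_ratio({"very high": 40, "low": 60},
                     {"very high": 100, "low": 900})
```

This is the sense in which the SO-RF model's higher frequency ratios in extremely high and high susceptibility zones indicate a better zoning than SO-SVM's.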

  • ZHANG Nuan, WANG Tao, ZHANG Yan, WEI Yibo, LI Liuwen, LIU Yichen
    Journal of Geo-information Science. 2025, 27(8): 1751-1779. https://doi.org/10.12082/dqxxkx.2025.250137

    [Significance] Street View Image-based Visual Place Recognition (SV-VPR) is a geographical location recognition technology that relies on visual feature information. Its core task is to predict and accurately locate unknown locations by analyzing the visual features of street view images. This technology must overcome challenges such as appearance changes under different environmental conditions (e.g., lighting differences between day and night, seasonal variations) and viewpoint differences (e.g., perspective deviations between vehicle-mounted cameras and satellite images). Accurate recognition is achieved through calculating image feature similarity, applying geometric constraints, and related methods. As an interdisciplinary field of computer vision and geographic information science, SV-VPR is closely related to visual positioning, image retrieval, SLAM, and more. It has significant application value in areas such as UAV autonomous navigation, high-precision positioning for autonomous driving, construction of geographical boundaries in cyberspace, and integration of augmented reality environments. It is particularly advantageous in GPS-denied environments. [Analysis] This paper systematically reviews the research progress of visual location recognition based on street view images, covering the following aspects: First, the basic concepts and classifications of visual place recognition technologies are introduced. Second, the foundational principles and categorization methods specific to street view image-based visual place recognition are discussed in depth. Third, the key technologies in this field are analyzed in detail. Furthermore, relevant datasets for street view image-based visual place recognition are comprehensively reviewed. In addition, evaluation methods and index systems used in this domain are summarized. Finally, potential future research directions for SV-VPR are explored. 
[Purpose] This review aims to provide researchers with a systematic overview of the technological development trajectory of SV-VPR, helping them quickly understand the current research landscape. It also offers a comparative analysis of key technologies and evaluation methods to support algorithm selection, and identifies emerging challenges and potential breakthrough areas to inspire innovative research.

  • LI Pengshuo, FENG Yongjiu, TONG Xiaohua, XI Mengrong, XU Xiong, LIU Shijie, HUANG Qian
    Journal of Geo-information Science. 2025, 27(4): 864-875. https://doi.org/10.12082/dqxxkx.2025.240401

[Objectives] Rovers play an essential role in lunar exploration, serving as vital tools for scientists aiming to unravel the Moon's geological history and exploit its potential water-ice reserves. However, navigating the lunar surface with rovers presents significant safety risks due to the complex and often hazardous terrain, compounded by the lack of a consistent and reliable light source. The absence of pre-existing, high-resolution data—such as LiDAR—prior to exploration missions poses a considerable challenge in evaluating the safety of potential rover paths. Given these constraints, developing a reliable pre-assessment method is crucial for enhancing the success rate of lunar rover missions. [Methods] This paper introduces a 3D simulation method for lunar rover exploration, leveraging the Visualization Toolkit (VTK) to address these challenges. Our method integrates three critical aspects. Firstly, it offers high-resolution visualization of the lunar surface terrain, capturing intricate details down to the meter scale. Secondly, it simulates the dynamic illumination environment on the lunar surface, accounting for the varying illumination conditions due to the Moon's rotation and orbital position. Thirdly, it models the rover's position and attitude transformations as it navigates the terrain. [Results] The effectiveness of this simulation approach is demonstrated through a case study focusing on the Shackleton Connecting Ridge region at the lunar South Pole, an area of significant interest due to its challenging topography and potential for water-ice deposits. The 3D simulation accurately depicts the undulating terrain of impact craters and allows for a thorough assessment of the rover's route safety by visualizing the potential hazards along the path.
Moreover, the simulation offers an intuitive representation of the rover's movement, including real-time adjustments in position and attitude, which are critical for ensuring the rover’s stability and operational safety over long distances. Additionally, our method includes a real-time update feature for the dynamic illumination scene, enabling direct observation of how changing light conditions affect the rover's path during the mission. This capability is particularly important for assessing the feasibility of navigating through areas that may experience prolonged periods of darkness or extreme shadowing, which could impede the rover's progress or jeopardize its safety. The goal of this research is to improve the reliability and safety of future lunar rover missions by providing a robust pre-assessment tool that can verify the feasibility of proposed exploration routes. [Conclusions] This method thus offers crucial a priori information, serving as an essential guarantee for the successful execution of future lunar exploration endeavors.

  • WANG Jiao, LI Junjiao, RUI Qiyao, CHENG Weiming
    Journal of Geo-information Science. 2025, 27(4): 820-834. https://doi.org/10.12082/dqxxkx.2025.240474

    [Objectives] The identification and classification of lunar impact craters are critical for selecting spacecraft landing sites and estimating the Moon's geological age. However, the complex morphological features created by impact processes pose significant challenges to studying micro-scale lunar surface features, which are often indivisible at the pixel level. Addressing these challenges requires a scale-adaptive approach that incorporates micro-scale characteristics to refine lunar impact crater classification maps. [Methods] This study introduces a scale-adaptive algorithm based on geomorphons for the automatic classification of micro-scale lunar surface features. First, terrain parameters are optimized to define local ternary patterns of lunar geomorphology. These patterns are then used to determine lunar geomorphons. Next, the geomorphons are aggregated according to rules based on relief amplitude and slope to identify lunar impact geomorphic units on a larger scale. Finally, a classification map of lunar impact craters in the Gagarin Crater region is constructed using the identified geomorphons. [Results] The proposed method successfully identifies the optimal parameters for adaptively scaling lunar geomorphons by incorporating the unique characteristics of lunar surface features. Using a four-parameter constraint window, lunar geomorphons are refined at locally optimal spatial scales through the computation of local ternary patterns integrated with the theory of lunar geomorphological evolution. The results reveal that the generated maps of lunar geomorphons exhibit significant spatial aggregation, well-defined classification boundaries, and high accuracy in representing lunar impact craters. The method effectively captures the internal structural details of impact craters, providing a pixel-level depiction of their morphological features. 
The multi-scale identification of impact craters achieves a precision of 88.24%, a recall of 84.96%, and an F1 score of 86.57%. A classification schema for impact craters was established, including simple pit, small-scale bowl, small-scale flat bottom, small-scale central peak, medium flat bottom, medium central peak, large ring plain, and giant complex. [Conclusions] This method demonstrates robustness and high efficiency in crater identification, offering multi-scale geomorphological units and serving as a foundational tool for scale-based lunar scientific research. It provides technical support for identifying and classifying multi-scale lunar impact craters, contributing to advancements in lunar morphological and geological analysis.
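The local ternary pattern at the heart of the geomorphon approach can be sketched compactly: each cell's eight neighbours are coded +1/0/-1 against a flatness threshold, and the pattern is reduced to a landform label. This is a minimal illustration of the base geomorphon idea only; the flatness threshold, class names, and reduction rule below are assumptions, not the paper's scale-adaptive, four-parameter algorithm.

```python
import numpy as np

# Eight neighbour offsets, clockwise from the upper-left cell.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def ternary_pattern(dem, r, c, flat_threshold=0.5):
    """8-neighbour ternary pattern around cell (r, c): +1 higher, 0 flat, -1 lower."""
    center = dem[r, c]
    pattern = []
    for dr, dc in OFFSETS:
        diff = dem[r + dr, c + dc] - center
        pattern.append(0 if abs(diff) <= flat_threshold else (1 if diff > 0 else -1))
    return pattern

def classify(pattern):
    """Crude reduction of a ternary pattern to a landform class (illustrative)."""
    plus, minus = pattern.count(1), pattern.count(-1)
    if plus == 0 and minus == 0:
        return "flat"
    if plus == 0:
        return "peak"        # all neighbours lower
    if minus == 0:
        return "pit"         # all neighbours higher, e.g. a crater floor
    return "slope"
```

The paper's adaptive variant would, in effect, re-evaluate this pattern over locally optimized lookup distances and thresholds before aggregating cells into impact geomorphic units.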

  • ZHANG Teng, WANG Jingxue, XIE Xiao, ZANG Dongdong
    Journal of Geo-information Science. 2025, 27(5): 1163-1178. https://doi.org/10.12082/dqxxkx.2025.240698

    [Objectives] The 3D model reconstruction of buildings based on a model-driven approach using airborne LiDAR building point clouds relies on fitting the building point cloud to predefined geometric primitives. However, due to the uneven density and noise in the building point cloud, errors often arise in structural details during the primitive fitting process, leading to reduced reconstruction accuracy. To address this issue, this study proposes a 3D model reconstruction method for airborne LiDAR building point clouds based on sequential quadratic programming and elevation step correction. [Methods] First, a primitive library containing classical roof structures is established, including simple roofs, complex roofs, and steep roofs. An adjacency matrix is constructed by incorporating the adjacency relationships and ridge properties between roof patches. The best-matching primitives are then selected from the primitive library based on the adjacency matrix. Next, the shape parameters of the selected primitives are optimized using the sequential quadratic programming algorithm to achieve a globally optimal fitting state. The initial 3D model is then generated. To further enhance accuracy, the relative position of the building models and the roof point clouds in 3D space is refined through translation and rotation, reducing the relative distance deviation and improving the fitting precision. Finally, the City Geography Markup Language (CityGML) is used to store the reconstructed 3D building models, ensuring clear structure and correct topology, which facilitates the visual representation of reconstruction results. [Results] Ten sets of classical building point clouds from the 3D Building dataset were selected for the 3D model reconstruction experiment. The proposed method was compared with existing reconstruction approaches based on the same model-driven framework, and classical accuracy evaluation metrics were used for quantitative analysis. 
The average objective function value for the selected experimental data was 0.32 m, an improvement of 0.03 m over the comparison method. The horizontal average deviation between the reconstructed building elements and the building point cloud was 0.10 m, while the vertical average deviation was 0.04 m. [Conclusions] In summary, the optimal shape parameters, obtained through the sequential quadratic programming algorithm, enable the construction of 3D building models with complete topology and regular shapes. Additionally, the elevation step correction, which utilizes the average point spacing of the roof point cloud as the step length, effectively enhances the reconstruction accuracy of 3D building models.
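The primitive-fitting step can be sketched with SciPy's SLSQP solver, a sequential quadratic programming method. Everything below is illustrative, not the paper's primitive library or objective: a synthetic gabled-roof profile z(x) = h - s·|x - x0| is fitted to noisy roof points by minimizing the mean squared vertical residual under simple bounds.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 200)                        # roof point abscissae (m)
z = 8.0 - 0.5 * np.abs(x - 5.0) + rng.normal(0.0, 0.05, x.size)  # noisy heights

def objective(p):
    """Mean squared distance between points and the gabled-roof primitive."""
    h, s, x0 = p                                       # ridge height, slope, ridge x
    return np.mean((z - (h - s * np.abs(x - x0))) ** 2)

# SLSQP = sequential least squares (quadratic) programming.
res = minimize(objective, x0=[6.0, 1.0, 4.0], method="SLSQP",
               bounds=[(0, 20), (0, 5), (0, 10)])
h_fit, s_fit, x0_fit = res.x
```

A full pipeline would fit all patches of the matched primitive jointly (the "globally optimal fitting state") before the translation/rotation refinement and the elevation step correction.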

  • YUE Zichen, ZHONG Shaobo, MEI Xin
    Journal of Geo-information Science. 2025, 27(6): 1289-1304. https://doi.org/10.12082/dqxxkx.2025.240715

    [Objectives] Knowledge graphs, as a cutting-edge technology for integrating multimodal data sources, have garnered significant attention in the GIS domain. These graphs are typically constructed using graph databases. However, mainstream graph databases still face challenges in effectively organizing and analyzing geospatial-temporal data. [Methods] To address this issue, this paper proposes an approach to modeling spatiotemporal semantics and query optimization that bridges a graph database and a spatial data engine implemented within a relational database. In the graph database, geographic entities are stored as lightweight placeholder nodes (storing only mapping IDs) and linked to spatiotemporal index nodes (such as time trees and Geohash encodings) to enhance aggregation capabilities. Meanwhile, complete geospatial-temporal objects are stored in a relational database, and table partitioning strategies are employed to improve retrieval efficiency. This approach uses unified identifiers and JDBC for routing geographic entities across the databases. When users invoke pre-registered spatiotemporal functions in the graph database, a query rewriter transforms the graph queries into SQL statements based on entity identifiers, pushes them to the relational database for processing, and returns the results to the graph query pipeline. Additionally, a two-phase commit protocol ensures data consistency across the heterogeneous databases. [Results] We implemented a prototype system integrating Neo4j and PostGIS and conducted experiments on query and storage efficiency using a multisource spatiotemporal dataset from Shenzhen (including taxi trajectories, bike-sharing trajectories, road networks, POIs, and remote sensing imagery). 
Compared to mainstream graph database systems (e.g., Neo4j and GraphDB), our approach significantly improves performance for geospatial-temporal queries, reducing response times by one to two orders of magnitude in complex computational scenarios and enabling raster computations unsupported by native graph databases. By leveraging lightweight graph nodes and PostGIS data compression, storage space is reduced by a factor of approximately 3 to 5. Compared to virtual knowledge graph systems (e.g., Ontop), our method shows minimal differences in spatial query performance and storage overhead, while achieving notably faster response times for large-scale spatiotemporal queries. [Conclusions] Compared to existing methods, our approach leverages existing graph databases to construct materialized spatiotemporal knowledge graphs, enhancing modeling flexibility and query efficiency for geospatial-temporal data. It also supports user-defined extensions to the geospatial-temporal function library, offering a novel framework for efficiently managing and analyzing such data within knowledge graphs.
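The query-rewriting idea can be sketched in a few lines: the graph side resolves placeholder nodes to mapping IDs, and a rewriter turns a registered spatiotemporal function call into SQL pushed down to PostGIS. The function name, table, and column below are hypothetical illustrations, not the paper's API; only `ST_DWithin`, `ST_SetSRID`, and `ST_MakePoint` are real PostGIS functions.

```python
def rewrite_within_distance(entity_ids, lon, lat, meters,
                            table="geo_objects", id_col="map_id"):
    """Rewrite a graph-side 'within distance' call into a PostGIS query.

    entity_ids come from the lightweight placeholder nodes; the relational
    side holds the full geometries and does the spatial computation.
    """
    ids = ", ".join(str(int(i)) for i in entity_ids)
    return (
        f"SELECT {id_col} FROM {table} "
        f"WHERE {id_col} IN ({ids}) "
        f"AND ST_DWithin(geom::geography, "
        f"ST_SetSRID(ST_MakePoint({lon}, {lat}), 4326)::geography, {meters})"
    )

# IDs matched in the graph, filtered spatially in PostGIS (500 m radius).
sql = rewrite_within_distance([101, 102, 103], 114.06, 22.54, 500)
```

In the prototype, the returned ID set would flow back into the Neo4j query pipeline, with the two-phase commit protocol guarding cross-database writes.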

  • ZHU Ge, ZHANG Zheng, CAO Lianshuai, MA Kunyang, XU Xinyue, CHENG Yi
    Journal of Geo-information Science. 2025, 27(9): 2165-2176. https://doi.org/10.12082/dqxxkx.2025.250207

    [Objectives] Map compilation involves professional operations such as element selection, symbolization, and notation configuration. However, the process is often complex and inefficient. Leveraging Large Language Models (LLMs), text-to-map technology significantly simplifies the mapping process, lowers the barrier to entry for non-experts, and improves mapping efficiency. Nevertheless, challenges remain, including heavy reliance on manual debugging and fragmented tool invocation. [Methods] This paper proposes a DeepSeek-based method for constructing text-to-map agents, which automates the entire process from user input to visualization output. This is achieved through the decomposition of natural language instructions and autonomous adaptation of tools. Centered on the DeepSeek model, the approach associates cartographic elements with specialized tools and usage descriptions, analyzes module structures and collaboration mechanisms, and organizes tools into five categories. By interpreting user instructions and reasoning through task-oriented chains of thought, the agent invokes appropriate visualization tools to achieve cross-modal mapping from natural language to maps, enabling autonomous task reasoning and automated map generation. [Results] To evaluate the agent's effectiveness, two types of mapping tasks—based on local map data and online map services—were conducted using DeepSeek-V3-0324 and R1 models as decision-making cores. The experiments demonstrated that the agent could autonomously complete mapping tasks from natural language using both local and tile-based data. Local map visualization experiments confirmed the agent's ability to reuse tools effectively in low-complexity scenarios. Tile-based map visualization experiments indicated the agent's capability in handling high-complexity scenarios involving multi-toolchain invocations. 
It accurately decomposed subtasks, assigned appropriate tools, and performed structured string-based input variable transmission or direct invocation without variables, all presented to users in a semi-transparent manner. Across forty repeated experiments, the V3 model outperformed the R1 model, achieving 6.56 times greater execution efficiency with an average processing speed of approximately 6.29 seconds per step, and demonstrated better modular adaptability with the LangChain agent framework. [Conclusions] The proposed construction method validates the feasibility of using DeepSeek-based agents for intelligent cartography. The V3 model exhibits strong potential in this field, with its performance (6.29 s/step) comparable to that of professional cartographers. The text-to-map intelligent agent significantly reduces the entry barrier for map creation, promotes the broader adoption of mapping tools in everyday use, and provides a valuable technical reference for integrating autonomous cartography with professional software platforms such as ArcGIS and QGIS.
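The decompose-then-dispatch pattern the agent uses can be sketched without an LLM: subtasks, each tagged with a tool category, are executed in order against a tool registry, piping each result into the next step. The category names, tools, and plan below are invented for illustration and are not the paper's five categories or its LangChain wiring.

```python
TOOL_REGISTRY = {}

def tool(category):
    """Decorator that registers a function under a tool category."""
    def register(fn):
        TOOL_REGISTRY[category] = fn
        return fn
    return register

@tool("load_data")
def load_local_layer(path):
    return f"layer<{path}>"          # placeholder for real data loading

@tool("symbolize")
def apply_symbology(layer, style):
    return f"{layer}+style:{style}"  # placeholder for real styling

def run_plan(plan):
    """Execute an ordered plan of (category, kwargs) steps, piping results.

    In the real agent, the LLM emits this plan from the user's instruction.
    """
    result = None
    for category, kwargs in plan:
        fn = TOOL_REGISTRY[category]
        result = fn(result, **kwargs) if result is not None else fn(**kwargs)
    return result

out = run_plan([("load_data", {"path": "roads.shp"}),
                ("symbolize", {"style": "highway"})])
```

The structured string passed between steps mirrors the "structured string-based input variable transmission" the abstract describes; an agent framework replaces the hand-written plan with model-generated ones.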

  • SU Zhiping, YANG Chengsheng, WANG Ziqian
    Journal of Geo-information Science. 2025, 27(4): 979-993. https://doi.org/10.12082/dqxxkx.2025.240614

    [Objectives] The influence of negative sample selection and machine learning models on landslide susceptibility evaluation cannot be overlooked. [Methods] To investigate the impact of these two factors on landslide susceptibility assessment, this study examines the Nujiang Valley section of the Nujiang River Basin. A weighted information quantity model was proposed to optimize negative sample selection. Thirteen influencing factors, including topography, land use, and average annual rainfall, were selected. Three machine learning models were employed: Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Gradient Boosting Decision Tree (GBDT). A comparative analysis of landslide susceptibility was conducted against traditional random sample selection methods. Additionally, the effect of rainfall factors on susceptibility classification was analyzed. [Results] The results indicate that: (1) The optimized negative sample selection improved landslide density by 0.0103, 0.0639, and 0.0040, respectively, for the three models. The AUC values increased by 0.033, 0.018, and 0.008, respectively. (2) Among the susceptibility evaluation models, the GBDT model performed best, improving accuracy by 3.8% and 1.7% compared to the SVM and CNN models, respectively. (3) Incorporating average monthly rainfall data for summer and winter (2019-2020) into the GBDT model revealed an increase in high and relatively high susceptibility zones during summer, particularly in the southern regions of Liuku Town and Shangjiang Town. [Conclusions] The optimization of negative samples based on the weighted information quantity model is reasonable and effective. As a landslide susceptibility evaluation model, the GBDT model is the most suitable for the disaster-prone environment of the Nujiang Valley, where precipitation significantly impacts landslide susceptibility.
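The information-quantity idea behind the negative-sample optimization can be sketched in its basic, unweighted form: each factor class gets a score ln(P(class | landslide) / P(class)), and negative samples are drawn from cells scoring low. This toy example is a simplification; the paper's weighted variant and its thirteen real factors are not reproduced here.

```python
import numpy as np

def information_value(factor_classes, landslide_mask):
    """Per-class information value ln(P(class|landslide) / P(class)) for one factor."""
    iv = {}
    n_total = factor_classes.size
    n_slide = landslide_mask.sum()
    for c in np.unique(factor_classes):
        in_c = factor_classes == c
        p_slide = (landslide_mask & in_c).sum() / n_slide
        p_all = in_c.sum() / n_total
        iv[c] = np.log(p_slide / p_all) if p_slide > 0 else -np.inf
    return iv

# Toy factor layer: class 1 hosts nearly all landslides, class 0 almost none.
classes = np.array([0] * 80 + [1] * 20)
slides = np.zeros(100, dtype=bool)
slides[85:95] = True          # ten landslides inside class 1
slides[5] = True              # a lone landslide inside class 0
iv = information_value(classes, slides)
scores = np.array([iv[c] for c in classes])
candidates = np.flatnonzero((scores < 0) & ~slides)  # pool for negative samples
```

With several factor layers, the per-cell scores would be combined (weighted, in the paper's model) before thresholding, and the selected negatives fed to SVM, CNN, or GBDT training.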

  • Journal of Geo-information Science. 2025, 27(4): 785-786.
  • WANG Kuang, KE Rihong, LI Shengnan, WANG Pu
    Journal of Geo-information Science. 2025, 27(4): 967-978. https://doi.org/10.12082/dqxxkx.2025.240586

    [Objectives] Revealing the structural characteristics of tourist flow networks is a prerequisite for achieving complementary advantages and coordinated development among attractions. [Methods] In this study, we employ methods such as travel chain extraction, social network analysis, and community detection to construct a research framework to analyze multi-scale tourist flow networks based on large-scale mobile phone data. The structural characteristics of the tourist flow network in Changsha are explored at microscopic, mesoscopic, and macroscopic scales. [Results] (1) Microscopic scale: The tourist flow network of Changsha shows a significant centralization trend, where a few core attractions such as Yuelu Mountain and Orange Island have great influence on the whole network. Only 33% of attractions show structural hole efficiency and effectiveness above average, while their constraint is below average, indicating prominent structural holes and limited overall connectivity and efficiency. (2) Mesoscopic scale: The tourist flows of Changsha are highly concentrated, showing obvious spatial clustering characteristics and forming six tourism communities. There are usually two core attractions in each community that drive tourists to visit the surrounding attractions. In addition, the development of tourism communities is unbalanced, with an outsized community centered on Yuelu Mountain and Orange Island. (3) Macroscopic scale: The spatial distribution of the tourist flow network presents the characteristics of single-core strong concentration and overall dispersion, showing a multi-layer structure with the city center as the core and spreading outwards. The global efficiency of the network is only 0.367, with some marginal attractions having poor accessibility. The core attractions exert limited "trickle-down" effects on marginal attractions.
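The global-efficiency figure reported above (0.367) follows the standard definition: the mean of 1/d(i, j) over all ordered node pairs, where d is the shortest-path distance. A minimal sketch on a toy attraction network, assuming an unweighted graph given as a 0/1 adjacency matrix (the real analysis uses the mobile-phone-derived flow network):

```python
import numpy as np

def global_efficiency(adj):
    """Global efficiency of an unweighted graph from a 0/1 adjacency matrix."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                                  # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    with np.errstate(divide="ignore"):
        inv = 1.0 / dist                                # unreachable pairs -> 0
    np.fill_diagonal(inv, 0.0)
    return inv.sum() / (n * (n - 1))

# Star network: one core attraction linked to four peripheral ones, the
# "single-core" pattern in miniature; leaf-to-leaf trips must pass the core.
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1
eff = global_efficiency(star)
```

Low values of this measure, as in Changsha, indicate that many attraction pairs are connected only through long detours via the core.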