[Significance] Street View Image-based Visual Place Recognition (SV-VPR) is a geographical location recognition technology that relies on visual feature information. Its core task is to infer and accurately determine unknown locations by analyzing the visual features of street view images. This technology must overcome challenges such as appearance changes under different environmental conditions (e.g., lighting differences between day and night, seasonal variations) and viewpoint differences (e.g., perspective deviations between vehicle-mounted cameras and satellite images). Accurate recognition is achieved by calculating image feature similarity, applying geometric constraints, and related methods. As an interdisciplinary field of computer vision and geographic information science, SV-VPR is closely related to visual positioning, image retrieval, SLAM, and related techniques. It has significant application value in areas such as UAV autonomous navigation, high-precision positioning for autonomous driving, construction of geographical boundaries in cyberspace, and integration of augmented reality environments. It is particularly advantageous in GPS-denied environments. [Analysis] This paper systematically reviews the research progress of visual place recognition based on street view images, covering the following aspects: First, the basic concepts and classifications of visual place recognition technologies are introduced. Second, the foundational principles and categorization methods specific to street view image-based visual place recognition are discussed in depth. Third, the key technologies in this field are analyzed in detail. Furthermore, relevant datasets for street view image-based visual place recognition are comprehensively reviewed. In addition, evaluation methods and metric systems used in this domain are summarized. Finally, potential future research directions for SV-VPR are explored. [Purpose] This review aims to provide researchers with a systematic overview of the technological development trajectory of SV-VPR, helping them quickly understand the current research landscape. It also offers a comparative analysis of key technologies and evaluation methods to support algorithm selection, and identifies emerging challenges and potential breakthrough areas to inspire innovative research.
[Objectives] To address the limitation that existing trajectory anomaly detection methods often fail to fully consider road network constraints, this study proposes a trajectory anomaly detection algorithm designed to effectively identify potential fraudulent behavior by taxi drivers during passenger pickup. [Methods] The algorithm first performs map matching, aligning trajectory data with the actual road network to obtain a sequence of path segments. Then, a two-stage clustering approach is applied: the matched trajectory paths are initially clustered to extract and expand core road segments, forming multiple core paths. Next, the algorithm calculates the similarity between different core paths and assigns highly similar paths to the same cluster, thereby generating multiple path clusters. Finally, a CostThreshold is computed based on each path cluster. The travel cost of each trajectory, calculated by combining travel time and distance costs, is compared against the corresponding CostThreshold to determine whether the trajectory is anomalous. [Results] Compared with traditional anomaly detection methods on real-world trajectory datasets, the proposed approach demonstrates superior performance in detecting anomalous trajectories. It achieves significantly lower runtime and improves detection accuracy by up to 9.03% compared to the STADCS method. The F1 score also improves considerably compared to the Two-Phase and ATDC methods, with maximum gains of 6.67% and 9.45%, respectively. [Conclusions] This paper presents a detection method that integrates road network constraints with two-stage clustering and travel cost evaluation. The method enhances detection accuracy and efficiency while reducing the false positive rate. It is well-suited for complex urban road networks, offering valuable support for vehicle trajectory data mining and traffic management decision-making, with significant practical value in fraud detection and related fields.
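As a rough illustration of the final cost-threshold judgement described in this abstract, the following Python sketch combines travel time and distance into a single travel cost and flags a trajectory whose cost exceeds its path cluster's CostThreshold. The weighted-sum cost and the mean-plus-k-standard-deviations threshold are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of the anomaly-judgement step (assumed formulation, not the paper's exact model).
import numpy as np

def travel_cost(time_s, dist_m, w_time=0.5, w_dist=0.5):
    """Combine travel-time and distance costs into a single scalar (assumed weighted sum)."""
    return w_time * time_s + w_dist * dist_m

def cost_threshold(cluster_costs, k=2.0):
    """CostThreshold for one path cluster: mean + k * std (an assumption for illustration)."""
    costs = np.asarray(cluster_costs, dtype=float)
    return costs.mean() + k * costs.std()

def is_anomalous(trajectory_cost, threshold):
    return trajectory_cost > threshold

# Toy usage: costs of normal trajectories within one path cluster
cluster_costs = [travel_cost(t, d) for t, d in [(600, 4000), (620, 4100), (590, 3900)]]
thr = cost_threshold(cluster_costs)
print(is_anomalous(travel_cost(900, 7000), thr))  # likely True: a detour-like trip
```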
[Objectives] The exponential growth in the complexity and scale of cyberspace networks presents significant challenges for visualization and analysis. Traditional visualization methods often struggle to effectively represent both the topological structures and geographical relationships among network elements, particularly when these elements are geographically constrained. This research addresses the critical need for advanced visualization techniques that can simultaneously preserve topological accuracy and geographical context while reducing visual clutter in large-scale network visualizations. [Methods] This study proposes a novel visualization approach for geographically constrained cyberspace point clusters based on Hierarchical Confluent Drawings. The methodology comprises four integrated components: (1) Backbone network construction using the Louvain community detection algorithm and betweenness centrality-based central node selection, which identifies densely connected communities and their representative nodes; (2) Confluent Drawing generation using the Power-Graph edge aggregation algorithm, which replaces multiple edges with bundled paths while preserving exact connectivity information; (3) Geographical layout optimization of bundled points using K-means clustering and centroid calculation, which maintains spatial distribution characteristics while reducing visual complexity; and (4) Hierarchical interactive visualization design, enabling multi-level exploration through on-demand expansion of community structures, attribute tooltips, and selective highlighting of network elements. [Results] Comprehensive experiments conducted on Hong Kong's IP routing network and a Location-Based Social Network (LBSN) demonstrate the superiority of our approach over conventional methods. Compared to direct topological visualization, Force-Directed Edge Bundling (FDEB), and Kernel Density Estimation Edge Bundling (KDEEB), our method achieves significant reductions in edge crossings, by 36.1% and 78.2%, respectively, while maintaining precise connectivity information. Our confluent drawing approach preserves connection distribution patterns more effectively than FDEB and KDEEB, which tend to create visual centers that deviate significantly from the original topology. The method proves particularly effective for visualizing geographically constrained scale-free networks, which are prevalent in cyberspace infrastructure and social network contexts. While our approach incurs slightly higher computational time, the spatial complexity remains comparable to other edge bundling techniques, with significantly improved visual clarity and structural representation. [Conclusions] The proposed hierarchical confluent drawing method offers an innovative and practical solution for visualizing complex, geographically constrained networks. By effectively balancing topological clarity with geographical context, our approach enables analysts to identify key network structures, information flow patterns, and geographical relationships more intuitively. The integration of interactive exploration capabilities further enhances the method's utility for both overview analysis and detailed investigation of specific network regions. This research contributes to the growing field of geospatial network visualization and provides valuable tools for cybersecurity analysis, social network research, and infrastructure planning, where both connectivity patterns and geographical constraints are critical considerations.
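A minimal sketch of step (1), backbone network construction, built with networkx for illustration only. It assumes the representative node of each Louvain community is the one with the highest betweenness centrality and that representatives are linked whenever any edge crosses between their communities; the paper's actual construction and the Power-Graph aggregation step are not reproduced here.

```python
# Sketch of backbone construction: Louvain communities + betweenness-based central nodes.
import networkx as nx

def backbone(G):
    communities = nx.community.louvain_communities(G, seed=42)
    bc = nx.betweenness_centrality(G)
    centers = {i: max(c, key=bc.get) for i, c in enumerate(communities)}  # most central node per community
    node2comm = {n: i for i, c in enumerate(communities) for n in c}
    B = nx.Graph()
    B.add_nodes_from(centers.values())
    # Connect representatives whenever any original edge crosses between their communities
    for u, v in G.edges():
        cu, cv = node2comm[u], node2comm[v]
        if cu != cv:
            B.add_edge(centers[cu], centers[cv])
    return B

G = nx.karate_club_graph()
print(backbone(G).edges())
```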
[Objectives] Existing map service access control methods predominantly rely on static permission configurations, lacking dynamic spatiotemporal constraints that account for contextual information and user access environments. To address this limitation, this study proposes a multi-constrained access control mechanism for vector map services that integrates virtual links. Its core lies in constructing spatiotemporally coupled, discretized access credentials, which serve as instantaneous, restricted, and auditable permission carriers through a multi-constraint binding mechanism. [Methods] Firstly, IP address-based geographic restrictions ensure that only authorized users within designated regions can access corresponding vector map data. Secondly, predefined temporal windows verify request validity, allowing access exclusively during permitted time periods. Finally, through dynamic recognition of user roles and vector layer security levels, the system achieves granular permission control over accessible map layers. [Results] In the simulation experiment, this study tested the access control performance for vector data across three typical user roles, multiple preset access time windows, and specific IP address ranges. The experimental results show that, compared to the static permission configuration of traditional Role-Based Access Control (RBAC) schemes, the proposed mechanism ensures that permission credentials are only valid within specific time windows and authorized IP ranges via a one-time virtual access link mechanism. This effectively mitigates the risk of unauthorized sharing of access credentials. Moreover, the mechanism dynamically adjusts spatiotemporal constraints based on changes in users' access environments and integrates the relationship between user roles and data layer confidentiality levels to precisely control access to vector data. [Conclusions] This study demonstrates that the proposed multi-constrained access control scheme can meet the complex access control requirements of vector map services, offering a feasible solution for enhancing permission flexibility and fine-grained management in traditional models.
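A hedged sketch of the multi-constraint check implied by the Methods section: a request is granted only if its source IP falls in an authorized range, its timestamp falls inside a permitted window, and the user's role clearance covers the layer's security level. The role levels, network range, and time window below are placeholders, and the virtual-link (one-time credential) mechanism itself is not shown.

```python
# Illustrative three-way constraint check (IP range, time window, role vs. layer level).
import ipaddress
from datetime import datetime, time

ROLE_CLEARANCE = {"public": 1, "internal": 2, "admin": 3}  # hypothetical role levels

def access_allowed(ip, when, role, layer_level,
                   allowed_net="203.0.113.0/24",
                   window=(time(8, 0), time(18, 0))):
    in_region = ipaddress.ip_address(ip) in ipaddress.ip_network(allowed_net)
    in_window = window[0] <= when.time() <= window[1]
    cleared = ROLE_CLEARANCE.get(role, 0) >= layer_level
    return in_region and in_window and cleared

print(access_allowed("203.0.113.7", datetime(2024, 5, 1, 9, 30), "internal", 2))  # True
print(access_allowed("198.51.100.9", datetime(2024, 5, 1, 9, 30), "admin", 3))    # False: IP outside range
```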
[Objectives] Sustainable development is an important issue for countries worldwide, encompassing key aspects such as sustainable transportation systems and inclusive, sustainable urbanization. As a crucial component of urban public service infrastructure, the public transportation network serves as a cornerstone of a city's stable operation, with the distribution of its stops and routes directly influencing residents' travel patterns. However, existing studies mainly focus on accessibility analysis, site selection optimization, and spatial coupling with factors such as population and land use, while lacking in-depth optimization approaches and clear mechanisms that address spatial heterogeneity and facility redundancy. [Methods] Taking Beijing as a case study, with a focus on Dongcheng and Xicheng Districts, this study constructs a system of influencing factors based on multi-source data, including public transportation networks, topography, and economic indicators, and employs the XGBoost machine learning method to reveal the impact weights of these driving factors on the distribution of bus stops. On this basis, a mathematical model incorporating stop redundancy is proposed to optimize the spatial layout of upstream and downstream stops, producing a spatial optimization map of bus stops in Beijing. [Results] The findings indicate that: (1) There is an imbalance in the distribution of public transportation facilities in Beijing, with the proportion of the population having convenient access to public transportation differing by more than 30% between central and peripheral urban areas. (2) Among the 19 influencing factors, population density is the key driving factor, accounting for 27.77%, while the numbers of scenic spots and parking facilities have minimal impact, with feature importance scores below 0.5%. (3) Compared to the p-median model, the proposed redundancy optimization model significantly reduces the redundancy of optimized stops while maintaining performance in minimizing weighted distance. The optimized stop layout is more evenly distributed along existing bus routes. [Conclusions] These findings provide valuable reference and theoretical support for the layout of bus stops and other public service facilities, contributing to the efficient utilization of public resources and promoting sustainable urban development.
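To illustrate the driver-weight step, the sketch below fits an XGBoost regressor on synthetic stand-in factors and reads the relative feature importances; the study's 19 real factors, data, and tuning are not reproduced.

```python
# Illustrative feature-importance extraction with XGBoost on synthetic data.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))                                       # stand-ins for influencing factors
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)    # target dominated by the first factor

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)
for name, w in zip(["pop_density", "road_density", "poi_count", "slope"], model.feature_importances_):
    print(f"{name}: {w:.2%}")   # relative impact weight of each (hypothetical) driver
```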
[Objectives] To enhance the safety of highway merging zones and uncover the mechanisms underlying traffic conflicts, this study investigates traffic conflict prediction and contributing factors in merging areas. [Methods] A multi-dimensional spatiotemporal feature database was constructed by integrating high-precision exiD trajectory data with Lanelet2 HD maps. Mutual Information (MI), XGBoost, and GPT algorithms were employed to generate multi-perspective independent feature sets. A Residual Convolutional Neural Network (ResCNN) was then developed for traffic conflict prediction, with predictive outcomes visualized using a confusion matrix. Performance metrics including Accuracy and Recall were used to compare ResCNN with CNN, AttCNN, ConvXGB, Transformer, and GraphSAGE models across different feature sets. The Friedman-Nemenyi test was conducted to assess the statistical significance of model performance differences. The Area Under the Curve (AUC) was used to evaluate conflict detection capability and determine the optimal feature set. The SHAP (SHapley Additive exPlanations) algorithm was applied to analyze both single-feature contributions and dual-feature interaction effects on traffic conflicts. [Results] Visualization of the prediction results via confusion matrices demonstrated that ResCNN accurately identified the majority of conflict events with low misclassification rates. In a comprehensive performance evaluation, ResCNN outperformed all comparative models across four feature sets, with all metrics exceeding 93.5%. Under the GPT&XGB_selector feature set, it achieved near-theoretical-limit performance, with an accuracy of 99.27% and a recall of 99.03%. Significance testing confirmed ResCNN's statistically superior performance (p-value far below the significance level), with its average rank difference exceeding critical values in most comparisons. In detection capability validation, ResCNN's ROC curve showed the steepest ascent across all feature sets. Interpretability analysis revealed: (1) Single-feature contributions highlighted eight key factors (e.g., time headway) with distinct influence patterns; (2) Pairwise-feature interactions uncovered complex relationships between variables such as time headway and speed difference. [Conclusions] ResCNN demonstrates statistically significant advantages over comparative models, accurately distinguishing between conflict and non-conflict events while maintaining adaptability to different feature sets. The model effectively addresses both prediction and mechanistic analysis of traffic conflicts in highway merging zones, offering a novel solution for conflict prediction in intelligent transportation systems.
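A small sketch of the evaluation step only: computing the confusion matrix, Accuracy, Recall, and AUC for binary conflict / non-conflict predictions with scikit-learn. The dummy labels stand in for ResCNN outputs; the network itself, the Friedman-Nemenyi test, and the SHAP analysis are not shown.

```python
# Evaluation-metric sketch with placeholder predictions (1 = conflict event).
from sklearn.metrics import confusion_matrix, accuracy_score, recall_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.95, 0.3]     # model-predicted conflict probabilities
y_pred = [int(p >= 0.5) for p in y_prob]

print(confusion_matrix(y_true, y_pred))
print("Accuracy:", accuracy_score(y_true, y_pred))
print("Recall:  ", recall_score(y_true, y_pred))
print("AUC:     ", roc_auc_score(y_true, y_prob))
```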
[Objectives] With the rapid advancement of information technology, the volume of data on the internet has grown exponentially, making it increasingly difficult for traditional information retrieval methods to effectively access key information and uncover the underlying associations among data. In this context, knowledge graphs have emerged as a powerful technology for organizing and managing complex information. By providing structured representation, semantic association, and logical reasoning capabilities, knowledge graphs effectively address the problems of information fragmentation and structural disorganization, thereby offering a solid foundation for knowledge discovery, intelligent inference, and decision support. However, as a crucial component of knowledge graph construction, information extraction continues to face numerous challenges in practical applications. These challenges are particularly pronounced in specialized domains such as gold mining, where overlapping entities, nested structures, and complex relational patterns are common. Moreover, the lack of efficient and automated construction pipelines further exacerbates the complexity and inefficiency of building domain-specific knowledge graphs. [Methods] To address these issues, this study proposes an information extraction approach based on the PFNA (Partition Filter Network with Attention mechanism) model, specifically designed for entity and relation extraction tasks in the gold mining domain. The proposed model incorporates an attention mechanism to dynamically weight input features, thereby enhancing its ability to capture complex entities and semantic relationships. Additionally, by integrating a domain-specific word embedding enhancement strategy, the model improves its capability to identify and represent specialized terminology and intricate patterns within the domain. [Results] Experimental results on a gold mining dataset demonstrate the superiority of the proposed method. The model achieves an F1 score improvement of 6.50% to 50.42% for complex entity recognition and 13.15% to 58.54% for complex relation extraction, significantly outperforming several mainstream baseline models. These results strongly validate the effectiveness and advancement of the proposed method in the context of knowledge extraction in the gold mining field. [Conclusions] Finally, by leveraging the Neo4j graph database for entity and relation storage and integrating a visualization interface, the constructed knowledge graph system is successfully deployed. This system provides structured and systematic knowledge support for intelligent decision-making, resource management, and domain-specific knowledge services in the gold mining industry.
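A hedged sketch of the final storage step mentioned in the Conclusions: writing extracted (head, relation, tail) triples into Neo4j with the official Python driver. The connection details, entity label, and example triples are placeholders, and a running Neo4j instance is assumed.

```python
# Storing extraction results as a graph in Neo4j (placeholder credentials and triples).
from neo4j import GraphDatabase

triples = [("Jiaojia deposit", "LOCATED_IN", "Jiaodong Peninsula"),
           ("Jiaojia deposit", "HOSTED_BY", "Linglong granite")]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for head, rel, tail in triples:
        # Relationship types cannot be parameterized in Cypher, so the type is interpolated.
        session.run(
            "MERGE (h:Entity {name: $h}) "
            "MERGE (t:Entity {name: $t}) "
            "MERGE (h)-[:%s]->(t)" % rel,
            h=head, t=tail,
        )
driver.close()
```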
[Objectives] As one of the main data types in geographic information, generating DEMs (Digital Elevation Models) that ensure data consistency and meet diverse needs has been a key area of research in cartography. Currently, existing generalization methods include spatial interpolation, filtering, structured generalization, and sparse sampling. However, these methods often struggle with incomplete preservation of geomorphological features and over-generalization of certain terrain elements. [Methods] To address these issues, this paper proposes a DEM generalization method based on curvature wavelet transform, which integrates the characteristics of planar curvature and wavelet analysis. First, the planar curvature of the DEM data is calculated to emphasize local terrain geometry, simplifying terrain details while enhancing the clarity and completeness of features for subsequent wavelet analysis. This improves geomorphological feature extraction and enhances the accuracy of wavelet transform in multi-scale terrain analysis. Next, wavelet decomposition is applied to obtain low-frequency and high-frequency components. The low-frequency components are used to identify geomorphological feature points, with a square-root model introduced to establish selection criteria. By integrating the number of feature points with the scale span through the square-root model, the number of selected feature points adjusts dynamically according to the generalization level, ensuring a rational and standardized selection process. Finally, spatial interpolation is conducted based on the selected feature points. The results of Kriging, Inverse Distance Weighting (IDW), spline interpolation, and natural neighbor interpolation are compared in terms of the mean and standard deviation of elevation values, as well as contour analysis. Kriging, which demonstrates the best performance, is selected as the final interpolation method. The interpolated results are then combined with the high-frequency coefficients to complete the DEM generalization. To evaluate the applicability of the proposed method, cartographic generalization is performed on data from three representative landform types—mountains, hills, and plains—evaluating the method's effectiveness in preserving major terrain features across different datasets. [Results] The generalization results are satisfactory for all three landform types. In particular, mountain DEMs at scales of 1:100 000, 1:250 000, and 1:500 000, derived from 1:50 000 data, exhibit strong performance in terms of elevation statistics (mean and standard deviation), 3D visualization, and geomorphic feature preservation. Compared with the structured generalization method, the proposed approach reduces MAE and RMSE by 13% and 34%, respectively, demonstrating superior generalization accuracy. [Conclusions] By integrating planar curvature with wavelet transform, the proposed method effectively emphasizes local terrain geometry, enhancing the accuracy and applicability of wavelet transform in multi-scale terrain analysis. It demonstrates strong adaptability to various landform types and excels at preserving key topographic features during generalization.
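The sketch below illustrates two of the described steps in simplified form: a single-level 2D wavelet decomposition of a synthetic DEM into low- and high-frequency components with PyWavelets, and a square-root (radical-law-style) rule relating the number of retained feature points to the scale span. The paper's exact selection criteria and curvature computation may differ.

```python
# Wavelet decomposition of a stand-in DEM plus a square-root feature-point rule.
import numpy as np
import pywt

dem = np.random.default_rng(1).random((128, 128)) * 500.0   # synthetic DEM (elevations in m)

cA, (cH, cV, cD) = pywt.dwt2(dem, "haar")   # cA: low-frequency approximation; cH/cV/cD: high-frequency detail

def n_selected(n_source, scale_source, scale_target):
    """Square-root model: retained feature points shrink with the scale span (assumed form)."""
    return int(round(n_source * np.sqrt(scale_source / scale_target)))

print(cA.shape, n_selected(2000, 50_000, 250_000))   # e.g. generalizing from 1:50 000 to 1:250 000
```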
[Objectives] Net Primary Productivity (NPP) is a key indicator for evaluating carbon sinks in terrestrial ecosystems and is typically estimated using ecological models at both global and regional scales. Although remote sensing estimation models offer high accuracy, they are limited in projecting NPP and assessing long-term ecological responses to climate change. Process-based models can compensate for these limitations but often suffer from low spatial resolution. [Methods] In this study, we propose a moving window quantile mapping method for spatial downscaling of annual NPP generated by a process-based model (CEVSA), using high-resolution texture units derived from a remote sensing-based model (CASA). First, CEVSA-NPP was adjusted by extracting texture features from CASA-NPP data (2001-2010). A moving window was then established to apply quantile mapping within a localized spatial range. The results were subsequently validated using data from 2011 to 2019. [Results] Using Qinghai Province as a case study, the proposed method significantly improved the accuracy of downscaled NPP. It reduced the Root Mean Square Error (RMSE) by 57% compared to the original RMSE between CEVSA-NPP and CASA-NPP, while largely preserving the original trend direction and interannual variation of the CEVSA model outputs. When texture features from CASA-NPP were applied to CEVSA-NPP, forest ecosystems showed relatively high standard deviations in texture units and correspondingly high uncertainty in simulated NPP. In contrast, uncertainty was lower for desert and aquatic/wetland ecosystems. Compared to the global quantile mapping method, the moving window approach further reduced RMSE, with the optimal performance observed at a 3 km window size. After applying the downscaling method, the cropland and settlement ecosystems showed the highest RMSE, followed by forests, while deserts, grasslands, and aquatic/wetland ecosystems had relatively lower RMSE values. [Conclusions] The proposed downscaling method effectively captures spatial heterogeneity while preserving the original trend direction and interannual variability of process-based model simulations. It offers a promising approach for enhancing the spatial resolution of NPP projections from process-based models, thereby supporting fine-scale regional ecological assessments and evaluations of ecosystem responses to extreme climate events.
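A minimal sketch of quantile mapping inside one moving window, assuming the empirical quantile of a CEVSA value within the local CEVSA distribution is mapped onto the corresponding quantile of the local CASA distribution; window traversal, texture units, and edge handling in the actual method are omitted.

```python
# Quantile mapping within a single local window (illustrative values).
import numpy as np

def quantile_map(src_window, ref_window, value):
    """Map `value` from the source (CEVSA) distribution to the reference (CASA) distribution."""
    src = np.sort(np.ravel(src_window))
    ref = np.sort(np.ravel(ref_window))
    q = np.searchsorted(src, value, side="right") / src.size   # empirical quantile in the source window
    return np.quantile(ref, min(q, 1.0))

cevsa_win = np.array([[120, 130, 125], [118, 140, 135], [122, 128, 133]], float)  # coarse NPP window
casa_win  = np.array([[150, 180, 160], [145, 200, 190], [155, 165, 185]], float)  # fine NPP window
print(quantile_map(cevsa_win, casa_win, 130.0))
```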
[Significance] Multimodal remote sensing image matching has become a fundamental task in integrated Earth observation, enabling precise spatial alignment across heterogeneous image sources. [Progress] As the diversity of sensing modalities, acquisition geometries, and temporal conditions increases, traditional matching frameworks have proven inadequate for capturing complex variations in radiometric responses, geometric configurations, and semantic representations. This technological gap has driven a significant paradigm shift from handcrafted feature engineering to deep learning-based solutions, which now form the core of current research and application development. This paper provides a comprehensive and structured review of recent advances in deep learning methods for multimodal remote sensing image matching, with an emphasis on the evolution of methodological paradigms and technical frameworks. It establishes a clear dual-path classification: the module-level approach and the end-to-end approach. The former selectively replaces or enhances individual components of traditional pipelines, such as feature encoding or similarity estimation, using neural network modules. The latter integrates the entire matching process into a unified network architecture, enabling joint optimization of feature learning, transformation modeling, and correspondence inference within a closed loop. This progression reflects the field's transition from modular adaptation to holistic modeling, revealing a deeper integration of data-driven representation learning with geometric reasoning. The review further examines the development of architectural strategies supporting this evolution, including attention mechanisms, graph-based structures, hierarchical feature fusion, and modality-bridging transformations. These innovations contribute to improved robustness, semantic consistency, and adaptability across diverse matching scenarios. Recent trends also demonstrate a growing reliance on pretrained vision foundation models, which provide transferable feature spaces and reduce the dependence on large-scale labeled datasets. In addition to summarizing technical advancements, the paper analyzes representative datasets, performance evaluation strategies, and the current challenges that constrain real-world deployment. These include limited data availability, weak cross-scene generalization, computational inefficiency, and insufficient interpretability. [Prospect] By synthesizing methodological progress with practical demands, the review identifies key directions for future research, including the design of modality-invariant representations, physically-informed neural architectures, and lightweight solutions tailored for scalable, real-time image registration in complex operational environments.
[Objectives] Change detection is a critical and challenging task in remote sensing image analysis, playing an increasingly important role in Earth observation. Although deep learning-based change detection techniques have achieved promising results, issues such as false detection and missed detection persist, especially in detailed and edge regions. [Methods] To address these challenges, this paper proposes a Multi-Scale Wavelet Transform Attention Network (WTANet) that integrates spatial-domain contextual information with frequency-domain high-frequency details. By leveraging complementary features from both spatial and frequency domains, and guiding the network through multi-scale feature differences, WTANet enhances the model's ability to perceive subtle changes from both global semantic and local detail perspectives. WTANet introduces the Detail Capture Wavelet Module (DCWM), which combines the frequency-domain properties of wavelet transforms with attention mechanisms to effectively extract coarse-to-fine information from remote sensing images. This helps recover high-frequency details typically lost due to convolution or pooling operations, thereby improving the network's capability to detect fine-grained changes. Additionally, the Feature Difference Enhancement Decoder (FDED) emphasizes differences between multi-scale features, enriching the feature representations and boosting the model's performance in complex scenarios. [Results] Experimental results on three high-resolution remote sensing change detection datasets, CDD, LEVIR-CD, and S2Looking, demonstrate that WTANet achieves F1 scores of 97.52%, 91.24%, and 65.43%, respectively. Compared with representative change detection models such as SNUNet and BIT, WTANet exhibits superior performance in detail and edge detection. [Conclusions] The WTANet proposed in this study effectively improves the accuracy of remote sensing image change detection by integrating spatial and frequency domain information. This approach not only provides new insights for future research in remote sensing image analysis, but also offers valuable technical references for urban planning, environmental monitoring, and related fields.
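As an illustration of the kind of frequency-domain cue DCWM is designed to exploit (not the WTANet architecture itself), the sketch below isolates high-frequency wavelet detail in two co-registered images and takes their difference as a simple change indicator.

```python
# High-frequency wavelet detail as a rough change cue between two image dates.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t1 = rng.random((64, 64))
t2 = t1.copy()
t2[20:30, 20:30] += 0.8          # simulated local change between the two dates

def high_freq(img):
    _, (cH, cV, cD) = pywt.dwt2(img, "haar")   # keep only the high-frequency sub-bands
    return np.abs(cH) + np.abs(cV) + np.abs(cD)

change_cue = np.abs(high_freq(t2) - high_freq(t1))
print(change_cue.max(), change_cue.mean())      # the changed block stands out in the cue map
```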
[Objectives] Accurate extraction of surface water morphology is fundamental to applying remote sensing in hydrological monitoring. However, low spatial resolution and the narrow width of small and medium-sized rivers severely constrain its effectiveness in both real-time monitoring and historical data reconstruction. [Methods] To address this issue, this study proposes a method that first identifies mixed pixels along the land-water boundary based on their similarity to an initial water mask, and subsequently determines the proportion and spatial distribution of water within these pixels using spectral unmixing and a spatial attraction model. Landsat 8 OLI imagery covering five representative river reaches near hydrological stations in the upper Jinsha River, including the Shigu, Benzilan, Batang, Gangtuo, and Zhimenda reaches, was used to validate the method. Four geometric metrics—Jaccard index, intersection-over-union ratio, Boundary Offset Distance (BOD), and Average Path Distance (APD)—were employed to evaluate the method's performance under diverse landscape conditions. [Results] The results demonstrate that: (1) The improved mixed-pixel water extraction approach significantly reduces commission and omission errors and more accurately restores the true boundaries and area of water bodies. (2) When the traditional extraction methods (NDWI, DT, SVM, and RF) were reconstructed into their mixed-pixel-based counterparts (NDWI_Mixed, DT_Mixed, SVM_Mixed, and RF_Mixed) using the proposed approach, performance improved consistently across all five representative river reaches. The Jaccard index increased from 0.81 to 0.88, the intersection-over-union ratio reached 93%, while the Boundary Offset Distance (BOD) and Average Path Distance (APD) were reduced from 13.4 m to 7.6 m and from 23.4 m to 11.2 m, respectively—resulting in a 52% improvement in shape consistency. (3) The method exhibited stable performance in river reaches with regular banks and homogeneous textures, such as Zhimenda, Benzilan, and Gangtuo, while more pronounced variations occurred in complex regions like Batang and Shigu. Among the four tested approaches, NDWI_Mixed and RF_Mixed demonstrated greater overall robustness, whereas SVM_Mixed and DT_Mixed were more sensitive to spectral interference. [Conclusions] Overall, the proposed method substantially improves the delineation accuracy of river boundaries in medium- and low-resolution imagery, offering promising potential for hydrological inversion in historical or data-scarce regions.
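A sketch of the water-fraction idea via linear spectral unmixing: a mixed pixel's spectrum is modeled as a non-negative combination of water and land endmember spectra and normalized to fractions. The endmember and pixel values are invented for illustration, and the spatial attraction step that locates water inside the pixel is not shown.

```python
# Linear spectral unmixing of a mixed pixel into water/land fractions (toy spectra).
import numpy as np
from scipy.optimize import nnls

endmembers = np.array([[0.05, 0.03, 0.02, 0.01],    # water reflectance (4 bands)
                       [0.10, 0.15, 0.25, 0.30]]).T  # land reflectance (4 bands)
pixel = np.array([0.07, 0.08, 0.11, 0.13])           # observed mixed-pixel spectrum

fractions, _ = nnls(endmembers, pixel)               # non-negative least squares
fractions /= fractions.sum()                         # normalize to sum to 1
print(f"water fraction approx. {fractions[0]:.2f}, land fraction approx. {fractions[1]:.2f}")
```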
[Objectives] Land Use and Land Cover (LULC) plays a crucial role in shaping surface environments and ecological processes. Among various land cover types, built-up land, representing the dominant form of anthropogenic surface modification, has expanded rapidly in recent decades, exerting significant impacts on regional ecosystems while attracting increasing attention from multiple disciplines. This study aims to improve the spatial accuracy of built-up land mapping by evaluating and integrating multiple LULC datasets, thereby supporting research on regional sustainable development. [Methods] Taking the Bohai Rim region as the study area, seven medium- to high-resolution LULC products from domestic and international sources were initially selected. Based on a comparative analysis of total built-up area and spatial distribution patterns, five datasets (ESA2020, CoLUCC2020, GlobeLand2020, CLCD2023, and GLC_FCS2022) were chosen for further evaluation and integration. Consistency analysis was conducted to assess the classification performance of each dataset, and a multi-criteria evaluation combined with threshold-based filtering was employed for multi-source data fusion. [Results] Evaluation results indicated that the ESA2020, CoLUCC2020, GlobeLand2020, and GLC_FCS2022 datasets exhibit relatively high classification accuracy for built-up land, while the CLCD2023 dataset performs less satisfactorily. The fused product achieved an overall accuracy of 93.51% and a Kappa coefficient of 0.7455, demonstrating notable improvements over any individual dataset. [Conclusions] The proposed fusion method effectively overcomes the limitations of single-source data by leveraging the complementary strengths of multiple datasets. It provides a robust methodological foundation for regional LULC data integration and offers valuable data support for sustainable development research in the Bohai Rim and similar regions.
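A simplified sketch of the fusion and evaluation idea: counting agreement across five binary built-up masks with a vote threshold, then computing overall accuracy and the Kappa coefficient against reference samples. The study's multi-criteria weighting is reduced here to equal votes, and the reference mask is a placeholder.

```python
# Threshold-based vote fusion of five built-up masks, with OA and Kappa evaluation.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

masks = np.random.default_rng(3).integers(0, 2, size=(5, 1000))  # 5 datasets x 1000 pixels (0/1)
votes = masks.sum(axis=0)
fused = (votes >= 3).astype(int)          # pixel labeled built-up if at least 3 datasets agree

reference = masks[0]                      # placeholder for independent validation samples
print("OA:", accuracy_score(reference, fused), "Kappa:", cohen_kappa_score(reference, fused))
```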
[Objectives] This study aims to investigate the spatiotemporal evolution characteristics and driving mechanisms of landslide evolution in the Three Gorges Reservoir Area (TGRA) under compound extreme weather events in 2022, characterized by severe drought in the Yangtze River Basin and localized heavy rainfall within the reservoir area. It also seeks to address the knowledge gap in understanding landslide evolution under extreme climatic conditions. [Methods] Focusing on the Zigui-Fengjie section, this study utilized Sentinel-1 SAR data and the Small Baseline Subset (SBAS) InSAR technique to monitor surface deformation and identify active landslides. A dynamic evaluation of landslide susceptibility was conducted combining the information value model and SBAS-InSAR results. Based on reservoir water level fluctuations, the study period was divided into two intervals: normal weather (July 2020 to July 2022) and extreme weather (July 2022 to September 2023), to comparatively analyze landslide evolution patterns and driving mechanisms. [Results] The results are as follows: (1) A total of 136 active landslides were identified. The most favorable geomorphic conditions for landslide development included slopes of 10°-30°, southeasterly to northwesterly orientations, elevations of 100~400 m, distances to rivers less than 200 m, and distributions mainly in clastic and mixed clastic-carbonate rock areas. Many landslides were located in rainfed farmland within 100 m of roads. (2) The combination of InSAR technology with traditional landslide susceptibility assessment models enabled dynamic assessment. The method can be updated synchronously with InSAR deformation data to reflect the current state of landslide evolution in a timely manner. (3) Under extreme weather conditions, landslide risks in the study area increased significantly, while the relatively low water level of the Three Gorges Reservoir had no significant negative impact on reservoir bank stability. (4) Precipitation was identified as the primary driver of dynamic landslide susceptibility evolution, with the susceptibility-precipitation response varying considerably across regions with different lithologies. [Conclusions] By integrating SBAS-InSAR time-series deformation monitoring with the information value model, this study reveals the spatiotemporal variability of landslide risk under extreme weather conditions. It addresses critical gaps in understanding landslide evolution mechanisms in the TGRA and provides a scientific foundation for landslide monitoring, early warning, and risk management.
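A minimal sketch of the information value model underlying the susceptibility assessment: for each class of a conditioning factor, IV = ln((landslide cells in class / total landslide cells) / (cells in class / total cells)), and per-cell susceptibility is the sum of IVs across factors. The counts below are illustrative only; the SBAS-InSAR coupling is not shown.

```python
# Information value (IV) for factor classes, summed to a susceptibility score.
import numpy as np

def information_value(landslide_in_class, landslide_total, cells_in_class, cells_total):
    """IV = ln of the ratio between landslide density in the class and overall cell density."""
    return np.log((landslide_in_class / landslide_total) / (cells_in_class / cells_total))

iv_slope = information_value(80, 136, 40_000, 120_000)   # e.g. slope class 10-30 degrees
iv_river = information_value(60, 136, 25_000, 120_000)   # e.g. distance-to-river class < 200 m
print(iv_slope + iv_river)    # higher summed IV indicates higher susceptibility
```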
[Objectives] Population big data, characterized by large sample sizes and high spatiotemporal completeness, has become an essential foundational dataset for research in demography, population geography, and spatial population studies. Despite its widespread application, effective calibration methods for population big data remain lacking. Quantitative research, in particular, requires accurate and reliable population big data. However, common mathematical models struggle to precisely and realistically describe the relationship between big data and the Seventh National Population Census. [Methods] This paper proposes a method to calibrate population big data quantities by using the legally authoritative Seventh National Population Census data as an anchor point, integrating statistical data with 2020 Baidu population big data at a spatial level. Based on the mathematical relationship between population big data and official statistics, an operations-research-based optimization model is constructed to obtain the globally optimal deviation values for calibration. Using resident population data from the Seventh National Population Census for Hunan Province as the anchor point, the method calibrates the 2020 Baidu resident population big data and conducts two validation procedures. [Results] The results show that the deviation ratio between the calibrated resident population big data for Hunan Province and the census data is -1.01% (an improvement of 25.87%), with city-level deviation ratios ranging from -2.05% to +0.92% and county-level ratios from -2.06% to +1.99%, without altering the pre-calibration deviation trends. Validation against National Bureau of Statistics data indicates that the deviation ratio for calculated total urban population ranges from -2.7% to 1.7%. Validation against village-reported data in the "Green Heart" region at the intersection of Changsha, Zhuzhou, and Xiangtan cities shows a deviation ratio of 0.47% between the calculated resident population and the village-reported figures. [Conclusions] The calibration and validation results fully demonstrate the effectiveness of the proposed method, offering a viable approach for estimating population figures in non-census years and generating spatially distributed population datasets based on big data. This method can be applied to calibrate population big data from any provider and can be extended beyond resident population figures to calibrate gender ratios, age structures, working populations, mobile populations, OD flow data, and population profiles in big data.
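A hedged sketch of the calibration idea: choose per-unit deviation corrections that make the big-data total match the census anchor while keeping the total adjustment as small as possible. The least-squares objective and the toy numbers are assumptions for illustration; the paper's operations-research formulation may differ.

```python
# Constrained optimization: calibrate big-data counts to a census anchor with minimal adjustment.
import numpy as np
from scipy.optimize import minimize

bigdata = np.array([95.0, 210.0, 48.0])   # big-data resident population by unit (10k persons, toy values)
census_total = 360.0                       # census anchor for the whole region

res = minimize(lambda d: np.sum(d**2),     # prefer the smallest overall deviation corrections
               x0=np.zeros_like(bigdata),
               constraints=[{"type": "eq",
                             "fun": lambda d: np.sum(bigdata + d) - census_total}])
calibrated = bigdata + res.x
print(calibrated, calibrated.sum())        # calibrated figures now sum to the census anchor
```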