[Objectives] This study addresses the critical challenges in typhoon disaster knowledge services, which are often hindered by "massive data, scarce knowledge, and limited services." The core objective is to rapidly distill actionable knowledge from vast datasets to enhance disaster management efficacy and mitigate typhoon-related impacts. Large Language Models (LLMs), renowned for their superior performance in natural language processing, are leveraged to deeply mine disaster-related information and provide robust support for advanced knowledge services. [Methods] This research establishes a typhoon disaster knowledge service framework encompassing three layers: data, knowledge, and service. [Results] For the data-to-knowledge layer, an LLM-driven (Qwen2.5-Max) automated method for constructing typhoon disaster Knowledge Graphs (KGs) is proposed. This method first introduces a multi-level typhoon disaster knowledge representation model that integrates spatiotemporal characteristics and disaster impact mechanisms. A specialized training dataset is curated, incorporating typhoon-related texts with explicit temporal and spatial attributes. By adopting a "pre-training + fine-tuning" paradigm, the framework efficiently transforms raw disaster data into structured knowledge. For the knowledge-to-service layer, an LLM-based intelligent question-answering system is developed. Utilizing the constructed typhoon disaster KG, this system employs Graph Retrieval-Augmented Generation (GraphRAG) to retrieve contextually relevant knowledge from the graph and generate user-specific disaster prevention and mitigation guidance. This approach ensures seamless conversion of structured knowledge into practical services, such as personalized evacuation plans and resource allocation strategies. [Conclusions] The study highlights the transformative potential of LLMs in typhoon disaster management and lays a foundation for integrating LLMs with geospatial technologies. 
This interdisciplinary synergy advances Geographic Artificial Intelligence (GeoAI) and paves the way for innovative applications in disaster service.
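The graph-retrieval step of a GraphRAG pipeline can be sketched in a few lines: entities mentioned in a user query seed a breadth-first expansion over the knowledge graph, and the retrieved triples become the context for the LLM prompt. The triples, entity names, and two-hop expansion below are invented for illustration and are not the authors' typhoon KG.

```python
# Toy KG triples; names are hypothetical, not the paper's actual graph.
triples = [
    ("Typhoon Haikui", "landfall_in", "Fujian"),
    ("Typhoon Haikui", "max_wind_speed", "42 m/s"),
    ("Fujian", "issued", "Red Rainstorm Warning"),
    ("Red Rainstorm Warning", "advises", "suspend outdoor activity"),
]

def retrieve_subgraph(query_entities, triples, hops=2):
    """Collect triples within `hops` of the query entities (breadth-first)."""
    frontier, selected = set(query_entities), []
    for _ in range(hops):
        nxt = set()
        for h, r, t in triples:
            if (h in frontier or t in frontier) and (h, r, t) not in selected:
                selected.append((h, r, t))
                nxt.update((h, t))
        frontier = nxt
    return selected

def build_prompt(question, subgraph):
    """Assemble the retrieved triples into an LLM context prompt."""
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in subgraph)
    return f"Known facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

ctx = retrieve_subgraph(["Typhoon Haikui"], triples)
prompt = build_prompt("What should residents of Fujian do?", ctx)
```

In a full pipeline the prompt would be sent to the LLM (Qwen2.5-Max in the study); here only the retrieval and prompt-assembly steps are shown.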
[Objectives] In response to the challenges of geometric priority, semantic weakening, and cross-software semantic loss in the practical application of Building Information Modeling (BIM) according to the IFC standard, this study leverages the knowledge graph and its inference algorithm (TransE) to establish a network semantic representation of BIM model information. By enhancing the geometric and semantic correlation of the model, it addresses the issue of semantic loss during cross-platform interactions. [Methods] Using the Revit software library's built-in three-story building model as the experimental object, the TransE model is applied to extract semantic information from BIM. BIM semantic information is first categorized into three types: component semantics, association semantics, and coordinate semantics. IfcEntity dynamic labels are assigned to component nodes, while static relationship attribute labels are assigned to association nodes. A total of 2 453 BIM semantic nodes and 14 844 association relationships are extracted. [Results] The experimental results demonstrate that: (1) The knowledge graph effectively represents BIM model components and their complex relationships; (2) A comparison of the TransE model performance indices (MRR/Hits@n) under different parameter combinations shows that the embedding dimension is positively correlated with model performance, while the learning rate is negatively correlated with it; (3) The optimal model performance is achieved when the embedding dimension is set to 200 and the learning rate to 0.0005; (4) By querying the system for all component nodes and verifying the results, the success rate of extracting semantic information from BIM components was found to be 94.47%. [Conclusions] The method proposed in this study is effective for extracting semantic information from BIM and conducting deeper semantic analysis.
The findings provide a novel approach for semantic transformation in the integration of BIM and GIS.
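The TransE scoring function and the MRR/Hits@n metrics discussed above can be illustrated with a small numpy sketch. The entity count, embedding dimension, and random embeddings below are toy values standing in for a trained model, not the study's BIM graph.

```python
# Minimal sketch of TransE scoring and MRR / Hits@n evaluation.
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities = 8, 5
E = rng.normal(size=(n_entities, dim))   # entity embeddings (untrained toys)
R = rng.normal(size=(2, dim))            # relation embeddings

def energy(h, r, t):
    """TransE energy ||h + r - t||; lower means more plausible."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def rank_of_true_tail(h, r, t):
    """Rank the true tail among all candidate entities by ascending energy."""
    scores = [energy(h, r, cand) for cand in range(n_entities)]
    order = np.argsort(scores)
    return int(np.where(order == t)[0][0]) + 1

def mrr_and_hits(test_triples, n=3):
    """Mean Reciprocal Rank and Hits@n over a set of test triples."""
    ranks = [rank_of_true_tail(h, r, t) for h, r, t in test_triples]
    mrr = float(np.mean([1.0 / rk for rk in ranks]))
    hits = float(np.mean([rk <= n for rk in ranks]))
    return mrr, hits

mrr, hits3 = mrr_and_hits([(0, 0, 1), (2, 1, 3)])
```

In the paper these metrics are computed across parameter grids (embedding dimension, learning rate) to select the reported optimum.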
[Objectives] Discrete Global Grid System (DGGS) is a hierarchical structure with seamless global coverage. It functions like a spreadsheet covering the Earth, supporting the processing and analysis of heterogeneous geospatial data. However, the grid is essentially a multi-scale raster structure, and integrating geographic vector data into it remains a challenge in both research and application. Vector data include points, lines, and polygons, among which vector line gridding is a fundamental problem. Most existing solutions express vector lines using the center lines of grid cells on a planar grid. However, when extended to spherical surfaces, the accuracy of vector data modeling decreases, making it difficult to meet application requirements. [Methods] This paper proposes a high-precision modeling method for vector lines in DGGS. First, a hexagonal global discrete grid is constructed based on the rhombic triacontahedron, which offers a higher degree of conformity to the sphere. Three adjacent rhombic faces are combined to form a composite structure, and a three-axis integer coordinate system is established to describe the spatial positions of hexagonal elements. Based on the grid cells corresponding to the start and end points of the vector line, optimal direction codes are determined to reduce the search range. The great arc of the vector line passing through the cells is identified using a neighborhood encoding operation. The resulting model is constructed based on the connecting line between grid centers, and a method for processing cross-surface vector lines is proposed. Finally, grid vertices are introduced as structural elements to enable vector line modeling using multiple structural elements, further improving the accuracy of hexagonal grid-based vector line modeling. Experiments show that the proposed method successfully models the grid representation of major coastlines across different continents.
The results ensure that the grid model remains topologically consistent with the original vector line, avoiding topological errors in which an original vector line fails to intersect any grid cell. [Results] Compared with planar grid modeling, the proposed method achieves significantly higher accuracy in vector line gridding across various coastal regions worldwide. The modeling results demonstrate strong stability and are nearly unaffected by the resolution of the original vector data. Moreover, the method maintains an efficiency advantage, even after complex geometric operations on the spherical surface. [Conclusions] To address the geometric accuracy loss and topological distortion issues in traditional vector data grid modeling, this paper proposes a high-precision spherical grid modeling method. The approach shows strong potential to support the conversion of vector data to grid-based isomorphic representations.
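The step of identifying the cells traversed between two grid centers can be illustrated on a planar hexagonal grid with three-axis (cube) integer coordinates, where every cell satisfies x + y + z = 0. This is a generic hex-line sketch under that coordinate convention, not the paper's spherical cross-face algorithm.

```python
# Trace the hexagonal cells between two grid centers in cube coordinates.
def hex_line(a, b):
    """Cells on the straight segment between hex centers a and b (cube coords)."""
    def cube_round(x, y, z):
        # Round to the nearest valid cell while keeping x + y + z == 0.
        rx, ry, rz = round(x), round(y), round(z)
        dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
        if dx > dy and dx > dz:
            rx = -ry - rz
        elif dy > dz:
            ry = -rx - rz
        else:
            rz = -rx - ry
        return (rx, ry, rz)

    n = max(abs(a[i] - b[i]) for i in range(3))  # hex distance between a and b
    if n == 0:
        return [a]
    return [
        cube_round(*(a[i] + (b[i] - a[i]) * step / n for i in range(3)))
        for step in range(n + 1)
    ]

cells = hex_line((0, 0, 0), (3, -3, 0))
```

Linearly interpolating between the two centers and rounding each sample to the nearest cell guarantees that every cell on the segment is visited, which is the planar analogue of the intersection guarantee described above.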
[Objectives] Knowledge graphs, as a cutting-edge technology for integrating multimodal data sources, have garnered significant attention in the GIS domain. These graphs are typically constructed using graph databases. However, mainstream graph databases still face challenges in effectively organizing and analyzing geospatial-temporal data. [Methods] To address this issue, this paper proposes an approach to spatiotemporal semantic modeling and query optimization that bridges graph databases and the spatial data engines implemented within relational databases. In the graph database, geographic entities are stored as lightweight placeholder nodes (storing only mapping IDs) and linked to spatiotemporal index nodes (such as time trees and Geohash encodings) to enhance aggregation capabilities. Meanwhile, complete geospatial-temporal objects are stored in a relational database, and table partitioning strategies are employed to improve retrieval efficiency. This approach uses unified identifiers and JDBC to route geographic entities across the databases. When users invoke pre-registered spatiotemporal functions in the graph database, a query rewriter transforms the graph queries into SQL statements based on entity identifiers, pushes them to the relational database for processing, and returns the results to the graph query pipeline. Additionally, a two-phase commit protocol ensures data consistency across the heterogeneous databases. [Results] We implemented a prototype system integrating Neo4j and PostGIS and conducted experiments on query and storage efficiency using a multisource spatiotemporal dataset from Shenzhen (including taxi trajectories, bike-sharing trajectories, road networks, POIs, and remote sensing imagery).
Compared to mainstream graph database systems (e.g., Neo4j and GraphDB), our approach significantly improves performance for geospatial-temporal queries, reducing response times by one to two orders of magnitude in complex computational scenarios and enabling raster computations unsupported by native graph databases. By leveraging lightweight graph nodes and PostGIS data compression, storage space is reduced by a factor of approximately 3 to 5. Compared to virtual knowledge graph systems (e.g., Ontop), our method shows minimal differences in spatial query performance and storage overhead, while achieving notably faster response times for large-scale spatiotemporal queries. [Conclusions] Compared to existing methods, our approach leverages existing graph databases to construct materialized spatiotemporal knowledge graphs, enhancing modeling flexibility and query efficiency for geospatial-temporal data. It also supports user-defined extensions to the geospatial-temporal function library, offering a novel framework for efficiently managing and analyzing such data within knowledge graphs.
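The query-rewriting idea can be sketched as template-based SQL generation keyed on the registered function name and the placeholder mapping IDs. The function name `st.within`, the table `geo_objects`, and the `map_id` column below are assumptions for illustration, not the prototype's actual schema.

```python
# Hypothetical registry mapping a graph-side function to a SQL template
# pushed down to the relational (PostGIS) side.
FUNCTION_SQL = {
    "st.within": (
        "SELECT map_id FROM geo_objects "
        "WHERE ST_Within(geom, ST_GeomFromText('{wkt}', 4326)) "
        "AND map_id = ANY(ARRAY[{ids}])"
    ),
}

def rewrite(func, wkt, placeholder_ids):
    """Build the SQL pushed down to the relational database for `func`,
    restricted to the placeholder-node mapping IDs from the graph query."""
    template = FUNCTION_SQL[func]
    return template.format(wkt=wkt, ids=", ".join(str(i) for i in placeholder_ids))

sql = rewrite("st.within", "POLYGON((0 0,0 1,1 1,1 0,0 0))", [101, 102])
```

The returned IDs would then be mapped back onto placeholder nodes to continue the graph query pipeline.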
[Objectives] Accurate identification of turbidity distribution in urban rivers is crucial for understanding urban water quality, assessing pollution levels, and optimizing water resource management. Current remote sensing inversion methods for water quality typically rely on correlation coefficients to select sensitive spectral bands or combinations for modeling. However, the broad spectral range and limited number of bands in common satellite data introduce uncertainties in identifying optimal bands, thereby constraining model accuracy. [Methods] This study proposes a turbidity inversion method based on a Convolutional Neural Network (CNN) architecture comprising four convolutional layers, two pooling layers, and four fully connected layers, with the final layer outputting turbidity values. Using Planet satellite data, field-measured turbidity, and spectral data from the Dongfeng Canal and Xiong’er River in Zhengzhou City, China, the CNN model was implemented for urban river turbidity inversion. Performance comparisons were conducted against two regression analysis methods and three classical machine learning approaches to generate spatial turbidity distribution maps. [Results] The CNN model achieved a coefficient of determination (R²) of 0.908 and a Root Mean Square Error (RMSE) of 0.410 NTU in the study area, outperforming the best regression model and the best classical machine learning method by 39.6% and 6.5%, respectively. Validation through visual inspection, field sampling, and laboratory analysis confirmed consistency between CNN-derived turbidity maps and ground truth data. The study area exhibited a mean turbidity of 3.52 NTU, with a standard deviation of 1.003 NTU and a coefficient of variation of 0.28. [Conclusions] These findings indicate that the convolutional neural network effectively captures complex nonlinear and high-dimensional data relationships in remote sensing images, improving the accuracy of turbidity parameter inversion.
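The evaluation metrics reported above, R² and RMSE, can be computed as follows; the measured and predicted turbidity values (NTU) are invented toy data, not the study's samples.

```python
# Coefficient of determination and root mean square error on toy data.
import numpy as np

def r2_score(y_true, y_pred):
    """R² = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true, y_pred):
    """Root mean square error in the units of the measurements (NTU here)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

measured = np.array([2.1, 3.4, 5.0, 4.2, 2.8])   # invented field turbidity
predicted = np.array([2.3, 3.1, 4.8, 4.5, 2.9])  # invented model output
r2, err = r2_score(measured, predicted), rmse(measured, predicted)
```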
[Objectives] With the deepening of urbanization and intensified market competition, long working hours have become a pervasive social issue, posing challenges to both workers' physical and mental health and to urban sustainable development. Current studies on urban residents' work activities predominantly rely on questionnaire survey data, which suffer from limited sample sizes and a lack of in-depth exploration into long working hours in megacities. [Methods] This research utilized mobile signaling data from Beijing, collected between November and December 2019, to identify stay points using a threshold rule method. Residential and workplace locations were determined through a time-window approach, and users' working hours were extracted. The study then examined the spatial distribution patterns of long-working-hours employees (defined as those working over 40 hours per week) and investigated spatial characteristics across various gender and age groups. Finally, the study also explored the characteristics of long working hours in different employment clusters in Beijing. [Results] The findings reveal that 47.1% of Beijing's workforce engages in long working hours, with an average weekly working duration of 48.86 hours. Spatial analysis demonstrates a polycentric agglomeration pattern, concentrated in major employment hubs such as the CBD, Financial Street, Zhongguancun, and Yizhuang. Significant disparities exist across gender and age groups. Male employees work an average of 49.62 hours per week, 1.5 hours more than their female counterparts (48.12 hours). Among male age groups, those aged 20 to 29 have the longest average weekly working hours at 50.68 hours. In contrast, although women aged 30 to 39 constitute the largest proportion of the female workforce (22.13%), their average weekly working hours are the lowest, at 47.59 hours.
The characteristics of overtime work in different employment clusters show a clear pattern: the CBD and Zhongguancun have a higher number of overtime workers, while Yizhuang stands out with the highest proportion at 58.0%. Wholesale and logistics hubs such as Xinfadi and Majuqiao exhibit the most intensive work schedules, with average weekly working hours exceeding 50 hours. [Conclusions] This study provides rich empirical evidence for understanding the phenomenon of long working hours in Beijing. The results offer data-driven support for optimizing labor time policies, contributing to urban sustainable development and social equity.
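A threshold-rule stay-point detector of the kind described above can be sketched as follows: consecutive signaling records that stay within a distance threshold of an anchor record, and span at least a minimum duration, form one stay. The 300 m and 30-minute thresholds and the toy records are assumptions, not the study's parameters.

```python
# Threshold-rule stay-point detection on toy signaling records.
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6_371_000 * 2 * math.asin(math.sqrt(a))

def stay_points(records, dist_m=300, min_dur=1800):
    """records: time-sorted list of (timestamp_s, lat, lon)."""
    stays, i = [], 0
    while i < len(records):
        j = i
        # Extend the stay while records remain near the anchor record i.
        while (j + 1 < len(records)
               and haversine_m(records[i][1:], records[j + 1][1:]) <= dist_m):
            j += 1
        if records[j][0] - records[i][0] >= min_dur:
            stays.append((records[i][0], records[j][0], records[i][1:]))
        i = j + 1
    return stays

recs = [(0, 39.90, 116.40), (1200, 39.9001, 116.4001),
        (2400, 39.9002, 116.4000), (3600, 39.95, 116.45)]
found = stay_points(recs)
```

Chaining such stays with home/work time windows then yields the working-hour estimates analyzed above.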
[Objectives] Automatically generating storylines of news events from a large number of online news articles helps track the evolution of events, with significant applications in fields such as disaster emergency response, military conflict analysis, and social governance. Existing methods typically cluster news articles by directly encoding article features or mining patterns of keyword co-occurrence, then generate storylines based on chronological order or geotags. However, these approaches have not fully explored or utilized the spatio-temporal attributes of events in news texts, resulting in storylines that fail to accurately represent the evolution of news events across space and time. [Methods] To address this limitation, we propose a novel news storyline generation approach based on spatio-temporal optimal transport. First, we introduce a two-stage unsupervised story discovery method that initially aggregates news articles using document-level semantic embeddings of news streams, then more precisely assigns semantically related articles to the same story based on keyword distributions of candidate stories. Second, time expressions and toponym entities extracted from the news articles are parsed into standardized time formats and geographic coordinates using regular expression matching and Wikidata, effectively mining the spatio-temporal information embedded in the texts. Finally, an optimal transport-based approach is proposed for spatio-temporal distance calculation, incorporating distance decay functions to model the attenuation of spatio-temporal correlations. Storylines are then constructed using a maximum spanning tree algorithm. To verify the effectiveness of our method, extensive experiments were conducted on the publicly available large-scale Chinese storyline generation dataset, ChineseNewsEvents.
[Results] In the story discovery tasks, compared to baseline methods such as Story Forest and SCStory, our method significantly improved clustering performance, with gains of more than 0.147 in AMI and over 0.103 in ARI, while achieving comparable results to SCStory in B3-F1. For storyline generation, our method outperformed baselines in terms of relevance, accuracy, and connectivity. [Conclusions] The proposed approach more accurately captures the spatio-temporal evolution of news events, providing a powerful tool for event evolution detection and simulation.
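The distance-decay and maximum-spanning-tree steps can be sketched with a simplified pairwise affinity in place of the optimal-transport distance: an exponential decay in time and space scores each event pair, and Kruskal's algorithm with union-find links the strongest pairs into a storyline tree. The decay form, its constants, and the toy events are assumptions for illustration.

```python
# Distance-decay affinity between events, then a maximum spanning tree.
import math

def affinity(e1, e2, tau=3.0, sigma=500.0):
    """Exponential decay over days (tau) and kilometres (sigma); assumed form."""
    dt = abs(e1["day"] - e2["day"])
    dd = math.dist((e1["x"], e1["y"]), (e2["x"], e2["y"]))
    return math.exp(-dt / tau) * math.exp(-dd / sigma)

def max_spanning_tree(events):
    """Kruskal on descending affinities, with union-find for cycle checks."""
    parent = list(range(len(events)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    edges = sorted(
        ((affinity(events[i], events[j]), i, j)
         for i in range(len(events)) for j in range(i + 1, len(events))),
        reverse=True)
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

events = [{"day": 0, "x": 0, "y": 0}, {"day": 1, "x": 10, "y": 0},
          {"day": 9, "x": 800, "y": 0}]
tree = max_spanning_tree(events)
```

Nearby, near-simultaneous events (0 and 1) are linked first; the distant third event attaches through its strongest remaining connection.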
[Objectives] This study aims to investigate the disturbances in crowd stay behavior under extreme weather conditions, reflecting their overall impact on human dynamics. The findings can support efforts to enhance urban resilience, reduce disaster-related losses, and maintain urban order and stability. [Methods] A quantitative method was developed to measure the degree of disturbance in stay behavior at the individual level based on the concept of similarity. At the group level, a quantitative disturbance measurement method grounded in the Z-Score principle was constructed. The effectiveness of these methods was validated using anonymized mobile location data collected during a heavy rainfall event in Quanzhou in July 2022. The dataset covers the week of the event and the subsequent week. [Results] The findings indicate that: (1) The proposed method effectively quantifies spatiotemporal disturbances in crowd stay behavior, demonstrating its reliability; (2) At the individual level, the method successfully reveals different types of impacts on individuals and their spatiotemporal distribution characteristics. Case study data show that individuals residing in the city center are more susceptible to the combined effects of heavy rainfall on both their geographic locations and daily activity schedules. In contrast, individuals in suburban and exurban areas experience some alterations in their stay locations but generally maintain consistent daily routines and time schedules; (3) At the group level, the method effectively captures the temporal disturbance patterns and geographic distribution of disruptions caused by heavy rainfall. Additionally, different regions exhibit varying resilience characteristics and recovery speeds in stay behaviors. Case study data indicate that, on the day of the heavy rainfall event, the affected population's residential areas covered 68.71% of the city's built-up area. 
The number of long-term stay behaviors increased significantly on the eve of the heavy rainfall, with a maximum change of 9.82%. On the morning of the event, short-term stay behaviors significantly decreased, while in the afternoon, they increased sharply, with a maximum change of 21.48%. [Conclusions] The proposed methods quantitatively assess the influence of extreme weather conditions on crowd stay behavior at both individual and group levels. These findings provide a solid foundation for emergency management agencies to evaluate disaster risks and develop effective response and management strategies.
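The group-level Z-Score measure described above can be sketched directly: an observed stay count is compared against the mean and standard deviation of a baseline period, and large deviations flag a disturbance. The baseline counts and the |z| > 2 flag threshold are invented for illustration.

```python
# Group-level Z-Score disturbance on toy hourly stay counts.
import statistics

def z_score(observed, baseline):
    """Standardized deviation of the observation from the baseline period."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return (observed - mu) / sd

baseline_stays = [102, 98, 105, 100, 95]   # same hour across a normal week
z = z_score(140, baseline_stays)           # same hour, heavy-rainfall day
disturbed = abs(z) > 2.0                   # assumed flag threshold
```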
[Background] Traditional methods, due to their static receptive field design, struggle to adapt to the significant scale differences among cars, pedestrians, and cyclists in urban autonomous driving scenarios. Moreover, cross-scale feature fusion often leads to hierarchical interference. [Methodology] To address the key challenge of cross-scale representation consistency in 3D object detection for multi-class, multi-scale objects in autonomous driving scenarios, this study proposes a novel method named VoxTNT. VoxTNT leverages an equalized receptive field and a local-global collaborative attention mechanism to enhance detection performance. At the local level, a PointSetFormer module is introduced, incorporating an Induced Set Attention Block (ISAB) to aggregate fine-grained geometric features from high-density point clouds through reduced cross-attention. This design overcomes the information loss typically associated with traditional voxel mean pooling. At the global level, a VoxelFormerFFN module is designed, which abstracts non-empty voxels into a super-point set and applies cross-voxel ISAB interactions to capture long-range contextual dependencies. This approach reduces the computational complexity of global feature learning from O(N²) to O(M²) (where M ≪ N and M is the number of non-empty voxels), avoiding the high computational complexity associated with directly applying complex Transformers to raw point clouds. This dual-domain coupled architecture achieves a dynamic balance between local fine-grained perception and global semantic association, effectively mitigating modeling bias caused by fixed receptive fields and multi-scale fusion. [Results] Experiments demonstrate that the proposed method achieves a single-stage detection Average Precision (AP) of 59.56% for moderate-level pedestrian detection on the KITTI dataset, an improvement of approximately 12.4% over the SECOND baseline.
For two-stage detection, it achieves a mean Average Precision (mAP) of 66.54%, outperforming the second-best method, BSAODet, which achieves 66.10%. Validation on the WOD dataset further confirms the method’s effectiveness, achieving 66.09% mAP, which outperforms the SECOND and PointPillars baselines by 7.7% and 8.5%, respectively. Ablation studies demonstrate that the proposed equalized local-global receptive field mechanism significantly improves detection accuracy for small objects. For example, on the KITTI dataset, full component ablation resulted in a 10.8% and 10.0% drop in AP for moderate-level pedestrian and cyclist detection, respectively, while maintaining stable performance for large-object detection. [Conclusions] This study presents a novel approach to tackling the challenges of multi-scale object detection in autonomous driving scenarios. Future work will focus on optimizing the model architecture to further enhance efficiency.
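The induced-set-attention idea behind ISAB, in which a small set of m inducing points mediates attention so that cost scales with n·m rather than n², can be sketched in numpy. The shapes are toy values and the random matrices stand in for learned parameters; this is a generic ISAB sketch, not the VoxTNT implementation.

```python
# Numpy sketch of induced set attention (ISAB) over one voxel's points.
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention with a numerically stable softmax."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

def isab(X, inducing):
    """m inducing points summarize the n inputs, then inputs attend back."""
    H = attention(inducing, X, X)   # (m, d): summary of the point set
    return attention(X, H, H)       # (n, d): inputs enriched by the summary

rng = np.random.default_rng(1)
n, m, d = 64, 4, 8
X = rng.normal(size=(n, d))         # points in one voxel (toy)
I = rng.normal(size=(m, d))         # inducing points (learned in practice)
out = isab(X, I)
```

Both attention passes cost O(n·m) instead of the O(n²) of full self-attention, which is the scaling argument made above.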
[Objectives] With the enhancement of spatial resolution, remote sensing images contain increasingly intricate information, encompassing a vast array of spatial and semantic features. The effective extraction and integration of these features play a pivotal role in semantic segmentation performance. However, most existing approaches focus solely on feature fusion improvements while neglecting the consistency between spatial and semantic features. Additionally, these methods often overlook the precise extraction of edge information, which significantly impacts segmentation accuracy. [Methods] This paper proposes a semantic segmentation model for high-resolution remote sensing images based on multi-scale deep supervision. First, separate feature extraction branches are designed for spatial and semantic features to fully exploit their respective information. Second, a spatial redundancy reduction residual module is incorporated into the spatial branch, integrating wavelet transformation and coordinate convolution to enhance spatial feature extraction and better capture edge details. Third, a residual attention Mamba module is added to the semantic branch to facilitate global-level semantic feature extraction. Finally, a multi-scale feature fusion mechanism is applied, utilizing a large-kernel grouped feature extraction module to progressively merge spatial, semantic, and deep-level features while suppressing irrelevant information and activating meaningful features. Additionally, a deep supervision mechanism is employed by introducing auxiliary supervision heads at each feature fusion stage to enhance training efficiency. [Results] Comparison and ablation experiments were conducted on the ISPRS Potsdam and Vaihingen datasets with random sampling and data augmentation. The experimental results demonstrate that the proposed algorithm achieves an average Intersection over Union (IoU) of 83.43% on ISPRS Potsdam and 86.49% on the augmented Vaihingen dataset.
Compared to nine state-of-the-art methods, including CGGLNet and CMLFormer, the proposed approach improves the average IoU by at least 5.00% and 3.00%, respectively. [Conclusions] The results verify that the proposed algorithm effectively extracts and integrates spatial and semantic features, thereby enhancing the accuracy of semantic segmentation in remote sensing images.
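The mean IoU metric reported above can be computed as follows on a toy two-class label map; the arrays are invented for illustration.

```python
# Per-class Intersection over Union and the mean IoU on a toy label map.
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Average IoU over classes that appear in either map."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

truth = np.array([[0, 0, 1, 1]] * 4)   # ground-truth labels
pred  = np.array([[0, 0, 0, 1]] * 4)   # predicted labels
miou = mean_iou(pred, truth, n_classes=2)
```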
[Objectives] Feature matching is a core step in the 3D reconstruction of aerial images. However, due to shadows and perspective variations during the imaging process, the number of matching points is often small and unevenly distributed, significantly affecting accuracy. [Methods] This paper proposes a multi-strategy fusion feature matching method that accounts for shadow and viewing angle differences. It combines the traditional SIFT feature extraction algorithm with the advanced LightGlue feature matching neural network. Through multiple optimization strategies, the method achieves high-quality matching results under complex imaging conditions. The main improvements include the following: (1) An adaptive shadow region enhancement strategy is proposed. Shadow regions are extracted from the original image, and an initial brightness enhancement factor is determined based on the average brightness ratio of shadow and non-shadow areas. This factor is then adjusted using the gray-level differences within the shadow regions to enhance their brightness and restore ground object details, increasing the number of feature points. (2) A multi-view simulated image generation strategy is introduced. Simulated images are generated based on camera pose information to improve the adaptability of input features to view changes, enhancing matching accuracy and robustness. (3) In the matching optimization stage, due to significant height differences in aerial images, using a planar assumption for estimation introduces large errors. To address this, a RANSAC matching optimization algorithm based on K-Means clustering is developed. The number of clusters (K) is dynamically determined using the image's original color information. Matching points are clustered accordingly, and the RANSAC algorithm is applied to each cluster for local optimization. This reduces planar assumption errors and improves the selection of inliers.
[Results] Experiments were conducted using aerial image data captured by the A3 camera, testing both single and combined strategies. Results show that after applying the adaptive shadow region enhancement and multi-view simulation strategies, the number of matching points nearly tripled compared to the unprocessed data. Additionally, after K-Means clustering RANSAC optimization, the average pixel distance error decreased by approximately 30% compared to direct RANSAC optimization, and the matching accuracy improved by an average of 24.8%. [Conclusions] The proposed method effectively addresses the challenges of aerial image matching under complex imaging conditions, providing more robust and reliable data support for downstream tasks such as 3D reconstruction.
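Strategy (1)'s adaptive brightness factor can be sketched as follows: an initial factor from the non-shadow/shadow mean-brightness ratio, then damped by the grey-level spread inside the shadow mask. The damping rule and the crude threshold mask are assumed forms for illustration, not the paper's exact formulas.

```python
# Adaptive shadow-region brightness enhancement on a toy grey image.
import numpy as np

def enhancement_factor(image, shadow_mask):
    shadow = image[shadow_mask]
    lit = image[~shadow_mask]
    base = lit.mean() / shadow.mean()     # initial factor: brightness ratio
    spread = shadow.std() / 255.0         # grey-level difference term
    return base * (1.0 - 0.5 * spread)    # assumed damping rule

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
mask = img < 80                           # crude shadow mask (assumption)
factor = enhancement_factor(img, mask)
enhanced = np.where(mask, np.clip(img * factor, 0, 255), img)
```

Brightening only the masked pixels restores detail in shadow while leaving lit regions untouched, which is what raises the feature-point count in those areas.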
[Objectives] Building footprint regularization is a fundamental task in GIS data updating. Buildings extracted from images often suffer from incomplete polygons and redundant points, making them unsuitable for cartographic applications. Existing regularization methods primarily focus on local shapes and weak right-angle features of buildings, neglecting the actual distribution and shapes of buildings in images. As a result, the regularized building contours often deviate from the actual building shapes. [Methods] To address this issue, we propose a building contour regularization method incorporating direction field features. Firstly, a multi-task building extraction model is proposed to extract building areas and describe the direction field features. Then, similar buildings are grouped based on the relative neighborhood graph, and the main direction for individual buildings and building groups is calculated. Finally, based on the optimized main direction, building contour edges are disassembled and reconstructed to obtain regularized building contours. [Results] Comparative experiments were conducted on the Inria and WHU datasets with four regularization methods and two deep learning-based building contour extraction methods. The proposed method outperforms others in handling "Y" shaped and "C" shaped buildings, as well as building groups. Compared to the vector recombination method, our approach achieves a 5.28% improvement in the Intersection over Union (IoU) metric on the Inria dataset. [Conclusions] Experimental results demonstrate that the proposed method enables the extraction of more distinct and precise building corner points, effectively mitigating mutual occlusion issues among adjacent buildings. Compared to instance segmentation algorithms that directly extract building contours, our approach offers higher precision and greater computational efficiency.
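The main-direction calculation step can be sketched by folding edge angles into a 90-degree period and taking an edge-length-weighted circular mean on 4θ; this averaging rule is a common choice assumed here for illustration, not necessarily the paper's exact formulation.

```python
# Length-weighted main direction of a building polygon, modulo 90 degrees.
import math

def main_direction(polygon):
    """polygon: list of (x, y) vertices, closed implicitly."""
    sx = sy = 0.0
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        length = math.hypot(x2 - x1, y2 - y1)
        theta = math.atan2(y2 - y1, x2 - x1)
        sx += length * math.cos(4 * theta)   # fold angles into a 90° period
        sy += length * math.sin(4 * theta)
    return (math.atan2(sy, sx) / 4) % (math.pi / 2)

# An axis-aligned rectangle with one noisy vertex:
rect = [(0, 0), (10, 0.2), (10, 5), (0, 5)]
angle_deg = math.degrees(main_direction(rect))
```

The noisy edge barely shifts the estimate, so snapping contour edges to this direction regularizes the footprint without drifting from the true orientation.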
[Objectives] To address the problems of color distortion and noise artifacts in some low-light remote sensing images, this paper proposes a low-light remote sensing image enhancement algorithm named Denoising and Integrated Color Retinex-Net-based Network (DICR-Net). [Methods] In the color optimization stage, the Squeeze-and-Excitation Network (SENet) and Skip Connections (SC) are introduced into the decomposition and adjustment networks of the algorithm. A color loss function is also incorporated into the adjustment network to optimize color fidelity. SENet adaptively adjusts the feature channel weights to emphasize important information, while SC transfers shallow features to deeper layers to preserve fine details. In the denoising stage, a Deformable Convolutional Denoising Network (DCDNet) based on U-Net is constructed, and a noise loss function is introduced to suppress image noise. The convolution layers of DCDNet use Deformable Convolution (DConv) to ensure that the receptive field adapts to the object's shape, while also reducing the number of convolutional layers to lower computational cost. [Results] Experiments were conducted using the WHU-RS19 remote sensing image dataset released by Wuhan University and the classic LOL dataset for low-light image enhancement. For the WHU-RS19 dataset, 727 remote sensing images with normal illumination were randomly selected, then processed using Fourier transform: the low-frequency component underwent gamma correction to reduce brightness and contrast, and Gaussian noise was added to the high-frequency component to generate 727 low-light images. From this, 485 image pairs were randomly selected as the training set and 242 pairs as the test set. For the LOL dataset, 485 pairs were used for training and 15 pairs for testing. The DICR-Net algorithm was compared with MSRCR, Zero-DCE, LIME, Retinex-Net, SCI, and DDNet.
Experimental results show that DICR-Net significantly improves the visual quality of images in terms of subjective perception. In terms of objective metrics, on the WHU-RS19 dataset, DICR-Net improved PSNR, SSIM, SAM, SAT, and Delta E by 2.74%, 1.54%, 2.95%, 6.53%, and 8.82%, respectively, over the second-best algorithm. On the LOL dataset, improvements were 5.30%, 6.44%, 3.37%, 5.10%, and 10.80%, respectively. [Conclusions] The proposed algorithm demonstrates strong performance in color preservation and noise suppression for low-light remote sensing image enhancement, providing technical support for applications such as long-term monitoring and dynamic tracking using remote sensing imagery.
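The low-light pair synthesis described for WHU-RS19 can be sketched with an FFT low-pass split: the low-frequency component is gamma-darkened and Gaussian noise is added to the high-frequency component. The cutoff radius, gamma value, and noise standard deviation are assumed values, not the paper's settings.

```python
# Synthesize a low-light image from a normal-illumination one via FFT split.
import numpy as np

def synthesize_low_light(img, cutoff=8, gamma=2.5, noise_std=5.0, seed=0):
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    lp = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2  # low-pass mask
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * lp)))
    high = img - low
    # Gamma correction darkens the low-frequency (brightness) component.
    dark_low = 255.0 * np.clip(low / 255.0, 0, 1) ** gamma
    # Gaussian noise corrupts the high-frequency (detail) component.
    noisy_high = high + np.random.default_rng(seed).normal(0, noise_std, img.shape)
    return np.clip(dark_low + noisy_high, 0, 255)

normal = np.full((64, 64), 180.0)   # toy uniform grey image
low_light = synthesize_low_light(normal)
```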
[Objectives] Using deep learning methods for landslide identification can significantly improve efficiency and is of great importance for landslide disaster prevention and mitigation. The DeepLabV3+ algorithm effectively captures multi-scale features, thereby improving image segmentation accuracy, and has been widely used in the segmentation and recognition of remote sensing images. [Methods] We propose an improved model based on DeepLabV3+. First, the Coordinate Attention (CA) mechanism is incorporated into the original model to enhance its feature extraction capabilities. Second, the Atrous Spatial Pyramid Pooling (ASPP) module is replaced with the Dense Atrous Spatial Pyramid Pooling (DenseASPP) module, which helps the network capture more detailed features and expands the receptive field, effectively addressing the limitations of inefficient or ineffective dilated convolution. A Strip Pooling (SP) branch module is added in parallel to allow the backbone network to better leverage long-range dependencies. Finally, the Cascade Feature Fusion (CFF) module is introduced to hierarchically fuse multi-scale features, further improving segmentation accuracy. [Results] Experiments on the Bijie landslide dataset show that, compared with the original model, the improved model achieves a 2.2% increase in MIoU and a 1.2% increase in the F1 score. Compared with other mainstream deep learning models, the proposed model demonstrates higher extraction accuracy. In terms of segmentation quality, it significantly improves the overall accuracy in identifying landslide areas, reduces misclassification and omission, and yields more precise delineation of landslide boundaries. [Conclusions] Based on experiments using the landslide debris flow disaster dataset in Sichuan and surrounding areas, along with practical application verification, the proposed method demonstrates strong recognition capability across landslide images in diverse scenarios and levels of complexity. 
It performs particularly well in challenging environments such as areas with dense vegetation or proximity to rivers, showing strong generalization ability and broad applicability.
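The Strip Pooling branch mentioned above captures long-range dependencies by pooling along entire rows and columns rather than square windows. The sketch below is a simplified NumPy illustration of the pooling-and-gating idea only; the 1D convolutions and learned weights of the full SP module are omitted, and all names are hypothetical.

```python
import numpy as np

def strip_pooling(x):
    # x: a single feature map of shape (C, H, W)
    # Horizontal strip pool: average over W -> (C, H, 1)
    h_strip = x.mean(axis=2, keepdims=True)
    # Vertical strip pool: average over H -> (C, 1, W)
    w_strip = x.mean(axis=1, keepdims=True)
    # Each position receives the context of its whole row and whole column;
    # broadcasting expands both strips back to (C, H, W)
    fused = h_strip + w_strip
    # Sigmoid gate modulates the input with this long-range context
    gate = 1.0 / (1.0 + np.exp(-fused))
    return x * gate
```

Because the pooling windows span the full height or width, elongated features such as landslide scars or river corridors can influence distant positions along the same strip, which square pooling windows cannot do at comparable cost.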
[Objectives] The dissemination and evolution of public opinion exhibit distinct spatial characteristics, and increasing attention to the geographical dimension of public opinion has become an important research trend. [Methods] This study constructs a four-dimensional geographic public opinion hypernetwork model, incorporating "City-Social-Opinion-Psychology", based on a specific aviation accident. It employs methods such as complex networks, GIS spatial analysis, and machine learning to explore the spatiotemporal evolution of geographic public opinion. Additionally, the MRQAP model is used to investigate the driving mechanisms behind the spatial evolution of geographic public opinion. [Results] The empirical findings of this study indicate that: ① The spatial distribution of public opinion hotspots in China demonstrates significant geographical clustering and spatial correlation. These hotspots are predominantly concentrated in urban clusters south of the Hu Huanyong Line, including the Beijing-Tianjin-Hebei region, Shandong Peninsula, Yangtze River Delta, Pearl River Delta, and Chengdu-Chongqing region, particularly in cities with strong information dissemination capabilities and substantial public influence. The clustering effect is more pronounced during the outbreak and persistence phases. ② The social subnet exhibits a diamond-shaped structure with a marked community effect, though its connectivity efficiency is relatively low. The mainstream emotions within the psychological subnet show a clear trend toward non-negative transformation. Topics within the opinion subnet show a decentralized evolutionary pattern. The city subnet displays a clear "point-axis" structure with a high degree of integration, reflecting high network dissemination efficiency. ③ Homophily, siphoning, and proximity effects all contribute to the spatial dissemination of public opinion. 
Social, psychological, and opinion similarities, coupled with a favorable educational environment, outward-oriented development model, superior media infrastructure, and institutional, social, and organizational proximity, facilitate the spatial diffusion of public opinion information. [Conclusions] The spatial distribution and evolution of public opinion hotspots are influenced not only by geographic factors but also by socio-economic characteristics, social network structures, and shifts in psychological and emotional states.
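The MRQAP analysis above rests on the Quadratic Assignment Procedure: node labels are permuted jointly over rows and columns to build a null distribution that respects the dyadic dependence of network data. A minimal single-predictor sketch follows, assuming simple Pearson correlation of off-diagonal entries (the full MRQAP extends this to multiple regression); the function name and permutation count are illustrative assumptions.

```python
import numpy as np

def qap_correlation(y, x, n_perm=500, seed=0):
    # Correlate the off-diagonal entries of two n x n relational matrices,
    # then assess significance by permuting node labels of y (rows AND
    # columns together), which preserves the row/column dependence structure.
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    mask = ~np.eye(n, dtype=bool)          # ignore self-ties on the diagonal
    obs = np.corrcoef(y[mask], x[mask])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        perm_r = np.corrcoef(y[p][:, p][mask], x[mask])[0, 1]
        if abs(perm_r) >= abs(obs):
            count += 1
    # Two-sided permutation p-value with the conventional +1 correction
    return obs, (count + 1) / (n_perm + 1)
```

Permuting node labels rather than individual cells is the key design choice: shuffling cells independently would destroy the fact that ties sharing a node are not independent observations.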
[Significance] Air pollution control is not only a strong support for achieving carbon peak and carbon neutrality goals but also an important means of safeguarding public health and promoting green transformation and development. [Methods] Based on PM2.5 data retrieved via remote sensing from 2012 to 2022, this study employs spatial autocorrelation analysis, GeoDetector, and Multiscale Geographically and Temporally Weighted Regression (MGTWR) models to reveal the spatiotemporal evolution characteristics of PM2.5 at three scales—annual, seasonal, and monthly—as well as the spatiotemporal heterogeneity of its influencing factors in the three urban agglomerations of the Yangtze River Economic Belt. [Results] (1) The annual average PM2.5 concentration in the three urban agglomerations of the Yangtze River Economic Belt shows a "reverse U-shaped" trend with 2013 as the inflection point and overall higher concentrations in the north than in the south. Over the 11 years, the average PM2.5 concentration in the Middle Yangtze River Urban Agglomeration was slightly higher than in the Chengdu-Chongqing Urban Agglomeration and the Yangtze River Delta urban agglomeration. Temporal trends across the urban agglomerations are largely consistent, characterized by a gradual narrowing of regional differences and significant improvements in air quality. (2) PM2.5 concentrations in the three urban agglomerations follow a seasonal pattern of "higher in winter, lower in summer, and intermediate in spring and autumn." Air quality has significantly improved across all seasons, particularly in winter. Monthly average PM2.5 concentrations exhibit a "U-shaped" fluctuation, with notable declines across all regions, most falling below 50 μg/m³. (3) The spatial autocorrelation of annual average PM2.5 in the three urban agglomerations shows a significant overall positive correlation, though the degree of autocorrelation varies among individual agglomerations.
High-high clusters are mainly located in the northwest of the Yangtze River Delta and the northwest of the Middle Yangtze River Urban Agglomeration, with some also found in Luzhou and Zigong in southern Sichuan (Chengdu-Chongqing Urban Agglomeration). Low-low clusters are primarily distributed in the southeastern coastal regions of the Yangtze River Economic Belt and Jiangxi Province. A low-high area is identified only in Yichang, Hubei. (4) Among the key factors influencing the spatial differentiation of PM2.5, the proportion of secondary industry exerts the greatest impact, followed by NDVI and per capita GDP. The MGTWR model demonstrates superior accuracy in fitting PM2.5 compared to other spatial regression models. Regression coefficient analysis reveals that per capita GDP is the dominant suppressing factor, while the proportion of secondary industry shows a positive correlation. Annual average wind speed and slope have relatively weaker, bidirectional effects. In contrast, annual precipitation, NDVI, and the number of large-scale industrial enterprises significantly reduce PM2.5 concentrations, whereas annual temperature and population density have strong promoting effects. [Conclusions] The MGTWR method incorporates both long time series and multi-scale analysis, offering a novel perspective for exploring the driving mechanism of PM2.5.
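Global spatial autocorrelation of the kind reported above is conventionally measured with Moran's I, which is positive when similar values cluster in space and negative when they alternate. A minimal sketch, assuming a precomputed spatial weight matrix with a zero diagonal (the function name is hypothetical and row standardization is omitted):

```python
import numpy as np

def morans_i(values, w):
    # Global Moran's I for a 1D array of observations and an n x n
    # spatial weight matrix w (w[i, j] > 0 if units i and j are neighbors,
    # diagonal assumed zero).
    z = values - values.mean()             # deviations from the mean
    num = (w * np.outer(z, z)).sum()       # weighted cross-products
    den = (z ** 2).sum()
    n = len(values)
    return float(n / w.sum() * num / den)
```

Values well above the expectation of roughly -1/(n-1) indicate the kind of positive clustering (high-high and low-low areas) described for annual average PM2.5, while negative values would indicate a checkerboard-like pattern.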