Archive

  • 2018 Volume 20 Issue 4
    Published: 20 April 2018
      

  • DENG Xiangzheng,DAN Li,YE Qian,WANG Zhaohua,LIU Yu,ZHANG Xueyan,ZHANG Fan,QI Wei,WANG Guofeng,WANG Pei,BAI Yuping

    The social and economic costs of carbon emission and reduction have increasingly become a hot research topic and a shared concern of policy makers and academic communities. We conducted a comprehensive analysis of the key scientific issues and research progress on global carbon emission and carbon mitigation, both in China and abroad. The latest observations from carbon satellites have shown that global carbon dioxide is spatially unevenly distributed. Given the 1.5 °C and 2.0 °C temperature-increase limits of the Paris Agreement, we propose a technical framework to explore the temporal and spatial relationship between the non-uniform dynamic distribution of CO2 and global surface temperature, to evaluate the carbon emissions of selected major countries under this non-uniform dynamic distribution, and to estimate the social and economic costs of carbon emission and carbon reduction under the 1.5 °C and 2.0 °C scenarios. Finally, we propose in-depth applied research on the complex relationships among climate change, economic growth and technology development. The technical framework and research methodologies in this paper will support the government in formulating strategies and countermeasures for carbon emission and reduction, by providing decision-making advice on mitigating climate change, achieving sustainable transformation, and enhancing China′s voice in carbon diplomacy.

  • LIU Kaisi,WANG Yanbing,GONG Huili,LI Xiaojuan,YU Jie

    Airborne LiDAR is one of the main technologies for obtaining ground-surface DEMs. Based on an analysis of existing airborne LiDAR point cloud filtering algorithms, this paper proposes a new filtering algorithm: dihedral filtering. The algorithm relies on the dihedral angle, which expresses the relative position of two intersecting planes in space, to filter airborne LiDAR point cloud data. Firstly, elevation-mutation points are iteratively extracted from the point cloud; the iteration ends when the cosine of the dihedral angle of the non-mutation points reaches the required stability. Then, the frequency distributions of the dihedral-angle cosines of both mutation and non-mutation points are counted and plotted as a line chart. Ground points and non-ground points are classified based on the cosine value at the intersection of the two curves and the slope value of the last iteration. Finally, the morphological opening operator is used to remove low vegetation, and reliable results are obtained. Compared with the progressive TIN method, the misclassification rate of non-ground points is effectively reduced. The dihedral method can retain topographic information while filtering out object (non-ground) information.
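As an illustrative sketch (not the authors' code) of the quantity the filter relies on, the dihedral-angle cosine can be computed from the normals of two locally fitted planes; the neighborhood plane fit via the smallest covariance eigenvector is a standard PCA approach assumed here:

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal of a point neighborhood: the eigenvector
    of the centered covariance matrix with the smallest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # np.linalg.eigh returns eigenvalues in ascending order
    _, vecs = np.linalg.eigh(centered.T @ centered)
    return vecs[:, 0]

def dihedral_cosine(normal_a, normal_b):
    """Cosine of the dihedral angle between two planes, from their normals."""
    a = np.asarray(normal_a, dtype=float)
    b = np.asarray(normal_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A flat neighborhood paired with itself yields cosine 1, while a sharp elevation mutation (e.g. a wall meeting the ground) yields a cosine near 0, which is the contrast the classification step exploits.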

  • FU Yongjian,LI Zongchun,HE Hua

    To solve the problem of calculating the center of a retro-reflective planar target when the point cloud is deficient or redundant, an algorithm for extracting edge points and calculating the target center is proposed. The algorithm includes three steps: (1) point cloud preprocessing; (2) edge point extraction; (3) target center calculation. In step (1), the rough region of the target points is first segregated manually from the points scanned by the laser scanner. Then, the target points are accurately extracted from the rough region according to the intensity of the return light, and noise points are removed to obtain a high-quality target point cloud. Finally, the high-quality target point cloud is projected onto a plane, called the best-fitting plane, which is then rotated to be parallel to the XOY coordinate plane. In step (2), the barycenter of the target point cloud is calculated, and all points are translated to a new coordinate plane with the barycenter as its origin. The new coordinate plane is divided into several fan-shaped regions; in each region, the point farthest from the origin is regarded as an edge point. In step (3), the equation of the target circle is obtained by fitting the edge points with the robust least squares method. The fitted circle center is then rotated back into the 3D space of the target point cloud, and the result is regarded as the estimated value of the planar target center. To test the effectiveness of the proposed algorithm, three tests were conducted. Firstly, the target center of a high-quality target point cloud was calculated separately by the proposed algorithm and by the centroid method, and the accuracy of the target center locations was compared. Secondly, the edge points were extracted by the proposed algorithm and by the method in Ref. [12], and the time efficiency of the two algorithms was compared. 
Thirdly, the center of a low-quality target point cloud was calculated by the proposed algorithm and by the methods introduced in Ref. [11] and Ref. [12], and the bias and location accuracy of these methods were compared. The experimental results show that the proposed edge point extraction algorithm achieves good results in less computing time than the method of Ref. [12]. The proposed algorithm calculates the target center quickly and accurately, with a location accuracy better than 1 mm, outperforming the methods of Ref. [11] and Ref. [12]. The proposed method is effective and practical.
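The final fitting step in the plane can be sketched as follows. This is the plain algebraic (Kåsa) least-squares circle fit, not the robust variant the paper uses; it illustrates how a center and radius follow from the extracted edge points:

```python
import numpy as np

def fit_circle_2d(xy):
    """Algebraic (Kasa) least-squares circle fit in the plane.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for the center (a, b),
    then the radius is r = sqrt(c + a^2 + b^2).
    """
    xy = np.asarray(xy, dtype=float)
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, float(np.sqrt(c + a * a + b * b))
```

A robust variant would iteratively down-weight edge points with large fit residuals before re-solving the same system.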

  • LI Peng,XIN Shuai,LI Jin,HE Hua,WANG Dandi,LI Pengcheng

    Due to the influence of the 3D point cloud collection instrument, acquisition method and post-processing, the number of point cloud features extracted by statistical algorithms based on geometric features such as curvature and normals is quite large, with large errors. Using these features for coarse point cloud registration, it is difficult to improve the precision and speed of registration. Since accurate point cloud registration algorithms such as ICP (Iterative Closest Point), 3D-NDT (3-Dimensional Normal Distributions Transform) and GMM (Gaussian Mixture Model) work only within a narrow registration range, proper initial transformation parameters are required to substantially improve their speed and accuracy; otherwise, the exact registration algorithm falls into a local optimum or fails altogether. By analyzing the actual spatial distribution of point cloud data, we find that it is difficult to collect accurate point, line and plane feature information, or that the accuracy of the collected key features is very low, owing to the collection instruments, acquisition methods, post-processing and other factors. Therefore, combining feature point extraction, principal component analysis (PCA) and feature point clustering, this paper presents a virtual feature point fitting algorithm. Based on a commonly used feature point extraction algorithm, the method fits the virtual feature points either by averaging the endpoints, within the range domain ε1, of three or more non-parallel line segments, or by a distance-weighted calculation. 
Another way is to fit lines to the feature points by the least squares method; then, following the minimum two-norm principle, three or more non-parallel lines whose mutual distances are less than ε2 are used to fit the virtual feature point whose distance to those lines is smallest. The virtual feature point generated by the algorithm is calculated from the actual feature points of the point cloud and from feature lines fitted to those points; it is not an actual laser-return footprint on the scanned object. Experimental verification shows that the virtual feature point algorithm can accurately fit building corner points that cannot be collected directly because of equipment, operating methods and other constraints. The virtual feature point data obtained by the algorithm are 64.71% fewer than the actual feature point data, the computing speed is increased by 41.90%, and the accuracy is improved by an order of magnitude. Using the fitted virtual feature points reduces the amount of data involved in coarse registration, improves its computational efficiency, and yields more accurate and reliable initial transformation parameters.
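The second fitting route, the point with the smallest summed squared distance to several non-parallel 3D lines, can be sketched as below. This is a generic least-squares construction consistent with the abstract's description, not the paper's implementation:

```python
import numpy as np

def virtual_point_from_lines(line_points, line_dirs):
    """Least-squares point closest to a set of 3D lines.

    Each line i is given by a point p_i and a direction d_i.  Minimizing
    the summed squared point-line distances yields the normal equations
        sum_i (I - u_i u_i^T) x = sum_i (I - u_i u_i^T) p_i,
    with u_i the unit direction of line i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(line_points, line_dirs):
        u = np.asarray(d, dtype=float)
        u /= np.linalg.norm(u)
        M = np.eye(3) - np.outer(u, u)   # projector onto the line's normal plane
        A += M
        b += M @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

For three edges of a building meeting at a corner, the solution recovers the (possibly unscanned) corner point.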

  • HUANG Junsong,ZENG Qiming,GAO Sheng,JIAO Jian,HU Leyin

    Owing to the fact that pixels in natural terrain are prone to spatio-temporal decorrelation during long-term observation, deformation monitoring of natural terrain with time-series InSAR (Interferometric Synthetic Aperture Radar) faces a shortage of usable deformation measurement points. To solve this problem, an improved Small Baseline Subset (SBAS) method is proposed, which improves the selection of initial high-coherence pixels and the phase filtering of conventional SBAS. Firstly, goodness-of-fit and coherence threshold conditions are used to identify statistically homogeneous pixels (SHP). All pixels are then divided into two groups based on their number of SHPs: Persistent Scatterer (PS) candidates and Distributed Scatterer (DS) candidates. Initial high-coherence PS and DS are selected from the two groups respectively, and finally the selected pixels are filtered by a weighted phase filter. A deformation monitoring experiment with 27 ENVISAT ASAR images acquired over the northwestern Beijing plain shows that, compared with the StaMPS-PS method (the PS-InSAR implementation in StaMPS) and the StaMPS-SBAS method (the SBAS implementation in StaMPS), the improved method effectively extends the quantity and coverage of deformation measurement points: the number of measurement points increases by 22.6% and 27.6% respectively, and the deformation result for natural terrain improves accordingly. The deformation results agree well with the displacements of four continuous GPS stations in the study area. The experimental results prove the effectiveness and superiority of this method for inverting ground deformation.
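The abstract does not specify the form of the weighted phase filter; a common choice, assumed here purely for illustration, is a coherence-weighted average of the unit phasors over a neighborhood:

```python
import numpy as np

def weighted_phase_filter(phases, weights):
    """Weighted phase filtering over a neighborhood: sum the unit phasors
    with the given weights (e.g. coherence) and return the filtered phase.

    Averaging phasors rather than raw phases avoids 2*pi wrap-around bias.
    """
    z = np.exp(1j * np.asarray(phases, dtype=float))
    w = np.asarray(weights, dtype=float)
    return float(np.angle((w * z).sum()))
```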

  • YUAN Pengfei,HUANG Ronggang,HU Pingbo,YANG Bisheng

    Because the elevation difference between road points and other ground points is small and their laser reflection intensities are similar, extracting roads from LiDAR data remains relatively hard. Furthermore, in urban environments, squares and parks share the elevation and reflection intensity of roads, so they are easily mistaken for roads. To use the three-dimensional and multi-spectral information of LiDAR comprehensively, this paper first conducts data preprocessing, comprising point cloud filtering, sample collection and data fusion. The purpose of the filtering is to obtain the ground points from the LiDAR data, and the data fusion establishes the consistency of the multi-spectral LiDAR data. Then, statistical features of the ground points are obtained from the intensity, density and flatness. To describe the strip-like shape of roads, distinguishing them from squares and parks, the stripe local binary feature (SLBF) is proposed. The SLBF is computed in a circular region as intensity comparisons between the central position and every position in the circular region, and is represented by a 96-dimensional binary (0/1) feature vector. The LiDAR data are then classified into road and non-road points by a random forest classifier using the features proposed above (the statistics-based feature, SBF, and the stripe local binary feature, SLBF). After further refinement by Euclidean clustering, the road axis points are extracted by thinning the road points step by step with an iterative boundary-erosion method. We project the LiDAR data onto the horizontal plane, use the K3M method to extract the road centerline, and then re-project it back into three-dimensional space. Finally, the extracted road axis points are vectorized as the final result of the method. 
We used multi-spectral point cloud data of the Waddenzee region to verify the proposed method. The experimental results show that the completeness of road axis vectorization reaches 94.15%, the accuracy 97.95%, and the precision 92.28%. The experiments show that the proposed method can extract road points efficiently and vectorize the road axis correctly; because the designed features are invariant to the environment, it can be applied to many kinds of scenes, such as urban and forest areas.
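A simplified raster analogue of the SLBF comparisons (the actual SLBF operates on point cloud intensities; this grid version is an assumption for illustration) compares the center value against sampled positions on a surrounding circle:

```python
import numpy as np

def circular_binary_feature(img, cy, cx, radius, n_samples=96):
    """Binary descriptor on a raster: compare the intensity at n_samples
    positions on a circle of the given radius (nearest-pixel sampling)
    against the intensity at the center; 1 where the ring is brighter."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ys = np.rint(cy + radius * np.sin(angles)).astype(int)
    xs = np.rint(cx + radius * np.cos(angles)).astype(int)
    return (img[ys, xs] > img[cy, cx]).astype(np.uint8)
```

On a road, the bright ring positions concentrate in two opposite arcs along the strip direction, which is the pattern that separates roads from uniformly bright squares.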

  • SHAO Lei,DONG Guangjun,YU Ying,YAO Qiangqiang,ZHANG Along

    The mobile LiDAR scanning system is a useful tool for acquiring both the roof and façade information of buildings, which makes it a primary means of obtaining 3D city modeling infrastructure. The first step of 3D modeling is to extract the building data from the complex mobile point cloud quickly and accurately; it is therefore of great significance to study a fast and effective method of building extraction from vehicle-borne laser scanning data. Buildings in mobile laser scanning data have uneven point densities and missing parts; some building façades in the measured data are not strictly planar, and the roof data of low buildings do not belong to the façade. To solve these problems, a method for building extraction in complex urban scenes from mobile LiDAR using a variety of projection images is proposed. Firstly, the method projects the point cloud data onto the XOY plane to produce a variety of projection images. Secondly, based on the geometric and semantic features of buildings, geometric constraints and morphological operations are applied to the projection images to obtain building seed areas. From these seed areas, eight-neighborhood region growing is performed on the highest-elevation image with a height-difference threshold to obtain the building areas. Lastly, the building areas in the image are back-projected into three-dimensional space to extract the building targets. Two data sets with different point densities, acquired by different scanners, are used to verify the effectiveness of the method. Results show that this method achieves higher data processing efficiency than existing three-dimensional extraction methods, because the point cloud is projected onto a two-dimensional image and the geometric features of buildings are used synthetically during extraction. 
Using this method, both roof surfaces and non-planar façades can be extracted precisely. The sub-regional growing method in this paper solves the difficulty, inherent in traditional projection methods, of extracting buildings whose point clouds are incomplete because of occlusion.
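The highest-elevation projection image that the region growing operates on can be sketched as a per-cell maximum of z over an XOY grid; this is a minimal sketch of the projection step, with an assumed cell size:

```python
import numpy as np

def highest_elevation_image(points, cell_size=0.5):
    """Project a point cloud onto the XOY plane, keeping the maximum z
    per grid cell (a 'highest elevation' projection image)."""
    pts = np.asarray(points, dtype=float)
    origin = pts[:, :2].min(axis=0)
    idx = np.floor((pts[:, :2] - origin) / cell_size).astype(int)
    shape = idx.max(axis=0) + 1
    img = np.full(shape, -np.inf)
    # np.maximum.at performs an unbuffered per-cell maximum
    np.maximum.at(img, (idx[:, 0], idx[:, 1]), pts[:, 2])
    return img
```

Region growing then compares neighboring cells of this image against a height-difference threshold, starting from the seed cells.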

  • LI Peiting,ZHAO Qingzhan,CHEN Hong

    A digital elevation model (DEM) is an effective way of describing terrain. Unmanned aerial vehicle (UAV) light detection and ranging (LiDAR) has become a novel and powerful technology for producing DEMs, and point cloud filtering is essential for DEM generation. However, low efficiency, over-segmentation, under-segmentation and low precision remain problems in point cloud filtering. To improve filtering accuracy and reduce the ground point cloud data volume for rapidly establishing a high-accuracy DEM, this paper puts forward a filtering method that acquires ground points by K-means clustering under the constraint of normalized echo intensity values. The UAV Scout B1-100, carrying a VUX-1 laser scanner, was used to acquire a high-density, high-resolution point cloud in the Manas valley, Xinjiang. The Riegl LMS and OxTS NAVgraph software were then used to register and correct the point cloud of the study area. After noise removal and thinning, the point cloud contained 107 372 points in total. K-means clustering was then applied under the constraint of the three-dimensional coordinates of the points to obtain three different clustering results. Meanwhile, the maximum-minimum standardization method was introduced to normalize the original echo intensity values to the range 0 to 1. For each clustering result, the corresponding ground points were obtained by choosing different ranges of the normalized intensity values. Finally, we merged the ground points from the different clustering results to obtain the ground point cloud of the entire study area. For comparison, we also applied K-means clustering under the constraint of the original echo intensity values and the three-dimensional coordinates. The results show that the ground point cloud obtained from K-means clustering constrained by the original echo intensity values has 66 713 points, accounting for 62.133% of the total number of points. 
An additional 13 648 near-surface vegetation points can be removed by the method of this paper, reducing the ratio of ground points to total points to 49.422%. The method better maintains the terrain profile while reducing the data volume of the ground point cloud, laying the foundation for the rapid establishment of a high-precision DEM.
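The two building blocks the abstract names, max-min normalization and K-means clustering, can be sketched as follows; this is plain Lloyd's K-means with random initialization, not the authors' implementation:

```python
import numpy as np

def minmax_normalize(values):
    """Maximum-minimum standardization: rescale echo intensity to [0, 1]."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means (random initialization, fixed iteration count)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # squared distances of every point to every center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```

In the paper's pipeline the clustering runs on the 3D coordinates, and the normalized intensity then selects the ground points within each cluster.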

  • GENG Yuxin,ZHONG Ruofei,PENG Baojiang

    The street landscape describes the view of buildings and other objects on both sides of a road and serves as a window onto a city′s overall image. It is vital for urban planning and design and a helpful reference for government management. Vehicle-borne point cloud data, with high precision and wide coverage, provide the position information and shape characteristics of buildings along streets, making a new solution for urban vista façade extraction possible. Based on façade management needs, we propose a novel approach for the automatic extraction of vista façades from vehicle-borne laser scanning data, with a detailed focus on façade extraction. In the approach, after de-noising the raw data, we divide the points into ground and non-ground points and separate buildings from the non-ground points in order to extract the vista façades. It works in the following four steps: (1) denoise the raw point cloud and remove ground surface points to acquire the points of objects above the ground; (2) construct regular grids for the non-ground points, binarize them, and select building points according to semantic features; (3) estimate reference vectors from POS (Positioning and Orientation System) data and set those vectors as the normal vectors of the chosen reference planes; (4) compute the Euclidean distance between each point and each plane; points are classified by their distances to the same plane, and from this classification the vista façade point cloud is extracted. To verify the feasibility and effectiveness of the method, we carried out a series of experiments on a large vehicle-borne laser point cloud, including separating buildings from the ground in the original data, extracting façade points from building points, and comparing the automatic extraction with manual selection and with the results of other methods. 
The results show that the method improves the efficiency of data processing to some extent and returns good results; its superiority was also verified by the experiments.
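Step (4), classifying points by their Euclidean distance to a reference plane, can be sketched as below; the tolerance value is an assumption, not taken from the paper:

```python
import numpy as np

def point_plane_distances(points, plane_point, normal):
    """Signed Euclidean distance from each point to a plane given by a
    point on the plane and its (not necessarily unit) normal vector."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    return (np.asarray(points, dtype=float) - plane_point) @ n

def select_facade_points(points, plane_point, normal, tol=0.2):
    """Keep the points lying within tol of the reference facade plane."""
    d = np.abs(point_plane_distances(points, plane_point, normal))
    return np.asarray(points, dtype=float)[d <= tol]
```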

  • LIU Qiang,FU Xueqing,HUANG Huafang,DANG Haiyan,YU Guochao,LI Renjie,ZHANG Junhai

    Fossil and Paleolithic sites are strongly related to the ancient lakes and rivers of the Nihewan Basin: wherever sedimentary formations of lakeside facies and river terraces are discovered, fossils and Paleolithic sites of ancient human activity are likely to be found next. We change the generalized definition of the DEM surface from "terrain surface" to "lacustrine sedimentary layer of the Nihewan Basin", naming it the Pleistocene lacustrine layer DEM. Establishing the Pleistocene lacustrine layer DEM of the Nihewan Basin can greatly promote the study of ancient human activities and the paleogeography of the Nihewan area. This paper studies the first step of establishing this DEM: extracting the three-dimensional geographic information of the lacustrine layer, taking the geological information of the Pleistocene lacustrine layer at the Heyaozhuang outcrop section as an example. A three-dimensional laser scanner was used to collect the three-dimensional geographic information of the profile, and a filtering method was designed for the LiDAR point cloud data: the distance between sampling points was approximated, and the echo intensity was used to distinguish vegetation from soil. On the basis of the echo intensity filtering result, RGB thresholds were set to further distinguish vegetation from soil, with better results. After final manual removal of noise, clean and complete three-dimensional geographic data of the formation are obtained, which can serve as the basic data for DEM modeling of the lacustrine sediments.

  • WANG Guoli,WU Guikai,WANG Yanmin,GUO Ming,ZHAO Jianghong,GAO Chao

    Deformation monitoring of architectural heritage plays an important role in sustainable heritage protection. The ancient pagoda is a typical category of architectural heritage, with complex structure, great height and various forms. Pagoda deformation includes subsidence, tilting, bending and twisting, and conventional deformation monitoring methods have difficulty meeting the monitoring requirements. Terrestrial LiDAR and UAV photogrammetry are increasingly popular for 3D data acquisition of cultural heritage, being fast, accurate and non-contact; however, most LiDAR and UAV data are used only for detailed surveying and documentation. In this paper, terrestrial LiDAR and UAV photogrammetry were selected to obtain 3D data of an ancient pagoda for deformation studies. A comprehensive comparison and analysis of the monitoring processes, characteristics and accuracy of the three methods is made, and a complete analysis of the model fusing UAV photogrammetry and LiDAR data is carried out according to the pagoda′s monitoring indices. The main conclusions are as follows. Conventional deformation methods are flexible, have advantages in precision, and are more suitable for monitoring the overall attitude of ancient buildings and their typical characteristics. Terrestrial LiDAR has advantages for the overall and local deformation of the pagoda, but is susceptible to the scanning angle. The 3D pagoda model built by UAV close-range photogrammetry has high precision and true color and performs well on overall and detail textures, but the technique can hardly acquire 3D data of the narrow spaces inside the pagoda. The fusion data model can effectively make up for the defects of a single data source and enables a comprehensive deformation analysis of the ancient pagoda. 
In terms of accuracy, terrestrial laser scanning and photogrammetric techniques can reach millimeter level and are better for comprehensive monitoring, while the traditional monitoring method is superior to the other two in settlement and tilt monitoring.

  • YANG Mingyuan,LIU Haiyan,JI Xiaolin,GUO Wenyue,CHEN Siwen

    Spatio-temporal Kriging interpolates efficiently from adjacent sampling points in space-time; its core is extending the spatial variogram into space-time. Because sparse scattered datasets lack sampling points in any single time slice and the points are non-uniformly distributed, we propose an improved spatio-temporal Kriging method to counter the resulting low precision. First, the trend surface of the sample is obtained using a cubic polynomial, and the sample data are decomposed into trend and residual terms, because the original trending sample data cannot satisfy the stationarity assumption required for Kriging interpolation. Then, the temporal variogram is fitted with data from nearby fixed stations, whose sampling positions are constant and sampling frequencies consistent; although few in number, their long observation sequences make them suitable for fitting the variogram. Meanwhile, instead of fitting with the dataset over all time, we adopt a multi-period overlap fitting strategy to obtain a more reasonable spatial variogram: the length of the time segment is selected according to the degree of temporal variation, and the variogram values of the sampling points in each sub-period are calculated and overlaid to fit the spatial variogram. The spatio-temporal variogram is then constructed with the product-sum model and used to estimate variable values in space and time. In the final phase, interpolation is performed using the spatio-temporal weights solved from the Kriging equations. To verify the effectiveness of the proposed method, a comparison with existing interpolation methods is made using sea temperature data from Argo buoys (China Argo Real-time Data Center) and moored buoys (Pacific Marine Environmental Laboratory). From the cross-validation results, we judge the accuracy and stability of the method by MAE and MSE. 
Compared with general spatio-temporal Kriging and spatio-temporal weighted interpolation, the proposed method improves accuracy by 69.5% and 38.9% respectively, and stability by 61.9% and 48.9% respectively. The proposed method improves on spatio-temporal Kriging by considering the structural and spatio-temporal variation characteristics of sparse scattered datasets; the spatio-temporal variogram it constructs is more scientific and practical, and the interpolation precision and stability are both improved significantly.
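The abstract names the product-sum model without giving its form; the common variogram form (assumed here), combined with an illustrative spherical model for the marginal variograms, can be sketched as:

```python
import numpy as np

def spherical_variogram(h, sill, rng_):
    """Spherical variogram model (zero nugget for simplicity):
    gamma(h) = sill*(1.5*h/r - 0.5*(h/r)^3) for h < r, else sill."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h >= rng_, sill, g)

def product_sum_variogram(hs, ht, gamma_s, gamma_t, k):
    """Product-sum spatio-temporal variogram:
    gamma_st(hs, ht) = gamma_s(hs) + gamma_t(ht) - k*gamma_s(hs)*gamma_t(ht)."""
    gs, gt = gamma_s(hs), gamma_t(ht)
    return gs + gt - k * gs * gt
```

The fitted spatial and temporal marginals plug in as `gamma_s` and `gamma_t`, and `k` is fitted from the joint sill; the resulting gamma values populate the Kriging system whose weights produce the interpolated estimate.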

  • WANG Shengkai,XU Zhijie,ZHANG Jianqin,DU Mingyi

    As a graphical representation and visualization method, the heatmap gives a more visual and comprehensive display than standard analysis charts, owing to its capability in large-scale spatial data mining and knowledge discovery. With the development of big data and multi-scale digital map technology, the static heatmap can no longer meet user requirements, and heatmaps have begun to become multidimensional. This paper presents a method of drawing heatmaps using a reverse rendering process, in which the geographic space mapped by each rendered pixel is taken as the spatial granularity of calculation and analysis. The method solves the problem that the influence-superposition mode of the heatmap is strongly limited by the rendering mechanism. With the improved method, the influence-superposition mode can be selected flexibly according to the analysis requirements, and the radius coefficient and influence parameter of each analysis point are computed by combining geographical distance with rendered pixels, reducing the deformation of the heatmap at different map scales. We used the Kapur multi-level segmentation algorithm to detect image thresholds automatically and derive the gradient colors, so that the hierarchical display of the heat effect is optimized and the visual presentation of the data is clearer in the map. The method was tested in a group of experiments with bus IC card records provided by the Beijing Municipal Transportation Commission. Under the same conditions, heatmaps were produced by both the reverse rendering method and the standard method, both based on the Leaflet map and the Canvas renderer, and the visual results were compared and analyzed at different map scales and locations. The reverse rendering method provides more stable details and a more comprehensive display of data features under the same experimental conditions. 
This indicates that the proposed reverse rendering method can improve the visualization of the spatial features of POI (point of interest) data in heatmaps and better meets the requirements of modern multi-scale digital maps.
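The single-level version of the Kapur entropy criterion (the paper extends it to multiple levels) picks the histogram split that maximizes the summed entropy of the two partitions; a minimal sketch:

```python
import numpy as np

def kapur_threshold(hist):
    """Single-level Kapur threshold: choose the split t that maximizes the
    summed Shannon entropy of the two histogram partitions."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue  # degenerate split
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = (-(p0[p0 > 0] * np.log(p0[p0 > 0])).sum()
             - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum())
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

The multi-level extension searches over tuples of thresholds with the analogous summed-entropy objective, and each resulting interval is mapped to one gradient color.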

  • CHEN Lina,WU Sheng,CHEN Jie,LI Mingxiao,LU Feng

    The near-real-time prediction of urban populations at fine-grained scales can provide an important scientific basis in many fields, such as optimizing the allocation of public resources, assisting urban traffic guidance, giving early warning in urban emergencies, and exploring the daily life patterns of urban residents. In this study, based on time-series analysis, a parametric prediction model (the Autoregressive Integrated Moving Average model) and a non-parametric prediction model (the K-Nearest Neighbor model) are constructed to predict urban populations at fine spatial and temporal scales: the spatial resolution is 0.005 arc-degree and the temporal resolution is 30 minutes. Applied to a large mobile phone location dataset, both models prove helpful for the near-real-time prediction of urban populations. In particular, the non-parametric model produces more stable predictions with lower error than the parametric model, in terms of the distribution of errors by grid population, in space and time, at different temporal granularities, and under a special event.
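A minimal lag-vector k-NN forecaster illustrates the non-parametric idea (the study's actual feature design, distance metric and k are not given in the abstract, so everything here is an assumption): find the historical windows most similar to the latest one and average what followed them.

```python
import numpy as np

def knn_forecast(series, k=3, lag=4):
    """Non-parametric one-step forecast: find the k historical lag windows
    most similar to the latest window and average their successors."""
    s = np.asarray(series, dtype=float)
    windows = np.array([s[i:i + lag] for i in range(len(s) - lag)])
    successors = s[lag:]
    query = s[-lag:]                      # the most recent window
    dists = np.linalg.norm(windows - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(successors[nearest].mean())
```

In the population setting, `series` would be one grid cell's 30-minute counts; an ARIMA model fitted to the same series is the parametric counterpart.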

  • WU Xinxin,LIU Xiaoping,LIANG Xun,CHEN Guangliang

    Arising from the rapid growth of economy and population, urban sprawl has become a major challenge for sustainable urban development worldwide. To assist urban planning, applicable methods and models are required to guide and constrain the growth of urban areas. Urban growth boundaries (UGBs) are now a common tool used by planners to control the scale of urban development and protect rural areas, which contribute significantly to the local ecological environment. However, existing models mainly delimit UGBs for urban development under a single scenario; to date, few studies have developed efficient and scientific methods for delimiting UGBs that take the influence of macro policy and spatial policy into account. This paper presents a future land use simulation and urban growth boundary model (FLUS-UGB) that delimits UGBs for urban areas under multiple scenarios. A top-down system dynamics (SD) model and a bottom-up cellular automaton (CA) model are integrated in the FLUS sub-model to simulate future urban growth patterns. The UGB sub-model then generates the UGBs from the urban form produced by FLUS, using a morphological technique based on erosion and dilation: it merges and connects clusters of urban blocks into integral areas while eliminating small, isolated urban patches. We selected the Pearl River Delta (PRD), one of the most developed regions in China, as the case study area, and simulated the urban growth of the PRD from 2000 to 2013 to validate the proposed model. We then used the FLUS-UGB model to delimit the UGBs of the PRD in 2050 under three planning scenarios (baseline, farmland protection and ecological control). The results show that: (1) the model has high simulation accuracy for urban land, with a Kappa of 0.715, an overall accuracy of 94.539% and a FoM of 0.269; 
(2) the method maintains edge details well in areas of high urban fragmentation and fractal dimension. This research demonstrates that the FLUS-UGB model is appropriate for delineating UGBs under different planning policies, which is very useful for rapidly growing urban regions.
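The erosion/dilation post-processing can be sketched as a morphological closing (merge nearby urban blocks) followed by an opening (drop small isolated patches). This numpy-only sketch uses a 3x3 structuring element and treats cells outside the grid as background; a production version would typically use `scipy.ndimage` instead:

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 structuring element (numpy-only)."""
    m = np.asarray(mask, dtype=bool)
    for _ in range(iterations):
        p = np.pad(m, 1)                 # outside the grid counts as False
        m = (p[:-2, :-2] | p[:-2, 1:-1] | p[:-2, 2:]
             | p[1:-1, :-2] | p[1:-1, 1:-1] | p[1:-1, 2:]
             | p[2:, :-2] | p[2:, 1:-1] | p[2:, 2:])
    return m

def erode(mask, iterations=1):
    """Binary erosion as the complement of dilating the complement."""
    return ~dilate(~np.asarray(mask, dtype=bool), iterations)

def smooth_urban_form(mask, iterations=1):
    """Closing (merge nearby urban blocks) then opening
    (eliminate small isolated patches)."""
    closed = erode(dilate(mask, iterations), iterations)
    return dilate(erode(closed, iterations), iterations)
```

The boundary of the resulting mask, traced as polygons, is the delimited UGB.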

  • ZHOU Guoqing,HUANG Yu,YUE Tao,WANG Haoyu,HE Chaoshuang,LI Xiaozhu

    With the increasing complexity of, and photorealistic requirements for, urban buildings amid rapid urbanization, the high-accuracy modeling of 3D urban buildings and the establishment of an effective data structure for complicated buildings have become challenging. Considering the shortcomings of current CSG (Constructive Solid Geometry) modeling, this paper presents a hybrid modeling method that combines CSG and BR (Boundary Representation). In the proposed model, the traditional CSG model is improved into what we call "Spatial CSG (SCSG)", which uses the Dimensionally Extended Nine-Intersection Model (DE-9IM) to represent the topological relations between voxels and determines a unique SCSG tree to represent the exterior shape of a building. BR is then used to represent the topological relationships between the geometric elements of the building, treating texture as attribute data of the walls and roof; combined with SCSG, this forms the SCSG-BR method. The proposed method combines a file database and a relational database to manage the data of three-dimensional (3D) buildings: the attribute information of the building model and the textures are stored in the relational database, while the file database contains model files and texture image files storing the buildings and texture images. The texture images are stored separately in another relational database using a variable-length binary data type. During the storage and retrieval of texture images, the building model ID and the texture ID are linked through the face ID in the relational database, and the texture images and the building model are loaded and stored together. The management method thus simplifies texture mapping and improves model loading speed. 
In the data processing, the least squares algorithm is used to normalize the building polygons and adjust the polygon topology to ensure the accuracy of the modeled data. Data sets located in Denver, Colorado, USA, and Zurich, Switzerland, were selected to validate the method. Model-loading times were compared across the different modeling methods, and the experimental results show that our method consumes the least time of all. They also show that the proposed hybrid modeling method not only accurately represents the topological relations of building entities, but also loads building texture images quickly, achieving fast and accurate building modeling and effective spatial query.