06 Fakultät Luft- und Raumfahrttechnik und Geodäsie
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/7
32 results
Item Open Access: Towards improved targetless registration and deformation analysis of TLS point clouds using patch-based segmentation (2023) Yang, Yihui; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.)

The geometric changes in the real world can be captured by measuring and comparing the 3D coordinates of object surfaces. Traditional point-wise measurements with low spatial resolution may fail to detect inhomogeneous, anisotropic and unexpected deformations, and thus cannot reveal complex deformation processes. 3D point clouds generated by laser scanning or photogrammetric techniques have opened up opportunities for an area-wise acquisition of spatial information. In particular, terrestrial laser scanning (TLS) has seen rapid development and wide application in areal geodetic monitoring owing to the high resolution and high quality of the acquired point cloud data. However, several issues in the process chain of TLS-based deformation monitoring are still not solved satisfactorily. This thesis focuses on the targetless registration and deformation analysis of TLS point clouds, aiming to develop novel data-driven methods to tackle the current challenges. In most deformation processes of natural scenes, some local areas undergo no shape deformation (i.e., they are rigid), and the deformation directions even show a certain level of consistency when these areas are small enough. Further point cloud processing, such as stability and deformation analyses, can benefit from these assumptions of local rigidity and consistency of deformed point clouds. In this thesis, therefore, three typical types of locally rigid patches - small planar patches, geometric primitives, and quasi-rigid areas - are generated from 3D point clouds by specific segmentation techniques. These patches, on the one hand, preserve the boundaries between rigid and non-rigid areas and thus enable spatial separation with respect to surface stability. On the other hand, local geometric information and empirical stochastic models can readily be determined from the points in each patch. Based on these segmented rigid patches, targetless registration and deformation analysis of deformed TLS point clouds can be improved in terms of accuracy and spatial resolution. Specifically, small planar patches such as supervoxels are used to distinguish stable from unstable areas in an iterative registration process, thus ensuring that only relatively stable points are involved in estimating the transformation parameters. The experimental results show that the proposed targetless registration method significantly improves registration accuracy. These small planar patches are also exploited to develop a novel variant of the multiscale model-to-model cloud comparison (M3C2) algorithm, which constructs prisms extending from planar patches instead of the cylinders used in standard M3C2. This new method separates actual surface variations from measurement uncertainties, thus yielding deformations with lower uncertainty and higher resolution. A coarse-to-fine segmentation framework is used to extract multiple geometric primitives from point clouds, and rigorous parameter estimations are performed individually to derive high-precision parametric deformations. In addition, a generalized local registration-based pipeline is proposed to derive dense displacement vectors based on segmented quasi-rigid areas that are matched by areal geometric feature descriptors.
All proposed methods are successfully verified and evaluated on simulated and/or real point cloud data. Guidance on the choice of the proposed deformation analysis methods for specific scenarios or applications is also provided in this thesis.
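The patch-based, M3C2-style comparison summarized above can be illustrated with a small sketch. The following Python snippet shows only the core idea - fitting a plane to a small patch, then comparing the offsets of the two epochs' points along the patch normal within a prism-like footprint. It is a minimal, hypothetical sketch (the prism test, the 5-point minimum and the level-of-detection proxy are illustrative choices of my own), not the author's implementation.

```python
import numpy as np

def fit_patch_plane(points):
    """Fit a plane to a small patch by PCA; return centroid and unit normal."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def patch_distance(patch_epoch1, cloud_epoch2, max_offset=0.5):
    """Signed M3C2-style distance along the patch normal between two epochs.

    patch_epoch1 : (N, 3) points of one planar patch in epoch 1
    cloud_epoch2 : (M, 3) points of epoch 2 (already coarsely registered)
    Returns (signed distance, a simple level-of-detection proxy).
    """
    centroid, normal = fit_patch_plane(patch_epoch1)

    # Crude "prism" test: keep epoch-2 points close to the patch plane and
    # within the patch footprint radius (illustrative, not the thesis's prism)
    diff = cloud_epoch2 - centroid
    along = diff @ normal
    in_plane = np.linalg.norm(diff - np.outer(along, normal), axis=1)
    radius = np.linalg.norm(patch_epoch1 - centroid, axis=1).max()
    sel = (np.abs(along) < max_offset) & (in_plane <= radius)
    if sel.sum() < 5:
        return np.nan, np.nan

    d1 = (patch_epoch1 - centroid) @ normal   # epoch-1 offsets (surface roughness)
    d2 = along[sel]                           # epoch-2 offsets along the same normal
    dist = d2.mean() - d1.mean()
    # 95 % level-of-detection proxy from the two standard errors of the mean
    lod = 1.96 * np.sqrt(d1.var(ddof=1) / len(d1) + d2.var(ddof=1) / len(d2))
    return dist, lod

# Tiny synthetic check: a 4 cm shift along the normal should be recovered
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(500, 2))
patch1 = np.column_stack([xy, rng.normal(0.0, 0.002, 500)])  # roughly planar patch
patch2 = patch1 + np.array([0.0, 0.0, 0.04])                 # same patch in epoch 2
print(patch_distance(patch1, patch2))                        # ≈ (±0.04, small LoD)
```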
Item Open Access: Analyzing and characterizing spaceborne observation of water storage variation : past, present, future (2024) Saemian, Peyman; Sneeuw, Nico (Prof. Dr.-Ing.)

Water storage is an indispensable constituent of the intricate water cycle, as it governs the availability and distribution of this precious resource. Any alteration in water storage can trigger a cascade of consequences, affecting not only our agricultural practices but also the well-being of various ecosystems and the occurrence of natural hazards. It is therefore essential to monitor and manage water storage levels prudently to ensure a sustainable future for our planet. Despite significant advancements in ground-based measurements and modeling techniques, accurately measuring water storage variation remained a major challenge for a long time. Since 2002, the Gravity Recovery and Climate Experiment (GRACE) and its successor GRACE Follow-On (GRACE-FO) satellites have revolutionized our understanding of the Earth's water cycle. By detecting variations in the Earth's gravity field caused by changes in water distribution, these satellites can precisely measure changes in total water storage (TWS) across the entire globe, providing a truly comprehensive view of the world's water resources. This information has proved invaluable for understanding how water resources are changing over time and for developing strategies to manage these resources sustainably. However, GRACE and GRACE-FO are subject to various challenges that must be addressed in order to exploit their observations more effectively for scientific and practical purposes. This thesis aims to address some of these challenges. Since the inception of the GRACE mission, scholars have commonly extracted mass changes from the observations by approximating the Earth's gravity field with mathematical functions termed spherical harmonics. Various institutions have already processed GRACE(-FO) data, known as level-2 data in the GRACE community, which differ in the constraints, approaches, and models used. However, these data require post-processing before they can be used for applications such as hydrology and climate research. In this thesis, we evaluate various methods of processing GRACE(-FO) level-2 data and assess the spatio-temporal effect of the post-processing steps. Furthermore, we compare the consistency between GRACE and its successor mission, GRACE-FO, in terms of data quality and measurement accuracy. By analyzing and comparing the data from these two missions, we can identify potential discrepancies and establish the level of confidence in the accuracy and reliability of the GRACE-FO measurements. Finally, we compare the processed level-3 products with the level-3 products that are presently accessible online. The relatively short record of the GRACE measurements, compared to other satellite missions and observational records, can limit studies that require long-term data. This short record makes it challenging to separate long-term signals from short-term variability and to validate the data with ground-based measurements or other satellite missions. To address this limitation, this thesis expands the temporal coverage of GRACE(-FO) observations using global hydrological, atmospheric, and reanalysis models. First, we assess these models in estimating the TWS variation at a global scale. We compare the performance of various methods, including data-driven and machine learning approaches, in incorporating these models to reconstruct the GRACE TWS change. The results are also validated against Satellite Laser Ranging (SLR) observations over the pre-GRACE period. This thesis thus develops a hindcasted GRACE record, which provides a better understanding of the changes in the Earth's water storage on a longer time scale. The GRACE satellite mission detects changes in the overall water storage in a specific region but cannot distinguish between the different compartments of TWS, such as surface water, groundwater, and soil moisture. Understanding these individual components is crucial for managing water resources and addressing the effects of droughts and floods. This study aims to integrate various data sources - including water fluxes, lake water level, and lake storage change data - to improve our understanding of water storage variations from the continental to the basin scale. Additionally, the study demonstrates the importance of combining GRACE(-FO) observations with other measurements, such as piezometric wells and rain gauges, to understand the water scarcity predicament in Iran and other regions facing similar challenges. The GRACE satellite mission provides valuable insights into the Earth system. However, the GRACE products carry a level of uncertainty due to several error sources. While the mission has taken measures to minimize these uncertainties, researchers need to account for them when analyzing the data and to communicate them when reporting findings. This thesis proposes a probabilistic approach to incorporate the Total Water Storage Anomaly (TWSA) data from GRACE(-FO). By accounting for the uncertainty in the TWSA data, this approach can provide a more comprehensive understanding of drought conditions, which is essential for decision makers managing water resources and responding to drought events.
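The hindcasting idea - learning the relation between model-based predictors and GRACE TWSA over the GRACE era, then predicting TWSA for the pre-GRACE period - can be sketched generically. The snippet below is a minimal illustration with synthetic stand-in data and a gradient-boosting regressor chosen for convenience; the thesis compares several data-driven and machine learning approaches and validates against SLR, none of which is reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: monthly predictors from hydrological/atmospheric models
# (e.g. soil moisture, snow, precipitation, temperature anomalies) and GRACE TWSA
# for one basin. Shapes, predictors and coefficients are assumptions for illustration.
rng = np.random.default_rng(42)
n_grace, n_pre = 200, 240                       # GRACE-era months vs. pre-GRACE months
X_grace = rng.normal(size=(n_grace, 4))
y_grace = X_grace @ np.array([5.0, 3.0, -2.0, 1.0]) + rng.normal(0.0, 2.0, n_grace)
X_pre = rng.normal(size=(n_pre, 4))             # predictors for the pre-GRACE period

# Train on the GRACE era and hold part of it out to check skill before hindcasting
X_tr, X_te, y_tr, y_te = train_test_split(X_grace, y_grace, test_size=0.3, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("hold-out R^2 over the GRACE era:", round(model.score(X_te, y_te), 3))

# "Hindcast": reconstruct TWSA for the pre-GRACE period from the model predictors
twsa_hindcast = model.predict(X_pre)
print("reconstructed months:", twsa_hindcast.shape[0])
```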
Item Open Access: Forming a hybrid intelligence system by combining Active Learning and paid crowdsourcing for semantic 3D point cloud segmentation (2023) Kölle, Michael; Sörgel, Uwe (Prof. Dr.-Ing.)

While tremendous advancements have been achieved in recent years in the development of supervised Machine Learning (ML) systems such as Convolutional Neural Networks (CNNs), the most decisive factor for their performance is still the quality of the labeled training data from which the system is supposed to learn. This is why we advocate focusing more on methods to obtain such data, which we expect to be more sustainable than establishing ever new classifiers in the rapidly evolving ML field. In the geospatial domain, however, the generation of training data for ML systems is still rather neglected in research, and experts typically end up occupied with such tedious labeling tasks. In our design of a system for the semantic interpretation of Airborne Laser Scanning (ALS) point clouds, we break with this convention and completely lift labeling obligations from experts. At the same time, human annotation is restricted to only those samples that actually justify manual inspection. This is accomplished by means of a hybrid intelligence system in which the machine, represented by an ML model, actively and iteratively works together with the human component through Active Learning (AL), which acts as a pointer to exactly those most decisive samples. Instead of having an expert label these samples, we propose to outsource this task to a large group of non-specialists, the crowd. But since it is rather unlikely that enough volunteers would participate in such crowdsourcing campaigns owing to the tedious nature of labeling, we argue for attracting workers with monetary incentives, i.e., we employ paid crowdsourcing. Relying on such platforms, we typically have access to a vast pool of prospective workers, which guarantees that jobs are completed promptly. Crowdworkers thus become human processing units that behave similarly to the electronic processing units of this hybrid intelligence system performing the tasks of the machine part. With respect to the latter, we not only evaluate whether an AL-based pipeline works for the semantic segmentation of ALS point clouds, but also shed light on the question of why it works. As crucial components of our pipeline, we test and enhance different AL sampling strategies in conjunction with both a conventional feature-driven classifier and a data-driven CNN classification module. In this regard, we aim to select AL points in such a manner that samples are not only informative for the machine, but also feasible for non-experts to interpret. These theoretical formulations are verified by various experiments in which we replace the frequently assumed but highly unrealistic error-free oracle with the simulated imperfect oracles we are always confronted with when working with humans. Furthermore, we find that the need for labeled data, which is already reduced through AL to a small fraction (typically ≪1 % of Passive Learning training points), can be minimized even further when we reuse information from a given source domain for the semantic enrichment of a specific target domain, i.e., we utilize AL as a means for Domain Adaptation. As for the human component of our hybrid intelligence system, the special challenge we face is monetarily motivated workers with a wide variety of educational and cultural backgrounds as well as very different mindsets regarding the quality they are willing to deliver. Consequently, we are confronted with great inhomogeneity in the quality of the results received. Thus, when designing respective campaigns, special attention to quality control is required to automatically reject submissions of low quality and to refine accepted contributions in the sense of the Wisdom of the Crowds principle. We further explore ways to support the crowd in labeling by experimenting with different data modalities (discretized point cloud vs. continuous textured 3D mesh surface), and also aim to shift the motivation from a purely extrinsic nature (i.e., payment) to a more intrinsic one, which we intend to trigger through gamification. Eventually, by casting these different concepts into the so-called CATEGORISE framework, we constitute the aspired hybrid intelligence system and employ it for the semantic enrichment of ALS point clouds of different characteristics, enabled through learning from the (paid) crowd.
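The AL component of such a pipeline follows a standard pool-based query loop: train a classifier on the small labeled set, score the unlabeled pool by uncertainty, and hand the most uncertain samples to an oracle (here, the crowd). The sketch below illustrates that loop with entropy-based sampling, a random forest and synthetic features; the classifier, the sampling strategy and the simulated oracle are illustrative stand-ins, not the thesis's actual CATEGORISE components.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for per-point features and class labels of an ALS point cloud
X_pool, y_pool = make_classification(n_samples=5000, n_features=12, n_informative=8,
                                     n_classes=3, random_state=0)
rng = np.random.default_rng(0)
labeled = rng.choice(len(X_pool), size=50, replace=False)     # tiny initial training set
unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
for iteration in range(5):
    clf.fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool[unlabeled])
    # Entropy-based uncertainty: high entropy = most informative for the machine
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    query = unlabeled[np.argsort(entropy)[-100:]]             # batch sent to the oracle
    # In the hybrid intelligence system the crowd would label these points;
    # here the (possibly imperfect) oracle is simulated by revealing ground truth.
    labeled = np.concatenate([labeled, query])
    unlabeled = np.setdiff1d(unlabeled, query)
    print(f"iteration {iteration}: {len(labeled)} labeled points")
```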
Item Open Access: Editorial for PFG issue 5/2023 (2023) Gerke, Markus; Cramer, Michael

Item Open Access: Exploring the performances of SAR altimetry and improvements offered by fully focused SAR (2021) Wu, Yuwei

With the development of altimetry techniques, the measurement principle has shifted from the conventional pulse-limited principle to the delay-Doppler principle since CryoSat-2. Delay-Doppler altimetry presents scientists with the chance to develop new processing schemes and improved products that maximize the benefits of the measurements. Nevertheless, one of the challenges of delay-Doppler altimetry lies in the complexity of the post-processing, especially the delay-Doppler processing. The focus of this thesis is to better understand delay-Doppler and fully focused SAR altimetry. The thesis compares the retrieved waveforms and the resulting water level time series obtained with different altimetry principles, processing options and retracking methods. Using the platform SARvatore for delay-Doppler altimetry and SMAP for fully focused SAR altimetry, different processing options (data posting rate, Hamming window and zero padding) and different retrackers (the SAMOSA family for SARvatore, PTR for SMAP) can be applied and compared. For SARvatore, our results reveal that the waveforms generated with different configurations have different peaks. For SMAP, applying zero padding or a Hamming window had very little impact; the differences mainly came from the different retracking methods. Our results also show that fully focused SAR does not bring a significant improvement when applied to Sentinel-3 data. In summary, different configurations and retracking methods can significantly affect the shape of the waveforms and the ranges derived from them. According to the experiments in this thesis, the configuration with an 80 Hz data posting rate, Hamming window, zero padding, extended receiving window and the SAMOSA++ retracker offers the best performance.
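Retracking - locating the point on the returned waveform that corresponds to the range to the surface - is central to the comparison above. The physical retrackers actually compared (the SAMOSA family and PTR) are not reproduced here; the snippet below only illustrates the concept with the classic OCOG amplitude and a simple threshold crossing on a synthetic waveform, an assumption-laden stand-in rather than any of the thesis's retrackers.

```python
import numpy as np

def ocog_threshold_retrack(waveform, threshold=0.5):
    """Estimate a sub-gate retracking position for an altimeter waveform.

    Uses the classic OCOG amplitude and a simple threshold crossing on the
    leading edge; a generic illustration, not the SAMOSA or PTR retrackers.
    """
    p = np.asarray(waveform, dtype=float)
    amplitude = np.sqrt((p**4).sum() / (p**2).sum())       # OCOG amplitude
    level = threshold * amplitude
    first = np.nonzero(p >= level)[0][0]                   # first gate above the level
    if first == 0:
        return 0.0
    # Linear interpolation between the gates just below and above the threshold
    return (first - 1) + (level - p[first - 1]) / (p[first] - p[first - 1])

# Synthetic Brown-like waveform: noise floor, steep leading edge, slowly decaying tail
gates = np.arange(128)
wf = 0.02 + 1.0 / (1.0 + np.exp(-(gates - 60) / 2.0)) * np.exp(-0.01 * np.clip(gates - 60, 0, None))
gate = ocog_threshold_retrack(wf, threshold=0.5)
print("retracking gate:", round(gate, 2))   # the gate offset maps to a range correction
```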
Item Open Access: New methods for 3D reconstructions using high resolution satellite data (2021) Gong, Ke; Fritsch, Dieter (Prof. Dr.-Ing. habil. Prof. h.c.)

Item Open Access: Analysis of water volume change of the lakes and reservoirs in the Mississippi River basin using Landsat imagery and satellite altimetry (2021) Wang, Lingke

In recent years, the demand for freshwater has been steadily increasing owing to population growth and economic expansion. Surface waters such as lakes and reservoirs are a dominant factor in humanity's freshwater provision. Analyzing changes in their water storage is consequently vital for understanding the global water cycle and water resources. However, water volume changes in lakes or reservoirs cannot be measured directly from space; they must be inferred from lake areas and lake water levels. Lake area can be measured globally from space, but lake water level is not easy to obtain globally: although in situ stations can measure lake water level with high accuracy, their number is small and in situ data are accessible only for some lakes and with few measurement epochs. The altimetry technique can generate water level time series for many lakes, but its coverage is not global owing to the distance between satellite tracks and the gaps between different missions. Therefore, in situ data and satellite altimetry measurements of the water levels of lakes and reservoirs are not always available; for example, only 22 of the 90 lakes and reservoirs studied here in the Mississippi River Basin are covered by satellite altimetry or in situ stations. For cases in which neither in situ data nor altimetry measurements are available, this research proposes an alternative method to estimate the water level from a Digital Elevation Model (DEM). Because satellite imagery offers global coverage and a DEM is a global digital representation of the land surface elevation with respect to a reference datum, this approach allows global water volume changes to be evaluated by acquiring lake area data from space and lake height data from the DEM. The objective of this study is thus to monitor changes in the water volume of lakes and reservoirs even when neither in situ data nor satellite altimetry measurements are available. To this end, we investigate 90 lakes and reservoirs in the Mississippi River Basin and develop an alternative remote sensing technique to monitor water volume changes by combining an improved water mask with the DEM. We also propose practical methods to detect the shoreline pixels of the water body from the improved water mask. Under the assumption that all pixels on the shoreline should have the same height, four water level estimation models are developed, based respectively on statistical analysis, frequency maps, change pixels and pixel pair analysis. The study estimates the time series of lake height from the water level estimation model and obtains the time series of lake surface area from HydroSat. Subsequently, it establishes a unique function between lake water level and lake surface area and then derives the function between lake water volume change and lake surface area. Finally, the study analyses the water volume changes of lakes and reservoirs in the Mississippi River Basin using this alternative remote sensing method. The four water level estimation models are evaluated as follows: the first model, based on statistical analysis, achieves an average correlation of 0.62 and an average RMSE of 0.91 m; it works in the majority of situations but removes outliers too aggressively in some cases. The second model, based on frequency maps, is more general than the first, with an average correlation of 0.66 and an average RMSE of 1.11 m. The third model, based on change pixels, reaches an average correlation of 0.71 and an average RMSE of 0.99 m. The fourth model, based on pixel pair analysis, obtains a mean correlation of 0.67 and a mean RMSE of 1.00 m. The models also behave differently in different seasons and thus exhibit distinct monthly behaviour. To conclude, these validation results show that the alternative method can be applied to different lakes and reservoirs in the absence of water level observations and makes it possible to monitor water volume changes over long periods.
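The chain from water-mask area and estimated water level to volume change can be sketched compactly. The snippet below is a generic illustration with made-up numbers: it fits a simple linear level-area relationship h(A) and integrates dV = A dh to obtain volume change relative to a reference area. The thesis's four water level estimation models and its specific area-volume function are not reproduced here.

```python
import numpy as np

# Hypothetical epochs for one reservoir: surface area from the water mask (km^2)
# and water level from a DEM-based estimation model (m); numbers are made up.
area  = np.array([48.2, 51.0, 55.3, 60.1, 63.8, 58.4, 52.9])
level = np.array([101.2, 101.9, 102.8, 104.0, 104.9, 103.5, 102.3])

# Step 1: fit the level-area relationship h(A) (here simply linear: h ≈ a*A + b)
a, b = np.polyfit(area, level, deg=1)

# Step 2: integrate dV = A dh = a*A dA, giving ΔV(A) = a*(A^2 - A_ref^2)/2
# relative to the smallest observed area; km^2 * m = 1e-3 km^3
A_ref = area.min()
dV_km3 = a * (area**2 - A_ref**2) / 2.0 * 1e-3
for A, v in zip(area, dV_km3):
    print(f"area {A:6.1f} km^2 -> volume change {v:7.4f} km^3")
```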
Item Open Access: Geospatial information research : state of the art, case studies and future perspectives (2022) Bill, Ralf; Blankenbach, Jörg; Breunig, Martin; Haunert, Jan-Henrik; Heipke, Christian; Herle, Stefan; Maas, Hans-Gerd; Mayer, Helmut; Meng, Liqiu; Rottensteiner, Franz; Schiewe, Jochen; Sester, Monika; Sörgel, Uwe; Werner, Martin

Geospatial information science (GI science) is concerned with the development and application of geodetic and information science methods for modeling, acquiring, sharing, managing, exploring, analyzing, synthesizing, visualizing, and evaluating data on spatio-temporal phenomena related to the Earth. As an interdisciplinary scientific discipline, it focuses on developing and adapting information technologies to understand processes on the Earth and human-place interactions, to detect and predict trends and patterns in the observed data, and to support decision making. The authors - members of the Geoinformatics division of DGK, the Committee on Geodesy of the Bavarian Academy of Sciences and Humanities, representing geodetic research and university teaching in Germany - have prepared this paper as a means to point out future research questions and directions in geospatial information science. For the different facets of geospatial information science, the state of the art is presented and illustrated largely with the authors' own case studies. The paper thus shows which contributions the German GI community makes and which research perspectives arise in geospatial information science. The paper further demonstrates that GI science, with its expertise in data acquisition and interpretation, information modeling and management, integration, decision support, visualization, and dissemination, can help solve many of the grand challenges facing society today and in the future.

Item Open Access: CRBeDaSet : a benchmark dataset for high accuracy close range 3D object reconstruction (2023) Gabara, Grzegorz; Sawicki, Piotr

This paper presents CRBeDaSet, a new benchmark dataset designed for evaluating close range, image-based 3D modeling and reconstruction techniques, together with the first empirical experiences of its use. The test object is a medium-sized building whose elevations are characterized by diverse surface textures. The dataset contains: the geodetic spatial control network (12 stabilized ground points determined using iterative multi-observation parametric adjustment) and the photogrammetric network (32 artificially signalized and 18 defined natural control points), measured with a Leica TS30 total station; 36 terrestrial, mainly convergent photos acquired from elevated camera standpoints with a non-metric digital single-lens reflex Nikon D5100 camera (ground sample distance approx. 3 mm); the complex results of the bundle block adjustment with simultaneous camera calibration performed in the Pictran software package; and the colored point clouds (ca. 250 million points) from terrestrial laser scanning acquired with the Leica ScanStation C10 and post-processed in the Leica Cyclone™ SCAN software (ver. 2022.1.1), which were denoised, filtered, and classified according to the LoD3 standard (ca. 62 million points). Existing datasets and benchmarks are also described and evaluated in the paper.
The proposed photogrammetric dataset was experimentally tested in the open-source application GRAPHOS and in the commercial suites ContextCapture, Metashape, PhotoScan, Pix4Dmapper, and RealityCapture. As a first evaluation experience, the difficulties and errors that occurred in the software used during digital processing of the dataset are shown and discussed. The proposed CRBeDaSet benchmark dataset allows high-accuracy ("mm" range) photogrammetric 3D object reconstruction in close range based on multi-view uncalibrated imagery, dense image matching techniques, and the generated dense point clouds.

Item Open Access: Spatio-temporal evaluation of GPM-IMERGV6.0 final run precipitation product in capturing extreme precipitation events across Iran (2022) Bakhtar, Aydin; Rahmati, Akbar; Shayeghi, Afshin; Teymoori, Javad; Ghajarnia, Navid; Saemian, Peyman

Extreme precipitation events such as floods and droughts have occurred with higher frequency over recent decades as a result of climate change and anthropogenic activities. To understand and mitigate such events, it is crucial to investigate their spatio-temporal variations globally or regionally. Global precipitation products provide an alternative to in situ observations over such regions. In this study, we have evaluated the performance of the latest version of the Global Precipitation Measurement-Integrated Multi-satellitE Retrievals, the GPM-IMERG V6.0 Final Run (GPM-IMERGF). To this end, we have employed the ten most common extreme precipitation indices, including maximum indices (Rx1day, Rx5day, CDD, and CWD), percentile indices (R95pTOT and R99pTOT), and absolute threshold indices (R10mm, R20mm, SDII, and PRCPTOT). Overall, the spatial distribution of the error metrics showed that the highest and lowest accuracy of GPM-IMERGF were obtained for the absolute threshold indices and the percentile indices, respectively. Regarding the spatial distribution of the results, the highest accuracy of GPM-IMERGF in capturing extreme precipitation was observed over the western highlands, while the worst results were obtained along the Caspian Sea region. Our analysis can contribute significantly to various hydro-meteorological applications in the study region, including identifying drought- and flood-prone areas and water resources planning.
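Most of the indices listed above follow standard ETCCDI-style definitions, which can be computed directly from a daily precipitation series. The sketch below implements a subset of them (the percentile indices R95pTOT and R99pTOT are omitted since they need a base-period climatology); it is a generic illustration with synthetic data, not the study's GPM-IMERGF processing.

```python
import numpy as np

def extreme_precip_indices(daily_pr, wet_day=1.0):
    """A subset of ETCCDI-style indices from one year of daily precipitation (mm/day)."""
    pr = np.asarray(daily_pr, dtype=float)
    wet = pr >= wet_day
    def longest_run(mask):                                  # longest consecutive True run
        best = run = 0
        for m in mask:
            run = run + 1 if m else 0
            best = max(best, run)
        return best
    return {
        "Rx1day": pr.max(),                                        # max 1-day precipitation
        "Rx5day": np.convolve(pr, np.ones(5), mode="valid").max(), # max 5-day total
        "R10mm": int((pr >= 10.0).sum()),                          # heavy precipitation days
        "R20mm": int((pr >= 20.0).sum()),                          # very heavy precipitation days
        "PRCPTOT": pr[wet].sum(),                                  # total wet-day precipitation
        "SDII": pr[wet].sum() / max(int(wet.sum()), 1),            # simple daily intensity index
        "CDD": longest_run(~wet),                                  # max consecutive dry days
        "CWD": longest_run(wet),                                   # max consecutive wet days
    }

# Example with synthetic daily data for one station-year
rng = np.random.default_rng(1)
daily = rng.gamma(shape=0.4, scale=6.0, size=365) * (rng.random(365) < 0.35)
print(extreme_precip_indices(daily))
```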