06 Fakultät Luft- und Raumfahrttechnik und Geodäsie

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/7


Search Results

Now showing 1 - 10 of 16
  • Item (Open Access)
    Towards improved targetless registration and deformation analysis of TLS point clouds using patch-based segmentation
    (2023) Yang, Yihui; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.)
    The geometric changes in the real world can be captured by measuring and comparing the 3D coordinates of object surfaces. Traditional point-wise measurements with low spatial resolution may fail to detect inhomogeneous, anisotropic and unexpected deformations, and thus cannot reveal complex deformation processes. 3D point clouds generated from laser scanning or photogrammetric techniques have opened up opportunities for an area-wise acquisition of spatial information. In particular, terrestrial laser scanning (TLS) exhibits rapid development and wide application in areal geodetic monitoring owing to the high resolution and high quality of acquired point cloud data. However, several issues in the process chain of TLS-based deformation monitoring are still not solved satisfactorily. This thesis mainly focuses on the targetless registration and deformation analysis of TLS point clouds, aiming to develop novel data-driven methods to tackle the current challenges. For most deformation processes of natural scenes, no shape deformations occur in some local areas (i.e., these areas are rigid), and even the deformation directions show a certain level of consistency when these areas are small enough. Further point cloud processing, like stability and deformation analyses, can benefit from the assumptions of local rigidity and consistency of deformed point clouds. In this thesis, therefore, three typical types of locally rigid patches - small planar patches, geometric primitives, and quasi-rigid areas - are generated from 3D point clouds by specific segmentation techniques. These patches, on the one hand, can preserve the boundaries between rigid and non-rigid areas and thus enable spatial separation with respect to surface stability. On the other hand, local geometric information and empirical stochastic models can be readily determined from the points in each patch.
Based on these segmented rigid patches, targetless registration and deformation analysis of deformed TLS point clouds can be improved regarding accuracy and spatial resolution. Specifically, small planar patches like supervoxels are utilized to distinguish the stable and unstable areas in an iterative registration process, thus ensuring only relatively stable points are involved in estimating transformation parameters. The experimental results show that the proposed targetless registration method significantly improves the registration accuracy. These small planar patches are also exploited to develop a novel variant of the multiscale model-to-model cloud comparison (M3C2) algorithm, which constructs prisms extending from planar patches instead of the cylinders in standard M3C2. This new method separates actual surface variations from measurement uncertainties, thus yielding lower-uncertainty and higher-resolution deformations. A coarse-to-fine segmentation framework is used to extract multiple geometric primitives from point clouds, and rigorous parameter estimations are performed individually to derive high-precision parametric deformations. In addition, a generalized local registration-based pipeline is proposed to derive dense displacement vectors based on segmented quasi-rigid areas that are matched via areal geometric feature descriptors. All proposed methods are successfully verified and evaluated on simulated and/or real point cloud data. Guidance on choosing among the proposed deformation analysis methods for specific scenarios or applications is also provided in this thesis.
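The prism-based patch comparison described above can be illustrated with a minimal sketch. This is not the thesis's implementation: the function name, the square prism footprint, and the aggregation by mean projection are simplifying assumptions made here for illustration.

```python
import numpy as np

def patch_m3c2_distance(cloud1, cloud2, center, normal, half_size=0.5, depth=2.0):
    """Deformation between two epochs along a planar patch's normal.

    Points of each cloud falling inside a prism (square footprint of
    half-width `half_size`, extent `depth` along the normal, both in the
    cloud's units) are projected onto the normal; the difference of mean
    projections is the deformation estimate for this patch.
    """
    normal = normal / np.linalg.norm(normal)
    # Build two in-plane axes orthogonal to the patch normal
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:          # normal is (anti)parallel to z
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)

    def mean_offset(cloud):
        d = cloud - center
        inside = ((np.abs(d @ u) < half_size) & (np.abs(d @ v) < half_size)
                  & (np.abs(d @ normal) < depth))
        return (d[inside] @ normal).mean() if inside.any() else np.nan

    return mean_offset(cloud2) - mean_offset(cloud1)
```

For a patch on a surface that moved 10 cm along its normal between epochs, the function returns approximately 0.1 (in the cloud's units), independent of the in-plane point distribution.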
  • Item (Open Access)
    Analyzing and characterizing spaceborne observation of water storage variation : past, present, future
    (2024) Saemian, Peyman; Sneeuw, Nico (Prof. Dr.-Ing.)
    Water storage is an indispensable constituent of the intricate water cycle, as it governs the availability and distribution of this precious resource. Any alteration in water storage can trigger a cascade of consequences, affecting not only our agricultural practices but also the well-being of various ecosystems and the occurrence of natural hazards. Therefore, it is essential to monitor and manage water storage levels prudently to ensure a sustainable future for our planet. Despite significant advancements in ground-based measurements and modeling techniques, accurately measuring water storage variation remained a major challenge for a long time. Since 2002, the Gravity Recovery and Climate Experiment (GRACE) and its successor GRACE Follow-On (GRACE-FO) satellites have revolutionized our understanding of the Earth's water cycle. By detecting variations in the Earth's gravity field caused by changes in water distribution, these satellites can precisely measure changes in total water storage (TWS) across the entire globe, providing a truly comprehensive view of the world's water resources. This information has proved invaluable for understanding how water resources are changing over time, and for developing strategies to manage these resources sustainably. However, GRACE and GRACE-FO are subject to various challenges that must be addressed to exploit GRACE observations more effectively for scientific and practical purposes. This thesis aims to address some of the challenges faced by GRACE and GRACE-FO. Since the inception of the GRACE mission, scholars have commonly extracted mass changes from observations by approximating the Earth's gravity field with mathematical functions termed spherical harmonics. Various institutions have already processed GRACE(-FO) data, known as level-2 data in the GRACE community, each applying its own constraints, approaches, and models.
However, this processed data necessitates post-processing to be used for several applications, such as hydrology and climate research. In this thesis, we evaluate various methods of processing GRACE(-FO) level-2 data and assess the spatio-temporal effect of the post-processing steps. Furthermore, we aim to compare the consistency between GRACE and its successor mission, GRACE-FO, in terms of data quality and measurement accuracy. By analyzing and comparing the data from these two missions, we can identify any potential discrepancies or differences and establish the level of confidence in the accuracy and reliability of the GRACE-FO measurements. Finally, we compare the processed level-3 products with the level-3 products that are presently accessible online. The relatively short record of the GRACE measurements, compared to other satellite missions and observational records, can limit studies that require long-term data. This short record makes it challenging to separate long-term signals from short-term variability and to validate the data with ground-based measurements or other satellite missions. To address this limitation, this thesis expands the temporal coverage of GRACE(-FO) observations using global hydrological, atmospheric, and reanalysis models. First, we assess these models in estimating the TWS variation at a global scale. We then compare the performance of various methods, including data-driven and machine learning approaches, in incorporating these models to reconstruct GRACE TWS change. The results are also validated against Satellite Laser Ranging (SLR) observations over the pre-GRACE period. This thesis thus develops a hindcasted GRACE product, which provides a better understanding of the changes in the Earth's water storage on a longer time scale. The GRACE satellite mission detects changes in the overall water storage in a specific region but cannot distinguish between the different compartments of TWS, such as surface water, groundwater, and soil moisture.
Understanding these individual components is crucial for managing water resources and addressing the effects of droughts and floods. This study aims to integrate various data sources to improve our understanding of water storage variations at the continental to basin scale, including water fluxes, lake water level, and lake storage change data. Additionally, the study demonstrates the importance of combining GRACE(-FO) observations with other measurements, such as piezometric wells and rain gauges, to understand the water scarcity predicament in Iran and other regions facing similar challenges. The GRACE satellite mission provides valuable insights into the Earth's system. However, GRACE products carry a level of uncertainty due to several error sources. While the mission has taken measures to minimize these uncertainties, researchers need to account for them when analyzing the data and communicate them when reporting findings. This thesis proposes a probabilistic approach to incorporating the Total Water Storage Anomaly (TWSA) data from GRACE(-FO). By accounting for the uncertainty in the TWSA data, this approach can provide a more comprehensive understanding of drought conditions, which is essential for decision makers managing water resources and responding to drought events.
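The idea of folding TWSA uncertainty into a drought assessment can be sketched under a simple Gaussian error assumption; the function name, the threshold convention, and the Gaussian model itself are illustrative choices here, not the thesis's actual formulation.

```python
import math

def drought_probability(twsa_mean, twsa_sigma, threshold):
    """P(true TWSA < threshold) under a Gaussian error model.

    Instead of classifying drought from the TWSA point estimate alone,
    the reported uncertainty is propagated into a probability of being
    below a drought threshold (all values in the same unit, e.g. cm of
    equivalent water height).
    """
    z = (threshold - twsa_mean) / twsa_sigma
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A grid cell whose TWSA estimate sits exactly at the threshold gets probability 0.5; one whose estimate is two sigmas below it gets roughly 0.98, so decision makers can rank regions by drought likelihood rather than by a hard yes/no flag.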
  • Item (Open Access)
    Forming a hybrid intelligence system by combining Active Learning and paid crowdsourcing for semantic 3D point cloud segmentation
    (2023) Kölle, Michael; Sörgel, Uwe (Prof. Dr.-Ing.)
    While in recent years tremendous advancements have been achieved in the development of supervised Machine Learning (ML) systems such as Convolutional Neural Networks (CNNs), still the most decisive factor for their performance is the quality of the labeled training data from which the system is supposed to learn. This is why we advocate focusing more on methods to obtain such data, which we expect to be more sustainable than establishing ever new classifiers in the rapidly evolving ML field. In the geospatial domain, however, the generation process of training data for ML systems is still rather neglected in research, with experts typically ending up occupied with such tedious labeling tasks. In our design of a system for the semantic interpretation of Airborne Laser Scanning (ALS) point clouds, we break with this convention and completely lift labeling obligations from experts. At the same time, human annotation is restricted to only those samples that actually justify manual inspection. This is accomplished by means of a hybrid intelligence system in which the machine, represented by an ML model, actively and iteratively works together with the human component through Active Learning (AL), which acts as a pointer to exactly those most decisive samples. Instead of having an expert label these samples, we propose to outsource this task to a large group of non-specialists, the crowd. But since it is rather unlikely that enough volunteers would participate in such crowdsourcing campaigns due to the tedious nature of labeling, we argue for attracting workers with monetary incentives, i.e., we employ paid crowdsourcing. Relying on such platforms, we typically have access to a vast pool of prospective workers, guaranteeing prompt completion of jobs. Thus, crowdworkers become human processing units that behave similarly to the electronic processing units of this hybrid intelligence system performing the tasks of the machine part.
With respect to the latter, we not only evaluate whether an AL-based pipeline works for the semantic segmentation of ALS point clouds, but also shed light on the question of why it works. As crucial components of our pipeline, we test and enhance different AL sampling strategies in conjunction with both a conventional feature-driven classifier and a data-driven CNN classification module. In this regard, we aim to select AL points in such a manner that samples are not only informative for the machine, but also feasible for non-experts to interpret. These theoretical formulations are verified by various experiments in which we replace the frequently assumed but highly unrealistic error-free oracle with simulated imperfect oracles of the kind we are always confronted with when working with humans. Furthermore, we find that the need for labeled data, which is already reduced through AL to a small fraction (typically ≪1 % of Passive Learning training points), can be minimized even further when we reuse information from a given source domain for the semantic enrichment of a specific target domain, i.e., we utilize AL as a means for Domain Adaptation. As for the human component of our hybrid intelligence system, the special challenge we face is monetarily motivated workers with a wide variety of educational and cultural backgrounds as well as widely differing mindsets regarding the quality they are willing to deliver. Consequently, we are confronted with great quality inhomogeneity in the results received. Thus, when designing respective campaigns, special attention to quality control is required to be able to automatically reject submissions of low quality and to refine accepted contributions in the sense of the Wisdom of the Crowds principle. We further explore ways to support the crowd in labeling by experimenting with different data modalities (discretized point cloud vs.
continuous textured 3D mesh surface), and also aim to shift the motivation from a purely extrinsic nature (i.e., payment) to a more intrinsic one, which we intend to trigger through gamification. Eventually, by casting these different concepts into the so-called CATEGORISE framework, we realize the envisaged hybrid intelligence system and employ it for the semantic enrichment of ALS point clouds of different characteristics, enabled through learning from the (paid) crowd.
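The AL "pointer to the most decisive samples" is commonly realized by uncertainty sampling; a minimal entropy-based sketch is shown below. The function name and the flat batch selection are simplifications assumed here, not the thesis's actual sampling strategy.

```python
import numpy as np

def select_al_batch(class_probs, batch_size):
    """Pick the `batch_size` most informative samples by predictive entropy.

    `class_probs`: (n_samples, n_classes) array of softmax outputs from the
    current classifier. Returns indices of the samples whose labels the
    crowd should provide next, most uncertain first.
    """
    eps = 1e-12                                     # avoid log(0)
    entropy = -(class_probs * np.log(class_probs + eps)).sum(axis=1)
    return np.argsort(entropy)[-batch_size:][::-1]  # highest entropy first
```

In an iterative loop, the model is retrained after each crowd-labeled batch, so the entropy ranking keeps shifting toward whatever the classifier currently finds ambiguous.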
  • Item (Open Access)
    Editorial for PFG issue 5/2023
    (2023) Gerke, Markus; Cramer, Michael
  • Item (Open Access)
    CRBeDaSet : a benchmark dataset for high accuracy close range 3D object reconstruction
    (2023) Gabara, Grzegorz; Sawicki, Piotr
    This paper presents the CRBeDaSet - a new benchmark dataset designed for evaluating close range, image-based 3D modeling and reconstruction techniques - and the first empirical experiences of its use. The test object is a medium-sized building whose elevations are characterized by diverse surface textures. The dataset contains: the geodetic spatial control network (12 stabilized ground points determined using iterative multi-observation parametric adjustment) and the photogrammetric network (32 artificial signalized and 18 defined natural control points), measured using a Leica TS30 total station; 36 terrestrial, mainly convergent photos, acquired from elevated camera standpoints with a non-metric digital single-lens reflex Nikon D5100 camera (ground sample distance approx. 3 mm); the complete results of the bundle block adjustment with simultaneous camera calibration performed in the Pictran software package; and the colored point clouds (ca. 250 million points) from terrestrial laser scanning acquired using the Leica ScanStation C10 and post-processed in the Leica Cyclone™ SCAN software (ver. 2022.1.1), which were denoised, filtered, and classified using the LoD3 standard (ca. 62 million points). Existing datasets and benchmarks are also described and evaluated in the paper. The proposed photogrammetric dataset was experimentally tested in the open-source application GRAPHOS and the commercial suites ContextCapture, Metashape, PhotoScan, Pix4Dmapper, and RealityCapture. As a first experience in its evaluation, the difficulties and errors that occurred in the software used during digital processing of the dataset are shown and discussed. The proposed CRBeDaSet benchmark dataset allows obtaining high-accuracy (mm range) photogrammetric 3D object reconstruction in close range based on multi-view uncalibrated imagery, dense image matching techniques, and generated dense point clouds.
  • Item (Open Access)
    Method of development of a new regional ionosphere model (RIM) to improve static single-frequency precise point positioning (SF-PPP) for Egypt using Bernese GNSS software
    (2023) Abdallah, Ashraf; Agag, Tarek; Schwieger, Volker
    Due to the sparse coverage of IGS stations in Africa, especially over North Africa, and the rapid expansion of infrastructure construction in Egypt, a geodetic CORS station network was established in 2012. These CORS stations are operated by the Egyptian Surveying Authority (Egy. SA) and cover the whole of Egypt. The paper presents a fully developed regional ionosphere model (RIM) based on the Egyptian CORS stations. The new model and the PPP solution were obtained using Bernese GNSS software V. 5.2. An observation data series of eight days (DOY 201-208, 2019) was used in this study. Eighteen stations were used to develop the RIM for each day; fifteen stations were used to validate the new RIM. A static SF-PPP solution was obtained using the CODE-GIM and RIM models. Compared to the reference network solution, the solution based on the newly developed RIM showed a mean error of 0.06 m in the East direction, 0.13 m in the North direction, and 0.21 m in the height direction. In the East, North, and height directions, this solution improves on the SF-PPP result achieved with the Global Ionosphere Maps (CODE-GIM) model by 60%, 68%, and 77%, respectively.
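The correction that any ionosphere map (RIM or CODE-GIM) ultimately supplies to a single-frequency user is a slant delay derived from vertical TEC. A minimal sketch of that first-order correction follows; the single-layer mapping function and the 450 km shell height are standard textbook assumptions, not values taken from this paper.

```python
import numpy as np

F_L1 = 1575.42e6  # GPS L1 carrier frequency in Hz

def slant_iono_delay(vtec_tecu, elevation_deg, re=6371.0, h_iono=450.0):
    """First-order ionospheric group delay (metres) on L1.

    VTEC (in TEC units, 1 TECU = 1e16 electrons/m^2) interpolated from an
    ionosphere map is mapped to the slant path with a single-layer mapping
    function; `re` and `h_iono` are Earth radius and shell height in km.
    """
    # Single-layer model: zenith angle at the ionospheric pierce point
    z = np.radians(90.0 - elevation_deg)
    sin_zp = re / (re + h_iono) * np.sin(z)
    mf = 1.0 / np.sqrt(1.0 - sin_zp**2)
    stec = vtec_tecu * mf              # slant TEC in TECU
    return 40.3e16 * stec / F_L1**2    # group delay in metres
```

One TECU at zenith corresponds to roughly 0.16 m of L1 delay, which is why decimetre-level SF-PPP accuracy hinges on the quality of the ionosphere model, as the comparison above illustrates.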
  • Item (Open Access)
    Understanding the hydrological signature in gravity data
    (2023) Schollmeier, Philipp
    Over the past two decades, successive advancements in Superconducting Gravimeters (SGs) have ushered in a level of precision that enables measuring the impact of groundwater and soil water on gravity. Because of the challenging nature of monitoring the total water volume and the relatively subtle amplitude of the hydrological signal, a comprehensive understanding of the precise hydrological signature in continuous gravity data remains elusive. In this study, I use SG data in conjunction with hydrological measurements from a geoscientific observatory in Germany to find the signature of hydrological signals in gravity data. I scrutinize the various steps involved in extracting this signal, presenting new methodologies, including a technique to eliminate oscillations in gravity residuals that are likely attributable to remaining tidal signals due to an imperfect tidal model. A major contribution of this work is the construction of a data-driven model that incorporates precipitation and soil moisture measurements to elucidate gravity variations. I address critical questions such as the impact of utilizing soil moisture data on the model's performance, determining the optimal model for achieving the closest fit with the gravity measurements, and assessing the applicability of computed model parameters to new epochs. Furthermore, I provide recommendations for refining the model-building process in future investigations. Results show that a convolution of the different hydrological time series with one half of a Gaussian bell curve leads to strong agreement with the gravity measurements. The use of soil moisture data significantly improves the fit, especially when the measurement stations are spatially well distributed. This fit becomes less strong when the computed parameters are applied to new events, but the approach shows promise for some of them.
Enhancing our comprehension of the hydrological influence on gravity measurements holds promising implications, potentially positioning SGs as instruments for monitoring soil water and groundwater in the future. Moreover, this improved understanding could elevate the precision of analyzing other subtle signals, such as the effects of polar motion.
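The convolution of a hydrological time series with one half of a Gaussian bell, as described above, can be sketched as follows; truncating the kernel at four sigma and normalizing it to unit area are my assumptions, not details from the thesis.

```python
import numpy as np

def half_gaussian_response(hydro_series, sigma, dt=1.0):
    """Convolve a hydrological time series with the causal half of a Gaussian.

    The right half of a Gaussian acts as an impulse response: an event such
    as rainfall affects gravity immediately and then decays smoothly.
    `sigma` is the kernel width in the same time unit as the sampling
    interval `dt`.
    """
    t = np.arange(0.0, 4.0 * sigma, dt)   # support out to four sigma
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()                # unit-area response
    # Causal convolution, truncated to the original series length
    return np.convolve(hydro_series, kernel)[: len(hydro_series)]
```

With a unit-area kernel, a sustained constant input converges to the same constant output once the transient has passed, so the fitted scale factors retain a physical interpretation.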
  • Item (Open Access)
    Assessment of ICESat-2 laser altimetry in hydrological applications
    (2024) Wang, Bo; Sneeuw, Nico (Prof. Dr.-Ing.)
    Water bodies act as critical components of the hydrological cycle, serving as reservoirs, lakes, wetlands, and aquifers that store and release water over time. Monitoring changes in the extent and volume of these water bodies is crucial for understanding their role in regulating water flow, maintaining baseflow during dry periods, and supporting ecological habitats. Furthermore, the identification of trends and alterations in water body dynamics aids in detecting potential impacts of climate change and human activities on the hydrological cycle. Historically, gauge stations have been employed to monitor the water level of these bodies since the 19th century. However, their numbers have been dwindling since the 1970s due to maintenance challenges. With the development of satellite altimetry missions, more accurate and continuous monitoring of lakes and rivers has become possible. In recent years, these satellites have offered water level data with different along-track sampling distances. For instance, ICESat-2 offers a sampling distance of 70 cm with a footprint size of ~17 m, while Sentinel-3 provides a sampling distance of 300 m. The temporal resolution ranges from 10 days (Jason-3) to 369 days (CryoSat-2). These advances allow researchers to effectively observe and understand changes in water bodies. Satellite-based laser altimetry has brought a revolutionary advancement in our ability to monitor and study Earth's water bodies with unprecedented precision and extensive spatial coverage. This doctoral thesis explores the diverse applications of ICESat-2 laser altimetry data over inland water bodies. Through these investigations, it aims to advance our understanding of global hydrological processes and to acquire valuable insights that improve water resource management strategies. It is important to understand the error budget of the altimetric observations, one component of which is radial orbit error.
Apart from the altimetric ranging errors, radial orbit errors directly influence the accuracy of the measurement of Earth's surface heights. These errors can be assessed by analyzing the differences of surface heights at ground track intersections, so-called crossover (XO) differences. An effective approach is to model the orbit error by minimizing the residual XO differences with the least-squares (LS) method, commonly known as XO adjustment. This method was implemented in the Arctic region to examine the performance of the LS adjustment over spherical cap geometry and to assess the level of radial orbit error across a large-scale area. This analysis aids in understanding the accuracy and reliability of the ICESat-2 satellite orbit over the Arctic region. The ICESat-2 satellite captures high-resolution observations of Earth's surface, including land and water, thus enabling dense measurements of heights. The green laser used in ICESat-2 has the capability to penetrate water surfaces, allowing measurements not only of the lake water level but also of the nearshore water bottom. This study proposes a novel algorithm that combines ICESat-2 measurements with Landsat imagery to extract lake water level, extent and volume. This algorithm was applied to Lake Mead, resulting in a long-term time series of water level, extent and volume dating back to 1984, derived solely from remote sensing data. The ICESat-2 satellite is equipped with three pairs of laser transmitters, which concurrently generate three pairs of ground tracks. This unique characteristic enables us to derive river surface heights for each ground track and thereby calculate the river slope between two tracks, referred to as the across-track river slope. Moreover, when one ground track passes along the river surface, producing dense measurements, it allows us to obtain the small-scale slope for that specific track, termed the along-track river slope.
Using these methods, both types of slopes were estimated for the entire length of the Rhine River, and the average slope was subsequently derived for each reach along the river.
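The XO adjustment idea can be illustrated with a deliberately simplified model in which each track's radial orbit error is a single bias; real adjustments use richer per-track error models, so this is only a sketch with hypothetical naming.

```python
import numpy as np

def xo_adjustment(crossovers, n_tracks):
    """Estimate per-track radial biases from crossover (XO) differences.

    `crossovers` is a list of (i, j, d) with d = h_i - h_j observed where
    tracks i and j intersect. The model d = b_i - b_j is rank-deficient by
    one (a common offset is unobservable), so track 0 is fixed as the
    datum (b_0 = 0).
    """
    A = np.zeros((len(crossovers), n_tracks))
    y = np.zeros(len(crossovers))
    for row, (i, j, d) in enumerate(crossovers):
        A[row, i], A[row, j], y[row] = 1.0, -1.0, d
    # Least-squares solve with the datum column dropped
    b, *_ = np.linalg.lstsq(A[:, 1:], y, rcond=None)
    return np.concatenate([[0.0], b])
```

After adjustment, the residual XO differences quantify the remaining (unmodeled) radial orbit error, which is the quantity assessed over the Arctic spherical cap in the study above.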
  • Item (Open Access)
    An elementary error model for terrestrial laser scanning
    (2023) Kerekes, Gabriel; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.)
    Terrestrial Laser Scanning (TLS) is a recent method for area-wise deformation analysis in engineering geodesy. After a TLS scan, the result for each epoch is a point cloud that describes the object's geometry. For each point cloud, the stochastic properties are important for a reliable decision concerning the current object geometry. Generally, the stochastic properties are described by a stochastic model. Currently, stochastic models for TLS observations are highly disputed and incomplete. A realistic stochastic model is necessary for typical applications like structural deformation analysis of buildings and civil engineering constructions. This work presents a method to define a stochastic model in the form of a synthetic variance-covariance matrix (SVCM) for TLS observations. It relies on the elementary error theory defined by Bessel and Hagen at the beginning of the 19th century and adapted for geodetic observations by Pelzer and Schwieger at the end of the 20th century. According to this theory, the different types of errors that affect TLS measurements are classified into three groups: non-correlating, functionally correlating, and stochastically correlating errors. For each group, different types of errors are studied based on the error sources that affect TLS observations. These are classified as instrument-specific errors, environment-related errors, and object surface-related errors. Regarding instrument errors, calibration models for high-end laser scanners are studied. For the propagation medium of TLS observations, the effects of air temperature, air pressure and the vertical temperature gradient on TLS distances and vertical angles are studied. An approach based on time series theory is used to extract the spatial correlations between observation lines. For the object's surface properties, the effect of surface roughness and reflectivity on the distance measurement is considered. Both parameters affect the variances and covariances in the stochastic model.
For each of the error types, examples based on the author's own research or the literature are given. After establishing the model, four study cases are used to exemplify the utility of a fully populated SVCM. The scenarios include real objects measured under laboratory and field conditions as well as simulated objects. The first example outlines the results of the SVCM for a simulated wall, with an analysis of the variance and covariance contributions. In the second study case, the role of the SVCM in a sphere adjustment is highlighted. A third study case presents a deformation analysis of a wooden tower. Finally, the fourth example shows how to derive an optimal TLS station point based on the trace of the SVCM. All in all, this thesis contributes a new stochastic model, based on the elementary error theory, in the form of an SVCM for TLS measurements. It may be used for purposes such as analyzing error magnitudes on scanned objects, adjusting surfaces, or finding an optimal TLS station position with regard to predefined criteria.
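The elementary error propagation behind an SVCM can be sketched generically: non-correlating errors contribute a diagonal part, while each (functionally or stochastically) correlating group k contributes F_k Σ_k F_kᵀ, where F_k maps the group's elementary errors onto the observations. The function signature below is hypothetical; only the grouping follows the theory described.

```python
import numpy as np

def build_svcm(sigma_nc, influence_mats, group_covs):
    """Synthetic variance-covariance matrix from elementary error groups.

    sigma_nc:       per-observation standard deviations of the
                    non-correlating errors (length n).
    influence_mats: list of (n, m_k) matrices F_k mapping each correlating
                    group's m_k elementary errors onto the n observations.
    group_covs:     list of (m_k, m_k) covariance matrices Sigma_k.
    """
    svcm = np.diag(np.asarray(sigma_nc) ** 2)   # non-correlating part
    for F, cov in zip(influence_mats, group_covs):
        svcm += F @ cov @ F.T                   # correlating contributions
    return svcm
```

Even this toy version shows the key property exploited in the study cases: correlating groups populate the off-diagonal entries, so ignoring them (a diagonal-only model) misstates the uncertainty of any quantity derived from many scanned points.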