06 Fakultät Luft- und Raumfahrttechnik und Geodäsie

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/7

Now showing 1 - 10 of 23
  • Item (Open Access)
    Towards improved targetless registration and deformation analysis of TLS point clouds using patch-based segmentation
    (2023) Yang, Yihui; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.)
    Geometric changes in the real world can be captured by measuring and comparing the 3D coordinates of object surfaces. Traditional point-wise measurements with low spatial resolution may fail to detect inhomogeneous, anisotropic and unexpected deformations, and thus cannot reveal complex deformation processes. 3D point clouds generated from laser scanning or photogrammetric techniques have opened up opportunities for an area-wise acquisition of spatial information. In particular, terrestrial laser scanning (TLS) exhibits rapid development and wide application in areal geodetic monitoring owing to the high resolution and high quality of acquired point cloud data. However, several issues in the process chain of TLS-based deformation monitoring are still not solved satisfactorily. This thesis mainly focuses on the targetless registration and deformation analysis of TLS point clouds, aiming to develop novel data-driven methods to tackle the current challenges. For most deformation processes of natural scenes, no shape deformations occur in some local areas (i.e., these areas are rigid), and even the deformation directions show a certain level of consistency when these areas are small enough. Further point cloud processing, like stability and deformation analyses, can benefit from these assumptions of local rigidity and consistency of deformed point clouds. In this thesis, therefore, three typical types of locally rigid patches - small planar patches, geometric primitives, and quasi-rigid areas - are generated from 3D point clouds by specific segmentation techniques. These patches, on the one hand, can preserve the boundaries between rigid and non-rigid areas and thus enable spatial separation with respect to surface stability. On the other hand, local geometric information and empirical stochastic models can be readily determined from the points in each patch.
Based on these segmented rigid patches, targetless registration and deformation analysis of deformed TLS point clouds can be improved regarding accuracy and spatial resolution. Specifically, small planar patches like supervoxels are utilized to distinguish stable from unstable areas in an iterative registration process, thus ensuring that only relatively stable points are involved in estimating the transformation parameters. The experimental results show that the proposed targetless registration method significantly improves registration accuracy. These small planar patches are also exploited to develop a novel variant of the multiscale model-to-model cloud comparison (M3C2) algorithm, which constructs prisms extending from planar patches instead of the cylinders used in standard M3C2. This new method separates actual surface variations from measurement uncertainties, thus yielding lower-uncertainty and higher-resolution deformations. A coarse-to-fine segmentation framework is used to extract multiple geometric primitives from point clouds, and rigorous parameter estimations are performed individually to derive high-precision parametric deformations. In addition, a generalized local registration-based pipeline is proposed to derive dense displacement vectors based on segmented quasi-rigid areas that are matched via areal geometric feature descriptors. All proposed methods are successfully verified and evaluated on simulated and/or real point cloud data. Guidance on choosing among the proposed deformation analysis methods for specific scenarios or applications is also provided in this thesis.
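The prism-based variant is specific to the thesis, but the core of the standard M3C2 comparison it builds on is well defined and can be sketched briefly. The following is a minimal illustration at a single core point, not the author's implementation: points of each epoch inside a cylinder around the local normal are projected onto that normal, and the difference of the mean projections gives the signed distance, with a 95% level of detection derived from the spread of both samples.

```python
import numpy as np

def m3c2_distance(core, normal, cloud1, cloud2, radius=0.5):
    """Illustrative M3C2-style distance at one core point.

    Points of each epoch within a cylinder of the given radius around
    the normal through `core` are projected onto the normal; the
    difference of their mean projections is the signed distance.
    """
    normal = normal / np.linalg.norm(normal)

    def project(cloud):
        d = cloud - core                      # vectors from core point
        along = d @ normal                    # signed distance along normal
        radial = np.linalg.norm(d - np.outer(along, normal), axis=1)
        return along[radial <= radius]        # keep points inside cylinder

    p1, p2 = project(cloud1), project(cloud2)
    dist = p2.mean() - p1.mean()
    # 95% level of detection from the sample variances of both epochs
    lod95 = 1.96 * np.sqrt(p1.var(ddof=1) / len(p1) + p2.var(ddof=1) / len(p2))
    return dist, lod95
```

A measured distance is only treated as a significant deformation when it exceeds the returned level of detection.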
  • Item (Open Access)
    Analyzing and characterizing spaceborne observation of water storage variation : past, present, future
    (2024) Saemian, Peyman; Sneeuw, Nico (Prof. Dr.-Ing.)
    Water storage is an indispensable constituent of the intricate water cycle, as it governs the availability and distribution of this precious resource. Any alteration in water storage can trigger a cascade of consequences, affecting not only our agricultural practices but also the well-being of various ecosystems and the occurrence of natural hazards. Therefore, it is essential to monitor and manage water storage levels prudently to ensure a sustainable future for our planet. Despite significant advancements in ground-based measurements and modeling techniques, accurately measuring water storage variation remained a major challenge for a long time. Since 2002, the Gravity Recovery and Climate Experiment (GRACE) and its successor GRACE Follow-On (GRACE-FO) satellites have revolutionized our understanding of the Earth's water cycle. By detecting variations in the Earth's gravity field caused by changes in water distribution, these satellites can precisely measure changes in total water storage (TWS) across the entire globe, providing a truly comprehensive view of the world's water resources. This information has proved invaluable for understanding how water resources are changing over time, and for developing strategies to manage these resources sustainably. However, GRACE and GRACE-FO are subject to various challenges that must be addressed in order to exploit GRACE observations more effectively for scientific and practical purposes. This thesis aims to address some of the challenges faced by GRACE and GRACE-FO. Since the inception of the GRACE mission, scholars have commonly extracted mass changes from observations by approximating the Earth's gravity field using mathematical functions termed spherical harmonics. Various institutions have already processed GRACE(-FO) data into so-called level-2 products, which differ in the constraints, approaches, and models utilized.
However, this processed data necessitates post-processing before it can be used for several applications, such as hydrology and climate research. In this thesis, we evaluate various methods of processing GRACE(-FO) level-2 data and assess the spatio-temporal effect of the post-processing steps. Furthermore, we aim to compare the consistency between GRACE and its successor mission, GRACE-FO, in terms of data quality and measurement accuracy. By analyzing and comparing the data from these two missions, we can identify any potential discrepancies or differences and establish the level of confidence in the accuracy and reliability of the GRACE-FO measurements. Finally, we compare the processed level-3 products with the level-3 products that are presently accessible online. The relatively short record of the GRACE measurements, compared to other satellite missions and observational records, can limit studies that require long-term data. This short record makes it challenging to separate long-term signals from short-term variability and to validate the data with ground-based measurements or other satellite missions. To address this limitation, this thesis expands the temporal coverage of GRACE(-FO) observations using global hydrological, atmospheric, and reanalysis models. First, we assess these models in estimating the TWS variation at a global scale. We compare the performance of various methods, including data-driven and machine learning approaches, in incorporating these models and reconstructing GRACE TWS change. The results are also validated against Satellite Laser Ranging (SLR) observations over the pre-GRACE period. This thesis thus develops a hindcasted GRACE product, which provides a better understanding of the changes in the Earth's water storage on a longer time scale. The GRACE satellite mission detects changes in the overall water storage in a specific region but cannot distinguish between the different compartments of TWS, such as surface water, groundwater, and soil moisture.
Understanding these individual components is crucial for managing water resources and addressing the effects of droughts and floods. This study aims to integrate various data sources to improve our understanding of water storage variations at the continental to basin scale, including water fluxes, lake water level, and lake storage change data. Additionally, the study demonstrates the importance of combining GRACE(-FO) observations with other measurements, such as piezometric wells and rain gauges, to understand the water scarcity predicament in Iran and other regions facing similar challenges. The GRACE satellite mission provides valuable insights into the Earth's system. However, GRACE products carry a level of uncertainty due to several error sources. While the mission has taken measures to minimize these uncertainties, researchers need to account for them when analyzing the data and communicate them when reporting findings. This thesis proposes a probabilistic approach to incorporating the Total Water Storage Anomaly (TWSA) data from GRACE(-FO). By accounting for the uncertainty in the TWSA data, this approach can provide a more comprehensive understanding of drought conditions, which is essential for decision makers managing water resources and responding to drought events.
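The probabilistic idea can be illustrated in a few lines. The sketch below is a simplified stand-in, not the thesis's actual method: it assumes a Gaussian error model for each TWSA value and a fixed drought threshold, and computes the anomaly relative to a baseline mean plus the probability that the true anomaly lies below the threshold.

```python
import numpy as np
from math import erf, sqrt

def twsa(series, baseline):
    """Anomaly of a TWS series w.r.t. its mean over a baseline slice."""
    return series - series[baseline].mean()

def drought_probability(anomaly, sigma, threshold=0.0):
    """P(true TWSA < threshold) under a Gaussian error model N(anomaly, sigma^2)."""
    z = (threshold - anomaly) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

Reporting such a probability rather than a single anomaly value lets a decision maker weigh how confident the data actually is about a drought condition.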
  • Item (Open Access)
    Forming a hybrid intelligence system by combining Active Learning and paid crowdsourcing for semantic 3D point cloud segmentation
    (2023) Kölle, Michael; Sörgel, Uwe (Prof. Dr.-Ing.)
    While in recent years tremendous advancements have been achieved in the development of supervised Machine Learning (ML) systems such as Convolutional Neural Networks (CNNs), still the most decisive factor for their performance is the quality of labeled training data from which the system is supposed to learn. This is why we advocate focusing more on methods to obtain such data, which we expect to be more sustainable than establishing ever new classifiers in the rapidly evolving ML field. In the geospatial domain, however, the generation process of training data for ML systems is still rather neglected in research, with experts typically ending up occupied with such tedious labeling tasks. In our design of a system for the semantic interpretation of Airborne Laser Scanning (ALS) point clouds, we break with this convention and completely lift labeling obligations from experts. At the same time, human annotation is restricted to only those samples that actually justify manual inspection. This is accomplished by means of a hybrid intelligence system in which the machine, represented by an ML model, is actively and iteratively working together with the human component through Active Learning (AL), which acts as a pointer to exactly those most decisive samples. Instead of having an expert label these samples, we propose to outsource this task to a large group of non-specialists, the crowd. But since it is rather unlikely that enough volunteers would participate in such crowdsourcing campaigns due to the tedious nature of labeling, we argue for attracting workers with monetary incentives, i.e., we employ paid crowdsourcing. Relying on such platforms, we typically have access to a vast pool of prospective workers, guaranteeing prompt completion of jobs. Thus, crowdworkers become human processing units that behave similarly to the electronic processing units of this hybrid intelligence system performing the tasks of the machine part.
With respect to the latter, we not only evaluate whether an AL-based pipeline works for the semantic segmentation of ALS point clouds, but also shed light on the question of why it works. As crucial components of our pipeline, we test and enhance different AL sampling strategies in conjunction with both a conventional feature-driven classifier and a data-driven CNN classification module. In this regard, we aim to select AL points in such a manner that samples are not only informative for the machine, but also feasible for non-experts to interpret. These theoretical formulations are verified by various experiments in which we replace the frequently assumed but highly unrealistic error-free oracle with the simulated imperfect oracles we are always confronted with when working with humans. Furthermore, we find that the need for labeled data, which is already reduced through AL to a small fraction (typically ≪1 % of Passive Learning training points), can be minimized even further when we reuse information from a given source domain for the semantic enrichment of a specific target domain, i.e., we utilize AL as a means for Domain Adaptation. As for the human component of our hybrid intelligence system, the special challenge we face lies in monetarily motivated workers with a wide variety of educational and cultural backgrounds as well as widely differing mindsets regarding the quality they are willing to deliver. Consequently, we are confronted with a great quality inhomogeneity in the results received. Thus, when designing respective campaigns, special attention to quality control is required to be able to automatically reject submissions of low quality and to refine accepted contributions in the sense of the Wisdom of the Crowds principle. We further explore ways to support the crowd in labeling by experimenting with different data modalities (discretized point cloud vs.
continuous textured 3D mesh surface), and also aim to shift the motivation from a purely extrinsic nature (i.e., payment) to a more intrinsic one, which we intend to trigger through gamification. Eventually, by casting these different concepts into the so-called CATEGORISE framework, we constitute the aspired hybrid intelligence system and employ it for the semantic enrichment of ALS point clouds of different characteristics, enabled through learning from the (paid) crowd.
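The thesis tests several AL sampling strategies; which one it favors is not stated here, but a common baseline that illustrates the principle is entropy-based uncertainty sampling. The sketch below is a generic illustration, not the CATEGORISE pipeline: from the current model's class posteriors it selects the points whose predictions are most ambiguous, i.e., the candidates that would be forwarded to the (crowd) oracle.

```python
import numpy as np

def entropy_sampling(probabilities, n_select):
    """Pick the n most ambiguous samples by predictive entropy.

    `probabilities`: (n_samples, n_classes) class posteriors from the
    current model; returns the indices of the samples whose predictions
    are most uncertain, ordered from most to least uncertain.
    """
    p = np.clip(probabilities, 1e-12, 1.0)      # avoid log(0)
    entropy = -(p * np.log(p)).sum(axis=1)      # Shannon entropy per sample
    return np.argsort(entropy)[-n_select:][::-1]
```

In each AL iteration, only these selected points are labeled and added to the training set before the model is retrained.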
  • Item (Open Access)
    Editorial for PFG issue 5/2023
    (2023) Gerke, Markus; Cramer, Michael
  • Item (Open Access)
    Geospatial information research : state of the art, case studies and future perspectives
    (2022) Bill, Ralf; Blankenbach, Jörg; Breunig, Martin; Haunert, Jan-Henrik; Heipke, Christian; Herle, Stefan; Maas, Hans-Gerd; Mayer, Helmut; Meng, Liqiu; Rottensteiner, Franz; Schiewe, Jochen; Sester, Monika; Sörgel, Uwe; Werner, Martin
    Geospatial information science (GI science) is concerned with the development and application of geodetic and information science methods for modeling, acquiring, sharing, managing, exploring, analyzing, synthesizing, visualizing, and evaluating data on spatio-temporal phenomena related to the Earth. As an interdisciplinary scientific discipline, it focuses on developing and adapting information technologies to understand processes on the Earth and human-place interactions, to detect and predict trends and patterns in the observed data, and to support decision making. The authors - members of DGK, the Geoinformatics division, as part of the Committee on Geodesy of the Bavarian Academy of Sciences and Humanities, representing geodetic research and university teaching in Germany - have prepared this paper as a means to point out future research questions and directions in geospatial information science. For the different facets of geospatial information science, the state of the art is presented and illustrated, mostly with the authors' own case studies. The paper thus illustrates which contributions the German GI community makes and which research perspectives arise in geospatial information science. The paper further demonstrates that GI science, with its expertise in data acquisition and interpretation, information modeling and management, integration, decision support, visualization, and dissemination, can help solve many of the grand challenges facing society today and in the future.
  • Item (Open Access)
    CRBeDaSet : a benchmark dataset for high accuracy close range 3D object reconstruction
    (2023) Gabara, Grzegorz; Sawicki, Piotr
    This paper presents the CRBeDaSet - a new benchmark dataset designed for evaluating close range, image-based 3D modeling and reconstruction techniques - and first empirical experiences from its use. The test object is a medium-sized building whose elevations are characterized by diverse textures. The dataset contains: the geodetic spatial control network (12 stabilized ground points determined using iterative multi-observation parametric adjustment) and the photogrammetric network (32 artificial signalized and 18 defined natural control points), measured using a Leica TS30 total station; 36 terrestrial, mainly convergent photos, acquired from elevated camera standpoints with a non-metric digital single-lens reflex Nikon D5100 camera (ground sample distance approx. 3 mm); the complex results of the bundle block adjustment with simultaneous camera calibration performed in the Pictran software package; and the colored point clouds (ca. 250 million points) from terrestrial laser scanning, acquired using the Leica ScanStation C10 and post-processed in the Leica Cyclone™ SCAN software (ver. 2022.1.1), which were denoised, filtered, and classified according to the LoD3 standard (ca. 62 million points). Existing datasets and benchmarks are also described and evaluated in the paper. The proposed photogrammetric dataset was experimentally tested in the open-source application GRAPHOS and the commercial suites ContextCapture, Metashape, PhotoScan, Pix4Dmapper, and RealityCapture. As a first experience in its evaluation, the difficulties and errors that occurred in the software used during digital processing of the dataset are shown and discussed. The proposed CRBeDaSet benchmark enables high-accuracy (“mm” range) photogrammetric 3D object reconstruction at close range, based on multi-view uncalibrated imagery, dense image matching techniques, and the generated dense point clouds.
  • Item (Open Access)
    Spatio-temporal evaluation of GPM-IMERGV6.0 final run precipitation product in capturing extreme precipitation events across Iran
    (2022) Bakhtar, Aydin; Rahmati, Akbar; Shayeghi, Afshin; Teymoori, Javad; Ghajarnia, Navid; Saemian, Peyman
    Extreme precipitation events such as floods and droughts have occurred with higher frequency over recent decades as a result of climate change and anthropogenic activities. To understand and mitigate such events, it is crucial to investigate their spatio-temporal variations globally or regionally. Global precipitation products provide an alternative to in situ observations over such regions. In this study, we have evaluated the performance of the latest version of the Global Precipitation Measurement-Integrated Multi-satellitE Retrievals (GPM-IMERGV6.0 Final Run (GPM-IMERGF)). To this end, we have employed the ten most common extreme precipitation indices, including maximum indices (Rx1day, Rx5day, CDD, and CWD), percentile indices (R95pTOT and R99pTOT), and absolute threshold indices (R10mm, R20mm, SDII, and PRCPTOT). Overall, the spatial distribution results for the error metrics showed that the highest and lowest accuracy of GPM-IMERGF was obtained for the absolute threshold indices and the percentile indices, respectively. Considering the spatial distribution of the results, the highest accuracy of GPM-IMERGF in capturing extreme precipitation was observed over the western highlands, while the worst results were obtained along the Caspian Sea regions. Our analysis can significantly contribute to various hydro-meteorological applications for the study region, including identifying drought- and flood-prone areas and water resources planning.
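Two of the listed indices have definitions simple enough to state exactly; the following minimal sketch for a single daily precipitation series (an illustration of the standard ETCCDI-style definitions, not the study's processing chain) computes Rx5day, the maximum consecutive 5-day precipitation total, and R10mm, the count of heavy-precipitation days.

```python
import numpy as np

def rx5day(daily_precip):
    """Maximum consecutive 5-day precipitation total (mm) of a daily series."""
    p = np.asarray(daily_precip, dtype=float)
    window = np.convolve(p, np.ones(5), mode="valid")  # rolling 5-day sums
    return window.max()

def r10mm(daily_precip):
    """Number of heavy-precipitation days (daily total >= 10 mm)."""
    return int((np.asarray(daily_precip, dtype=float) >= 10.0).sum())
```

Such indices are computed per grid cell and per year from both the satellite product and the reference data, and the error metrics are then evaluated between the two index fields.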
  • Item (Open Access)
    On the information transfer between imagery, point clouds, and meshes for multi-modal semantics utilizing geospatial data
    (2022) Laupheimer, Dominik; Haala, Norbert (apl. Prof. Dr.-Ing.)
    The semantic segmentation of the huge amounts of acquired 3D data has become an important task in recent years. Images and Point Clouds (PCs) are fundamental data representations, particularly in urban mapping applications. Textured meshes integrate both representations by wiring the PC and texturing the reconstructed surface elements with high-resolution imagery. Meshes are adaptive to the underlying mapped geometry due to their graph structure composed of non-uniform and non-regular entities. Hence, the mesh is a memory-efficient, realistic-looking 3D map of the real world. For these reasons, we primarily opt for semantic segmentation of meshes, which is still a widely overlooked topic in photogrammetry and remote sensing. In particular, we head for multi-modal semantics utilizing supervised learning. However, publicly available annotated geospatial mesh data was scarce at the beginning of this thesis. Therefore, mesh data had to be annotated beforehand. To kill two birds with one stone, we aim for a multi-modal fusion that enables multi-modal enhancement of entity descriptors and semi-automatic data annotation leveraging publicly available annotations of non-mesh data. We propose a novel holistic geometry-driven association mechanism that explicitly integrates entities of the modalities imagery, PC, and mesh. The established entity relationships between pixels, points, and faces enable the sharing of information across the modalities in a two-fold manner: (i) feature transfer (measured or engineered) and (ii) label transfer (predicted or annotated). The implementation follows a tile-wise strategy to facilitate scalability to large-scale data sets. At the same time, it enables parallel, distributed processing, reducing processing time. We demonstrate the effectiveness of the proposed method on the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark data sets Vaihingen 3D and Hessigheim 3D.
Taken together, the proposed entity linking and subsequent information transfer inject great flexibility into the semantic segmentation of geospatial data. Imagery, PCs, and meshes can be semantically segmented with classifiers trained on any of these modalities utilizing features derived from any of these modalities. Particularly, we can semantically segment a modality by training a classifier on the same modality (direct approach) or by transferring predictions from other modalities (indirect approach). Hence, any established well-performing modality-specific classifier can be used for semantic segmentation of these modalities - regardless of whether they follow an end-to-end learning or feature-driven scheme. We perform an extensive ablation study on the impact of multi-modal handcrafted features for automatic 3D scene interpretation - both for the direct and indirect approach. We discuss and analyze various Ground Truth (GT) generation methods. The semi-automatic labeling leveraging the entity linking achieves consistent annotation across modalities and reduces the manual label effort to a single representation. Please note that the multiple epochs of the Hessigheim data consisting of manually annotated PCs and semi-automatically annotated meshes are a result of this thesis and provided to the community as part of the Hessigheim 3D benchmark. To further reduce the labeling effort to a few instances on a single modality, we combine the proposed information transfer with active learning. We recruit non-experts for the tedious labeling task and analyze their annotation quality. Subsequently, we compare the resulting classifier performances to conventional passive learning using expert annotation. In particular, we investigate the impact of visualizing the mesh instead of the PC on the annotation quality achieved by non-experts. 
In summary, we accentuate the mesh and its utility for multi-modal fusion, GT generation, multi-modal semantics, and visualization purposes.
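The label-transfer direction of such entity linking can be illustrated with a deliberately naive sketch. This is not the thesis's geometry-driven association mechanism (which links pixels, points, and faces explicitly and works tile-wise); it merely shows the idea of transferring annotations from a labeled point cloud to mesh faces by assigning each point to its nearest face centroid and taking a majority vote per face.

```python
import numpy as np
from collections import Counter

def transfer_labels(face_centroids, points, point_labels, default=-1):
    """Majority-vote label transfer from an annotated point cloud to mesh faces.

    Each point is assigned to its nearest face centroid (brute force);
    every face takes the most frequent label among its points, or
    `default` if no point was assigned to it.
    """
    # squared distance of every point to every centroid: (n_points, n_faces)
    d2 = ((points[:, None, :] - face_centroids[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)               # nearest face per point
    labels = np.full(len(face_centroids), default, dtype=int)
    for f in range(len(face_centroids)):
        votes = point_labels[nearest == f]
        if votes.size:
            labels[f] = Counter(votes.tolist()).most_common(1)[0][0]
    return labels
```

A production version would replace the brute-force distance matrix with a spatial index and respect face geometry rather than centroids alone.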
  • Item (Open Access)
    Method of development of a new regional ionosphere model (RIM) to improve static single-frequency precise point positioning (SF-PPP) for Egypt using Bernese GNSS software
    (2023) Abdallah, Ashraf; Agag, Tarek; Schwieger, Volker
    Due to the lack of IGS coverage in Africa, especially over North Africa, and the rapid expansion of infrastructure construction in Egypt, a geodetic CORS station network was established in 2012. These CORS stations are operated by the Egyptian Surveying Authority (Egy. SA) and cover the whole of Egypt. The paper presents a fully developed regional ionosphere model (RIM) based on the Egyptian CORS stations. The new model and the PPP solution were obtained using Bernese GNSS software V. 5.2. An observation series of eight days (DOY 201-208, 2019) was used in this study. Eighteen stations were used to develop the RIM model for each day; fifteen stations were used to validate the new RIM model. A static SF-PPP solution was obtained using the CODE-GIM and RIM models. Compared to the reference network solution, the solution based on the newly developed RIM model showed a mean error of 0.06 m in the East direction, 0.13 m in the North direction, and 0.21 m in the height direction. In the East, North, and height directions, this solution improves the SF-PPP results achieved with the Global Ionosphere Maps (CODE-GIM) model by 60%, 68%, and 77%, respectively.
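The reported percentages relate the mean errors of the two solutions in the usual relative-reduction sense; a one-line sketch (the numbers in the test are hypothetical, not values from the paper):

```python
def improvement_pct(err_ref, err_new):
    """Relative error reduction of err_new with respect to err_ref, in percent."""
    return 100.0 * (err_ref - err_new) / err_ref
```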