06 Fakultät Luft- und Raumfahrttechnik und Geodäsie
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/7
Search Results (7 items)
Item (Open Access): Towards improved targetless registration and deformation analysis of TLS point clouds using patch-based segmentation (2023). Yang, Yihui; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.)

Geometric changes in the real world can be captured by measuring and comparing the 3D coordinates of object surfaces. Traditional point-wise measurements with low spatial resolution may fail to detect inhomogeneous, anisotropic, and unexpected deformations and thus cannot reveal complex deformation processes. 3D point clouds generated by laser scanning or photogrammetric techniques have opened up opportunities for an area-wise acquisition of spatial information. In particular, terrestrial laser scanning (TLS) has seen rapid development and wide application in areal geodetic monitoring owing to the high resolution and high quality of the acquired point cloud data. However, several issues in the process chain of TLS-based deformation monitoring are still not solved satisfactorily. This thesis focuses on the targetless registration and deformation analysis of TLS point clouds, aiming to develop novel data-driven methods to tackle the current challenges. In most deformation processes of natural scenes, no shape deformation occurs in some local areas (i.e., these areas are rigid), and even the deformation directions show a certain level of consistency when these areas are small enough. Further point cloud processing, such as stability and deformation analyses, can benefit from these assumptions of local rigidity and consistency. In this thesis, therefore, three typical types of locally rigid patches - small planar patches, geometric primitives, and quasi-rigid areas - are generated from 3D point clouds by specific segmentation techniques. These patches, on the one hand, preserve the boundaries between rigid and non-rigid areas and thus enable a spatial separation with respect to surface stability. On the other hand, local geometric information and empirical stochastic models can be readily determined from the points in each patch. Based on these segmented rigid patches, the targetless registration and deformation analysis of deformed TLS point clouds can be improved with regard to accuracy and spatial resolution. Specifically, small planar patches such as supervoxels are utilized to distinguish stable from unstable areas in an iterative registration process, thus ensuring that only relatively stable points are involved in estimating the transformation parameters. The experimental results show that the proposed targetless registration method significantly improves the registration accuracy. These small planar patches are also exploited to develop a novel variant of the multiscale model-to-model cloud comparison (M3C2) algorithm, which constructs prisms extending from planar patches instead of the cylinders of standard M3C2. This new method separates actual surface variations from measurement uncertainties, thus yielding deformation estimates with lower uncertainty and higher resolution. A coarse-to-fine segmentation framework is used to extract multiple geometric primitives from point clouds, and rigorous parameter estimations are performed individually to derive high-precision parametric deformations. In addition, a generalized local registration-based pipeline is proposed to derive dense displacement vectors based on segmented quasi-rigid areas that are matched by areal geometric feature descriptors. All proposed methods are verified and evaluated using simulated and/or real point cloud data. Guidance on choosing among the proposed deformation analysis methods for specific scenarios or applications is also provided.
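The stable-patch idea behind the registration method lends itself to a compact illustration. The following Python sketch is a strongly simplified illustration of the general principle, not the author's algorithm: patch segmentation and patch correspondences are assumed to exist already, and the stability threshold `tol` is an arbitrary assumption.

```python
# Minimal sketch (not the author's algorithm): iteratively re-estimate a rigid
# transform using only patches whose residuals suggest stability. Patch
# correspondences and the threshold are illustrative assumptions.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src -> dst, both (N, 3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def register_with_stable_patches(patches_src, patches_dst, n_iter=10, tol=0.02):
    """patches_*: lists of (N_i, 3) arrays of corresponding patch points."""
    stable = list(range(len(patches_src)))
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        src = np.vstack([patches_src[i] for i in stable])
        dst = np.vstack([patches_dst[i] for i in stable])
        R, t = rigid_transform(src, dst)
        # Mark a patch unstable if its mean residual exceeds the tolerance,
        # so deforming areas stop influencing the transform parameters.
        stable = [i for i in range(len(patches_src))
                  if np.linalg.norm(patches_src[i] @ R.T + t
                                    - patches_dst[i], axis=1).mean() < tol]
        if not stable:
            break
    return R, t, stable
```

Patches flagged as unstable drop out of the next estimation round, which mirrors the idea that only relatively stable points should determine the transformation parameters.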
Item (Open Access): Use of non-linearity as a characteristic in the selection of filtering algorithms in kinematic positioning (2020). Pham, Dung; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.)

The selection of an optimal filtering algorithm for kinematic positioning systems is one of the most extensively studied problems in the surveying engineering community. The performance of a filtering algorithm is frequently evaluated in terms of accuracy and computational time. Accuracy is usually determined by comparing the true trajectory with the one estimated by an algorithm. However, the true trajectory is commonly unknown in real-life situations, so the accuracy of a filtering algorithm cannot be assessed in this manner. Indeed, the lack of a true trajectory is one of the primary obstacles in evaluating the performance of filtering algorithms. The non-linearity of the model, on the other hand, can be determined without any information about the true trajectory and is also associated with the abilities of the algorithms. So far, however, very little attention has been paid to selecting filtering algorithms based on non-linearity. This study therefore proposes an alternative characteristic for assessing the performance of filtering algorithms: the non-linearity of the observation model. The research aims to assess the suitability of the non-linearity characteristic for choosing an optimal filtering algorithm. The data are simulated by the Monte Carlo method. The abilities of three filtering algorithms are investigated: the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the particle filter (PF). These algorithms are widely utilized in kinematic positioning, and they are appropriate for various levels of non-linearity. The study evaluates the influence of three factors on the algorithms' accuracy: measurement uncertainty, observation geometry, and the number of observations. The algorithms are also assessed with respect to their computational times for a given scenario. Regarding measures of non-linearity, three different indicators are examined for the non-linearity of both the system and the observation models. The coefficient of determination, in the form 1-R², is utilized as a single indicator to measure the non-linearity of each function of these models. The measures M and 1-MVA, based on the deviation of a non-linear function from linearity and on multivariate association, respectively, can be used as indicators to quantify the non-linearity of several functions of these models jointly. The 1-MVA indicator is proposed here for the first time to quantify the non-linearity of models. From analyses of accuracy and non-linearity, the relationship between them is determined for changing measurement uncertainty and observation geometry in several scenarios. Based on the established relationship between accuracy and non-linearity, the choice of an optimal algorithm is analyzed through numerical examples. The results indicate that the accuracy of these algorithms is strongly influenced by measurement uncertainty, observation geometry, and the number of observations. The accuracy obtained by the PF is higher than that of the UKF and EKF. Conversely, the computational time of the EKF is shorter than that of the UKF and PF. Regarding the measures of non-linearity, the proposed indicators prove suitable, and they show the same tendency of model non-linearity. The non-linearity of the system model is small owing to the small standard deviations assumed for the disturbance quantities. Conversely, the non-linearity of the observation model is high for high measurement uncertainties or poor observation geometries. The main finding of this research is that both the non-linearity of the observation model and the position accuracy are influenced by measurement uncertainty and observation geometry. Therefore, the relationship between the position accuracy and the non-linearity of the observation model is established based on these factors. This relationship is strong, as assessed by the goodness of fit of the best-fitting function. Another important result is that the fitting function describing this relationship changes with the influencing factors of the scenarios; this dependence constitutes the main limitation of the characteristic in application. As a result, instead of accuracy, the non-linearity of the observation model can be employed for the assessment of algorithms when the true trajectory is not available. However, the optimal algorithm can only be selected in this way in some special cases; for the general case of arbitrary scenario factors, the non-linearity characteristic cannot be used for this purpose.
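The abstract describes 1-R² as a per-function non-linearity indicator. Below is a minimal sketch of that idea, under the assumption that non-linearity is judged by how well a linear model explains Monte Carlo samples of the observation function around an operating point; the example observation function and all numeric values are illustrative, not taken from the thesis.

```python
# Minimal sketch (assumption: Monte Carlo sampling around an operating point;
# an illustration of the 1-R^2 idea, not the thesis implementation).
import numpy as np

def nonlinearity_1_minus_r2(f, x0, sigma, n=10000, seed=0):
    """1 - R^2 of the best linear fit to f sampled around x0 (per function)."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.standard_normal((n, x0.size))
    y = np.array([f(x) for x in X])
    A = np.hstack([X, np.ones((n, 1))])          # linear model y = X b + c
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    return ss_res / ss_tot                       # = 1 - R^2

# Hypothetical example: a range observation to a fixed station, the kind of
# non-linear observation function that occurs in kinematic positioning.
station = np.array([100.0, 50.0])
dist = lambda p: np.hypot(*(p - station))
print(nonlinearity_1_minus_r2(dist, x0=np.array([0.0, 0.0]), sigma=5.0))
```

A value near zero indicates an almost linear observation model; larger values indicate the regimes where the UKF or PF would be expected to pay off over the EKF.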
Item (Open Access): Anwendung von Unsicherheitsmodellen am Beispiel der Verkehrserfassung unter Nutzung von Mobilfunkdaten [Application of uncertainty models exemplified by traffic data acquisition using mobile phone data] (2013). Borchers, Ralf; Schwieger, Volker (Prof. Dr.-Ing. habil.)

High and steadily growing traffic volumes frequently overload the road network, with consequences such as congestion and accidents. Traffic management systems aim to improve the efficiency of traffic flow on the existing road network. As an essential data basis, such systems require traffic data, as obtained for instance from induction loops or so-called floating car data. Since mobile phones can be located, traffic data derived from mobile network data, known as floating phone data, are also suitable for traffic data acquisition. This economically attractive mode of traffic data acquisition, however, faces high positioning uncertainties of several hundred metres as well as missing information on whether, and in which mode of transport, the mobile phone was carried. In this thesis, a mobile phone positioning method based on signal-level matching is used, which compares the signal levels measured by mobile phones in the GSM network with reference signal-level maps. The random, systematic, and unknown-acting uncertainties are modelled by means of random variability, fuzzy theory, and the fuzzy-randomness methodology. Subsequently, identification procedures are presented with which mobile phone data generated in vehicles of motorized individual traffic (e.g., cars or trucks) can be identified within anonymized mobile network data. First, it is checked whether the mobile phone is moving. If it is moving, its speed is subsequently used to decide on the mode of transport; the rationale is that design-related or administrative constraints limit the maximum speed of each mode of transport. If the speed of the mobile phone is significantly higher than the maximum speed of the mode under consideration, that mode can be excluded. Since mobile phone data generated in public transport vehicles are unsuitable for monitoring motorized individual traffic, they are eliminated in the next step. To this end, the positions of public transport vehicles (e.g., buses, trams) are predicted from timetables and compared with the positions of the mobile phone both temporally and spatially. Finally, it is checked whether a trajectory on the traffic network graph of motorized individual traffic (the road network) can be generated for the mobile phone's position sequence. If the position sequence can be fitted into the network graph spatially, topologically, and temporally, it is in principle suitable for traffic state estimation of motorized individual traffic. For stationary mobile phones, an unambiguous identification of the mode of transport is generally not possible, since any mode can be stationary. Consequently, a distinction between mobile phones in traffic jams and phones not participating in traffic (e.g., standing pedestrians) is initially not possible; this problem is solved by linking current and previous identification results. To compare their suitability and potential, the identification procedures were developed and implemented in software with consistent application of the mathematical uncertainty models of random variability, fuzzy theory, and fuzzy randomness. The developed identification procedures were validated and evaluated using real mobile network data. The identification procedure based on the fuzzy-randomness methodology yielded the best identification results, both qualitatively and quantitatively.
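The speed-based exclusion step can be illustrated with a small fuzzy-membership example. This is an illustrative sketch only, not the thesis software: the membership shapes, mode list, and speed limits are assumptions made here for demonstration.

```python
# Minimal sketch (illustrative, not the thesis software): fuzzy exclusion of
# transport modes by speed. Membership shapes and speed limits are assumed.
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Plausibility of a measured speed (km/h) for each mode; the soft upper edge
# reflects the positioning uncertainty of several hundred metres.
MODES = {
    "pedestrian": (0, 0, 5, 10),
    "tram":       (0, 0, 50, 70),
    "car":        (0, 0, 120, 160),
}

def plausible_modes(speed_kmh, alpha=0.1):
    """Modes whose membership for the observed speed exceeds the alpha cut."""
    return [m for m, p in MODES.items() if trapezoid(speed_kmh, *p) > alpha]

print(plausible_modes(80.0))   # -> ['car']: tram and pedestrian excluded
```

The same structure extends to the fuzzy-randomness variant, where the membership values would additionally carry stochastic uncertainty from the signal-level positioning.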
Item (Open Access): Method of development of a new regional ionosphere model (RIM) to improve static single-frequency precise point positioning (SF-PPP) for Egypt using Bernese GNSS software (2023). Abdallah, Ashraf; Agag, Tarek; Schwieger, Volker

Due to the sparse coverage of IGS stations in Africa, especially over North Africa, and the boom in infrastructure construction in Egypt, a geodetic CORS station network was established in 2012. These CORS stations are operated by the Egyptian Surveying Authority (Egy. SA) and cover the whole of Egypt. This paper presents a fully developed regional ionosphere model (RIM) based on the Egyptian CORS stations. The new model and the PPP solutions were obtained using the Bernese GNSS software V5.2. An observation series of eight days (DOY 201-208, 2019) was used in this study. Eighteen stations were used to develop the RIM for each day; fifteen stations were used to validate the new model. A static SF-PPP solution was computed using both the CODE-GIM and the RIM models. Compared to the reference network solution, the solution based on the newly developed RIM showed a mean error of 0.06 m in the East direction, 0.13 m in the North direction, and 0.21 m in the height direction. In the East, North, and height directions, this solution improves the SF-PPP result achieved with the global ionosphere maps (CODE-GIM) model by 60%, 68%, and 77%, respectively.
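For reference, improvement percentages of this kind follow directly from the mean errors of the two solutions. A minimal sketch of the arithmetic; the CODE-GIM values below are not taken from the paper but back-calculated here purely for illustration, so that the reported 60%, 68%, and 77% emerge, while the RIM values are those quoted in the abstract.

```python
# Minimal sketch (CODE-GIM numbers are illustrative assumptions, back-
# calculated from the reported percentages; RIM numbers are from the abstract).
def improvement(err_reference_model, err_new_model):
    """Percentage improvement of the new model over the reference model."""
    return 100.0 * (err_reference_model - err_new_model) / err_reference_model

gim = {"East": 0.15, "North": 0.41, "Height": 0.91}   # m, illustrative only
rim = {"East": 0.06, "North": 0.13, "Height": 0.21}   # m, from the abstract
for axis in gim:
    print(f"{axis}: {improvement(gim[axis], rim[axis]):.0f}% improvement")
```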
Item (Open Access): Elementary error model applied to terrestrial laser scanning measurements: study case arch dam Kops (2020). Kerekes, Gabriel; Schwieger, Volker

All measurements are affected by systematic and random deviations. A major challenge is to correctly account for these effects in the results. Terrestrial laser scanners deliver point clouds that usually precede surface modeling; therefore, the stochastic information of the measured points directly influences the quality of the modeled surface. The elementary error model (EEM) is one method used to determine the impact of error sources on variance-covariance matrices (VCMs). This approach assumes linear models and normally distributed deviations, despite the non-linear nature of the observations; it has been shown that in 90% of the cases, linearity can be assumed. In previous publications on the topic, EEM results were shown on simulated data sets, focusing on panorama laser scanners. In this paper, an application of the EEM to a real object is presented, and a functional model for hybrid laser scanners is introduced. The focus is set on instrumental and atmospheric error sources. A different approach is used to classify the atmospheric parameters as stochastic correlating elementary errors, thus expanding the currently available EEM; former approaches considered atmospheric parameters as functional correlating elementary errors. The results highlight existing spatial correlations for varying scanner positions and different atmospheric conditions at the arch dam Kops in Austria.

Item (Open Access): An elementary error model for terrestrial laser scanning (2023). Kerekes, Gabriel; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.)

Terrestrial laser scanning (TLS) is a recent method for area-wise deformation analysis in engineering geodesy. After a TLS scan, the result for each epoch is a point cloud that describes the object's geometry. For each point cloud, the stochastic properties are important for a reliable decision concerning the current object geometry. Generally, the stochastic properties are described by a stochastic model. Currently, stochastic models for TLS observations are highly disputed and incomplete. A realistic stochastic model is necessary for typical applications such as structural deformation analysis of buildings and civil engineering constructions. This work presents a method to define a stochastic model in the form of a synthetic variance-covariance matrix (SVCM) for TLS observations. It relies on the elementary error theory defined by Bessel and Hagen at the beginning of the 19th century and adapted for geodetic observations by Pelzer and Schwieger at the end of the 20th century. According to this theory, the different types of errors that affect TLS measurements are classified into three groups: non-correlating, functional correlating, and stochastic correlating errors. For each group, different types of errors are studied based on the error sources that affect TLS observations; these are classified as instrument-specific errors, environment-related errors, and object-surface-related errors. Regarding instrument errors, calibration models for high-end laser scanners are studied. For the propagation medium of TLS observations, the effects of air temperature, air pressure, and the vertical temperature gradient on TLS distances and vertical angles are studied. An approach based on time-series theory is used for extracting the spatial correlations between observation lines. For the object's surface properties, the effects of surface roughness and reflectivity on the distance measurement are considered; both parameters affect the variances and covariances in the stochastic model. For each of the error types, examples based on the author's own research or on the literature are given. After establishing the model, four study cases are used to exemplify the utility of a fully populated SVCM. The scenarios include real objects measured under laboratory and field conditions as well as simulated objects. The first example outlines the results of the SVCM for a simulated wall, with an analysis of the variance and covariance contributions. The second study case highlights the role of the SVCM in a sphere adjustment. A third study case presents a deformation analysis of a wooden tower. Finally, the fourth example shows how to derive an optimal TLS station point based on the trace of the SVCM. All in all, this thesis contributes a new stochastic model based on the elementary error theory, in the form of an SVCM for TLS measurements. It may be used for purposes such as the analysis of error magnitudes on scanned objects, the adjustment of surfaces, or finding an optimal TLS station position with regard to predefined criteria.
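The three-group structure of the EEM described in the two items above can be made concrete with a toy synthetic VCM. The sketch below illustrates the structure only, not the thesis model: the matrix size, error magnitudes, influence matrix, and the exponential correlation function are all assumptions made for demonstration.

```python
# Minimal sketch (illustrative structure only, not the thesis model): a
# synthetic VCM assembled from the three elementary-error groups named above.
import numpy as np

n = 5             # observations (e.g., five scanned distances), toy size
sigma_nc = 0.002  # non-correlating noise level [m], assumed

# Group 1: non-correlating errors -> purely diagonal contribution.
vcm = np.eye(n) * sigma_nc**2

# Group 2: functional correlating errors (e.g., one calibration offset acting
# on every distance): influence matrix F times the parameter variance.
F = np.ones((n, 1))                  # same offset influences all observations
vcm += F @ np.array([[0.001**2]]) @ F.T

# Group 3: stochastic correlating errors (e.g., atmosphere): an assumed
# exponential correlation over the separation of observation lines.
sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # toy separations
vcm += 0.0015**2 * np.exp(-sep / 3.0)

print(np.round(vcm * 1e6, 2))        # fully populated SVCM in mm^2
```

The resulting matrix is fully populated: the off-diagonal terms come entirely from the functional and stochastic correlating groups, which is what distinguishes the SVCM from the common diagonal stochastic models.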
Item (Open Access): Estimating control points for B-spline surfaces using fully populated synthetic variance-covariance matrices for TLS point clouds (2021). Raschhofer, Jakob; Kerekes, Gabriel; Harmening, Corinna; Neuner, Hans; Schwieger, Volker

A flexible approach to the geometric modelling of point clouds obtained from terrestrial laser scanning (TLS) is the use of B-splines. These functions have gained popularity in engineering geodesy as they provide a suitable basis for a spatially continuous and parametric deformation analysis. The predominant studies on geometric modelling of point clouds by B-splines assume uncorrelated and equally weighted measurements. To overcome this, the elementary error theory is applied to establish fully populated covariance matrices of TLS observations that account for correlations in the observed point clouds. In this article, a systematic approach for establishing realistic synthetic variance-covariance matrices (SVCMs) is presented and afterwards used to model TLS point clouds by B-splines. Additionally, three criteria are selected to analyze the impact of different SVCMs on the functional and stochastic components of the estimation results. Plausible levels for variances and covariances are obtained using a test specimen several decimetres in dimension, which is used to identify the most dominant elementary errors under laboratory conditions. Starting values for the variance level are obtained from a TLS calibration. The impacts of SVCMs with different structures and different numeric values are comparatively investigated. The main findings of the paper are that, for the analyzed object size and distances, the structure of the covariance matrix does not significantly affect the location of the estimated surface control points, but it does affect their precision in terms of the corresponding standard deviations. Regarding the latter, properly setting the main diagonal terms of the SVCM is of far greater importance than setting the off-diagonal ones. The investigation of individual errors revealed that the influence of their standard deviations on the precision of the estimated parameters depends primarily on the scanning distance. For a constant distance, a one-sided influence on the precision of the estimated control points can be observed as the standard deviations increase.
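The estimation step described above amounts to generalized least squares with a fully populated VCM. The following sketch reduces the problem to a B-spline curve instead of a surface for brevity; the knot vector, degree, observations, and the VCM values are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (1D curve instead of a surface, for brevity): generalized
# least-squares estimation of B-spline control points with a fully populated
# VCM. Knots, degree, data, and the VCM values are illustrative assumptions.
import numpy as np

def bspline_basis(u, t, k, i):
    """Cox-de Boor recursion for the basis function N_{i,k} at parameter u."""
    if k == 0:
        return 1.0 if t[i] <= u < t[i + 1] else 0.0
    left = 0.0 if t[i + k] == t[i] else \
        (u - t[i]) / (t[i + k] - t[i]) * bspline_basis(u, t, k - 1, i)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - u) / (t[i + k + 1] - t[i + 1]) * bspline_basis(u, t, k - 1, i + 1)
    return left + right

k, t = 3, np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1])      # cubic, clamped knots
n_ctrl = len(t) - k - 1                                # = 5 control points
u = np.linspace(0, 0.999, 40)                          # observation parameters
A = np.array([[bspline_basis(ui, t, k, i) for i in range(n_ctrl)] for ui in u])

y = np.sin(2 * np.pi * u) + 0.01 * np.random.default_rng(2).standard_normal(u.size)
d = np.abs(np.subtract.outer(u, u))
vcm = 0.01**2 * np.exp(-d / 0.1)                       # fully populated, assumed

W = np.linalg.inv(vcm)                                 # weight matrix
N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ y)                # estimated control points
vcm_x = np.linalg.inv(N)                               # their precision
print(np.round(x_hat, 3), np.round(np.sqrt(np.diag(vcm_x)), 4))
```

One can experiment by replacing `vcm` with its diagonal: according to the paper's findings, the matrix structure mainly changes the reported standard deviations of the control points rather than their estimated locations.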