06 Fakultät Luft- und Raumfahrttechnik und Geodäsie
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/7
67 results
Item Open Access Towards improved targetless registration and deformation analysis of TLS point clouds using patch-based segmentation (2023) Yang, Yihui; Schwieger, Volker (Prof. Dr.-Ing. habil. Dr. h.c.) The geometric changes in the real world can be captured by measuring and comparing the 3D coordinates of object surfaces. Traditional point-wise measurements with low spatial resolution may fail to detect inhomogeneous, anisotropic and unexpected deformations, and thus cannot reveal complex deformation processes. 3D point clouds generated by laser scanning or photogrammetric techniques have opened up opportunities for an area-wise acquisition of spatial information. In particular, terrestrial laser scanning (TLS) has seen rapid development and wide application in areal geodetic monitoring owing to the high resolution and high quality of the acquired point cloud data. However, several issues in the process chain of TLS-based deformation monitoring are still not solved satisfactorily. This thesis focuses on the targetless registration and deformation analysis of TLS point clouds, aiming to develop novel data-driven methods to tackle the current challenges. In most deformation processes of natural scenes, no shape deformations occur in some local areas (i.e., these areas are rigid), and the deformation directions even show a certain level of consistency when these areas are small enough. Further point cloud processing, such as stability and deformation analyses, can benefit from these assumptions of local rigidity and consistency of deformed point clouds. In this thesis, therefore, three typical types of locally rigid patches - small planar patches, geometric primitives, and quasi-rigid areas - are generated from 3D point clouds by specific segmentation techniques. These patches, on the one hand, preserve the boundaries between rigid and non-rigid areas and thus enable spatial separation with respect to surface stability.
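Small planar patches of this kind are typically obtained by fitting a plane to each segment. A common building block for this, shown here only as an illustrative sketch and not as the exact procedure of the thesis, is a PCA fit: the patch normal is the eigenvector of the point covariance with the smallest eigenvalue, and signed point-to-plane distances along that normal then quantify local surface variation.

```python
import numpy as np

def fit_patch_plane(points):
    """Least-squares plane through a patch of 3D points via PCA.

    Returns (centroid, unit normal); the normal is the eigenvector of the
    patch covariance matrix with the smallest eigenvalue.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Eigen-decomposition of the symmetric 3x3 scatter matrix;
    # np.linalg.eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    normal = eigvecs[:, 0]  # direction of smallest variance
    return centroid, normal

def point_to_plane_distances(points, centroid, normal):
    """Signed distances of points from the fitted patch plane."""
    return (points - centroid) @ normal

# Synthetic patch: points near the plane z = 0 with small noise,
# standing in for one segmented planar patch of a TLS point cloud.
rng = np.random.default_rng(0)
patch = rng.uniform(-1.0, 1.0, size=(200, 3))
patch[:, 2] = 0.01 * rng.standard_normal(200)

c, n = fit_patch_plane(patch)
print(abs(n[2]))  # close to 1: recovered normal is near the z-axis
```

The distances returned by `point_to_plane_distances` are the kind of quantity that a patch-based deformation measure can aggregate per patch instead of per point.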
On the other hand, local geometric information and empirical stochastic models can be readily determined from the points in each patch. Based on these segmented rigid patches, targetless registration and deformation analysis of deformed TLS point clouds can be improved regarding accuracy and spatial resolution. Specifically, small planar patches like supervoxels are utilized to distinguish the stable and unstable areas in an iterative registration process, thus ensuring that only relatively stable points are involved in estimating transformation parameters. The experimental results show that the proposed targetless registration method significantly improves registration accuracy. These small planar patches are also exploited to develop a novel variant of the multiscale model-to-model cloud comparison (M3C2) algorithm, which constructs prisms extending from planar patches instead of the cylinders in standard M3C2. This new method separates actual surface variations from measurement uncertainties, thus yielding lower-uncertainty and higher-resolution deformations. A coarse-to-fine segmentation framework is used to extract multiple geometric primitives from point clouds, and rigorous parameter estimations are performed individually to derive high-precision parametric deformations. In addition, a generalized local registration-based pipeline is proposed to derive dense displacement vectors based on segmented quasi-rigid areas that are brought into correspondence by areal geometric feature descriptors. All proposed methods are successfully verified and evaluated on simulated and/or real point cloud data. Guidance on the choice of the proposed deformation analysis methods for specific scenarios or applications is also provided in this thesis.

Item Open Access Experimental investigation of low-frequency sound and infrasound induced by onshore wind turbines (2024) Blumendeller, Esther; Cheng, Po Wen (Prof. Dr.) Climate change has a global impact and is increasingly affecting our environment.
This is driving the continuous expansion of renewable energies, with wind energy playing a major role. As wind energy becomes more widespread, an increasing number of people will live near wind turbines in complex terrain. In such scenarios, wind turbines are often positioned at elevated locations, while residents live in valleys. In complex terrain, such as a steep escarpment, local turbulence, wind speed, and direction are strongly influenced by topography, contributing to the complexity of sound propagation and affecting the background noise situation in valleys, for example, due to shielding effects. The operation of wind turbines is associated with both visual and sound-related impacts, with sound being generated at various frequencies. There is a growing interest in low-frequency sound and infrasound, characterized by long wavelengths that propagate over considerable distances without significant attenuation. This is in contrast to higher-frequency sound, and it might increase the impact of wind turbine sound at residential areas located several hundred meters or a few kilometers away from the wind farm. This work investigates wind turbines in complex terrain as sources of low-frequency sound and infrasound. The investigations on the characterization of sound generation and propagation are based on measurements in the vicinity of two wind farms. Measurements were conducted within four measurement campaigns at two wind farms located close to an escarpment on the Swabian Alb in Southern Germany over a period of about nine months. Acoustic data was obtained in the proximity of the wind turbines and at residential buildings at distances of 1–1.7 km from the wind farms, in municipalities located within a valley. Besides acoustic measurements including the infrasonic frequency range, a comprehensive data set with ground motion data, wind turbine operating data, meteorological data and data from a noise reporting app supports the investigation.
Two aspects require analysis: firstly, the generation and propagation of wind turbine low-frequency sound and infrasound in complex terrain, and secondly, the relation to annoyance. Results show that sounds within the infrasonic range, assigned to the blade passage at the tower, are transmitted through the air over distances of 1 km. Low-frequency sounds were found to be amplitude-modulated and were investigated as amplitude modulation. Infrasound and amplitude modulation occurrences were more likely during morning, evening and night hours and during atmospheric conditions with positive lapse rate, vertical wind shear and low turbulence intensity. The occurrence of both infrasound and amplitude modulation was typically observed at rated rotational speed but below rated power. To allow predictions, a standard prediction method was extended to include the low-frequency sound and infrasound range and adapted to the measurement data in order to apply it to complex terrain. The sound level difference of the measured data aligns well with the predictions within the frequency range between 8 Hz and 250 Hz. Investigations regarding outdoor-to-indoor sound reductions showed influences from structural resonances and room modes, which depend on the characteristics of the building and the specific room under investigation. Combining acoustic measurements with annoyance reports showed that rated wind turbine operation appears to be a contributing factor in annoyance ratings obtained through a noise reporting app, ranging from "somewhat" to "very" levels. Furthermore, the analysis indicates that varying levels of annoyance at a distance of 1 km from the wind farm, both outside and inside buildings, do not correspond to significant differences in the averaged and A-weighted sound pressure levels.
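The A-weighted levels referred to above strongly de-emphasize the infrasound range, which is one reason low-frequency assessments are often also reported unweighted. As a hedged illustration (the thesis' exact processing chain is not reproduced here), the standard IEC 61672 A-weighting curve can be computed as:

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB at frequency f (Hz).

    By construction the curve is approximately 0 dB at 1 kHz and
    strongly negative in the low-frequency and infrasound range.
    """
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 dB at 1 kHz

for f in (10, 100, 1000):
    print(f, "Hz:", round(a_weighting_db(f), 1), "dB")
```

At 10 Hz the weighting is around -70 dB, which illustrates why infrasound contributions are nearly invisible in A-weighted sound pressure levels.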
Overall, this work contributes to a better understanding of the low-frequency sound and infrasound generated by wind turbines and provides insight into the characteristics of measured wind turbine sound at residential locations in complex terrain.

Item Open Access Analyzing and characterizing spaceborne observation of water storage variation : past, present, future (2024) Saemian, Peyman; Sneeuw, Nico (Prof. Dr.-Ing.) Water storage is an indispensable constituent of the intricate water cycle, as it governs the availability and distribution of this precious resource. Any alteration in water storage can trigger a cascade of consequences, affecting not only our agricultural practices but also the well-being of various ecosystems and the occurrence of natural hazards. Therefore, it is essential to monitor and manage water storage levels prudently to ensure a sustainable future for our planet. Despite significant advancements in ground-based measurements and modeling techniques, accurately measuring water storage variation remained a major challenge for a long time. Since 2002, the Gravity Recovery and Climate Experiment (GRACE) and its successor GRACE Follow-On (GRACE-FO) satellites have revolutionized our understanding of the Earth's water cycle. By detecting variations in the Earth's gravity field caused by changes in water distribution, these satellites can precisely measure changes in total water storage (TWS) across the entire globe, providing a truly comprehensive view of the world's water resources. This information has proved invaluable for understanding how water resources are changing over time, and for developing strategies to manage these resources sustainably. However, GRACE and GRACE-FO are subject to various challenges that must be addressed in order to enhance the efficacy of our exploitation of GRACE observations for scientific and practical purposes. This thesis aims to address some of the challenges faced by GRACE and GRACE-FO.
Since the inception of the GRACE mission, scholars have commonly extracted mass changes from observations by approximating the Earth's gravity field using mathematical functions termed spherical harmonics. Various institutions provide processed GRACE(-FO) data, known as level-2 data in the GRACE community, differing in the constraints, approaches, and models utilized. However, these data necessitate post-processing before they can be used for several applications, such as hydrology and climate research. In this thesis, we evaluate various methods of processing GRACE(-FO) level-2 data and assess the spatio-temporal effect of the post-processing steps. Furthermore, we aim to compare the consistency between GRACE and its successor mission, GRACE-FO, in terms of data quality and measurement accuracy. By analyzing and comparing the data from these two missions, we can identify any potential discrepancies or differences and establish the level of confidence in the accuracy and reliability of the GRACE-FO measurements. Finally, we compare the processed level-3 products with the level-3 products that are presently accessible online. The relatively short record of the GRACE measurements, compared to other satellite missions and observational records, can limit studies that require long-term data. This short record makes it challenging to separate long-term signals from short-term variability and to validate the data with ground-based measurements or other satellite missions. To address this limitation, this thesis expands the temporal coverage of GRACE(-FO) observations using global hydrological, atmospheric, and reanalysis models. First, we assess these models in estimating the TWS variation at a global scale. We then compare the performance of various methods, including data-driven and machine learning approaches, in incorporating these models to reconstruct the GRACE TWS change.
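One typical post-processing step for level-2 spherical-harmonic coefficients is spatial smoothing with a Gaussian filter, whose degree-dependent weights are commonly computed with Jekeli's recursion. The sketch below assumes a 300 km half-response radius; it is the textbook version only, not necessarily the exact variant evaluated in the thesis.

```python
import math

def gaussian_weights(radius_km, max_degree, earth_radius_km=6371.0):
    """Degree-dependent Gaussian smoothing weights via Jekeli's recursion,
    normalized so that W[0] = 1. `radius_km` is the half-response radius,
    i.e. the distance at which the spatial kernel drops to half its peak.
    """
    b = math.log(2.0) / (1.0 - math.cos(radius_km / earth_radius_km))
    w = [1.0,
         (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for l in range(1, max_degree):
        # Three-term recursion for the Legendre coefficients of the kernel
        w.append(-(2 * l + 1) / b * w[l] + w[l - 1])
    return w

w = gaussian_weights(300.0, 60)
# Weights decay smoothly from 1 toward 0 with increasing degree,
# damping the noisy high-degree coefficients of the monthly solutions.
print(w[0], round(w[30], 3), round(w[60], 3))
```

Each spherical-harmonic coefficient of degree l is simply multiplied by `w[l]` before synthesizing a smoothed TWS field; note the recursion becomes numerically unstable at very high degrees, so truncation matters.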
The results are also validated against Satellite Laser Ranging (SLR) observations over the pre-GRACE period. This thesis thus develops a hindcasted GRACE product, which provides a better understanding of the changes in the Earth's water storage on a longer time scale. The GRACE satellite mission detects changes in the overall water storage in a specific region but cannot distinguish between the different compartments of TWS, such as surface water, groundwater, and soil moisture. Understanding these individual components is crucial for managing water resources and addressing the effects of droughts and floods. This study aims to integrate various data sources, including water fluxes, lake water levels, and lake storage change data, to improve our understanding of water storage variations at continental to basin scales. Additionally, the study demonstrates the importance of combining GRACE(-FO) observations with other measurements, such as piezometric wells and rain gauges, to understand the water scarcity predicament in Iran and other regions facing similar challenges. The GRACE satellite mission provides valuable insights into the Earth's system. However, the GRACE product has a level of uncertainty due to several error sources. While the mission has taken measures to minimize these uncertainties, researchers need to account for them when analyzing the data and communicate them when reporting findings. This thesis proposes a probabilistic approach to incorporate the Total Water Storage Anomaly (TWSA) data from GRACE(-FO). By accounting for the uncertainty in the TWSA data, this approach can provide a more comprehensive understanding of drought conditions, which is essential for decision makers managing water resources and responding to drought events.

Item Open Access Forming a hybrid intelligence system by combining Active Learning and paid crowdsourcing for semantic 3D point cloud segmentation (2023) Kölle, Michael; Sörgel, Uwe (Prof.
Dr.-Ing.) While in recent years tremendous advancements have been achieved in the development of supervised Machine Learning (ML) systems such as Convolutional Neural Networks (CNNs), the most decisive factor for their performance is still the quality of the labeled training data from which the system is supposed to learn. This is why we advocate focusing more on methods to obtain such data, which we expect to be more sustainable than establishing ever-new classifiers in the rapidly evolving ML field. In the geospatial domain, however, the generation process of training data for ML systems is still rather neglected in research, with experts typically ending up occupied with such tedious labeling tasks. In our design of a system for the semantic interpretation of Airborne Laser Scanning (ALS) point clouds, we break with this convention and completely lift labeling obligations from experts. At the same time, human annotation is restricted to only those samples that actually justify manual inspection. This is accomplished by means of a hybrid intelligence system in which the machine, represented by an ML model, actively and iteratively works together with the human component through Active Learning (AL), which acts as a pointer to exactly these most decisive samples. Instead of having an expert label these samples, we propose to outsource this task to a large group of non-specialists, the crowd. But since it is rather unlikely that enough volunteers would participate in such crowdsourcing campaigns due to the tedious nature of labeling, we argue for attracting workers with monetary incentives, i.e., we employ paid crowdsourcing. Relying on such platforms, we typically have access to a vast pool of prospective workers, guaranteeing prompt completion of jobs. Thus, crowdworkers become human processing units that behave similarly to the electronic processing units of this hybrid intelligence system, which perform the tasks of the machine part.
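The AL pointer to the "most decisive samples" is usually a ranking of unlabeled points by the model's predictive uncertainty; entropy sampling is one textbook variant, sketched here purely as an illustration and not necessarily the strategy used in this work.

```python
import numpy as np

def entropy_sampling(class_probs, n_query):
    """Pick the n_query most uncertain samples by predictive entropy.

    class_probs: (n_samples, n_classes) array of per-class probabilities.
    Returns indices of the samples whose labels the oracle (here: the
    crowd) should be asked for next.
    """
    eps = 1e-12  # avoid log(0) for confident predictions
    entropy = -np.sum(class_probs * np.log(class_probs + eps), axis=1)
    # Highest entropy = most uncertain = most informative for the model
    return np.argsort(entropy)[::-1][:n_query]

probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
])
print(entropy_sampling(probs, 1))  # selects the near-uniform sample
```

In an AL loop this selection step alternates with retraining the classifier on the newly labeled points, so the budget of (crowd-)labeled samples is spent where the model is least certain.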
With respect to the latter, we not only evaluate whether an AL-based pipeline works for the semantic segmentation of ALS point clouds, but also shed light on the question of why it works. As crucial components of our pipeline, we test and enhance different AL sampling strategies in conjunction with both a conventional feature-driven classifier and a data-driven CNN classification module. In this regard, we aim to select AL points in such a manner that samples are not only informative for the machine, but also feasible for non-experts to interpret. These theoretical formulations are verified by various experiments in which we replace the frequently assumed but highly unrealistic error-free oracle with the simulated imperfect oracles we are always confronted with when working with humans. Furthermore, we find that the need for labeled data, which is already reduced through AL to a small fraction (typically ≪1 % of Passive Learning training points), can be minimized even further when we reuse information from a given source domain for the semantic enrichment of a specific target domain, i.e., we utilize AL as a means of Domain Adaptation. As for the human component of our hybrid intelligence system, the special challenge we face is monetarily motivated workers with a wide variety of educational and cultural backgrounds as well as widely differing mindsets regarding the quality they are willing to deliver. Consequently, we are confronted with a great quality inhomogeneity in the results received. Thus, when designing respective campaigns, special attention to quality control is required in order to automatically reject submissions of low quality and to refine accepted contributions in the sense of the Wisdom of the Crowds principle. We further explore ways to support the crowd in labeling by experimenting with different data modalities (discretized point cloud vs.
continuous textured 3D mesh surface), and we also aim to shift the motivation from a purely extrinsic nature (i.e., payment) to a more intrinsic one, which we intend to trigger through gamification. Eventually, by casting these different concepts into the so-called CATEGORISE framework, we constitute the aspired hybrid intelligence system and employ it for the semantic enrichment of ALS point clouds of different characteristics, enabled through learning from the (paid) crowd.

Item Open Access Multiscale and multiphase modeling and numerical simulation of function-perfusion processes in the liver (Stuttgart : Institut für Statik und Dynamik der Luft- und Raumfahrtkonstruktionen, Universität Stuttgart, 2023) Lambers, Lena; Ricken, Tim (Prof. Dr.-Ing.)

Item Open Access Passively mode-locked Tm-lasers for all-fiber high-energy nonlinear chirped pulse amplification (2023) Graf, Florian; Dekorsy, Thomas (Prof. Dr. rer. nat.)

Item Open Access Linear stability investigations of three-dimensional disturbances in the boundary layer over anisotropic compliant walls (2023) Zengl, Marcus; Rist, Ulrich (apl. Prof. Dr.-Ing.) In this work, three-dimensional disturbances in the boundary layer over anisotropic compliant walls are investigated using linear stability theory. A surface-based model is used to represent the compliant wall. Here, Carpenter's anisotropic wall model is extended to introduce an additional yaw angle of the wall with respect to the flow direction. Based on this wall model, a boundary condition for linear stability theory is derived. Because this boundary condition couples the Orr-Sommerfeld and Squire equations, two novel solution methods, a shooting method and a matrix solver, were developed for this particular circumstance. The shooting solver transforms the underlying eigenvalue problem into a boundary-value problem and uses a classical shooting method to solve it.
To account for the numerically stiff problem with its parasitic error growth, the solution method incorporates a Gram-Schmidt orthonormalization routine. Through a novel scaling of the phase of the residual to be minimized, the temporal and spatial models are solved robustly and efficiently for given eigenmodes. The eigenvalue problem arising from the coupled Orr-Sommerfeld and Squire equations is also solved with a matrix-based method, taking into account the temporal quadratic eigenvalue problem caused by the compliant wall. Here, a pseudospectral discretization with Chebyshev collocation is used. Particular attention is paid to the numerical errors in the formulation of the discretized problem. The numerical accuracy of the solution methods is carefully verified in order to ensure grid independence of the results. To investigate the potential of compliant walls for delaying laminar-turbulent transition, the approach of Carpenter [15] was adopted. Carpenter optimized the parameters of the compliant wall such that Tollmien-Schlichting (TS) modes are attenuated as much as possible while fluid-structure instability (FISI) modes remain marginally stable. This approach was chosen because fluid-structure modes can be absolutely unstable, which can lead to immediate transition. Stability computations were carried out for two sets of wall parameters that Carpenter optimized with his two-dimensional framework. Not only were three-dimensional disturbances considered, but the influence of the newly introduced yaw angle of the compliant wall was also investigated. The results were assessed with respect to the temporal amplification of the TS and FISI modes, and with respect to the transition location predicted using N-factors. It is shown that three-dimensional disturbances reach certain N-factors before their two-dimensional counterparts.
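The N-factor method referenced here accumulates the spatial growth rate of a disturbance mode along the surface, N(x) = ∫ -α_i dx, and predicts transition where the envelope over all modes reaches an empirical threshold (often N ≈ 9). A minimal sketch with a synthetic growth-rate distribution follows; it is an illustration only, not the solver developed in this work.

```python
import numpy as np

def n_factor(x, alpha_i):
    """Cumulative N-factor N(x) = -integral of alpha_i dx (trapezoidal rule).

    alpha_i is the imaginary part of the streamwise wavenumber from spatial
    linear stability theory; only amplified portions (alpha_i < 0) count.
    """
    growth = np.maximum(-alpha_i, 0.0)
    return np.concatenate(([0.0], np.cumsum(
        0.5 * (growth[1:] + growth[:-1]) * np.diff(x))))

# Synthetic spatially varying growth rate of one TS-like mode:
# amplified (alpha_i = -4) only on the interval 0.5 < x < 1.5.
x = np.linspace(0.0, 2.0, 201)
alpha_i = -np.where((x > 0.5) & (x < 1.5), 4.0, 0.0)
n = n_factor(x, alpha_i)
print(round(float(n[-1]), 2))  # total amplification of this mode
```

In a transition prediction, this integration is repeated for many frequencies (and, for three-dimensional disturbances, spanwise wavenumbers), and the envelope of all the resulting N(x) curves is compared against the critical N-factor.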
The predicted laminar length is somewhat shorter than that predicted with two-dimensional methods. For the parameter sets investigated, the newly introduced yaw angle appears to offer no benefit with regard to maintaining laminar flow. Finally, optimal disturbances were computed in order to investigate transient energy growth for the anisotropic compliant wall. Here, the initial distributions of eigenmodes were optimized such that their superposition experiences maximum energy growth over a prescribed time. The envelope of these optimal disturbances is then computed for varying streamwise and spanwise wavenumbers and varying growth times. The results show no relevant transient growth induced by the compliant wall. It is shown that the classical mechanism for transient growth, which dominates for the rigid wall, is not altered.

Item Open Access Optimierung eines Hochdruckelektrolysesystems für regenerative Brennstoffzellensysteme (2023) Fremdling, Fabian; Fasoulas, Stefanos (Prof. Dr.-Ing.) This dissertation summarizes the development of a high-pressure electrolysis system with an operating pressure of 100 bar as part of a regenerative fuel cell system (RFCS) for aerospace applications. The development builds on previously performed optimization measures for an alkaline water electrolyzer based on a cell design with an immobilized electrolyte and direct water supply into a double diaphragm. The problems of this electrolyzer lie in the interruption of ionic conductivity by inhomogeneities in the electrolyte concentration and by gas accumulations in the electrolyte region occurring during operation, as well as in the water supply to the cathode. These problems are reproduced experimentally and described theoretically.
Various measures to remedy these problems are presented, but no satisfactory operation can be achieved, which leads to a change of the cell design. Instead of a direct water supply into the double diaphragm, an electrolyte circulation through the cathode compartment is implemented and one diaphragm is removed. These changes entail a separation unit for hydrogen from the electrolyte as well as an additional pump; the electrolyte is now mobile. The system complexity thus increases, and unrestricted suitability for spaceflight is initially not given. This new electrolysis system is then characterized in numerous test series. At the cell level, a maximum efficiency of 90.4 % at 0.509 A/cm², a usable current density range up to 0.76 A/cm², an operating pressure of 100 bar (hydrogen and oxygen), and a temperature range of 30-90 °C are achieved. The experimental data obtained in this way, together with theoretical foundations and further experimental data, form the basis for a model of the system. This model is implemented in Matlab/Simulink and represents the entire electrolysis system. With its help, operating parameters of the electrolysis system can be optimized and the behavior of the system at particular operating points can be predicted. The model itself can be integrated into an overall RFCS system model. Furthermore, this work investigates the gas purity of the electrolysis system. This comprises theoretical and experimental investigations of the contamination of electrolysis gases as well as concepts and investigations for purifying these gases. For the passive phase of the electrolysis system, a requirement specific to regenerative fuel cell systems, an operating concept is developed.
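To put the quoted cell efficiency into perspective: if, as is common for water electrolysis, efficiency is referenced to the thermoneutral voltage of about 1.48 V (HHV basis; the exact definition used in the thesis may differ), then 90.4 % corresponds to a cell voltage of roughly 1.64 V.

```python
U_TN = 1.48  # thermoneutral voltage of water electrolysis, V (HHV basis)

def voltage_efficiency(u_cell):
    """Cell efficiency referenced to the thermoneutral voltage (assumed definition)."""
    return U_TN / u_cell

def cell_voltage(efficiency):
    """Invert the definition above to recover the implied cell voltage."""
    return U_TN / efficiency

u = cell_voltage(0.904)
print(round(u, 3))  # cell voltage implied by 90.4 % efficiency, in V
```

Any overpotential beyond the thermoneutral voltage is dissipated as heat in the cell, which is why efficiency drops as the current density (and with it the cell voltage) rises.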
The electrolysis system is thus comprehensively described theoretically, permitting accurate design and adaptation, including for applications outside regenerative fuel cell systems.

Item Open Access Science planning for the DESTINY+ Dust Analyzer : leveraging the potential of a space exploration instrument (2024) Sommer, Maximilian; Srama, Ralf (Apl. Prof. Dr.-Ing.) The DESTINY+ Dust Analyzer (DDA) is a highly sophisticated planetary science instrument designed to provide cutting-edge in-situ characterization of individual cosmic dust grains with respect to their composition as well as their physical and dynamical properties. As such, it constitutes a critical component of the upcoming JAXA mission DESTINY+, which is scheduled to launch in 2025. After a three-year cruise phase, the spacecraft will perform a flyby of the target asteroid 3200 Phaethon, with the goal of observing the enigmatic Geminids parent body with two camera instruments and sampling particles released from its surface with the DDA. Until that flyby, DESTINY+ will execute a highly diverse, ion-engine-driven flight plan that allows DDA to extensively study the dust environments of the Earth, Moon, and interplanetary space - a breadth of science opportunities that is unique to this mission and instrument. This dissertation provides a comprehensive study of the dust types and phenomena possibly encountered by DDA during its journey to Phaethon and applies the principles and methods of science planning to prepare for the operational phase of the mission. The work synthesizes technical considerations and scientific analyses of relevant cosmic dust populations, aiming to optimize DDA's scientific potential. Detailed examinations of spacecraft and instrument factors, such as the dynamic spacecraft attitude during the near-Earth phase or the instrument's two-axis pointing mechanism, lay the groundwork for the scientific planning.
The thorough analysis of known (and lesser known) dust populations in the inner solar system and of previous relevant measurements by other dust instruments forms the core of the study. Finally, the findings are consolidated into a draft science activity plan for the entire mission, as well as exemplary pointing timelines to be executed by the instrument for optimal scientific return. The latter is accomplished with the DOPE tool, developed in the scope of this project, which aids in intuitive and efficient planning of DDA observations. The presented work builds the foundation for the scientific operations of DDA, setting it up for a successful and scientifically impactful mission. The findings of this study also provide a valuable perspective for other ventures of in-situ dust astronomy in the inner solar system and contribute to the field of cosmic dust as a whole.

Item Open Access Design and development of a calibration solution feasible for series production of cameras for video-based driver-assistant systems (2022) Nekouei Shahraki, Mehrdad; Haala, Norbert (apl. Prof. Dr.) In this study, we reviewed the current techniques and methods in photogrammetry - especially close-range photogrammetry - and focused on camera calibration. We reviewed the new, evolving field of video-based driver-assistant systems, their requirements and their applications. Focusing on fisheye cameras and a general omnidirectional projection, we extended an existing camera calibration model to address our needs and functionality requirements. These extensions enable us to use the camera calibration model in real-time embedded mobile systems with low processing power. We also introduced the free-function model as a flexible and advantageous model for camera distortion modelling. This is a new approach for modelling the overall image distortion together with the local lens distortions that are estimated using a standard model during the calibration process.
Using the free-function model on different lens designs, one can achieve good calibration accuracies by modelling very local lens distortions, benefiting from the flexibility of this model. We introduced optimization strategies for recalculation and image rectification. These optimizations are also used to minimize the amount of required processing power and device memory. This brings many advantages to a variety of computational platforms such as FPGAs, x86 and ARM processors, and makes it possible to benefit from a variety of parallel-processing techniques. This model is capable of being used at runtime and is an ideal calibration model for use in a variety of machine-vision solutions. We also discussed several important requirements for accurate camera calibration that we later used in the hardware test stand design phase. We designed and developed two different test stands in order to realize the specifications and geometrical features of multiple-view test-field-based camera calibration, referred to as bundle-block calibration. One of their special geometrical characteristics is the uniform point distribution, which corresponds to uniform motion. Such a point distribution is beneficial when using calibration models such as the free-function model that enable us to model local lens distortion with good accuracy and quality all over the image. A very important feature of this test stand is the capability of performing camera/sensor alignment testing, a feature which is very important for testing the geometrical alignment of the internal mechanical elements of each camera. Using automated machines and algorithms in test stand calibration increased the stability and accuracy of the calibration and thus ensured the quality and speed of the calibration for cameras. These test stands are capable of performing automatic camera calibration, suitable for applications such as the series production of cameras.
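For context on the "standard model" that the free-function approach augments: classical calibration typically models lens distortion with a low-order radial polynomial (Brown-Conrady style). The following is a self-contained sketch of that baseline only; the free-function model itself is not reproduced here.

```python
def apply_radial_distortion(x, y, k1, k2):
    """Classical radial distortion: r' = r * (1 + k1*r^2 + k2*r^4).

    (x, y) are normalized image coordinates relative to the principal
    point; k1, k2 are the radial coefficients estimated in calibration.
    A free-function model would add a flexible per-region correction on
    top of such a global polynomial to capture very local distortions.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (k1 < 0): points are pulled toward the center,
# and the effect grows with distance from the principal point.
print(apply_radial_distortion(0.0, 0.0, -0.2, 0.05))  # principal point unchanged
print(apply_radial_distortion(0.5, 0.0, -0.2, 0.05))
```

Because such a global polynomial is radially symmetric, it cannot represent localized manufacturing defects of a lens; that limitation is exactly what motivates the per-region flexibility of the free-function model described above.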
As an accuracy and flexibility evaluation step for the free-function model, we tested the free-function calibration model on real-world data using a stereo camera with added large local distortions, taking images from the front of a vehicle, similar to the conditions in which real-world use cases are defined. By performing the camera calibration, we compared the calibration results and accuracy parameters of the free-function model to those of a conventional calibration model. Using these calibration results, we generated a set of disparity maps and compared their density and availability, especially in the areas where the local distortion was present. We used this test to compare the capabilities of the proposed model to conventional ones in real-world situations where large optical distortions may be present that cannot easily be modelled with conventional calibration models. The higher modelling capability and accuracy of the free-function model will generally influence those functions that use the information of the disparity map or the derived 3D information as part of their input data, and it potentially leads to better functionality, or even availability of these functions, when local distortions are present in the image. There are many more use cases in photogrammetry and computer vision where a higher calibration accuracy is beneficial, for example on hardware such as low-cost optics, where optical distortions are sometimes present that cannot easily be modelled with classical models. These use cases could all benefit from the flexibility and modelling accuracy of the free-function model.