Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1


Search Results

Now showing 1 - 10 of 34
  • Open Access
    Forming a hybrid intelligence system by combining Active Learning and paid crowdsourcing for semantic 3D point cloud segmentation
    (2023) Kölle, Michael; Sörgel, Uwe (Prof. Dr.-Ing.)
    While tremendous advances have been achieved in recent years in the development of supervised Machine Learning (ML) systems such as Convolutional Neural Networks (CNNs), the most decisive factor for their performance is still the quality of the labeled training data from which the system is supposed to learn. This is why we advocate focusing more on methods to obtain such data, which we expect to be more sustainable than establishing ever new classifiers in the rapidly evolving ML field. In the geospatial domain, however, the process of generating training data for ML systems is still rather neglected in research, and experts typically end up occupied with such tedious labeling tasks. In our design of a system for the semantic interpretation of Airborne Laser Scanning (ALS) point clouds, we break with this convention and completely lift labeling obligations from experts. At the same time, human annotation is restricted to only those samples that actually justify manual inspection. This is accomplished by means of a hybrid intelligence system in which the machine, represented by an ML model, works actively and iteratively together with the human component through Active Learning (AL), which acts as a pointer to exactly those most decisive samples. Instead of having an expert label these samples, we propose to outsource this task to a large group of non-specialists, the crowd. But since it is rather unlikely that enough volunteers would participate in such crowdsourcing campaigns, given the tedious nature of labeling, we argue for attracting workers by monetary incentives, i.e., we employ paid crowdsourcing. Relying on respective platforms, we typically have access to a vast pool of prospective workers, guaranteeing prompt completion of jobs. Thus, crowdworkers become human processing units of this hybrid intelligence system, behaving similarly to the electronic processing units that perform the tasks of the machine part. With respect to the latter, we not only evaluate whether an AL-based pipeline works for the semantic segmentation of ALS point clouds, but also shed light on the question of why it works. As crucial components of our pipeline, we test and enhance different AL sampling strategies in conjunction with both a conventional feature-driven classifier and a data-driven CNN classification module. In this regard, we aim to select AL points in such a manner that samples are not only informative for the machine, but also feasible for non-experts to interpret. These theoretical formulations are verified by various experiments in which we replace the frequently assumed but highly unrealistic error-free oracle with the simulated imperfect oracles we are always confronted with when working with humans. Furthermore, we find that the need for labeled data, which is already reduced through AL to a small fraction (typically ≪1 % of Passive Learning training points), can be minimized even further when we reuse information from a given source domain for the semantic enrichment of a specific target domain, i.e., we utilize AL as a means for Domain Adaptation. As for the human component of our hybrid intelligence system, the special challenge we face is monetarily motivated workers with a wide variety of educational and cultural backgrounds and widely differing mindsets regarding the quality they are willing to deliver. Consequently, we are confronted with great inhomogeneity in the quality of the results received. Thus, when designing respective campaigns, special attention to quality control is required in order to automatically reject submissions of low quality and to refine accepted contributions in the sense of the Wisdom of the Crowds principle. We further explore ways to support the crowd in labeling by experimenting with different data modalities (discretized point cloud vs. continuous textured 3D mesh surface), and we also aim to shift the motivation from a purely extrinsic nature (i.e., payment) to a more intrinsic one, which we intend to trigger through gamification. Eventually, by casting these different concepts into the so-called CATEGORISE framework, we realize the envisioned hybrid intelligence system and employ it for the semantic enrichment of ALS point clouds of different characteristics, enabled through learning from the (paid) crowd.
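The sampling step described above can be illustrated with a minimal entropy-based AL query criterion (a generic sketch, not the exact criterion developed in the thesis; the probability values are invented):

```python
import numpy as np

def entropy_sampling(class_probs, n_select):
    """Return the indices of the most uncertain (highest-entropy) samples.

    class_probs: (n_samples, n_classes) predicted class probabilities.
    The selected samples are the ones forwarded to human annotators.
    """
    eps = 1e-12  # avoid log(0) for confident predictions
    entropy = -np.sum(class_probs * np.log(class_probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:n_select]

# Three points: very confident, near-uniform, moderately confident.
probs = np.array([
    [0.98, 0.01, 0.01],
    [0.34, 0.33, 0.33],
    [0.70, 0.20, 0.10],
])
print(entropy_sampling(probs, 1))  # the near-uniform sample is queried first
```

In a full AL loop this selection would alternate with retraining the classifier on the newly labeled points.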
  • Open Access
    Energieeffizienz von Prozessoren in High Performance Computing-Anwendungen der Ingenieurwissenschaften
    (Stuttgart : Höchstleistungsrechenzentrum, Universität Stuttgart, 2018) Khabi, Dmitry; Resch, Michael M. (Prof. Dr.-Ing. Dr. h.c. Dr. h.c. Prof. E.h.)
    This thesis centers on the question of energy efficiency in high-performance computing (HPC), with a focus on the relationships between the electrical power of processors and their compute performance. Chapter 1, the introduction to the following chapters, explains the motivation and the state of the art in power measurement and energy efficiency for HPC and its components. The following Chapters 2 and 3 discuss in detail a measurement technique developed at the High-Performance Computing Center Stuttgart (HLRS), which is used for the power measurements in the test cluster. The measurement procedure for the different hardware components and the dependency between their power supply, measurement accuracy, and measurement frequency are presented. In Chapter 4 I describe the relationship between the power consumption of a processor, its configuration, and the algorithms executed on it. The focus lies on the interplay between CPU frequency, degree of parallelization, compute performance, and electrical power. For the efficiency comparison between processors and algorithms, I use a method based on an analytical approximation of the compute performance and the electrical power of the processors. This chapter also shows that the coefficients of the approximation, which give several hints about software and hardware properties, can serve as the basis for developing an extended model. As shown later in the thesis, the existing models of compute performance and electrical power only partly account for the different frequency domains of the hardware components. Chapter 5 explains an extension of the existing performance model with whose help the corresponding new properties of the CPU architecture can be partially explained. The insights gained from this are intended to help develop a model that describes both the compute and the electrical power. In Chapter 6 I describe the problem of the energy efficiency of a high-performance computer; among other things, the methods developed in this thesis are evaluated on an HPC platform.
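The kind of analytical approximation mentioned above can be sketched by fitting electrical power as a low-order function of core frequency (the model form and all numbers are illustrative assumptions, not the thesis's actual approximation; a cubic term is a common stand-in for dynamic power when voltage scales with frequency):

```python
import numpy as np

# Synthetic measurements: electrical power at several core frequencies (GHz).
f = np.array([1.2, 1.6, 2.0, 2.4, 2.8])
p_true = (12.0, 6.0, 1.1)                        # assumed idle, linear, cubic terms
p_watts = p_true[0] + p_true[1] * f + p_true[2] * f**3

# Least-squares fit of P(f) = p0 + p1*f + p2*f^3.
A = np.column_stack([np.ones_like(f), f, f**3])
coef, *_ = np.linalg.lstsq(A, p_watts, rcond=None)
print(np.round(coef, 3))        # recovers the coefficients on noise-free data

# Energy efficiency (flops per watt) if performance scales ~linearly with f:
perf = 4e9 * f                  # hypothetical flop rate
eff = perf / (A @ coef)
```

Comparing such fitted coefficients across processors or algorithms is one way the coefficients can "hint" at hardware and software properties.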
  • Open Access
    Deep learning based prediction and visual analytics for temporal environmental data
    (2022) Harbola, Shubhi; Coors, Volker (Prof. Dr.)
    The objective of this thesis is to develop Machine Learning methods and their visualisation for environmental data. The presented approaches primarily focus on devising an accurate Machine Learning framework that supports the user in understanding and comparing model accuracy in relation to essential aspects of parameter selection, trends, time frame, and correlation across the considered meteorological and pollution parameters. Building on this, the thesis develops approaches for the interactive visualisation of environmental data that wrap around the time series prediction as an application. Moreover, these approaches provide an interactive application that supports: 1. a Visual Analytics platform to interact with the sensor data and enhance the visual representation of the environmental data by identifying patterns that mostly go unnoticed in large temporal datasets, 2. a seasonality deduction platform presenting analyses of the results that clearly demonstrate the relationship between these parameters in a combined temporal activity frame, and 3. air quality analyses that successfully discover spatio-temporal relationships among complex air quality data interactively in different time frames, harnessing the user’s knowledge of factors influencing past, present, and future behaviour with the aid of Machine Learning models. Some of these contributions belong to the field of Explainable Artificial Intelligence, an area concerned with the development of methods that help understand, explain, and interpret Machine Learning algorithms. In summary, this thesis describes Machine Learning prediction algorithms together with several visualisation approaches for interactively analysing the temporal relationships among complex environmental data in different time frames in a robust web platform. The developed interactive visualisation system for environmental data assimilates visual prediction, the sensors’ spatial locations, measurements of the parameters, detailed pattern analyses, and changes in conditions over time. This provides a new combined approach to the existing visual analytics research. The algorithms developed in this thesis can be used to infer spatio-temporal environmental data, enabling interactive exploration processes and thus helping manage cities smartly.
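As a minimal sketch of temporal prediction on sensor data, a sliding-window autoregressive baseline can be written in a few lines (a generic baseline for illustration, not the deep learning models developed in the thesis; the trend data are invented):

```python
import numpy as np

def ar_forecast(series, window, steps):
    """Fit a linear autoregressive model on sliding windows and forecast ahead."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    Xb = np.column_stack([X, np.ones(len(X))])        # affine model: AR + bias
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    hist = list(series)
    out = []
    for _ in range(steps):
        nxt = float(np.append(hist[-window:], 1.0) @ w)
        out.append(nxt)                               # feed forecast back in
        hist.append(nxt)
    return out

# A clean linear trend is continued exactly by the affine AR model.
trend = [20.0 + 0.5 * t for t in range(30)]
print(ar_forecast(trend, window=4, steps=2))          # ≈ [35.0, 35.5]
```

Real sensor series would add noise, gaps, and seasonality, which is where the learned models and the visual analyses of the thesis come in.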
  • Open Access
    Efficient modeling and computation methods for robust AMS system design
    (2018) Gil, Leandro; Radetzki, Martin (Prof. Dr.-Ing.)
    This dissertation addresses the challenge of developing model-based design tools that better support the design of the mixed analog and digital parts of embedded systems. It focuses on the conception of efficient modeling and simulation methods that adequately support emerging system-level design methodologies. Starting with a deep analysis of the design activities, many weak points of today’s system-level design tools were identified. After considering the modeling and simulation of power electronic circuits for designing low-energy embedded systems, a novel signal model that efficiently captures the dynamic behavior of analog and digital circuits is proposed and utilized for the development of computation methods that enable fast and accurate system-level simulation of AMS systems. In order to support a stepwise refinement of the system design based on the essential system properties, behavior computation methods for linear and nonlinear analog circuits based on the novel signal model are presented and compared, regarding performance, accuracy, and stability, with existing numerical and analytical methods for circuit simulation. The novel signal model, in combination with the method proposed to efficiently handle the interaction of analog and digital circuits and the new method for digital circuit simulation, forms the key contribution of this dissertation, because together they allow the concurrent state- and event-based simulation of analog and digital circuits. Using a synchronous data flow model of computation for scheduling the execution of the analog and digital model parts, very fast AMS system simulations are carried out. As the best behavior abstraction for analog and digital circuits may be selected without the need to change component interfaces, the implementation, validation, and verification of AMS systems take advantage of the novel mixed-signal representation. Changes in the modeling abstraction level do not affect the experiment setup. The second part of this work deals with the robust design of AMS systems and its verification. After defining a mixed-sensitivity-based robustness evaluation index for AMS control systems, a general robust design method leading to optimal controller tuning is presented. To avoid over-conservative AMS system designs, the proposed robust design optimization method considers parametric uncertainty and nonlinear model characteristics. The system properties in the frequency domain needed to evaluate system robustness during parameter optimization are obtained from the proposed signal model. Further advantages of the presented signal model for the computation of control system performance evaluation indexes in the time domain are also investigated in combination with range arithmetic. A novel approach for capturing parameter correlations in range-arithmetic-based circuit behavior computation is proposed as a step towards a holistic modeling method for the robust design of AMS systems. The several modeling and computation methods proposed to improve the support of design methodologies and tools for AMS systems are validated and evaluated in the course of this dissertation, considering many aspects of the modeling, simulation, design, and verification of a low-power embedded system implementing Adaptive Voltage and Frequency Scaling (AVFS) for energy saving.
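The dependency problem that motivates capturing parameter correlations in range arithmetic can be shown in a few lines (a textbook illustration, not the dissertation's method; the resistor value is invented):

```python
# Naive range (interval) arithmetic treats every occurrence of an uncertain
# parameter as independent, so correlated occurrences blow up the result range.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # Worst-case bounds, assuming the operands vary independently.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

r = Interval(0.9, 1.1)   # a component value known to within ±10 %
print(r - r)             # naive result spans roughly [-0.2, 0.2], not 0

# An affine form x = x0 + x1*eps (eps in [-1, 1]) tracks the correlation:
x = (1.0, 0.1)                            # 1.0 + 0.1*eps
diff = (x[0] - x[0], x[1] - x[1])
print(diff)                               # exactly zero: correlation preserved
```

This overestimation is exactly why correlation-aware range arithmetic matters for circuit behavior bounds.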
  • Open Access
    Über die Lösung der Navier-Stokes-Gleichungen mit Hilfe der Moore-Penrose-Inversen des Laplace-Operators im Vektorraum der Polynomkoeffizienten
    (2024) Große-Wöhrmann, Bärbel; Resch, Michael (Prof. Dr.-Ing.)
    The standard numerical methods for solving partial differential equations are based on a spatial discretization of the computational domain. Their performance and scalability on modern massively parallel high-performance computers depend on the availability of efficient numerical methods for solving linear systems of equations. In view of fundamental challenges, the development of new solution approaches appears worthwhile. In this thesis I present a polynomial approach for solving partial differential equations that does not rely on a spatial discretization and that, with the help of the Moore-Penrose inverse of the Laplace operator, enables the decoupling of the Navier-Stokes equations. The degree of the polynomials is not fundamentally bounded, so that a high spatial resolution can be achieved.
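A one-dimensional analogue of the coefficient-space idea can be sketched with the pseudoinverse of a second-derivative operator (an illustrative simplification; the thesis treats the full Laplace operator within the Navier-Stokes equations):

```python
import numpy as np

# Represent d^2/dx^2 as a matrix acting on polynomial coefficient vectors
# and invert it with the Moore-Penrose pseudoinverse.
n = 6                                    # polynomials up to degree 5
D = np.zeros((n - 2, n))                 # d^2/dx^2 : degree n-1 -> degree n-3
for k in range(2, n):
    D[k - 2, k] = k * (k - 1)            # c_k x^k -> k(k-1) c_k x^(k-2)

# Solve u'' = f for f(x) = 2 + 6x (coefficients [2, 6, 0, 0]).
f = np.array([2.0, 6.0, 0.0, 0.0])
u = np.linalg.pinv(D) @ f                # minimum-norm coefficient solution

print(np.allclose(D @ u, f))             # True: u'' reproduces f exactly
```

The pseudoinverse picks the minimum-norm solution, so the (non-unique) constant and linear parts of u come out as zero; boundary conditions would fix them in an actual solver.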
  • Open Access
    Development and application of PICLas for combined optic-/plume-simulation of ion-propulsion systems
    (2019) Binder, Tilman; Fasoulas, Stefanos (Prof. Dr.-Ing.)
    Electric propulsion systems are an efficient option for altitude/attitude control and orbit transfers of spacecraft. One example is the gridded ion thruster, which ionizes the propellant and accelerates the ions of the generated plasma by a high-voltage grid system. This work deals with the numerical simulation of the plasma flow starting near the grid system in the ionization chamber and leaving the thruster with high velocity. These simulations give direct insight into the modeled physical interrelationships and can be used to investigate questions arising in the industrial development process of ion propulsion systems. The required simulation method is challenging due to the high degree of flow rarefaction and the plasma state itself, including freely moving ions and electrons. Applicable simulation methods follow a particle-based, gas-kinetic approach, such as Particle-In-Cell (PIC) for the simulation of electromagnetic interaction and Direct Simulation Monte Carlo (DSMC) for inter-particle collisions. The effects resulting from the finite size of a real system can only be investigated by simulating the complete, three-dimensional thruster geometry, which requires a large and complex simulation domain. Acceptable simulation times are realized by extending and using the framework of the coupled PIC-DSMC code PICLas in combination with high-performance computing systems.
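The time-integration kernel at the heart of a PIC cycle can be illustrated with a leapfrog push in a constant accelerating field (a textbook sketch, not PICLas code; the field strength, time step, and ion data are invented):

```python
# Leapfrog push of a singly charged xenon-like ion in a constant electric
# field, as between the grids of an ion thruster. In a real PIC code the
# field would be recomputed from the particle charge deposition every step.

q_over_m = 7.3e5         # approximate charge-to-mass ratio of Xe+ [C/kg]
E = 1.0e5                # constant electric field [V/m]
dt = 1.0e-9              # time step [s]
steps = 1000

x, v = 0.0, 0.0
v -= 0.5 * dt * q_over_m * E   # shift velocity half a step back (leapfrog start)
for _ in range(steps):
    v += dt * q_over_m * E     # kick: advance velocity by a full step
    x += dt * v                # drift: advance position with mid-step velocity

t = steps * dt
x_exact = 0.5 * q_over_m * E * t**2   # constant acceleration -> exact parabola
print(abs(x - x_exact) / x_exact)     # leapfrog is exact for a constant field
```

The staggered kick-drift update is what gives leapfrog its good long-term energy behavior, which matters over the many steps of a plume simulation.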
  • Open Access
    A light weighted semi-automatically I/O-tuning solution for engineering applications
    (Stuttgart : Höchstleistungsrechenzentrum, Universität Stuttgart, 2017) Wang, Xuan; Resch, Michael M. (Prof. Dr.-Ing. Dr. h.c. Dr. h.c. Prof. E.h.)
    Today’s engineering applications running on high performance computing (HPC) platforms generate more and more diverse data simultaneously and require large storage systems as well as extremely high data transfer rates to store their data. To achieve a high data transfer rate (I/O performance), computer scientists together with HPC manufacturers have developed many innovative solutions. However, transferring the knowledge behind these solutions to engineers and scientists has become one of the largest barriers. Since engineers and scientists are experts in their own professional areas, they might not be capable of tuning their applications to the optimal level; sometimes they might even degrade I/O performance by mistake. The basic training courses provided by computing centers like HLRS are often not sufficient to transfer the required know-how. In order to overcome this barrier, I have developed a semi-automatic I/O-tuning solution (SAIO) for engineering applications. SAIO, a lightweight and intelligent framework, is designed to be compatible with as many engineering applications as possible, scalable to large engineering applications, usable by engineers and scientists with little knowledge of parallel I/O, and portable across multiple HPC platforms. Building upon the MPI-IO library allows SAIO to be compatible with MPI-IO based high-level I/O libraries, such as parallel HDF5 and parallel NetCDF, as well as with proprietary and open source software, like Ansys Fluent, the WRF Model, etc. In addition, SAIO follows the current MPI standard, which makes it portable across many HPC platforms and scalable. SAIO, which is implemented as a dynamic library and loaded dynamically, does not require recompiling or changing an application's source code. By simply adding several export directives to their job submission scripts, engineers and scientists are able to run their jobs more efficiently. Furthermore, an automated SAIO training utility keeps the optimal configurations up to date, without any manual effort from the user.
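A job-script fragment of the kind described might look as follows (the variable names and paths are hypothetical, not SAIO's documented interface; only the pattern of preloading a dynamic library via export directives is taken from the abstract):

```shell
# Hypothetical job-script fragment: variable names and paths are illustrative.
export SAIO_TRAINING_DB="$HOME/.saio/optimal_hints.db"   # tuned MPI-IO hints
export SAIO_LOG_LEVEL=info
# In the real job script the library would be preloaded before launching, e.g.:
#   export LD_PRELOAD=/path/to/libsaio.so
#   mpirun -np 256 ./fluent_case
echo "SAIO log level: $SAIO_LOG_LEVEL"
```

The preload mechanism is what lets the tuning layer intercept MPI-IO calls without recompiling the application.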
  • Open Access
    Multiscale modeling and stability analysis of soft active materials : from electro- and magneto-active elastomers to polymeric hydrogels
    (Stuttgart : Institute of Applied Mechanics, 2023) Polukhov, Elten; Keip, Marc-André (Prof. Dr.-Ing.)
    This work is dedicated to the modeling and stability analysis of stimuli-responsive, soft active materials within a multiscale variational framework. In particular, composite electro- and magneto-active polymers and polymeric hydrogels are under consideration. When electro- and magneto-active polymers (EAP and MAP) are fabricated in the form of composites, they comprise at least two phases: a polymeric matrix and embedded electric or magnetic particles. As a result, the obtained composite is soft, highly stretchable, and fracture-resistant like a polymer, and undergoes stimuli-induced deformation due to the interaction of the particles. By designing the microstructure of EAP or MAP composites, a compressive or a tensile deformation can be induced under electric or magnetic fields, and the coupling response of the composite can be enhanced. Hence, these materials have found applications as sensors, actuators, energy harvesters, absorbers, and soft, programmable, smart devices in various areas of engineering. Similarly, polymeric hydrogels are also stimuli-responsive materials. They undergo large volumetric deformations due to the diffusion of a solvent into the polymer network of the hydrogel. In this case, the obtained material shows the characteristic behavior of both polymer and solvent. Therefore, these materials can also be considered in the form of composites to enhance the response further. Since hydrogels are biocompatible materials, they have found applications as contact lenses, wound dressings, and drug encapsulators and carriers in bio-medicine, alongside applications similar to those of electro- and magneto-active polymers. All the aforementioned favorable features of these materials, as well as their application possibilities, make it necessary to develop mathematical models and numerical tools to simulate their response, in order to design pertinent microstructures for particular applications and to understand the observed complex patterns such as wrinkling, creasing, snapping, localization, or pattern transformations, among others. These instabilities are often considered failure points of materials; however, many recent works take advantage of instabilities for smart applications. The investigation of these instabilities and the prediction of their onset and mode are among the main goals of this work. In this sense, the thesis is organized into three main parts. The first part is devoted to the state of the art in the development, fabrication, and modeling of soft active materials, as well as the continuum-mechanical description of magneto-electro-elasticity. The second part is dedicated to multiscale instabilities in electro- and magneto-active polymer composites within a minimization-type variational homogenization setting. This means that the highly heterogeneous problem is not resolved on a single scale, which would be computationally inefficient, but is replaced by an equivalent homogeneous problem. The effective response of the macroscopic homogeneous problem is determined by solving a microscopic representative volume element that includes all the geometrical and material nonlinearities. To bridge the two scales, the Hill-Mandel macro-homogeneity condition is utilized. Within this framework, we investigate both macroscopic and microscopic instabilities. The former are important not only from a physical point of view but also from a computational point of view, since macroscopic stability (strong ellipticity) is necessary for the existence of minimizers at the macroscopic scale. Similarly, the investigation of the latter instabilities is important to determine the pattern transformations at the microscale due to external action; thereby, the critical domain of homogenization is also determined for the computation of accurate effective results. Both investigations are carried out for various composite microstructures, and it is found that these instabilities play a crucial role in the response of the materials. Therefore, they must be considered when designing EAP and MAP composites as well as for providing reliable computations. The third part of the thesis is dedicated to polymeric hydrogels. Here, we develop a minimization-based homogenization framework to determine the response of transient periodic hydrogel systems. We demonstrate the prevailing size effect resulting from the transient microscopic problem, which is investigated for various microstructures. Exploiting the elements of the proposed framework, we explore material and structural instabilities in single- and two-phase hydrogel systems. We observe complex, experimentally known as well as novel 2D pattern transformations, such as diamond-plate patterns with and without wrinkling of internal surfaces for perforated microstructures, and 3D pattern transformations in thin reinforced hydrogel composites. The results indicate that the obtained patterns can be controlled by tuning the material and geometrical parameters of the composite.
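The Hill-Mandel macro-homogeneity condition invoked above is commonly written as follows (standard notation: P the first Piola-Kirchhoff stress, F the deformation gradient, overbars denoting macroscopic averages over the representative volume element V):

```latex
\overline{\boldsymbol{P}} : \dot{\overline{\boldsymbol{F}}}
  \;=\;
  \frac{1}{|\mathcal{V}|} \int_{\mathcal{V}} \boldsymbol{P} : \dot{\boldsymbol{F}} \,\mathrm{d}V
```

It states that the macroscopic stress power equals the volume average of the microscopic stress power, which is what makes the two-scale transition energetically consistent.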
  • Open Access
    Model-centric task debugging at scale
    (Stuttgart : Höchstleistungsrechenzentrum, Universität Stuttgart, 2017) Nachtmann, Mathias; Resch, Michael (Prof. Dr.-Ing. Dr. h.c. Dr. h.c. Prof. E.h.)
    Chapter 1, Introduction, presents state-of-the-art debugging techniques in high-performance computing. The lack of information from the programming model that these traditional debugging tools suffer from motivated the model-centric debugging approach. Chapter 2, Technical Background: Parallel Programming Models & Tools, exemplifies the programming models used in the scope of my work. The differences between those models are illustrated, and examples are included for the most popular programming models in HPC. The chapter also describes Temanejo, the toolchain's front-end, which supports application developers in their work. In the following chapter (Chapter 4), Design: Events & Requests in Ayudame, the theory of "task" and "dependency" representation is stated. The chapter includes the design of the different information types that are later used for the communication between a programming model and the model-centric debugging approach. In Chapter 5, Design: Communication Back-end Ayudame, the design of the back-end tool infrastructure is described in detail. This also includes the problems occurring during the design process and their specific solutions. The concept of a multi-process environment and the usage of different programming models at the same time are also part of this chapter. The following chapter (Chapter 6), Instrumentation of Runtime Systems, briefly describes the information exchange between a programming model and the model-centric debugging approach. The different ways of monitoring and controlling an application through its programming model are illustrated. In Chapter 7, Case Study: Performance Debugging, the model-centric debugging approach is used for optimising an application. All necessary optimisation steps are described in detail with the help of mock-ups. Additionally, a description of the different optimised versions is included in this chapter. The evaluation, done on different hardware architectures, is presented and discussed. This includes not only the behaviour of the versions on different platforms but also architecture-specific issues.
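The event-based reconstruction of a task graph that such a tool front-end performs can be sketched as follows (an illustrative model, not the actual Ayudame/Temanejo API; event and task names are invented):

```python
# A debugger front-end can rebuild the task graph of a task-parallel program
# from two event types emitted by the runtime: "task added" and
# "dependency added".

class TaskGraph:
    def __init__(self):
        self.tasks = {}            # task id -> label
        self.deps = []             # (predecessor id, successor id)

    def on_task_event(self, task_id, label):
        self.tasks[task_id] = label

    def on_dependency_event(self, pred, succ):
        self.deps.append((pred, succ))

    def ready_tasks(self):
        """Tasks with no incoming dependency, i.e. runnable right now."""
        blocked = {succ for _, succ in self.deps}
        return [t for t in self.tasks if t not in blocked]

g = TaskGraph()
g.on_task_event(1, "init")
g.on_task_event(2, "solve")
g.on_task_event(3, "output")
g.on_dependency_event(1, 2)
g.on_dependency_event(2, 3)
print(g.ready_tasks())   # only "init" (task 1) may run before its successors
```

On top of such a graph, a front-end can then highlight, step, or block individual tasks, which is what enables performance debugging at the task level.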
  • Open Access
    Physics-informed regression of implicitly-constrained robot dynamics
    (2022) Geist, Andreas René; Allgöwer, Frank (Prof. Dr.-Ing.)
    The ability to predict a robot’s motion through a dynamics model is critical for the development of fast, safe, and efficient control algorithms. Yet, obtaining an accurate robot dynamics model is challenging, as robot dynamics are typically nonlinear and subject to environment-dependent physical phenomena such as friction and material elasticities. The respective functions often cause analytical dynamics models to have large prediction errors. An alternative to analytical modeling is the identification of a robot’s dynamics through data-driven modeling techniques such as Gaussian processes or neural networks. However, purely data-driven algorithms require considerable amounts of data, which on a robotic system must be collected in real time. Moreover, the information stored in the data, as well as the coverage of the system’s state space by the data, is limited by the controller used to obtain the data. To tackle the shortcomings of analytical dynamics modeling and data-driven modeling, this dissertation investigates and develops models in which analytical dynamics is combined with data-driven regression techniques. By combining prior structural knowledge from analytical dynamics with data-driven regression, physics-informed models show improved data efficiency and prediction accuracy compared to using the aforementioned modeling techniques in isolation.
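The hybrid idea can be sketched in a few lines (an invented pendulum example, not the dissertation's models: the analytical part covers inertia and gravity, and only the unmodeled friction residual is regressed from data):

```python
import numpy as np

# Physics-informed regression sketch: analytical rigid-body dynamics plus a
# data-driven residual that is linear in velocity (viscous friction).

m, l, g = 1.0, 0.5, 9.81
d_true = 0.15                                   # friction coefficient to recover

rng = np.random.default_rng(0)
theta = rng.uniform(-1.0, 1.0, 200)             # joint angle samples
dtheta = rng.uniform(-3.0, 3.0, 200)            # joint velocity samples
ddtheta = rng.uniform(-5.0, 5.0, 200)           # joint acceleration samples

def tau_analytic(th, ddth):
    """Analytical inverse dynamics without friction."""
    return m * l**2 * ddth + m * g * l * np.sin(th)

# "Measured" torques include the friction the analytical model misses.
tau_measured = tau_analytic(theta, ddtheta) + d_true * dtheta

# Fit only the residual, not the whole dynamics: far less data is needed.
residual = tau_measured - tau_analytic(theta, ddtheta)
d_est = np.linalg.lstsq(dtheta[:, None], residual, rcond=None)[0][0]
print(round(float(d_est), 4))                   # recovers 0.15 on noise-free data
```

Replacing the linear residual model with a Gaussian process or neural network gives the nonparametric variants the dissertation studies, while the analytical term keeps the learned part small and data-efficient.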