Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Search Results

Now showing 1 - 10 of 285
  • Anonymisierung von Daten : von der Literatur zum Automobilbereich (Open Access)
    (2023) Herkommer, Jan
Data anonymization in the automotive domain is becoming increasingly important. However, there is little literature and few approaches dealing with the anonymization of automotive data. This thesis therefore uses a structured literature review to survey the currently most widespread techniques and application areas and summarizes the key findings of the review. For each analysed paper, the application area, the methodology, and the type of data to be anonymized are identified. Furthermore, the metrics used to compare different approaches are examined. Building on these findings, the anonymization of vehicle data is then discussed on the basis of several use cases, and challenges and possible solutions are outlined. Finally, an approach for anonymizing routes is implemented as an example, in order to anonymize vehicle routes recorded with a GPS sensor. This highlights additional problems such as the handling of measurement inaccuracies and errors, as well as the actual impact of reduced data utility.
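The final step, anonymizing GPS-recorded routes, can be illustrated with a common baseline technique: cropping all fixes within a radius of the trip's endpoints so that origin and destination are hidden. This is a hedged sketch of one standard approach, not the implementation developed in the thesis; the radius and helper names are illustrative.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def crop_endpoints(route, radius_m=200.0):
    """Drop all GPS fixes within radius_m of the first and last fix,
    hiding the trip's origin and destination."""
    start, end = route[0], route[-1]
    return [p for p in route
            if haversine_m(p, start) > radius_m and haversine_m(p, end) > radius_m]
```

Cropping trades data utility for privacy directly: a larger radius hides more of the trip but also discards more usable trajectory data, which mirrors the utility trade-off discussed in the abstract.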
  • Development of an Euler-Lagrangian framework for point-particle tracking to enable efficient multiscale simulations of complex flows (Open Access)
    (2023) Kschidock, Helena
    In this work, we implement, test, and validate an Euler-Lagrangian point-particle tracking framework for the commercial aerodynamics and aeroacoustics simulation tool ultraFluidX, which is based on the Lattice Boltzmann Method and optimized for GPUs. Our framework successfully simulates one-way and two-way coupled particle-laden flows based on drag forces and gravitation. Trilinear interpolation is used for determining the fluid's macroscopic properties at the particle position. Object and domain boundary conditions are implemented using a planar surface approximation. The whole particle framework is run within three dedicated GPU kernels, and data is only copied back to the CPU upon output. We show validation for the velocity interpolation, gravitational acceleration, back-coupling forces and boundary conditions, and test runtimes and memory requirements. We also propose the next steps required to make the particle framework ready for use in engineering applications.
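The trilinear interpolation used to evaluate the fluid's macroscopic properties at a particle position can be sketched as follows. This is a generic NumPy illustration on a unit-spaced grid, not ultraFluidX code:

```python
import numpy as np

def trilinear_velocity(u, pos):
    """Interpolate a node-based velocity field u[i, j, k, :] (unit grid
    spacing) at a continuous particle position pos = (x, y, z)."""
    i0 = np.floor(pos).astype(int)          # lower corner of enclosing cell
    f = pos - i0                            # fractional offsets in [0, 1)
    v = np.zeros(u.shape[-1])
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                # weight of each of the 8 cell corners
                w = ((f[0] if di else 1 - f[0])
                     * (f[1] if dj else 1 - f[1])
                     * (f[2] if dk else 1 - f[2]))
                v += w * u[i0[0] + di, i0[1] + dj, i0[2] + dk]
    return v
```

The eight corner weights sum to one, so a field that is linear in space is reproduced exactly at any interior point.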
  • Concepts and methods for the design, configuration and selection of machine learning solutions in manufacturing (Open Access)
    (2021) Villanueva Zacarias, Alejandro Gabriel; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
The application of Machine Learning (ML) techniques and methods is common practice in manufacturing companies. They assign teams to the development of ML solutions to support individual use cases. This dissertation refers to the set of software components and learning algorithms that deliver a predictive capability based on available use case data, together with their (hyper)parameters and technical settings, as an ML solution. Currently, development teams face four challenges that complicate the development of ML solutions. First, they lack a formal approach to specify ML solutions that can trace the impact of individual solution components on domain-specific requirements. Second, they lack an approach to document the configurations chosen to build an ML solution, thereby ensuring the reproducibility of the performance obtained. Third, they lack an approach to recommend and select ML solutions that is intuitive for non-ML experts. Fourth, they lack a comprehensive sequence of steps that ensures both best practices and the consideration of technical and domain-specific aspects during the development process. Overall, the inability to address these challenges leads to longer development times and higher development costs, as well as less suitable ML solutions that are more difficult to understand and to reuse. This dissertation presents concepts to address these challenges: Axiomatic Design for Machine Learning (AD4ML), the ML solution profiling framework, and AssistML. AD4ML is a concept for the structured and agile specification of ML solutions. It establishes clear relationships between domain-specific requirements and concrete software components, so that AD4ML specifications can be validated against domain expert requirements before implementation. The ML solution profiling framework employs metadata to document important characteristics of data, technical configurations, and parameter values of software components, as well as multiple performance metrics. These metadata constitute the foundation for the reproducibility of ML solutions. AssistML recommends ML solutions for new use cases: it searches among documented ML solutions for those that best fulfill the performance preferences of the new use case, and the selected solutions are then presented to decision-makers in an intuitive way. Each of these concepts was implemented and evaluated. Combined, they offer development teams a technology-agnostic approach to build ML solutions, with multiple benefits: shorter development times, more efficient development projects, and better-informed decisions about the development and selection of ML solutions.
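The profiling-and-recommendation idea can be pictured as a metadata record per documented solution plus a preference filter over its metrics. This is a hypothetical sketch in the spirit of the framework; the field names and the filter are illustrative, not the dissertation's actual schema or AssistML's algorithm.

```python
from dataclasses import dataclass

@dataclass
class MLSolutionProfile:
    """Hypothetical metadata record for one documented ML solution."""
    use_case: str
    dataset_traits: dict      # e.g. size, class balance, feature types
    components: dict          # software components and their versions
    hyperparameters: dict     # the (hyper)parameter values actually used
    metrics: dict             # multiple performance metrics

def matches_preferences(profile, preferences):
    """Keep a profile only if it meets every minimum metric preference."""
    return all(profile.metrics.get(m, 0.0) >= v for m, v in preferences.items())
```

A recommender in this spirit would rank the profiles that pass the filter and present them, metadata included, to the decision-maker.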
  • Adaptive robust scheduling in wireless Time-Sensitive Networks (TSN) (Open Access)
    (2024) Egger, Simon
    The correct operation of upper-layer services is unattainable in wireless Time-Sensitive Networks (TSN) if the schedule cannot provide formal reliability guarantees to each stream. Still, current TSN scheduling literature leaves reliability, let alone provable reliability, either poorly quantified or entirely unaddressed. This work aims to remedy this shortcoming by designing an adaptive mechanism to compute robust schedules. For static wireless channels, robust schedules enforce the streams' reliability requirements by allocating sufficiently large wireless transmission intervals and by isolating omission faults. While robustness against omission faults is conventionally achieved by strictly isolating each transmission, we show that controlled interleaving of wireless streams is crucial for finding eligible schedules. We adapt the Disjunctive Graph Model (DGM) from job-shop scheduling to design TSN-DGM as a metaheuristic scheduler that can schedule up to one hundred wireless streams with fifty cross-traffic streams in under five minutes. In comparison, we demonstrate that strict transmission isolation already prohibits scheduling a few wireless streams. For dynamic wireless channels, we introduce shuffle graphs as a linear-time adaptation strategy that converts reliability surpluses from improving wireless links into slack and reliability impairments from degrading wireless links into tardiness. While TSN-DGM is able to improve the adapted schedule considerably within ten seconds of reactive rescheduling, we justify that the reliability contracts between upper-layer services and the infrastructure provider should specify a worst-case channel degradation beyond which no punctuality guarantees can be made.
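In job-shop formulations such as the Disjunctive Graph Model, once all disjunctive arcs have been oriented, evaluating a candidate schedule reduces to a longest-path computation over the resulting acyclic graph. The sketch below shows only that generic evaluation step, with toy durations and a toy precedence relation; it is not the thesis's TSN-DGM model.

```python
from functools import lru_cache

def makespan(durations, succ):
    """Longest path (latest completion time) in an acyclic precedence
    graph. durations: {op: processing time}; succ: {op: successors}."""
    @lru_cache(maxsize=None)
    def chain(op):
        # duration of op plus the longest chain of operations after it
        return durations[op] + max((chain(m) for m in succ.get(op, [])),
                                   default=0)
    return max(chain(op) for op in durations)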
  • Improving usability of gaze and voice based text entry systems (Open Access)
    (2023) Sengupta, Korok; Staab, Steffen (Prof. Dr.)
  • Data-efficient and safe learning with Gaussian processes (Open Access)
    (2020) Schreiter, Jens; Toussaint, Marc (Prof. Dr. rer. nat.)
Data-based modeling techniques enjoy increasing popularity in many areas of science and technology where traditional approaches are limited with regard to accuracy and efficiency. When employing machine learning methods to generate models of dynamic systems, two important issues must be considered. Firstly, the data-sampling process should induce an informative and representative set of points to enable high generalization accuracy of the learned models. Secondly, the algorithmic part for efficient model building is essential for applicability, usability, and the quality of the learned predictive model. This thesis deals with both of these aspects for supervised learning problems, exploiting the interaction between them to realize accurate and powerful modeling. After introducing the non-parametric Bayesian modeling approach with Gaussian processes and the basics of transient modeling tasks, the thesis turns to extensions of this probabilistic technique that address relevant practical requirements. This chapter provides an overview of existing sparse Gaussian process approximations and proposes novel contributions that increase efficiency and improve model selection on particularly large training data sets. For example, our sparse modeling approach enables real-time capable prediction performance and efficient learning with low memory requirements. A comprehensive comparison on various real-world problems confirms the proposed contributions and shows a variety of modeling tasks where approximate Gaussian processes can be successfully applied. Further experiments provide more insight into the whole learning process, and thus a profound understanding of the presented work. The fourth chapter focuses on active learning schemes for safe and information-optimal generation of meaningful data sets.
In addition to the exploration behavior of the active learner, our work considers the safety issue, since interacting with real systems must not damage or even destroy them. We propose a new model-based active learning framework to solve both tasks simultaneously, employing the presented Gaussian process techniques as the basis for the data-sampling process. Furthermore, we distinguish between static and transient experimental design strategies. Both problems are considered separately in this chapter; nevertheless, the requirements for each active learning problem are the same. This subdivision into a static and a transient setting allows a more problem-specific perspective on the two cases, and thus enables the creation of specially adapted active learning algorithms. Our novel approaches are then investigated for different applications, where a favorable trade-off between safety and exploration is always realized. Theoretical results support these evaluations and provide deeper insight into the derived model-based active learning schemes. For example, an upper bound for the probability of failure of the presented active learning methods is derived under reasonable assumptions. Finally, the thesis concludes with a summary of the investigated machine learning problems and motivates some future research directions.
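The Gaussian process machinery underlying these chapters can be condensed to a few lines: the posterior mean and variance of a zero-mean GP with a squared-exponential kernel. This is a textbook sketch with arbitrary hyperparameters, not the thesis's sparse approximations.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)    # predictive covariance
    return mean, np.diag(cov)
```

The predictive variance is exactly what both the sparse approximations (where it must stay cheap to evaluate) and the safe active learning schemes (where it drives exploration) build on. Exact inference costs O(n³) in the number of training points, which is what motivates the sparse approximations discussed in the abstract.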
  • An analytics framework for the IoT platform MBP (Open Access)
    (2020) Kumar, Abishek
The emergence of IoT has introduced a huge number of applications that generate massive amounts of data at a high rate. These data streams need intelligent data processing and analysis. The evolution of smart cities and smart industries has resulted in an ocean of data from millions of sensors and devices; surveillance systems, telecommunication systems, smart devices, and smart cars are some examples of such systems. However, this data itself does not provide any information unless it is analysed. This results in a need for analytics tools and frameworks that can efficiently analyse the data and provide useful information. Analytics is the inspection, transformation, and modelling of data to obtain information that suggests and assists in decision making. In the world of IoT, analytics has a crucial role to play in improving life and better managing infrastructure in a secure, sustainable, and cost-effective manner. The smart sensor network serves as the base for IoT. In this context, one of the major tasks is to develop advanced analytics frameworks for the interpretation of the data provided by the sensors. MBP is a platform for managing IoT environments: sensors and devices can be registered with the platform, and the status of sensors can be viewed and modified from it. This platform will be used to collect data from the connected sensors and devices. There are two types of mining that can be performed on raw data: one technique analyses the data on the fly as it is received (data stream mining), and the other is performed on demand on data collected over a longer period of time (batch processing). Both types of analysis have their own advantages. The lambda architecture is a data analytics architecture that allows us to perform both stream analysis and batch processing on the same data. It defines practical and well-established principles for handling big data.
The pattern allows us to deal with both real-time and historical data, but the two analyses are performed separately and do not affect each other. In this thesis, we will create an analytics framework for the MBP IoT platform based on the lambda architecture.
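The lambda architecture's split described above can be illustrated with a toy event counter: a batch layer that periodically recomputes over the immutable master dataset, a speed layer that counts only events seen since the last batch run, and a query that merges both views. This is a minimal, generic sketch of the pattern, not the framework built for the MBP.

```python
from collections import Counter

class LambdaCounter:
    """Toy lambda architecture over a stream of discrete events."""
    def __init__(self):
        self.master = []                # append-only master dataset
        self.batch_view = Counter()     # precomputed batch view
        self.speed_view = Counter()     # real-time incremental view

    def ingest(self, event):
        self.master.append(event)       # raw storage for batch processing
        self.speed_view[event] += 1     # stream mining on the fly

    def run_batch(self):
        self.batch_view = Counter(self.master)  # full recomputation
        self.speed_view.clear()         # speed layer starts over

    def query(self, key):
        # serving layer: merge batch and real-time views
        return self.batch_view[key] + self.speed_view[key]
```

The batch view is always recomputed from the raw master data, so any error in the speed layer is bounded in time: it disappears at the next batch run.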
  • Neural Networks on Microsoft HoloLens 2 (Open Access)
    (2021) Lazar, Léon
The goal of the present Bachelor thesis is to enable comparing different approaches of integrating Neural Networks in HoloLens 2 applications in a quantitative and qualitative manner by defining highly diagnostic criteria. Moreover, multiple approaches to accomplish the integration are proposed, implemented, and evaluated using the aforementioned criteria. Finally, the work gives an expressive overview of all working approaches. The basic requirements are that Neural Networks trained with TensorFlow/Keras can be executed directly on the HoloLens 2 without requiring an internet connection. Furthermore, the Neural Networks have to be integrable in Mixed/Augmented Reality applications. In total, four approaches are proposed: TensorFlow.js, Unity Barracuda, TensorFlow.NET, and Windows Machine Learning, the latter being an already existing approach. For each working approach, a benchmarking application is developed which runs a common reference model on a test dataset to measure inference time and accuracy. Moreover, a small proof-of-concept application is developed to show that the approach also works in real Augmented Reality applications: it uses a MobileNetV2 model to classify image frames coming from the webcam and displays the results to the user. All feasible approaches are evaluated using the aforementioned criteria, which include ease of implementation, performance, accuracy, compatibility with Machine Learning frameworks and pre-trained models, and integrability with 3D frameworks. The Barracuda, TensorFlow.js, and WinML approaches turned out to be feasible. Barracuda, which can only be integrated in Unity applications, is the most performant framework since it can make use of GPU inference. It is followed by TensorFlow.js, which can be integrated in JavaScript Augmented Reality frameworks such as A-Frame. Windows ML can currently only use CPU inference on the HoloLens 2 and is therefore the slowest.
It can be integrated, with some difficulty, in Unity projects as well as in plain Win32 and UWP apps. Barracuda and Windows Machine Learning are also integrated in a biomechanical visualization application based on Unity for performing simulations. The results of this thesis make the different approaches for integrating Neural Networks on the HoloLens 2 comparable, so that an informed decision about which approach is best for a specific application can be made. Furthermore, the work shows that the use of Barracuda or TensorFlow.js on the HoloLens 2 is feasible and superior to the existing WinML approach.
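The core of such a benchmarking application boils down to timing inference over a labelled test set. A minimal, framework-agnostic sketch: the `predict` callable stands in for whichever runtime (Barracuda, TensorFlow.js, WinML, ...) is under test, and is an assumption of this illustration.

```python
import time

def benchmark(predict, dataset, labels):
    """Measure mean per-sample inference time and top-1 accuracy for a
    model wrapped as predict(sample) -> class index."""
    correct = 0
    t0 = time.perf_counter()
    for sample, label in zip(dataset, labels):
        correct += int(predict(sample) == label)
    elapsed = time.perf_counter() - t0
    return elapsed / len(dataset), correct / len(dataset)
```

Timing the whole loop and dividing by the sample count averages out per-call jitter, which matters on a device like the HoloLens 2 where individual inferences can vary widely.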
  • Webanwendung für Multiphysik-Simulationen mit opendihu (Open Access)
    (2020) Tompert, Matthias
Opendihu is a software framework for solving multi-physics problems using the finite element method. Its applications lie mainly in the field of musculoskeletal simulations. A simulation in opendihu is defined via a C++ file, in which nested solver structures are specified, and a Python file, in which the parameters of the solvers used are configured. Editing existing simulations and creating new ones through this interface requires good knowledge of the source code and of the structure of opendihu. It would therefore be useful to extend opendihu with a more user-friendly interface that is also suitable for beginners. In this thesis, I therefore implemented a graphical user interface for opendihu that visualizes the solver structure and the parameters of the individual solvers of a simulation. The application also makes it possible to modify existing simulations and to create new ones using a building-block system. This Bachelor thesis describes the design of the application and investigates, by means of a user study, whether the resulting user interface offers added value over the existing interface. On average, the study participants found editing and creating simulations with the application easier than with the existing interface. The application thus offers added value for editing and creating opendihu simulations. The building-block system was rated as particularly helpful for creating new simulations.
  • Optimization of diffusive load-balancing for short-range molecular dynamics (Open Access)
    (2020) Hauser, Simon
In recent years, multi-core processors have become more and more important for manufacturers, which means that developers now have to think more about how to distribute a single application sensibly over several processes. This is where load balancing comes in, allowing us to move load from an overloaded process to an underloaded one. One form is diffusive load balancing, which moves load within the local neighborhood, so that no global communication is needed. The advantage is that processes that have completed the local communication, and thus the load-balancing step, can continue with the next calculations. This form of load balancing is found in librepa, a library that deals with the balancing of linked-cell grids and can be used in the simulation software ESPResSo. In the course of this thesis, the library was extended with First- and Second-Order Diffusion. Furthermore, a feature was added that allows keeping the initial structure of the grid constant, which means that the neighborhood of each process does not change; this feature is necessary for Second-Order Diffusion. A comparison between the methods shows that both First- and Second-Order Diffusion distribute the load better across the system than librepa's default diffusive variant, which prior to this work was the only one. Furthermore, we show that there is no significant overhead in using the Preserving Structure Diffusion. With the use of flow iteration, the imbalance values of First- and Second-Order Diffusion can be improved even further.
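The idea behind first-order diffusion can be sketched in a few lines: in every step, each process exchanges a fixed fraction of its load difference with each neighbor, using only local information. This is a generic textbook sketch (serial, with a hypothetical diffusion constant `alpha`), not librepa's distributed implementation.

```python
def diffuse_first_order(load, neighbours, alpha=0.25, steps=50):
    """First-order diffusive load balancing on a static process graph.
    load: {process: load value}; neighbours: {process: neighbour list}."""
    load = dict(load)
    for _ in range(steps):
        flows = {p: 0.0 for p in load}
        for p, nbrs in neighbours.items():
            for q in nbrs:
                # each process pulls/pushes a fraction of the difference
                flows[p] += alpha * (load[q] - load[p])
        for p in load:
            load[p] += flows[p]
    return load
```

Because every flow between p and q is antisymmetric, the total load is conserved, and on a connected graph the iteration converges toward the uniform distribution; second-order schemes add a momentum-like term to speed up exactly this convergence.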