05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Item Open Access Anonymisierung von Daten : von der Literatur zum Automobilbereich (2023) Herkommer, Jan
Data anonymization in the automotive domain is becoming increasingly important, yet there is little literature and few approaches that address the anonymization of automotive data. This thesis therefore uses a structured literature review to survey the currently most widespread methods and application areas and summarizes the key findings of the review. For each analyzed paper, the application area, the methodology, and the type of data to be anonymized are identified. Furthermore, the metrics used to compare different approaches are examined. Building on these findings, the anonymization of vehicle data is then discussed for several use cases, and challenges and possible solutions are outlined. Finally, an approach for anonymizing routes is implemented as an example, in order to anonymize vehicle routes recorded with a GPS sensor. This highlights additional problems such as handling measurement inaccuracies and measurement errors, as well as the actual effects of reduced data usability.

Item Open Access Development of an Euler-Lagrangian framework for point-particle tracking to enable efficient multiscale simulations of complex flows (2023) Kschidock, Helena
In this work, we implement, test, and validate an Euler-Lagrangian point-particle tracking framework for the commercial aerodynamics and aeroacoustics simulation tool ultraFluidX, which is based on the Lattice Boltzmann Method and optimized for GPUs. Our framework successfully simulates one-way and two-way coupled particle-laden flows based on drag forces and gravitation. Trilinear interpolation is used to determine the fluid's macroscopic properties at the particle position. Object and domain boundary conditions are implemented using a planar surface approximation.
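The trilinear interpolation used to evaluate the fluid's macroscopic properties at a particle position can be sketched generically; this is an illustrative NumPy version, not ultraFluidX's GPU implementation:

```python
import numpy as np

def trilinear_interpolate(field, pos):
    """Interpolate a scalar field sampled on a unit-spaced grid at a
    continuous position (x, y, z) using its 8 surrounding grid nodes."""
    i, j, k = (int(np.floor(p)) for p in pos)
    fx, fy, fz = pos[0] - i, pos[1] - j, pos[2] - k
    cube = field[i:i + 2, j:j + 2, k:k + 2]   # the 8 corner values
    cx = cube[0] * (1 - fx) + cube[1] * fx    # reduce along x
    cxy = cx[0] * (1 - fy) + cx[1] * fy       # then along y
    return cxy[0] * (1 - fz) + cxy[1] * fz    # then along z

# sanity check: a linear field f(x, y, z) = x + 2y + 3z is reproduced exactly
grid = np.fromfunction(lambda i, j, k: i + 2 * j + 3 * k, (4, 4, 4))
print(trilinear_interpolate(grid, (1.5, 0.25, 2.0)))  # 8.0
```

On a linear field the scheme is exact, which makes a convenient validation case.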
The whole particle framework runs within three dedicated GPU kernels, and data is only copied back to the CPU upon output. We show validation for the velocity interpolation, gravitational acceleration, back-coupling forces, and boundary conditions, and test runtimes and memory requirements. We also propose the next steps required to make the particle framework ready for use in engineering applications.

Item Open Access Adaptive robust scheduling in wireless Time-Sensitive Networks (TSN) (2024) Egger, Simon
The correct operation of upper-layer services is unattainable in wireless Time-Sensitive Networks (TSN) if the schedule cannot provide formal reliability guarantees to each stream. Still, the current TSN scheduling literature leaves reliability, let alone provable reliability, either poorly quantified or entirely unaddressed. This work aims to remedy this shortcoming by designing an adaptive mechanism to compute robust schedules. For static wireless channels, robust schedules enforce the streams' reliability requirements by allocating sufficiently large wireless transmission intervals and by isolating omission faults. While robustness against omission faults is conventionally achieved by strictly isolating each transmission, we show that controlled interleaving of wireless streams is crucial for finding eligible schedules. We adapt the Disjunctive Graph Model (DGM) from job-shop scheduling to design TSN-DGM, a metaheuristic scheduler that can schedule up to one hundred wireless streams with fifty cross-traffic streams in under five minutes. In comparison, we demonstrate that strict transmission isolation already prohibits scheduling even a few wireless streams. For dynamic wireless channels, we introduce shuffle graphs as a linear-time adaptation strategy that converts reliability surpluses from improving wireless links into slack, and reliability impairments from degrading wireless links into tardiness.
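The idea of sizing a wireless transmission interval to meet a per-stream reliability target under omission faults can be illustrated with a back-of-the-envelope calculation; the i.i.d. loss model and the numbers below are illustrative assumptions, not the channel model used in the thesis:

```python
import math

def transmissions_needed(target_reliability, loss_prob):
    """Smallest n with 1 - loss_prob**n >= target_reliability,
    assuming independent, identically distributed omission faults."""
    return math.ceil(math.log(1 - target_reliability) / math.log(loss_prob))

# hypothetical numbers: a stream requiring 99.9% reliability on a 50%-loss link
n = transmissions_needed(0.999, 0.5)
print(n)  # 10 transmission attempts must fit into the stream's interval
```

Larger intervals per stream shrink the space of eligible schedules, which is why controlled interleaving of streams becomes attractive.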
While TSN-DGM is able to improve the adapted schedule considerably within ten seconds of reactive rescheduling, we argue that the reliability contracts between upper-layer services and the infrastructure provider should specify a worst-case channel degradation beyond which no punctuality guarantees can be made.

Item Open Access Improving usability of gaze and voice based text entry systems (2023) Sengupta, Korok; Staab, Steffen (Prof. Dr.)

Item Open Access Data-efficient and safe learning with Gaussian processes (2020) Schreiter, Jens; Toussaint, Marc (Prof. Dr. rer. nat.)
Data-based modeling techniques enjoy increasing popularity in many areas of science and technology where traditional approaches are limited in accuracy and efficiency. When employing machine learning methods to generate models of dynamic systems, two important issues must be considered. Firstly, the data-sampling process should induce an informative and representative set of points to enable high generalization accuracy of the learned models. Secondly, the algorithmic part for efficient model building is essential for applicability, usability, and the quality of the learned predictive model. This thesis deals with both of these aspects for supervised learning problems, where the interaction between them is exploited to realize exact and powerful modeling. After introducing the non-parametric Bayesian modeling approach with Gaussian processes and the basics of transient modeling tasks in the next chapter, we dedicate ourselves in the subsequent chapter to extensions of this probabilistic technique that address relevant practical requirements. This chapter provides an overview of existing sparse Gaussian process approximations and proposes novel work to increase efficiency and improve model selection on particularly large training data sets. For example, our sparse modeling approach enables real-time capable prediction performance and efficient learning with low memory requirements.
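As background, the exact Gaussian process posterior that sparse approximations are designed to approximate can be written in a few lines of NumPy; this dense sketch (RBF kernel, fixed hyperparameters) scales cubically in the number of training points, which is precisely what becomes infeasible on large data sets:

```python
import numpy as np

def gp_posterior(X, y, Xs, length_scale=1.0, noise=1e-2):
    """Exact GP regression posterior (mean, variance) with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)
    K = k(X, X) + noise * np.eye(len(X))  # O(n^3) solve: the sparse methods' target
    Ks = k(Xs, X)
    alpha = np.linalg.solve(K, y)         # weights for the posterior mean
    mean = Ks @ alpha
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

X = np.linspace(0, 2 * np.pi, 8)[:, None]
y = np.sin(X[:, 0])
mean, var = gp_posterior(X, y, X)         # predict back at the training inputs
print(float(np.max(np.abs(mean - y))))    # small residual, governed by the noise term
```

Sparse approximations replace the full kernel matrix with a low-rank surrogate built from inducing points, trading a little accuracy for much lower cost and memory.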
A comprehensive comparison on various real-world problems confirms the proposed contributions and shows the variety of modeling tasks in which approximate Gaussian processes can be successfully applied. Further experiments provide more insight into the whole learning process, and thus a profound understanding of the presented work. In the fourth chapter, we focus on active learning schemes for safe and information-optimal generation of meaningful data sets. In addition to the exploration behavior of the active learner, our work considers the safety issue, since interacting with real systems should not damage or even destroy them. Here we propose a new model-based active learning framework to solve both tasks simultaneously. As the basis for the data-sampling process we employ the presented Gaussian process techniques. Furthermore, we distinguish between static and transient experimental design strategies. Both problems are considered separately in this chapter, although the requirements for each active learning problem are the same. This subdivision into a static and a transient setting allows a more problem-specific perspective on the two cases, and thus enables the creation of specially adapted active learning algorithms. Our novel approaches are then investigated for different applications, where a favorable trade-off between safety and exploration is always realized. Theoretical results support these evaluations and provide sound knowledge of the derived model-based active learning schemes. For example, an upper bound on the probability of failure of the presented active learning methods is derived under reasonable assumptions.
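A common recipe for trading off safety and exploration, shown here as a generic sketch rather than the framework proposed in the thesis, is to query the most uncertain candidate among those that the model deems safe with high probability:

```python
import numpy as np

def safe_query(candidates, mean, std, safety_threshold, kappa=2.0):
    """Pick the most informative candidate (largest predictive std) among
    those safe with high probability: mean - kappa*std >= threshold.
    mean/std would come from a GP model of the safety measure."""
    safe = mean - kappa * std >= safety_threshold
    if not safe.any():
        return None                     # nothing is provably safe: stop exploring
    idx = np.where(safe)[0]
    return candidates[idx[np.argmax(std[idx])]]

cands = np.array([0.0, 1.0, 2.0, 3.0])
mean = np.array([1.0, 0.9, 0.5, -0.2])  # predicted safety measure per candidate
std = np.array([0.05, 0.2, 0.4, 0.8])   # predictive uncertainty per candidate
print(safe_query(cands, mean, std, safety_threshold=0.0))  # 1.0
```

The confidence multiplier `kappa` controls how conservative the learner is; both it and the threshold are illustrative assumptions here.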
Finally, the thesis concludes with a summary of the investigated machine learning problems and motivates some future research directions.

Item Open Access An analytics framework for the IoT platform MBP (2020) Kumar, Abishek
The emergence of IoT has introduced a huge number of applications that generate massive amounts of data at a high rate. This data stream needs intelligent data processing and analysis. The evolution of smart cities and smart industries has resulted in an ocean of data from millions of sensors and devices; surveillance systems, telecommunication systems, smart devices, and smart cars are some examples of such systems. However, this data by itself doesn't provide any information unless it is analysed. This results in a need for analytics tools and frameworks that can efficiently analyse the data and provide useful information. Analytics is about the inspection, transformation, and modelling of data to obtain information that suggests and assists in decision making. In a world of IoT, analytics has a crucial role to play in improving life and managing infrastructure in a secure, sustainable, and cost-effective manner. The smart sensor network serves as the base for IoT. In this context, one of the major tasks is to develop advanced analytics frameworks for the interpretation of the data provided by the sensors. MBP is a platform for managing IoT environments. Sensors and devices can be registered with the platform, and the status of sensors can be viewed and modified from it. This platform will be used to collect data from the connected sensors and devices. There are two types of mining that can be performed on raw data: one technique analyses the data on the fly as it is received (Data Stream Mining), and the other is performed on demand on data collected over a longer period of time (Batch Processing). Both types of analysis have their own advantages.
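The contrast between the two mining styles can be made concrete with a toy aggregate; stream mining keeps constant-size state per incoming value, while batch processing operates on the full collected data set:

```python
class StreamingMean:
    """Online (stream) analysis: update an aggregate per arriving value,
    keeping O(1) state instead of storing the raw data."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n   # incremental mean update
        return self.mean

readings = [21.0, 22.5, 19.5, 21.0]             # hypothetical sensor values

# batch processing: compute over all collected data at once
batch_mean = sum(readings) / len(readings)

# stream mining: one pass, value by value, no stored history
s = StreamingMean()
for r in readings:
    s.update(r)

print(batch_mean, s.mean)  # both 21.0
```

Real stream processors apply the same idea to windowed counts, quantile sketches, and model updates, while the batch layer recomputes exact results over the archive.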
Lambda architecture is a data analytics architecture that allows us to perform both stream analysis and batch processing on the same data. It defines practical and well-established principles for handling big data. The pattern allows us to deal with both real-time and historical data, but the two analyses are performed separately and do not affect each other. In this thesis, we create an analytics framework for the MBP IoT platform based on the lambda architecture.

Item Open Access Neural Networks on Microsoft HoloLens 2 (2021) Lazar, Léon
The goal of this Bachelor thesis is to make different approaches for integrating Neural Networks into HoloLens 2 applications comparable in a quantitative and qualitative manner by defining highly diagnostic criteria. Moreover, multiple approaches to accomplish the integration are proposed, implemented, and evaluated using these criteria. Finally, the work gives an expressive overview of all working approaches. The basic requirements are that Neural Networks trained with TensorFlow/Keras can be executed directly on the HoloLens 2 without requiring an internet connection, and that they can be integrated into Mixed/Augmented Reality applications. In total, four approaches are considered: TensorFlow.js, Unity Barracuda, TensorFlow.NET, and Windows Machine Learning, the last of which is an already existing approach. For each working approach, a benchmarking application is developed which runs a common reference model on a test dataset to measure inference time and accuracy. Moreover, a small proof-of-concept application is developed to show that the approach also works in real Augmented Reality applications. The application uses a MobileNetV2 model to classify image frames coming from the webcam and displays the results to the user.
All feasible approaches are evaluated using the aforementioned criteria, which include ease of implementation, performance, accuracy, compatibility with Machine Learning frameworks and pre-trained models, and integrability with 3D frameworks. The Barracuda, TensorFlow.js, and WinML approaches turned out to be feasible. Barracuda, which can only be integrated into Unity applications, is the most performant framework since it can make use of GPU inference. It is followed by TensorFlow.js, which can be integrated into JavaScript Augmented Reality frameworks such as A-Frame. Windows ML can currently only use CPU inference on the HoloLens 2 and is therefore the slowest; it can be integrated into Unity projects with some difficulty, as well as into plain Win32 and UWP apps. Barracuda and Windows Machine Learning are also integrated into a biomechanical visualization application based on Unity for performing simulations. The results of this thesis make the different approaches for integrating Neural Networks on the HoloLens 2 comparable, so that an informed decision about the best approach for a specific application can now be made. Furthermore, the work shows that using Barracuda or TensorFlow.js on the HoloLens 2 is feasible and superior to the existing WinML approach.

Item Open Access Webanwendung für Multiphysik-Simulationen mit opendihu (2020) Tompert, Matthias
Opendihu is a software framework for solving multi-physics problems with the finite element method. Its applications are mainly in the area of skeletal muscle simulations. A simulation in opendihu is created via a C++ file, in which nested solver structures are specified, and a Python file, in which the parameters of the solvers used are configured.
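A nested solver configuration of this kind might look as follows; the keys and solver names here are hypothetical stand-ins for illustration, not opendihu's actual settings schema:

```python
# Illustrative only: a nested solver configuration in the spirit of
# opendihu's Python settings files. All keys below are hypothetical.
config = {
    "Coupling": {
        "timeStepWidth": 1e-3,
        "endTime": 1.0,
        "Term1": {
            "ImplicitEuler": {
                "timeStepWidth": 1e-4,
                "FiniteElementMethod": {"nElements": 100, "physicalExtent": 1.0},
            }
        },
        "Term2": {
            "CrankNicolson": {"timeStepWidth": 5e-4},
        },
    },
}

def nesting_depth(node):
    """Depth of the nested solver structure (dicts within dicts); a GUI
    would walk this tree to visualize the solver hierarchy."""
    if not isinstance(node, dict):
        return 0
    return 1 + max((nesting_depth(v) for v in node.values()), default=0)

print(nesting_depth(config))  # 5
```

It is exactly this kind of nested structure that a graphical editor can visualize and assemble from building blocks.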
Editing existing simulations and creating new ones via this interface requires good knowledge of the source code and the structure of opendihu. It would therefore be useful to extend opendihu with a more user-friendly interface that is also suitable for beginners. In this thesis, I therefore implemented a graphical user interface for opendihu that visualizes the solver structure and the parameters of the individual solvers of a simulation. The application also makes it possible to modify existing simulations and to create new ones using a modular building-block system. This Bachelor thesis explains the design of the application and investigates, by means of a user study, whether the resulting user interface offers added value over the existing one. On average, the study participants found editing and creating simulations with the application easier than with the existing interface, so the application does provide added value for editing and creating opendihu simulations. The building-block system was rated as particularly helpful for creating new simulations.

Item Open Access Optimization of diffusive load-balancing for short-range molecular dynamics (2020) Hauser, Simon
In recent years, multi-core processors have become more and more important to manufacturers, which means that developers now have to think more carefully about how to distribute a single application sensibly over several processes. This is where load balancing comes in, allowing us to move load from an overloaded process to an underloaded one. One form of load balancing is diffusive load balancing, which moves load within a local neighborhood, so no global communication is needed.
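First-order diffusive balancing on a ring of processes can be sketched as follows; this serial toy version (illustrative, not librepa's implementation) mimics the purely local neighbor exchange:

```python
def diffuse(load, alpha=0.5, steps=50):
    """First-order diffusive load balancing on a ring of processes.
    Each step, process i sends alpha/2 of its load difference to its
    right neighbor (and symmetrically receives from its left neighbor):
    only neighbor-to-neighbor exchange, no global view of the system."""
    n = len(load)
    for _ in range(steps):
        flows = [alpha / 2 * (load[i] - load[(i + 1) % n]) for i in range(n)]
        for i in range(n):
            load[i] -= flows[i]            # send to the right neighbor
            load[(i + 1) % n] += flows[i]  # right neighbor receives
    return load

balanced = diffuse([16.0, 0.0, 0.0, 0.0])
print(balanced)  # approaches the perfectly balanced [4.0, 4.0, 4.0, 4.0]
```

Because every exchange is local, a process can proceed with its next computation as soon as its own neighborhood has finished communicating.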
The advantage of this is that processes that have completed the local communication, and thus the load-balancing step, can continue with the next calculations. This form of load balancing is found in librepa, a library that deals with the balancing of linked-cell grids and can be used with the simulation software ESPResSo. In the course of this thesis, the library has been extended with First and Second Order Diffusion. Furthermore, a feature was added that keeps the initial structure of the grid constant, so that the neighborhood of each process does not change; this feature is required for Second Order Diffusion. A comparison between the methods shows that both First and Second Order Diffusion distribute the load better than librepa's default variant, which prior to this work was its only diffusive one. Furthermore, we show that there is no significant overhead in using the Preserving Structure Diffusion. With the use of flow iteration, the imbalance values of First and Second Order Diffusion can be improved even further.

Item Open Access Models for data-efficient reinforcement learning on real-world applications (2021) Dörr, Andreas; Toussaint, Marc (Prof. Dr.)
Large-scale deep Reinforcement Learning is contributing strongly to many recently published success stories of Artificial Intelligence. These techniques have enabled computer systems to autonomously learn and master challenging problems, such as playing the game of Go or complex strategy games such as StarCraft at human level or above. Naturally, the question arises which problems could be addressed with these Reinforcement Learning technologies in industrial applications. So far, machine learning technologies based on (semi-)supervised learning create the most visible impact in industrial applications. For example, image, video, or text understanding are primarily dominated by models trained and derived autonomously from large-scale data sets with modern (deep) machine learning methods.
Reinforcement Learning, in contrast, deals with temporal decision-making problems and is much less commonly found in the industrial context. In these problems, current decisions and actions inevitably influence the outcome and success of a process much further down the road. This work strives to address some of the core problems that prevent the effective use of Reinforcement Learning in industrial settings. Autonomous learning of new skills is always guided by existing priors that allow for generalization from previous experience. In some scenarios, non-existent or uninformative prior knowledge can be mitigated by vast amounts of experience for the particular task at hand. Typical industrial processes, however, are operated in very restricted, tightly calibrated operating points, and naively exploring the space of possible actions or changes to the process in search of improved performance tends to be costly or even prohibitively dangerous. Therefore, one recurring subject throughout this work is the emergence of priors and model structures that allow for efficient use of all available experience data. A promising direction is Model-Based Reinforcement Learning, which is explored in the first part of this work. This part derives an automatic tuning method for one of the most common industrial control architectures, the PID controller. By leveraging all available data about the system's behavior in learning a system dynamics model, the derived method can efficiently tune these controllers from scratch. Although we can easily incorporate all data into dynamics models, real systems expose additional problems for the dynamics modeling and learning task. Characteristics such as non-Gaussian noise, latent states, feedback control, or non-i.i.d. data regularly prevent the use of off-the-shelf modeling tools.
Therefore, the second part of this work is concerned with deriving modeling solutions that are particularly suited to the reinforcement learning problem. Despite the predominant focus on model-based reinforcement learning as a promising, data-efficient learning tool, the final part of this work revisits model assumptions in a separate branch of reinforcement learning algorithms. Again, generalization, and therefore efficient learning, in model-based methods is primarily driven by the incorporated model assumptions (e.g., smooth dynamics), which real, discontinuous processes might heavily violate. To this end, a model-free reinforcement learning method is presented that carefully reintroduces prior model structure to facilitate efficient learning without the need for strong dynamics model priors. The methods and solutions proposed in this work are grounded in the challenges experienced when operating real-world hardware systems. With applications on a humanoid upper-body robot and an autonomous model race car, the proposed methods are demonstrated to successfully model and master their complex behavior.
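The PID controller targeted by the auto-tuning method in the first part follows the classical control law; as a generic reminder, here is a minimal sketch on a toy first-order plant with hand-picked gains (illustrative assumptions, not the tuned result from the thesis):

```python
def pid_step(state, setpoint, y, kp, ki, kd, dt):
    """One PID update; state carries the error integral and previous error."""
    integral, prev_err = state
    err = setpoint - y
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv   # the classical PID control law
    return u, (integral, err)

# toy first-order plant y' = -y + u, simulated with explicit Euler
y, dt = 0.0, 0.01
state = (0.0, 0.0)
for _ in range(2000):
    u, state = pid_step(state, setpoint=1.0, y=y, kp=2.0, ki=1.0, kd=0.0, dt=dt)
    y += dt * (-y + u)
print(round(y, 3))  # settles near the setpoint 1.0
```

Auto-tuning replaces the hand-picked gains with values optimized against a learned dynamics model of the real system.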