05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Search Results
281 results
Item Open Access: Modeling recommendations for pattern-based mashup plans (2018), Das, Somesh
Data mashups are modeled as pipelines: chains of data processing steps that integrate data from different data sources into a single result. These processing steps include data operations such as join, filter, extraction, integration, or alteration. To create and execute data mashups, modelers need technical knowledge in order to understand these data operations. To address this issue, an extended data mashup approach, FlexMash, was developed at the University of Stuttgart; it allows users to define data mashups without technical knowledge of any execution details. Consequently, modelers with no or limited technical knowledge can design their own domain-specific mashups based on their use case scenarios. However, designing data mashups graphically is still difficult for non-IT users: when users design a model graphically, it is hard to know which patterns or nodes should be modeled and connected in the data flow graph. To cope with this issue, this master's thesis aims to provide users with modeling recommendations during modeling. At each modeling step, users can query for recommendations, which are generated by analyzing existing models using association rule mining algorithms. If users accept a recommendation, the recommended node is automatically added to the partial model and connected with the node for which the recommendations were given.

Item Open Access: Anonymisierung von Daten : von der Literatur zum Automobilbereich (2023), Herkommer, Jan
Data anonymization in the automotive domain is becoming increasingly important, yet there is hardly any literature and there are few approaches dealing with the anonymization of automotive data. This thesis therefore uses a structured literature review to identify the currently most widespread methods and application areas and summarizes the key findings of the review. For each analyzed paper, the application area, the methodology, and the type of data to be anonymized are determined. Furthermore, the metrics used to compare different approaches are examined. Building on these findings, the anonymization of vehicle data is then discussed for several use cases, and challenges and possible solutions are outlined. Finally, an approach for anonymizing routes is implemented as an example, anonymizing vehicle routes recorded with a GPS sensor. This highlights additional problems such as handling measurement inaccuracies and measurement errors, as well as the actual impact of reduced data usability.
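As a rough illustration of the final step in the abstract above, the sketch below trims the beginning and end of a recorded GPS trace so that the start and end locations of a trip cannot be recovered. This is a minimal sketch of one common heuristic under assumed names and cutoff values, not the approach implemented in the thesis.

```python
# Illustrative sketch only: the cutoff distance, function names, and the
# haversine helper are assumptions, not taken from the thesis.
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def trim_route(points, cutoff_m=300.0):
    """Drop all points within cutoff_m (measured along the trace) of the start
    and of the end, so that origin and destination are obscured."""
    dist_from_start = [0.0]
    for a, b in zip(points, points[1:]):
        dist_from_start.append(dist_from_start[-1] + haversine_m(a, b))
    total = dist_from_start[-1]
    return [p for p, d in zip(points, dist_from_start)
            if d >= cutoff_m and (total - d) >= cutoff_m]
```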
Item Open Access: Inferring object hypotheses based on feature motion from different sources (2015), Fuchs, Steffen
Perception systems in robotics are typically closely tailored to the given task; in a typical pick-and-place task, for example, the perception system only recognizes the mugs that are supposed to be moved and the table the mugs are placed on. The obvious limitation of such systems is that for a new task, a new vision system must be designed and implemented. This master's thesis proposes a method that identifies entities in the world based on the motion of various features from various sources, without relying on strong prior assumptions, and thereby provides an important piece towards a more general perception system. While entities are rigid bodies in the world, the sources can be anything that allows certain features to be tracked over time in order to create trajectories. For example, these feature trajectories can be obtained from the RGB and RGB-D sensors of a robot, from external cameras, or even from the robot's end effector (proprioception). The core conceptual elements are as follows: the distance variance between trajectory pairs is computed to construct an affinity matrix; this matrix is then used as input for a divisive k-means algorithm in order to cluster trajectories into object hypotheses; in a final step, these hypotheses are combined with previously observed hypotheses by computing the correlations between the current and the updated sets. The approach has been evaluated on both simulated and real-world data. Generating simulated data provides an elegant way to qualitatively analyze various scenarios. The real-world data was obtained by tracking Shi-Tomasi corners using Lucas-Kanade optical flow estimation on RGB image sequences and projecting the features into range image space.

Item Open Access: Development of an Euler-Lagrangian framework for point-particle tracking to enable efficient multiscale simulations of complex flows (2023), Kschidock, Helena
In this work, we implement, test, and validate an Euler-Lagrangian point-particle tracking framework for the commercial aerodynamics and aeroacoustics simulation tool ultraFluidX, which is based on the Lattice Boltzmann Method and optimized for GPUs. Our framework successfully simulates one-way and two-way coupled particle-laden flows based on drag forces and gravitation. Trilinear interpolation is used to determine the fluid's macroscopic properties at the particle position. Object and domain boundary conditions are implemented using a planar surface approximation. The whole particle framework runs within three dedicated GPU kernels, and data is only copied back to the CPU upon output. We show validation for the velocity interpolation, gravitational acceleration, back-coupling forces, and boundary conditions, and test runtimes and memory requirements. We also propose the next steps required to make the particle framework ready for use in engineering applications.
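For orientation only, trilinear interpolation of a fluid quantity at a particle position can be sketched as follows for a uniform grid. The array layout, spacing convention, and function name are assumptions; in ultraFluidX this step runs inside dedicated GPU kernels rather than in Python.

```python
import numpy as np

def trilinear_velocity(u, x, dx):
    """Interpolate a velocity field u (shape [nx, ny, nz, 3], node i at position
    i*dx) at particle position x. Assumes x lies strictly inside the grid; no
    bounds checking is done in this sketch."""
    x = np.asarray(x)
    idx = np.floor(x / dx).astype(int)   # lower corner of the enclosing cell
    f = x / dx - idx                      # fractional position within the cell
    v = np.zeros(3)
    for ci in (0, 1):
        for cj in (0, 1):
            for ck in (0, 1):
                # Weight of each cell corner is the product of 1D hat weights.
                w = ((1 - f[0]) if ci == 0 else f[0]) * \
                    ((1 - f[1]) if cj == 0 else f[1]) * \
                    ((1 - f[2]) if ck == 0 else f[2])
                v += w * u[idx[0] + ci, idx[1] + cj, idx[2] + ck]
    return v
```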
Item Open Access: Vision assisted biasing for robot manipulation planning (2018), Puang, En Yen
Sampling efficiency has been one of the major bottlenecks of sampling-based motion planners. Although more reliable in complex environments, the Rapidly-exploring Random Tree, for example, often requires longer planning times than its optimisation-based counterparts. Recent developments have introduced numerous methods to bias sampling in configuration space. A Gaussian mixture model, in particular, was proposed to estimate feasible regions in configuration space for low-variance tasks. Unfortunately, this method does not adapt its biases to the individual planning scene during inference. Therefore, this work proposes vision-assisted biasing, which adapts the biases by changing the weights of the Gaussian components upon query. It uses an autoencoder to extract features directly from the depth image, and the resulting latent code is then used either for nearest-neighbour search or for direct weight prediction. With a modified pipeline, these extensions improve not only the sampling efficiency but also the path optimality of a simple motion planner.

Item Open Access: Robust Quasi-Newton methods for partitioned fluid-structure simulations (2015), Scheufele, Klaudius
In recent years, quasi-Newton schemes have proven to be a robust and efficient way of coupling partitioned multi-physics simulations, in particular for fluid-structure interaction. The focus of this work is on the coupling of partitioned fluid-structure interaction, where minimal interface requirements are assumed for the respective field solvers, which are thus treated as black-box solvers; the coupling is done through communication of boundary values between the solvers. In this thesis, a new quasi-Newton variant (IQN-IMVJ) based on a multi-vector update is investigated in combination with serial and parallel coupling systems. Due to the implicit incorporation of past information in the Jacobian update, it renders the problem-dependent parameter for the number of retained previous time steps unnecessary. In addition, a whole range of coupling schemes is categorized and compared comprehensively with respect to robustness, convergence behaviour, and complexity. These coupling algorithms differ in the structure of the coupling, i.e., serial or parallel execution of the field solvers, and in the quasi-Newton methods used. A more in-depth analysis of a selection of coupling schemes is conducted for a set of strongly coupled FSI benchmark problems, using the in-house coupling library preCICE. The superior convergence behaviour and robustness of the IQN-IMVJ method compared to well-known state-of-the-art methods such as IQN-ILS is demonstrated. It is confirmed that the multi-vector method works optimally without the need to tune problem-dependent parameters in advance. Furthermore, it appears to be especially suitable in conjunction with the parallel coupling system, in that it yields fairly similar results for parallel and serial coupling. Although we focus on FSI simulation, the considered coupling schemes are expected to be equally applicable to a wide range of other volume- or surface-coupled problems.
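To make the flavour of such interface quasi-Newton updates concrete, the following sketch shows a plain IQN-ILS-style least-squares update assembled from the residual and fixed-point-output differences of the current time step. It is a minimal sketch only: the IQN-IMVJ variant investigated in the thesis additionally carries Jacobian information across time steps via a multi-vector update, and the function name, data layout, and initial relaxation factor here are assumptions.

```python
import numpy as np

def iqn_ils_update(x_tilde_hist, r_hist, omega=0.5):
    """One IQN-ILS-style interface update from the coupling iterations of the
    current time step. x_tilde_hist[i] holds the fixed-point output x~_i, and
    r_hist[i] = x~_i - x_i the interface residual; newest entries come last."""
    x_tilde, r = x_tilde_hist[-1], r_hist[-1]
    if len(r_hist) < 2:
        # First iteration: no differences available yet, fall back to simple
        # relaxation x_new = x + omega * r, rewritten in terms of x~ and r.
        return x_tilde - (1.0 - omega) * r
    # Columns are differences of residuals / fixed-point outputs w.r.t. the newest iterate.
    V = np.column_stack([ri - r for ri in r_hist[:-1]])
    W = np.column_stack([xi - x_tilde for xi in x_tilde_hist[:-1]])
    # Least-squares coefficients alpha minimizing || V @ alpha + r ||_2 ...
    alpha, *_ = np.linalg.lstsq(V, -r, rcond=None)
    # ... give the quasi-Newton step expressed through the fixed-point outputs.
    return x_tilde + W @ alpha
```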
Item Open Access: Comprehensive Support of the Lifecycle of Machine Learning Models in Model Management Systems (2019), Popp, Matthias
Today, machine learning (ML) is entering many economic and scientific fields. The lifecycle of ML models includes pre-processing raw data into features, training a model with the features, and serving the model to answer predictive queries. The challenge is to ensure accurate predictions by continuously updating the model through automatic or manual retraining. To stay aware of all changes, e.g. to datasets and parameters, metadata must be stored over the entire ML lifecycle. In this thesis, we present a concept and system for comprehensive support of the ML lifecycle. The concept includes a metadata schema as well as a solution for collecting and enriching the metadata. The metadata schema contains information about the experiment, runs, executions, executables, and common ML artifacts such as datasets, models, and metrics. The stored information can be used for comparisons, re-iterations, and backtracking of ML experiments. We achieve this by tracking the lineage of ML pipeline steps and collecting metadata such as hyperparameters. Furthermore, a prototype is implemented to demonstrate and evaluate the concept. A case study, based on a selected scenario, serves as the basis for a qualitative assessment; it shows that the concept meets all requirements and is therefore a suitable approach to comprehensively support the ML model lifecycle.

Item Open Access: Orthogonale Dünngitter-Teilraumzerlegungen (2018), Schreiber, Constantin
High-dimensional partial differential equations frequently arise in simulation, and solving them on full grids very quickly becomes too expensive. This thesis presents a method for solving partial differential equations with sparse grids, which scale better for multi-dimensional problems, together with its implementation in the SG++ software package. Representing functions in a generating system enables the use of an L2-orthogonal subspace decomposition; projection operators replace the explicit transformation into a prewavelet basis. This decomposition permits lumping of the stiffness matrix, i.e. omitting large blocks of the matrix. On this basis, a matrix multiplication algorithm similar to that of Schwab and Todor is implemented, used within a conjugate gradient method, and also applied to domains with curved boundaries. Furthermore, the subspace decomposition via L2 projection is compared with other decompositions in terms of runtime and error behaviour.
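As a point of reference for the conjugate gradient solver mentioned above, a matrix-free CG loop can be written in a few lines; the SG++ implementation itself is in C++, so the sparse-grid operator appears here only as an abstract callback, and names and tolerances are assumptions.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A, given only a routine
    apply_A(x) returning A @ x (e.g. a lumped sparse-grid stiffness operator)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)            # initial residual
    p = r.copy()                  # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```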
Item Open Access: Speech interface for human and robot collaboration (2018), Kashif, Moin Uddin
In the past, robots and machines were mostly designed to perform specific tasks without much need for human interaction. Nowadays, with advancements in technology, intelligent robots can be designed that perform multiple tasks, interact with the surrounding environment, and assist humans with valuable suggestions, so an efficient and natural mode of communication is required for this human-robot interaction. In this thesis, we propose an architecture for a speech interface for human-robot interaction. The speech interface is used to give voice commands to the PR2 robot in order to perform five tasks designed to test the performance of the interface. The tasks are sorting, shaping, stacking, building, and balancing six objects on a table top, and they are ordered by level of difficulty: the first two tasks are comparatively easy, as the user does not have to follow any particular order to finish them; the next two tasks require a specific order; and in the last task, the stack of objects must additionally be balanced. The speech interface receives voice commands from the user, converts them into text, maps them to the corresponding command, and sends it to the task manager. The task manager then processes the received command, makes an appropriate decision based on the current task status and the available actions, and sends the command to the PR2 to perform the operation. Additionally, we designed a feedback mechanism in which the PR2 sends feedback to the task manager, which forwards it to the speech manager so that it can be converted into an audio signal and played for the user. The system uses a TCP connection to exchange data and information between the speech manager and the task manager. The speech interface is also compared with other modalities, such as text input and a graphical user interface, on the same tasks, and we conducted a user study to evaluate the system's performance. The results show that participants prefer the speech interface, as it feels more natural.

Item Open Access: Metadata management in the data lake architecture (2019), Eichler, Rebecca Kay
The big data era has introduced a set of new challenges, one of which is the efficient storage of data at scale. As a result, the data lake concept was developed: a highly scalable storage repository, explicitly designed to handle (raw) data at scale and to support the characteristics of big data. In order to fully exploit the strengths of the data lake concept, pro-active data governance and metadata management are required; without them, a data lake can turn into a data swamp. A data swamp means that the data has become useless or has lost value for a variety of reasons, so it is important to avoid this condition. In the scope of this thesis, a concept for metadata management in data lakes is developed. The concept is explicitly designed to support all aspects of a data lake architecture. Furthermore, it makes it possible to fully exploit the strengths of the data lake concept, and it supports both classic data lake use cases and organization-specific use cases. The concept is tested by applying it to data inventory, data lineage, and data access use cases. Furthermore, a prototype is implemented that demonstrates the concept through exemplary metadata and use-case-specific functionality. Finally, the suitability and realization of the use cases, the concept, and the prototype are discussed. The discussion shows that the concept meets all requirements and is therefore a suitable approach for the initially motivated metadata management and data governance.
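As a toy illustration of the data inventory and data lineage use cases named in the last abstract, the sketch below registers datasets in an in-memory catalog and traces their upstream sources. The schema fields, zone names, and helper functions are illustrative assumptions and do not reproduce the metamodel developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    """One entry in a hypothetical data lake inventory."""
    name: str
    zone: str                                           # e.g. "raw" or "curated" (assumed zone names)
    owner: str
    sources: list[str] = field(default_factory=list)    # upstream dataset names (lineage edges)

catalog: dict[str, DatasetEntry] = {}

def register(entry: DatasetEntry) -> None:
    """Add a dataset and its lineage information to the inventory."""
    catalog[entry.name] = entry

def upstream(name: str) -> set[str]:
    """Recursively collect all datasets a given dataset was derived from."""
    result: set[str] = set()
    for src in catalog.get(name, DatasetEntry(name, "", "")).sources:
        result.add(src)
        result |= upstream(src)
    return result

# Usage: register a raw and a derived dataset, then trace the derived one's lineage.
register(DatasetEntry("sensor_raw", "raw", "ingest-team"))
register(DatasetEntry("sensor_clean", "curated", "analytics", sources=["sensor_raw"]))
print(upstream("sensor_clean"))    # {'sensor_raw'}
```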