Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1


Search Results

Now showing 1 - 10 of 204
  • Eine OSLC-Plattform zur Unterstützung der Situationserkennung in Workflows (Open Access)
    (2015) Jansa, Paul
    The Internet of Things is gaining ever more importance through the dense interconnection of computers, production facilities, mobile devices, and other technical equipment. Such networked environments are also referred to as smart environments. Based on sensor data, higher-level situations (state changes) can be detected in these environments and, in most cases, reacted to automatically. This enables new technologies such as "Industrie 4.0", smart homes, or smart cities. Complex interconnections and work procedures in such environments are often realized with workflows. To guarantee robust execution of these workflows, situation changes must be observed and reacted to appropriately, for example through workflow adaptation. In other words, only the detection of higher-level situations makes it possible to model and execute such workflows robustly. However, connecting and providing the sensor data required for situation detection poses a major challenge. The sensor data is often raw data: hard to extract, frequently only available locally, imprecise, and therefore difficult to process. To extract the sensor data, individual adapters must be programmed for every sensor; these adapters in turn have to provide the sensor data in a uniform format and must then be interconnected with considerable effort. In this diploma thesis, a concept is developed that enables the simple integration of sensor data. The sensors are registered in a common database via a web-based user interface or a programmatic interface. The sensor data is abstracted by REST resources, converted into RDF-based representations, and interlinked following the Linked Data principle.
    Through this standardized interface, end users or applications can access the sensor data over the Internet, and register or remove sensors.
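The register-then-expose-as-Linked-Data pipeline described in the abstract can be sketched in a few lines. Everything below is illustrative: the base URI, vocabulary terms, and function names are invented for this example and are not the OSLC/REST interface of the thesis platform.

```python
# Minimal sketch: register sensors in a shared store and render their
# readings as Linked-Data-style RDF triples (Turtle). All URIs and
# names are hypothetical placeholders.

BASE = "http://example.org/sensors/"  # hypothetical base URI

registry = {}  # shared sensor "database": id -> metadata

def register_sensor(sensor_id, kind, location):
    """Register a sensor; return its REST-style resource URI."""
    registry[sensor_id] = {"kind": kind, "location": location}
    return BASE + sensor_id

def to_turtle(sensor_id, value):
    """Render one reading as RDF triples linked to the sensor resource."""
    uri = BASE + sensor_id
    meta = registry[sensor_id]
    return "\n".join([
        f"<{uri}> a <{BASE}types/{meta['kind']}> ;",
        f"    <{BASE}vocab/location> \"{meta['location']}\" ;",
        f"    <{BASE}vocab/value> \"{value}\" .",
    ])

register_sensor("t1", "Temperature", "Room 101")
print(to_turtle("t1", 21.5))
```

In a real deployment the registry would sit behind the web UI or programmatic interface mentioned above, and the Turtle output would be served per REST resource.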
  • Visualization challenges in distributed heterogeneous computing environments (Open Access)
    (2015) Panagiotidis, Alexandros; Ertl, Thomas (Prof. Dr.)
    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life, such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more details and higher resolutions in visualizations. For some years now, the prevailing trend for these large systems has been the utilization of additional processors, such as graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, such as higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches often entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users are becoming increasingly interested in resilience in addition to traditional aspects such as performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods for capturing and evaluating the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis.
    Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and for visualization in heterogeneous computing environments. The first approach hides details of the different processing units and allows them to be used in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing, simplifying order-independent transparency for distributed visualization. Traditional methods for fault tolerance in high-performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and the tuning of volume visualization are evaluated. Challenges in dense graphs, such as visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach to performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis takes a broader look at the issues of distributed visualization on large displays and heterogeneous computing environments for the first time. While the presented approaches each solve individual challenges and are successfully employed in this context, their joint utility forms a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
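The per-pixel-linked-list idea mentioned in the abstract can be illustrated on a single pixel: fragments arrive in arbitrary order (as GPU threads would append them), are sorted by depth, and are then alpha-composited. This is only a conceptual sketch with grayscale colors; the thesis targets GPU buffers shared across distributed nodes, not Python.

```python
# Order-independent transparency via a per-pixel fragment list:
# append in any order, sort by depth, composite front to back.

def composite(fragments):
    """Front-to-back 'over' compositing of (depth, gray, alpha) fragments."""
    color, alpha = 0.0, 0.0  # accumulated gray value and opacity
    for depth, c, a in sorted(fragments):  # nearest fragment first
        color += (1.0 - alpha) * a * c     # remaining transparency * contribution
        alpha += (1.0 - alpha) * a
    return color, alpha

# One pixel's list, appended out of depth order:
pixel = [(0.8, 1.0, 0.5), (0.2, 0.0, 0.5)]
print(composite(pixel))  # the nearer dark fragment dims the farther bright one
```

Because the per-fragment sort happens only at composite time, the append step needs no ordering guarantees, which is what makes the structure attractive for parallel and distributed rendering.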
  • Inferring object hypotheses based on feature motion from different sources (Open Access)
    (2015) Fuchs, Steffen
    Perception systems in robotics are typically tailored closely to the given task; e.g., in a typical pick-and-place task the perception system only recognizes the mugs that are supposed to be moved and the table the mugs are placed on. The obvious limitation of such systems is that for a new task a new vision system must be designed and implemented. This master's thesis proposes a method for identifying entities in the world based on the motion of various features from various sources, without relying on strong prior assumptions, and thereby provides an important piece towards a more general perception system. While the entities are rigid bodies in the world, the sources can be anything that allows certain features to be tracked over time in order to create trajectories. For example, these feature trajectories can be obtained from the RGB and RGB-D sensors of a robot, from external cameras, or even from the end effector of the robot (proprioception). The core conceptual elements are as follows: the variance of the distance between pairs of trajectories is computed to construct an affinity matrix. This matrix is then used as input for a divisive k-means algorithm in order to cluster trajectories into object hypotheses. In a final step, these hypotheses are combined with previously observed hypotheses by computing the correlations between the current and the updated sets. The approach has been evaluated on both simulated and real-world data. Generating simulated data provides an elegant way to perform a qualitative analysis of various scenarios. The real-world data was obtained by tracking Shi-Tomasi corners using Lucas-Kanade optical flow estimation on RGB image sequences and projecting the features into range image space.
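The rigidity cue behind the affinity matrix can be demonstrated directly: two features on the same rigid body keep a (near-)constant mutual distance, so the variance of their pairwise distance over time is close to zero. The sketch below, a simplification under that assumption, replaces the thesis's divisive k-means with a plain variance threshold plus union-find grouping.

```python
# Group feature trajectories into object hypotheses: low variance of
# the pairwise distance over time indicates a common rigid body.
# (Variance-threshold grouping stands in for the divisive k-means
# clustering used in the thesis.)
import math
from statistics import pvariance
from itertools import combinations

def dist_variance(traj_a, traj_b):
    """Variance over time of the distance between two 2D trajectories."""
    dists = [math.dist(p, q) for p, q in zip(traj_a, traj_b)]
    return pvariance(dists)

def cluster(trajs, tol=1e-6):
    """Group trajectory indices whose pairwise distance variance is ~0."""
    parent = list(range(len(trajs)))
    def find(i):                       # union-find root lookup
        while parent[i] != i:
            i = parent[i]
        return i
    for i, j in combinations(range(len(trajs)), 2):
        if dist_variance(trajs[i], trajs[j]) < tol:
            parent[find(j)] = find(i)  # merge rigidly connected features
    groups = {}
    for i in range(len(trajs)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two features translating together (one object), one moving away:
a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 1)]   # rigid with a
c = [(5, 0), (7, 0), (9, 0)]   # different motion
print(cluster([a, b, c]))      # -> [[0, 1], [2]]
```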
  • Aufwandsschätzung bei Geschäftsprozessmodellerstellung (Open Access)
    (2015) Milutinovic, Aleksandar
    Business process management projects in general, and modeling projects in particular, still lack an applied, scientifically grounded and validated method for effort estimation. In particular, there are no effort estimation approaches for the model creation phase within the business process management lifecycle, whereas for the implementation of business process models the literature shows first approaches to effort estimation. This thesis gives an overview of estimation methods and of the current state of the literature in adjacent and more distant research fields, and on this basis develops a method called BPM COCOMO for estimating the effort of creating business process models. COCOMO is adapted for this purpose, and further investigation and validation of the model is proposed. Keywords: business process modeling, effort estimation, gpm, bpm, bpmn, function point, cocomo, modeling, effort, cost
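As background for the adaptation proposed above, the general COCOMO effort formula is effort = a · size^b · EAF. The sketch below uses the classic COCOMO 81 "organic" coefficients for software projects sized in KLOC; the BPM-specific size measure and cost drivers derived in the thesis are not reproduced here.

```python
# Generic COCOMO-style effort estimate: effort = a * size^b * EAF.
# a=2.4, b=1.05 are the COCOMO 81 "organic" mode constants; EAF is
# the product of the cost-driver ratings (1.0 = nominal).

def cocomo_effort(size, a=2.4, b=1.05, eaf=1.0):
    """Effort in person-months for a project of `size` (KLOC in COCOMO 81)."""
    return a * (size ** b) * eaf

print(round(cocomo_effort(10.0), 1))  # person-months for a 10-KLOC project
```

A BPM adaptation would replace KLOC with a process-model size measure (e.g. element counts) and recalibrate a, b, and the drivers, which is exactly the validation the thesis proposes.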
  • Robust Quasi-Newton methods for partitioned fluid-structure simulations (Open Access)
    (2015) Scheufele, Klaudius
    In recent years, quasi-Newton schemes have proven to be a robust and efficient way of coupling partitioned multi-physics simulations, in particular for fluid-structure interaction. The focus of this work is the coupling of partitioned fluid-structure interaction, where minimal interface requirements are assumed for the respective field solvers, which are thus treated as black-box solvers. The coupling is done through the communication of boundary values between the solvers. In this thesis, a new quasi-Newton variant (IQN-IMVJ) based on a multi-vector update is investigated in combination with serial and parallel coupling systems. Due to the implicit incorporation of past information in the Jacobian update, it renders the problem-dependent parameter for the number of retained previous time steps unnecessary. In addition, a whole range of coupling schemes is categorized and compared comprehensively with respect to robustness, convergence behaviour, and complexity. These coupling algorithms differ in the structure of the coupling, i.e., serial or parallel execution of the field solvers, and in the quasi-Newton method used. A more in-depth analysis of a selection of coupling schemes is conducted for a set of strongly coupled FSI benchmark problems, using the in-house coupling library preCICE. The superior convergence behaviour and robustness of the IQN-IMVJ method compared to well-known state-of-the-art methods such as IQN-ILS is demonstrated. It is confirmed that the multi-vector method works optimally without the need to tune problem-dependent parameters in advance. Furthermore, it appears to be especially suitable in conjunction with the parallel coupling system, in that it yields fairly similar results for parallel and serial coupling. Although the focus is on FSI simulations, the considered coupling schemes are expected to be equally applicable to various kinds of volume- or surface-coupled problems.
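The coupling problem the abstract describes is a fixed-point equation x = H(x) on the interface values, where H chains the two black-box field solvers. As a much simpler stand-in for the IQN-ILS/IQN-IMVJ acceleration, the sketch below applies dynamic Aitken underrelaxation to a scalar interface value; function and parameter names are illustrative.

```python
# Accelerated fixed-point coupling iteration x_{k+1} = x_k + omega_k * r_k
# with residual r_k = H(x_k) - x_k and Aitken's dynamic relaxation
# omega_k = -omega_{k-1} * r_{k-1} / (r_k - r_{k-1}).
import math

def aitken_couple(H, x0, omega=0.5, tol=1e-10, max_iter=100):
    """Solve the scalar fixed-point problem x = H(x)."""
    x, r_old = x0, None
    for _ in range(max_iter):
        r = H(x) - x                      # interface residual
        if abs(r) < tol:
            return x
        if r_old is not None:             # Aitken update of the relaxation factor
            omega = -omega * r_old / (r - r_old)
        x, r_old = x + omega * r, r
    return x

# Toy "coupled system": the contraction H(x) = cos(x), fixed point ~0.739
print(aitken_couple(math.cos, 1.0))
```

The quasi-Newton variants in the thesis generalize this idea to vector-valued interfaces by approximating the inverse Jacobian from previous residual/value pairs instead of a single scalar relaxation factor.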
  • Control-plane consistency in software-defined networking: distributed controller synchronization using the ISIS² toolkit (Open Access)
    (2015) Strauß, Jan
    Software-defined networking (SDN) is a recent approach in computer networks that eases network administration by separating the control-plane and the data-plane. The data-plane only forwards packets according to rules specified by the control-plane. The control-plane, implemented by software called the controller, determines the forwarding rules based on a global view of the network. In order to increase fault tolerance and to eliminate a possible performance bottleneck, the controller can be distributed. The synchronization of the data that holds the global view is conventionally realized using distributed key-value stores, which offer a fixed consistency semantics and do not respect the heterogeneous consistency requirements of the data items in the controller state. The virtual synchrony model, an alternative to the commonly used state machine replication method, offers a more flexible solution that can result in higher performance when certain assumptions about the data kept in the controller state can be made. In this thesis, a distributed controller based on OpenDaylight, a state-of-the-art SDN controller, and the ISIS² library, which implements the virtual synchrony model, is proposed. The modular architecture of the proposed controller and the use of a platform-independent data model make it possible to extend or replace parts of the system. The implementation of the distributed controller is described, and its macro- and micro-level performance is evaluated with benchmarks.
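The control-plane/data-plane split described above can be caricatured in a few lines: the controller installs match-action rules, and the data-plane only looks them up per packet. This is purely illustrative; real SDN switches use OpenFlow-style flow tables with priorities and wildcards, not Python dictionaries.

```python
# Minimal SDN split: control-plane installs rules, data-plane matches.

flow_table = []  # (match-dict, action) pairs, in installation order

def install_rule(match, action):
    """Control-plane call: add a forwarding rule."""
    flow_table.append((match, action))

def forward(packet):
    """Data-plane lookup: first rule whose fields all match wins."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send_to_controller"  # table miss: ask the control-plane

install_rule({"dst": "10.0.0.2"}, "port2")
install_rule({"dst": "10.0.0.3"}, "port3")
print(forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # -> port2
print(forward({"dst": "10.0.0.9"}))  # miss -> send_to_controller
```

In the distributed setting of the thesis, it is the contents of structures like this flow table and the underlying global view that must be kept consistent across controller replicas.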
  • Concept and implementation of digital beacons (Open Access)
    (2015) Chughtai, Muhammad Bilal
  • Mapping molecular surfaces of arbitrary genus to a sphere (Open Access)
    (2015) Frieß, Florian
    Molecular surfaces are one of the most widely used visual representations for the analysis of molecules. They allow different properties of the molecule to be shown and additional information, such as chemical properties of the atoms, to be encoded using colour. Since the usual representation of molecular surfaces is three-dimensional, common problems such as occlusion and view dependency arise. To address these problems, a two-dimensional representation of the molecular surface can be created. For molecules with a surface of genus zero, there are different methods of creating the sphere that is used as an intermediate object to create the map. For molecules of higher genus this process becomes more difficult: tunnels can only be mapped to the sphere if they are closed at some point inside the tunnel. Introducing arbitrary cuts can lead to small areas on the map, and the deeper inside the tunnel the cut is placed, the smaller the area. To avoid these small areas, the cuts have to be placed close to the entrance of the tunnel. Therefore, a mesh segmentation is performed to identify the tunnels and to create a genus-zero surface for the molecule. Based on this identification, further information can be displayed, such as geodesic lines showing how the tunnels are connected.
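A method like the one above needs to know the genus of the surface mesh in the first place. For a closed orientable mesh it follows from the Euler characteristic, chi = V - E + F, via g = (2 - chi) / 2; a minimal sketch (the abstract does not state how the thesis computes the genus):

```python
# Genus of a closed orientable mesh from its Euler characteristic.

def genus(num_vertices, num_edges, num_faces):
    """g = (2 - chi) / 2 with chi = V - E + F, for a closed orientable mesh."""
    chi = num_vertices - num_edges + num_faces
    return (2 - chi) // 2

print(genus(4, 6, 4))    # tetrahedron (sphere-like): genus 0
print(genus(16, 32, 16)) # 4x4 quad torus mesh: genus 1
```

A genus greater than zero signals that tunnels exist and must be closed by cuts before the surface can be mapped to a sphere.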
  • Design and implementation of TOSCA Service Templates for provisioning and executing bone simulation in cloud environments (Open Access)
    (2015) Dehghanipour, Marzieh
    Recent years have shown an increasing trend to move applications and services into cloud infrastructures. Cloud-based applications typically consist of distributed components that are connected and communicate with each other. Automating the deployment and management of these components is one of the major challenges in the IT world. The OASIS TOSCA standard provides a meta-model for describing the structure of composite cloud-based applications and thereby enables automated deployment and management of these applications. TOSCA-based applications can be executed via OpenTOSCA, a run-time environment for TOSCA-based applications developed at the University of Stuttgart. Simulation applications deal with large and heterogeneous data sources; adequate data management and data provisioning are therefore among their most significant challenges. SIMPL is a framework that provides a generic approach to data management and data provisioning in simulation applications and frees users from dealing with low-level details of data sources and the corresponding data management operations. Both the TOSCA standard and the SIMPL framework are based on workflows. The first goal of this master's thesis is to combine the TOSCA standard with the SIMPL framework in order to make the generic data provisioning and data management approach offered by SIMPL an integral part of the TOSCA standard. The further and main part of this work is to design and implement TOSCA Service Templates for provisioning and executing bone simulations in cloud environments. Different variants of a TOSCA Service Template realizing a bone simulation in a cloud-native way are developed and implemented. In other words, a SaaS solution for the PANDAS bone simulation is provided in the scope of this master's thesis with the help of the TOSCA and SIMPL technologies.
  • Gruppierung von Eye-Tracking-Daten mittels geeigneter Ähnlichkeitsfunktionen (Open Access)
    (2015) Heyen, Frank
    Eye-Tracking gewann als Hilfsmittel zur Evaluation von Benutzerschnittstellen und Visualisierungen in den letzten Jahren stets an Beliebtheit. Ein Vergleich der Lösungsstrategien verschiedener Personen kann anhand der Blickpfade, auch Scanpaths genannt, durchgeführt werden. Für diese Aufgabe fehlt zurzeit noch eine optimale Methode. Bereits existierende Arbeiten verwenden unter anderem Algorithmen zum String-Vergleich, um die Ähnlichkeit zwischen Scanpaths zu ermitteln. Diese Algorithmen können durch Parameter beeinflusst werden. Auch eine Vorverarbeitung der Blickpfade ist durch Methoden mit weiteren Parametern möglich. Angesichts der Vielzahl von denkbaren Kombinationen ist eine Auswahl der optimalen Parameter schwer. In dieser Arbeit werden unterschiedliche Ansätze für den Vergleich von Scanpaths untersucht. Dazu gehören unter anderem die Levenshtein-Distanz und der Algorithmus von Needleman und Wunsch, die einen Wert für die Ähnlichkeit von Strings berechnen. Für diese Ansätze werden Erweiterungen zur Vorverarbeitung der Scanpaths und Einbeziehung weiterer Informationen in den Vergleich erarbeitet. Eine Evaluation in drei Versuchen mit generierten und real aufgezeichneten Eye-Tracking-Daten zeigt anschließend, welche der Parameterkonfigurationen sich in der Praxis bewähren.