05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Showing 1-10 of 67
  • REST compliant clients for REST APIs (Open Access)
    (2014) Jaber, Mustafa
    In today's distributed systems, REST services play a central role in defining application architectures. Current technologies and literature focus on building server-side REST applications but fail to support generic, REST-compliant client solutions. As a result, most offered services, and especially client applications, rarely comply with the constraints that constitute the REST architecture. This thesis introduces the architecture of a new generic framework for building REST-compliant client applications, together with a new description language that conforms to REST's constraints and helps reduce development time. We describe the building blocks of the proposed solutions and present a software implementation of a library that realizes their architectures. Using the proposed framework and description language, client applications that conform to the full set of REST's constraints can be built in an easy and optimized way. In addition, REST service providers can rely on the proposed description language to avoid the complexity of repeatedly building customized solutions for different technologies and platforms.
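The hypermedia constraint ("HATEOAS") is the REST constraint client applications break most often. As a toy illustration only, not the framework or description language from the thesis, the following Python sketch shows a generic client that drives all interaction through server-supplied link relations; the resource paths and relation names are invented for the example.

```python
# In-memory stand-in for a REST API: each resource carries data plus links.
API = {
    "/orders": {"count": 1, "_links": {"first": "/orders/1"}},
    "/orders/1": {"status": "open", "_links": {"cancel": "/orders/1/cancel"}},
    "/orders/1/cancel": {"status": "cancelled", "_links": {}},
}

class GenericRestClient:
    """Knows only one entry-point URI; every further transition is
    discovered at runtime from the current resource's link relations,
    so the client never hard-codes the server's URI layout."""

    def __init__(self, api, entry_point):
        self.api = api
        self.resource = api[entry_point]

    def relations(self):
        # Which state transitions does the server currently offer?
        return sorted(self.resource["_links"])

    def follow(self, rel):
        # Transition by relation name, not by a hard-coded URI.
        self.resource = self.api[self.resource["_links"][rel]]
        return self.resource

client = GenericRestClient(API, "/orders")
print(client.relations())                 # → ['first']
print(client.follow("first")["status"])   # → open
print(client.follow("cancel")["status"])  # → cancelled
```

Because the client binds to relation names rather than URI structures, the server can reorganize its URI space without breaking such clients, which is the point of the hypermedia constraint.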
  • Modeling and execution of a coupled solid-body simulation with workflow choreographies (Open Access)
    (2014) Hintermayer, Kerstin
    In scientific computing, workflow technology is increasingly used to execute simulations and computations automatically. This thesis addresses the modeling of a coupled solid-body simulation as a workflow choreography following a top-down approach. Emerging challenges are identified and possible solutions are described. Building on the modeling result, the processes required to execute the coupled solid-body simulation are implemented and presented. The finished model is assessed against the requirements. It can serve as a basis for future work and offers starting points for follow-up studies, enabling the refinement of future choreography-based workflow models.
  • Modeling of a multi-core MicroBlaze system at RTL and TLM abstraction levels in SystemC (Open Access)
    (2013) Eissa, Karim
    Transaction Level Modeling (TLM) has recently become a popular approach for modeling contemporary Systems-on-Chip (SoCs) at a higher abstraction level than Register Transfer Level (RTL). In this thesis, a multi-core system based on the Xilinx MicroBlaze microprocessor is modeled at the RTL and TLM abstraction levels in SystemC. Both implemented models have cycle-accurate timing and are verified against the reference VHDL model using a VHDL/SystemC mixed-language simulation with ModelSim. Finally, performance measurements are carried out to evaluate the simulation speedup at the transaction level. Modeling of the MicroBlaze processor is based on a MicroBlaze Instruction Set Simulator (ISS) from SoCLib. A wrapper is implemented to provide communication interfaces between the processor and the rest of the system, and to control the timing of the ISS operation so that the models remain cycle-accurate. Furthermore, a local memory module based on Block Random Access Memories (BRAMs) is modeled to simulate a complete system consisting of a processor and a local memory.
  • Extending parking assistance for automotive user interfaces (Open Access)
    (2014) Georoceanu, Radu
    Nowadays, the trend in the automotive industry is to integrate systems that go beyond merely maneuvering the car: navigation, communication, and entertainment functions have become standard in most vehicles. The multitude of sensors present in today's vehicles can be used to collect information and share it with other drivers in order to make roads safer and cleaner. A particularly troubling issue for drivers is the search for free parking spots, which wastes time, fuel, and effort. Solutions such as crowdsourced smartphone apps already try to mitigate these problems, but they are still far from reliable. The overall goal of this thesis is to find new ways of providing parking information to drivers. This information is collected by vehicles equipped with the latest sensor hardware, which can detect parking spaces while driving and distribute this information to the cloud, sharing it with other drivers via smartphones or the vehicle's integrated displays. Though the idea is simple, many challenges need to be addressed. The thesis also looks into ways of improving parking surveillance, using the latest vehicle-integrated video camera systems to make parked vehicles less susceptible to vandalism and theft. A study is conducted to determine what parking-related information drivers want and how this information can best be displayed to them. Finally, a cloud-based implementation of such a system is presented in detail and evaluated to see how it behaves in the real world.
  • Depth-driven variational methods for stereo reconstruction (Open Access)
    (2014) Maurer, Daniel
    Stereo reconstruction is one of the fundamental problems in computer vision; the aim is to reconstruct the depth of a static scene. To solve this problem, the corresponding pixels in both views must be found. A common technique is to minimize an energy (cost) function, and most methods use a parameterization in the form of displacement information (disparity). In contrast, this thesis uses, extends, and examines a depth parameterization. (i) First, a basic depth-driven variational method is developed, based on a recently presented method of Basha et al. [2]. (ii) Then, several possible extensions are presented to improve the developed method. These include advanced smoothness terms that incorporate image information and enable an anisotropic smoothing behavior, as well as advanced data terms whose modified constraints allow a more accurate estimation in different situations. (iii) Finally, all extensions are compared with each other and with a disparity-driven counterpart.
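For readers unfamiliar with the variational setting, a generic depth-parameterized energy has the following textbook form (an illustrative sketch, not the exact functional of Basha et al. [2] or of this thesis):

```latex
E(Z) = \int_{\Omega} \Psi\!\Big( \big( I_2(\pi(\mathbf{x}, Z(\mathbf{x}))) - I_1(\mathbf{x}) \big)^2 \Big)\, d\mathbf{x}
     \;+\; \alpha \int_{\Omega} \Psi\!\big( |\nabla Z(\mathbf{x})|^2 \big)\, d\mathbf{x}
```

Here Z is the unknown depth map, the warp π(x, Z(x)) reprojects pixel x into the second view given its depth, Ψ is a robust penalizer, and α weights the smoothness term. The data term measures the brightness mismatch of corresponding pixels; the smoothness term is where image-driven, anisotropic regularizers of the kind mentioned in the abstract extend the basic model.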
  • Large-scale data mining analytics based on MapReduce (Open Access)
    (2014) Ranjan, Sunny
    In this work, we investigate approaches to large-scale data mining analytics. We explore the existing MapReduce and MapReduce-like frameworks for distributed data processing, as well as distributed file systems for distributed data storage. We study the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce software framework in detail. We analyze the benefits of the newer version of the Hadoop framework, called YARN, which provides a better scalability solution by separating cluster resource management from the MapReduce framework, and which is flexible enough to support various kinds of distributed data processing beyond MapReduce's batch-mode processing. We also looked into various MapReduce-based implementations of data mining algorithms to derive a comprehensive picture of how such algorithms are developed, and surveyed tools that provide MapReduce-based scalable data mining algorithms. We could only find Mahout as a tool specifically based on Hadoop MapReduce, but its developer team decided to stop using Hadoop MapReduce and to adopt Apache Spark as the underlying execution engine instead. WEKA also has a very small subset of data mining algorithms implemented using MapReduce, but it is not properly maintained or supported by the developer team. Subsequently, we found that Apache Spark, apart from providing an optimized and faster execution engine for distributed processing, also offers an accompanying machine learning library, MLlib. Apache Spark claims to be much faster than Hadoop MapReduce because it exploits in-memory computation, which is particularly beneficial for the iterative workloads common in data mining. Spark is designed to work on a variety of clusters, YARN being one of them, and to process Hadoop data.
We selected a particular data mining task: classification and regression via decision tree learning. We stored properly labelled training data for the predictive mining tasks in HDFS, set up a YARN cluster, and ran Spark MLlib applications on this cluster. These applications use the cluster-management capabilities of YARN and the distributed execution framework of the Spark core services. We performed several experiments to measure the performance gains, speed-up, and scale-up of the decision tree learning implementations in Spark's MLlib. The results were much better than expected: we achieved a higher-than-ideal speed-up when increasing the number of nodes, scale-up was also excellent, and the run time for training decision tree models decreased significantly as nodes were added. This demonstrates that Spark MLlib's decision tree learning algorithms for classification and regression analysis are highly scalable.
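The speed-up and scale-up figures mentioned above follow the standard definitions, sketched below in plain Python for illustration; the run times are invented numbers, not the thesis's measurements.

```python
def speedup(t_one_node, t_n_nodes):
    """Fixed workload: how much faster n nodes finish it.
    The ideal value on n nodes is n; more than n is super-linear."""
    return t_one_node / t_n_nodes

def scaleup(t_base, t_scaled):
    """Workload grown in proportion to the node count: a value near
    1.0 means the larger cluster absorbs the larger problem with no
    increase in run time."""
    return t_base / t_scaled

# Hypothetical decision-tree training times in seconds.
print(speedup(1200.0, 280.0))  # on 4 nodes, > 4 would be super-linear
print(scaleup(300.0, 320.0))   # → 0.9375, close to the ideal 1.0
```

Super-linear speed-up of the kind the abstract reports typically arises because adding nodes also adds aggregate memory and cache, so more of the training data stays in memory, which matters especially for Spark's in-memory execution model.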
  • Accelerated computation using runtime partial reconfiguration (Open Access)
    (2013) Nayak, Naresh Ganesh
    Runtime reconfigurable architectures, which integrate a hard processor core together with a reconfigurable fabric on a single device, make it possible to accelerate computations by means of hardware accelerators implemented in the reconfigurable fabric. Runtime partial reconfiguration provides the flexibility to dynamically exchange these hardware accelerators and thus adapt the computing capacity of the system. This thesis evaluates design paradigms that exploit partial reconfiguration to implement compute-intensive applications on such runtime reconfigurable architectures. For this purpose, image processing applications are implemented on the Zynq-7000, a System on a Chip (SoC) from Xilinx Inc. that integrates an ARM Cortex-A9 with a reconfigurable fabric. The thesis studies different image processing applications to select suitable candidates that benefit from being implemented on this class of reconfigurable architectures using runtime partial reconfiguration. Different Intellectual Property (IP) cores for executing basic image operations are generated using high-level synthesis. A software-based scheduler, executed in the Linux environment running on the ARM core, realizes the image processing application by loading the appropriate IP cores into the reconfigurable fabric. The implementation is evaluated to measure the application speed-up, resource savings, power savings, and the delay incurred by partial reconfiguration. The results suggest that using partial reconfiguration to implement an application yields FPGA resource savings, the extent of which depends on the granularity of the operations into which the application is decomposed. The thesis also establishes that runtime partial reconfiguration can be used to accelerate computations on reconfigurable architectures with a processor core, such as the Zynq-7000 platform.
The achieved computational speed-up depends on factors such as the number of hardware accelerators used for the computation and the chosen reconfiguration schedule. The thesis also highlights the power savings that may be achieved by executing computations in the reconfigurable fabric instead of on the processor core.
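The trade-off the abstract describes, between reconfiguration delay and accelerator speed, can be captured in a back-of-the-envelope model. This is not taken from the thesis; all numbers below are invented for illustration.

```python
def run_times(n_frames, t_sw, t_hw, t_reconfig):
    """Compare processing n_frames in software on the processor core
    against loading an IP core once (paying the one-off partial
    reconfiguration delay t_reconfig) and processing in the fabric."""
    software = n_frames * t_sw
    hardware = t_reconfig + n_frames * t_hw
    return software, hardware

# Hypothetical per-frame times (seconds) and reconfiguration delay.
sw, hw = run_times(n_frames=500, t_sw=0.020, t_hw=0.002, t_reconfig=0.5)
print(sw, hw)  # → 10.0 1.5: the accelerator wins once its load is amortized
```

The model makes the scheduling concern concrete: if the workload per loaded IP core is too small, the reconfiguration delay dominates and the speed-up evaporates, which is why the achieved speed-up depends on the reconfiguration schedule.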
  • A process insight repository supporting process optimization (Open Access)
    (2012) Vetlugin, Andrey
    Existing solutions for the analysis and optimization of manufacturing processes, such as online analytical processing or statistical calculations, have shortcomings that limit continuous process improvement. In particular, they lack means of storing and integrating the results of analysis, so valuable information that could be used for process optimization is used only once and then discarded. The goal of the Advanced Manufacturing Analytics (AdMA) research project is to design an integrated platform for data-driven analysis and optimization of manufacturing processes using analytical techniques, especially data mining, in order to enable continuous improvement of production. Achieving this goal rests on integrating data related to the manufacturing processes, especially from Manufacturing Execution Systems (MES), with other operating data, e.g. from Enterprise Resource Planning (ERP) systems. This work builds on the AdMA platform described in [1] and the Deep Business Process Optimization platform described in [2]. It focuses on the conceptual development of the Process Insight Repository, a part of the AdMA platform aimed at storing manufacturing-process-related data together with the insights associated with it. As part of the AdMA platform, the Process Insight Repository stores the insights obtained by applying data mining techniques to manufacturing process data, so that newly extracted knowledge is kept alongside the process data itself. Chapter 2 describes the conceptual schema of the Process Insight Repository, which defines what data must be stored and how its different parts are interconnected. Chapter 3 reviews technologies that can be used for the implementation, including technologies for storing manufacturing process data, free-form knowledge, and data-mining-related data. Chapter 4 describes the details of the prototype implementation. The results of this work are the conceptual schema of the Process Insight Repository and a prototype implementation serving as a proof of concept.
  • Development of analysis-based optimization patterns for improving manufacturing processes (Open Access)
    (2012) Dapperheld, Moritz
    The production flow in industrial companies must, among other things, be cost- and time-efficient, transparent, and flexible. Precisely calibrated manufacturing processes are thus a foundation of a company's success, and improving existing manufacturing processes is a critical means of achieving them. A multitude of optimization concepts exists in the production domain that have already proven themselves through successful application in practice. However, the optimization procedure often considers only the areas selected for improvement, without any interaction with the associated information flows. The Advanced Manufacturing Analytics (AdMA) research project provides an approach for analyzing and optimizing manufacturing processes by drawing on a combination of execution data and data from operational systems. The optimization is carried out on the basis of optimization patterns. The goal of this thesis is to evaluate existing optimization techniques with regard to their applicability as optimization patterns. The approaches are consolidated into a framework, considering best practices from the production context, workflow-driven approaches, and dynamic procedures. The evaluation reveals application possibilities for approaches from all three areas, but also the criteria that make an implementation costly or impossible. A concept for implementing the proactive optimization approach is then developed: the pattern adapts the attributes of process instances by generating a recommendation for action, based on the construction and evaluation of decision trees. Following the concept, the prototypical implementation is described.
  • Development of procedures and evaluation strategies for novel field-effect transistor sensors (Open Access)
    (2012) Parker, Michael Lee
    In order to evaluate new types of sensors based on field-effect transistor technology, a cost-effective measurement and control system is developed. Because some new types of transistor-based sensors are particularly prone to drift and noise, the measurement system is built around evaluating a biasing technique known as switched biasing, which has been shown to reduce drift in certain configurations. The result is a software and hardware implementation that can control a transistor with switched biasing, explore drift-reducing switched-biasing configurations, and measure the transistor's performance with relatively high precision. Pre-filtering of the measured data, coupled with fast actuation of an analog-to-digital converter, is realized on an FPGA in the form of a rate-adjustable CIC decimation filter, which increases the signal-to-noise ratio and reduces the required data-transfer rate. The measurement system is controlled internally by a microcontroller, is interfaced through USB to a higher-level system such as a computer running MATLAB, and allows multiple measurement systems to be operated in parallel. Systematic errors related to limitations of the measurement hardware, such as offset, temperature effects, and drift, are evaluated and compensated for through calibration.
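A rate-adjustable CIC decimation filter of the kind described above can be sketched behaviorally in a few lines of Python; this is an illustration of the filter structure, not the thesis's FPGA implementation, and the parameters R (decimation factor) and N (number of stages) are example values.

```python
def cic_decimate(x, R, N):
    """Decimate x by factor R with an N-stage CIC filter
    (differential delay M = 1): N integrators running at the input
    rate, downsampling by R, then N combs at the output rate."""
    # Integrator cascade (runs at the high input rate).
    acc = [0] * N
    integrated = []
    for sample in x:
        v = sample
        for i in range(N):
            acc[i] += v
            v = acc[i]
        integrated.append(v)
    # Downsample: keep every R-th sample.
    decimated = integrated[R - 1::R]
    # Comb cascade (differentiators at the low output rate).
    prev = [0] * N
    out = []
    for d in decimated:
        v = d
        for i in range(N):
            v, prev[i] = v - prev[i], v
        out.append(v)
    return out

# A constant (DC) input: after the transient, the output settles at the
# DC gain (R * M) ** N of the filter.
print(cic_decimate([1] * 8, R=4, N=1))  # → [4, 4]
print(cic_decimate([1] * 8, R=4, N=2))  # → [10, 16]
```

The structure explains why CIC filters suit FPGAs: they need only adders and registers, no multipliers, and changing R at runtime (the "rate-adjustable" property) only changes the downsampling step, not the hardware datapath.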