Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results
Item Open Access
Modeling of a multi-core MicroBlaze system at RTL and TLM abstraction levels in SystemC (2013) Eissa, Karim
Transaction Level Modeling (TLM) has recently become a popular approach for modeling contemporary Systems-on-Chip (SoCs) at a higher abstraction level than the Register Transfer Level (RTL). In this thesis, a multi-core system based on the Xilinx MicroBlaze microprocessor is modeled at the RTL and TLM abstraction levels in SystemC. Both implemented models have cycle-accurate timing and are verified against the reference VHDL model using a VHDL/SystemC mixed-language simulation with ModelSim. Finally, performance measurements are carried out to evaluate the simulation speedup at the transaction level. Modeling of the MicroBlaze processor is based on a MicroBlaze Instruction Set Simulator (ISS) from SoCLib. A wrapper is therefore implemented to provide communication interfaces between the processor and the rest of the system, as well as to control the timing of the ISS operation so that cycle-accurate models are obtained. Furthermore, a local memory module based on Block Random Access Memories (BRAMs) is modeled to simulate a complete system consisting of a processor and a local memory.

Item Open Access
Accelerated computation using runtime partial reconfiguration (2013) Nayak, Naresh Ganesh
Runtime reconfigurable architectures, which integrate a hard processor core along with a reconfigurable fabric on a single device, make it possible to accelerate a computation by means of hardware accelerators implemented in the reconfigurable fabric. Runtime partial reconfiguration provides the flexibility to dynamically change these hardware accelerators to adapt the computing capacity of the system. This thesis presents the evaluation of design paradigms which exploit partial reconfiguration to implement compute-intensive applications on such runtime reconfigurable architectures.
For this purpose, image processing applications are implemented on the Zynq-7000, a System on a Chip (SoC) from Xilinx Inc. which integrates an ARM Cortex-A9 with a reconfigurable fabric. This thesis studies different image processing applications to select suitable candidates that benefit from being implemented on the above-mentioned class of reconfigurable architectures using runtime partial reconfiguration. Different Intellectual Property (IP) cores for executing basic image operations are generated using high-level synthesis for the implementation. A software-based scheduler, executed in the Linux environment running on the ARM core, is responsible for implementing the image processing application by loading appropriate IP cores into the reconfigurable fabric. The implementation is evaluated to measure the application speedup, resource savings, power savings and the delay on account of partial reconfiguration. The results of the thesis suggest that the use of partial reconfiguration to implement an application provides FPGA resource savings. The extent of the resource savings depends on the granularity of the operations into which the application is decomposed. The thesis also establishes that runtime partial reconfiguration can be used to accelerate computations in reconfigurable architectures with a processor core, like the Zynq-7000 platform. The achieved computational speedup depends on factors such as the number of hardware accelerators used for the computation and the reconfiguration schedule used. The thesis also highlights the power savings that may be achieved by executing computations in the reconfigurable fabric instead of the processor core.

Item Open Access
Analysis of cache usability on modern real-time systems (2013) Almheidat, Ahmad Nuraldin Faleh
Cache memories are used in microprocessors to close the speed gap between the processor and the main memory.
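The basic cache mechanism, hitting when a requested block is already resident and missing otherwise, can be sketched as a minimal direct-mapped model. This is an illustration with assumed parameters and names, not the simulation tool developed in the thesis:

```python
# Minimal sketch of a direct-mapped cache lookup, illustrating the
# hit/miss bookkeeping a cache simulator performs. The configuration
# (4 lines of 16 bytes) and the trace are illustrative assumptions.

class DirectMappedCache:
    def __init__(self, num_lines, line_size):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines   # one tag stored per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        """Record one memory access; return True on a hit."""
        block = address // self.line_size
        index = block % self.num_lines
        tag = block // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag           # fill the line on a miss
        self.misses += 1
        return False

cache = DirectMappedCache(num_lines=4, line_size=16)
trace = [0, 4, 64, 0, 128, 0]            # byte addresses
results = [cache.access(a) for a in trace]
```

Feeding the simulator a longer trace and reading `hits` and `misses` gives exactly the hit/miss rates that a predictability analysis of this kind reports.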
Caches can minimize the memory access time by keeping a copy of the most frequently demanded data closer to the processor. As a result, the overall program execution time is reduced. In safety-critical real-time systems, a worst-case analysis is required, and therefore the cache memories play an essential role in the estimation of the application's worst-case execution time. A simulation tool for the cache structure was developed to provide estimated measurements of both cache predictability and the worst-case memory access time based on the architectural model used. This may help to draw some conclusions about the actual cache operation. The simulation supports several modern uni-core and multi-core architectures, including some used in real-time systems. It also allows configuring different cache structures and hierarchies. The cache architecture, configuration and memory accesses from a simulated running application are specified by the user via an input file. The simulation provides a list of traces for every access. The cache predictability can be formulated as hit and miss rates. At the same time, the traces can be used to estimate the total memory access time.

Item Open Access
Online self-test wrapper for runtime-reconfigurable systems (2013) Wang, Jiling
Reconfigurable Systems-on-a-Chip (SoC) architectures consist of microprocessors and Field Programmable Gate Arrays (FPGAs). These SoC devices combine the ease of programmability with the flexibility that FPGAs provide, which makes them suitable for implementing runtime reconfigurable systems. One representative of these is the new Xilinx Zynq-7000 Extensible Processing Platform (EPP), which integrates a dual-core ARM Cortex-A9 based Processing System (PS) and Programmable Logic (PL) in a single device. After power-on, the PS is booted and the PL can subsequently be configured and reconfigured by the PS. Recent FPGA technologies incorporate the dynamic Partial Reconfiguration (PR) feature.
PR allows new functionality to be programmed online into specific regions of the FPGA while the performance and functionality of the remaining logic are preserved. This on-the-fly reconfiguration characteristic enables designers to time-multiplex portions of the hardware dynamically and load functions into the FPGA on an as-needed basis. The configuration access port on the FPGA can be used to load the configuration data from memory into the reconfigurable block, which enables the user to reconfigure the FPGA online and test runtime systems. Manufactured in advanced 28 nm technologies, the modern generations of FPGAs are increasingly prone to latent defects and aging-related failure mechanisms. To detect faults in the reconfigurable gate arrays, dedicated online and offline test methods can be employed to test the device in the field. Adaptive systems require that faults be detected and localized, so that the faulty logic unit will not be used in future reconfiguration steps. This thesis presents the development and evaluation of a self-test wrapper for the reconfigurable parts of such hybrid SoCs. It comprises the implementation of Test Configurations (TCs) for reconfigurable components as well as the generation and application of appropriate test stimuli and response analysis. The self-test wrapper is successfully implemented and is fully compatible with the AMBA protocols. The TC implementation is based on an existing Java framework for the Xilinx Virtex-5 FPGA, extended to the Zynq-7000 EPP family. These TCs are successfully redesigned to achieve full logic coverage of the FPGA structures. Furthermore, an array-based testing method is adopted, and the tests can be applied to any part of the reconfigurable fabric. A complete software project has been developed and built to allow the reconfiguration process to be triggered by the ARM microprocessor.
Functional test of the reconfigurable architecture, online self-test execution and retrieval of results are under the control of the embedded processor. Implementation results and analysis demonstrate that the TCs are successfully synthesized and can be dynamically reconfigured into the area under test, and subsequent tests can be performed accordingly.

Item Open Access
Identifiability and sensitivity analysis of heterogeneous cell population models (2013) Zeng, Shen
In this thesis, we introduce novel concepts for the modeling and analysis of heterogeneous cell populations. Heterogeneous cell populations can be interpreted as large populations of structurally identical cells with heterogeneous parameters and initial conditions. They appear in biological systems such as tissues of higher organisms or colonies of microorganisms. A well-known approach to the modeling of heterogeneous cell populations is the so-called density-based approach, in which the state of a heterogeneous cell population is given by the probability density of the cell states. In this approach, the evolution of the probability densities is given in terms of a partial differential equation. We extend this approach via a measure-theoretic consideration, which exploits the probabilistic nature of the problem. The result of this novel ansatz is a framework in which the evolution of densities is described by operators. One of the key tasks in the analysis of heterogeneous cell population models is parameter estimation. For heterogeneous cell populations, we want to estimate the probability density of parameters and initial conditions. However, to be able to perform parameter estimation, one always needs specific identifiability properties of a system. We formulate for the first time the concept of structural identifiability for a heterogeneous cell population model. It is revealed that this concept is closely related to the observability of the corresponding single cell model.
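For deterministic single-cell dynamics of the form ẋ = f(x, θ), the density-based approach typically leads to a transport equation of Liouville type. The following is the generic textbook form of that equation, shown for orientation only; it is not necessarily the thesis's exact formulation:

```latex
% Density transport (Liouville) equation for single-cell dynamics
% \dot{x} = f(x,\theta); p(x,t) is the probability density over
% cell states. Generic illustration of the density-based approach.
\frac{\partial p(x,t)}{\partial t}
  + \nabla_{x} \cdot \bigl( f(x,\theta)\, p(x,t) \bigr) = 0
```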
The connection between both concepts is studied and illuminated in a concrete example. The second emphasis of this thesis is the application of sensitivity analysis to the class of heterogeneous cell population models. Here, we study sensitivity with respect to variations or misspecifications in the probability density of parameters and initial conditions.

Item Open Access
Energy-proportional machines for cloud data centers (2013) Francato, Arturo
A major concern today is the energy efficiency of servers and high-power machines in a cloud datacenter infrastructure. According to Barroso et al. [1], an ideal machine consumes energy proportional to the work performed. In this case, an idle machine should consume no energy and a machine in operation should consume energy only in proportion to the number of tasks performed. Even though the energy efficiency of machines is constantly improving, they are still not perfectly energy-proportional. Therefore, Dürr proposed in [2] the concept of Elastic Tandem Machine Instances (ETMI), which aims to improve energy efficiency in particular for idle and weakly loaded instances. In this thesis, we attempt to improve the concept of Elastic Tandem Machines. The original concept integrated only one low-power system-on-a-chip (SoC) machine, which operates during low load on the datacenter, and exactly one high-power virtual machine (VM) instance, powered on when the traffic increases and needs to be redirected. However, if the performance of the SoC and the VM instance differed too much, the efficiency of the approach suffered, since at the performance limit of the SoC, when the transfer occurred, the high-power instance would be almost idle. Therefore, we integrate different performance classes of VMs (e.g., small, medium, and large instances) into Elastic n-Instance Machines to further improve the efficiency and scalability of the system.
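The switch among n instance classes described above can be sketched as a simple capacity-threshold policy: serve the current load with the smallest class that still has headroom. The class names, capacities, and headroom value below are illustrative assumptions, not the thesis's actual configuration or algorithm:

```python
# Minimal sketch of load-based selection among instance classes,
# extending the tandem (two-instance) idea to n classes. Capacities
# are hypothetical requests-per-second figures.

INSTANCE_CLASSES = [            # (name, capacity in requests/s)
    ("soc",    100),
    ("small",  400),
    ("medium", 1600),
    ("large",  6400),
]

def select_instance(load, headroom=0.8):
    """Return the smallest class whose capacity covers the load,
    keeping `headroom` as the maximum allowed utilisation."""
    for name, capacity in INSTANCE_CLASSES:
        if load <= capacity * headroom:
            return name
    return INSTANCE_CLASSES[-1][0]   # saturate at the largest class
```

In the thesis's setting the decision is driven by a *predicted* rather than the current load, so that the handover can complete before any instance is overloaded.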
We then design a predictive algorithm and integrate it with the ETMI to decide in advance, before any server is overloaded, when the best time is to switch among the instances. The handover algorithm, based on software-defined networking, and the predictive algorithm, based on an Autoregressive Integrated Moving Average (ARIMA) model, are presented. The performance of the system with respect to energy efficiency and machine elasticity is evaluated using experiments and performance benchmarks. The evaluations of the model demonstrate the applicability of low- and medium-power instances serving low and medium loads efficiently, in addition to the scalability of the solution among n instances. The predictive method shows satisfactory results when forecasting seasonal data; different models may have to be implemented for non-seasonal series.

Item Open Access
Microwave heating of plasmas with the new 14 GHz system at the stellarator TJ-K (2013) Loiten, Michael
The aim of this thesis has been to investigate, in the equilibrium state, the plasmas generated by the newly installed 14 GHz microwave heating system at TJ-K. The new heating system has been installed in order to operate TJ-K over a wider range of controllable parameters. Several diagnostics have been used to investigate the plasma: an interferometer was used to obtain the line-averaged density; a radially movable device with three Langmuir probes was used to obtain the radial profiles of the electron density and the electron temperature; an optical diode was used to obtain the radiation mainly in the visible range, whereas a bolometer with eight channels was used to obtain the poloidal radiation profiles. In addition, the neutral gas pressure, the magnetic field (based on the current running through the coils), and the injected and reflected microwave power were measured.
Magnetic field and pressure scans in the new regime have been performed, meaning that the scanned parameter has been varied on a shot-to-shot basis, whereas the other parameters have been kept constant. In addition to increasing the parameter space, the magnetic field has been varied in order to vary the power deposition in the plasmas. The pressure has been varied in order to approach regimes where neoclassical effects become important. By lowering the collisionality, collisional regimes where neoclassical effects dominate can be reached. Lower collisional regimes were found at low pressures in hydrogen. However, operation in these collisional regimes is not readily available, as it was found that the plasmas become increasingly unstable when closing in on them. With this heating system one can operate at higher magnetic fields and thus increase the confinement of the plasma. It has been found that plasmas in this regime have higher densities than those produced by the previously installed heating systems. This makes the new heating system a good candidate for studying over-dense plasmas.

Item Open Access
Providing in-network content-based routing using OpenFlow (2013) Mishra, Gagan Bihari
Content-based routing as provided by publish/subscribe systems has evolved into a key paradigm for interactions between loosely coupled application components (content publishers and subscribers). Content-based routing aims to increase the efficiency of forwarding by utilizing the diversity of the information exchanged between application components. Using content-based forwarding rules (also called content filters) installed on content-based routers (also termed brokers), bandwidth efficiency is increased by forwarding content only to the subset of subscribers who are actually interested in the published content. Many middleware implementations of content-based publish/subscribe have been developed over the last decade.
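The filter-based forwarding just described can be sketched in a few lines: a broker holds per-subscriber content filters and delivers an event only to subscribers whose filter matches it. The filter representation, attribute names, and example data below are illustrative assumptions, not any particular middleware's API:

```python
# Minimal sketch of content-based forwarding at a broker: each
# subscription is a conjunction of (attribute, operator, value)
# constraints, and an event is forwarded only to matching subscribers.

import operator

OPS = {"==": operator.eq, "<": operator.lt, ">": operator.gt}

def matches(event, content_filter):
    """True if the event satisfies every constraint in the filter."""
    return all(
        attr in event and OPS[op](event[attr], value)
        for attr, op, value in content_filter
    )

def forward(event, subscriptions):
    """Return the list of subscribers interested in this event."""
    return [sub for sub, flt in subscriptions.items() if matches(event, flt)]

subscriptions = {
    "alice": [("topic", "==", "stock"), ("price", ">", 100)],
    "bob":   [("topic", "==", "news")],
}
event = {"topic": "stock", "price": 120}
```

Moving this matching from an application-layer broker into the network layer, by compiling filters into OpenFlow flow-table rules, is exactly the step the reference architecture targets.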
However, implemented on the application layer, their performance is still far behind that of communication protocols implemented on the network layer w.r.t. throughput, end-to-end latency and bandwidth efficiency. Therefore, it would be highly attractive to implement content-based routing directly on the network layer. In particular, the advent of new networking technologies, namely software-defined networking and network virtualization, has the potential to make this a reality. To this end, a reference architecture has recently been proposed that allows for the embedding of content-based routing at the network layer by utilizing the OpenFlow specification. The task of this thesis is the concrete realization of content-based routing in the OpenFlow reference architecture. In particular, the thesis focuses on the implementation/embedding of filtering-based publish/subscribe approaches in the reference architecture, as a proof of concept. The implementation is then evaluated w.r.t. message forwarding delay, false positives, etc.

Item Open Access
Thermal imaging for interactive surfaces (2013) Abdelrahman, Yomna
Thermal cameras operating in the Far Infrared (FIR) are considered underexplored in fields other than security and law enforcement. Nevertheless, they have recently drawn the attention of Human Computer Interaction (HCI) researchers as a new sensory system enabling novel interactive systems. They are robust to illumination changes, and using them with well-known computer vision techniques greatly reduces the complexity of interaction detection as compared to RGB and depth cameras. FIR radiation, however, has another undiscovered characteristic that distinguishes thermal cameras from their RGB or depth counterparts, namely thermal reflection. Commonly, surfaces reflect thermal radiation differently from visible light and can become a perfect thermal mirror.
In this thesis, we show that through thermal reflection, thermal cameras can sense the space beyond their direct field of view (areas beside and even behind the camera's field of view). We investigate how thermal reflection can potentially increase the interaction space of projected surfaces using camera-projection systems. We moreover discuss the reflection characteristics of common surfaces in our vicinity in both the visual and thermal radiation bands. Using a proof-of-concept prototype, we demonstrate the increased interaction space for a hand-held camera-projection system. Furthermore, we depict a number of other promising application examples that can largely benefit from the thermal reflection characteristic of surfaces.

Item Open Access
Securing cloud applications with two-factor authentication (2013) Ashraf, Umair
Content management Software as a Service (SaaS) applications have attracted a lot of attention in recent years. The software and related content are hosted in the cloud, and remote access is given to the users through a web browser or a thin web client. Content management SaaS solutions store the regulatory content of an organization in the cloud. Any successful attempt at unauthorized access to the cloud content can pose serious security risks, ranging from financial loss and defamation to civil or criminal offenses. Security and privacy are two major hindrances for cloud consumers in adopting SaaS-based cloud applications. We need a solution that maximizes the level of trust between cloud consumers and cloud providers. The level of trust can be increased by increasing information security and privacy, which boils down to strong authentication, authorization and access control mechanisms. This thesis focuses on new technologies to improve the authentication of services consumed in the cloud. Password authentication is the commonly used single-factor authentication mechanism. Password authentication is, however, defenceless against many security threats.
Passwords are vulnerable to replay and discovery attacks. They also offer no resistance to eavesdropping, man-in-the-middle or phishing attacks. Two-factor authentication opens up new horizons for security enhancement. It mandates that users provide two authentication tokens during the authentication phase. The two authentication tokens cover each other's vulnerabilities and combine to provide higher information security. Ensuring strong authentication is a complete process in itself. The probability of occurrence of a security breach and the loss involved play a decisive role in selecting an authentication assurance level. The assurance level is a measurement of the strength of an authentication process. The appropriate technology is selected to meet a certain assurance level and mitigate the exposed risk to an information system. Selecting the appropriate technology includes selecting the authentication tokens, choosing the token management policy and determining the communication protocol between the client and the server. Also, authentication security enhancement is a cyclic process and requires continuous monitoring and improvement. The two-factor authentication solution must secure all the SaaS software and services. While most software and services support password authentication, not all of them provide support for two-factor authentication. Ensuring two-factor authentication in a SaaS model is a challenging task and requires all the software and services to be brought under one authentication policy.
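As a concrete illustration of a second authentication factor, the one-time passwords produced by authenticator tokens are commonly built on HOTP (RFC 4226), the counter-based scheme underlying TOTP. The sketch below uses only the Python standard library; the secret is the RFC's published test value and is shown for illustration only, and this is a generic example rather than the thesis's implementation:

```python
# Minimal sketch of HOTP (RFC 4226), the one-time-password primitive
# behind common second factors. The secret is the RFC test secret.

import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and counter."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"12345678901234567890"                     # RFC 4226 test secret
print(hotp(secret, 0))                               # RFC test vector: 755224
```

In a two-factor login, the server verifies both the password and the current one-time value; even a stolen password is useless without the token's shared secret.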