05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6


Search Results

Now showing 1 - 10 of 84
  • Item (Open Access)
    Rigorous compilation for near-term quantum computers
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible for traditional classical systems to resolve. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and a low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations that must be incorporated by compilation methods targeting near-term quantum computers in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements as they explore the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are included in rigorous compilation methods to investigate each aspect of the imposed requirements, i.e. the number of qubits, connectivity of qubits, duration and incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. Experimental results demonstrate the ability of the developed rigorous compilation methods to extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, as well as by reducing the duration and incurred errors of performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method exploiting the structure of a quantum computation to reuse qubits at runtime yielded a reduction in the required number of qubits by up to 5x and in the result error by up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
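    The qubit-reuse idea can be made concrete with a small scheduling example. The sketch below is illustrative only (the thesis uses exact, solver-based compilation; this greedy interval assignment merely shows why reuse shrinks the qubit count): each logical qubit is live between its first and last gate, and a physical qubit whose logical qubit has already been measured can be reset and reassigned.

    ```python
    # Illustrative sketch (not the thesis's method): qubit reuse as interval
    # scheduling. Each logical qubit is live from its first to its last gate;
    # after its final measurement, the physical qubit can be reset and reused.
    import heapq

    def assign_physical_qubits(lifetimes):
        """lifetimes: list of (first_gate, last_gate) per logical qubit.
        Greedily maps logical to physical qubits, reusing a physical qubit
        once its previous logical qubit has been measured."""
        order = sorted(range(len(lifetimes)), key=lambda q: lifetimes[q][0])
        free = []                              # heap of (release_time, physical_id)
        mapping, next_phys = {}, 0
        for q in order:
            start, end = lifetimes[q]
            if free and free[0][0] < start:    # a qubit measured before 'start'
                _, phys = heapq.heappop(free)  # can be reset and reused
            else:
                phys, next_phys = next_phys, next_phys + 1
            mapping[q] = phys
            heapq.heappush(free, (end, phys))
        return mapping, next_phys

    # Example: 4 logical qubits with staggered lifetimes fit on 2 physical ones.
    mapping, n_phys = assign_physical_qubits([(0, 3), (1, 2), (4, 6), (3, 5)])
    print(mapping, "physical qubits used:", n_phys)
    ```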
  • Item (Open Access)
    Efficient fault tolerance for selected scientific computing algorithms on heterogeneous and approximate computer architectures
    (2018) Schöll, Alexander; Wunderlich, Hans-Joachim (Prof. Dr.)
    Scientific computing and simulation technology play an essential role in solving central challenges in science and engineering. The high computational power of heterogeneous computer architectures makes it possible to accelerate applications in these domains, which are often dominated by compute-intensive mathematical tasks. Scientific, economic and political decision processes increasingly rely on such applications and therefore create a strong demand for correct and trustworthy results. However, continued semiconductor technology scaling poses increasingly serious threats to the reliability and efficiency of upcoming devices. Different reliability threats can cause crashes or erroneous results without indication. Software-based fault tolerance techniques can protect algorithmic tasks by adding appropriate operations to detect and correct errors at runtime. Major challenges arise from the runtime overhead of such operations and from rounding errors in floating-point arithmetic that can cause false positives. The end of Dennard scaling makes it increasingly challenging to improve compute efficiency from one semiconductor technology generation to the next. Approximate computing exploits the inherent error resilience of different applications to achieve efficiency gains with respect to, for instance, power, energy, and execution time. However, scientific applications often have strict accuracy requirements that demand careful use of approximation techniques. This thesis provides fault tolerance and approximate computing methods that enable the reliable and efficient execution of linear algebra operations and Conjugate Gradient solvers on heterogeneous and approximate computer architectures. The presented fault tolerance techniques detect and correct errors at runtime with low runtime overhead and high error coverage. At the same time, these fault tolerance techniques are exploited to enable the execution of Conjugate Gradient solvers on approximate hardware by monitoring the underlying error resilience and adjusting the approximation error accordingly. In addition, parameter evaluation and estimation methods are presented that determine the computational efficiency of application executions on approximate hardware. An extensive experimental evaluation shows the efficiency and efficacy of the presented methods with respect to the runtime overhead of detecting and correcting errors, the error coverage, and the energy reduction achieved in executing Conjugate Gradient solvers on approximate hardware.
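    A core ingredient of software-based fault tolerance for linear algebra is the checksum test. The sketch below is a generic, textbook-style illustration, not the thesis's exact scheme (the tolerance model and the constant kappa are assumptions): a matrix-vector product is verified against a checksum row, with a rounding-error-aware tolerance to avoid the false positives mentioned above.

    ```python
    # Generic ABFT-style checksum sketch for y = A @ x. In exact arithmetic,
    # the column-sum vector c = sum of A's rows satisfies c @ x == sum(y);
    # a mismatch beyond a rounding-error bound indicates a hardware error.
    import numpy as np

    def verify_matvec(A, x, y, kappa=10.0):
        c = A.sum(axis=0)                   # checksum row (column sums of A)
        # Tolerance scaled by machine epsilon, problem size and magnitudes
        # (kappa is an assumed safety factor, not a value from the thesis).
        tol = kappa * np.finfo(A.dtype).eps * len(x) \
              * float((np.abs(A) @ np.abs(x)).sum())
        if abs(float(c @ x) - float(y.sum())) > tol:
            raise ArithmeticError("ABFT checksum mismatch: probable hardware error")

    rng = np.random.default_rng(0)
    A = rng.standard_normal((512, 512))
    x = rng.standard_normal(512)
    y = A @ x
    verify_matvec(A, x, y)                  # rounding noise stays below tol
    y[5] += 1.0                             # inject a transient error into y
    try:
        verify_matvec(A, x, y)
    except ArithmeticError as e:
        print(e)                            # the injected error is detected
    ```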
  • Item (Open Access)
    Locking-enabled security analysis of cryptographic circuits
    (2024) Upadhyaya, Devanshi; Gay, Maël; Polian, Ilia
    Hardware implementations of cryptographic primitives require protection against physical attacks and supply chain threats. This raises the question of secure composability of different attack countermeasures, i.e., whether protecting a circuit against one threat can make it more vulnerable to a different threat. In this article, we study the consequences of applying logic locking, a popular design-for-trust solution against intellectual property piracy and overproduction, to cryptographic circuits. We show that the ability to unlock the circuit incorrectly gives the adversary powerful new attack options. We introduce LEDFA (locking-enabled differential fault analysis) and demonstrate for several ciphers and families of locking schemes that fault attacks become possible (or consistently easier) for incorrectly unlocked circuits. In several cases, logic locking has made circuit implementations prone to classical algebraic attacks with no fault injection needed at all. We refer to this “zero-fault” version of LEDFA as LEDA, investigate its success factors in depth and propose a countermeasure to protect logic-locked implementations against LEDA. We also perform test vector leakage assessment (TVLA) of incorrectly unlocked AES implementations to show the effects of logic locking regarding side-channel leakage. Our results indicate that logic locking is not safe to use in cryptographic circuits, making them less rather than more secure.
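    To make the threat model tangible, here is a toy illustration of XOR-based logic locking (not one of the article's cipher circuits or locking schemes): the correct key restores the original function, while an incorrectly unlocked circuit misbehaves on some inputs, which is precisely the controllable "fault" that LEDFA-style analyses exploit.

    ```python
    # Toy XOR-based logic locking sketch. Key gates XOR internal signals with
    # key bits: the all-zero key restores the original function, and any
    # wrong key turns the circuit into a deterministically faulty one.
    def original(a, b, c):
        return (a & b) ^ c

    def locked(a, b, c, k0, k1):
        n1 = (a & b) ^ k0          # key gate inserted on internal net n1
        return (n1 ^ c) ^ k1       # key gate inserted on the output

    CORRECT_KEY = (0, 0)           # for pure XOR locking, all-zero unlocks

    inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    assert all(locked(*i, *CORRECT_KEY) == original(*i) for i in inputs)

    wrong = (1, 0)                 # an incorrectly unlocked circuit
    diffs = [i for i in inputs if locked(*i, *wrong) != original(*i)]
    print(f"wrong key flips the output on {len(diffs)}/8 input patterns")
    ```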
  • Item (Open Access)
    Development of an infrastructure for creating a behavioral model of hardware of measurable parameters in dependency of executed software
    (2021) Schwachhofer, Denis
    System-Level Test (SLT) is gaining traction not only in industry but, more recently, also in academia. It is used to detect manufacturing defects not caught by previous test steps. The idea behind SLT is to embed the Design Under Test (DUT) in an environment and run software on it that corresponds to its end-user application. But even though SLT has been increasingly used in manufacturing for a decade, there are still many open challenges to solve. For example, there is no coverage metric for SLT. Also, tests are not generated automatically but composed manually from existing operating systems and programs. This master thesis introduces the foundation for the AutoGen project, which will tackle the aforementioned challenges in the future. This foundation comprises a platform for experiments and a workflow to generate Systems-on-Chip (SoCs). A case study is conducted to show how on-chip sensors can be used in SLT applications to substitute for missing detailed technology information. For the case study, a “power devil” application has been developed that aims to keep the temperature of the Field Programmable Gate Array (FPGA) it runs on within a target range. The study illustrates how software and its parameters influence the extra-functional behavior of hardware.
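    The control idea behind such a "power devil" can be sketched in a few lines. The code below is a hedged illustration, not the thesis's implementation: the thermal model, band limits and load-unit count are invented, and a real version would read an on-chip temperature sensor and enable synthetic load (e.g., ring oscillators) on the FPGA.

    ```python
    # Bang-bang control loop keeping a die temperature inside a target band
    # by adding or shedding synthetic load units (all constants assumed).
    TARGET_LOW_C, TARGET_HIGH_C = 55.0, 60.0   # assumed target band
    MAX_HEATERS = 16                           # assumed number of load units

    def power_devil_step(temp_c, active):
        """Return the new number of active load units for one control step."""
        if temp_c < TARGET_LOW_C:
            return min(active + 1, MAX_HEATERS)  # too cold: add load
        if temp_c > TARGET_HIGH_C:
            return max(active - 1, 0)            # too hot: shed load
        return active                            # inside the band: hold

    # Toy closed-loop demo with a crude thermal model (purely illustrative).
    temp, active = 45.0, 0
    for step in range(60):
        active = power_devil_step(temp, active)
        temp += 0.8 * active - 0.5               # heating from load minus cooling
    print(f"final temperature: {temp:.1f} C with {active} active load units")
    ```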
  • Item (Open Access)
    Fault emulation for reconfigurable scan networks
    (2018) Schwachhofer, Denis
    Around the time of their standardization by the IEEE, interest in Reconfigurable Scan Networks (RSNs) sparked in both research and industry. Testing RSNs raises new challenges. To analyze and cope with these challenges, researchers need to perform fault simulation; likewise, industry has incorporated RSNs into its designs and needs fault simulation to test them. However, the runtime of fault simulation is very high due to the structure of RSNs. This thesis introduces a platform for fault emulation of RSNs and analyzes its feasibility. The speedup compared to fault simulation is presented, and advantages, limitations and possible optimizations are evaluated and discussed.
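    For readers unfamiliar with fault simulation, the toy sketch below shows the principle on a plain scan segment (RSNs add reconfigurable multiplexers that this model deliberately omits): each stuck-at fault is simulated one at a time against the fault-free response, which is exactly the serial loop that FPGA-based fault emulation parallelizes.

    ```python
    # Serial stuck-at fault simulation of a 4-flip-flop scan segment
    # (illustrative only; not the thesis's RSN model or platform).
    def shift_through_chain(pattern, length, stuck_at=None):
        """Shift 'pattern' through a scan chain; stuck_at = (position, value)
        pins one flip-flop to a constant after every shift."""
        chain = [0] * length
        out = []
        for bit in pattern:
            out.append(chain[-1])          # serial output before the shift
            chain = [bit] + chain[:-1]     # shift one position
            if stuck_at is not None:
                pos, val = stuck_at
                chain[pos] = val           # fault overrides the stored bit
        return out

    pattern = [1, 0, 1, 1, 0, 0, 1, 0]
    golden = shift_through_chain(pattern, 4)
    # Enumerate all stuck-at-0/1 faults; a fault is detected if its response
    # differs from the fault-free (golden) response.
    detected = [(pos, val)
                for pos in range(4) for val in (0, 1)
                if shift_through_chain(pattern, 4, (pos, val)) != golden]
    print("detected faults:", detected)
    ```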
  • Item (Open Access)
    Modeling of a multi-core MicroBlaze system at RTL and TLM abstraction levels in SystemC
    (2013) Eissa, Karim
    Transaction Level Modeling (TLM) has recently become a popular approach for modeling contemporary Systems-on-Chip (SoCs) at a higher abstraction level than the Register Transfer Level (RTL). In this thesis, a multi-core system based on the Xilinx MicroBlaze microprocessor is modeled at the RTL and TLM abstraction levels in SystemC. Both implemented models have cycle-accurate timing and are verified against the reference VHDL model using a VHDL/SystemC mixed-language simulation with ModelSim. Finally, performance measurements are carried out to evaluate the simulation speedup at the transaction level. Modeling of the MicroBlaze processor is based on a MicroBlaze Instruction Set Simulator (ISS) from SoCLib. A wrapper is therefore implemented to provide communication interfaces between the processor and the rest of the system, as well as to control the timing of the ISS operation so that cycle-accurate models are obtained. Furthermore, a local memory module based on Block Random Access Memories (BRAMs) is modeled to simulate a complete system consisting of a processor and a local memory.
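    The wrapper idea can be sketched conceptually as follows. This is a hedged stand-in, not the SoCLib ISS API or the thesis's SystemC code: the wrapper advances the instruction-set simulator one step per clock cycle and stalls it while a memory transaction is outstanding, which is what makes the surrounding model cycle-accurate.

    ```python
    # Conceptual cycle-accurate wrapper around a hypothetical ISS.
    class MicroBlazeISS:                      # hypothetical ISS stand-in
        def __init__(self): self.pc = 0
        def step(self):                       # execute one instruction,
            self.pc += 4                      # return its memory request
            return {"type": "ifetch", "addr": self.pc}

    class SimpleBus:                          # fixed 2-cycle latency model
        LATENCY = 2
        def ready(self, req):
            req["wait"] = req.get("wait", SimpleBus.LATENCY) - 1
            return req["wait"] <= 0

    class CycleAccurateWrapper:
        def __init__(self, iss, bus):
            self.iss, self.bus, self.pending = iss, bus, None
        def clock(self):                      # called once per simulated cycle
            if self.pending:                  # stall on outstanding transaction
                if self.bus.ready(self.pending):
                    self.pending = None
                return
            self.pending = self.iss.step()    # issue the next request

    iss = MicroBlazeISS()
    cpu = CycleAccurateWrapper(iss, SimpleBus())
    for cycle in range(9):                    # 3 cycles per instruction here
        cpu.clock()
    print("instructions retired:", iss.pc // 4)
    ```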
  • Item (Open Access)
    Test planning for low-power built-in self test
    (2014) Zoellin, Christian G.; Wunderlich, Hans-Joachim (Prof. Dr. rer. nat. habil.)
    Power consumption has become the most important issue in the design of integrated circuits. The power consumption during manufacturing or in-system test of a circuit can significantly exceed the power consumption during functional operation. The excessive power can lead to false test fails or can result in the permanent degradation or destruction of the device under test. Both effects can significantly impact the cost of manufacturing integrated circuits. This work targets power consumption during Built-In Self-Test (BIST). BIST is a Design-for-Test (DfT) technique that adds circuitry to a design such that it can be tested at speed with very little external stimulus. Test planning is the process of computing configurations of the BIST-based tests that optimize the power consumption within the constraints of test time and fault coverage. In this work, a test planning approach is presented that targets the Self-Test Using Multiple-input signature register and Parallel Shift-register sequence generator (STUMPS) DfT architecture. For this purpose, the STUMPS architecture is extended by clock gating in order to leverage the benefits of test planning. The clock of every chain of scan flip-flops can be disabled independently, reducing the switching activity of the flip-flops and their clock distribution to zero, as well as reducing the switching activity of the downstream logic. Further improvements are obtained by clustering the flip-flops of the circuit appropriately. The test planning problem is mapped to a set covering problem. The constraints for the set covering are extracted from fault simulation and the circuit structure such that any valid cover will test every targeted fault at least once. Divide-and-conquer is employed to reduce the computational complexity of optimization against a power consumption metric. The approach can be combined with any fault model; in this work, stuck-at and transition faults are considered. The approach effectively reduces the test power without increasing the test time or reducing the fault coverage. It has proven effective with academic benchmark circuits, several industrial benchmarks and the Synergistic Processing Element (SPE) of the Cell/B.E.™ Processor (Riley et al., 2005). Hardware experiments have been conducted based on the manufacturing BIST of the Cell/B.E.™ Processor and have shown the viability of the approach for industrial, high-volume, high-end designs. In order to improve the fault coverage for delay faults, high-frequency circuits are sometimes tested with complex clock sequences that generate tests with three or more at-speed cycles (rather than just two, as in traditional at-speed testing). To support such complex clock sequences, the test planning presented here has been extended by a circuit-graph-based approach for determining equivalent combinational circuits for the sequential logic. In addition, this work proposes a method based on dynamic frequency scaling of the shift clock that utilizes a given power envelope to its full extent. This way, the test time can be reduced significantly, in particular if high test coverage is targeted.
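    The set-covering view of test planning can be illustrated on toy data. The sketch below is a hedged illustration (the session data are invented, and the thesis computes optimal covers rather than using this greedy heuristic): each BIST session enables a subset of scan-chain clusters, disabled clusters are clock-gated, and a valid plan must cover every targeted fault at least once while keeping power low.

    ```python
    # Greedy set cover over BIST sessions (illustrative data and heuristic).
    sessions = {                   # session -> (faults covered, power cost)
        "s0": ({"f1", "f2", "f3"}, 3.0),
        "s1": ({"f3", "f4"}, 1.5),
        "s2": ({"f1", "f4", "f5"}, 2.5),
        "s3": ({"f2", "f5"}, 1.0),
    }
    all_faults = set().union(*(faults for faults, _ in sessions.values()))

    def greedy_test_plan(sessions, faults):
        plan, uncovered = [], set(faults)
        while uncovered:
            # Pick the session with the best newly-covered-faults-per-watt ratio.
            name = max(sessions, key=lambda s:
                       len(sessions[s][0] & uncovered) / sessions[s][1])
            plan.append(name)
            uncovered -= sessions[name][0]
        return plan

    plan = greedy_test_plan(sessions, all_faults)
    print("plan:", plan, "total power:", sum(sessions[s][1] for s in plan))
    ```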
  • Item (Open Access)
    Benchmarking the performance of portfolio optimization with QAOA
    (2022) Brandhofer, Sebastian; Braun, Daniel; Dehn, Vanessa; Hellstern, Gerhard; Hüls, Matthias; Ji, Yanjun; Polian, Ilia; Bhatia, Amandeep Singh; Wellens, Thomas
    We present a detailed study of portfolio optimization using different versions of the quantum approximate optimization algorithm (QAOA). For a given list of assets, the portfolio optimization problem is formulated as a quadratic binary optimization problem with a constraint on the number of assets contained in the portfolio. QAOA has been suggested as a possible candidate for solving this problem (and similar combinatorial optimization problems) more efficiently than classical computers in the case of a sufficiently large number of assets. However, the practical implementation of this algorithm requires careful consideration of several technical issues, not all of which are discussed in the present literature. The present article intends to fill this gap and thereby provide the reader with a useful guide for applying QAOA to the portfolio optimization problem (and similar problems). In particular, we discuss several possible choices of the variational form and of different classical algorithms for finding the corresponding optimized parameters. With a view to the application of QAOA on error-prone NISQ hardware, we also analyse the influence of statistical sampling errors (due to a finite number of shots) and gate and readout errors (due to imperfect quantum hardware). Finally, we define a criterion for distinguishing between ‘easy’ and ‘hard’ instances of the portfolio optimization problem.
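    The quadratic binary cost function that QAOA minimizes can be written down in a few lines. The sketch below follows common usage, not necessarily the paper's exact notation (mu: expected returns, sigma: covariance, q: risk aversion, B: required number of assets, lam: penalty weight are assumptions), and brute force over bitstrings stands in for QAOA's variational search.

    ```python
    # Quadratic binary formulation of constrained portfolio optimization.
    import numpy as np
    from itertools import product

    def portfolio_cost(x, mu, sigma, q, B, lam):
        x = np.asarray(x)
        risk = q * x @ sigma @ x            # quadratic risk term
        ret = mu @ x                        # linear expected-return term
        penalty = lam * (x.sum() - B) ** 2  # enforces exactly B chosen assets
        return risk - ret + penalty

    mu = np.array([0.10, 0.08, 0.12, 0.07])
    sigma = np.array([[0.05, 0.01, 0.02, 0.00],
                      [0.01, 0.04, 0.01, 0.01],
                      [0.02, 0.01, 0.06, 0.01],
                      [0.00, 0.01, 0.01, 0.03]])
    best = min(product((0, 1), repeat=4),
               key=lambda x: portfolio_cost(x, mu, sigma, q=0.5, B=2, lam=1.0))
    print("best portfolio (1 = asset selected):", best)
    ```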
  • Item (Open Access)
    A muscle model for injury simulation
    (2023) Millard, Matthew; Kempter, Fabian; Fehr, Jörg; Stutzig, Norman; Siebert, Tobias
    Car accidents frequently cause neck injuries that are painful, expensive, and difficult to simulate. The movements that lead to neck injury include phases in which the neck muscles are actively lengthened. Actively lengthened muscle can develop large forces that greatly exceed the maximum isometric force. Although Hill-type models are often used to simulate human movement, these models have no mechanism to develop large tensions during active lengthening. When used to simulate neck injury, a Hill model will underestimate the risk of injury to the muscles but may overestimate the risk of injury to the structures that the muscles protect. We have developed a musculotendon model that includes the viscoelasticity of attached crossbridges and has an active titin element. In this work we compare the proposed model to a Hill model by simulating the experiments of Leonard et al. [1], which feature extreme active lengthening.
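    The limitation the abstract describes can be seen directly in the Hill model's force-velocity curve. The sketch below uses textbook-style curve shapes and coefficients, not the parameters of the proposed model or of the article: the eccentric branch saturates near 1.4 times the maximum isometric force, so no matter how fast the muscle is actively lengthened, the predicted force stays capped.

    ```python
    # Minimal Hill-type active force sketch (illustrative coefficients).
    import numpy as np

    def force_length(l_norm):               # active force-length relation
        return np.exp(-((l_norm - 1.0) / 0.45) ** 2)

    def force_velocity(v_norm, fv_max=1.4): # v < 0: shortening, v > 0: lengthening
        if v_norm <= 0:                     # concentric branch (Hill hyperbola)
            return max(0.0, (1 + v_norm) / (1 - v_norm / 0.25))
        # Eccentric branch saturates at fv_max: the cap discussed above.
        return fv_max - (fv_max - 1) * np.exp(-3 * v_norm)

    def hill_active_force(a, l_norm, v_norm, f_iso=1.0):
        return a * f_iso * force_length(l_norm) * force_velocity(v_norm)

    for v in (0.0, 0.5, 2.0, 10.0):         # faster and faster lengthening
        print(f"v = {v:5.1f}   F = {hill_active_force(1.0, 1.0, v):.3f}")
    # Output plateaus at fv_max * f_iso, unlike the large experimentally
    # measured forces during extreme active lengthening.
    ```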
  • Item (Open Access)
    Efficient modeling and computation methods for robust AMS system design
    (2018) Gil, Leandro; Radetzki, Martin (Prof. Dr.-Ing.)
    This dissertation addresses the challenge of developing model-based design tools that better support the design of the mixed analog and digital parts of embedded systems. It focuses on the conception of efficient modeling and simulation methods that adequately support emerging system-level design methodologies. Starting from a deep analysis of the design activities, many weak points of today’s system-level design tools are identified. After considering the modeling and simulation of power electronic circuits for the design of low-energy embedded systems, a novel signal model that efficiently captures the dynamic behavior of analog and digital circuits is proposed and used to develop computation methods that enable fast and accurate system-level simulation of AMS systems. In order to support a stepwise system design refinement based on the essential system properties, behavior computation methods for linear and nonlinear analog circuits based on the novel signal model are presented and compared with existing numerical and analytical methods for circuit simulation regarding performance, accuracy and stability. The novel signal model, the method proposed to efficiently cope with the interaction of analog and digital circuits, and the new method for digital circuit simulation are the key contributions of this dissertation because they allow the concurrent state- and event-based simulation of analog and digital circuits. Using a synchronous data flow model of computation for scheduling the execution of the analog and digital model parts, very fast AMS system simulations are carried out. As the best behavior abstraction for analog and digital circuits can be selected without changing component interfaces, the implementation, validation and verification of AMS systems take advantage of the novel mixed-signal representation: changes of the modeling abstraction level do not affect the experiment setup. The second part of this work deals with the robust design of AMS systems and its verification. After defining a mixed-sensitivity-based robustness evaluation index for AMS control systems, a general robust design method leading to optimal controller tuning is presented. To avoid over-conservative AMS system designs, the proposed robust design optimization method considers parametric uncertainty and nonlinear model characteristics. The system properties in the frequency domain needed to evaluate the system robustness during parameter optimization are obtained from the proposed signal model. Further advantages of the presented signal model for computing control system performance evaluation indexes in the time domain are also investigated in combination with range arithmetic. A novel approach for capturing parameter correlations in range-arithmetic-based circuit behavior computation is proposed as a step towards a holistic modeling method for the robust design of AMS systems. The modeling and computation methods proposed to improve the support of design methodologies and tools for AMS systems are validated and evaluated in the course of this dissertation, considering many aspects of the modeling, simulation, design and verification of a low-power embedded system implementing Adaptive Voltage and Frequency Scaling (AVFS) for energy saving.
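    Why parameter correlations matter in range-arithmetic-based behavior computation can be shown with a generic affine-arithmetic sketch. This is a textbook technique, not the dissertation's specific method (the resistor example and deviation values are invented): representing a quantity as a center plus deviations tied to named noise symbols lets shared process variation cancel, where plain interval arithmetic would grossly overestimate the range.

    ```python
    # Affine arithmetic: x = x0 + sum(xi * eps_i), with eps_i in [-1, 1].
    class Affine:
        def __init__(self, center, terms=None):
            self.c = center
            self.t = dict(terms or {})      # noise symbol -> partial deviation
        def __add__(self, o):
            t = dict(self.t)
            for k, v in o.t.items():
                t[k] = t.get(k, 0.0) + v
            return Affine(self.c + o.c, t)
        def __sub__(self, o):
            t = dict(self.t)
            for k, v in o.t.items():
                t[k] = t.get(k, 0.0) - v
            return Affine(self.c - o.c, t)
        def interval(self):
            r = sum(abs(v) for v in self.t.values())
            return (self.c - r, self.c + r)

    # R1 and R2 share the process-variation symbol eps_p (fully correlated)
    # and each has an independent mismatch term.
    R1 = Affine(1000.0, {"eps_p": 50.0, "eps_m1": 5.0})
    R2 = Affine(1000.0, {"eps_p": 50.0, "eps_m2": 5.0})
    diff = R1 - R2                          # shared process variation cancels
    print("R1 - R2 interval:", diff.interval())   # (-10, 10), not (-110, 110)
    ```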