05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Item Open Access
Rigorous compilation for near-term quantum computers (2024)
Brandhofer, Sebastian; Polian, Ilia (Prof.)

Quantum computing promises an exponential speedup for computational problems in materials science, cryptography and drug design that are infeasible to solve on traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations, which compilation methods targeting near-term quantum computers must incorporate to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements, as they search the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods have limited applicability and typically focus on a single aspect of the imposed requirements, e.g. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are then incorporated into rigorous compilation methods that address every aspect of the imposed requirements: the number of qubits, the connectivity of qubits, the duration, and the incurred errors. The developed rigorous compilation methods are evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. Experimental results demonstrate that the developed methods extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, as well as by reducing the duration and incurred errors of the performed computations. Furthermore, the developed methods extend to quantum circuit partitioning, qubit reuse, and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime reduced the required number of qubits by up to 5x and the result error by up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
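As background to the qubit-reuse result above: once a qubit has been measured, it can be reset and reassigned to a later logical qubit, much like register allocation over live ranges in classical compilers. The following is a minimal, hypothetical Python sketch of that idea (a greedy live-range allocator with invented names); the dissertation's rigorous method solves this problem exactly rather than greedily.

    # Greedy sketch of runtime qubit reuse via live-range analysis.
    # Illustrative only; the thesis's rigorous method is exact, not greedy.
    import heapq

    def assign_physical_qubits(live_ranges):
        # live_ranges: (start, end) per logical qubit; after `end` the qubit
        # is measured and its hardware qubit may be reset and reused.
        order = sorted(range(len(live_ranges)), key=lambda q: live_ranges[q][0])
        free, busy = [], []            # free physical qubits / (end, physical)
        mapping, n_phys = {}, 0
        for q in order:
            start, end = live_ranges[q]
            while busy and busy[0][0] < start:   # reclaim measured qubits
                heapq.heappush(free, heapq.heappop(busy)[1])
            if free:
                p = heapq.heappop(free)          # reuse a freed qubit
            else:
                p = n_phys                       # allocate a new one
                n_phys += 1
            mapping[q] = p
            heapq.heappush(busy, (end, p))
        return mapping, n_phys

    # Four logical qubits whose live ranges fit on two physical qubits:
    print(assign_physical_qubits([(0, 2), (0, 3), (3, 5), (4, 6)]))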
Item Open Access
Stable and mass-conserving high-dimensional simulations with the sparse grid combination technique for full HPC systems and beyond (2024)
Pollinger, Theresa; Pflüger, Dirk (Prof. Dr.)

In light of the ongoing climate crisis, mastering controlled plasma fusion has the potential to be one of the pivotal scientific achievements of the 21st century. To understand the turbulent fields in confined fusion devices, simulation has been and continues to be both an asset and a challenge. The main limiting factor for large-scale high-fidelity predictive simulations is the curse of dimensionality, which dominates all grid-based discretizations of plasmas based on the Vlasov-Poisson and Vlasov-Maxwell equations. In the full formulation, these result in six-dimensional grids with fine scales that need to be resolved, leading to a potentially intractable number of degrees of freedom. Typical approaches to this problem - coordinate transformations such as gyrokinetics, grid adaptation, restriction to limited resolutions - do not directly address the curse of dimensionality, but rather work around it. The sparse grid combination technique, which forms the center of this work, is a multiscale approach that alleviates the curse of dimensionality for time-stepping simulations: multiple regular grid-based simulations are run and update each other's information throughout the course of simulation time. The present thesis improves upon the former state of the art of the combination technique in three ways: introducing conservation of mass and numerical stability through better-suited multiscale basis functions, optimizing the code for large-scale HPC systems, and extending the combination technique to the widely-distributed setting. Firstly, this thesis analyzes the often-used hierarchical hat function from the viewpoint of biorthogonal wavelets, which allows the hierarchical hat function to be replaced by other multiscale functions (such as the mass-conserving CDF wavelets) in a straightforward manner. Numerical studies presented in the thesis show that this not only introduces conservation but also increases accuracy and avoids the numerical instabilities that were previously a major roadblock for large-scale Vlasov simulations with the combination technique. Secondly, the open-source framework DisCoTec was extended to scale the combination technique up to the available memory of entire supercomputing systems. DisCoTec is designed to wrap the combination technique around existing grid-based solvers and draws on the inherent parallelism of the combination technique. Among several other contributions, different communication-avoiding multiscale reduction schemes were developed and implemented in DisCoTec as part of this work. The scalability of the approach is demonstrated by an extensive set of measurements in this thesis: DisCoTec is shown to scale up to the full system size of four German supercomputers, including the three CPU-based Tier-0/Tier-1 systems. Thirdly, the combination technique was further extended to the widely-distributed setting, where two HPC systems synchronously run a joint simulation. This is enabled by file transfer as well as sophisticated algorithms for assigning the different simulation instances to the systems, two of which were developed as part of this work. The resulting drastic reductions in communication volume achieve, for the first time, tolerable transfer times for combination technique simulations spanning different HPC systems. These three advances - improved numerical properties, efficient scaling up to full system sizes, and the possibility to extend a simulation beyond a single system - show the sparse grid combination technique to be a promising approach for future high-fidelity simulations of higher-dimensional problems such as plasma turbulence.
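For background, the combination technique approximates a sparse grid solution by a weighted sum of solutions f_l computed on coarse anisotropic full grids with level vector l. In one common convention (level indices l_i >= 1; the thesis treats generalizations of this standard form), the combined solution of level n in d dimensions reads

    f_n^{(c)} \;=\; \sum_{q=0}^{d-1} (-1)^q \binom{d-1}{q}
        \sum_{|\boldsymbol{\ell}|_1 \,=\, n + (d-1) - q} f_{\boldsymbol{\ell}},
        \qquad \ell_i \ge 1 .

In two dimensions this amounts to adding the solutions on the diagonal l_1 + l_2 = n + 1 and subtracting those on l_1 + l_2 = n, so each component grid stays small while the combination recovers sparse grid accuracy.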
Item Open Access
Measurement of transient overvoltages by capacitive electric field sensors (2024)
Probst, Felipe L.; Beltle, Michael; Tenbohlen, Stefan

The accurate measurement and investigation of electromagnetic transients are becoming more important, especially with the increasing integration of renewable energy sources into the power grid: these sources introduce new transient phenomena due to the extensive use of power electronics. Capturing such fast transients requires measurement devices with a broadband response. This paper presents a measurement system based on capacitive electric field sensors for measuring transient overvoltages in high-voltage substations. The concept and design of the measurement system are first presented and then validated by tests performed in a high-voltage laboratory. Afterwards, two different calibration techniques are discussed: the simplified method (SM) and the coupling capacitance compensation (CCC) method. Finally, three recorded transients are evaluated using both calibration methods. The investigation revealed that the SM tends to overestimate the maximum overvoltage, highlighting the CCC method as the more suitable approach for calibrating transient overvoltage measurements. The measurement system has been validated in various measurements and can serve as an efficient and flexible solution for the long-term monitoring of transient overvoltages in high-voltage substations.
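As idealized background (a textbook divider model, not the paper's actual calibration equations): a capacitive field sensor mounted near a high-voltage conductor forms a capacitive divider between the coupling capacitance C_c (conductor to electrode) and the sensor capacitance C_s (electrode to ground, including the input stage), giving

    \frac{V_\mathrm{sens}}{V_\mathrm{line}}
        \approx \frac{C_c}{C_c + C_s}
        \approx \frac{C_c}{C_s} \qquad (C_c \ll C_s) .

For a purely capacitive load this ratio is frequency-independent, which is what makes such sensors broadband. Calibration then amounts to determining the ratio; since C_c depends on the installation geometry, a method that compensates for it, as the name of the CCC method suggests, can be expected to track the true overvoltage more closely than a fixed simplified factor, consistent with the paper's finding that the SM overestimates the maximum overvoltage.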
Item Open Access
Resilience of quantum optimization algorithms (2024)
Ji, Yanjun; Polian, Ilia (Prof. Dr.)

Quantum optimization algorithms (QOAs) show promise in surpassing classical methods for solving complex problems. However, their practical application is limited by the sensitivity of quantum systems to noise. This study addresses this challenge by investigating the resilience of QOAs and developing strategies to enhance their performance and robustness on noisy quantum computers. We begin by establishing an evaluation framework to assess the performance of QOAs under various conditions, including simulated noise-free and error-modeled environments as well as real noisy hardware, providing a foundation for guiding the development of enhancement strategies. We then propose techniques to improve the performance of algorithms on near-term quantum devices characterized by limited qubit connectivity and noisy operations. Our study introduces an effective compilation process that maximizes the utilization of classical and quantum resources. To overcome the restricted connectivity of hardware, we develop an algorithm-oriented qubit mapping approach that bridges the gap between heuristic and exact methods, providing scalable and optimal solutions. Additionally, we demonstrate, for the first time, selective optimization of quantum circuits on real hardware by optimizing only gates implemented with low-quality native gates, providing significant insights for large-scale quantum computing. We also investigate error mitigation strategies and their dependence on hardware features and algorithm implementation details, emphasizing the synergistic effects of error mitigation and circuit design. While error mitigation can suppress the effects of noise, hardware quality and circuit design are ultimately more critical for achieving high performance. Building upon these insights, we explore the co-optimization of algorithm design and hardware implementation to achieve optimal performance and resilience. By optimizing gate sequences and parameters at the algorithmic level and minimizing error-prone two-qubit gates during compilation, we demonstrate significant improvements in QOA performance. Finally, we explore the practical application of QOAs to real-world problems, emphasizing the importance of optimizing parameters in problem instances to identify optimal solutions. With extensive experiments conducted on real devices, this dissertation makes a substantial contribution to the field of quantum optimization, providing both theoretical foundations and practical strategies for addressing the challenges posed by near-term quantum hardware. Our findings pave the way for practical quantum computing applications and for unlocking the full potential of QOAs.
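To illustrate the flavor of exact qubit mapping (purely a toy example with an invented gate list and coupling map, not the algorithm-oriented method developed in the thesis): an exact mapper can enumerate all logical-to-physical assignments and minimize the number of two-qubit gates that land on non-adjacent hardware qubits, since each such gate would require SWAP insertions.

    # Toy exact search for an initial qubit mapping by brute force.
    # Illustrative only; real algorithm-oriented mapping is far more scalable.
    from itertools import permutations

    coupling = {(0, 1), (1, 2), (2, 3)}          # linear 4-qubit device
    gates = [(0, 1), (0, 2), (1, 3), (2, 3)]     # logical two-qubit gates

    def adjacent(a, b):
        return (a, b) in coupling or (b, a) in coupling

    def best_mapping(gates, n_phys):
        # Cost: number of gates on non-adjacent physical qubits (SWAP proxy).
        def cost(perm):
            return sum(not adjacent(perm[a], perm[b]) for a, b in gates)
        return min(permutations(range(n_phys)), key=cost)

    perm = best_mapping(gates, 4)
    print("logical->physical:", dict(enumerate(perm)))

Because the gate graph here contains a cycle that a linear coupling map cannot embed, the optimum still leaves one gate non-adjacent; the exhaustive search guarantees no assignment does better.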
Item Open Access
Adaptive error control for stratospheric long-distance optical links (2024)
Parthasarathy, Swaminathan; Kirstädter, Andreas (Prof. Dr.-Ing.)

Free-space optical (FSO) communication plays a crucial role in aerospace technology, utilizing lasers to establish high-speed wireless connections over long distances. FSO surpasses conventional RF wireless technology in various aspects and supports high-data-rate connectivity for services such as Internet access, data transfer, voice communication, and image transfer. High-Altitude Platforms (HAPs) have emerged as ideal hosts for FSO communication networks, offering ultra-high data rates for applications like high-speed Internet, video conferencing, telemedicine, smart cities, and autonomous driving. FSO via HAPs ensures minimal latency, making it suitable for real-time tasks like remote surgery and autonomous vehicle control. The swift, long-distance communication links with low delays make FSO-equipped HAPs ideal for RF-congested areas, providing cost-effective solutions in remote regions and contributing to environmental monitoring. This thesis explores the use of adaptive code-rate Hybrid Automatic Repeat Request (HARQ) methods and channel state information (CSI) to improve the transmission efficiency of FSO links between HAPs. The study addresses channel impairments such as atmospheric turbulence and static pointing errors, focusing on the weak-fluctuation regime of atmospheric turbulence, and examines the reciprocal behavior of bidirectional FSO channels to improve performance, providing evidence of channel reciprocity. The research proposes HARQ, an adaptive Reed-Solomon (RS) code-rate technique, and different CSI types to address these impairments, and evaluates these methods through simulations of various scenarios. The results provide insight into the efficiency of HARQ protocols on inter-HAP FSO links, the importance of different CSI types for adaptive-rate HARQ, and possible ways to improve system efficiency. The thesis examines the channel model for inter-HAP FSO links in detail, taking atmospheric conditions and static pointing errors into account; the channel is modeled as a lognormal fading channel in the weak-fluctuation regime. The principle of channel reciprocity and the measures used to quantify it are discussed, providing a foundation for the subsequent investigations. Forward Error Correction (FEC) schemes, with a specific emphasis on the Reed-Solomon scheme, and various Automatic Repeat reQuest (ARQ) schemes are thoroughly examined. A meticulous comparison of different ARQ schemes shows that Selective Repeat ARQ (SR-ARQ) is the most efficient for high-error-rate channels, making it the preferred choice for inter-HAP FSO channels, whereas Stop and Wait ARQ (SW-ARQ) and Go-Back-N ARQ (GBN-ARQ) are less suitable. An approach is introduced that leverages various types of CSI to adjust the Reed-Solomon FEC code rate; four types are employed: perfect CSI (P-CSI), reciprocal CSI (R-CSI), delayed CSI (D-CSI), and fixed mean CSI (F-CSI). The adaptation of the Reed-Solomon code rate, combined with Selective Repeat ARQ, is explored, and the optimal power selection is identified through rigorous analysis. Simulation models built on OMNeT++ are presented, covering the inter-HAP channel and an event-based Selective Repeat HARQ model. The study demonstrates reciprocity on the longest recorded ground-to-ground bidirectional FSO link, holding promise for mitigating signal scintillation caused by atmospheric turbulence, and evaluates the performance of different ARQ protocols and adaptive HARQ schemes in inter-HAP FSO communication systems. The results show how channel state information, atmospheric turbulence, and pointing errors affect system performance, and suggest ways to improve system efficiency, such as CSI prediction and soft combining. These findings offer valuable insights for the design and optimization of ARQ and HARQ schemes in inter-HAP FSO communication systems and suggest promising avenues for future research.
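To make the adaptation idea concrete (a hedged sketch; the code-rate thresholds, symbol counts, and fading parameters below are illustrative assumptions, not values from the thesis): the transmitter estimates the channel gain from whichever CSI type is available and selects a Reed-Solomon code rate accordingly, using more parity symbols when the channel is weak.

    # Sketch of CSI-driven Reed-Solomon code-rate adaptation over a
    # lognormal fading channel (weak-turbulence regime). Thresholds and
    # parameters are invented for illustration.
    import math, random

    def lognormal_gain(sigma_x=0.1):
        # Log-amplitude X ~ N(-sigma_x^2, sigma_x^2) gives a power-
        # normalized lognormal gain, E[exp(2X)] = 1.
        x = random.gauss(-sigma_x ** 2, sigma_x)
        return math.exp(2 * x)

    def pick_rs_rate(estimated_gain, n=255):
        # Stronger channel -> higher code rate (fewer parity symbols).
        for threshold, k in [(1.05, 239), (0.95, 223), (0.85, 191)]:
            if estimated_gain >= threshold:
                return n, k
        return n, 127                    # weakest channel: rate ~0.5

    random.seed(1)
    gain = lognormal_gain()              # with R-CSI, the reverse-link gain
    n, k = pick_rs_rate(gain)            # would be used as this estimate
    print(f"gain={gain:.3f} -> RS({n},{k}), rate {k/n:.2f}")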
Item Open Access
Sheet conductance of laser-doped layers using a Gaussian laser beam: an effective depth approximation (2024)
Hassan, Mohamed; Werner, Jürgen H.

Laser doping of silicon with pulsed and scanned laser beams is now well established for obtaining defect-free, locally selective doped regions with tailored doping profiles and high spatial resolution. Picking the correct laser parameters (pulse power, pulse shape, and scanning speed) determines the depth and uniformity of the melted region geometry. This work performs laser doping on the surface of single-crystalline silicon, using a pulsed and scanned laser with a Gaussian intensity distribution; a deposited boron oxide precursor layer serves as the doping source. Increasing the local inter-pulse distance x_irr between subsequent pulses causes a quadratic decrease of the sheet conductance G_sh of the doped surface layer. Here, we present a simple geometric model that explains all experimental findings. The quadratic dependence stems from the approximately parabolic shape of the individual melted regions directly after the laser beam has hit the Si surface: for a parabolic melt cross-section of maximum depth d_0 and half-width r, two neighboring pulses spaced x_irr apart intersect at a depth d_ch ≈ d_0 [1 - (x_irr/(2r))^2], which is quadratic in x_irr. The sheet resistance depends critically on this intersection depth d_ch and on the overlap distance x_irr between two subsequent, neighboring pulses. Since d_ch depends quadratically on the pulse distance x_irr, it also depends on the scanning speed v_scan of the laser. Finally, we present a simple model that reduces the complicated three-dimensional, laterally inhomogeneous doping profile to an effective two-dimensional, homogeneously doped layer whose thickness varies with the scanning speed.

Item Open Access
Multiplexed pseudo-deterministic photon source with asymmetric switching elements (2024)
Brandhofer, Sebastian; Myers, Casey R.; Devitt, Simon; Polian, Ilia

Item Open Access
Ge-on-Si single-photon avalanche diode using a double mesa structure (2024)
Wanitzek, Maurice; Schulze, Jörg; Oehme, Michael