Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results (4 items)
Item Open Access: Rigorous compilation for near-term quantum computers (2024). Brandhofer, Sebastian; Polian, Ilia (Prof.)

Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible for traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by high error rates, a relatively small number of qubits and low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations; compilation methods targeting near-term quantum computers must incorporate them in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored to address these requirements: they search the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods have limited applicability and typically focus on a single aspect of the imposed requirements, e.g. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are then included in rigorous compilation methods that address each aspect of the imposed requirements, i.e. the number of qubits, the connectivity of qubits, the duration and the incurred errors. The developed rigorous compilation methods are evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. Experimental results demonstrate that the developed methods extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, as well as with reduced duration and incurred errors. Furthermore, the developed rigorous compilation methods extend to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime reduced the required number of qubits by up to 5x and the result error by up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
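As an illustration of the qubit-reuse idea mentioned in the abstract, the sketch below uses Qiskit's mid-circuit measurement and reset to fold a three-qubit circuit onto two physical qubits once the first qubit has been measured. The circuit and gate choices are hypothetical examples; the thesis formulates reuse as an exact optimization problem, which is not reproduced here.

```python
from qiskit import QuantumCircuit

# Original circuit: three qubits, but q0 is measured before q2 is
# ever used, so its physical wire can be reused for q2.
orig = QuantumCircuit(3, 2)
orig.h(0)
orig.cx(0, 1)
orig.measure(0, 0)
orig.h(2)
orig.cx(2, 1)
orig.measure(2, 1)

# Reuse version: two qubits suffice. After measuring q0, reset it
# and let the freed wire play the role of the former q2.
reused = QuantumCircuit(2, 2)
reused.h(0)
reused.cx(0, 1)
reused.measure(0, 0)
reused.reset(0)   # free the wire at runtime
reused.h(0)       # former q2 now lives on the reset wire
reused.cx(0, 1)
reused.measure(0, 1)
```

Both circuits produce the same measurement statistics, but the second needs one fewer qubit, which is the effect the thesis's method achieves systematically and optimally.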
Item Open Access: Quantum support vector machines of high-dimensional data for image classification problems (2023). Vikas Singh, Rajput

This thesis presents a comprehensive investigation into the efficient utilization of Quantum Support Vector Machines (QSVMs) for image classification on high-dimensional data. The primary focus is on analyzing the standard MNIST dataset and the high-dimensional dataset provided by TRUMPF SE + Co. KG. To evaluate the performance of QSVMs against classical Support Vector Machines (SVMs) on high-dimensional data, a benchmarking framework is proposed. In the current Noisy Intermediate-Scale Quantum (NISQ) era, classical preprocessing is a crucial step in preparing data for classification tasks on NISQ machines. Various dimensionality reduction techniques, such as principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and convolutional autoencoders, are explored to preprocess the image datasets. Convolutional autoencoders are found to outperform the other methods when calculating quantum kernels on a small dataset. Furthermore, the benchmarking framework systematically analyzes different quantum feature maps by varying hyperparameters such as the number of qubits, the use of parameterized gates, the number of features encoded per qubit line, and the use of entanglement. Quantum feature maps demonstrate higher accuracy than classical feature maps for both the TRUMPF and MNIST data. Among the feature maps, one using Rz and Ry gates with two features per qubit, without entanglement, achieves the highest accuracy. The study also reveals that increasing the number of qubits improves accuracy on the real-world TRUMPF dataset. Additionally, the choice of the quantum kernel function significantly impacts classification results, with the projected quantum kernel outperforming the fidelity quantum kernel. Subsequently, the study examines the Kernel Target Alignment (KTA) optimization method to improve the pipeline; however, for the chosen feature map and dataset, KTA does not provide significant benefits. In summary, the results highlight the potential for achieving quantum advantage by optimizing all components of the quantum classifier framework. Selecting appropriate dimensionality reduction techniques, quantum feature maps, and quantum kernel methods is crucial for enhancing classification accuracy. Further research is needed to address challenges related to kernel optimization and to fully leverage the capabilities of quantum computing in machine learning applications.
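To make the kernel pipeline concrete, here is a minimal NumPy/scikit-learn sketch of a fidelity-type quantum kernel built from the entanglement-free Ry/Rz feature map described above (two features per qubit). The function names, qubit count and toy data are assumptions for illustration, not the thesis's benchmarking framework.

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Statevector of the entanglement-free feature map: per qubit,
    Ry(x[2i]) then Rz(x[2i+1]) applied to |0>, two features per qubit."""
    state = np.array([1.0 + 0j])
    for i in range(0, len(x), 2):
        a, b = x[i], x[i + 1]
        # Ry(a)|0> = [cos(a/2), sin(a/2)]; Rz(b) adds opposite phases.
        q = np.array([np.exp(-1j * b / 2) * np.cos(a / 2),
                      np.exp(+1j * b / 2) * np.sin(a / 2)])
        state = np.kron(state, q)
    return state

def fidelity_kernel(X, Y):
    """K[i, j] = |<phi(x_i)|phi(y_j)>|^2 (fidelity-type quantum kernel)."""
    SX = np.array([feature_state(x) for x in X])
    SY = np.array([feature_state(y) for y in Y])
    return np.abs(SX.conj() @ SY.T) ** 2

# Illustrative 4-feature data (2 qubits); labels are arbitrary toy values.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, np.pi, size=(20, 4))
y_train = (X_train[:, 0] + X_train[:, 2] > np.pi).astype(int)

svm = SVC(kernel="precomputed")
svm.fit(fidelity_kernel(X_train, X_train), y_train)
```

In practice the statevectors would come from a quantum device or simulator rather than a closed-form expression, and the dimensionality-reduced image features would replace the random inputs.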
Item Open Access: Resilience of quantum optimization algorithms (2024). Ji, Yanjun; Polian, Ilia (Prof. Dr.)

Quantum optimization algorithms (QOAs) show promise in surpassing classical methods for solving complex problems. However, their practical application is limited by the sensitivity of quantum systems to noise. This study addresses this challenge by investigating the resilience of QOAs and developing strategies to enhance their performance and robustness on noisy quantum computers. We begin by establishing an evaluation framework to assess the performance of QOAs under various conditions, including simulated noise-free and error-modeled environments as well as real noisy hardware, providing a foundation for guiding the development of enhancement strategies. We then propose innovative techniques to improve the performance of algorithms on near-term quantum devices characterized by limited qubit connectivity and noisy operations. Our study introduces an effective compilation process that maximizes the utilization of classical and quantum resources. To overcome the restricted connectivity of hardware, we develop an algorithm-oriented qubit mapping approach that bridges the gap between heuristic and exact methods, providing scalable and optimal solutions. Additionally, we demonstrate, for the first time, selective optimization of quantum circuits on real hardware by optimizing only gates implemented with low-quality native gates, providing significant insights for large-scale quantum computing. We also investigate error mitigation strategies and their dependence on hardware features and algorithm implementation details, emphasizing the synergistic effects of error mitigation and circuit design. While error mitigation can suppress the effects of noise, hardware quality and circuit design are ultimately more critical for achieving high performance. Building upon these insights, we explore the co-optimization of algorithm design and hardware implementation to achieve optimal performance and resilience. By optimizing gate sequences and parameters at the algorithmic level and minimizing error-prone two-qubit gates during compilation, we demonstrate significant improvements in QOA performance. Finally, we explore the practical application of QOAs to real-world problems, emphasizing the importance of optimizing parameters in problem instances to identify optimal solutions. With extensive experiments conducted on real devices, this dissertation makes a substantial contribution to the field of quantum optimization, providing both theoretical foundations and practical strategies for addressing the challenges posed by near-term quantum hardware. Our findings pave the way for practical quantum computing applications and help unlock the full potential of QOAs.
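The following Qiskit sketch illustrates the general principle that compilation choices govern the number of error-prone two-qubit gates: an illustrative QAOA-like ring circuit is transpiled onto a line-connected device at two optimization levels and the resulting CX counts are compared. The circuit, angles and coupling map are assumptions; this shows the standard transpiler, not the thesis's algorithm-oriented mapping method.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

# Toy QAOA-like circuit: a ring of ZZ interactions plus a mixer layer.
qc = QuantumCircuit(4)
qc.h(range(4))
for i in range(4):
    qc.rzz(0.7, i, (i + 1) % 4)   # cost layer (illustrative angle)
qc.rx(0.4, range(4))              # mixer layer (illustrative angle)

# Line connectivity: routing the ring requires extra SWAPs, i.e. extra CXs.
line = CouplingMap.from_line(4)
for level in (0, 3):
    out = transpile(qc, coupling_map=line,
                    basis_gates=["cx", "rz", "sx", "x"],
                    optimization_level=level, seed_transpiler=1)
    print(f"optimization_level={level}: cx count =",
          out.count_ops().get("cx", 0))
```

Since two-qubit gates dominate the error budget on current hardware, the CX count printed here is a rough proxy for the resilience gains that better mapping and compilation deliver.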
Item Open Access: Multi-material blind beam hardening correction in near real-time based on non-linearity adjustment of projections (2023). Alsaffar, Ammar; Sun, Kaicong; Simon, Sven

Beam hardening (BH) is one of the major artifacts that severely reduces the quality of computed tomography (CT) imaging. This BH artifact arises from the polychromatic nature of the X-ray source and causes cupping and streak artifacts. This work proposes a fast and accurate BH correction method that requires no prior knowledge of the materials and corrects first- and higher-order BH artifacts. This is achieved by performing a wide sweep over the material, based on an experimentally measured look-up table, to obtain the closest estimate of the material. The non-linearity effect of the BH is then corrected by adding the difference between the estimated monochromatic and the polychromatic simulated projections of the segmented image. The estimated polychromatic projection is derived using the least-squares estimation (LSE) method by minimizing the difference between the experimental projection and a linear combination of simulated polychromatic projections. As a result, an accurate non-linearity correction term is obtained, which leads to an accurate BH correction result. The simulated projections in this work are computed using a multi-GPU-accelerated forward projection model, which ensures fast BH correction in near real-time. To evaluate the proposed BH correction method, we conducted extensive experiments on real-world CT data. The proposed method is shown to yield images with an improved contrast-to-noise ratio (CNR) compared to images corrected only for scatter artifacts and to BH-corrected images produced by the state-of-the-art empirical BH correction method.
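A minimal NumPy sketch of the projection-domain correction step described above: fit weights for a linear combination of simulated polychromatic projections by least squares, then add the monochromatic/polychromatic difference to the measurement. Array names and shapes are assumptions, and the multi-GPU forward projector is out of scope here.

```python
import numpy as np

def bh_correct(p_exp, P_poly, p_mono):
    """Blind BH correction of one projection (sketch).

    p_exp  : measured polychromatic projection, shape (m,)
    P_poly : simulated polychromatic projections of candidate materials
             from the look-up table sweep, shape (m, k), one per column
    p_mono : simulated monochromatic projection of the segmented image, (m,)
    """
    # LSE step: weights w minimizing ||P_poly @ w - p_exp||_2.
    w, *_ = np.linalg.lstsq(P_poly, p_exp, rcond=None)
    p_poly_est = P_poly @ w
    # Non-linearity adjustment: add the mono/poly difference term.
    return p_exp + (p_mono - p_poly_est)

# Toy usage with random stand-in data (m = 256 detector pixels, k = 3).
rng = np.random.default_rng(1)
P = rng.random((256, 3))
mono = rng.random(256)
measured = P @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.random(256)
corrected = bh_correct(measured, P, mono)
```

Applied to every projection before reconstruction, this adjustment linearizes the attenuation data, which is what removes the cupping and streak artifacts in the reconstructed volume.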