05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6


Search Results

Now showing 1 - 2 of 2
  • Item (Open Access)
    Rigorous compilation for near-term quantum computers
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises exponential speedups for computational problems in materials science, cryptography and drug design that are intractable for classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by high error rates, a relatively low number of qubits and low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations; compilation methods targeting near-term quantum computers must incorporate them to ensure compatibility and to yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements: they explore the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. Previous rigorous compilation methods, however, demonstrate limited applicability and typically focus on only one aspect of the imposed requirements, e.g. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are then incorporated into rigorous compilation methods that address each aspect of the imposed requirements, i.e. the number of qubits, the connectivity of qubits, the duration and the incurred errors. The developed rigorous compilation methods are evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology.
    Experimental results demonstrate that the developed rigorous compilation methods extend the computational reach of near-term quantum computers: they generate quantum computations with reduced requirements on the number and connectivity of qubits, and they reduce the duration and the incurred errors of the performed quantum computations. Furthermore, the developed methods extend the applicability of rigorous compilation to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime yielded a reduction in the required number of qubits of up to 5x and in the result error of up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes, reducing the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
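The qubit-reuse idea above can be illustrated with a toy model (not the thesis's exact formulation, which is a rigorous, exactly-optimal method): if each logical qubit is only "live" between its first and last gate, a physical qubit can be measured, reset and reassigned once its logical qubit is done, so the number of physical qubits needed equals the maximum number of simultaneously live logical qubits. A minimal sketch in Python, with hypothetical gate-time lifetimes:

```python
def min_qubits_with_reuse(lifetimes):
    """Minimum physical qubits when a qubit can be measured, reset and
    reassigned after its logical qubit's last gate (interval overlap)."""
    events = []
    for start, end in lifetimes:
        events.append((start, +1))   # logical qubit becomes live
        events.append((end, -1))     # measured out: physical qubit freed
    # At equal times, process releases before acquisitions to allow reuse.
    events.sort(key=lambda e: (e[0], e[1]))
    needed = live = 0
    for _, delta in events:
        live += delta
        needed = max(needed, live)
    return needed

# Three logical qubits, but the third starts only after the first two end
print(min_qubits_with_reuse([(0, 2), (1, 3), (3, 5)]))  # 2
```

The actual method in the thesis additionally exploits circuit structure to decide which measurements and resets are safe; this sketch only shows why overlap of qubit lifetimes bounds the physical qubit count.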
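Limited connectivity can likewise be made concrete. On hardware where two-qubit gates act only on adjacent qubits, a CX between distant qubits requires SWAP insertions along a path in the coupling graph. The sketch below uses a simple shortest-path heuristic as a baseline only; the rigorous methods described above instead explore the solution space exactly. The line topology and qubit indices are illustrative:

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """BFS over the qubit coupling graph; returns the qubit path src -> dst."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in coupling[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def route_cx(coupling, ctrl, tgt):
    """SWAPs that move ctrl next to tgt; the CX then acts on the last edge."""
    path = shortest_path(coupling, ctrl, tgt)
    return [(path[i], path[i + 1]) for i in range(len(path) - 2)]

# A 4-qubit line: 0 - 1 - 2 - 3
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(route_cx(line, 0, 3))  # [(0, 1), (1, 2)]
```

Each inserted SWAP costs extra gates and errors, which is why minimizing such insertions (or changing the topology itself, as in the neutral-atom method above) matters for result fidelity.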
  • Item (Open Access)
    Quantum support vector machines of high-dimensional data for image classification problems
    (2023) Rajput, Vikas Singh
    This thesis presents a comprehensive investigation into the efficient utilization of Quantum Support Vector Machines (QSVMs) for image classification on high-dimensional data. The primary focus is on analyzing the standard MNIST dataset and a high-dimensional dataset provided by TRUMPF SE + Co. KG. To evaluate the performance of QSVMs against classical Support Vector Machines (SVMs) on high-dimensional data, a benchmarking framework is proposed. In the current Noisy Intermediate-Scale Quantum (NISQ) era, classical preprocessing of the data is a crucial step in preparing it for classification tasks on NISQ machines. Various dimensionality reduction techniques, such as principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and convolutional autoencoders, are explored to preprocess the image datasets. Convolutional autoencoders are found to outperform the other methods when quantum kernels are calculated on a small dataset. Furthermore, the benchmarking framework systematically analyzes different quantum feature maps by varying hyperparameters such as the number of qubits, the use of parameterized gates, the number of features encoded per qubit line, and the use of entanglement. Quantum feature maps demonstrate higher accuracy than classical feature maps on both the TRUMPF and MNIST data. Among the feature maps, one using Rz and Ry gates with two features per qubit, without entanglement, achieves the highest accuracy. The study also reveals that increasing the number of qubits improves accuracy on the real-world TRUMPF dataset. Additionally, the choice of quantum kernel function significantly impacts classification results, with the projected quantum kernel outperforming the fidelity quantum kernel. Subsequently, the study examines the Kernel Target Alignment (KTA) optimization method to improve the pipeline; however, for the chosen feature map and dataset, KTA does not provide significant benefits.
In summary, the results highlight the potential for achieving quantum advantage by optimizing all components of the quantum classifier framework. Selecting appropriate dimensionality reduction techniques, quantum feature maps, and quantum kernel methods is crucial for enhancing classification accuracy. Further research is needed to address challenges related to kernel optimization and fully leverage the capabilities of quantum computing in machine learning applications.
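As an illustration of the kind of feature map this abstract describes (Ry and Rz gates, two features per qubit line, no entanglement) and of a fidelity-type quantum kernel, here is a minimal classical simulation; the concrete gate order and feature pairing are assumptions, not the thesis's exact circuit. Because a no-entanglement feature map prepares a product state, the overlap factorizes per qubit, so K(x, y) = |⟨φ(x)|φ(y)⟩|² can be computed one qubit line at a time:

```python
import cmath
import math

def qubit_state(f1, f2):
    """Amplitudes of Rz(f2) Ry(f1) |0>: two features encoded on one qubit line."""
    return (cmath.exp(-1j * f2 / 2) * math.cos(f1 / 2),
            cmath.exp(1j * f2 / 2) * math.sin(f1 / 2))

def fidelity_kernel(x, y):
    """Fidelity-type kernel K(x, y) = |<phi(x)|phi(y)>|^2 for a product
    (no-entanglement) feature map; consecutive feature pairs share a qubit."""
    overlap = complex(1.0)
    for i in range(0, len(x), 2):
        ax = qubit_state(x[i], x[i + 1])
        ay = qubit_state(y[i], y[i + 1])
        overlap *= ax[0].conjugate() * ay[0] + ax[1].conjugate() * ay[1]
    return abs(overlap) ** 2

# Identical inputs give fidelity 1; orthogonal encodings (|0> vs |1>) give 0.
print(round(fidelity_kernel([0.3, 1.2, 0.7, 0.1], [0.3, 1.2, 0.7, 0.1]), 6))  # 1.0
print(round(fidelity_kernel([0.0, 0.0], [math.pi, 0.0]), 6))                  # 0.0
```

A Gram matrix of such kernel values over the training points can then be passed to a classical SVM, e.g. scikit-learn's SVC with kernel='precomputed'; the projected kernel found superior in the study is a different construction and is not shown here.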