Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Item (Open Access): Simulating stochastic processes with variational quantum circuits (2022). Fink, Daniel.

Simulating future outcomes based on past observations is a key task in predictive modeling and has found application in many areas, ranging from neuroscience to the modeling of financial markets. The classical, provably optimal models for stationary stochastic processes are so-called ϵ-machines, which have the structure of a unifilar hidden Markov model and offer a minimal set of internal states. However, these models are not optimal in the quantum setting, i.e., when the models have access to quantum devices. The methods proposed so far for quantum predictive models rely either on knowledge of an ϵ-machine or on learning a classical representation thereof, which is memory-inefficient since it requires exponentially many resources in the Markov order. Meanwhile, variational quantum algorithms (VQAs) are a promising approach for using near-term quantum devices to tackle problems from many areas of science and technology. In this work, we propose a VQA for learning quantum predictive models directly from data on a quantum computer. The learning algorithm is inspired by recent developments in implicit generative modeling, where a kernel-based two-sample test, called the maximum mean discrepancy (MMD), is used as a cost function. A major challenge in learning predictive models is to ensure that arbitrarily many time steps can be simulated accurately. For this purpose, we propose a quantum post-processing step that yields a regularization term for the cost function and penalizes models with a large set of internal states.
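As an illustration of the MMD cost function mentioned in the abstract, the following minimal sketch (plain NumPy, with a Gaussian kernel and function names chosen here for illustration, not taken from the thesis) estimates the squared MMD between two sets of 1-D samples:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of 1-D samples."""
    d2 = (a[:, None] - b[None, :]) ** 2  # pairwise squared distances
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
diff = mmd2(rng.normal(0, 1, 500), rng.normal(2, 1, 500))
assert same < diff  # samples from different distributions score higher
```

In the generative-modeling setting the two sample sets are model outputs and observed data, and the (biased) estimate stays non-negative, which makes it a usable training loss.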
As a proof of concept, we apply the algorithm to a stationary stochastic process and show that the regularization leads to a small set of internal states and consistently good simulation performance over multiple future time steps, measured by the Kullback-Leibler divergence and the total variation distance.

Item (Open Access): Quantum support vector machines of high-dimensional data for image classification problems (2023). Rajput, Vikas Singh.

This thesis presents a comprehensive investigation into the efficient use of quantum support vector machines (QSVMs) for image classification on high-dimensional data. The primary focus is on analyzing the standard MNIST dataset and a high-dimensional dataset provided by TRUMPF SE + Co. KG. To evaluate the performance of QSVMs against classical support vector machines (SVMs) on high-dimensional data, a benchmarking framework is proposed. In the current Noisy Intermediate-Scale Quantum (NISQ) era, classical preprocessing is a crucial step in preparing data for classification on NISQ machines. Various dimensionality reduction techniques, such as principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and convolutional autoencoders, are explored to preprocess the image datasets. Convolutional autoencoders are found to outperform the other methods when calculating quantum kernels on a small dataset. Furthermore, the benchmarking framework systematically analyzes different quantum feature maps by varying hyperparameters such as the number of qubits, the use of parameterized gates, the number of features encoded per qubit line, and the use of entanglement. Quantum feature maps demonstrate higher accuracy than classical feature maps for both the TRUMPF and MNIST data. Among the feature maps, one using Rz and Ry gates with two features per qubit, without entanglement, achieves the highest accuracy.
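For an entanglement-free feature map of the kind described above, the fidelity kernel factorizes into a product of single-qubit overlaps, so it can be sketched classically. The sketch below (plain NumPy; the gate order Ry-then-Rz and all names are assumptions for illustration, not the thesis's exact circuit) encodes two features per qubit and evaluates the kernel entry |⟨ψ(x)|ψ(y)⟩|²:

```python
import numpy as np

def qubit_state(theta, phi):
    """Single-qubit state Rz(phi) Ry(theta) |0>."""
    # Ry(theta)|0> = [cos(theta/2), sin(theta/2)]
    s = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    # Rz(phi) applies opposite phases to |0> and |1>
    return np.array([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)]) * s

def fidelity_kernel(x, y):
    """Fidelity kernel |<psi(x)|psi(y)>|^2 for a feature map encoding two
    features per qubit (Ry then Rz) with no entanglement, so the overlap
    factorizes into a product of single-qubit overlaps."""
    k = 1.0
    for i in range(0, len(x), 2):
        a = qubit_state(x[i], x[i + 1])
        b = qubit_state(y[i], y[i + 1])
        k *= abs(np.vdot(a, b)) ** 2
    return k

x = np.array([0.3, 1.2, 0.7, -0.4])  # 4 features -> 2 qubits
assert np.isclose(fidelity_kernel(x, x), 1.0)  # identical inputs overlap fully
```

The resulting Gram matrix can be passed to a classical SVM as a precomputed kernel; on hardware the same overlap would instead be estimated from measurement statistics.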
The study also reveals that increasing the number of qubits improves accuracy on the real-world TRUMPF dataset. Additionally, the choice of quantum kernel function significantly impacts classification results, with the projected-type quantum kernel outperforming the fidelity-type quantum kernel. Subsequently, the study examines kernel target alignment (KTA) as an optimization method to improve the pipeline; however, for the chosen feature map and dataset, KTA does not provide significant benefits. In summary, the results highlight the potential for achieving quantum advantage by optimizing all components of the quantum classifier framework. Selecting appropriate dimensionality reduction techniques, quantum feature maps, and quantum kernel methods is crucial for enhancing classification accuracy. Further research is needed to address challenges related to kernel optimization and to fully leverage the capabilities of quantum computing in machine learning applications.
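The kernel target alignment quantity examined above is, in its common form, the normalized Frobenius inner product between the kernel matrix and the ideal label kernel yyᵀ. A minimal sketch of that score (plain NumPy; names chosen for illustration, and not reflecting the thesis's specific optimization loop):

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Kernel-target alignment: cosine similarity between the kernel
    matrix K and the ideal label kernel y y^T (labels in {-1, +1})."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

y = np.array([1, 1, -1, -1])
K_good = np.outer(y, y).astype(float)  # kernel perfectly matching the labels
K_flat = np.ones((4, 4))               # kernel blind to the labels
assert np.isclose(kernel_target_alignment(K_good, y), 1.0)
assert kernel_target_alignment(K_flat, y) < kernel_target_alignment(K_good, y)
```

In a KTA-based pipeline this score is maximized over the feature map's trainable parameters before fitting the SVM, the idea being that a better-aligned kernel separates the classes more easily.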