Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results (3 items)
Item (Open Access): Collective variables in data-centric neural network training (2023). Author: Nikolaou, Konstantin

Neural networks (NNs) have become valuable tools for physics research. While they provide a powerful means of data-driven modeling, their success is accompanied by a lack of interpretability. This thesis aims to add transparency to the opaque nature of NNs by means of collective variables, a concept well known in statistical physics. Three collective variables are introduced that emerge from the interactions between neurons and data. These observables capture holistic behavior of the network and are used to conduct an analysis of neural network training with a focus on data. Throughout the investigations, the collective variables are applied to data sets selected by a novel sampling method: Random Network Distillation (RND). Besides the study of collective variables, the investigation of RND as a data selection method forms the second part of this thesis. The method is analyzed and optimized with respect to its components, with the aim of understanding and improving the data selection process. It is shown that RND can be used to select data sets that are beneficial for neural network training, which motivates its application in fields such as active learning. The collective variables are then leveraged to further investigate the selection method and its effect on neural network training, revealing previously unknown properties of RND-selected data sets. The potential of the collective variables is demonstrated and discussed from a data-centric perspective. They are shown to be discriminative with respect to the information content of data and yield novel insights into the nature of neural network training. Beyond fundamental research on neural networks, the collective variables offer several potential applications, including the detection of adversarial attacks and the facilitation of neural architecture search.
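As an illustration of the selection mechanism, the sketch below implements an RND-style greedy data selection in plain NumPy. A linear least-squares predictor stands in for the trained predictor network used in practice, and all function names and parameters are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def rnd_scores(pool, selected, width=32, seed=0):
    """RND-style novelty: distance between a frozen random target embedding
    and a predictor fitted only on the already-selected points."""
    rng = np.random.default_rng(seed)
    d = pool.shape[1]
    # frozen, randomly initialized target network (never trained)
    W1 = rng.normal(size=(d, width)) / np.sqrt(d)
    W2 = rng.normal(size=(width, width)) / np.sqrt(width)
    target = lambda x: np.tanh(x @ W1) @ W2
    # linear least-squares stand-in for the trained predictor network
    P, *_ = np.linalg.lstsq(selected, target(selected), rcond=None)
    return np.linalg.norm(target(pool) - pool @ P, axis=1)

def rnd_select(pool, n_select, **kwargs):
    """Greedily pick the points the predictor reconstructs worst."""
    idx = [0]  # seed the selection with an arbitrary point
    while len(idx) < n_select:
        scores = rnd_scores(pool, pool[idx], **kwargs)
        scores[idx] = -np.inf  # never pick a point twice
        idx.append(int(np.argmax(scores)))
    return np.array(idx)
```

For example, `subset = pool[rnd_select(pool, 100)]` would pick 100 points whose random-network embeddings the predictor fits worst, i.e., the points carrying the most novel information relative to what was already selected; this novelty signal is the core idea RND-based selection exploits.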
Item (Open Access): Simulating stochastic processes with variational quantum circuits (2022). Author: Fink, Daniel

Simulating future outcomes based on past observations is a key task in predictive modeling and has found application in many areas, ranging from neuroscience to the modeling of financial markets. The provably optimal classical models for stationary stochastic processes are so-called ϵ-machines, which have the structure of a unifilar hidden Markov model and offer a minimal set of internal states. However, these models are not optimal in the quantum setting, i.e., when the models have access to quantum devices. The methods proposed so far for quantum predictive models rely either on knowledge of an ϵ-machine or on learning a classical representation thereof, which is memory-inefficient since it requires exponentially many resources in the Markov order. Meanwhile, variational quantum algorithms (VQAs) are a promising approach for using near-term quantum devices to tackle problems from many areas of science and technology. In this work, we propose a VQA for learning quantum predictive models directly from data on a quantum computer. The learning algorithm is inspired by recent developments in implicit generative modeling, where a kernel-based two-sample test, the maximum mean discrepancy (MMD), is used as a cost function. A major challenge in learning predictive models is to ensure that arbitrarily many time steps can be simulated accurately. For this purpose, we propose a quantum post-processing step that yields a regularization term for the cost function and penalizes models with a large set of internal states. As a proof of concept, we apply the algorithm to a stationary stochastic process and show that the regularization leads to a small set of internal states and consistently good simulation performance over multiple future time steps, measured by the Kullback-Leibler divergence and the total variation distance.

Item (Open Access): Quantum machine learning for time series prediction (2024). Author: Fellner, Tobias

Time series prediction is an essential task in various fields, such as meteorology, finance, and healthcare. Traditional approaches to time series prediction have primarily relied on regression and moving-average methods, but recent advances have seen growing interest in applying machine learning techniques. With the rise of quantum computing, it is natural to ask whether quantum machine learning can offer advantages over classical methods for time series forecasting. This thesis presents the first large-scale systematic benchmark comparing classical and quantum models for time series prediction. A variety of quantum models are evaluated against classical counterparts on different datasets. A novel quantum reservoir computing architecture is proposed, demonstrating promising results on nonlinear prediction tasks. The findings suggest that, for simpler time series prediction tasks, quantum models achieve accuracy comparable to classical methods. However, for more complex tasks, such as long-term forecasting, certain quantum models show improved performance. While current quantum machine learning models do not consistently outperform classical approaches, the results point to specific contexts where quantum methods may be beneficial.
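For context on the reservoir-computing paradigm this line of work builds on, here is a minimal classical echo-state-network forecaster. The thesis's proposed architecture replaces the random recurrent reservoir with a quantum system, so this sketch is a classical stand-in under assumed names and hyperparameters, not the proposed model.

```python
import numpy as np

def esn_forecast(series, n_res=100, spectral_radius=0.9,
                 ridge=1e-6, washout=50, seed=0):
    """One-step-ahead forecasting with a classical echo-state network."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, size=n_res)    # fixed input weights
    w = rng.normal(size=(n_res, n_res))          # fixed random reservoir
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
    # drive the reservoir with the series and record its states
    states = np.zeros((len(series), n_res))
    x = np.zeros(n_res)
    for t, u in enumerate(series):
        x = np.tanh(w_in * u + w @ x)
        states[t] = x
    # only the linear readout is trained (ridge regression), mapping
    # the state after observing series[t] to the next value series[t+1]
    X, y = states[washout:-1], series[washout + 1:]
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return states[:-1] @ w_out                   # predictions for series[1:]
```

For instance, `esn_forecast(np.sin(np.linspace(0, 60, 600)))` returns one-step-ahead predictions that can be compared against `series[1:]`. Since only the linear readout is trained while the reservoir stays fixed, the reservoir itself is a natural place to substitute a quantum system, which is the design idea behind quantum reservoir computing.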