05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 10
  • Item (Open Access)
    Rigorous compilation for near-term quantum computers
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible to resolve by traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and a low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations that must be incorporated by compilation methods targeting near-term quantum computers in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements as they exactly explore the solution space and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are included in rigorous compilation methods to investigate each aspect of the imposed requirements, i.e. the number of qubits, connectivity of qubits, duration and incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. Experimental results demonstrate the ability of the developed rigorous compilation methods to extend the computational reach of near-term quantum computers by generating quantum computations with a reduced requirement on the number and connectivity of qubits as well as reducing the duration and incurred errors of performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method exploiting the structure of a quantum computation to reuse qubits at runtime yielded a reduction in the required number of qubits by up to 5x and in the result error by up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
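    As a minimal illustration of what "rigorous" (exact) compilation means, the sketch below enumerates every initial mapping of logical to physical qubits for a toy linear-connectivity device and keeps the mapping that minimizes the number of CNOTs acting on non-adjacent physical qubits. The device, circuit and cost model are illustrative assumptions, not the formulations developed in the thesis.

```python
# Hedged sketch of rigorous compilation as exact search over initial qubit mappings.
from itertools import permutations

# Coupling graph of a hypothetical 4-qubit device with linear connectivity.
coupling = {(0, 1), (1, 2), (2, 3)}
coupling |= {(q, p) for p, q in coupling}            # make the relation symmetric

# Two-qubit gates of a small logical circuit, given as (control, target) pairs.
logical_cnots = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 3)]

def cost(mapping):
    """Number of logical CNOTs mapped onto non-adjacent physical qubits
    (each such gate would require at least one SWAP insertion)."""
    return sum((mapping[c], mapping[t]) not in coupling for c, t in logical_cnots)

# Exact exploration of the full solution space of 4! initial mappings.
best = min(permutations(range(4)), key=cost)
print("optimal mapping:", best, "| non-adjacent CNOTs:", cost(best))
```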
  • Item (Open Access)
    Data-efficient and safe learning with Gaussian processes
    (2020) Schreiter, Jens; Toussaint, Marc (Prof. Dr. rer. nat.)
    Data-based modeling techniques enjoy increasing popularity in many areas of science and technology where traditional approaches are limited regarding accuracy and efficiency. When employing machine learning methods to generate models of dynamic systems, two important issues must be considered. Firstly, the data-sampling process should induce an informative and representative set of points to enable high generalization accuracy of the learned models. Secondly, the algorithmic part for efficient model building is essential for applicability, usability, and the quality of the learned predictive model. This thesis deals with both of these aspects for supervised learning problems, where the interaction between them is exploited to realize accurate and powerful modeling. After introducing the non-parametric Bayesian modeling approach with Gaussian processes and the basics of transient modeling tasks in the next chapter, we dedicate ourselves to extensions of this probabilistic technique towards relevant practical requirements in the subsequent chapter. That chapter provides an overview of existing sparse Gaussian process approximations and proposes novel contributions that increase efficiency and improve model selection on particularly large training data sets. For example, our sparse modeling approach enables real-time capable prediction performance and efficient learning with low memory requirements. A comprehensive comparison on various real-world problems confirms the proposed contributions and shows a variety of modeling tasks where approximate Gaussian processes can be successfully applied. Further experiments provide more insight into the whole learning process, and thus a profound understanding of the presented work. In the fourth chapter, we focus on active learning schemes for safe and information-optimal generation of meaningful data sets. In addition to the exploration behavior of the active learner, the safety issue is considered in our work, since interacting with a real system must not damage or even completely destroy it. Here we propose a new model-based active learning framework that solves both tasks simultaneously. As the basis for the data-sampling process we employ the presented Gaussian process techniques. Furthermore, we distinguish between static and transient experimental design strategies. Both settings are considered separately in this chapter, although the requirements for each active learning problem are the same. This subdivision into a static and a transient setting allows a more problem-specific perspective on the two cases, and thus enables the creation of specially adapted active learning algorithms. Our novel approaches are then investigated for different applications, where a favorable trade-off between safety and exploration is always realized. Theoretical results support these evaluations and provide deeper insight into the derived model-based active learning schemes. For example, an upper bound for the probability of failure of the presented active learning methods is derived under reasonable assumptions. Finally, the thesis concludes with a summary of the investigated machine learning problems and motivates some future research directions.
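    The sketch below illustrates the general idea of safe, model-based active learning with a Gaussian process: query the most uncertain candidate whose predicted probability of satisfying a safety constraint exceeds a threshold. The 1-D toy system, kernel and threshold are illustrative assumptions, not the specific algorithms derived in the thesis.

```python
# Hedged sketch of GP-based safe active learning with a toy 1-D system.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                 # unknown system response
# Safety constraint (assumed): the response must stay above -0.5.

X = rng.uniform(0.0, 1.0, (5, 1))           # initial design in a known-safe region
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    p_safe = 1.0 - norm.cdf(-0.5, loc=mu, scale=sigma)   # P(response > -0.5)
    allowed = p_safe > 0.95                              # safety requirement
    if not allowed.any():
        break                                            # nothing is predictably safe
    # Exploration: among predicted-safe candidates, pick the most uncertain one.
    idx = np.argmax(np.where(allowed, sigma, -np.inf))
    x_new = candidates[idx:idx + 1]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())

print("sampled inputs:", np.sort(X.ravel()).round(2))
```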
  • Item (Open Access)
    The Generalized Minimum Manhattan Network Problem
    (2015) Schnizler, Michael
    In this thesis we consider the Generalized Minimum Manhattan Network Problem: given a set containing n pairs of points in R^2 or R^d, the goal is to find a rectilinear network of minimal length which contains a path of minimal length (a so-called Manhattan path) between the two points of each pair. We restrict our search to a discrete subspace and show that under specific conditions an optimal solution can be found in polynomial time using a dynamic program. The conditions concern the intersection graph of the bounding boxes of the pairs. Its maximum degree as well as the treewidth must be bounded by two constants which are independent of n. Finally, we present a simple greedy algorithm for practical purposes.
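    A simple greedy heuristic in the spirit of the one mentioned above could connect each pair by one of its two L-shaped (and hence shortest) rectilinear paths, preferring the one that reuses more already-built segments. The sketch below illustrates this idea on integer points; it is an illustration, not necessarily the exact greedy algorithm of the thesis.

```python
# Hedged sketch of a greedy heuristic for generalized Manhattan networks.

def l_path(p, q, corner):
    """Unit segments of the axis-parallel L-shaped path p -> corner -> q."""
    segs = set()
    for a, b in ((p, corner), (corner, q)):
        (x1, y1), (x2, y2) = a, b
        if x1 == x2:
            for y in range(min(y1, y2), max(y1, y2)):
                segs.add(((x1, y), (x1, y + 1)))
        else:
            for x in range(min(x1, x2), max(x1, x2)):
                segs.add(((x, y1), (x + 1, y1)))
    return segs

def greedy_network(pairs):
    network = set()
    for p, q in pairs:
        options = [l_path(p, q, (p[0], q[1])), l_path(p, q, (q[0], p[1]))]
        # Pick the L-shape that adds the fewest new unit segments.
        best = min(options, key=lambda s: len(s - network))
        network |= best
    return network

pairs = [((0, 0), (3, 2)), ((1, 0), (3, 3)), ((0, 2), (2, 0))]
net = greedy_network(pairs)
print("total network length:", len(net))
```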
  • Item (Open Access)
    Circuit complexity of group theoretic problems
    (2021) Weiß, Armin; Diekert, Volker (Prof. Dr. rer. nat.)
    This cumulative habilitation thesis summarizes six papers on the topic of "circuit complexity of group-theoretic problems". The central problem is the word problem: given a word over the generators of a group, decide whether the word represents the identity element of the group. In addition, further problems are considered, such as the conjugacy problem, the power word problem (like the word problem, but the input is given in compressed form), and the solvability of equations. Most of the papers summarized here consider these problems for special classes of groups and classify their complexity using methods from circuit complexity. The last paper, on equations, is an exception: there the connection to circuit complexity is that the satisfiability problem for equations over finite solvable groups behaves similarly to the satisfiability problem for CC^0 circuits.
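    As a concrete illustration of the word problem, the sketch below evaluates words over two generators of the symmetric group S_3 (represented as permutations) and checks whether they represent the identity; the group, generators and sample words are illustrative choices.

```python
# Hedged sketch: the word problem in S_3, with permutations as tuples.
IDENTITY = (0, 1, 2)
GENS = {
    "a": (1, 0, 2),   # the transposition (0 1)
    "b": (1, 2, 0),   # the 3-cycle (0 1 2)
}

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def compose(p, q):
    """Product p*q of permutations, where q acts first."""
    return tuple(p[q[i]] for i in range(len(p)))

def represents_identity(word):
    """Word problem: lower case letters are generators, upper case their inverses."""
    value = IDENTITY
    for letter in word:
        g = GENS[letter.lower()]
        if letter.isupper():
            g = inverse(g)
        value = compose(value, g)
    return value == IDENTITY

print(represents_identity("bbb"))    # True: the 3-cycle has order 3
print(represents_identity("abAB"))   # False: this commutator is nontrivial in S_3
```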
  • Item (Open Access)
    Load-balancing for scalable simulations with large particle numbers
    (2021) Hirschmann, Steffen; Pflüger, Dirk (Prof. Dr.)
  • Item (Open Access)
    Equation satisfiability in solvable groups
    (2022) Idziak, Paweł; Kawałek, Piotr; Krzaczkowski, Jacek; Weiß, Armin
    The study of the complexity of the equation satisfiability problem in finite groups was initiated by Goldmann and Russell (Inf. Comput. 178(1), 253-262), who showed that this problem is in P for nilpotent groups while it is NP-complete for non-solvable groups. Since then, several results have appeared showing that the problem can be solved in polynomial time in certain solvable groups G having a nilpotent normal subgroup H with nilpotent factor G/H. This paper shows that such a normal subgroup must exist in every finite group with equation satisfiability solvable in polynomial time, unless the Exponential Time Hypothesis fails.
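    The following sketch illustrates the equation satisfiability problem itself for a fixed finite group, brute-forcing all assignments of a single variable over S_3; the sample equations are illustrative and unrelated to the structural result of the paper.

```python
# Hedged sketch of equation satisfiability over S_3 (permutations of {0, 1, 2}).
# An equation's left-hand side is a list of factors; the string "x" is the variable.
from itertools import permutations

def compose(p, q):
    """Product p*q of permutations (q acts first)."""
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))
IDENT = (0, 1, 2)
a = (1, 0, 2)   # a transposition (odd permutation)
b = (1, 2, 0)   # a 3-cycle (even permutation)

def solve(lhs, rhs):
    """Brute force over all |G| assignments of the single variable x."""
    for x in S3:
        value = IDENT
        for factor in lhs:
            value = compose(value, x if factor == "x" else factor)
        if value == rhs:
            return x
    return None

print(solve(["x", "x"], b))   # satisfiable: the 3-cycle has a square root
print(solve(["x", "x"], a))   # None: a transposition is odd, but squares are even
```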
  • Item (Open Access)
    Stable and mass-conserving high-dimensional simulations with the sparse grid combination technique for full HPC systems and beyond
    (2024) Pollinger, Theresa; Pflüger, Dirk (Prof. Dr.)
    In the light of the ongoing climate crisis, mastering controlled plasma fusion has the potential to be one of the pivotal scientific achievements of the 21st century. To understand the turbulent fields in confined fusion devices, simulation has been and continues to be both an asset and a challenge. The main limiting factor to large-scale high-fidelity predictive simulations lies in the Curse of Dimensionality, which dominates all grid-based discretizations of plasmas based on the Vlasov-Poisson and Vlasov-Maxwell equations. In the full formulation, they result in six-dimensional grids and fine scales that need to be resolved, leading to a potentially intractable number of degrees of freedom. Typical approaches to this problem - coordinate transformations such as gyrokinetics, grid adaptation, restricting oneself to limited resolutions - do not directly address the Curse of Dimensionality, but rather work around it. The sparse grid combination technique, which forms the center of this work, is a multiscale approach that alleviates the curse of dimensionality for time-stepping simulations: multiple regular grid-based simulations are run and update each other’s information throughout the course of simulation time. The present thesis improves upon the former state of the art of the combination technique in three ways: introducing conservation of mass and numerical stability through the use of better-suited multiscale basis functions, optimizing the code for large-scale HPC systems, and extending the combination technique to the widely-distributed setting. Firstly, this thesis analyzes the often-used hierarchical hat function from the viewpoint of biorthogonal wavelets, which makes it straightforward to replace the hierarchical hat function with other multiscale functions, such as the mass-conserving CDF wavelets. Numerical studies presented in the thesis show that this not only introduces conservation but also increases accuracy and avoids numerical instabilities - which previously were a major roadblock for large-scale Vlasov simulations with the combination technique. Secondly, the open-source framework DisCoTec was extended to scale the combination technique up to the available memory of entire supercomputing systems. DisCoTec is designed to wrap the combination technique around existing grid-based solvers and draws on the inherent parallelism of the combination technique. Among several other contributions, different communication-avoiding multiscale reduction schemes were developed and implemented into DisCoTec as part of this work. The scalability of the approach is demonstrated by an extensive set of measurements in this thesis: DisCoTec is shown to scale up to the full system size of four German supercomputers, including the three CPU-based Tier-0/Tier-1 systems. Thirdly, the combination technique was further extended to the widely-distributed setting, where two HPC systems synchronously run a joint simulation. This is enabled by file transfer as well as sophisticated algorithms for assigning the different simulation instances to the systems, two of which were developed as part of this work. Thanks to the resulting drastic reductions in communication volume, tolerable transfer times for combination technique simulations across different HPC systems have been achieved for the first time.
These three advances - improved numerical properties, scaling efficiently up to full system sizes, and the possibility to extend the simulation beyond a single system - show the sparse grid combination technique to be a promising approach for future high-fidelity simulations of higher-dimensional problems, such as plasma turbulence.
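    The following sketch illustrates the classical two-dimensional combination formula, using trapezoidal quadrature on anisotropic component grids as a stand-in for a grid-based solver; the test function, level parameter and quadrature are illustrative assumptions and not DisCoTec code.

```python
# Hedged sketch of the 2-D sparse grid combination technique for quadrature.
import numpy as np

f = lambda x, y: np.exp(x + y)            # test integrand on [0, 1]^2
exact = (np.e - 1.0) ** 2                 # exact value of the integral

def grid_and_weights(level):
    """1-D trapezoidal nodes and weights on [0, 1] with 2**level + 1 points."""
    m = 2**level + 1
    w = np.full(m, 1.0 / (m - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return np.linspace(0.0, 1.0, m), w

def component_solution(l1, l2):
    """'Solve' on the anisotropic full grid of level (l1, l2): here, integrate f."""
    x, wx = grid_and_weights(l1)
    y, wy = grid_and_weights(l2)
    return wx @ f(x[:, None], y[None, :]) @ wy

def combination(n):
    """Classical 2-D combination formula: sum over |l| = n minus sum over |l| = n - 1."""
    total = 0.0
    for l1 in range(1, n):                 # l1 + l2 = n, with l1, l2 >= 1
        total += component_solution(l1, n - l1)
    for l1 in range(1, n - 1):             # l1 + l2 = n - 1
        total -= component_solution(l1, n - 1 - l1)
    return total

approx = combination(8)
print("combination technique:", approx, "| abs. error:", abs(approx - exact))
```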
  • Item (Open Access)
  • Item (Open Access)
    Quantum support vector machines of high-dimensional data for image classification problems
    (2023) Vikas Singh, Rajput
    This thesis presents a comprehensive investigation into the efficient utilization of Quantum Support Vector Machines (QSVMs) for image classification on high-dimensional data. The primary focus is on analyzing the standard MNIST dataset and the high-dimensional dataset provided by TRUMPF SE + Co. KG. To evaluate the performance of QSVMs against classical Support Vector Machines (SVMs) for high-dimensional data, a benchmarking framework is proposed. In the current Noisy Intermediate-Scale Quantum (NISQ) era, classical preprocessing of the data is a crucial step to prepare the data for classification tasks using NISQ machines. Various dimensionality reduction techniques, such as principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and convolutional autoencoders, are explored to preprocess the image datasets. Convolutional autoencoders are found to outperform other methods when calculating quantum kernels on a small dataset. Furthermore, the benchmarking framework systematically analyzes different quantum feature maps by varying hyperparameters, such as the number of qubits, the use of parameterized gates, the number of features encoded per qubit line, and the use of entanglement. Quantum feature maps demonstrate higher accuracy compared to classical feature maps for both TRUMPF and MNIST data. Among the feature maps, one using Rz and Ry gates with two features per qubit, without entanglement, achieves the highest accuracy. The study also reveals that increasing the number of qubits leads to improved accuracy for the real-world TRUMPF dataset. Additionally, the choice of the quantum kernel function significantly impacts classification results, with the projected-type quantum kernel outperforming the fidelity-type quantum kernel. Subsequently, the study examines the Kernel Target Alignment (KTA) optimization method to improve the pipeline. However, for the chosen feature map and dataset, KTA does not provide significant benefits. In summary, the results highlight the potential for achieving quantum advantage by optimizing all components of the quantum classifier framework. Selecting appropriate dimensionality reduction techniques, quantum feature maps, and quantum kernel methods is crucial for enhancing classification accuracy. Further research is needed to address challenges related to kernel optimization and fully leverage the capabilities of quantum computing in machine learning applications.
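    The sketch below shows how a (quantum) kernel matrix plugs into a classical SVM through scikit-learn's precomputed-kernel interface, which is the role a fidelity- or projected-type quantum kernel would play; the stand-in RBF kernel, dataset and PCA dimension are illustrative assumptions, not the pipeline of the thesis.

```python
# Hedged sketch: classical dimensionality reduction + SVM with a precomputed kernel.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(n_class=2, return_X_y=True)       # small binary image task
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pca = PCA(n_components=4).fit(X_tr)                   # classical preprocessing step
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

def kernel_matrix(A, B):
    """Stand-in for a quantum kernel: k(a, b) = exp(-gamma * ||a - b||^2)."""
    gamma = 0.1
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

clf = SVC(kernel="precomputed").fit(kernel_matrix(X_tr, X_tr), y_tr)
print("test accuracy:", clf.score(kernel_matrix(X_te, X_tr), y_te))
```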
  • Item (Open Access)
    Discontinuous Galerkin methods for two-phase flows in porous media
    (2010) Grüninger, Christoph
    In this work two-phase flows in porous media are simulated numerically with Discontinuous Galerkin methods. The three methods Symmetrical Interior Penalty Galerkin method (SIPG), Non-symmetrical Interior Penalty Galerkin method (NIPG) and the scheme from Oden, Babuška and Baumann (OBB) are considered. The terminology and the examples are taken from soil science. First the Richards equation is solved using these methods. Then a two-phase flows problem in the saturation/pressure formation is solved with OBB and NIPG. The numerical methods are implemented using the software toolkit PDELab. They are tested with examples from other publications. Weighted averages for absolute and relative permeabilities are examined.