Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results: 8 results
Item Open Access
Rigorous compilation for near-term quantum computers (2024)
Brandhofer, Sebastian; Polian, Ilia (Prof.)

Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible on traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations; compilation methods targeting near-term quantum computers must incorporate them to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements, as they explore the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on only one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are then included in rigorous compilation methods that address each aspect of the imposed requirements, i.e. the number of qubits, the connectivity of qubits, the duration and the incurred errors. The developed rigorous compilation methods are evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. Experimental results demonstrate that the developed rigorous compilation methods extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, and by reducing the duration and incurred errors of the performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime yielded a reduction of up to 5x in the required number of qubits and of up to 33% in the result error. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
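To make the qubit-reuse idea concrete, here is a minimal sketch, not taken from the dissertation: it runs a simple liveness analysis over a gate list and greedily recycles physical qubits whose logical qubit has already been measured. The circuit format and gate names are illustrative assumptions, and the dissertation's method is rigorous (exact) rather than greedy.

```python
# Minimal sketch (not from the dissertation): greedy qubit reuse via liveness
# analysis. A logical qubit whose final gate has executed can be measured,
# reset, and its physical qubit handed to a later logical qubit.

def reuse_qubits(circuit):
    """circuit: ordered list of (gate_name, [logical_qubit_indices])."""
    last_use = {}                      # logical qubit -> index of its final gate
    for i, (_, qubits) in enumerate(circuit):
        for q in qubits:
            last_use[q] = i

    mapping = {}                       # logical -> physical qubit
    free, n_physical = [], 0           # recyclable physical qubits, total used
    for i, (_, qubits) in enumerate(circuit):
        for q in qubits:
            if q not in mapping:       # first use: prefer a recycled qubit
                if free:
                    mapping[q] = free.pop()
                else:
                    mapping[q] = n_physical
                    n_physical += 1
        for q in qubits:
            if last_use[q] == i:       # q is dead: reset and recycle it
                free.append(mapping[q])
    return mapping, n_physical

# A measure-as-you-go entangling ladder on 4 logical qubits fits on 2:
ladder = [("h", [0]), ("cx", [0, 1]), ("measure", [0]),
          ("cx", [1, 2]), ("measure", [1]),
          ("cx", [2, 3]), ("measure", [2]), ("measure", [3])]
print(reuse_qubits(ladder))            # ({0: 0, 1: 1, 2: 0, 3: 1}, 2)
```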
Item Open Access
Über die Lösung der Navier-Stokes-Gleichungen mit Hilfe der Moore-Penrose-Inversen des Laplace-Operators im Vektorraum der Polynomkoeffizienten [On solving the Navier-Stokes equations by means of the Moore-Penrose inverse of the Laplace operator in the vector space of polynomial coefficients] (2024)
Große-Wöhrmann, Bärbel; Resch, Michael (Prof. Dr.-Ing.)

The established standard numerical methods for solving partial differential equations are based on a spatial discretization of the computational domain. Their performance and scalability on modern massively parallel high-performance computers depend on the availability of efficient numerical methods for solving linear systems of equations. In view of fundamental challenges, the development of new solution approaches appears worthwhile. In this thesis, I present a polynomial approach for solving partial differential equations that does not rely on a spatial discretization and that, by means of the Moore-Penrose inverse of the Laplace operator, enables the decoupling of the Navier-Stokes equations. The degree of the polynomials is not limited in principle, so that a high spatial resolution can be achieved.
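The following is a hedged one-dimensional sketch of the principle (the thesis works in the multivariate Navier-Stokes setting): on polynomials of bounded degree, the second derivative acts as a matrix on monomial coefficients, and its Moore-Penrose inverse returns the minimum-norm coefficient vector that solves the Poisson problem exactly, with no spatial grid involved. The degree bound and right-hand side are illustrative choices of mine.

```python
import numpy as np

# On polynomials of degree <= n, d^2/dx^2 is linear on monomial coefficients:
# D[i, i + 2] = (i + 2) * (i + 1). Columns 0 and 1 are zero (constants and
# linear terms lie in the kernel), so the pseudoinverse picks the solution
# orthogonal to that kernel.

n = 6                                    # maximum polynomial degree
D = np.zeros((n - 1, n + 1))             # degree-n coeffs -> degree-(n-2) coeffs
for i in range(n - 1):
    D[i, i + 2] = (i + 2) * (i + 1)

D_pinv = np.linalg.pinv(D)               # Moore-Penrose inverse of the operator

f = np.zeros(n - 1)
f[0] = 2.0                               # right-hand side f(x) = 2
c = D_pinv @ f                           # minimum-norm solution of u'' = f
print(np.round(c, 12))                   # [0 0 1 0 0 0 0] -> u(x) = x**2
```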
Item Open Access
Development of efficient multiscale multiphysics models accounting for reversible flow at various subsurface energy storage sites (Stuttgart: Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2021)
Becker, Beatrix; Helmig, Rainer (Prof. Dr.-Ing.)

Energy storage is an essential component of future energy systems with a large share of renewable energy. Apart from pumped hydro storage, large-scale energy storage is mainly provided by underground energy storage systems. In this thesis we focus on chemical subsurface storage, i.e., the storage of synthetic hydrogen or synthetic natural gas in porous formations. To improve understanding of the complex and coupled processes in the underground and to enable planning and risk assessment of subsurface energy storage, efficient, consistent and adequate numerical models for multiphase flow and transport are required. Simulating underground energy storage requires large domains, including local features such as fault zones and a representation of the transient saline front, and simulation times spanning the whole period of plant operation and beyond. In addition, a large number of simulation runs often needs to be conducted to quantify parameter uncertainty, and efficient models are needed for data assimilation as well. Therefore, a reduction of model complexity, and thus of computing effort, is required. Numerous simplified models that require fewer computational resources have been developed. In this thesis we focus on a group of multiscale models that use vertically integrated equations and implicitly include fine-scale information along the vertical direction, reconstructed under the assumption of vertical equilibrium (VE). Classical VE models are restricted to situations where vertical equilibrium holds in the whole domain during most of the simulated time. This may not be the case for underground energy storage, where simulated times may be too short and where locally a high degree of accuracy and complexity may be required, e.g., around the area where gas is extracted for the purpose of energy production. The three core chapters of this thesis present solutions to adapt VE models for the simulation of underground energy storage, with increasing complexity.
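As a minimal illustration of the VE reconstruction idea (my assumptions, not the thesis model), consider a sharp-interface column: the vertically integrated model stores only the column-averaged gas saturation, and under vertical equilibrium the buoyant gas segregates to the top, so a fine-scale saturation profile and a hydrostatic pressure profile can be recovered without resolving the vertical direction in the solver.

```python
import numpy as np

# Sharp-interface VE reconstruction sketch: S is the depth-averaged gas
# saturation of one column of height H; z is measured downward from the top.

def reconstruct_column(S, H, p_top, rho_gas, rho_water, g=9.81, nz=51):
    z = np.linspace(0.0, H, nz)
    s_gas = np.where(z < S * H, 1.0, 0.0)            # gas fills the top S*H
    rho = np.where(z < S * H, rho_gas, rho_water)    # local fluid density
    dp = 0.5 * (rho[1:] + rho[:-1]) * np.diff(z) * g # hydrostatic increments
    p = p_top + np.concatenate(([0.0], np.cumsum(dp)))
    return z, s_gas, p

# Illustrative numbers: 20 m column, 30% integrated gas saturation
z, s_gas, p = reconstruct_column(S=0.3, H=20.0, p_top=2.0e7,
                                 rho_gas=80.0, rho_water=1100.0)
```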
Item Open Access
Stochastic model comparison and refinement strategies for gas migration in the subsurface (Stuttgart: Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2023)
Banerjee, Ishani; Nowak, Wolfgang (Prof. Dr.-Ing.)

Gas migration in the subsurface, a multiphase flow in a porous-medium system, is a problem of environmental concern and is also relevant for subsurface gas storage in the context of the energy transition. Knowing and understanding the flow paths of these gases in the subsurface is essential for efficient monitoring, remediation or storage operations. On the one hand, laboratory gas-injection experiments help gain insights into the processes involved in these systems. On the other hand, numerical models help test the mechanisms observed and inferred from the experiments and then make useful predictions for real-world engineering applications. Both continuum and stochastic modelling techniques are used to simulate multiphase flow in porous media. In this thesis, I use a stochastic discrete growth model: the macroscopic Invasion Percolation (IP) model. IP models have the advantages of simplicity and computational inexpensiveness over complex continuum models. Local pore-scale changes dominate the flow of gas in water-saturated porous media, and IP models are especially favourable for these multi-scale systems because simulating them with continuum models can be extremely computationally demanding. Yet, despite offering a computationally inexpensive way to simulate multiphase flow in porous media, only very few studies have compared IP-model results to actual laboratory experimental image data. One reason might be that IP models lack a notion of experimental time; they only have an integer counter for simulation steps that implies a time order. The few existing experiment-to-model comparison studies have used perceptual similarity or spatial moments as comparison measures. Perceptual comparison between model and experimental images is tedious and non-objective, while comparing spatial moments of model and experimental images can produce misleading results because of the loss of information from the data. In this thesis, an objective and quantitative comparison method is developed and tested that overcomes the limitations of these traditional approaches. The first step involves volume-based time-matching between real-time experimental data and IP-model outputs. This is followed by using the (Diffused) Jaccard coefficient to evaluate the quality of the fit. The fit between the images from the models and experiments can be checked across various scales by varying the extent of blurring in the images. Numerical model predictions for sparsely known systems (like gas flow systems) suffer from high conceptual uncertainty. In the literature, numerous versions of IP models, differing in their underlying hypotheses, have been used for simulating gas flow in porous media. Moreover, gas-injection experiments belong to continuous, transitional or discontinuous gas flow regimes, depending on the gas flow rate and the nature of the porous medium. The literature suggests that IP models are well suited to the discontinuous gas flow regime; the other flow regimes have not been explored. Using the above-mentioned method, four macroscopic IP model versions are compared in this thesis against data from nine gas-injection experiments in the transitional and continuous gas flow regimes. This model inter-comparison helps assess the potential of these models in the unexplored regimes and identify the sources of model conceptual uncertainty. Shifting the focus to parameter uncertainty, Bayesian Model Selection is a standard statistical procedure for systematically and objectively comparing different model hypotheses by computing the Bayesian Model Evidence (BME) against test data. BME is the likelihood of a model producing the observed data, given the prior distribution of its parameters. Computing BME can be challenging: exact analytical solutions require strong assumptions; mathematical approximations (information criteria) are often strongly biased; and assumption-free numerical methods (like Monte Carlo) are computationally infeasible for large data sets. In this thesis, a BME-computation method is developed that allows BME to be used as a ranking criterion in such otherwise infeasible scenarios: the Method of Forced Probabilities for extensive data sets and Markov-chain models. In this method, the direction of evaluation is swapped: instead of comparing thousands of model runs on random model realizations with the observed data, the model is forced to reproduce the data in each time step, and the individual probabilities of the model following these exact transitions are recorded. This is a fast, accurate and exact method for calculating BME for IP models, which exhibit the Markov-chain property, and for complete "atomic" data. The analysis results obtained using the methods and tools developed in this thesis help identify the strengths and weaknesses of the investigated IP model concepts, which further aids model development and refinement efforts for predicting gas migration in the subsurface. The gained insights also foster improved experimental methods. These tools and methods are not limited to gas flow systems in porous media but can be extended to any system involving raster outputs.
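The forced evaluation direction can be sketched in a few lines. This is my own minimal formulation, not the thesis code: for a Markov-chain model and completely observed "atomic" data, the likelihood of the data under one parameter set is simply the product of the one-step transition probabilities along the observed trajectory, and averaging that likelihood over prior samples yields the BME without any random model realizations.

```python
import numpy as np

# Hedged sketch of the forced-probabilities idea: force the model through the
# observed state sequence and accumulate the log-probability of each step.

def log_bme(trajectory, transition_prob, prior_samples):
    """transition_prob(x_now, x_next, theta) -> probability of that step."""
    log_liks = []
    for theta in prior_samples:
        ll = sum(np.log(transition_prob(a, b, theta))
                 for a, b in zip(trajectory[:-1], trajectory[1:]))
        log_liks.append(ll)
    m = max(log_liks)                   # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(np.array(log_liks) - m)))

# Toy two-state chain whose stay-probability is the uncertain parameter:
stay = lambda a, b, p: p if a == b else 1.0 - p
data = [0, 0, 1, 1, 1, 0]
prior = np.random.default_rng(0).uniform(0.1, 0.9, 1000)
print(log_bme(data, stay, prior))
```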
Item Open Access
A methodology for validation of a radar simulation for virtual testing of autonomous driving (2023)
Ngo, Anthony; Resch, Michael M. (Prof. Dr.)

Autonomous driving offers great potential for reducing the number of accidents as well as optimizing traffic flow. The safety validation of such an autonomous system is an extremely difficult problem, and new approaches are needed because the conventional statistical safety proof based on field testing is not feasible. The combination of real-world and simulation-based tests is a promising approach to significantly reduce the validation effort of autonomous driving. As environment sensors such as lidar, camera and radar are key technologies for a self-driving vehicle, they have to be validated before virtual tests using synthetically generated sensor data can be relied upon. Radar in particular has traditionally been one of the most complex sensors to model. Since a sensor simulation is an approximation of the real sensor, a discrepancy between real sensor measurements and synthetic data can be assumed. However, no systematic and sound method exists for validating a sensor model, especially for radar models. Therefore, this work makes several contributions to address this problem, with the objective of gaining an understanding of the capabilities and limitations of sensor simulation for virtual testing of autonomous driving. Considering that high-fidelity radar simulations face challenges regarding the required execution time, a sensitivity analysis approach is introduced with the goal of identifying the sensor effects that have the biggest impact on a downstream sensor data processing algorithm. In this way, the modeling effort can be focused on the components most important in terms of fidelity, while minimizing the overall computation time required. Furthermore, a novel machine-learning-based metric is proposed for evaluating the accuracy of synthetic radar data. By learning the latent features that distinguish real and simulated radar point clouds, the developed metric is demonstrated to outperform conventional metrics in its capability to measure characteristic differences. Additionally, after training, this removes the need for real radar measurements as a reference when evaluating the fidelity of a sensor simulation. Moreover, a multi-layered evaluation approach is developed to measure the gap between radar simulation and reality, consisting of an explicit and an implicit sensor model evaluation. The former directly assesses the realism of the simulated data, whereas the latter refers to an evaluation of a subsequent perception application. It is shown that introducing multiple levels of evaluation reveals the existing discrepancies in detail and allows the sensor model fidelity to be measured accurately across different scenarios in a holistic manner.
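A hedged sketch of the classifier-as-metric idea follows (this is not the dissertation's learned model; the features and score definition are illustrative assumptions): train a classifier to separate real from simulated radar point clouds. If even a capable classifier stays near chance level (accuracy 0.5), the synthetic data is hard to distinguish from reality, so 2 * (1 - accuracy) can be read as a fidelity score between 0 (trivially separable) and 1 (indistinguishable).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def summary_features(cloud):
    """cloud: (N, 3) array, e.g. x, y and radar cross section per detection."""
    return np.concatenate([cloud.mean(axis=0), cloud.std(axis=0), [len(cloud)]])

def fidelity_score(real_clouds, sim_clouds):
    # label real point clouds 1 and simulated ones 0, then cross-validate
    X = np.array([summary_features(c)
                  for c in list(real_clouds) + list(sim_clouds)])
    y = np.array([1] * len(real_clouds) + [0] * len(sim_clouds))
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    return max(0.0, 2.0 * (1.0 - acc))   # 1.0 = indistinguishable
```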
Item Open Access
Learning structured models for active planning: beyond the Markov paradigm towards adaptable abstractions (2018)
Lieck, Robert; Toussaint, Marc (Prof. Dr.)

Item Open Access
Behavior of concrete structures subjected to static and dynamic loading after fire exposure (2021)
Lacković, Luka; Ožbolt, Joško (Prof. Dr.-Ing. habil.)

The resistance of concrete structures exposed to extreme loading conditions such as explosion, impact, industrial accidents, tsunami, earthquake or their combination is one of the major topics in research today. Such loading conditions are characterized by high loading rates, often acting in conjunction with fire exposure. Especially vulnerable are structures located in seismically active areas with a high level of urbanization and proximity to HAZMAT landfills, which additionally exacerbate fire conflagrations. The behavior of concrete changes significantly when it is exposed to elevated temperatures, resulting in a decrease of its mechanical properties. In reinforced concrete (RC) exposed to high temperature, the two constituents, steel and concrete, respond thermally at the same time, and both responses should be considered in the analysis. It is also known that the resistance, crack pattern and failure mode of concrete are strongly influenced by the loading rate, and the dynamic response of RC structures previously exposed to fire changes significantly compared to that of initially undamaged RC structures. The main objective of the present work is to further improve the existing rate-sensitive thermo-mechanical model for concrete through the following: (i) the implementation of the experimentally obtained thermal dependence of concrete fracture energy in the thermo-mechanical model, (ii) the calculation of the thermally dependent mechanical properties of concrete by means of a nonlocal (average) temperature and (iii) a parametric study on fastening elements and RC frames investigating the interaction between thermally induced damage and the mechanical behavior of structures. The experimental investigations in the present work, performed on small and mid-sized concrete beams, indicate that concrete fracture energy declines with increasing temperature. This dependence is implemented in the thermo-mechanical model, and it is shown that the decrease of fracture energy has a relatively mild influence on the reaction values in terms of loading rate; its effect on the fracture patterns and reaction-time histories, however, is more significant. The influence of the nonlocal temperature is validated against experimental results on RC frames that had been thermally pre-damaged and subsequently loaded by impact. Currently, there are almost no models that can realistically predict structural behavior at this level of complexity. Furthermore, a parametric study is carried out to show the influence of preloading of a single-headed stud anchor and of anchor groups with two and four studs on the residual concrete edge failure capacity after fire exposure. The anchors are exposed to fire and loaded in shear, perpendicular to the free edge of the concrete member, up to failure, in both the hot and the cold state (after cooling). The influence of different geometry configurations and initial conditions, such as the edge distance, embedment depth, anchor diameter and duration of fire, on the load-bearing behavior of the anchors is investigated. It is demonstrated that preloading has a strong negative influence on the residual load-bearing capacity of the concrete. Finally, a numerical parametric study is performed to investigate the influence of fire duration and loading rate on the resistance of RC frames. The response of RC structures strongly depends on whether they are loaded in the hot or in the residual (cold) state, i.e. after being naturally cooled down to ambient temperature. Furthermore, an extensive numerical investigation of the influence of post-earthquake fire on the residual capacity of RC frames with and without ductile detailing is conducted. The numerical investigation encompasses the validation of the thermo-mechanical model in terms of temperature distributions, thermal deflections and load-bearing capacity against test data, and a subsequent parametric analysis with different levels of fire exposure ranging from 15 to 120 min.
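A minimal sketch of the nonlocal (average) temperature concept follows; the weight function and radius are my own illustrative choices, not the thesis formulation. The thermally dependent material properties at a point are evaluated from a weighted average of the temperature field over a neighbourhood of characteristic radius R rather than from the local value alone, which smooths steep thermal gradients seen by the material model.

```python
import numpy as np

# Nonlocal averaging sketch: a bell-shaped weight with compact support, a
# common choice in nonlocal continuum models.

def nonlocal_temperature(points, T, R):
    """points: (N, d) nodal coordinates; T: (N,) local nodal temperatures."""
    T_bar = np.empty_like(T)
    for i, x in enumerate(points):
        r2 = np.sum((points - x) ** 2, axis=1)
        w = np.maximum(0.0, 1.0 - r2 / R**2) ** 2   # zero beyond radius R
        T_bar[i] = np.dot(w, T) / np.sum(w)
    return T_bar

# 1D bar with a sharp thermal front: averaging softens the jump
x = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
T = np.where(x[:, 0] < 0.5, 800.0, 20.0)
print(nonlocal_temperature(x, T, R=0.05)[48:53])
```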
Item Open Access
Resilience of quantum optimization algorithms (2024)
Ji, Yanjun; Polian, Ilia (Prof. Dr.)

Quantum optimization algorithms (QOAs) show promise in surpassing classical methods for solving complex problems. However, their practical application is limited by the sensitivity of quantum systems to noise. This study addresses this challenge by investigating the resilience of QOAs and developing strategies to enhance their performance and robustness on noisy quantum computers. We begin by establishing an evaluation framework to assess the performance of QOAs under various conditions, including simulated noise-free and error-modeled environments as well as real noisy hardware, providing a foundation for guiding the development of enhancement strategies. We then propose techniques to improve the performance of algorithms on near-term quantum devices characterized by limited qubit connectivity and noisy operations. Our study introduces an effective compilation process that maximizes the utilization of classical and quantum resources. To overcome the restricted connectivity of hardware, we develop an algorithm-oriented qubit mapping approach that bridges the gap between heuristic and exact methods, providing scalable and optimal solutions. Additionally, we demonstrate, for the first time, selective optimization of quantum circuits on real hardware by optimizing only gates implemented with low-quality native gates, providing significant insights for large-scale quantum computing. We also investigate error mitigation strategies and their dependence on hardware features and algorithm implementation details, emphasizing the synergistic effects of error mitigation and circuit design. While error mitigation can suppress the effects of noise, hardware quality and circuit design are ultimately more critical for achieving high performance. Building upon these insights, we explore the co-optimization of algorithm design and hardware implementation to achieve optimal performance and resilience. By optimizing gate sequences and parameters at the algorithmic level and minimizing error-prone two-qubit gates during compilation, we demonstrate significant improvements in QOA performance. Finally, we explore the practical application of QOAs to real-world problems, emphasizing the importance of optimizing parameters in problem instances to identify optimal solutions. With extensive experiments conducted on real devices, this dissertation makes a substantial contribution to the field of quantum optimization, providing both theoretical foundations and practical strategies for addressing the challenges posed by near-term quantum hardware. Our findings pave the way for the realization of practical quantum computing applications and unlock the full potential of QOAs.
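To illustrate the exact end of the qubit-mapping spectrum in miniature, here is a toy sketch (assumptions mine, not the dissertation's mapper): it enumerates every placement of logical onto physical qubits and keeps the one that puts the most two-qubit gates on directly coupled hardware pairs. A real algorithm-oriented mapper also inserts swap gates and must scale far beyond brute force; this only shows the optimal initial-placement subproblem.

```python
from itertools import permutations

def best_initial_mapping(two_qubit_gates, coupling, n_physical):
    """Exhaustively find the placement maximizing directly executable gates."""
    coupled = {frozenset(edge) for edge in coupling}
    logical = sorted({q for gate in two_qubit_gates for q in gate})
    best, best_hits = None, -1
    for placement in permutations(range(n_physical), len(logical)):
        phys = dict(zip(logical, placement))      # logical -> physical
        hits = sum(frozenset((phys[a], phys[b])) in coupled
                   for a, b in two_qubit_gates)
        if hits > best_hits:
            best, best_hits = phys, hits
    return best, best_hits

# Linear 4-qubit device 0-1-2-3 and a triangle of interactions: at most two
# of the three gates can be made local by the initial placement alone.
gates = [(0, 1), (1, 2), (0, 2)]
coupling = [(0, 1), (1, 2), (2, 3)]
print(best_initial_mapping(gates, coupling, n_physical=4))
```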