Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1


Search Results

Now showing 1 - 10 of 10
  • Item (Open Access)
    Rigorous compilation for near-term quantum computers
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible to resolve by traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and a low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations that must be incorporated by compilation methods targeting near-term quantum computers in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements as they exactly explore the solution space and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are included in rigorous compilation methods to investigate each aspect of the imposed requirements, i.e. the number of qubits, connectivity of qubits, duration and incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. 
Experimental results demonstrate the ability of the developed rigorous compilation methods to extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits as well as by reducing the duration and incurred errors of performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime yielded a reduction in the required number of qubits of up to 5x and in the result error of up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation to distinct separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
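The qubit-reuse idea above can be illustrated with a small sketch (hypothetical helper, made-up live ranges; the thesis uses exact, solver-based formulations rather than this greedy interval view): a logical qubit that has been measured can be reset and reused, so the number of physical qubits needed equals the peak number of simultaneously live logical qubits.

```python
# Sketch: qubit reuse as interval scheduling. Each logical qubit is live from
# its first gate to its measurement; qubits whose live ranges do not overlap
# can share one physical qubit via measure-and-reset.

def min_physical_qubits(live_ranges):
    """Return the number of physical qubits needed when non-overlapping
    logical qubits may be reused (peak of the live-qubit count)."""
    events = []
    for start, end in live_ranges:
        events.append((start, 1))      # qubit becomes live
        events.append((end + 1, -1))   # freed after its last gate
    events.sort()
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# Four logical qubits, but the first two are measured before the last two start:
print(min_physical_qubits([(0, 2), (1, 3), (4, 6), (5, 7)]))  # 2
```

With fully overlapping lifetimes the count degrades to one physical qubit per logical qubit, which is why exploiting circuit structure matters.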
  • Item (Open Access)
    Über die Lösung der Navier-Stokes-Gleichungen mit Hilfe der Moore-Penrose-Inversen des Laplace-Operators im Vektorraum der Polynomkoeffizienten
    (2024) Große-Wöhrmann, Bärbel; Resch, Michael (Prof. Dr.-Ing.)
Standard numerical methods for solving partial differential equations are based on a spatial discretization of the computational domain. Their performance and scalability on modern massively parallel high-performance computers depend on the availability of efficient numerical methods for solving linear systems of equations. In view of fundamental challenges, the development of new solution approaches appears worthwhile. In this work, I present a polynomial approach for solving partial differential equations that does not rely on a spatial discretization and that enables the decoupling of the Navier-Stokes equations by means of the Moore-Penrose inverse of the Laplace operator. The degree of the polynomials is not fundamentally bounded, so that a high spatial resolution can be achieved.
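The core idea can be sketched in a 1D toy setting (an illustrative assumption; the thesis treats the full Laplace operator in the Navier-Stokes context): the second-derivative operator is a linear map on polynomial coefficients, and its Moore-Penrose pseudoinverse yields minimum-norm polynomial preimages.

```python
import numpy as np

# Sketch: the second-derivative operator on polynomial coefficients and its
# Moore-Penrose pseudoinverse (1D toy, degree chosen for illustration).

def d2_matrix(n):
    """Matrix of d^2/dx^2 acting on coefficients (c_0, ..., c_n) of
    p(x) = sum_k c_k x^k, within the same (n+1)-dimensional space."""
    L = np.zeros((n + 1, n + 1))
    for k in range(n - 1):
        L[k, k + 2] = (k + 2) * (k + 1)   # (x^{k+2})'' = (k+2)(k+1) x^k
    return L

n = 4
L = d2_matrix(n)
L_pinv = np.linalg.pinv(L)                # Moore-Penrose pseudoinverse

# p''(x) = 2 has the minimum-norm polynomial solution p(x) = x^2:
b = np.zeros(n + 1)
b[0] = 2.0
c = L_pinv @ b
print(np.round(c, 6))                     # coefficient of x^2 is 1
```

Because the right-hand side lies in the range of the operator, `L @ L_pinv @ b` reproduces `b` exactly; the pseudoinverse selects the preimage of minimal coefficient norm.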
  • Item (Open Access)
    Development and application of PICLas for combined optic-/plume-simulation of ion-propulsion systems
    (2019) Binder, Tilman; Fasoulas, Stefanos (Prof. Dr.-Ing.)
Electric propulsion systems are an efficient option for altitude/attitude control and orbit transfers of spacecraft. One example is the gridded ion thruster, which ionizes the propellant and accelerates the ions of the generated plasma through a high-voltage grid system. This work deals with the numerical simulation of the plasma flow starting near the grid system in the ionization chamber and leaving the thruster at high velocity. These simulations give direct insight into the modeled physical interrelationships and can be used to investigate questions arising in the industrial development process of ion propulsion systems. The required simulation method is challenging due to the high degree of flow rarefaction and the plasma state itself, which includes freely moving ions and electrons. Applicable simulation methods follow a particle-based, gas-kinetic approach, such as Particle-in-Cell (PIC) for the simulation of electromagnetic interactions and Direct Simulation Monte Carlo (DSMC) for inter-particle collisions. The effects resulting from the finite size of a real system can only be investigated by simulating the complete, three-dimensional thruster geometry, which requires a large and complex simulation domain. Acceptable simulation times are realized by extending and using the framework of the coupled PIC-DSMC code PICLas in combination with high-performance computing systems.
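As a rough illustration of the particle-based approach, here is a minimal leapfrog particle push of the kind at the core of PIC codes (a toy with a prescribed electric field and hypothetical parameters; PICLas' actual mover also handles magnetic fields, mesh-based field interpolation and charge deposition):

```python
# Sketch: staggered leapfrog push of one charged particle in a given
# electric field E(x). Velocity is shifted back half a step so that
# positions and velocities interleave in time.

def leapfrog_push(x, v, q_over_m, E, dt, steps):
    """Advance position x and velocity v for `steps` leapfrog steps."""
    v -= 0.5 * dt * q_over_m * E(x)           # shift v back by half a step
    for _ in range(steps):
        v += dt * q_over_m * E(x)             # kick: accelerate half-step velocity
        x += dt * v                           # drift: advance position
    return x, v + 0.5 * dt * q_over_m * E(x)  # re-synchronize v with x

# Constant field, q/m = 1, a = 2: leapfrog is exact for constant acceleration,
# so after t = 1 s we expect x = 0.5*a*t^2 and v = a*t.
x, v = leapfrog_push(0.0, 0.0, 1.0, lambda x: 2.0, 0.1, 10)
print(x, v)
```

For a constant acceleration the scheme reproduces the analytic trajectory to floating-point accuracy, which makes it a convenient sanity check for a particle mover.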
  • Item (Open Access)
    Multiscale modeling and stability analysis of soft active materials : from electro- and magneto-active elastomers to polymeric hydrogels
    (Stuttgart : Institute of Applied Mechanics, 2023) Polukhov, Elten; Keip, Marc-André (Prof. Dr.-Ing.)
This work is dedicated to the modeling and stability analysis of stimuli-responsive, soft active materials within a multiscale variational framework. In particular, composite electro- and magneto-active polymers and polymeric hydrogels are under consideration. When electro- and magneto-active polymers (EAP and MAP) are fabricated in the form of composites, they comprise at least two phases: a polymeric matrix and embedded electric or magnetic particles. As a result, the obtained composite is soft, highly stretchable, and fracture-resistant like a polymer and undergoes stimuli-induced deformation due to the interaction of the particles. By designing the microstructure of EAP or MAP composites, a compressive or a tensile deformation can be induced under electric or magnetic fields, and the coupling response of the composite can be enhanced. Hence, these materials have found applications as sensors, actuators, energy harvesters, absorbers, and soft, programmable, smart devices in various areas of engineering. Similarly, polymeric hydrogels are also stimuli-responsive materials. They undergo large volumetric deformations due to the diffusion of a solvent into the polymer network of the hydrogel. In this case, the obtained material shows the characteristic behavior of both polymer and solvent. Therefore, these materials can also be designed as composites to enhance the response further. Since hydrogels are biocompatible materials, they have found applications as contact lenses, wound dressings, and drug encapsulators and carriers in bio-medicine, alongside applications similar to those of electro- and magneto-active polymers.
All of the above-mentioned favorable features of these materials, as well as their application possibilities, make it necessary to develop mathematical models and numerical tools to simulate their response, both to design pertinent microstructures for particular applications and to understand the observed complex patterns such as wrinkling, creasing, snapping, localization or pattern transformations, among others. These instabilities are often considered failure points of materials; however, many recent works take advantage of instabilities for smart applications. The investigation of these instabilities and the prediction of their onset and mode are among the main goals of this work. In this sense, the thesis is organized into three main parts. The first part is devoted to the state of the art in the development, fabrication, and modeling of soft active materials as well as the continuum mechanical description of magneto-electro-elasticity. The second part is dedicated to multiscale instabilities in electro- and magneto-active polymer composites within a minimization-type variational homogenization setting. This means that the highly heterogeneous problem is not resolved on a single scale, which would be computationally inefficient, but is replaced by an equivalent homogeneous problem. The effective response of the macroscopic homogeneous problem is determined by solving a microscopic representative volume element that includes all the geometrical and material non-linearities. To bridge the two scales, the Hill-Mandel macro-homogeneity condition is utilized. Within this framework, we investigate both macroscopic and microscopic instabilities. The former are important not only from a physical point of view but also from a computational point of view, since macroscopic stability (strong ellipticity) is necessary for the existence of minimizers at the macroscopic scale.
Similarly, the investigation of the latter instabilities is also important, as it determines the pattern transformations at the microscale due to external action. In this way, the critical homogenization domain required for computing accurate effective results is also determined. Both investigations are carried out for various composite microstructures, and it is found that instabilities play a crucial role in the response of the materials. Therefore, they must be considered when designing EAP and MAP composites as well as for providing reliable computations. The third part of the thesis is dedicated to polymeric hydrogels. Here, we develop a minimization-based homogenization framework to determine the response of transient periodic hydrogel systems. We demonstrate the prevailing size effect as a result of the transient microscopic problem, which is investigated for various microstructures. Exploiting the elements of the proposed framework, we explore material and structural instabilities in single- and two-phase hydrogel systems. We observe complex, experimentally documented as well as novel 2D pattern transformations, such as diamond-plate patterns with and without wrinkling of internal surfaces for perforated microstructures, and 3D pattern transformations in thin reinforced hydrogel composites. The results indicate that the obtained patterns can be controlled by tuning the material and geometrical parameters of the composite.
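The scale-bridging idea can be illustrated with the simplest conceivable "RVE", a two-phase bar loaded in series (illustrative moduli and volume fractions, not values from the thesis): the stress is uniform across the phases, the Hill-Mandel condition holds, and the effective modulus is the Reuss (harmonic) average.

```python
# Sketch: 1D two-phase laminate in series as a toy homogenization problem.

def effective_modulus_series(E_phases, fractions):
    """Reuss (harmonic) average: phases in series carry the same stress."""
    return 1.0 / sum(f / E for E, f in zip(E_phases, fractions))

E = [1.0, 4.0]          # phase moduli (made-up)
f = [0.5, 0.5]          # volume fractions (made-up)
E_eff = effective_modulus_series(E, f)

# Hill-Mandel check: with uniform stress, <sigma*eps> equals <sigma>*<eps>,
# i.e. the volume-averaged energy matches the product of the averages.
sigma = 1.0
eps = [sigma / Ei for Ei in E]
avg_eps = sum(fi * ei for fi, ei in zip(f, eps))
avg_energy = sum(fi * sigma * ei for fi, ei in zip(f, eps))
print(E_eff, avg_energy == sigma * avg_eps)
```

In a real RVE the microscopic fields are non-uniform and the averaging must be verified numerically; this toy merely shows what the macro-homogeneity condition demands.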
  • Item (Open Access)
    Physics-informed regression of implicitly-constrained robot dynamics
    (2022) Geist, Andreas René; Allgöwer, Frank (Prof. Dr.-Ing.)
The ability to predict a robot’s motion through a dynamics model is critical for the development of fast, safe, and efficient control algorithms. Yet, obtaining an accurate robot dynamics model is challenging, as robot dynamics are typically nonlinear and subject to environment-dependent physical phenomena such as friction and material elasticities. The respective functions often cause analytical dynamics models to have large prediction errors. An alternative to analytical modeling is the identification of a robot’s dynamics through data-driven modeling techniques such as Gaussian processes or neural networks. However, purely data-driven algorithms require considerable amounts of data, which on a robotic system must be collected in real-time. Moreover, the information stored in the data, as well as the coverage of the system’s state space by the data, is limited by the controller that is used to obtain the data. To tackle the shortcomings of analytical dynamics modeling and data-driven modeling, this dissertation investigates and develops models in which analytical dynamics is combined with data-driven regression techniques. By combining prior structural knowledge from analytical dynamics with data-driven regression, physics-informed models show improved data efficiency and prediction accuracy compared to using the aforementioned modeling techniques in isolation.
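A minimal sketch of the residual-learning flavor of this combination (a hypothetical one-degree-of-freedom example, not the dissertation's estimators): the analytical model explains the inertial torque, and a least-squares fit learns only the unmodeled friction term from data.

```python
import numpy as np

# Sketch: physics-informed regression on a toy 1-DoF joint. The analytical
# part (known inertia) is subtracted from the measured torque, and only the
# residual - here a hypothetical viscous-friction term - is fit from data.

rng = np.random.default_rng(0)
qdd = rng.uniform(-1, 1, 200)           # joint accelerations
qd = rng.uniform(-1, 1, 200)            # joint velocities
I = 0.8                                 # known inertia (analytical prior)
b_true = 0.3                            # unknown friction coefficient
tau = I * qdd + b_true * qd             # "measured" torque (noise-free toy)

analytical = I * qdd                    # prediction of the prior model
residual = tau - analytical             # what the data-driven part must learn
b_hat = np.linalg.lstsq(qd[:, None], residual, rcond=None)[0][0]
print(round(b_hat, 3))
```

Because the prior already explains the dominant inertial term, the regression needs far fewer samples than fitting the full torque map from scratch, which is the data-efficiency argument made above.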
  • Item (Open Access)
    Physics-driven machine learning : from biomolecules to crystals
    (2024) Díaz Carral, Ángel; Schmauder, Siegfried (Prof. Dr. rer. nat. Dr. h. c.)
    Physical systems and their interactions exhibit inherent equivariance. In machine learning (ML), predicting quantities derived from these interactions follows two main approaches: constructing invariant scalar features as inputs to invariant models or employing equivariant models directly. This thesis focuses on the former, investigating feature extraction and data representation in the context of physics-driven machine learning (PDML). PDML leverages prior physical knowledge to construct descriptors that encode symmetries inherent in the data, thereby reducing dimensionality, enhancing interpretability, and improving generalization performance. The research addresses critical questions such as the limitations of physics-informed descriptors, the feasibility of dimensionality reduction without compromising prediction accuracy, the comparative performance of PDML against traditional ML methods, and the scalability of PDML in atomistic systems. Key investigations include: 1. Copper-based alloys: Combining molecular simulations and active learning (AL) to discover stable precipitate phases and assess mechanical properties. This involves density functional theory (DFT) simulations and the development of machine learning interatomic potentials (MLIPs) using moment tensor potentials (MTPs), leveraging invariant polynomials to model multi-component alloys. 2. Nanopore translocations: Improving DNA sequencing accuracy by training ML models on experimental ionic blockade data from DNA translocation through nanopores. The approach employs dimensionality reduction through a set of physical descriptors to efficiently classify nucleotide identities, with an emphasis on increasing readout accuracy and reducing model complexity. 3. High-Tc superconductivity: Proposing an effective PDML model to predict critical temperatures of superconductors by extracting key electronic and atomic features. 
Despite the reduced feature space, the model achieves high accuracy, offering a streamlined approach to predicting superconductor properties with minimal computational overhead. This work bridges the gap between machine learning and physics by embedding physical principles into ML feature representations, enhancing the ability to model, predict, and control complex physical systems with greater precision and efficiency. These advancements aim to unlock transformative applications and discoveries across a range of scientific and technological domains.
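The invariant-feature approach described above can be sketched with the simplest rotation- and translation-invariant descriptor, sorted pairwise distances (an illustrative stand-in; the thesis employs richer invariants such as moment tensor polynomials):

```python
import numpy as np

# Sketch: an invariant scalar descriptor for an atomic configuration.
# Sorted pairwise distances are unchanged by rotations, translations,
# and permutations of identical atoms.

def descriptor(positions):
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    iu = np.triu_indices(len(positions), k=1)   # upper triangle: each pair once
    return np.sort(d[iu])

atoms = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0]])

# Rotate 90 degrees about z and translate: the descriptor is unchanged.
R = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
moved = atoms @ R.T + np.array([5.0, -2.0, 3.0])
print(np.allclose(descriptor(atoms), descriptor(moved)))  # True
```

Feeding such invariant features to any standard regressor guarantees that the prediction respects the symmetry, which is exactly the dimensionality-reduction and generalization argument made above.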
  • Item (Open Access)
    Stable and mass-conserving high-dimensional simulations with the sparse grid combination technique for full HPC systems and beyond
    (2024) Pollinger, Theresa; Pflüger, Dirk (Prof. Dr.)
In the light of the ongoing climate crisis, mastering controlled plasma fusion has the potential to be one of the pivotal scientific achievements of the 21st century. To understand the turbulent fields in confined fusion devices, simulation has been and continues to be both an asset and a challenge. The main limiting factor to large-scale high-fidelity predictive simulations lies in the Curse of Dimensionality, which dominates all grid-based discretizations of plasmas based on the Vlasov-Poisson and Vlasov-Maxwell equations. In the full formulation, they result in six-dimensional grids and fine scales that need to be resolved, leading to a potentially intractable number of degrees of freedom. Typical approaches to this problem - coordinate transformations such as gyrokinetics, grid adaptation, restricting oneself to limited resolutions - do not directly address the Curse of Dimensionality, but rather work around it. The sparse grid combination technique, which forms the center of this work, is a multiscale approach that alleviates the curse of dimensionality for time-stepping simulations: multiple regular grid-based simulations are run and update each other’s information throughout the course of simulation time. The present thesis improves upon the previous state of the art of the combination technique in three ways: introducing conservation of mass and numerical stability through the use of better-suited multiscale basis functions, optimizing the code for large-scale HPC systems, and extending the combination technique to the widely-distributed setting. Firstly, this thesis analyzes the often-used hierarchical hat function from the viewpoint of biorthogonal wavelets, which makes it possible to replace the hierarchical hat function with other multiscale functions (such as the mass-conserving CDF wavelets) in a straightforward manner.
Numerical studies presented in the thesis show that this not only introduces conservation but also increases accuracy and avoids numerical instabilities - which previously were a major roadblock for large-scale Vlasov simulations with the combination technique. Secondly, the open-source framework DisCoTec was extended to scale the combination technique up to the available memory of entire supercomputing systems. DisCoTec is designed to wrap the combination technique around existing grid-based solvers and draws on the inherent parallelism of the combination technique. Among several other contributions, different communication-avoiding multiscale reduction schemes were developed and implemented in DisCoTec as part of this work. The scalability of the approach is demonstrated by an extensive set of measurements in this thesis: DisCoTec is shown to scale up to the full system size of four German supercomputers, including the three CPU-based Tier-0/Tier-1 systems. Thirdly, the combination technique was further extended to the widely-distributed setting, in which two HPC systems synchronously run a joint simulation. This is enabled by file transfer as well as sophisticated algorithms for assigning the different simulation instances to the systems, two of which were developed as part of this work. Through the resulting drastic reductions in the communication volume, tolerable transfer times for combination technique simulations on different HPC systems have been achieved for the first time. These three advances - improved numerical properties, efficient scaling up to full system sizes, and the possibility to extend a simulation beyond a single system - show the sparse grid combination technique to be a promising approach for future high-fidelity simulations of higher-dimensional problems, such as plasma turbulence.
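The dimensionality savings behind the combination technique can be made concrete with a small count of grid points in 2D (one common convention, shown with an illustrative level n; the thesis' simulations are higher-dimensional):

```python
# Sketch: 2D combination technique grid counts. Instead of one full grid of
# level n (about 2^(2n) points), one combines anisotropic grids of levels
# (l1, l2) with l1 + l2 = n (coefficient +1) and l1 + l2 = n - 1
# (coefficient -1).

def points(level):
    """Number of points of a 1D grid of the given level."""
    return 2 ** level + 1

def combination_grids(n):
    plus = [(l, n - l) for l in range(1, n)]            # l1 + l2 = n
    minus = [(l, n - 1 - l) for l in range(1, n - 1)]   # l1 + l2 = n - 1
    return plus, minus

n = 10
plus, minus = combination_grids(n)
sparse = sum(points(a) * points(b) for a, b in plus + minus)
full = points(n) ** 2
print(sparse, full)   # the combination grids hold far fewer points
```

Each component grid is regular and can be advanced by an existing solver, which is exactly the property DisCoTec exploits to parallelize across grids.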
  • Item (Open Access)
    Coarse grained hydrogels
    (2017) Richter, Tobias; Holm, Christian (Prof. Dr.)
  • Item (Open Access)
    Resilience of quantum optimization algorithms
    (2024) Ji, Yanjun; Polian, Ilia (Prof. Dr.)
Quantum optimization algorithms (QOAs) show promise in surpassing classical methods for solving complex problems. However, their practical application is limited by the sensitivity of quantum systems to noise. This study addresses this challenge by investigating the resilience of QOAs and developing strategies to enhance their performance and robustness on noisy quantum computers. We begin by establishing an evaluation framework to assess the performance of QOAs under various conditions, including simulated noise-free and error-modeled environments, as well as real noisy hardware, providing a foundation for guiding the development of enhancement strategies. We then propose innovative techniques to improve the performance of algorithms on near-term quantum devices characterized by limited qubit connectivity and noisy operations. Our study introduces an effective compilation process that maximizes the utilization of classical and quantum resources. To overcome the restricted connectivity of hardware, we develop an algorithm-oriented qubit mapping approach that bridges the gap between heuristic and exact methods, providing scalable and optimal solutions. Additionally, we demonstrate, for the first time, selective optimization of quantum circuits on real hardware by optimizing only gates implemented with low-quality native gates, providing significant insights for large-scale quantum computing. We also investigate error mitigation strategies and their dependence on hardware features and algorithm implementation details, emphasizing the synergistic effects of error mitigation and circuit design. While error mitigation can suppress the effects of noise, hardware quality and circuit design are ultimately more critical for achieving high performance. Building upon these insights, we explore the co-optimization of algorithm design and hardware implementation to achieve optimal performance and resilience.
By optimizing gate sequences and parameters at the algorithmic level and minimizing error-prone two-qubit gates during compilation, we demonstrate significant improvements in QOA performance. Finally, we explore the practical application of QOAs in real-world problems, emphasizing the importance of optimizing parameters in problem instances to identify optimal solutions. With extensive experiments conducted on real devices, this dissertation makes a substantial contribution to the field of quantum optimization, providing both theoretical foundations and practical strategies for addressing the challenges posed by near-term quantum hardware. Our findings pave the way for the realization of practical quantum computing applications and unlock the full potential of QOAs.
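A tiny sketch of the kind of cost model behind qubit mapping (hypothetical line topology and gate list; the algorithm-oriented approach described above is exact rather than this heuristic count): each two-qubit gate between physical qubits at coupling-graph distance d requires roughly d-1 SWAP insertions to become executable.

```python
from collections import deque

# Sketch: SWAP-cost estimate of a qubit mapping on a coupling graph.

def distances(coupling, start):
    """BFS shortest-path distances from `start` on the coupling graph."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in coupling[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def swap_cost(coupling, mapping, gates):
    cost = 0
    for a, b in gates:                       # logical two-qubit gates
        d = distances(coupling, mapping[a])[mapping[b]]
        cost += max(d - 1, 0)                # SWAPs to make the pair adjacent
    return cost

line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # 4-qubit line topology
identity = {q: q for q in range(4)}
print(swap_cost(line, identity, [(0, 1), (0, 3), (1, 2)]))  # 0 + 2 + 0 = 2
```

Minimizing such a cost over all mappings (and over time, as the mapping evolves) is what makes the problem hard and why exact and heuristic methods are traded off.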
  • Item (Open Access)
    Adaptive error control for stratospheric long-distance optical links
    (2024) Parthasarathy, Swaminathan; Kirstädter, Andreas (Prof. Dr.-Ing.)
Free-space optical (FSO) communication plays a crucial role in aerospace technology, utilizing lasers to establish high-speed, wireless connections over long distances. FSO surpasses conventional RF wireless technology in various aspects and supports high-data-rate connectivity for services such as Internet access, data transfer, voice communication, and image transfer. High-Altitude Platforms (HAPs) have emerged as ideal hosts for FSO communication networks, offering ultra-high data rates for applications like high-speed Internet, video conferencing, telemedicine, smart cities, and autonomous driving. FSO via HAPs ensures minimal latency, making it suitable for real-time tasks like remote surgery and autonomous vehicle control. The swift, long-distance communication links with low delays make FSO-equipped HAPs ideal for RF-congested areas, providing cost-effective solutions in remote regions and contributing to environmental monitoring. This thesis explores the use of adaptive code-rate Hybrid Automatic Repeat Request (HARQ) methods and channel state information (CSI) to improve the transmission efficiency of FSO links between HAPs. The study considers channel impairments such as atmospheric turbulence and static pointing errors, focusing on the weak-fluctuation regime of atmospheric turbulence. It investigates the reciprocal behavior of bidirectional FSO channels to improve performance efficiency, providing evidence of channel reciprocity. The research proposes using HARQ, an adaptive Reed-Solomon (RS) code-rate technique, and different CSI types to address these impairments. Simulations of various scenarios are used to evaluate these methods, providing insight into the efficiency of HARQ protocols in inter-HAP FSO links, the importance of different CSI types in adaptive-rate HARQ, and possible ways to improve system efficiency.
The thesis examines the channel model for inter-HAP FSO links in detail, taking atmospheric conditions and static pointing errors into account. The channel is modeled as a lognormal fading channel under a weak-fluctuation regime. The principle of channel reciprocity and the measures used to quantify it are discussed, providing a foundational understanding for the subsequent investigations. Forward Error Correction (FEC) schemes, with a specific emphasis on the Reed-Solomon (RS) scheme, and various Automatic Repeat reQuest (ARQ) schemes are thoroughly examined. A meticulous comparison of different ARQ schemes shows that Selective Repeat ARQ (SR-ARQ) is the most efficient for high-error-rate channels, making it the preferred choice for inter-HAP FSO channels; conversely, Stop-and-Wait ARQ (SW-ARQ) and Go-Back-N ARQ (GBN-ARQ) are found to be less suitable for these channels. An innovative approach is introduced that leverages various types of channel state information (CSI) to adjust the Reed-Solomon FEC code rate. Four types of CSI are employed: perfect CSI (P-CSI), reciprocal CSI (R-CSI), delayed CSI (D-CSI), and fixed mean CSI (F-CSI). The adaptation of the Reed-Solomon FEC code rate, combined with Selective Repeat ARQ, is explored, and the optimal power selection is identified through rigorous analysis. Simulation models implemented in OMNeT++ are presented, covering the inter-HAP channel and the event-based selective-repeat HARQ model. The study demonstrates reciprocity over the longest recorded ground-to-ground bidirectional FSO link, holding promise for mitigating signal scintillation caused by atmospheric turbulence. It evaluates the performance of different ARQ protocols and adaptive HARQ schemes in inter-HAP FSO communication systems.
The results show how channel state information, atmospheric turbulence, and pointing errors affect system performance. They also suggest ways to improve system efficiency, such as CSI prediction and soft combining. These findings offer valuable insights for the design and optimization of ARQ and HARQ schemes in inter-HAP FSO communication systems and suggest promising avenues for future research.
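A minimal sketch of CSI-driven Reed-Solomon rate adaptation (illustrative margin and block length, not the thesis' parameters): an RS(n, k) code corrects t = (n - k)/2 symbol errors, so a noisier channel estimate leads to a lower code rate.

```python
# Sketch: choosing a Reed-Solomon code rate from channel state information.
# Given an estimated symbol error probability (from CSI), pick the largest k
# whose correction capability covers the expected errors plus a margin.

def adapt_rs_rate(n, symbol_error_prob, margin=2.0):
    expected_errors = symbol_error_prob * n
    t_needed = int(margin * expected_errors) + 1   # hedge against variance
    k = n - 2 * t_needed                           # 2t parity symbols
    return max(k, 1)

n = 255                                            # common RS block length
for p in (0.001, 0.01, 0.05):
    k = adapt_rs_rate(n, p)
    print(p, k, round(k / n, 3))   # worse channel -> lower code rate
```

With accurate (e.g. reciprocal) CSI the margin can be kept small, preserving throughput; with delayed or fixed-mean CSI a larger margin is needed, which is one way to read the comparison of CSI types above.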