Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1


Search Results

Now showing 1 - 10 of 21
  • Rigorous compilation for near-term quantum computers (Open Access)
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible to resolve by traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and a low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations that must be incorporated by compilation methods targeting near-term quantum computers in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements as they exactly explore the solution space and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are included in rigorous compilation methods to investigate each aspect of the imposed requirements, i.e. the number of qubits, connectivity of qubits, duration and incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. 
Experimental results demonstrate the ability of the developed rigorous compilation methods to extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, as well as by reducing the duration and incurred errors of performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime reduced the required number of qubits by up to 5x and the result error by up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation into separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
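The qubit-reuse result above can be illustrated with a small, deliberately simplified sketch (my own illustration, not the rigorous method from the thesis, which explores the solution space exactly): model each logical qubit as a live interval between its first and last operation; once a qubit has been measured it can be reset and reused, so the number of physical qubits needed in this model is the maximum number of simultaneously live logical qubits.

```python
# Hypothetical illustration of qubit reuse via live intervals.
# Each logical qubit is live from its first to its last operation
# (given as time steps); after measurement it can be reset and reused.
def physical_qubits_needed(intervals):
    """Minimum physical qubits in this simplified model = maximum
    number of simultaneously live logical qubits (event sweep)."""
    events = []
    for start, end in intervals:
        events.append((start, 1))      # qubit becomes live
        events.append((end + 1, -1))   # freed after its last operation
    live = peak = 0
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return peak

# Four logical qubits, but at most two are live at any time step,
# so two physical qubits suffice in this model.
print(physical_qubits_needed([(0, 2), (1, 3), (4, 6), (5, 7)]))  # → 2
```

The thesis's method additionally accounts for gate dependencies and errors; this sketch only captures why reuse can shrink the qubit count so drastically.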
  • General mathematical model for the period chirp in interference lithography (Open Access)
    (2023) Bienert, Florian; Graf, Thomas; Abdou Ahmed, Marwan
  • Über die Lösung der Navier-Stokes-Gleichungen mit Hilfe der Moore-Penrose-Inversen des Laplace-Operators im Vektorraum der Polynomkoeffizienten (Open Access)
    (2024) Große-Wöhrmann, Bärbel; Resch, Michael (Prof. Dr.-Ing.)
    The established standard numerical methods for solving partial differential equations are based on a spatial discretization of the computational domain. Their performance and scalability on modern massively parallel high-performance computers depend on the availability of efficient numerical methods for solving linear systems of equations. In view of fundamental challenges, the development of new solution approaches appears worthwhile. In this thesis, I present a polynomial approach to solving partial differential equations that does not rely on a spatial discretization and that enables the decoupling of the Navier-Stokes equations by means of the Moore-Penrose inverse of the Laplace operator. The degree of the polynomials is not fundamentally bounded, so a high spatial resolution can be achieved.
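As a minimal illustration of the idea (a 1D sketch under my own assumptions, not taken from the thesis): polynomials are represented by coefficient vectors, the Laplace operator becomes a singular matrix on that coefficient space, and its Moore-Penrose inverse yields a particular polynomial solution.

```python
import numpy as np

# Polynomials up to degree 5, represented by coefficient vectors
# (c[j] is the coefficient of x**j). In 1D the Laplace operator is
# d^2/dx^2, which maps x**j to j*(j-1)*x**(j-2).
n = 6
D2 = np.zeros((n, n))
for j in range(2, n):
    D2[j - 2, j] = j * (j - 1)

# Moore-Penrose pseudoinverse of the (singular) operator matrix.
D2_pinv = np.linalg.pinv(D2)

# Solve u'' = 2: the right-hand side is the constant polynomial 2.
b = np.zeros(n)
b[0] = 2.0
u = D2_pinv @ b  # minimum-norm particular solution: u(x) = x**2

# Check: applying D2 to u recovers the right-hand side.
assert np.allclose(D2 @ u, b)
```

The pseudoinverse picks out one particular solution from the infinitely many polynomial antiderivatives; the general solution differs by an element of the operator's null space (here, affine polynomials).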
  • Multiscale modeling and stability analysis of soft active materials : from electro- and magneto-active elastomers to polymeric hydrogels (Open Access)
    (Stuttgart : Institute of Applied Mechanics, 2023) Polukhov, Elten; Keip, Marc-André (Prof. Dr.-Ing.)
    This work is dedicated to modeling and stability analysis of stimuli-responsive, soft active materials within a multiscale variational framework. In particular, composite electro- and magneto-active polymers and polymeric hydrogels are under consideration. When electro- and magneto-active polymers (EAP and MAP) are fabricated in the form of composites, they comprise at least two phases: a polymeric matrix and embedded electric or magnetic particles. As a result, the obtained composite is soft, highly stretchable, and fracture resistant like a polymer, and undergoes stimuli-induced deformation due to the interaction of particles. By designing the microstructure of EAP or MAP composites, a compressive or a tensile deformation can be induced under electric or magnetic fields, and the coupling response of the composite can be enhanced. Hence, these materials have found applications as sensors, actuators, energy harvesters, absorbers, and soft, programmable, smart devices in various areas of engineering. Similarly, polymeric hydrogels are also stimuli-responsive materials. They undergo large volumetric deformations due to the diffusion of a solvent into the polymer network of hydrogels. In this case, the obtained material shows the characteristic behavior of both polymer and solvent. Therefore, these materials can also be considered in the form of composites to enhance the response further. Since hydrogels are biocompatible materials, they have found applications as contact lenses, wound dressings, and drug encapsulators and carriers in bio-medicine, in addition to applications similar to those of electro- and magneto-active polymers.
All of the above-mentioned favorable features of these materials, as well as their application possibilities, make it necessary to develop mathematical models and numerical tools to simulate their response, in order to design pertinent microstructures for particular applications and to understand the observed complex patterns such as wrinkling, creasing, snapping, localization or pattern transformations, among others. These instabilities are often considered failure points of materials. However, many recent works take advantage of instabilities for smart applications. Investigation of these instabilities and prediction of their onset and mode are among the main goals of this work. In this sense, the thesis is organized into three main parts. The first part is devoted to the state of the art in the development, fabrication, and modeling of soft active materials as well as the continuum mechanical description of magneto-electro-elasticity. The second part is dedicated to multiscale instabilities in electro- and magneto-active polymer composites within a minimization-type variational homogenization setting. This means that the highly heterogeneous problem is not resolved on one scale, which would be computationally inefficient, but is replaced by an equivalent homogeneous problem. The effective response of the macroscopic homogeneous problem is determined by solving a microscopic representative volume element which includes all the geometrical and material non-linearities. To bridge these two scales, the Hill-Mandel macro-homogeneity condition is utilized. Within this framework, we investigate both macroscopic and microscopic instabilities. The former are important not only from a physical point of view but also from a computational point of view, since macroscopic stability (strong ellipticity) is necessary for the existence of minimizers at the macroscopic scale.
Similarly, the investigation of the latter instabilities is also important, as it determines the pattern transformations at the microscale due to external action. In this way, the critical domain of homogenization is also determined, which is required for computing accurate effective results. Both investigations are carried out for various composite microstructures, and the instabilities are found to play a crucial role in the response of the materials. Therefore, they must be considered when designing EAP and MAP composites as well as for providing reliable computations. The third part of the thesis is dedicated to polymeric hydrogels. Here, we develop a minimization-based homogenization framework to determine the response of transient periodic hydrogel systems. We demonstrate the prevailing size effect resulting from the transient microscopic problem, which has been investigated for various microstructures. Exploiting the elements of the proposed framework, we explore material and structural instabilities in single- and two-phase hydrogel systems. We observed complex 2D pattern transformations, both known from experiments and novel ones, such as diamond-plate patterns with and without wrinkling of internal surfaces for perforated microstructures, as well as 3D pattern transformations in thin reinforced hydrogel composites. The results indicate that the obtained patterns can be controlled by tuning the material and geometrical parameters of the composite.
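For reference, the Hill-Mandel macro-homogeneity condition invoked above is commonly written as follows (a standard finite-strain form with the first Piola-Kirchhoff stress P and the deformation gradient F; the notation is generic, not necessarily the thesis's):

```latex
\overline{\boldsymbol{P}} : \delta\overline{\boldsymbol{F}}
  \;=\; \frac{1}{|\mathcal{B}_0|} \int_{\mathcal{B}_0}
        \boldsymbol{P} : \delta\boldsymbol{F} \; \mathrm{d}V
```

That is, the macroscopic virtual stress power equals the volume average of its microscopic counterpart over the representative volume element, which is what allows the two scales to be bridged energetically.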
  • Chiral metamaterials (Open Access)
    (2016) Eslami, Sahand; Fischer, Peer (Prof. Dr.)
  • Self-adjointness and domain of a class of generalized Nelson models (Open Access)
    (2017) Wünsch, Andreas; Griesemer, Marcel (Prof. Dr.)
  • Unraveling the impact of acetylation patterns in chitosan oligomers on Cu2+ ion binding : insights from DFT calculations (Open Access)
    (2023) Singh, Ratna; Smiatek, Jens; Moerschbacher, Bruno M.
    Chitosans are partially acetylated polymers of glucosamine, structurally characterized by their degree of polymerization as well as their fraction and pattern of acetylation. These parameters strongly influence the physico-chemical properties and biological activities of chitosans, but structure-function relationships are still only poorly understood. As an example, we here investigated the influence of acetylation on chitosan-copper complexation using density functional theory. We investigated the electronic structures of completely deacetylated and partially acetylated chitosan oligomers and their copper-bound complexes. Frontier molecular orbital theory revealed bonding orbitals for electrophiles and antibonding orbitals for nucleophiles in fully deacetylated glucosamine oligomers, while partially acetylated oligomers displayed bonding orbitals for both electrophiles and nucleophiles. Our calculations showed that the presence of an acetylated subunit in a chitosan oligomer affects the structural and the electronic properties of the oligomer by generating new intramolecular interactions with the free amino group of neighboring deacetylated subunits, thereby influencing its polarity. Furthermore, the band gap energies calculated for fully deacetylated and partially acetylated oligomers indicate that the mobility of electrons in partially acetylated chitosan oligomers is higher than in fully deacetylated ones. In addition, fully deacetylated oligomers form more stable complexes with copper, with higher bond dissociation energies, than partially acetylated ones. Interestingly, in partially acetylated oligomers, the strength of copper binding was found to depend on the pattern of acetylation. Our study provides a first insight into the influence of patterns of acetylation on the electronic and ion binding properties of chitosans.
Depending on the intended application, the obtained results can serve as a guide for the selection of the optimal chitosan for a specific purpose.
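The band gap comparison above rests on the standard frontier-orbital convention (assumed here; the paper may use an equivalent definition):

```latex
E_{\mathrm{gap}} = E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}
```

A smaller HOMO-LUMO separation corresponds to the higher electron mobility reported for the partially acetylated oligomers.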
  • Stable and mass-conserving high-dimensional simulations with the sparse grid combination technique for full HPC systems and beyond (Open Access)
    (2024) Pollinger, Theresa; Pflüger, Dirk (Prof. Dr.)
    In the light of the ongoing climate crisis, mastering controlled plasma fusion has the potential to be one of the pivotal scientific achievements of the 21st century. To understand the turbulent fields in confined fusion devices, simulation has been and continues to be both an asset and a challenge. The main limiting factor to large-scale high-fidelity predictive simulations lies in the Curse of Dimensionality, which dominates all grid-based discretizations of plasmas based on the Vlasov-Poisson and Vlasov-Maxwell equations. In the full formulation, these result in six-dimensional grids and fine scales that need to be resolved, leading to a potentially intractable number of degrees of freedom. Typical approaches to this problem - coordinate transformations such as gyrokinetics, grid adaptation, restricting oneself to limited resolutions - do not directly address the Curse of Dimensionality, but rather work around it. The sparse grid combination technique, which forms the center of this work, is a multiscale approach that alleviates the Curse of Dimensionality for time-stepping simulations: multiple regular grid-based simulations are run and update each other's information throughout the course of simulation time. The present thesis improves upon the former state of the art of the combination technique in three ways: introducing conservation of mass and numerical stability through the use of better-suited multiscale basis functions, optimizing the code for large-scale HPC systems, and extending the combination technique to the widely-distributed setting. Firstly, this thesis analyzes the often-used hierarchical hat function from the viewpoint of biorthogonal wavelets, which allows the hierarchical hat function to be replaced by other multiscale functions (such as the mass-conserving CDF wavelets) in a straightforward manner.
Numerical studies presented in the thesis show that this not only introduces conservation but also increases accuracy and avoids numerical instabilities - which previously were a major roadblock for large-scale Vlasov simulations with the combination technique. Secondly, the open-source framework DisCoTec was extended to scale the combination technique up to the available memory of entire supercomputing systems. DisCoTec is designed to wrap the combination technique around existing grid-based solvers and draws on the inherent parallelism of the combination technique. Among several other contributions, different communication-avoiding multiscale reduction schemes were developed and implemented in DisCoTec as part of this work. The scalability of the approach is demonstrated by an extensive set of measurements in this thesis: DisCoTec is shown to scale up to the full system size of four German supercomputers, including the three CPU-based Tier-0/Tier-1 systems. Thirdly, the combination technique was further extended to the widely-distributed setting, in which two HPC systems synchronously run a joint simulation. This is enabled by file transfer as well as by sophisticated algorithms for assigning the different simulation instances to the systems, two of which were developed as part of this work. Thanks to the resulting drastic reductions in communication volume, tolerable transfer times for combination technique simulations on different HPC systems have been achieved for the first time. These three advances - improved numerical properties, efficient scaling up to full system sizes, and the possibility to extend a simulation beyond a single system - show the sparse grid combination technique to be a promising approach for future high-fidelity simulations of higher-dimensional problems, such as plasma turbulence.
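The classical combination technique that DisCoTec generalizes can be sketched in a few lines (a textbook form with level vectors l, l_i >= 1; the function name is my own, and the thesis's schemes are more general):

```python
from itertools import product
from math import comb

def combination_scheme(dim, n):
    """Return {level_vector: coefficient} for the classical sparse grid
    combination technique: the combined solution is
    sum_{q=0}^{dim-1} (-1)**q * C(dim-1, q) * sum_{|l|_1 = n-q} f_l,
    over level vectors l with l_i >= 1."""
    scheme = {}
    for q in range(dim):
        coeff = (-1) ** q * comb(dim - 1, q)
        for l in product(range(1, n + 1), repeat=dim):
            if sum(l) == n - q:
                scheme[l] = coeff
    return scheme

scheme = combination_scheme(2, 4)
# The coefficients always sum to 1, so constant functions are
# reproduced exactly by the combined solution.
print(sum(scheme.values()))  # → 1
```

Each entry corresponds to one anisotropic regular grid (one solver instance in DisCoTec); the signed coefficients are what cancel the redundant information between grids.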
  • Formen und Kräfte : ein mathematisch-physikalischer Gang zur Kunst auf dem Campus Vaihingen (Open Access)
    (Stuttgart : Fakultät 8 - Mathematik und Physik, Universität Stuttgart, 2022) Stroppel, Markus; Scheffler, Marc; Engstler, Katja Stefanie; Engstler, Katja Stefanie (Konzept und Gestaltung)
    The walking tour explains and interprets individual objects and artistic elements of the Lernstraße on the Campus Vaihingen from a mathematical and physical perspective, for the interested general public as well as for school pupils and university students.
  • Simulating stochastic processes with variational quantum circuits (Open Access)
    (2022) Fink, Daniel
    Simulating future outcomes based on past observations is a key task in predictive modeling and has found application in many areas ranging from neuroscience to the modeling of financial markets. The classical provably optimal models for stationary stochastic processes are so-called ϵ-machines, which have the structure of a unifilar hidden Markov model and offer a minimal set of internal states. However, these models are not optimal in the quantum setting, i.e., when the models have access to quantum devices. The methods proposed so far for quantum predictive models rely either on the knowledge of an ϵ-machine, or on learning a classical representation thereof, which is memory inefficient since it requires exponentially many resources in the Markov order. Meanwhile, variational quantum algorithms (VQAs) are a promising approach for using near-term quantum devices to tackle problems arising from many different areas in science and technology. Within this work, we propose a VQA for learning quantum predictive models directly from data on a quantum computer. The learning algorithm is inspired by recent developments in the area of implicit generative modeling, where a kernel-based two-sample test, called maximum mean discrepancy (MMD), is used as a cost function. A major challenge of learning predictive models is to ensure that arbitrarily many time steps can be simulated accurately. For this purpose, we propose a quantum post-processing step that yields a regularization term for the cost function and penalizes models with a large set of internal states. As a proof of concept, we apply the algorithm to a stationary stochastic process and show that the regularization leads to a small set of internal states and consistently good simulation performance over multiple future time steps, measured in the Kullback-Leibler divergence and the total variation distance.
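The MMD cost mentioned above can be estimated from two sample sets with a Gaussian kernel; the following is a generic biased (V-statistic) estimator for 1D samples, a sketch rather than the thesis's implementation:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-(x - y)**2 / (2 * sigma**2)) for 1D sample arrays."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy:
    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
same = rng.normal(0.0, 1.0, 500)
shifted = rng.normal(3.0, 1.0, 500)
print(mmd2(same, same))     # identical samples: estimate is 0
print(mmd2(same, shifted))  # different distributions: clearly positive
```

In the learning loop described above, an estimate of this kind (computed between model samples and observed data) would serve as the cost to be minimized, with the proposed regularization term added on top.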