08 Fakultät Mathematik und Physik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/9
Browse
Item Open Access An adaptive finite element method for control-constrained optimal control problems (2012) Kohls, Kristina; Siebert, Kunibert G. (Prof. Dr.)
Many problems from physics, like heat conduction and energy conservation, lead to partial differential equations (PDEs). Only some of them can be solved directly; in general one has to rely on approximation techniques like the finite element method (FEM). Adaptive finite elements aim to increase accuracy only in those parts of the domain where the error is large relative to the rest of the domain. The gain in accuracy that can be achieved this way, in comparison to the classical FEM, depends on the exact solution itself. In this thesis the weak formulation of a PDE constitutes the side constraint of an optimization problem. This usually consists of a convex functional that is minimized with respect to two variables, control and state, which are connected via the side constraint. Additionally, the control has to satisfy further constraints. To be able to apply adaptive finite elements, one needs to construct error estimators that satisfy certain properties. In contrast to previous results in this field, this thesis uses a general approach to find error estimators. This approach covers distributed and boundary control as well as the cases of discretized and non-discretized control. The particularities of the involved PDE are only of interest when choosing the appropriate estimators for the linear subproblems from the toolbox. The other main contribution of this thesis consists of three convergence results: one for non-discretized control, one for discontinuous, and one for continuous control discretizations. We not only prove the convergence of the solution but also of the estimator, which implies that the algorithm terminates for any given tolerance TOL > 0. Finally, a few numerical examples with boundary control are investigated for varying marking strategies and estimators.

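The adaptive loop behind such methods is the usual solve-estimate-mark-refine cycle. The following Python sketch is purely illustrative (it is not code from the thesis; solve, estimate and refine are hypothetical problem-specific callbacks) and shows how Dörfler marking and the estimator-based stopping criterion for a tolerance TOL fit together.

```python
import numpy as np

def adaptive_fem(mesh, solve, estimate, refine, TOL=1e-3, theta=0.5, max_iter=50):
    """Generic solve-estimate-mark-refine loop with Doerfler marking.

    `solve`, `estimate` and `refine` are problem-specific callbacks
    (purely hypothetical placeholders here, not the thesis's routines)."""
    for _ in range(max_iter):
        solution = solve(mesh)                       # discrete (optimality) system
        eta = estimate(mesh, solution)               # local error indicators, one per element
        total = np.sqrt(np.sum(eta**2))
        if total <= TOL:                             # estimator convergence => termination
            break
        # Doerfler marking: smallest set of elements carrying a fraction theta of the error
        order = np.argsort(eta)[::-1]
        cumulative = np.cumsum(eta[order]**2)
        marked = order[: np.searchsorted(cumulative, theta * total**2) + 1]
        mesh = refine(mesh, marked)                  # local refinement of marked elements
    return mesh, solution, total
```
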
Item Open Access Adaptive finite elements for state-constrained optimal control problems - convergence analysis and a posteriori error estimation (2014) Steinig, Simeon; Siebert, Kunibert G. (Prof. Dr.)
Optimal control problems, and in particular state-constrained optimal control problems, frequently occur in all sorts of fields of science, from aerospace engineering to robotics, from process engineering to vehicle simulations. Against this backdrop, it is of interest to solve these kinds of problems in an efficient manner. Optimal control problems are characterised by the existence of a control u acting on a state y which is governed by an (ordinary, partial or stochastic) differential equation. In this PhD thesis, we considered linear, stationary partial differential equations (PDEs); in particular, the state y is a linear function of the control u, y = Su. Solving such optimal control problems numerically involves solving two linear PDEs in each iterate of an optimisation algorithm. Over the last decades much research has been undertaken to solve such linear PDEs efficiently; especially discretisations with adaptive finite elements have proven to be highly useful for this task. Thus, applying these adaptive finite element methods to the specific setting of state-constrained optimal control problems suggested itself as an appropriate approach. The aim of this thesis was twofold:
1. The first goal was to prove a basic convergence result, i.e., that the sequence of discrete solutions U_k, obtained by discretising the optimal control problem with finite elements, converges to the true solution u of the undiscretised problem.
2. The second goal was to derive a reliable a posteriori error estimator, i.e., an upper bound, up to constants depending solely on data, containing only known discrete or continuous functions and linear errors.
Regarding the first aim: we succeeded in characterising the convergence of U_k to u exactly (Theorem 3.3.8 and Theorem 3.3.10), i.e., we derived a necessary and sufficient condition for the convergence of U_k to u in terms of a discrete quantity which can potentially be used to steer a numerical algorithm, as we did in Section 6.3. We could not find an example where this condition is fulfilled; nevertheless, because this result was achieved without assuming any additional regularity for the sequence of triangulations or the problem itself, it constitutes a major contribution to the convergence analysis of adaptive finite element methods for state-constrained optimal control problems. Regarding the second aim: the a posteriori error estimator was derived in Theorem 4.2.12 and Theorem 4.2.13. Remarkably, the derived a posteriori estimator was proved to converge under relatively mild assumptions (Theorem 4.3.14). In the concluding chapters of this thesis, we constructed an adaptive algorithm on the basis of our a posteriori error estimator (Chapter 5), before successfully testing it for two problems (Chapter 6).

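A prototypical model problem of the class considered here (chosen for illustration only, not necessarily the exact setting of the thesis) is the linear-quadratic tracking problem with a pointwise state constraint:

```latex
\min_{(y,u)} \; J(y,u) = \tfrac12 \,\|y - y_d\|_{L^2(\Omega)}^2 + \tfrac{\alpha}{2}\,\|u\|_{L^2(\Omega)}^2
\quad \text{subject to} \quad
-\Delta y = u \ \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega, \qquad y \le \psi \ \text{in } \Omega ,
```

with regularization parameter α > 0 and obstacle ψ; the control-to-state map u ↦ y = Su is then linear, as described in the abstract.
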
Item Open Access Adaptive higher order discontinuous Galerkin methods for porous-media multi-phase flow with strong heterogeneities (2018) Kane, Birane; Siebert, Kunibert (Prof. Dr.)
In this thesis, we develop, analyze, and implement adaptive discontinuous Galerkin (DG) finite element solvers for the efficient simulation of porous-media flow problems. We consider 2d and 3d incompressible, immiscible, two-phase flow in a possibly strongly heterogeneous and anisotropic porous medium. Discontinuous capillary-pressure functions and gravity effects are taken into account. The system is written in terms of a phase-pressure/phase-saturation formulation. First and second order Adams-Moulton time discretization methods are combined with various interior penalty DG discretizations in space, such as the symmetric interior penalty Galerkin (SIPG), the nonsymmetric interior penalty Galerkin (NIPG) and the incomplete interior penalty Galerkin (IIPG) methods. These fully implicit space-time discretizations lead to fully coupled nonlinear systems that require building a Jacobian matrix at each time step and in each iteration of a Newton-Raphson method. We provide a stability estimate of the saturation and the pressure with respect to initial and boundary data. We also derive a priori error estimates with respect to the L2(H1) norm for the pressure and the L∞(L2) ∩ L2(H1) norm for the saturation. Moving on to adaptivity, we implement different strategies allowing for a simultaneous variation of the element sizes, the local polynomial degrees and the time step size. These approaches make it possible to increase the local polynomial degree where the solution is estimated to be smooth and to refine the mesh locally otherwise. They also grant more flexibility with respect to the time step size without impeding the convergence of the method. The aforementioned adaptive algorithms are applied in a series of homogeneous, heterogeneous and anisotropic test cases. To our knowledge, this is the first time the concept of local hp-adaptivity is incorporated in the study of 2d and 3d incompressible, immiscible, two-phase flow problems. Delving into the issue of efficient linear solvers for the fully coupled, fully implicit formulations, we implement a constrained pressure residual (CPR) two-stage preconditioner that exploits the algebraic properties of the Jacobian matrices of the systems. Furthermore, we provide an open-source DG two-phase flow simulator, based on the software framework DUNE, accompanied by a set of programs including instructions on how to compile and run them.

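For orientation, the symmetric interior penalty (SIPG) form mentioned above reads, for a model elliptic operator −∇·(K∇u) and in standard DG notation with jumps [[·]] and averages {{·}} on interior faces e:

```latex
a_h(u,v) \;=\; \sum_{T\in\mathcal{T}_h} \int_T K\,\nabla u\cdot\nabla v \,\mathrm{d}x
\;-\; \sum_{e\in\mathcal{E}_h} \int_e \Big( \{\!\{K\nabla u\}\!\}\cdot[\![v]\!]
\;+\; \{\!\{K\nabla v\}\!\}\cdot[\![u]\!] \Big)\,\mathrm{d}s
\;+\; \sum_{e\in\mathcal{E}_h} \frac{\sigma}{h_e}\int_e [\![u]\!]\cdot[\![v]\!]\,\mathrm{d}s ,
```

with penalty parameter σ and local face size h_e; NIPG flips the sign of the second (symmetrizing) consistency term, while IIPG omits it.
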
Item Open Access Adaptive piecewise Poly-Sinc methods for ordinary differential equations (2022) Khalil, Omar; El-Sharkawy, Hany; Youssef, Maha; Baumann, Gerd
We propose a new method of adaptive piecewise approximation based on Sinc points for ordinary differential equations. The adaptive method is a piecewise collocation method which utilizes Poly-Sinc interpolation to reach a preset level of accuracy for the approximation. Our work extends the adaptive piecewise Poly-Sinc method for function approximation, for which we derived an a priori error estimate for our adaptive method and showed its exponential convergence in the number of iterations. In this work, we show the exponential convergence in the number of iterations of the a priori error estimate obtained from the piecewise collocation method, provided that a good estimate of the exact solution of the ordinary differential equation at the Sinc points exists. We use a statistical approach for partition refinement. The adaptive greedy piecewise Poly-Sinc algorithm is validated on regular and stiff ordinary differential equations.

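Poly-Sinc approximation is polynomial (Lagrange) interpolation at Sinc points. The sketch below, in Python, is only a schematic illustration: the conformal map and the step-size choice h ≈ π/√n are common conventions and not necessarily the ones used by the authors.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def sinc_points(n, interval=(-1.0, 1.0)):
    """Conformally mapped Sinc points on a finite interval.

    Uses the standard map x = tanh(k*h/2) of the equidistant grid k*h; the
    step size h ~ pi/sqrt(n) is one common choice (an assumption here, not
    necessarily the choice made in the paper)."""
    a, b = interval
    h = np.pi / np.sqrt(n)
    k = np.arange(-n, n + 1)
    t = np.tanh(k * h / 2.0)               # points clustering towards the endpoints of (-1, 1)
    return a + (b - a) * (t + 1.0) / 2.0   # affine map to (a, b)

# Poly-Sinc approximation = Lagrange interpolation at Sinc points
x = sinc_points(8)
f = np.exp                                  # example function to approximate
p = BarycentricInterpolator(x, f(x))
grid = np.linspace(-1.0, 1.0, 201)
print("max interpolation error:", np.max(np.abs(p(grid) - f(grid))))
```
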
Item Open Access Adaptive two-scale models for processes with evolution of microstructures (2014) Redeker, Magnus; Rohde, Christian (Prof. Dr.)
In this dissertation two combinable numerical solution schemes are developed that, either in combination or on their own, allow for an efficient numerical solution of two-scale models which describe physical processes with changing microstructures via the combination of partial differential equations on a macroscopic and a microscopic length scale. Furthermore, a two-scale phase-field model is established that describes, in a porous medium, a pore-scale precipitation process and a Darcy-scale diffusion process of particles dissolved in a fluid. One of the developed solution schemes is used in order to solve this model efficiently in a large time-space cylinder. Numerical results show the interdependence of the pore-scale precipitation and the Darcy-scale diffusion process.

Item Open Access Analysis of hyperbolic conservation laws with random discontinuous flux functions and their efficient simulation (2022) Brencher, Lukas; Barth, Andrea (Prof. Dr.)

Item Open Access Analysis of target data-dependent greedy kernel algorithms : convergence rates for f-, f·P- and f/P-greedy (2022) Wenzel, Tizian; Santin, Gabriele; Haasdonk, Bernard
Data-dependent greedy algorithms in kernel spaces are known to provide fast converging interpolants, while being extremely easy to implement and efficient to run. Despite this experimental evidence, no detailed theory has yet been presented. This situation is unsatisfactory, especially when compared to the case of the data-independent P-greedy algorithm, for which optimal convergence rates are available, even though its performance is usually inferior to that of target data-dependent algorithms. In this work, we fill this gap by first defining a new scale of greedy algorithms for interpolation that comprises all the existing ones within a single analysis, in which the degree of dependency of the selection criterion on the functional data is quantified by a real parameter. We then prove new convergence rates where this degree is taken into account, and we show that, possibly up to a logarithmic factor, target data-dependent selection strategies provide faster convergence. In particular, for the first time we obtain convergence rates for target data-adaptive interpolation that are faster than the ones given by uniform points, without the need of any special assumption on the target function. These results are made possible by refining an earlier analysis of greedy algorithms in general Hilbert spaces. The rates are confirmed by a number of numerical examples.

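The selection rules named in the title differ only in how the next interpolation point is chosen from the residual r and the power function P. The following naive Python sketch (illustrative only; it recomputes the interpolant from scratch instead of using the efficient Newton-basis updates, and the Gaussian kernel and candidate grid are arbitrary choices) shows the four classical criteria side by side.

```python
import numpy as np

def gauss_kernel(X, Y, eps=2.0):
    """Gaussian kernel matrix for candidate sets X, Y of shape (n, d)."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps**2 * d2)

def greedy_kernel_interpolation(X, f_vals, n_points=20, rule="f/P", reg=1e-12):
    """Naive greedy kernel interpolation on a candidate set X (sketch only).

    rule selects the next center by maximizing
      "P":   P(x)            (target-data independent)
      "f":   |r(x)|          (residual based)
      "f*P": |r(x)| * P(x)
      "f/P": |r(x)| / P(x)
    where r is the current residual and P the power function."""
    selected = []
    r = f_vals.copy()
    K_diag = np.diag(gauss_kernel(X, X)).copy()
    P2 = K_diag.copy()                     # squared power function, P^2(x) = k(x, x) initially
    for _ in range(n_points):
        P = np.sqrt(np.maximum(P2, 0.0)) + reg
        crit = {"P": P, "f": np.abs(r), "f*P": np.abs(r) * P, "f/P": np.abs(r) / P}[rule]
        crit[selected] = -np.inf           # never pick a point twice
        i = int(np.argmax(crit))
        selected.append(i)
        # recompute interpolant on the selected centers, then residual and power function
        Kss = gauss_kernel(X[selected], X[selected]) + reg * np.eye(len(selected))
        Kxs = gauss_kernel(X, X[selected])
        coeff = np.linalg.solve(Kss, f_vals[selected])
        r = f_vals - Kxs @ coeff
        P2 = K_diag - np.sum(Kxs * np.linalg.solve(Kss, Kxs.T).T, axis=1)
    return selected, coeff
```
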
Item Open Access Approximation with matrix-valued kernels and highly effective error estimators for reduced basis approximations (2022) Wittwar, Dominik; Haasdonk, Bernard (Prof. Dr.)
This thesis can be summarized under the aspect of surrogate modelling for vector-valued functions and error quantification for those surrogate models. The thesis, in a broad sense, is split into two parts. The first part deals with constructing surrogate models via matrix-valued kernels using both interpolation and regularization procedures. For this purpose, a new class of so-called uncoupled separable matrix-valued kernels is introduced, and heavy emphasis is placed on how suitable sample points for the construction of the surrogate can be chosen in such a way that quasi-optimal convergence rates can be achieved. In the second part, the focus does not lie on the construction of the surrogate itself, but on how existing a posteriori error estimation can be improved to result in highly efficient error bounds. This is done in the context of reduced basis methods, which, similar to the kernel surrogates, construct surrogate models by using data acquired from samples of the desired target function. Both parts are accompanied by numerical experiments which illustrate the effectiveness as well as verify the analytically derived properties of the presented methods.

Item Open Access Balance laws : non local mixed systems and IBVPs (2016) Rossi, Elena; Rohde, Christian (Prof. Dr.)

Item Open Access A Bayesian approach to parameter reconstruction from surface electromyographic signals (2021) Rörich, Anna; Göddeke, Dominik (Prof. Dr.)
Applying a Bayesian approach to infer the electrical conductivity of a body or body part from surface electromyographic (EMG) signals yields a non-invasive and radiation-free imaging technique. Further, since the measured surface EMG signals stem from voluntary muscle contractions, there is no need to apply external electrical stimuli to the body. The electrical conductivity provides structural information about the corresponding tissue, which is used to estimate whether the tissue has isotropic or anisotropic properties and what the preferred conducting direction is, if applicable. Additionally, changes in the magnitude of the electrical conductivity indicate changes in the tissue material. Together, these properties of the electrical conductivity provide medical images of the examined body part. This imaging process results in an inverse and mathematically ill-posed problem. By including a stochastic model of the inevitable measurement error in the mathematical problem description, the whole system is embedded in a probabilistic framework. Thus, instead of estimating the structure of the examined body part directly, the probability distribution of the parameters describing the tissue structure given surface EMG measurements, the so-called posterior distribution, is estimated. This Bayesian approach to inverse problems not only yields more information about the quantities of interest than classical regularization approaches, but also has a regularizing effect on the ill-posed problem. Indeed, the Bayesian inverse problem of inferring the tissue structure from surface EMG measurements is proven to be well-posed. This yields the convergence of the inversion algorithm and allows establishing error bounds, thus quantifying the uncertainties in the solution of the inverse EMG problem. Numerically, Markov chain Monte Carlo methods are used to explore the posterior distribution. Accelerations of these sampling methods are achieved by deriving a data-sparse representation of the discretized forward model for all conceivable discretizations of the parameters describing the tissue structure. The resulting approach is not only mathematically well-founded, but also faster by orders of magnitude. Finally, the proposed sampling algorithms are applied to several use cases that are related to clinical applications.

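A generic way to explore such a posterior distribution is a random-walk Metropolis sampler. The sketch below (plain Python, with a hypothetical forward model G standing in for the discretized EMG forward model) is meant only to illustrate the principle, not the accelerated samplers developed in the thesis.

```python
import numpy as np

def random_walk_metropolis(log_posterior, theta0, n_samples=5000, step=0.1, rng=None):
    """Generic random-walk Metropolis sampler (illustrative sketch only).

    log_posterior: callable returning the unnormalized log posterior density
    theta0:        starting point (e.g. conductivity parameters)"""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    logp = log_posterior(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
            theta, logp = proposal, logp_prop
        chain[i] = theta
    return chain

def make_log_posterior(G, d, sigma_noise=0.05, sigma_prior=1.0):
    """Gaussian likelihood around data d for a (hypothetical) forward model G,
    combined with a Gaussian prior."""
    return lambda th: (-0.5 * np.sum((G(th) - d) ** 2) / sigma_noise**2
                       - 0.5 * np.sum(th ** 2) / sigma_prior**2)
```
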
Item Open Access BETI-Gebietszerlegungsmethoden mit schnellen Randelementverfahren und Anwendungen (2006) Of, Günther; Steinbach, Olaf (Prof. Dr.)
In numerical simulation, the treatment of coupled problems is becoming increasingly important, also with regard to rapid product development. Domain decomposition methods offer a simple treatment and an efficient numerical simulation for materials with different material parameters and for the coupling of different model equations. In addition, they allow the coupling of different numerical methods and a simple parallelization of the simulation in order to reduce computing times. Besides the domain decomposition method itself, the methods used to solve the local subproblems are important for a fast overall method. In the boundary element method, the variational formulation of the partial differential equation is transformed into a boundary integral equation by integration by parts. Advantages of the boundary element method over the frequently used finite element method are the simpler meshing of the boundary, the treatment of exterior problems, and the explicit computation of the complete Cauchy data on the boundary. A disadvantage of the standard boundary element method is its at least quadratic memory and computational cost. This can be reduced to almost linear complexity by using fast boundary element methods such as the multipole method. The main focus of this work lies in the development of efficient domain decomposition methods and in providing fast methods for the solution of the local subproblems. To this end, in addition to the already existing fast multipole boundary element method for the Laplace equation, a multipole method for linear elastostatics is developed. By using integration-by-parts formulas, it essentially builds on the existing routines for the Laplace equation. The use of the multipole method for linear elastostatics is also justified theoretically by a consistency analysis. Furthermore, efficient preconditioning techniques based on the algebraic multigrid method and on boundary integral operators of opposite order are analyzed and employed, both for the Laplace equation and for linear elastostatics. For the hypersingular operator and the Steklov-Poincaré operator, a stabilization for the efficient inversion of these operators is presented, specifically for linear elastostatics. For reasons of efficiency, the single layer potential of the Laplace equation is used as a preconditioner, and its use is justified theoretically. Fast solution methods are thus available for use within the domain decomposition methods. As domain decomposition method, mainly the BETI method (Boundary Element Tearing and Interconnecting) is employed. For an efficient solution, different formulations as linear systems of equations are considered. These linear systems are solved by suitable iterative methods, namely the Bramble-Pasciak CG method. The numerical experiments show that, when multigrid preconditioning is used for the local single layer potentials, the BETI method is usually faster than the primal Dirichlet domain decomposition method used for comparison. In particular for mixed boundary value problems, as they typically occur in elastostatics, the BETI method is faster. The condition number of the BETI method is independent of jumping material parameters, and this is where its strengths over the primal domain decomposition method lie. In linear elastostatics in particular, the treatment of subdomains without sufficient Dirichlet boundary conditions can become complicated for the BETI methods due to the possibly varying number of rigid body motions that occur. The all-floating formulation of the BETI method introduced in this work unifies the treatment of the individual subdomains of the domain decomposition. This simplifies the realization of the BETI method, in particular for linear elastostatics, and also appears easier to realize than the ideas of the FETI-DP methods. Moreover, the all-floating formulation enables the use of optimal preconditioners for the local Steklov-Poincaré operators. This results in an improved asymptotic behaviour of the all-floating formulation. In the numerical examples, the better asymptotic runtime behaviour of the all-floating formulation compared to the standard BETI formulation, and thus also compared to the primal Dirichlet domain decomposition method used for comparison, can be observed.

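Schematically, tearing-and-interconnecting methods of this type enforce continuity across subdomain interfaces with a Lagrange multiplier λ, leading to a saddle point problem of the following generic form (a sketch only; the all-floating variant modifies the treatment of the Dirichlet boundary and of the local kernels):

```latex
% schematic saddle point form; details differ in the all-floating formulation
\begin{pmatrix} S & B^{\top} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\qquad S = \operatorname{diag}(S_1,\dots,S_p), \qquad
B u = \sum_{i=1}^{p} B_i u_i ,
```

where the S_i are the local boundary element realizations of the Steklov-Poincaré operators and B collects the signed jump operators. Eliminating u yields a dual problem in λ that is solved iteratively, with pseudo-inverses and a coarse problem accounting for floating subdomains.
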
Item Open Access The Cahn-Larché system : a model for spinodal decomposition in eutectic solder ; modelling, analysis and simulation (2005) Merkle, Thomas; Sändig, Anna-Margarete (Apl. Prof. Dr.)
Electronic control of mechanical processes, in particular within automobiles, has recently become more and more important. As a consequence, the reliability and lifetime of a solder joint in a control device have become significant for the automotive industry. Experimental investigations of a solder joint whose configuration is subjected to several thousand power cycles show a substantial change of the microstructure in the alloy. The originally fine mixture separates into two or more phases. However, regions with a coarse microstructure are not randomly distributed over the solder joint. They are located in the vicinity of a notch or a re-entrant corner, or lie near a hard clamped boundary part of the solder bump. In the worst case, cracks appear in the alloy between regions of coarse and fine microstructure. The phase separation process is modelled by a diffusive phase interface model, which was derived by Cahn and Hilliard and extended by Cahn and Larché in order to take elastic effects into account. In this thesis a mathematically rigorous derivation of the Cahn-Larché system is presented, which additionally takes into account external mechanical loadings and viscosity. The examination of the general entropy principle is done by applying Lagrangian multipliers to the thermodynamics of the spinodal decomposition. The existence of a weak solution of the viscous Cahn-Larché system is shown under consideration of a concentration-dependent mobility tensor and external mechanical forces. In order to attain this result, we extend the method developed by Garcke, consisting of a time discretisation, minimising the internal energy and maximising the dissipation. An interesting result of our investigations is the fact that the a priori estimates do not depend on the friction coefficient. Due to this observation we simultaneously obtain the existence of a weak solution of the viscous and of the non-viscous system. Thereby we observe that a weak solution of the viscous system is smoother with respect to time than a solution of the non-viscous system. The numerical simulations of the phase separation are done by using a Faedo-Galerkin method. Extremely small surface stress tensors cause Gibbs phenomena, which means that high overshoots and undershoots appear within a diffusive interface. In order to solve this problem, a numerical approximation method stabilised by dynamical friction is developed. A second viscous approximation method is analyzed, where friction is formulated in terms of driving forces. We show the equivalence of both methods by using the flow gradient structure of the system. Finally, an operator-splitting method is derived, where the Gibbs free energy density is decomposed into a convex and a concave part. The different numerical simulations show that the Cahn-Larché system with a concentration-dependent elasticity tensor fits the experiment qualitatively better than the simple model with a constant elasticity tensor. Mechanical stress singularities, which result from re-entrant corners or changing boundary conditions, essentially affect the development of the microstructure. In both cases a phase consisting of the softer material develops in the vicinity of a point with a stress singularity.

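A standard example of the convex-concave splitting mentioned at the end of the abstract, written here for a Cahn-Hilliard-type equation without the elastic coupling (illustration only), splits the free energy density Ψ = Ψ_+ + Ψ_- into a convex part Ψ_+ treated implicitly and a concave part Ψ_- treated explicitly:

```latex
\frac{c^{n+1}-c^{n}}{\tau} \;=\; \nabla\cdot\big(M\,\nabla\mu^{n+1}\big), \qquad
\mu^{n+1} \;=\; \Psi_{+}'(c^{n+1}) + \Psi_{-}'(c^{n}) - \varepsilon^{2}\,\Delta c^{n+1},
```

with time step τ, mobility M and interface parameter ε. Such splittings are popular because they yield unconditionally gradient-stable time discretizations.
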
Item Open Access Compressible multi-component and multi-phase flows: interfaces and asymptotic regimes (2021) Ostrowski, Lukas; Rohde, Christian (Prof. Dr.)
This thesis consists of three parts. In the first part we consider multi-component flows through porous media. We introduce a hyperbolic system of partial differential equations which describes such flows, prove the existence of solutions and the convergence in a long-time, large-friction regime to a parabolic limit system, and finally present a new numerical scheme to efficiently simulate flows in this regime. In the second part we study two-phase flows where both phases are considered compressible. We introduce a Navier-Stokes-Allen-Cahn phase-field model and derive an energy-consistent discontinuous Galerkin scheme for this system. This scheme is used for the simulation of two complex examples, namely drop-wall interactions and multi-scale simulations of coupled porous-medium/free-flow scenarios including drop formation at the interface between the two domains. In the third part we investigate two-phase flows where one phase is considered incompressible, while the other phase is assumed to be compressible. We introduce an incompressible-compressible Navier-Stokes-Cahn-Hilliard model to describe such flows. Further, we present some analytical results for this system, namely a computable expression for the effective surface tension in the system and a formal proof of the convergence to a (quasi-)incompressible system in the low Mach regime. As a first step towards a discontinuous Galerkin discretization of the system based on Godunov fluxes, we introduce the concept of an artificial equation of state modification, which is examined for a basic single-phase incompressible setting.

Item Open Access Compressible multicomponent flow in porous media with Maxwell-Stefan diffusion (2020) Ostrowski, Lukas; Rohde, Christian
We introduce a Darcy-scale model to describe compressible multicomponent flow in a fully saturated porous medium. In order to capture cross-diffusive effects between the different species correctly, we make use of the Maxwell-Stefan theory in a thermodynamically consistent way. For inviscid flow, the model turns out to be a nonlinear system of hyperbolic balance laws. We show that the dissipative structure of the Maxwell-Stefan operator makes it possible to guarantee the existence of global classical solutions for initial data close to equilibria. Furthermore, it is proven by relative entropy techniques that solutions of the Darcy-scale model tend, in a certain long-time regime, to solutions of a parabolic limit system.

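In one common formulation, the Maxwell-Stefan relations couple the mole-fraction gradients ∇x_i implicitly to the diffusive fluxes J_i via the binary diffusivities D_ij (with total molar concentration c):

```latex
\nabla x_i \;=\; \sum_{j\neq i} \frac{x_i\,J_j - x_j\,J_i}{c\,D_{ij}}, \qquad i = 1,\dots,N .
```

In contrast to Fick's law, the fluxes are defined only implicitly, which is precisely the source of the cross-diffusion effects the model has to capture.
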
Item Open Access Contact analysis and overlapping domain decomposition methods for dynamic and nonlinear problems (2008) Brunßen, Stephan; Wohlmuth, Barbara (Prof. Dr.)
This thesis is concerned with the development of efficient numerical schemes for the finite element simulation of elastoplastic incremental metal forming processes. Two examples of this new and promising manufacturing technology are introduced to motivate the research work. Some basic technology is provided to accelerate the implicit finite element simulation, which is still very costly for this kind of operation due to the small but very mobile forming zone and due to the highly nonlinear field equations and inequalities to be solved. For this purpose, the underlying equations and inequalities are reviewed. The main idea to meet these challenges is to use a divide-and-conquer approach: the workpiece is discretized with a global coarse mesh and the forming zone is meshed with a small fine grid. Unlike in adaptive finite elements, no sophisticated remeshing procedures are necessary, and the two grids are computationally independent of each other. The interface between the coarse and the fine computation is small, such that a block-iterative solution with two different finite element programs is possible. To hide the nonlinearities from the global computation, the two meshes exchange information about the plastic deformation and about the contact stresses. Results and algorithms from several disciplines of numerical mathematics and computational mechanics (contact, domain decomposition, iterative solvers, plasticity, etc.) are combined to accomplish this task.

Item Open Access Coupled simulations and parameter inversion for neural system and electrophysiological muscle models (2024) Homs-Pons, Carme; Lautenschlager, Robin; Schmid, Laura; Ernst, Jennifer; Göddeke, Dominik; Röhrle, Oliver; Schulte, Miriam
The functioning of the neuromuscular system is an important factor for quality of life. With the aim of restoring neuromuscular function after limb amputation, novel clinical techniques such as the agonist-antagonist myoneural interface (AMI) are being developed. In this technique, the residual muscles of an agonist-antagonist pair are (re-)connected via a tendon in order to restore their mechanical and neural interaction. Due to the complexity of the system, the AMI can substantially profit from in silico analysis, in particular to determine the prestretch of the residual muscles that is applied during the procedure and determines the range of motion of the residual muscle pair. We present our computational approach to facilitate this. We extend a detailed multi-X model for single muscles to the AMI setup, that is, a two-muscle-one-tendon system. The model considers subcellular processes as well as 3D muscle and tendon mechanics and is prepared for neural process simulation. It is solved on high performance computing systems. We present simulation results that show (i) the performance of our numerical coupling between muscles and tendon and (ii) a qualitatively correct dependence of the range of motion of the muscles on their prestretch. Simultaneously, we pursue a Bayesian parameter inference approach to invert for parameters of interest. Our approach is independent of the underlying muscle model and represents a first step toward parameter optimization, for instance, finding the prestretch, to be applied during surgery, that maximizes the resulting range of motion. Since our multi-X fine-grained model is computationally expensive, we present inversion results for reduced Hill-type models. Our numerical results for cases with known ground truth show the convergence and robustness of our approach.

Item Open Access ddX : polarizable continuum solvation from small molecules to proteins (2024) Nottoli, Michele; Herbst, Michael F.; Mikhalev, Aleksandr; Jha, Abhinav; Lipparini, Filippo; Stamm, Benjamin
Polarizable continuum solvation models are popular in both quantum chemistry and biophysics, though typically with different requirements for the numerical methods. However, the recent trend of multiscale modeling can be expected to blur field-specific differences. In this regard, numerical methods based on domain decomposition (dd) have been demonstrated to be sufficiently flexible to be applied across all these levels of theory while remaining systematically accurate and efficient. In this contribution, we present ddX, an open-source implementation of dd-methods for various solvation models, which features a uniform interface with classical as well as quantum descriptions of the solute, or any hybrid versions thereof. We explain the key concepts of the library design and its application program interface, and demonstrate the use of ddX for integration into standard chemistry packages. Numerical tests illustrate the performance of ddX and its interfaces. This article is categorized under: Software > Quantum Chemistry; Software > Simulation Methods.

Item Open Access Deep and greedy kernel methods : algorithms, analysis and applications (2023) Wenzel, Tizian; Haasdonk, Bernard (Prof. Dr.)

Item Open Access Dictionary-based online-adaptive structure-preserving model order reduction for parametric Hamiltonian systems (2024) Herkert, Robin; Buchfink, Patrick; Haasdonk, Bernard
Classical model order reduction (MOR) for parametric problems may become computationally inefficient due to large sizes of the required projection bases, especially for problems with slowly decaying Kolmogorov n-widths. Additionally, Hamiltonian structure of dynamical systems may be available and should be preserved during the reduction. In the current presentation, we address these two aspects by proposing a corresponding dictionary-based, online-adaptive MOR approach. The method requires dictionaries for the state variable, non-linearities and discrete empirical interpolation (DEIM) points. During the online simulation, local basis extensions/simplifications are performed in an online-efficient way, i.e., the runtime complexity of basis modifications and of the online simulation of the reduced models does not depend on the full state dimension. Experiments on a linear wave equation and a non-linear Sine-Gordon example demonstrate the efficiency of the approach.

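As background for the DEIM dictionaries mentioned above, the following Python sketch shows only the textbook ingredients: a POD basis from snapshots and the classical greedy selection of DEIM interpolation indices. It is not the online-adaptive, structure-preserving variant proposed in the paper.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD basis from a snapshot matrix (columns = states) via thin SVD."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def deim_points(U):
    """Classical DEIM greedy selection of interpolation indices for a basis U
    (textbook algorithm, not the paper's online-adaptive variant)."""
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c          # residual of interpolating column j at the chosen rows
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Usage sketch: a nonlinearity f(x) is then approximated as
#   f(x) ~= U_f (P^T U_f)^{-1} P^T f(x),
# so that only the few DEIM rows P^T f(x) have to be evaluated online.
```
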
Item Open Access Discretization techniques and efficient algorithms for contact problems (2008) Hüeber, Stefan; Wohlmuth, Barbara (Prof. Dr.)
This thesis is concerned with the development of efficient numerical solution algorithms for nonlinear contact problems with friction. Such problems play an important role in many technical and engineering applications. Thus, the design of discretization techniques and efficient solution strategies is still a challenging task, both from the engineering and the mathematical point of view. Domain decomposition techniques based on finite element methods are a powerful tool to approximate the solution of partial differential equations as they occur in the framework of structural mechanics. Here, we focus on discretization techniques based on the mortar method, introducing an additional unknown, called Lagrange multiplier or dual variable, in order to formulate the interface constraints between the involved bodies. In the framework of contact problems, where the weak formulation consists of a variational inequality, this additional variable models the contact stresses at the common contact interface. Using standard finite elements for the discretization of the Lagrange multiplier, the contact conditions result in a segment-to-segment approach, where the mechanical inequality constraints can only be resolved by some global optimization procedure on the contact boundary. This can be avoided by working with locally defined dual or biorthogonal basis functions for the Lagrange multiplier space. Then, the segment-to-segment approach is algebraically equivalent to a node-to-segment approach, and the inequality constraints decouple point-wise. Additionally, we are able to transform a two-body contact problem into a one-body problem by a local preprocess, and hence apply the same nonlinear solver. Mathematically, the preprocess is equivalent to a basis transformation; physically, master and slave side are glued together such that the two bodies form a composite material and the displacement on the slave side reflects the relative displacement between the two bodies. In this thesis, we analyze the discretization error of the proposed mortar formulation and give optimal a priori error estimates. A variety of numerical examples is given to confirm the achieved theoretical results. The decoupled contact constraints provide a basis for the construction of efficient solution algorithms. The presented numerical approaches are semi-smooth Newton methods, which are equivalent to a primal-dual active set strategy in the case without friction. The point-wise inequality constraints between the primal variable, i.e., the displacement, and the dual variable, i.e., the Lagrange multiplier, are written as an equality constraint by the use of a semi-smooth nonlinear complementarity function. Even for contact problems including friction with Coulomb's friction law, we are able to construct a full semi-smooth Newton algorithm. Due to the use of the dual basis functions for the Lagrange multiplier, we are able to eliminate the degrees of freedom for the dual variable locally. Thus, in each iteration step, we have to solve a linear system with respect to the primal variable, where the contact constraints enter as boundary conditions of Dirichlet, Neumann or Robin type. Therefore, existing finite element codes for structural mechanics can easily be extended to the case of contact problems by using the proposed methods. Using iterative solvers like optimal multigrid methods to solve the arising linear system in each step, we are able to construct inexact strategies, where the linear system is not solved completely in each Newton step. By this, we obtain an efficient algorithm for solving a fully nonlinear contact problem whose additional cost is negligible compared to solving a linear system. Several numerical examples are provided to investigate the performance and efficiency of the introduced algorithms. In the last part of this thesis, we extend the proposed formulation and the efficient solution algorithms to more general applications. Firstly, we adapt our solution strategies to the case of dynamical contact problems in combination with nonlinear material laws. In particular, we focus on energy-conserving algorithms. Secondly, we treat thermo-mechanical contact problems, where the temperature is introduced as an additional unknown. This extension is quite natural, since heat is generated by the mechanical frictional work. Just as the mechanical Lagrange multiplier takes care of the mechanical contact constraints, a thermal Lagrange multiplier modelling the heat flux across the contact interface is added to enforce the thermal flux conditions over the contact interface. We propose a mortar formulation for these Robin-type thermal interface conditions and extend our contact algorithms to solve the resulting nonlinear problem.

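The semi-smooth reformulation referred to in the abstract is, for the normal contact conditions with gap g and contact pressure λ_n, usually written with a max-type nonlinear complementarity (NCP) function (a standard formulation, quoted here for illustration):

```latex
\lambda_n \ge 0, \qquad u_n - g \le 0, \qquad \lambda_n\,(u_n - g) = 0
\quad\Longleftrightarrow\quad
C(\lambda_n,u_n) \;:=\; \lambda_n - \max\!\big(0,\;\lambda_n + c\,(u_n - g)\big) \;=\; 0
\qquad\text{for any fixed } c > 0 .
```

A Newton step applied to C then reproduces the primal-dual active set strategy: nodes with λ_n + c(u_n − g) > 0 are treated as active (Dirichlet-type conditions), the remaining ones as inactive (Neumann-type conditions).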