02 Fakultät Bau- und Umweltingenieurwissenschaften
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/3
Search Results
11 results
Item Open Access
Multiscale modeling and stability analysis of soft active materials : from electro- and magneto-active elastomers to polymeric hydrogels (Stuttgart : Institute of Applied Mechanics, 2023) Polukhov, Elten; Keip, Marc-André (Prof. Dr.-Ing.)
This work is dedicated to the modeling and stability analysis of stimuli-responsive, soft active materials within a multiscale variational framework. In particular, composite electro- and magneto-active polymers and polymeric hydrogels are considered. When electro- and magneto-active polymers (EAPs and MAPs) are fabricated as composites, they comprise at least two phases: a polymeric matrix and embedded electric or magnetic particles. The resulting composite is soft, highly stretchable, and fracture-resistant like the polymer matrix, and it undergoes stimuli-induced deformation due to the interaction of the particles. By designing the microstructure of EAP or MAP composites, a compressive or a tensile deformation can be induced under electric or magnetic fields, and the coupling response of the composite can be enhanced. Hence, these materials have found applications as sensors, actuators, energy harvesters, absorbers, and soft, programmable, smart devices in various areas of engineering. Similarly, polymeric hydrogels are stimuli-responsive materials. They undergo large volumetric deformations due to the diffusion of a solvent into their polymer network. The resulting material shows the characteristic behavior of both polymer and solvent. Therefore, these materials can also be fabricated as composites to further enhance their response. Since hydrogels are biocompatible, they have found applications as contact lenses, wound dressings, and drug encapsulators and carriers in biomedicine, complementing the applications of electro- and magneto-active polymers.
All of the above-mentioned favorable features of these materials, as well as their application possibilities, make it necessary to develop mathematical models and numerical tools that simulate their response, in order to design pertinent microstructures for particular applications and to understand the observed complex patterns, such as wrinkling, creasing, snapping, localization, and pattern transformations. These instabilities are often considered failure points of materials; however, many recent works exploit instabilities for smart applications. Investigating these instabilities and predicting their onset and mode are among the main goals of this work. The thesis is organized into three main parts. The first part is devoted to the state of the art in the development, fabrication, and modeling of soft active materials, as well as the continuum-mechanical description of magneto-electro-elasticity. The second part is dedicated to multiscale instabilities in electro- and magneto-active polymer composites within a minimization-type variational homogenization setting. This means that the highly heterogeneous problem is not resolved on a single scale, which would be computationally inefficient, but is replaced by an equivalent homogeneous problem. The effective response of the macroscopic homogeneous problem is determined by solving a microscopic representative volume element that includes all geometrical and material nonlinearities. To bridge the two scales, the Hill-Mandel macro-homogeneity condition is utilized. Within this framework, we investigate both macroscopic and microscopic instabilities. The former are important not only from a physical point of view but also from a computational one, since macroscopic stability (strong ellipticity) is necessary for the existence of minimizers at the macroscopic scale.
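The strong-ellipticity condition mentioned above can be checked numerically: the homogenized tangent moduli are strongly elliptic if the acoustic tensor Q_ik = n_j C_ijkl n_l is positive definite for every unit direction n. A minimal sketch in Python/NumPy (the 2D isotropic test tensor and the direction sampling are illustrative assumptions, not the thesis' implementation):

```python
import numpy as np

def acoustic_tensor(C, n):
    """Acoustic tensor Q_ik = n_j C_ijkl n_l for a unit direction n."""
    return np.einsum("j,ijkl,l->ik", n, C, n)

def is_strongly_elliptic(C, n_dirs=360):
    """Check strong ellipticity in 2D by sampling unit directions:
    the acoustic tensor must be positive definite for every direction."""
    for theta in np.linspace(0.0, np.pi, n_dirs, endpoint=False):
        n = np.array([np.cos(theta), np.sin(theta)])
        if np.min(np.linalg.eigvalsh(acoustic_tensor(C, n))) <= 0.0:
            return False
    return True

def isotropic_moduli(lam, mu):
    """Illustrative isotropic elasticity tensor:
    C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)."""
    d = np.eye(2)
    return (lam * np.einsum("ij,kl->ijkl", d, d)
            + mu * (np.einsum("ik,jl->ijkl", d, d)
                    + np.einsum("il,jk->ijkl", d, d)))
```

For the isotropic case the acoustic tensor has eigenvalues mu and lam + 2 mu, so ellipticity is lost as soon as the shear modulus becomes non-positive.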
Similarly, the investigation of the latter instabilities is important for determining pattern transformations at the microscale due to external action. In doing so, the critical domain of homogenization is also determined for the computation of accurate effective results. Both investigations are carried out for various composite microstructures, and it is found that instabilities play a crucial role in the response of the materials. Therefore, they must be considered when designing EAP and MAP composites and when performing reliable computations. The third part of the thesis is dedicated to polymeric hydrogels. Here, we develop a minimization-based homogenization framework to determine the response of transient periodic hydrogel systems. We demonstrate the prevailing size effect that results from the transient microscopic problem, which is investigated for various microstructures. Exploiting the elements of the proposed framework, we explore material and structural instabilities in single- and two-phase hydrogel systems. We observe complex, experimentally documented as well as novel 2D pattern transformations, such as diamond-plate patterns with and without wrinkling of internal surfaces for perforated microstructures, and 3D pattern transformations in thin reinforced hydrogel composites. The results indicate that the obtained patterns can be controlled by tuning the material and geometrical parameters of the composite.
Item Open Access
Capturing local details in fluid-flow simulations : options, challenges and applications using marker-and-cell schemes (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2024) Lipp, Melanie Gloria; Helmig, Rainer (Prof. Dr.-Ing.)
Complex local flow structures appear in a wide range of free-flow systems; for example, vortices form behind obstacles.
For understanding and predicting numerous processes it is important to capture local details in free fluid flow, which is the focus of this work. In particular, we are interested in local flow structures in free flow coupled to porous-medium flow. A better resolution of local structures in free flow can be achieved by refining computational grids, which is studied in this thesis. Specifically, we focus on finite-volume/finite-difference methods for the two-dimensional Navier-Stokes equations with constant density and constant viscosity, using the marker-and-cell method (pressures in cell centers, velocities on cell faces) and rectangular control volumes. A variety of methods with differing characteristics can be used to refine computational grids. The first objective of this work was to develop a common description for the many available approaches within a class of methods in our focus, and to display their similarities and differences. The second objective was to gain detailed insight into the behavior of local-refinement methods by examining one chosen method before numerical solution, i.e., by examining local truncation errors. The third objective was to deepen this understanding, and to display examples in which the chosen method is beneficial when computational-efficiency issues are neglected, by examining the chosen method after numerical solution, i.e., by examining actual numerical solutions.
Item Open Access
An XFEM-based model for fluid flow in fractured porous media (2015) Schwenck, Nicolas; Flemisch, Bernd (PD Dr. rer. nat.)
Many fields of application for porous-media flow include geometrically anisotropic inclusions and strongly discontinuous material coefficients that differ by orders of magnitude.
If the extension of those heterogeneities is small in the normal direction compared to the tangential directions, i.e., if they are long and thin, those features are called fractures. Applications in the earth sciences involving such fractured porous-media systems include reservoir engineering, groundwater-resource management, carbon capture and storage (CCS), radioactive-waste repositories, coal-bed methane migration in mines, geothermal engineering, and hydraulic fracturing. The analysis and prediction of flow in fractured porous-media systems is important for all of these applications. Experiments are usually too expensive and time-consuming to satisfy the demand for fast yet accurate information for decision making. Many different conceptual and numerical models for fractured porous-media systems can be found in the literature. However, even in the era of large supercomputers with massively parallel computing power, computational efficiency, and therefore economic efficiency, plays a dominant role in the evaluation of simulation software. In this thesis an efficient method to simulate flow in fractured porous-media systems is presented. Darcy flow in the fractures and the matrix is assumed. The presented method is best suited for flow regimes that depend on both the fractures and the surrounding rock matrix, and it can account for fractures that are highly conductive as well as almost impermeable with respect to the surrounding matrix. The newly developed method is based on a co-dimension-one conceptual model for the fracture network, which is embedded in the surrounding matrix. The basis for this model reduction is given in Martin et al. (2005). Numerically, the fracture network is resolved by its own grid and coupled to the independent matrix grid. The discretization on the matrix grid allows jumps in the solution across the geometrical position of the fractures within elements by means of discontinuous basis functions.
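The idea of discontinuous basis functions can be illustrated in one dimension: standard linear shape functions are augmented with a Heaviside enrichment of a level-set function, so the approximation may jump at the fracture position inside an element. A hypothetical minimal sketch (non-shifted Heaviside enrichment; the nodal values and enrichment coefficients are illustrative, not the thesis' discretization):

```python
import numpy as np

def xfem_value_1d(x, x_nodes, u_std, a_enr, x_frac):
    """Evaluate a 1D XFEM field on one linear element [x0, x1]:
    u(x) = sum_i N_i(x) * (u_i + a_i * H(x - x_frac)),
    where H is the sign of the level set (signed distance to the
    fracture), so the field may jump at x_frac inside the element."""
    x0, x1 = x_nodes
    N = np.array([(x1 - x) / (x1 - x0), (x - x0) / (x1 - x0)])  # hat functions
    H = 1.0 if x > x_frac else -1.0                              # Heaviside enrichment
    return float(N @ (np.asarray(u_std) + H * np.asarray(a_enr)))
```

With zero standard values and unit enrichment coefficients, the field jumps by 2 across the fracture at x_frac while remaining piecewise linear on either side.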
This discretization method is known as the eXtended Finite Element Method (XFEM). A similar approach was developed concurrently in D’Angelo and Scotti (2012). The main novelty of this work is the extension of the aforementioned conceptual model, which only accounts for a single fracture ending on the boundary of the matrix domain, towards more complex fracture networks and suitable boundary conditions. The work can be structured into the development and implementation of three conceptual models (see 1-3 below) and their respective validation, followed by an evaluation of quality and efficiency with respect to established models (see 4 below). The implementation is carried out using DUNE, a toolbox for solving partial differential equations. 1. The first extension is the treatment of fractures that end inside the domain. This includes the conceptual coupling at the fracture tips as well as the numerical treatment within the XFEM of the matrix elements in which a fracture ends. The validation shows that the proposed treatment is efficient and, for most validation cases, produces the desired accuracy. 2. In the second part, a conceptual model for intersecting fractures is developed and its implementation within the XFEM is presented. The validation shows that the proposed model and implementation capture the complex physics of fracture crossings very accurately. 3. Of special interest are the boundary conditions for lower-dimensional fractures intersecting the matrix boundary. For simplicity, established models very often use constant values across lower-dimensional intersections. This is not always physically correct: in reality, lower-dimensional objects do not exist, and if a gradient exists on the rock-matrix boundary, there is also a gradient on the fracture boundary. Therefore, a sophisticated interpolation method is proposed.
It is easy to apply, because discrete measured data are very often given to the model as input anyway, and the proposed interpolation of values at the boundary is separated from the flow problem inside the domain. The concepts and results of the crossing model (2) and the boundary-condition interpolation (3) are published in Schwenck et al. (2014). 4. To show the performance of the newly developed model, including the three major aspects mentioned above, it is compared against several established models and implementations within the simulation framework DuMux. The results of this comparison are published in Schwenck et al. (2015). The model presented here combines the advantages of lower-dimensional models and non-matching grids while keeping the ability to represent the fracture geometry and its influence on the matrix flow field exactly. It is therefore an efficient alternative to established models.
Item Open Access
The benefit of muscle-actuated systems : internal mechanics, optimization and learning (Stuttgart : Institut für Modellierung und Simulation Biomechanischer Systeme, Computational Biophysics and Biorobotics, 2023) Wochner, Isabell; Schmitt, Syn (Prof. Dr.)
We are facing the challenge of an over-aging and overweight society. This leads to an increasing number of movement disorders and causes a loss of mobility and independence. To address this pressing issue, we need to develop new rehabilitation techniques and design innovative assistive devices. Achieving this goal requires a deeper understanding of the mechanics underlying muscle-actuated motion. However, despite extensive studies, the neural control of muscle-actuated motion remains poorly understood. While experiments are valuable and necessary tools for furthering our understanding, they are often limited by ethical and practical constraints. Therefore, simulating muscle-actuated motion has become increasingly important for testing hypotheses and bridging this knowledge gap.
In silico, we can establish cause-effect relationships that are experimentally difficult or even impossible to measure. By changing morphological aspects of the underlying musculoskeletal structure or the neural control strategy itself, simulations are crucial in the quest for a deeper understanding of muscle-actuated motion. The insights gained from these simulations pave the way to developing new rehabilitation techniques, enhancing pre-surgical planning, designing better assistive devices, and improving the performance of current robots. The primary objective of this dissertation is to study the intricate interplay between the musculoskeletal dynamics, the neural controller, and the environment. To achieve this goal, a simulation framework has been developed as part of this thesis, enabling the modeling and control of muscle-actuated motion using both model-based and learning-based methods. Utilizing this framework, musculoskeletal models of the arm, the head-neck complex, and a simplified whole-body model are investigated in conjunction with various concepts of motor control. The main research questions of this thesis are therefore: 1. How does the neural control strategy select muscle activation patterns to generate a desired movement, and can we use this knowledge to design better assistive devices? 2. How do the musculoskeletal dynamics facilitate the neural control strategy in generating desired movements? To address these research questions, this thesis comprises a total of five journal and conference articles. Contributions I-III focus on the first research question, which aims to understand how voluntary and reflexive movements can be predicted. First, we investigate various optimality principles using a musculoskeletal arm model to predict point-to-manifold reaching tasks.
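Optimality principles of this kind are typically formalized as cost functionals evaluated on candidate trajectories. A hedged sketch of two such terms, integrated squared jerk and a stimulation-effort term (the weights and the discrete jerk approximation are illustrative assumptions, not the thesis' calibrated cost):

```python
import numpy as np

def jerk_cost(q, dt):
    """Integrated squared jerk of a sampled trajectory q(t),
    approximated with third-order finite differences."""
    jerk = np.diff(q, n=3) / dt**3
    return float(np.sum(jerk**2) * dt)

def composite_cost(q, u, dt, w_jerk=1.0, w_effort=1.0):
    """Weighted sum of a jerk term and a (hypothetical) quadratic
    stimulation-effort term; the weights are illustrative, not
    values identified from human experiments."""
    effort = float(np.sum(np.asarray(u) ** 2) * dt)
    return w_jerk * jerk_cost(q, dt) + w_effort * effort
```

A constant-velocity trajectory with zero stimulation incurs (numerically) zero cost, while any jerky motion or non-zero stimulation is penalized.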
Using predictive simulations, we demonstrate how the arm would move towards a goal if, for example, our neural control strategy minimized energy consumption. The main finding of this contribution is that it is essential to include muscle dynamics and to consider tasks with more openly defined targets in order to draw accurate conclusions about motor control. Through our analysis, we show that a combination of mechanical work, jerk, and neuronal stimulation effort best predicts point-reaching when compared to human experiments. Second, we propose a novel method to optimize the design of exoskeleton power units that takes into account the load cycle of predicted human movements. To achieve this goal, we employ a forward-dynamic simulation of a generic musculoskeletal arm model, which is first scaled to represent different individuals. Next, we predict individual human motions and use the predicted human torques to scale the electrical power units by means of a novel scalability model. By considering individual user needs and task demands, our approach achieves a lighter and more efficient design. In conclusion, our framework demonstrates the potential to improve the design of individual assistive devices. The third contribution focuses on predicting reflexive movements in response to sudden perturbations of the head-neck complex. To this end, we conducted experiments in which volunteers were placed on a table while their heads were supported by a trapdoor. The trapdoor was then suddenly released, leading to a downward movement of the head until the reflexive reaction of the muscles stopped the fall. We analyzed the results of these experiments, presenting characteristic parameters and highlighting differences between age and gender groups. Using this data, we also set up benchmark validations for a musculoskeletal head-neck model, including reflex control strategies.
Our main findings are that there are large individual differences in reflexive responses between participants and that the perturbation direction significantly affects the reflexive response. Furthermore, we show that this data can serve as a benchmark test to validate musculoskeletal models and different muscle control strategies. While the first three contributions address research question (1), contributions IV-V address (2): whether and how the musculoskeletal dynamics facilitate the learning and control of various movements. We utilize a recently introduced information-theoretic approach, called control effort, to quantify the minimal information required to perform specific movements. Applying this concept, we can, for example, quantify how much biological muscles reduce the neuronal information load compared to technical DC motors. We present a novel optimization algorithm to find this control effort and apply it to point-reaching and walking tasks. The main finding of this contribution is that the musculoskeletal dynamics reduce the control effort required for these movements compared to torque-driven systems. Finally, we hypothesize that the highly nonlinear muscle dynamics not only facilitate the control task but also provide inherent stability that is beneficial for learning from scratch. To test this, we employed various learning strategies for multiple anthropomorphic tasks, including point-reaching, ball-hitting, hopping, and squatting. The results of this investigation demonstrate that muscle-like actuators improve the data efficiency of the learning tasks. In addition, including the muscle dynamics improves robustness to hyperparameter choices and allows better generalization to unknown, unlearned perturbations.
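One commonly cited mechanism behind this inherent stability is first-order activation dynamics, a Zajac-style low-pass filter between neural excitation and muscle activation. The sketch below is illustrative, with hypothetical time constants, and is not the thesis' muscle model:

```python
import numpy as np

def activation_dynamics(u, a0=0.0, dt=0.001, tau_act=0.01, tau_deact=0.04):
    """First-order activation dynamics (Zajac-style sketch):
    da/dt = (u - a) / tau, with a faster time constant during
    activation (u > a) than during deactivation. This low-pass
    filters the neural excitation u(t) into the activation a(t)."""
    a = np.empty(len(u))
    prev = a0
    for i, ui in enumerate(u):
        tau = tau_act if ui > prev else tau_deact
        prev = prev + dt * (ui - prev) / tau
        a[i] = prev
    return a
```

A step in the excitation produces a smooth, monotone exponential rise in activation rather than an instantaneous jump, which is one reason muscle-like actuators can tolerate noisy or poorly tuned control signals.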
In summary, this thesis enhances existing methods for controlling and learning muscle-actuated motion, quantifies the control effort needed to perform certain movements, and demonstrates that the inherent stability of the muscle dynamics facilitates the learning task. The models, control strategies, and experimental data presented in this work help researchers in science and industry improve their predictions in various fields such as neuroscience, ergonomics, rehabilitation, passive safety systems, and robotics. This allows us to reverse-engineer how we as humans control movement, uncovering the complex relationship between musculoskeletal dynamics and neural controller.
Item Open Access
Finite strain hyperelastic multiscale homogenization via projection, efficient sampling and concentric interpolation (Stuttgart : Institute of Applied Mechanics, 2021) Kunc, Oliver; Fritzen, Felix (Prof. Dr.-Ing. Dipl.-Math. techn.)
Item Open Access
Uncertainty quantification for expensive simulations : optimal surrogate modeling under time constraints (2017) Sinsbeck, Michael; Nowak, Wolfgang (Prof. Dr.-Ing.)
Motivation and Goal
Computer simulations allow us to predict the behavior of real-world systems. Any simulation, however, contains imperfectly adjusted parameters and simplifying assumptions about the processes considered. Therefore, simulation-based predictions can never be expected to be completely accurate, and the exact behavior of the system under consideration remains uncertain. The goal of uncertainty quantification (UQ) is to quantify how large the deviation between the real-world behavior of a system and its predicted behavior can possibly be. Such information is valuable for decision making. Computer simulations are often computationally expensive: each simulation run may take several hours or even days. Therefore, many UQ methods rely on surrogate models.
A surrogate model is a function that behaves similarly to the simulation in terms of its input-output relation but is much faster to evaluate. Most surrogate modeling methods are convergent: with increasing computational effort, the surrogate model converges to the original simulation. In engineering practice, however, results often have to be obtained under time constraints. In such situations it is not an option to increase the computational effort arbitrarily, and the convergence property loses some of its appeal. For this reason, the key question of this thesis is: what is the best possible way of solving UQ problems if the available time is limited? This is a question of optimality rather than convergence. The main idea of this thesis is to construct UQ methods by means of mathematical optimization, so that we make optimal use of the available time.
Contributions
This thesis contains four contributions to the goal of UQ under time constraints. 1. A widely used surrogate modeling method in UQ is stochastic collocation, which is based on polynomial chaos expansions and therefore leads to polynomial surrogate models. In the first contribution, I developed an optimal sampling rule specifically designed for the construction of polynomial surrogate models. This sampling rule proved more efficient than existing rules because it is stable, flexible, and versatile; existing methods lack at least one of these properties. Stability guarantees that the response surface will not oscillate between the sample points, flexibility allows the modeler to choose the number of function evaluations freely, and versatility means that the method can handle multivariate input distributions with statistical dependence. 2. In the second contribution, I generalized the previous approach and optimized both the sampling rule and the functional form of the surrogate in order to obtain a general optimal surrogate modeling method.
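For orientation, a random-field surrogate of the kind that arises in this context can be sketched as simple Kriging, i.e., Gaussian-process regression with a squared-exponential covariance. The zero-mean assumption and the hyperparameters below are illustrative, not the thesis' optimized method:

```python
import numpy as np

def kriging_predict(X, y, Xs, length=1.0, sigma2=1.0, nugget=1e-10):
    """Simple (zero-mean) Kriging / GP prediction in 1D with a
    squared-exponential covariance. Returns the posterior mean and
    variance at the points Xs, given observations y at X."""
    def k(A, B):
        d = np.subtract.outer(A, B)
        return sigma2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + nugget * np.eye(len(X))          # covariance of the data
    Ks = k(Xs, X)                                  # cross-covariance
    mean = Ks @ np.linalg.solve(K, y)
    var = sigma2 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var
```

The estimator interpolates the data (posterior variance near zero at observed points) and reports growing variance away from them, which is exactly the "uncertainty about unobserved model behavior" that a deterministic response surface cannot express.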
I compared three possible approaches to such optimization; the only one that leads to a practical surrogate modeling method requires the modeler to describe the model function by a random field, and the optimal surrogate then coincides with the Kriging estimator. 3. I developed a sequential sampling strategy for solving Bayesian inverse problems. As in the second contribution, the modeler has to describe the model function by a random field. The sequential design strategy selects sample points one at a time in order to minimize the residual error in the solution of the inverse problem. Numerical experiments showed that the sequential design is more efficient than non-sequential methods. 4. Finally, I investigated the impact of available measurement data on the model selection between a reference model and a low-fidelity model. It turned out that, under time constraints, data can favor the use of a low-fidelity model. This contrasts with model selection without time constraints, where the availability of data often favors more complex models.
Conclusions
From the four contributions, the following overarching conclusions can be drawn. • Under time constraints, the number of possible model evaluations is restricted, and the model behavior at unobserved input parameters remains uncertain. This type of uncertainty should be taken into account explicitly. For this reason, random fields should be preferred as surrogates over deterministic response-surface functions when working under time constraints. • Optimization is a viable approach to surrogate modeling. Optimal methods are automatically flexible, which means they are easily adaptable to the available computing time. • Under time constraints, all available information about the model function should be used.
• Model selection with and without time constraints is entirely different.
Item Open Access
Stochastic model comparison and refinement strategies for gas migration in the subsurface (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2023) Banerjee, Ishani; Nowak, Wolfgang (Prof. Dr.-Ing.)
Gas migration in the subsurface, a multiphase flow in a porous-medium system, is a problem of environmental concern and is also relevant for subsurface gas storage in the context of the energy transition. Knowing and understanding the flow paths of these gases in the subsurface is essential for efficient monitoring, remediation, or storage operations. On the one hand, laboratory gas-injection experiments help gain insights into the processes involved in these systems. On the other hand, numerical models help test the mechanisms observed and inferred from the experiments and then make useful predictions for real-world engineering applications. Both continuum and stochastic modelling techniques are used to simulate multiphase flow in porous media. In this thesis, I use a stochastic discrete growth model: the macroscopic Invasion Percolation (IP) model. IP models have the advantage of simplicity and computational inexpensiveness over complex continuum models. Local pore-scale changes dominate the processes of gas flow in water-saturated porous media. IP models are especially favourable for these multi-scale systems, because simulating them with continuum models can be extremely difficult computationally. Yet despite offering a computationally inexpensive way to simulate multiphase flow in porous media, only very few studies have compared IP model results to actual laboratory experimental image data. One reason might be that IP models lack a notion of experimental time; they only have an integer counter for simulation steps that implies a time order.
The few existing experiment-to-model comparison studies have used perceptual similarity or spatial moments as comparison measures. Perceptual comparison between model and experimental images is tedious and subjective, while comparing spatial moments of model and experimental images can give misleading results because of the information lost in the reduction. In this thesis, an objective and quantitative comparison method is developed and tested that overcomes the limitations of these traditional approaches. The first step involves volume-based time-matching between real-time experimental data and IP-model outputs. This is followed by using the (Diffused) Jaccard coefficient to evaluate the quality of the fit. The fit between model and experimental images can be checked across various scales by varying the extent of blurring in the images. Numerical model predictions for sparsely known systems (like gas-flow systems) suffer from high conceptual uncertainty. In the literature, numerous versions of IP models, differing in their underlying hypotheses, have been used for simulating gas flow in porous media. Moreover, gas-injection experiments belong to continuous, transitional, or discontinuous gas-flow regimes, depending on the gas flow rate and the nature of the porous medium. The literature suggests that IP models are well suited to the discontinuous gas-flow regime; the other flow regimes have not been explored. Using the above method, four macroscopic IP model versions are compared in this thesis against data from nine gas-injection experiments in the transitional and continuous gas-flow regimes. This model inter-comparison helps assess the potential of these models in these unexplored regimes and identify the sources of conceptual model uncertainty.
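The Jaccard comparison can be sketched for binary invasion masks: intersection over union, optionally after blurring both images so that near-misses still count as overlap. The box-blur "diffused" variant below is a loose illustration of the idea, not the thesis' exact definition:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard coefficient of two binary masks: |A ∩ B| / |A ∪ B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def diffused_jaccard(a, b, radius=1):
    """Hypothetical 'diffused' variant: box-blur both masks so that
    near-misses still overlap, then take the fuzzy intersection
    (pixelwise min) over the fuzzy union (pixelwise max)."""
    def blur(m):
        m = np.asarray(m, float)
        out = np.zeros_like(m)
        count = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(m, dy, 0), dx, 1)
                count += 1
        return out / count
    A, B = blur(a), blur(b)
    return np.minimum(A, B).sum() / np.maximum(A, B).sum()
```

Two masks that miss each other by one pixel score 0 under the plain coefficient but a positive value under the diffused one; increasing the blur radius corresponds to checking the fit at coarser scales.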
Alternatively, with a focus on parameter uncertainty, Bayesian Model Selection is a standard statistical procedure for systematically and objectively comparing different model hypotheses by computing the Bayesian Model Evidence (BME) against test data. BME is the likelihood of a model producing the observed data, given the prior distribution of its parameters. Computing BME can be challenging: exact analytical solutions require strong assumptions; mathematical approximations (information criteria) are often strongly biased; and assumption-free numerical methods (like Monte Carlo) are computationally infeasible for large data sets. In this thesis, a BME-computation method is developed so that BME can serve as a ranking criterion even in such infeasible scenarios: the Method of Forced Probabilities for extensive data sets and Markov-chain models. In this method, the direction of evaluation is swapped: instead of comparing thousands of runs on random model realizations with the observed data, the model is forced to reproduce the data in each time step, and the individual probabilities of the model following these exact transitions are recorded. This is a fast, accurate, and exact method for calculating BME for IP models, which exhibit the Markov-chain property, given complete "atomic" data. The analysis results obtained with the methods and tools developed in this thesis help identify the strengths and weaknesses of the investigated IP model concepts. This supports model development and refinement for predicting gas migration in the subsurface, and the insights gained foster improved experimental methods.
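For a discrete Markov-chain model and complete data, the forced-probabilities idea reduces to accumulating the log of each observed one-step transition probability. A minimal sketch (the transition matrix P is an illustrative stand-in for an IP model's forced-step probabilities):

```python
import numpy as np

def log_bme_forced(P, states):
    """Method-of-forced-probabilities sketch for a discrete Markov
    chain: instead of Monte Carlo over random model runs, force the
    model through the observed state sequence and accumulate the log
    of each one-step transition probability. The sum is the log
    likelihood of the data under the model (the integration over the
    parameter prior is omitted here)."""
    logp = 0.0
    for s, t in zip(states[:-1], states[1:]):
        p = P[s, t]
        if p == 0.0:
            return -np.inf  # observed transition impossible under the model
        logp += np.log(p)
    return logp
```

One pass over the data replaces thousands of random realizations, which is what makes the approach feasible for extensive data sets.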
These tools and methods are not limited to gas-flow systems in porous media but can be extended to any system involving raster outputs.
Item Open Access
Process-oriented modeling of spatial random fields using copulas (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2016) Hörning, Sebastian; Bárdossy, András (Prof. Dr. rer. nat. Dr.-Ing.)
Item Open Access
Numerical coupling of Navier-Stokes and Darcy flow for soil-water evaporation (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2017) Grüninger, Christoph; Flemisch, Bernd (apl. Prof. Dr. rer. nat.)
The objective of this work is to develop algorithms and provide a framework for an efficient coupling of free flow and porous-medium flow in order to simulate soil-water evaporation from a porous medium. The implementation must in particular be capable of simulating laminar free flows, be fast enough for applied research, and cover simulations in two and three dimensions with complex geometries. We introduce a model for a compositional, non-isothermal free flow coupled with a two-fluid-phase, compositional, non-isothermal porous-medium flow. The free flow is modeled with the Navier-Stokes, component-transport, and energy-transport equations. The porous-medium flow is modeled with compositional two-fluid-phase Darcy and energy-transport equations. As the pressure appears at different differential orders in the free-flow and porous-medium-flow subdomains, the coupling is not straightforward. Although the simulation of the coupled flows is motivated by a laboratory experiment measuring soil-water evaporation caused by wind blowing over a water-filled porous bed, we intend to also explore its use in other applications. The free flow is considered incompressible and laminar. We further assume that air and water follow nonlinear laws describing their physical properties, and that binary diffusion takes place. Within the porous medium, only creeping flows occur.
Many quantities are averaged and used in a macroscopic sense. We use a formulation of the two-phase Darcy law with the liquid saturation and the gas pressure as primary variables. The component mass fractions are calculated from Henry's law and the vapor pressure. The liquid phase may locally vanish, leading to a variable switch in which the vapor mass fraction is tracked instead of the liquid saturation. We assume that local thermodynamic equilibrium holds everywhere within the domain, even across the interface. We follow the coupling concept proposed by Mosthaf et al. (2011), including the Beavers-Joseph-Saffman approach, which assumes a sharp interface between the two subdomains. We use a cell-centered finite volume method (FVM) on an axis-parallel grid to discretize the partial differential equations of the compositional two-phase Darcy law, the heat equation in both subdomains, and the component transport in the free-flow domain. For the Navier-Stokes equation, we use the marker-and-cell (MAC) scheme, which moves the degrees of freedom for the velocities towards the edges of the grid elements, forming one secondary, staggered grid per dimension. The MAC scheme is stable and can be interpreted as an FVM. The coupling conditions are applied without additional variables along the coupling interface; they are incorporated as Dirichlet, Neumann, or Robin boundary conditions resulting in interface fluxes. For the porous-medium flow, we use the finite-volume implementation provided by DuMuX. The marker-and-cell scheme is implemented on top of Dune-PDELab, utilizing the material laws from DuMuX. The grid is split into two subdomains, and the grid elements can be graded; this is especially useful for placing smaller elements close to the interface. We can use complex geometries in two or three dimensions. The coupling is provided by a Dune-Multidomain local coupling operator. The time integration is approximated with an implicit Euler scheme and adaptive time stepping.
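As a rough illustration of the equilibrium composition and the primary-variable switch described above, consider the following sketch. All function and variable names are hypothetical (not the DuMuX API), and the physics is reduced to idealized Dalton and Henry relations.

```python
def mole_fractions_two_phase(p_gas, p_sat, henry_coeff):
    """Equilibrium composition when both phases are present (sketch).
    Water in the gas phase follows the vapor pressure (Dalton's law);
    air dissolved in the liquid follows Henry's law. Pressures in Pa;
    henry_coeff is an illustrative Henry coefficient for air in water."""
    x_water_gas = p_sat / p_gas                      # water vapor mole fraction in gas
    x_air_gas = 1.0 - x_water_gas                    # remaining gas is air
    x_air_liquid = x_air_gas * p_gas / henry_coeff   # Henry: p_air = H * x_air_liquid
    x_water_liquid = 1.0 - x_air_liquid
    return x_water_gas, x_air_liquid, x_water_liquid

def primary_variables(liquid_saturation, x_water_gas, x_water_gas_eq):
    """Primary-variable switch (sketch): track the liquid saturation while
    liquid is present; once it vanishes, track the water-vapor fraction in
    the gas phase instead, which may then drop below its equilibrium value."""
    if liquid_saturation > 0.0:
        return ("liquid_saturation", liquid_saturation)
    return ("x_water_gas", min(x_water_gas, x_water_gas_eq))
```

The switch keeps the number of unknowns per cell constant while allowing the liquid phase to appear and disappear locally.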
The system of nonlinear equations is linearized by a Newton method. All contributions to the Jacobian are compiled into one system of linear equations. The resulting matrices are difficult to solve: although they are sparse, with a blocked structure of bands of nonzero entries, they contain a saddle-point problem and are nonsymmetric. We solve these systems with direct methods. We also investigate iterative methods to get around the computational complexity and memory consumption of the direct methods: an algebraic multigrid (AMG) method, a Schur complement method, and a generalized minimal residual method (GMRES) preconditioned with the reordering algorithm MC64 and an incomplete LU factorization with threshold and pivoting (ILUTP). The AMG method's error criteria lead to convergence problems. The Schur complement method is slow, as the Schur complement, which is not explicitly calculated, lacks preconditioners. GMRES with ILUTP shows results similar to a direct method, but reveals a restriction on the time-step size for larger problem sizes, negating a possible speedup compared to the direct methods. We validate our implementation against a laboratory experiment for soil-water evaporation. The experiment consists of a water-filled sand box with a horizontal pipe installed on top of the box and a propeller creating a constant air flow. We use the implementation to investigate the influence of the Reynolds number on the evaporation rate. Further, we compare the two-dimensional simplification to different three-dimensional geometries with regard to their effects on the evaporation. For low Reynolds numbers, the geometry of the free-flow subdomain has a significant influence on the evaporation rate. Another application involves a geological repository for nuclear waste, where we investigate the water saturation in the concrete ceiling and the rock above a ventilation gallery.
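The ILU-preconditioned GMRES approach can be illustrated with SciPy, whose `spilu` wraps SuperLU's incomplete LU with a drop threshold (similar in spirit to ILUTP, though without MC64 reordering). The small nonsymmetric tridiagonal system below is only a stand-in, not the actual coupled Jacobian, which additionally contains a saddle-point block.

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Nonsymmetric, diagonally dominant test system (illustrative stand-in).
n = 50
A = csc_matrix(diags(
    [-0.5 * np.ones(n - 1), 4.0 * np.ones(n), -1.0 * np.ones(n - 1)],
    offsets=[-1, 0, 1]))
b = np.ones(n)

# Incomplete LU with a drop threshold as the preconditioner.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), ilu.solve)

x, info = gmres(A, b, M=M)          # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
```

A direct sparse factorization of the same system would also work here; the point of the iterative variant is the reduced memory footprint on large three-dimensional grids.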
Our results indicate that within the first 200 years, only part of the concrete will dry, and the rock will remain unaffected. This confirms results reported by another group, although they observe evaporation rates up to a factor of ten higher. Our third application is the water management within a polymer electrolyte membrane (PEM) fuel cell. Neglecting electrochemistry, we simulate the flow through the gas channels and the porous layer covering the membrane, including the transport of vapor and liquid water, the evaporation of water within the porous layer, and how energy and vapor are conveyed away. In comparison to the above applications, the gas-phase flow is not horizontally parallel to the porous bed, but is forced to completely enter the porous medium and leave it through a second gas channel. We also briefly compare two different gas channel layouts. We introduce the discretization of the coupling concept and its implementation, conduct simulations of applications from different areas, and show the versatility of our approach and that it can serve as a basis for further research.

Item Open Access Clustering simultaneous occurrences of extreme floods in the Neckar catchment (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2022) Modiri, Ehsan; Bárdossy, András (Prof. Dr. rer. nat. Dr.-Ing.)