Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1


Search Results

Now showing 1 - 10 of 34
  • Item (Open Access)
    High-resolution spatio-temporal measurements of the colmation phenomenon under laboratory conditions
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2022) Mayar, Mohammad Assem; Wieprecht, Silke (Prof. Dr.-Ing.)
The infiltration and accumulation of fine sediment into the gravel beds of rivers, the so-called colmation phenomenon, is a pernicious process exacerbated by anthropogenic activities. Owing to its importance and complexity, this phenomenon has been widely studied over the last decades. Various devices and methods have been developed to assess it, most of which are destructive and sample-based, resulting in an alteration of the natural conditions. Therefore, non-intrusive techniques that provide spatial and temporal details at high resolution are required to resolve the mechanisms involved in the colmation process. Investigations under laboratory conditions may simplify the complexity of nature and enable individual, exactly defined boundary conditions to be investigated. This thesis therefore aims at (i) developing a non-intrusive, undisturbed measurement method for high-resolution spatio-temporal measurements of sediment infiltration processes and of the development of sediment accumulation in an artificial river bed under laboratory conditions, (ii) applying this method in experiments to assess the effects of different boundary conditions on sediment infiltration, and (iii) investigating the colmation (also known as clogging) of gravel beds. For this purpose, the gamma-ray attenuation method is used together with an artificial gravel bed composed of spheres of various diameters and placed in a laboratory flume. The method is based on gamma radiation passing through the infiltrated sediments, water, and bed spheres, with the attenuation of the beam linked to variations in the quantity of infiltrated sediment. The main simplification of this approach is that gravel beds are represented by combinations of different-sized spheres. This makes it possible to fully distinguish infiltrating sediments from the bed material, reduces the complexity of the natural environment, and allows for repeated measurements at the same position under different boundary conditions. As a first result of this study, the gamma-ray attenuation measurement method was optimized to resolve inconsistencies in the measurements. Subsequently, the concept of non-intrusive, undisturbed measurement was proven through box experiments. Additional reproducibility experiments in the laboratory flume, for a similar bed structure, showed only small deviations between two experiments with the same setup. Consequently, the established technique was used in a series of experiments to evaluate the effects of different supply rates, total supply masses, and sediment particle sizes on the sediment infiltration and colmation processes. Vertical profiles of the infiltrated sediment were quantified through measurements at high spatial resolution. Furthermore, to evaluate the development of the sediment accumulation and the temporal variations of the infiltrated sediments, the vertical profile measurements were first repeated after a specific time period to track interval-averaged variations at all positions along the vertical axis. Next, a specific position on the vertical axis was measured continuously during the entire experiment at high temporal resolution. The measured vertical profiles illustrate the vertical distribution, colmation, and unimpeded percolation of the infiltrated sediments.
The dynamic one-point measurement precisely identifies the three phases of sediment infiltration or possible clogging: the start of pore filling, the time required to fill the pore, and the final amount of infiltrated sediment, including natural fluctuations during the ongoing experiments. As a limitation, the current configuration of the gamma-ray attenuation system only works in artificial gravel beds, because of the required density difference between infiltrated sediments and the artificial bed structure. Intense radiation passing through the thickness of a natural bed can detect a significant amount of infiltrated sediment; however, small amounts of infiltrated sediment create only a minimal shift in attenuation, which might be confused with the statistical error. In addition, legal restrictions against using radioactive material in the natural environment are another reason for not applying the method in the field. Furthermore, the gamma-ray attenuation method cannot resolve the sediment distribution within the measurement horizon and provides an integrative result for each measurement position. Likewise, if a mixture of silt, clay, and sand is supplied to the experiment, the system produces a bulk result for all infiltrated materials. To conclude, despite the limitations mentioned above, the gamma-ray attenuation method offers a unique opportunity for non-intrusive and undisturbed measurements of sediment infiltration, or the special case of colmation, at high spatio-temporal resolution. The method can quantify the investigated processes on a millimetric spatial scale if measurement time is not a constraint, or, conversely, at high temporal resolution (seconds) for a specific position if the spatial scale is not important. Moreover, the gamma-ray attenuation approach can simultaneously measure the longitudinal distribution of the sedimentological processes if multiple instruments, or a single device with several radiation-emitting holes, are in operation. Last but not least, rather than spheres, artificial gravel beds could be made of any substance with a composition significantly different from the infiltrating sediments, and the boundary conditions of the experiments can be refined to approach natural conditions. Finally, the gamma-ray attenuation method can be integrated with advanced flow measurement instruments such as Particle Image Velocimetry (PIV) and other high-resolution endoscopic devices to track the behavior of fine sediment infiltration and its clogging process in porous gravel beds as it occurs in nature.
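    The abstract describes the measurement principle only qualitatively. Gamma-ray attenuation gauging conventionally follows the Beer-Lambert law, so a hedged sketch of how the detected counts map to infiltrated sediment reads as follows (the symbols and the two-count calibration are illustrative assumptions, not taken from the thesis):

    ```latex
    I = I_0 \exp\!\Big(-\sum_i \mu_i\,\rho_i\,x_i\Big),
    \qquad
    x_{\mathrm{sed}} = \frac{\ln\!\big(I_{\mathrm{ref}}/I\big)}{\mu_{\mathrm{sed}}\,\rho_{\mathrm{sed}}}
    ```

    Here $I_0$ is the source intensity, and each material $i$ (water, bed spheres, infiltrated sediment) contributes a mass attenuation coefficient $\mu_i$, a density $\rho_i$, and a path length $x_i$. If the water and sphere terms stay constant at a fixed measurement position, comparing the current count $I$ with a sediment-free reference count $I_{\mathrm{ref}}$ isolates the sediment path length $x_{\mathrm{sed}}$, which is what links the attenuation shift to the infiltrated sediment quantity.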
  • Item (Open Access)
    Porosity and permeability alterations in processes of biomineralization in porous media - microfluidic investigations and their interpretation
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2022) Weinhardt, Felix; Class, Holger (apl. Prof. Dr.-Ing.)
    Motivation: Biomineralization refers to microbially induced processes resulting in mineral formations. In addition to the complex biomineral structures frequently formed by marine organisms, like corals or mussels, microbial activities may also indirectly induce mineralization. A famous example is the formation of stromatolites, which result from biofilm activities that locally alter the chemical and physical properties of the environment in favor of carbonate precipitation. Recently, biomineralization has gained attention as an engineering application. Especially against the background of global warming and the objective of reducing CO2 emissions, biomineralization offers an innovative and sustainable alternative to conventional Portland cement, whose production currently contributes significantly to global CO2 emissions. The most widely used method of biomineralization in engineering applications is ureolytic calcium carbonate precipitation, which relies on the hydrolysis of urea and the subsequent precipitation of calcium carbonate. The hydrolysis of urea at moderate temperatures is relatively slow and therefore needs to be catalyzed by the enzyme urease to be practical for applications. Urease can be extracted from plants, for example from ground jack beans, and the process is consequently referred to as enzyme-induced calcium carbonate precipitation (EICP). Another method is microbially induced calcium carbonate precipitation (MICP), which uses ureolytic bacteria that produce the enzyme in situ. EICP and MICP applications allow for producing various construction materials, stabilizing soils, or creating hydraulic barriers in the subsurface. The latter can be used, for example, to remediate leakages at the top layer of gas storage reservoirs, or to contain contaminant plumes in aquifers. Especially when remediating leakages in the subsurface, the most crucial parameter to be controlled is the intrinsic permeability. A valuable tool for predicting and planning field applications is numerical simulation at the scale of representative elementary volumes (REVs). For that, the considered domain is subdivided into several REVs, which do not resolve the pore space in detail but represent it by averaged parameters, such as porosity and permeability. The porosity describes the ratio of the pore space to the considered bulk volume, and the permeability quantifies the ease of fluid flow through a porous medium. A change in porosity generally also affects permeability. Therefore, for REV-scale simulations, constitutive relationships are utilized to describe permeability as a function of porosity. There are several porosity-permeability relationships in the literature, such as the Kozeny-Carman relationship, the Verma-Pruess relationship, or simple power-law relationships. These constitutive relationships can describe individual states but usually do not include the underlying processes. Different boundary conditions during biomineralization may influence the course of porosity-permeability relationships; however, this influence has not yet been adequately addressed. Pore-scale simulations are, in principle, very well suited to systematically investigate pore-space changes and their effects on permeability. However, these simulations also rely on simplifications and assumptions. Therefore, it is essential to conduct experimental studies to investigate the complex processes during calcium carbonate precipitation in detail at the pore scale.
Recent studies have shown that microfluidic methods are particularly suitable for this purpose. However, previous microfluidic studies have not explicitly addressed the hydraulic effects of biomineralization. Therefore, this work aims to identify relevant phenomena at the pore scale in order to draw conclusions on the REV-scale parameters, porosity and permeability, and their relationship. Contributions: This work comprises three publications. First, a suitable microfluidic setup and workflow were developed in Weinhardt et al. [2021a] to reliably study pore-space changes and the associated hydraulic effects. This paper illustrated the benefits and insights of combining optical microscopy and micro X-ray computed tomography (micro-XRCT) with hydraulic measurements in microfluidic chips. The elaborated workflow allowed for a quantitative analysis of the evolution of calcium carbonate precipitates in terms of their size, shape, and spatial distribution. At the same time, their influence on the differential pressure could be observed as a measure of flow resistance. Consequently, porosity and permeability changes could be determined. Along with this paper, we published two data sets [Weinhardt et al., 2021b, Vahid Dastjerdi et al., 2021] and set the basis for two further publications. In the second publication [von Wolff et al., 2021], the simulation results of a pore-scale numerical model, developed by Lars von Wolff, were compared to the experimental data of the first paper [Weinhardt et al., 2021b]. We observed a good agreement between the experimental data and the model results. The numerical studies complemented the experimental observations by allowing for an accurate analysis of crystal growth as a function of local velocity profiles. In particular, we observed that crystal aggregates tend to grow toward the upstream side, where the supply of reaction products is higher than on the downstream side. Crystal growth during biomineralization under continuous inflow thus depends strongly on the locally varying velocities in a porous medium. In the third publication [Weinhardt et al., 2022a], we conducted further microfluidic experiments based on the experimental setup and workflow of the first contribution and published another data set [Weinhardt et al., 2022b]. We used microfluidic cells with a different, more realistic pore structure and investigated the influence of different injection strategies. We found that the development of preferential flow paths during EICP applications may depend on the given boundary conditions: constant inflow rates can lead to the development of preferential flow paths and keep them open, whereas gradually reduced inflow rates can mitigate this effect. In addition, we concluded that the coexistence of multiple calcium carbonate polymorphs and their transformations could influence the temporal evolution of porosity-permeability relationships.
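    The abstract names three constitutive porosity-permeability relationships without stating them. For orientation, their standard literature forms are given below; the notation ($k_0$, $\phi_0$ for initial permeability and porosity, $\phi_c$ for a critical porosity, $\eta$ for a fitting exponent) is assumed here, and the thesis's exact parametrizations may differ:

    ```latex
    \text{power law:}\quad \frac{k}{k_0} = \left(\frac{\phi}{\phi_0}\right)^{\eta},
    \qquad
    \text{Kozeny-Carman:}\quad \frac{k}{k_0} = \left(\frac{1-\phi_0}{1-\phi}\right)^{2}\left(\frac{\phi}{\phi_0}\right)^{3},
    \qquad
    \text{Verma-Pruess:}\quad \frac{k}{k_0} = \left(\frac{\phi-\phi_c}{\phi_0-\phi_c}\right)^{\eta}
    ```

    All three map a porosity change to a permeability change but encode different pore-scale assumptions, which is why different precipitation regimes can trace different curves through the same $(\phi, k)$ states.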
  • Item (Open Access)
    Investigations on functional relationships between cohesive sediment erosion and sediment characteristics
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2021) Beckers, Felix; Wieprecht, Silke (Prof. Dr.-Ing.)
  • Item (Open Access)
    A surrogate-assisted Bayesian framework for uncertainty-aware validation benchmarks
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2023) Mohammadi, Farid; Flemisch, Bernd (apl. Prof. Dr. rer. nat.)
    Over the last century, computational modeling in geoscience, especially in porous-media research, has witnessed tremendous improvement. After decades of development, state-of-the-art simulators can now solve the coupled partial differential equations governing complex subsurface multiphase flow systems within practically large spatial and temporal domains. Given the importance of computational modeling, quality assessment of these models in light of the purpose of a given simulation is of paramount importance to engineering designers and managers, public officials, and those affected by decisions based on the predictions. Users and developers of computational simulations face a challenging question: how should confidence in modeling and simulation be critically assessed? Validation is one of the primary methods for building and quantifying confidence in modeling and simulation. It investigates the degree to which a model accurately represents reality from the perspective of the intended application of the model. Usually, the comparison between model outputs and experimental data consists of plotting the model results against the data on the same axes to provide a visual assessment of agreement or lack thereof. While comparisons between model and data are at the heart of any validation procedure, there are several concerns with such naive comparisons. First, they tend to provide qualitative rather than quantitative assessments and are clearly insufficient as a basis for making decisions regarding model validity. Second, naive comparisons often disregard, or only partly account for, existing uncertainties in the experimental observations or the model input parameters. Third, such comparisons cannot reveal whether the model is appropriate for the intended purposes, as they mainly focus on the agreement in the observable quantities. These pitfalls give rise to the need for an uncertainty-aware framework that includes a validation metric. This metric shall provide a measure for comparing the system response quantities of an experiment with those of a computational model while rigorously accounting for uncertainties in both. To address this need, we developed a statistical framework incorporating a probabilistic modeling technique using a fully Bayesian approach. The dissertation aims to help modelers perform uncertainty-aware model validation benchmarks. A two-stage Bayesian multi-model framework is discussed for modeling tasks where a set of models is at hand. To make this framework applicable to computationally demanding models, it is extended to a surrogate-assisted framework, keeping the computational costs at a reasonable level. Moreover, correction factors were introduced to compensate for the surrogate error in Bayesian hypothesis testing and Bayesian model selection, as using surrogate representations instead of the full-fidelity computational models introduces additional errors into the validation metrics. In this dissertation, I show how the Bayesian formalism can be realized by employing the concept of polynomial chaos expansion to achieve more accurate surrogates with a sparse representation and to account for the uncertainty in the surrogate's predictions. I also highlight how such surrogate models can be constructed with as few simulations as the computational budget allows. To this end, sequential adaptive sampling strategies are discussed, in which one attempts to augment the initial design iteratively.
In doing so, informative regions of the parameter space are adequately explored; these regions are more likely to provide valuable information on the behavior of the original model responses. Using a sequential sampling strategy avoids wasting computational resources, as opposed to so-called one-shot designs. A series of benchmark studies is conducted to investigate the predictive capabilities of different sparse-regression and sequential adaptive sampling methods. Moreover, I introduce BayesValidRox, an open-source, object-oriented Python package with a modular structure that provides an automated workflow for surrogate-based sensitivity analysis, Bayesian calibration, and validation of computational models. The uncertainty-aware validation framework was applied to a range of cases in subsurface hydro-system modeling, mainly flow and transport in porous media, such as flow simulation in fractured porous media, coupled free flow and porous-medium flow, and microbially induced calcite precipitation. However, this validation framework can be transferred to other disciplines in which models are used for prediction.
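    To make the surrogate idea concrete, here is a minimal, generic sketch of a polynomial chaos expansion (PCE) surrogate fitted by least squares for a one-dimensional standard-normal input. This is an illustration of the concept only, not the BayesValidRox API; the function names and the toy model are assumptions.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval

    def pce_fit(x, y, degree=5):
        """Fit PCE coefficients c_k so that y ≈ sum_k c_k He_k(x),
        where He_k are probabilists' Hermite polynomials."""
        # Design matrix: column k evaluates He_k at the training points
        Psi = np.column_stack([
            hermeval(x, np.eye(degree + 1)[k]) for k in range(degree + 1)
        ])
        coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
        return coeffs

    def pce_eval(coeffs, x):
        """Evaluate the fitted PCE surrogate at new points."""
        return hermeval(x, coeffs)

    # Usage: train on a few runs of a stand-in for a costly simulator
    f = lambda x: np.sin(x) + 0.1 * x**2
    x_train = np.random.default_rng(0).standard_normal(30)
    c = pce_fit(x_train, f(x_train))
    print(pce_eval(c, np.array([0.0, 1.0])))   # surrogate predictions
    ```

    Part of PCE's appeal for this kind of framework is that, for standard-normal inputs, the coefficients directly yield the surrogate's output mean (c[0]) and variance (the sum of c[k]^2 * k! for k >= 1), which feeds naturally into sensitivity analysis.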
  • Item (Open Access)
    Bayesian inversion and model selection of heterogeneities in geostatistical subsurface modeling
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2021) Reuschen, Sebastian; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Item (Open Access)
    A holistic approach to assess the impact of global change on reservoir sedimentation
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2024) Mouris, Kilian; Wieprecht, Silke (Prof. Dr.-Ing.)
  • Item (Open Access)
    Advanced experimental methods for investigating flow-biofilm-sediment interactions
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2022) Koca, Kaan; Wieprecht, Silke (Prof. Dr.-Ing.)
  • Item (Open Access)
    Physics-informed neural networks for learning dynamic, distributed and uncertain systems
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2023) Praditia, Timothy; Nowak, Wolfgang (Prof. Dr.-Ing.)
    Scientific models play an important role in many technical inventions that facilitate daily human activities. We use them to assist in simple decisions, such as choosing what clothing to wear based on a weather forecast model, as well as in complex problems, such as assessing the environmental impact of industrial waste. Existing scientific models, however, are imperfect due to our limited understanding of complex physical systems. Owing to the rapid growth in computing power in recent years, there has been increasing interest in applying data-driven modeling to improve upon current models and to fill in missing scientific knowledge. Traditionally, these data-driven models require a significant amount of observation data, which is often challenging to obtain, especially from natural systems. To address this issue, prior physical knowledge has been included in model design, resulting in so-called hybrid models. Although the idea of infusing physics with data is sound, current state-of-the-art models have not found the ideal combination of both aspects, and applications to real-world data have been lacking. To bridge this gap, three research questions are formulated: 1. How can prior physical knowledge be adopted to design a consistent and reliable hybrid model for dynamic systems? 2. How can prior physical and numerical knowledge be adopted to design a consistent and reliable hybrid model for dynamic and spatially distributed systems? 3. How can the hybrid model learn its own total (predictive) uncertainty in a computationally effective manner, so that it is appropriate for real-world applications or can facilitate scientific hypothesis testing? With these questions answered, the overall goal is to contribute to more consistent approaches for scientific inquiry through hybrid models. The first contribution of this thesis addresses the first research question by proposing a modeling framework for a dynamic system, in the form of a Thermochemical Energy Storage device. A Nonlinear Autoregressive Network with Exogenous Input (NARX) model is trained recurrently with multiple time lags to capture the temporal dependency and the long-term dynamics of the system. During training, the model is penalized when it violates established physical laws, such as mass and energy conservation. As a result, the model produces accurate and physically plausible predictions compared with models trained without physical regularization. The second contribution addresses the second research question by designing a hybrid model that complements the Finite Volume Method (FVM) with the learning ability of Artificial Neural Networks (ANNs). The resulting model enables the learning of unknown closure/constitutive relationships in various advection-diffusion equations. This thesis shows that the proposed model outperforms state-of-the-art deep learning models by several orders of magnitude in accuracy, and it possesses excellent generalization ability. Finally, the third contribution addresses the third research question by investigating the performance of assorted uncertainty quantification methods on the hybrid model. As a demonstration, laboratory measurement data of a groundwater contaminant transport process are employed to train the model. Since the available training data are extremely scarce and noisy, uncertainty quantification methods are essential to produce a robust and trustworthy model.
It is shown that a gradient-based Markov Chain Monte Carlo (MCMC) algorithm, namely the Barker proposal, is the most suitable for quantifying the uncertainty of the proposed model. Additionally, the hybrid model outperforms a calibrated physical model and provides predictive uncertainty sufficient to explain the noisy measurement data. With these contributions, this thesis proposes a robust hybrid modeling framework that is suitable for filling in missing scientific knowledge, and it lays the groundwork for a wider variety of complex real-world applications. Ultimately, the hope is that this work inspires future studies contributing to the continuous and mutual improvement of both scientific knowledge discovery and scientific model robustness.
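    The abstract describes penalizing the network when it violates conservation laws during training. A minimal, hedged sketch of such a physics-regularized loss is shown below; it illustrates the general idea only, with assumed names and a simplified mass balance rather than the thesis's actual NARX/FVM formulation.

    ```python
    import torch

    def physics_regularized_loss(pred, target, mass_in, mass_out, lam=1.0):
        """Data-fit loss plus a penalty on mass-conservation violations.

        pred, target : stored mass per time step (shape [T])
        mass_in/out  : net in-/outflow per time step (shape [T])
        lam          : weight of the physics penalty
        """
        data_loss = torch.mean((pred - target) ** 2)
        # Conservation residual: the change in stored mass from step t-1 to t
        # must balance the net inflow during step t.
        residual = (pred[1:] - pred[:-1]) - (mass_in[1:] - mass_out[1:])
        physics_loss = torch.mean(residual ** 2)
        return data_loss + lam * physics_loss
    ```

    During training, this combined loss is minimized instead of the plain data loss, so predictions that fit the data but break the balance law are penalized; the weight lam trades off the two terms.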
  • Item (Open Access)
    Event-based flood estimation using a random forest algorithm for the regionalization in small catchments
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2022) Pavía Santolamazza, Daniela; Bárdossy, András (Prof. Dr. rer. nat. Dr.-Ing. habil.)
    The hydrological cycle is a complex system composed of multiple variables, which in most cases are not measured. This is one of the reasons why it is a challenge to build models that adequately represent the expected discharges. The PUB initiative reinforces the need for models that capture the different catchment interactions and represent the various catchment processes; such models are more robust and can thus be transferred more reliably to ungauged catchments. In recent years, hydrological research has focused on understanding and explaining the different processes present in catchments. Nevertheless, few applications that include precipitation, the main driver of runoff change, are found. Further understanding of the temporal and spatial dependence of the meteorological events triggering floods is needed. In this study, an analysis of the meteorological events triggering the floods was carried out. The concept of entropy was used to characterize the temporal distribution of precipitation. It was found that the temporal entropy of precipitation is a better indicator of hydrograph shape than the duration or the intensity. Further, the geographical interdependence of the amount of precipitation and the temporal precipitation entropy causing the floods was described by looking at the association of station triples. This suggested that, up to a given quantile, flood events are more likely caused by precipitation events of total coverage. However, for larger quantile values, it was observed that as the quantile increases, the probability of observing joint occurrence in space decreases. The temporal distribution of the precipitation events causing the floods proved to be more associated in space than the amount of precipitation triggering the floods. Nonetheless, this temporal distribution is not constant over all flood events, which can be attributed to different flood mechanisms. The Antecedent Precipitation Index (API) was used to describe the soil moisture content. The empirical distribution of the API at the time of a flood was compared with empirical distributions of unconditioned API data series. To this end, the Wilcoxon statistic and the Kolmogorov-Smirnov distance were used to compare the empirical distributions. The results showed that the soil moisture triggering the floods is not an annual extreme, but rather a value close to the monthly maximum API. Further, it was observed that the longer memory of the catchment gives more information about the occurrence of the flood. Additionally, in order to estimate the catchment reaction at the time of a flood, a regionalization of the flood wave hydrographs was carried out. To this end, three methods of defining the similarity of the floods were considered. In all three methods, the similarity matrices were generated using the random forest algorithm. The novelty of this procedure was the use of a supervised random forest to describe the similarity of the flood events: it was supervised in the sense that the algorithm was trained to estimate a target variable. The proximity matrix was obtained by calculating the joint occurrence of floods in the random forest space. The hydrograph peak and the time to peak were used to evaluate the estimation. In all three methods, the same tendencies were observed: an overestimation of the peak and an underestimation of the time to peak. However, the bias was observed to be smaller when an ensemble of similarity matrices was used, compared with a single similarity matrix. Moreover, an approach using an unsupervised random forest was compared to the supervised one; it was found that the unsupervised random forest yields larger estimation errors. Finally, to estimate the volume of the flood event, a rainfall-runoff model was modified to represent the study region. The model chosen in this study was EPIC. The model was calibrated to be more representative of the study region; to this end, the estimation errors in the space of the model parameters were studied. This made it possible to find the model parameters that best represent the study area. The values obtained were considered reasonable. For example, it was observed that the longer memory of the catchment is more representative of the study catchments, in agreement with the analysis of the meteorological phenomena causing the floods. Further, the values obtained for the regional constant, the parameter modifying the initial abstraction of the catchment, were found to be smaller than the original values obtained for United States catchments, which agrees with other studies in European catchments.
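    Two quantities carry much of this abstract's reasoning: the Antecedent Precipitation Index and the temporal entropy of a precipitation event. A minimal sketch of their conventional formulations follows; both are standard textbook forms assumed for illustration, and the thesis's exact definitions (decay constant, normalization) may differ.

    ```python
    import numpy as np

    def antecedent_precipitation_index(precip, k=0.9):
        """API_t = k * API_{t-1} + P_t, with an assumed daily decay constant k."""
        api = np.zeros(len(precip))
        api[0] = precip[0]
        for t in range(1, len(precip)):
            api[t] = k * api[t - 1] + precip[t]
        return api

    def temporal_entropy(event_precip):
        """Normalized Shannon entropy of the within-event rainfall distribution:
        ~1 = rain spread uniformly over the event, ~0 = a single burst."""
        p = np.asarray(event_precip, dtype=float)
        n = len(p)
        if n < 2:
            return 0.0
        p = p[p > 0] / p.sum()
        return float(-np.sum(p * np.log(p)) / np.log(n))

    # Usage: a short event with one dominant burst
    rain = np.array([0.0, 2.0, 5.0, 1.0, 0.0, 0.0])
    print(antecedent_precipitation_index(rain)[-1], temporal_entropy(rain))
    ```

    Under this reading, a low entropy flags a peaked, burst-like event and a high entropy a drawn-out one, which is why the abstract can relate the temporal entropy to hydrograph shape more directly than duration or intensity alone.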
  • Item (Open Access)
    Capturing local details in fluid-flow simulations : options, challenges and applications using marker-and-cell schemes
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2024) Lipp, Melanie Gloria; Helmig, Rainer (Prof. Dr.-Ing.)
    Complex local flow structures appear in a wide range of free-flow systems; for example, vortices form behind obstacles. For understanding and predicting numerous processes, it is important to capture local details in free fluid flow, which is the focus of this work. In particular, we are interested in local flow structures in free flow coupled to porous-medium flow. A better resolution of local structures in free flow can be achieved by refining computational grids, which is studied in this thesis. Specifically, we focus on finite-volume/finite-difference methods for the two-dimensional Navier-Stokes equations with constant density and constant viscosity, using the marker-and-cell method (pressures in cell centers, velocities on cell faces) and rectangular control volumes. A variety of methods with a range of characteristics can be used to refine computational grids. The first objective of this work was to develop one common description for the many available approaches in the class of methods within our focus and to display their similarities and differences. The second objective was to gain an in-depth understanding of the behavior of local refinement methods by examining one chosen method before numerical solution, i.e., by examining local truncation errors. The third objective was to gain further understanding of the behavior of local refinement methods, and to display examples in which the chosen method is beneficial when computational-efficiency issues are neglected, by examining our chosen method after numerical solution, i.e., by examining actual numerical solutions.
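    The abstract states the governing equations only in words. For reference, the two-dimensional incompressible Navier-Stokes equations with constant density $\rho$ and constant kinematic viscosity $\nu$ read, in their standard form (notation assumed here):

    ```latex
    \nabla \cdot \mathbf{v} = 0,
    \qquad
    \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\,\mathbf{v}
      = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{v}
    ```

    On the marker-and-cell grid mentioned in the abstract, the discrete pressure $p$ lives at cell centers while the velocity components of $\mathbf{v}$ live on the cell faces; this staggering couples pressure and velocity tightly and avoids the checkerboard pressure modes that collocated arrangements can produce.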