Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Search Results

Now showing 1 - 10 of 12
  • Learning groundwater contaminant diffusion‐sorption processes with a finite volume neural network
    (2022) Praditia, Timothy; Karlbauer, Matthias; Otte, Sebastian; Oladyshkin, Sergey; Butz, Martin V.; Nowak, Wolfgang
    Improved understanding of complex hydrosystem processes is key to advancing water resources research. Nevertheless, the conventional way of modeling these processes suffers from high conceptual uncertainty due to the almost ubiquitous simplifying assumptions used in model parameterizations/closures. Machine learning (ML) models are considered a potential alternative, but their generalization abilities remain limited. For example, they normally fail to predict accurately across different boundary conditions. Moreover, as black boxes, they neither add to our process understanding nor help discover improved parameterizations/closures. To tackle these issues, we propose the hybrid modeling framework FINN (finite volume neural network). It merges existing numerical methods for partial differential equations (PDEs) with the learning abilities of artificial neural networks (ANNs). FINN operates on discrete control volumes and learns components of the investigated system equations, such as numerical stencils, model parameters, and arbitrary closure/constitutive relations. Consequently, FINN yields highly interpretable results. We demonstrate FINN's potential on a diffusion‐sorption problem in clay. Results on numerically generated data show that FINN outperforms other ML models when tested under modified boundary conditions, and that it can successfully differentiate between the usual, known sorption isotherms. Moreover, we equip FINN with uncertainty quantification methods to expose the total uncertainty of scientific learning, and then apply it to a laboratory experiment. The results show that FINN performs better than calibrated PDE‐based models because it can flexibly learn and model sorption isotherms without being restricted to choosing among available parametric models.
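The core idea, a finite-volume update in which a learned network replaces the unknown closure (here the retardation factor of the sorption isotherm), can be sketched as follows. This is a minimal illustration with untrained, randomly initialized weights standing in for FINN's learned components, not the authors' implementation:

```python
import numpy as np

def mlp_retardation(c, w1, b1, w2, b2):
    """Tiny MLP standing in for FINN's learned isotherm/retardation closure.
    The weights would normally be trained; here they are random placeholders."""
    h = np.tanh(np.outer(c, w1) + b1)        # hidden layer, shape (n, 4)
    return 1.0 + np.exp(h @ w2 + b2)         # retardation factor R(c) >= 1

def fv_step(c, D, dx, dt, params):
    """One explicit finite-volume update of  dc/dt = (D / R(c)) d2c/dx2."""
    flux = D * np.diff(c) / dx               # diffusive fluxes at cell faces
    div = np.zeros_like(c)
    div[1:-1] = (flux[1:] - flux[:-1]) / dx  # flux divergence, interior cells
    return c + dt * div / mlp_retardation(c, *params)  # Dirichlet boundaries

rng = np.random.default_rng(0)
params = (rng.normal(size=4), rng.normal(size=4), rng.normal(size=4), 0.0)
c = np.zeros(50)
c[0] = 1.0                                   # step input at the left boundary
for _ in range(200):
    c = fv_step(c, 1e-3, 0.02, 0.1, params)
```

In FINN, the closure network's weights are trained so that the simulated concentration profile matches the data, which is what makes the learned isotherm interpretable.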
  • Sampling behavioral model parameters for ensemble-based sensitivity analysis using Gaussian process emulation and active subspaces
    (2020) Erdal, Daniel; Xiao, Sinan; Nowak, Wolfgang; Cirpka, Olaf A.
    Ensemble-based uncertainty quantification and global sensitivity analysis of environmental models require generating large ensembles of parameter sets. This can already be difficult for moderately complex models based on partial differential equations, because many parameter combinations cause implausible model behavior even though the individual parameters are within plausible ranges. In this work, we apply Gaussian Process Emulators (GPEs) as surrogate models in a sampling scheme. In an active-training phase of the surrogate model, we target the behavioral boundary of the parameter space before sampling this behavioral part of the parameter space more evenly by passive sampling. Active learning increases the subsequent sampling efficiency, but its additional costs pay off only for a sufficiently large sample size. We exemplify our idea with a catchment-scale subsurface flow model with uncertain material properties, boundary conditions, and geometric descriptors of the geological structure. We then perform a global sensitivity analysis of the resulting behavioral dataset using the active-subspace method, which requires approximating the local sensitivities of the target quantity with respect to all parameters at all sampled locations in parameter space. The Gaussian Process Emulator implicitly provides an analytical expression for this gradient, thus improving the accuracy of the active-subspace construction. When applying the GPE-based preselection, 70-90% of the samples were confirmed to be behavioral by running the full model, whereas only 0.5% of the samples were behavioral in standard Monte Carlo sampling without preselection. The GPE method also provided local sensitivities at minimal additional cost.
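The active-subspace construction mentioned above boils down to an eigendecomposition of the averaged outer product of gradients. A minimal sketch, with an analytic toy function standing in for the GPE-provided gradients:

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, 0.5, 0.0, 0.0])        # true active direction (toy choice)
X = rng.uniform(-1, 1, size=(500, 4))     # behavioral parameter samples

# The GPE would supply these gradients analytically; here f(x) = sin(a.x)
grads = np.cos(X @ a)[:, None] * a        # grad f(x) = cos(a.x) * a

C = grads.T @ grads / len(X)              # C ~ E[grad f grad f^T]
eigval, eigvec = np.linalg.eigh(C)        # eigenvalues in ascending order
active_dir = eigvec[:, -1]                # dominant (active) direction
```

A large gap between the leading eigenvalue and the rest indicates that the model effectively varies along a low-dimensional subspace of the parameter space.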
  • A stochastic framework to optimize monitoring strategies for delineating groundwater divides
    (2020) Allgeier, Jonas; González-Nicolás, Ana; Erdal, Daniel; Nowak, Wolfgang; Cirpka, Olaf A.
    Surface-water divides can be delineated by analyzing digital elevation models. They might, however, differ significantly from groundwater divides because the groundwater surface does not necessarily follow the surface topography. Thus, delineating a groundwater divide requires hydraulic-head measurements. Because installing piezometers is cost- and labor-intensive, it is vital to optimize their placement. In this work, we introduce an optimal design analysis that can identify the best spatial configuration of piezometers. The method is based on formal minimization of the expected posterior uncertainty in localizing the groundwater divide. It builds on the preposterior data impact assessor, a Bayesian framework that uses a random sample of models (here: steady-state groundwater flow models) in a fully non-linear analysis. For each realization, we compute virtual hydraulic-head measurements at all potential well installation points and delineate the groundwater divide by particle tracking. Then, for each set of virtual measurements and their possible measurement values, we assess the uncertainty of the groundwater-divide location after Bayesian updating, and finally marginalize over all possible measurement values. We test the method on a setup mimicking an aquifer in southwestern Germany. Previous works in this aquifer indicated a groundwater divide that differs substantially from the surface-water divide. Our analysis shows that the uncertainty in localizing the groundwater divide can be reduced with each additional monitoring well. In our case study, the optimal configuration of three monitoring points places the first well close to the topographic surface-water divide, the second on the hillslope toward the valley, and the third in between.
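The preposterior logic can be sketched on a toy ensemble: every realization in turn plays the role of truth, all other realizations are reweighted by the likelihood of its virtual measurement, and the candidate well with the lowest expected posterior spread wins. All numbers below (the head model, noise level, and candidate positions) are hypothetical illustrations, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
n_real = 500
divide = rng.normal(5.0, 1.0, size=n_real)    # divide location per realization

x_cand = np.array([1.0, 4.0, 5.0, 8.0])       # candidate well positions
noise = 0.3                                   # head measurement noise
# Hypothetical virtual heads: informative near the divide, flat far away
heads = np.exp(-np.abs(x_cand[None, :] - divide[:, None]))

def expected_posterior_std(col):
    """Let each realization play 'truth' in turn, reweight all others by the
    likelihood of its virtual measurement, then average the posterior spread."""
    stds = []
    for i in range(len(col)):
        w = np.exp(-0.5 * ((col - col[i]) / noise) ** 2)
        w /= w.sum()
        m = np.sum(w * divide)
        stds.append(np.sqrt(np.sum(w * (divide - m) ** 2)))
    return float(np.mean(stds))

scores = [expected_posterior_std(heads[:, j]) for j in range(len(x_cand))]
best = x_cand[int(np.argmin(scores))]         # well position to drill first
```

Sequential designs then repeat this selection conditional on the wells already chosen.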
  • Bayesian calibration points to misconceptions in three‐dimensional hydrodynamic reservoir modeling
    (2023) Schwindt, Sebastian; Callau Medrano, Sergio; Mouris, Kilian; Beckers, Felix; Haun, Stefan; Nowak, Wolfgang; Wieprecht, Silke; Oladyshkin, Sergey
    Three‐dimensional (3D) numerical models are the state of the art for investigating complex hydrodynamic flow patterns in reservoirs and lakes. Such full‐complexity models are computationally demanding, and their calibration is challenging regarding time, subjective decision‐making, and measurement data availability. In addition, physically unrealistic model assumptions or combinations of calibration parameters may remain undetected and lead to overfitting. In this study, we investigate if and how so‐called Bayesian calibration aids in characterizing faulty model setups driven by measurement data and calibration parameter combinations. Bayesian calibration builds on recent developments in machine learning and uses a Gaussian process emulator as a surrogate model, which runs considerably faster than a 3D numerical model. We Bayesian‐calibrate a Delft3D‐FLOW model of a pump‐storage reservoir as a function of the background horizontal eddy viscosity and diffusivity, and the initial water temperature profile. We consider three scenarios with varying degrees of faulty assumptions and different uses of flow velocity and water temperature measurements. One of the scenarios forces completely unrealistic, rapid lake stratification and still yields calibration accuracy similar to that of the more realistic scenarios regarding global statistics, such as the root‐mean‐square error. An uncertainty assessment resulting from the Bayesian calibration indicates that the completely unrealistic scenario forces fast lake stratification through highly uncertain mixing‐related model parameters. Thus, Bayesian calibration describes the quality of calibration and the correctness of model assumptions through geometric characteristics of posterior distributions. For instance, most likely calibration parameter values (posterior distribution maxima) at the limits of the calibration range, or with widespread uncertainty, characterize poor model assumptions and calibration.
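The posterior-geometry diagnostic, flagging a calibration whose most likely parameter value sits at the edge of its admissible range, can be illustrated with a one-parameter grid posterior. The surrogate and measurement values below are hypothetical stand-ins, not the Delft3D‐FLOW setup:

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 401)          # admissible calibration range
surrogate = lambda th: 2.0 * th             # stand-in emulator prediction
y_obs, sigma = 1.9, 0.2                     # hypothetical datum and noise

loglik = -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2
post = np.exp(loglik - loglik.max())
post /= post.sum() * (theta[1] - theta[0])  # normalize to a density

theta_map = theta[np.argmax(post)]          # posterior maximum
# The paper's warning sign: a posterior maximum at the range limit
at_range_limit = theta_map in (theta[0], theta[-1])
```

If `y_obs` were pushed beyond the surrogate's reachable range, the maximum would be pinned to the boundary and `at_range_limit` would flag the setup as suspect.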
  • Diagnosis of model errors with a sliding time‐window Bayesian analysis
    (2022) Hsueh, Han‐Fang; Guthke, Anneli; Wöhling, Thomas; Nowak, Wolfgang
    Deterministic hydrological models with uncertain, but inferred‐to‐be‐time‐invariant, parameters typically show time‐dependent model errors. Such errors can occur if a hydrological process is active in certain time periods in nature but is not resolved by the model or by its input. Such missing processes can become visible during calibration as time‐dependent best‐fit values of model parameters. We propose a formal time‐windowed Bayesian analysis to diagnose this type of model error, formalizing the question “In which period of the calibration time series does the model statistically disqualify itself as quasi‐true?” Using Bayesian model evidence (BME) as a model performance metric, we determine how much the data in time windows of the calibration time series support or refute the model. Then, we track BME over sliding time windows to obtain a dynamic, time‐windowed BME (tBME) and search for sudden decreases that indicate the onset of model error. tBME also allows us to perform a formal, sliding likelihood‐ratio test of the model against the data. Our proposed approach is designed to detect error occurrence on various temporal scales, which is especially useful in hydrological modeling. We illustrate this by applying the proposed method to soil moisture modeling. We test tBME as a model error indicator on several synthetic and real‐world test cases designed to vary in error sources (structure and input) and error time scales. The results demonstrate the successful detection of errors in dynamic models. Moreover, the time sequence of posterior parameter distributions helps to investigate the reasons for model error and provides guidance for model improvement.
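A minimal sketch of the tBME idea, assuming a Gaussian error model and simple Monte Carlo BME estimation over prior samples (the linear toy model and the error onset time are invented for illustration):

```python
import numpy as np

def sliding_bme(y_obs, y_sim, sigma, width):
    """tBME sketch: Monte-Carlo Bayesian model evidence over sliding windows.
    y_sim holds prior-predictive model runs, shape (n_samples, T)."""
    T = len(y_obs)
    tbme = []
    for t0 in range(T - width + 1):
        r = y_obs[t0:t0 + width] - y_sim[:, t0:t0 + width]
        loglik = (-0.5 * np.sum((r / sigma) ** 2, axis=1)
                  - width * np.log(sigma * np.sqrt(2 * np.pi)))
        m = loglik.max()                 # log-sum-exp for numerical stability
        tbme.append(m + np.log(np.mean(np.exp(loglik - m))))
    return np.array(tbme)

rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)
theta = rng.normal(1.0, 0.1, size=300)          # prior parameter samples
y_sim = theta[:, None] * t[None, :] / 100.0     # linear toy model
y_obs = t / 100.0
y_obs[60:] += 0.5                               # unresolved process from t = 60
tbme = sliding_bme(y_obs, y_sim, sigma=0.05, width=20)
```

As soon as a window overlaps the unresolved process, the log-BME drops sharply, which is exactly the onset signature the method searches for.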
  • Integrating structural resilience in the design of urban drainage networks in flat areas using a simplified multi-objective optimization framework
    (2021) Bakhshipour, Amin E.; Hespen, Jessica; Haghighi, Ali; Dittmer, Ulrich; Nowak, Wolfgang
    Structural resilience describes the ability of urban drainage systems (UDSs) to minimize the frequency and magnitude of failure due to common structural issues such as pipe clogging and cracking or pump failure. Structural resilience is often neglected in the design of UDSs. The current literature supports structural decentralization as a way to introduce structural resilience into UDSs. Although there are promising methods in the literature for generating and optimizing decentralized separate stormwater collection systems that incorporate unsteady-flow hydraulic simulations, these approaches can require high computational effort, especially for flat areas. Their predominantly scientific focus may also hamper their integration into ordinary commercial UDS design software. In response, this paper introduces simplified cost and structural resilience indices that can be used as heuristic objectives for optimizing the UDS layout. These indices use only graph connectivity information, which is computationally much less expensive than hydraulic simulation. The use of simplified objective functions significantly simplifies the feasible search space and reduces blind searching during optimization. To demonstrate the application and advantages of the proposed model, a real case study in the southwestern Iranian city of Ahvaz was explored. The proposed framework proved promising for reducing computational effort and for delivering realistic, cost-effective, and resilient UDS designs.
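A connectivity-only resilience heuristic of the kind described above can be sketched with plain graph traversal. The specific score below (average fraction of nodes still connected to the outlet after any single pipe failure) is a hypothetical simplification for illustration, not the paper's index:

```python
from collections import defaultdict

def reachable(edges, outlet):
    """Nodes connected to the outlet (pipes treated as undirected edges)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {outlet}, [outlet]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def resilience_index(edges, nodes, outlet):
    """Hypothetical connectivity-only score: average fraction of nodes still
    connected to the outlet after each possible single pipe failure."""
    scores = []
    for i in range(len(edges)):
        rest = edges[:i] + edges[i + 1:]
        scores.append(len(reachable(rest, outlet) & set(nodes)) / len(nodes))
    return sum(scores) / len(scores)

# Branched (tree) layout vs. a looped layout over the same nodes
nodes = [0, 1, 2, 3]
tree = [(1, 0), (2, 1), (3, 1)]
loop = tree + [(3, 0)]
ri_tree = resilience_index(tree, nodes, outlet=0)
ri_loop = resilience_index(loop, nodes, outlet=0)
```

The looped layout scores higher because redundant paths keep nodes draining after a single pipe failure, which is the decentralization argument in graph form.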
  • Hydraulically induced fracturing in heterogeneous porous media using a TPM‐phase‐field model and geostatistics
    (2023) Wagner, Arndt; Sonntag, Alixa; Reuschen, Sebastian; Nowak, Wolfgang; Ehlers, Wolfgang
    Hydraulically induced fracturing is widely used in practice for several exploitation techniques. The chosen macroscopic model combines a phase‐field approach to fractures with the Theory of Porous Media (TPM) to describe dynamic hydraulic fracturing processes in fully‐saturated porous materials. In this regard, the solid's state of damage shows a diffuse transition zone between the broken and unbroken domains. Naturally grown rocks and soils are generally inhomogeneous, with material imperfections on the microscale, so modelling a homogeneous porous material may oversimplify the behaviour of the solid and fluid phases in the fracturing process. Therefore, material imperfections and inhomogeneities in the porous structure are considered through the definition of location‐dependent material parameters. In this contribution, a deterministic approach to account for predefined imperfection areas, as well as statistical fields of geomechanical properties, is proposed. Representative numerical simulations show the impact of solid-skeleton heterogeneities in porous media on the fracturing characteristics, e.g. the crack path.
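Location-dependent material parameters of this kind can be generated as a geostatistical random field. A minimal 1D sketch with an exponential covariance model, where the parameter values are illustrative and not taken from the paper:

```python
import numpy as np

def gaussian_random_field(n, corr_len, mean, std, seed=0):
    """Correlated location-dependent parameter field on a line of n cells,
    drawn from an exponential covariance model via Cholesky factorization."""
    x = np.arange(n, dtype=float)
    C = std ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return mean + L @ rng.standard_normal(n)

# e.g. a heterogeneous fracture-related material parameter along a sample
gc = gaussian_random_field(n=200, corr_len=15.0, mean=2.7, std=0.3)
```

Feeding such fields into the fracture model lets the crack path respond to local weak spots instead of propagating through an artificially uniform medium.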
  • Bayesian model weighting: the many faces of model averaging
    (2020) Höge, Marvin; Guthke, Anneli; Nowak, Wolfgang
    Model averaging makes it possible to use multiple models for one modelling task, such as predicting a certain quantity of interest. Several Bayesian approaches exist, all of which yield a weighted average of predictive distributions. Often, however, they are not properly applied, which can lead to false conclusions. In this study, we focus on Bayesian Model Selection (BMS) and Averaging (BMA), Pseudo-BMS/BMA, and Bayesian Stacking. We want to foster their proper use by, first, clarifying their theoretical background and, second, contrasting their behaviours in an applied groundwater modelling task. We show that only Bayesian Stacking has the goal of model averaging for improved predictions by model combination. The other approaches pursue the quest of finding a single best model as the ultimate goal and use model averaging only as a preliminary stage to prevent rash model choice. Improved predictions are thereby not guaranteed. In accordance with so-called ℳ-settings, which clarify the assumed relations between models and truth, we elicit which method is most promising.
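For reference, the weights behind BMS/BMA are simply normalized products of model evidence and model prior, best computed in log space. A minimal sketch with hypothetical log-BME values:

```python
import numpy as np

def bma_weights(log_bme, log_prior=None):
    """Posterior model weights w_k proportional to p(D|M_k) p(M_k),
    computed in log space with the log-sum-exp trick for stability."""
    log_bme = np.asarray(log_bme, dtype=float)
    if log_prior is None:                       # default: uniform model prior
        log_prior = np.full_like(log_bme, -np.log(len(log_bme)))
    logw = log_bme + log_prior
    logw -= logw.max()
    w = np.exp(logw)
    return w / w.sum()

w = bma_weights([-120.3, -118.1, -125.0])       # hypothetical log-BME values
```

Because BME enters exponentially, even modest evidence differences concentrate nearly all weight on one model, which is why BMA tends toward model selection as data accumulate.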
  • Experimental evaluation and uncertainty quantification for a fractional viscoelastic model of salt concrete
    (2022) Hinze, Matthias; Xiao, Sinan; Schmidt, André; Nowak, Wolfgang
    This study evaluates and analyzes creep-testing results on salt concrete of type M2. This concrete is a candidate material for long-lasting structures for sealing underground radioactive waste repository sites. Predicting the operational lifetime and security aspects of these structures requires specific constitutive equations to describe the material behavior. Thus, we analyze whether a fractional viscoelastic constitutive law is capable of representing the long-term creep and relaxation processes of M2 concrete. We conduct a creep test to identify the parameters of the fractional model. Moreover, we use Bayesian inversion to evaluate the identifiability of the model parameters and the suitability of the experimental setup for yielding a reliable prediction of the concrete behavior. In particular, this Bayesian analysis allows us to incorporate expert knowledge as prior information, to account for limited experimental precision and, finally, to rigorously quantify the post-calibration uncertainty.
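Fractional viscoelastic laws of this kind are typically discretized with Grünwald-Letnikov weights. The sketch below checks the discretization against the known identity D^α t^α = Γ(1+α); it is a generic illustration of the numerical machinery, not the study's M2 model:

```python
import numpy as np
from math import gamma

def gl_frac_derivative(f, alpha, dt):
    """Gruenwald-Letnikov fractional derivative D^alpha f on a uniform grid,
    the standard discretization behind fractional creep/relaxation models."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                 # recursive GL binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.array([np.dot(w[:i + 1], f[i::-1]) for i in range(n)])
    return out / dt ** alpha

alpha, dt = 0.5, 1e-3
t = np.arange(2000) * dt
d = gl_frac_derivative(t ** alpha, alpha, dt)   # exact result: Gamma(1.5)
```

The same convolution structure is what makes fractional models attractive for creep: a single order parameter alpha interpolates between elastic (alpha = 0) and viscous (alpha = 1) behavior.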
  • Information‐theoretic scores for Bayesian model selection and similarity analysis: concept and application to a groundwater problem
    (2023) Morales Oreamuno, Maria Fernanda; Oladyshkin, Sergey; Nowak, Wolfgang
    Bayesian model selection (BMS) and Bayesian model justifiability analysis (BMJ) provide a statistically rigorous framework for comparing competing models through the use of Bayesian model evidence (BME). However, a BME-based analysis has two main limitations: (a) it does not account for a model's posterior predictive performance after using the data for calibration, and (b) it leads to biased results when comparing models that use different subsets of the observations for calibration. To address these limitations, we propose augmenting BMS and BMJ analyses with additional information-theoretic measures: expected log-predictive density (ELPD), relative entropy (RE), and information entropy (IE). Exploring the connection between Bayesian inference and information theory, we explicitly link BME and ELPD together with RE and IE to highlight the information flow in BMS and BMJ analyses. We show how to compute and interpret these scores alongside BME, and apply the framework to a controlled 2D groundwater setup featuring five models, one of which uses a subset of the data for calibration. Our results show how the information-theoretic scores complement BME by providing a more complete picture of the Bayesian updating process. Additionally, we demonstrate how both RE and IE can be used to objectively compare models that are calibrated on different data sets. Overall, the introduced Bayesian information-theoretic framework can lead to better-informed decisions by incorporating a model's post-calibration predictive performance, by allowing comparisons across different subsets of the data, and by considering the usefulness of the data in the Bayesian updating process.
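These scores can all be estimated from one set of prior samples and their likelihoods. A minimal sketch on a conjugate Gaussian toy problem (all numbers hypothetical): BME is the prior-mean likelihood, RE is the posterior-weighted mean log ratio of posterior to prior, and the ELPD line evaluates the log posterior-predictive density:

```python
import numpy as np

def normpdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
theta = rng.normal(0.0, 1.0, size=20000)   # samples from the prior N(0, 1)
y_obs, sigma = 0.8, 0.5                    # hypothetical datum and noise level

lik = normpdf(y_obs, theta, sigma)         # likelihood of each prior sample
bme = lik.mean()                           # BME = prior mean of the likelihood
w = lik / lik.sum()                        # posterior importance weights

# Relative entropy (KL divergence of posterior from prior), estimated as the
# posterior-weighted mean of log(posterior/prior) = log(likelihood/BME)
re = np.sum(w * np.log(lik / bme))
# ELPD sketch: log posterior-predictive density (here at the calibration datum)
elpd = np.log(np.sum(w * lik))
```

In the paper's framework, ELPD is evaluated on data not used for calibration, and RE/IE make models with different calibration subsets comparable; this sketch only shows the shared Monte Carlo estimators.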