02 Fakultät Bau- und Umweltingenieurwissenschaften (Faculty of Civil and Environmental Engineering)
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/3
24 results
Item (Open Access): Optimal design of experiments to improve the characterisation of atrazine degradation pathways in soil (2021)
Authors: Chavez Rodriguez, Luciana; González‐Nicolás, Ana; Ingalls, Brian; Streck, Thilo; Nowak, Wolfgang; Xiao, Sinan; Pagel, Holger
Contamination of soils with pesticides and their metabolites is a global environmental threat. Deciphering the complex process chains involved in pesticide degradation is a prerequisite for finding effective solution strategies. This study applies prospective optimal design (OD) of experiments to identify laboratory sampling strategies that allow model‐based discrimination of atrazine (AT) degradation pathways. We simulated virtual AT degradation experiments with a first‐order model that reflects a simple reaction chain of complete AT degradation. We added a set of Monod‐based model variants that consider more complex AT degradation pathways. Then, we applied an extended constraint‐based parameter search algorithm that produces Monte-Carlo ensembles of realistic model outputs, in line with published experimental data. Differences between model ensembles were quantified with Bayesian model analysis using an energy distance metric. AT degradation pathways following first‐order reaction chains could be clearly distinguished from those predicted with Monod‐based models. As expected, including measurements of specific bacterial guilds improved model discrimination further. However, experimental designs considering measurements of AT metabolites were most informative, highlighting that environmental fate studies should prioritise measuring metabolites to elucidate active AT degradation pathways in soils.
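The energy-distance comparison of Monte-Carlo model ensembles mentioned in this abstract can be illustrated with a generic two-sample computation; this is a minimal sketch on stand-in ensembles, not the authors' actual implementation:

```python
import numpy as np

def energy_distance(x, y):
    """Two-sample energy distance between ensembles x (n, d) and y (m, d):
    E = 2*E||x - y|| - E||x - x'|| - E||y - y'|| (V-statistic form)."""
    def mean_pairwise(a, b):
        # mean Euclidean distance over all pairs of rows in a and b
        diff = a[:, None, :] - b[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1)).mean()
    return 2.0 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)

rng = np.random.default_rng(0)
# illustrative stand-ins for simulated AT degradation time series
ens_first_order = rng.normal(0.0, 1.0, size=(500, 3))
ens_monod = rng.normal(0.5, 1.2, size=(500, 3))
print(energy_distance(ens_first_order, ens_monod))  # positive when ensembles differ
```

The distance is zero for identical ensembles and grows as the two output distributions separate, which is what makes it usable as a model-discrimination metric.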
Our results suggest that applying model‐based prospective OD will maximise knowledge gains on soil systems from laboratory and field experiments.

Item (Open Access): Learning groundwater contaminant diffusion‐sorption processes with a finite volume neural network (2022)
Authors: Praditia, Timothy; Karlbauer, Matthias; Otte, Sebastian; Oladyshkin, Sergey; Butz, Martin V.; Nowak, Wolfgang
Improved understanding of complex hydrosystem processes is key to advancing water resources research. Nevertheless, the conventional way of modeling these processes suffers from high conceptual uncertainty, due to the almost ubiquitous simplifying assumptions used in model parameterizations/closures. Machine learning (ML) models are considered a potential alternative, but their generalization abilities remain limited; for example, they normally fail to predict accurately across different boundary conditions. Moreover, as black boxes, they neither add to our process understanding nor help discover improved parameterizations/closures. To tackle these issues, we propose the hybrid modeling framework FINN (finite volume neural network). It merges existing numerical methods for partial differential equations (PDEs) with the learning abilities of artificial neural networks (ANNs). FINN operates on discrete control volumes and learns components of the investigated system equations, such as numerical stencils, model parameters, and arbitrary closure/constitutive relations. Consequently, FINN yields highly interpretable results. We demonstrate FINN's potential on a diffusion‐sorption problem in clay. Results on numerically generated data show that FINN outperforms other ML models when tested under modified boundary conditions, and that it can successfully differentiate between the usual, known sorption isotherms. Moreover, we equip FINN with uncertainty quantification methods to lay open the total uncertainty of scientific learning, and then apply it to a laboratory experiment.
The results show that FINN performs better than calibrated PDE‐based models, as it can flexibly learn and model sorption isotherms without being restricted to choosing among available parametric models.

Item (Open Access): Sampling behavioral model parameters for ensemble-based sensitivity analysis using Gaussian process emulation and active subspaces (2020)
Authors: Erdal, Daniel; Xiao, Sinan; Nowak, Wolfgang; Cirpka, Olaf A.
Ensemble-based uncertainty quantification and global sensitivity analysis of environmental models require generating large ensembles of parameter sets. This can be difficult even for moderately complex models based on partial differential equations, because many parameter combinations cause implausible model behavior even though the individual parameters are within plausible ranges. In this work, we apply Gaussian Process Emulators (GPE) as surrogate models in a sampling scheme. In an active-training phase of the surrogate model, we target the behavioral boundary of the parameter space before sampling this behavioral part of the parameter space more evenly by passive sampling. Active learning increases the subsequent sampling efficiency, but its additional costs pay off only for a sufficiently large sample size. We exemplify our idea with a catchment-scale subsurface flow model with uncertain material properties, boundary conditions, and geometric descriptors of the geological structure. We then perform a global sensitivity analysis of the resulting behavioral dataset using the active-subspace method, which requires approximating the local sensitivities of the target quantity with respect to all parameters at all sampled locations in parameter space. The Gaussian Process Emulator implicitly provides an analytical expression for this gradient, thus improving the accuracy of the active-subspace construction.
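The point that a GP emulator provides the gradient analytically can be shown in a minimal NumPy sketch: the posterior mean of a GP with an RBF kernel is differentiable in closed form, and averaging the outer products of these gradients yields the active-subspace matrix. The kernel settings and the linear test function below are illustrative assumptions, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 5))   # sampled parameter sets (stand-in)
w = np.array([3.0, 1.0, 0.1, 0.0, 0.0]) # illustrative target: one dominant direction
y = X @ w + 0.01 * rng.normal(size=200)

ell, sig2 = 0.7, 1e-4                   # assumed RBF length scale and noise variance
def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

alpha = np.linalg.solve(rbf(X, X) + sig2 * np.eye(len(X)), y)

def grad_mean(x):
    # analytic gradient of the GP posterior mean m(x) = sum_i alpha_i k(x, X_i):
    # dm/dx = sum_i alpha_i * k(x, X_i) * (X_i - x) / ell^2
    k = rbf(x[None, :], X).ravel()
    return ((X - x) * (alpha * k)[:, None]).sum(0) / ell ** 2

# active-subspace matrix C = mean of grad * grad^T over the samples
C = np.mean([np.outer(g, g) for g in (grad_mean(x) for x in X)], axis=0)
eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
print(eigvecs[:, -1])  # leading active direction, should align with w (up to sign)
```

No finite differencing of the emulator is needed, which is the accuracy advantage mentioned in the abstract.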
When applying the GPE-based preselection, 70-90% of the samples were confirmed to be behavioral by running the full model, whereas only 0.5% of the samples were behavioral in standard Monte-Carlo sampling without preselection. The GPE method also provided local sensitivities at minimal additional cost.

Item (Open Access): Diagnosing similarities in probabilistic multi-model ensembles: an application to soil-plant-growth modeling (2022)
Authors: Schäfer Rodrigues Silva, Aline; Weber, Tobias K. D.; Gayler, Sebastian; Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang; Streck, Thilo
There has been increasing interest in using multi-model ensembles over the past decade. While it has been shown that ensembles often outperform individual models, there is still a lack of methods that guide the choice of ensemble members. Previous studies found that model similarity is crucial for this choice. We therefore introduce a method that quantifies similarities between models based on so-called energy statistics. This method can also be used to assess the goodness-of-fit to noisy or deterministic measurements. To guide the interpretation of the results, we combine different visualization techniques that reveal different insights and thereby support model development. We demonstrate the proposed workflow on a case study of soil-plant-growth modeling, comparing three models from the Expert-N library. Results show that model similarity and goodness-of-fit vary depending on the quantity of interest. This confirms previous findings that "there is no single best model", and hence combining several models into an ensemble can yield more robust results.

Item (Open Access): A stochastic framework to optimize monitoring strategies for delineating groundwater divides (2020)
Authors: Allgeier, Jonas; González-Nicolás, Ana; Erdal, Daniel; Nowak, Wolfgang; Cirpka, Olaf A.
Surface-water divides can be delineated by analyzing digital elevation models.
They might, however, differ significantly from groundwater divides because the groundwater surface does not necessarily follow the surface topography. Thus, to delineate a groundwater divide, hydraulic-head measurements are needed. Because installing piezometers is cost- and labor-intensive, it is vital to optimize their placement. In this work, we introduce an optimal design analysis that can identify the best spatial configuration of piezometers. The method is based on formal minimization of the expected posterior uncertainty in localizing the groundwater divide. It builds on the preposterior data impact assessor, a Bayesian framework that uses a random sample of models (here: steady-state groundwater flow models) in a fully non-linear analysis. For each realization, we compute virtual hydraulic-head measurements at all potential well installation points and delineate the groundwater divide by particle tracking. Then, for each set of virtual measurements and their possible measurement values, we assess the uncertainty of the groundwater-divide location after Bayesian updating, and finally marginalize over all possible measurement values. We test the method by mimicking an aquifer in South-West Germany. Previous work on this aquifer indicated a groundwater divide that differs substantially from the surface-water divide. Our analysis shows that the uncertainty in localizing the groundwater divide can be reduced with each additional monitoring well.
In our case study, the optimal configuration of three monitoring points places the first well close to the topographic surface-water divide, the second on the hillslope toward the valley, and the third in between.

Item (Open Access): Bayesian calibration points to misconceptions in three‐dimensional hydrodynamic reservoir modeling (2023)
Authors: Schwindt, Sebastian; Callau Medrano, Sergio; Mouris, Kilian; Beckers, Felix; Haun, Stefan; Nowak, Wolfgang; Wieprecht, Silke; Oladyshkin, Sergey
Three‐dimensional (3D) numerical models are state of the art for investigating complex hydrodynamic flow patterns in reservoirs and lakes. Such full‐complexity models are computationally demanding, and their calibration is challenging in terms of time, subjective decision-making, and measurement data availability. In addition, physically unrealistic model assumptions or combinations of calibration parameters may remain undetected and lead to overfitting. In this study, we investigate if and how so-called Bayesian calibration helps to characterize faulty model setups driven by measurement data and calibration parameter combinations. Bayesian calibration builds on recent developments in machine learning and uses a Gaussian process emulator as a surrogate model, which runs considerably faster than a 3D numerical model. We Bayesian-calibrate a Delft3D‐FLOW model of a pump-storage reservoir as a function of the background horizontal eddy viscosity and diffusivity and the initial water temperature profile. We consider three scenarios with varying degrees of faulty assumptions and different uses of flow velocity and water temperature measurements. One of the scenarios forces completely unrealistic, rapid lake stratification yet still yields calibration accuracy similar to that of the more correct scenarios in terms of global statistics such as the root-mean-square error.
An uncertainty assessment resulting from the Bayesian calibration indicates that the completely unrealistic scenario forces fast lake stratification through highly uncertain mixing-related model parameters. Thus, Bayesian calibration describes the quality of calibration and the correctness of model assumptions through geometric characteristics of the posterior distributions. For instance, most likely calibration parameter values (posterior distribution maxima) at the limits of the calibration range, or with widespread uncertainty, characterize poor model assumptions and calibration.

Item (Open Access): Strategies for simplifying reactive transport models: a Bayesian model comparison (2020)
Authors: Schäfer Rodrigues Silva, Aline; Guthke, Anneli; Höge, Marvin; Cirpka, Olaf A.; Nowak, Wolfgang
For simulating reactive transport at the aquifer scale, various modeling approaches have been proposed. They vary considerably in their computational demands and in the amount of data needed for their calibration. Typically, the more complex a model is, the more data are required to sufficiently constrain its parameters. In this study, we assess a set of five models that simulate aerobic respiration and denitrification in a heterogeneous aquifer at quasi steady state. In a probabilistic framework, we test whether simplified approaches can be used as alternatives to the most detailed model. The simplifications are achieved by neglecting processes such as dispersion or biomass dynamics, or by replacing spatial discretization with travel-time-based coordinates. We use the model justifiability analysis proposed by Schöniger, Illman, et al. (2015, https://doi.org/10.1016/j.jhydrol.2015.07.047) to determine how similar the simplified models are to the reference model. This analysis rests on the principles of Bayesian model selection and performs a tradeoff between goodness-of-fit to reference data and model complexity, which is important for the reliability of predictions.
Results show that, in principle, the simplified models can reproduce the predictions of the reference model in the considered scenario. Yet, it became evident that defining appropriate ranges for the effective parameters of simplified models can be challenging. This issue can lead to overly wide predictive distributions, which counteract the apparent simplicity of the models. We found that performing the justifiability analysis on the case of model simplification is an objective and comprehensive approach to assessing the suitability of candidate models with different levels of detail.

Item (Open Access): Diagnosis of model errors with a sliding time‐window Bayesian analysis (2022)
Authors: Hsueh, Han‐Fang; Guthke, Anneli; Wöhling, Thomas; Nowak, Wolfgang
Deterministic hydrological models with uncertain, but inferred-to-be-time-invariant, parameters typically show time-dependent model errors. Such errors can occur if a hydrological process is active in certain time periods in nature but is not resolved by the model or its input. Such missing processes can become visible during calibration as time-dependent best-fit values of model parameters. We propose a formal time-windowed Bayesian analysis to diagnose this type of model error, formalizing the question "In which period of the calibration time series does the model statistically disqualify itself as quasi-true?" Using Bayesian model evidence (BME) as the model performance metric, we determine how much the data in time windows of the calibration time series support or refute the model. We then track BME over sliding time windows to obtain a dynamic, time-windowed BME (tBME) and search for sudden decreases that indicate the onset of model error. tBME also allows us to perform a formal, sliding likelihood-ratio test of the model against the data. Our approach is designed to detect error occurrence on various temporal scales, which is especially useful in hydrological modeling.
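A sliding-window BME of this kind can be sketched in a few lines, assuming a Gaussian iid likelihood and a synthetic stand-in for the prior model ensemble (not the authors' soil-moisture setup):

```python
import numpy as np

def tbme_series(obs, sim_ensemble, window, sigma):
    """Sliding-window log-BME via Monte Carlo over the prior.

    obs: (T,) observations; sim_ensemble: (N, T) model outputs for N prior draws.
    Returns log-BME per window start; sudden drops flag the onset of model error.
    """
    T = len(obs)
    out = []
    for t0 in range(T - window + 1):
        resid = sim_ensemble[:, t0:t0 + window] - obs[t0:t0 + window]
        # Gaussian iid log-likelihood of each prior draw over the window
        loglik = -0.5 * (resid ** 2 / sigma ** 2
                         + np.log(2 * np.pi * sigma ** 2)).sum(axis=1)
        # BME = prior-mean likelihood; log-sum-exp for numerical stability
        m = loglik.max()
        out.append(m + np.log(np.mean(np.exp(loglik - m))))
    return np.array(out)

rng = np.random.default_rng(2)
true = np.sin(np.linspace(0, 6, 120))
obs = true + 0.05 * rng.normal(size=120)
obs[80:] += 0.5                                   # un-modeled shift after t = 80
sims = true + 0.05 * rng.normal(size=(300, 120))  # ensemble of a "quasi-true" model
tbme = tbme_series(obs, sims, window=20, sigma=0.1)
```

In this synthetic example, the log-BME curve drops sharply once the windows reach the un-modeled shift, which is exactly the diagnostic signal described in the abstract.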
We illustrate this by applying our proposed method to soil moisture modeling. We test tBME as a model-error indicator on several synthetic and real-world test cases designed to vary in error sources (structure and input) and error time scales. The results demonstrate the successful detection of errors in dynamic models. Moreover, the time sequence of posterior parameter distributions helps to investigate the reasons for model error and provides guidance for model improvement.

Item (Open Access): Combining crop modeling with remote sensing data using a particle filtering technique to produce real-time forecasts of winter wheat yields under uncertain boundary conditions (2022)
Authors: Zare, Hossein; Weber, Tobias K. D.; Ingwersen, Joachim; Nowak, Wolfgang; Gayler, Sebastian; Streck, Thilo
Within-season crop yield forecasting at national and regional levels is crucial to ensure food security. Yet, forecasting is a challenge because of incomplete knowledge about the heterogeneity of factors determining crop growth, above all management and cultivars. This motivated us to propose a method for early forecasting of winter wheat yields in systems with little information on crop management and cultivars and with uncertain weather conditions. The study was performed in two contrasting regions of southwest Germany, Kraichgau and Swabian Jura. We used in-season green leaf area index (LAI) as a proxy for end-of-season grain yield. We applied PILOTE, a simple and computationally inexpensive semi-empirical radiative transfer model, to produce yield forecasts and assimilated LAI data measured in situ and sensed by satellites (Landsat and Sentinel-2). To assimilate the LAI data into the PILOTE model, we used the particle filtering method. Both weather and sowing data were treated as random variables, acknowledging the principal sources of uncertainty in yield forecasting.
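The particle-filter update used for assimilating LAI observations can be sketched generically as a single bootstrap-filter step; the ensemble and all numbers below are illustrative, and PILOTE itself is not reproduced here:

```python
import numpy as np

def particle_filter_step(particles, lai_obs, obs_sigma, rng):
    """One bootstrap-filter update: weight each particle by the likelihood of
    the LAI observation, then resample with replacement proportional to weight.
    `particles` is a (N,) array of simulated LAI values from an uncertain ensemble."""
    w = np.exp(-0.5 * ((particles - lai_obs) / obs_sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

rng = np.random.default_rng(3)
particles = rng.normal(3.0, 1.0, size=1000)  # prior LAI ensemble (m^2/m^2), illustrative
updated = particle_filter_step(particles, lai_obs=4.2, obs_sigma=0.3, rng=rng)
print(updated.mean())  # ensemble mean shifts toward the observed LAI
```

Repeating this step at each assimilation time is what pulls the uncertain ensemble toward the satellite-derived or in-situ LAI trajectory.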
As such, we used the stochastic weather generator MarkSim® GCM to produce an ensemble of uncertain meteorological boundary conditions until the end of the season. Sowing dates were assumed to be normally distributed. To evaluate the performance of the data assimilation scheme, we set up the PILOTE model without data assimilation, treating weather data and sowing dates as random variables (baseline Monte Carlo simulation). Data assimilation increased the accuracy and precision of the LAI simulation. Increasing the number of assimilation times decreased the mean absolute error (MAE) of LAI prediction from satellite data by ~1 to 0.2 m²/m². Yield prediction was improved by data assimilation as compared to the baseline Monte Carlo simulation in both regions. Yield prediction by assimilating satellite-derived LAI showed statistics similar to assimilating the LAI data measured in situ. The error in yield prediction by assimilating satellite-derived LAI was 7% in Kraichgau and 4% in Swabian Jura, whereas the yield prediction error of the Monte Carlo simulation was 10% in both regions. Overall, we conclude that assimilating even noisy LAI data before anthesis substantially improves forecasting of winter wheat grain yield by reducing prediction errors caused by uncertainties in weather data, incomplete knowledge about management, and model calibration uncertainty.

Item (Open Access): Bayesian calibration and validation of a large‐scale and time‐demanding sediment transport model (2020)
Authors: Beckers, Felix; Heredia, Andrés; Noack, Markus; Nowak, Wolfgang; Wieprecht, Silke; Oladyshkin, Sergey
This study suggests a stochastic Bayesian approach for calibrating and validating morphodynamic sediment transport models and for quantifying parametric uncertainties, in order to alleviate the limitations of conventional (manual, deterministic) calibration procedures.
The applicability of our method is shown for a large-scale (11.0 km) and time-demanding (9.14 hr for the period 2002-2013) 2-D morphodynamic sediment transport model of the Lower River Salzach and for the three most sensitive input parameters (critical Shields parameter, grain roughness, and grain size distribution). Since Bayesian methods require a significant number of simulation runs, this work proposes constructing a surrogate model, here with the arbitrary polynomial chaos technique. The surrogate model is constructed from a limited set of runs (n = 20) of the full complex sediment transport model. Then, Monte Carlo-based techniques for Bayesian calibration are used with the surrogate model (10⁵ realizations in 4 hr). The results demonstrate that following Bayesian principles and iterative Bayesian updating of the surrogate model (10 iterations) enables identification of the most probable ranges of the three calibration parameters. Model verification based on the maximum a posteriori parameter combination indicates that the surrogate model accurately replicates the morphodynamic behavior of the sediment transport model for both calibration (RMSE = 0.31 m) and validation (RMSE = 0.42 m). Furthermore, the surrogate model is shown to be highly effective in lowering the total computational time for Bayesian calibration, validation, and uncertainty analysis. As a whole, this provides more realistic calibration and validation of morphodynamic sediment transport models, with quantified uncertainty, in less time than conventional calibration procedures.
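The surrogate-based calibration pattern recurring in these abstracts (a handful of expensive full-model runs, a cheap surrogate, then Monte-Carlo Bayesian updating) can be sketched as follows. A simple polynomial stands in for the arbitrary polynomial chaos expansion, and the "full model" and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def full_model(theta):
    # stand-in for an expensive simulator (e.g. hours per run)
    return 1.5 * theta + 0.3 * theta ** 2

# 1) A few expensive runs -> fit a cheap polynomial surrogate (degree 2).
theta_train = np.linspace(0.0, 2.0, 20)
coeffs = np.polyfit(theta_train, full_model(theta_train), deg=2)
surrogate = np.poly1d(coeffs)

# 2) Cheap Monte-Carlo Bayesian calibration on the surrogate:
#    weight prior draws by a Gaussian likelihood, then resample.
obs, sigma = full_model(1.2) + 0.05, 0.1          # synthetic observation
prior = rng.uniform(0.0, 2.0, size=100_000)       # 10^5 surrogate evaluations
w = np.exp(-0.5 * ((surrogate(prior) - obs) / sigma) ** 2)
posterior = prior[rng.choice(prior.size, 5000, p=w / w.sum())]
print(posterior.mean())  # concentrates near the synthetic "true" parameter 1.2
```

The expensive simulator is evaluated only 20 times; the 10⁵ likelihood evaluations all hit the surrogate, which is the source of the speed-up reported in the sediment transport study.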