Browsing by Author "Nowak, Wolfgang"
Showing items 1-20 of 25

Item Open Access
Bayesian calibration and validation of a large-scale and time-demanding sediment transport model (2020)
Beckers, Felix; Heredia, Andrés; Noack, Markus; Nowak, Wolfgang; Wieprecht, Silke; Oladyshkin, Sergey
This study suggests a stochastic Bayesian approach for calibrating and validating morphodynamic sediment transport models and for quantifying parametric uncertainties in order to alleviate limitations of conventional (manual, deterministic) calibration procedures. The applicability of our method is shown for a large-scale (11.0 km) and time-demanding (9.14 hr for the period 2002-2013) 2-D morphodynamic sediment transport model of the Lower River Salzach and for the three most sensitive input parameters (critical Shields parameter, grain roughness, and grain size distribution). Since Bayesian methods require a significant number of simulation runs, this work proposes to construct a surrogate model, here with the arbitrary polynomial chaos technique. The surrogate model is constructed from a limited set of runs (n = 20) of the full complex sediment transport model. Then, Monte Carlo-based techniques for Bayesian calibration are used with the surrogate model (10^5 realizations in 4 hr). The results demonstrate that following Bayesian principles and iteratively updating the surrogate model (10 iterations) makes it possible to identify the most probable ranges of the three calibration parameters. Model verification based on the maximum a posteriori parameter combination indicates that the surrogate model accurately replicates the morphodynamic behavior of the sediment transport model for both calibration (RMSE = 0.31 m) and validation (RMSE = 0.42 m). Furthermore, it is shown that the surrogate model is highly effective in lowering the total computational time for Bayesian calibration, validation, and uncertainty analysis. As a whole, this provides more realistic calibration and validation of morphodynamic sediment transport models with quantified uncertainty in less time compared to conventional calibration procedures.
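
As a quick illustration of the workflow in this abstract — train a cheap surrogate on a handful of expensive runs, then do Monte Carlo Bayesian updating on the surrogate — here is a minimal Python sketch. A toy one-parameter function stands in for the 2-D morphodynamic model, and an ordinary least-squares polynomial stands in for the arbitrary polynomial chaos expansion; the prior range, observation, and error values are invented for illustration (only the n = 20 training runs and the 10^5 posterior draws mirror the numbers above).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the expensive sediment transport model: maps one
# calibration parameter (think: critical Shields parameter) to a predicted
# bed-level change. The real model needs ~9 hr per run.
def expensive_model(theta):
    return 0.8 * np.sin(3.0 * theta) + 0.3 * theta

# 1) Build the surrogate from a small design (n = 20 full-model runs).
theta_train = rng.uniform(0.02, 0.06, size=20)
y_train = expensive_model(theta_train)
coeffs = np.polynomial.polynomial.polyfit(theta_train, y_train, deg=4)
surrogate = lambda t: np.polynomial.polynomial.polyval(t, coeffs)

# 2) Monte Carlo Bayesian updating on the cheap surrogate (1e5 draws).
theta_prior = rng.uniform(0.02, 0.06, size=100_000)
y_obs, sigma_obs = 0.035, 0.005          # observation and its error (illustrative)
log_like = -0.5 * ((surrogate(theta_prior) - y_obs) / sigma_obs) ** 2
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

# Posterior summaries via importance weighting over the prior sample.
post_mean = np.sum(weights * theta_prior)
theta_map = theta_prior[np.argmax(log_like)]  # highest-likelihood draw
print(f"posterior mean = {post_mean:.4f}, highest-likelihood draw = {theta_map:.4f}")
```
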
Item Open Access
Bayesian calibration points to misconceptions in three-dimensional hydrodynamic reservoir modeling (2023)
Schwindt, Sebastian; Callau Medrano, Sergio; Mouris, Kilian; Beckers, Felix; Haun, Stefan; Nowak, Wolfgang; Wieprecht, Silke; Oladyshkin, Sergey
Three-dimensional (3d) numerical models are state-of-the-art for investigating complex hydrodynamic flow patterns in reservoirs and lakes. Such full-complexity models are computationally demanding, and their calibration is challenging regarding time, subjective decision-making, and measurement data availability. In addition, physically unrealistic model assumptions or combinations of calibration parameters may remain undetected and lead to overfitting. In this study, we investigate if and how so-called Bayesian calibration aids in characterizing faulty model setups driven by measurement data and calibration parameter combinations. Bayesian calibration builds on recent developments in machine learning and uses a Gaussian process emulator as a surrogate model, which runs considerably faster than a 3d numerical model. We Bayesian-calibrate a Delft3D-FLOW model of a pump-storage reservoir as a function of the background horizontal eddy viscosity and diffusivity and the initial water temperature profile. We consider three scenarios with varying degrees of faulty assumptions and different uses of flow velocity and water temperature measurements. One of the scenarios forces completely unrealistic, rapid lake stratification and still yields calibration accuracy similar to that of more correct scenarios in terms of global statistics, such as the root-mean-square error. An uncertainty assessment resulting from the Bayesian calibration indicates that the completely unrealistic scenario forces fast lake stratification through highly uncertain mixing-related model parameters. Thus, Bayesian calibration describes the quality of calibration and the correctness of model assumptions through geometric characteristics of posterior distributions. For instance, most likely calibration parameter values (posterior distribution maxima) at the limit of the calibration range, or with widespread uncertainty, characterize poor model assumptions and calibration.

Item Open Access
Bayesian model weighting : the many faces of model averaging (2020)
Höge, Marvin; Guthke, Anneli; Nowak, Wolfgang
Model averaging makes it possible to use multiple models for one modelling task, like predicting a certain quantity of interest. Several Bayesian approaches exist that all yield a weighted average of predictive distributions. However, they are often not properly applied, which can lead to false conclusions. In this study, we focus on Bayesian Model Selection (BMS) and Averaging (BMA), Pseudo-BMS/BMA and Bayesian Stacking. We want to foster their proper use by, first, clarifying their theoretical background and, second, contrasting their behaviours in an applied groundwater modelling task. We show that only Bayesian Stacking has the goal of model averaging for improved predictions by model combination. The other approaches pursue the quest of finding a single best model as the ultimate goal, and use model averaging only as a preliminary stage to prevent rash model choice. Improved predictions are thereby not guaranteed. In accordance with so-called ℳ-settings that clarify the alleged relations between models and truth, we elicit which method is most promising.
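
For reference, the standard formulas behind the approaches contrasted in this abstract (textbook forms from the BMA/stacking literature, not quoted from the paper): BMA weights models by their Bayesian model evidence, whereas Bayesian stacking optimizes the weights directly for predictive skill, e.g., over leave-one-out predictive densities.

```latex
% Bayesian model averaging: weights are normalized Bayesian model evidences.
w_k^{\mathrm{BMA}} = \frac{p(D \mid M_k)\, p(M_k)}{\sum_{l} p(D \mid M_l)\, p(M_l)},
\qquad
p(\tilde{y} \mid D) = \sum_{k} w_k^{\mathrm{BMA}}\, p(\tilde{y} \mid D, M_k)

% Bayesian stacking: weights on the simplex chosen to maximize predictive
% skill, e.g., over leave-one-out predictive densities:
\max_{w_k \ge 0,\ \sum_k w_k = 1}\ \sum_{i=1}^{n} \log \sum_{k} w_k\, p\!\left(y_i \mid y_{-i}, M_k\right)
```
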
Item Open Access
Characterization of export regimes in concentration-discharge plots via an advanced time-series model and event-based sampling strategies (2021)
González-Nicolás, Ana; Schwientek, Marc; Sinsbeck, Michael; Nowak, Wolfgang
Currently, the export regime of a catchment is often characterized by the relationship between compound concentration and discharge at the catchment outlet or, more specifically, by the regression slope in log-concentration versus log-discharge plots. However, the scattered points in these plots usually do not follow a plain linear regression representation because of different processes (e.g., hysteresis effects). This work proposes a simple stochastic time-series model for simulating compound concentrations in a river based on river discharge. Our model has an explicit transition parameter that can morph the model between chemostatic and chemodynamic behavior. As opposed to the typically used linear regression approach, our model has an additional parameter to account for hysteresis by including correlation over time. We demonstrate the advantages of our model using a high-frequency data series of nitrate concentrations collected with in situ analyzers in a catchment in Germany. Furthermore, we identify event-based optimal scheduling rules for sampling strategies. Overall, our results show that (i) our model is much more robust for estimating the export regime than the commonly used regression approach, and (ii) sampling strategies based on extreme events (including both high and low discharge rates) are key to reducing the prediction uncertainty of the catchment behavior. Thus, the results of this study can help characterize the export regime of a catchment and manage water pollution in rivers at lower monitoring costs.

Item Open Access
Combining crop modeling with remote sensing data using a particle filtering technique to produce real-time forecasts of winter wheat yields under uncertain boundary conditions (2022)
Zare, Hossein; Weber, Tobias K. D.; Ingwersen, Joachim; Nowak, Wolfgang; Gayler, Sebastian; Streck, Thilo
Within-season crop yield forecasting at national and regional levels is crucial to ensure food security. Yet, forecasting is a challenge because of incomplete knowledge about the heterogeneity of factors determining crop growth, above all management and cultivars. This motivates us to propose a method for early forecasting of winter wheat yields in systems with little information on crop management and cultivars and with uncertain weather conditions. The study was performed in two contrasting regions in southwest Germany, Kraichgau and Swabian Jura. We used in-season green leaf area index (LAI) as a proxy for end-of-season grain yield. We applied PILOTE, a simple and computationally inexpensive semi-empirical radiative transfer model, to produce yield forecasts and assimilated LAI data measured in situ and sensed by satellites (Landsat and Sentinel-2). To assimilate the LAI data into the PILOTE model, we used the particle filtering method. Both weather and sowing data were treated as random variables, acknowledging principal sources of uncertainty in yield forecasting. As such, we used the stochastic weather generator MarkSim® GCM to produce an ensemble of uncertain meteorological boundary conditions until the end of the season. Sowing dates were assumed normally distributed. To evaluate the performance of the data assimilation scheme, we set up the PILOTE model without data assimilation, treating weather data and sowing dates as random variables (baseline Monte Carlo simulation). Data assimilation increased the accuracy and precision of the LAI simulation. Increasing the number of assimilation times decreased the mean absolute error (MAE) of LAI prediction from satellite data from ~1 to 0.2 m²/m². Yield prediction was improved by data assimilation as compared to the baseline Monte Carlo simulation in both regions. Yield prediction by assimilating satellite-derived LAI yielded statistics similar to those obtained by assimilating the LAI data measured in situ. The error in yield prediction by assimilating satellite-derived LAI was 7% in Kraichgau and 4% in Swabian Jura, whereas the yield prediction error of the Monte Carlo simulation was 10% in both regions. Overall, we conclude that assimilating even noisy LAI data before anthesis substantially improves forecasting of winter wheat grain yield by reducing prediction errors caused by uncertainties in weather data, incomplete knowledge about management, and model calibration uncertainty.
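
A minimal sketch of the particle filtering idea used in this abstract: propagate an ensemble of states and parameters, weight each particle by the likelihood of the observed LAI, and resample. A toy logistic growth rule stands in for PILOTE, and the observations, error levels, and time stepping are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles = 1_000

# Illustrative stand-in for the crop model's one-step LAI dynamics;
# PILOTE itself is driven by weather and sowing-date inputs.
def step_lai(lai, growth_rate):
    return lai + growth_rate * lai * (1.0 - lai / 6.0)  # logistic-type growth

# Each particle carries a state (LAI) and an uncertain parameter (growth rate).
lai = rng.normal(0.5, 0.1, n_particles)
rate = rng.normal(0.08, 0.02, n_particles)

lai_obs = [0.9, 1.8, 3.1]   # synthetic satellite-derived LAI (illustrative)
sigma_obs = 0.3             # observation error, m2/m2 (illustrative)

for obs in lai_obs:
    # Propagate all particles to the next observation time.
    for _ in range(10):
        lai = step_lai(lai, rate)
    # Weight particles by the likelihood of the observed LAI ...
    w = np.exp(-0.5 * ((lai - obs) / sigma_obs) ** 2)
    w /= w.sum()
    # ... and resample (multinomial resampling keeps the ensemble size fixed).
    idx = rng.choice(n_particles, size=n_particles, p=w)
    lai, rate = lai[idx], rate[idx]

print(f"posterior LAI mean: {lai.mean():.2f}, growth-rate mean: {rate.mean():.3f}")
```
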
Item Open Access
Diagnosing similarities in probabilistic multi-model ensembles : an application to soil-plant-growth-modeling (2022)
Schäfer Rodrigues Silva, Aline; Weber, Tobias K. D.; Gayler, Sebastian; Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang; Streck, Thilo
There has been an increasing interest in using multi-model ensembles over the past decade. While it has been shown that ensembles often outperform individual models, there is still a lack of methods that guide the choice of the ensemble members. Previous studies found that model similarity is crucial for this choice. Therefore, we introduce a method that quantifies similarities between models based on so-called energy statistics. This method can also be used to assess the goodness-of-fit to noisy or deterministic measurements. To guide the interpretation of the results, we combine different visualization techniques, which reveal different insights and thereby support the model development. We demonstrate the proposed workflow on a case study of soil-plant-growth modeling, comparing three models from the Expert-N library. Results show that model similarity and goodness-of-fit vary depending on the quantity of interest. This confirms previous studies that found that “there is no single best model” and hence, combining several models into an ensemble can yield more robust results.

Item Open Access
Diagnosis of model errors with a sliding time-window Bayesian analysis (2022)
Hsueh, Han-Fang; Guthke, Anneli; Wöhling, Thomas; Nowak, Wolfgang
Deterministic hydrological models with uncertain, but inferred-to-be-time-invariant parameters typically show time-dependent model errors. Such errors can occur if a hydrological process is active in certain time periods in nature, but is not resolved by the model or by its input. Such missing processes could become visible during calibration as time-dependent best-fit values of model parameters. We propose a formal time-windowed Bayesian analysis to diagnose this type of model error, formalizing the question “In which period of the calibration time-series does the model statistically disqualify itself as quasi-true?” Using Bayesian model evidence (BME) as the model performance metric, we determine how much the data in time windows of the calibration time-series support or refute the model. Then, we track BME over sliding time windows to obtain a dynamic, time-windowed BME (tBME) and search for sudden decreases that indicate an onset of model error. tBME also allows us to perform a formal, sliding likelihood-ratio test of the model against the data. Our proposed approach is designed to detect error occurrence on various temporal scales, which is especially useful in hydrological modeling. We illustrate this by applying our proposed method to soil moisture modeling. We test tBME as a model error indicator on several synthetic and real-world test cases that we designed to vary in error sources (structure and input) and error time scales. Results prove the successful detection of errors in dynamic models. Moreover, the time sequence of posterior parameter distributions helps to investigate the reasons for model error and provides guidance for model improvement.
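
A compact sketch of the sliding-window idea from the tBME abstract: for each time window, BME is the prior-averaged likelihood of the windowed data, estimated here by brute-force Monte Carlo. A toy linear model with a deliberate drift (an unresolved "process") after t = 0.6 stands in for the soil moisture model; everything except the window-wise BME definition is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(theta, t):
    return theta * t  # toy stand-in for the hydrological model

t = np.linspace(0.0, 1.0, 200)
# Synthetic "truth" with a structural drift the model cannot represent:
y_obs = 0.5 * t + 0.2 * (t > 0.6) + rng.normal(0.0, 0.02, t.size)

theta_prior = rng.uniform(0.0, 1.0, 5_000)  # prior Monte Carlo ensemble
sigma = 0.02                                # assumed observation error
window = 40                                 # window length in time steps

tbme = []
for start in range(0, t.size - window):
    sl = slice(start, start + window)
    resid = y_obs[sl] - model(theta_prior[:, None], t[sl])  # (n_mc, window)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2, axis=1) \
              - window * np.log(sigma * np.sqrt(2.0 * np.pi))
    # Window BME = prior-averaged likelihood (log-sum-exp for stability).
    m = log_lik.max()
    tbme.append(m + np.log(np.mean(np.exp(log_lik - m))))

onset = np.argmin(np.gradient(tbme))
print(f"sharpest tBME drop near window starting at t = {t[onset]:.2f}")
```
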
Item Open Access
Experimental evaluation and uncertainty quantification for a fractional viscoelastic model of salt concrete (2022)
Hinze, Matthias; Xiao, Sinan; Schmidt, André; Nowak, Wolfgang
This study evaluates and analyzes creep testing results on salt concrete of type M2. The concrete is a candidate material for long-lasting structures for sealing underground radioactive waste repository sites. Predicting operational lifetime and safety aspects of these structures requires specific constitutive equations to describe the material behavior. Thus, we analyze whether a fractional viscoelastic constitutive law is capable of representing the long-term creep and relaxation processes of M2 concrete. We conduct a creep test to identify the parameters of the fractional model. Moreover, we use the Bayesian inversion method to evaluate the identifiability of the model parameters and the suitability of the experimental setup to yield a reliable prediction of the concrete behavior. In particular, this Bayesian analysis makes it possible to incorporate expert knowledge as prior information, to account for limited experimental precision and, finally, to rigorously quantify the post-calibration uncertainty.

Item Open Access
Gaussian active learning on multi-resolution arbitrary polynomial chaos emulator : concept for bias correction, assessment of surrogate reliability and its application to the carbon dioxide benchmark (2023)
Kohlhaas, Rebecca; Kröker, Ilja; Oladyshkin, Sergey; Nowak, Wolfgang
Surrogate models are widely used to improve the computational efficiency of various geophysical simulation problems by reducing the number of model runs. Conventional one-layer surrogate representations are based on global representations (e.g., polynomial chaos expansion, PCE) or on local kernels (e.g., Gaussian process emulator, GPE). Global representations omit some details, while local kernels require more model runs. The existing multi-resolution PCE is a promising hybrid: it is a global representation with local refinement. However, it cannot (yet) estimate the uncertainty of the resulting surrogate, which techniques like the GPE can do. We propose to join multi-resolution PCE and GPEs into a joint surrogate framework to get the best out of both worlds. By doing so, we correct the surrogate bias and assess the remaining uncertainty of the surrogate itself. The resulting multi-resolution emulator offers a pathway for several active learning strategies to improve the surrogate at acceptable computational costs; compared to the existing PCE-kriging approach, it adds the multi-resolution aspect. We analyze the performance of the multi-resolution emulator and a plain GPE using didactic test cases and a CO2 benchmark that is representative of many similar problems in the geosciences. Both approaches show similar improvements during active learning, but our multi-resolution emulator leads to much more stable results than the GPE. Overall, our suggested emulator can be seen as a generalization of the multi-resolution PCE and GPE concepts that offers the possibility for active learning.
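
A minimal two-stage sketch of the bias-correction idea in this abstract, assuming scikit-learn is available: a global polynomial trend (standing in for the multi-resolution PCE) plus a Gaussian process on its residuals, whose predictive standard deviation then drives the active-learning proposal. The simulator, design size, and kernel settings are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def simulator(x):  # toy stand-in for the expensive CO2 benchmark model
    return np.sin(6.0 * x) + 0.5 * x

# Small training design of "expensive" model runs.
x_train = rng.uniform(0.0, 1.0, 25)
y_train = simulator(x_train)

# Stage 1: global polynomial trend (stand-in for the PCE part).
coeffs = np.polynomial.polynomial.polyfit(x_train, y_train, deg=3)
trend = lambda x: np.polynomial.polynomial.polyval(x, coeffs)

# Stage 2: a GP on the residuals corrects the surrogate bias and, crucially,
# yields a standard deviation usable for active learning.
gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-6), normalize_y=True)
gp.fit(x_train[:, None], y_train - trend(x_train))

x_new = np.linspace(0.0, 1.0, 500)
resid_mean, resid_std = gp.predict(x_new[:, None], return_std=True)
y_hat = trend(x_new) + resid_mean  # bias-corrected surrogate prediction

# Active-learning step: run the full model where the emulator is least sure.
x_next = x_new[np.argmax(resid_std)]
print(f"next full-model run proposed at x = {x_next:.3f}")
```
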
Item Open Access
Geostatistical methods for the identification of flow and transport parameters in the subsurface (2005)
Nowak, Wolfgang; Bárdossy, András (Prof. Dr. rer. nat. Dr.-Ing.)
Per definition, log-conductivity fields estimated by geostatistical inversing do not resolve the full variability of heterogeneous aquifers. Therefore, in transport simulations, the dispersion of solute clouds is under-predicted. Macrotransport theory defines dispersion coefficients that parameterize the total magnitude of variability. Using these dispersion coefficients together with estimated conductivity fields would over-predict dispersion, since estimated conductivity fields already resolve some of the variability. To date, only a few methods exist that allow the use of estimated conductivity fields for transport simulations. A review of these methods reveals that they are either associated with excessive computational costs, only cover special cases, or are merely approximate. Their predictions hold only in a stochastic sense and cannot take measurements of transport-related quantities into account in an explicit manner. In this dissertation, I successfully develop, implement and apply a new method for the geostatistical identification of flow and transport parameters in the subsurface. The parameters featured here are the log-conductivity and a scalar log-dispersion coefficient. The extension to other parameters, like retardation coefficients or reaction rates, is straightforward. Geostatistical identification of flow parameters is well-known. However, simultaneous identification together with transport parameters is new. In order to implement the new method, I develop a modified Levenberg-Marquardt algorithm for the Quasi-Linear Geostatistical Approach and extend the latter to the generalized case of uncertain prior knowledge. I derive the sensitivities of the state variables of interest with respect to the newly introduced scalar log-dispersion coefficient. Further, I summarize and extend the list of spectral methods that help to drastically speed up the expensive matrix operations involved in geostatistical inverse modeling. If the quality and quantity of input data are sufficient, the new method accurately simulates the dispersive mechanisms of spreading, dilution and the irregular movement of the center of mass of a plume. Therefore, it adequately predicts mixing of solute clouds and effective reaction rates in heterogeneous media. I perform extensive series of test cases in order to discuss and prove certain properties of the new method and the new dispersion coefficient. The character and magnitude of the identified dispersion coefficient depend strongly on the quality and quantity of input data and on their potential to resolve variability in the conductivity field. Because inverse models of transport are coupled to inverse models of flow, the information in the input data has to characterize the flow field sufficiently. Otherwise, transport-related input data cannot be interpreted. Application to an experimental data set from a large-scale sandbox experiment and comparison to results from existing approaches in macrotransport theory show good agreement.
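
For orientation, the Quasi-Linear Geostatistical Approach mentioned here iterates a cokriging-type update around successive linearizations of the forward model. In standard (Kitanidis-style) notation — s the unknown parameter field, y the data, h(·) the forward model with Jacobian H_k at the current estimate s_k, Q the prior covariance, R the measurement error covariance, X the drift matrix — one iteration can be written as below. This is the generic textbook form, not copied from the thesis.

```latex
% One quasi-linear iteration: s_{k+1} = X\hat{\beta} + Q H_k^{\top} \xi,
% where \hat{\beta} and \xi solve the cokriging system
\begin{pmatrix} H_k Q H_k^{\top} + R & H_k X \\ (H_k X)^{\top} & 0 \end{pmatrix}
\begin{pmatrix} \xi \\ \hat{\beta} \end{pmatrix}
=
\begin{pmatrix} y - h(s_k) + H_k s_k \\ 0 \end{pmatrix}
```
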
Item Open Access
Hydraulically induced fracturing in heterogeneous porous media using a TPM-phase-field model and geostatistics (2023)
Wagner, Arndt; Sonntag, Alixa; Reuschen, Sebastian; Nowak, Wolfgang; Ehlers, Wolfgang
Hydraulically induced fracturing is widely used in practice for several exploitation techniques. The chosen macroscopic model combines a phase-field approach to fractures with the Theory of Porous Media (TPM) to describe dynamic hydraulic fracturing processes in fully saturated porous materials. In this regard, the solid's state of damage shows a diffuse transition zone between the broken and unbroken domain. Rocks or soils in grown nature are generally inhomogeneous, with material imperfections on the microscale, such that modelling a homogeneous porous material may oversimplify the behaviour of the solid and fluid phases in the fracturing process. Therefore, material imperfections and inhomogeneities in the porous structure are considered through the definition of location-dependent material parameters. In this contribution, a deterministic approach to account for predefined imperfection areas as well as statistical fields of geomechanical properties is proposed. Representative numerical simulations show the impact of solid-skeleton heterogeneities in porous media on the fracturing characteristics, e.g., the crack path.

Item Open Access
Improving thermochemical energy storage dynamics forecast with physics-inspired neural network architecture (2020)
Praditia, Timothy; Walser, Thilo; Oladyshkin, Sergey; Nowak, Wolfgang
Thermochemical Energy Storage (TCES), specifically the calcium oxide (CaO)/calcium hydroxide (Ca(OH)2) system, is a promising energy storage technology with relatively high energy density and low cost. However, the existing models available to predict the system's internal states are computationally expensive. An accurate and real-time capable model is therefore still required to improve its operational control. In this work, we implement a Physics-Informed Neural Network (PINN) to predict the dynamics of the TCES internal state. Our proposed framework addresses three physical aspects to build the PINN: (1) we choose a Nonlinear Autoregressive Network with Exogenous Inputs (NARX) with deeper recurrence to address the nonlinear latency; (2) we train the network in closed loop to capture the long-term dynamics; and (3) we incorporate physical regularisation during its training, calculated based on discretized mole and energy balance equations. To train the network, we perform numerical simulations on an ensemble of system parameters to obtain synthetic data. Even though the suggested approach provides results with an error of 3.96 x 10^(-4), which is in the same range as the result without physical regularisation, it is superior compared to conventional Artificial Neural Network (ANN) strategies because it ensures physical plausibility of the predictions, even in a highly dynamic and nonlinear problem. Consequently, the suggested PINN can be further developed for more complicated analyses of the TCES system.
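
The physical regularisation described in point (3) amounts to augmenting the data-misfit loss with the residuals of the discretized balance equations. A generic form is sketched below in illustrative notation (the paper's actual discretization of the CaO/Ca(OH)2 mole and energy balances is not reproduced here): ŷ_i denotes the closed-loop NARX prediction, y_i the training target, R(·) the discretized balance residual, and λ the regularisation weight.

```latex
\mathcal{L}(\mathbf{w}) =
\underbrace{\frac{1}{N}\sum_{i=1}^{N} \bigl\| \hat{\mathbf{y}}_i(\mathbf{w}) - \mathbf{y}_i \bigr\|^{2}}_{\text{closed-loop prediction error}}
\;+\; \lambda\,
\underbrace{\frac{1}{N}\sum_{i=1}^{N} \bigl\| \mathcal{R}\bigl(\hat{\mathbf{y}}_i(\mathbf{w})\bigr) \bigr\|^{2}}_{\text{discretized balance residuals}}
```
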
Item Open Access
Information-theoretic scores for Bayesian model selection and similarity analysis : concept and application to a groundwater problem (2023)
Morales Oreamuno, Maria Fernanda; Oladyshkin, Sergey; Nowak, Wolfgang
Bayesian model selection (BMS) and Bayesian model justifiability analysis (BMJ) provide a statistically rigorous framework for comparing competing models through the use of Bayesian model evidence (BME). However, a BME-based analysis has two main limitations: (a) it does not account for a model's posterior predictive performance after using the data for calibration and (b) it leads to biased results when comparing models that use different subsets of the observations for calibration. To address these limitations, we propose augmenting BMS and BMJ analyses with additional information-theoretic measures: expected log-predictive density (ELPD), relative entropy (RE) and information entropy (IE). Exploring the connection between Bayesian inference and information theory, we explicitly link BME and ELPD together with RE and IE to highlight the information flow in BMS and BMJ analyses. We show how to compute and interpret these scores alongside BME, and apply the framework to a controlled 2D groundwater setup featuring five models, one of which uses a subset of the data for calibration. Our results show how the information-theoretic scores complement BME by providing a more complete picture of the Bayesian updating process. Additionally, we demonstrate how both RE and IE can be used to objectively compare models that feature different data sets for calibration. Overall, the introduced Bayesian information-theoretic framework can lead to better-informed decisions by incorporating a model's post-calibration predictive performance, by making it possible to work with different subsets of the data and by considering the usefulness of the data in the Bayesian updating process.
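
For reference, the four scores combined in this framework have standard information-theoretic definitions, written here in generic notation (θ are model parameters, D the calibration data, D̃ future data; this is the textbook form, not the paper's exact notation):

```latex
% Bayesian model evidence (prior-predictive density of the data):
\mathrm{BME}_k = p(D \mid M_k) = \int p(D \mid \theta, M_k)\, p(\theta \mid M_k)\, \mathrm{d}\theta
% Expected log-predictive density (posterior-predictive skill on future data):
\mathrm{ELPD}_k = \int p_{\mathrm{true}}(\tilde{D})\, \log p(\tilde{D} \mid D, M_k)\, \mathrm{d}\tilde{D}
% Relative entropy (information gained from prior to posterior):
\mathrm{RE}_k = \int p(\theta \mid D, M_k)\, \log \frac{p(\theta \mid D, M_k)}{p(\theta \mid M_k)}\, \mathrm{d}\theta
% Information entropy (remaining posterior uncertainty):
\mathrm{IE}_k = -\int p(\theta \mid D, M_k)\, \log p(\theta \mid D, M_k)\, \mathrm{d}\theta
```
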
Item Open Access
Integrating structural resilience in the design of urban drainage networks in flat areas using a simplified multi-objective optimization framework (2021)
Bakhshipour, Amin E.; Hespen, Jessica; Haghighi, Ali; Dittmer, Ulrich; Nowak, Wolfgang
Structural resilience describes the ability of urban drainage systems (UDSs) to minimize the frequency and magnitude of failure due to common structural issues such as pipe clogging and cracking or pump failure. Structural resilience is often neglected in the design of UDSs. The current literature supports structural decentralization as a way to introduce structural resilience into UDSs. Although there are promising methods in the literature for generating and optimizing decentralized separate stormwater collection systems that incorporate hydraulic simulations in unsteady flow, these approaches sometimes require high computational effort, especially for flat areas. This may hamper their integration into ordinary commercial UDS design software, since they serve predominantly scientific purposes. As a response, this paper introduces simplified cost and structural resilience indices that can be used as heuristic parameters for optimizing the UDS layout. These indices only use graph connectivity information, which is computationally much less expensive than hydraulic simulation. The use of simplified objective functions significantly simplifies the feasible search space and reduces blind searches during optimization. To demonstrate the application and advantages of the proposed model, a real case study in the southwestern Iranian city of Ahvaz was explored. The proposed framework proved promising for reducing the computational effort and for delivering realistic, cost-effective and resilient UDSs.

Item Open Access
Learning groundwater contaminant diffusion-sorption processes with a finite volume neural network (2022)
Praditia, Timothy; Karlbauer, Matthias; Otte, Sebastian; Oladyshkin, Sergey; Butz, Martin V.; Nowak, Wolfgang
Improved understanding of complex hydrosystem processes is key to advancing water resources research. Nevertheless, the conventional way of modeling these processes suffers from high conceptual uncertainty, due to almost ubiquitous simplifying assumptions used in model parameterizations/closures. Machine learning (ML) models are considered a potential alternative, but their generalization abilities remain limited. For example, they normally fail to predict accurately across different boundary conditions. Moreover, as black boxes, they do not add to our process understanding or to the discovery of improved parameterizations/closures. To tackle this issue, we propose the hybrid modeling framework FINN (finite volume neural network). It merges existing numerical methods for partial differential equations (PDEs) with the learning abilities of artificial neural networks (ANNs). FINN operates on discrete control volumes and learns components of the investigated system equations, such as numerical stencils, model parameters, and arbitrary closure/constitutive relations. Consequently, FINN yields highly interpretable results. We demonstrate FINN's potential on a diffusion-sorption problem in clay. Results on numerically generated data show that FINN outperforms other ML models when tested under modified boundary conditions, and that it can successfully differentiate between the usual, known sorption isotherms. Moreover, we equip FINN with uncertainty quantification methods to lay open the total uncertainty of scientific learning, and then apply it to a laboratory experiment. The results show that FINN performs better than calibrated PDE-based models, as it is able to flexibly learn and model sorption isotherms without being restricted to a choice among available parametric models.

Item Open Access
The method of forced probabilities : a computation trick for Bayesian model evidence (2022)
Banerjee, Ishani; Walter, Peter; Guthke, Anneli; Mumford, Kevin G.; Nowak, Wolfgang
Bayesian model selection objectively ranks competing models by computing Bayesian Model Evidence (BME) against test data. BME is the likelihood of the data to occur under each model, averaged over the uncertain parameters. Computing BME can be problematic: exact analytical solutions require strong assumptions; mathematical approximations (information criteria) are often strongly biased; and assumption-free numerical methods (like Monte Carlo) are computationally infeasible if the data set is large, for example high-resolution snapshots from experimental movies. To use BME as a ranking criterion in such cases, we develop the “Method of Forced Probabilities (MFP)”. MFP swaps the direction of evaluation: instead of comparing thousands of model runs on random model realizations with the observed movie snapshots, we force the models to reproduce the data in each time step and record the individual probabilities of the models following these exact transitions. MFP is fast and accurate for models that fulfil the Markov property in time, paired with high-quality data sets that resolve all individual events. We demonstrate our approach on stochastic macro-invasion percolation models that simulate gas migration in porous media, and list additional examples of probable applications. The corresponding experimental movie was obtained from slow gas injection into water-saturated, homogeneous sand in a 25 x 25 x 1 cm acrylic glass tank. Although the movie does not always satisfy the high demands (resolving all individual events), we can apply MFP by suggesting a few workarounds. Results confirm that the proposed method can compute BME in previously unfeasible scenarios, facilitating a ranking among competing model versions for future model improvement.
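
Because the percolation models are Markovian in time, the forced-probabilities trick reduces BME to the product of the model's probabilities of exactly the observed transitions — no random realizations needed. A toy sketch with two hypothetical three-state Markov "models" and a fully resolved observation sequence (all transition matrices and states are invented for illustration):

```python
import numpy as np

# Two competing Markov-chain "models" of an invasion process on 3 coarse
# states (illustrative stand-in for macro-invasion percolation on a lattice).
P_model_A = np.array([[0.7, 0.3, 0.0],
                      [0.0, 0.6, 0.4],
                      [0.0, 0.0, 1.0]])
P_model_B = np.array([[0.5, 0.5, 0.0],
                      [0.0, 0.3, 0.7],
                      [0.0, 0.0, 1.0]])

# Observed state sequence from the experimental "movie" (every event resolved).
observed = [0, 0, 1, 1, 2]

def log_bme_forced(P, states):
    """Method of Forced Probabilities for a Markov model: force the model
    along the observed transitions and sum the log-probabilities of exactly
    those transitions; the result is the log-BME."""
    return sum(np.log(P[a, b]) for a, b in zip(states[:-1], states[1:]))

for name, P in [("A", P_model_A), ("B", P_model_B)]:
    print(f"model {name}: log-BME = {log_bme_forced(P, observed):.3f}")
```
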
Item Open Access
Optimal design of experiments to improve the characterisation of atrazine degradation pathways in soil (2021)
Chavez Rodriguez, Luciana; González-Nicolás, Ana; Ingalls, Brian; Streck, Thilo; Nowak, Wolfgang; Xiao, Sinan; Pagel, Holger
Contamination of soils with pesticides and their metabolites is a global environmental threat. Deciphering the complex process chains involved in pesticide degradation is a prerequisite for finding effective solution strategies. This study applies prospective optimal design (OD) of experiments to identify laboratory sampling strategies that allow model-based discrimination of atrazine (AT) degradation pathways. We simulated virtual AT degradation experiments with a first-order model that reflects a simple reaction chain of complete AT degradation. We added a set of Monod-based model variants that consider more complex AT degradation pathways. Then, we applied an extended constraint-based parameter search algorithm that produces Monte Carlo ensembles of realistic model outputs, in line with published experimental data. Differences between model ensembles were quantified with Bayesian model analysis using an energy distance metric. AT degradation pathways following first-order reaction chains could be clearly distinguished from those predicted with Monod-based models. As expected, including measurements of specific bacterial guilds improved model discrimination further. However, experimental designs considering measurements of AT metabolites were most informative, highlighting that environmental fate studies should prioritise measuring metabolites for elucidating active AT degradation pathways in soils. Our results suggest that applying model-based prospective OD will maximise knowledge gains about soil systems from laboratory and field experiments.

Item Open Access
Optimal exposure time in gamma-ray attenuation experiments for monitoring time-dependent densities (2022)
Gonzalez-Nicolas, Ana; Bilgic, Deborah; Kröker, Ilja; Mayar, Assem; Trevisan, Luca; Steeb, Holger; Wieprecht, Silke; Nowak, Wolfgang
Several environmental phenomena require monitoring time-dependent densities in porous media, e.g., clogging of river sediments, mineral dissolution/precipitation, or variably saturated multiphase flow. Gamma-ray attenuation (GRA) can monitor time-dependent densities without being destructive or invasive under laboratory conditions. GRA sends gamma rays through a material, where they are attenuated by photoelectric absorption and then recorded by a photon detector. The attenuated intensity of the emerging beam relates to the density of the traversed material via Beer-Lambert's law. An important parameter for designing time-variable GRA experiments is the exposure time, the time the detector takes to gather and count photons before converting the recorded intensity to a density. Large exposure times capture the time evolution poorly (temporal raster error, i.e., inaccurate temporal discretization), while small exposure times yield imprecise intensity values (noise-related error, i.e., a small signal-to-noise ratio). Together, these two make up the total error of observing time-dependent densities by GRA. Our goal is to provide an optimization framework for time-dependent GRA experiments with respect to exposure time and other key parameters, thus facilitating neater experimental data for improved process understanding. Experimentalists set, or iterate over, several experimental input parameters (e.g., Beer-Lambert parameters) and expectations about the yet unknown dynamics (e.g., mean and amplitude of density and characteristic time of density changes). We model the yet unknown dynamics as a random Gaussian process to derive expressions for the expected errors prior to the experiment as a function of key experimental parameters. Based on this, we provide an optimization framework that allows finding the optimal (minimal-total-error) setup, and we demonstrate its application on synthetic experiments.
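
A back-of-the-envelope sketch of the exposure-time trade-off described in this abstract, assuming Poisson counting noise in Beer-Lambert attenuation and a linearized raster error. All instrument and process numbers are invented for illustration, and the error model is deliberately simplified (the paper's actual framework models the unknown density dynamics as a Gaussian process):

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-mu * rho * x). Counted photons N = I * dt obey
# Poisson statistics, so sd(N) = sqrt(N); error-propagating
# rho = -ln(N / (I0 * dt)) / (mu * x) gives a noise-related density error
# of 1 / (mu * x * sqrt(N)).
I0, mu, x = 2.0e4, 0.02, 10.0   # photons/s, cm^2/g, cm (illustrative)
rho, drho_dt = 1.5, 2.0e-3      # g/cm^3 and its drift rate, g/(cm^3 s)

dt = np.logspace(-1, 3, 400)                   # candidate exposure times, s
counts = I0 * np.exp(-mu * rho * x) * dt
noise_err = 1.0 / (mu * x * np.sqrt(counts))   # imprecise intensity at small dt
raster_err = 0.5 * abs(drho_dt) * dt           # density drifts during large dt

total = np.sqrt(noise_err**2 + raster_err**2)
print(f"optimal exposure time ~ {dt[np.argmin(total)]:.1f} s")
```
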
Item Open Access
The role of fast frequency response of energy storage systems and renewables for ensuring frequency stability in future low-inertia power systems (2021)
González-Inostroza, Pablo; Rahmann, Claudia; Álvarez, Ricardo; Haas, Jannik; Nowak, Wolfgang; Rehtanz, Christian
Renewable generation technologies are rapidly penetrating electrical power systems, which challenges frequency stability, especially in power systems with low inertia. To prevent future instabilities, this issue should already be addressed in the planning stage of power systems. For this purpose, this paper presents a generation expansion planning tool that incorporates a set of frequency stability constraints along with the capability of renewable technologies and batteries to support system frequency stability during major power imbalances. We study how the investment decisions change depending on (i) which technology - batteries, renewable or conventional generation - supports system frequency stability, (ii) the available levels of system inertia, and (iii) the modeling detail of reserve allocation (system-wide versus zone-specific). Our results for a case study of Chile's power system in the year 2050 show that including fast frequency response from converter-based technologies will be mandatory to achieve secure operation in power systems dominated by renewable generation. When batteries offer the service, the total investment sizes are only slightly impacted. More precise spatial modeling of the reserves primarily affects the location of the investments as well as the reserve provider. These findings are relevant to energy policy makers, energy planners, and energy companies.

Item Open Access
Sampling behavioral model parameters for ensemble-based sensitivity analysis using Gaussian process emulation and active subspaces (2020)
Erdal, Daniel; Xiao, Sinan; Nowak, Wolfgang; Cirpka, Olaf A.
Ensemble-based uncertainty quantification and global sensitivity analysis of environmental models require generating large ensembles of parameter sets. This can already be difficult when analyzing moderately complex models based on partial differential equations, because many parameter combinations cause implausible model behavior even though the individual parameters are within plausible ranges. In this work, we apply Gaussian Process Emulators (GPE) as surrogate models in a sampling scheme. In an active-training phase of the surrogate model, we target the behavioral boundary of the parameter space before sampling this behavioral part of the parameter space more evenly by passive sampling. Active learning increases the subsequent sampling efficiency, but its additional costs pay off only for a sufficiently large sample size. We exemplify our idea with a catchment-scale subsurface flow model with uncertain material properties, boundary conditions, and geometric descriptors of the geological structure. We then perform a global sensitivity analysis of the resulting behavioral dataset using the active-subspace method, which requires approximating the local sensitivities of the target quantity with respect to all parameters at all sampled locations in parameter space. The Gaussian Process Emulator implicitly provides an analytical expression for this gradient, thus improving the accuracy of the active-subspace construction. When applying the GPE-based preselection, 70-90% of the samples were confirmed to be behavioral by running the full model, whereas only 0.5% of the samples were behavioral in standard Monte Carlo sampling without preselection. The GPE method also provided local sensitivities at minimal additional costs.
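
A minimal sketch of the active-subspace step described in this abstract: build the matrix C = E[∇f ∇fᵀ] from gradients at the sampled points and eigendecompose it. Here a toy ridge function and finite differences stand in for the behavioral samples and the GPE's analytical gradients; a large gap after the first eigenvalue signals a one-dimensional active subspace.

```python
import numpy as np

rng = np.random.default_rng(3)

def target(x):  # toy ridge function; the study gets gradients analytically from the GPE
    return np.exp(0.7 * x[..., 0] + 0.3 * x[..., 1] - 0.1 * x[..., 2])

def grad(x, eps=1e-6):
    # Central finite differences, one parameter dimension at a time.
    g = np.empty_like(x)
    for i in range(x.shape[-1]):
        dx = np.zeros(x.shape[-1])
        dx[i] = eps
        g[..., i] = (target(x + dx) - target(x - dx)) / (2 * eps)
    return g

# Sample the (behavioral) parameter space and build the active-subspace
# matrix C = E[grad f grad f^T] from local sensitivities at all points.
X = rng.uniform(-1.0, 1.0, size=(2_000, 3))
G = grad(X)
C = G.T @ G / len(X)

eigval, eigvec = np.linalg.eigh(C)       # eigenvalues in ascending order
order = np.argsort(eigval)[::-1]
print("eigenvalues:", np.round(eigval[order], 4))
print("dominant direction:", np.round(eigvec[:, order[0]], 3))
# A large spectral gap after the first eigenvalue indicates that the response
# varies mainly along a single direction in parameter space.
```
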