OPUS - Online Publications of University Stuttgart

Browsing by Author "Nowak, Wolfgang (Prof. Dr.-Ing.)"

Now showing 1 - 11 of 11
  • Analysis and simulation of anomalous transport in porous media (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2019) Most, Sebastian Christopher; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Bayesian inversion and model selection of heterogeneities in geostatistical subsurface modeling (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2021) Reuschen, Sebastian; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Early-warning monitoring systems for improved drinking water resource protection (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2018) Bode, Felix; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Integrating transient flow conditions into groundwater well protection (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2020) Rodríguez Pretelín, Abelardo; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Long-term lumped projections of groundwater balances in the face of limited data (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2024) Ejaz, Fahad; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Optimal planning of water and renewable energy systems for copper production processes with sector coupling and demand flexibility (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2022) Moreno Leiva, Simón; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Optimizing hybrid decentralized systems for sustainable urban drainage infrastructures planning (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2021) Bakhshipour, Amin Ebrahim; Nowak, Wolfgang (Prof. Dr.-Ing.)
  • Physics-informed neural networks for learning dynamic, distributed and uncertain systems (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2023) Praditia, Timothy; Nowak, Wolfgang (Prof. Dr.-Ing.)
    Scientific models play an important role in many technical inventions that facilitate daily human activities. We use them to assist in simple decisions, such as choosing what to wear based on a weather forecast, as well as in complex problems, such as assessing the environmental impact of industrial waste. Existing scientific models, however, are imperfect because our understanding of complex physical systems is limited. With the rapid growth of computing power in recent years, there has been increasing interest in applying data-driven modeling to improve upon current models and to fill in missing scientific knowledge. Traditionally, such data-driven models require a significant amount of observation data, which is often challenging to obtain, especially from natural systems. To address this issue, prior physical knowledge has been included in the model design, resulting in so-called hybrid models. Although the idea of infusing physics with data is sound, current state-of-the-art models have not found the ideal combination of both aspects, and applications to real-world data have been lacking. To bridge this gap, three research questions are formulated:

    1. How can prior physical knowledge be adopted to design a consistent and reliable hybrid model for dynamic systems?
    2. How can prior physical and numerical knowledge be adopted to design a consistent and reliable hybrid model for dynamic and spatially distributed systems?
    3. How can the hybrid model learn about its own total (predictive) uncertainty in a computationally effective manner, so that it is appropriate for real-world applications and can facilitate scientific hypothesis testing?

    With these questions answered, the overall goal is to contribute to more consistent approaches to scientific inquiry through hybrid models. The first contribution of this thesis addresses the first research question by proposing a modeling framework for a dynamic system, in the form of a Thermochemical Energy Storage device. A Nonlinear Autoregressive Network with Exogenous Input (NARX) model is trained recurrently with multiple time lags to capture the temporal dependency and long-term dynamics of the system. During training, the model is penalized whenever it violates established physical laws, such as mass and energy conservation. As a result, the model produces accurate and physically plausible predictions, in contrast to models trained without physical regularization.

    The second contribution addresses the second research question by designing a hybrid model that complements the Finite Volume Method (FVM) with the learning ability of Artificial Neural Networks (ANNs). The resulting model enables the learning of unknown closure/constitutive relationships in various advection-diffusion equations. This thesis shows that the proposed model outperforms state-of-the-art deep learning models by several orders of magnitude in accuracy and possesses excellent generalization ability.

    Finally, the third contribution addresses the third research question by investigating the performance of assorted uncertainty quantification methods on the hybrid model. As a demonstration, laboratory measurement data from a groundwater contaminant transport process are employed to train the model. Since the available training data are extremely scarce and noisy, uncertainty quantification methods are essential for producing a robust and trustworthy model. It is shown that a gradient-based Markov Chain Monte Carlo (MCMC) algorithm, namely the Barker proposal, is the most suitable for quantifying the uncertainty of the proposed model. Additionally, the hybrid model outperforms a calibrated physical model and provides appropriate predictive uncertainty that sufficiently explains the noisy measurement data. With these contributions, this thesis proposes a robust hybrid modeling framework that is suitable for filling in missing scientific knowledge and lays the groundwork for a wider variety of complex real-world applications. Ultimately, the hope is that this work will inspire future studies contributing to the continuous and mutual improvement of both scientific knowledge discovery and scientific model robustness.
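    A minimal, hypothetical sketch of the physics-regularization idea described in this abstract: a data-misfit loss augmented with a penalty for violating a mass balance. The balance terms, the penalty weight lam, and all function names are illustrative assumptions, not the thesis's actual formulation.

      import numpy as np

      def physics_regularized_loss(y_pred, y_true, mass_in, mass_out, lam=1.0):
          # Standard data-misfit term (mean squared error).
          data_loss = np.mean((y_pred - y_true) ** 2)
          # Hypothetical mass balance: predicted stored mass should equal
          # inflow minus outflow; a nonzero residual violates conservation.
          balance_residual = (mass_in - mass_out) - np.sum(y_pred)
          # Penalizing the squared residual steers training toward
          # physically plausible predictions.
          return data_loss + lam * balance_residual ** 2

      # Toy usage: a prediction that conserves mass incurs no penalty.
      y_true = np.array([1.0, 2.0, 3.0])
      y_pred = np.array([1.1, 1.9, 3.0])
      print(physics_regularized_loss(y_pred, y_true, mass_in=7.0, mass_out=1.0))

    In an actual NARX training loop, a term of this kind would simply be added to the loss in every epoch, with lam controlling the trade-off between data fit and physical consistency.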
  • Quantifying and visualizing model similarities for multi-model methods (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2022) Schäfer Rodrigues Silva, Aline; Nowak, Wolfgang (Prof. Dr.-Ing.)
    Modeling environmental systems is typically limited by an incomplete system understanding due to scarce and imprecise measurements. This leads to different types of uncertainty, among which conceptual uncertainty plays a key role but is difficult to address. Conceptual uncertainty refers to the problem of finding the most appropriate model representation of the physical system. This includes the problem of choosing among several plausible model hypotheses, but also the possibility that the true system description is not even among this set of hypotheses. In this thesis, I address the first of these issues: the uncertainty of choosing a model from a finite set. To account for this uncertainty of model choice, modelers typically use multi-model methods: they consider not just one but several models and apply statistical methods to either combine them or select the most appropriate one. For any of these methods, it is crucial to know how similar the individual models are. Yet even though multi-model methods have become increasingly popular, no methods were available to quantify the similarities between models and visualize them intuitively. This dissertation aims to close these gaps. In particular, it tackles the challenges of judging whether simplified models are a suitable replacement for a more detailed model, and of visualizing model similarities in a way that helps modelers gain an intuitive understanding of the model set. I defined three research questions that address these challenges and form the basis of this thesis:

    1. How can we systematically assess how similar conceptually simplified model versions are to an original, more detailed model?
    2. How can we extend the similarity analysis so that it is suitable for computationally expensive models?
    3. How can we visualize the similarities between probabilistic model predictions?

    With the first contribution, I show that the so-called model confusion matrix can be used to quantify model similarities and thus identify the best conceptual simplification of a detailed reference model. This matrix was introduced by Schöniger et al. [2015] to estimate the data need of competing models; here, I demonstrate that it can be used, beyond this original purpose, to analyze model similarities. With the second contribution, I address the problem of assessing this matrix for computationally expensive models. Since calculating the matrix requires many model runs, the existing method was not yet suitable for models with long run times. This problem is solved by extending surrogate-based Bayesian model selection [Mohammadi et al., 2018] so that two models can be compared based on their surrogates while accounting for approximation errors. With the third contribution, I demonstrate how the similarity of probabilistic model predictions can be quantified based on so-called energy statistics. By comparing different visualization techniques, I show how multi-model ensembles can be visualized intuitively so that modelers can gain a better understanding of the model set. The presented methods are widely applicable and can thus help bring model similarities more firmly into the focus of multi-model developers and users. Depending on the research problem, the individual models or an appropriate multi-model method can then be selected in a more targeted manner.
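    A minimal sketch of the "energy statistics" similarity measure named in this abstract: the sample-based energy distance between two predictive ensembles, D^2(X, Y) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||. The simple all-pairs estimator and the toy ensembles are illustrative assumptions.

      import numpy as np

      def energy_distance(x, y):
          # x, y: predictive ensembles of shape (n_samples, n_dims).
          def mean_pairwise(a, b):
              # Mean Euclidean distance over all sample pairs
              # (V-statistic version, i.e. including the zero diagonal).
              return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()
          return 2 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)

      # Toy usage: two model ensembles predicting the same three quantities.
      rng = np.random.default_rng(0)
      model_a = rng.normal(0.0, 1.0, size=(200, 3))
      model_b = rng.normal(0.5, 1.0, size=(200, 3))
      print(energy_distance(model_a, model_b))   # near zero for similar models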
  • Stochastic model comparison and refinement strategies for gas migration in the subsurface (Open Access)
    (Stuttgart : Eigenverlag des Instituts für Wasser- und Umweltsystemmodellierung der Universität Stuttgart, 2023) Banerjee, Ishani; Nowak, Wolfgang (Prof. Dr.-Ing.)
    Gas migration in the subsurface, a multiphase flow in a porous-medium system, is a problem of environmental concern and is also relevant for subsurface gas storage in the context of the energy transition. It is essential to know and understand the flow paths of these gases in the subsurface for efficient monitoring, remediation, or storage operations. On the one hand, laboratory gas-injection experiments help gain insights into the processes involved in these systems. On the other hand, numerical models help test the mechanisms observed and inferred from the experiments and then make useful predictions for real-world engineering applications. Both continuum and stochastic modelling techniques are used to simulate multiphase flow in porous media. In this thesis, I use a stochastic discrete growth model: the macroscopic Invasion Percolation (IP) model. IP models have the advantage of simplicity and computational inexpensiveness over complex continuum models. Local pore-scale changes dominantly affect gas flow in water-saturated porous media, and IP models are especially favourable for these multi-scale systems because simulating them with continuum models can be extremely computationally demanding.

    Although IP models offer a computationally inexpensive way to simulate multiphase flow in porous media, only very few studies have compared IP model results to actual laboratory experimental image data. One reason might be that IP models lack a notion of experimental time; they only have an integer counter for simulation steps that implies a time order. The few existing experiment-to-model comparison studies have used perceptual similarity or spatial moments as comparison measures. Perceptual comparison between model and experimental images is tedious and subjective, while comparing spatial moments can lead to misleading results because of the loss of information from the data. In this thesis, an objective and quantitative comparison method is developed and tested that overcomes the limitations of these traditional approaches. The first step is a volume-based time-matching between real-time experimental data and IP-model outputs, followed by the (Diffused) Jaccard coefficient to evaluate the quality of the fit. The fit between model and experimental images can be checked across various scales by varying the extent of blurring in the images.

    Numerical model predictions for sparsely known systems (like gas flow systems) suffer from high conceptual uncertainty. In the literature, numerous versions of IP models, differing in their underlying hypotheses, have been used to simulate gas flow in porous media. Moreover, gas-injection experiments belong to continuous, transitional, or discontinuous gas flow regimes, depending on the gas flow rate and the nature of the porous medium. The literature suggests that IP models are well suited to the discontinuous gas flow regime; the other flow regimes had not been explored. Using the abovementioned method, four macroscopic IP model versions are compared in this thesis against data from nine gas-injection experiments in the transitional and continuous gas flow regimes. This model inter-comparison helps assess the potential of these models in the unexplored regimes and identify the sources of conceptual model uncertainty.

    Alternatively, with a focus on parameter uncertainty, Bayesian Model Selection is a standard statistical procedure for systematically and objectively comparing different model hypotheses by computing the Bayesian Model Evidence (BME) against test data. BME is the likelihood of a model producing the observed data, given the prior distribution of its parameters. Computing BME can be challenging: exact analytical solutions require strong assumptions; mathematical approximations (information criteria) are often strongly biased; and assumption-free numerical methods (like Monte Carlo) are computationally infeasible for large data sets. In this thesis, a BME-computation method is developed to use BME as a ranking criterion in such infeasible scenarios: the "Method of Forced Probabilities" for extensive data sets and Markov-chain models. In this method, the direction of evaluation is swapped: instead of comparing thousands of model runs on random model realizations with the observed data, the model is forced to reproduce the data at each time step, and the individual probabilities of the model following these exact transitions are recorded. This is a fast, accurate, and exact method for calculating BME for IP models, which exhibit the Markov-chain property, and for complete "atomic" data.

    The analysis results obtained with the methods and tools developed in this thesis help identify the strengths and weaknesses of the investigated IP model concepts. This further aids model development and refinement efforts for predicting gas migration in the subsurface, and the insights gained foster improved experimental methods. The tools and methods are not limited to gas flow in porous media but can be extended to any system that produces raster outputs.
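    A minimal sketch of the forced-probabilities idea described above, for a generic Markov-chain model: instead of sampling random realizations, the model is forced through the observed state sequence and the log-probabilities of the forced transitions are accumulated. The two-state transition matrix and the observation sequence are hypothetical placeholders.

      import numpy as np

      def log_bme_forced(transition_matrix, observed_states):
          # Accumulate the log-probability of each observed transition.
          # For a Markov chain, this product of transition probabilities
          # is the exact probability of the model reproducing the data.
          log_p = 0.0
          for s, s_next in zip(observed_states[:-1], observed_states[1:]):
              log_p += np.log(transition_matrix[s, s_next])
          return log_p

      # Toy model: two states with asymmetric transition probabilities.
      P = np.array([[0.9, 0.1],
                    [0.4, 0.6]])
      observed = [0, 0, 1, 1, 0]   # hypothetical "atomic" observation sequence
      print(log_bme_forced(P, observed))

    For fixed model parameters this is the data likelihood; averaging such forced probabilities over draws from the parameter prior would yield the BME itself.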
  • Uncertainty quantification for expensive simulations : optimal surrogate modeling under time constraints (Open Access)
    (2017) Sinsbeck, Michael; Nowak, Wolfgang (Prof. Dr.-Ing.)
    Motivation and goal: Computer simulations allow us to predict the behavior of real-world systems. Any simulation, however, contains imperfectly adjusted parameters and simplifying assumptions about the processes considered. Therefore, simulation-based predictions can never be expected to be completely accurate, and the exact behavior of the system under consideration remains uncertain. The goal of uncertainty quantification (UQ) is to quantify how large the deviation between the real-world behavior of a system and its predicted behavior can possibly be. Such information is valuable for decision making. Computer simulations are often computationally expensive: each simulation run may take several hours or even days. Therefore, many UQ methods rely on surrogate models. A surrogate model is a function that behaves similarly to the simulation in terms of its input-output relation but is much faster to evaluate. Most surrogate modeling methods are convergent: with increasing computational effort, the surrogate model converges to the original simulation. In engineering practice, however, results often have to be obtained under time constraints. In such situations it is not an option to increase the computational effort arbitrarily, so the convergence property loses some of its appeal. For this reason, the key question of this thesis is: what is the best possible way of solving UQ problems if the available time is limited? This is a question of optimality rather than convergence. The main idea of this thesis is to construct UQ methods by means of mathematical optimization, so that we make optimal use of the time available.

    Contributions: This thesis contains four contributions to the goal of UQ under time constraints.

    1. A widely used surrogate modeling method in UQ is stochastic collocation, which is based on polynomial chaos expansions and therefore leads to polynomial surrogate models. In the first contribution, I developed an optimal sampling rule specifically designed for the construction of polynomial surrogate models. This sampling rule proved to be more efficient than existing sampling rules because it is stable, flexible, and versatile; existing methods lack at least one of these properties. Stability guarantees that the response surface will not oscillate between the sample points, flexibility allows the modeler to choose the number of function evaluations freely, and versatility means that the method can handle multivariate input distributions with statistical dependence.
    2. In the second contribution, I generalized the previous approach and optimized both the sampling rule and the functional form of the surrogate in order to obtain a generally optimal surrogate modeling method. I compared three possible approaches to such optimization; the only one that leads to a practical surrogate modeling method requires the modeler to describe the model function by a random field. The optimal surrogate then coincides with the Kriging estimator.
    3. I developed a sequential sampling strategy for solving Bayesian inverse problems. As in the second contribution, the modeler has to describe the model function by a random field. The sequential design strategy selects sample points one at a time so as to minimize the residual error in the solution of the inverse problem. Numerical experiments showed that the sequential design is more efficient than non-sequential methods.
    4. Finally, I investigated what impact available measurement data have on model selection between a reference model and a low-fidelity model. It turned out that, under time constraints, data can favor the use of a low-fidelity model. This is in contrast to model selection without time constraints, where the availability of data often favors the use of more complex models.

    Conclusions: From the four contributions, the following overarching conclusions can be drawn.

    • Under time constraints, the number of possible model evaluations is restricted, and the model behavior at unobserved input parameters remains uncertain. This type of uncertainty should be taken into account explicitly. For this reason, random fields as surrogates should be preferred over deterministic response-surface functions when working under time constraints.
    • Optimization is a viable approach to surrogate modeling. Optimal methods are automatically flexible, which means they are easily adapted to the computing time available.
    • Under time constraints, all available information about the model function should be used.
    • Model selection with and without time constraints is entirely different.
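    A minimal Gaussian-process (Kriging) sketch of the random-field surrogate and sequential design described above, in one dimension: under a limited evaluation budget, each new sample is placed where the surrogate's predictive variance is largest. The kernel, length scale, budget, and stand-in simulator are illustrative assumptions, not the thesis's actual method.

      import numpy as np

      def rbf(a, b, length_scale=0.2):
          # Squared-exponential covariance between two 1-D point sets.
          return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

      def gp_posterior(x_obs, y_obs, x_test, jitter=1e-8):
          # Standard Gaussian-process conditioning on the observed runs.
          K = rbf(x_obs, x_obs) + jitter * np.eye(len(x_obs))
          K_s = rbf(x_obs, x_test)
          K_inv_Ks = np.linalg.solve(K, K_s)
          mean = K_inv_Ks.T @ y_obs
          var = 1.0 - np.sum(K_s * K_inv_Ks, axis=0)   # prior variance is 1
          return mean, var

      expensive_model = lambda x: np.sin(6.0 * x)   # stand-in for a slow simulator
      x_obs = np.array([0.1, 0.9])
      y_obs = expensive_model(x_obs)
      x_cand = np.linspace(0.0, 1.0, 101)

      for _ in range(5):                            # budget: five more model runs
          _, var = gp_posterior(x_obs, y_obs, x_cand)
          x_new = x_cand[np.argmax(var)]            # most uncertain candidate
          x_obs = np.append(x_obs, x_new)
          y_obs = np.append(y_obs, expensive_model(x_new))

      mean, var = gp_posterior(x_obs, y_obs, x_cand)  # final surrogate prediction

    Because the surrogate is a random field, its predictive variance makes the remaining model uncertainty explicit, which is exactly the property the conclusions above argue for under time constraints.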