Browsing by Author "Nowak, Wolfgang (Jun.-Prof. Dr.-Ing.)"
Now showing 1 - 4 of 4
Item Open Access
Efficient concepts for optimal experimental design in nonlinear environmental systems (2014)
Geiges, Andreas; Nowak, Wolfgang (Jun.-Prof. Dr.-Ing.)

In modern scientific and practical applications, complex simulation tools are increasingly used to support various decision processes. Growing computer power in recent decades has allowed these simulation tools to grow in complexity in order to model the relevant physical processes more accurately. The number of model parameters that need to be calibrated is strongly connected to this complexity and hence grows as well. In environmental systems, in contrast to technical systems, the relevant data and information for adequate calibration of these model parameters are usually sparse or unavailable. This hinders an exact determination of model parameters, initial and boundary conditions, or even the correct formulation of a model concept. In such cases, stochastic simulation approaches make it possible to proceed with uncertain or unknown parameters and to propagate this input uncertainty into an estimate of the prediction uncertainty of the model. Thus, the predictive quality of an uncertain model can be assessed, which represents the current state of knowledge about the model prediction. If the prediction quality is judged to be insufficient, new measurement data or information about the real system are required to improve the model. To maximize the benefit of such measurement campaigns, it is necessary to assess the expected data impact of measurements collected according to a proposed campaign design. This allows identifying the so-called 'optimal design' that promises the highest expected data impact with respect to the particular model purpose. This thesis addresses data impact analysis of measurements within nonlinear systems or nonlinear parameter estimation problems. In contrast to linear systems, data impact in nonlinear systems depends on the actual future measurement values, which are unknown at the stage of campaign planning. For this reason, only an expected value of data impact can be estimated by averaging over many potential sets of future measurement values. This nonlinear analysis repeatedly employs nonlinear inference methods and is therefore computationally much more demanding than linear estimates. Therefore, the overall purpose of this thesis is to develop new and more efficient methods for nonlinear data impact analysis, which allow tackling complex and realistic applications for which, in the past, only linear(ized) methods were applicable. This thesis separates the efficiency of data impact estimation into three facets.

Accuracy: The first goal of this thesis is the development of a nonlinear and fully flexible reference framework for the accurate estimation of data impact. The core of the developed method is the bootstrap filter, which was identified as the most efficient method for fast and accurate simulation of repeated nonlinear Bayesian inference for many potential future measurement values. The method is implemented in a strict and rigorous Monte-Carlo framework based on a pre-computed ensemble of model evaluations. The non-intrusive nature of the framework allows its application to arbitrary physical systems and the consideration of any type of uncertainty.
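A minimal sketch of such an ensemble-based estimate is given below; it assumes a pre-computed ensemble, a Gaussian measurement error model, and prediction-variance reduction as an illustrative impact measure (all names are hypothetical and not taken from the thesis):

    # Minimal sketch: expected data impact of one proposed design, estimated with a
    # bootstrap (particle) filter over a pre-computed Monte-Carlo ensemble.
    import numpy as np

    def expected_data_impact(z_pred, y_sim, noise_std, n_futures=200, rng=None):
        """z_pred: (n,) array, prediction of interest per ensemble member
           y_sim:  (n, m) array, simulated measurements per member for one design
           noise_std: measurement error standard deviation"""
        rng = np.random.default_rng(rng)
        n = len(z_pred)
        prior_var = np.var(z_pred)
        post_vars = []
        for k in rng.choice(n, size=min(n_futures, n), replace=False):
            # treat member k (plus noise) as one potential future data set
            y_future = y_sim[k] + rng.normal(0.0, noise_std, size=y_sim.shape[1])
            # bootstrap-filter step: Gaussian likelihood weights for all members
            misfit = np.sum((y_sim - y_future) ** 2, axis=1) / (2.0 * noise_std ** 2)
            w = np.exp(-(misfit - misfit.min()))
            w /= w.sum()
            # conditional (posterior) prediction variance under this future data set
            post_mean = np.sum(w * z_pred)
            post_vars.append(np.sum(w * (z_pred - post_mean) ** 2))
        # expected data impact: average reduction of the prediction variance
        return prior_var - np.mean(post_vars)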
Computational speed: The second part of this thesis investigates the theoretical background of data impact analysis in order to identify potential for speeding up this analysis. The key idea followed in this part originates from the well-known symmetry of Bayes' theorem and of a related information measure called mutual information. Both allow reversing the direction of the information analysis, so that the roles of the potential measurement data and the relevant model prediction are exchanged. Since the space of potential measurements is usually much larger than the space of model prediction values, and since both have fundamentally different properties, this reversal of the information assessment offers a high potential for increasing the evaluation speed.
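In generic notation (with d denoting potential data and z the model prediction of interest; the symbols are not taken from the abstract), this symmetry reads

    p(z \mid d)\, p(d) = p(d \mid z)\, p(z), \qquad
    I(Z;D) = \iint p(z,d)\,\log\frac{p(z,d)}{p(z)\,p(d)}\,\mathrm{d}z\,\mathrm{d}d = I(D;Z),

so the expected information gain about z from d can equivalently be evaluated by analyzing the information that z carries about d.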
Robustness: The last basic facet of an efficient data impact estimation considers the robustness of such estimates with regard to the uncertainty of the underlying model. Basically, model-based data impact estimates are subject to the same uncertainty as any other model output. Thus, the data impact estimate can be regarded as just another uncertain model prediction. Therefore, the high uncertainty of the model (which is the reason for the search for new calibration data) also affects the process of evaluating the most useful new data.

In summary, the developed methods and theoretical principles allow for a more efficient evaluation of nonlinear data impact. The use of nonlinear measures for data impact leads to an essential improvement of the resulting data acquisition design with respect to the relevant prediction quality. These methods are flexibly applicable to any physical model system and allow the consideration of any degree of statistical dependency. Especially the interactive approach that counters the high initial uncertainty of the model leads to a large improvement in the design of data acquisition. All achieved conceptual and practical improvements in nonlinear data impact assessment allow using such powerful nonlinear methods also for complex and realistic problems.

Item Open Access
Methods for physically-based model reduction in time: analysis, comparison of methods and application (2013)
Leube, Philipp Christoph; Nowak, Wolfgang (Jun.-Prof. Dr.-Ing.)

Model reduction techniques are essential tools to control the overburdening costs of complex models. One branch of such techniques is the reduction of the time dimension. Major contributions to this task have been based on integral transformation. They have the elegant property that, by choosing suitable base functions, e.g., the monomials that lead to the so-called temporal moments (TM), the dynamic model can be simulated via steady-state equations.
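In the generic notation commonly used for temporal moments (not taken from the abstract), the k-th moment of a transient concentration signal c(x, t) is

    m_k(\mathbf{x}) = \int_0^{\infty} t^{k}\, c(\mathbf{x}, t)\,\mathrm{d}t, \qquad k = 0, 1, 2, \ldots

i.e., the integral transform uses the monomials t^k as temporal base functions, and for linear transport models the m_k can be obtained from steady-state equations, so that the time dimension drops out of the simulation.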
TM make it possible to maintain the required accuracy of hydro(geo)logical applications (e.g., forward predictions, model calibration or parameter estimation) at a reasonably high level while controlling the computational demand, or, alternatively, to admit more conceptual complexity, finer resolutions or larger domains at the same computational cost, or to make brute-force optimization tasks more feasible. In comparison to classical approaches of model reduction that involve orthogonal base functions, however, the base functions that lead to TM are non-orthogonal. Also, most applications involving TM used only lower-degree TM without providing reasons for this choice. This led to a number of open research questions:
- Does non-orthogonality impair the quality and efficiency of TM?
- Can other temporal base functions reduce dynamic systems more efficiently than the monomials that lead to TM?
- How can the compression efficiency associated with temporal model reduction methods be quantified, and how efficiently can information be compressed?
- What is the value of temporal model reduction in competition with the computational demand of other discretized or reduced model dimensions, e.g., repetitive model runs through Monte-Carlo (MC) simulations?
In this work, I successfully developed tools to analyze and assess existing techniques that reduce hydro(geo)logical models in time, and answered the questions posed above. As an overall conclusion, I found that no temporal model reduction of dynamic systems based on arbitrary integral transforms with (non-)polynomial base functions is better than the monomials leading to TM. However, the order of TM, as opposed to other model dimensions (e.g., the number of MC realizations), should be carefully determined prior to the model evaluation. TM can also help to improve highly complex systems through upscaling. Based on my findings, I hope to encourage more studies to work with the concept of TM. Especially because the number of studies in the literature that employ TM with real data is small, more tests on existing data sets should be performed as proof of concept for practical applications in real-world scenarios. Also, I hope to encourage those who limited their TM applications to lower-order TM to consider a longer moment sequence. My study results specifically provide valuable advice for hydraulic tomography studies under transient conditions to use TM up to the fourth order. This might alleviate the loss of accuracy used as an argument against TM by certain authors.

Item Open Access
Risk quantification and management in water production and supply systems (2014)
Enzenhöfer, Rainer; Nowak, Wolfgang (Jun.-Prof. Dr.-Ing.)

97% of the world's usable freshwater is stored as groundwater, which is a limited resource. Thus, its protection and management are a major worldwide societal, health-related, ecological and economic concern. The constant demand for clean and safe drinking water is in direct conflict with social and economic land-use claims. Therefore, water managers are challenged to know (1) what kind of hazards exist within the water catchment, (2) how these hazards can be controlled, and (3) how to verify that they are controlled. Thus, water management shifts from fixed and therefore passive wellhead delineation zones to active risk management. Despite this desired change, a clear definition of how to deal with uncertainties in risk assessment and management for drinking water supply systems is still missing. Nevertheless, uncertainty analysis is an integral part of risk assessment. Also, national guidelines in the US promulgate cumulative probability distribution functions to assess confidence bounds on the risk prediction. These uncertainties are, for example, a result of measurement error, model conceptualization and parameterization. Therefore, it is necessary to quantify uncertainty as part of risk assessment. Risk assessment addresses three questions: (1) What can happen? (2) What is the probability that it happens? (3) What is the damage if it happens? Thus, in general, risk is a combination of uncertainty and damage.
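In generic notation (not part of the abstract), this combination is commonly written as an expected damage over the possible hazard events E_i,

    R = \sum_i P(E_i)\, D(E_i),

with P(E_i) the probability of occurrence of event E_i and D(E_i) the associated damage at the receptor.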
Unfortunately, only a few comprehensive risk concepts exist for drinking water supply systems that address risk from source to receptor while considering uncertainty and physically-based modeling aspects. Modularized, transport-based and probabilistic risk quantification models coupled with a rational, stakeholder-objective decision analysis framework for groundwater supply systems do not yet exist. Only with this type of comprehensive risk model can stakeholders estimate risk at the receptor level most accurately. This supports stakeholders in taking risk-informed, implementable, transparent, and evidence-based decisions in an uncertain environmental framework and pushes water governance to the next level. Therefore, this work presents a new methodological risk concept within a Bayesian framework to quantify and manage risk within groundwater resources for drinking water supply, utilizing smart decision analysis concepts based on multiple stakeholder objectives. The risk concept is quantitative, flexible, probabilistic and physically based. This quantitative risk assessment approach is superior to qualitative ones: for example, it allows the aggregation of hazard impacts, provides transparency through objectivity, and enables risk-informed management based on cardinal scales and economic concepts. Furthermore, the risk modeling framework is flexible in that it allows stakeholders to easily exchange single modules (compare fault trees: nodes or events) with readily available software and modeling techniques in a plug-and-play mode. The probabilistic approach quantifies uncertainty and provides a prediction space of many possible outcomes, such that stakeholders can better evaluate the current risk situation. Especially in the case of the present subsurface heterogeneity and the lack of knowledge about its structural distribution, it is indispensable to quantify uncertainty. In addition, uncertainty is reduced by Bayesian conditioning techniques (e.g., Bayesian GLUE), moving risk estimates closer to reality. Furthermore, the state-of-the-art transport-based model is able to calculate the cumulative hazard impact at the target objective as required by the European Commission. The physically-based transport model allows the aggregation of mass discharges across space, time and frequency. This enables risk managers to evaluate hazards more precisely, as individual hazards are often deemed to pose no risk although they contribute to the overall expected impact at the well. Therefore, hazard ranking across the catchment becomes available in a cumulative environmental setting.
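A minimal sketch of this ensemble-based aggregation and exceedance view of risk (names, array shapes and the threshold criterion are illustrative assumptions, not taken from the thesis):

    # Minimal sketch: mass discharges from individual hazards are summed at the well
    # for each Monte-Carlo realization; risk is read off as an exceedance probability.
    import numpy as np

    def exceedance_risk(discharge, weights=None, threshold=1.0):
        """discharge: (n_real, n_hazards) simulated mass discharge at the well
           weights:   optional (n_real,) Bayesian conditioning weights (GLUE-like)
           threshold: tolerable cumulative mass discharge at the receptor"""
        total = discharge.sum(axis=1)              # aggregate hazards per realization
        if weights is None:
            weights = np.full(len(total), 1.0 / len(total))
        weights = weights / weights.sum()
        risk = np.sum(weights[total > threshold])  # P(cumulative impact > threshold)
        # probability-weighted contribution of each hazard to the expected impact
        ranking = weights @ discharge
        return risk, ranking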
Thus, the risk quantification concept is able to provide valuable and indispensable information for water stakeholders that is quantitative, flexible, probabilistic and physically based. Second, by admitting uncertainty and utilizing this type of risk framework, stakeholders are able to take transparent, robust, rational, and risk-informed decisions. The risk framework is here applied to two test cases, one of synthetic nature, the other a well catchment located in southern Germany.

Item Open Access
Simulation, identification and characterization of contaminant source architectures in the subsurface (2014)
Koch, Jonas; Nowak, Wolfgang (Jun.-Prof. Dr.-Ing.)

Improper storage and disposal of non-aqueous-phase liquids (NAPLs) have resulted in widespread subsurface contamination, threatening the quality of groundwater as a freshwater resource. Contaminants with low miscibility and solubility in the aqueous phase remain as a separate phase. They dissolve into the groundwater and spread within the aquifer over long periods of time before the contaminants are fully depleted. Due to their typically high toxicity, even low concentrations in groundwater may pose high risks to ecosystems and human health. The spatial distribution of contaminants in the subsurface (i.e., the contaminant source architecture, CSA for short) is highly irregular and not precisely predictable. Yet, the complex and uncertain morphology of CSAs and its interactions with uncertain aquifer parameters and groundwater flow have to be accounted for and need to be resolved at the relevant scale to maintain adequate prediction accuracy. The abundance of contaminated sites and the difficulties of remediation efforts demand decisions to be based on a sound risk assessment. To this end, screening or investigation methods are applied. These methods assess which sites pose large risks, which ones can be left to natural attenuation, which ones need expensive remediation, and what remediation approach would be most promising. For this, it is important to determine relevant characteristics or impact metrics, such as geometric characteristics of the unknown CSA, total mass, potential mass removal by remediation, emanating dissolved mass fluxes and total mass discharge in past and future, predicted source depletion times, and the possible impact on drinking water wells, and thus on human health. The same characteristics are also important for designing monitoring or remediation schemes. Due to sparse data and natural heterogeneity, this risk assessment needs to be supported by adequate predictive models with quantified uncertainty. These models require an accurate source zone description, i.e., the distribution of mass of all partitioning phases in all possible states, mass-transfer algorithms, and the simulation of transport processes in the groundwater. Due to limited knowledge and computer resources, a selective choice of the relevant processes for the relevant states, and decisions on the relevant scale, is both sensitive and indispensable. Thus, it is an important research question what a meaningful level of model complexity is and how to obtain a physically and statistically consistent model framework. Almost every estimate of the desired impact metrics will be uncertain due to the typical uncertainty that is inherent in any process description in a heterogeneous subsurface environment, and due to the complex and non-linear interdependencies between aquifer parameters, CSA, groundwater velocities, and mass transfer. Thus, stochastic methods are indispensable because they can provide reasonable error bars and allow the involved stakeholders to take decisions in proportion to the posed risks of contaminated sites. In order to restrict this large uncertainty, field data need to be assimilated by inverse models. To this end, concentration observations possess promising information on CSA geometries, transport processes, and aquifer parameters. Revealing this valuable information, however, requires an efficient inverse model that is again physically and stochastically consistent. In particular, the identification of CSAs has to cope with non-unique problems, non-linear interdependencies, and enhanced mixing and plume deformation in a heterogeneous environment. The overall goal of this thesis is to provide a sound basis for the rational decisions that arise in the assessment of contaminated sites. Therefore, three theses are postulated in the following, whose significance and validity are demonstrated throughout this work.
1.) The model framework must at least account for the heterogeneity of aquifers, the irregularity of flow fields, realistic and thus complex-shaped CSAs, the three-dimensionality of natural systems, and adequate physical interlinkages of the key parameters at the adequate spatial and temporal scales, and it must at least treat the uncertainty of aquifer parameters and of the CSA.
2.) Joint identification of CSAs and aquifer parameters based on concentration observations can be achieved via non-linear and non-unique Bayesian inversion. An accurate and efficient inverse method for this task can be obtained by applying the method of adjoint states and utilizing the linearity of the transport equation.
3.) The enhanced mixing of dissolved DNAPL and the solute plume deformation in heterogeneous aquifers significantly influence the inference quality of CSAs from downstream concentration observations. Knowledge of the driving processes of enhanced mixing allows choosing adequate measurement designs.
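As an illustration of the second thesis statement, the following minimal sketch shows how the linearity of the transport equation can support CSA identification from concentration data: observations are modeled as a superposition of pre-computed unit-source responses, and candidate CSAs are weighted by a Gaussian likelihood (all names, shapes and the likelihood model are illustrative assumptions, not the inverse method developed in the thesis):

    # Minimal sketch: superposition of unit-source responses (valid for linear
    # transport) and Bayesian weighting of candidate contaminant source architectures.
    import numpy as np

    def csa_posterior_weights(candidate_masses, unit_response, c_obs, noise_std):
        """candidate_masses: (n_cand, n_cells) source mass per cell for each candidate CSA
           unit_response:    (n_cells, n_obs)  concentration at the observation points per
                                               unit mass in each source cell
           c_obs:            (n_obs,)          measured concentrations
           noise_std:        measurement error standard deviation"""
        c_sim = candidate_masses @ unit_response          # superposition of unit responses
        misfit = np.sum((c_sim - c_obs) ** 2, axis=1) / (2.0 * noise_std ** 2)
        w = np.exp(-(misfit - misfit.min()))              # unnormalized Gaussian likelihood
        return w / w.sum()                                # posterior weights (flat prior)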