Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Item Open Access
Geostatistical methods for the identification of flow and transport parameters in the subsurface (2005)
Nowak, Wolfgang; Bárdossy, András (Prof. Dr. rer. nat. Dr.-Ing.)
By definition, log-conductivity fields estimated by geostatistical inversion do not resolve the full variability of heterogeneous aquifers. Therefore, transport simulations under-predict the dispersion of solute clouds. Macrotransport theory defines dispersion coefficients that parameterize the total magnitude of variability. Using these dispersion coefficients together with estimated conductivity fields would over-predict dispersion, since estimated conductivity fields already resolve some of the variability. To date, only a few methods allow the use of estimated conductivity fields for transport simulations. A review of these methods reveals that they are either associated with excessive computational costs, cover only special cases, or are merely approximate. Their predictions hold only in a stochastic sense and cannot explicitly take measurements of transport-related quantities into account. In this dissertation, I develop, implement, and apply a new method for the geostatistical identification of flow and transport parameters in the subsurface. The parameters featured here are the log-conductivity and a scalar log-dispersion coefficient; the extension to other parameters, such as retardation coefficients or reaction rates, is straightforward. Geostatistical identification of flow parameters is well established; simultaneous identification together with transport parameters, however, is new. To implement the new method, I develop a modified Levenberg-Marquardt algorithm for the Quasi-Linear Geostatistical Approach and extend the latter to the generalized case of uncertain prior knowledge. I derive the sensitivities of the state variables of interest with respect to the newly introduced scalar log-dispersion coefficient.
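The dissertation's modified Levenberg-Marquardt algorithm for the Quasi-Linear Geostatistical Approach is not reproduced here; the generic damped Gauss-Newton iteration that Levenberg-Marquardt builds on can be sketched as follows (the exponential test model and all parameter names are illustrative assumptions, not the dissertation's actual implementation):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-2, tol=1e-10, max_iter=100):
    """Generic Levenberg-Marquardt: minimize ||residual(p)||^2.

    The damping parameter lam interpolates between a Gauss-Newton step
    (lam -> 0) and a scaled gradient-descent step (lam large).
    """
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        A = J.T @ J
        g = J.T @ r
        # Damped normal equations: (J^T J + lam * diag(J^T J)) dp = -J^T r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        new_cost = np.sum(residual(p + step) ** 2)
        if new_cost < cost:          # accept the step, relax damping
            p, cost, lam = p + step, new_cost, lam / 10
        else:                        # reject the step, increase damping
            lam *= 10
        if np.linalg.norm(step) < tol:
            break
    return p

# Illustrative use: fit y = a * exp(b * x) to noisy synthetic data
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(50)

res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
p_hat = levenberg_marquardt(res, jac, p0=[1.0, -1.0])
```

The accept/reject logic with adaptive damping is what distinguishes Levenberg-Marquardt from plain Gauss-Newton and makes it robust far from the optimum.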
Further, I summarize and extend the list of spectral methods that drastically speed up the expensive matrix operations involved in geostatistical inverse modeling. If the quality and quantity of the input data are sufficient, the new method accurately simulates the dispersive mechanisms of spreading, dilution, and the irregular movement of a plume's center of mass. It therefore adequately predicts the mixing of solute clouds and effective reaction rates in heterogeneous media. I perform extensive series of test cases to discuss and prove certain properties of the new method and the new dispersion coefficient. The character and magnitude of the identified dispersion coefficient depend strongly on the quality and quantity of the input data and on their potential to resolve variability in the conductivity field. Because inverse models of transport are coupled to inverse models of flow, the information in the input data has to characterize the flow field sufficiently; otherwise, transport-related input data cannot be interpreted. Application to an experimental data set from a large-scale sandbox experiment and comparison to results from existing approaches in macrotransport theory show good agreement.

Item Open Access
Optimal exposure time in gamma-ray attenuation experiments for monitoring time-dependent densities (2022)
Gonzalez-Nicolas, Ana; Bilgic, Deborah; Kröker, Ilja; Mayar, Assem; Trevisan, Luca; Steeb, Holger; Wieprecht, Silke; Nowak, Wolfgang
Several environmental phenomena require monitoring time-dependent densities in porous media, e.g., clogging of river sediments, mineral dissolution/precipitation, or variably saturated multiphase flow. Gamma-ray attenuation (GRA) can monitor time-dependent densities under laboratory conditions without being destructive or invasive. GRA sends gamma rays through a material, where they are attenuated by photoelectric absorption and then recorded by a photon detector.
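As a rough illustration of the measurement principle, the following sketch inverts the Beer-Lambert attenuation law to recover density from photon counts, with Poisson counting noise standing in for the effect of a finite exposure time; all numerical values and symbol names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def density_from_counts(counts, t, I0_rate, mu, x):
    """Invert the Beer-Lambert law I = I0 * exp(-mu * rho * x).

    counts  : photons recorded during exposure time t
    t       : exposure time in seconds
    I0_rate : unattenuated photon rate (counts per second)
    mu      : mass attenuation coefficient (cm^2 / g)
    x       : path length through the sample (cm)
    """
    intensity = counts / t                   # observed photon rate
    return -np.log(intensity / I0_rate) / (mu * x)

# Synthetic example: simulate photon counting over exposure time t
rng = np.random.default_rng(1)
rho_true = 1.8        # g / cm^3, assumed true density
mu, x, I0_rate = 0.2, 10.0, 5e4
t = 2.0               # seconds of exposure

expected_rate = I0_rate * np.exp(-mu * rho_true * x)
counts = rng.poisson(expected_rate * t)      # Poisson counting noise
rho_est = density_from_counts(counts, t, I0_rate, mu, x)
```

Because the relative counting noise scales like 1/sqrt(counts), doubling the exposure time shrinks the noise-related density error, which is exactly the trade-off against temporal resolution that the paper optimizes.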
The attenuated intensity of the emerging beam relates to the density of the traversed material via the Beer-Lambert law. An important parameter when designing time-variable GRA is the exposure time: the time the detector takes to gather and count photons before the recorded intensity is converted to a density. Large exposure times capture the time evolution poorly (temporal raster error, i.e., inaccurate temporal discretization), while small exposure times yield imprecise intensity values (noise-related error, i.e., a small signal-to-noise ratio). Together, these two make up the total error of observing time-dependent densities by GRA. Our goal is to provide an optimization framework for time-dependent GRA experiments with respect to exposure time and other key parameters, thus yielding cleaner experimental data for improved process understanding. Experimentalists set, or iterate over, several experimental input parameters (e.g., Beer-Lambert parameters) and expectations about the yet unknown dynamics (e.g., mean and amplitude of the density and the characteristic time of density changes). We model the yet unknown dynamics as a random Gaussian process to derive expressions for the expected errors prior to the experiment as a function of key experimental parameters. Based on this, we provide an optimization framework that finds the optimal (minimal-total-error) setup, and we demonstrate its application on synthetic experiments.

Item Open Access
Gaussian active learning on multi-resolution arbitrary polynomial chaos emulator: concept for bias correction, assessment of surrogate reliability and its application to the carbon dioxide benchmark (2023)
Kohlhaas, Rebecca; Kröker, Ilja; Oladyshkin, Sergey; Nowak, Wolfgang
Surrogate models are widely used to improve computational efficiency in various geophysical simulation problems by reducing the number of model runs. Conventional one-layer surrogate representations are based on global kernels (e.g., polynomial chaos expansion, PCE) or on local kernels (e.g., Gaussian process emulator, GPE). Global representations omit some details, while local kernels require more model runs. The existing multi-resolution PCE is a promising hybrid: a global representation with local refinement. However, it cannot (yet) estimate the uncertainty of the resulting surrogate, which techniques like the GPE can. We propose to join the multi-resolution PCE and the GPE into a joint surrogate framework to get the best of both worlds. By doing so, we correct the surrogate bias and assess the remaining uncertainty of the surrogate itself. The resulting multi-resolution emulator offers a pathway for several active learning strategies to improve the surrogate at acceptable computational costs; compared to the existing PCE-kriging approach, it adds the multi-resolution aspect. We analyze the performance of the multi-resolution emulator and of a plain GPE using didactic test cases and a CO2 benchmark that is representative of many similar problems in the geosciences. Both approaches show similar improvements during active learning, but our multi-resolution emulator yields much more stable results than the GPE. Overall, the suggested emulator can be seen as a generalization of the multi-resolution PCE and GPE concepts that offers the possibility for active learning.