Transport in Porous Media (2022) 143:463–496
https://doi.org/10.1007/s11242-022-01777-5

Optimal Exposure Time in Gamma-Ray Attenuation Experiments for Monitoring Time-Dependent Densities

Ana Gonzalez-Nicolas · Deborah Bilgic · Ilja Kröker · Assem Mayar · Luca Trevisan · Holger Steeb · Silke Wieprecht · Wolfgang Nowak

Received: 14 December 2021 / Accepted: 30 March 2022 / Published online: 28 April 2022
© The Author(s) 2022

* Correspondence: Ana Gonzalez-Nicolas, anagna@gmail.com

Abstract
Several environmental phenomena require monitoring time-dependent densities in porous media, e.g., clogging of river sediments, mineral dissolution/precipitation, or variably-saturated multiphase flow. Gamma-ray attenuation (GRA) can monitor time-dependent densities without being destructive or invasive under laboratory conditions. GRA sends gamma rays through a material, where they are attenuated by photoelectric absorption and then recorded by a photon detector. The attenuated intensity of the emerging beam relates to the density of the traversed material via Beer–Lambert's law. An important parameter for designing time-variable GRA is the exposure time, the time the detector takes to gather and count photons before converting the recorded intensity to a density. Large exposure times capture the time evolution poorly (temporal raster error, i.e., inaccurate temporal discretization), while small exposure times yield imprecise intensity values (noise-related error, i.e., small signal-to-noise ratio). Together, these two make up the total error of observing time-dependent densities by GRA. Our goal is to provide an optimization framework for time-dependent GRA experiments with respect to exposure time and other key parameters, thus facilitating neater experimental data for improved process understanding. Experimentalists set, or iterate over, several experimental input parameters (e.g., Beer–Lambert parameters) and expectations on the yet unknown dynamics (e.g., mean and amplitude of density and characteristic time of density changes). We model the yet unknown dynamics as a random Gaussian process to derive expressions for expected errors prior to the experiment as a function of key experimental parameters. Based on this, we provide an optimization framework that allows finding the optimal (minimal-total-error) setup and demonstrate its application on synthetic experiments.

Article Highlights
• We study the influence of anticipated density changes and experimental setup on optimal designs for GRA measurements
• We present a methodology that finds the optimal setup (minimum error) as a function of the exposure time and other parameters
• We provide experimentalists with a quantitative understanding of unavoidable inaccuracies and how to minimize them

Keywords Gamma-ray · Time-dependent density · Optimal experimental design · Raster error · Noise error · Beer–Lambert's law · Gaussian process regression

1 Introduction

Several natural and technical processes occurring in porous media lead to time-dependent densities. With time-dependent densities, we refer to the fact that the total mass within a fixed volume (i.e., a bulk density) may change over time.
Examples of such processes are the infiltration of fine-grained river sediments into a coarser sediment matrix (colmation) (Schälchli et al., 2002) and mineral precipitation or dissolution within porous rocks (e.g., Hommel et al., 2015). The induced changes in density also alter other properties of porous media. Depending on the application, these alterations are desired or undesired; in both cases, it is relevant to better understand the processes, and hence to have methods for observing time-dependent densities. In other cases, it is not the solid phase of the porous medium that changes the density, but the varying presence of fluids. Examples include (1) the variably-saturated flow of water through porous media, e.g., in the vadose zone in the area of soil science (e.g., van Genuchten, 1980) or (2) the multiphase flow of non-aqueous phase liquids (NAPLs) through otherwise water-saturated porous media in the area of contaminant hydrology (e.g., Pankow & Cherry, 1996).

Many existing methods for monitoring time-dependent densities are destructive, i.e., they require taking material samples. However, sampling disrupts the original conditions and processes to be monitored, hence affecting the dynamics that one would like to observe. Placing sensors into the system to be monitored may be less destructive, but it is still invasive and affects the processes at work, again disturbing the dynamics one would like to monitor. Therefore, approaches based on the attenuation of radiation energy (such as gamma rays or x-rays) are very promising. These methods have proven very useful in a wide range of environmental applications (Oostrom et al., 2007; Werth et al., 2010).

In this study, we focus on monitoring time-dependent densities via gamma-ray attenuation (GRA). We restrict this to GRA fixed in location and angle. This form of GRA experiment does not attempt to resolve spatial variations in density, but merely returns integral information on bulk attenuation (and hence on bulk density) along the beam line. Hence, our study focus should not be confused with tomographic approaches based on gamma rays that yield images; we will, however, discuss the extension to images and time-variable images in our outlook.

Several studies have used GRA to measure time-dependent densities. There are GRA-based studies that monitored fluid saturations (Barth et al., 2003; Eckberg & Sunada, 1984; Ferguson & Gardner, 1962; Illangasekare et al., 1995; Nofziger, 1978). Likewise, there are GRA-based studies to determine physical characteristics of soils such as soil bulk density and porosity (Baytaş & Akbal, 2002), water content, NAPL saturation, hydraulic conductivity (Moreira et al., 2011), and reservoir sediments' bulk density profiles (Beckers et al., 2018). Two very recent studies, Mayar et al. (2020) and Mayar et al. (2022), used GRA-based monitoring for colmation processes. Other fields where GRA has been successfully applied can be found in Gharbi et al. (2004) for colloid deposition in porous media and Sail et al. (2011) for suffusion tests on cohesionless granular matter.

In GRA, gamma rays are sent through the system, where they are attenuated and then recorded by a detector, such as a Geiger counter or scintillator. Beer–Lambert's law links the attenuated intensity of the emergent gamma rays (per wavelength) to the (bulk) density of the traversed materials.
Details for handling bulk densities of multiphasic and/or porous materials will be given in Sect. 2.1. The time allowed for counting discrete gamma photons at the detector before converting them to an intensity (and then to a density value) will be called exposure time hereafter.

Both emission and attenuation of gamma rays are random processes commonly modeled as Poisson processes (Berger & Hubbell, 1987; Waggoner, 1951). Accordingly, the count numbers recorded in repeated experiments turn out to follow the Poisson distribution (Buczyk, 2009). Therefore, one can measure precise intensities only with long exposure times. In other words: the imprecision of GRA, in a static measurement scenario with constant densities, solely depends on the statistical error associated with the number of counts recorded by the detector (Mayar et al., 2020). The expected magnitude of this number, in turn, depends on the exposure time, the radiation intensity of the radiation source, and on specifics of the technical equipment such as the collimators used to bundle the gamma rays through the experimental system (Mayar et al., 2020).

However, when the density to be measured changes over time and the goal is to monitor the temporal variations of density as a time series, the need for long exposure times becomes a problem: thinking in a sequence of intensities per exposure time window leads to a temporally rastered version of the time-dependent density one wants to monitor. The temporal resolution of this rastering is directly given by the exposure time. Thus, for a given set of experimental equipment, one has to choose smaller exposure times to capture the temporal variations of density more accurately, and one enters a trade-off between precision and accuracy. Of course, one can mitigate the trade-off by using more powerful experimental equipment such as stronger sources or larger collimators (Mayar et al., 2019). Yet, this comes at larger costs, larger risks related to the handling of gamma sources, or reduced spatial resolution.

Our study focuses on the trade-off between accuracy and precision for given experimental equipment. We will call the underlying two types of error "noise error" and "raster error". Noise error is caused by the statistical imprecision of gamma-ray counts. It is related to the noise part in the signal-to-noise ratio, and it decreases with larger exposure times. Raster error relates to the inaccuracy of representing a time-dependent density via constant values per exposure time. It relates to the temporal rastering quality of the signal in the signal-to-noise ratio, and it increases with exposure time. Naturally, one can expect an optimum (minimum total error) at problem-specific intermediate values of exposure time. This raises the question: how can experimentalists find this problem-specific optimum without manual trial and error?

Our goal is to achieve minimal total errors in GRA-based monitoring of time-dependent densities by mathematical optimization of exposure time, along with other key parameters of GRA experiments. As the optimization occurs prior to the experiment, one has to work with (statistical) expectations of experimental properties and outcomes. To keep the optimization accessible to experimentalists, it should rely on prior expectations that are relatively easy to specify.
Therefore, our proposed approach only asks the user to specify physical parameters of the GRA device and experimental setup (Beer–Lambert parameters: incident intensity, mass attenuation coefficient, system thickness) and statistical expectations of the yet unknown time-dependent density. These statistical expectations are the anticipated mean density, the anticipated strength of density variations, and the characteristic time scale on which the density variations are expected to occur. These specifications are sufficient to predict the magnitude of both error types as a function of exposure time and of other parameters, and hence to find the optimal exposure time (and other parameters) that minimizes the overall error.

For implementing the optimization, we will follow two different methods: a numerical computation of errors (slow and computationally expensive) and a first-order approximation (fast and still accurate). Then, we demonstrate the optimization of a hypothetical GRA monitoring exercise of time-dependent densities. We discuss the behavior of errors as a function of exposure time. Other experimental parameters may also be influenced or chosen by experimentalists, such as the strength of the gamma-ray source or the thickness of the analyzed sample. These are much less free to choose than the exposure duration, yet have impacts on experimental accuracy and precision. Therefore, we also analyze how the optimal exposure time and the achievable minimum error change under variations in experimental conditions. The latter include both the Beer–Lambert parameters (incident intensity, attenuation coefficient, system thickness; they influence the number of counts to be expected, and hence the noise error) and the expectations on the dynamics (mean, amplitude, characteristic time; they influence the expected signal to be rastered, and hence affect the raster error). Finally, we also discuss how to use our framework for an overall optimization of GRA experiments for monitoring time-variable densities.

The overall benefits of our work are (1) to predict controllable experimental errors in time-variable GRA prior to experimentation; (2) to provide an optimization to achieve minimal errors by choosing appropriate exposure times; and (3) to assist experimentalists in the overall optimization of their experiments toward a maximum utility in experimentation and process understanding.

This paper is organized as follows. First, we explain the methodology, including the system and the application of the Poisson distribution for this type of problem, the analytical and numerical approach, as well as the optimization problem. Afterward, in Sect. 3, we present our results. We also discuss overall optimization across all parameters that an experimentalist could choose freely, and discuss the limitations of our approach. Last, we conclude with a summary of recommendations for experiments and with an outlook for future research.

2 Methods

2.1 System Definition and Beer–Lambert's Law

We consider a system and experimental setup as shown in Fig. 1. The system domain $\Omega$ has a time-dependent density $\rho(t)$ (kg/m³) and a thickness $d$ (m). In the scope of our study, we neglect possible spatial variations of density. This means we assume that the experimentalist is either interested in an effective bulk density or that the density can be seen as spatially homogeneous. Hence, we do not consider the density as a function of space, but only of time.
For now, we provide all formulations just for a single (bulk) density to keep notation short, but we provide details on wrapping multi-phasic and/or porous media into this framework at the end of the current section.

Fig. 1 System sketch for measuring time-dependent densities using GRA

For monitoring the density over time via GRA, the experimental system is equipped with a gamma-ray source that sends a constant incident intensity $I_0$ (before passing through the system) through the system and measures the attenuated emergent intensity $I_1$ (after passing through the system), $I_1 < I_0$, at a photon counter (detector). The attenuation follows Beer–Lambert's law. Beer–Lambert's law (Lykos, 1992) is an exponential absorption law that relates the attenuation of light (or other electromagnetic rays/radiation) through a substance to the properties of the substance. Beer–Lambert's law is defined as

(1) $A_\lambda = \log\left(\frac{I_0}{I_1}\right) = \epsilon_\lambda \rho d$

and defines attenuation as the relationship between the intensities $I_0$ and $I_1$. Here, $A_\lambda$ is the system absorbance (unitless) for a given wavelength $\lambda$, $I_0$ is the incident intensity (W/m²), $I_1$ is the output intensity (W/m²), $\epsilon_\lambda$ is the absorption concentration of the (homogeneous) material in the system (m²/kg) for the wavelength $\lambda$, $\rho$ is the concentration (here: density) of the substance (kg/m³), and $d$ is the characteristic path length (here: the thickness of the system) (m). In the following text, we omit all subscripts related to the wavelength $\lambda$ and assume a monochromatic source. For polychromatic sources, one would have to apply Beer–Lambert's law to each involved wavelength. However, applying the law to all wavelengths in a lumped fashion can still be a good approximation if $\epsilon_\lambda$ does not vary too much with $\lambda$.

In Sect. 2.2, we will treat the intensities not as continuous variables (in units of W/m²), but as the discrete number of photon counts in a given time interval (unitless). As the fraction $I_0/I_1$ is unitless, Eq. (1) remains valid when using count numbers for $I_0$ and $I_1$.

Now, we turn toward multi-phasic and porous media. These are a volumetric mixture of several materials, each of them having its own absorption concentration $\epsilon$, density $\rho$, and volume fraction $n$ (compare to porosity). Without loss of generality, we can look at bulk densities per constituent that already capture the volume fraction, so that we can use the thickness $d$ as the identical overall sample thickness across all constituents. This is allowable due to the multiplicative character of $d$ and $\rho$ in Eq. (1). In the logarithmic form of Eq. (1), several constituents would simply add up to $(\epsilon_1\rho_1 d_1 + \epsilon_2\rho_2 d_2 + \ldots)$ on the right-hand side. Yet, GRA would only provide a single value for bulk attenuation, i.e., with the overall thickness $d$ and the individual values for $\epsilon$ given, one would have to solve for several density values from a single piece of information. Yet, in specific cases, it is possible to disentangle them:

1. In a multiphasic or porous system, the initial state provides a base-line attenuated intensity $I_{\text{base}}$, which can be substituted for $I_0$.
2. As all volume fractions must sum up to a constant value over time, one receives an additional constraint and can solve Eq. (1) for up to two changing densities.
3. If one or more of the constituents have a negligible product $\epsilon\rho$ compared to all other constituents, they can be neglected in the bulk attenuation, allowing one to solve for the key constituents of interest.

Such techniques are standard for GRA experiments with porous media; see, e.g., Mayar et al. (2020).

2.2 The Poisson Distribution Over Different Time Windows for Counting

In GRA experiments, emerging beam intensities are typically measured by photon counts, i.e., as discrete numbers of counts per given time window. Due to the stochastic nature of gamma decay for natural sources and of attenuation by the material (Berger & Hubbell, 1987; Waggoner, 1951), the actual count is random but has a uniquely defined expected value for constant and given parameters within Eq. (1). This randomness is widely modeled via the Poisson distribution (Buczyk, 2009), because both the emitting decay process and the attenuation per emitted particle are assumed to be a collection of statistically independent events. That means the count events at the detector follow a Poisson process, such that the recorded count numbers follow the Poisson distribution (Haight, 1967).

The Poisson distribution expresses the probability of a given number of events (here: counts $X$ equaling any given value $k$) occurring in a fixed interval of time, if these events occur with a known constant mean rate and independently of the time since the last event (Haight, 1967):

(2) $P(X = k) = \frac{\lambda^k}{k!}\exp(-\lambda)$,

where $X$ denotes the underlying random variable, $\lambda$ is the only parameter of the Poisson distribution, and $k$ is the number of times an event could actually occur within the given time interval. Due to the properties of the Poisson distribution (Haight, 1967), the parameter $\lambda$ is both the expected value $\mu_X = \mathrm{E}[X]$ and its variance $\sigma_X^2 = \mathrm{Var}[X]$:

(3) $\mu_X = \lambda, \quad \sigma_X^2 = \lambda \;\rightarrow\; \sigma_X = \sqrt{\lambda}$.

In this study, $k$ is the actual number of photon counts recorded by the detector in the specified time window. Let the time window have a unit length of $L_1$ (e.g., one second, to be consistent with the units used in Beer–Lambert's law). We assume that, if all other parameters remain constant, the observed counts $k$ (recorded photon counts in the unit-length time window $L_1$) are a random outcome of a Poisson-distributed random variable $X$. Given the physics of the system, Beer–Lambert's law can provide the corresponding expected value (Haight, 1967):

(4) $\frac{\mathrm{E}[X]}{L_1} = \frac{\lambda}{L_1} = I_1 = I_0 \exp(-\epsilon\rho d)$,

where the factor $L_1$ merely appears for unit conversion from Beer–Lambert intensities (per time) to Poisson counts (unitless number in a given time window).

A direct conversion of counts $k$ to a density value via Beer–Lambert's law means, statistically, to use the point estimate $\lambda = \mathrm{E}[X] \approx k$ in Eq. (4), so that one can insert $I_1 \approx k/L_1$ and then re-arrange for $\rho$. This estimate is accurate at sufficiently large values of $\lambda$, where the randomness of gamma counts can be ignored. For finite $\lambda$, however, the relative accuracy of $\lambda \approx k$, expressed as the coefficient of variation $CV_X$, is:

(5) $CV_X = \frac{\sigma_X}{\mathrm{E}[X]} = \frac{\sqrt{\lambda}}{\lambda} \approx \frac{\sqrt{k}}{k} = \frac{1}{\sqrt{k}}$.

The imprecision expressed by Eq. (5) tends to zero as $\lambda \to \infty$. For smaller expected count numbers $\lambda$, the random fluctuation of the counts $k$ does matter. This is the reason why we distinguish between an actually observed count $k$, the underlying random variable $X$ that follows the Poisson distribution, and its expected value $\mathrm{E}[X]/L_1 = I_1$, which is computable from Beer–Lambert's law according to Eq. (4).
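To make these statistics tangible, the following minimal Python sketch (ours, not part of the original study; the parameter values are illustrative assumptions anticipating the base case of Table 1) simulates repeated unit-window counts for a constant density, inverts Beer–Lambert's law per window, and confirms the 1/√k scaling of Eq. (5):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Assumed Beer-Lambert parameters (cf. the base case in Table 1):
I0 = 400.0      # expected incident count rate (1/s)
eps = 7.471e-3  # absorption concentration (m^2/kg)
d = 0.02        # system thickness (m)
rho = 2650.0    # true, constant density (kg/m^3)
L1 = 1.0        # unit-length counting window (s)

# Expected counts per unit window, Eq. (4)
lam = I0 * np.exp(-eps * rho * d) * L1

# Simulate many repeated windows and invert Beer-Lambert's law, Eq. (1)
k = rng.poisson(lam, size=100_000)
k = k[k > 0]                                  # the log below needs k > 0
rho_hat = np.log(I0 * L1 / k) / (eps * d)

print(f"lambda = {lam:.1f} expected counts")
print(f"CV of counts: {k.std() / k.mean():.4f} vs 1/sqrt(lambda) = {1 / np.sqrt(lam):.4f}")
print(f"recovered density: {rho_hat.mean():.0f} +/- {rho_hat.std():.0f} kg/m^3")
```

With these values, about 269 counts are expected per one-second window, so a single window recovers the density only to within a few hundred kg/m³.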
When choosing a time window $L$ that differs from $L_1$, one has to remember that the Poisson distribution is defined for a fixed time interval and expresses a plain count (not a rate), without units of "per time". Longer time windows will result, at the same count rate, in larger count numbers, but this must not be mistaken for a larger intensity or lower density according to Beer–Lambert's law. Therefore, one has to do the conversion from counts $k$ to rates $\theta = k/L$ (counts per time) as a data pre-processing step. Suppose we count the value $k_L$ over a time window $L$ (s) and subsequently want to compute the rate $\theta$ (1/s) for using $I_1 \approx \theta$ with Beer–Lambert's law. Then one would use

(6) $\theta = \frac{k_L}{L}$.

The random variable underlying $\theta$ (1/s) is denoted as $\Theta$, and the random variable underlying $k_L$ (–) is $X_L$. For the expected values of $\Theta$ and $X_L$, this means

(7) $\mathrm{E}[X_L] = \frac{L}{L_1}\cdot\mathrm{E}[X] \approx k_L$,

(8) $\mathrm{E}[\Theta] = \frac{\mathrm{E}[X]}{L_1} \approx \theta = \frac{k_L}{L}$,

because the longer time $L$ leads to larger count numbers $k_L$, and we compensate for that fact with Eq. (6). The relevant difference arises in the imprecision. Due to the Poisson distribution, the random count $X_L$ has a variance $\sigma^2_{X_L} = \mathrm{E}[X_L] = L/L_1\cdot\mathrm{E}[X]$ that is $L/L_1$ times larger than that of $X$. However, the random rate $\Theta$ is not Poisson distributed, as it is merely computed from a Poisson-distributed variable via $\Theta = X_L/L$ (in analogy to $\theta = k_L/L$). Instead, the division by $L$ in Eq. (6) makes its variance $L^2$ times smaller than that of $X_L$ via the plain rules of linearized error propagation. In combination, the variance of $\Theta$ is $L$ times smaller than that of $X$, and can be expressed in various forms:

(9) $\sigma^2_\Theta = \frac{\sigma^2_{X_L}}{L^2} = \frac{\sigma^2_X}{L L_1} = \frac{\lambda}{L L_1} = \frac{I_1}{L} \approx \frac{\theta}{L}$,

which confirms that longer time windows $L$ allow more precise statistical estimates of the Beer–Lambert intensity $I_1$, which will lead to more precise density estimates.

Based on the above equations, one can already construct a simple hypothesis test to specify how long an exposure time is required to distinguish two slightly different density values by GRA in spite of the imprecision of count rates. As this test is not yet in the context of time-variable densities, it is a slight side-track to our manuscript, and it can therefore be found in Appendix D.
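The variance scaling of Eq. (9) is easy to check numerically. The short sketch below (ours; $I_1 \approx 269$ counts/s corresponds to the assumed base-case parameters used in the previous snippet) draws counts for several window lengths and compares the empirical variance of the rate estimate to $I_1/L$:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
I1 = 269.0  # expected emergent count rate (1/s), from Eq. (4)

for L in (1.0, 10.0, 100.0):                  # counting windows (s)
    kL = rng.poisson(I1 * L, size=50_000)     # counts X_L per window
    theta = kL / L                            # rate estimates, Eq. (6)
    print(f"L = {L:5.0f} s: var(theta) = {theta.var():8.3f}, I1/L = {I1 / L:8.3f}")
```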
2.3 Errors in Monitoring Time-Dependent Densities: Raster Error and Noise Error

From now on, we consider that the density $\rho(t)$ is time-variable and that, consequently, in Eq. (1) we have $I_1(t)$ and $A_\lambda(t)$. Through Eq. (4), this makes the counting process in the detector a variable-rate Poisson process with rate $\lambda(t)$.

Figure 2 shows a possible temporal variation of the rate parameter $\lambda(t)$. The red line is the theoretical $\lambda(t)$, whereas the blue crosses are observed count values $k_L(t)$ counted in discrete time windows of some length $L$. For the sake of clear illustration, we have deliberately chosen a very small $L$ compared to the total time length of the plot. This results in many blue crosses (photon counts) with high imprecision according to Eq. (5).

Fig. 2 Illustration of variable $I_1(t)$ caused by variable $\rho(t)$ in GRA monitoring. The red line shows the theoretical rate $\lambda(t)$ of the underlying variable-rate Poisson process and the blue crosses show photon counts $k_L(t)$ counted in discrete time windows of length $L$

As already mentioned, an important parameter for designing dynamic GRA is the time window size $L$. This is the exposure time that controls the expected number of photon counts received by the detector. At the same time, the length $L$ also controls the temporal resolution of the density to be monitored, as detailed in the following.

Let us first consider the temporal resolution aspect. Basically, we cannot resolve rate variations of recorded counts within the time windows of length $L$. Instead, we will get a discretized, piecewise constant raster curve $\hat{I}_1(t)$ to approximate the theoretical intensities $I_1(t)$ at a sampling rate of $1/L$. Hence, we also obtain a piecewise constant raster curve $\hat{\rho}_L(t)$ to approximate the targeted density $\rho(t)$. The difference between $\hat{\rho}_L(t)$ and $\rho(t)$ is an error of temporal discretization. We call the resulting error the raster error. Even with an infinitely strong source, where the statistical imprecision of counts would not matter, the raster error would persist for any exposure time other than zero.

In realistic setups with finite source strengths, each piecewise constant value of the raster curve $\hat{\rho}_L(t)$ will also be subject to the statistical imprecision of using counts $k_L(t)$ or count rates $\theta(t)$ as point estimates instead of theoretical intensities $I_1(t)$. As this is a statistical imprecision caused by the Poisson distribution, we will call it the noise error.

The combination of raster error and noise error makes up the total error of monitoring time-dependent densities by GRA. When expressing all these errors as variances (expected squared errors) and modeling these errors as uncorrelated, the total error $\sigma^2_{\text{total}}$ of monitoring time-dependent densities $\rho(t)$ is the sum of the raster error $\sigma^2_{\text{raster}}$ and the noise error $\sigma^2_{\text{noise}}$:

(10) $\sigma^2_{\text{total}}(L) = \sigma^2_{\text{raster}}(L) + \sigma^2_{\text{noise}}(L)$.

These errors will be explained in the following sections. The assumption of statistical independence between raster error and noise error is justified, as the raster error is, in fact, deterministic and caused by a discretization choice of the experimentalist, while the noise error stems from the stochastic Poisson process of emission and attenuation. Of course, both errors are linked in that the choice of $L$ jointly affects both, but the two errors are not directly dependent on each other, so they are statistically independent for any given choice of $L$. In statistics, this is called conditional independence, and this is sufficient to write both errors, as in the above equation, as independent per $L$.

2.4 Formal Optimization of Expected Errors

It is understood that the total error $\sigma^2_{\text{total}}$ according to Eq. (10) is a function of the time-window size $L$. On the one hand, large windows can reduce the noise error $\sigma^2_{\text{noise}}$, but cannot resolve dynamics; hence, they will lead to large raster errors $\sigma^2_{\text{raster}}$. On the other hand, small windows could resolve the dynamics well with small raster errors, but suffer from larger noise errors. The optimal (minimum) total error will be achieved through an optimal window size $L = L_{\text{opt}}$ that strikes the optimal compromise between the raster and noise error (see Fig. 3). It can be found by optimization as follows:

Fig. 3 Optimization of the total error as a function of the time window size

(11) $L_{\text{opt}} = \arg\min_{L\in\mathbb{L}} \sigma^2_{\text{total}}(L)$,

where $L_{\text{opt}}$ is the optimal window size.
It is the argument (arg) that minimizes (min) the total error $\sigma^2_{\text{total}}(L)$ over all possible window sizes $L \in \mathbb{L}$, where $\mathbb{L}$ is the range of feasible exposure times $L$. In Sect. 3.3, we also discuss how to extend our approach toward an overall optimization across a much larger set of experimental settings.

Next, we need to formulate how exactly $\sigma^2_{\text{noise}}$ and $\sigma^2_{\text{raster}}$ depend on $L$. Before performing an experiment, the best we can do is to optimize expectations of these errors, which is exactly why we express them as variances (i.e., expected squared errors) in Eq. (10). The Poisson distribution for varying time windows upon conversion to count rates (Eqs. (2) to (9)) gives us expectations for the noise error $\sigma^2_{\text{noise}}$, if we can anticipate (at least statistically) possible photon counts $k$ and corresponding count rates $\theta$. This, in turn, requires anticipating possible densities $\rho(t)$ for Eq. (4).

For the raster error $\sigma^2_{\text{raster}}$, we also need some statistical anticipation of the time-dependent density. Then, we can mathematically derive expected squared errors for approximating an allegedly true density curve $\rho(t)$ by a piecewise constant, noise-affected raster curve $\hat{\rho}(t)$. For this reason, in the upcoming section, we will treat the time-dependent density $\rho(t)$ as a Gaussian process.

2.5 Gaussian Processes for Time-Dependent Densities

Gaussian processes (GPs) are widely used in machine learning and its applications. In the research field of geosciences, GPs are also used for interpolation methods known as Kriging (Matheron, 2019). In time series analysis, they occur as various types of autoregressive processes. From the mathematical point of view, GPs can be seen as a generalization of the well-known multivariate Gaussian probability distribution, which is completely described by its mean and (co)variance. For a comprehensive introduction to GPs, we refer to the book of Rasmussen (2006).

In our case, a GP over time, $f(t)$, can be seen as a collection of random variables indexed by time $t$, such that each finite selection of points in time results in a multivariate Gaussian distribution. A GP is completely specified by its mean function $m(t) = \mathrm{E}[f(t)]$ and covariance function $\mathrm{Cov}(t,t') = \mathrm{E}[(f(t)-m(t))(f(t')-m(t'))]$. The latter describes the covariance between pairs of random variables $f(t)$ and $f(t')$ at two different points in time $t$ and $t'$. Altogether, a GP $f(t)$ is usually denoted by $f(t) \sim GP(m(t), \mathrm{Cov}(t,t'))$.

There are many choices of covariance functions available. Most of them are parametric shapes that can be adapted to current needs via so-called hyperparameters. The most common hyperparameters are the variance $\sigma^2$, which describes the magnitude of fluctuations, and the correlation length $l$, which can be seen as the timespan that should pass between $t'$ and $t$ before the value of $f(t)$ can differ significantly from $f(t')$. For further reading, we refer again to the book of Rasmussen (2006).

As a modeling assumption that admits statistical expectations as argued in Sect. 2.4, we assume that the time-dependent density $\rho(t)$ follows a GP:

(12) $\rho(t) = GP\left(\rho_m, \mathrm{cov}(t,t',\sigma_\rho,l)\right)$.

Here, $\rho_m$ is the mean density, defined as

(13) $\rho_m = \rho(I_1) = \mathrm{E}[\rho]$.

For now, this is a temporally constant mean; if an experimentalist can make more specific assumptions on temporal trends, these can easily be included in further developments.

As covariance function, we use the so-called squared exponential kernel $K_{SE}$ (Rasmussen, 2006) with time lag $r = (t - t')$, variance $\sigma^2_\rho$, and correlation length $l$, such that

(14) $K_{SE}(r) = \sigma^2_\rho \exp\left(-\frac{r^2}{2l^2}\right)$.

Expressing the mean as a constant and the covariance as a function of the lag $r = (t-t')$ makes the GP stationary, which means that all its statistics are invariant with respect to shifts in time. Among others, this means that there are no trends, neither in the mean nor in the expected amplitude of fluctuations, nor in their correlation. Choosing the squared exponential kernel leads to very smooth functions $\rho(t)$, while, e.g., the plain exponential kernel would lead to continuous but non-differentiable $\rho(t)$. For illustration, Fig. 12 (Appendix C) shows a collection of randomly generated functions from a GP.
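Such GP realizations can be generated with a few lines of Python. The sketch below is ours (the time discretization and seed are arbitrary choices; the hyperparameters are the base-case values of Table 1):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

rho_m, sigma_rho, l = 2650.0, 300.0, 100.0  # mean (kg/m^3), std (kg/m^3), corr. length (s)
t = np.linspace(0.0, 1000.0, 500)           # observation times (s)

# Squared exponential covariance matrix, Eq. (14)
r = t[:, None] - t[None, :]
K = sigma_rho**2 * np.exp(-(r**2) / (2.0 * l**2))

# Five density realizations rho(t); a small jitter stabilizes the factorization
rho_t = rng.multivariate_normal(np.full(t.size, rho_m),
                                K + 1e-8 * np.eye(t.size), size=5)
print(rho_t.shape)  # (5, 500)
```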
2.6 Analytical Approach

With $\rho(t)$ modeled as a GP, we can quantify the errors $\sigma^2_{\text{noise}}$ and $\sigma^2_{\text{raster}}$. The analytical approach is much faster because it avoids Monte-Carlo simulation. It is based only on analytically available statistics of GPs and of the Poisson distribution, and uses linear error propagation from intensities to densities via a first-order Taylor expansion of Beer–Lambert's law.

2.6.1 Raster Error

We define the raster error $\sigma^2_{\text{raster}}$ (or dispersion variance in the context of GPs) as:

(15) $\sigma^2_{\text{raster}} = \mathrm{E}\left[\left(\rho(t) - \hat{\rho}(t)\right)^2\right] = \sigma^2_\rho - \sigma^2_s$,

where $\sigma^2_\rho$ is the total variance of the time-varying density (i.e., the variance of the GP), and $\sigma^2_s$ is the so-called smoothed variance (i.e., the variance recovered by the rastered time series $\hat{\rho}(t)$). The term dispersion variance is a statistical term (where dispersion relates to variance), not to be confused with optical dispersion. The smoothed variance $\sigma^2_s$ can be quantified as

(16) $\sigma^2_s = \frac{1}{L^2}\int_0^L\!\!\int_0^L \mathrm{cov}(t,t',l,\sigma_\rho)\,\mathrm{d}t\,\mathrm{d}t'$,

where $L$ is the exposure time. Combining these two equations, the raster error yields

(17) $\sigma^2_{\text{raster}} = \sigma^2_\rho - \frac{1}{L^2}\int_0^L\!\!\int_0^L \mathrm{cov}(t,t',l,\sigma_\rho)\,\mathrm{d}t\,\mathrm{d}t'$.

The derivation of the analytical solution $F_{SE}(L)$ for the double integral in Eq. (17) for the squared exponential kernel is provided in Appendix A, Eq. (46):

(18) $F_{SE}(L) = \int_0^L\!\!\int_0^L \mathrm{cov}(t - t')\,\mathrm{d}t\,\mathrm{d}t' = \sqrt{2\pi}\,\sigma^2 l L\cdot\mathrm{erf}\left(\frac{L}{\sqrt{2}\,l}\right) + 2l^2\left[\mathrm{cov}(L) - \mathrm{cov}(0)\right]$.

Note that $\mathrm{cov}(r=0) = \sigma^2$. Therefore, the raster error follows as

(19) $\sigma^2_{\text{raster}} = \sigma^2_\rho - \frac{1}{L^2}F_{SE}(L)$.

From Eqs. (18) and (19), the raster error $\sigma^2_{\text{raster}}$ depends, besides on the desired time window size $L$, on the anticipated variance $\sigma^2_\rho$ of the variable density, its correlation length $l$, and on the entire covariance function $\mathrm{cov}(\cdot)$ of the GP. These are the statistical expectations concerning the amplitude and frequencies of $\rho(t)$. In the current context, they describe how difficult it is to approximate the true dynamics with a rastered curve $\hat{\rho}(t)$ that is piecewise constant per time window of length $L$.
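In code, Eqs. (18)–(19) amount to a few lines. The following sketch is ours (the function name is our choice); it also verifies the closed form against a brute-force quadrature of the double integral in Eq. (17):

```python
import numpy as np
from scipy.special import erf

def raster_error(L, sigma_rho, l):
    """Expected raster variance for the squared exponential kernel, Eqs. (18)-(19)."""
    cov = lambda r: sigma_rho**2 * np.exp(-(r**2) / (2.0 * l**2))  # Eq. (14)
    F = (np.sqrt(2.0 * np.pi) * sigma_rho**2 * l * L * erf(L / (np.sqrt(2.0) * l))
         + 2.0 * l**2 * (cov(L) - cov(0.0)))                       # Eq. (18)
    return sigma_rho**2 - F / L**2                                 # Eq. (19)

# Brute-force check of the double integral in Eq. (17)
sigma_rho, l, L = 300.0, 100.0, 50.0
tt = np.linspace(0.0, L, 2001)
C = sigma_rho**2 * np.exp(-((tt[:, None] - tt[None, :])**2) / (2 * l**2))
sigma2_s = np.trapz(np.trapz(C, tt, axis=1), tt) / L**2
print(raster_error(L, sigma_rho, l), sigma_rho**2 - sigma2_s)     # both ~1.8e3
```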
2.6.2 Noise Error

Recall that photon counts $k_L$ (–) counted in a time window $L$ (s) are converted to a rate $\theta$ (1/s) via Eq. (6). Also, recall that, before conducting the experiment, we statistically anticipate results by random variables. Here, this means working with the random count rate $\Theta$ rather than with the actual count rates $\theta$. The random count rates $\Theta$ have an expected value of $I_1$ given by Eqs. (4) and (8). For any set of Beer–Lambert parameters $[\epsilon, \rho, d, I_0]$, we get an expected count number $\lambda$ per unit time via Eq. (4), and then convert $\lambda$ to the expected count rate $\theta_0 = \mathrm{E}[\Theta]$. Then, according to Eq. (9), we can anticipate their imprecision as $\sigma^2_\Theta = I_1/L$. Equation (4) in this procedure requires a value for density, for which we use the expected density $\rho_m$ that we already used as part of the GP.

Yet, this is only the imprecision of the random count rate. What we need now is to translate it into the noise variance $\sigma^2_{\text{noise}}$ of the densities $\rho$ that we would compute from Beer–Lambert's law with the point estimate $I_1 \approx \theta$. Thus, we need an uncertainty propagation from $\Theta$ (in lieu of $I_1 \approx \theta$) via Beer–Lambert's law (Eq. (1)) to densities $\rho$. Since the Beer–Lambert equation $\rho = \rho(\theta)$ is nonlinear, we use linearized error propagation via Taylor expansion, truncated after the first derivative as announced above:

(20) $\rho(\theta) \approx \rho(\theta_0) + \left.\frac{\partial\rho}{\partial\theta}\right|_{\theta_0}\left(\theta - \theta_0\right)$,

where we use $\theta_0 = \mathrm{E}[\Theta]$ as the expansion point. In Appendix B, we provide the derivation of the noise error from the above considerations and Eq. (20):

(21) $\sigma^2_{\text{noise}}(L) = \frac{\exp\left(\epsilon\rho_m d\right)}{I_0 L\,\epsilon^2 d^2}$.

This expression contains all Beer–Lambert parameters that make up the attenuation, including the mean density $\rho_m$ as a statistical expectation, and it is inversely proportional to the time window $L$, i.e., the noise error decreases with increasing $L$.
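The corresponding sketch for Eq. (21) (again ours; the function name is our choice):

```python
import numpy as np

def noise_error(L, I0, eps, d, rho_m):
    """Expected noise variance of the density estimate, Eq. (21)."""
    return np.exp(eps * rho_m * d) / (I0 * L * eps**2 * d**2)

# Base case (Table 1): standard deviation for a 50 s exposure
print(np.sqrt(noise_error(50.0, 400.0, 7.471e-3, 0.02, 2650.0)))  # ~58 kg/m^3
```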
2.7 Numerical Approach

In contrast to the first-order analytical approximation in Sect. 2.6, the numerical approach offers arbitrarily high accuracy in computing the total error $\sigma^2_{\text{total}}(L)$, yet only in the limit of infinite computer time spent on the involved Monte-Carlo simulations. It simulates the photon counter by randomly generating realizations of $\rho(t)$ from the GP (Fig. 4a). Then it produces random numbers of counts $k_L$ per time window of length $L$ by randomly drawing from the Poisson distribution (Eq. (2)). Upon conversion to count rates $\theta$, this yields random time series $\theta(t)$, used as a point estimate for $\hat{I}_1(t)$. For comparison, the same is done without Poisson noise to receive a noise-free reference curve. Then, we convert both time series (noisy and idealized, noise-free count rates) to density raster curves $\hat{\rho}_L(t)$ via Beer–Lambert's law. Finally, we compare the noise-affected versions of the raster curve for density (Fig. 4c) with the noise-free raster curve (Fig. 4b) and with the originally simulated exact curve (Fig. 4a) to measure the errors.

Fig. 4 Steps within the numerical quantification of errors for one density sample: a density realization $\rho(t)$ randomly generated from the GP; b rastered density realization $\hat{\rho}(t)$ computed noise-free from $\rho(t)$ to quantify the raster error against the original realization $\rho(t)$; c many realizations of noise-affected step curves $\hat{\rho}(t)$ for quantifying the noise error by comparison to the noise-free step curve from (b)

When this procedure is repeated many times ($n_{MC}$) via Monte-Carlo simulation, we can numerically extract the total error as defined in Eq. (10), and also extract the individual terms $\sigma^2_{\text{raster}}$ (Fig. 4b) and $\sigma^2_{\text{noise}}$ (Fig. 4c). For the latter, it is necessary to generate many $\theta(t)$ realizations for each realization of $\rho(t)$.

2.8 Optimization and Implementation

With $\sigma^2_{\text{total}}(L)$ expressed analytically as in Sect. 2.6, it can be seen that the optimization problem in Eq. (11) is convex and differentiable. Hence, we can find $L_{\text{opt}}$ by setting the derivative to zero: $L_{\text{opt}}: \partial\sigma^2_{\text{total}}(L)/\partial L = 0$. The derivative of the noise error (Eq. (21)) is

(22) $\frac{\partial\sigma^2_{\text{noise}}(L)}{\partial L} = \frac{-1}{(\epsilon d)^2\,\mathrm{E}[I_1]}\frac{1}{L^2}$.

The derivative of the raster error (Eq. (49) in Appendix A) is

(23) $\frac{\partial\sigma^2_{\text{raster}}(L)}{\partial L} = \frac{2}{L^3}F(L) - \frac{1}{L^2}F(L)'$.

Therefore, the derivative of the total error is

(24) $\frac{\partial\sigma^2_{\text{total}}(L)}{\partial L} = \frac{\partial\sigma^2_{\text{noise}}(L)}{\partial L} + \frac{\partial\sigma^2_{\text{raster}}(L)}{\partial L} = \frac{-1}{(\epsilon d)^2\,\mathrm{E}[I_1]}\frac{1}{L^2} + \frac{2}{L^3}F(L) - \frac{1}{L^2}F(L)'$,

where $F(L)$ and $F(L)'$ are defined in Eqs. (46) and (47), respectively. While analytical solutions may be achievable for simpler covariance functions, for our current case we find the roots of Eq. (24) numerically. For this, we use the function fzero as implemented in MATLAB (The MathWorks, Inc., 2016).

For solving the optimization based on the numerical approach from Sect. 2.7, we call the numerical procedure described there for a finely resolved list of window sizes $L$ (in steps of one second) and pick the optimal value $L_{\text{opt}}$ manually. Figure 5 shows a flowchart that summarizes the steps to optimize the exposure time.

Fig. 5 Flowchart summarizing the steps to optimize the exposure time
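In Python, the analytical optimization can be sketched with SciPy's bounded scalar minimizer in place of MATLAB's fzero (a sketch reusing the hypothetical raster_error and noise_error helpers from the snippets above; since the total error is convex, minimizing it directly is equivalent to finding the root of Eq. (24)):

```python
from scipy.optimize import minimize_scalar

# Base case parameters (Table 1)
I0, d, rho_m, eps = 400.0, 0.02, 2650.0, 7.471e-3
sigma_rho, l = 300.0, 100.0

total_error = lambda L: (raster_error(L, sigma_rho, l)
                         + noise_error(L, I0, eps, d, rho_m))
res = minimize_scalar(total_error, bounds=(0.1, 1000.0), method="bounded")
print(f"L_opt = {res.x:.1f} s, sigma_total = {res.fun**0.5:.1f} kg/m^3")
# For this base case, L_opt lands near the 48.8 s reported in Sect. 3.1.
```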
2.9 Scenarios

To illustrate our suggested approach, we first use a base case scenario, whose parameters are given in Table 1. The incident intensity (count rate $I_0 = 400$/s) can be seen as a typical value from literature reports. It is chosen such that variations by factors of up to 0.2 and 5, as part of our later sensitivity analysis, are still meaningful. In actual experiments, $I_0$ can be controlled by the choice of the source material (e.g., Taqi & Khalil, 2017) and through technical details of the collimator (e.g., Mayar et al., 2019).

As the width of the system, we use a relatively thin width of $d = 2$ cm. This is mainly to ensure that variations by the same factors as for $I_0$ still yield plausible values. As mean density $\rho_m = 2650$ kg/m³ and absorption concentration $\epsilon = 7.471\times10^{-3}$ m²/kg, we take values representative of aluminum as used in recent GRA experiments by Mayar et al. (2019). Aluminum, by pure chance, has the same density as quartz, which is the main constituent of many sand-based natural porous media.

For the time-dependent density, we take a GP variance $\sigma^2_\rho = (300\,\mathrm{kg/m^3})^2$, i.e., a standard deviation of $\sigma_\rho = 300$ kg/m³. As GPs assume a Gaussian distribution, this implies that about 95% of all variable densities should fall into the interval $\rho_m \pm 2\sigma_\rho = [2050, 3250]$ kg/m³. This is a proportion that could be caused by changes in the porosity of a porous material.

As correlation length for the GP, we take $l = 100$ s. This choice is almost arbitrary, since the density processes to be monitored could act, depending on the experimental conditions and the considered density-changing processes, on time scales between seconds and months, e.g., for hydraulic-flume colmation (Mayar et al., 2020) or experimental investigations of mineral precipitation in porous media (Hommel et al., 2015). Again, the variations by factors of up to 0.2 and 5 are still plausible values. We are not fixing a value for the exposure time $L$, because the choice of exposure time will be subject to screening as part of the optimization.

Table 1 Values of the parameters used in the base case scenario

Parameter (units) | Base value
$I_0$ (s⁻¹) | 400
$d$ (m) | 0.02
$\rho_m$ (kg/m³)* | 2650
$\epsilon$ (m²/kg)** | 7.471 × 10⁻³
$\sigma_\rho$ (kg/m³) | 300
$l$ (s) | 100

*Equivalent to quartz sand and aluminum
**Equivalent to aluminum

To investigate sensitivity questions about the optimal experimental design, we also run a series of scenario variations. We change one parameter at a time: the Beer–Lambert parameters ($I_0$, $d$, $\epsilon$, $\rho_m$) and the GP hyperparameters ($\sigma_\rho$, $l$). These scenarios are defined in Table 2. For all varied parameters, we divide and/or multiply their base values by factors of 5, 2, and 1.33. A few of the resulting parameter combinations may be atypical for realistic experimental conditions, but we prefer having systematic and identical variations across all parameters for clearer interpretation.

Table 2 Values of the parameters for the sensitivity analysis. Other parameters are fixed (as in Table 1)

Scenario series | Parameter | ÷5 | ÷2 | ÷1.33 | Base case | ×1.33 | ×2 | ×5
1 | $\sigma_\rho$ (kg/m³) | 60 | 150 | 225 | 300 | 400 | 600 | 1500
2 | $l$ (s) | 20 | 50 | 75 | 100 | 133.3 | 200 | 500
3 | $I_0$ (s⁻¹) | 80 | 200 | 300 | 400 | 533.3 | 800 | 2000
4 | $d$ (m) | 0.002 | 0.005 | 0.0075 | 0.01 | 0.013 | 0.02 | 0.05
5 | $\epsilon$ (m²/kg) | 1.494 × 10⁻³ | 3.736 × 10⁻³ | 5.600 × 10⁻³ | 7.471 × 10⁻³ | 9.936 × 10⁻³ | 14.942 × 10⁻³ | 37.355 × 10⁻³
6 | $\rho_m$ (kg/m³) | 530 | 1000 | 1987.5 | 2650 | 3533.3 | 5300 | 13250

3 Results and Discussion

3.1 Base Case and Comparison of the Analytical and Numerical Approach

Figure 6 shows the results of the analytical and numerical approaches. It shows both the individual error terms (noise error: red lines; raster error: blue lines) and the resulting total error (black lines) over the window size $L$. As the difference between analytical results (solid lines) and numerical results (dashed lines) is barely visible in Fig. 6, we anticipate two scenario variations by using widths of $d = 0.01$ m and $d = 0.10$ m to confirm this. Since their plots show essentially the same behavior as Fig. 6, we show the corresponding results in Appendix C (Fig. 13).

First, in Fig. 6, the noise error decreases with $L$, while the raster error increases with $L$, as expected from Sect. 2.3. Both approaches, analytical and numerical, provide very similar results. Visible differences are observed in the raster error for $L$ greater than 80 s (Fig. 6).

The vertical cyan line in Fig. 6 shows $L_{\text{opt}}$ for the analytical approach, while the vertical magenta line shows $L_{\text{opt}}$ for the numerical approach. Both optimal solutions are very similar, with $L_{\text{opt,analytical}} = 48.8$ s and $L_{\text{opt,numerical}} = 49$ s ($L$ in the numerical approach is scanned at a resolution of 1 s). Since these solutions are very similar, and because the analytical approach is much faster (3 s versus 2 h on a regular, contemporary desktop computer), we will use the analytical approach for the scenario variations (Table 2) and recommend it for use in future studies.

Fig. 6 Noise error, raster error, and total error as a function of window size: comparison of results computed by the analytical and numerical approaches for the base case in Table 1

3.2 Influences of Parameters on Optimal Window Size and Attainable Minimal Errors

We grouped the results into two groups: parameters that affect the raster error and parameters that influence the noise error. Looking at the equations of both errors (Eqs. (19) and (21), respectively), we can see that the raster error is influenced only by those hyperparameters of the GP that describe the changes of density ($\sigma^2_\rho$ and $l$). In contrast, the noise error is influenced only by the parameters of Beer–Lambert's law ($I_0$, $\rho_m$, $d$, and $\epsilon$).
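The one-parameter-at-a-time sweeps reported in the following subsections can be reproduced by wrapping the optimization in a loop. The sketch below (ours; it reuses the hypothetical helpers and base-case values from the earlier snippets) shows this for scenario series 1:

```python
from scipy.optimize import minimize_scalar

for factor in (1/5, 1/2, 1/1.33, 1.0, 1.33, 2.0, 5.0):  # Table 2, series 1
    s = 300.0 * factor                                   # vary sigma_rho only
    res = minimize_scalar(
        lambda L: raster_error(L, s, l) + noise_error(L, I0, eps, d, rho_m),
        bounds=(0.1, 1000.0), method="bounded")
    print(f"sigma_rho = {s:6.1f} kg/m^3 -> L_opt = {res.x:5.1f} s")
```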
3.2.1 Influence of Signal Variance and Correlation Length on the Raster Error

To analyze the sensitivity of $L_{\text{opt}}$ with respect to the expected amplitude of density variations (represented by the GP variance $\sigma^2_\rho$), we vary $\sigma^2_\rho$ according to scenario variation 1 of Table 2. Note that results are shown as a function of the density's standard deviation $\sigma_\rho$ and not the variance $\sigma^2_\rho$. As Fig. 7a shows, when $\sigma_\rho$ is increased, the raster error increases too. The noise error is not affected, since it does not depend on $\sigma_\rho$ (Eq. (21)). The rise of the raster error forces $L_{\text{opt}}$ to decrease when $\sigma_\rho$ is increased (Fig. 7a, b). In other words: more pronounced changes of $\rho(t)$ need a finer resolution (smaller $L$), i.e., the stronger the changes occurring in a system, the more "snapshots" will be needed to capture the whole dynamic. This is intuitive and not surprising, but now we can quantify it and produce an optimal experimental plan. Also in Fig. 7b, we observe that the optimal total error $\sigma^2_{\text{total}}(L_{\text{opt}})$ decreases very strongly with decreasing strength of changes (i.e., with decreasing $\sigma^2_\rho$).

Fig. 7 a Errors and b optimal window sizes and total error as a function of the density's standard deviation in Table 2

Similarly, to analyze the sensitivity of $L_{\text{opt}}$ with respect to the correlation length $l$, we vary $l$ according to scenario variation 2 in Table 2. We recall that the length scale $l$ of the GP represents the characteristic time scale on which we anticipate the changes of $\rho(t)$ to act. As seen in Fig. 8a, when $l$ increases (i.e., the dynamics are slower), the raster error decreases, and consequently the total error decreases too. Again, the noise error is not affected, since it does not depend on $l$ (Eq. (21)). When $l$ increases, $L_{\text{opt}}$ increases too (Fig. 8b).

Fig. 8 a Errors and b optimal window sizes and total error as a function of correlation length in Table 2

This means that faster dynamics in $\rho(t)$ (smaller $l$) call for smaller $L$. Slower dynamics (larger $l$) allow for larger $L$, and hence for strongly reduced noise error while still resolving the dynamics well. Overall, the attainable total error is smaller for slower dynamics.

3.2.2 Impact of Incident Intensity, Thickness, Substance Absorption, and Mean Density on the Noise Error

To analyze the sensitivity of $L_{\text{opt}}$ with respect to the incident intensity $I_0$, we vary $I_0$ according to scenario variation 3 in Table 2. As seen in Fig. 9a, when the experimentalist increases $I_0$, the noise error decreases (Eq. (21)), and consequently the total error decreases too. The raster error is not affected, since it does not depend on $I_0$ (Eq. (19)). When $I_0$ increases and the importance of the noise error fades, $L_{\text{opt}}$ can decrease (Fig. 9b) to better resolve the changes at smaller total errors. Therefore, the clear recommendation is to use a strong source. If the source cannot be varied, one can also use a larger collimator to obtain a larger $I_0$. Using larger collimators, however, would reduce the spatial resolution if density were spatially heterogeneous. For such cases, the optimal collimator size could be determined as shown by Mayar et al. (2019).

Fig. 9 a Errors and b optimal window sizes and total error as a function of incident intensity in Table 2

Parameters $d$ and $\epsilon$ play identical roles in Eq. (24) via Eq. (4); therefore, the results should be identical. To check this, we provide both sets of results. That means we vary $d$ according to scenario 4 in Table 2; additionally, we proceed with $\epsilon$ according to scenario 5 of Table 2. Results for the impact of $d$ and $\epsilon$ are displayed in Fig. 10.
Fig. 10 a Errors and b optimal window sizes and total error as a function of thickness in Table 2; c errors and d optimal window sizes and total error as a function of the absorption of the substance $\epsilon$ in Table 2

We observe in Fig. 10a, b that only the noise error (Eq. (21)) is affected by $d$ and $\epsilon$, respectively; the raster error is not affected, as is already visible from the respective equations. The noise error decreases when increasing the thickness $d$, at least for the values studied in Table 2. $L_{\text{opt}}$ also decreases when decreasing $d$. However, this decreasing behavior changes when we apply a broader range of thicknesses. These results are shown and discussed in Appendix C (Fig. 14). The impact of $\epsilon$ (Fig. 10c, d) shows the same non-monotonic behavior on $L_{\text{opt}}$ as $d$. In fact, except for a change of scale on the x-axis, all results are identical in theory and practice.

Finally, we analyze the sensitivity of $L_{\text{opt}}$ with respect to $\rho_m$. That means we vary $\rho_m$ according to scenario 6 of Table 2. We observe in Fig. 11a that, when the mean density increases, the noise error also increases, while the raster error remains constant (as expected from the respective equations). When $\rho_m$ increases the noise error, $L_{\text{opt}}$ has to increase as well (Fig. 11b) to compensate for the rising noise by longer exposure. However, the observable increments of the total error and the change in $L_{\text{opt}}$ over the studied range of $\rho_m$ in Table 2 are by far not as large as for the other parameters.

Fig. 11 a Errors and b optimal window sizes and total error as a function of mean density in Table 2

Comparing the behavior of $\rho_m$ with that of $d$ and $\epsilon$, one may wonder about the differences, as $\rho_m$ seems to have the same role in Beer–Lambert's law (Eq. (1)) as $d$ and $\epsilon$. However, $\rho_m$ is mathematically different, as it is the quantity of interest we want to observe. Instead of occurring quadratically in the denominator of Eq. (21), it shows up in (expected) quadratic form on the left-hand side as the noise variance. From that point of view, it could be moved into the denominator in (expected) quadratic form just as well.

3.3 Overall Optimal Experimental Design

Recall that both $d$ and $\epsilon$ show a non-monotonic influence on $L_{\text{opt}}$ and on the attainable total error. Instead, they show a convex impact, just like the exposure time $L$. Convex is the mathematical term that refers to an upward-curved function, and it guarantees a well-defined and unique minimum. Altogether, the total error is a convex function of $L$, $d$, and $\epsilon$, and hence will possess an overall joint optimum with respect to these three parameters. Figure 14 (in Appendix C) shows individual cross-sections through this function. That figure visualizes, at least, pairwise optima. The overall optimum among all three at the same time can be even better.

The location of this combined optimum (across $L$, $d$, and $\epsilon$) depends on all remaining parameters considered in this study ($I_0$, $\rho_m$, $\sigma^2_\rho$, $l$). All four of these showed monotonic behavior; hence, extreme values of them are best for the experiment. However, not all of them are easy to control at extreme values. Incident intensities have a manageable upper limit, and the remaining three parameters $\rho_m$, $\sigma^2_\rho$, and $l$ are given by the density process one wishes to monitor. Of course, the experimentalist can try to stimulate the time-dependent density process to be strongly pronounced but slow for a best-possible signal.

This facilitates an overall iteration between technical optimization (for $L$, $d$, and $\epsilon$) and other experimental controls (affecting $\rho_m$, $\sigma^2_\rho$, $l$): First, choose the largest available $I_0$, fix the statistical expectations $\rho_m$, $\sigma^2_\rho$, and $l$, and use the $\epsilon$ of the planned material.
Then, take our expressions for the total error (Eqs. (10), (19), and (21)), and optimize jointly across $L$ and $d$. If the material itself is also exchangeable, include $\epsilon$ in this optimization. If the overall attainable minimal errors are not satisfying, think about process controls to modify $\rho_m$, $\sigma^2_\rho$, and $l$, until a promising overall setup is found.

3.4 Limitations

Working with statistical expectations is the best one can do before conducting an experiment, and our approach relies on GPs as a tool for doing this. Yet, the anticipation of time-dependent densities by a GP may be a difficult guess in practice. In addition, the real density curve resulting from the planned experiment may be more complex than the realization of a stationary GP. For example, it may exhibit non-stationarity in the mean (i.e., trends) or in its dynamics (i.e., changing from slow to fast), or it may exhibit abrupt jumps. For such cases, more detailed and involved modeling than a stationary GP could be advised. We chose GPs because they are mathematically convenient to handle, and because we are convinced that the key parameters of a GP represent pieces of information that an experimentalist may actually be able to foresee with some confidence. However, the field of stochastic processes and random functions is much richer than just GPs.

A second limitation is that we allow density to be time-dependent, but ignore that it may be variable in space. In spatially heterogeneous systems, there will be an additional need for spatial scanning (grid resolution), on top of the question of temporal rastering (sampling rate). Then, the question investigated here expands: how long should one record photon counts at each spatial location before moving on to the next position, and how finely should one resolve spatially between these different positions? Together, the time per location and the number of locations yield a turnaround time before revisiting locations. Then, one would desire a minimum total error that consists of spatial raster error, temporal raster error, and noise error. The contribution in the current paper is the foundation for such further research.

A related limitation of the sensitivity analysis (not of our optimization method) affects the statement that decreasing the thickness also decreases $L_{\text{opt}}$ and will lead to lower overall errors. This is correct as long as there is a reliable density of the material. Under realistic conditions, e.g., when analyzing the density of porous media, the (bulk) density of thin samples is not yet statistically stable. This holds for all sample dimensions that are still below the size of a representative elementary volume. Then, changing the thickness may also change the bulk density in a hard-to-predict manner, destroying the independence between density and thickness. In our theoretical study, we assumed these two factors to be independent. In practice, the corresponding aspects of the sensitivity study will hold on average, but they are not guaranteed to hold for each individual case.
Finally, recall that we assumed a monochromatic source of gamma rays, so that the description by Beer–Lambert's law is accurate. However, applying our framework to polychromatic GRA experiments is a good approximation when the absorption coefficient does not change strongly across the involved wavelengths. Otherwise, the Taylor expansion for linearized error propagation from intensities to densities needs to be written as an average of Taylor expansions across the polychromatic spectrum. This is possible, but would simply inflate the notation.

4 Summary and Conclusions

In this study, we mathematically modeled and then minimized the errors that arise when monitoring density (caused by volume fractions of individual constituents in multiphasic/porous media) as a function of time in GRA experiments. In such experiments, one has to choose a so-called exposure time, which is the time spent counting photon impacts of the attenuated gamma rays after passing through the system, before converting the count to an intensity and then to a density value. We showed that the total error in observing time-dependent densities has two terms: the raster error and the noise error. The raster error arises from approximating the truly variable density by a raster curve, consisting of piecewise constant density values per exposure time window. The noise error arises due to the stochastic nature of gamma-ray emission and attenuation. Both errors are controlled by the exposure time, but in opposite directions: long exposure times capture the dynamics poorly (raster error), whereas short exposure times yield imprecise values (noise error). These two errors sum up and pose competing influences on the optimal choice of exposure time.

We provided an optimization framework for experimental designs in dynamic GRA measurements. By derivation, it minimizes the total error by providing the optimal exposure time, but it can serve just as well to optimize other key experimental parameters that are part of Beer–Lambert's law. We modeled the yet unknown dynamics (i.e., mean and amplitude of density and characteristic time of density changes) as a random Gaussian process. We applied an analytical approach based on a first-order approximation of total errors and compared it to a numerical approach (Monte-Carlo simulation). The latter is arbitrarily accurate in the limit of infinite computing time, yet both approaches produce similar results. Thus, we recommend and used the analytical approach, since it is very fast and simpler to implement.

We demonstrated our approach on a synthetic experimental case and discussed the composition of the total error, the attainable minimum of errors, and the optimal exposure time. The optimum still depends on other experimental parameters that occur in the Beer–Lambert attenuation law (incident intensity, mean density, thickness, and substance absorption). It also depends on the statistical expectations on the changes of density, expressed through the parameters of the GP. The latter include the variance (expected amplitude of density changes) and the correlation length (characteristic time of density changes) of the GP. To capture their influence, we conducted a sensitivity analysis by systematic scenario variation and discussed the results.
Our results showed that the signal variance and correlation length influence only the raster error, while the Beer–Lambert parameters influence only the noise error; the impact of the mean density (within the range studied here) is low. Faster and stronger dynamics of the process (large signal variance and small correlation length) increase the raster error and hence require smaller exposure times to capture these strong and fast dynamics. The noise error decreases with increasing incident intensity, allowing for smaller exposure times that better resolve the dynamics at smaller total errors.

Thickness and substance absorption jointly showed a special, non-monotonic influence on the optimal exposure time when their values are changed. Both parameters share two competing effects: larger values increase the strength of the observable dynamics in intensities, but larger values also lead to an exponential decrease of output intensities. For small values, the first effect dominates, so that increasing these parameters leads to smaller optimal exposure times and smaller total errors. Beyond a specific break-even point, both the total error and the optimal exposure time rise again. This break-even point presents an overall optimum of GRA experiments. Therefore, we also illustrated and discussed how to apply our optimization to a joint optimization of exposure time, thickness, absorption coefficient, and process controls that affect the time-variability of density.

Overall, this study offers experimentalists a quantitative understanding of inevitable errors and of ways to minimize them. Our recommendations for the experimentalist are the following: choose larger exposure times for slower dynamics and smaller exposure times for faster dynamics, use a strong source of intensity whenever possible, and consider the more complex interplay of the expected density and the absorption coefficient of the material under study.

In future research, our model of errors can be extended by a spatial rastering error, so that an optimization can be found for temporally and spatially rastered monitoring of densities.

Appendix A: Derivation of analytical raster error

Integration of exp(·) based on covariance functions

Let us define the squared exponential kernel (the covariance function used) as the function K: ℝ → ℝ given by

$$K(x) := \sigma^2 \exp\!\left(-\frac{x^2}{\lambda}\right) \quad \text{for } \lambda, \sigma > 0. \tag{25}$$

First, let us recapitulate some basic properties of K(·), which we will use later. Due to the properties of exp(·), we have

$$K'(x) = -\frac{2x}{\lambda}\,K(x), \tag{26}$$

$$\partial_x K(x - y) = -\partial_y K(x - y), \tag{27}$$

$$\int_a^b x\,K(x)\,\mathrm{d}x = -\frac{\lambda}{2}\,\bigl[K(x)\bigr]_a^b \quad \text{by Eq. (26).} \tag{28}$$

Further, we use the following definition of the erf(·) function:

$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-\tau^2)\,\mathrm{d}\tau. \tag{29}$$

Now let us consider the integral

$$\int_0^{L_1}\!\!\int_0^{L_2} K(x - y)\,\mathrm{d}y\,\mathrm{d}x. \tag{30}$$

Using integration by parts and the Fubini theorem together with Eqs. (27) and (28), we obtain

$$\begin{aligned}
\int_0^{L_1}\!\!\int_0^{L_2} K(x-y)\,\mathrm{d}y\,\mathrm{d}x
&= \int_0^{L_1}\!\!\int_0^{L_2} 1 \cdot K(x-y)\,\mathrm{d}y\,\mathrm{d}x \\
&= \int_0^{L_1} \Bigl( \bigl[\,y\,K(x-y)\bigr]_0^{L_2} - \int_0^{L_2} y\,\partial_y K(x-y)\,\mathrm{d}y \Bigr)\,\mathrm{d}x \\
&= \int_0^{L_1} L_2\,K(x - L_2)\,\mathrm{d}x - \int_0^{L_2} y \int_0^{L_1} \partial_y K(x-y)\,\mathrm{d}x\,\mathrm{d}y \\
&= \int_0^{L_1} L_2\,K(x - L_2)\,\mathrm{d}x + \int_0^{L_2} y \int_0^{L_1} \partial_x K(x-y)\,\mathrm{d}x\,\mathrm{d}y \\
&= \int_0^{L_1} L_2\,K(x - L_2)\,\mathrm{d}x + \int_0^{L_2} y\,\bigl[K(L_1 - y) - K(y)\bigr]\,\mathrm{d}y \\
&= \int_0^{L_1} L_2\,K(x - L_2)\,\mathrm{d}x + \frac{\lambda}{2}\bigl[K(L_2) - K(0)\bigr] + \int_0^{L_2} y\,K(L_1 - y)\,\mathrm{d}y.
\end{aligned} \tag{31}$$

Considering $I = \int_0^{L_2} y\,K(L_1 - y)\,\mathrm{d}y$, adding and subtracting $\int_0^{L_2} L_1 K(L_1 - y)\,\mathrm{d}y$, and using Eq. (28), we obtain

$$I = -\frac{\lambda}{2}\bigl[K(L_1 - L_2) - K(L_1)\bigr] + L_1 \int_0^{L_2} K(L_1 - y)\,\mathrm{d}y. \tag{32}$$

This yields

$$\int_0^{L_1}\!\!\int_0^{L_2} K(x-y)\,\mathrm{d}y\,\mathrm{d}x = \int_0^{L_1} L_2\,K(x - L_2)\,\mathrm{d}x + L_1 \int_0^{L_2} K(y - L_1)\,\mathrm{d}y + \frac{\lambda}{2}\bigl[K(L_1) + K(L_2) - K(L_1 - L_2) - K(0)\bigr]. \tag{33}$$

Using integration by substitution and Eq. (25), we obtain for $a, b > 0$

$$\int_0^{a} K(x - b)\,\mathrm{d}x = \sigma^2 \int_0^{a} \exp\!\left(-\frac{(x-b)^2}{\lambda}\right)\mathrm{d}x = \sigma^2 \sqrt{\lambda} \int_{-b/\sqrt{\lambda}}^{(a-b)/\sqrt{\lambda}} \exp(-\tau^2)\,\mathrm{d}\tau. \tag{34}$$

Combining Eqs.
(33), (34), and (29), we obtain

$$\begin{aligned}
\int_0^{L_1}\!\!\int_0^{L_2} K(x-y)\,\mathrm{d}y\,\mathrm{d}x
&= \sigma^2\sqrt{\lambda}\left( L_2\!\int_{-L_2/\sqrt{\lambda}}^{(L_1-L_2)/\sqrt{\lambda}}\! \exp(-\tau^2)\,\mathrm{d}\tau + L_1\!\int_{-L_1/\sqrt{\lambda}}^{(L_2-L_1)/\sqrt{\lambda}}\! \exp(-\tau^2)\,\mathrm{d}\tau \right) + \frac{\lambda}{2}\bigl[K(L_1) + K(L_2) - K(L_1-L_2) - K(0)\bigr] \\
&= \sigma^2\sqrt{\lambda}\left( L_2\!\int_{0}^{(L_1-L_2)/\sqrt{\lambda}}\! \exp(-\tau^2)\,\mathrm{d}\tau + L_2\!\int_{0}^{L_2/\sqrt{\lambda}}\! \exp(-\tau^2)\,\mathrm{d}\tau + L_1\!\int_{0}^{(L_2-L_1)/\sqrt{\lambda}}\! \exp(-\tau^2)\,\mathrm{d}\tau + L_1\!\int_{0}^{L_1/\sqrt{\lambda}}\! \exp(-\tau^2)\,\mathrm{d}\tau \right) \\
&\quad + \frac{\lambda}{2}\bigl[K(L_1) + K(L_2) - K(L_1-L_2) - K(0)\bigr] \\
&= \frac{\sqrt{\pi}}{2}\,\sigma^2\sqrt{\lambda}\left[ L_2\!\left(\operatorname{erf}\frac{L_1-L_2}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_2}{\sqrt{\lambda}}\right) + L_1\!\left(\operatorname{erf}\frac{L_2-L_1}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_1}{\sqrt{\lambda}}\right) \right] + \frac{\lambda}{2}\bigl[K(L_1) + K(L_2) - K(L_1-L_2) - K(0)\bigr].
\end{aligned} \tag{35}$$

Summarized, we obtain

$$\int_0^{L_1}\!\!\int_0^{L_2} K(x-y)\,\mathrm{d}y\,\mathrm{d}x = \frac{\sqrt{\pi}}{2}\,\sigma^2\sqrt{\lambda}\left[ L_2\!\left(\operatorname{erf}\frac{L_1-L_2}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_2}{\sqrt{\lambda}}\right) + L_1\!\left(\operatorname{erf}\frac{L_2-L_1}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_1}{\sqrt{\lambda}}\right) \right] + \frac{\lambda}{2}\bigl[K(L_1) + K(L_2) - K(L_1-L_2) - K(0)\bigr]. \tag{36}$$

Derivatives

Let us define

$$F(L) = \int_0^{L}\!\!\int_0^{L} K(x-y)\,\mathrm{d}y\,\mathrm{d}x, \tag{37}$$

$$\tilde{F}(L_1, L_2) = \int_0^{L_1}\!\!\int_0^{L_2} K(x-y)\,\mathrm{d}y\,\mathrm{d}x. \tag{38}$$

Using Eq. (36), we obtain from (37)

$$F(L) = \sigma^2\sqrt{\pi\lambda}\, L \cdot \operatorname{erf}\!\left(\frac{L}{\sqrt{\lambda}}\right) + \lambda\bigl[K(L) - K(0)\bigr]. \tag{39}$$

Using (29), we obtain

$$F'(L) = \sigma^2\left[\sqrt{\pi\lambda}\cdot\operatorname{erf}\!\left(\frac{L}{\sqrt{\lambda}}\right) + 2L\,\exp\!\left(-\frac{L^2}{\lambda}\right)\right] - 2L\,K(L). \tag{40}$$

Now let us consider the derivatives of $\tilde{F}$. From Eqs. (36) and (38) follows

$$\tilde{F}(L_1, L_2) = \frac{\sqrt{\pi\lambda}}{2}\,\sigma^2\left[ L_2\!\left(\operatorname{erf}\frac{L_1-L_2}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_2}{\sqrt{\lambda}}\right) + L_1\!\left(\operatorname{erf}\frac{L_2-L_1}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_1}{\sqrt{\lambda}}\right) \right] + \frac{\lambda}{2}\bigl[K(L_1) + K(L_2) - K(L_1-L_2) - K(0)\bigr]. \tag{41}$$

This yields for $\partial \tilde{F} / \partial L_1$

$$\frac{\partial}{\partial L_1}\tilde{F}(L_1, L_2) = \sigma^2\left[\frac{\sqrt{\pi\lambda}}{2}\left(\operatorname{erf}\frac{L_2-L_1}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_1}{\sqrt{\lambda}}\right) + (L_2-L_1)\exp\!\left(-\frac{(L_1-L_2)^2}{\lambda}\right) + L_1\exp\!\left(-\frac{L_1^2}{\lambda}\right)\right] + (L_1-L_2)\,K(L_1-L_2) - L_1\,K(L_1). \tag{42}$$

For $\partial \tilde{F} / \partial L_2$:

$$\frac{\partial}{\partial L_2}\tilde{F}(L_1, L_2) = \sigma^2\left[\frac{\sqrt{\pi\lambda}}{2}\left(\operatorname{erf}\frac{L_1-L_2}{\sqrt{\lambda}} + \operatorname{erf}\frac{L_2}{\sqrt{\lambda}}\right) + (L_1-L_2)\exp\!\left(-\frac{(L_1-L_2)^2}{\lambda}\right) + L_2\exp\!\left(-\frac{L_2^2}{\lambda}\right)\right] - L_2\,K(L_2) + (L_1-L_2)\,K(L_1-L_2). \tag{43}$$

Application

Here we use the squared exponential kernel with variance $\sigma^2$ and correlation length $l$:

$$K_{\mathrm{SE}}(r) = \sigma^2 \exp\!\left(-\frac{r^2}{2l^2}\right). \tag{44}$$

For two different upper bounds $L_1$ and $L_2$, using Eq. (36) with $\lambda = 2l^2$ we obtain

$$F_{\mathrm{SE}}(L_1, L_2) = \int_0^{L_1}\!\!\int_0^{L_2} K_{\mathrm{SE}}(x-y)\,\mathrm{d}y\,\mathrm{d}x = \frac{\sqrt{\pi}\,\sigma^2 l}{\sqrt{2}}\left[ L_2\!\left(\operatorname{erf}\frac{L_1-L_2}{\sqrt{2}\,l} + \operatorname{erf}\frac{L_2}{\sqrt{2}\,l}\right) + L_1\!\left(\operatorname{erf}\frac{L_2-L_1}{\sqrt{2}\,l} + \operatorname{erf}\frac{L_1}{\sqrt{2}\,l}\right) \right] + l^2\bigl[K_{\mathrm{SE}}(L_1) + K_{\mathrm{SE}}(L_2) - K_{\mathrm{SE}}(L_1-L_2) - K_{\mathrm{SE}}(0)\bigr]. \tag{45}$$

For the special case of the same upper bound $L_1 = L_2 = L$, we have

$$F_{\mathrm{SE}}(L) = \int_0^{L}\!\!\int_0^{L} K_{\mathrm{SE}}(x-y)\,\mathrm{d}y\,\mathrm{d}x = \sqrt{2\pi}\,\sigma^2 l\,L \cdot \operatorname{erf}\!\left(\frac{L}{\sqrt{2}\,l}\right) + 2l^2\bigl[K_{\mathrm{SE}}(L) - K_{\mathrm{SE}}(0)\bigr]. \tag{46}$$
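As a quick sanity check on this algebra, the closed form in Eq. (45) can be compared against brute-force numerical quadrature. The following minimal Python sketch is our own illustration (the test values for σ², l, L1, and L2 are arbitrary), not part of the original derivation:

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import dblquad

sigma2, l = 2.0, 1.5   # arbitrary test values for variance and correlation length
K = lambda r: sigma2 * np.exp(-r**2 / (2.0 * l**2))   # squared exponential kernel, Eq. (44)

def F_SE(L1, L2):
    """Closed form of the double kernel integral, Eq. (45)."""
    s2l = np.sqrt(2.0) * l
    term_erf = (np.sqrt(np.pi) * sigma2 * l / np.sqrt(2.0)) * (
        L2 * (erf((L1 - L2) / s2l) + erf(L2 / s2l))
        + L1 * (erf((L2 - L1) / s2l) + erf(L1 / s2l)))
    term_K = l**2 * (K(L1) + K(L2) - K(L1 - L2) - K(0.0))
    return term_erf + term_K

L1, L2 = 3.0, 5.0
# Integrate K(x - y) for x in [0, L1] and y in [0, L2] by adaptive quadrature:
numeric, _ = dblquad(lambda y, x: K(x - y), 0.0, L1, 0.0, L2)
print(F_SE(L1, L2), numeric)   # the two values should agree to quadrature precision
```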
For the derivative at the same upper bound, Eq. (40) yields

$$F'_{\mathrm{SE}}(L) = \sigma^2\left[\sqrt{2\pi}\,l\cdot\operatorname{erf}\!\left(\frac{L}{\sqrt{2}\,l}\right) + 2L\,\exp\!\left(-\frac{L^2}{2l^2}\right)\right] - 2L\,K_{\mathrm{SE}}(L). \tag{47}$$

For the sake of completeness, for some constant $c > 0$ and the function

$$G(L) = c - \frac{1}{L^2}\,F(L), \tag{48}$$

we have

$$G'(L) = \frac{2}{L^3}\,F(L) - \frac{1}{L^2}\,F'(L) = \frac{1}{L^2}\left(\frac{2}{L}\,F(L) - F'(L)\right). \tag{49}$$

Appendix B: Derivation of analytical noise error

First, we re-arrange Beer–Lambert's law for the density ρ and replace the measured intensity I1 by the random variable χ as an act of statistical anticipation:

$$\rho(\chi) = \frac{\log\left(I_0/\chi\right)}{\varepsilon d}. \tag{50}$$

Then, the announced Taylor expansion about the point $\chi_o = \mathrm{E}[\chi] = I_1$ results in

$$\rho(\chi) = \frac{\log\left(I_0/I_1\right)}{\varepsilon d} - \frac{1}{\varepsilon d I_1}\left(\chi - I_1\right). \tag{51}$$

Next, we define the linearized relationship $\rho = a\chi + b$ and find by comparison of coefficients that

$$a = -\frac{1}{\varepsilon d I_1}, \qquad b = \frac{1}{\varepsilon d}\left(\log\left(\frac{I_0}{I_1}\right) + 1\right). \tag{52}$$

Applying standard linear(ized) error propagation with $\mathrm{Var}\!\left[a\chi + b\right] = a^2\,\mathrm{Var}[\chi]$ yields for the noise error $\sigma^2_{\mathrm{noise}}$

$$\sigma^2_{\mathrm{noise}} \approx a^2\,\mathrm{Var}(\chi) = a^2\,\sigma^2_{\chi} = a^2\,\frac{\sigma^2_X}{L} = a^2\,\frac{I_1}{L} = \frac{1}{\left(\varepsilon d I_1\right)^2}\,\frac{I_1}{L}. \tag{53}$$

Simplifying terms leads to

$$\sigma^2_{\mathrm{noise}}(L) = \frac{1}{(\varepsilon d)^2\, I_1\, L}. \tag{54}$$

This still depends on $I_1$, for which we need an estimate. For this, we re-arrange Beer–Lambert's equation (Eq. (4)) for $I_1$ and insert $\rho_m$ as the statistical expectation of the density. Thus, we get

$$\sigma^2_{\mathrm{noise}}(L) = \frac{\exp\left(\varepsilon \rho_m d\right)}{(\varepsilon d)^2\, I_0\, L}. \tag{55}$$

Appendix C: Supplemental results

For illustration, Fig. 12 shows a collection of randomly generated functions from a GP with squared exponential covariance, using different values of the variance σ²_ρ and the correlation length l. The black point markers in the figures are points where the true values of ρ(t) are known precisely. This will not occur during GRA experiments due to noise error, but it serves well to illustrate the properties of GPs. One can see how all randomly generated lines pass through these points and then start spreading when moving away from them. The rate of spreading (and the frequency of fluctuations thereafter) depends on the correlation length l, while the overall width of spreading at sufficient distance from the known points is governed by the variance parameter σ²_ρ.

Fig. 12 Randomly generated functions from a GP with a squared exponential covariance: a large variance σ²_ρ and large correlation length l, b small variance σ²_ρ and large correlation length l, c large variance σ²_ρ and small correlation length l, d small variance σ²_ρ and small correlation length l
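Unconditional realizations like those in Fig. 12 can be generated in a few lines; the following sketch (grid length and parameter values are illustrative assumptions) draws samples by factorizing the covariance matrix:

```python
import numpy as np

def sample_gp(t, sigma2, l, n_samples=5, seed=0):
    """Draw realizations from a zero-mean GP with squared exponential
    covariance k(t, t') = sigma2 * exp(-(t - t')**2 / (2 * l**2))."""
    rng = np.random.default_rng(seed)
    cov = sigma2 * np.exp(-(t[:, None] - t[None, :])**2 / (2.0 * l**2))
    cov += 1e-10 * np.eye(len(t))    # small jitter for numerical stability
    chol = np.linalg.cholesky(cov)   # factorize the covariance matrix
    return chol @ rng.standard_normal((len(t), n_samples))

t = np.linspace(0.0, 3600.0, 400)                 # one hour of experiment time [s]
curves = sample_gp(t, sigma2=100.0**2, l=600.0)   # density fluctuations [kg/m^3]
```

Conditioning such samples on the marked points would replace the zero mean and prior covariance by the standard GP regression formulas (Rasmussen and Williams 2006).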
Figure 13 shows the results of the analytical and numerical approaches for several thicknesses. For the scenarios with a thickness of d = 0.01 m and d = 0.10 m, the optimal window sizes for the analytical and numerical approaches are the same: 74 s and 28 s, respectively.

Fig. 13 Noise error σ²_noise, raster error σ²_raster, and total error σ²_total as a function of window size L: comparison of results computed by the analytical and numerical approach for three thicknesses: a d = 0.01 m, b d = 0.02 m (base case), c d = 0.10 m

As seen in Fig. 14a, the total error presents a minimum value at d = 0.10 m for the base case scenario (in Table 1). Correspondingly, the optimal window size visible in Fig. 13c also presents a minimum for d = 0.10 m at Lopt = 28 s. Therefore, when the thickness is either decreased or increased with respect to d = 0.10 m, the optimal window size increases (Fig. 13c). What we see here is a competition between two effects: first, a larger thickness amplifies the effect of density, so that density is easier to measure; second, a larger thickness also decreases the overall number of countable photons that manage to pass through the system. This competition is the reason why d shows up quadratically in the denominator of the noise variance in Eq. (21) (first effect) and exponentially in the numerator of the same equation (second effect).

For small values of d, the first (quadratic) effect dominates. In that range, increasing d improves the total error and allows for smaller exposure times. Then, there is a break-even point at a specific value of thickness (here: d = 0.10 m). Beyond that point, i.e., for large values of d, the second (exponential) effect dominates, and further increase of d is counterproductive. Overall, this results in a non-monotonic (but convex) impact of the thickness d both on the optimal exposure time and on the attainable total error. The impact of ε (Fig. 14b) shows the same non-monotonic, convex behavior with a break-even point, for the same mathematical and physical reasons as d.

Fig. 14 Optimal window sizes and total error as a function of a thickness and b absorption of the substance for a broader range of values than in Table 2

Appendix D: Distinction of densities

We are interested in determining a length L of the time window that is large enough to distinguish between two different densities ρ1 and ρ2 in GRA experiments, still ignoring that ρ1 and ρ2 may be part of a time-variable density. For this purpose, we write the required length as L = N L1, i.e., as N times the unit length L1. Hence, N stands for the number of seconds. In this setting, we can estimate the expected count per unit time window λ = E[X] via the estimate λ̂_i = X_i / N, where the subscript i (= 1, 2) enumerates the two different densities and the corresponding GRA experiments over a length L = N L1, and X_i are the counts observed in these two experiments over the duration L.

For N > N_min = 30, the estimates λ̂_i are sufficiently close to normally distributed, so we can use a typical z-test of the hypotheses H0: λ1 = λ2 versus H1: λ1 ≠ λ2. The corresponding test statistic is

$$z_{\mathrm{test}} = \frac{\hat{\lambda}_2 - \hat{\lambda}_1}{\sqrt{\dfrac{\hat{\lambda}_1}{N_1} + \dfrac{\hat{\lambda}_2}{N_2}}}. \tag{56}$$

In a two-sided test, for a given significance level α, the critical value of z is

$$z_{\mathrm{crit}} = z_{\alpha/2} = \Phi^{-1}(1 - \alpha/2), \tag{57}$$

where Φ⁻¹(·) is the inverse of the cumulative distribution function of the standard normal distribution. The power β of the two-sided hypothesis test at level α is given by

$$\beta = \Phi\!\left( \frac{\lambda_2 - \lambda_1}{\sqrt{\dfrac{\lambda_1}{N_1} + \dfrac{\lambda_2}{N_2}}} - z_{\alpha/2} \right). \tag{58}$$

Note that the significance level α controls the false-positive probability of falsely rejecting H0 although it is true (i.e., one would deem two identical density values to be different due to the random imprecision of photon counts). The power β is the true-positive probability of correctly rejecting H0 when it is not true (i.e., one correctly distinguishes the two different densities).

In our current context, this means we formulate a requirement on β: our goal is to determine the number of seconds N of exposure time so that the probability of correctly distinguishing two different densities (ρ1, ρ2) is greater than β_crit. Without loss of generality, we assume λ2 > λ1. Thus, with N1 = N2 = N, we have

$$\Phi\!\left( \frac{\lambda_2 - \lambda_1}{\sqrt{\dfrac{\lambda_1 + \lambda_2}{N}}} - z_{\alpha/2} \right) > \beta_{\mathrm{crit}} \;\;\Rightarrow\;\; N > \left( \Phi^{-1}\!\left(\beta_{\mathrm{crit}}\right) + \Phi^{-1}(1 - \alpha/2) \right)^2 \frac{\lambda_1 + \lambda_2}{\left(\lambda_2 - \lambda_1\right)^2}. \tag{59}$$

From Eq. (4) we know that $\lambda_i = L_1 I_0 \exp\!\left(-\varepsilon d \rho_i\right)$. Inserting this yields

$$N > \left( \Phi^{-1}\!\left(\beta_{\mathrm{crit}}\right) + z_{\alpha/2} \right)^2 \frac{\exp\!\left(-\varepsilon d \rho_1\right) + \exp\!\left(-\varepsilon d \rho_2\right)}{L_1 I_0 \left( \exp\!\left(-\varepsilon d \rho_2\right) - \exp\!\left(-\varepsilon d \rho_1\right) \right)^2}. \tag{60}$$
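For a numerical feel of Eqs. (59) and (60), the following sketch evaluates the required number of unit windows; the Beer–Lambert values are illustrative placeholders, not those of Table 1:

```python
import numpy as np
from scipy.stats import norm

def n_required(rho1, rho2, I0, eps, d, alpha=0.05, beta_crit=0.95, L1=1.0):
    """Minimal N (number of unit time windows) to distinguish rho1 from rho2
    with power beta_crit at significance level alpha, cf. Eqs. (59)-(60)."""
    lam1 = L1 * I0 * np.exp(-eps * d * rho1)   # expected counts per unit window, Eq. (4)
    lam2 = L1 * I0 * np.exp(-eps * d * rho2)
    z = norm.ppf(beta_crit) + norm.ppf(1.0 - alpha / 2.0)
    return z**2 * (lam1 + lam2) / (lam2 - lam1)**2

# Illustrative placeholders: I0 in counts per second, eps in m^2/kg, d in m
print(n_required(rho1=1800.0, rho2=1820.0, I0=1.0e5, eps=0.0077, d=0.02))
```

With these assumed values, a density difference of 20 kg/m³ requires on the order of a few tens of unit windows, roughly consistent with the discussion of Fig. 15 below.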
Apparently, the required number of seconds depends not only on the difference of the densities, but also on their overall magnitude, and on all other Beer–Lambert parameters that control the overall attenuation (ε, d). Also, it depends on the parameters specifying the statistical reliability (α, β).

In Fig. 15, we illustrate the resulting relation as a function of the density difference ρ2 − ρ1 for a set of values of ρ1 taken from our table of scenarios (Table 2), using the base values for ε and d from Table 1. Also, we arbitrarily choose the typical standard values α = 5% and β = 95%. In Fig. 15, one can clearly see (1) how overall larger densities require longer exposure times; (2) how smaller density differences require exponentially more exposure time (for which reason we exclude density differences smaller than a threshold Δρ_min from the plot); and (3) that values of N smaller than 30 (= N_min) should not be taken as exact numbers; yet density differences larger than 10 or 20 kg/m³ can be safely identified within 30 s under the shown conditions.

Fig. 15 Relationship between N_crit and ρ2 − ρ1 for a set of ρ1 values

Acknowledgements This work was supported by the Collaborative Research Center 1253 CAMPOS, funded by the German Research Foundation (DFG, Grant Agreement SFB 1253/1 2017). Further support was provided by the German Research Foundation (DFG) through Project Number 327154368—SFB 1313, Project Number 432343452, and through Germany's Excellence Strategy—EXC 2075—390740016 (SimTech).

Author contributions Ana Gonzalez-Nicolas, Ilja Kröker, Assem Mayar, and Wolfgang Nowak contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Deborah Bilgic, Ana Gonzalez-Nicolas, and Ilja Kröker. The first draft of the manuscript was written by Ana Gonzalez-Nicolas, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Funding Open Access funding enabled and organized by Projekt DEAL. This work was supported by the Collaborative Research Center 1253 CAMPOS, funded by the German Research Foundation (DFG, Grant Agreement SFB 1253/1 2017). Further support was provided by the German Research Foundation (DFG) through Project Number 327154368—SFB 1313, Project Number 432343452, and through Germany's Excellence Strategy—EXC 2075—390740016 (SimTech).

Data Availability The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Declarations

Conflict of interests The authors have no relevant financial or non-financial interests to disclose.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Barth, G.R., Illangasekare, T.H., Rajaram, H.: The effect of entrapped nonaqueous phase liquids on tracer transport in heterogeneous porous media: laboratory experiments at the intermediate scale. J. Contam. Hydrol. 67(1), 247–268 (2003). https://doi.org/10.1016/S0169-7722(03)00066-4

Baytaş, A.F., Akbal, S.: Determination of soil parameters by gamma-ray transmission. Radiat. Meas. 35(1), 17–21 (2002). http://www.sciencedirect.com/science/article/pii/S1350448701002530

Beckers, F., Haun, S., Noack, M.: Experimental investigation of reservoir sediments. E3S Web Conf. 40, 03030 (2018). https://doi.org/10.1051/e3sconf/20184003030

Berger, M.J., Hubbell, J.: XCOM: Photon cross sections on a personal computer (NBSIR-87-3597). National Bureau of Standards (1987). http://www.iaea.org/inis/collection/NCLCollectionStore/_Public/19/009/19009871.pdf?r=1

Buczyk, B.: Poisson distribution of radioactive decay, p. 4. MIT Department of Physics, Cambridge (2009)

Eckberg, D.K., Sunada, D.K.: Nonsteady three-phase immiscible fluid distribution in porous media. Water Resour. Res. 20(12), 1891–1897 (1984). https://doi.org/10.1029/WR020i012p01891

Ferguson, H., Gardner, W.H.: Water content measurement in soil columns by gamma ray absorption. Soil Sci. Soc. Am. J. 26(1), 11–14 (1962). https://doi.org/10.2136/sssaj1962.03615995002600010004x

Gharbi, D., Bertin, H., Omari, A.: Use of a gamma ray attenuation technique to study colloid deposition in porous media. Exp. Fluids 37(5), 665–672 (2004)

Haight, F.A.: Handbook of the Poisson Distribution. Wiley, Hoboken (1967)

Hommel, J., Lauchnor, E., Phillips, A., Gerlach, R., Cunningham, A.B., Helmig, R., Ebigbo, A., Class, H.: A revised model for microbially induced calcite precipitation: improvements and new insights based on recent experiments. Water Resour. Res. 51(5), 3695–3715 (2015). https://doi.org/10.1002/2014WR016503

Illangasekare, T.H., Ramsey, J.L., Jensen, K.H., Butts, M.B.: Experimental study of movement and distribution of dense organic contaminants in heterogeneous aquifers. J. Contam. Hydrol. 20(1), 1–25 (1995). https://doi.org/10.1016/0169-7722(95)00045-W

Lykos, P.: The Beer–Lambert law revisited: a development without calculus. J. Chem. Educ. 69(9), 730 (1992). https://doi.org/10.1021/ed069p730

Matheron, G.: Matheron's Theory of Regionalised Variables. Pawlowsky-Glahn, V., Serra, J. (eds.). Oxford University Press, Oxford (2019). https://doi.org/10.1093/oso/9780198835660.001.0001
Mayar, M.A., Schmid, G., Wieprecht, S., Noack, M.: Optimizing vertical profile measurements setup of gamma ray attenuation. Radiat. Phys. Chem. 164, 108376 (2019). https://doi.org/10.1016/j.radphyschem.2019.108376

Mayar, M.A., Schmid, G., Wieprecht, S., Noack, M.: Proof-of-concept for nonintrusive and undisturbed measurement of sediment infiltration masses using gamma-ray attenuation. J. Hydraul. Eng. 146(5), 04020032 (2020). https://doi.org/10.1061/(ASCE)HY.1943-7900.0001734

Mayar, M.A., Haun, S., Schmid, G., Wieprecht, S., Noack, M.: Measuring vertical distribution and dynamic development of sediment infiltration under laboratory conditions. J. Hydraul. Eng. (2022). https://doi.org/10.1061/(ASCE)HY.1943-7900.0001980

Moreira, A.C., Filho, O.P., Cavalcante, F.H.M., Appoloni, C.R.: Determination of hydraulic conductivity of undisturbed soil column: a measurement accomplished with the gamma ray transmission technique. In: Dikinya, O. (ed.) Developments in Hydraulic Conductivity Research. IntechOpen (2011). https://doi.org/10.5772/15866

Nofziger, D.L.: Errors in gamma-ray measurements of water content and bulk density in nonuniform soils. Soil Sci. Soc. Am. J. 42(6), 845–850 (1978). https://doi.org/10.2136/sssaj1978.03615995004200060001x

Oostrom, M., Dane, J.H., Wietsma, T.W.: A review of multidimensional, multifluid, intermediate-scale experiments: flow behavior, saturation imaging, and tracer detection and quantification. Vadose Zone J. 6(3), 610–637 (2007). https://doi.org/10.2136/vzj2006.0178

Pankow, J.F., Cherry, J.A.: Dense Chlorinated Solvents and Other DNAPLs in Groundwater: History, Behavior, and Remediation. Waterloo Press (1996)

Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA (2006)

Sail, Y., Marot, D., Sibille, L., Alexis, A.: Suffusion tests on cohesionless granular matter: experimental study. Eur. J. Environ. Civil Eng. 15(5), 799–817 (2011)

Schälchli, U., Abegg, J., Hunzinger, L.: Kolmation – Methoden zur Erkennung und Bewertung [Colmation: methods for detection and assessment]. Eidg. Anstalt für Wasserversorgung, Abwasserreinigung und Gewässerschutz (EAWAG), Dübendorf, Switzerland (2002)

Taqi, A.H., Khalil, H.J.: An investigation on gamma attenuation of soil and oil-soil samples. J. Radiat. Res. Appl. Sci. 10(3), 252–261 (2017). https://doi.org/10.1016/j.jrras.2017.05.008

The MathWorks, Inc.: MATLAB version 9.1.0.441655 (R2016b) [Computer software]. The MathWorks, Inc. (2016)

van Genuchten, M.T.: A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J. 44(5), 892–898 (1980). https://doi.org/10.2136/sssaj1980.03615995004400050002x

Waggoner, M.A.: Radioactive decay of Cs137. Phys. Rev. 82(6), 906–909 (1951). https://doi.org/10.1103/PhysRev.82.906

Werth, C.J., Zhang, C., Brusseau, M.L., Oostrom, M., Baumann, T.: A review of non-invasive imaging methods and applications in contaminant hydrogeology research. J. Contam. Hydrol. 113(1), 1–24 (2010). https://doi.org/10.1016/j.jconhyd.2010.01.001

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Authors and Affiliations

Ana Gonzalez-Nicolas¹ · Deborah Bilgic¹ · Ilja Kröker¹ · Assem Mayar¹ · Luca Trevisan² · Holger Steeb³ · Silke Wieprecht¹ · Wolfgang Nowak¹

Deborah Bilgic: Deborah.Bilgic@de.bosch.com
Ilja Kröker: Ilja.kroeker@iws.uni-stuttgart.de
Assem Mayar: assem.mayar@iws.uni-stuttgart.de
Luca Trevisan: luca.trevisan@kit.edu
Holger Steeb: holger.steeb@mechbau.uni-stuttgart.de
Silke Wieprecht: Silke.Wieprecht@iws.uni-stuttgart.de
Wolfgang Nowak: Wolfgang.nowak@iws.uni-stuttgart.de

1 Institute for Modelling Hydraulic and Environmental Systems (IWS), University of Stuttgart, 70569 Stuttgart, Germany
2 Institute for Hydromechanics (IfH), Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
3 Institute of Applied Mechanics (CE), University of Stuttgart, 70569 Stuttgart, Germany