Authors: Györfi, László; Linder, Tamás; Walk, Harro
Dates: 2023-10-23; 2023-10-23; 2023
ISSN: 1099-4300
Identifier: 187045863X
URN: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-136826
Handle: http://elib.uni-stuttgart.de/handle/11682/13682
DOI: http://dx.doi.org/10.18419/opus-13663

Abstract: We study the excess minimum risk in statistical inference, defined as the difference between the minimum expected loss when estimating a random variable from an observed feature vector and the minimum expected loss when estimating the same random variable from a transformation (statistic) of the feature vector. After characterizing lossless transformations, i.e., transformations for which the excess risk is zero for all loss functions, we construct a partitioning test statistic for the hypothesis that a given transformation is lossless, and we show that for i.i.d. data the test is strongly consistent. More generally, we develop information-theoretic upper bounds on the excess risk that hold uniformly over fairly general classes of loss functions. Based on these bounds, we introduce the notion of a δ-lossless transformation and give sufficient conditions for a given transformation to be universally δ-lossless. Applications to classification, nonparametric regression, portfolio strategies, information bottlenecks, and deep learning are also surveyed.

Language: en
Access: info:eu-repo/semantics/openAccess
License: https://creativecommons.org/licenses/by/4.0/
DDC: 510
Title: Lossless transformations and excess risk bounds in statistical inference
Type: article
Date: 2023-10-09
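
The excess minimum risk described in the abstract can be sketched in formulas as follows; the notation (Y, X, T, ℓ, Δ) is assumed for illustration and is not taken from the record itself:

\[
\Delta_\ell(T) \;=\; \inf_{g} \, \mathbb{E}\,\ell\bigl(Y, g(T(X))\bigr) \;-\; \inf_{f} \, \mathbb{E}\,\ell\bigl(Y, f(X)\bigr) \;\ge\; 0,
\]

where \(Y\) is the random variable to be estimated, \(X\) the observed feature vector, \(T\) the transformation (statistic), \(\ell\) a loss function, and the infima run over all measurable estimators. Since \(g \circ T\) is itself an estimator of \(Y\) from \(X\), the difference is nonnegative. In this notation, \(T\) is lossless when \(\Delta_\ell(T) = 0\) for all loss functions \(\ell\), and δ-lossless (over a class of losses) when \(\Delta_\ell(T) \le \delta\) holds uniformly over that class.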