05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

  • Item (Open Access)
    Improved usability of differential privacy in machine learning : techniques for quantifying the privacy-accuracy trade-off
    (2022) Bernau, Daniel; Küsters, Ralf (Prof.)
    Differential privacy allows bounding the influence that individual training data records have on a neural network. To use differential privacy in machine learning with neural networks, data scientists must choose a privacy parameter epsilon. Choosing meaningful privacy parameters is key, since differentially private neural networks trained with weak privacy parameters might result in excessive privacy leakage, while strong privacy parameters might overly degrade model utility. However, privacy parameter values are difficult to choose for two main reasons. First, the theoretical upper bound on the privacy loss epsilon might be loose, depending on the chosen sensitivity and the data distribution of practical datasets. Second, legal requirements and societal norms for anonymization often refer to individual identifiability, to which epsilon is only indirectly related. Within this thesis, we address the problem of choosing epsilon from two angles. First, we quantify the empirical lower bound on the privacy loss under membership inference attacks, allowing data scientists to compare the empirical privacy-accuracy trade-off between local and central differential privacy. Specifically, we consider federated and non-federated discriminative models, as well as generative models. Second, we transform the privacy loss under differential privacy into an analytical bound on identifiability, to map legal and societal expectations w.r.t. identifiability to corresponding privacy parameters. The thesis contributes techniques for quantifying the trade-off between accuracy and privacy over epsilon. These techniques provide information for interpreting differentially private training datasets, or models trained with differentially private stochastic gradient descent, to improve the usability of differential privacy in machine learning.
In particular, we identify preferable ranges for privacy parameter epsilon and compare local and central differential privacy mechanisms for training differentially private neural networks under membership inference adversaries. Furthermore, we contribute an implementable instance of the differential privacy adversary that can be used to audit trained models w.r.t. identifiability.
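The empirical lower bound mentioned above can be illustrated with a standard hypothesis-testing argument: an epsilon-differentially-private mechanism constrains any membership inference adversary's true and false positive rates, so observed attack error rates imply a lower bound on the effective epsilon. The sketch below assumes pure epsilon-DP (delta = 0) and uses an illustrative function name that does not appear in the thesis:

```python
import math

def empirical_epsilon_lower_bound(tpr: float, fpr: float) -> float:
    """Empirical lower bound on the privacy loss epsilon implied by a
    membership inference attack with the given true positive rate (tpr)
    and false positive rate (fpr), assuming pure epsilon-DP (delta = 0).

    The hypothesis-testing view of differential privacy requires
    tpr <= exp(eps) * fpr and (1 - fpr) <= exp(eps) * (1 - tpr),
    so any attack that beats these constraints certifies a lower
    bound on the mechanism's effective epsilon.
    """
    return max(math.log(tpr / fpr), math.log((1 - fpr) / (1 - tpr)))

# Example: an attack distinguishing members with TPR 0.8 at FPR 0.2
# certifies that the effective epsilon is at least ln(4) ~ 1.386,
# which can be compared against the theoretical accountant bound.
eps_lower = empirical_epsilon_lower_bound(tpr=0.8, fpr=0.2)
```

Comparing such an empirical lower bound against the (often loose) analytical upper bound from a privacy accountant is one way to reason about the gap the abstract describes.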
  • Item (Open Access)
    Simple and flexible universal composability : definition of a framework and applications
    (2020) Rausch, Daniel; Küsters, Ralf (Prof. Dr.)
    Security protocols, such as TLS, SSH, IEEE 802.11, and DNSSEC, have become crucial tools in modern society to protect people, data, and infrastructure. They are used throughout virtually all electronic devices to achieve a wide range of different security goals, such as confidentiality, authentication, and integrity. As the long history of attacks on security protocols illustrates, it is indispensable to perform a formal security analysis of such protocols. A central tool in cryptography for taming the complexity of designing and analyzing modern protocols is modularity, provided by security models for universal composability. Such models allow for designing and analyzing small parts of a protocol in isolation and then reusing these security results in the context of the overall protocol. This is not just easier than analyzing the whole protocol as a monolithic block; it also reduces the overall effort required to build and analyze multiple different protocols based on the same underlying components, such as cryptographic primitives. Ideally, a model for universal composability should support a protocol designer in easily creating full, precise, and detailed specifications as well as sound security proofs of various protocols for various types of adversarial models, instead of being an additional obstacle one has to overcome during a security analysis. In particular, such a model should be sound, flexible/expressive, and easy to use. Unfortunately, despite the widespread use of models for universal composability, existing models and frameworks are still unsatisfactory in these respects, as none combines all of these requirements simultaneously.
In this thesis we therefore develop a model for universal composability, called the iUC framework, which combines soundness, usability, and flexibility to a so far unmatched degree, and hence constitutes a solid framework for designing and analyzing essentially any protocol and application in a modular, universally composable, and sound manner. We use our model in a case study to analyze multiple different key exchange protocols precisely as they are deployed in practice. This illustrates the combination of flexibility and usability of our model. The case study is also an important independent contribution, as it is the first faithful security analysis of these unmodified protocols in a universal composability setting.