11 Interfakultäre Einrichtungen (Interfaculty Institutions)

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/12


Search Results

Now showing 1-4 of 4
  • Item (Open Access)
    The ethics of sustainable AI: why animals (should) matter for a sustainable use of AI
    (2023) Bossert, Leonie N.; Hagendorff, Thilo
Technologies equipped with artificial intelligence (AI) influence our everyday lives in a variety of ways. Due to their contribution to greenhouse gas emissions, their high energy use, but also their impact on fairness issues, these technologies are increasingly discussed in the “sustainable AI” discourse. However, current “sustainable AI” approaches remain anthropocentric. In this article, we argue from the perspective of applied ethics that such an anthropocentric outlook falls short. We present a sentientist approach, arguing that the normative foundation of sustainability and sustainable development, that is, theories of intra- and intergenerational justice, should include sentient animals. Consequently, theories of sustainable AI must also be non-anthropocentric. Moreover, we investigate the consequences of our approach for applying AI technologies in a sustainable way.
  • Item (Open Access)
    Mapping the ethics of generative AI: a comprehensive scoping review
    (2024) Hagendorff, Thilo
The advent of generative artificial intelligence and its widespread adoption in society engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature. The study offers a comprehensive overview for scholars, practitioners, and policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios.
  • Item (Open Access)
    Speciesist bias in AI: a reply to Arandjelović
    (2023) Hagendorff, Thilo; Bossert, Leonie; Fai, Tse Yip; Singer, Peter
The elimination of biases in artificial intelligence (AI) applications, for example biases based on race or gender, is a high priority in AI ethics. So far, however, efforts to eliminate bias have all been anthropocentric. Biases against nonhuman animals have not been considered, despite the influence AI systems can have on normalizing, increasing, or reducing the violence that is inflicted on animals, especially on farmed animals. Hence, in 2022, we published a paper in AI and Ethics in which we empirically investigated various examples of image recognition, word embedding, and language models, with the aim of testing whether they perpetuate speciesist biases. A critical response has appeared in AI and Ethics, accusing us of drawing upon theological arguments, having a naive anti-speciesist mindset, and making mistakes in our empirical analyses. We show that these claims are misleading.
  • Item (Open Access)
    Fairness hacking: the malicious practice of shrouding unfairness in algorithms
    (2024) Meding, Kristof; Hagendorff, Thilo
Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how the quantification of fairness can be diverted by describing a practice we call “fairness hacking”, performed for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested in fair AI practices. We introduce two categories of fairness hacking, in reference to the established concept of p-hacking. The first category, intra-metric fairness hacking, describes the misuse of a particular metric by adding or removing sensitive attributes from the analysis. In this context, countermeasures that have been developed to prevent or reduce p-hacking can be applied to similarly prevent or reduce fairness hacking. The second category, inter-metric fairness hacking, is the search for a specific fairness metric that yields the desired result for given attributes. We argue that countermeasures against inter-metric fairness hacking are still in their infancy. Finally, we demonstrate both types of fairness hacking using real datasets. Our paper is intended as guidance for discussions within the fair ML community to prevent or reduce the misuse of fairness metrics, and thus reduce overall harm from ML applications.
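Because groups with different base rates generally cannot satisfy several group-fairness criteria at once, the choice of metric is a degree of freedom that inter-metric fairness hacking exploits. The following minimal Python sketch is not taken from the paper: the metric definitions are the standard textbook ones, and the data and function names are hypothetical. It computes three common group-fairness gaps for a deliberately biased synthetic classifier and then “reports” only the most favorable one.

```python
# Minimal sketch (not from the paper) of inter-metric fairness hacking:
# compute several standard group-fairness metrics on the same predictions
# and report only the one with the smallest gap.
import numpy as np

def demographic_parity_gap(y_pred, group):
    # |P(yhat=1 | group=0) - P(yhat=1 | group=1)|
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    # true-positive-rate difference between the two groups
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

def predictive_parity_gap(y_true, y_pred, group):
    # precision difference between the two groups
    prec = lambda g: y_true[(group == g) & (y_pred == 1)].mean()
    return abs(prec(0) - prec(1))

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # sensitive attribute (0/1)
y_true = rng.binomial(1, 0.4 + 0.1 * group)    # unequal base rates
y_pred = rng.binomial(1, 0.35 + 0.15 * group)  # a biased classifier

gaps = {
    "demographic parity": demographic_parity_gap(y_pred, group),
    "equal opportunity": equal_opportunity_gap(y_true, y_pred, group),
    "predictive parity": predictive_parity_gap(y_true, y_pred, group),
}
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{name:20s} gap = {gap:.3f}")

# The "hack": report only the metric with the smallest gap and stay
# silent about the others, although the classifier is biased by design.
best = min(gaps, key=gaps.get)
print(f"Reported metric: {best} (gap = {gaps[best]:.3f})")
```

Running the sketch shows the three gaps diverging on the same predictions, which is precisely the freedom being abused. By analogy with countermeasures against p-hacking, fixing the metric and the sensitive attributes before evaluation would remove this freedom, though the paper notes that countermeasures against inter-metric fairness hacking are still in their infancy.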