11 Interfaculty Facilities (Interfakultäre Einrichtungen)

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/12

Search Results

Now showing 1 - 10 of 10
  • Item (Open Access)
    Data processing, analysis, and evaluation methods for co-design of coreless filament-wound building systems
    (2023) Gil Pérez, Marta; Mindermann, Pascal; Zechmeister, Christoph; Forster, David; Guo, Yanan; Hügle, Sebastian; Kannenberg, Fabian; Balangé, Laura; Schwieger, Volker; Middendorf, Peter; Bischoff, Manfred; Menges, Achim; Gresser, Götz T.; Knippers, Jan
  • Item (Open Access)
    Temporally dense exploration of moving and deforming shapes
    (2020) Frey, Steffen
    We present our approach for the dense visualization and temporal exploration of moving and deforming shapes from scientific experiments and simulations. Our image space representation is created by convolving a noise texture along shape contours (akin to LIC). Beyond indicating spatial structure via luminosity, we additionally use colour to depict time or classes of shapes via automatically customized maps. This representation summarizes temporal evolution, and provides the basis for interactive user navigation in the spatial and temporal domain in combination with traditional renderings. Our efficient implementation supports the quick and progressive generation of our representation in parallel as well as adaptive temporal splits to reduce overlap. We discuss and demonstrate the utility of our approach using 2D and 3D scalar fields from experiments and simulations.
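The core image-space idea above, convolving a noise texture along shape contours akin to line integral convolution (LIC), can be illustrated with a minimal sketch. This is not the paper's implementation: it smooths white noise along a single closed contour with a box filter, standing in for the per-pixel convolution along contour directions that the full method performs.

```python
import random

def lic_along_contour(num_points=200, kernel_half=10, seed=0):
    """Minimal LIC-style smoothing of white noise along a closed contour.

    Each output intensity is the average of the noise over a window of
    contour samples, producing the streaky luminosity pattern that
    indicates spatial structure; wrap-around indexing closes the shape.
    """
    rng = random.Random(seed)
    noise = [rng.random() for _ in range(num_points)]
    intensity = []
    for i in range(num_points):
        # Box-filter the noise along the contour (modulo: closed contour).
        window = [noise[(i + k) % num_points]
                  for k in range(-kernel_half, kernel_half + 1)]
        intensity.append(sum(window) / len(window))
    return intensity
```

In the actual technique, colour would additionally encode time or shape class on top of this luminosity signal.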
  • Item (Open Access)
    Simulation model for digital twins of pneumatic vacuum ejectors
    (2022) Stegmaier, Valentin; Schaaf, Walter; Jazdi, Nasser; Weyrich, Michael
    Increasing productivity, as well as flexibility, is required in the industrial production sector. To meet these challenges, concepts from the field of “Industry 4.0” are emerging, such as the concept of Digital Twins. Vacuum handling systems are a widespread technology for material handling in industry and face the same challenges and opportunities. In this field, a key issue is the lack of Digital Twins containing behavior models for vacuum handling systems and their components across different applications and use cases. A novel concept for modeling and simulating the fluidic behavior of pneumatic vacuum ejectors, as key components of vacuum handling systems, is proposed. To increase simulation accuracy, the concept can access instance-specific data of the asset in use instead of object-specific data. The model and the data are part of the Digital Twins of pneumatic vacuum ejectors, which can be combined with other components to represent a Digital Twin of an entire vacuum handling system. The proposed model is validated in an experimental test setup and in an industrial application, delivering sufficiently accurate results.
  • Item (Open Access)
    Visual analytics of multivariate intensive care time series data
    (2022) Brich, N.; Schulz, Christoph; Peter, J.; Klingert, W.; Schenk, M.; Weiskopf, Daniel; Krone, M.
    We present an approach for visual analysis of high-dimensional measurement data with varying sampling rates, as routinely recorded in intensive care units. In intensive care, most assessments depend not on a single measurement but on a plethora of mixed measurements over time. Even for trained experts, efficient and accurate analysis of such multivariate data remains a challenging task. We present a linked-view post hoc visual analytics application that reduces data complexity by combining projection-based time curves for overview with small multiples for details on demand. Our approach supports the analysis not only of individual patients but also of ensembles by adapting existing techniques using non-parametric statistics. We evaluated the effectiveness and acceptance of our approach through expert feedback from domain scientists in the surgical department using real-world data: a post-surgery study performed on a porcine surrogate model to identify parameters suitable for diagnosing and prognosticating the volume state, and clinical data from a public database. The results show that our approach allows for detailed analysis of changes in patient state while also summarizing the temporal development of the overall condition.
  • Item (Open Access)
    Coordinating with a robot partner affects neural processing related to action monitoring
    (2021) Czeszumski, Artur; Gert, Anna L.; Keshava, Ashima; Ghadirzadeh, Ali; Kalthoff, Tilman; Ehinger, Benedikt V.; Tiessen, Max; Björkman, Mårten; Kragic, Danica; König, Peter
    Robots are starting to play a role in our social landscape, and they are progressively becoming responsive, both physically and socially. This raises the question of how humans react to and interact with robots in a coordinated manner and what the neural underpinnings of such behavior are. This exploratory study aims to understand the differences between human-human and human-robot interactions at a behavioral level and from a neurophysiological perspective. For this purpose, we adapted a collaborative dynamical paradigm from the literature. We asked 12 participants to hold two corners of a tablet while collaboratively guiding a ball around a circular track, either with another participant or with a robot. At irregular intervals, the ball was perturbed outward, creating an artificial error in the behavior that required corrective measures to return to the circular track. Concurrently, we recorded electroencephalography (EEG). In the behavioral data, we found an increased velocity and positional error of the ball from the track in the human-human condition vs. the human-robot condition. For the EEG data, we computed event-related potentials. We found a significant difference between human and robot partners, driven by significant clusters at fronto-central electrodes. The amplitudes were stronger with a robot partner, suggesting different neural processing. All in all, our exploratory study suggests that coordinating with robots affects processing related to action monitoring. In the investigated paradigm, human participants treat errors during human-robot interaction differently from those made during interactions with other humans. These results could improve communication between humans and robots through the use of neural activity in real time.
  • Item (Open Access)
    The ethics of sustainable AI : why animals (should) matter for a sustainable use of AI
    (2023) Bossert, Leonie N.; Hagendorff, Thilo
    Technologies equipped with artificial intelligence (AI) influence our everyday lives in a variety of ways. Due to their contribution to greenhouse gas emissions, their high use of energy, but also their impact on fairness issues, these technologies are increasingly discussed in the “sustainable AI” discourse. However, current “sustainable AI” approaches remain anthropocentric. In this article, we argue from the perspective of applied ethics that such an anthropocentric outlook falls short. We present a sentientist approach, arguing that the normative foundation of sustainability and sustainable development, that is, theories of intra- and intergenerational justice, should include sentient animals. Consequently, theories of sustainable AI must also be non-anthropocentric. Moreover, we investigate the consequences of our approach for applying AI technologies in a sustainable way.
  • Item (Open Access)
    To bucket or not to bucket? : analyzing the performance and interpretability of hybrid hydrological models with dynamic parameterization
    (2024) Acuña Espinoza, Eduardo; Loritz, Ralf; Álvarez Chaves, Manuel; Bäuerle, Nicole; Ehret, Uwe
    Hydrological hybrid models have been proposed as an option to combine the enhanced performance of deep learning methods with the interpretability of process-based models. Among the various hybrid methods available, the dynamic parameterization of conceptual models using long short-term memory (LSTM) networks has shown high potential. We explored this method further to evaluate, specifically, whether the flexibility given by the dynamic parameterization overwrites the physical interpretability of the process-based part. We conducted our study using a subset of the CAMELS-GB dataset. First, we show that the hybrid model can reach state-of-the-art performance, comparable with that of an LSTM and surpassing the performance of conceptual models in the same area. We then modified the conceptual model structure to assess whether the dynamic parameterization can compensate for structural deficiencies of the model. Our results demonstrate that the deep learning method can effectively compensate for such deficiencies. A model selection technique based purely on streamflow-prediction performance is hence not advisable for this type of hybrid model. In a second experiment, we demonstrated that if a well-tested model architecture is combined with an LSTM, the deep learning model can learn to operate the process-based model in a consistent manner, and untrained variables can be recovered. In conclusion, for our case study, we show that hybrid models cannot surpass the performance of data-driven methods, and the remaining advantage of such models is access to untrained variables.
  • Item (Open Access)
    Mapping the ethics of generative AI : a comprehensive scoping review
    (2024) Hagendorff, Thilo
    The advent of generative artificial intelligence and its widespread adoption in society have engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature. The study offers a comprehensive overview for scholars, practitioners, and policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios.
  • Item (Open Access)
    Speciesist bias in AI : a reply to Arandjelović
    (2023) Hagendorff, Thilo; Bossert, Leonie; Fai, Tse Yip; Singer, Peter
    The elimination of biases in artificial intelligence (AI) applications (for example, biases based on race or gender) is a high priority in AI ethics. So far, however, efforts to eliminate bias have all been anthropocentric. Biases against nonhuman animals have not been considered, despite the influence AI systems can have on normalizing, increasing, or reducing the violence that is inflicted on animals, especially on farmed animals. Hence, in 2022, we published a paper in AI and Ethics in which we empirically investigated various examples of image recognition, word embedding, and language models, with the aim of testing whether they perpetuate speciesist biases. A critical response has appeared in AI and Ethics, accusing us of drawing upon theological arguments, having a naive anti-speciesist mindset, and making mistakes in our empirical analyses. We show that these claims are misleading.
  • Item (Open Access)
    Fairness hacking : the malicious practice of shrouding unfairness in algorithms
    (2024) Meding, Kristof; Hagendorff, Thilo
    Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested in fair AI practices. We introduce two different categories of fairness hacking in reference to the established concept of p-hacking. The first category, intra-metric fairness hacking, describes the misuse of a particular metric by adding or removing sensitive attributes from the analysis. In this context, countermeasures that have been developed to prevent or reduce p-hacking can be applied to similarly prevent or reduce fairness hacking. The second category of fairness hacking is inter-metric fairness hacking: the search for a specific fair metric given fixed attributes. We argue that countermeasures to prevent or reduce inter-metric fairness hacking are still in their infancy. Finally, we demonstrate both types of fairness hacking using real datasets. Our paper is intended to serve as guidance for discussions within the fair ML community to prevent or reduce the misuse of fairness metrics, and thus reduce overall harm from ML applications.
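Intra-metric fairness hacking, the misuse of one metric by choosing which sensitive attributes enter the analysis, can be shown on a tiny hypothetical example using the standard demographic parity difference. The data below is invented for illustration, not from the paper's datasets: the same decisions look perfectly fair along one attribute and strongly unfair along another, so reporting only the favorable attribute shrouds the disparity.

```python
def demographic_parity_gap(outcomes, groups):
    """Demographic parity difference: max gap in positive-outcome rates
    across the groups defined by one sensitive attribute."""
    by_group = {}
    for y, g in zip(outcomes, groups):
        by_group.setdefault(g, []).append(y)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical toy data: outcome 1 = favorable decision.
outcomes = [1, 1, 0, 0, 1, 0, 1, 0]
race     = ["a", "a", "a", "a", "b", "b", "b", "b"]
gender   = ["m", "f", "m", "f", "m", "f", "m", "f"]

# Intra-metric hacking: report only the attribute with the small gap.
gap_race = demographic_parity_gap(outcomes, race)      # 0.0 (looks fair)
gap_gender = demographic_parity_gap(outcomes, gender)  # 0.5 (clearly unfair)
```

Inter-metric fairness hacking would instead keep the attributes fixed and search across different fairness metrics until one reports a small value.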