05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Search Results
Item Open Access
How mature is requirements engineering for AI-based systems? A systematic mapping study on practices, challenges, and future research directions (2024)
Habiba, Umm-e-; Haug, Markus; Bogner, Justus; Wagner, Stefan
Artificial intelligence (AI) permeates all fields of life, which has resulted in new challenges in requirements engineering for artificial intelligence (RE4AI), e.g., the difficulty of specifying and validating requirements for AI or of considering new quality requirements arising from emerging ethical implications. It is currently unclear whether existing RE methods are sufficient or whether new ones are needed to address these challenges. Our goal is therefore to provide researchers and practitioners with a comprehensive overview of RE4AI: what has been achieved so far, i.e., which practices are available, and which research gaps and challenges still need to be addressed? To this end, we conducted a systematic mapping study combining query-string search and extensive snowballing. The extracted data was aggregated, and the results were synthesized using thematic analysis. Our selection process led to the inclusion of 126 primary studies. Existing RE4AI research focuses mainly on requirements analysis and elicitation, with most practices applied in these areas. Furthermore, we identified requirements specification, explainability, and the gap between machine learning engineers and end-users as the most prevalent challenges, along with a few others. Additionally, we propose seven potential research directions to address these challenges. Practitioners can use our results to identify and select suitable RE methods for their AI-based systems, while researchers can build on the identified gaps and research directions to push the field forward.

Item Open Access
How do ML practitioners perceive explainability? An interview study of practices and challenges (2024)
Habiba, Umm-e-; Habib, Mohammad Kasra; Bogner, Justus; Fritzsch, Jonas; Wagner, Stefan
Explainable artificial intelligence (XAI) is a field of study concerned with developing AI-based systems whose decision-making processes are understandable and transparent to users. Research has already identified explainability as an emerging requirement for AI-based systems that use machine learning (ML) techniques. However, there is a notable absence of studies investigating how ML practitioners perceive the concept of explainability, the challenges they encounter, and the potential trade-offs with other quality attributes. In this study, we investigate how practitioners define explainability for AI-based systems and what challenges they encounter in making such systems explainable. Furthermore, we explore how explainability interacts with other quality attributes. To this end, we conducted semi-structured interviews with 14 ML practitioners from 11 companies. Our study reveals diverse viewpoints on explainability and the practices applied. The results suggest that the importance of explainability lies in enhancing transparency, refining models, and mitigating bias. Methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are frequently used by ML practitioners to understand how models work, while tailored approaches are typically adopted to meet the specific requirements of stakeholders. Moreover, we discerned emerging challenges in eight categories.
Issues such as effective communication with non-technical stakeholders and the absence of standardized approaches are frequently cited as recurring hurdles. We contextualize these findings in terms of requirements engineering and conclude that the industry currently lacks a standardized framework to address emerging explainability needs.
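The techniques named in the second abstract, SHAP and LIME, are post-hoc explanation methods. As an illustration only, the following minimal Python sketch shows how they are commonly applied to a tabular classifier; the random-forest model, the sklearn breast-cancer dataset, and all parameter choices are assumptions made for the example and do not come from the interview study.

```python
# Illustrative sketch: typical SHAP and LIME usage on a tabular classifier.
# The model and dataset below are placeholders, not artifacts of the study.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy setup: a random-forest classifier on a standard sklearn dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: feature attributions for a tree-based model across the test set.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
if isinstance(shap_values, list):        # older SHAP: one array per class
    shap_values = shap_values[1]
elif shap_values.ndim == 3:              # newer SHAP: (samples, features, classes)
    shap_values = shap_values[:, :, 1]
shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)

# LIME: a local surrogate explanation for one individual prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=data.target_names, mode="classification"
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top locally weighted features for this instance
```

SHAP's summary plot ranks features by their contribution across the whole test set, while LIME's as_list() output lists the features that most influenced a single prediction; this global-versus-local distinction is one reason practitioners combine the two.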