05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 3 of 3
  • Item (Open Access)
    How mature is requirements engineering for AI-based systems? : a systematic mapping study on practices, challenges, and future research directions
    (2024) Habiba, Umm-e-; Haug, Markus; Bogner, Justus; Wagner, Stefan
    Artificial intelligence (AI) permeates all fields of life, which has resulted in new challenges in requirements engineering for artificial intelligence (RE4AI), e.g., the difficulty of specifying and validating requirements for AI or of considering new quality requirements due to emerging ethical implications. It is currently unclear whether existing RE methods are sufficient or whether new ones are needed to address these challenges. Therefore, our goal is to provide a comprehensive overview of RE4AI to researchers and practitioners: What has been achieved so far, i.e., what practices are available, and what research gaps and challenges still need to be addressed? To achieve this, we conducted a systematic mapping study combining query-string search and extensive snowballing. The extracted data were aggregated, and the results were synthesized using thematic analysis. Our selection process led to the inclusion of 126 primary studies. Existing RE4AI research focuses mainly on requirements analysis and elicitation, with most practices applied in these areas. Furthermore, we identified requirements specification, explainability, and the gap between machine learning engineers and end users as the most prevalent challenges, along with a few others. Additionally, we proposed seven potential research directions to address these challenges. Practitioners can use our results to identify and select suitable RE methods for their AI-based systems, while researchers can build on the identified gaps and research directions to push the field forward.
  • Item (Open Access)
    Software product line testing : a systematic literature review
    (2024) Agh, Halimeh; Azamnouri, Aidin; Wagner, Stefan
    A Software Product Line (SPL) is a software development paradigm in which a family of software products shares a set of core assets. Testing plays a vital role in both single-system and SPL development by identifying potential faults through examining the behavior of a product or products, but it is especially challenging in SPLs. There have been many research contributions in the SPL testing field; assessing the current state of research and practice is therefore necessary to understand the progress in testing practices and to identify the gap between required techniques and existing approaches. This paper surveys existing research on SPL testing to provide researchers and practitioners with up-to-date evidence and open issues that enable further development of the field. To this end, we conducted a Systematic Literature Review (SLR) with seven research questions, in which we identified and analyzed 118 studies dating from 2003 to 2022. The results indicate that the literature proposes many techniques for specific aspects (e.g., controlling cost/effort in SPL testing), whereas other elements (e.g., regression testing and non-functional testing) are not yet adequately covered by existing research. Furthermore, most approaches are evaluated with only one empirical method, usually an academic evaluation; this may jeopardize their adoption in industry. The results of this study can help identify gaps in SPL testing, since specific aspects of SPL engineering have yet to be fully addressed.
  • Item (Open Access)
    How do ML practitioners perceive explainability? : an interview study of practices and challenges
    (2024) Habiba, Umm-e-; Habib, Mohammad Kasra; Bogner, Justus; Fritzsch, Jonas; Wagner, Stefan
    Explainable artificial intelligence (XAI) is a field of study that focuses on developing AI-based systems whose decision-making processes are understandable and transparent for users. Research has already identified explainability as an emerging requirement for AI-based systems that use machine learning (ML) techniques. However, there is a notable absence of studies investigating how ML practitioners perceive the concept of explainability, the challenges they encounter, and the potential trade-offs with other quality attributes. In this study, we aim to discover how practitioners define explainability for AI-based systems and what challenges they encounter in making such systems explainable. Furthermore, we explore how explainability interacts with other quality attributes. To this end, we conducted semi-structured interviews with 14 ML practitioners from 11 companies. Our study reveals diverse viewpoints on explainability and the practices applied. The results suggest that the importance of explainability lies in enhancing transparency, refining models, and mitigating bias. Methods like SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are frequently used by ML practitioners to understand how models work, while tailored approaches are typically adopted to meet the specific requirements of stakeholders. Moreover, we discerned emerging challenges in eight categories; issues such as effective communication with non-technical stakeholders and the absence of standardized approaches are frequently cited as recurring hurdles. We contextualize these findings in terms of requirements engineering and conclude that industry currently lacks a standardized framework to address its arising explainability needs.
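
SHAP and LIME, named in the abstract above, are concrete open-source explainability libraries. As a point of reference only, the following is a minimal sketch of how a practitioner might apply both to a tabular model; the diabetes toy dataset, the random-forest stand-in model, and all variable names are assumptions made for this illustration, not material from the interview study.

```python
# Illustrative sketch only: a global explanation with SHAP, a local one with LIME.
# Assumes the shap, lime, and scikit-learn packages are installed; the dataset
# and model below are stand-ins, not artifacts from the study.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train a stand-in model on a small tabular regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global view with SHAP: per-feature contributions for every test prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)
shap.summary_plot(shap_values, X_test)       # beeswarm plot of feature impact

# Local view with LIME: a human-readable explanation of one prediction.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    mode="regression",
)
explanation = lime_explainer.explain_instance(
    X_test.values[0], model.predict, num_features=5
)
print(explanation.as_list())  # top five feature contributions for this instance
```

The two tools are complementary in the way the abstract suggests: SHAP summarizes feature contributions across the whole test set, a view suited to model refinement and bias checks, while LIME justifies a single prediction, the kind of tailored, stakeholder-facing explanation the interviewees describe.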