05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Search Results
2 results
Item (Open Access): Industry practices and challenges for the evolvability assurance of microservices: an interview study and systematic grey literature review (2021)
Bogner, Justus; Fritzsch, Jonas; Wagner, Stefan; Zimmermann, Alfred

Microservices, a lightweight and decentralized architectural style built on fine-grained services, promise several characteristics beneficial for sustainable long-term software evolution. Success stories from early adopters such as Netflix, Amazon, and Spotify have demonstrated that a high degree of flexibility and evolvability can be achieved with these systems. However, these advantageous characteristics offer no concrete guidance, and little is known about industrial evolvability assurance processes for microservices or the challenges in this area. Insights into the current state of practice are an important prerequisite for relevant research in this field. We therefore wanted to explore how practitioners structure the evolvability assurance processes for microservices, which tools, metrics, and patterns they use, and which challenges they perceive for the evolvability of their systems. We first conducted 17 semi-structured interviews and discussed 14 different microservice-based systems and their assurance processes with software professionals from 10 companies. Afterwards, we performed a systematic grey literature review (GLR) and used the coding system created for the interviews to analyze 295 practitioner online resources. The combined analysis revealed the importance of finding a sensible balance between decentralization and standardization. Guidelines such as architectural principles were seen as valuable for ensuring a baseline of consistency for evolvability, and specialized test automation was a prevalent theme. Source code quality was the primary target of tool and metric usage among our interview participants, while testing tools and productivity metrics were the focus of our GLR resources. In both studies, practitioners did not mention architectural or service-oriented tools and metrics, even though the most crucial challenges, such as Service Cutting and Microservices Integration, were architectural in nature. Practitioners relied on guidelines, standardization, or patterns like Event-Driven Messaging to partially address some of the reported evolvability challenges. However, specialized techniques, tools, and metrics are needed to support industry with the continuous evaluation of service granularity and dependencies. Future microservices research in the areas of maintenance, evolution, and technical debt should take our findings and the reported industry sentiments into account.

Item (Open Access): How do ML practitioners perceive explainability? An interview study of practices and challenges (2024)
Habiba, Umm-e-; Habib, Mohammad Kasra; Bogner, Justus; Fritzsch, Jonas; Wagner, Stefan

Explainable artificial intelligence (XAI) is a field of study focused on developing AI-based systems whose decision-making processes are understandable and transparent to users. Research has already identified explainability as an emerging requirement for AI-based systems that use machine learning (ML) techniques. However, there is a notable absence of studies investigating how ML practitioners perceive the concept of explainability, which challenges they encounter, and what the potential trade-offs with other quality attributes are.
In this study, we want to discover how practitioners define explainability for AI-based systems and which challenges they encounter in making these systems explainable. Furthermore, we explore how explainability interacts with other quality attributes. To this end, we conducted semi-structured interviews with 14 ML practitioners from 11 companies. Our study reveals diverse viewpoints on explainability and the practices applied to achieve it. Results suggest that the importance of explainability lies in enhancing transparency, refining models, and mitigating bias. Methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are frequently used by ML practitioners to understand how their models work, while tailored approaches are typically adopted to meet the specific requirements of stakeholders. Moreover, we discerned emerging challenges in eight categories; issues such as effective communication with non-technical stakeholders and the absence of standardized approaches were frequently cited as recurring hurdles. We contextualize these findings in terms of requirements engineering and conclude that industry currently lacks a standardized framework to address its emerging explainability needs.
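As a hedged illustration of the SHAP and LIME usage the abstract refers to: the minimal Python sketch below shows how each method is typically applied to a trained classifier. The dataset, model, and parameter choices are assumptions for demonstration only and are not taken from the study.

```python
# Illustrative sketch only -- not from the paper. Shows typical SHAP/LIME usage
# on an assumed scikit-learn model (pip install shap lime scikit-learn).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: game-theoretic feature attributions; TreeExplainer is the fast path
# for tree ensembles. Returns per-class attribution arrays for a classifier.
shap_values = shap.TreeExplainer(model).shap_values(X[:100])

# LIME: fits a local, interpretable surrogate model around a single prediction.
lime_exp = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), mode="classification"
).explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top features driving this one prediction
```

Note the division of labor the abstract hints at: SHAP attributes a model's output to features with a consistent global baseline, while LIME explains one prediction at a time via a local surrogate, which is why practitioners often combine them rather than choose one.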