05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Item Open Access Interdisciplinary composition of E-Learning platforms based on reusable low-code adapters (2022) Meißner, Niklas
Electronic Learning (E-Learning) platforms, or Learning Management Systems (LMSs), are becoming increasingly popular and, accordingly, are used more and more by teachers at schools and by university lecturers. They serve to distribute educational material to students digitally and provide the opportunity to, e.g., upload and collect assignments, solve tasks, and view grades. LMSs are used alongside in-person lectures as an adjunct to self-study. Due to digital teaching during the COVID-19 pandemic, their importance has increased significantly, and even in post-pandemic times, with in-person lectures returning, teaching at universities is hard to imagine without these platforms. The possibilities of working with the established LMSs are enormous. However, a closer look also reveals some negative aspects that were not considered in developing and using these platforms: the existing LMSs lack individualization options for lecturers and a motivating design for students. Plugins attempt to remedy this, but they are complex and time-consuming to use. The underlying problems are thus, on the one hand, that lecturers are limited in the design of their courses and, on the other hand, that students experience disadvantages in terms of motivation and interactivity. This thesis aims to develop a concept for an e-learning platform that addresses these problems, supports lecturers in designing their courses, and motivates and assists students in learning. Under the aspect of generalization, a concept for a Software Product Line (SPL) was developed for the requirements of a wide variety of study programs, providing lecturers with a base platform and enabling them to use low-code adapters to design and modify their courses. In addition, the platform and a support team assist lecturers in using the LMS and creating educational material. For the conceptual design of the LMS, existing solutions and approaches addressing similar problems were examined. However, these solve the problem insufficiently or overlap with the problem statement of this thesis only to a limited extent. After a requirements analysis, the requirements were gathered and listed so that solutions could be developed. The prototypical implementation of the concept 'Interactive Training Remote Education Experience (IT-REX)' was used to design the base e-learning platform and to include gamification aspects. However, since IT-REX was designed for computer science and software engineering students in their first semesters, it had to be modified for a broader range of uses. To evaluate the approach, a case study was conducted in which a low-fidelity prototype of the concept was presented to lecturers and other experts in the fields of higher education didactics, learning psychology, and vocational and technical pedagogy. Subsequently, a questionnaire was used to assess the previously defined requirements. The result of this work is the concept for the e-learning platform with the corresponding prototype. Based on the feedback of the lecturers and experts, improvements and revisions could be identified. Furthermore, the evaluation helped to investigate how the platform's usability could be enhanced to improve the structuring and design of the courses by the lecturers. Finally, future developments and further investigations based on the concept were described.
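The abstract leaves open how such low-code adapters plug into the base platform. Purely as an illustration of the composition idea, the following Python sketch assembles a course page from reusable adapters selected through a declarative configuration; all class names, adapter types, and parameters are hypothetical and not taken from IT-REX.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch: a base platform composes a course page from reusable
# adapters that lecturers select via a declarative (low-code) configuration.

class CourseAdapter(Protocol):
    def render(self, course: str) -> str: ...

@dataclass
class QuizAdapter:
    questions: int = 5
    def render(self, course: str) -> str:
        return f"[quiz] {self.questions} questions for {course}"

@dataclass
class LeaderboardAdapter:  # gamification element
    top_n: int = 10
    def render(self, course: str) -> str:
        return f"[leaderboard] top {self.top_n} students in {course}"

ADAPTERS = {"quiz": QuizAdapter, "leaderboard": LeaderboardAdapter}

def compose_course(course: str, config: list[dict]) -> str:
    """Instantiate the configured adapters and render the course page."""
    blocks = [ADAPTERS[entry.pop("type")](**entry).render(course) for entry in config]
    return "\n".join(blocks)

print(compose_course("Software Engineering I",
                     [{"type": "quiz", "questions": 3},
                      {"type": "leaderboard"}]))
```

The registry-plus-configuration shape is what would let lecturers pick building blocks without writing code.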
Item Open Access Migration monolithischer Anwendungen in Microservices-basierte Architekturen : Fallstudie einer Service/Sales-Applikation (2023) Knodel, Marvin
Many legacy systems in industry are implemented as monolithic architectures today. Some companies are set on migrating their large applications to a microservices architecture, expecting numerous benefits. The company L-mobile from Sulzbach an der Murr likewise intends to move its Service/Sales application toward possible microservices operation. Since there are many approaches for migrating a monolith to a microservices application, the Empirical Software Engineering group at the Institute of Software Engineering of the University of Stuttgart has developed a framework for microservices migration that incorporates, in particular, approaches from the scientific literature. Using this framework, this thesis performs a partial migration of L-mobile's Service/Sales application as a proof of concept. First, a literature review was conducted to cover the fundamentals of monoliths, microservices, and the corresponding migrations in general. The microservices migration framework was then applied to a partial migration of the Service/Sales application. In this process, the framework recommended a service identification approach and a migration strategy for L-mobile's application. Challenges also arose during the migration. Some of them, such as migrating the database, are also reported in the scientific literature; others, such as a lack of experience with architecture evaluations and with implementing microservices, are specific to L-mobile. By collecting structured field notes while applying the framework and through several post-migration reviews, the framework was assessed regarding its suitability for migrating the Service/Sales application. The evaluation showed that the framework was well suited for the migration within the proof of concept, as it guides the migration comprehensively, takes an architecture evaluation into account, suggests suitable methods for service identification and migration, and supports the creation of the architecture by proposing patterns and best practices. The framework is also suitable for the complete migration of the Service/Sales application.

Item Open Access Migrating monolithic architectures to microservices : a study on software quality attributes (2022) Koch, Daniel
There are many motivations for migrating from a monolithic to a microservice architecture, such as high scalability or improved maintainability. However, several factors must be considered in the migration process, including quality attributes. Since migrating to a microservice architecture is no simple task, defined quality goals can help select a suitable migration approach and subsequently make appropriate architectural decisions. The goal of this thesis is to investigate how quality attributes can be integrated into the migration process in order to support practitioners and software architects.
It also examines what role they play in the migration process. To this end, a literature review was first conducted to identify the quality attributes relevant to a microservice architecture. The quality attributes were then mapped to the migration approaches that optimize them toward the target architecture, as well as to architectural patterns and best practices. Based on these results, a quality model was created that also takes the interdependencies and trade-offs between the attributes into account. In this way, the quality model is intended to serve as a guideline that facilitates the selection of suitable techniques and architectural decisions based on the defined quality goals. The quality model was then integrated into a tool meant to guide practitioners through the migration process. To examine the tool's usability with respect to the quality model, an evaluation was conducted in the form of a survey with four practitioners from industry. The results show that the integrated quality model can support the migration process in practice based on the defined quality goals and that the tool extension exhibits high usability.
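As a rough illustration of the kind of mapping such a quality model encodes, the following sketch relates quality goals to candidate patterns and flags trade-offs between goals; the concrete attributes, patterns, and interdependencies below are generic examples, not those derived in the thesis.

```python
# Illustrative toy encoding of a quality model: quality goals map to
# architectural patterns that promote them, and known tensions between
# goals are recorded as trade-offs. Entries are generic examples only.

PATTERNS = {
    "scalability": ["horizontal duplication", "event-driven messaging"],
    "maintainability": ["database per service", "API gateway"],
    "performance": ["caching", "asynchronous communication"],
}

TRADE_OFFS = {  # (goal pair) -> note on the tension between them
    frozenset({"maintainability", "performance"}):
        "finer-grained services ease maintenance but add network latency",
}

def recommend(goals: set[str]) -> None:
    """Print candidate patterns per goal and flag conflicting goal pairs."""
    for goal in sorted(goals):
        print(f"{goal}: consider {', '.join(PATTERNS.get(goal, ['(no entry)']))}")
    for pair, note in TRADE_OFFS.items():
        if pair <= goals:
            print(f"trade-off between {' and '.join(sorted(pair))}: {note}")

recommend({"maintainability", "performance"})
```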
Item Open Access Evaluating human-computer interfaces for specification and comprehension of transient behavior in microservice-based software systems (2020) Beck, Samuel
Modern software systems are subject to constant change while operating in production. New agile development methods such as continuous deployment and DevOps enable developers to deploy code changes frequently. Failures and self-adaptation through mechanisms such as elastic scaling and resilience patterns also introduce changes into a system at runtime. For that reason, these increasingly complex and distributed systems continuously exhibit transient behavior: the state that occurs while transitioning from one state to another. To make statements about a system's reliability and performance, it is imperative that this transient behavior is specified in non-functional requirements and that stakeholders can review whether these requirements are met. However, due to the complexity of this behavior and the accompanying specifications, only experts can achieve this. This thesis aims to make the specification of non-functional requirements for, and the comprehension of, transient behavior in microservice systems more accessible, particularly for stakeholders who lack expert knowledge about transient behavior. To achieve this, novel approaches are explored that utilize modern human-computer interaction methods. At first, the state of the art in transient behavior in software systems, human-computer interaction, and software visualization is presented. Subsequently, expert interviews are conducted to understand how transient behavior is handled in practice and which requirements experts have for an envisioned solution. Based on this, a concept for a solution is proposed, which integrates different visualizations with a chatbot, and implemented as a prototype. Finally, the prototype is evaluated in an expert study. The evaluation shows that the approach can support software architects and DevOps engineers in creating and verifying specifications for transient behavior. However, it also reveals that the prototype can still be improved further. Furthermore, it was shown that the integration of a chatbot into the solution was not helpful for the participants. In conclusion, human-computer interaction and visualization methods can be applied to the problems of specifying and analyzing transient behavior to support software architects and engineers. The developed prototype shows potential for the exploration of transient behavior. The evaluation also revealed many opportunities for future improvements.

Item Open Access Bimodal taint analysis for detecting unusual parameter-sink flows (2022) Chow, Yiu Wai
Finding vulnerabilities is a crucial activity, and automated techniques for this purpose are in high demand. For example, the Node Package Manager (npm) offers a massive number of software packages, which get installed and used by millions of developers each day. Because of the dense network of dependencies between npm packages, vulnerabilities in individual packages may easily affect a wide range of software. Taint analysis is a powerful tool to detect such vulnerabilities. However, it is challenging to clearly define a problematic flow. A possible way to identify problematic flows is to incorporate natural-language information such as code conventions and informal knowledge into the analysis. For example, a user might not find it surprising that a parameter named cmd of a function named execCommand is open to command injection. This flow is likely unproblematic, as the user will not pass untrusted data to cmd. In contrast, a user might not expect a parameter named value of a function named staticSetConfig to be vulnerable to command injection. This flow is likely problematic, as the user might pass untrusted data to value, since the natural-language information from the parameter and function name suggests a different security context. To effectively exploit the implicit information in code, we introduce a bimodal taint analysis tool, Fluffy. The first modality is code: Fluffy uses a mining analysis implemented in CodeQL to find examples of flows from parameters to vulnerable sinks. The second modality is natural language: Fluffy uses a machine learning model that, based on a corpus of such examples, learns how to distinguish unexpected flows from expected flows using natural-language information. We instantiate four neural models, offering different trade-offs between the manual effort required and the accuracy of predictions. In our evaluation, Fluffy achieves an F1-score of 0.85 or more on four common vulnerability types. In addition, Fluffy flags eleven previously unknown vulnerabilities in real-life projects, of which six are confirmed.
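Fluffy's four neural models are not reproduced here; as a minimal stand-in for the natural-language modality, the following sketch scores a parameter-sink flow as expected or unexpected by checking whether identifier tokens already signal the sink's security context. The vocabulary and tokenization rule are invented for illustration; the actual tool learns this distinction from a mined corpus.

```python
import re

# Toy stand-in for the natural-language modality: a flow from a parameter to
# a command-injection sink is "expected" if the parameter or function name
# already signals command execution. Vocabulary and rule are invented here;
# Fluffy uses learned neural models instead of a fixed token list.

CMD_TOKENS = {"cmd", "command", "exec", "shell", "run", "script"}

def tokens(identifier: str) -> set[str]:
    """Split camelCase/snake_case identifiers into lowercase tokens."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", identifier)
    return {p.lower() for p in parts}

def is_unexpected_flow(func: str, param: str) -> bool:
    """Flag the flow if no name token hints at the sink's security context."""
    return not (tokens(func) | tokens(param)) & CMD_TOKENS

print(is_unexpected_flow("execCommand", "cmd"))        # False: expected flow
print(is_unexpected_flow("staticSetConfig", "value"))  # True: suspicious flow
```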
Item Open Access Enhancing automotive safety through an ADAS violation dashboard (2024) Senger, Tobias
Autonomous Driving (AD) is an active area of research in which Advanced Driver Assistance Systems (ADAS) play an important role. Ensuring the safety of ADAS systems is critical. However, most ADAS systems nowadays make use of deep learning or other types of machine learning, and formally verifying these systems to ensure their safety is hardly possible. For this reason, Radic explored the use of Runtime Monitoring (RM) to ensure the safety of ADAS systems by detecting violations of several specified Safety Requirements (SRs) at runtime. After performing a test run with the system, she manually analyzed the causes of each series of violations in the extracted Violations Report. As this was laborious and time-consuming, this thesis explores available approaches and techniques to automatically derive the root causes of violation series. To do this, we first perform an exploratory literature search. This allows us to identify that the most suitable approach to address our problem is Root Cause Analysis (RCA) using Language Models (LMs), Large Language Models (LLMs), Knowledge Graphs (KGs), or a combination of them. We perform a Rapid Review (RR) to find concrete techniques for this approach. We then conduct a narrative data synthesis to explore the techniques retrieved in our RR. This allows us to derive a plan to automatically analyze the causes of SR violations in a Violations Report. Our solution is then incorporated into a web-based safety dashboard application. This application enables safety engineers to configure ADAS use cases, test tracks, and test runs. The safety engineer can then select a test run to display an interactive view of it, select individual violation series, and analyze their root causes using our automated RCA solution based on LLMs. To evaluate the effectiveness of our system, we conduct a simple experiment. It shows that our system already achieves performance comparable to a human baseline provided by Radic. Our system therefore represents a valuable tool for safety engineers to identify and repair safety-critical problems in ADAS systems in the context of AD. We also propose modified variants of our system that allow researchers to improve our automated RCA system in the future, e.g., by incorporating a KG.
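The exact report format, prompt, and model used by the dashboard are not given in the abstract; the sketch below only illustrates the general pattern of LLM-based RCA over a violations report. The violation fields, prompt wording, and the query_llm stub are placeholders, not the thesis's implementation.

```python
import json

# Illustrative pattern only: turn a series of safety-requirement violations
# into an RCA prompt for an LLM. Report fields and wording are placeholders.

def build_rca_prompt(violations: list[dict]) -> str:
    report = json.dumps(violations, indent=2)
    return (
        "You are a safety engineer analyzing an ADAS test run.\n"
        "Given the following safety requirement violations, identify the\n"
        "most likely root cause and suggest a repair.\n\n"
        f"Violations report:\n{report}"
    )

def query_llm(prompt: str) -> str:
    """Placeholder: forward the prompt to whichever LLM API is available."""
    raise NotImplementedError

violations = [
    {"requirement": "SR-3: keep min distance", "t": 12.4, "distance_m": 4.1},
    {"requirement": "SR-3: keep min distance", "t": 12.9, "distance_m": 3.2},
]
print(build_rca_prompt(violations))
```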
Item Open Access Automatic resource scaling in cloud applications - case study in cooperation with AEB SE (2021) Weiler, Simon
As an increasing number of applications continue to migrate into the cloud, implementing automatic scaling of computing resources to meet service-level objectives under dynamic load is becoming a common challenge for software developers. To research how this problem can be tackled in practice, a state-of-the-art auto-scaling solution was developed and implemented in cooperation with AEB SE as part of their application migration to a new Kubernetes cluster. Requirements were elicited via interviews with their development and IT operations staff, who put a strong focus on fast response times for automated requests as the main performance goal, with CPU, memory, and response times being the most commonly used performance indicators for their systems. Using the collected knowledge, a scaling architecture was developed using their existing performance monitoring tools and Kubernetes' own Horizontal Pod Autoscaler, with a special adapter used for communicating the metrics between the two components. The system was tested on a deployment of AEB's test product using three different scaling approaches, based on CPU utilization, JVM memory usage, and response time quantiles, respectively. Evaluation results show that scaling approaches based on CPU utilization and memory usage are highly dependent on the type of requests and the implementation of the tested application, while response-time-based scaling provides a more aggregated view of system performance and also reflects the actions of the scaler in its metrics. Overall, though, the resulting performance was mostly the same for all scaling approaches, showing that the described architecture works in practice; a more elaborate evaluation at larger scale in a more optimized cluster would be needed to clearly distinguish the performance of different scaling strategies in a production environment.
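To make the described architecture concrete, here is a sketch of what an autoscaling/v2 HorizontalPodAutoscaler driven by a response-time quantile exposed as a custom pods metric could look like (clusters of that era would use autoscaling/v2beta2); the metric name, deployment name, and target value are assumptions, not AEB's actual configuration.

```python
import json

# Sketch of a Kubernetes autoscaling/v2 HPA scaling on a response-time
# quantile exposed as a custom pods metric through a metrics adapter.
# Metric name, deployment name, and target value are assumptions.

hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "test-product-hpa"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment",
                           "name": "test-product"},
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Pods",
            "pods": {
                "metric": {"name": "http_response_time_p95_seconds"},
                # scale out once the per-pod p95 exceeds 500 ms
                "target": {"type": "AverageValue", "averageValue": "500m"},
            },
        }],
    },
}

print(json.dumps(hpa, indent=2))  # e.g., save as a file and kubectl apply -f it
```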
Item Open Access Verifikation softwareintensiver Fahrwerksysteme (2023) Hellhake, Dominik; Wagner, Stefan (Prof. Dr.)
Context: The growing significance of software-based functions in modern vehicles is driving many changes in the automotive development process. In the past, a vehicle consisted of several Electronic Control Units (ECUs), each executing individual, mutually independent software functions. Today, by contrast, multiple ECUs form functionally coherent subsystems that implement cross-cutting, networked software functions such as driver assistance and automated driving functions. This trend toward a highly interconnected software system creates a strong demand for suitable architecture models and design methods in the development of modern vehicles. Because ECUs are developed by different development suppliers, systematic integration testing methods are additionally needed to verify the correct interaction behavior of each individual ECU over the course of vehicle development. Here, coupling is a widely used metric in component-based software systems to reflect quality properties such as understandability, reusability, modifiability, and testability. Problem statement: While coupling is a suitable measure of the quality of a software design, few scientific contributions exist on its value for the integration testing process of the system resulting from that design. Existing work on integration testing describes the stepwise integration of white-box software components using properties and metrics derived from the implementation. This dependence on source code and software structure, however, means that these methods cannot be transferred to vehicle development, since vehicle systems consist to a large extent of black-box software. Consequently, there are also no methods for measuring test coverage or for prioritizing the tests to be executed. In practice, this leads to purely experience-based approaches in which significant parts of the interaction behavior remain untested during vehicle development. Goals: To find solutions to this problem, this thesis develops systematic and empirically evaluated testing methods that can be applied to integration testing during vehicle development. In doing so, we primarily want to provide insight into the potential of coupling metrics for test case prioritization. The goal of this thesis is to provide a recommendation for the systematic integration testing of vehicle systems based on the interaction behavior of individual ECUs. Methods: To achieve these goals, we first analyze the state of practice as currently applied at BMW for integration testing of chassis systems. We contrast this with the state of research regarding existing testing methods that can be transferred to the problem of integrating vehicle systems. Based on this set of scientifically evaluated methods, we then derive concrete procedures for measuring test coverage and for test case prioritization. Both procedures are empirically evaluated in this thesis using test and failure data from a vehicle development project. Contributions: In summary, this thesis makes two contributions, which we merge into one central contribution. The first is a method for measuring test coverage based on the inter-component data flow of black-box components. The definition of a data-flow classification scheme makes it possible to collect data on the use of data flows in existing test cases as well as in failures found in the various test phases. The second contribution is a correlation study between different coupling metrics and the failure distribution in a chassis system. We evaluate the coupling values of individual software interfaces as well as those of the components implementing them. Taken together, this study reflects the potential of such coupling metrics for use in test prioritization. The findings from both contributions are merged in our main contribution, a coupling-based test strategy for system integration testing. Conclusion: The contribution of this thesis connects, for the first time, the state of practice in system integration of distributed black-box software systems with the state of research on systematic approaches to integrating software systems. Measuring test coverage based on data flow is an effective method for this purpose, since the data flow in a system reflects the interaction behavior of its components. In addition, the possible interaction behavior of all components of the system can be derived from its architecture specifications. The studies on the correlation between coupling and failure distribution further reveal a moderate dependency. Therefore, selecting test cases based on the component interactions exercised in a test case and their coupling is a sensible approach in practice. However, the moderate correlation also indicates that additional aspects need to be considered when selecting test cases for integration testing.
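The thesis's data-flow classification scheme is not reproduced here; the following sketch illustrates only the underlying coverage idea, namely the fraction of architecture-specified inter-ECU data flows that the existing test cases exercise. ECU and signal names are invented.

```python
# Toy illustration of data-flow-based integration test coverage: the flows
# exercised by the test suite relative to all sender->receiver signal flows
# admitted by the architecture specification. All names are invented.

# (sender ECU, receiver ECU, signal) tuples from the architecture spec
specified_flows = {
    ("steering_ecu", "chassis_ecu", "steering_angle"),
    ("brake_ecu", "chassis_ecu", "wheel_speed"),
    ("chassis_ecu", "damper_ecu", "damper_setpoint"),
}

# flows actually exercised, aggregated from the existing test cases
tested_flows = {
    ("steering_ecu", "chassis_ecu", "steering_angle"),
    ("brake_ecu", "chassis_ecu", "wheel_speed"),
}

coverage = len(tested_flows & specified_flows) / len(specified_flows)
print(f"data-flow coverage: {coverage:.0%}")  # 67%
for flow in sorted(specified_flows - tested_flows):
    print("untested interaction:", flow)
```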
Item Open Access Developing an autonomous trading system : a case study on AI engineering practices (2022) Grote, Marcel
Today, more and more systems use AI to efficiently solve complex problems. While this often increases a system's performance and efficiency, developing systems with AI functionality is a more difficult process due to the additional complexity. Thus, engineering practices are required to ensure the quality of the resulting software. Since the development of AI-based systems comes with new challenges, new engineering practices are needed for such development processes. Many practices have already been proposed for the development of AI-based systems; however, only few practical experiences have been accumulated in applying them. This study aims to address this problem by collecting such experiences. Furthermore, our objective is to accumulate evidence of the effectiveness of these proposed practices. Additionally, we analyze challenges that occur during such a development process and provide solutions to overcome them. Lastly, we examine the tools proposed for developing AI-based systems, aiming to identify how helpful these tools are and how they affect the resulting system. We conducted a single case study in which we developed an autonomous stock trading system that uses machine learning functionality to invest in stocks. Before development, we conducted literature surveys to identify effective practices and useful tools for such an AI development process. During development, we applied ten practices and seven tools. Using structured field notes, we documented the effects of these practices and tools, as well as the challenges that occurred during development and the solutions we applied to overcome them. After development, we analyzed the collected field notes. We evaluated how the application of each practice and tool simplified development and how it affected software quality. Moreover, the experiences collected in applying these proposed practices and tools, and the challenges encountered, are compared with the existing literature. Our experiences and the evidence we collected during this study can serve as advice to simplify the development of AI-based systems and to improve software quality.
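The thesis reports on the engineering process rather than on the trading logic itself, so the following sketch shows only the generic shape of such a system: a model scores each stock and a rule turns scores into orders. The momentum stand-in model, symbols, and threshold are invented, not taken from the case study.

```python
import random

# Generic shape of an ML-driven trading decision: a placeholder model scores
# each stock, and a simple rule maps scores to orders. The momentum "model",
# symbols, and threshold are invented stand-ins for illustration.

def predict_return(prices: list[float]) -> float:
    """Placeholder model: recent momentum as the expected return."""
    return (prices[-1] - prices[0]) / prices[0]

def decide(symbol: str, prices: list[float], threshold: float = 0.02) -> str:
    score = predict_return(prices)
    if score > threshold:
        return f"BUY  {symbol} (expected return {score:+.1%})"
    if score < -threshold:
        return f"SELL {symbol} (expected return {score:+.1%})"
    return f"HOLD {symbol}"

random.seed(1)
for symbol in ("AAA", "BBB"):
    prices = [100.0]
    for _ in range(29):  # simulated 30-day price history
        prices.append(prices[-1] * (1 + random.uniform(-0.01, 0.012)))
    print(decide(symbol, prices))
```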
Item Open Access Evaluation and control of the value provision of complex IoT service systems (2022) Niedermaier, Sina; Wagner, Stefan (Prof. Dr.)
The Internet of Things (IoT) represents an opportunity for companies to create additional consumer value by merging connected products with software-based services. The quality of an IoT service can determine whether it is consumed in the long term and whether it delivers the expected value for a consumer. Since IoT services are usually provided by distributed systems whose operations are becoming increasingly complex and dynamic, continuous monitoring and control of the value provision is necessary. The individual components of IoT service systems are usually developed and operated by specialized teams in a division of labor. With the increasing specialization of the teams, practitioners struggle to derive quality requirements based on consumer needs. Consequently, the teams often observe the behavior of “their” components in isolation, without relation to the value provided to a consumer. Inadequate monitoring and control of the value provision across the different components of an IoT system can result in quality deficiencies and a loss of value for the consumer. The goal of this dissertation is to support organizations with concepts and methods in the development and operations of IoT service systems to ensure the quality of the value provision to a consumer. Applying empirical methods, we first analyzed the challenges and practices applied in industry as well as the state of the art. Based on the results, we refined existing concepts and approaches. To evaluate their quality in use, we conducted action research projects in collaboration with industry partners. Based on an interview study with industry experts, we analyzed the current challenges, requirements, and applied solutions for the operations and monitoring of distributed systems in more detail. The findings of this study form the basis for the further contributions of this thesis. To support and improve communication between the specialized teams in handling quality deficiencies, we developed a classification for system anomalies, which we applied and evaluated in an action research project in industry. It allows organizations to differentiate and adapt their actions according to different classes of anomalies. Thus, quick and effective actions to ensure the value provision or to minimize the loss of value can be optimized separately from actions in the context of long-term, sustainable correction of the IoT system. Moreover, the classification of system anomalies enables the organization to create feedback loops for quality improvement of the system, the IoT service, and the organization. To evaluate the delivered value of an IoT service, we decompose it into discrete workflows, so-called IoT transactions. Applying distributed tracing, the dynamic behavior of an IoT transaction can be reconstructed and made “observable”. Consequently, the successful completion of a transaction and its quality can be determined by applying indicators. We developed an approach for the systematic derivation of quality indicators. By comparing actual values determined in operations with previously defined target values, the organization is able to detect anomalies in the temporal behavior of the value provision. As a result, the value provision can be controlled with appropriate actions. The quality in use of the approach is confirmed in another action research project with an industry partner. In summary, this thesis supports organizations in quantifying the delivered value of an IoT service and in controlling the value provision with effective actions. Furthermore, the trust of a consumer in the IoT service provided by an IoT system, and in the organization, can be maintained and further increased by applying appropriate feedback loops.
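As a small illustration of the indicator idea described above, the sketch below reconstructs IoT transactions from trace spans, computes an end-to-end latency indicator, and compares it against a previously defined target value; the span format, components, and target are invented, not the thesis's derivation approach.

```python
from collections import defaultdict

# Toy illustration: group distributed-tracing spans into IoT transactions,
# derive a quality indicator (end-to-end latency of complete transactions),
# and compare it against a target value. Span format and target are invented.

# span = (transaction_id, component, start_s, end_s, succeeded)
spans = [
    ("tx1", "device",  0.00, 0.05, True),
    ("tx1", "gateway", 0.05, 0.30, True),
    ("tx1", "backend", 0.30, 0.80, True),
    ("tx2", "device",  0.00, 0.04, True),
    ("tx2", "gateway", 0.04, 0.90, False),  # failed hop: incomplete transaction
]

TARGET_LATENCY_S = 1.0  # previously defined target value for the indicator

by_tx = defaultdict(list)
for tx_id, component, start, end, ok in spans:
    by_tx[tx_id].append((start, end, ok))

for tx_id, hops in by_tx.items():
    complete = all(ok for _start, _end, ok in hops)
    latency = max(end for _s, end, _ok in hops) - min(start for start, _e, _ok in hops)
    status = "ok" if complete and latency <= TARGET_LATENCY_S else "anomalous"
    print(f"{tx_id}: complete={complete}, latency={latency:.2f}s -> {status}")
```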