05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 61
  • Open Access
    Interdisciplinary composition of E-Learning platforms based on reusable low-code adapters
    (2022) Meißner, Niklas
    Electronic Learning (E-Learning) platforms or Learning Management Systems (LMSs) are becoming increasingly popular and, accordingly, are being used more and more by teachers at schools and by university professors. They are used to distribute educational material to students digitally and provide the opportunity to, e.g., upload and collect assignments, solve tasks, and view grades. LMSs have been growing in popularity and are used alongside in-person lectures as an adjunct to self-study. Due to digital teaching during the COVID-19 pandemic, LMSs have increased in importance significantly. Even in post-pandemic times, with in-person lectures returning, it is hard to imagine teaching at universities without these platforms. The possibilities of working with the established LMSs are enormous. However, a closer look also reveals some negative aspects that were not considered in the development and use of these platforms. Existing LMSs lack individualization options for lecturers' courses and a motivating design for students. Plugins attempt to remedy this, but they are complex and time-consuming to use. The underlying problems are thus, on the one hand, that lecturers are limited in the design of their courses and, on the other hand, that students experience disadvantages in terms of motivation and interactivity. This thesis aims to develop a concept for an e-learning platform that addresses these problems, supports lecturers in designing their courses, and motivates and assists students in learning. With generalization in mind, a Software Product Line (SPL) concept was developed to cover the requirements of a wide variety of study programs, providing lecturers with a base platform and enabling them to use low-code adapters to design and modify their courses (a hypothetical sketch of such adapter-based composition follows below). In addition, the platform and a support team will assist lecturers in using the LMS and creating educational material. For the conceptual design of the LMS, existing solutions and approaches addressing similar problems were examined. However, these solve similar problems insufficiently or overlap with the problem statement of this thesis only to a limited extent. After a requirements analysis, the requirements were gathered and listed so that solutions could be developed. The prototypical implementation of the concept 'Interactive Training Remote Education Experience (IT-REX)' was used to design the base e-learning platform and to include gamification aspects. However, since IT-REX was designed for computer science and software engineering students in their first semesters, it had to be modified for a broader range of uses. To evaluate the concept, a case study was conducted in which a low-fidelity prototype was presented to lecturers and other experts in the fields of higher education didactics, learning psychology, and vocational and technical pedagogy. Subsequently, a questionnaire was used to assess the previously defined requirements. The result of this elaboration is the concept for the e-learning platform together with the corresponding prototype. Based on the feedback from the lecturers and experts, improvements and revisions could be identified. Furthermore, the evaluation helped to investigate how the platform's usability could be enhanced to improve the structuring and design of courses by lecturers. Finally, future developments and further investigations based on the concept were described.
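    The thesis does not publish a concrete adapter format, so the following is purely a hypothetical sketch of what composing a course from reusable low-code adapters could look like; the Adapter and Course types and all example adapters are invented for illustration.

```python
# Hypothetical sketch: composing a course from reusable low-code adapters.
# All names (Adapter, Course, the example adapters) are illustrative and
# not taken from the thesis or from IT-REX.
from dataclasses import dataclass, field

@dataclass
class Adapter:
    """A reusable building block a lecturer configures instead of coding."""
    name: str
    config: dict

@dataclass
class Course:
    title: str
    adapters: list[Adapter] = field(default_factory=list)

    def add(self, adapter: Adapter) -> "Course":
        self.adapters.append(adapter)
        return self  # fluent style, so lecturers chain configuration steps

# A lecturer assembles a course from prefabricated adapters:
course = (
    Course("Software Engineering I")
    .add(Adapter("quiz", {"questions": 10, "retries": 2}))
    .add(Adapter("leaderboard", {"anonymized": True}))  # gamification aspect
    .add(Adapter("assignment-upload", {"deadline": "2022-07-15"}))
)
print([a.name for a in course.adapters])
```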
  • Open Access
    Migration of monolithic applications to microservices-based architectures: a case study of a service/sales application
    (2023) Knodel, Marvin
    Many legacy systems in industry are implemented as monolithic architectures. Some companies therefore aim to migrate their large applications to a microservices architecture, expecting numerous benefits. The company L-mobile from Sulzbach an der Murr likewise intends to move its service/sales application toward a possible microservices operation. Since there are many approaches for migrating a monolith to a microservices application, the Empirical Software Engineering group of the Institute of Software Engineering at the University of Stuttgart has developed a framework for microservices migration that incorporates, in particular, approaches from the scientific literature. Using this framework, this thesis carries out a partial migration of L-mobile's service/sales application as a proof of concept. First, a literature review was conducted to establish the fundamentals of monoliths, microservices, and migrations between them. Subsequently, the microservices migration framework was applied to a partial migration of the service/sales application. In this process, the framework recommended a service identification approach and a migration strategy for the L-mobile application. Challenges also arose during the migration. Some of them, such as migrating the database, are also reported in the scientific literature; others, such as a lack of experience with architecture evaluations and with implementing microservices, are specific to L-mobile. By collecting structured field notes while applying the framework and through several post-migration reviews, the framework was assessed regarding its suitability for migrating the service/sales application. This evaluation showed that the framework was well suited for the migration within the proof of concept, as it guides the migration comprehensively, takes an architecture evaluation into account, suggests suitable methods for service identification and migration, and supports the creation of the architecture by proposing patterns and best practices. The framework is also suitable for the complete migration of the service/sales application.
  • Open Access
    Migrating monolithic architectures to microservices: a study on software quality attributes
    (2022) Koch, Daniel
    There are many motivations for migrating from a monolithic to a microservice architecture, e.g., high scalability or improved maintainability. However, several factors must be considered in the migration process, including quality attributes. Since migrating to a microservice architecture is no simple task, defined quality goals can help select a suitable migration approach and subsequently make appropriate architectural decisions. The goal of this thesis is to investigate how quality attributes can be incorporated into the migration process in order to support practitioners and software architects, and to examine what role they play in that process. To this end, a literature review was first conducted to identify the quality attributes relevant to a microservice architecture. The quality attributes were then mapped to the migration approaches that optimize them toward the target architecture, as well as to architectural patterns and best practices. Based on the previously collected results, a quality model was created that also takes the interdependencies and trade-offs between the attributes into account. In this way, the quality model is intended to serve as a guide that facilitates the selection of suitable techniques and architectural decisions based on the defined quality goals (a rough illustration of such a mapping follows below). The developed quality model was then integrated into a tool intended to guide practitioners through the migration process. To examine the usability of the tool with respect to the quality model, an evaluation in the form of a survey with four practitioners from industry was conducted. The result of the evaluation shows that the integrated quality model can support the migration process in practice on the basis of the defined quality goals and that the tool extension exhibits high usability.
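    The thesis's actual quality model is not reproduced in the abstract; as a rough, assumption-laden illustration, such a mapping from quality attributes to supporting patterns and trade-offs might be encoded like this (all entries are invented):

```python
# Illustrative only: the thesis's actual quality model is not reproduced
# here. A minimal way to encode "quality attribute -> supporting patterns"
# plus known trade-offs, with hypothetical entries for demonstration.
QUALITY_MODEL = {
    "scalability": {
        "patterns": ["database per service", "asynchronous messaging"],
        "trade_offs": {"maintainability": "more moving parts to operate"},
    },
    "maintainability": {
        "patterns": ["API gateway", "service per bounded context"],
        "trade_offs": {"performance": "extra network hops"},
    },
}

def advise(goals: list[str]) -> None:
    """Print patterns and trade-offs for the practitioner's quality goals."""
    for goal in goals:
        entry = QUALITY_MODEL.get(goal)
        if entry is None:
            print(f"{goal}: not covered by this sketch")
            continue
        print(f"{goal}: consider {', '.join(entry['patterns'])}")
        for other, note in entry["trade_offs"].items():
            print(f"  trade-off vs. {other}: {note}")

advise(["scalability", "maintainability"])
```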
  • Open Access
    Evaluating human-computer interfaces for specification and comprehension of transient behavior in microservice-based software systems
    (2020) Beck, Samuel
    Modern software systems are subject to constant change while operating in production. Agile development methods such as continuous deployment and DevOps enable developers to deploy code changes frequently. In addition, failures and self-adaptation through mechanisms such as elastic scaling and resilience patterns introduce changes into a system at runtime. For these reasons, such increasingly complex and distributed systems continuously exhibit transient behavior, i.e., the state that occurs while transitioning from one state to another. To make statements about a system's reliability and performance, it is imperative that this transient behavior is specified in non-functional requirements and that stakeholders can review whether these requirements are met. However, due to the complexity of this behavior and the accompanying specifications, only experts can achieve this. This thesis aims to make the specification of non-functional requirements for, and the comprehension of, transient behavior in microservice systems more accessible, particularly for stakeholders who lack expert knowledge about transient behavior. To achieve this, novel approaches are explored that utilize modern human-computer interaction methods to address this problem. First, the state of the art in transient behavior in software systems, human-computer interaction, and software visualization is presented. Subsequently, expert interviews are conducted to understand how transient behavior is handled in practice and which requirements experts have for an envisioned solution. Based on this, a concept for a solution is proposed, which integrates different visualizations with a chatbot, and implemented as a prototype. Finally, the prototype is evaluated in an expert study. The evaluation shows that the approach can support software architects and DevOps engineers in creating and verifying specifications for transient behavior. However, it also reveals that the prototype can still be improved further. Furthermore, it was shown that the integration of a chatbot into the solution was not helpful for the participants. In conclusion, human-computer interaction and visualization methods can be applied to the problems of specifying and analyzing transient behavior to support software architects and engineers. The developed prototype shows potential for the exploration of transient behavior, and the evaluation also revealed many opportunities for future improvements.
  • Open Access
    Bimodal taint analysis for detecting unusual parameter-sink flows
    (2022) Chow, Yiu Wai
    Finding vulnerabilities is a crucial activity, and automated techniques for this purpose are in high demand. For example, the Node Package Manager (npm) offers a massive number of software packages, which get installed and used by millions of developers each day. Because of the dense network of dependencies between npm packages, vulnerabilities in individual packages may easily affect a wide range of software. Taint analysis is a powerful tool to detect such vulnerabilities. However, it is challenging to clearly define a problematic flow. A possible way to identify problematic flows is to incorporate natural language information, such as code conventions and informal knowledge, into the analysis. For example, a user might not find it surprising that a parameter named cmd of a function named execCommand is open to command injection. This flow is thus likely unproblematic, as the user will not pass untrusted data to cmd. In contrast, a user might not expect a parameter named value of a function named staticSetConfig to be vulnerable to command injection. This flow is thus likely problematic, as the user might pass untrusted data to value, since the natural language information from the parameter and function names suggests a different security context. To effectively exploit the implicit information in code, we introduce a bimodal taint analysis tool, Fluffy. The first modality is code: Fluffy uses a mining analysis implemented in CodeQL to find examples of flows from parameters to vulnerable sinks. The second modality is natural language: Fluffy uses a machine learning model that, based on a corpus of such examples, learns how to distinguish unexpected flows from expected flows using natural language information (a toy sketch of this second modality follows below). We instantiate four neural models, offering different trade-offs between the manual effort required and the accuracy of predictions. In our evaluation, Fluffy achieves an F1-score of 0.85 or more on four common vulnerability types. In addition, Fluffy flags eleven previously unknown vulnerabilities in real-life projects, of which six have been confirmed.
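    The following is a toy sketch of the natural-language modality, assuming a simple token-overlap score in place of Fluffy's neural models; the corpus entries and scores are invented for illustration:

```python
# Toy illustration of Fluffy's second (natural-language) modality; the real
# tool uses neural models trained on mined flows. Names and labels here are
# invented for demonstration.
import re
from collections import Counter

def tokens(identifier: str) -> list[str]:
    """Split camelCase / snake_case identifiers into lowercase word tokens."""
    parts = re.sub(r"([a-z])([A-Z])", r"\1 \2", identifier).replace("_", " ")
    return parts.lower().split()

# Tiny "corpus" of (function, parameter, sink) flows labeled expected (True)
# or unexpected (False):
CORPUS = [
    (("execCommand", "cmd", "command-injection"), True),
    (("runShell", "command", "command-injection"), True),
    (("staticSetConfig", "value", "command-injection"), False),
    (("setOption", "name", "command-injection"), False),
]

def expectedness(func: str, param: str, sink: str) -> float:
    """Score how 'expected' a flow is by token overlap with labeled flows."""
    query = Counter(tokens(func) + tokens(param) + tokens(sink))
    score, total = 0.0, 0.0
    for (f, p, s), expected in CORPUS:
        overlap = sum((query & Counter(tokens(f) + tokens(p) + tokens(s))).values())
        score += overlap if expected else -overlap
        total += overlap
    return score / total if total else 0.0

# Low scores suggest a surprising (potentially vulnerable) flow:
print(expectedness("spawnCommand", "cmd", "command-injection"))      # higher
print(expectedness("setConfigValue", "value", "command-injection"))  # lower
```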
  • Open Access
    Enhancing automotive safety through an ADAS violation dashboard
    (2024) Senger, Tobias
    Autonomous Driving (AD) is an active area of research in which Advanced Driver Assistance Systems (ADAS) play an important role. Ensuring the safety of ADAS systems is critical. However, most ADAS systems nowadays make use of deep learning or other types of machine learning, and formally verifying such systems to ensure their safety is hardly possible. For this reason, Radic explored the use of Runtime Monitoring (RM) to ensure the safety of ADAS systems by detecting violations of several specified Safety Requirements (SRs) at runtime. After performing a test run with the system, she manually analyzed the causes of each series of violations in the extracted Violations Report. As this was laborious and time-consuming, this thesis explores available approaches and techniques to automatically derive the root causes of violation series. To do this, we first perform an exploratory literature search. This allows us to identify Root Cause Analysis (RCA) using Language Models (LMs), Large Language Models (LLMs), Knowledge Graphs (KGs), or a combination of them as the most suitable approach to address our problem. We perform a Rapid Review (RR) to find concrete techniques for this approach and then conduct a narrative data synthesis to explore the retrieved techniques. This allows us to derive a plan to automatically analyze the causes of SR violations in a Violations Report. Our solution is then incorporated into a web-based safety dashboard application. This application enables safety engineers to configure ADAS use cases, test tracks, and test runs. The safety engineer can then select a test run to display an interactive view of it, select individual violation series, and analyze their root causes using our automated RCA solution based on LLMs (a hypothetical sketch of such an analysis step follows below). To evaluate the effectiveness of our system, we conduct a simple experiment, which shows that our system already achieves performance comparable to a human baseline provided by Radic. Our system therefore represents a valuable tool for safety engineers to identify and repair safety-critical problems in ADAS systems in the context of AD. We also propose modified variants of our system that allow researchers to improve our automated RCA system in the future, e.g., by incorporating a KG.
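    As a purely hypothetical sketch of the LLM-based RCA step, one might assemble a violation series into a prompt like this; the Violation structure, field names, and prompt wording are assumptions, not the dashboard's actual design:

```python
# Hypothetical sketch of LLM-assisted root cause analysis for a violation
# series; the dashboard's actual prompt design and model are not shown in
# the abstract, so everything below is assumed structure.
from dataclasses import dataclass

@dataclass
class Violation:
    timestamp: float          # seconds into the test run
    requirement: str          # violated Safety Requirement (SR)
    signals: dict             # relevant sensor/actuator values

def build_rca_prompt(series: list[Violation]) -> str:
    """Assemble a prompt asking an LLM for the likely root cause."""
    lines = [
        "You are assisting a safety engineer analyzing an ADAS test run.",
        "The following safety-requirement violations occurred in sequence:",
    ]
    for v in series:
        lines.append(f"- t={v.timestamp:.1f}s, SR={v.requirement}, signals={v.signals}")
    lines.append("Explain the most plausible root cause and cite the signals.")
    return "\n".join(lines)

series = [
    Violation(12.4, "SR-03 keep safe distance", {"ego_speed": 31.2, "gap_m": 8.1}),
    Violation(12.9, "SR-03 keep safe distance", {"ego_speed": 30.8, "gap_m": 6.5}),
]
print(build_rca_prompt(series))  # this string would be sent to the LLM
```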
  • Open Access
    Analyzing student knowledge status
    (2024) Keller, Tessa Madleine
    Over time, various learning management systems have been developed. One of these is MEITREX, a gamified intelligent tutoring system developed specifically for software engineering education at higher education institutions. By providing students with individual feedback and learning material adapted to each student's individual progress, MEITREX is meant to increase students' motivation. To provide individual feedback and adapt learning materials to a student's current knowledge status, that status needs to be determined automatically and reliably. Currently, MEITREX uses a simple score to determine students' current knowledge. This approach has several drawbacks: for example, it does not determine a student's knowledge of a single skill, but of the content of a whole chapter. Additionally, a student needs to repeat each assessment of a chapter a certain number of times to master the chapter's content, even if they mastered it before. Over the past decades, different approaches for estimating students' knowledge status have been introduced, ranging from simple machine learning models to complicated neural networks, hidden Markov models, and approaches that originated in the chess world but have been modified. All of these approaches have in common that their estimate of a student's knowledge status is based on the student's performance on exercises. One of these established and reliable approaches should replace the old score. Not all models are equally well suited for use in MEITREX. Therefore, the requirements such a model needs to meet were defined, and a requirements analysis based on the existing literature was conducted to find promising model groups, as the existing approaches can be grouped into eight groups of models. Based on the results of this analysis, the performance of eight promising models from three different model groups was tested. Of all tested models that fulfill the most important requirements, M-Elo showed the best performance, and therefore M-Elo was integrated into MEITREX (a minimal sketch of the underlying Elo-style update rule follows below). JUnit tests and a short evaluation showed that the integration of M-Elo into MEITREX was successful.
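    As a minimal sketch of the Elo idea this model family builds on: the classic chess-derived update nudges a student-ability estimate and an item-difficulty estimate after every answer. M-Elo extends this to multiple skills, which is not reproduced here, and the constant K below is an assumed value:

```python
# Minimal sketch of an Elo-style knowledge-tracing update, assuming the
# standard chess-derived rating formulation; M-Elo as used in the thesis
# extends this to multiple skills/concepts, which is not reproduced here.
import math

K = 0.4  # learning-rate constant (assumed value)

def expected_correct(ability: float, difficulty: float) -> float:
    """Probability that the student answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def update(ability: float, difficulty: float, correct: bool) -> tuple[float, float]:
    """After one answer, nudge student ability up/down and item difficulty
    in the opposite direction, proportional to the surprise."""
    p = expected_correct(ability, difficulty)
    error = (1.0 if correct else 0.0) - p
    return ability + K * error, difficulty - K * error

# A student (ability 0.0) answers an item (difficulty 0.5) correctly:
ability, difficulty = update(0.0, 0.5, True)
print(round(ability, 3), round(difficulty, 3))  # ability rises, item looks easier
```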
  • Open Access
    Evaluating the maintainability of variability concepts in cloud deployment technologies
    (2024) Kenworthy, Samuel David
    Context. Deployment (Automation) Technologies (DTs) allow automating the provisioning and management of cloud deployments. It is often required to use multiple technologies concurrently and to define multiple variations of a deployment model to satisfy various user requirements such as cost or elasticity. Therefore, variability must be managed across multiple DTs. Problem. Managing variability across multiple DTs can lead to increased complexity, as DTs support different variability mechanisms of varying expressiveness. The Variable Deployment Metamodel (VDMM) introduces variability management concepts on top of the DT-agnostic Essential Deployment Metamodel (EDMM), which does not support variability. Thus, VDMM proposes improving maintainability by providing a single deployment model supporting variability. However, the effectiveness of VDMM in terms of these improvements has yet to be validated against other DTs. Objective. In this thesis, we identify and classify different variability concepts in DTs. This classification serves as a tool for understanding the implications of deployment variability for maintainability, supporting the primary objective of this thesis: to evaluate and compare the maintainability of variability concepts in VDMM and other popular DTs. Method. We derive a classification framework for variability concepts in cloud DTs from literature research and an analysis of the official documentation of DTs. Based on the Goal Question Metric (GQM) approach, we evaluate the maintainability of variability concepts in DTs by conducting a case study and measuring and evaluating the defined metrics. Result. Our classification framework contains three dimensions: (i) Variability Concepts define high-level concepts enabling variability in the deployment model, (ii) Variability Implementations are mechanisms used by DTs to implement these concepts, and (iii) Variability Properties define the properties of the implementations. The case study shows that DTs using General-Purpose Programming Languages (GPLs), such as Pulumi, are the most maintainable technologies supporting variability in the deployment model for our scenario (a schematic illustration of GPL-based variability follows below). Furthermore, using VDMM's internal pruning algorithm requires less maintenance work when implementing architectural changes than technologies such as Ansible, Terraform, and OpenStack Heat, which do not provide such algorithms. Conclusion. The classification framework aids the case study evaluation in understanding why variability in some DTs may be more maintainable than in others. The evaluation provides a set of key findings concerning the maintainability of variability concepts, such as the most efficient DTs and variability implementations for implementing architectural changes in our scenario. However, the evaluation also shows that the maintainability of variability depends on the use case and the complexity of the implemented changes; for example, using expressions to implement configuration changes requires less work in our scenario.
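    As a provider-agnostic illustration of why GPL-based DTs handle variability well: ordinary conditionals and loops select deployment-model elements. A real Pulumi program would use provider SDKs; the Component type and all options below are invented:

```python
# Provider-agnostic illustration of GPL-based deployment variability:
# ordinary conditionals and loops derive concrete variants from shared
# definitions. Real Pulumi programs would use provider SDKs; the Component
# type and all options below are invented.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    type: str
    properties: dict

def deployment_model(variant: str, replicas: int) -> list[Component]:
    """Derive one concrete deployment variant from shared definitions."""
    model = [Component("app", "container", {"image": "shop:1.4", "replicas": replicas})]
    if variant == "elastic":
        model.append(Component("autoscaler", "scaler", {"min": 1, "max": replicas}))
    else:  # the cost-optimized variant omits the autoscaler entirely
        model[0].properties["replicas"] = 1
    return model

for variant in ("elastic", "cost"):
    print(variant, [c.name for c in deployment_model(variant, replicas=5)])
```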
  • Open Access
    Wasm-R3: creating executable benchmarks of WebAssembly binaries via record-reduce-replay
    (2024) Getz, Jakob
    WebAssembly is the newest language to arrive on the web and has now been implemented in all major browsers for several years. It features a compact binary format, making it fast to load, decode, and run. To evaluate and improve the performance of WebAssembly engines, relevant executable benchmarks are required. Existing benchmarks such as PolyBenchC and SPEC CPU have shortcomings in their relevance, since they do not necessarily represent real-world WebAssembly applications well. To make the creation of such benchmarks faster and simpler, we develop Wasm-R3, an approach that records existing web applications and generates executable benchmarks from them. Wasm-R3's workflow consists of three phases: record, reduce, and replay. In the record phase, the instrumenter instruments the website's WebAssembly code; a user then interacts with the website, which causes traces of the execution to be recorded. Since these traces are typically large, unnecessary information is filtered out in the reduce phase. In the replay phase, a replay generator takes these traces along with the original web application's WebAssembly binary and generates a standalone executable benchmark from them (a schematic sketch of this pipeline follows below). We evaluate Wasm-R3 by implementing it in TypeScript and Rust and showing that the generated benchmarks correctly mimic the behavior of the recorded application. We further demonstrate that replays can be generated in reasonable time, measuring a mean wall time of 8.651 seconds, and that our benchmarks are portable across a variety of different WebAssembly engines.
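    A schematic sketch of the record-reduce-replay pipeline, assuming a trace is simply a list of host calls; Wasm-R3's actual trace format and replay generator are far more involved:

```python
# Schematic sketch of the record-reduce-replay idea, assuming a trace is a
# list of host calls into the WebAssembly module; Wasm-R3's actual trace
# format and replay generator are far more involved.

def record() -> list[dict]:
    """Stand-in for instrumentation: pretend the user triggered these calls."""
    return [
        {"func": "render", "args": [0]},
        {"func": "render", "args": [0]},       # redundant repeat
        {"func": "on_click", "args": [3, 7]},
    ]

def reduce_trace(trace: list[dict]) -> list[dict]:
    """Reduce phase: drop consecutive duplicate events (assumed redundant
    in this toy; the real tool applies more careful filtering)."""
    reduced: list[dict] = []
    for event in trace:
        if not reduced or reduced[-1] != event:
            reduced.append(event)
    return reduced

def generate_replay(trace: list[dict]) -> str:
    """Replay phase: emit a standalone driver that re-issues the calls."""
    calls = "\n".join(
        f"instance.exports.{e['func']}({', '.join(map(str, e['args']))})"
        for e in trace
    )
    return "# auto-generated benchmark driver\n" + calls

print(generate_replay(reduce_trace(record())))
```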
  • Open Access
    Factors that enhance female participation in German computer science curricula: an exploration
    (2022) Schäfer, Melanie
    The phenomenon of women's underrepresentation in computer science degree programs at Germany's universities can be examined from two perspectives. The negative factors, i.e., why women decide against such studies, have been considered in various scientific research works. The aim of this thesis is to uncover the positive factors behind women's decisions to study computer science. Using Kathy Charmaz's Constructivist Grounded Theory, an initial theory or taxonomy is to be conceived. For data generation, 5 female students at the University of Stuttgart were interviewed to explore their motivations and decisions. The coding analysis conducted in parallel and the initial theory building yielded a total of 5 central factors. The first is interest development, which spans from initiation to identification, specification, and differentiation from other interests. Related to this is the self-efficacy process, i.e., the development of the inner conviction in one's own ability to master difficult challenges. The third factor of personality development is the autonomy process, which accompanies the students' growing independence up to the start of their studies. In addition, two further factors emerged. Convergence describes the mutual approach of both parties, shaped by points of contact; what is decisive is not the number but the intensity of the interest-fostering encounters. The last factor comprises STEM abilities, which specifically concern the students' mathematical understanding. With regard to computer science, the five factors are strongly coherent and can influence each other both negatively and positively. Knowledge of these factors and the degree to which they can be influenced from outside can be used to pursue support measures in order to attract more women to such studies.