05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6


Search Results

Now showing 1 - 10 of 77
  • Item (Open Access)
    Interdisciplinary composition of E-Learning platforms based on reusable low-code adapters
    (2022) Meißner, Niklas
    Electronic Learning (E-Learning) platforms, or Learning Management Systems (LMSs), are becoming increasingly popular and are used more and more by teachers at schools and by university lecturers. They distribute educational material to students digitally and provide the opportunity to, e.g., upload and collect assignments, solve tasks, and view grades. Alongside in-person lectures, they serve as an adjunct to self-study. Digital teaching during the COVID-19 pandemic increased the importance of LMSs significantly, and even now, with in-person lectures returning, it is hard to imagine teaching at universities without these platforms. The possibilities of working with the established LMSs are enormous. However, a closer look also reveals negative aspects that were not considered in the development and use of these platforms: existing LMSs lack options for lecturers to individualize their courses and a motivating design for students. Plugins attempt to remedy this, but they are complex and time-consuming to use. The underlying problems are thus, on the one hand, that lecturers are limited in the design of their courses and, on the other hand, that students experience disadvantages in terms of motivation and interactivity. This thesis aims to develop a concept for an e-learning platform that addresses these problems, supports lecturers in designing their courses, and motivates and assists students in learning. With generalization in mind, a Software Product Line (SPL) concept was developed to meet the requirements of a wide variety of study programs, providing lecturers with a base platform and enabling them to use low-code adapters to design and modify their courses. In addition, the platform and a support team will assist lecturers in using the LMS and creating educational material.
For the conceptual design of the LMS, existing solutions and approaches addressing similar problems were examined. However, these either solve the problem insufficiently or overlap with this thesis's problem statement only to a limited extent. A requirements analysis gathered and documented the requirements, from which solutions were then developed. The prototypical implementation of the concept 'Interactive Training Remote Education Experience' (IT-REX) was used to design the base e-learning platform and to include gamification aspects. However, since IT-REX was designed for computer science and software engineering students in their first semesters, it had to be modified for a broader range of uses. To evaluate the concept, a case study was conducted in which a low-fidelity prototype was presented to lecturers and other experts in the fields of higher-education didactics, learning psychology, and vocational and technical pedagogy. Subsequently, a questionnaire was used to assess the previously defined requirements. The result of this work is the concept for the e-learning platform with the corresponding prototype. Based on the feedback of the lecturers and experts, improvements and revisions were identified. Furthermore, the evaluation helped to investigate how the platform's usability could be enhanced to improve the structuring and design of courses by lecturers. Finally, future developments and further investigations based on the concept are described.
  • Item (Open Access)
    Migrating monolithic architectures to microservices : a study on software quality attributes
    (2022) Koch, Daniel
    There are many motivations for migrating from a monolithic to a microservice architecture, e.g., high scalability or improved maintainability. However, several factors must be considered in the migration process, including quality attributes. Since migrating to a microservice architecture is no simple task, defined quality goals can help select a suitable migration approach and subsequently make appropriate architectural decisions. The goal of this thesis is to investigate how quality attributes can be integrated into the migration process in order to support practitioners and software architects, and to examine what role they play in that process. To this end, a literature review was first conducted to identify the quality attributes relevant to a microservice architecture. The quality attributes were then mapped to the migration approaches that optimize them towards the target architecture, as well as to architectural patterns and best practices. Based on the collected results, a quality model was created that also takes into account the interdependencies and trade-offs between the attributes. In this way, the quality model is intended to serve as a guide that facilitates the selection of suitable techniques and architectural decisions based on the defined quality goals. The developed quality model was then integrated into a tool intended to guide practitioners through the migration process. To investigate the usability of the tool with respect to the quality model, an evaluation in the form of a survey with four practitioners from industry was conducted. The result of the evaluation shows that the integrated quality model can support the migration process in practice based on the defined quality goals, and that the tool extension exhibits high usability.
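The core idea of such a quality model, mapping quality goals to the techniques and patterns that promote them, can be sketched as a simple lookup. The attribute and pattern names below are illustrative assumptions for this sketch, not taken from the thesis's actual model.

```python
# Illustrative sketch: quality goals mapped to microservice patterns
# that promote them. The names are assumptions, not the thesis's model.
QUALITY_TO_PATTERNS = {
    "scalability": ["database-per-service", "asynchronous messaging"],
    "maintainability": ["decompose-by-business-capability", "API gateway"],
    "resilience": ["circuit breaker", "bulkhead"],
}

def recommend_patterns(quality_goals):
    """Collect candidate patterns for the given goals, deduplicated,
    in the order the goals were specified."""
    seen, result = set(), []
    for goal in quality_goals:
        for pattern in QUALITY_TO_PATTERNS.get(goal, []):
            if pattern not in seen:
                seen.add(pattern)
                result.append(pattern)
    return result

print(recommend_patterns(["scalability", "resilience"]))
```

A full quality model would additionally encode the interdependencies and trade-offs the thesis mentions, e.g., that a pattern improving one attribute may degrade another.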
  • Item (Open Access)
    Evaluating human-computer interfaces for specification and comprehension of transient behavior in microservice-based software systems
    (2020) Beck, Samuel
    Modern software systems are subject to constant change while operating in production. Agile development methods such as continuous deployment and DevOps enable developers to deploy code changes frequently. Failures and self-adaptation through mechanisms such as elastic scaling and resilience patterns also introduce changes into a system at runtime. For these reasons, such increasingly complex and distributed systems continuously exhibit transient behavior: the state that occurs while transitioning from one steady state to another. To make statements about a system's reliability and performance, it is imperative that this transient behavior is specified in non-functional requirements and that stakeholders can review whether these requirements are met. However, due to the complexity of this behavior and the accompanying specifications, only experts can achieve this. This thesis aims to make the specification of non-functional requirements for, and the comprehension of, transient behavior in microservice systems more accessible, particularly for stakeholders who lack expert knowledge about transient behavior. To achieve this, novel approaches are explored that utilize modern human-computer interaction methods to address this problem. First, the state of the art in transient behavior in software systems, human-computer interaction, and software visualization is presented. Subsequently, expert interviews are conducted to understand how transient behavior is handled in practice and which requirements experts have for an envisioned solution. Based on this, a concept is proposed that integrates different visualizations with a chatbot, and it is implemented as a prototype. Finally, the prototype is evaluated in an expert study. The evaluation shows that the approach can support software architects and DevOps engineers in creating and verifying specifications for transient behavior. However, it also reveals that the prototype can still be improved and that the integration of a chatbot was not helpful for the participants. In conclusion, human-computer interaction and visualization methods can be applied to the problems of specifying and analyzing transient behavior to support software architects and engineers. The developed prototype shows potential for the exploration of transient behavior, and the evaluation revealed many opportunities for future improvements.
  • Item (Open Access)
    Bimodal taint analysis for detecting unusual parameter-sink flows
    (2022) Chow, Yiu Wai
    Finding vulnerabilities is a crucial activity, and automated techniques for this purpose are in high demand. For example, the Node Package Manager (npm) offers a massive number of software packages, which are installed and used by millions of developers each day. Because of the dense network of dependencies between npm packages, vulnerabilities in individual packages may easily affect a wide range of software. Taint analysis is a powerful tool to detect such vulnerabilities. However, it is challenging to clearly define a problematic flow. A possible way to identify problematic flows is to incorporate natural-language information, such as code conventions and informal knowledge, into the analysis. For example, a user might not find it surprising that a parameter named cmd of a function named execCommand is open to command injection. This flow is likely unproblematic, as the user will not pass untrusted data to cmd. In contrast, a user might not expect a parameter named value of a function named staticSetConfig to be vulnerable to command injection. This flow is likely problematic, as the user might pass untrusted data to value, since the natural-language information from the parameter and function names suggests a different security context. To effectively exploit the implicit information in code, we introduce a bimodal taint analysis tool, Fluffy. The first modality is code: Fluffy uses a mining analysis implemented in CodeQL to find examples of flows from parameters to vulnerable sinks. The second modality is natural language: Fluffy uses a machine learning model that, based on a corpus of such examples, learns to distinguish unexpected flows from expected flows using natural-language information. We instantiate four neural models, offering different trade-offs between the manual effort required and the accuracy of predictions. In our evaluation, Fluffy achieves an F1-score of 0.85 or more on four common vulnerability types. In addition, Fluffy flags eleven previously unknown vulnerabilities in real-life projects, of which six are confirmed.
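The naming intuition behind the natural-language modality can be illustrated with a toy heuristic: a parameter-to-sink flow is "expected" when the function or parameter name already signals the sink's security context. Fluffy itself trains neural models on a mined corpus; the keyword list and name-splitting rule below are purely illustrative assumptions.

```python
import re

# Toy sketch of the naming intuition, NOT Fluffy's actual model: a flow
# into a command-execution sink counts as "expected" if the identifiers
# already contain command-related words. The hint list is an assumption.
COMMAND_HINTS = {"cmd", "command", "exec", "shell", "run"}

def is_expected_flow(function_name, parameter_name):
    """True if the names suggest users will anticipate a command sink."""
    tokens = set()
    for name in (function_name, parameter_name):
        # crude camelCase / snake_case split into lowercase words
        tokens.update(t.lower() for t in re.findall(r"[A-Za-z][a-z]*", name))
    return bool(tokens & COMMAND_HINTS)

print(is_expected_flow("execCommand", "cmd"))        # expected flow
print(is_expected_flow("staticSetConfig", "value"))  # unexpected flow
```

A learned model generalizes this idea beyond a fixed keyword list by inferring the security context from the corpus of mined flows.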
  • Item (Open Access)
    Automatic resource scaling in cloud applications - case study in cooperation with AEB SE
    (2021) Weiler, Simon
    As an increasing number of applications migrate into the cloud, implementing automatic scaling of computing resources to meet service-level objectives under dynamic load is becoming a common challenge for software developers. To research how this problem can be tackled in practice, a state-of-the-art auto-scaling solution was developed and implemented in cooperation with AEB SE as part of their application migration to a new Kubernetes cluster. Requirements elicitation was done via interviews with their development and IT operations staff, who put a strong focus on fast response times for automated requests as the main performance goal, with CPU, memory, and response times being the most commonly used performance indicators for their systems. Using the collected knowledge, a scaling architecture was developed using their existing performance-monitoring tools and Kubernetes' own Horizontal Pod Autoscaler, with a special adapter communicating the metrics between the two components. The system was tested on a deployment of AEB's test product using three different scaling approaches, based on CPU utilization, JVM memory usage, and response-time quantiles, respectively. The evaluation shows that scaling approaches based on CPU utilization and memory usage are highly dependent on the type of requests and the implementation of the tested application, while response-time-based scaling provides a more aggregated view of system performance and also reflects the actions of the scaler in its metrics. Overall, however, the resulting performance was mostly the same for all scaling approaches, showing that the described architecture works in practice; a more elaborate evaluation at larger scale in a more optimized cluster would be needed to clearly distinguish the performance of different scaling strategies in a production environment.
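The Horizontal Pod Autoscaler mentioned above scales replicas proportionally to the ratio of the current metric value to its target: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to configured bounds. A minimal sketch of that rule applied to a response-time quantile, with illustrative numbers:

```python
import math

def desired_replicas(current_replicas, current_value, target_value,
                     min_replicas=1, max_replicas=10):
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    clamped to the configured replica bounds. Here the metric is a
    response-time quantile; the numbers below are illustrative only."""
    desired = math.ceil(current_replicas * current_value / target_value)
    return max(min_replicas, min(max_replicas, desired))

# p95 response time of 450 ms against a 300 ms target, with 3 replicas:
print(desired_replicas(3, 450, 300))  # ceil(3 * 1.5) = 5 replicas
```

The same formula explains why CPU- and memory-based scaling depend so strongly on the workload: the chosen metric must actually rise with load for the ratio to trigger a scale-out.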
  • Item (Open Access)
    Developing an autonomous trading system : a case study on AI engineering practices
    (2022) Grote, Marcel
    Today, more and more systems are using AI to efficiently solve complex problems. While in many cases this increases the system’s performance and efficiency, developing such systems with AI functionality is a more difficult process due to the additional complexity. Thus, engineering practices are required to ensure the quality of the resulting software. Since the development of AI-based systems comes with new challenges, new engineering practices are needed for such development processes. Many practices have already been proposed for the development of AI-based systems. However, only a few practical experiences have been accumulated in applying these practices. This study aims to address this problem by collecting such experiences. Furthermore, our objective is to accumulate evidence of the effectiveness of these proposed practices. Additionally, we analyze challenges that occur during such a development process and provide solutions to overcome them. Lastly, we examine the tools proposed to develop AI-based systems. We aim to identify how helpful these tools are and how they affect the resulting system. We conducted a single case study in which we developed an autonomous stock trading system that uses machine learning functionality to invest in stocks. Before development, we conducted literature surveys to identify effective practices and useful tools for such an AI development process. During the development, we applied ten practices and seven tools. Using structured field notes, we documented the effects of these practices and tools. Furthermore, we used field notes to document challenges that occurred during the development and the solutions we applied to overcome them. After the development, we analyzed the collected field notes. We evaluated how the application of each practice and tool simplified the development and how it affected the software quality. 
Moreover, the experiences collected in applying these proposed practices and tools, and the challenges encountered, were compared with existing literature. Our experiences and the evidence we collected during this study can serve as guidance for simplifying the development of AI-based systems and improving software quality.
  • Item (Open Access)
    Evaluation and control of the value provision of complex IoT service systems
    (2022) Niedermaier, Sina; Wagner, Stefan (Prof. Dr.)
    The Internet of Things (IoT) represents an opportunity for companies to create additional consumer value by merging connected products with software-based services. The quality of an IoT service can determine whether it is consumed in the long term and whether it delivers the expected value for a consumer. Since IoT services are usually provided by distributed systems whose operations are becoming increasingly complex and dynamic, continuous monitoring and control of the value provision is necessary. The individual components of IoT service systems are usually developed and operated by specialized teams in a division of labor. With the increasing specialization of the teams, practitioners struggle to derive quality requirements based on consumer needs. Consequently, the teams often observe the behavior of “their” components in isolation, without relation to the value provided to a consumer. Inadequate monitoring and control of the value provision across the different components of an IoT system can result in quality deficiencies and a loss of value for the consumer. The goal of this dissertation is to support organizations with concepts and methods in the development and operations of IoT service systems to ensure the quality of the value provision to a consumer. Applying empirical methods, we first analyzed the challenges and practices applied in industry as well as the state of the art. Based on the results, we refined existing concepts and approaches. To evaluate their quality in use, we conducted action research projects in collaboration with industry partners. Based on an interview study with industry experts, we analyzed the current challenges, requirements, and applied solutions for the operations and monitoring of distributed systems in more detail. The findings of this study form the basis for the further contributions of this thesis. 
To support and improve communication between the specialized teams in handling quality deficiencies, we developed a classification for system anomalies, which we applied and evaluated in an action research project in industry. It allows organizations to differentiate and adapt their actions according to different classes of anomalies: quick and effective actions to ensure the value provision or minimize the loss of value can be optimized separately from actions aimed at the long-term, sustainable correction of the IoT system. Moreover, the classification enables the organization to create feedback loops for quality improvement of the system, the IoT service, and the organization. To evaluate the delivered value of an IoT service, we decompose it into discrete workflows, so-called IoT transactions. Applying distributed tracing, the dynamic behavior of an IoT transaction can be reconstructed and made “observable”. Consequently, the successful completion of a transaction and its quality can be determined by applying indicators. We developed an approach for the systematic derivation of quality indicators. By comparing actual values determined in operations with previously defined target values, the organization is able to detect anomalies in the temporal behavior of the value provision; the value provision can then be controlled with appropriate actions. The quality in use of the approach is confirmed in another action research project with an industry partner. In summary, this thesis supports organizations in quantifying the delivered value of an IoT service and controlling the value provision with effective actions. Furthermore, the trust of a consumer in the IoT service and in the providing organization can be maintained and further increased by applying appropriate feedback loops.
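The step from reconstructed IoT transactions to a quality indicator can be sketched as follows: given transactions reconstructed from traces, the indicator is the fraction of completed transactions that met a latency target. This is a minimal illustration under assumed field names, not the thesis's actual tooling.

```python
# Minimal sketch (assumed data shape, not the thesis's tooling): derive a
# quality indicator for an IoT transaction type from reconstructed traces.
def transaction_indicator(transactions, target_ms):
    """Fraction of completed transactions that met the latency target."""
    completed = [t for t in transactions if t["status"] == "ok"]
    if not completed:
        return 0.0
    on_time = [t for t in completed if t["duration_ms"] <= target_ms]
    return len(on_time) / len(completed)

traces = [
    {"status": "ok", "duration_ms": 120},
    {"status": "ok", "duration_ms": 480},
    {"status": "error", "duration_ms": 60},
    {"status": "ok", "duration_ms": 200},
]
print(transaction_indicator(traces, target_ms=250))  # 2 of 3 met the target
```

Comparing such an actual value against a previously defined target value is what lets an organization detect anomalies in the value provision and trigger corrective actions.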
  • Item (Open Access)
    Factors that enhance female participation in German computer science curricula: An exploration
    (2022) Schäfer, Melanie
    The phenomenon of women's underrepresentation in computer science degree programs at Germany's universities can be examined from two perspectives. The negative factors explaining why women decide against such studies have been considered in various research efforts. The goal of this thesis is to uncover the positive factors that lead women to choose a computer science degree. Using Kathy Charmaz's Constructivist Grounded Theory, an initial theory or taxonomy is to be developed. For data collection, five female students at the University of Stuttgart were interviewed about their motivations and decisions. The coding analysis conducted in parallel and the initial theory building yielded five central factors. Interest development defines the path from initiation to identification, specification, and differentiation from other interests. Related to this is the self-efficacy process, i.e., the development of the inner conviction in one's own ability to master difficult challenges. The third factor of personal development is the autonomy process, which accompanies the students' growing independence up to the start of their studies. Two further factors emerged: convergence describes the mutual approach of both parties, shaped by points of contact, where the decisive aspect is not the number but the intensity of the interest-fostering encounters. The last factor concerns STEM abilities, particularly the students' mathematical understanding. With regard to computer science, the five factors are strongly coherent and can influence each other both negatively and positively. Knowledge of these factors and of the degree to which they can be influenced from outside can be used to design support measures that attract more women to such studies.
  • Item (Open Access)
    An approach for IoT security testing based on the MQTT protocol
    (2021) Chen, Kai
    The Internet of Things (IoT) consists of a rapidly growing number of connected devices and is becoming increasingly important. Due to the complexity and heterogeneity of the technologies involved, many security problems exist in the IoT domain. MQTT is the most widely used IoT-specific communication protocol, which makes it an attractive target for attacks; the security of MQTT systems must therefore be ensured. A literature review identified the insecure default configuration of brokers and vulnerabilities in the handling of malformed packets as the main MQTT security problems. The goal of this thesis is to design a testing approach that examines the security problems of MQTT broker implementations by means of automated security tests. The approach, called MQTT-AIO, consists of three test components and is able to analyze the broker's configuration, execute attacks based on attack patterns, and find further vulnerabilities with the help of fuzzing. An additional component monitors the system during the test process and records relevant data. The results of the test runs are output as a report and can be analyzed further. The MQTT-AIO testing approach is implemented as a prototype within this master's thesis and validated in a case study.
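One concrete check that such a broker-configuration component could perform is probing whether a broker accepts anonymous connections, a common insecure default. The sketch below builds a minimal MQTT 3.1.1 CONNECT packet without credentials and interprets a CONNACK return code; actually sending the packet over a socket to a broker is omitted here, and the client id is an illustrative assumption.

```python
import struct

# Sketch of a configuration check: craft an MQTT 3.1.1 CONNECT packet
# with no username/password and classify the broker's CONNACK reply.
# The client id is an illustrative assumption; no network I/O is done.
def build_anonymous_connect(client_id="probe-client"):
    """MQTT 3.1.1 CONNECT with the clean-session flag and no auth."""
    proto = b"\x00\x04MQTT"            # length-prefixed protocol name
    level = b"\x04"                    # protocol level 4 = MQTT 3.1.1
    flags = b"\x02"                    # clean session, no auth flags set
    keepalive = struct.pack("!H", 60)  # keepalive in seconds
    payload = struct.pack("!H", len(client_id)) + client_id.encode()
    variable = proto + level + flags + keepalive + payload
    # fixed header: packet type CONNECT (0x10) + remaining length (< 128)
    return bytes([0x10, len(variable)]) + variable

def connack_allows_anonymous(connack):
    """True if the CONNACK carries return code 0 (connection accepted)."""
    return len(connack) == 4 and connack[0] == 0x20 and connack[3] == 0x00

print(connack_allows_anonymous(b"\x20\x02\x00\x00"))  # accepted
print(connack_allows_anonymous(b"\x20\x02\x00\x05"))  # not authorized
```

Return code 5 ("not authorized") in the CONNACK indicates the broker rejects anonymous clients, whereas code 0 flags the insecure default configuration the thesis targets.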
  • Item (Open Access)
    Explainability of operating systems
    (2021) Huschle, Tobias
    With the recent rise of machine learning and artificial intelligence, the explainability of software has come into the focus of research. Black-box-like approaches that take critical decisions must be enabled to justify their actions in a comprehensible manner. This thesis applies these considerations to the area of operating systems and the analysis of problems therein. To this end, a user study conducted among professionals is presented, showing that simplifying the generation of explanations of operating system behavior can bring additional value. Furthermore, already available tools are discussed with regard to their explanation-generation capabilities. Subsequently, a new approach is proposed that visualizes decisions taken by the operating system in a decision graph. These graphs make it possible to examine, in a convenient and efficient way, how and why a certain value was set by the operating system. Finally, this approach is evaluated in another user study, again conducted among professionals. The thesis concludes that an increased focus on explainability in the context of operating system problem analysis would bring additional value to people working in this area. A wide range of other publications focuses on either problem analysis or explainable software, but not on their combination. The proposed approach aims to connect the two areas by providing assistance in deriving explanations and justifications for the internal reasoning processes of operating systems in a convenient way. The potential value is confirmed by an evaluation study conducted among professionals.
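The decision-graph idea can be illustrated with a minimal data structure in which each node records a value set by the operating system, the reason it was set, and the upstream decisions it depended on, so that "why" questions are answered by walking the graph. Names and structure here are hypothetical, not the thesis's implementation.

```python
# Hypothetical sketch of a decision graph: each node captures one value
# set by the OS plus its cause chain. Names are illustrative only.
class Decision:
    def __init__(self, setting, value, reason, causes=()):
        self.setting, self.value, self.reason = setting, value, reason
        self.causes = list(causes)  # upstream Decision nodes

    def explain(self, depth=0):
        """Render this decision and its causes as indented lines."""
        lines = [f"{'  ' * depth}{self.setting} = {self.value} ({self.reason})"]
        for cause in self.causes:
            lines.extend(cause.explain(depth + 1))
        return lines

low_mem = Decision("memory_pressure", "high", "free pages below watermark")
reclaim = Decision("page_reclaim", "active", "pressure triggered reclaim",
                   causes=[low_mem])
print("\n".join(reclaim.explain()))
```

Walking the `causes` edges from any node yields a justification chain, which is the kind of "how and why was this value set" answer the proposed visualization aims to make convenient.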