05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Search Results (8 items)
Item (Open Access): Evaluation and control of the value provision of complex IoT service systems (2022)
Niedermaier, Sina; advisor: Wagner, Stefan (Prof. Dr.)

The Internet of Things (IoT) represents an opportunity for companies to create additional consumer value by merging connected products with software-based services. The quality of an IoT service can determine whether it is consumed in the long term and whether it delivers the expected value to the consumer. Since IoT services are usually provided by distributed systems whose operations are becoming increasingly complex and dynamic, continuous monitoring and control of the value provision is necessary. The individual components of IoT service systems are usually developed and operated by specialized teams in a division of labor. With the increasing specialization of the teams, practitioners struggle to derive quality requirements based on consumer needs. Consequently, the teams often observe the behavior of “their” components in isolation, without relating it to the value provided to a consumer. Inadequate monitoring and control of the value provision across the different components of an IoT system can result in quality deficiencies and a loss of value for the consumer. The goal of this dissertation is to support organizations with concepts and methods in the development and operations of IoT service systems to ensure the quality of the value provision to a consumer. Applying empirical methods, we first analyzed the challenges and practices applied in industry as well as the state of the art. Based on the results, we refined existing concepts and approaches. To evaluate their quality in use, we conducted action research projects in collaboration with industry partners. Based on an interview study with industry experts, we analyzed the current challenges, requirements, and applied solutions for the operations and monitoring of distributed systems in more detail.
The findings of this study form the basis for the further contributions of this thesis. To support and improve communication between the specialized teams when handling quality deficiencies, we developed a classification for system anomalies. We applied and evaluated this classification in an action research project in industry. It allows organizations to differentiate and adapt their actions according to different classes of anomalies. Thus, quick and effective actions to ensure the value provision or to minimize the loss of value can be optimized separately from actions aimed at the long-term, sustainable correction of the IoT system. Moreover, the classification of system anomalies enables the organization to create feedback loops for quality improvement of the system, the IoT service, and the organization. To evaluate the delivered value of an IoT service, we decompose it into discrete workflows, so-called IoT transactions. Applying distributed tracing, the dynamic behavior of an IoT transaction can be reconstructed and made “observable”. Consequently, the successful completion of a transaction and its quality can be determined by applying indicators. We developed an approach for the systematic derivation of quality indicators. By comparing actual values measured in operations with previously defined target values, the organization is able to detect anomalies in the temporal behavior of the value provision. As a result, the value provision can be controlled with appropriate actions. The quality in use of the approach was confirmed in another action research project with an industry partner. In summary, this thesis supports organizations in quantifying the delivered value of an IoT service and controlling the value provision with effective actions.
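The core of such an indicator-based control loop, comparing actual values from operations against predefined target values, can be illustrated with a minimal sketch. All indicator names and thresholds below are hypothetical and not taken from the dissertation:

```python
# Minimal sketch of an indicator-based control loop: compare actual
# values observed in operations against predefined target values and
# flag anomalies in the value provision. All names and numbers are
# illustrative, not taken from the dissertation.

def evaluate_indicators(actuals, targets):
    """Return the indicators whose actual value violates its target."""
    anomalies = {}
    for name, target in targets.items():
        actual = actuals.get(name)
        if actual is None or actual > target:
            anomalies[name] = (actual, target)
    return anomalies

# Hypothetical quality indicators for one IoT transaction.
targets = {"end_to_end_latency_ms": 500, "error_rate": 0.01}
actuals = {"end_to_end_latency_ms": 640, "error_rate": 0.002}

print(evaluate_indicators(actuals, targets))
```

Here only the latency indicator exceeds its target, so it alone would trigger a corrective action.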
Furthermore, the trust of a consumer in the IoT service provided by an IoT system, and in the organization behind it, can be maintained and further increased by applying appropriate feedback loops.

Item (Open Access): Analyzing code corpora to improve the correctness and reliability of programs (2021)
Patra, Jibesh; advisor: Pradel, Michael (Prof. Dr.)

Bugs in software are commonplace, challenging, and expensive to deal with. One widely used direction is to apply program analyses that reason about software to detect bugs. In recent years, the growth of areas like web application development and data analysis has produced large amounts of publicly available source code, primarily written in dynamically typed languages such as Python and JavaScript. It is challenging to reason about programs written in such languages because of their dynamic features and the lack of statically declared types. This dissertation argues that, to build developer tools for detecting and understanding bugs, it is worthwhile to analyze code corpora, which can uncover code idioms, runtime information, and natural-language constructs such as comments. The dissertation is divided into three corpus-based approaches that support this argument. In the first part, we present static analyses over code corpora to generate new programs, to perform mutations on existing programs, and to generate data for effectively training neural models. We provide empirical evidence that the static analyses scale to thousands of files and that the trained models are useful in finding bugs in code. The second part of this dissertation presents dynamic analyses over code corpora. Our evaluations show that the analyses are effective in uncovering unexpected behaviors when multiple JavaScript libraries are included together and in generating data for training bug-finding neural models.
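The idea of mutating existing programs to generate training data can be sketched as follows; this is a conceptual example (flipping comparison operators to create likely buggy variants), not the dissertation's actual tooling:

```python
# Illustrative sketch of corpus-based training-data generation by
# mutating existing code: flip comparison operators to create likely
# buggy variants, yielding (correct, buggy) pairs for model training.
# This is a conceptual example, not the dissertation's actual tooling.
import ast

FLIP = {ast.Lt: ast.Gt, ast.Gt: ast.Lt, ast.LtE: ast.GtE, ast.GtE: ast.LtE}

class FlipComparisons(ast.NodeTransformer):
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [FLIP[type(op)]() if type(op) in FLIP else op
                    for op in node.ops]
        return node

def mutate(source):
    """Return a mutated ('buggy') variant of the given source snippet."""
    tree = ast.parse(source)
    return ast.unparse(FlipComparisons().visit(tree))

correct = "if x < limit:\n    grow(x)"
print(mutate(correct))
```

Each (correct, mutated) pair can then serve as a labeled example for a bug-finding model.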
Finally, we show that a corpus-based analysis can be useful for input reduction, which helps developers find a smaller subset of an input that still triggers the required behavior. We envision that this dissertation motivates future endeavors in corpus-based analysis to alleviate some of the challenges faced in ensuring the reliability and correctness of software. One direction is to combine data obtained by static and dynamic analyses over code corpora for training. Another direction is to use meta-learning approaches, where a model is trained on data extracted from the code corpora of one language and applied to another language.

Item (Open Access): Automated generation of tailored load tests for continuous software engineering (2021)
Schulz, Henning; advisor: Hoorn, André van (Dr.-Ing.)

Continuous software engineering (CSE) aims to produce high-quality software through frequent and automated releases of concurrently developed services. By replaying workloads that are representative of the production environment, load testing can identify quality degradation under realistic conditions. The literature proposes several approaches that extract representative workload models from recorded data. However, these approaches conflict with CSE's high pace and automation in three respects: they require manual parameterization, generate resource-intensive system-level load tests, and lack the means to select appropriate periods from the temporally varying production workload to justify time-consuming testing. This dissertation addresses the automated generation of tailored load tests to reduce the time and resources required for CSE-integrated testing. The tailoring needs to consider the services of interest and to select the most relevant workload periods based on their context, such as the presence of a special sale when testing a webshop. We also intend to support experts and non-experts alike with a high degree of automation and abstraction.
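The context-based selection of workload periods described above can be sketched minimally: pick the recorded periods whose contextual annotations match the situation a load test should represent. Field names and context tags are hypothetical, not the dissertation's actual data model:

```python
# Conceptual sketch of context-tailored workload selection: keep only
# the recorded workload periods whose contextual annotations contain
# all tags required for the test scenario. Field names and contexts
# are hypothetical, not taken from the dissertation.

def select_periods(periods, required_context):
    """Return the workload periods annotated with all required context tags."""
    return [p for p in periods if required_context <= p["context"]]

# Hypothetical recorded workload periods of a webshop.
periods = [
    {"start": "2020-11-27T10:00", "req_per_min": 950, "context": {"special_sale"}},
    {"start": "2020-12-01T03:00", "req_per_min": 40, "context": {"night"}},
    {"start": "2020-12-04T11:00", "req_per_min": 880, "context": {"special_sale"}},
]

selected = select_periods(periods, {"special_sale"})
print([p["start"] for p in selected])
```

A load test for a sale scenario would then be generated only from the matching high-load periods.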
We develop and evaluate description languages, algorithms, and an automated load test generation approach that integrates workload model extraction, clustering, and forecasting. The evaluation comprises laboratory experiments, industrial case studies, an expert survey, and formal proofs. Our results show that representative, context-tailored load tests can be generated by learning a workload model incrementally, enriching it with contextual information, and predicting the expected workload using time series forecasting. To further tailor the load tests to services, we propose extracting call hierarchies from recorded invocation traces. Dedicated models of evolving manual parameterizations automate the generation process and restore the representativeness of the load tests. Furthermore, the integration of our approach with an automated execution framework enables load testing for non-experts. Following open-science practices, we provide supplementary material online. The proposed approach is a suitable solution to the described problem. Future work should refine specific building blocks the approach leverages, namely the clustering and forecasting techniques adopted from existing work, which we found to be limited in predicting sharply fluctuating workloads such as load spikes.

Item (Open Access): Sicherheitsanalysen von Fail-Operational-Systemen für einen Nachweis nach ISO 26262 (2021)
Schmid, Tobias; advisor: Wagner, Stefan (Prof. Dr.)

The transition from partially to highly automated driving relieves the driver, who no longer has to monitor the traffic situation permanently. In the event of a fault, fail-silent behavior does not constitute a transition to a safe state, which is why fail-operational systems are necessary for functional safety. Fail-operational vehicle control requires redundant architectures and novel safety concepts to ensure fault tolerance and an appropriate fault reaction.
Individual aspects of such systems have already been discussed in the literature, but a sufficient demonstration of the functional safety of fail-operational vehicle systems has so far been missing. This thesis presents a sufficient argumentation of functional safety for fail-operational vehicle systems in accordance with the industry standard ISO 26262. Based on this argumentation, the necessary safety analyses, including their verification goals at the system level, are identified, and procedures for the respective analyses are presented. From this, the interfaces between system-level and subsystem-level analyses additionally emerge. For the analysis of common-cause failures and the demonstration of the independence of redundant elements, existing procedures are adapted and extended on the basis of a study identifying the relevant requirements. The result is a procedure that meets the constraints of developing a fail-operational system in the automotive industry. The fail-operational behavior of the switching logic, which activates a redundant vehicle control in the event of a fault, is verified using a model-checking approach. Qualification of the tool ensures conformity with ISO 26262. For the analysis of fault propagation and the fault tolerance time, the approach is extended accordingly to the composite of software components. Implementation and computation effort demonstrate the applicability of the analyses. In addition, fault tree models from aerospace are adapted for the quantitative demonstration of fail-operational systems and validated by means of Markov models. A sensitivity analysis identifies optimization approaches for minimizing the probability of failure.

Item (Open Access): Leadership gap in agile teams: how developers and scrum masters mature (2021)
Spiegler, Simone V.; advisor: Wagner, Stefan (Prof. Dr.)

An increasing number of companies aim to enable their developers to work in an agile manner.
One key success factor that supports teams in working in an agile way is fitting leadership. Therefore, companies aim to understand leadership in such self-organising teams. One agile leadership concept describes a Scrum Master who is supposed to empower the team to work in an agile manner. However, the findings on the leadership that unfolds in a self-organising team are controversial. Using Grounded Theory, this thesis provides theories on how leadership evolves in agile teams while taking maturity as well as organisational culture and structure into account. The thesis not only provides more theoretical underpinning for the human aspects of agile work but also builds groundwork for future quantitative testing of leadership in agile teams.

Item (Open Access): Model-based performance prediction for concurrent software on multicore architectures - a simulation-based approach (2021)
Frank, Markus; advisor: Becker, Steffen (Prof. Dr.-Ing.)

Model-based performance prediction is a well-known concept for ensuring the quality of software. To this end, software architects create abstract architectural models and specify software behaviour, hardware characteristics, and the user's interaction. They enrich the models with performance-relevant characteristics and then either solve the models analytically or simulate the software behaviour. In doing so, software architects can predict quality attributes such as the system's response time. Thus, they can detect violations of service-level objectives early at design time and alter the software design until it meets the requirements. Current state-of-the-art tools like Palladio have proven useful for over a decade now and provide accurate performance predictions not only for sophisticated but also for distributed cloud systems. They are, however, built upon the assumption of single-core CPU architectures and consider only the clock rate as a single metric for CPU performance.
However, current processor architectures have multiple cores and a more complex design. Therefore, the use of a single-metric model leads to inaccurate performance predictions for parallel applications on multicore systems. In the course of this thesis, we face the challenges for model-based performance prediction that arise from multicore processors and present multiple strategies to extend performance prediction models. In detail, we (1) discuss the use of multicore CPU simulators as employed by CPU vendors; (2) conduct an extensive experiment to understand the effect of performance-influencing factors on the performance of parallel software; (3) research multi-metric models to better reflect the characteristics of multicore CPUs; and finally, (4) investigate the capabilities of software modelling languages to express massively parallel behaviour. As a contribution of this work, we show that (1) multicore CPU simulators simulate the behaviour of CPUs in detail and accurately; however, when using architectural models as input, the simulation results are very inaccurate. (2) Based on extensive experiments, we present a set of performance curves that reflect the behaviour of characteristic demand types; we integrated these performance curves into Palladio and improved its performance predictions significantly. (3) We present an enhanced multi-metric hardware model that reflects the memory architecture of modern multicore CPUs. (4) We provide a parallel architectural pattern catalogue comprising four of the most common parallelisation patterns (i.e., parallel loops, pipes and filters, fork/join, master/worker). Through this catalogue, we enable software architects to model the parallel behaviour of software faster and with fewer errors.

Item (Open Access): The influence of personality on software quality (2020)
Weilemann, Erica; advisor: Wagner, Stefan (Prof. Dr.)

The objective of this work is an investigation into the relationship between the personality of a software engineer and the quality of the software she/he has created, primarily in terms of maintainability.

Item (Open Access): Program analysis of WebAssembly binaries (2022)
Lehmann, Daniel; advisor: Pradel, Michael (Prof. Dr.)

WebAssembly is a rapidly expanding low-level bytecode that runs in browsers, on the server side, and in standalone runtimes. It brings exciting opportunities to the Web and has the potential to radically change the distribution model of software. At the same time, WebAssembly comes with new challenges and open questions, in particular regarding program analysis and security. The goal of this dissertation is to answer such questions and to support developers with novel insights, datasets, and program analysis techniques for WebAssembly binaries. WebAssembly is frequently compiled from unsafe languages such as C and C++. That begs the question: what happens to memory vulnerabilities when compiling to WebAssembly? We start by analyzing the language and ecosystem and find severe issues, such as the inability to protect memory, missing mitigations, and new attacks that are unique to WebAssembly. To assess the risk in practice, we collect WasmBench, a large-scale dataset of real-world binaries, and study common source languages and usages of WebAssembly. To find and mitigate vulnerabilities leading to such attacks, we develop Fuzzm, the first binary-only greybox fuzzer for WebAssembly. Due to WebAssembly's novelty and its low-level nature, developers are also in dire need of techniques that help them understand and analyze WebAssembly programs. To that end, we introduce Wasabi, the first dynamic analysis framework for WebAssembly. It employs static binary instrumentation, which requires us to address several technical challenges, such as handling WebAssembly's static types and structured control flow.
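The general idea behind such instrumentation-based dynamic analysis, inserting hook calls so that a running program reports events to an analysis, can be sketched abstractly. The sketch below is an illustrative model in Python, not Wasabi's actual API (Wasabi instruments WebAssembly binaries and exposes its hooks to JavaScript analyses):

```python
# Conceptual sketch of dynamic analysis via instrumentation hooks, in
# the spirit of frameworks like Wasabi: the instrumented program calls
# analysis hooks at runtime, and the analysis aggregates what it sees.
# Purely illustrative; not Wasabi's actual API or language.
from collections import Counter

class InstructionCounter:
    """A simple dynamic analysis: count executed instructions by opcode."""
    def __init__(self):
        self.counts = Counter()

    def on_instruction(self, opcode):
        self.counts[opcode] += 1

def run_instrumented(program, analysis):
    # Stand-in for an instrumented binary: each executed instruction
    # additionally invokes the analysis hook.
    for opcode in program:
        analysis.on_instruction(opcode)

analysis = InstructionCounter()
run_instrumented(["i32.const", "i32.const", "i32.add", "call"], analysis)
print(dict(analysis.counts))
```

Richer analyses (taint tracking, call graphs, memory-access profiling) follow the same hook-based pattern.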
Finally, we present SnowWhite, a learning-based approach for recovering high-level types from WebAssembly binaries. Unlike prior work, including work on other binary formats, it generates types from an expressive type language rather than classifying into a few fixed choices. This dissertation shows that program analysis of WebAssembly binaries has versatile applications and can be implemented reliably and efficiently. Given the young age yet steep trajectory of WebAssembly, it is going to be an important language and binary format for years to come. We look forward to many more works in this area and hope they can build on the results, techniques, and datasets put forth in this dissertation.