Universität Stuttgart
Permanent URI for this communityhttps://elib.uni-stuttgart.de/handle/11682/1
Search Results
Item Open Access
Industry practices and challenges of using AI planning : an interview-based study (2024)
Vashisth, Dhananjay
In the rapidly evolving landscape of industrial applications, AI planning systems have emerged as critical tools for optimizing processes and decision-making. However, implementing and integrating these systems presents significant challenges that can hinder their effectiveness. This thesis addresses the urgent need to understand the best practices and challenges involved in designing, integrating, and deploying AI planning systems in industrial settings; without this understanding, industries risk inefficient implementations, leading to poor performance and resistance from end-users. The research combines a literature review with interviews of industry professionals and researchers to identify common strategies and the obstacles practitioners face. The literature review uncovers reported best practices and challenges in AI planning systems, while the interviews provide additional perspectives, enriching the collected data and ensuring a thorough analysis. The findings reveal best practices, including the importance of cross-disciplinary collaboration, robust data management strategies, and iterative development processes, and identify recurring challenges such as integration complexity, scalability issues, and the need for continuous system evaluation. These insights highlight critical areas for improvement and offer practical recommendations for enhancing the effectiveness of AI planning systems in industrial applications.

Item Open Access
Continuous estimation of energy efficiency for source code in virtual environments (2024)
Schulth, Maximilian Niklas
The increasing energy consumption of servers, high-performance computing clusters, and data centers necessitates measures to reduce it. However, many companies, such as TeamViewer, rely on rented infrastructure where direct hardware-level energy management is unavailable. This thesis presents a method for estimating software energy efficiency in virtualized server environments, focusing on optimizing code execution without access to physical hardware metrics. The research addresses several key challenges: how to measure energy consumption at the function level of a software system, how to simulate realistic and reproducible user loads, and how to isolate performance measurements from external influences. Profiling tools were evaluated to measure CPU time, a metric that correlates with energy consumption. The method was tested in a virtual environment by simulating user loads and measuring the impact of software changes on performance. The results demonstrate that CPU time provides insights into the performance of a piece of software that correlate with its energy consumption. This work contributes a lightweight method for continuously estimating energy efficiency during software development and maintenance.
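The abstract does not spell out the profiling setup, so the following is only a minimal sketch of the underlying idea: record the CPU time spent inside a single function and use it as a proxy that correlates with energy consumption. The decorator, the handle_request workload, and the millisecond reporting are illustrative assumptions, not the tooling evaluated in the thesis.

```python
# Minimal sketch (not the thesis's tooling): record per-call CPU time of a
# function and report it as a proxy metric that correlates with energy use.
import time
from functools import wraps

def cpu_time_profile(func):
    """Measure the process CPU time consumed while a single call runs."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.process_time_ns()        # CPU time only, excludes sleep/IO wait
        result = func(*args, **kwargs)
        elapsed_ms = (time.process_time_ns() - start) / 1e6
        print(f"{func.__name__}: {elapsed_ms:.3f} ms CPU time")
        return result
    return wrapper

@cpu_time_profile
def handle_request(payload):
    # Hypothetical workload standing in for a profiled code path.
    return sum(x * x for x in payload)

if __name__ == "__main__":
    handle_request(range(100_000))
```

In a single-threaded service, this per-call CPU time can be logged continuously and compared across software versions to spot efficiency regressions.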
Item Open Access
Python's dominance in machine learning : unraveling its emergence and exploring the trade-offs of faster alternatives (2024)
Youssef, Johnny
This research investigates the intricate relationship between library optimization and machine learning algorithm performance across Python, Java, C++, and Julia. Through comprehensive benchmarking of widely used libraries, the study reveals that library efficiency often outweighs the inherent characteristics of programming languages in determining the execution speed, accuracy, and energy consumption of machine learning models. The findings challenge the conventional wisdom that compiled languages invariably outperform interpreted ones in computational tasks. Notably, Python’s well-optimized libraries, such as Scikit-learn, demonstrate competitive and sometimes superior performance compared to C++ implementations in specific scenarios. This paradigm shift underscores the critical importance of library selection over language choice in optimizing machine learning workflows. The study delves into the nuanced interplay of factors influencing machine learning performance, including execution efficiency, ecosystem richness, and implementation ease. It also examines the impact of Just-In-Time (JIT) compilation in Julia, revealing significant performance enhancements in subsequent runs, which points to its potential in long-running or repetitive tasks. By providing a comprehensive analysis of the performance landscape across different programming languages and libraries, this study offers valuable insights for practitioners and researchers. It enables informed decision-making in selecting optimal tools and languages for specific machine learning applications, considering not only computational efficiency but also broader ecosystem factors and long-term maintainability. Ultimately, this research contributes to a more nuanced understanding of the performance dynamics in machine learning implementations, challenging preconceptions and providing a data-driven foundation for optimizing machine learning workflows across diverse computational environments.
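The abstract does not list the benchmark suite, so the sketch below only illustrates the Python side of such a comparison: repeatedly timing one Scikit-learn training run and taking a robust summary statistic. The dataset, model, and repetition count are assumptions chosen for illustration, not the thesis's actual benchmarks.

```python
# Illustrative sketch only: time a Scikit-learn training run the way a
# cross-language library benchmark might, repeating to reduce run-to-run noise.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

timings = []
for run in range(5):
    model = RandomForestClassifier(n_estimators=100, random_state=run)
    start = time.perf_counter()
    model.fit(X, y)
    timings.append(time.perf_counter() - start)

print(f"fit time: median {np.median(timings):.2f} s over {len(timings)} runs")
```

The same workload would then be re-implemented with the corresponding Java, C++, and Julia libraries and measured under identical conditions to make the results comparable.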
Item Open Access
Architectural principles and decision model for Function-as-a-Service (2024)
Yussupov, Vladimir; Leymann, Frank (Prof. Dr. Dr. h. c.)
Cloud computing revolutionized the way modern applications are designed and operated. Instead of maintaining on-premises infrastructure, many enterprises incorporate various cloud offerings when designing their applications to decrease time-to-market and reduce the required management effort. This includes the use of traditional cloud service models such as IaaS or PaaS as well as novel models such as FaaS, which enables engineering cloud applications by composing fine-grained functions hosted on FaaS platforms with a variety of provider-managed services, e.g., data persistence and messaging services. Most management tasks for the components of FaaS-based applications become the responsibility of the chosen cloud provider, which, however, results in a stronger dependence on provider products and their implementation and packaging requirements. The engineering of FaaS-based applications can therefore benefit from a stronger focus on architectural considerations instead of specific products, which often become obsolete as quickly as they appear. This work focuses on different aspects of provider-agnostic design for FaaS-based applications and is motivated by the increased dependence of components in such applications on product-specific requirements. To enable reasoning about component hosting and management requirements in a provider-agnostic manner, this work introduces a pattern language capturing various trade-offs of hosting application components in the cloud. Furthermore, to facilitate classification and selection support for components in FaaS-based applications, this work presents a classification framework for FaaS platforms and introduces a classification framework metamodel that generalizes these concepts to other component types in such applications. Additionally, this work introduces a standards-based modeling approach for specifying function orchestrations and transforming them into provider-specific formats, as well as an automated function code extraction and wrapping approach that aims to facilitate reusing functions across different FaaS platforms. To enable using these contributions together, this thesis also introduces the GRASP method, which enables gradual modeling and refinement of FaaS-based applications from abstract topologies to executable deployment models. Technological support for the GRASP method is provided by an integrated GRASP toolchain. To validate the feasibility of the introduced concepts, the GRASP toolchain is implemented prototypically and integrated with existing tools for pattern-based modeling and deployment of cloud applications.

Item Open Access
Analyzing the effect of entanglement of training samples on the loss landscape of quantum neural networks (2024)
Ülger, Victor
Quantum Neural Networks (QNNs) are a promising new intersection between classical machine learning and quantum computing. Recent advancements have shown that using entangled training data reduces the amount of data necessary to train a QNN with minimal risk. However, it is not yet fully understood how an increase in the degree of entanglement influences the trainability of these QNNs. For this reason, our research analyzes the effect of increasing degrees of entanglement and sizes of training sets, as well as of different linear structures, such as orthogonal or linearly dependent training samples, on the shape and trainability of QNN loss landscapes. In our experiments, we sample loss landscapes obtained from QNN loss functions for various compositions of training samples and analyze their shape using multiple roughness metrics. Our findings include correlations between the entanglement, size, and linear structure of the training data and the shape of the corresponding loss landscapes. Most notably, as the degree of entanglement increases, the loss landscapes show a noticeable decrease in roughness. An increase in training data size has a similar effect for almost all data structures except for linearly dependent samples, which are significantly less affected. While a smoother landscape can hint at a reduction of local minima and saddle points, this decrease in roughness likely implies other training obstacles, such as barren plateaus or narrow gorges. The insights gathered from our experiments can help develop new effective and efficient training strategies for QNNs.
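The abstract leaves the concrete loss functions and roughness metrics to the thesis itself, so the following is only a toy sketch of the experimental pattern: evaluate a loss on a two-dimensional grid of parameter values and summarize how rough the resulting surface is. The trigonometric "loss" and the neighbour-difference roughness measure are placeholders assumed for illustration, not the QNN losses or metrics used in the thesis.

```python
# Toy sketch under assumptions: sample a 2D slice of a loss landscape on a
# parameter grid and compute a simple roughness score over the samples.
import numpy as np

def toy_loss(theta1, theta2):
    # Stand-in for evaluating a QNN loss on a training set at parameters (theta1, theta2).
    return np.sin(3 * theta1) * np.cos(2 * theta2) + 0.1 * theta1**2

grid = np.linspace(-np.pi, np.pi, 101)
T1, T2 = np.meshgrid(grid, grid)
landscape = toy_loss(T1, T2)

# Roughness as the mean absolute difference between neighbouring grid points.
roughness = 0.5 * (np.abs(np.diff(landscape, axis=0)).mean()
                   + np.abs(np.diff(landscape, axis=1)).mean())
print(f"mean absolute neighbour difference: {roughness:.4f}")
```

Repeating this for training sets with different degrees of entanglement, sizes, and linear structures yields one roughness score per configuration, which can then be compared directly.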
Item Open Access
Investigation on precise measurement of CO2 emissions from AI applications (2024)
Verma, Pankhuri
The exponential growth of Artificial Intelligence (AI) has significantly increased the reliance on Data Centers (DCs), making them crucial for processing and storing vast amounts of data. However, this surge in AI deployment has highlighted an environmental concern: the Carbon Dioxide (CO2) emissions generated by DCs. These facilities are resource-intensive and demand substantial power to meet the computational needs of AI applications, thus contributing to a high carbon footprint. To address this issue, this thesis explores an approach for measuring CO2 emissions by introducing a linear regression energy model based on Performance Monitoring Counters (PMCs), such as the total number of instructions and the total number of cycles of the processor, and by developing energy-efficient AI models whose hyperparameters and architecture are optimised to minimise environmental impact. The operational efficiency and environmental impact of DCs are estimated using metrics such as Power Usage Effectiveness (PUE), partial Power Usage Effectiveness (pPUE), and Carbon Usage Effectiveness (CUE). Much existing research aims to lower energy consumption by optimising hardware, for example by reducing processor idleness, adjusting the power supply of the machine, improving cooling, or selecting training locations with low carbon intensity. However, such improvements are insufficient, since poorly developed AI models can still drastically drain processor power; engineers should therefore focus on developing highly efficient and computationally feasible models. In this thesis, PMCs are used to estimate the computational complexity of AI models running on processors. It is observed that processor-specific PMCs, such as the total number of instructions and the total number of cycles collected during processing, strongly correlate with the processor’s energy consumption while imposing very little overhead, which makes them well suited for use with AI applications. PMCs are therefore used to calculate the energy consumption of processors and of the DCs in which they are placed. Central to this research is the formulation of an energy model that uses PMCs to estimate processors’ energy consumption and CO2 emissions. By training various AI models on the Central Processing Unit (CPU) and collecting PMC data together with the associated energy consumption, a linear regression energy model for estimating the energy usage of AI applications is established; the CO2 emissions of applications running on these CPUs are then calculated. For simplicity, only the CPU and Dynamic Random Access Memory (DRAM) are considered, as they consume the most energy compared to other parts of the system. The linear model produced an error of only 0.158% for the CPU and 0.272% for DRAM. Furthermore, the implications of hyperparameter optimisation and model architecture for energy consumption and CO2 emissions are studied based on PMCs, with a trade-off in accuracy. This research enables the estimation of the energy consumption and CO2 emissions of AI applications from built-in PMCs, and the reduction of both by modifying the model architecture and hyperparameters while balancing accuracy against energy consumption.
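The abstract describes the modelling pipeline but not its code, so the following is only a minimal sketch of the idea: fit a linear model that maps PMC readings (instructions, cycles) to measured energy, then convert an energy estimate into CO2 using a facility overhead factor and a grid carbon intensity. All numbers, the feature set, and the PUE and carbon-intensity values are invented for illustration and are not the thesis's measurements.

```python
# Illustrative sketch, not the thesis's model or data: energy ~ linear in PMCs,
# then scaled by PUE and grid carbon intensity to estimate CO2 emissions.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [instructions, cycles] per workload and measured joules.
pmc_features = np.array([
    [1.2e9, 0.8e9],
    [3.5e9, 2.1e9],
    [7.9e9, 5.0e9],
    [1.6e10, 9.8e9],
])
energy_joules = np.array([4.1, 11.8, 26.5, 53.0])

model = LinearRegression().fit(pmc_features, energy_joules)

# Estimate a new workload, then account for data-center overhead and carbon intensity.
cpu_energy_j = float(model.predict(np.array([[5.0e9, 3.2e9]]))[0])
pue = 1.4                                  # assumed facility overhead factor
carbon_intensity = 0.4                     # assumed kg CO2 per kWh of grid electricity
facility_kwh = cpu_energy_j * pue / 3.6e6  # joules -> kWh, including overhead
print(f"estimated emissions: {facility_kwh * carbon_intensity * 1000:.3f} g CO2")
```

In the same spirit, a second linear model could be fitted for DRAM energy and the two estimates summed before applying the facility-level factors.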
Item Open Access
Design and implementation of a platform for discovering and sharing AI planning software (2024)
Adhau, Saurabh Vijay
AI planning is a branch of Artificial Intelligence, and over the past few years significant research in the field has led to the development of multiple techniques, algorithms, and tools. Yet while AI planning techniques have been advancing through research, the lack of a central repository for sharing AI planning software may have hindered their broader adoption. Resources are currently scattered across web pages, research papers, and books, which makes it hard for people from the field to find AI planning software that matches their requirements, and even harder for newcomers to AI planning. The lack of standardisation and accessibility of planning software hinders applicability and reproducibility. The goal of this master's thesis is to design and implement a single platform that provides access to multiple AI planning software artefacts. The platform has a modular architecture that exposes its functionalities as services, ensuring seamless integration between frontend and backend, and it offers users an intuitive UI with features to discover, filter, and sort. The solution is evaluated through user studies, which yield positive feedback on the platform's design and features while also pointing out areas of improvement for the overall user experience and widespread adoption.

Item Open Access
Impact of smart technology on energy use in university offices : a case study (2024)
Müller, Lukas
To counter the various causes of the climate crisis, many measures are being developed and evaluated worldwide. One important step towards carbon neutrality, and therefore towards stopping global warming, is reducing energy consumption of every type. One of the biggest opportunities to save energy in its different forms lies in the building sector. Office rooms in particular often demand more energy than necessary due to inefficient handling of resources. Hence, this work proposes a simple, low-cost smart system that automatically controls a thermostat, a workstation, and lamps in an office environment in order to save as much energy as possible. Applying the proposed system leads to savings of at least 8.6 kg a year per office solely from controlling the power supply of one workstation. Furthermore, promising results on possible electricity savings are obtained by automatically controlling the lights. Heating energy use can be reduced further with smart thermostats, especially on weekends and outside working hours, by turning the thermostats off. It is shown that using only low-cost equipment in the form of sensors and actuators can reduce office energy demand and therefore offers a perspective for curbing global warming.
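The abstract does not include the control logic, so here is a minimal rule-based sketch of the idea: switch the workstation and lights off and set the thermostat back when the office is unoccupied or outside working hours. The set points, working hours, and actuator names below are hypothetical placeholders, not the system deployed in the case study.

```python
# Minimal rule sketch (not the case study's implementation): derive actuator
# states for workstation power, lights, and thermostat from occupancy and time.
from datetime import datetime

WORK_HOURS = range(7, 19)          # assumed office hours, Monday to Friday
COMFORT_C, SETBACK_C = 21.0, 16.0  # assumed thermostat set points in deg C

def control_office(occupied: bool, now: datetime) -> dict:
    """Decide actuator states from an occupancy sensor reading and the clock."""
    working_time = now.weekday() < 5 and now.hour in WORK_HOURS
    active = occupied and working_time
    return {
        "workstation_power": active,   # cut standby power when away or after hours
        "lights": active,
        "thermostat_setpoint": COMFORT_C if working_time else SETBACK_C,
    }

if __name__ == "__main__":
    # Saturday afternoon: everything off, thermostat set back.
    print(control_office(occupied=False, now=datetime(2024, 6, 8, 14)))
```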
Item Open Access
A method for pattern application based on concrete solutions (2024)
Falkenthal, Michael; Leymann, Frank (Prof. Dr. Dr. h.c.)
Patterns and pattern languages have become valuable tools in many domains for representing proven solutions to frequently recurring problems. However, the use of patterns presents some challenges in practice. For example, it is often difficult to find the right patterns for a problem at hand. In addition, applying patterns often involves considerable manual effort, since the abstraction of implementation details when writing patterns means that pattern implementations cannot be systematically reused. As a result, although patterns provide proven knowledge for conceptual solutions, they always have to be transformed manually into concrete implementations when a pattern is used in a specific use case. In particular, the interaction of patterns in pattern languages thus leads to high manual effort when implementing complex use cases. Therefore, this thesis presents an approach that aims at facilitating the use of patterns in practice. The approach is based on the idea that implementations of patterns are kept available as Concrete Solutions that can be directly reused in the implementation of use cases. To this end, the EINSTEIN method provides a framework for systematically storing concrete solutions for their reuse. The method uses Pattern-based Design Models to model conceptual solutions, which can subsequently be transformed into concrete solutions in a semi-automated way. This includes supporting the refinement of abstract patterns via more technology-specific patterns towards concrete solutions. Based on a formalization of pattern languages as graphs, Pattern Graphs with connected Concrete Solutions are introduced, which enable the systematic reuse of concrete solutions. Since patterns are often used in combination to solve complex problems, an approach for automating the aggregation of concrete solutions using Aggregation Operators is presented. In addition, the principle of pattern languages is projected onto the space of concrete solutions: with Solution Languages, an approach is presented that also supports the manual aggregation of concrete solutions into an overall solution. For the reuse of concrete solutions, an iterative, IT-supported approach is presented that allows patterns in design models to be replaced with concrete solutions; the resulting Solution Models can then be aggregated into an overall solution using aggregation operators. To automate this aggregation, Solution Algebras are introduced, which allow mathematical structures to be defined over the set of concrete solutions, and it is shown how the concept of aggregation operators can be implemented as Solution Aggregation Programs. These allow solution models to be aggregated into overall solutions in a semi-automated manner controlled by the user. For the identification of potential aggregation steps in a solution model, an algorithm is presented that supports the user in selecting the concrete solutions to be aggregated. For the transferability of the EINSTEIN method to different domains, a tool environment is described conceptually. The practical feasibility of the presented approaches and of the tool environment is demonstrated by an overall architecture and various tool prototypes. Finally, the feasibility of the presented concepts is shown by means of validation scenarios in different domains.
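The abstract stays at the conceptual level, so the following toy data structure only sketches how pattern graphs, concrete solutions, and an aggregation operator relate to each other. The classes and the concatenation-based "aggregation" are simplifications assumed for illustration, not the EINSTEIN method's actual formal definitions.

```python
# Toy illustration, not the EINSTEIN method itself: patterns form a graph, each
# pattern links to concrete solutions, and an aggregation operator combines
# selected solutions into one overall artefact (here: naive concatenation).
from dataclasses import dataclass, field

@dataclass
class ConcreteSolution:
    name: str
    artefact: str                 # e.g. a code snippet or deployment fragment

@dataclass
class Pattern:
    name: str
    solutions: list = field(default_factory=list)

@dataclass
class PatternGraph:
    patterns: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # links between related patterns

    def aggregate(self, chosen):
        """Placeholder aggregation operator: concatenate artefacts in order."""
        return "\n".join(solution.artefact for solution in chosen)

graph = PatternGraph()
graph.patterns["Queue"] = Pattern("Queue", [ConcreteSolution("rabbitmq", "queue: rabbitmq")])
graph.patterns["Worker"] = Pattern("Worker", [ConcreteSolution("celery", "worker: celery")])
graph.edges.append(("Queue", "Worker"))
print(graph.aggregate([graph.patterns["Queue"].solutions[0],
                       graph.patterns["Worker"].solutions[0]]))
```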
Item Open Access
Entwicklung eines Frameworks für die Nutzung von konkreten Lösungen der Quantencomputing Mustersprache (2024)
Ebrahim Aldekal, Ahmed
Patterns are used in many areas of information technology (IT) and offer abstract, proven solution approaches for recurring problems. They are particularly relevant in quantum computing (QC), where they serve to describe QC algorithms and to ease their implementation. However, there is a lack of methods for effectively integrating concrete solutions into an overall concept, and there is also no adequate strategy for storing and reusing these solutions, which leads to the loss of valuable knowledge. This thesis presents a framework that focuses on the storage and combination of concrete solutions and is illustrated using the QC pattern language. The framework integrates and extends several open-source applications for pattern management. It is evaluated with a specific application scenario in which the Deutsch algorithm is realised by means of the patterns and their concrete solutions. The presented framework facilitates combining different patterns to develop QC algorithms. It simplifies the use of the pattern language and thus supports getting started with QC. In particular, technical subtleties and quantum-specific details, which often pose hurdles, are to be simplified for users by this framework in the future.
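The Deutsch algorithm named in the evaluation scenario decides with a single oracle query whether a function f: {0,1} -> {0,1} is constant or balanced. The plain-NumPy statevector sketch below is an assumption added for illustration; the thesis realises the algorithm through QC patterns and their stored concrete solutions, not with this code.

```python
# Statevector sketch of Deutsch's algorithm (illustrative, not the thesis's artefacts).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def deutsch(f):
    """Classify f: {0,1} -> {0,1} as 'constant' or 'balanced' with one oracle call."""
    # Oracle U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix.
    U_f = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U_f[2 * x + (y ^ f(x)), 2 * x + y] = 1

    state = np.kron([1, 0], [0, 1])            # start in |0>|1>
    state = np.kron(H, H) @ state              # put both qubits into superposition
    state = U_f @ state                        # single oracle query
    state = np.kron(H, I2) @ state             # interfere on the query qubit
    p_zero = np.sum(np.abs(state[:2]) ** 2)    # probability of |0> on the query qubit
    return "constant" if p_zero > 0.5 else "balanced"

print(deutsch(lambda x: 0))   # constant function -> 'constant'
print(deutsch(lambda x: x))   # identity function -> 'balanced'
```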