Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results (181 results)
Item Open Access: Modeling and timing analysis of micro-ROS application on an off-road vehicle control unit (2022) Bappanadu, Suraj Rao

ROS is known to be the most popular middleware for developing software for modern robots. Its next version, ROS 2, is highly modular and offers flexibility by supporting microprocessors running desktop operating systems. Micro-ROS brings the major ROS 2 features to microcontrollers, i.e., highly resource-constrained computing devices running specialized real-time operating systems. ROS 2 is also of great importance for other domains, including autonomous driving and the off-road sector. Accordingly, there is significant interest in bringing micro-ROS to typical automotive control units. These embedded platforms run AUTOSAR Classic, an OSEK-like operating system that differs in many aspects from the platforms supported by micro-ROS. Some of these aspects have already been addressed in previous work. This thesis focuses mainly on mapping the micro-ROS execution scheme to the AUTOSAR scheme and on the dynamic memory management of the micro-ROS stack. To successfully port the stack to an AUTOSAR-based ECU, the middleware and other layers of the stack are also analysed and adapted using a standard approach, supporting a task-based instead of a thread-based execution model. Additionally, support for the standard CAN protocol is introduced, based on a custom transport configuration using the hardware CAN of the BODAS ECU. Model-based development methods have proven their utility in the automotive industry. Therefore, we also describe the timing properties of the micro-ROS stack using a model-based approach. We develop a generic model that is independent of a specific modeling language. In a next step, we realize the generic model in the widely used AMALTHEA language and analyse how well the developed model predicts the timing behavior of micro-ROS tasks.
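The timing behavior of fixed-priority tasks on an ECU like this is commonly predicted with classic response-time iteration (R_i = C_i + Σ_j ⌈R_i/T_j⌉·C_j over higher-priority tasks j). The thesis's AMALTHEA-based model is not reproduced here; the following is only a minimal sketch of that generic analysis, with an invented task set:

```python
import math

def response_time(task, higher_prio):
    """Iterative fixed-priority response-time analysis.

    task: (C, T) worst-case execution time and period.
    higher_prio: list of (C, T) pairs of all higher-priority tasks.
    Returns the worst-case response time, or None if the deadline
    (assumed equal to the period) is missed.
    """
    c, t = task
    r = c
    while True:
        interference = sum(math.ceil(r / tj) * cj for cj, tj in higher_prio)
        r_next = c + interference
        if r_next == r:
            return r        # fixed point reached
        if r_next > t:
            return None     # unschedulable
        r = r_next

# Invented task set: (C, T) in milliseconds, highest priority first.
tasks = [(1, 5), (2, 10), (3, 20)]
for i, tsk in enumerate(tasks):
    print(f"task {i}: R = {response_time(tsk, tasks[:i])}")  # R = 1, 3, 7
```

Re-running this per task against a measured execution-time budget is one simple way to check whether a model of the kind developed in the thesis predicts the observed behavior.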
Finally, the effectiveness of the approach regarding timing and modeling is demonstrated with a micro-ROS test application, first on Linux and then on the off-road vehicle control unit BODAS RC18-12/40 by Bosch Rexroth.

Item Open Access: Classifying physical exercises and counting repetitions using three-dimensional pose estimation (2023) Wallmann, Jonas

Resistance training is known to improve physical and mental health, but doing it effectively and safely requires considerable knowledge and experience. Personal trainers and physiotherapists provide this knowledge to athletes, but their professions demand long training and experience, which often makes their services unaffordable for the general public. Automating certain aspects of their work would make these services more widely available and thus lead to safer and more effective training. The first steps toward automating personal training lie in observing a subject train and understanding the performed workout. This provides the basis for future work on automatically giving feedback on exercise execution and improving training regimes. To this end, we developed a proof-of-concept program that uses a two-dimensional camera video as input to classify which exercise a user performs and to automatically count the number of repetitions, in real time. It works without imposing requirements on the camera perspective or needing to know in advance which exercise will be performed. This is achieved by using a three-dimensional pose estimation model and defining a rule-based algorithm that considers the positions and angles of the joints that characterize the performed exercises. We evaluate our proof-of-concept program using videos of subjects performing squats and push-ups in order to understand its accuracy in a real-world scenario.
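A rule-based repetition counter of the kind described can be built from joint angles derived from the estimated 3D keypoints. The sketch below, with invented keypoint data and thresholds rather than the thesis's actual rules, computes the angle at a joint and counts a squat repetition whenever the knee angle drops below a "down" threshold and then rises back above an "up" threshold:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def count_reps(knee_angles, down=100.0, up=160.0):
    """Hysteresis state machine: one rep = angle below `down`, then above `up`."""
    reps, descending = 0, False
    for ang in knee_angles:
        if not descending and ang < down:
            descending = True      # subject reached the bottom position
        elif descending and ang > up:
            descending = False     # subject stood back up: one repetition
            reps += 1
    return reps

# Invented knee-angle trace (degrees per frame) for two squats.
trace = [170, 150, 95, 90, 120, 165, 155, 98, 130, 170]
print(count_reps(trace))  # -> 2
```

The hysteresis band between the two thresholds prevents noisy pose estimates near a single threshold from being counted as extra repetitions.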
Our program achieved an overall accuracy of 95.57% for the squat and 93.69% for the push-up evaluation.

Item Open Access: A systematic mapping study on development and use of AI planning tools (2021) Philippsohn, Robert

Artificial intelligence (AI) planning is a large area within the AI field with many specific needs and problems. It therefore requires tools that address these needs and problems, as well as the trends in the AI planning community. Since 1971 there has been an influx of tools that assist in solving planning problems and making plans. This systematic mapping study was conducted to give a better overview of the available landscape of AI planning tools; it also attempts to show which software engineering principles are used in creating the tools. We further try to depict in which industry domains AI planning tools are used and how many papers report the tools being used in industry. In the end, we conclude that there are at least 106 different tools available, of which only a fraction is used in industry. While only a small part of the tools is reported as being used in industry, this small part covers a wide array of industry domains.

Item Open Access: Eine Methode zum Verteilen, Adaptieren und Deployment partnerübergreifender Anwendungen (2022) Wild, Karoline; Leymann, Frank (Prof. Dr. Dr. h. c.)

An essential aspect of effective collaboration within organizations, and especially across organizations, is the integration and automation of processes. This includes the provisioning of application systems whose components are provided and managed by different partners, i.e., departments or companies. The resulting distributed, decentrally managed environment requires new provisioning concepts. The autonomy of the partners and the distribution of the components lead to new challenges.
On the one hand, cross-partner communication relations must be realized; on the other hand, automated decentralized deployment must be enabled. In recent years, a multitude of technologies has been developed that cover all steps from modeling through provisioning to runtime management of an application. However, these technologies rely on centralized coordination of the deployment, which restricts the autonomy of the partners. Concepts are also missing for identifying problems that result from the distribution of application components and impair the functionality of the application. This particularly concerns cross-partner communication relations. To address these challenges, this work introduces the DivA method for distributing, adapting, and deploying cross-partner applications. The method unifies the global and local partner activities required to provision cross-partner applications. It builds on the declarative Essential Deployment Meta Model (EDMM) and thereby enables deployment-technology-independent modeling concepts for distributing application components as well as for model analysis and adaptation. The split-and-match method is presented for distributing application components across specified target environments and for selecting compatible cloud services. To execute the deployment, EDMM models can be transformed into different technologies. To perform the provisioning in a completely decentralized manner, declarative and imperative technologies are combined: based on the declarative EDMM models, workflows are generated that orchestrate the provisioning activities and the data exchange with other partners needed to realize cross-partner communication relations. These workflows implicitly form a deployment choreography.
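The split-and-match step described above distributes components over partner-specified target environments whose offered capabilities satisfy each component's requirements. The fragment below is only an illustration of that matching idea with invented component and environment data, not the EDMM-based implementation itself:

```python
def match(components, environments):
    """Assign each component to the first target environment that offers
    all of its required capabilities; unmatched components are reported."""
    placement, unmatched = {}, []
    for name, required in components.items():
        for env, offered in environments.items():
            if required <= offered:      # set inclusion: all needs are met
                placement[name] = env
                break
        else:
            unmatched.append(name)
    return placement, unmatched

# Invented example: two partners provide different capabilities.
components = {
    "order-app": {"java", "http"},
    "order-db":  {"mysql"},
    "analytics": {"spark"},
}
environments = {
    "partnerA-cloud": {"java", "http", "mysql"},
    "partnerB-dc":    {"java"},
}
placement, unmatched = match(components, environments)
print(placement)   # -> {'order-app': 'partnerA-cloud', 'order-db': 'partnerA-cloud'}
print(unmatched)   # -> ['analytics']
```

An unmatched component corresponds to the kind of problem the method's analysis stage must detect before deployment can proceed.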
For model analysis and adaptation, a two-stage pattern-based method for problem detection and model adaptation is introduced as the core of this work. For this purpose, the problem and context definitions are extracted from the textual pattern descriptions and formalized to enable the automated identification of problems in EDMM models. Particular focus is placed on problems that arise from the distribution of components and prevent the realization of communication relations. The same method is also applied to select suitable concrete solution implementations for resolving the problems. In addition, an approach for selecting communication drivers depending on the integration middleware in use is presented, which improves the portability of application components. The concepts presented in this work are automated by the DivA tool. For validation, the tool is implemented prototypically and integrated into existing systems for modeling and executing the deployment of application systems.

Item Open Access: Elastic parallel systems for high performance cloud computing (2020) Kehrer, Stefan; Blochinger, Wolfgang (Prof. Dr.)

High Performance Computing (HPC) enables significant progress in both science and industry. Whereas parallel applications have traditionally been developed to address the grand challenges in science, today they are also heavily used to speed up the time-to-result in product design, production planning, financial risk management, medical diagnosis, and research and development efforts. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge and is thus reserved for large organizations that benefit from economies of scale.
More recently, the cloud has evolved into an alternative execution environment for parallel applications, with novel characteristics such as on-demand access to compute resources, pay-per-use billing, and elasticity. Whereas the cloud has mainly been used to operate interactive multi-tier applications, HPC users are also interested in the benefits it offers. These include full control of the resource configuration based on virtualization, fast setup times through on-demand accessible compute resources, and the elimination of upfront capital expenditures due to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, enabling fine-grained control of an application's performance, in terms of its execution time and efficiency, as well as of the related monetary costs of the computation. Whereas HPC-optimized cloud environments have been introduced by cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emergent field of High Performance Cloud Computing. In particular, the presented contributions focus on the novel opportunities and challenges related to elasticity. First, the principles of elastic parallel systems and the related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction to ease the implementation of elastic parallel applications.
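An elasticity controller of the kind described adjusts the number of processing units toward a user-defined goal, for example keeping parallel efficiency near a target. The sketch below is a simplified reactive controller with invented names and a toy Amdahl-style efficiency model; it illustrates only the control loop, not the thesis's architectures:

```python
def control_step(workers, efficiency, target=0.75, band=0.1,
                 min_workers=1, max_workers=64):
    """One reactive control decision: scale out while efficiency is
    comfortably above the target, scale in when it drops below it."""
    if efficiency > target + band and workers < max_workers:
        return workers + 1    # still efficient: provision one more unit
    if efficiency < target and workers > min_workers:
        return workers - 1    # diminishing returns: decommission one unit
    return workers

def toy_efficiency(workers, serial_fraction=0.05):
    """Amdahl's-law efficiency model, used only for this illustration."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)
    return speedup / workers

workers = 1
for _ in range(20):
    workers = control_step(workers, toy_efficiency(workers))
print(workers)  # -> 5 (settles where efficiency enters the target band)
```

The dead band between scale-out and scale-in thresholds avoids oscillating provisioning decisions, one of the design considerations such a controller must address.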
To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code in an automated manner while considering application-specific non-functional requirements. Throughout this thesis, a broad spectrum of design decisions related to the construction of elastic parallel system architectures is discussed, including proactive and reactive elasticity control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). To evaluate these contributions, extensive experimental evaluations are presented.

Item Open Access: Industry practices and challenges of using AI planning: an interview-based study (2024) Vashisth, Dhananjay

In the rapidly evolving landscape of industrial applications, AI planning systems have emerged as critical tools for optimizing processes and decision-making. However, implementing and integrating these systems presents significant challenges that can hinder their effectiveness. This thesis addresses the urgent need to understand the best practices and challenges involved in designing, integrating, and deploying AI planning systems in industrial settings. Without this understanding, industries risk inefficient implementations, leading to poor performance and resistance from end users. The research employs a methodology that combines a literature review with interviews of industry professionals and researchers to identify common strategies and the obstacles practitioners face. The study examines existing literature to uncover reported best practices and challenges in AI planning systems. Interviews provide additional perspectives, enriching the collected data and ensuring a thorough analysis. The findings reveal best practices, including the importance of cross-disciplinary collaboration, robust data management strategies, and iterative development processes.
Additionally, recurring challenges such as integration complexities, scalability issues, and the need for continuous system evaluation are identified. These insights highlight critical areas for improvement and offer practical recommendations for enhancing the effectiveness of AI planning systems in industrial applications.

Item Open Access: Feasibility analysis of using Model Predictive Control in Demand-Side Management of residential buildings (2020) Ramachandran Selvaraj, Sri Vishnu

Energy systems are becoming smarter, with increasing communication capabilities between producers, distributors, and consumers. In addition, many distributed renewable energy producers, at both large and domestic scale, are added to the system every day. Executing smart Demand-Side Management (DSM) programs can provide financial benefits and stabilize the energy system without compromising the comfort of end users. Model Predictive Control (MPC) is an advanced process control method used to control a process while satisfying a set of constraints. Due to its ability to predict future events and generate optimal control actions, it has been widely used in the process industries since the 1980s and has recently been introduced in power systems. This motivates a study of the economic feasibility of using MPC to execute DSM for a residential building, optimizing power consumption costs and the stability of the energy system in the presence of local renewable energy sources (e.g., a PV system). The main contribution of this thesis is to measure the economic benefit of using MPC for DSM of household electricity consumption. The first part presents a detailed study of modeling the demand side, i.e., the appliances of a smart home, along with the domestic energy generators. Apart from the physical properties of the renewable energy generators, the influence of external factors such as weather, dynamic electricity pricing, and changing user preferences is also considered in the model.
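At the core of such an MPC loop is, at each control step, an optimization of the consumption schedule over a price forecast horizon; the schedule is then re-optimized as new forecasts arrive (receding horizon). The toy example below, with invented prices and a single shiftable appliance, illustrates only that inner optimization, not the thesis's building model:

```python
def cheapest_start(prices, duration):
    """Pick the start slot minimizing total cost for a shiftable load
    that must run `duration` consecutive slots within the horizon."""
    costs = [sum(prices[s:s + duration])
             for s in range(len(prices) - duration + 1)]
    start = min(range(len(costs)), key=costs.__getitem__)
    return start, costs[start]

# Invented hourly price forecast (ct/kWh) for an 8-hour horizon.
forecast = [30, 28, 12, 10, 11, 25, 32, 35]
start, cost = cheapest_start(forecast, duration=2)
print(start, cost)  # -> 3 21 (run during the two cheapest consecutive hours)
```

In a full MPC setup this decision would be recomputed every step with the updated forecast, and only the first decision of each horizon would actually be executed.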
This model is used to simulate the residential building, generate an optimized energy consumption schedule, and calculate the resulting economic benefits. Periodic updates of the weather forecast and of the dynamic prices are fed into the simulation to improve the prediction accuracy of the system. Lastly, the model is evaluated on a physical implementation to analyze its performance. The thesis yields multiple findings; for example, the economic benefit of using such a system should encourage many users to participate in demand response programs, which in turn will help reduce the pollution originating from non-renewable energy generators.

Item Open Access: Design of a software architecture for supervisor system in a satellite (2020) Kelapure, Sarthak

The Internet of Things (IoT) is no longer just a buzzword. With an estimated 30 billion devices in the world by 2020, IoT has become what it was envisioned to be when the trend began. Still, companies around the world compete for an ever better user experience, because the technology keeps being upgraded and users' demand for upgrades never stops. Researchers and scientists work every day to improve the experience, both by improving the involved things and by improving the means of communication, "the internet". One such means of communication expected to grow in the future is satellite communication for IoT. A satellite used for this purpose needs to be low-cost, robust, reliable, and future-ready, so an improvement in satellite architecture is imminent. Making a satellite feature-rich and robust while keeping it low-cost requires increasing its mission life. As with human life, this can be achieved by a better medical system for the satellites. By introducing a doctor on board, this thesis aims to propose a solution for improved mission life and features for the satellite. The doctor on board is here called the Supervisor system.
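A "doctor on board" of this kind essentially monitors the health of the other subsystems; a common building block for that is a heartbeat watchdog. The sketch below, with invented subsystem names and timeout, is a generic illustration rather than the architecture designed in the thesis:

```python
class Watchdog:
    """Track heartbeats per subsystem and report stale ones."""

    def __init__(self, timeout):
        self.timeout = timeout      # seconds without a heartbeat before alarm
        self.last_seen = {}

    def heartbeat(self, subsystem, now):
        """Record that `subsystem` reported healthy at time `now`."""
        self.last_seen[subsystem] = now

    def stale(self, now):
        """Subsystems that have not reported within `timeout` seconds."""
        return sorted(s for s, t in self.last_seen.items()
                      if now - t > self.timeout)

# Invented scenario: the payload computer stops sending heartbeats.
wd = Watchdog(timeout=5.0)
wd.heartbeat("obc", 0.0)
wd.heartbeat("payload", 0.0)
wd.heartbeat("obc", 6.0)
print(wd.stale(now=8.0))  # -> ['payload']
```

A supervisor would react to a stale subsystem with a recovery action such as a power-cycle, which is where the "medical" analogy of the thesis comes in.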
This system needs robust and modular software on its designated hardware. The software can be made robust by using a standard software architecture that is promising while meeting the requirements. The thesis focuses on designing the software architecture for this "Supervisor system". After studying similar architectures, the author designs a software architecture for the system. The architecture is used to develop important features of the mission and is tested for portability and modularity. Future needs and changes to the existing system are also foreseen and discussed at the end.

Item Open Access: Economic feasibility analysis of vehicle-to-grid service from an EV owner's perspective in the German electricity market (2020) Malya, Prasad Prakash

The increasing number of electric vehicles (EVs) has led to a tremendous amount of inaccessible electric energy stored in EV batteries. Vehicle-to-grid (V2G) services can utilize this energy to profit the EV owners and to stabilize the grid during faults and fluctuations. This thesis presents a novel way of estimating the profitability of V2G from the EV owner's perspective. Its main contribution is the formulation of a profit model that includes EV battery degradation due to V2G. Previous work considers a fixed battery degradation cost, whereas this work uses an online battery degradation model. This model takes into account parameters that represent real-life scenarios, resulting in a more accurate battery degradation estimate. The V2G profit model uses the electricity price signal from the German energy market for the year 2019 and estimates the annual profit. The first part of the thesis calculates the profitability of V2G where the EV can participate freely in energy arbitrage. This analysis explores the range of profit when the EV participates in V2G purely at the EV owner's discretion.
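A profit model of this kind weighs arbitrage revenue against charging cost and battery degradation. The numbers and the simple per-kWh degradation term below are invented placeholders standing in for the thesis's online degradation model; the sketch only shows the basic accounting:

```python
def v2g_profit(discharged_kwh, sell_price, charged_kwh, buy_price,
               degradation_cost_per_kwh, efficiency=0.9):
    """Net profit of one arbitrage cycle (prices in EUR/kWh).

    Degradation is charged per kWh of battery throughput here; the
    thesis replaces this fixed rate with an online degradation model.
    """
    revenue = discharged_kwh * efficiency * sell_price
    cost = charged_kwh * buy_price
    degradation = (discharged_kwh + charged_kwh) * degradation_cost_per_kwh
    return revenue - cost - degradation

# Invented example: discharge 20 kWh at 0.30 EUR, recharge 20 kWh at 0.10 EUR.
profit = v2g_profit(20, 0.30, 20, 0.10, degradation_cost_per_kwh=0.02)
print(round(profit, 2))  # -> 2.6 EUR for the cycle
```

Summing such cycles over a year's price signal, as the thesis does for 2019 German market data, yields the annual profit estimate.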
A sensitivity analysis is performed with respect to battery capacity, battery efficiency, and driving distance. The second part of the thesis evaluates the profitability of an EV participating in the frequency regulation ancillary service of the German energy market. The analysis compares the profitability of an EV participating in primary, secondary, and tertiary frequency regulation services. The results of this thesis provide several findings: the potential profit from V2G services should encourage EV owners to participate in them, and participating in V2G service can even extend the life of the battery, although this depends on the battery technology and the battery usage during V2G services. Ancillary services provide higher potential profit than energy arbitrage because of their higher remuneration. Ancillary services with both capacity and energy payments result in higher profit than ancillary services with capacity payment only.

Item Open Access: Flutter on Windows Desktop: a use case based study (2021) Zindl, Stefan

In recent years, the number of computer platforms has grown from desktop to mobile devices, tablets, and the Web. Cross-platform frameworks, among other approaches, enable developers to target all of these platforms. One such cross-platform framework is Flutter, which is developed by Google and has targeted Windows Desktop in beta stage since 2020. Because of this early stage, it is relevant to verify how well Flutter already works on Windows Desktop. In the first part of this bachelor thesis, we compare a simple image gallery implemented in Flutter and in WPF with .NET 5. The implementation worked well in both frameworks, with a similar kind of realization. Our comparison concentrates on metrics such as code size, startup time, and packaged size. In addition, we measure RAM and CPU usage in two scenarios that we automated with a simulation script.
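Startup-time measurements like those in the comparison can be automated with a small script. This is a generic sketch, not the thesis's simulation script: it times how long a launched process takes from launch to exit, using a trivial placeholder command instead of the real applications:

```python
import subprocess
import sys
import time

def measure_startup(cmd, runs=3):
    """Launch `cmd` repeatedly and return the mean wall-clock time in seconds.

    For a GUI app one would instead stop the clock at a 'window ready'
    signal; here the process simply runs to completion.
    """
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - t0)
    return sum(samples) / len(samples)

# Placeholder workload: a Python no-op standing in for the app under test.
mean = measure_startup([sys.executable, "-c", "pass"])
print(f"mean startup: {mean:.3f} s")
```

Averaging over several runs, as above, smooths out OS-level caching effects that otherwise make the first launch look much slower than later ones.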
In the second part, we focus on the available third-party extensions and the functionality currently missing from the Flutter framework. Our results indicate that we could implement the Flutter application with 55% less code and a 70 times faster startup time. Surprisingly, Flutter uses less RAM most of the time, but it needs more CPU to process the images. Nevertheless, some important functionality for desktop applications is still missing, such as adding icons to the system tray or adding a menu bar to the application. We show that some functionality is still missing at the current stage of the Flutter framework, but it has a good chance of becoming a well-established framework for new developers. Keywords: Desktop, WPF, Windows, Cross-Platform, Flutter, Use-Case Study