05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Item Open Access: 3D pose estimation of vehicles from monocular videos using deep learning (2018) Cheng, Qing
In this thesis, we present a novel approach, Deep3DP, to perform 3D pose estimation of vehicles from monocular images, intended for autonomous driving scenarios. A robust deep neural network is applied to simultaneously perform 3D dimension proximity estimation, 2D part localization, and 2D part visibility prediction. In the inference phase, these learned features are fed to a pose estimation algorithm to recover the 3D location, 3D orientation, and 3D dimensions of the vehicles with the help of a set of 3D vehicle models. Our approach performs these six tasks simultaneously in real time and handles highly occluded or truncated vehicles. The experimental results show that our approach achieves state-of-the-art performance on six tasks and outperforms most monocular methods on the challenging KITTI benchmark.

Item Open Access: 3D video tracking and localization of underwater swarm robots (2012) Antoni, Martin
Autonomous underwater vehicles (AUVs) are robots that usually estimate their position with the help of internal or external sensors. In this thesis, small swarm robots from the CoCoRo project are used as the experimental platform. It is often useful to know the exact position inside the testing area in order to evaluate swarm algorithms; controlling the position of the robot should be possible as well. A software system is developed that is able to track a robot inside an aquarium. Two cameras are installed at each side of this aquarium to determine the 3D position, which includes the diving depth. Perspective distortions caused by the viewing angle are compensated with the help of an image transformation. On the corrected image, template matching with normalized cross-correlation is used to track the robot in the camera image. A wireless connection is established between the computer and the robot to read out sensor data and to control the motors.
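The template-matching step described above can be sketched as a brute-force normalized cross-correlation (NCC) search. This is an illustrative reimplementation, not the thesis code; a production tracker would use an optimized routine such as OpenCV's matchTemplate:

```python
import numpy as np

def ncc_match(image: np.ndarray, template: np.ndarray):
    """Slide `template` over `image` and return the top-left position
    with the highest normalized cross-correlation (NCC) score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:          # flat patch: NCC undefined, skip it
                continue
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

NCC is attractive for this kind of tracking because the normalization makes the score invariant to uniform brightness and contrast changes between frames.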
The user can set waypoints for the robot to follow. The computer uses two independent controllers, one for rotation and one for distance.

Item Open Access: A 3D-aware conditional diffusion model for gaze redirection (2024) Cho, Yeon Joo
Gaze redirection refers to the task of modifying the direction of eye gaze and its corresponding facial counterparts to a target direction while preserving the original identity of the subject. An effective gaze redirection approach must (i) be aware of the 3D nature of the task, (ii) accurately redirect the gaze into any specified direction, and (iii) generate photorealistic output images that preserve the shape and texture details of the input images. In response to these requirements, this thesis presents a novel approach to gaze redirection using a 3D-aware conditional diffusion model that leverages the intrinsic geometric properties of human faces. This approach effectively transforms the task into conditional image-to-image translation. To embed 3D awareness comprehensively, we adopt a viewpoint-conditioned diffusion model that can learn the 3D context of the facial geometry. The conditions for this model are gaze rotations and latent facial parameters from the face images. These strategies are further reinforced by a novel loss function focused on gaze direction and head orientation, which enhances the model's ability to learn and apply accurate gaze and head adjustments. Together, these elements underscore the potential of our approach to produce high-quality, accurate gaze redirection, fulfilling the complex demands of this sophisticated visual task.

Item Open Access: About the design changes required for enabling ECM systems to exploit cloud technology (2020) Shao, Gang
Since the late 1980s, Enterprise Content Management systems (ECM systems) have been used to store, manage, and distribute all kinds of documents, media content, and information in enterprises.
ECM systems also enable enterprises to integrate their business processes with content, supporting corporate information lifecycle management and governance as well as the automation of content processing. Ever-changing business models and increasing demands have pushed ECM systems to evolve into very active content repositories with expectations such as high availability, high scalability, and high customizability. These expectations soon became a costly financial burden for enterprises. The ongoing attention around cloud computing, with its claims of improved manageability, less maintenance, and cost-effectiveness, suggests that embracing the cloud might be a good path toward the next high-performance ECM system at an affordable price. To achieve this goal, the design of ECM systems must be changed before deployment into the cloud. Thus, this thesis aims to analyze the architecture of legacy ECM systems, determine their shortcomings, and propose the design changes required for embracing cloud technologies. The main proposed design changes are: i) decomposing an ECM system into its constituent components, ii) containerizing those components and creating standard images, iii) decoupling the physical link between the data storage device and the application container by using Docker volumes in dedicated persistent data containers instead, and iv) utilizing software-defined network infrastructure where possible.
These design changes were then tested with a proof-of-concept prototype, in which an ECM product was successfully deployed and tested using Docker in a cloud environment backed by OpenStack.

Item Open Access: Absicherung der SOME/IP Kommunikation bei Adaptive AUTOSAR (2017) Kreissl, Jochen
The development of a new generation of connected, (partially) autonomous, and at least partially electrically powered vehicles requires the automotive industry to move to a new vehicle architecture that enables the use of dynamic software components on powerful hardware. To guarantee the fast exchange of the necessary information between individual systems, on-board communication networks with high bandwidth are also needed. The Adaptive AUTomotive Open System ARchitecture (AUTOSAR) platform, combined with IP-based, service-oriented communication, is intended to provide the basis for this new vehicle generation. For a high level of abstraction in the communication between individual software components, the Adaptive AUTOSAR specification provides for the use of the Scalable service-Oriented MiddlewarE over IP (SOME/IP), which enables communication channels between components to be established dynamically at system runtime (service discovery). At the same time, the high degree of vehicle connectivity, in particular the Internet connection via modern mobile communication standards, increases the risk of attacks on vehicles by hackers and malware. To nevertheless guarantee the security of the transmitted data, and thus indirectly the safety of the passengers, the communication protocols used must meet the highest security requirements. After a brief introduction to the Adaptive AUTOSAR platform and the SOME/IP protocol, this thesis conducts a threat and risk analysis of the vehicle architecture, focusing on on-board communication.
Furthermore, security protocols are examined that secure the identified vulnerabilities effectively and efficiently, while avoiding the use of asymmetric methods as far as possible. In particular, protocols for securing multicast-based communication are considered, since the SOME/IP protocol uses IP multicast to implement efficient group communication and to discover software components. The considered protocols are then examined for their compatibility with the SOME/IP standard, and by combining different approaches, an overall concept for securing all SOME/IP communication within the system is developed. While unicast communication can be secured with the widely used Transport Layer Security (TLS) protocol, a combination of TLS and the Timed Efficient Stream Loss-tolerant Authentication (TESLA) protocol is presented to secure SOME/IP multicast communication. If the session resumption option of the TLS protocol is used, the presented concept requires no asymmetric cryptography at all, yet still achieves the security properties of confidentiality and sender authentication for all communication concepts of the SOME/IP protocol. In particular, the authenticity of transmitted messages can be guaranteed even if an attacker has complete control over a communication partner.

Item Open Access: Accelerated computation using runtime partial reconfiguration (2013) Nayak, Naresh Ganesh
Runtime reconfigurable architectures, which integrate a hard processor core along with a reconfigurable fabric on a single device, make it possible to accelerate computations by means of hardware accelerators implemented in the reconfigurable fabric.
Runtime partial reconfiguration provides the flexibility to dynamically exchange these hardware accelerators to adapt the computing capacity of the system. This thesis presents an evaluation of design paradigms that exploit partial reconfiguration to implement compute-intensive applications on such runtime reconfigurable architectures. For this purpose, image processing applications are implemented on the Zynq-7000, a System on a Chip (SoC) from Xilinx Inc. that integrates an ARM Cortex-A9 with a reconfigurable fabric. The thesis studies different image processing applications to select suitable candidates that benefit from being implemented on the above-mentioned class of reconfigurable architectures using runtime partial reconfiguration. Different Intellectual Property (IP) cores for executing basic image operations are generated using high-level synthesis for the implementation. A software-based scheduler, executed in the Linux environment running on the ARM core, is responsible for implementing the image processing application by loading the appropriate IP cores into the reconfigurable fabric. The implementation is evaluated to measure the application speed-up, resource savings, power savings, and the delay caused by partial reconfiguration. The results of the thesis suggest that using partial reconfiguration to implement an application provides FPGA resource savings; the extent of the savings depends on the granularity of the operations into which the application is decomposed. The thesis also establishes that runtime partial reconfiguration can be used to accelerate computations on reconfigurable architectures with a processor core, such as the Zynq-7000 platform. The achieved computational speed-up depends on factors such as the number of hardware accelerators used for the computation and the reconfiguration schedule used.
The thesis also highlights the power savings that may be achieved by executing computations in the reconfigurable fabric instead of on the processor core.

Item Open Access: Accountable secure multi-party computation for tally-hiding e-voting (2020) Rivinius, Marc
With multi-party computation becoming more and more efficient and thus more practical, we can start to investigate application scenarios. One application where multi-party computation can be used to great effect is e-voting. In contrast to classical e-voting protocols, one can obtain tally-hiding e-voting systems, in which some part of the tally (especially the whole set of votes) is not made public. Notwithstanding this, most existing (verifiable) multi-party computation protocols are not suitable for e-voting: a property that is arguably more important than verifiability is missing, namely accountability. In fact, we need external accountability in this setting, where anyone can audit the protocol. This is especially important for e-voting systems and has recently been receiving more attention from researchers. To this end, we introduce a general multi-party computation protocol that meets all the requirements for use in e-voting systems. Our protocol achieves accountability and fairness in the honest-majority setting and is, to the best of our knowledge, the first one to do so.

Item Open Access: ACP Dashboard: an interactive visualization tool for selecting analytics configurations in an industrial setting (2017) Volga, Yuliya
The production process in a factory can be described by a large amount of data, which is used to optimize the production process, reduce the number of failures, and control material waste. For this, the data is processed, analyzed, and classified using analysis techniques such as text classification algorithms. There should therefore be an approach that supports the choice of algorithms on both the technical and the management level.
We propose a tool called the Analytics Configuration Performance Dashboard, which facilitates the comparison of algorithm configurations. It is based on a meta-learning approach. Additionally, we introduce three business metrics on which algorithms are compared; they map onto machine learning evaluation metrics and help to assess algorithms from an industry perspective. Moreover, we develop a visualization that provides a clear representation of the data. Clustering is used to identify groups of algorithms with similar performance on the business metrics. We conclude with an evaluation of the proposed approach and of the techniques chosen for its implementation.

Item Open Access: Active exploration and identification of kinematic devices (2016) Mohrmann, Jochen
As an important part of solving the lockbox problem, this thesis deals with the problem of identifying kinematic devices based on data generated using an active learning strategy. We model the belief over different device types and parameters using a discrete multinomial distribution. We discretize directions as a geodesic sphere, which allows an isotropic distribution without bias towards certain directions. The belief update is based on experience, using a Bayes filter. This makes it possible to localize the correct states even if an action fails to generate movement. Our action selection strategy aims to minimize the number of actions necessary to identify devices by considering the expected future belief. We evaluate the effectiveness of different information measures and compare them with a random strategy in simulation. Our experiments show that the MaxCE strategy produces the best results.
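The discrete belief update described above can be sketched as a single Bayes-filter step over a multinomial belief; the device types and likelihood values below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def bayes_update(belief: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """One Bayes-filter step over a discrete belief: multiply the prior
    by the observation likelihood of each hypothesis and renormalize."""
    posterior = belief * likelihood
    total = posterior.sum()
    if total == 0:  # observation ruled out every hypothesis: reset to uniform
        return np.full_like(belief, 1.0 / belief.size)
    return posterior / total

# Hypothetical device hypotheses: prismatic, revolute, fixed.
prior = np.ones(3) / 3
# An action that fails to produce movement is far more likely for a
# fixed device, so the belief shifts toward "fixed".
posterior = bayes_update(prior, np.array([0.1, 0.1, 0.9]))
```

The same update applies when an action generates no movement at all, which is why the filter can still localize the correct state in that case: "no movement" is simply an observation with its own likelihood per hypothesis.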
We were able to correctly identify prismatic, revolute, and fixed devices in 3D space.

Item Open Access: Active learning strategies for deep learning based question answering models (2024) Lin, Kuan-Yu
Question answering (QA) systems enable machines to understand human language and require robust training on related datasets. Nonetheless, large, high-quality datasets are not always available due to cost restrictions. Active learning (AL) addresses this challenge by selecting the data with the highest information value as small subsets for model training, conserving computational resources while preserving performance. There are many different ways to estimate the information value of data, which in turn leads to a variety of AL strategies. In this study, we investigate how the performance of a QA system changes after applying various AL strategies. In addition, we use the BatchBALD strategy, compared with its predecessor BALD, to inspect the advantages of batch querying in data selection. Finally, we propose the Unique Context Selection (UC) and Unique Embedding Selection (UE) methods to enhance sampling effectiveness by ensuring maximal diversity of contexts and embeddings, respectively, within the queried samples. The experimental results show that each dataset has an AL strategy that brings out its best results; there is no universally optimal AL strategy for QA tasks. BatchBALD matches the modeling results of BALD in the regular setting while significantly reducing computation time, though this benefit does not carry over to the low-resource setting. UC could not enhance the effectiveness of AL, since half of the datasets used in this study consist of more than 65% unique contexts. The effect of the UE enhancement varies across datasets and AL strategies, but for most AL strategies the best UE enhancement improves F1 by more than 0.5%.
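As an illustration of the uncertainty-based querying that BALD performs, the following sketch computes BALD acquisition scores from Monte-Carlo predictive samples (e.g., stochastic forward passes with dropout). The array shapes and the greedy top-k selection are assumptions for illustration; BatchBALD additionally accounts for redundancy within the selected batch, which this simple version does not:

```python
import numpy as np

def bald_scores(mc_probs: np.ndarray) -> np.ndarray:
    """BALD acquisition scores from Monte-Carlo predictive samples.

    mc_probs: shape (T, N, C) -- T stochastic forward passes, N candidate
    examples, C classes. Returns one score per example: the mutual
    information between the prediction and the model parameters.
    """
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                                        # (N, C)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)            # H[E p]
    mean_entropy = -(mc_probs * np.log(mc_probs + eps)).sum(-1).mean(0)   # E H[p]
    return entropy_of_mean - mean_entropy

def select_batch(mc_probs: np.ndarray, k: int) -> np.ndarray:
    """Greedy top-k by BALD score (no within-batch diversity term)."""
    return np.argsort(-bald_scores(mc_probs))[:k]
```

An example where the passes disagree (the model is uncertain about its parameters) scores higher than one where every pass gives the same confident prediction, which is exactly the data BALD queries for labeling.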
Compared with context, which is a dataset feature limited to natural language processing tasks, embeddings are more general and show a good enhancement effect, which is worth studying in depth.

Item Open Access: Adaptation of the data access layer to enable cloud data access (2012) Reza, S. M. Mohsin
In the current era of technology, cloud computing has become significantly popular within the enterprise IT community, as it brings a large number of opportunities and provides solutions for users' data, software, and computations. Within cloud computing, the service model Database-as-a-Service (DBaaS) has emerged, whereby applications can access highly available, scalable, and elastic data store services on demand, with the possibility of paying only for the resources actually consumed. As enterprise IT grows larger, a current challenge is managing traditional databases holding the entire enterprise's data. One possible solution is to move the application data to the cloud and then access the cloud data from the traditional application on a local server, thereby exploiting economies of scale and reducing the capital expenditure of enterprise IT. Moving the data layer to the cloud raises the question of how an application can access data from cloud data store services with the full functionality of a traditional database service. To make this possible, the application needs to implement a separate Data Access Layer (DAL), which encapsulates the data access functionality and interacts with the business logic of the application system. This reduces application complexity and provides a solution for managing enterprise data. However, to access heterogeneous data store services, the DAL requires certain adaptations.
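One class of such adaptations can be sketched as follows: row-limiting syntax differs between relational systems, so a DAL must rewrite statements per dialect. The function name and dialect handling below are hypothetical, shown only to illustrate the kind of rewriting involved, not the thesis's actual implementation:

```python
def adapt_limit(sql: str, n: int, dialect: str) -> str:
    """Append a row-limiting clause in the syntax of the target dialect.

    Illustrative example of a DAL-level SQL adaptation: MySQL and
    PostgreSQL use LIMIT, while Oracle (12c and later) uses the
    FETCH FIRST clause; older Oracle versions would need a ROWNUM
    subquery instead.
    """
    dialect = dialect.lower()
    if dialect in ("mysql", "postgresql"):
        return f"{sql} LIMIT {n}"
    if dialect == "oracle":
        return f"{sql} FETCH FIRST {n} ROWS ONLY"
    raise ValueError(f"unsupported dialect: {dialect}")

# e.g. adapt_limit("SELECT * FROM orders", 10, "oracle")
#   -> "SELECT * FROM orders FETCH FIRST 10 ROWS ONLY"
```

Centralizing such rewrites in the DAL keeps the business logic free of backend-specific SQL, which is the point of the adaptation layer described above.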
This master's thesis focuses on investigating the adaptations of SQL statements required for accessing Relational Database Management Systems (RDBMS) in the cloud. In this scope, we test several RDBMS (i.e., MySQL, Oracle, PostgreSQL) in different cloud services in order to determine the required adaptations. The adaptations are then implemented in the DAL to enable access to cloud data. To evaluate the adaptations of SQL statements, a software application called the SQL Evaluation Tool has been developed in this master's thesis; it explicitly implements a DAL and is capable of executing SQL statements simultaneously against different cloud data store services. The purpose of this application is to verify the concept of DAL adaptation.

Item Open Access: Adaptive robust scheduling in wireless Time-Sensitive Networks (TSN) (2024) Egger, Simon
The correct operation of upper-layer services is unattainable in wireless Time-Sensitive Networks (TSN) if the schedule cannot provide formal reliability guarantees to each stream. Still, the current TSN scheduling literature leaves reliability, let alone provable reliability, either poorly quantified or entirely unaddressed. This work aims to remedy this shortcoming by designing an adaptive mechanism to compute robust schedules. For static wireless channels, robust schedules enforce the streams' reliability requirements by allocating sufficiently large wireless transmission intervals and by isolating omission faults. While robustness against omission faults is conventionally achieved by strictly isolating each transmission, we show that controlled interleaving of wireless streams is crucial for finding eligible schedules. We adapt the Disjunctive Graph Model (DGM) from job-shop scheduling to design TSN-DGM, a metaheuristic scheduler that can schedule up to one hundred wireless streams with fifty cross-traffic streams in under five minutes.
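A back-of-envelope model of the reliability-driven interval sizing mentioned above: assuming independent omission faults with a fixed per-transmission loss probability, the number of transmission slots needed to meet a reliability target follows from a simple geometric argument. This is an illustrative simplification, not the thesis's actual mechanism:

```python
import math

def transmissions_needed(loss_prob: float, reliability: float) -> int:
    """Smallest number of transmission attempts n such that the probability
    of at least one success, 1 - loss_prob**n, meets the reliability target.
    Assumes independent omission faults on the wireless link."""
    if not (0 < loss_prob < 1) or not (0 < reliability < 1):
        raise ValueError("probabilities must lie strictly in (0, 1)")
    # 1 - p**n >= R  <=>  n >= log(1 - R) / log(p)
    return math.ceil(math.log(1 - reliability) / math.log(loss_prob))
```

For example, a link that drops half of all frames needs four slots per cycle to reach 90% per-stream reliability; the allocated wireless transmission interval must then be large enough for four attempts.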
In comparison, we demonstrate that strict transmission isolation already prohibits scheduling even a few wireless streams. For dynamic wireless channels, we introduce shuffle graphs as a linear-time adaptation strategy that converts reliability surpluses from improving wireless links into slack, and reliability impairments from degrading wireless links into tardiness. While TSN-DGM is able to improve the adapted schedule considerably within ten seconds of reactive rescheduling, we argue that the reliability contracts between upper-layer services and the infrastructure provider should specify a worst-case channel degradation beyond which no punctuality guarantees can be made.

Item Open Access: Adding value to object storage: integrating analytics with cloud storage back ends (2016) Noori, Hoda
With the vast interest of customers in using cloud infrastructure, cloud providers are going to great lengths to offer advanced functionality and to present their services in a way that attracts customers and convinces them of the value and benefits of using these services. For this purpose, cloud providers need access to customers' data; hence, customer-sensitive data stored in repositories must be transferred to the cloud. Object storage is one possible solution for implementing repositories in cloud environments. However, because the data is confidential and fragile, security and encryption mechanisms are required. Enterprise Content Management (ECM) systems rely heavily on metadata, so there is a need to keep the metadata unencrypted while encrypting the data itself. Therefore, cloud providers that host ECM systems are forced to keep metadata unencrypted in order to support the main functionality of ECM systems in the cloud; other cloud providers can offer data encryption with unencrypted metadata as an option to their customers.
This leads to the conclusion that enhancing object storage with analysis capabilities in ECM systems is most beneficial when it is done on top of unencrypted metadata. In this thesis, I investigate how value can be added to such cloud storage services by accessing only the metadata, focusing specifically on providing analytics functionality on metadata. This master's thesis aims to provide the means to efficiently analyze the metadata inside a cloud-based ECM system (OSECM) that uses the Swift object store as its back-end repository. I extended the OSECM system with the required components by providing new modules that enable the retrieval of metadata from the object storage and its insertion into a metadata warehouse. Replicating the metadata in a distinct data warehouse offers the possibility of benefiting from SQL query capabilities for analysis purposes. Furthermore, an existing tool was integrated as the analysis component to provide the means for interaction with the underlying metadata warehouse and the user interface. Finally, after applying the analysis queries, the results are presented on the user interface using a predefined set of visualizations. The supported data structures for visualizing the results are also defined in this work.

Item Open Access: Addressing TCAM limitations in an SDN-based pub/sub system (2017) Balogh, Alexander
Content-based publish/subscribe is a popular paradigm that enables the asynchronous exchange of events between decoupled applications and is practiced in a wide range of domains. Hence, extensive research has been conducted in the area of efficient large-scale pub/sub systems. A more recent development is content-based pub/sub systems that utilize software-defined networking (SDN) to implement event filtering in the network layer.
By installing content filters in the ternary content-addressable memory (TCAM) of switches, these systems are able to achieve event filtering and forwarding at line-rate performance. While offering great performance, TCAM is also expensive, power-hungry, and limited in size. However, current SDN-based pub/sub systems do not address these limitations and thus use TCAM excessively. Therefore, this thesis provides techniques for constraining TCAM usage in such systems. The proposed methods enforce concrete flow limits without dropping any events by selectively merging content filters into more coarse-grained filters. The proposed algorithms leverage information about filter properties, traffic statistics, event distribution, and global filter state in order to minimize the increase in unnecessary traffic introduced by merges. The proposed approach is twofold. A local enforcement algorithm ensures that the flow limit of a particular switch is never violated. This local approach is complemented by a periodically executed global optimization algorithm that tries to find a flow configuration across all switches that minimizes the increase in unnecessary traffic, given the current set of advertisements and subscriptions. For both classes, two algorithms with different properties are outlined. The proposed algorithms are integrated into the PLEROMA middleware and evaluated thoroughly in a real SDN testbed as well as in a large-scale network emulation. The evaluations demonstrate the effectiveness of the approaches under diverse and realistic workloads. In some cases, it is possible to reduce the number of flows by more than 70% while increasing the false positive rate by less than 1%.

Item Open Access: Adjusting virtual worlds to real world feedback limitations while using quadcopters (2017) Hoppe, Matthias
The current development of virtual reality technologies is mostly focused on providing deeper immersion by improving displays and 3D audio quality.
The influence of interaction and haptic feedback is often neglected. State-of-the-art technologies still adapt gamepads and force the user to hold controllers that give haptic feedback simply by vibrating. Such interaction and haptic feedback methods therefore induce less presence in the user. We suggest a method that combines hands-free hand tracking with haptic feedback provided by quadcopters used as feedback devices. We evaluated haptic quadcopter feedback by conducting three user studies to validate the quality of quadcopter feedback, explore the ability to simulate various objects, and explore additional feedback methods for simulating objects with extreme properties. We found that haptic feedback provided by quadcopters in combination with hand tracking is a feasible improvement in providing feedback. Furthermore, haptic quadcopter feedback is well received when simulating interaction with small, light objects or objects with a soft surface. For non-moving, solid objects, additional feedback methods can be applied to increase the resistance felt by the user. While limitations have to be kept in mind when designing virtual worlds that include quadcopter feedback, we see it as a suitable way of providing flexible, three-dimensional, hands-free haptic feedback.

Item Open Access: Advanced variational methods for dense monocular SLAM (2016) Hofmann, Michael
Structure from Motion (SfM) denotes one of the central problems in computer vision. It deals with the reconstruction of a static scene from an image sequence taken by a single moving camera. This task is typically divided into two alternating stages: tracking, which tries to identify the camera's position and orientation with respect to a global coordinate system, and mapping, which uses this information to create a depth map from the current camera frame.
There are already numerous approaches in the literature concerning local reconstruction techniques that attempt to create sparse point clouds from selected image features. However, the resulting scene information is often insufficient for many fields of application, such as robotics or medicine. Therefore, dense reconstruction has become more and more prominent in recent research. In 2011, Newcombe et al. presented a new technique called DTAM (Dense Tracking and Mapping), which was one of the first to create fully dense depth maps based on variational methods. Since then, most of the follow-up work has concentrated on performance rather than on qualitative optimization, due to DTAM's limited real-time capability compared to sparse methods. The objective of this thesis is therefore to improve the quality and robustness of the original DTAM algorithm and to extend it to a generalized and modular mathematical framework. In particular, the influence of different constancy assumptions and regularizers is evaluated and tested under various conditions using multiple benchmark data sets.

Item Open Access: Aktualisierung und Änderungsweitergabe in Workflow-Choreographien (2017) Nemet, Markus
The research field of e-science deals, among other things, with simulations in science. One strategy is to transfer established standards from the business world to the requirements scientists have for scientific workflows. The tools offered to scientists should support modeling via trial and error, since this is a natural way of designing experiments. The experiments are described as workflow choreographies. This thesis addresses how updates to workflow choreographies can be propagated to the participating partners and, at the same time, how these updates can be automatically incorporated into a partner's existing model.
To this end, a model-integration concept is developed, and its functionality is then provided within a scientific prototype as a proof of concept.

Item Open Access: Algorithmic planning, simulation and validation of smart, shared parking services using a last mile hardware (2021) Thulasi Raman, Muralikrishna
Parking is a major problem caused by the increased use of vehicles. In this work, we address the problem of parking with a model called 'Smart Shared Private Parking'. In an urban area with a mix of commercial and residential buildings, a household owner, i.e., a resident, leaves their parking spaces unused during office hours, long shopping trips, and vacations. An important question is how to persuade residents to make their parking spaces available. This can be made possible by hardware and efficient communication about the services offered. The work explains the conceptual deployment of such hardware and the communication mechanism. This in turn helps residents benefit from renting out their spaces and helps drivers utilize such services and reduce their expenses thanks to a potentially competitive pricing scheme. For an effective analysis of the model, we propose the infrastructure setup required to analyze its various aspects. To start with, the work highlights how a communication strategy could effectively reduce the number of times a vehicle is rerouted in search of a parking spot. Apart from reducing a driver's commute time, this reduces CO2 emissions. The benefits of the proposed solution for a vehicle (i.e., a car driver) are obtained by estimating and comparing vehicular emissions in scenarios with and without such shared parking lots. The work also highlights the benefits from a parking lot owner's point of view. The first of these is how the parking lots are utilized in the time frame considered.
This is achieved by comparing the utilization of commercial and shared parking lots with and without the proposed parking model in place. The next benefit is how the model helps parking lot owners earn money by renting out their parking spaces. Finally, the work presents a correlation analysis, which shows, through several metrics, how a change in the number of such shared private parking lots affects the vehicles and the model itself.Item Open Access Algorithms for calculating robust schedules for time sensitive networking (TSN)(2023) Nieß, AdriaanWith the 802.1Qbv standard, the time-sensitive networking (TSN) task group has specified a TDMA-based scheduling method for Ethernet that ensures real-time guarantees for time-critical data streams. A major challenge here is the computation of the cyclic schedules required for the switch configurations in the network. A number of approaches to calculate these schedules already exist. However, most of them focus on supporting a high number of streams and on optimizing objective functions such as the makespan. This leads to schedules that are particularly fragile against errors in the system model. To close this gap, we have developed a scheduler based on simple temporal networks (STNs) that maximizes the intervals allocated for the transmission of data packets, ensuring a very high degree of robustness against unforeseen delays or cross-traffic. Simulations showed that the calculated schedules could still guarantee loss-free data transmission, with all deadlines met, even when the actual per-hop network delay deviated from the assumed delay by a factor of up to 7. Furthermore, an even larger error did not lead to an immediate breakdown, but to a gradual degradation.
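The robustness idea described in the abstract above, trading schedule length for slack in each per-hop transmission window, can be illustrated with a small toy computation. This is not the thesis's STN-based scheduler; all numbers and the helper function are hypothetical, chosen only to show why generous windows tolerate large delay deviations:

```python
# Illustrative sketch only: estimates the largest uniform delay-scaling
# factor a schedule tolerates, given the window allocated per hop.
# (Hypothetical numbers; not the STN-based scheduler from the thesis.)

def max_delay_factor(hops):
    """hops: list of (assumed_delay, allocated_window) pairs per hop,
    in the same time unit. A frame still fits its window as long as
    the scaled delay does not exceed the allocated interval."""
    return min(window / delay for delay, window in hops)

# A robustness-oriented schedule allocates generous windows ...
robust = [(10, 80), (12, 90), (8, 70)]
# ... while a makespan-optimized schedule leaves almost no slack.
tight = [(10, 12), (12, 13), (8, 9)]

print(max_delay_factor(robust))  # 7.5 -> delays may grow ~7x
print(max_delay_factor(tight))   # ~1.08 -> barely any headroom
```

The minimum over all hops is what matters: a single tight window caps the tolerable delay deviation for the whole schedule, which is why the scheduler maximizes the allocated intervals.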
This makes the scheduling method presented here, as well as the underlying algorithms, particularly interesting for applications with a high degree of uncertainty in the system model, such as wireless communications or heterogeneous networks.Item Open Access Eine Analyse der Benutzererfahrung in der Interaktion mit einem Kino Chatbot(2019) Berhe, SenaitNowadays, artificial intelligence (AI) plays an increasingly important role in the market and thus for companies. This also applies to the area of customer interaction, which is why companies increasingly rely on chatbots in customer service. However, developing chatbots that meet customers' requirements poses many challenges. User acceptance of new technologies such as chatbots is of enormous importance, since a positive user experience (UX) makes users more willing to interact with a chatbot. During the development of the chatbot, the focus-group method is applied, as the resulting findings come from a group with general experience. The development is thus based on as general an assessment and evaluation of the market as possible, which is relevant for the implementation of the final chatbot. This master's thesis examines the user experience of interacting with a cinema chatbot in comparison with the existing website. The concept of human-centered design (HCD) is applied: user studies are first conducted at the Fraunhofer Institute for Industrial Engineering (IAO) in order to derive requirements that are taken into account in the development of the chatbot. Fraunhofer cooperates with the company Compeso, which maintains a Germany-wide website for cinemas.
After the user studies have been conducted, a chatbot is developed with the help of the insights gained; it is finally tested in a guided user study to examine its functionality and quality.