05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Browse
Search Results
Item Open Access Strukturierte Modellierung von Affekt in Text (2020) Klinger, Roman; Padó, Sebastian (Prof. Dr.)
Emotions, moods, and opinions are affective states that cannot be directly observed by one person in another and can therefore be regarded as "private". To infer these individual feelings and views nonetheless, we are accustomed in everyday communication to interpreting facial expressions, body posture, prosody, and the content of speech. The research field of affective computing, and the more specialized fields of emotion analysis and sentiment analysis, develop computational models with which such estimates become possible automatically. This habilitation thesis falls within the area of affective computing and contributes to the study and modelling of sentiment and emotion in textual descriptions, covering, among other domains, literature, social media, and product reviews. To find appropriate models for the respective phenomena, we proceed in each case by using or creating a corpus as a basis, thereby already committing to hypotheses about the formulation of the model. These hypotheses can then be examined in several ways: first, through an analysis of inter-annotator agreement; second, through an adjudication of the annotations followed by computational modelling; and third, through a qualitative analysis of problematic cases. We first discuss sentiment and emotion as a classification problem. For some research questions this is not sufficient, so we propose structured models that additionally extract aspects and causes of the respective feeling or opinion. In the case of emotion, we also extract mentions of the experiencer.
In a further step, the methods are extended so that they can also be applied to languages that lack sufficient annotated resources. The contributions of this habilitation are thus several resources whose creation also required underlying conceptual work. We contribute German and English corpora for aspect-based sentiment analysis, emotion classification, and structured emotion analysis. Furthermore, we propose models for the automatic recognition and representation of sentiment, emotion, and related concepts. These either achieve better results than previous methods or model phenomena for the first time; the latter holds in particular for methods made possible by the corpora we created. Across the various approaches, concepts are repeatedly modelled jointly, whether at the representation or the inference level. Methods that make decisions in context consistently achieve better results in our work than those that treat phenomena in isolation. This holds both for artificial neural networks and for probabilistic graphical models.

Item Open Access Adaption des Systems XSTAMPP 4 an die Analysemethode STAMP/CAST in der Einzelplatzanwendung (2020) Zimmermann, Eva
Accidents happen every day; they must be analyzed and explanations for them should be found. CAST, an analysis process built on STAMP, examines existing accidents so that its insights can prevent further ones. To support this process, this bachelor thesis implements a single-user application that assists the user in analyzing accidents. Building on the underlying theory and existing work, a requirements analysis was carried out, on the basis of which the application was then implemented.
As a result of this work, a software tool was completed that enables the analyst to carry out all steps of CAST.

Item Open Access Task-oriented specialization techniques for entity retrieval (2020) Glaser, Andrea; Kuhn, Jonas (Prof. Dr.)
Finding information on the internet has become essential nowadays, and online encyclopedias and topic-specific websites offer users a great amount of information. Search engines support users in finding it; however, the vast amount of information makes it difficult to separate relevant from irrelevant facts for a specific information need. In this thesis we explore two areas of natural language processing in the context of retrieving information about entities: named entity disambiguation and sentiment analysis. The goal of this thesis is to use methods from these areas to develop task-oriented specialization techniques for entity retrieval. Named entity disambiguation is concerned with linking referring expressions (e.g., proper names) in text to their corresponding real-world or fictional entities. Identifying the correct entity is an important factor in finding information on the internet, as many proper names are ambiguous and need to be disambiguated to find relevant information. To that end, we introduce the notion of r-context, a new type of structurally informed context. An r-context consists only of sentences that are relevant to the entity, so as to capture all important context clues while avoiding noise. We then show the usefulness of this r-context through a systematic study on a pseudo-ambiguity dataset. Identifying less-known named entities is a challenge in named entity disambiguation because there is usually not much data available from which a machine learning algorithm can learn.
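The r-context notion above can be illustrated with a deliberately simple sketch: instead of taking a fixed window of surrounding text, keep only the sentences that are lexically related to the target entity. The overlap heuristic, the threshold, and the example sentences are invented for illustration; they are not the thesis's actual relevance criterion.

```python
# Toy "r-context"-style filter: keep only sentences related to the entity,
# rather than a fixed context window, to retain clues and drop noise.
def r_context(sentences, entity_terms, min_overlap=1):
    """Return the sentences sharing at least `min_overlap` terms
    with the entity's known vocabulary."""
    entity_terms = {t.lower() for t in entity_terms}
    selected = []
    for sent in sentences:
        tokens = {w.strip(".,").lower() for w in sent.split()}
        if len(tokens & entity_terms) >= min_overlap:
            selected.append(sent)
    return selected

doc = [
    "Paris Hilton attended the gala.",
    "The weather was mild that evening.",
    "The heiress later posted about the hotel chain.",
]
print(r_context(doc, {"hilton", "heiress", "hotel"}))
# keeps the first and third sentence; the weather sentence is dropped
```

A real system would use richer relevance signals than token overlap, but the shape of the computation (select, then disambiguate on the selection) is the same.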
We propose an approach that aggregates textual data about other entities which share certain properties with the target entity, and learns from this aggregate via topic modelling; the learned information is then used to disambiguate the less-known target entity. We use a dataset created automatically by exploiting the link structure of Wikipedia, and show that our approach helps disambiguate entities that lack training material and have little surrounding context. Retrieving the relevant entities and information can produce many search results, so it is important to present the information to the user effectively. We regard this step as going beyond entity retrieval and employ sentiment analysis, which analyzes opinions expressed in text, in the context of effectively displaying product-review information to a user. We present a system that extracts a supporting sentence: a single sentence that captures both the sentiment of the author and a supporting fact. This supporting sentence gives users an easy way to assess information in order to make informed choices quickly. We evaluate our approach using the crowdsourcing service Amazon Mechanical Turk.

Item Open Access Verifiable tally-hiding E-voting with fully homomorphic encryption (2020) Hasler, Sebastian
An E-voting system is end-to-end verifiable if arbitrary external parties can check whether the result of the election is correct. It is tally-hiding if it does not disclose the full election result but only the relevant information, such as the winner of the election. In this thesis we pursue the goal of constructing an end-to-end verifiable, tally-hiding E-voting system using fully homomorphic encryption. First we construct a variant of the GSW levelled fully homomorphic encryption scheme based on the learning-with-errors-over-rings assumption.
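The scheme itself is far beyond a short example, but the additive, key-homomorphic behaviour that such constructions expose (a ciphertext under key s1 combined with a ciphertext under key s2 acts like a ciphertext under s1 + s2, which is what makes distributed decryption possible) can be illustrated with a deliberately noise-free toy cipher. This is an illustration only, not the GSW variant: real RLWE-based schemes add noise terms and support homomorphic multiplication, neither of which is modelled here.

```python
import secrets

# Noise-free toy cipher over Z_Q with a key-homomorphic structure.
Q = 2**61 - 1  # illustrative modulus

def keygen():
    return secrets.randbelow(Q)

def encrypt(m, s):
    return (m + s) % Q

def decryption_share(s):
    # NOTE: in this toy, a party's share is its whole key; a real protocol
    # hides the key behind noise and zero-knowledge proofs.
    return s % Q

def combine(c, shares):
    return (c - sum(shares)) % Q

# two parties encrypt under their own keys; the sum of the ciphertexts
# behaves like an encryption of the sum under the combined key s1 + s2
s1, s2 = keygen(), keygen()
c = (encrypt(17, s1) + encrypt(25, s2)) % Q

# joint decryption from the two parties' shares
assert combine(c, [decryption_share(s1), decryption_share(s2)]) == 42
```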
We utilize a key-homomorphic property of this scheme to augment it with distributed key generation and distributed decryption. This yields a passively secure 4-round multi-party computation protocol in the common random string model that can evaluate arithmetic circuits of arbitrary size. The complexity of this protocol is quasi-linear in the number of parties and polynomial in both the security parameter and the size of the circuit. By using Fiat-Shamir-transformed discrete-log-based zero-knowledge proofs, we achieve security against active adversaries in the random oracle model while preserving the four-round structure. Based on this actively secure protocol, we construct an end-to-end verifiable, tally-hiding E-voting system with quasi-linear time complexity in the number of voters.

Item Open Access Entwicklung einer höchsteffizienten, weichschaltenden Totem-Pole PFC Stufe basierend auf GaN Transistoren (2020) Lu, Siyuan
This thesis presents a totem-pole power factor correction (PFC) stage that serves as the input stage of a two-stage e-bike charger with a rated power of 180 W; its output voltage is adjustable between 360 V and 400 V. The PFC is based on GaN HEMTs from TI and is built so that it can operate in two different modulation schemes: triangular current mode (TCM) and continuous current mode (CCM). In CCM, the PFC operates at a constant switching frequency with hard switching. In TCM, by contrast, it operates at a variable switching frequency with zero-voltage switching (ZVS), which yields a higher efficiency but a worse power factor (PF) than CCM. The main task of this thesis is the design, construction, and commissioning of the PFC stage.
Measurements of the electrical waveforms, the efficiency, and the component temperatures are carried out for different system configurations in order to validate the design and construction and to compare the system behavior. Thanks to the use of GaN HEMTs and TCM, the maximum system efficiency exceeds 99 %.

Item Open Access Elastic parallel systems for high performance cloud computing (2020) Kehrer, Stefan; Blochinger, Wolfgang (Prof. Dr.)
High Performance Computing (HPC) enables significant progress in both science and industry. Whereas parallel applications were traditionally developed to address the grand challenges of science, today they are also heavily used to speed up the time-to-result in product design, production planning, financial risk management, medical diagnosis, and research and development. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge, and is thus reserved for large organizations that benefit from economies of scale. More recently, the cloud has evolved into an alternative execution environment for parallel applications, with novel characteristics such as on-demand access to compute resources, pay-per-use billing, and elasticity. Whereas the cloud has mainly been used to operate interactive multi-tier applications, HPC users are also interested in the benefits it offers: full control over the resource configuration based on virtualization, fast setup times through on-demand accessible compute resources, and the elimination of upfront capital expenditures thanks to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, which enables fine-grained control of an application's performance, in terms of both its execution time and efficiency and the related monetary costs of the computation.
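A reactive form of such runtime provisioning can be sketched as a simple threshold controller that grows or shrinks the pool of processing units based on an observed load metric. The metric, thresholds, and bounds below are invented for illustration and are not the controller design from the thesis.

```python
# Reactive elasticity-control sketch: decide the number of processing units
# from the current utilization, within fixed bounds.
def scaling_decision(current_units, utilization,
                     scale_up_at=0.8, scale_down_at=0.3,
                     min_units=1, max_units=64):
    """Return the new target number of processing units."""
    if utilization > scale_up_at and current_units < max_units:
        return current_units * 2                    # provision: double capacity
    if utilization < scale_down_at and current_units > min_units:
        return max(min_units, current_units // 2)   # decommission: halve capacity
    return current_units                            # within the target band

print(scaling_decision(4, 0.95))  # -> 8
print(scaling_decision(4, 0.10))  # -> 2
print(scaling_decision(4, 0.50))  # -> 4
```

A proactive controller would instead forecast load and scale ahead of demand; user-defined goals (deadline, budget) would replace the raw utilization thresholds.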
Whereas HPC-optimized cloud environments have been introduced by providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emerging field of High Performance Cloud Computing; the presented contributions focus on the novel opportunities and challenges related to elasticity. First, the principles of elastic parallel systems and the related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction that eases the implementation of elastic parallel applications. To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code while considering application-specific non-functional requirements. Throughout the thesis, a broad spectrum of design decisions for constructing elastic parallel system architectures is discussed, including proactive and reactive elasticity-control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). These contributions are assessed in extensive experimental evaluations.

Item Open Access Evaluating human-computer interfaces for specification and comprehension of transient behavior in microservice-based software systems (2020) Beck, Samuel
Modern software systems are subject to constant change while operating in production.
New agile development methods such as continuous deployment and DevOps enable developers to deploy code changes frequently. In addition, failures and self-adaptation through mechanisms such as elastic scaling and resilience patterns introduce changes into a system at runtime. For that reason, these increasingly complex and distributed systems continuously exhibit transient behavior: the state that occurs while transitioning from one state to another. To make statements about a system's reliability and performance, it is imperative that this transient behavior be specified in non-functional requirements and that stakeholders can review whether these requirements are met. However, due to the complexity of this behavior and the accompanying specifications, only experts can currently do so. This thesis aims to make the specification of non-functional requirements for, and the comprehension of, transient behavior in microservice systems more accessible, particularly for stakeholders who lack expert knowledge of transient behavior. To achieve this, novel approaches are explored that apply modern human-computer interaction methods to this problem. First, the state of the art in transient behavior of software systems, human-computer interaction, and software visualization is presented. Subsequently, expert interviews are conducted to understand how transient behavior is handled in practice and which requirements experts have for an envisioned solution. Based on this, a solution concept is proposed that integrates different visualizations with a chatbot, and it is implemented as a prototype. Finally, the prototype is evaluated in an expert study. The evaluation shows that the approach can support software architects and DevOps engineers in creating and verifying specifications for transient behavior. However, it also reveals that the prototype can still be improved further.
Furthermore, it was shown that integrating a chatbot into the solution was not helpful for the participants. In conclusion, human-computer interaction and visualization methods can be applied to the problems of specifying and analyzing transient behavior in order to support software architects and engineers. The developed prototype shows potential for the exploration of transient behavior, and the evaluation also revealed many opportunities for future improvement.

Item Open Access Feasibility analysis of using Model Predictive Control in Demand-Side Management of residential building (2020) Ramachandran Selvaraj, Sri Vishnu
Energy systems have recently been becoming smart, with increasing communication capabilities between producers, distributors, and consumers. In addition, many distributed renewable energy producers, at both large and domestic scale, are being added to the system every day. Executing smart Demand-Side Management (DSM) programs can provide financial benefits and improve the stability of the energy system without compromising the comfort of end users. Model Predictive Control (MPC) is an advanced process-control method that controls a process while satisfying a set of constraints. Owing to its ability to predict future events and generate optimal control actions, it has been widely used in the process industries since the 1980s and has in recent years been introduced to power systems. This motivates a study of the economic feasibility of using MPC to execute DSM for a residential building, optimizing power consumption costs and the stability of the energy system in the presence of local renewable energy sources (e.g., a PV system). The main contribution of this thesis is to measure the economic benefit of using MPC for DSM of household electricity consumption. The initial part presents a detailed study of modeling the demand side, i.e., the appliances of a smart home, along with the domestic energy generators.
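At the core of each MPC step sits a finite-horizon optimization. As a minimal, purely illustrative sketch (invented prices and limits; no comfort constraints, PV, or appliance models), consider scheduling a flexible load against a dynamic price forecast; for this simple linear problem, filling the cheapest hours first is optimal.

```python
# One step of a receding-horizon schedule for a flexible load: allocate a
# required amount of energy to the cheapest forecast hours, subject to a
# per-hour power limit.
def schedule_load(prices, energy_needed, max_per_hour):
    """Greedy solution of: min sum(p[t]*x[t]) s.t. sum(x) = E, 0 <= x[t] <= max."""
    plan = [0.0] * len(prices)
    for t in sorted(range(len(prices)), key=lambda t: prices[t]):
        take = min(max_per_hour, energy_needed)
        plan[t] = take
        energy_needed -= take
        if energy_needed <= 0:
            break
    return plan

# 6-hour price forecast (EUR/kWh), 5 kWh to schedule, at most 2 kWh per hour
prices = [0.30, 0.12, 0.25, 0.10, 0.22, 0.15]
plan = schedule_load(prices, 5.0, 2.0)
print(plan)       # -> [0.0, 2.0, 0.0, 2.0, 0.0, 1.0]
print(sum(plan))  # -> 5.0
```

In a receding-horizon loop, only the first hour of the plan would be executed before re-solving with updated price and weather forecasts, which is what gives MPC its robustness to changing conditions.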
Apart from the physical properties of the renewable energy generators, the model also considers the influence of external factors such as weather, dynamic electricity pricing, and changing user preferences. The formulated model is used to simulate the residential building, generate an optimized energy-consumption schedule, and calculate the resulting economic benefits. Periodic updates of the weather forecast and the dynamic prices are fed into the simulation to improve the system's prediction accuracy. Lastly, the model is evaluated on a physical implementation to analyze its performance. The thesis yields several findings; for example, the economic benefit of such a system would encourage many users to participate in demand-response programs, which in turn helps reduce pollution originating from non-renewable energy generation.

Item Open Access Design of a software architecture for supervisor system in a satellite (2020) Kelapure, Sarthak
The Internet of Things (IoT) is no longer just a buzzword. With an estimated 30 billion devices in the world by 2020, IoT has already become what was envisioned when the trend began. Still, companies around the world race for an ever better user experience, because the technology keeps being upgraded and the user's need for upgrades never stops. Researchers and scientists work every day to improve the experience, both by improving the involved things and by improving the means of communication, "the internet". One such means of communication expected to grow in the future is satellite communication for IoT. A satellite used for this purpose needs to be low-cost, robust, reliable, and future-ready, so an improvement in satellite architecture is imminent. Making a satellite feature-rich and robust while keeping it low-cost means increasing its mission life. As with human life, this can be achieved through a better medical system for the satellites.
By introducing a doctor on board, this thesis proposes a solution for improved mission life and features for the satellite. The on-board doctor in this case is called the Supervisor system. This system needs robust and modular software on its designated hardware, which can be designed and developed using a standard software architecture that is promising while complementing the requirements. The thesis focuses on designing the software architecture for this Supervisor system. After a study of similar architectures, the author designs a software architecture for the system. The architecture is used to develop important features of the mission and is tested for its portability and modularity. Future needs of, and changes to, the existing system are also anticipated and discussed at the end.

Item Open Access Emotion classification based on the emotion component model (2020) Heindl, Amelie
The term emotion is, despite its frequent use, still mysterious to researchers. This poses difficulties for the task of automatic emotion detection in text. At the same time, applications for emotion classifiers increase steadily in today's digital society, where humans constantly interact with machines. Hence the need arises to improve current state-of-the-art emotion classifiers. The Swiss psychologist Klaus Scherer published an emotion model according to which an emotion is composed of changes in five components: cognitive appraisal, physiological symptoms, action tendencies, motor expressions, and subjective feelings. This model, which he calls the CPM, gained recognition in psychology and philosophy but has so far not been used for NLP tasks. In this work, we investigate whether it is possible to automatically detect the CPM components in social media posts and whether information on those components can aid the detection of emotions.
We create a text corpus of 2,100 Twitter posts in which every instance is labeled with exactly one emotion and a binary label for each CPM component. With a Maximum Entropy classifier, we detect CPM components on this corpus with an average F1-score of 0.56 and an average accuracy of 0.82. Furthermore, we compare baseline versions of a Maximum Entropy and a CNN emotion classifier against extensions of those classifiers that use the CPM annotations and predictions as additional features. Incorporating CPM information yields slight performance increases of up to 0.03 F1 for emotion detection.
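The feature set-up behind this comparison can be sketched as follows: a baseline text representation versus the same representation extended with one binary indicator per CPM component. The bag-of-words baseline, the tiny vocabulary, and the example tweet are invented for illustration; the thesis's classifiers use richer features.

```python
# Baseline features vs. baseline + binary CPM-component indicators
# (cognitive appraisal, physiological symptoms, action tendencies,
# motor expressions, subjective feelings).
CPM_COMPONENTS = ["appraisal", "physiological", "action_tendency",
                  "motor_expression", "subjective_feeling"]

def bow_vector(text, vocabulary):
    """Baseline: bag-of-words counts over a fixed vocabulary."""
    tokens = text.lower().split()
    return [tokens.count(w) for w in vocabulary]

def with_cpm_features(text, vocabulary, cpm_labels):
    """Extension: append one binary feature per CPM component."""
    return bow_vector(text, vocabulary) + [int(cpm_labels[c]) for c in CPM_COMPONENTS]

vocab = ["happy", "shaking", "cry"]
tweet = "I was shaking and wanted to cry"
cpm = {"appraisal": False, "physiological": True, "action_tendency": True,
       "motor_expression": False, "subjective_feeling": True}

print(bow_vector(tweet, vocab))              # -> [0, 1, 1]
print(with_cpm_features(tweet, vocab, cpm))  # -> [0, 1, 1, 0, 1, 1, 0, 1]
```

Either classifier family (Maximum Entropy or a CNN with an auxiliary input) can consume the extended vector; the comparison then isolates the contribution of the CPM indicators.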