05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Browse

Search Results

Now showing 1 - 10 of 101
  • Item (Open Access)
    Strukturierte Modellierung von Affekt in Text
    (2020) Klinger, Roman; Padó, Sebastian (Prof. Dr.)
    Emotions, moods, and opinions are affective states that cannot be directly observed by one person in another and can therefore be considered "private". To nevertheless infer these individual feelings and views, we are used in everyday communication to interpreting facial expressions, body posture, prosody, and the content of speech. The research field of affective computing and its more specialised subfields of emotion analysis and sentiment analysis develop computational models that make such estimates automatically. This habilitation thesis falls within the area of affective computing and contributes to the study and modelling of sentiment and emotion in textual descriptions. Among other domains, we deal with literature, social media, and product reviews. To find appropriate models for the respective phenomena, we proceed in each case by using or creating a corpus as a basis, thereby already making hypotheses about the formulation of the model. These hypotheses can then be examined in several ways: first, by analysing inter-annotator agreement; second, by adjudicating the annotations followed by computational modelling; and third, by a qualitative analysis of the problematic cases. We first discuss sentiment and emotion as classification problems. For some research questions this is not sufficient, so we propose structured models that also extract aspects and causes of the respective feeling or opinion. In the case of emotion, we additionally extract mentions of the experiencer. In a further step, the methods are extended so that they can also be applied to languages that lack sufficient annotated resources. The contributions of this habilitation thesis are thus several resources, whose creation also required underlying conceptual work. We contribute German and English corpora for aspect-based sentiment analysis, emotion classification, and structured emotion analysis. Furthermore, we propose models for the automatic recognition and representation of sentiment, emotion, and related concepts. These either achieve better results than previous methods or model phenomena for the first time; the latter holds in particular for methods that were made possible by the corpora we created. Across the different approaches, concepts are repeatedly modelled jointly, be it at the representation or the inference level. Methods that make decisions in context consistently show better results in our work than those that consider the phenomena separately. This holds both for the use of artificial neural networks and for probabilistic graphical models.
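A minimal sketch of how a structured emotion annotation of the kind described above could be represented; the class and field names are illustrative assumptions, not the annotation scheme used in the thesis.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical representation of one structured emotion annotation: beyond a
# plain emotion label, it records text spans for the experiencer (the feeler),
# the cause, and the target/aspect of the emotion.
Span = Tuple[int, int]  # character offsets into the text

@dataclass
class EmotionAnnotation:
    text: str
    emotion: str                      # e.g. "joy", "anger"
    experiencer: Optional[Span] = None
    cause: Optional[Span] = None
    target: Optional[Span] = None

example = EmotionAnnotation(
    text="Anna was thrilled about the award.",
    emotion="joy",
    experiencer=(0, 4),   # "Anna"
    cause=(24, 33),       # "the award"
)
print(example.emotion, example.text[slice(*example.experiencer)])
```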
  • Item (Open Access)
    Task-oriented specialization techniques for entity retrieval
    (2020) Glaser, Andrea; Kuhn, Jonas (Prof. Dr.)
    Finding information on the internet has become very important nowadays, and online encyclopedias or websites specialized in certain topics offer users a great amount of information. Search engines support users when trying to find information. However, the vast amount of information makes it difficult to separate relevant from irrelevant facts for a specific information need. In this thesis we explore two areas of natural language processing in the context of retrieving information about entities: named entity disambiguation and sentiment analysis. The goal of this thesis is to use methods from these areas to develop task-oriented specialization techniques for entity retrieval. Named entity disambiguation is concerned with linking referring expressions (e.g., proper names) in text to their corresponding real-world or fictional entity. Identifying the correct entity is an important factor in finding information on the internet, as many proper names are ambiguous and need to be disambiguated to find relevant information. To that end, we introduce the notion of r-context, a new type of structurally informed context. This r-context consists only of sentences that are relevant to the entity, in order to capture all important context clues while avoiding noise. We then show the usefulness of this r-context by performing a systematic study on a pseudo-ambiguity dataset. Identifying less known named entities is a challenge in named entity disambiguation because usually there is not much data available from which a machine learning algorithm can learn. We propose an approach that aggregates textual data about other entities which share certain properties with the target entity, learns information from it by topic modelling, and then uses this information to disambiguate the less known target entity. We use a dataset that is created automatically by exploiting the link structure in Wikipedia, and show that our approach is helpful for disambiguating entities without training material and with little surrounding context. Retrieving the relevant entities and information can produce many search results. Thus, it is important to effectively present the information to a user. We regard this step as going beyond entity retrieval and employ sentiment analysis, which is used to analyze opinions expressed in text, in the context of effectively displaying information about product reviews to a user. We present a system that extracts a supporting sentence, a single sentence that captures both the sentiment of the author and a supporting fact. This supporting sentence can be used to provide users with an easy way to assess information in order to make informed choices quickly. We evaluate our approach by using the crowdsourcing service Amazon Mechanical Turk.
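A hedged sketch of the aggregation-plus-topic-modelling idea described above, here using LDA as one possible topic model; the texts, model choice, and similarity scoring are illustrative assumptions rather than the thesis's actual pipeline.

```python
# Build a topic profile for a rarely mentioned entity from texts about related
# entities, then score how well a mention context matches that profile.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

related_entity_texts = [
    "article about another football club in the same league ...",
    "article about a third club with similar properties ...",
]
mention_context = "the club won its first home match of the season ..."

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(related_entity_texts)

lda = LatentDirichletAllocation(n_components=5, random_state=0)
lda.fit(X)

# Topic distribution of the aggregated related texts vs. the mention context.
profile = lda.transform(X).mean(axis=0, keepdims=True)
context_topics = lda.transform(vectorizer.transform([mention_context]))
score = cosine_similarity(profile, context_topics)[0, 0]
print(f"topic similarity between candidate profile and context: {score:.2f}")
```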
  • Item (Open Access)
    German clause-embedding predicates : an extraction and classification approach
    (2010) Lapshinova-Koltunski, Ekaterina; Heid, Ulrich (Prof. Dr. phil. habil.)
    This thesis describes a semi-automatic approach to the analysis of subcategorisation properties of verbal, nominal and multiword predicates in German. We semi-automatically classify predicates according to their subcategorisation properties by extracting them from German corpora along with their complements. In this work, we concentrate exclusively on sentential complements, such as dass, ob and w-clauses, although our methods can also be applied to other complement types. Our aim is not only to extract and classify predicates but also to compare the subcategorisation properties of morphologically related predicates, such as verbs and their nominalisations. It is usually assumed that the subcategorisation properties of nominalisations are taken over from their underlying verbs. However, our tests show that there exist different types of relations between them. Thus, we review the subcategorisation properties of morphologically related words and analyse their correspondences and differences. For this purpose, we develop a set of semi-automatic procedures that allow us not only to classify extracted units according to their subcategorisation properties, but also to compare the properties of verbs and their nominalisations, which occur both freely in corpora and within multiword expressions. The lexical data are created to serve symbolic NLP, especially large symbolic grammars for deep processing, such as HPSG or LFG, cf. work in the LinGO project (Copestake et al. 2004) and the Pargram project (Butt et al. 2002). HPSG and LFG need detailed linguistic knowledge. In addition, subcategorisation information can be used in information extraction (IE) applications, cf. (Surdeanu et al. 2003). Moreover, this information is necessary for linguistic, lexicographic, SLA and translation work. Our extraction and classification procedures are precision-oriented, which means that we focus on high accuracy of the extraction and classification results. High precision comes at the expense of completeness, which we compensate for by applying the extraction procedures to larger corpora.
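The extraction step can be imagined roughly as follows; this is a deliberately crude, pattern-based sketch over raw sentences (the real procedures operate on annotated corpora), and all names and patterns are illustrative.

```python
import re

# Illustrative sketch only: pair a candidate predicate with the type of
# sentential complement (dass-, ob- or w-clause) that follows it.
# A realistic procedure would work on POS-tagged or parsed corpora instead.
COMPLEMENTIZER = r"(dass|ob|wer|was|wann|wo|wie|warum)"
PATTERN = re.compile(r"(\w+)\s*,\s*" + COMPLEMENTIZER + r"\b", re.IGNORECASE)

sentences = [
    "Er bezweifelt, dass das Experiment gelingt.",
    "Sie fragt, ob die Analyse korrekt ist.",
    "Die Frage, warum das so ist, bleibt offen.",
]

for sentence in sentences:
    for predicate, comp in PATTERN.findall(sentence):
        print(f"{predicate!r} takes a {comp.lower()}-clause complement")
```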
  • Item (Open Access)
    The perfect time span : on the present perfect in German, Swedish and English
    (2006) Rothstein, Björn Michael; Kamp, Hans (Prof. Dr. h.c. PhD)
    This study proposes a discourse-based approach to the present perfect in German, Swedish and English. It is argued that the present perfect is best analysed by applying an Extended-Now approach, which introduces a perfect time span in which the event time expressed by the present perfect is contained. The present perfects in these languages differ with respect to the boundaries of the perfect time span. In English, the right boundary is identical to the point of speech; in Swedish it can be either at or after the moment of speech; and in German it can also be before the moment of speech. The left boundary is unspecified. The right boundary is set by context.
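The contrast between the three languages can be stated compactly; the notation below (perfect time span [t_l, t_r], event time τ(e), speech time n) is shorthand for the description above, not the thesis's own formalism.

```latex
% Shorthand: perfect time span PTS = [t_l, t_r], event time \tau(e), speech time n.
\[
  \tau(e) \subseteq [t_l,\, t_r]
\]
\[
  \text{English: } t_r = n \qquad
  \text{Swedish: } t_r \geq n \qquad
  \text{German: } t_r \text{ may precede, coincide with, or follow } n
\]
% The left boundary t_l is left unspecified; within these constraints, context
% fixes the right boundary.
```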
  • Item (Open Access)
    Segmental factors in language proficiency : degree of velarization, coarticulatory resistance and vowel formant frequency distribution as a signature of talent
    (2011) Baumotte, Henrike; Dogil, Grzegorz (Prof. Dr.)
    This PhD thesis proposes an explanation for why German native speakers of various proficiency levels, producing multiple English varieties, speak their L2 English with different degrees of foreign accent. The author used phonetic measurements to investigate the degree of velarization and of coarticulation, or coarticulatory resistance, in German and English, using non-word and natural-language stimuli. To characterise the differences between the productions of proficient, average and less proficient speakers in German and English, the mean F2 and Fv values in /ə/ before /l/ and in /l/ were calculated in order to compare the degree of velarization across /əlV/ non-word sequences. Proficient speakers showed lower formant frequencies for F2 and Fv in /ə/ than less proficient speakers, i.e. proficient speakers velarized more than less proficient speakers. For the comparisons concerning coarticulation and coarticulatory resistance, the difference values of F2 and F2' in /ə/ were computed for /əleɪ/ vs. /əlu:/, /əly/ vs. /əleɪ/ and /əly/ vs. /əlaɪ/. Across the whole series of measurements, there was a clear trend for proficient speakers to be more coarticulatorily resistant, i.e. to velarize more, and to pronounce English vowel characteristics more precisely than less proficient speakers, while average speakers did not consistently behave as predicted and were sometimes "worse" than less proficient speakers. Following Díaz et al. (2008), who argued for pre-existing individual differences in phonetic discrimination ability that strongly influence the acquisition of a foreign sound system, it is claimed that foreign-language phonetic ability derives from native phonetic abilities.
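A small sketch of the kind of comparison described above (mean F2 in /ə/ per proficiency group and an F2 difference between two /əlV/ contexts); the numbers are invented placeholders, not measurements from the study.

```python
import numpy as np

# Mean F2 in /ə/ per proficiency group; lower F2 is taken here as an
# indication of stronger velarization (placeholder values only).
f2_schwa_hz = {
    "proficient":      np.array([1350.0, 1380.0, 1320.0]),
    "less_proficient": np.array([1520.0, 1490.0, 1550.0]),
}

for group, values in f2_schwa_hz.items():
    print(f"{group}: mean F2 in /ə/ = {values.mean():.0f} Hz")

# Coarticulatory resistance proxy: F2 difference of /ə/ between two contexts;
# a smaller difference suggests the schwa resists the following vowel more.
f2_in_elei, f2_in_elu = 1360.0, 1340.0
print(f"F2 difference /əleɪ/ vs. /əlu:/: {abs(f2_in_elei - f2_in_elu):.0f} Hz")
```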
  • Item (Open Access)
    Computational modelling of coreference and bridging resolution
    (2019) Rösiger, Ina; Kuhn, Jonas (Prof. Dr.)
  • Item (Open Access)
    Analysis of political positioning from politician’s tweets
    (2023) Maurer, Maximilian Martin
    Social media platforms such as Twitter have become important communication channels for politicians to interact with the electorate and communicate their stances on policy issues. In contrast to party manifestos, which lay out curated compromise positions, the full range of positions within the ideological bounds of a party can be found on social media. This raises the question of how well the ideological positions of parties on social media align with their respective manifestos. To assess this alignment, we correlate the positions automatically retrieved from the tweets with manifesto-based positions for the German federal elections of 2017 and 2021. Additionally, we assess whether the change in positions over time is aligned between social media and manifestos. We retrieve ideological positions by aggregating distances between parties computed from sentence representations of their members' tweets, using a corpus containing more than 2M individual tweets by 421 German politicians. We leverage domain-specific information by training a sentence embedding model such that representations of tweets with co-occurring hashtags are closer to each other than ones without co-occurring hashtags, following the assumption that hashtags approximate policy-related topics. Our experiments compare this political social media domain-specific model with other political-domain and general-domain sentence embedding models. We find high, significant correlations between the Twitter-retrieved positions and manifesto positions, especially for our domain-specific fine-tuned model. Moreover, for this model, we find overlaps in terms of how the positions change over time. These results indicate that the ideological positions of parties on Twitter correspond to a large extent to the ideological positions laid out in the manifestos.
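A hedged sketch of the aggregation and comparison step described above: party positions are proxied by mean tweet embeddings, pairwise party distances are computed, and these are correlated with manifesto-based distances. The vectors and manifesto scores below are random placeholders, and the hashtag-based fine-tuning of the sentence embedding model is not shown.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
parties = ["A", "B", "C", "D"]

# Mean sentence embedding of each party's tweets (placeholder vectors).
party_embeddings = np.stack([rng.normal(size=384) for _ in parties])

# Pairwise distances between parties as retrieved from tweets.
twitter_distances = pdist(party_embeddings, metric="cosine")

# Pairwise distances between the same parties derived from manifestos
# (placeholder values standing in for manifesto-based positions).
manifesto_distances = rng.uniform(size=twitter_distances.shape)

rho, p = spearmanr(twitter_distances, manifesto_distances)
print(f"Spearman correlation, Twitter vs. manifesto distances: {rho:.2f} (p={p:.2f})")
```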
  • Item (Open Access)
    Evaluating methods of improving the distribution of data across users in a corpus of tweets
    (2023) Milovanovic, Milan
    Corpora created from social network data often serve as the data source for tasks in natural language processing. Compared to other, more standardized corpora, social media corpora have idiosyncratic properties because they consist of user-generated comments: an unbalanced distribution of the comments across users, a generally lower linguistic quality, and an inherently unstructured and noisy nature. Using a Twitter-generated corpus, I investigate to what extent the unbalanced distribution of the data influences two downstream tasks that rely on word embeddings. Word embeddings are a ubiquitous and frequently used concept in the field of natural language processing. The most common models obtain semantic information about words and their usage by representing the words in an abstract word vector space; the basic idea is that semantically similar words have similar vectors in this space. These vectors serve as input for standard downstream tasks such as word similarity and semantic change detection. One of the most common models in current research is word2vec, and more specifically its Skip-gram architecture, which attempts to predict the surrounding words based on the current word. The data on which this architecture is trained greatly influences the resulting word vectors. In the context of this work, however, no significant improvement over a fully preprocessed corpus could be found, neither for word similarity nor for semantic change detection, when filtering methods that are widely used in the literature without specific motivation are applied to select a subset of the data according to defined criteria. However, comparable results could be achieved with some filters, even though the resulting models were trained on significantly fewer input tokens.
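A minimal sketch of the Skip-gram setup described above, using the gensim Word2Vec implementation; the toy sentences and hyperparameters are placeholders, not the configuration used in the thesis.

```python
from gensim.models import Word2Vec

# Toy tokenized corpus standing in for a preprocessed tweet corpus.
sentences = [
    ["klimawandel", "ist", "ein", "wichtiges", "thema"],
    ["das", "thema", "klimaschutz", "wird", "diskutiert"],
    ["die", "wahl", "ist", "ein", "wichtiges", "ereignis"],
]

model = Word2Vec(
    sentences,
    vector_size=100,   # dimensionality of the word vectors
    window=5,          # context window size
    min_count=1,       # keep every token in this toy example
    sg=1,              # 1 = Skip-gram: predict context words from the current word
    epochs=50,
)

# Word similarity as cosine similarity in the learned vector space.
print(model.wv.most_similar("thema", topn=3))
```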
  • Item (Open Access)
    Modeling the interface between morphology and syntax in data-driven dependency parsing
    (2016) Seeker, Wolfgang; Kuhn, Jonas (Prof. Dr.)
    When people formulate sentences in a language, they follow a set of rules specific to that language that defines how words must be put together in order to express the intended meaning. These rules are called the grammar of the language. Languages have essentially two ways of encoding grammatical information: word order or word form. English uses primarily word order to encode different meanings, but many other languages change the form of the words themselves to express their grammatical function in the sentence. These languages are commonly subsumed under the term morphologically rich languages. Parsing is the automatic process of predicting the grammatical structure of a sentence. Since grammatical structure guides the way we understand sentences, parsing is a key component in computer programs that try to automatically understand what people say and write. This dissertation is about parsing, and specifically about parsing languages with a rich morphology, which encode grammatical information in word forms. Today's models for automatic parsing were developed for English and achieve good results on this language. However, when applied to other languages, a significant drop in performance is usually observed. The standard model for parsing is a pipeline model that separates the parsing process into different steps; in particular, it separates the morphological analysis, i.e. the analysis of word forms, from the actual parsing step. This dissertation argues that this separation is one of the reasons for the performance drop of standard parsers when applied to languages other than English. An analysis is presented that exposes the connection between the morphological system of a language and the errors of a standard parsing model. In a second series of experiments, we show that knowledge about the syntactic structure of a sentence can support the prediction of morphological information. We then argue for an alternative approach that models morphological analysis and syntactic analysis jointly instead of separating them. We support this argumentation with empirical evidence by implementing two parsers that model the relationship between morphology and syntax in two different but complementary ways.
  • Item (Open Access)
    The German boundary tones: categorical perception, perceptual magnets, and the perceptual reference space
    (2012) Schneider, Katrin; Dogil, Grzegorz (Prof. Dr.)
    This thesis experimentally analyzes the perception of prosodic categories in German, using the two German boundary tones L% and H% postulated by German phonology. These two boundary tone categories were selected because they constitute the least disputed tonal contrast. In many languages, German among them, the contrast between the low (L%) and the high (H%) boundary tone corresponds to a contrast in sentence mode: the low boundary tone is interpreted as a statement and the high boundary tone as a question. For all experiments presented in this thesis it is hypothesized that the different perception of L% and H% as statement versus question, respectively, can be attributed to a contrast between two prosodic categories, i.e. to Categorical Perception. The basis for this hypothesis is the observation that the sentence mode of a syntactically ambiguous utterance can only be determined by the height of its boundary tone. Assuming the existence of the two proposed boundary tone categories, two experimental designs are presented that can be used to confirm categories and to detect perceptual differences inside a category or between categories. These two designs are the test for Categorical Perception (CP) and the test for the Perceptual Magnet Effect (PME). Originally, both designs were developed to examine perceptual differences in the segmental domain, especially for the evaluation of phoneme categories. Categorical Perception is confirmed when the boundary between the two categories corresponds to the point at which the discrimination performance between two adjacent stimuli is best. If the Categorical Perception test is successful for two speech events, these two events are confirmed as categories of the respective language. A Perceptual Magnet Effect involves a warping of the perceptual space towards a prototype of the respective category. Such a warping does not occur towards a non-prototype of the same category. The result of the warping is a significantly lower discrimination performance around the prototype, i.e. the prototype is not, or only with difficulty, discriminable from an adjacent stimulus. Such a warping is not found around a non-prototype, although the acoustic difference between a stimulus and the non-prototype is comparable to the acoustic difference between a stimulus and the prototype. For the analysis and interpretation of the experimental results, Signal Detection Theory (SDT) and Exemplar Theory are used. Signal Detection Theory postulates that, despite similar auditory abilities, subjects may differ in their perceptual results because of their individual response criterion. Exemplar Theory proposes that listeners store the perceived instances of speech events, with much phonetic detail, in exemplar clouds located in their perceptual space. During speech production, the speaker uses these clouds of similar exemplars to produce an instance of a speech event. Thus, speech perception and production are inseparably connected. The more exemplars are stored, the more stable a speech category becomes. Only stable categories can develop a category center and a Perceptual Magnet Effect. In various studies, reaction times were found to be a reliable indicator of the simplicity of a perceptual decision. Thus, in the experiments presented in this thesis, reaction times were measured for each individual decision. The results support the already known correlation: the simpler a perceptual decision, the shorter the reaction time. To summarize, the results discussed in this thesis support the existence of prosodic categories in general, and especially those of the high and the low boundary tone in German. These two prosodic categories are used to differentiate between the sentence modes statement versus question, but only in the case of syntactically ambiguous phrases. Furthermore, the results support the use of Exemplar Theory for speech data. The category of the low boundary tone seems to contain many more exemplars than the category of the high boundary tone, as the latter is less often produced and thus less often perceived than the former. This results in a clear Perceptual Magnet Effect for the L% category, since enough exemplars are stored there to support the development of a category center, and the PME can only occur in the center of a category. For most listeners the H% category contains only a few exemplars, which in turn inhibits the development of a Perceptual Magnet Effect there. The logged reaction times support the perceptual findings and the hypothesis that reaction times correlate with the simplicity of a perceptual decision.
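A hedged illustration of a standard Signal Detection Theory computation of the kind referred to above: sensitivity (d') and the individual response criterion derived from hit and false-alarm rates; the rates are invented placeholders, not data from the thesis.

```python
from scipy.stats import norm

# Placeholder rates from a hypothetical discrimination task.
hit_rate = 0.82          # "different" responses to actually different pairs
false_alarm_rate = 0.25  # "different" responses to identical pairs

z_hit = norm.ppf(hit_rate)
z_fa = norm.ppf(false_alarm_rate)

d_prime = z_hit - z_fa               # discrimination sensitivity
criterion = -0.5 * (z_hit + z_fa)    # individual response bias

print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```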