05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
14 results
Item Open Access Task-oriented specialization techniques for entity retrieval (2020) Glaser, Andrea; Kuhn, Jonas (Prof. Dr.)
Finding information on the internet has become very important nowadays, and online encyclopedias or websites specialized in certain topics offer users a great amount of information. Search engines support users when trying to find information. However, the vast amount of information makes it difficult to separate relevant from irrelevant facts for a specific information need. In this thesis we explore two areas of natural language processing in the context of retrieving information about entities: named entity disambiguation and sentiment analysis. The goal of this thesis is to use methods from these areas to develop task-oriented specialization techniques for entity retrieval. Named entity disambiguation is concerned with linking referring expressions (e.g., proper names) in text to their corresponding real-world or fictional entity. Identifying the correct entity is an important factor in finding information on the internet, as many proper names are ambiguous and need to be disambiguated to find relevant information. To that end, we introduce the notion of r-context, a new type of structurally informed context. An r-context consists only of sentences that are relevant to the entity, capturing all important context clues while avoiding noise. We then show the usefulness of this r-context by performing a systematic study on a pseudo-ambiguity dataset. Identifying lesser-known named entities is a challenge in named entity disambiguation, because usually there is not much data available from which a machine learning algorithm can learn. We propose an approach that aggregates textual data about other entities which share certain properties with the target entity and learns from this aggregate using topic modelling; the learned information is then used to disambiguate the lesser-known target entity.
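The idea of learning from an aggregate of texts about related entities can be illustrated with a deliberately simple sketch (a bag-of-words profile rather than the topic modelling used in the thesis): each candidate entity is represented by a word-frequency profile aggregated from texts about entities sharing its properties, and an ambiguous mention is resolved to the candidate whose profile is most similar to the mention's context. All entity names and example texts below are hypothetical.

```python
from collections import Counter
from math import sqrt

def bag_of_words(texts):
    """Aggregate a list of texts into a single word-frequency profile."""
    profile = Counter()
    for text in texts:
        profile.update(text.lower().split())
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse word-count profiles."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def disambiguate(context, candidate_profiles):
    """Pick the candidate entity whose aggregated profile best matches the context."""
    ctx = bag_of_words([context])
    return max(candidate_profiles, key=lambda e: cosine(ctx, candidate_profiles[e]))
```

In practice the profiles would be built from large amounts of text about entities that share properties with each candidate, compensating for the lack of training material about the target entity itself.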
We use a dataset that is created automatically by exploiting the link structure in Wikipedia, and show that our approach is helpful for disambiguating entities without training material and with little surrounding context. Retrieving the relevant entities and information can produce many search results, so it is important to present the information to a user effectively. We regard this step as going beyond entity retrieval and employ sentiment analysis, which is used to analyze opinions expressed in text, in the context of effectively displaying information about product reviews to a user. We present a system that extracts a supporting sentence: a single sentence that captures both the sentiment of the author and a supporting fact. This supporting sentence provides users with an easy way to assess information in order to make informed choices quickly. We evaluate our approach using the crowdsourcing service Amazon Mechanical Turk.

Item Open Access Where are emotions in text? A human-based and computational investigation of emotion recognition and generation (2023) Troiano, Enrica; Klinger, Roman (Prof. Dr.)

Item Open Access Automatic term extraction for conventional and extended term definitions across domains (2020) Hätty, Anna; Schulte im Walde, Sabine (apl. Prof. Dr.)
A terminology is the entirety of concepts which constitute the vocabulary of a domain or subject field. Automatically identifying the various linguistic forms of terms in domain-specific corpora is an important basis for further natural language processing tasks, such as ontology creation or, in general, domain knowledge acquisition. As a short overview of terms and domains: expressions like 'hammer', 'jigsaw', 'cordless screwdriver' or 'to drill' can be considered terms in the domain of DIY ('do-it-yourself'); 'beaten egg whites' or 'electric blender' are terms in the domain of cooking.
These examples cover different linguistic forms: simple terms like 'hammer' and complex terms like 'beaten egg whites', which consist of several simple words. However, although these words might seem to be obvious examples of terms, in many cases the decision to distinguish a term from a ‘non-term’ is not straightforward. There is no common, established way to define terms; instead, there are multiple terminology theories and diverse approaches to conducting human annotation studies. In addition, terms can be perceived as more or less terminological, so a hard distinction between term and ‘non-term’ can be unsatisfying. Beyond term definition, the automatic extraction of terms poses further challenges, since complex terms as well as simple terms need to be identified by an extraction system. The extraction of complex terms can profit from exploiting information about their constituents, because complex terms might be infrequent as a whole. Simple terms might be more frequent, but they are especially prone to ambiguity: if a system counts an assumed term occurrence in text that actually carries a different meaning, this leads to wrong extraction results. Thus, term complexity and ambiguity are major challenges for automatic term extraction. The present work describes novel theoretical and computational models for these aspects. It can be grouped into three broad categories: term definition studies, conventional automatic term extraction models, and extended automatic term extraction models that are based on fine-grained term frameworks. Term complexity and ambiguity are special foci. In this thesis, we report on insights and improvements concerning these theoretical and computational models for terminology: We find that terms are concepts that can intuitively be understood by lay people. We test more fine-grained term characterization frameworks that go beyond the conventional term/‘non-term’ distinction.
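The observation that simple terms are frequent in domain text but also occur in general language suggests a classic contrastive baseline: score termhood as the ratio of a word's relative frequency in a domain corpus to its relative frequency in a general corpus. The sketch below is this textbook heuristic shown only for illustration; it is not the model proposed in the thesis, and the toy corpora are invented.

```python
from collections import Counter

def relative_freq(counts, word):
    """Relative frequency of a word in a corpus given its count table."""
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

def termhood(word, domain_counts, general_counts, smoothing=1e-9):
    """Contrastive termhood: how much more frequent a word is in the
    domain corpus than in general language. Higher = more term-like."""
    return relative_freq(domain_counts, word) / (relative_freq(general_counts, word) + smoothing)

# Invented toy corpora for the DIY domain vs. general language.
domain = Counter("drill the wall then drill the plug use the cordless screwdriver".split())
general = Counter("the weather is nice the train is late the coffee is hot".split())
```

Under this score, 'drill' comes out far more term-like than 'the', since 'the' is equally common in general language. Note that such a frequency ratio cannot by itself handle the ambiguity problem described above: an occurrence of 'drill' in a non-DIY sense would still be counted.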
We are the first to describe and model term ambiguity as gradual meaning variation between general and domain-specific language, and we use the resulting representations to prevent errors that term extraction systems typically make due to ambiguity. We develop computational models that exploit the influence of term constituents on the prediction of complex terms. We especially tackle German closed compound terms, a frequent complex term type in German. Finally, we find that we can use similar strategies for computationally modelling term complexity and ambiguity in both conventional and extended term extraction.

Item Open Access A computational stylistics of poetry : distant reading and modeling of German and English verse (2023) Haider, Thomas; Kuhn, Jonas (Prof. Dr.)
This doctoral thesis is about the computational modeling of stylistic variation in poetry. As ‘a computational stylistics’ it examines the forms, social embedding, and aesthetic potential of literary texts by means of computational and statistical methods, ranging from simple counting through information-theoretic measures to neural network models, including experiments with representation learning, transfer learning, and multi-task learning. We built small corpora to manually annotate a number of phenomena that are relevant for poetry, such as meter, rhythm, and rhyme, but also the emotions and aesthetic judgements that are elicited in the reader. A strict annotation workflow allows us to better understand these phenomena, from how to conceptualize them to which problems arise when annotating them on a larger scale. Furthermore, we built large corpora to discover patterns across a wide historical, aesthetic and linguistic range, with a focus on German and English writing, encompassing public domain texts from the late 16th century up to the early 20th century.
These corpora are published with metadata and reliable automatic annotation of part-of-speech tags, syllable boundaries, meter and verse measures. This thesis contains chapters on diachronic variation, aesthetic emotions, and the modeling of prosody, including experiments that investigate the interaction between them. We look at how the diction of poets in different languages changed over time, and which topics and metaphors were and became popular, both as a reaction to aesthetic considerations and to the political climate of the time. We investigate which emotions are elicited in readers of poetry, how these relate to aesthetic judgements, how we can annotate such emotions, and how to train models to learn them. We also present experiments on how to annotate prosodic devices on a large scale, how well computational models can be trained to predict prosody from text, and how informative those devices are for each other.

Item Open Access Distributional analysis of entities (2022) Gupta, Abhijeet; Padó, Sebastian (Prof. Dr.)
Arguably, one of the most important aspects of natural language processing is natural language understanding, which relies heavily on lexical knowledge. In computational linguistics, modelling lexical knowledge through distributional semantics has gained considerable popularity. However, the modelling is largely restricted to generic lexical categories (typically common nouns, adjectives, etc.) which are associated with coarse-grained information, e.g., the category country is associated with having a boundary, rivers and gold deposits. Comparatively less attention has been paid to modelling entities, which, on the other hand, are associated with fine-grained real-world information; for instance, the entity Germany has precise properties such as (GDP: 3.6 trillion Euros), (GDP per capita: 44.5 thousand Euros) and (Continent: Europe).
The lack of focus on entities and the inherently latent nature of information in distributional representations warrant greater efforts towards modelling entity-related phenomena and towards a better understanding of the information encoded within distributional representations. This work makes two contributions in that direction: (a) We introduce a semantic relation, Instantiation, which holds between entities and their categories, and model it distributionally to investigate the hypothesis that distributional distinctions exist between modelling entities and modelling categories within a semantic space. Our results show that in a semantic space: 1) entities and categories are quite distinct with respect to their distributional behaviour, geometry and linguistic properties; 2) the Instantiation relation is recoverable by distributional models; and 3) for lexical relational modelling purposes, categories are better represented by the centroids of their entities than by distributional representations constructed directly from corpora. (b) We also investigate the potential and limitations of distributional semantics for Knowledge Base Completion, starting with the hypothesis that fine-grained knowledge is encoded in distributional representations of entities during their meaning construction. We show that: 1) fine-grained information about entities is encoded in distributional representations and can be extracted by simple data-driven supervised models as attribute-value pairs; 2) such models can predict the entire range of fine-grained attributes, as seen in a knowledge base, in one go; and 3) a crucial factor determining success in extracting this type of information is contextual support, i.e., the extent of contextual information captured by a distributional model during meaning construction.
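Result 3) of contribution (a), representing a category by the centroid of its entities, amounts to taking the component-wise mean of the entities' vectors and comparing against it with cosine similarity. The sketch below uses invented 3-dimensional "embeddings"; real distributional vectors have hundreds of dimensions.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length entity vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical toy embeddings for three country entities.
germany = [0.9, 0.1, 0.3]
france = [0.8, 0.2, 0.4]
japan = [0.85, 0.15, 0.2]
country_centroid = centroid([germany, france, japan])
```

The centroid is then used as the representation of the category (here, country) instead of the corpus-derived vector of the word itself.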
Overall, this thesis takes a step towards a better understanding of entity meaning representations in a distributional setup, with respect to both their modelling and the extent of knowledge included during their meaning construction.

Item Open Access Computational models of word order (2022) Yu, Xiang; Kuhn, Jonas (Prof. Dr.)
A sentence in our mind is not a simple sequence of words but a hierarchical structure. We put the sentence into linear order when we utter it for communication. Linearization is the task of mapping the hierarchical structure of a sentence onto its linear order. Our work is based on dependency grammar, which models the dependency relations between words; the resulting syntactic representation is a directed tree structure. The popularity of dependency grammar in Natural Language Processing (NLP) benefits from its separation of hierarchical structure and linear order and from its emphasis on syntactic functions. These properties facilitate a universal annotation scheme covering the wide range of languages used in our experiments. We focus on developing a robust and efficient computational model that finds the linear order of a dependency tree. We take advantage of the expressive power of deep learning models to robustly encode the syntactic structures of typologically diverse languages. We take a graph-based approach that combines a simple bigram scoring model and a greedy decoding algorithm to search for the optimal word order efficiently. We use a divide-and-conquer strategy to reduce the search space, which restricts the output to be projective, and then lift this restriction with a transition-based post-processing model. Apart from the computational models, we also study word order from a quantitative linguistic perspective. We examine the Dependency Length Minimization (DLM) hypothesis, which is believed to be a universal factor affecting the word order of every language.
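The quantity at the heart of the DLM hypothesis, the summed length of all dependency arcs in a sentence, is straightforward to compute once each word's head position is known. The minimal sketch below (not the thesis's code) measures it for two hypothetical orderings of the same toy dependency tree.

```python
def dependency_length(heads):
    """Total dependency length of a sentence.
    heads[i] is the position of word i's head, or None for the root."""
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# One toy tree (the -> dog -> barked <- loudly) in two linear orders:
#   "the dog barked loudly"   vs.   "loudly the dog barked"
base_heads = [1, 2, None, 2]     # the->dog, dog->barked, barked=root, loudly->barked
fronted_heads = [3, 2, 3, None]  # loudly->barked, the->dog, dog->barked, barked=root
```

Under DLM, the first ordering (total arc length 3) is preferred over the second (total arc length 5), because every dependent stays close to its head.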
It states that human languages tend to order words so as to minimize the overall length of dependency arcs, which reduces the cognitive burden of speaking and understanding. We demonstrate that DLM can explain every aspect of word order in a dependency tree, such as the direction of the head, the arrangement of sibling dependents, and the existence of crossing arcs (non-projectivity). Furthermore, we find that DLM not only shapes general word order preferences but also motivates occasional deviations from those preferences. Finally, we apply our model to the task of surface realization, which aims to generate a sentence from a deep syntactic representation. We implement a pipeline with five steps, (1) linearization, (2) function word generation, (3) morphological inflection, (4) contraction, and (5) detokenization, which achieves state-of-the-art performance.

Item Open Access Linguistically-informed modeling of potentials for misunderstanding (2024) Anthonio, Talita; Roth, Michael (Dr.)
Misunderstandings are prevalent in communication. While there is a large amount of work on misunderstandings in conversations, little attention has been given to misunderstandings that arise from text, where readers and writers typically do not interact with one another. However, texts that potentially evoke different interpretations can be identified through certain linguistic phenomena, especially those related to implicitness or underspecificity. In computational linguistics, a considerable amount of work has been conducted on such linguistic phenomena and their computational modeling. However, most of these studies do not examine when these phenomena cause misunderstandings. This is a crucial aspect, because ambiguous language does not always cause misunderstanding.
In this thesis, we provide the first steps towards a computational model that can automatically identify whether an instructional text is likely to cause misunderstandings ("potentials for misunderstanding"). To achieve this goal, we build large corpora of potentials for misunderstanding in instructional texts. Following previous work, we define misunderstandings as the existence of multiple plausible interpretations; since such interpretations may be similar in meaning to one another, we specifically require the interpretations to be plausible but conflicting. We therefore find texts with potentials for misunderstanding by looking for passages that have several plausible interpretations that conflict with one another. We identify such passages automatically from the revision histories of instructional texts, based on the finding that potentials for misunderstanding can be found by looking at older versions of a text and their clarifications in newer versions. We specifically look for unclarified sentences that contain implicit and underspecified language, and study their clarifications. Through several analyses and crowdsourcing studies, we demonstrate that our corpora provide valuable resources on potentials for misunderstanding, finding that revised sentences are better than their earlier versions. Furthermore, we show that the corpora can be used for several computational modeling purposes. The three resulting models can be combined to identify whether a text potentially causes misunderstanding or not. More specifically, we first develop a model that can detect improvements in a text, even when they are subtle and closely dependent on the context. In an analysis, we verify that the model's judgements on what makes a better or equally good sentence overlap with judgements by humans.
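Mining revision histories for clarified sentences can be illustrated with a minimal stdlib sketch that aligns two versions of a text and returns the sentence pairs that were rewritten. This is only a toy alignment, not the corpus construction pipeline described in the thesis, and the example sentences are invented.

```python
import difflib

def clarification_pairs(old_sentences, new_sentences):
    """Align two versions of an instructional text and return
    (old, new) pairs of sentences that were revised between versions."""
    matcher = difflib.SequenceMatcher(a=old_sentences, b=new_sentences)
    pairs = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # Keep one-to-one replacements: a sentence rewritten in place.
        if tag == "replace" and (i2 - i1) == (j2 - j1):
            pairs.extend(zip(old_sentences[i1:i2], new_sentences[j1:j2]))
    return pairs

old = ["Cut the wire.", "Attach it to the board.", "Test the circuit."]
new = ["Cut the wire.", "Attach the red wire to the board.", "Test the circuit."]
```

Here the pair ("Attach it to the board.", "Attach the red wire to the board.") is recovered: the older version contains an implicit reference ("it") that the newer version clarifies.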
Secondly, we build a transformer-based language model that automatically resolves potentials for misunderstanding caused by implicit references. We find that modeling discourse context improves the performance of this model. In an analysis, we find that the best model is not only capable of generating the gold resolution but also of generating several plausible resolutions for implicit references in instructional text. We use this finding to build a large dataset of plausible and implausible resolutions for implicit and underspecified elements. We use the resulting dataset for a third computational task, in which we train a model to automatically distinguish between plausible and implausible resolutions. We show that this model and the provided dataset can be used to find passages with several plausible clarifications. Since our definition of misunderstanding focuses on conflicting clarifications, we conduct a final study to conclude the thesis: we provide and validate a crowdsourcing set-up that allows us to find cases with conflicting plausible resolutions. The set-up and findings could be used in future research to directly train a model that identifies passages with implicit elements that have conflicting resolutions.

Item Open Access Prosodic event detection for speech understanding using neural networks (2020) Stehwien, Sabrina; Vu, Ngoc Thang (Prof. Dr.)

Item Open Access Deep learning aided clinical decision support (2023) Schneider, Rudolf; Staab, Steffen (Prof. Dr.)
Medical professionals create vast amounts of clinical text during patient care. Often, these documents describe medical cases from anamnesis to the final clinical outcome. Automated understanding and selection of relevant medical records offer an opportunity to assist medical doctors in their day-to-day work on a large scale.
However, clinical text understanding is challenging, especially when dealing with clinical narratives such as nursing notes or diagnostic reports. These clinical documents differ extensively in length, structure, vocabulary, and lexical and grammatical correctness. In addition, they are highly context-dependent. For all these reasons, approaches based on syntactic rules and discrete text representations often fail to handle the variety of clinical narratives, propagating unrecoverable errors to downstream applications. Therefore, this thesis focuses on evaluating and designing methods and models that are generalizable and adaptable enough to deal with these challenges. Our goal is to enable text-based clinical decision support systems to utilize the knowledge in clinical archives and medical publications, with methods that can scale to the growing number of clinical documents in hospital archives. A fundamental problem in achieving deep-learning-enabled clinical decision support is designing a patient representation that captures all information relevant for automated processing. We address these challenges by designing a framework for deep-learning-enabled differential diagnosis support. Guided by the needs emerging from this framework, we design and evaluate methods based on three information representation paradigms: (1) discrete relation extraction using the open information extraction paradigm; (2) neural text representations based on language and topic modeling; (3) combinations of complementary neural text representations. Our framework translates clinical diagnostic steps and pathways into statistical and deep-learning-based models. Accordingly, we show that deep-learning-enabled differential diagnosis benefits from contextualized information representations. Further, we identify shortcomings of the open information extraction paradigm in a comprehensive benchmark.
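Paradigm (3), combining complementary neural text representations, can be illustrated by the simplest combination strategy: normalize each representation and concatenate them, so that no single representation dominates by scale. This is a generic sketch with invented toy vectors, not the combination model evaluated in the thesis.

```python
def l2_normalize(vec):
    """Scale a vector to unit L2 norm (leave zero vectors unchanged)."""
    norm = sum(x * x for x in vec) ** 0.5
    return [x / norm for x in vec] if norm else list(vec)

def combine(representations):
    """Concatenate complementary text representations, normalizing each
    first so that no single representation dominates by scale."""
    combined = []
    for vec in representations:
        combined.extend(l2_normalize(vec))
    return combined

# Hypothetical toy vectors: a 3-d topic distribution and a 4-d LM embedding.
topic_vec = [0.7, 0.1, 0.2]
lm_vec = [0.05, -0.3, 0.8, 0.1]
document_repr = combine([topic_vec, lm_vec])
```

A downstream classifier would then consume the concatenated vector, so that topical and contextual signals both remain available.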
We design a distributed text representation model based on topical information. Our extensive large-scale experiments show that topical distributed text representations capture information complementary to language-modeling-based approaches across domains, enabling a holistic text representation for medical texts. Experiments with medical doctors using our prototypical implementation of the deep-learning-enabled differential diagnosis process validate this framework. Moreover, based on our qualitative and quantitative findings, we identify seven crucial design challenges for text-based clinical decision support systems.

Item Open Access Cross-lingual frame comparability : computational and linguistic perspectives (2023) Sikos, Jennifer; Padó, Sebastian (Prof. Dr.)
Frames are descriptions of commonplace scenarios or events. Because they describe everyday scenes, such as buying or eating, it seems reasonable to assume that many frames in one language would carry over directly to other languages. However, the specifics of how a scene is realized can be highly culture-specific; it is still an open research question how well (and how many) frames actually apply across languages. This thesis concerns cross-lingual frame comparability, the degree to which a frame can be transferred from one language to another. It addresses several aspects of frame comparability: what frame comparability is; how a computational system can measure it; and how it affects cross-lingual models of frames.