05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 102
  • Item (Open Access)
    German clause-embedding predicates : an extraction and classification approach
    (2010) Lapshinova-Koltunski, Ekaterina; Heid, Ulrich (Prof. Dr. phil. habil.)
This thesis describes a semi-automatic approach to the analysis of subcategorisation properties of verbal, nominal and multiword predicates in German. We semi-automatically classify predicates according to their subcategorisation properties by extracting them from German corpora along with their complements. In this work, we concentrate exclusively on sentential complements, such as dass-, ob- and w-clauses, although our methods can also be applied to other complement types. Our aim is not only to extract and classify predicates but also to compare the subcategorisation properties of morphologically related predicates, such as verbs and their nominalisations. It is usually assumed that the subcategorisation properties of nominalisations are taken over from their underlying verbs. However, our tests show that there exist different types of relations between them. Thus, we review the subcategorisation properties of morphologically related words and analyse their correspondences and differences. For this purpose, we elaborate a set of semi-automatic procedures that allow us not only to classify extracted units according to their subcategorisation properties, but also to compare the properties of verbs and their nominalisations, which occur both freely in corpora and within multiword expressions. The lexical data are created to serve symbolic NLP, especially large symbolic grammars for deep processing, such as HPSG or LFG, cf. work in the LinGO project (Copestake et al. 2004) and the Pargram project (Butt et al. 2002). HPSG and LFG need detailed linguistic knowledge. Beyond that, subcategorisation information can be used in information extraction (IE) applications, cf. (Surdeanu et al. 2003). Moreover, this information is necessary for linguistic, lexicographic, SLA and translation work. Our extraction and classification procedures are precision-oriented, which means that we focus on high accuracy of the extraction and classification results. High precision comes at the cost of completeness, which we compensate for by applying the extraction procedures to larger corpora.
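To illustrate the extraction side of such an approach, here is a minimal, hypothetical sketch (not the thesis's actual pipeline) that counts candidate predicates occurring with dass/ob/w-clause complementisers in a sentence-per-line corpus; the simple comma-plus-complementiser pattern and all names are illustrative assumptions:

```python
import re
from collections import Counter

# Illustrative assumption: a predicate candidate is the word immediately
# preceding a comma followed by a dass/ob/w complementiser.
CLAUSE_RE = re.compile(r"\b(\w+)\s*,\s*(dass|ob|wie|warum|wann|wer|was)\b",
                       re.IGNORECASE)

def extract_predicates(lines):
    """Map each (predicate, complementiser) pair to its corpus frequency."""
    counts = Counter()
    for line in lines:
        for pred, comp in CLAUSE_RE.findall(line):
            counts[(pred.lower(), comp.lower())] += 1
    return counts

sentences = [
    "Er bezweifelt, ob das stimmt.",
    "Sie sagte, dass sie kommt.",
    "Er fragt, ob sie kommt.",
]
for (pred, comp), n in extract_predicates(sentences).most_common():
    print(f"{pred}\t{comp}\t{n}")
```

A real system would of course work over parsed corpora rather than a surface pattern; the sketch only shows the shape of the frequency data such procedures classify.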
  • Item (Open Access)
    The perfect time span : on the present perfect in German, Swedish and English
    (2006) Rothstein, Björn Michael; Kamp, Hans (Prof. Dr. h.c. PhD)
This study proposes a discourse-based approach to the present perfect in German, Swedish and English. It is argued that the present perfect is best analysed with an Extended-Now approach: it introduces a perfect time span in which the event time expressed by the present perfect is contained. The present perfects in these languages differ with respect to the boundaries of the perfect time span. In English, the right boundary is identical to the point of speech; in Swedish it can be either at or after the moment of speech; and in German it can also be before the moment of speech. The left boundary is unspecified. The right boundary is set by context.
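The boundary conditions stated in the abstract can be rendered schematically. The notation (PTS for the perfect time span, LB/RB for its left and right boundaries, n for speech time, e for event time) is an assumption loosely following the Extended-Now literature, not the study's own formalism:

```latex
% e = event time, n = speech time, PTS = [LB, RB] the perfect time span.
% LB is unspecified; RB is fixed by context within these limits:
\begin{align*}
  \text{all three languages:} \quad & e \subseteq \mathrm{PTS} = [\mathrm{LB},\, \mathrm{RB}] \\
  \text{English:} \quad & \mathrm{RB} = n \\
  \text{Swedish:} \quad & \mathrm{RB} = n \ \lor\ \mathrm{RB} > n \\
  \text{German:}  \quad & \mathrm{RB} < n \ \lor\ \mathrm{RB} = n \ \lor\ \mathrm{RB} > n
\end{align*}
```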
  • Item (Open Access)
    Segmental factors in language proficiency : degree of velarization, coarticulatory resistance and vowel formant frequency distribution as a signature of talent
    (2011) Baumotte, Henrike; Dogil, Grzegorz (Prof. Dr.)
This PhD thesis proposes an explanation for why German native speakers of various proficiency levels, producing multiple English varieties, speak their L2 English with different degrees of foreign accent. Phonetic measurements were used to investigate the degree of velarization and of coarticulation or coarticulatory resistance in German and English, using non-word and natural-language stimuli. To assess the differences between the productions of proficient, average and less proficient speakers in German and English, the mean F2 and Fv values in /ə/ before /l/ and in /l/ were calculated, in order to compare the degree of velarization across /əlV/ non-word sequences. Proficient speakers produced lower formant frequencies for F2 and Fv in /ə/ than less proficient speakers, i.e. proficient speakers velarized more. For the comparisons with respect to coarticulation and coarticulatory resistance, the difference values for F2 and F2' of /ə/ in /əleɪ/ vs. /əlu:/, /əly/ vs. /əleɪ/ and /əly/ vs. /əlaɪ/ were computed. Across the whole series of measurements, there was an overwhelming trend for proficient speakers to be more coarticulatorily resistant, i.e. to velarize more, and to pronounce English vowel characteristics more precisely than less proficient speakers, while average speakers did not consistently behave as predicted, sometimes performing "worse" than less proficient speakers. Building on Díaz et al. (2008), who argued for pre-existing individual differences in phonetic discrimination ability that strongly influence the acquisition of a foreign sound system, it is claimed that foreign-language pronunciation ability derives from native phonetic abilities.
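As a minimal sketch of how a mean F2 value over a labelled /ə/ interval might be measured, the Praat-backed parselmouth library can be used; the file name, interval times and analysis settings below are hypothetical, and this is not the thesis's actual measurement script:

```python
import numpy as np
import parselmouth  # Praat bindings; pip install praat-parselmouth

def mean_f2(wav_path, t_start, t_end, step=0.005):
    """Mean second-formant frequency (Hz) over a labelled interval."""
    formant = parselmouth.Sound(wav_path).to_formant_burg(maximum_formant=5500.0)
    times = np.arange(t_start, t_end, step)
    values = [formant.get_value_at_time(2, t) for t in times]
    values = [v for v in values if not np.isnan(v)]  # drop undefined frames
    return float(np.mean(values)) if values else float("nan")

# Hypothetical schwa interval in an /əlV/ non-word recording:
print(mean_f2("speaker01_elu.wav", 0.12, 0.18))
```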
  • Item (Open Access)
    Computational modelling of coreference and bridging resolution
    (2019) Rösiger, Ina; Kuhn, Jonas (Prof. Dr.)
  • Item (Open Access)
    Modeling paths in knowledge graphs for context-aware prediction and explanation of facts
    (2019) Stadelmaier, Josua
Knowledge bases are an important resource for question answering systems and search engines but often suffer from incompleteness. This work considers the problem of knowledge base completion (KBC). In the context of natural language processing, knowledge bases comprise facts that can be formalized as triples of the form (entity 1, relation, entity 2). A common approach to the KBC problem is to learn representations for entities and relations that allow existing connections in the knowledge base to be generalized in order to predict the correctness of a triple that is not in the knowledge base. In this work, I propose the context path model, which is based on this approach. In contrast to existing KBC models, it also provides explanations for its predictions. For this purpose, it uses paths that capture the context of a given triple. The context path model can be applied on top of several existing KBC models. In a manual evaluation, I observe that most of the paths the model uses as explanations are meaningful and provide evidence for assessing the correctness of triples. I also show in an experiment that the performance of the context path model on a standard KBC task is close to that of a state-of-the-art model.
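For illustration, here is a toy triple-scoring sketch in the spirit of the standard embedding-based KBC baselines the abstract alludes to (a DistMult-style scorer); this is not the context path model itself, and all entities, relations and embeddings are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embeddings; in a real KBC model these are learned from the knowledge base.
entities = {e: rng.normal(size=DIM) for e in ["Stuttgart", "Germany", "Paris"]}
relations = {r: rng.normal(size=DIM) for r in ["located_in"]}

def score(e1, rel, e2):
    """DistMult-style plausibility score for the triple (e1, rel, e2)."""
    return float(np.sum(entities[e1] * relations[rel] * entities[e2]))

# Rank candidate tail entities for the query (Stuttgart, located_in, ?):
candidates = sorted(entities,
                    key=lambda e: score("Stuttgart", "located_in", e),
                    reverse=True)
print(candidates)
```

The context path model, per the abstract, would additionally inspect paths connecting entity 1 and entity 2 to justify such a score; that explanatory layer is what the sketch omits.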
  • Item (Open Access)
    Modeling the interface between morphology and syntax in data-driven dependency parsing
    (2016) Seeker, Wolfgang; Kuhn, Jonas (Prof. Dr.)
When people formulate sentences in a language, they follow a set of rules specific to that language that defines how words must be put together in order to express the intended meaning. These rules are called the grammar of the language. Languages have essentially two ways of encoding grammatical information: word order or word form. English primarily uses word order to encode different meanings, but many other languages change the form of the words themselves to express their grammatical function in the sentence. These languages are commonly subsumed under the term morphologically rich languages. Parsing is the automatic process of predicting the grammatical structure of a sentence. Since grammatical structure guides the way we understand sentences, parsing is a key component of computer programs that try to automatically understand what people say and write. This dissertation is about parsing, and specifically about parsing languages with a rich morphology, which encode grammatical information in the form of words. Today's parsing models were developed for English and achieve good results on that language. However, when applied to other languages, a significant drop in performance is usually observed. The standard model for parsing is a pipeline model that separates the parsing process into different steps; in particular, it separates the morphological analysis, i.e. the analysis of word forms, from the actual parsing step. This dissertation argues that this separation is one of the reasons for the performance drop of standard parsers when they are applied to languages other than English. An analysis is presented that exposes the connection between the morphological system of a language and the errors of a standard parsing model. In a second series of experiments, we show that knowledge about the syntactic structure of a sentence can support the prediction of morphological information. We then argue for an alternative approach that models morphological analysis and syntactic analysis jointly instead of separating them. We support this argumentation with empirical evidence by implementing two parsers that model the relationship between morphology and syntax in two different but complementary ways.
  • Item (Open Access)
The German boundary tones : categorical perception, perceptual magnets, and the perceptual reference space
    (2012) Schneider, Katrin; Dogil, Grzegorz (Prof. Dr.)
This thesis experimentally analyzes the perception of prosodic categories in German, using the two German boundary tones L% and H% postulated by German phonology. These two boundary tone categories were selected because they constitute the least disputed tonal contrast. In many languages, German among them, the contrast between the low (L%) and the high (H%) boundary tone corresponds to a contrast in sentence mode: the low boundary tone is interpreted as a statement and the high boundary tone as a question. For all experiments presented in this thesis it is hypothesized that the different perception of L% and H% as statement versus question can be attributed to a contrast between two prosodic categories, i.e. to Categorical Perception. The basis for this hypothesis is the observation that the sentence mode of a syntactically ambiguous utterance can only be determined by the height of its boundary tone. Assuming the existence of the two proposed boundary tone categories, two experimental designs are presented that can be used to confirm categories and to detect perceptual differences within a category or between categories: the test for Categorical Perception (CP) and the test for the Perceptual Magnet Effect (PME). Originally, both designs were developed to examine perceptual differences in the segmental domain, especially for the evaluation of phoneme categories. Categorical Perception is confirmed when the boundary between the two categories corresponds to the point at which the discrimination performance between two adjacent stimuli is best. If the Categorical Perception test is successful for two speech events, these two events are confirmed as categories of the respective language. A Perceptual Magnet Effect involves a warping of the perceptual space towards a prototype of the respective category; no such warping occurs towards a non-prototype of the same category. The result of the warping is a significantly lower discrimination performance around the prototype, i.e. the prototype is hard or impossible to discriminate from an adjacent stimulus. No such warping is found around a non-prototype, although the acoustic difference between a stimulus and the non-prototype is comparable to the acoustic difference between a stimulus and the prototype. For the analysis and interpretation of the experimental results, Signal Detection Theory (SDT) and Exemplar Theory are used. Signal Detection Theory postulates that, despite similar auditory abilities, subjects may differ in their perceptual results because of their individual response criterion. Exemplar Theory proposes that listeners store perceived instances of speech events, with rich phonetic detail, in exemplar clouds located in their perceptual space. During speech production, the speaker uses these clouds of similar exemplars to produce an instance of a speech event; thus, speech perception and production are inseparably connected. The more exemplars are stored, the more stable a speech category becomes, and only stable categories can develop a category center and a Perceptual Magnet Effect. In various studies, reaction times have been found to be a reliable indicator of the simplicity of a perceptual decision. Therefore, in the experiments presented in this thesis, reaction times were measured for each individual decision. The results support the known correlation: the simpler a perceptual decision, the lower the reaction time. To summarize, the results discussed in this thesis support the existence of prosodic categories in general, and of the high and the low boundary tone in German in particular. These two prosodic categories are used to differentiate between the sentence modes statement and question, but only in the case of syntactically ambiguous phrases. Furthermore, the results support the use of Exemplar Theory for speech data. The category of the low boundary tone seems to contain many more exemplars than the category of the high boundary tone, as the latter is produced, and thus perceived, less often than the former. This results in a clear Perceptual Magnet Effect for the L% category, since enough exemplars are stored to support the development of a category center, and only in the center of a category can the PME occur. For most listeners, the H% category contains only a few exemplars, which in turn inhibits the development of a Perceptual Magnet Effect there. The logged reaction times support the perceptual findings and confirm the hypothesis that reaction times correlate with the simplicity of a perceptual decision.
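As a pointer to how Signal Detection Theory enters such discrimination analyses, here is the textbook d′ computation from hit and false-alarm rates; the counts are hypothetical and this is not the thesis's exact analysis:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps both rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical discrimination counts near the L%/H% category boundary:
print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))
```

Separating sensitivity (d′) from the response criterion is exactly what lets SDT account for listeners with similar auditory abilities who nonetheless differ in their responses.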
  • Item (Open Access)
    Sub-lexical investigations: German particles, prefixes and prepositions
    (2013) Roßdeutscher, Antje
The papers investigate constructions with P(repositional) elements in German, aiming at a comprehensive theory of the syntax-semantics interface for the different verbal constructions in German, including verb plus prepositional phrase, (separable) particle verbs, and (inseparable) prefix verbs. The constructions are given syntactic representations following minimalist principles as known from Distributed Morphology (DM), according to which a single syntactic engine drives the formation of both words and phrases. Among the syntactic principles, the Split-P hypothesis plays a central role. A crucial feature of the approach is that the syntactic structures are used as input to the computation of semantic representations according to the principles of Discourse Representation Theory (DRT). Several challenges that present themselves for a compositional theory of word and phrase formation with P-elements in German are accounted for in the papers: the syntactic separability of verb-particle constructions vs. the non-separability of prefix verbs; the semantic restrictions on the P-elements that build constructions of the former and the latter type; and syntactic alternations with respect to the realisation of figure and ground arguments and the semantic basis of these alternations. A particular challenge is posed by the differences in the conceptual and aspectual contribution of the same prepositional root in different syntactic contexts.
  • Item (Open Access)
Automatic categorisation of authors with respect to pharmaceuticals on Twitter
(2016) Xu, Min
With Twitter's rapidly growing popularity, an ever wider range of topics is being discussed, which can also be observed with regard to the effects of pharmaceuticals. It is therefore of great interest to find out which social groups tend to discuss particular drugs on Twitter, and which drugs are discussed most. Text classification technology lends itself to categorising the large number of tweets. In this thesis, this is realised mainly with a Maximum Entropy classifier, which is used to identify the authors of the tweets. Since the Maximum Entropy model can comprehensively take into account a variety of relevant and irrelevant probabilistic evidence, the Maximum Entropy classifier achieves better multi-class classification results in this work than the naive Bayes classifier. The influence on the Maximum Entropy classifier's performance of different feature-selection methods, such as Information Gain & Mutual Information and the LDA topic model, and of different numbers of features is compared and analysed. The results show that Information Gain & Mutual Information and the LDA topic model are good practical approaches for identifying the features of short texts. With the Maximum Entropy classifier, an average test accuracy of 79.8% is achieved.
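As an illustration (not the thesis's implementation): multinomial logistic regression is the standard realisation of a Maximum Entropy classifier, and scikit-learn's mutual-information scorer approximates the Information Gain & Mutual Information feature-selection step; the toy tweets and author-group labels below are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented tweets and author-group labels, for illustration only.
tweets = ["ibuprofen hilft gegen kopfschmerzen",
          "neue studie zu antibiotika resistenzen",
          "aspirin nach dem sport genommen",
          "impfstoff studie zeigt gute wirksamkeit"]
labels = ["patient", "professional", "patient", "professional"]

# Bag-of-words features -> mutual-information feature selection ->
# logistic regression (a Maximum Entropy classifier).
model = make_pipeline(
    CountVectorizer(),
    SelectKBest(mutual_info_classif, k=10),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)
print(model.predict(["kopfschmerzen trotz aspirin"]))
```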