12 Sonderforschungs- und Transferbereiche (Collaborative Research Centres and Transfer Units)

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/13

  • The RefLex scheme - annotation guidelines (Open Access)
    (Stuttgart : Universität Stuttgart, SFB, 2017) Riester, Arndt; Baumann, Stefan
    The purpose of the RefLex annotation scheme (Baumann and Riester 2012) is the two-dimensional analysis of textual or spoken corpus data with regard to referential information status (including coreference and bridging) as well as lexical information status (semantic relations). We provide some linguistic-philosophical background followed by detailed guidelines, which can be used in combination with various annotation tools.
  • Ontology and argument structure in nominalizations (Open Access)
    (2013) Pross, Tillmann; Roßdeutscher, Antje (ed.)
    Based on data from German -ung nominalizations, I argue that selection restriction tests are not suitable as linguistic tools for ontological disambiguation. Consequently, I question the significance of ontology as a starting point for linguistic theorizing. Instead, I argue for an underspecified account of the ontology of nominalizations, in which disambiguation loses its central role in the commerce with ambiguity.
  • A clustering approach to automatic verb classification incorporating selectional preferences: model, implementation, and user manual (Open Access)
    (2010) Schulte im Walde, Sabine; Schmid, Helmut; Wagner, Wiebke; Hying, Christian; Scheible, Christian
    This report presents two variations of an innovative, complex approach to semantic verb classes that relies on selectional preferences as verb properties. The underlying linguistic assumption for this verb class model is that verbs which agree on their selectional preferences belong to a common semantic class. The model is implemented as a soft-clustering approach in order to capture the polysemy of the verbs. The training procedure uses the Expectation-Maximisation (EM) algorithm (Baum, 1972) to iteratively improve the probabilistic parameters of the model, and applies the Minimum Description Length (MDL) principle (Rissanen, 1978) to induce WordNet-based selectional preferences for arguments within subcategorisation frames. One variation of the MDL principle replicates a standard MDL approach by Li and Abe (1998); the other presents an improved pruning strategy that outperforms the standard implementation considerably. Our model is potentially useful for lexical induction (e.g., verb senses, subcategorisation and selectional preferences, collocations, and verb alternations) and for NLP applications in sparse data situations. We demonstrate the usefulness of the model through a standard evaluation (pseudo-word disambiguation) and three applications (selectional preference induction, verb sense disambiguation, and semi-supervised sense labelling).
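
As a rough illustration of the soft-clustering idea described in the last abstract, the Python sketch below runs EM on a toy multinomial mixture over verb-object counts. The verbs, nouns, counts, and the simple mixture form are invented for demonstration; they are not the report's actual model, which works over subcategorisation frames and induces WordNet-based selectional preferences via MDL.

# Illustrative only: toy EM soft-clustering of verbs by the head nouns seen in
# their object slot. All data and the multinomial-mixture form are assumptions
# made for demonstration, not the WordNet/MDL model described in the report.
import numpy as np

rng = np.random.default_rng(0)

verbs = ["eat", "drink", "devour", "say", "claim"]
nouns = ["apple", "soup", "water", "story", "truth"]

# counts[v, n] = how often noun n occurred as object head of verb v (invented)
counts = np.array([
    [10, 4,  1, 0, 0],   # eat
    [ 0, 6, 12, 0, 0],   # drink
    [ 8, 3,  0, 0, 0],   # devour
    [ 0, 0,  0, 9, 5],   # say
    [ 0, 0,  0, 4, 8],   # claim
], dtype=float)

K = 2                                   # number of semantic verb classes
V, N = counts.shape
pi = np.full(K, 1.0 / K)                # class priors p(c)
theta = rng.dirichlet(np.ones(N), K)    # per-class noun distributions p(n | c)

for _ in range(50):
    # E-step: responsibilities p(c | verb) under the current parameters
    log_post = np.log(pi) + counts @ np.log(theta).T    # shape (V, K)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)

    # M-step: re-estimate priors and per-class noun distributions (smoothed)
    pi = post.mean(axis=0)
    theta = post.T @ counts + 1e-3
    theta /= theta.sum(axis=1, keepdims=True)

for v, verb in enumerate(verbs):
    print(verb, np.round(post[v], 2))   # soft class membership of each verb

The resulting memberships p(c | verb) are what "soft clustering" refers to in the abstract: each verb can belong to several classes with graded probability, which is how such a model accommodates polysemous verbs.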