05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
273 results
Item Open Access: Automated composition of adaptive pervasive applications in heterogeneous environments (2012). Schuhmann, Stephan Andreas; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Distributed applications for Pervasive Computing represent a research area of high interest. Configuration processes are needed before application execution to find a composition of components that provides the required functionality. As dynamic pervasive environments and device failures may render arbitrary components and devices unavailable at any time, finding and maintaining such a composition is a nontrivial task. Many degrees of decentralization, and even completely centralized approaches, are possible in the calculation of valid configurations, spanning a wide spectrum of possible solutions. As configuration processes produce latencies that the application user notices as undesired waiting times, configurations have to be calculated as fast as possible. While completely distributed configuration is inevitable in infrastructure-less ad hoc scenarios, many realistic Pervasive Computing scenarios are heterogeneous environments, where the additional computation power of resource-rich devices can be utilized by centralized approaches. However, in strongly heterogeneous pervasive environments with several resource-rich and resource-weak devices, both centralized and decentralized approaches may lead to suboptimal configuration latencies: while the resource-weak devices may be bottlenecks for decentralized configuration, the centralized approach fails to exploit parallelism. Most projects in Pervasive Computing focus on only one specific type of environment. Either they concentrate on heterogeneous environments and rely on additional infrastructure devices, which makes them inapplicable in infrastructure-less environments, or they address homogeneous ad hoc environments and treat all involved devices as equals, which leads to suboptimal results when resource-rich devices are present, as their additional computation power is not exploited. Therefore, in this work we propose a comprehensive adaptive approach that particularly focuses on the efficient support of heterogeneous environments but is also applicable in infrastructure-less homogeneous scenarios. We provide multiple configuration schemes with different degrees of decentralization for distributed applications, optimized for specific scenarios. Our solution is adaptive in that the actual scheme is chosen based on the current system environment, and it calculates application compositions in a resource-aware, efficient manner. This ensures high efficiency even in dynamically changing environments. Beyond this, many typical pervasive environments contain a fixed set of applications and devices that are frequently used. In such scenarios, identical resources take part in subsequent configuration calculations, so the involved devices undergo a quite similar configuration process whenever an application is launched. However, starting the configuration from scratch every time not only consumes a lot of time but also increases the communication overhead and energy consumption of the involved devices. Therefore, our solution integrates the results from previous configurations to reduce the severity of the configuration problem in dynamic scenarios. We show in prototypical real-world evaluations, as well as by simulation and emulation, that our comprehensive approach provides efficient automated configuration across the complete spectrum of possible application scenarios. No related project has achieved this extensive functionality yet. Thus, our work makes a significant contribution towards seamless application configuration in Pervasive Computing.

Item Open Access: Interacting with large high-resolution display workplaces (2018). Lischke, Lars; Schmidt, Albrecht (Prof.)

Large visual spaces provide a unique opportunity to communicate large and complex pieces of information; hence, they have been used for hundreds of years for varied content including maps, public notifications and artwork. Understanding and evaluating complex information will become a fundamental part of any office work. Large high-resolution displays (LHRDs) have the potential to further enhance the traditional advantages of large visual spaces and combine them with modern computing technology, thus becoming an essential tool for understanding and communicating data in future office environments. For the successful deployment of LHRDs in office environments, well-suited interaction concepts are required. In this thesis, we build an understanding of how concepts for interaction with LHRDs in office environments could be designed. From the human-computer interaction (HCI) perspective, three aspects are fundamental: (1) the way humans perceive and react to large visual spaces is essential for interaction with content displayed on LHRDs; (2) LHRDs require adequate input techniques; (3) the actual content requires well-designed graphical user interfaces (GUIs) and suitable input techniques. Perception influences how users can perform input on LHRD setups, which sets boundaries for the design of GUIs for LHRDs. Furthermore, the input technique has to be reflected in the design of the GUI. To understand how humans perceive and react to large visual information on LHRDs, we focused on the influence of visual resolution and physical space. We show that increased visual resolution affects the perceived media quality and the perceived effort, and that humans can survey large visual spaces without being overwhelmed.
When the display is wider than 2 m, users perceive higher physical effort. When multiple users share an LHRD, they change their movement behavior depending on whether a task is collaborative or competitive. When building LHRDs, consideration must be given to the increased complexity of higher resolutions and physically large displays. Lower screen resolutions provide enough display quality to work efficiently, while larger physical spaces enable users to survey more content without being overwhelmed. To enhance user input on LHRDs for interacting with large pieces of information, we built working prototypes and analyzed their performance in controlled lab studies. We showed that eye-tracking-based manual and gaze input cascaded (MAGIC) pointing can enhance pointing to distant targets. MAGIC pointing is particularly beneficial when the interaction involves visual searches between pointing to targets. We contributed two gesture sets for mid-air interaction with window managers on LHRDs and found that gesture elicitation for an LHRD was not affected by legacy bias. We compared shared user input on an LHRD with personal tablets, which also functioned as private working spaces, to collaborative data exploration using a single shared input device for interacting with an LHRD. The results showed that input with personal tablets lowered the perceived workload. Finally, we showed that variable movement-resistance feedback enhanced one-dimensional data input when no visual input feedback was provided. We concluded that context-aware input techniques enhance the interaction with content displayed on an LHRD, and that it is essential to provide focus for the visual content and guidance for the user while performing input. To understand user expectations of working with LHRDs, we prototyped with potential users how an LHRD work environment could be designed, focusing on the physical screen alignment and the placement of content on the display.
Based on previous work, we implemented novel alignment techniques for window management on LHRDs and compared them in a user study. The results show that users prefer techniques that enhance the interaction without breaking well-known desktop GUI concepts. Finally, we provided an example of how an application for browsing scientific publications can benefit from extended display space. Overall, we show that GUIs for LHRDs should support the user more strongly than GUIs for smaller displays in arranging content meaningfully and in managing and understanding large data sets, without breaking well-known GUI metaphors. In conclusion, this thesis adopts a holistic approach to interaction with LHRDs in office environments. Based on enhanced knowledge about user perception of large visual spaces, we discuss novel input techniques for advanced user input on LHRDs. Furthermore, we present guidelines for designing future GUIs for LHRDs. Our work charts the design space of LHRD workplaces and identifies challenges and opportunities for the development of future office environments.

Item Open Access: Partnerübergreifende Geschäftsprozesse und ihre Realisierung in BPEL (2016). Kopp, Oliver; Leymann, Frank (Prof. Dr. Dr. h. c.)

This thesis deals with business processes that span the boundaries of organizations. Such business processes are called choreographies. The thesis presents the CREAM method, which shows how choreographies can be modeled. In contrast to choreographies, orchestrations denote executable business processes of a single organization that use services to achieve a business goal. A variant of the CREAM method allows a choreography to be obtained from an orchestration by splitting the orchestration. To transform the implicit orchestration-internal data dependencies into message exchanges, the explicit data flow of the orchestration is required. The Web Services Business Process Execution Language (BPEL) is a widely used language for modeling business processes. In BPEL, data flow is modeled implicitly, so a method is needed to determine the explicit data flow. This thesis presents such a method. To model a choreography, a choreography language is needed. To identify a suitable language, this thesis presents criteria for the evaluation of choreography languages and uses them to assess choreography languages from the web services domain. Since none of the considered languages fulfills all criteria, the language BPEL4Chor is introduced, which fulfills all of them. To reuse the well-defined execution semantics of BPEL, BPEL4Chor uses BPEL as the language describing the behavior of each participant in the choreography. Like BPEL, BPEL4Chor uses XML as its serialization format and does not specify its own graphical representation. The Business Process Modeling Notation (BPMN) is the de facto standard for graphically representing business processes. Therefore, this thesis extends BPMN so that all constructs available in BPEL4Chor can be modeled with BPMN.

Item Open Access: Visualization challenges in distributed heterogeneous computing environments (2015). Panagiotidis, Alexandros; Ertl, Thomas (Prof. Dr.)

Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life, such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more detail and higher resolution in visualizations.
For some years now, the prevailing trend for these large systems has been the use of additional processors, such as graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, such as higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches often entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users become more interested in resilience besides traditional aspects such as performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and for visualization in heterogeneous computing environments. The first approach hides the details of different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing, simplifying order-independent transparency for distributed visualization. Traditional methods for fault tolerance in high-performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and the tuning of volume visualization are evaluated. Challenges in dense graphs, such as visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis takes a broader look at the issues of distributed visualization on large displays and heterogeneous computing environments for the first time. While the presented approaches each solve individual challenges and are successfully employed in this context, their joint utility forms a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.

Item Open Access: German clause-embedding predicates : an extraction and classification approach (2010). Lapshinova-Koltunski, Ekaterina; Heid, Ulrich (Prof. Dr. phil. habil.)

This thesis describes a semi-automatic approach to the analysis of the subcategorisation properties of verbal, nominal and multiword predicates in German. We semi-automatically classify predicates according to their subcategorisation properties by extracting them from German corpora along with their complements.
In this work, we concentrate exclusively on sentential complements, such as dass, ob and w-clauses, although our methods can also be applied to other complement types. Our aim is not only to extract and classify predicates but also to compare the subcategorisation properties of morphologically related predicates, such as verbs and their nominalisations. It is usually assumed that the subcategorisation properties of nominalisations are taken over from their underlying verbs. However, our tests show that different types of relations exist between them. Thus, we review the subcategorisation properties of morphologically related words and analyse their correspondences and differences. For this purpose, we elaborate a set of semi-automatic procedures that allow us not only to classify extracted units according to their subcategorisation properties, but also to compare the properties of verbs and their nominalisations, which occur both freely in corpora and within multiword expressions. The lexical data are created to serve symbolic NLP, especially large symbolic grammars for deep processing such as HPSG or LFG, cf. work in the LinGO project (Copestake et al. 2004) and the Pargram project (Butt et al. 2002). HPSG and LFG need detailed linguistic knowledge. Besides that, subcategorisation information can be applied in information extraction (IE) applications, cf. (Surdeanu et al. 2003). Moreover, this information is necessary for linguistic, lexicographic, SLA and translation work. Our extraction and classification procedures are precision-oriented, which means that we focus on the high accuracy of our extraction and classification results. This high precision comes at the cost of completeness, which is compensated for by applying the extraction procedures to larger corpora.

Item Open Access: Segmental factors in language proficiency : degree of velarization, coarticulatory resistance and vowel formant frequency distribution as a signature of talent (2011). Baumotte, Henrike; Dogil, Grzegorz (Prof. Dr.)

This PhD thesis proposes a reason why German native speakers of various proficiency levels, and across multiple English varieties, produce their L2 English with different degrees of foreign accent. The author used phonetic measurements to investigate the degree of velarization and of coarticulation or coarticulatory resistance, respectively, in German and English, using non-words and natural-language stimuli. To get an impression of the differences between the productions of proficient, average and less proficient speakers in German and English, the mean F2 and Fv values in /ə/ before /l/ and in /l/ were calculated, in order to then compare the degrees of velarization in /əlV/ non-word sequences with each other. Proficient speakers showed lower formant frequencies for F2 and Fv in /ə/ than less proficient speakers, i.e. proficient speakers velarized more than less proficient speakers. For the comparisons with respect to coarticulation or coarticulatory resistance, the difference values for F2 and F2' were computed from /ə/ in /əleɪ/ vs. /əlu:/, /əly/ vs. /əleɪ/ and /əly/ vs. /əlaɪ/. Across the whole series of measurements, there was an overwhelming trend for proficient speakers to be more coarticulatorily resistant, i.e. to velarize more, and to pronounce English vowel characteristics more precisely than less proficient speakers, while average speakers did not consistently behave according to prediction, sometimes performing "worse" than less proficient speakers. On the basis of Díaz et al. (2008), who argued for pre-existing individual differences in phonetic discrimination ability that strongly influence the acquisition of a foreign sound system, it is claimed that foreign-language pronunciation ability derives from native phonetic abilities.

Item Open Access: Efficient fault tolerance for selected scientific computing algorithms on heterogeneous and approximate computer architectures (2018). Schöll, Alexander; Wunderlich, Hans-Joachim (Prof. Dr.)

Scientific computing and simulation technology play an essential role in solving central challenges in science and engineering. The high computational power of heterogeneous computer architectures makes it possible to accelerate applications in these domains, which are often dominated by compute-intensive mathematical tasks. Scientific, economic and political decision processes increasingly rely on such applications and therefore create a strong demand for correct and trustworthy results. However, continued semiconductor technology scaling increasingly poses serious threats to the reliability and efficiency of upcoming devices. Different reliability threats can cause crashes or erroneous results without indication. Software-based fault tolerance techniques can protect algorithmic tasks by adding appropriate operations to detect and correct errors at runtime. Major challenges arise from the runtime overhead of such operations and from rounding errors in floating-point arithmetic that can cause false positives. The end of Dennard scaling poses central challenges for further increasing compute efficiency between semiconductor technology generations. Approximate computing exploits the inherent error resilience of different applications to achieve efficiency gains with respect to, for instance, power, energy, and execution time. However, scientific applications often impose strict accuracy requirements, which demand careful use of approximation techniques. This thesis provides fault tolerance and approximate computing methods that enable the reliable and efficient execution of linear algebra operations and Conjugate Gradient solvers on heterogeneous and approximate computer architectures. The presented fault tolerance techniques detect and correct errors at runtime with low runtime overhead and high error coverage.
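The abstract does not spell out the thesis's specific techniques; as a rough illustration of how checksums can protect a linear algebra operation against silent errors, here is a minimal Python sketch of classic Huang/Abraham-style algorithm-based fault tolerance (an assumed, standard technique, not the thesis's method; all names are illustrative, and the rounding tolerance reflects the false-positive issue mentioned above):

```python
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    """Compute C = A @ B with Huang/Abraham-style checksums.

    A is augmented with a checksum row (its column sums), B with a
    checksum column (its row sums). After the multiply, the row and
    column sums of C must match the checksum row/column up to a
    rounding tolerance; the tolerance avoids false positives caused
    by floating-point rounding.
    """
    Ac = np.vstack([A, A.sum(axis=0)])                  # extra checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # extra checksum column
    C_full = Ac @ Br
    C = C_full[:-1, :-1]
    row_ok = np.allclose(C.sum(axis=0), C_full[-1, :-1], atol=tol)
    col_ok = np.allclose(C.sum(axis=1), C_full[:-1, -1], atol=tol)
    # A single corrupted element C[i, j] would make exactly one row
    # check and one column check fail, which localizes the error.
    return C, row_ok and col_ok

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C, ok = abft_matmul(A, B)
```

The checksum relations hold because the checksum row of `Ac @ Br` equals the column sums of `C` and the checksum column equals its row sums, so any single silent corruption breaks both one row check and one column check.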
At the same time, these fault tolerance techniques are exploited to enable the execution of Conjugate Gradient solvers on approximate hardware by monitoring the underlying error resilience while adjusting the approximation error accordingly. In addition, parameter evaluation and estimation methods are presented that determine the computational efficiency of application executions on approximate hardware. An extensive experimental evaluation shows the efficiency and efficacy of the presented methods with respect to the runtime overhead of detecting and correcting errors, the error coverage, and the energy reduction achieved in executing the Conjugate Gradient solvers on approximate hardware.

Item Open Access: Computational modelling of coreference and bridging resolution (2019). Rösiger, Ina; Kuhn, Jonas (Prof. Dr.)

Item Open Access: Causal models for decision making via integrative inference (2017). Geiger, Philipp; Toussaint, Marc (Prof. Dr.)

Understanding causes and effects is important in many parts of life, especially when decisions have to be made. The systematic inference of causal models remains a challenge, though. In this thesis, we study (1) "approximative" and "integrative" inference of causal models and (2) causal models as a basis for decision making in complex systems. By "integrative" we mean including and combining settings and knowledge beyond the outcome of perfect randomization or pure observation for causal inference, while "approximative" means that the causal model is only constrained but not uniquely identified. As a basis for the study of topics (1) and (2), which are closely related, we first introduce causal models, discuss the meaning of causation and embed the notion of causation into a broader context of other fundamental concepts. We then begin our main investigation with a focus on topic (1): we consider the problem of causal inference from a non-experimental multivariate time series X; that is, we integrate temporal knowledge.
We take the following approach: we assume that X, together with some potential hidden common cause ("confounder") Z, forms a first-order vector autoregressive (VAR) process with structural transition matrix A. We then examine under which conditions the most important parts of A are identifiable or approximately identifiable from X alone, in spite of the effects of Z. Essentially, sufficient conditions are (a) non-Gaussian, independent noise, or (b) no influence from X to Z. We present two estimation algorithms that are tailored towards conditions (a) and (b), respectively, and evaluate them on synthetic and real-world data. We also discuss how to check the model using X. Still focusing on topic (1) but already including elements of topic (2), we consider the problem of approximate inference of the causal effect of a variable X on a variable Y in i.i.d. settings "between" randomized experiments and observational studies. Our approach is to first derive approximations (upper/lower bounds) on the causal effect as a function of bounds on (hidden) confounding. We then discuss several scenarios in which knowledge or beliefs can be integrated that in fact imply bounds on confounding. One example concerns decision making in advertising, where knowledge of partial compliance with guidelines can be integrated. Then, concentrating on topic (2), we study decision making problems that arise in cloud computing, a computing paradigm and business model that involves complex technical and economic systems and interactions. More specifically, we consider the following two problems: debugging and control of computing systems with the help of sandbox experiments, and prediction of the cost of "spot" resources for the decision making of cloud clients.
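As a toy illustration of why the hidden confounder Z matters in this setting, the following Python sketch simulates a scalar VAR(1) process under condition (b) (Z evolves on its own; X does not influence Z) and shows that naive least-squares autoregression on X alone mis-estimates the X-to-X coefficient. All parameter values are invented for illustration; this is not one of the thesis's estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20000

# Joint VAR(1) over (X, Z), where Z is hidden at estimation time:
#   Z_t = Azz * Z_{t-1} + noise          (no influence from X to Z)
#   X_t = Axx * X_{t-1} + Axz * Z_{t-1} + noise
Axx, Axz, Azz = 0.5, 0.8, 0.9
X = np.zeros(T)
Z = np.zeros(T)
for t in range(1, T):
    Z[t] = Azz * Z[t - 1] + rng.standard_normal()                 # hidden confounder
    X[t] = Axx * X[t - 1] + Axz * Z[t - 1] + rng.standard_normal()

# Naive least squares of X_t on X_{t-1} ignores Z; the strong
# autocorrelation of Z leaks into X and biases the estimate of Axx.
naive_Axx = (X[1:] @ X[:-1]) / (X[:-1] @ X[:-1])
print(f"true Axx = {Axx}, naive estimate = {naive_Axx:.2f}")
```

With these illustrative parameters the naive estimate lands well above the true value 0.5, which is exactly the kind of effect the abstract's identifiability conditions are meant to overcome.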
We first establish two theoretical results on approximate counterfactuals and approximate integration of causal knowledge, which we then apply to the two problems in toy scenarios.

Item Open Access: Effiziente Synthese konsistenter Graphen und ihre Anwendung in der Lokalisierung akustischer Quellen (2015). Kreißig, Martin; Yang, Bin (Prof. Dr.-Ing.)

This thesis takes a closer look at the problem of simultaneous acoustic multi-source localization in reverberant environments and uses it as an example to demonstrate and analyze the application of the synthesis of consistent graphs. The fundamentals of acoustic localization are introduced and different approaches are presented. In particular, localization methods based on time differences of arrival are considered, which face the problem of the ambiguous assignment of time differences to sources. Starting from this concrete localization application, the problem is abstracted to a graph-theoretic one. To this end, the fundamentals of graph theory and well-known algorithms such as depth-first and breadth-first search, both of which find a spanning tree, are introduced. The spanning tree is needed to determine the set of fundamental cycles. Consistent graphs are synthesized by merging consistent fundamental cycles. Two approaches are distinguished: the synthesis of fully consistent graphs, which assign a consistent edge weight to every edge of the input graph, and the synthesis of partially consistent graphs, which contain only a subset of the edges. Both approaches are based on the consistent fundamental cycles, i.e. those that satisfy the zero-sum condition. The synthesis of fully consistent graphs is realized by a backtracking procedure. The synthesis of partially consistent graphs is derived from the compatibility-conflict graph, a new type of graph that describes the three possible relations between consistent fundamental cycles: 1) two consistent cycles share no edges, 2) two consistent cycles share edges and have identical edge weights (compatible), and 3) two consistent cycles share edges and have different edge weights (conflict). To synthesize all possible partially consistent graphs, the new algorithm CCGsearch is introduced and proven to be complete. The computational complexity of both synthesis procedures is derived analytically and verified by simulations.
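As a rough illustration of two building blocks named in the abstract above, a spanning tree yielding fundamental cycles and the zero-sum consistency test, here is a minimal Python sketch. It is not the thesis's CCGsearch or backtracking synthesis; the graph, weights and names are purely illustrative, with antisymmetric edge weights standing in for TDOA measurements between sensor pairs:

```python
from collections import deque

def fundamental_cycles(nodes, edges):
    """Fundamental cycles of an undirected graph: build a BFS spanning
    tree, then close one cycle per non-tree edge."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = {nodes[0]: None}
    queue = deque([nodes[0]])
    while queue:                      # breadth-first spanning tree
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}

    def root_path(x):                 # node -> ... -> root of the tree
        path = [x]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:
            pu, pv = root_path(u), root_path(v)
            common = set(pu) & set(pv)
            iu = next(i for i, x in enumerate(pu) if x in common)
            iv = next(i for i, x in enumerate(pv) if x in common)
            # u .. lowest common ancestor (inclusive), then back down to v
            cycles.append(pu[:iu + 1] + pv[:iv][::-1])
    return cycles

def cycle_sum(cycle, w):
    """Signed edge-weight sum around a cycle; the zero-sum condition
    holds when this is (numerically) zero. Weights are treated as
    antisymmetric: w[(b, a)] = -w[(a, b)]."""
    total = 0.0
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        total += w[(a, b)] if (a, b) in w else -w[(b, a)]
    return total

# Tiny example: a triangle with TDOA-like weights w[(i, j)].
nodes = ["a", "b", "c"]
edges = [("a", "b"), ("a", "c"), ("b", "c")]
w = {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 3.0}  # consistent: 1 + 2 = 3
cycles = fundamental_cycles(nodes, edges)
consistent = [c for c in cycles if abs(cycle_sum(c, w)) < 1e-9]
```

Changing `w[("a", "c")]` to any value other than 3.0 makes the single fundamental cycle of the triangle violate the zero-sum condition, which is how inconsistent TDOA assignments are rejected.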