Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results (317 results)
Item Open Access: Automated composition of adaptive pervasive applications in heterogeneous environments (2012). Schuhmann, Stephan Andreas; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Distributed applications for Pervasive Computing are a research area of high interest. Before an application can be executed, a configuration process has to find a composition of components that provides the required functionality. As dynamic pervasive environments and device failures can render arbitrary components and devices unavailable at any time, finding and maintaining such a composition is a nontrivial task. Many degrees of decentralization, up to completely centralized approaches, are possible when calculating valid configurations, spanning a wide spectrum of solutions. Because configuration processes introduce latencies that the application user notices as undesired waiting times, configurations have to be calculated as fast as possible. While completely distributed configuration is inevitable in infrastructure-less ad hoc scenarios, many realistic Pervasive Computing settings are heterogeneous environments in which centralized approaches can exploit the additional computation power of resource-rich devices. However, in strongly heterogeneous pervasive environments with several resource-rich and resource-weak devices, both centralized and decentralized approaches may lead to suboptimal configuration latencies: resource-weak devices can become bottlenecks for decentralized configuration, while a centralized approach fails to exploit parallelism. Most projects in Pervasive Computing focus on only one specific type of environment: either they concentrate on heterogeneous environments and rely on additional infrastructure devices, which makes them inapplicable in infrastructure-less environments, or they address homogeneous ad hoc environments and treat all involved devices as equal, which leads to suboptimal results when resource-rich devices are present, as their additional computation power is not exploited. Therefore, this work proposes a comprehensive adaptive approach that particularly targets the efficient support of heterogeneous environments but is also applicable in infrastructure-less homogeneous scenarios. We provide multiple configuration schemes with different degrees of decentralization for distributed applications, each optimized for specific scenarios. Our solution is adaptive in that the actual scheme is chosen based on the current system environment, and it calculates application compositions in a resource-aware, efficient manner. This ensures high efficiency even in dynamically changing environments. Beyond this, many typical pervasive environments contain a fixed set of applications and devices that are used frequently. In such scenarios, identical resources take part in subsequent configuration calculations, so the involved devices undergo a quite similar configuration process whenever an application is launched. Starting the configuration from scratch every time not only consumes time but also increases the communication overhead and energy consumption of the involved devices. Therefore, our solution integrates the results of previous configurations to reduce the severity of the configuration problem in dynamic scenarios.
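
To make the scheme-selection idea concrete, the following minimal Python sketch chooses among hypothetical configuration schemes based on the devices currently present; the device model, the threshold, and the scheme names are illustrative assumptions, not the algorithm used in the thesis.

    # Hypothetical sketch: pick a configuration scheme from the devices
    # currently present in the environment (not the thesis's actual algorithm).
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        cpu_score: float   # abstract measure of computation power

    def choose_scheme(devices, rich_threshold=10.0):
        rich = [d for d in devices if d.cpu_score >= rich_threshold]
        weak = [d for d in devices if d.cpu_score < rich_threshold]
        if not rich:
            # homogeneous ad hoc environment: no resource-rich device available
            return "fully-decentralized"
        if not weak or len(rich) == 1:
            # one strong device can compute the whole composition centrally
            return "centralized-on-" + rich[0].name
        # several strong devices: distribute the calculation among them only,
        # keeping resource-weak devices out of the configuration process
        return "hybrid-on-resource-rich-devices"

    print(choose_scheme([Device("phone", 2.0), Device("server", 50.0)]))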

We demonstrate in prototypical real-world evaluations, as well as through simulation and emulation, that our comprehensive approach provides efficient automated configuration across the complete spectrum of possible application scenarios. This extensive functionality has not yet been achieved by related projects. Our work thus makes a significant contribution towards seamless application configuration in Pervasive Computing.

Item Open Access: Interacting with large high-resolution display workplaces (2018). Lischke, Lars; Schmidt, Albrecht (Prof.)

Large visual spaces provide a unique opportunity to communicate large and complex pieces of information; hence, they have been used for hundreds of years for varied content including maps, public notifications and artwork. Understanding and evaluating complex information will become a fundamental part of any office work. Large high-resolution displays (LHRDs) have the potential to further enhance the traditional advantages of large visual spaces and combine them with modern computing technology, thus becoming an essential tool for understanding and communicating data in future office environments. For a successful deployment of LHRDs in office environments, well-suited interaction concepts are required. In this thesis, we build an understanding of how concepts for interaction with LHRDs in office environments can be designed. From the human-computer interaction (HCI) perspective, three aspects are fundamental: (1) the way humans perceive and react to large visual spaces is essential for interaction with content displayed on LHRDs; (2) LHRDs require adequate input techniques; and (3) the actual content requires well-designed graphical user interfaces (GUIs) and suitable input techniques. Perception influences how users can perform input on LHRD setups, which sets boundaries for the design of GUIs for LHRDs; furthermore, the input technique has to be reflected in the design of the GUI. To understand how humans perceive and react to large visual information on LHRDs, we focused on the influence of visual resolution and physical space. We show that increased visual resolution affects the perceived media quality and the perceived effort, and that humans can overview large visual spaces without being overwhelmed. When the display is wider than 2 m, users perceive higher physical effort. When multiple users share an LHRD, they change their movement behavior depending on whether a task is collaborative or competitive. When building LHRDs, consideration must be given to the increased complexity of higher resolutions and physically large displays: lower screen resolutions already provide enough display quality to work efficiently, while larger physical spaces enable users to overview more content without being overwhelmed. To enhance user input on LHRDs for interacting with large pieces of information, we built working prototypes and analyzed their performance in controlled lab studies. We showed that eye-tracking-based manual and gaze input cascaded (MAGIC) pointing can enhance pointing to distant targets; MAGIC pointing is particularly beneficial when the interaction involves visual searches between pointing to targets. We contributed two gesture sets for mid-air interaction with window managers on LHRDs and found that gesture elicitation for an LHRD was not affected by legacy bias.
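
The following minimal Python sketch illustrates the general MAGIC-pointing idea referenced above (warp the cursor towards the gaze point when manual movement starts, then refine manually); the coordinate handling and the warp threshold are illustrative assumptions and not taken from the thesis prototypes.

    # Hypothetical sketch of MAGIC-style pointing: when manual input starts far
    # from the gaze position, the cursor is warped near the gaze point, and the
    # pointing device is only used for the final, fine-grained correction.
    def magic_pointing(gaze_pos, cursor_pos, mouse_delta, warp_threshold=200.0):
        """Return the new cursor position for one input event.

        gaze_pos, cursor_pos: (x, y) tuples in screen pixels
        mouse_delta: (dx, dy) relative motion reported by the pointing device
        warp_threshold: distance in pixels above which the cursor warps to the gaze
        """
        gx, gy = gaze_pos
        cx, cy = cursor_pos
        dx, dy = mouse_delta
        moved = dx != 0 or dy != 0
        far_from_gaze = ((gx - cx) ** 2 + (gy - cy) ** 2) ** 0.5 > warp_threshold
        if moved and far_from_gaze:
            # coarse phase: jump close to where the user is already looking
            cx, cy = gx, gy
        # fine phase: normal relative movement of the pointing device
        return (cx + dx, cy + dy)

    print(magic_pointing(gaze_pos=(1800, 400), cursor_pos=(100, 900), mouse_delta=(2, -1)))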

We compared shared user input on an LHRD complemented by personal tablets, which also functioned as private working spaces, with collaborative data exploration using a single shared input device for interacting with the LHRD. The results showed that input with personal tablets lowered the perceived workload. Finally, we showed that variable movement-resistance feedback enhanced one-dimensional data input when no visual input feedback was provided. We concluded that context-aware input techniques enhance the interaction with content displayed on an LHRD, so it is essential to provide focus for the visual content and guidance for the user while performing input. To understand user expectations of working with LHRDs, we prototyped with potential users how an LHRD work environment could be designed, focusing on the physical screen alignment and the placement of content on the display. Based on previous work, we implemented novel alignment techniques for window management on LHRDs and compared them in a user study. The results show that users prefer techniques that enhance the interaction without breaking well-known desktop GUI concepts. Finally, we provide an example of how an application for browsing scientific publications can benefit from extended display space. Overall, we show that GUIs for LHRDs should support the user more strongly than GUIs for smaller displays in arranging content meaningfully and in managing and understanding large data sets, without breaking well-known GUI metaphors. In conclusion, this thesis takes a holistic approach to interaction with LHRDs in office environments. Based on enhanced knowledge about user perception of large visual spaces, we discuss novel techniques for advanced user input on LHRDs. Furthermore, we present guidelines for designing future GUIs for LHRDs. Our work maps out the design space of LHRD workplaces and identifies challenges and opportunities for the development of future office environments.

Item Open Access: Partnerübergreifende Geschäftsprozesse und ihre Realisierung in BPEL (2016). Kopp, Oliver; Leymann, Frank (Prof. Dr. Dr. h. c.)

This thesis deals with business processes that span the boundaries of organizations. Such business processes are called choreographies. The thesis introduces the CREAM method, which shows how choreographies can be modeled. In contrast to choreographies, orchestrations denote executable business processes of a single organization that use services to reach a business goal. A variant of the CREAM method makes it possible to obtain a choreography from an orchestration by splitting the orchestration. To transform the implicit orchestration-internal data dependencies into message exchanges, the explicit data flow of the orchestration is required. The Web Services Business Process Execution Language (BPEL) is a widely used language for modeling business processes. In BPEL, data flow is modeled implicitly, so a method is needed that determines the explicit data flow; this thesis presents such a method. Modeling a choreography requires a choreography language. To identify a suitable language, this thesis introduces criteria for evaluating choreography languages and uses them to assess choreography languages in the web service domain. Since none of the considered languages fulfills all criteria, the language BPEL4Chor is introduced, which fulfills all of them.
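
To illustrate what determining the explicit data flow of a BPEL process can mean in the simplest case, the following Python sketch derives coarse def-use edges from the variable reads and writes of a few BPEL activity types; it ignores control flow, scopes, and links, and is an illustrative assumption rather than the analysis developed in the thesis.

    # Hypothetical sketch: derive coarse def-use ("data flow") edges from a BPEL
    # process by scanning which activities write and read which variables.
    import xml.etree.ElementTree as ET

    def local(tag):
        return tag.split('}')[-1]          # strip the XML namespace

    def data_flow_edges(bpel_xml):
        root = ET.fromstring(bpel_xml)
        last_writer = {}                   # variable name -> last writing activity
        edges = []                         # (writer, reader, variable)
        for elem in root.iter():
            tag = local(elem.tag)
            name = elem.get('name', tag)
            reads, writes = [], []
            if tag == 'receive' and elem.get('variable'):
                writes.append(elem.get('variable'))
            elif tag == 'reply' and elem.get('variable'):
                reads.append(elem.get('variable'))
            elif tag == 'invoke':
                if elem.get('inputVariable'):
                    reads.append(elem.get('inputVariable'))
                if elem.get('outputVariable'):
                    writes.append(elem.get('outputVariable'))
            elif tag == 'copy':
                for child in elem:
                    if local(child.tag) == 'from' and child.get('variable'):
                        reads.append(child.get('variable'))
                    if local(child.tag) == 'to' and child.get('variable'):
                        writes.append(child.get('variable'))
            for var in reads:
                if var in last_writer:
                    edges.append((last_writer[var], name, var))
            for var in writes:
                last_writer[var] = name
        return edges

    # Toy process fragment (not a complete, valid BPEL process definition).
    example = """
    <process xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <receive name="ReceiveOrder" variable="order"/>
      <invoke name="CheckStock" inputVariable="order" outputVariable="stock"/>
      <reply name="SendAnswer" variable="stock"/>
    </process>"""
    print(data_flow_edges(example))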

To reuse the well-defined execution semantics of BPEL, BPEL4Chor uses BPEL as the language for describing the behavior of each participant in the choreography. Like BPEL, BPEL4Chor uses XML as its serialization format and does not specify a graphical representation of its own. The Business Process Modeling Notation (BPMN) is the de facto standard for representing business processes graphically. Therefore, this thesis extends BPMN so that all constructs available in BPEL4Chor can be modeled with BPMN.

Item Open Access: Visualization challenges in distributed heterogeneous computing environments (2015). Panagiotidis, Alexandros; Ertl, Thomas (Prof. Dr.)

Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more detail and higher resolution in visualizations. For some years now, the prevailing trend for these large systems has been the use of additional processors such as graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, such as higher performance and increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed through abstraction, but existing approaches frequently entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users are becoming more interested in resilience in addition to traditional aspects such as performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and visualization in heterogeneous computing environments. The first approach hides the details of different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing and simplifying order-independent transparency for distributed visualization. Traditional methods for fault tolerance in high performance computing systems are discussed in the context of distributed visualization.
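
The following simplified, CPU-side Python sketch illustrates the per-pixel linked-list idea mentioned above; real implementations build the lists on the GPU in a fragment shader with an atomic counter, and the data layout here is an illustrative assumption rather than the framework described in the thesis.

    # Simplified illustration of per-pixel linked lists for order-independent
    # transparency: fragments are prepended to a per-pixel list and resolved
    # later by sorting and front-to-back compositing.
    class PerPixelLinkedLists:
        def __init__(self, width, height):
            self.head = [[-1] * width for _ in range(height)]  # -1 = empty list
            self.nodes = []   # each node: (RGBA color, depth, index of next node)

        def insert(self, x, y, color, depth):
            # prepend a fragment to the list of pixel (x, y)
            self.nodes.append((color, depth, self.head[y][x]))
            self.head[y][x] = len(self.nodes) - 1

        def resolve(self, x, y, background=(0.0, 0.0, 0.0)):
            # collect the fragments of this pixel and sort them front to back
            frags, idx = [], self.head[y][x]
            while idx != -1:
                color, depth, idx = self.nodes[idx]
                frags.append((depth, color))
            frags.sort(key=lambda f: f[0])
            # front-to-back "under" compositing
            out, alpha = [0.0, 0.0, 0.0], 0.0
            for _, (r, g, b, a) in frags:
                w = (1.0 - alpha) * a
                out = [out[i] + w * c for i, c in enumerate((r, g, b))]
                alpha += w
            return tuple(out[i] + (1.0 - alpha) * background[i] for i in range(3))

    ppll = PerPixelLinkedLists(4, 4)
    ppll.insert(1, 1, (1.0, 0.0, 0.0, 0.5), depth=0.3)   # red fragment in front
    ppll.insert(1, 1, (0.0, 0.0, 1.0, 0.5), depth=0.7)   # blue fragment behind
    print(ppll.resolve(1, 1))                            # (0.5, 0.0, 0.25)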

On this basis, strategies for fault-tolerant distributed visualization are derived and organized into a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and tuning of volume visualization are evaluated. Challenges in dense graphs, such as visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis is the first to take a broader look at the issues of distributed visualization on large displays and heterogeneous computing environments. While the presented approaches each solve individual challenges and are successfully employed in this context, their joint utility forms a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.

Item Open Access: Spatio-temporal and immersive visual analytics for advanced manufacturing (2019). Herr, Dominik; Ertl, Thomas (Prof. Dr.)

The increasing amount of digitally available information in the manufacturing domain is accompanied by a demand to use these data to increase the efficiency of a product's overall design, production, and maintenance steps. This idea, often understood as part of Industry 4.0, requires the integration of information technologies into traditional manufacturing craftsmanship. Despite an increasing amount of automation in the production domain, human creativity is still essential when designing new products. Further, the cognitive ability of skilled workers to comprehend complex situations and solve issues by adapting solutions of similar problems makes them indispensable. Nowadays, customers demand highly customizable products. Therefore, modern factories need to be highly flexible regarding the lot size and adaptable regarding the produced goods, resulting in increasingly complex processes. One of the major challenges in the manufacturing domain is to optimize the interplay of human expert knowledge and experience with data analysis algorithms. Human experts can quickly comprehend previously unknown patterns and transfer their knowledge and gained experience to solve new issues. In contrast, data analysis algorithms can process tasks very efficiently at the cost of limited adaptability to new situations. Further, they usually lack a sense of semantics, so they need to be combined with human world knowledge to assess the meaningfulness of their results. The concept of Visual Analytics combines the advantages of human cognitive abilities and the processing power of computers. The data are visualized, allowing the users to understand and manipulate them interactively, while algorithms process the data according to the users' interaction. In the manufacturing domain, the product lifecycle is a common way to describe the different states of a product, from the initial idea through its realization until the product is disposed of. This thesis presents approaches along the first three phases of the lifecycle: design, planning, and production.

A challenge common to all phases is the need to find, understand, and assess relations, for example between concepts, production line layouts, or events reported in a production line. As all phases of the product lifecycle cover broad topics, this thesis focuses on supporting experts in understanding and comparing relations between important aspects of the respective phases, such as concept relationships in the patent domain, production line layouts, or relations of events reported in a production line. During the design phase, it is important to understand the relations of concepts, such as key concepts in patents. Hence, this thesis presents approaches that help domain experts explore the relationships of such concepts visually. It first focuses on the analysis of patent relationships and then extends the presented approach to convey relations between arbitrary concepts, such as authors in scientific literature or keywords on websites. During the planning phase, it is important to discover and compare different possibilities for arranging production line components and additional stashes. In this field, the digitally available data are often insufficient to propose optimal layouts. Therefore, this thesis proposes approaches that help planning experts design new layouts and optimize the positions of machine tools and other components in existing production lines. In the production phase, supporting domain experts in understanding recurring issues and their relations is important for improving the overall efficiency of a production line. This thesis presents visual analytics approaches that help domain experts understand the relation between events reported by machine tools and comprehend recurring error patterns that may indicate systematic issues during production. Then, this thesis combines the insights and lessons learned from the previous approaches to propose a system that combines augmented reality with visual analysis, enabling monitoring and situated analysis of machine events directly at the production line. The presented approach primarily focuses on supporting operators on the shop floor. Finally, this thesis discusses a possible combination of the product lifecycle with knowledge-generating models to communicate insights between the phases, e.g., to prevent issues caused by problematic design decisions in earlier phases. In summary, this thesis makes several fundamental contributions to advancing visual analytics techniques in the manufacturing domain by devising new interactive analysis techniques for concept and event relations and by combining them with augmented reality approaches that enable immersive analysis to improve event handling during production.

Item Open Access: Process migration in a parallel environment (Stuttgart : Höchstleistungsrechenzentrum, Universität Stuttgart, 2016). Reber, Adrian; Resch, Michael (Prof. Dr.-Ing. Dr. h.c. Dr. h.c. Prof. E.h.)

To satisfy the ever-increasing demand for computational resources, high performance computing systems are becoming larger and larger. Unfortunately, the tools supporting system management tasks are only slowly adapting to the growing number of components in computational clusters. Virtualization provides concepts that make system management tasks easier to implement by giving system administrators more flexibility.

With the help of virtual machine migration, the point in time for certain system management tasks, such as hardware or software upgrades, no longer depends on the usage of the physical hardware. The flexibility to migrate a running virtual machine without significant interruption to the provided service makes it possible to perform system management tasks at the optimal point in time. In most high performance computing systems, however, virtualization is still not used. The reason for avoiding virtualization in high performance computing is that there is still an overhead when accessing the CPU and I/O devices. This overhead is continually decreasing, and different kinds of virtualization techniques, such as para-virtualization and container-based virtualization, minimize it further. With the CPU being one of the primary resources in high performance computing, this work proposes to migrate processes instead of virtual machines, thus avoiding any such overhead. Process migration can be seen either as an extension of pre-emptive multitasking across system boundaries or as a special form of checkpointing and restarting. In the scope of this work, process migration is based on checkpointing and restarting, as this is already an established technique in the field of fault tolerance. From the existing checkpointing and restarting implementations, the one best suited for process migration purposes was selected. One of the important requirements for the checkpointing and restarting implementation is transparency: providing transparent process migration is important to enable the migration of any process without prerequisites such as re-compilation or running in a specially prepared environment. With process migration based on checkpointing and restarting in place, the next step towards process migration in a high performance computing environment is to support the migration of parallel processes. Since MPI is a common method of parallelizing applications, process migration has to be integrated with an MPI implementation. The previously selected checkpointing and restarting implementation was therefore integrated into an MPI implementation, thus enabling the migration of parallel processes. With the help of different test cases, the implemented process migration was analyzed, especially with regard to the time required to migrate a process and the benefits of optimizations that reduce the process's downtime during migration.

Item Open Access: Efficient fault tolerance for selected scientific computing algorithms on heterogeneous and approximate computer architectures (2018). Schöll, Alexander; Wunderlich, Hans-Joachim (Prof. Dr.)

Scientific computing and simulation technology play an essential role in solving central challenges in science and engineering. The high computational power of heterogeneous computer architectures makes it possible to accelerate applications in these domains, which are often dominated by compute-intensive mathematical tasks. Scientific, economic and political decision processes increasingly rely on such applications and therefore create a strong demand for correct and trustworthy results. However, continued semiconductor technology scaling increasingly poses serious threats to the reliability and efficiency of upcoming devices. Different reliability threats can cause crashes or erroneous results without any indication. Software-based fault tolerance techniques can protect algorithmic tasks by adding appropriate operations to detect and correct errors at runtime.
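
As an illustration of such software-based checks, the following Python sketch adds a checksum test to a matrix-vector product, the core operation of a Conjugate Gradient solver; the tolerance guards against false positives caused by floating-point rounding. The concrete check and threshold are illustrative assumptions, not the techniques developed in the thesis.

    # Hypothetical sketch of a checksum-based (ABFT-style) check for y = A x.
    # A reference value (1^T A) x is compared against 1^T (A x); a relative
    # tolerance avoids flagging ordinary rounding errors as faults.
    def checked_matvec(A, x, rel_tol=1e-10):
        n = len(A)
        col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]  # 1^T A
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # A x
        reference = sum(col_sums[j] * x[j] for j in range(n))          # (1^T A) x
        actual = sum(y)                                                # 1^T (A x)
        scale = sum(abs(v) for v in y) + 1e-300
        if abs(actual - reference) > rel_tol * scale:
            raise RuntimeError("checksum mismatch: possible silent error in y")
        return y

    A = [[4.0, 1.0], [1.0, 3.0]]
    print(checked_matvec(A, [1.0, 2.0]))   # passes the check and returns A x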

Major challenges are the runtime overhead of such operations and rounding errors in floating-point arithmetic that can cause false positives. The end of Dennard scaling poses central challenges for further increasing compute efficiency between semiconductor technology generations. Approximate computing exploits the inherent error resilience of different applications to achieve efficiency gains with respect to, for instance, power, energy, and execution time. However, scientific applications often have strict accuracy requirements, which demand careful use of approximation techniques. This thesis provides fault tolerance and approximate computing methods that enable the reliable and efficient execution of linear algebra operations and Conjugate Gradient solvers on heterogeneous and approximate computer architectures. The presented fault tolerance techniques detect and correct errors at runtime with low runtime overhead and high error coverage. At the same time, these fault tolerance techniques are exploited to enable the execution of Conjugate Gradient solvers on approximate hardware by monitoring the underlying error resilience and adjusting the approximation error accordingly. In addition, parameter evaluation and estimation methods are presented that determine the computational efficiency of application executions on approximate hardware. An extensive experimental evaluation shows the efficiency and efficacy of the presented methods with respect to the runtime overhead for detecting and correcting errors, the error coverage, and the energy reduction achieved when executing Conjugate Gradient solvers on approximate hardware.

Item Open Access: Computational modelling of coreference and bridging resolution (2019). Rösiger, Ina; Kuhn, Jonas (Prof. Dr.)

Item Open Access: Referenzmodell zur zielgruppenspezifischen Entwicklung einer webbasierten Informationsplattform für den technischen Vertrieb (2012). Kett, Holger; Spath, Dieter (Prof. Dr.-Ing. Dr.-Ing. E.h.)

In Germany, goods worth 175 billion euros are sold each year through commercial agencies and brokerages, which support companies as independent sales partners in marketing and selling their products and services. 66 percent of this revenue is attributable to the manufacturing sector. On average, these independent organizations represent six manufacturing companies. To establish this sales channel, commercial agencies and brokerages must be integrated into the processes of the manufacturing company they represent. Commercial agents and brokers are predominantly small businesses (87 percent of commercial agencies employ no more than six people), whose requirements for IT support differ depending on the economic sector in which they operate. Currently, no suitable IT solutions exist that are tailored to the needs of these small businesses and cover their essential requirements. Solutions based on the Software-as-a-Service (SaaS) concept, such as web-based information platforms, open up entirely new possibilities, since they help reduce manufacturers' effort for operating and maintaining IT applications. They allow a more flexible use of the IT infrastructure and offer, among other advantages, the benefit that only the services actually used are billed.

This thesis aims to provide methodological support for the target-group-specific development of web-based solutions as an electronic service for commercial agents, brokers, and manufacturing companies. This requires an interdisciplinary procedure that starts with developing the service offering for manufacturing companies and their commercial agents and brokers (view 1). The service offering is then made concrete in the form of a domain concept (view 2), from which a service-based IT concept can be derived (view 3), implemented in software (view 4), and put into operation (view 5). A consistently model-based development of the web-based solution increases the transparency between these views and the models created for them, and improves the reusability of the developed services. Holistic procedures for the model-based development of electronic services do not yet exist in this form, and no applicable metamodels for modeling service offerings and the underlying business models could be identified. To address this deficit, the thesis introduces a reference model that supports IT providers in the target-group-specific development of web-based information platforms for commercial agencies, brokerages, and the manufacturers they represent in technical sales. The basis of the reference model is a metamodel (Integrated Service Engineering Framework) that builds on the Zachman Framework. The metamodel assigns suitable models to the above-mentioned views of an electronic service in the form of a matrix. The thesis introduces suitable models for this purpose and creates a reference model for the market-oriented, target-group-specific view, focusing on the service offerings and the underlying business models. The reference model allows economic and strategic information to be structured at the beginning of the development process and assigns it to the models and information of subsequent views. To simplify the application of the reference model, a methodical procedure for its use is presented. The reference model was implemented as an Eclipse-based editor and evaluated with four partner companies. Using the reference model for developing the business models of web-based information platforms for technical sales, with a particular focus on commercial agencies, brokerages, and manufacturers, the evaluation realized the following benefits: the structured development of a business model and the target-group-specific alignment of an information platform with all stakeholders; the creation of an essential foundation and framework for the concrete planning and implementation of the information platform; and a simplified assessment of the extent to which developing and operating the information platform is economically attractive for each partner company.

Item Open Access: Causal models for decision making via integrative inference (2017). Geiger, Philipp; Toussaint, Marc (Prof. Dr.)

Understanding causes and effects is important in many parts of life, especially when decisions have to be made. The systematic inference of causal models, however, remains a challenge.

In this thesis, we study (1) "approximative" and "integrative" inference of causal models and (2) causal models as a basis for decision making in complex systems. By "integrative" we mean including and combining settings and knowledge beyond the outcome of perfect randomization or pure observation for causal inference, while "approximative" means that the causal model is only constrained but not uniquely identified. As a basis for the study of topics (1) and (2), which are closely related, we first introduce causal models, discuss the meaning of causation, and embed the notion of causation into a broader context of other fundamental concepts. We then begin our main investigation with a focus on topic (1): we consider the problem of causal inference from a non-experimental multivariate time series X, that is, we integrate temporal knowledge. We take the following approach: we assume that X, together with some potential hidden common cause ("confounder") Z, forms a first-order vector autoregressive (VAR) process with structural transition matrix A. We then examine under which conditions the most important parts of A are identifiable or approximately identifiable from X alone, in spite of the effects of Z. Essentially, sufficient conditions are (a) non-Gaussian, independent noise or (b) no influence from X to Z. We present two estimation algorithms tailored to conditions (a) and (b), respectively, and evaluate them on synthetic and real-world data. We also discuss how to check the model using X. Still focusing on topic (1) but already including elements of topic (2), we consider the problem of approximate inference of the causal effect of a variable X on a variable Y in i.i.d. settings "between" randomized experiments and observational studies. Our approach is to first derive approximations (upper and lower bounds) on the causal effect, depending on bounds on the (hidden) confounding. We then discuss several scenarios in which knowledge or beliefs can be integrated that in fact imply bounds on the confounding. One example concerns decision making in advertising, where knowledge about partial compliance with guidelines can be integrated. Then, concentrating on topic (2), we study decision-making problems that arise in cloud computing, a computing paradigm and business model that involves complex technical and economic systems and interactions. More specifically, we consider the following two problems: debugging and control of computing systems with the help of sandbox experiments, and prediction of the cost of "spot" resources for decision making of cloud clients. We first establish two theoretical results on approximate counterfactuals and approximate integration of causal knowledge, which we then apply to the two problems in toy scenarios.
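
A compact way to write down the confounded VAR(1) model described above is the following block form (a plausible formalization consistent with the abstract; the exact notation and block structure in the thesis may differ):

    \[
      \begin{pmatrix} X_t \\ Z_t \end{pmatrix}
        = A \begin{pmatrix} X_{t-1} \\ Z_{t-1} \end{pmatrix} + N_t,
      \qquad
      A = \begin{pmatrix} B & C \\ D & E \end{pmatrix},
    \]

    where $X_t$ is observed, $Z_t$ is the hidden confounder, and $N_t$ is an
    independent noise term. Under this reading, the block $B$ collects the
    influences within $X$ and is the part of $A$ whose (approximate)
    identifiability from observations of $X$ alone is studied; condition (a)
    corresponds to non-Gaussian, independent components of $N_t$, and
    condition (b) to $D = 0$, i.e., no influence from $X$ on $Z$.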