Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Item Open Access Visualization of uncorrelated point data (2008) Reina, Guido; Ertl, Thomas (Prof. Dr.)

The sciences are the most common application context for computer-generated visualization. Researchers in these areas have to work with large datasets of many different types, but one trait common to all of them is that in their raw form they exceed the cognitive abilities of human beings. Visualization aims not only at enabling users to quickly extract as much information as possible from datasets, but also at allowing them to work at all with data that is too large and complex to be grasped directly by human cognition. This work focuses on uncorrelated point data, or point clouds, sampled from real-world measurements or generated by computer simulations. Such datasets are gridless and exhibit no connectivity; each point represents an entity of its own. To work effectively with such datasets, two main problems must be solved: on the one hand, a large number of complex primitives with potentially many attributes must be visualized; on the other hand, interaction with the datasets must be designed in an intuitive way. This dissertation presents novel methods for handling large, point-based datasets of high dimensionality. The contribution for the rendering of hundreds of thousands of application-specific glyphs is a graphics processing unit (GPU)-based solution that allows the exploration of datasets with a moderate number of dimensions but an extremely large number of points. These approaches are shown to work for molecular dynamics (MD) datasets as well as for 3D tensor fields. Factors critical to the performance of these algorithms are thoroughly analyzed, with the main focus on fast, high-quality rendering of these complex glyphs. To improve the visualization of datasets with many attributes and only a moderate number of points, methods are presented for the interactive reduction of dimensionality and for analyzing the influence of different dimensions as well as of different metrics. The rendering of the resulting data in 3D similarity space is also addressed. A GPU-based dimensionality reduction has been implemented that allows interactive tweaking of the reduction parameters while observing the results in real time. With a fast and responsive visualization available, the missing component for a complete system is human-computer interaction. The user must be able to navigate the information space and interact with a dataset, selecting or filtering the items of interest and inspecting the attributes of particular data points. Today, one must distinguish between the application context and the modality of different interaction approaches. Current research ranges from keyboard-and-mouse desktop interaction through various haptic interfaces (including feedback) to tracked interaction for virtual reality (VR) installations. In the context of this work, the problem of interacting with point-based datasets is tackled in two different situations: the first is the workstation-based analysis of clustering mechanics in thermodynamics simulations, the second is immersive navigation and interaction with point-cloud datasets in VR.
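GPU glyph renderers of the kind summarized above typically draw each point as a screen-aligned sprite and ray-cast the glyph surface per fragment, which yields exact silhouettes and per-pixel depth. The following numpy sketch mimics that per-fragment computation on the CPU for a sphere glyph; the orthographic setup and all names are illustrative assumptions, not the thesis implementation.

```python
# A minimal CPU sketch (numpy) of the per-fragment sphere ray casting that
# GPU glyph renderers perform inside a point-sprite fragment shader.
import numpy as np

def shade_sphere_sprite(center, radius, res=32, light=(0.3, 0.3, 0.9)):
    """Ray-cast one sphere glyph over a res x res sprite (orthographic rays)."""
    light = np.asarray(light) / np.linalg.norm(light)
    # Sprite-local pixel coordinates in [-radius, radius]^2 around the center.
    xs = np.linspace(-radius, radius, res)
    px, py = np.meshgrid(xs, xs)
    # For orthographic rays along -z, a pixel hits the sphere if x^2+y^2 <= r^2.
    d2 = px**2 + py**2
    mask = d2 <= radius**2
    # Surface z from the implicit sphere equation gives exact depth and normal.
    z = np.zeros_like(d2)
    z[mask] = np.sqrt(radius**2 - d2[mask])
    normals = np.stack([px, py, z], axis=-1) / radius
    # Simple Lambertian shading; pixels outside the silhouette are discarded.
    intensity = np.clip(normals @ light, 0.0, 1.0) * mask
    depth = np.where(mask, center[2] + z, np.inf)   # exact per-pixel depth
    return intensity, depth

intensity, depth = shade_sphere_sprite(center=np.array([0.0, 0.0, 0.0]), radius=1.0)
```

The exact per-pixel depth is what lets such glyphs intersect correctly with other geometry, which flat textured sprites cannot do.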
Item Open Access System support for adaptive pervasive applications (2009) Handte, Marcus; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Driven by the ongoing miniaturization of computer technology as well as the proliferation of wireless communication technology, Pervasive Computing envisions seamless and distraction-free task support by distributed applications that are executed on computers embedded in everyday objects. As such, this vision is equally appealing to the computer industry and the user. Induced by various factors such as invisible integration, user mobility, and computer failures, the resulting computer systems are heterogeneous, highly dynamic, and evolving. As a consequence, applications executed in these systems need to adapt continuously to their ever-changing execution environment. Without further precautions, the need for adaptation can complicate application development and utilization, which hinders the realization of the basic vision. As a solution to this dilemma, this dissertation describes the design of system software for Pervasive Computing that simplifies the development of adaptive applications. Instead of shifting the responsibility for adapting an application to the user or the application developer, the system software introduces a component-based application model that can be configured and adapted automatically. To enable automation at the system level, the application developer specifies the dependencies on components and resources in an abstract manner using contracts. Upon application startup, the system uses the contractual descriptions to compute and execute valid configurations. At runtime, it detects changes to the configuration that require adaptation and reconfigures the application. To compute valid configurations upon application startup, the dissertation identifies the requirements for configuration algorithms. Based on an analysis of the problem complexity, it classifies possible algorithmic solutions and presents an integrated approach to configuration based on a parallel backtracking algorithm. Apart from scenario-specific modifications, retrofitting the backtracking algorithm requires mapping the configuration problem to constraint satisfaction, which can be computed on the fly at runtime. The resulting approach to configuration is then extended to support the optimization of a cost function that captures the most relevant cost factors during adaptation. This enables the use of the approach both for configuration upon startup and for reconfiguration during runtime adaptation. As a basis for the evaluation of the system software and the algorithm, the dissertation outlines a prototypical implementation. This prototype is used for a thorough evaluation of the presented concepts and algorithms by means of real-world measurements and a number of simulations. The evaluation results suggest that the presented system software can indeed simplify the development of distributed applications that compensate for the heterogeneity, dynamics, and evolution of the underlying system. Furthermore, they indicate that the configuration algorithm and the extensions for adaptation provide sufficiently high performance in typical application scenarios. Moreover, the results also suggest that they are preferable to alternative solutions. To position the presented solution within the space of possible and existing solutions, the dissertation discusses major representatives of existing systems and proposes a classification of the relevant aspects: the underlying conceptual model of the system and the distribution of the responsibility for configuration and adaptation. The classification underlines that, in contrast to other solutions, the presented solution provides a higher degree of automation without relying on the availability of a powerful computer. Thus, it simplifies the task of the application developer without distracting the user, while being applicable to a broader range of scenarios. After discussing the related approaches and clarifying similarities and differences, the dissertation concludes with a short summary and an outlook on future work.
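To make the configuration-as-constraint-satisfaction idea concrete, here is a minimal Python sketch of backtracking over component bindings under pairwise contract checks. The slot/option/compatibility encoding is a hypothetical illustration, not the system's actual contract model.

```python
# A minimal sketch of mapping component configuration to constraint
# satisfaction and solving it with backtracking.
def configure(components, candidates, compatible, assignment=None):
    """Bind one candidate (e.g., a concrete component on some device) to every
    required component slot such that all pairwise contracts hold."""
    assignment = assignment or {}
    if len(assignment) == len(components):
        return assignment                      # valid configuration found
    slot = components[len(assignment)]         # next unbound dependency
    for option in candidates[slot]:
        # Contract check against everything already bound (the constraints).
        if all(compatible(slot, option, s, o) for s, o in assignment.items()):
            result = configure(components, candidates, compatible,
                               {**assignment, slot: option})
            if result is not None:
                return result                  # propagate first valid solution
    return None                                # dead end: backtrack

# Toy usage: two slots, candidate devices per slot, and a constraint that
# forbids placing both components on the same (hypothetical) device.
slots = ["ui", "storage"]
options = {"ui": ["devA", "devB"], "storage": ["devA", "devC"]}
ok = lambda s1, o1, s2, o2: o1 != o2
print(configure(slots, options, ok))   # {'ui': 'devA', 'storage': 'devC'}
```

The parallel variant described in the thesis distributes this search across devices; the sequential sketch only shows the underlying mapping.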
Item Open Access Multi-field visualization on graphics processing units (2008) Botchen, Ralf Peter; Ertl, Thomas (Prof. Dr.)

The generation of multi-field data has become commonplace in many scientific disciplines and application areas today. While researchers have produced numerous techniques for analyzing a single scalar, vector, or tensor field in recent years, finding approaches for exploring multi-field datasets remains one of the significant challenges in visualization and analytics. One crucial driver of the growing demand for multi-field visualization techniques is the fact that scientists need to explore the interaction of these fields to gain a deeper understanding of underlying processes and relationships. This work addresses the challenge of illustrating multi-field data and presents new visualization techniques for a variety of application areas, with the aim of mapping these algorithms to graphics hardware architectures to achieve interactive visualization. In particular, the main contributions of this thesis include multi-field flow visualization, with one focus on integrating an additional flow uncertainty value, based on measurement simulation, into the visualization. To this end, texture-based advection techniques are extended for the transport and display of the additional information. The second focus lies on illustrating multiple fields as one combined characteristic set to minimize memory usage and allow further feature extraction from the new unified representation. New techniques are developed for multi-field volume rendering in the area of medical applications, where the primary challenge is to intermix volumetric data acquired by different medical imaging modalities. The proposed solutions give implementation details for raycasting and slice-based rendering of multiple overlapping volumes. The third application area is video visualization. This domain is a typical representative of multi-field visualization, as it combines both flow fields and multi-volume data for illustration. The goal of the introduced video visualization techniques is to extract dynamic or still objects in a scene, detect their individual actions and mutual relations, and display this filtered information as a continuous stream of signatures for analysis. Another problematic issue in multi-field visualization is the size of the data, which is usually rather large, while data transfer to and memory size on GPUs are two major bottlenecks. To address this issue, techniques for data reduction by combination and data bricking for continuous streaming are discussed throughout the thesis. Finally, multi-field data encoding and visualization techniques are presented that utilize the advantages of radial basis functions to minimize the data size.
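Texture-based advection, the building block extended above for uncertainty transport, moves a dense texture through the flow field each frame. Below is a minimal semi-Lagrangian sketch in numpy, assuming a 2D field and nearest-neighbor lookups; real GPU implementations use hardware bilinear filtering, and the names are illustrative, not the thesis code.

```python
# A minimal numpy sketch of semi-Lagrangian texture advection.
import numpy as np

def advect(tex, vx, vy, dt):
    """Backward-advect a 2D texture through the velocity field (vx, vy)."""
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Trace each texel backwards along the flow (semi-Lagrangian step),
    # clamping lookups at the domain border.
    src_x = np.clip(xs - dt * vx, 0, w - 1)
    src_y = np.clip(ys - dt * vy, 0, h - 1)
    # Nearest-neighbor lookup keeps the sketch short.
    return tex[src_y.round().astype(int), src_x.round().astype(int)]

# Toy usage: advect random noise through a constant diagonal flow; iterating
# this step (plus noise injection) yields the familiar dense flow textures.
noise = np.random.rand(64, 64)
vx = np.ones((64, 64)); vy = 0.5 * np.ones((64, 64))
frame = advect(noise, vx, vy, dt=1.0)
```

Transporting an extra uncertainty channel amounts to advecting a second texture through the same lookup, which is the kind of extension the abstract refers to.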
Item Open Access Design of radio frequency power amplifiers for cellular phones and base stations in modern mobile communication systems (2009) Wu, Lei; Berroth, Manfred (Prof. Dr.-Ing.)

Mobile radio communication began with Guglielmo Marconi's and Alexander Popov's experiments with ship-to-shore communication in the 1890s. Land mobile radio telephone systems have been in use since the Detroit City Police Department installed the first wireless communication system in 1921. Since then, radio systems have become more and more important for both voice and data communication. Modern mobile communication systems are mainly designed for high frequency ranges because of the larger bandwidth available at these frequencies. Today, the most widely used mobile communication systems in the United States are cellular telephone systems operating at 800-900 MHz and personal communication systems (PCS) at 1800-2000 MHz. In Europe, these include the Global System for Mobile Communication (GSM) and the Universal Mobile Telecommunications System (UMTS). China now has GSM/GPRS and Code Division Multiple Access (CDMA) networks. For third-generation services, China has been planning a 3G standard called Time Division Synchronous CDMA (TD-SCDMA) since 1999, which is planned to operate at 2010-2025 MHz. In this work, attention is paid to uplink and downlink applications in the GSM and UMTS systems adopted in Europe. No matter which system is discussed, a wireless communication link usually includes a transmitter, a receiver, and a channel. Quantization, coding, and decoding are performed only in digital systems. Most links are full duplex and include a transmitter and a receiver, or a transceiver, at each end of the link. Obviously, to send or receive sufficiently large signals, power amplifiers and their driver amplifiers are necessary on both sides of the link. A radio frequency power amplifier is a circuit for converting direct current (DC) input power into a significant amount of RF output power. One of the principal differences between small-signal amplifier design and power amplifier design is that the main goal of the latter is maximum output power, not maximum gain. However, a power amplifier cannot simply be regarded as a small-signal amplifier driven into saturation. There is a great variety of power amplifiers, most of which employ techniques beyond simple linear amplification; in other words, RF power can be generated by a wide variety of techniques using a wide variety of devices. In this work, the fundamental theories used for the design of RF power amplifiers are systematically introduced. Using these theories, power amplifier circuits are designed both for base stations and for cellular phones in the modern mobile communication systems adopted in Europe.
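The DC-to-RF conversion view above is commonly quantified by drain efficiency and power-added efficiency (PAE), standard figures of merit in PA design. A small worked example follows; the numbers are purely illustrative, not measurements from the thesis.

```python
# Standard PA figures of merit: drain efficiency and power-added efficiency.
def drain_efficiency(p_rf_out, p_dc):
    """Fraction of DC supply power converted into RF output power."""
    return p_rf_out / p_dc

def pae(p_rf_out, p_rf_in, p_dc):
    """PAE additionally charges the amplifier for the RF drive power it consumes."""
    return (p_rf_out - p_rf_in) / p_dc

p_out, p_in, p_dc = 2.0, 0.1, 4.0          # watts, illustrative values
print(drain_efficiency(p_out, p_dc))       # 0.5   -> 50 % drain efficiency
print(pae(p_out, p_in, p_dc))              # 0.475 -> 47.5 % PAE
```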
Item Open Access Particle tracing methods for visualization and computer graphics (2008) Schafhitzel, Tobias; Weiskopf, Daniel (Prof. Dr.)

This thesis discusses the broad variety of particle tracing algorithms with a focus on flow visualization. Starting with a general overview of the basics of visualization and computer graphics, mathematics, and fluid dynamics, a number of methods using particle tracing for flow visualization and computer graphics are proposed. The first part of this thesis considers mostly texture-based techniques implemented on the graphics processing unit (GPU) in order to provide an interactive dense representation of 3D flow fields. This part covers particle tracing methods that can be applied to general vector fields and includes texture-based visualization in volumes as well as on surfaces. Furthermore, it describes how particle tracing can be used to extract flow structures, such as path surfaces, of a given vector field. The second part of this thesis considers particle tracing on derived vector fields for flow visualization. To this end, a feature-extraction criterion is first applied to a fluid flow field; in most cases this results in a scalar field serving as the basis for the particle tracing methods. It is shown how higher-order derivatives of scalar fields can be used to extract flow features such as 1D vortex core lines or 2D shear sheets. The extracted structures are further processed in terms of feature tracking. The third part generalizes particle tracing to arbitrary applications in visualization and computer graphics. Here, the particles' paths may be defined either by the perspective of the human eye or by a force field that influences the particles' motion through second-order ordinary differential equations. All three parts illustrate the importance of particle tracing methods for a wide range of applications in flow visualization and computer graphics by means of various examples. Furthermore, it is shown how the flexibility of this method strongly depends on the underlying vector field, and how those vector fields can be generated in order to solve problems that go beyond traditional particle tracing in fluid flow fields.
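At the core of all these methods is the numerical integration of particle positions through a vector field. A minimal Python sketch using the classic fourth-order Runge-Kutta scheme is shown below; the analytic field is purely illustrative, and the thesis itself is not tied to this particular integrator.

```python
# A minimal sketch of the numerical core of particle tracing: integrating
# particle positions through a (steady) vector field with classic RK4.
import numpy as np

def rk4_step(pos, field, dt):
    """One fourth-order Runge-Kutta step of dx/dt = field(x)."""
    k1 = field(pos)
    k2 = field(pos + 0.5 * dt * k1)
    k3 = field(pos + 0.5 * dt * k2)
    k4 = field(pos + dt * k3)
    return pos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trace(seed, field, dt=0.01, steps=500):
    """Compute a streamline (a pathline in a steady field) from a seed point."""
    path = [np.asarray(seed, float)]
    for _ in range(steps):
        path.append(rk4_step(path[-1], field, dt))
    return np.array(path)

# Toy usage: a 2D circular flow; the traced streamline stays on a circle.
swirl = lambda p: np.array([-p[1], p[0]])
line = trace([1.0, 0.0], swirl)
```

The generalizations in the third part amount to swapping in different right-hand sides, for example a force field with second-order dynamics instead of a velocity field.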
Item Open Access Bridging the gap between volume visualization and medical applications (2009) Rößler, Friedemann Andreas; Ertl, Thomas (Prof. Dr.)

Direct volume visualization has been established as a common visualization technique for tomographic volume datasets in many medical application fields. In particular, the introduction of volume visualization techniques that exploit the computing power of modern graphics hardware has expanded the application possibilities enormously. However, the employment of programmable graphics processing units (GPUs) usually requires an individual adaptation of the algorithms for each medical visualization task. Thus, only few sophisticated volume visualization algorithms have yet found their way into daily medical practice. In this thesis, several new techniques for medical volume visualization are presented that help to bridge this gap between volume visualization and medical applications. The problem of medical volume visualization is addressed on three different levels of abstraction, which build upon each other. On the lowest level, a flexible framework for the simultaneous rendering of multiple volume datasets is introduced. This is needed when multiple volumes, which may be acquired with different imaging modalities or at different points in time, are to be combined into a single image. To this end, a render graph was developed that allows the definition of complex visualization rules for arbitrary multi-volume scenes; from this graph, GPU programs for optimized rendering are generated automatically. The second level comprises interactive volume visualization applications for different medical tasks. Several tools and techniques are presented that demonstrate the flexibility of the multi-volume rendering framework. Specifically, a visualization tool was developed that permits the direct configuration of the render graph via a graphical user interface. Another application focuses on the simultaneous visualization of functional and anatomical brain images, as they are acquired in studies in cognitive neuroscience. Moreover, an algorithm for direct volume deformation is presented, which can be applied to surgical simulation. On the third level, the automation of visualization processes is considered. This can be applied to standard visualization tasks to support medical doctors in their daily work. First, 3D object movies are proposed for the representation of automatically generated visualizations; these allow intuitive navigation along precomputed views of an object. Then, a visualization service is presented that delegates the costly computation of video sequences and object movies of a volume dataset to a GPU cluster. In conclusion, a processing model for the development of medical volume visualization solutions is proposed. Starting from the initial request to apply volume visualization techniques to a certain medical task, it covers the whole life cycle of such a solution from a prototype to an automated service. It is shown how the techniques that were developed for this thesis support the creation of visualization solutions at the different stages.
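The render-graph idea above can be pictured as a small scene description whose nodes are evaluated per sample: leaves sample volumes, inner nodes combine the results, and the generated GPU program performs the same evaluation in the shader. The following Python sketch is a hypothetical illustration of that structure, not the framework's actual node set or mixing rules.

```python
# A minimal sketch of a render-graph style multi-volume scene description.
import numpy as np

class VolumeNode:
    """Leaf node: samples one volume dataset (nearest-neighbor for brevity)."""
    def __init__(self, data):            # data: 3D scalar array
        self.data = data
    def sample(self, x, y, z):
        i, j, k = (int(round(c)) for c in (x, y, z))
        return float(self.data[i, j, k])

class BlendNode:
    """Inner node: combines two modalities per sample with a fixed weight."""
    def __init__(self, left, right, weight=0.5):
        self.left, self.right, self.weight = left, right, weight
    def sample(self, x, y, z):
        return (self.weight * self.left.sample(x, y, z)
                + (1 - self.weight) * self.right.sample(x, y, z))

# Toy usage: blend a CT-like and an MRI-like volume (random stand-ins).
ct, mri = np.random.rand(8, 8, 8), np.random.rand(8, 8, 8)
scene = BlendNode(VolumeNode(ct), VolumeNode(mri), weight=0.7)
print(scene.sample(3, 4, 2))
```

Generating a GPU program from such a graph essentially means emitting one shader expression per node along a traversal, which is why complex multi-volume rules can still render at interactive rates.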
Item Open Access A cross-layer framework for sensor networks (2008) Lachenmann, Andreas; Rothermel, Kurt (Prof. Dr.)

Cross-layer interactions are often used in wireless sensor networks. They help to optimize energy consumption, deal with memory limitations, and take the special properties of wireless communication into account. However, cross-layer interactions have the disadvantage of negatively affecting desirable properties of the software design such as modularity and reusability. In the extreme, applications consist of a monolithic piece of code that is hard to develop and impossible to maintain. Therefore, this thesis investigates different approaches to address the negative side effects of cross-layer interactions. In particular, it develops a framework that pursues three different strategies. First, it tries to preserve modularity and increase reusability by decoupling components that exchange data. This strategy is realized by TinyXXL, a programming abstraction for cross-layer data exchange. This part of the framework was created based on an analysis of cross-layer interactions in existing applications. With some compile-time optimizations, TinyXXL can reduce both energy and memory consumption compared to an application built from reusable components. Using Neidas, a novel neighborhood data sharing algorithm, it offers a comprehensive system for data exchange among the layers of a single node and with neighboring nodes. Second, the framework relaxes one of the constraints that often lead to cross-layer interactions and thus reduces the need to apply them. Specifically, it includes ViMem, a flash-based virtual memory system that helps to reduce memory limitations and tries to optimize the memory layout. Finally, the third strategy is to partially move energy concerns into the system software. For this purpose, the framework includes Levels, an abstraction for specifying optional functionality that allows a user-defined lifetime goal to be met accurately. If necessary, Levels deactivates functionality in order to reach that target lifetime. Furthermore, it includes a distributed algorithm that helps to provide a constant application quality over the total network lifetime.

Item Open Access Supporting business process fragmentation while maintaining operational semantics: a BPEL perspective (2008) Khalaf, Rania; Leymann, Frank (Prof. Dr.)

Globalization and increasing competitive pressure have created the need for agility in business processes, including the ability of an organization to outsource, offshore, or otherwise distribute its once-centralized business processes or parts thereof. While this was long hampered by limited infrastructure capabilities, the increase in bandwidth and connectivity and the decrease in communication cost have removed these limits. An organization that aims for such fragmentation of its business processes needs to be able to separate a process into different parts. Today, this is a manual, design-time endeavor. For example, it may use the concept of subprocesses as parts to be outsourced. However, there is often no way to foresee, in advance, which parts of the process will need to be cut off. Thus, today's technology for outsourcing is static and not dynamic at all. There is therefore a growing need for the ability to fragment one's business processes in an agile manner, and to distribute and wire these fragments so that their combined execution recreates the function of the original process. Additionally, this needs to be done in a networked environment, which is where Service-Oriented Architecture plays a vital role. Service-Oriented Architecture (SOA) is a relatively new approach to software that natively deals with the very dynamic, distributed, loosely coupled, and heterogeneous features of today's networked environment, offering application functions as networked services. Web services are one instantiation of an SOA, consisting of a modular, layered stack of XML standards and corresponding implementations that address the different aspects of this environment. The standard covering business processes for Web services is the Business Process Execution Language for Web Services (also known as BPEL). Relevant characteristics of BPEL are that it is SOA-centric, has a scope construct that groups activities and provides them with common behavior such as fault and compensation handlers, and combines graph-based and calculus-based approaches to process modeling. This thesis describes how to identify, create, and execute process fragments without losing the operational semantics of the original process models. It does so within the framework of the Web services stack of standards, BPEL in particular. The contributions are a categorization of existing Web services aggregation techniques, a meta-model of Web services business process mechanisms using a graph-based formalism, a solution for the automatic, operational-semantics-preserving decomposition of such processes, and an architecture and implementation for a corresponding build-time and runtime environment.
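The essence of semantics-preserving fragmentation is that every control link cut by the partition must be rewired as explicit messaging between the fragments. The following Python sketch illustrates that idea on a bare process graph; the encoding is a hypothetical simplification, not the BPEL metamodel developed in the thesis.

```python
# A minimal sketch of splitting a process graph into fragments along a
# partition while keeping cut control links explicit, so the fragments can be
# wired to reproduce the original control flow.
def fragment(activities, links, partition):
    """activities: activity ids; links: (src, dst) control links;
    partition: activity id -> fragment name."""
    fragments = {name: {"activities": [], "in": [], "out": []}
                 for name in set(partition.values())}
    for a in activities:
        fragments[partition[a]]["activities"].append(a)
    for src, dst in links:
        if partition[src] == partition[dst]:
            continue                     # link stays inside one fragment
        # A cut link becomes a send in the source fragment and a matching
        # receive in the target fragment: messaging replays the link status.
        fragments[partition[src]]["out"].append((src, dst))
        fragments[partition[dst]]["in"].append((src, dst))
    return fragments

# Toy usage: cut a four-activity sequence between B and C.
frags = fragment(["A", "B", "C", "D"],
                 [("A", "B"), ("B", "C"), ("C", "D")],
                 {"A": "f1", "B": "f1", "C": "f2", "D": "f2"})
print(frags["f1"]["out"])   # [('B', 'C')] -> becomes a send/receive pair
```

Preserving operational semantics requires more than this sketch shows, notably carrying dead-path elimination, fault, and compensation behavior across the cut, which is precisely what the thesis addresses.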
Item Open Access An architectural decision modeling framework for service oriented architecture design (2009) Zimmermann, Olaf; Leymann, Frank (Prof. Dr.)

In this thesis, we investigate whether reusable architectural decision models can support Service-Oriented Architecture (SOA) design. In the current state of the art, architectural decisions are captured ad hoc and retrospectively on projects; this is a labor-intensive undertaking without immediate benefits. In contrast, we investigate the role reusable architectural decision models can play during SOA design: we treat recurring architectural decisions as first-class method elements and propose an architectural decision modeling framework and a reusable architectural decision model for SOA, which guide the architect through SOA design. Our approach is tool-supported. Our framework is called SOA Decision Modeling (SOAD). SOAD provides a technique to systematically identify recurring decisions. Our reusable architectural decision model for SOA conforms to a metamodel supporting reuse and collaboration. The model organization follows Model-Driven Architecture (MDA) principles and separates long-lasting platform-independent decisions from rapidly changing platform-specific ones. The alternatives at the conceptual model level reference SOA patterns; this simplifies the initial population and ongoing maintenance of the decision model. Decision dependency management allows knowledge engineers and software architects to check model consistency and prune irrelevant decisions. Moreover, a managed issue list guides the architect through the decision-making process. To update design artifacts according to decisions made, decision outcome information is injected into design model transformations. Finally, a Web-based collaboration system provides tool support for the framework steps and concepts. The SOAD framework is applicable not only to enterprise application and SOA design, but also to other application genres and architectural styles. SOAD supports use cases such as education, knowledge exchange, design method, review technique, and governance instrument.
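Dependency-driven pruning of the managed issue list can be pictured as follows: once an issue is decided, dependent issues whose triggering alternative was not chosen drop out. The Python sketch below is a hypothetical illustration; the issue names and trigger encoding are invented for the example, not SOAD's metamodel.

```python
# A minimal sketch of decision dependency management: prune issues whose
# trigger condition no longer holds after earlier decisions.
def prune(issues, decisions):
    """issues: name -> trigger (decided_issue, required_alternative) or None;
    decisions: name -> chosen alternative. Returns still-relevant open issues."""
    open_issues = []
    for name, trigger in issues.items():
        if name in decisions:
            continue                                   # already decided
        if trigger is None:
            open_issues.append(name)                   # unconditional issue
        else:
            dep, required = trigger
            # Keep the issue only if its triggering alternative was chosen.
            if decisions.get(dep) == required:
                open_issues.append(name)
    return open_issues

issues = {
    "MessageExchangePattern": None,
    "TransportProtocol": None,
    "AsyncCorrelationScheme": ("MessageExchangePattern", "asynchronous"),
}
print(prune(issues, {"MessageExchangePattern": "synchronous"}))
# ['TransportProtocol'] -- the correlation issue is pruned as irrelevant
```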
Item Open Access Analyse und Optimierung der Softwareschichten von wissenschaftlichen Anwendungen für Metacomputing [Analysis and optimization of the software layers of scientific applications for metacomputing] (2008) Keller, Rainer; Resch, Michael (Prof. Dr.-Ing.)

For parallel applications, the Message Passing Interface (MPI) is the programming paradigm of choice on high-performance computers with distributed memory. Using the concept of metacomputing, in turn, a wide range of computing resources can be coupled with PACX-MPI. This is of interest on the one hand because problem sizes are to be tackled that could not be handled on a single system, and on the other hand because coupled simulations are run that need to execute on specific computer architectures, or because systems with particular capabilities, such as visualization resources, must be connected with parallel computing resources. This coupling poses a barrier for distributed applications, since communication with non-local processes is far slower than over the machine-internal network. In this work, solutions are developed across the software layers, starting from the network layer, through improvements within the middleware used, up to optimizations within the application layer. Regarding the lowest software layer, a general network communication library based on the User Datagram Protocol (UDP) is developed for the PACX-MPI middleware. This circumvents limitations of the Transmission Control Protocol (TCP), especially in connection with high-latency, high-bandwidth networks, so-called long fat pipes. The library implemented here is written portably and is efficient through the use of threads. The protocol achieves good bandwidth both in the local area network (LAN) and in the wide area network (WAN). For illustration, the protocol is tested over a connection between computers in Stuttgart and Canberra, Australia. Within the middleware, the optimization of the collective communication routines is addressed, and the improvement is demonstrated for the function PACX_Alltoall using the IMB benchmark on a metacomputer. For the analysis of communication properties, the extension of a tracing library for PACX-MPI is described, as well as the implementation of a generic interface for measuring communication characteristics at the MPI layer. Furthermore, a general MPI test suite is presented, which proved helpful in finding errors both in PACX-MPI and within the Open MPI implementation. At the topmost software layer, optimization opportunities for metacomputing applications are demonstrated. As an example, the analysis of the communication pattern of an application from the field of bioinformatics is shown. Furthermore, the implementation of caching and prefetching of frequently communicated data with spatial and temporal locality is presented. Only this caching and prefetching methodology makes it possible to execute the application on a metacomputer, and it is exemplary for a class of algorithms with similar communication patterns.
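The caching idea described above can be sketched as a thin layer in front of the slow inter-machine link: repeated requests are served locally, and prefetching warms the cache for requests with spatial or temporal locality. The Python sketch below is an illustration under assumed interfaces; the fetch callback and class name are hypothetical, not the thesis implementation.

```python
# A minimal sketch of caching/prefetching for frequently communicated data in
# a metacomputer, where cross-site communication dominates the cost.
class RemoteDataCache:
    def __init__(self, fetch):
        self.fetch = fetch               # expensive cross-site communication
        self.cache = {}

    def get(self, key):
        if key not in self.cache:        # miss: pay the WAN round trip once
            self.cache[key] = self.fetch(key)
        return self.cache[key]           # hit: served at local speed

    def prefetch(self, keys):
        """Warm the cache for requests expected due to spatial/temporal locality."""
        for key in keys:
            self.get(key)

# Toy usage with a stand-in for the remote fetch.
calls = []
cache = RemoteDataCache(lambda k: calls.append(k) or k * 2)
cache.prefetch([1, 2, 3])
print(cache.get(2), len(calls))          # 4 3 -> the hit causes no extra remote call
```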