13 Zentrale Universitätseinrichtungen
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/14
13 results
Search Results

Item Open Access
3D visualization of multivariate data (2012)
Sanftmann, Harald; Weiskopf, Daniel (Prof. Dr.)

Nowadays, large amounts of data are organized in tables, especially in relational databases, where rows store the data items and columns store their attributes. Information stored this way, having multiple (more than two or three) attributes, can be treated as multivariate data. Visualization methods for multivariate data therefore have a large application area and high potential utility. This thesis focuses on the application of 3D scatter plots to the visualization of multivariate data. When dealing with 3D, spatial perception needs to be exploited by effectively using depth cues to convey spatial information to the user. To improve the presentation of individual 3D scatter plots, a technique is presented that applies illumination to them, thus using the shape-from-shading depth cue. To enable the analysis not only of 3D but of multivariate data, a novel technique is introduced that allows navigation between 3D scatter plots. Inspecting the large number of 3D scatter plots that can be projected from a multivariate data set is very time consuming, and the analysis of multivariate data can benefit from automatic machine learning approaches. A presented method uses decision trees to increase the speed at which a user can gain an understanding of the multivariate data, at no extra cost. Stereopsis can also support the display of 3D scatter plots. Here, an improved anaglyph rendering technique is presented that significantly reduces ghosting artifacts. The technique is applicable not only to information visualization, but also to general rendering and to the presentation of stereoscopic image data. Some information visualization algorithms require high computation time. Many of these algorithms can be parallelized to run interactively. A framework that supports the parallelization on shared and distributed memory systems is presented.
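
The improved anaglyph technique itself is not reproduced here. For orientation only, a minimal sketch of plain red-cyan anaglyph composition from a stereo pair (the naive scheme whose ghosting such techniques try to reduce), assuming the two views are given as numpy arrays:

```python
import numpy as np

def basic_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Compose a simple red-cyan anaglyph from a stereo pair.

    left_rgb, right_rgb: float arrays of shape (H, W, 3) in [0, 1].
    This is the naive channel-split scheme; ghosting occurs because the
    filters of red-cyan glasses do not separate the channels perfectly.
    """
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]      # red channel from the left-eye view
    anaglyph[..., 1:] = right_rgb[..., 1:]   # green and blue from the right-eye view
    return np.clip(anaglyph, 0.0, 1.0)

# Usage: stereo pair rendered from two slightly offset cameras (placeholders here)
left = np.random.rand(480, 640, 3)
right = np.random.rand(480, 640, 3)
image = basic_anaglyph(left, right)
```
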
Item Open Access
Visualization and mesoscopic simulation in systems biology (2013)
Falk, Martin Samuel; Ertl, Thomas (Prof. Dr.)

A better understanding of the internal mechanisms and interplays within a single cell is key to the understanding of life. The focus of this thesis lies on the mechanism of cellular signal transduction, i.e. relaying a signal from outside the cell by different means of transport toward its target inside the cell. Besides experiments, understanding can also be achieved by numerical simulations of cellular behavior, which require theoretical models to be designed and evaluated. This is where systems biology closely relates to and depends on recent research results in computer science for the modeling, the simulation, and the analysis of the computational results. Since a single cell can consist of billions of atoms, the simulation of intracellular processes requires a simplified, mesoscopic model. The simulation domain has to be three-dimensional to consider the spatial, possibly asymmetric, intracellular architecture filled with individual particles representing signaling molecules. In contrast to continuous models defined by systems of partial differential equations, a particle-based model allows tracking individual molecules moving through the cell. The overall process of signal propagation usually requires between minutes and hours to complete, but the movement of molecules and the interactions between them have to be determined in the microsecond range. Hence, the computation of thousands of consecutive time steps is necessary, requiring several hours or even days of computational time for a non-parallel simulation. To speed up the simulation, the parallel hardware of current central processing units (CPUs) and graphics processing units (GPUs) can be employed. Finally, the resulting data has to be analyzed by domain experts and, therefore, has to be represented in meaningful ways. Prevalent analysis methods include the aggregation of the data in tables or simple 2D graph plots, sometimes 3D plots for continuous data. Although techniques for interactive visualization of data in 3D are well known, none of these methods have so far been applied to the biological context of single-cell models, and specialized visualizations fitted to the experts' needs are missing. Another issue is the hardware available to the domain experts for visualizing the increasing amount of time-dependent data resulting from simulations. It is important that the visualization keeps up with the simulations to ensure that domain experts can still analyze their data sets. To deal with the massive amount of data to come, compute clusters will be necessary, with specialized hardware dedicated to data visualization. It is thus important to develop visualization algorithms for this dedicated hardware, which is currently available in the form of GPUs. In this thesis, the computational power of recent many-core architectures (CPUs and GPUs) is harnessed for both the simulation and the visualizations. Novel parallel algorithms are introduced to parallelize the spatio-temporal, mesoscopic particle simulation to fit the architectures of CPU and GPU in a similar way. Besides molecular diffusion, the simulation considers extracellular effects on the signal propagation as well as the import of molecules into the nucleus and a dynamic cytoskeleton. An extensive comparison between different configurations is performed, leading to the conclusion that the usage of GPUs is not always beneficial. For the visual data analysis, novel interactive visualization techniques were developed to visualize the 3D simulation results. Existing glyph-based approaches are combined in a new way, facilitating the visualization of the individual molecules in the interior of the cell as well as their trajectories. A novel implementation of the depth-of-field effect, combined with additional depth cues and coloring, aids visual perception and reduces visual clutter. To obtain a continuous signal distribution from the discrete particles, techniques known from volume rendering are employed. The visualization of the underlying atomic structures provides new detailed insights and can be used for educational purposes besides showing the original data. A microscope-like visualization allows, for the first time, generating images of synthetic data similar to images obtained in wet-lab experiments. The simulation and the visualizations are merged into a prototypical framework, thereby supporting the domain expert during the different stages of model development, i.e. design, parallel simulation, and analysis. Although the proposed methods for both simulation and visualization were developed with the study of single-cell signal transduction processes in mind, they are also applicable to models consisting of several cells and to other particle-based scenarios. Examples in this thesis include the diffusion of drugs into a tumor, the detection of protein cavities, and molecular dynamics data from laser ablation simulations, among others.
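
At its core, such a particle-based mesoscopic model advances every molecule by a Brownian-motion step per time step. The following is a minimal sketch of that diffusion update only, with made-up parameter values; the simulation described above additionally handles molecular interactions, nuclear import, and a dynamic cytoskeleton.

```python
import numpy as np

def diffusion_step(pos: np.ndarray, D: float, dt: float, half_extent: float,
                   rng: np.random.Generator) -> np.ndarray:
    """One Brownian-motion step for all particles.

    pos:         (N, 3) positions in micrometers
    D:           diffusion coefficient in um^2/s
    dt:          time step in seconds
    half_extent: half edge length of a cubic stand-in for the cell
    """
    # For free diffusion the per-axis displacement is Gaussian with variance 2*D*dt.
    step = rng.normal(scale=np.sqrt(2.0 * D * dt), size=pos.shape)
    new_pos = pos + step
    # Clamp particles to the simplified cubic domain (no reactions in this sketch).
    return np.clip(new_pos, -half_extent, half_extent)

rng = np.random.default_rng(1)
particles = rng.uniform(-5.0, 5.0, size=(10_000, 3))   # 10,000 signaling molecules
for _ in range(1_000):                                  # microsecond-scale steps
    particles = diffusion_step(particles, D=10.0, dt=1e-6, half_extent=5.0, rng=rng)
```
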
Item Open Access
Advanced visualization techniques for flow simulations: from higher-order polynomial data to time-dependent topology (2013)
Üffinger, Markus; Ertl, Thomas (Prof. Dr.)

Computational fluid dynamics (CFD) has become an important tool for predicting fluid behavior in research and industry. Today, in the era of tera- and petascale computing, the complexity and the size of simulations have reached a state where an extremely large amount of data is generated that has to be stored and analyzed. An indispensable instrument for such analysis is provided by computational flow visualization. It helps in gaining insight and understanding of the flow and its underlying physics, which are subject to a complex spectrum of characteristic behavior, ranging from laminar to turbulent or even chaotic, all of these taking place on a wide range of length and time scales. The simulation side tries to address and control this vast complexity by developing new sophisticated models and adaptive discretization schemes, resulting in new types of data. Examples of such emerging simulations are generalized finite element methods or high-order hp-adaptive discontinuous Galerkin schemes. This work addresses the direct visualization of the resulting higher-order field data, avoiding the traditional resampling approach to enable a more accurate visual analysis. The second major contribution of this thesis deals with the inherent complexity of fluid dynamics. New feature-based and topology-based visualization algorithms for unsteady flow are proposed to reduce the vast amounts of raw data to their essential structure. For the direct visualization, pixel-accurate techniques are presented for 2D field data from generalized finite element simulations, which consist of a piecewise polynomial part of high order enriched with problem-dependent ansatz functions. Secondly, a direct volume rendering system for hp-adaptive finite elements, which combine an adaptive grid discretization with piecewise polynomial higher-order approximations, is presented. The parallel GPU implementation runs on single workstations as well as on clusters, enabling real-time generation of high-quality images and interactive exploration of the volumetric polynomial solution. Methods for visual debugging of these complex simulations are also important and are presented. Direct flow visualization is complemented by new feature-based and topology-based methods. A promising approach for analyzing the structure of time-dependent vector fields is provided by finite-time Lyapunov exponent (FTLE) fields. In this work, interactive methods are presented that help in understanding the cause of FTLE structures, and novel approaches to FTLE computation are developed to account for the linearization error made by traditional methods. Building on this, it is investigated under which circumstances FTLE ridges represent Lagrangian coherent structures (LCS), the time-dependent counterpart to separatrices of traditional "steady" vector field topology. As a major result, a novel time-dependent 3D vector field topology concept based on streak surfaces is proposed. Streak LCS offer a higher quality than corresponding FTLE ridges, and animations of streak LCS can be computed at comparably low cost, alleviating the topological analysis of complex time-dependent fields.
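
For reference, the FTLE is a standard quantity (not specific to this thesis): over an integration time T it is derived from the gradient of the flow map Phi that advects a seed point from time t to t + T,

\[
\sigma_T(\mathbf{x}, t) \;=\; \frac{1}{|T|}\,
\ln\!\sqrt{\lambda_{\max}\!\Big( \big(\nabla \Phi_t^{\,t+T}(\mathbf{x})\big)^{\mathsf{T}}\, \nabla \Phi_t^{\,t+T}(\mathbf{x}) \Big)} ,
\]

and ridges of this scalar field are the candidate Lagrangian coherent structures discussed above. The linearization error mentioned in the abstract stems from approximating the flow map gradient by finite differences of neighboring trajectories.
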
Item Open Access
Visualization techniques for parallel coordinates (2013)
Heinrich, Julian; Weiskopf, Daniel (Prof. Dr.)

Visualization plays a key role in knowledge discovery, visual data exploration, and visual analytics. Static images are an effective tool for visual communication, summarization, and pattern extraction in large and complex datasets. Only together with human-computer interaction techniques do visual interfaces enable an analyst to explore large information spaces and to drive the whole analytical reasoning process. Scatterplots and parallel coordinates are well-recognized visualization techniques that are commonly employed for statistics (both explorative and descriptive) and data mining, but are also gaining importance for scientific visualization. While scatterplots are restricted to the display of at most three dimensions due to the orthogonal layout of coordinate axes, a parallel arrangement allows for the visualization of multiple attributes of a dataset. Although both techniques rely on projections of higher-dimensional geometry and are related by a point-line duality, parallel coordinates enjoy great popularity for the visualization and analysis of multivariate data. Despite their popularity, parallel coordinates are subject to a number of limitations that remain to be solved. For large datasets, the potentially high amount of overlapping lines may hinder the observer from visually extracting meaningful patterns. Encoding observations with polylines makes it difficult to follow lines over all dimensions, as they lose visual continuation across the axes. Clusters cannot be represented by the geometry of lines, and the order of axes has a high impact on the patterns exhibited by parallel coordinates. This thesis presents visualization techniques for parallel coordinates that address these limitations. As a foundation, an extensive review of the state of the art of parallel coordinates is given. Based on the point-line duality, the existing model of continuous scatterplots is adapted to parallel coordinates for the visualization of data defined on continuous domains. To speed up computation and obtain interactive frame rates, a scalable and progressive rendering algorithm is introduced that further allows for arbitrary reconstruction and interpolation schemes. A curve-bundling model for parallel coordinates is evaluated with a user study, showing that bundling is effective for cluster visualization based on geometric cues while being equally capable of revealing correlations between neighboring axes. To address the axis-order problem, a graph-based approach is presented that allows for the visualization of all pairwise relations in a matrix layout without redundancy. Finally, the use of parallel coordinates is demonstrated for real datasets from computational fluid dynamics, motion capturing, bioinformatics, and systems biology.
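
The point-line duality referred to above can be stated explicitly for two parallel axes placed at x = 0 and x = d (a textbook identity, not a contribution of the thesis): a data point (a, b) becomes the line segment joining height a on the first axis to height b on the second, and all points of a Cartesian line y = mx + b map to parallel-coordinate lines that intersect in the dual point

\[
\left( \frac{d}{1-m},\; \frac{b}{1-m} \right), \qquad m \neq 1,
\]

which lies between the two axes for negatively correlated data (m < 0) and outside the axis strip for positively correlated data.
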
Item Open Access
Visualization techniques for group structures in graphs (2015)
Vehlow, Corinna; Weiskopf, Daniel (Prof. Dr.)

Graph visualization plays a key role in analyzing relations between objects. With increasing size of the graph, it becomes difficult to understand its global and local structures. Grouping objects of the graph based on their attributes or relations helps reveal global structures. Visualizing these group structures together with the graph topology can highlight central objects and reveal outliers. Supporting the detection of these features becomes more difficult for groups that overlap or change over time. In many applications, groups cannot be interpreted as disjoint sets of objects. In fact, objects are often involved in several groups, sometimes even to different extents. With the existing types of overlapping groups, further analysis tasks arise that need to be considered for the visualization. In addition, real-world scenarios are not static but change over time, and so do relations among objects. With the graph topology changing over time, the group structure changes as well. The challenge for visualizations of dynamic groups in dynamic graphs is to facilitate the analysis of group-related features not only for individual points in time but over time, showing group evolution events. This thesis presents visualization techniques for group structures in graphs that address these challenges: overlap and time dependency. As a basis, a survey of the state of the art in visualizing group structures in graphs is presented. The first part of this thesis is dedicated to the visualization of overlapping groups in static graphs, where different types of overlaps are considered. With each technique, the complexity of the groups increases. First, a visual analytics system for crisp overlapping groups in multivariate graphs is presented. This system integrates interactive filtering of large and dense networks with group-based layouts of the resulting subnetworks and a technique to compare those subnetworks. Second, a technique is presented that visualizes fuzzy overlapping groups in a graph based on layout strategies and further visual mappings. This technique facilitates the investigation of fuzzy group memberships at different levels of detail based on a hierarchical aggregation model. In contrast to these techniques, the third visualization technique shows groups based on multivariate edge attributes rather than vertex attributes or the topology of the graph. In particular, edge-edge relations are visualized as curves that are directly integrated into the node-link diagram representing the object-relation structure. The second part is dedicated to visualization techniques for dynamic groups in dynamic graphs. Again, the complexity of the group structure rises from the first technique, addressing flat groups, to the second technique, addressing more complex hierarchical groups. Within both techniques, the evolution of groups is encoded using a flow metaphor. The first technique visualizes the partially aggregated graphs by node-link diagrams, whereas the second technique is based on an extended adjacency matrix representation that encodes the hierarchical structure of vertices as well as changes in the graph topology. All presented techniques visualize the group structure integrated with the graph topology in a single image. Finally, the use of all techniques is demonstrated for real data sets from biology, one of the main application domains of group structures in graphs.

Item Open Access
Topology and morphology of bounded vector fields (2015)
Machado, Gustavo Mello; Ertl, Thomas (Prof. Dr.)

Vector fields are a fundamental concept in science, and as they can represent properties from electromagnetic fields to the dynamics of fluid flow, their visualization assists the study and comprehension of many physical phenomena. Aiming to extract the global structure of streamlines with respect to regions of qualitatively different behavior, the concept of vector field topology basically consists of locating singularities, i.e., critical points and periodic orbits, and computing the sets of streamlines that converge to them in positive or negative direction, called separatrices. On the one hand, this approach has proven its valuable contributions to scientific visualization; on the other hand, its limitations with respect to bounded domains have not yet been sufficiently researched, and only comparably few approaches exist for such configurations. This thesis contributes to vector field visualization, in particular to vector field topology on bounded domains, with new techniques spanning feature extraction, integration-based approaches, and topology-based approaches. More specifically, the contributions of this thesis are the following. A local extraction technique for bifurcation lines is proposed, together with the extraction of their manifolds. Bifurcation lines represent a topological feature that has not yet been sufficiently recognized in scientific visualization. The bifurcation lines are extracted by a modification of the vortex core line extraction techniques due to Sujudi and Haimes, and Roth and Peikert, both formulated using the parallel vectors operator. While the former formulation provides acceptable results only in configurations with high hyperbolicity and low curvature of the bifurcation lines, the latter operates well only in configurations with low hyperbolicity but can handle strong curvature of the bifurcation lines, with the drawback that it often fails to provide a solution. The refinement of the solutions of the parallel vectors operator is presented as a means to improve both criteria and, in particular, to refine the solutions of the Sujudi and Haimes criterion in cases where the Roth and Peikert criterion fails. This technique is exemplified on synthetic data, data from computational fluid dynamics, and magnetohydrodynamics data. As a particularly interesting application, it is demonstrated that this technique is able to extract saddle-type periodic orbits locally, and in cases of high hyperbolicity with higher accuracy than traditional techniques based on integral curves. Solar dynamics data, particularly those from the Solar Dynamics Observatory, are now available in a sheer volume that is hard to investigate with traditional visualization tools, which mainly display 2D images. While the challenge of data access and browsing has been solved by web-based interfaces and efforts like the Helioviewer project, the approaches so far only provide 2D visualizations. The visualization of such data in the full 3D context is presented, providing appropriate coordinate systems and projection techniques, including time. Methods from volume rendering and flow visualization are applied to 3D solar magnetic fields, which are derived from the sensor data in an interactive process. They are applied and extended to the space-time visualization of photospheric data, and a view-dependent visualization of coronal holes is presented. This work concentrates on two solar phenomena: the structure and dynamics of coronal loops, and the evolution of the plasma convection in the close vicinity of sunspots over time. This approach avoids the time-coherence issue inherent in traditional magnetic field line placement, providing insight into the magnetic field and the structure of the coronal plasma. The presented techniques are also applicable in many other fields, such as terrestrial magnetospheric physics or magnetohydrodynamics simulations. Inspired by the view-dependent visualization of coronal holes, this thesis also presents a technique to visualize the streamline-based mapping between the boundary of a simply-connected subregion of arbitrary 3D vector fields. While the streamlines are seeded on one part of the boundary, the remaining part serves as escape border. Hence, the seeding part of the boundary represents a map of streamline behavior, indicating whether streamlines reach the escape border or not. Since the resulting maps typically exhibit a very fine and complex structure and are thus not amenable to direct sampling, this approach instead aims at a topologically consistent extraction of their boundaries. It is shown that isocline surfaces of the projected vector field provide a robust basis for streamsurface-based extraction of these boundaries. The utility of this technique is demonstrated in the context of transport processes using vector field data from different domains such as magma flow, coronal magnetic fields for the extraction of coronal holes, and computational fluid dynamics. Streamsurfaces are of fundamental importance to the visualization of flows. Among other features, they offer strong capabilities in revealing flow behavior (e.g., in the vicinity of vortices), and are an essential tool for the computation of 2D separatrices in vector field topology. Computing streamsurfaces is, however, typically expensive due to the difficult triangulation involved, in particular when triangle sizes are kept in the order of the size of a pixel. Different image-based approaches for rendering streamsurfaces without triangulation are investigated here, and a new technique that renders them by dense streamlines is proposed. Although this technique does not perform triangulation, it does not require user parametrization to avoid noticeable gaps. A GPU-based implementation shows that this technique provides interactive frame rates and low memory usage in practical applications. It is also shown that previous texture-based flow visualization approaches can be integrated with this method, for example for the visualization of flow direction with line integral convolution.
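
For context, both extraction criteria build on the parallel vectors operator, which collects the points where two vector fields v and w are parallel; the Sujudi and Haimes criterion instantiates w with the steady acceleration (standard formulations, not the refinement contributed by the thesis):

\[
\mathrm{PV}(\mathbf{v}, \mathbf{w}) \;=\; \{\, \mathbf{x} \;:\; \mathbf{v}(\mathbf{x}) \times \mathbf{w}(\mathbf{x}) = \mathbf{0} \,\},
\qquad
\mathbf{w} \;=\; \mathbf{J}\mathbf{v} \;=\; (\nabla \mathbf{v})\,\mathbf{v} .
\]

The Roth and Peikert formulation replaces Jv with a higher-order derivative of the velocity along the flow, which explains the complementary strengths and weaknesses described above.
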
Item Open Access
Distributed computing and transparency rendering for large displays (2015)
Kauker, Daniel; Ertl, Thomas (Prof. Dr.)

Today's computational problems are getting bigger, and the performance required to solve them increases steadily. Furthermore, the results are getting more detailed, so that rendering, visualization, and interaction methods need to adapt. While the computational power of a single chip also increases steadily, most of the performance is gained by parallelizing the algorithms. Although Graphics Processing Units are built and specialized for the task of graphics rendering, their programmability also makes them suitable for general-purpose computations. Thus, a typical workstation offers at least two processing units, the Central Processing Unit and the Graphics Processing Unit. Using multiple processing units for a task is commonly referred to as "distributed computing". One of the biggest challenges when using such heterogeneous and distributed systems is the variety of software and ways to use them for an optimal result. The first section of the thesis focuses on an abstraction layer to simplify software development on heterogeneous computing systems. The presented framework aims to encapsulate the vendor-specific details and the hardware architecture, giving the programmer a task-oriented interface which is easy to use, to extend, and to maintain. Once the results are computed in a distributed environment, the interactive visualization becomes another challenge, especially when semi-transparent parts are involved, as the rendering order has to be taken into account. Additionally, the distributed rendering nodes do not know the details of their surroundings, such as the existence or complexity of objects in front. Typically, the large-scale computations are distributed in object space, so that one node works exclusively on one part of the scene. As it is too costly to collect all computation results on a single node for rendering, those nodes also have to do the rendering work to achieve interactive frame rates. The resulting parts of the visualization are then sent to specialized display nodes. These display nodes are responsible for compositing the final image, e.g. combining data from multiple sources, and for showing it on display devices. In this context, rendering transparency effects with objects that might intersect each other within a distributed environment is challenging. This thesis presents an approach for rendering object-space-decomposed scenes with semi-transparent parts using "Per-Pixel Linked Lists". Presenting these visualizations on large display walls or on a remote (mobile) device raises the final challenge discussed in this thesis. As the scenes can be either complex or very detailed, and thus large in terms of memory, a single system is not always capable of handling all data for a scene. Typically, display walls that can handle such amounts of data consist of multiple displays or projectors, driven by a number of display nodes, and often have a separate node where an operator controls which part of the scene is displayed. I will describe interaction methods where the user can directly control the visualization on a large display wall using mobile devices, without an operator. The last part of the thesis presents interaction concepts using mobile devices for large displays, allowing the users to control the visualization with a smartphone or tablet. Depending on the data and visualization method, the mobile device can either visualize the data directly or in a reduced form, or use streaming mechanisms so that the user has the same visual impression as a user in front of the display wall. With the mobile application, the user can directly influence any parameter of the visualization and can thus actively steer an interactive presentation. In this thesis, I will present approaches for employing heterogeneous computing environments, from a single PC to networked clusters, show how to use order-independent transparency rendering for local and distributed visualization, and describe interaction methods for large display walls and remote visualization devices. The approaches for heterogeneous computing environments make development easier, especially in terms of support for different hardware platforms. The presented distributed rendering approach enables accurate transparency renderings with far less memory transfer than existing algorithms. For the interaction methods, the usage of ubiquitous mobile devices brings the described approaches to all types of display devices without the need for special hardware. Additionally, a concept for an integrated system containing the contributions of the thesis is proposed. It uses the abstraction layer as a middleware for the computation and visualization operations in the distributed rendering environments. The user controls the application using the methods for mobile device interaction.
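
Per-pixel linked lists realize order-independent transparency by storing every fragment that rasterizes to a pixel and sorting and blending the list afterwards. The resolve step, sketched here on the CPU with made-up fragment values (the actual technique runs in GPU shaders using atomic counters and buffer memory), amounts to depth-sorted front-to-back compositing:

```python
# Resolve step of per-pixel linked lists: each pixel owns an unordered list of
# (depth, rgba) fragments collected during rasterization; blend them in order.

def resolve_pixel(fragments):
    """fragments: list of (depth, (r, g, b, a)); returns the composited RGB."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    # Front-to-back compositing: nearest fragments (smallest depth) first.
    for depth, (r, g, b, a) in sorted(fragments, key=lambda f: f[0]):
        color[0] += transmittance * a * r
        color[1] += transmittance * a * g
        color[2] += transmittance * a * b
        transmittance *= (1.0 - a)
        if transmittance < 1e-3:    # early out once the pixel is nearly opaque
            break
    return tuple(color)

# Hypothetical fragments for one pixel: two intersecting semi-transparent surfaces.
print(resolve_pixel([(0.7, (0.0, 0.0, 1.0, 0.5)), (0.3, (1.0, 0.0, 0.0, 0.4))]))
```
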
Item Open Access
Video visual analytics (2013)
Höferlin, Markus Johannes; Weiskopf, Daniel (Prof. Dr.)

The amount of video data recorded world-wide is growing tremendously and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video impractical. However, automatic evaluation of video material is not reliable enough, especially when it comes to semantic abstraction from the video signal. In this thesis, the visual analytics methodology is applied to the video domain to combine the complementary strengths of human cognition and machine processing. After depicting the challenges of scalable video analysis, a video visual analytics pipeline is proposed that relies on stream processing for scalability. The proposed pipeline consists of six stages that are processed successively (data stream selection, manipulation, feature extraction, filtering, relevance measure, and visualization) before the results are presented to the human analysts. The analysts can interact with and modify each of these stages iteratively. To support sense-making, they can directly integrate and organize reasoning artifacts in a reasoning sandbox. For the video visual analytics pipeline, various methods for the different stages are introduced that address data scalability, task scalability, and situational awareness. This work focuses mainly on the filtering and visualization stages, but provides reviews and discussions of techniques for the other stages as well. In the filtering stage, four interaction guidelines (easy-to-use filter definition, confidence-incorporated filter definition, decision-guided filter definition, and filter feedback) are defined and applied to formulate filters by properties, by sketch, or by example. Due to the suitability of trajectories for filtering, a configurable similarity metric for trajectories is introduced that allows combining different facets (features) with different similarity measures. Besides a survey of video visualization methods, the thesis contributes to the visualization stage with methods for fast-forward video visualization and hierarchical video exploration (the interactive schematic summaries). The VideoPerpetuoGram is extended and applied to different domains (video surveillance and snooker skill training), and an example of video visualization that solely depends on features extracted from video (the layered TimeRadarTrees) is discussed. Moreover, two sonification approaches with the purpose of improving situational awareness are introduced.
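
The configurable similarity metric mentioned above combines several trajectory facets, each with its own similarity measure and weight. A minimal sketch of that idea follows; the facet names, weights, and concrete measures are illustrative assumptions, not the thesis's definition.

```python
import numpy as np

def length_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare path lengths of two trajectories given as (N, 2) point arrays."""
    la = np.linalg.norm(np.diff(a, axis=0), axis=1).sum()
    lb = np.linalg.norm(np.diff(b, axis=0), axis=1).sum()
    return 1.0 - abs(la - lb) / max(la, lb, 1e-9)

def endpoint_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare start and end points, mapped to (0, 1] via a soft falloff."""
    d = np.linalg.norm(a[0] - b[0]) + np.linalg.norm(a[-1] - b[-1])
    return 1.0 / (1.0 + d)

def combined_similarity(a, b, facets):
    """Weighted combination of per-facet similarity measures."""
    total_weight = sum(w for w, _ in facets)
    return sum(w * measure(a, b) for w, measure in facets) / total_weight

t1 = np.array([[0, 0], [1, 1], [2, 2]], dtype=float)
t2 = np.array([[0, 0], [1, 0], [2, 1]], dtype=float)
facets = [(0.5, length_similarity), (0.5, endpoint_similarity)]
print(combined_similarity(t1, t2, facets))
```
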
Item Open Access
Computational visualization of scalar fields (2014)
Ament, Marco; Weiskopf, Daniel (Prof. Dr.)

Scalar fields play a fundamental role in many scientific disciplines and applications. The increasing computational power offers scientists and digital artists novel opportunities for complex simulations, measurements, and models that generate large amounts of data. In technical domains, it is important to understand the phenomena behind the data to advance research and development in the application domain. Visualization is an essential interface between the usually abstract numerical data and the human operators who want to gain insight. In visual media, by contrast, scalar fields often describe complex materials, and their realistic appearance, achieved by means of accurate rendering models and algorithms, is of highest interest. Depending on the application focus, these different requirements on a visualization or rendering must be considered in the development of novel techniques. The first part of this thesis presents three novel optical models that account for the different goals of photorealistic rendering and scientific visualization of volumetric data. In the first case, an accurate description of light transport in the real world is essential for realistic image synthesis of natural phenomena. In particular, physically based rendering aims to produce predictive results for real material parameters. This thesis presents a physically based light transport equation for inhomogeneous participating media that exhibit a spatially varying index of refraction. In addition, an extended photon mapping algorithm is introduced that provides a solution of this optical model. In scientific volume visualization, spatial perception and interactive controllability of the visual representation are usually more important than physical accuracy, which offers researchers more flexibility in developing goal-oriented optical models. This thesis presents a novel illumination model that approximates multiple scattering of light in a finite spherical region to achieve advanced lighting effects like soft shadows and translucency. The main benefit of this contribution is an improved perception of volumetric features with full interactivity of all relevant parameters. Additionally, a novel model for mapping opacity to isosurfaces that have a small but finite extent is presented. Compared to physically based opacity, the presented approach offers improved control over occlusion and visibility of such interval volumes. In addition to the visual representation, the continuously growing data set sizes pose challenges with respect to performance and data scalability. In particular, fast graphics processing units (GPUs) play a central role in current and future developments in distributed rendering and computing. For volume visualization, this thesis presents a parallel algorithm that dynamically decomposes image space and distributes the work load evenly among the nodes of a multi-GPU cluster. The presented technique facilitates illumination with volumetric shadows and achieves data scalability with respect to the combined GPU memory in the cluster domain. Distributed multi-GPU clusters are also becoming increasingly important for solving compute-intense numerical problems. The second part of this thesis presents two novel algorithms for efficiently solving large systems of linear equations in multi-GPU environments. Depending on the driving application, linear systems exhibit different properties with respect to the solution set and the choice of algorithm. Moreover, the special hardware characteristics of GPUs, in combination with the rather slow data transfer rate over a network, pose additional challenges for developing efficient methods. This thesis presents an algorithm, based on compressed sensing, for solving underdetermined linear systems for the volumetric reconstruction of astronomical nebulae from telescope images. The technique exploits the approximate symmetry of many nebulae, combined with regularization and additional constraints, to define a linear system that is solved with iterative forward and backward projections on a distributed GPU cluster. In this way, data scalability is achieved by combining the GPU memory of the entire cluster, which allows one to automatically reconstruct high-resolution models in reasonable time. Despite their high computational power, the fine-grained parallelism of modern GPUs is problematic for certain types of numerical linear solvers. The conjugate gradient algorithm for symmetric and positive definite linear systems is one of the most widely used solvers. Typically, the method is used in conjunction with preconditioning to accelerate convergence. However, traditional preconditioners are not suitable for efficient GPU processing. Therefore, a novel approach is introduced, specifically designed for the discrete Poisson equation, which plays a fundamental role in many applications. The presented approach builds on a sparse approximate inverse of the matrix to exploit the strengths of the GPU.
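
For context, the solver family referred to here is the preconditioned conjugate gradient (PCG) method. The sketch below uses a simple Jacobi (diagonal) preconditioner on a small 1D Poisson matrix as a stand-in; the thesis's contribution is the sparse approximate inverse preconditioner, which is not reproduced here.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive definite A.
    M_inv applies the (approximate) inverse of the preconditioner to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D discrete Poisson matrix (tridiagonal) and a Jacobi preconditioner.
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
jacobi = lambda r: r / np.diag(A)   # divide by the diagonal entries
x = pcg(A, b, jacobi)
print(np.linalg.norm(A @ x - b))    # residual should be tiny
```
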
Item Open Access
Strategies for efficient parallel visualization (2014)
Frey, Steffen; Ertl, Thomas (Prof. Dr.)

Visualization is a crucial tool for analyzing data and gaining a deeper understanding of underlying features. In particular, interactive exploration has proven to be indispensable, as it can provide new insights beyond the original focus of analysis. However, efficient interaction requires almost immediate feedback to user input, and achieving this poses a big challenge for the visualization of data that is ever-growing in size and complexity. This motivates the increasing effort in recent years towards high-performance visualization using powerful parallel hardware architectures. The analysis and rendering of large volumetric grids and time-dependent data is particularly challenging. Despite many years of active research, significant improvements are still required to enable efficient explorative analysis for many use cases and scenarios. In addition, while many diverse kinds of approaches have been introduced to tackle different angles of the issue, no consistent scheme exists to classify previous efforts and to guide further development. This thesis presents research that enables or improves interactive analysis in various areas of scientific visualization. To begin with, new techniques for the interactive analysis of time-dependent field and particle data are introduced, focusing both on the expressiveness of the visualization and on a structure allowing for efficient parallel computing. Volume rendering is a core technique in scientific visualization that induces significant cost. In this work, approaches are presented that decrease this cost by means of a new acceleration data structure, and that handle it dynamically by adapting the progressive visualization process on the fly, based on the estimation of spatio-temporal errors. In addition, view-dependent representations are presented that reduce both the size and the rendering cost of volume data with only minor quality impact for a range of camera configurations. Remote and in-situ rendering approaches are discussed for enabling interactive volume visualization without having to move the actual volume data. In detail, an approach for integrated adaptive sampling and compression is introduced, as well as a technique allowing for user prioritization of critical results. Computations are further dynamically redistributed to reduce load imbalance. In detail, this encompasses the handling of divergence issues on GPUs, the adaptation of the volume data assigned to each node for rendering in distributed GPU clusters, and the detailed consideration of the different performance characteristics of the components in a heterogeneous system. From these research projects, a variety of generic strategies towards high-performance visualization is extracted, ranging from the parallelization of the program structure and algorithmic optimization to the efficient execution on parallel hardware architectures. The introduced strategy tree further provides a consistent and comprehensive hierarchical classification of these strategies. It can provide guidance during development to identify and exploit potential for improving the performance of visualization applications, and it can be used as an expressive taxonomy for research on high-performance visualization and computer graphics.
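
The dynamic redistribution mentioned above can be illustrated by a simple cost-proportional scheme: assign volume bricks so that each node's share is proportional to its measured throughput from the previous frame. This is a hedged sketch of the general idea only, not the thesis's scheduling algorithm; node names and timings are made up.

```python
def redistribute(num_bricks: int, seconds_per_brick: dict) -> dict:
    """Assign bricks proportionally to each node's measured throughput."""
    throughput = {node: 1.0 / t for node, t in seconds_per_brick.items()}
    total = sum(throughput.values())
    shares = {node: int(round(num_bricks * tp / total))
              for node, tp in throughput.items()}
    # Fix rounding so that exactly num_bricks are assigned.
    diff = num_bricks - sum(shares.values())
    fastest = max(throughput, key=throughput.get)
    shares[fastest] += diff
    return shares

# Hypothetical timings from the previous frame (seconds per brick).
print(redistribute(256, {"node0": 0.020, "node1": 0.035, "node2": 0.050}))
```
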