13 Zentrale Universitätseinrichtungen (Central University Facilities)

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/14


Search Results

Now showing 1 - 6 of 6
  • Item (Open Access)
    Visualization and mesoscopic simulation in systems biology
    (2013) Falk, Martin Samuel; Ertl, Thomas (Prof. Dr.)
    A better understanding of the internal mechanisms and interplays within a single cell is key to the understanding of life. The focus of this thesis lies on the mechanism of cellular signal transduction, i.e. relaying a signal from outside the cell by different means of transport toward its target inside the cell. Besides experiments, understanding can also be achieved by numerical simulations of cellular behavior, which require theoretical models to be designed and evaluated. Here, systems biology closely relates to, and depends on, recent research results in computer science for the modeling, the simulation, and the analysis of the computational results. Since a single cell can consist of billions of atoms, the simulation of intracellular processes requires a simplified, mesoscopic model. The simulation domain has to be three-dimensional to consider the spatial, possibly asymmetric, intracellular architecture filled with individual particles representing signaling molecules. In contrast to continuous models defined by systems of partial differential equations, a particle-based model allows tracking individual molecules moving through the cell. The overall process of signal propagation usually requires between minutes and hours to complete, but the movement of molecules and the interactions between them have to be determined in the microsecond range. Hence, the computation of thousands of consecutive time steps is necessary, requiring several hours or even days of computational time for a non-parallel simulation. To speed up the simulation, the parallel hardware of current central processing units (CPUs) and graphics processing units (GPUs) can be employed. Finally, the resulting data has to be analyzed by domain experts and, therefore, has to be represented in meaningful ways. Typical prevalent analysis methods include the aggregation of the data in tables or simple 2D graph plots, sometimes 3D plots for continuous data. Although techniques for interactive visualization of data in 3D are well known, none of these methods have so far been applied to the biological context of single-cell models, and specialized visualizations fitted to the experts' needs are missing. Another issue is the hardware available to the domain experts for the task of visualizing the increasing amount of time-dependent data resulting from simulations. It is important that the visualization keeps up with the simulations to ensure that domain experts can still analyze their data sets. To deal with the massive amount of data to come, compute clusters with specialized hardware dedicated to data visualization will be necessary. It is thus important to develop visualization algorithms for this dedicated hardware, which is currently available in the form of GPUs. In this thesis, the computational power of recent many-core architectures (CPUs and GPUs) is harnessed for both the simulation and the visualizations. Novel parallel algorithms are introduced to parallelize the spatio-temporal, mesoscopic particle simulation to fit the architectures of CPU and GPU in a similar way (a minimal illustrative sketch of such a particle-diffusion step follows this abstract). Besides molecular diffusion, the simulation considers extracellular effects on the signal propagation as well as the import of molecules into the nucleus and a dynamic cytoskeleton. An extensive comparison between different configurations is performed, leading to the conclusion that the usage of GPUs is not always beneficial.
    For the visual data analysis, novel interactive visualization techniques were developed to visualize the 3D simulation results. Existing glyph-based approaches are combined in a new way, facilitating the visualization of the individual molecules in the interior of the cell as well as their trajectories. A novel implementation of the depth-of-field effect, combined with additional depth cues and coloring, aids visual perception and reduces visual clutter. To obtain a continuous signal distribution from the discrete particles, techniques known from volume rendering are employed (see the density-reconstruction sketch below). The visualization of the underlying atomic structures provides new detailed insights and can be used for educational purposes besides showing the original data. A microscope-like visualization makes it possible, for the first time, to generate images of synthetic data similar to images obtained in wet-lab experiments. The simulation and the visualizations are merged into a prototypical framework, thereby supporting the domain expert during the different stages of model development, i.e. design, parallel simulation, and analysis. Although the proposed methods for both simulation and visualization were developed with the study of single-cell signal transduction processes in mind, they are also applicable to models consisting of several cells and to other particle-based scenarios. Examples in this thesis include the diffusion of drugs into a tumor, the detection of protein cavities, and molecular dynamics data from laser ablation simulations, among others.
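    As an illustration of the kind of particle-based diffusion update described above, here is a minimal sketch in Python/NumPy. It assumes simple Brownian motion with a fixed diffusion coefficient and a reflecting spherical cell boundary; all names and parameter values are illustrative, not taken from the thesis.

      import numpy as np

      def diffusion_step(positions, D, dt, rng, domain_radius=1.0):
          """Advance particles by one Brownian-motion step: per-axis
          displacement ~ N(0, 2*D*dt). Particles leaving the spherical
          cell domain are reflected back inside (an assumed, simple
          boundary model)."""
          sigma = np.sqrt(2.0 * D * dt)
          positions = positions + rng.normal(0.0, sigma, positions.shape)
          # Reflect particles that crossed the membrane: new radius = 2R - r.
          r = np.linalg.norm(positions, axis=1)
          outside = r > domain_radius
          positions[outside] *= (2.0 * domain_radius / r[outside] - 1.0)[:, None]
          return positions

      rng = np.random.default_rng(42)
      pos = rng.uniform(-0.5, 0.5, (10_000, 3))   # 10k signaling molecules
      for _ in range(1000):                       # microsecond-scale time steps
          pos = diffusion_step(pos, D=1e-3, dt=1e-6, rng=rng)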
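    The reconstruction of a continuous signal distribution from the discrete particles can be sketched, for instance, as nearest-grid-point splatting onto a regular voxel grid that a standard volume renderer could then consume. This is an assumed, simplified reconstruction, not necessarily the one used in the thesis:

      import numpy as np

      def particles_to_density(positions, grid_shape=(64, 64, 64), extent=1.0):
          """Splat particle positions into a regular 3D density grid
          (nearest-grid-point assignment), yielding a scalar field
          suitable for volume rendering."""
          # Map positions from [-extent, extent]^3 to voxel indices.
          idx = ((positions + extent) / (2 * extent) * np.array(grid_shape)).astype(int)
          idx = np.clip(idx, 0, np.array(grid_shape) - 1)
          density = np.zeros(grid_shape)
          np.add.at(density, tuple(idx.T), 1.0)   # accumulate counts per voxel
          return density / density.max()          # normalize to [0, 1]

      density = particles_to_density(pos)         # pos from the sketch above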
  • Item (Open Access)
    Advanced visualization techniques for flow simulations : from higher-order polynomial data to time-dependent topology
    (2013) Üffinger, Markus; Ertl, Thomas (Prof. Dr.)
    Computational fluid dynamics (CFD) has become an important tool for predicting fluid behavior in research and industry. Today, in the era of tera- and petascale computing, the complexity and the size of simulations have reached a state where an extremely large amount of data is generated that has to be stored and analyzed. An indispensable instrument for such analysis is provided by computational flow visualization. It helps in gaining insight and understanding of the flow and its underlying physics, which are subject to a complex spectrum of characteristic behavior, ranging from laminar to turbulent or even chaotic characteristics, all of these taking place on a wide range of length and time scales. The simulation side tries to address and control this vast complexity by developing new sophisticated models and adaptive discretization schemes, resulting in new types of data. Examples of such emerging simulations are generalized finite element methods or hp-adaptive discontinuous Galerkin schemes of high order. This work addresses the direct visualization of the resulting higher-order field data, avoiding the traditional resampling approach to enable a more accurate visual analysis. The second major contribution of this thesis deals with the inherent complexity of fluid dynamics. New feature-based and topology-based visualization algorithms for unsteady flow are proposed to reduce the vast amounts of raw data to their essential structure. For the direct visualization, pixel-accurate techniques are presented for 2D field data from generalized finite element simulations, which consist of a piecewise polynomial part of high order enriched with problem-dependent ansatz functions. Secondly, a direct volume rendering system for hp-adaptive finite elements, which combine an adaptive grid discretization with piecewise polynomial higher-order approximations, is presented. The parallel GPU implementation runs on single workstations as well as on clusters, enabling real-time generation of high-quality images and interactive exploration of the volumetric polynomial solution. Methods for visual debugging of these complex simulations are also important and are presented. Direct flow visualization is complemented by new feature-based and topology-based methods. A promising approach for analyzing the structure of time-dependent vector fields is provided by finite-time Lyapunov exponent (FTLE) fields. In this work, interactive methods are presented that help in understanding the cause of FTLE structures, and novel approaches to FTLE computation are developed to account for the linearization error made by traditional methods. Building on this, it is investigated under which circumstances FTLE ridges represent Lagrangian coherent structures (LCS), the time-dependent counterpart to separatrices of traditional "steady" vector field topology. As a major result, a novel time-dependent 3D vector field topology concept based on streak surfaces is proposed. Streak LCS offer a higher quality than corresponding FTLE ridges, and animations of streak LCS can be computed at comparably low cost, alleviating the topological analysis of complex time-dependent fields.
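    The basic flow-map-based FTLE computation mentioned above can be sketched as follows: seed a grid of particles, advect it through the time-dependent field, and take the largest eigenvalue of the Cauchy-Green tensor of the flow-map gradient. The double-gyre test field and all parameters are standard illustrative choices, not data from the thesis.

      import numpy as np

      def double_gyre(t, p, A=0.1, eps=0.25, om=2 * np.pi / 10):
          """Classic time-dependent 2D test flow (illustrative)."""
          x, y = p[..., 0], p[..., 1]
          a, b = eps * np.sin(om * t), 1 - 2 * eps * np.sin(om * t)
          f = a * x**2 + b * x
          dfdx = 2 * a * x + b
          u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
          v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
          return np.stack([u, v], axis=-1)

      def ftle(t0, T, nx=200, ny=100, steps=200):
          """FTLE = 1/|T| * log(sqrt(lambda_max(C))) with C the
          Cauchy-Green tensor of the flow map, estimated here by
          central differences on an advected seed grid."""
          x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
          p = np.stack([x, y], axis=-1)
          dt, t = T / steps, t0
          for _ in range(steps):                  # 4th-order Runge-Kutta
              k1 = double_gyre(t, p)
              k2 = double_gyre(t + dt / 2, p + dt / 2 * k1)
              k3 = double_gyre(t + dt / 2, p + dt / 2 * k2)
              k4 = double_gyre(t + dt, p + dt * k3)
              p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
              t += dt
          dpdx = np.gradient(p, x[0], axis=1)     # flow-map gradient
          dpdy = np.gradient(p, y[:, 0], axis=0)
          C11 = dpdx[..., 0]**2 + dpdx[..., 1]**2
          C22 = dpdy[..., 0]**2 + dpdy[..., 1]**2
          C12 = dpdx[..., 0] * dpdy[..., 0] + dpdx[..., 1] * dpdy[..., 1]
          lmax = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22)**2 + C12**2)
          return np.log(np.sqrt(lmax)) / abs(T)

      # Example: forward FTLE field over one period of the flow.
      field = ftle(t0=0.0, T=10.0)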
  • Item (Open Access)
    Visualization techniques for parallel coordinates
    (2013) Heinrich, Julian; Weiskopf, Daniel (Prof. Dr.)
    Visualization plays a key role in knowledge discovery, visual data exploration, and visual analytics. Static images are an effective tool for visual communication, summarization, and pattern extraction in large and complex datasets. Only together with human-computer interaction techniques do visual interfaces enable an analyst to explore large information spaces and to drive the whole analytical reasoning process. Scatterplots and parallel coordinates are well-recognized visualization techniques that are commonly employed in statistics (both exploratory and descriptive) and data mining, but are also gaining importance for scientific visualization. While scatterplots are restricted to the display of at most three dimensions due to the orthogonal layout of coordinate axes, a parallel arrangement allows for the visualization of multiple attributes of a dataset. Although both techniques rely on projections of higher-dimensional geometry and are related by a point–line duality, parallel coordinates enjoy great popularity for the visualization and analysis of multivariate data. Despite their popularity, parallel coordinates are subject to a number of limitations that remain to be solved. For large datasets, the potentially high amount of overlapping lines may hinder the observer from visually extracting meaningful patterns. Encoding observations with polylines makes it difficult to follow lines over all dimensions, as they lose visual continuation across the axes. Clusters cannot be represented by the geometry of lines, and the order of axes has a high impact on the patterns exhibited by parallel coordinates. This thesis presents visualization techniques for parallel coordinates that address these limitations. As a foundation, an extensive review of the state of the art of parallel coordinates is given. Based on the point–line duality, the existing model of continuous scatterplots is adapted to parallel coordinates for the visualization of data defined on continuous domains. To speed up computation and obtain interactive frame rates, a scalable and progressive rendering algorithm is introduced that further allows for arbitrary reconstruction and interpolation schemes. A curve-bundling model for parallel coordinates is evaluated with a user study, showing that bundling is effective for cluster visualization based on geometric cues while being equally capable of revealing correlations between neighboring axes. To address the axis-order problem, a graph-based approach is presented that allows for the visualization of all pairwise relations in a matrix layout without redundancy. Finally, the use of parallel coordinates is demonstrated for real datasets from computational fluid dynamics, motion capturing, bioinformatics, and systems biology.
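    A minimal sketch of the basic parallel-coordinates rendering that the techniques above build on: each observation becomes one polyline across min-max normalized, parallel axes. Matplotlib is used here; the dataset and styling are illustrative assumptions.

      import numpy as np
      import matplotlib.pyplot as plt

      def parallel_coordinates(data, labels, color="steelblue", alpha=0.2):
          """Draw one polyline per observation across vertically arranged,
          min-max normalized axes (a basic rendering; curve bundling and
          continuous densities extend this idea)."""
          data = np.asarray(data, dtype=float)
          lo, hi = data.min(axis=0), data.max(axis=0)
          norm = (data - lo) / np.where(hi > lo, hi - lo, 1.0)
          xs = np.arange(data.shape[1])
          fig, ax = plt.subplots(figsize=(8, 4))
          ax.plot(xs, norm.T, color=color, alpha=alpha, linewidth=0.7)
          for x in xs:                            # one vertical axis per dim
              ax.axvline(x, color="black", linewidth=0.8)
          ax.set_xticks(xs)
          ax.set_xticklabels(labels)
          ax.set_yticks([])
          return fig

      # Example: 500 samples of a correlated 4-dimensional dataset.
      rng = np.random.default_rng(0)
      base = rng.normal(size=(500, 1))
      data = np.hstack([base, -base + 0.3 * rng.normal(size=(500, 1)),
                        rng.normal(size=(500, 2))])
      parallel_coordinates(data, ["d1", "d2", "d3", "d4"])
      plt.show()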
  • Item (Open Access)
    MPI-semantic memory checking tools for parallel applications
    (2013) Fan, Shiqing; Resch, Michael (Prof. Dr.-Ing.)
    The Message Passing Interface (MPI) is a language-independent application interface that provides a standard for communication among the processes of programs running on parallel computers, clusters, or heterogeneous networks. However, writing correct and portable MPI applications is difficult: inconsistent or incorrect use of parameters may occur, and the subtle semantic differences of various MPI calls may be exploited inconsistently or incorrectly even by expert programmers. MPI implementations typically perform only minimal sanity checks in order to achieve the highest possible performance. Although many interactive debuggers have been developed or extended to handle the concurrent processes of MPI applications, there are still numerous classes of bugs which are hard or even impossible to find with a conventional debugger. Many memory conflicts or errors, for example overlapping accesses or segmentation faults, do not provide enough useful information for the programmer to solve the problem. This is even worse for MPI applications: the flexible, high-frequency parallel use of memory in the MPI standard makes it more difficult to observe memory problems in the traditional way. Currently, no debugger is available that specifically targets MPI-semantic memory errors, i.e. the detection of memory problems or potential errors according to the standard. For this specific purpose, memory checking tools have been implemented in this dissertation, and the corresponding frameworks for parallel applications based on MPI semantics have been developed in Open MPI, using different existing memory debugging tool interfaces. Developers are able to detect hard-to-find bugs such as memory violations, buffer overruns, inconsistent parameters, and so on. These memory checking tools provide detailed, comprehensible error messages that are most helpful for MPI developers. Furthermore, the memory checking frameworks may also help improve the performance of MPI-based parallel applications by detecting whether communicated data is actually used. The new memory checking tools may also be used in other projects or debuggers to perform different memory checks. They do not only apply to MPI parallel applications, but may also be used in other kinds of applications that require memory checking. The technology allows programmers to implement their own memory checking functionality in a flexible way: they may define what information they want to know about the memory and how the memory in the application should be checked and reported. The world of high performance computing is Linux-dominated and open-source based; however, Microsoft is also playing an increasingly important role in this domain, establishing its foothold with Windows HPC Server 2008 R2. In this work, the advantages and disadvantages of these two HPC operating systems are discussed. To improve programmability and portability, a version of Open MPI for Windows with several newly developed key components is introduced, together with a corresponding implementation of the memory checking tool on Windows. This dissertation has five main chapters: after an introduction and a review of the state of the art, the development of Open MPI for the Windows platform is described, including the work on InfiniBand network support. Chapter four presents the methods explored and the opportunities for error analysis of memory accesses.
    Moreover, it also describes the two tools implemented for this work, based on Intel PIN and on Valgrind, as well as their integration into the Open MPI library. In chapter five, the methods are evaluated with several benchmarks (NetPIPE, IMB, and NPB) and with real applications (a heat conduction application and the MD package Gromacs). It is shown that the instrumentation added by the tool has no significant overhead (1.2% to 2.5% on the latency in NetPIPE) and accordingly no impact on application benchmarks such as NPB or Gromacs. When an application is executed under the memory access analysis tools, the execution time naturally increases, by up to 30x; with the presented MemPin tool, the slowdown is only about half of that. The methods prove successful in the sense that unnecessarily communicated data can be found in the heat conduction application and in Gromacs; in the first case, the communication time of the application is thereby reduced by 12%.
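    A toy sketch of the buffer-state tracking idea behind such MPI-semantic memory checks: shadow states record whether a buffer is owned by a pending non-blocking operation, has been received, or has been read, and suspicious transitions are reported. This is purely illustrative Python; the actual tools hook Valgrind and Intel PIN into the Open MPI library at runtime.

      # Shadow states for communication buffers (illustrative names).
      RECEIVED, READ, IN_FLIGHT = "received", "read", "in_flight"

      class ShadowChecker:
          def __init__(self):
              self.state = {}                    # buffer id -> shadow state

          def on_irecv_post(self, buf):
              self.state[buf] = IN_FLIGHT        # buffer now owned by MPI

          def on_recv_complete(self, buf):
              self.state[buf] = RECEIVED         # data arrived, not yet read

          def on_read(self, buf):
              if self.state.get(buf) == IN_FLIGHT:
                  print(f"error: {buf} read while a receive is still pending")
              self.state[buf] = READ

          def on_write(self, buf):
              if self.state.get(buf) == IN_FLIGHT:
                  print(f"error: {buf} overwritten during pending communication")
              elif self.state.get(buf) == RECEIVED:
                  print(f"warning: {buf} was communicated but never read")
              self.state[buf] = None

          def report_unread(self):
              for buf, s in self.state.items():  # unnecessary communication
                  if s == RECEIVED:
                      print(f"warning: data in {buf} was received but never used")

      checker = ShadowChecker()
      checker.on_irecv_post("halo")
      checker.on_read("halo")                    # flagged: receive still pending
      checker.on_recv_complete("halo")
      checker.report_unread()                    # flagged: received, never read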
  • Item (Open Access)
    Video visual analytics
    (2013) Höferlin, Markus Johannes; Weiskopf, Daniel (Prof. Dr.)
    The amount of video data recorded worldwide is growing tremendously and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material is not reliable enough, especially when it comes to semantic abstraction from the video signal. In this thesis, the visual analytics methodology is applied to the video domain to combine the complementary strengths of human cognition and machine processing. After depicting the challenges of scalable video analysis, a video visual analytics pipeline is proposed that relies on stream processing for scalability. The proposed pipeline consists of six stages that are processed successively (data stream selection, manipulation, feature extraction, filtering, relevance measure, and visualization) before the results are presented to the human analysts. The human analysts can interact with and modify each of these stages iteratively. To support sense-making, they can directly integrate and organize reasoning artifacts in a reasoning sandbox. For the video visual analytics pipeline, various methods for the different stages are introduced that address data scalability, task scalability, and situational awareness. This work focuses mainly on the filtering and visualization stages, but provides reviews and discussions of techniques for the other stages as well. In the filtering stage, four interaction guidelines (easy-to-use filter definition, confidence-incorporated filter definition, decision-guided filter definition, and filter feedback) are defined and applied to formulate filters by properties, by sketch, or by example. Owing to the suitability of trajectories for filtering, a configurable similarity metric for trajectories is introduced that allows combining different facets (features) with different similarity measures. Besides a survey of video visualization methods, the thesis contributes to the visualization stage with methods for fast-forward video visualization and hierarchical video exploration (the interactive schematic summaries). The VideoPerpetuoGram is extended and applied to different domains (video surveillance and snooker skill training), and an example of video visualization that solely depends on features extracted from video (the layered TimeRadarTrees) is discussed. Moreover, two sonification approaches aimed at improving situational awareness are introduced.
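    A configurable trajectory similarity metric of the kind described above could, for example, combine per-facet distances with user-chosen weights, as in the following sketch. The two facets (position and direction), the arc-length resampling, and the weighting are illustrative assumptions, not the thesis' exact formulation.

      import numpy as np

      def resample(traj, n=32):
          """Resample a polyline trajectory of shape (k, 2) to n points
          equidistant in arc length, so trajectories become comparable."""
          seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
          s = np.concatenate([[0.0], np.cumsum(seg)])
          t = np.linspace(0.0, s[-1], n)
          return np.stack([np.interp(t, s, traj[:, i]) for i in range(2)], axis=1)

      def position_dist(a, b):
          """Mean pointwise Euclidean distance (the 'position' facet)."""
          return np.linalg.norm(a - b, axis=1).mean()

      def direction_dist(a, b):
          """Mean angular dissimilarity of segments (the 'direction' facet)."""
          da, db = np.diff(a, axis=0), np.diff(b, axis=0)
          cos = np.einsum("ij,ij->i", da, db) / (
              np.linalg.norm(da, axis=1) * np.linalg.norm(db, axis=1) + 1e-12)
          return 1.0 - cos.mean()

      def combined_similarity(a, b, weights=None):
          """Weighted combination of per-facet distances, mapped to a
          similarity score in (0, 1]."""
          weights = weights or {"position": 0.7, "direction": 0.3}
          a, b = resample(a), resample(b)
          d = (weights["position"] * position_dist(a, b)
               + weights["direction"] * direction_dist(a, b))
          return 1.0 / (1.0 + d)

      a = np.array([[0, 0], [1, 0.1], [2, 0.0]])
      b = np.array([[0, 0.2], [1, 0.3], [2, 0.1]])
      print(combined_similarity(a, b))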
  • Item (Open Access)
    Interactive visual analysis of vector fields
    (2013) Bachthaler, Sven; Weiskopf, Daniel (Prof. Dr.)
    Visualization is a very active research area for several reasons. For years, data sets have been getting larger and more complex, increasing the difficulty of handling this data. Furthermore, in technical application areas, visualization is an essential part of the engineering process. These developments drive the need for improvements in all aspects of scientific visualization, as well as the integration of information visualization techniques. This thesis focuses on the development of visualization and analysis techniques for different types of vector fields: vector fields representing the flow of air or water, but also magnetic fields and vector fields derived computationally from scalar fields. The different techniques that were developed to handle such fields are organized in three parts: the first part presents methods that visualize vector fields in a dense manner. The second part discusses methods that rely on topological approaches; the complexity of the visualization is reduced by concentrating on features of the data. In the third and final part, continuous scatterplots are introduced, which are designed to analyze correlations in multivariate data sets. In the first part, the goal is to show as much information as possible, using every available pixel of the viewport to do so. However, one of the challenges of dense visualization methods is to maintain interactivity for high-resolution visualizations. A cluster environment is used here to offer increased rendering performance and memory capacity for large and complex data sets. Additionally, an animation-based approach is presented that allows one to decouple the line-like patterns of line integral convolution (LIC) from the direction of animation. This decoupling is desirable since perception research suggests that LIC-based techniques combined with animation are non-optimal for local motion detection by the human visual system. The second part focuses on topological methods to filter the data and hence reduce the complexity of the resulting visualization. For time-dependent vector fields, Lagrangian coherent structures are used to visualize space-time manifolds that represent the topology of these fields. Furthermore, the dynamics of such fields are visualized directly on these space-time manifolds, allowing us to quantify the hyperbolicity close to the topological skeleton. In addition, another technique presented in the second part allows one to visualize the topology of magnetic fields based on dipoles. Here, traditional topological methods are non-optimal; hence, an alternative topology is developed that visualizes the existence and magnitude of magnetic flux between dipoles. In the final part, the mathematical basis and several computational approaches for computing continuous scatterplots are presented. These plots are designed to work with data sets defined on a continuous domain, which is typical for scientific visualization data. In contrast to traditional scatterplots, they visualize the density in the data domain instead of merely plotting data attached to discrete sampling positions. The additional computational approaches improve on the original approach in terms of flexibility: they allow a trade-off between output quality and rendering performance, as well as the use of generic interpolation methods.
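    A continuous scatterplot of two scalar fields f and g can be approximated by sampling the continuous spatial domain densely and estimating the density that the mapping (f, g) induces in the data domain, instead of plotting individual data points. A Monte-Carlo sketch with illustrative analytic fields (the thesis develops exact and more flexible computational approaches):

      import numpy as np
      import matplotlib.pyplot as plt

      # Dense sampling of the unit-square spatial domain.
      rng = np.random.default_rng(1)
      xy = rng.uniform(0.0, 1.0, (2_000_000, 2))
      # Two illustrative scalar fields defined on that domain.
      f = np.sin(2 * np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])
      g = xy[:, 0] ** 2 + xy[:, 1]

      # Density in the data domain (f, g), not in the spatial domain.
      density, fe, ge = np.histogram2d(f, g, bins=256)
      plt.imshow(np.log1p(density.T), origin="lower", aspect="auto",
                 extent=[fe[0], fe[-1], ge[0], ge[-1]], cmap="magma")
      plt.xlabel("f"); plt.ylabel("g")
      plt.title("continuous scatterplot (Monte-Carlo approximation)")
      plt.show()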