05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 3 of 3
  • Open Access
    Prediction and similarity models for visual analysis of spatiotemporal data
    (2022) Tkachev, Gleb; Ertl, Thomas (Prof. Dr.)
    Ever since the early days of computers, their use has been essential in the natural sciences. Whether through simulation, computer-aided observation or data processing, progress in computer technology has been mirrored by the constant growth in the size of scientific data. Unfortunately, as data sizes grow while human capabilities remain constant, it becomes increasingly difficult to analyze and understand the data. Over the last decades, visualization experts have proposed many approaches to address this challenge, but even these methods have their limitations. Fortunately, recent advances in the field of machine learning can provide the tools needed to overcome the obstacle. Machine learning models are a particularly good fit, as they can both benefit from the large amounts of data present in the scientific context and allow the visualization system to adapt to the problem at hand. This thesis presents research into how machine learning techniques can be adapted and extended to enable visualization of scientific data. It introduces a diverse set of techniques for the analysis of spatiotemporal data, including detection of irregular behavior, self-supervised similarity metrics, automatic selection of visual representations and more. It also discusses the general challenges of applying machine learning to scientific visualization and how to address them.
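The self-supervised similarity metrics mentioned in the abstract learn feature embeddings from the data itself. As a minimal, hand-crafted stand-in (not the thesis's learned models), one can summarise each time step of a 2D scalar field by per-patch statistics and compare time steps via cosine similarity; all names and parameters here are illustrative:

```python
import numpy as np

def patch_features(field, patch=8):
    """Summarise a 2D scalar field by per-patch mean and std -- a simple,
    hand-crafted substitute for a learned feature extractor."""
    h, w = field.shape
    h, w = h - h % patch, w - w % patch  # crop to a multiple of the patch size
    blocks = field[:h, :w].reshape(h // patch, patch, w // patch, patch)
    means = blocks.mean(axis=(1, 3)).ravel()
    stds = blocks.std(axis=(1, 3)).ravel()
    return np.concatenate([means, stds])

def similarity(f1, f2, patch=8):
    """Cosine similarity between the patch-feature vectors of two time steps."""
    a, b = patch_features(f1, patch), patch_features(f2, patch)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A sudden drop in similarity between consecutive time steps could then flag irregular behaviour for closer inspection.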
  • Open Access
    Performance quantification of visualization systems
    (2022) Bruder, Valentin; Ertl, Thomas (Prof. Dr.)
    Visualization is an important part of data analysis, complementing automatic data processing to provide insight into the data and an understanding of its underlying structure or patterns. A visualization system describes a visualization algorithm running on a specific compute architecture or device. Runtime performance is crucial for visualization systems, especially in the context of ever-growing data sizes and complexity. One reason for this is the importance of interactivity; another is to provide the opportunity for a comprehensive investigation of generated data in a limited time frame. Providing the possibility of changing the perspective beyond the original focus has been shown to be particularly helpful for explorative data analysis. Performance optimization is also key to saving costs during visualization on supercomputers due to the high demand for their compute time. Being able to predict runtime enables better resource planning and optimized scheduling on such devices. The central research questions addressed in this thesis are threefold and build on each other: How can we quantify the runtime performance of visualization systems? How can this information be used to develop models for prediction? And ultimately: How can both aspects be integrated in the application context? The goal is to gain a comprehensive understanding of the runtime performance of visualization systems and to optimize them to save costs and improve the user experience. Despite many works in this direction, there are still open questions and challenges on the way to this goal. One of these challenges is the diversity of compute architectures used for visualization, ranging from mobile devices to supercomputers. Most visualization algorithms profit from running in parallel. However, this poses another challenge for performance quantification due to the use of multiple heterogeneous levels of hardware parallelism.
Typically, visualization algorithms deal with large data, sparse regions, and interactivity requirements. Further, they can be fundamentally different in their rendering approaches. All these aspects make reliable performance prediction difficult. This thesis addresses those challenges and presents research on performance evaluation, modeling, and prediction of visualization systems, and on how to translate these concepts into improvements of performance-critical applications. Assessing runtime performance plays a key role in understanding and improving it. A new framework for the extensive and systematic performance evaluation of interactive visualizations is introduced to help gain a deeper understanding of runtime behavior and rendering parameter dependencies. Based on the current practice of runtime performance evaluation in the literature, a database of performance measurements is created. A list of best practices on how to improve performance evaluation is compiled based on a statistical analysis of the data. Additionally, a frontend has been developed to visually compare the rendering performance data from multiple perspectives. With a fundamental understanding of an application's runtime behavior, performance can be modeled and the model used for prediction. New techniques are introduced for the hardware systems typically used for the visualization of large data sets: desktop computers featuring dedicated graphics hardware, and high-performance distributed-memory systems. For the former, an on-line performance prediction method is used to dynamically tune volume rendering during runtime to guarantee interactivity. For image database generation on distributed-memory systems, a hybrid approach for dynamic load balancing during in situ visualization is introduced. This work also explores how human perceptual properties can be used to improve the performance of visualization applications.
Two novel techniques are introduced that adapt rendering quality to the human visual system by tracking the user's gaze and changing the visualization accordingly. In this thesis, a special focus is set on volume rendering. Performance optimization makes it possible to use volume rendering to visualize data outside the typical use cases. Two visualization systems are presented that use volume rendering at their core: one for the interactive exploration of large dynamic graphs and one for the space-time visualization of gaze and stimulus data. Overall, this thesis advances the state of the art by introducing new ways to assess, model, and predict the runtime performance of visualization systems that can be used to improve usability and realize cost savings. This is demonstrated through several applications.
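The on-line tuning of volume rendering described above can be sketched, under strong simplifying assumptions, as a feedback loop that adjusts a quality parameter to hold the frame time near an interactivity target. This damped proportional rule is a hypothetical stand-in for the prediction models developed in the thesis; the class and parameter names are illustrative:

```python
class QualityController:
    """Adjust a rendering quality factor to keep frame time near a target.
    A simple multiplicative feedback rule -- not the thesis's learned
    performance model, just an illustration of runtime quality tuning."""

    def __init__(self, target_ms=33.0, quality=1.0, lo=0.1, hi=1.0):
        self.target_ms = target_ms  # frame-time budget (~30 fps here)
        self.quality = quality      # e.g. sampling density in [lo, hi]
        self.lo, self.hi = lo, hi

    def update(self, frame_ms):
        """Feed back the last measured frame time; returns the new quality."""
        # Assume cost scales roughly linearly with quality.
        ratio = self.target_ms / max(frame_ms, 1e-6)
        # Damped (square-root) adjustment to avoid oscillation.
        self.quality *= ratio ** 0.5
        self.quality = min(self.hi, max(self.lo, self.quality))
        return self.quality
```

A renderer would call `update` once per frame and use the returned factor to scale, for example, its ray-sampling step size.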
  • Open Access
    Interactive remote-visualisation for large displays
    (2022) Frieß, Florian; Ertl, Thomas (Prof. Dr.)
    While visualisation often strives for abstraction, the interactive exploration of large scientific data sets, such as densely sampled 3D fields, massive particle data sets or molecular visualisations, still benefits from rendering their graphical representation in large detail on high-resolution displays such as Powerwalls or tiled display walls. With the ever-growing size of data and the increased availability of the aforementioned displays, collaboration becomes desirable in the sense of sharing such a visualisation running at one site in real time with another high-resolution display at a remote site. While most desktop computers - and in turn the visualisation software running on them - are alike, large high-resolution display setups are often unique, making use of multiple GPUs, a GPU cluster or only CPUs to drive the display. Therefore, particularly if the goal is the interactive scientific visualisation of large data sets, unique software might have to be written for a unique display and compute system. Molecular visualisation is one application domain in which users would clearly benefit from being able to collaborate remotely, combining video and audio conference setups with the possibility of sharing high-resolution interactive visualisations. However, for large - often tiled - displays and image resolutions beyond 4K, no obvious generic, let alone commercial, solution exists. While there are specialised solutions that support sharing the output of these displays, based on hardware-accelerated video encoding, they compromise between quality and bandwidth: they either deliver a high-quality image and therefore induce bandwidth requirements that cannot generally be met, or they uniformly decrease the quality to maintain adequate frame rates. In visualisation in particular, however, details are crucial in the areas currently being investigated.
Hence, interactive remote-visualisation for high-resolution displays requires new methods that can run on different hardware setups and offer high image quality while reducing the required bandwidth as much as possible. In this dissertation, an innovative technique for rendering and comparing molecular surfaces as well as a novel system that supports interactive remote-visualisation, for molecular surfaces and other scientific visualisations, between different high-resolution displays are introduced and discussed. The rendering technique resolves the view dependency and occlusion of the three-dimensional representation of molecular surfaces by showing the topography and the physico-chemical properties of the surface in a single image. This also allows analysts to compare and cluster the images in order to understand structural relationships, based on the idea that a visually similar surface implies a similarity in the function of the protein. The system presented in this dissertation uses a low-latency pixel streaming approach, leveraging GPU-based video encoding and decoding, to solve the aforementioned problems and to allow for interactive remote visualisations on large high-resolution displays. In addition to remote-visualisation, the system offers collaboration capabilities via simultaneous bidirectional video and audio. The system is based on the fact that, regardless of the underlying hardware setup, large displays share one property: they have a large (distributed or not) frame buffer to display coloured pixels. Consequently, this allows users at two sites with different display walls to collaborate with only minimal delay. To address the bandwidth limitations, several methods have been developed and introduced which aim to reduce the required bandwidth and the end-to-end latency while still offering high image quality.
The aim of these methods is to reduce the image quality, and therefore the required bandwidth, in regions that are not currently of interest to the users, while regions of interest remain at high quality. These methods can be categorised into algorithmic and user-driven optimisations of the remote-visualisation pipeline. The user-driven optimisations make use of gaze tracking to adapt the encoding quality locally, while the algorithmic optimisations use the content of the frames. Algorithmic optimisations include the use of a convolutional neural network to detect regions of interest and adapt the encoding quality accordingly, as well as temporal downsampling prior to encoding. These methods can also be combined; for example, foveated encoding may be combined with temporal downsampling to further reduce the required bandwidth and the latency. Overall, this dissertation advances the state of the art by enabling the collaborative analysis of molecular and other scientific visualisations remotely at interactive frame rates without imposing bandwidth requirements that cannot generally be met.
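The foveated-encoding idea mentioned above lowers encoding quality with distance from the user's gaze. A minimal sketch, assuming a tile-based encoder with a per-tile quality weight; the grid size and falloff radii are illustrative, not the parameters used in the dissertation:

```python
import math

def tile_quality_map(cols, rows, gaze_x, gaze_y, inner=0.15, outer=0.45):
    """Per-tile quality weights for foveated encoding: full quality (1.0)
    near the gaze point, linear falloff, and a floor (0.2) in the periphery.
    Gaze coordinates and radii are normalised to [0, 1]."""
    weights = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Distance from the tile centre to the gaze point.
            cx, cy = (c + 0.5) / cols, (r + 0.5) / rows
            d = math.hypot(cx - gaze_x, cy - gaze_y)
            if d <= inner:
                w = 1.0
            elif d >= outer:
                w = 0.2
            else:
                w = 1.0 - 0.8 * (d - inner) / (outer - inner)
            row.append(round(w, 3))
        weights.append(row)
    return weights
```

In a real pipeline, each weight would be mapped to an encoder parameter, such as a per-tile quantisation offset, so that peripheral tiles consume less bandwidth.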