05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 12
  • Item (Open Access)
    Prediction and similarity models for visual analysis of spatiotemporal data
    (2022) Tkachev, Gleb; Ertl, Thomas (Prof. Dr.)
    Ever since the early days of computers, their use has become essential in the natural sciences. Whether through simulation, computer-aided observation, or data processing, progress in computer technology has been mirrored by constant growth in the size of scientific data. Unfortunately, as data sizes grow while human capabilities remain constant, it becomes increasingly difficult to analyze and understand the data. Over the last decades, visualization experts have proposed many approaches to address this challenge, but even these methods have their limitations. Luckily, recent advances in the field of machine learning can provide the tools needed to overcome the obstacle. Machine learning models are a particularly good fit, as they can both benefit from the large amounts of data present in the scientific context and allow the visualization system to adapt to the problem at hand. This thesis presents research into how machine learning techniques can be adapted and extended to enable visualization of scientific data. It introduces a diverse set of techniques for the analysis of spatiotemporal data, including detection of irregular behavior, self-supervised similarity metrics, automatic selection of visual representations, and more. It also discusses the general challenges of applying machine learning to scientific visualization and how to address them.
  • Item (Open Access)
    Impact of gaze uncertainty on AOIs in information visualisations
    (2022) Wang, Yao; Koch, Maurice; Bâce, Mihai; Weiskopf, Daniel; Bulling, Andreas
    Gaze-based analysis of areas of interest (AOIs) is widely used in information visualisation research to understand how people explore visualisations or to assess the quality of visualisations with respect to key characteristics such as memorability. However, nearby AOIs in visualisations amplify the uncertainty caused by gaze estimation error, which strongly influences the mapping between gaze samples or fixations and different AOIs. We contribute a novel investigation into gaze uncertainty and quantify its impact on AOI-based analysis of visualisations using two novel metrics: the Flipping Candidate Rate (FCR) and the Hit Any AOI Rate (HAAR). Our analysis of 40 real-world visualisations, including human gaze and AOI annotations, shows that gaze uncertainty frequently and significantly impacts the analysis conducted in AOI-based studies. Moreover, we analysed four visualisation types and found that bar and scatter plots are usually designed in a way that causes more uncertainty than line and pie plots in gaze-based analysis.
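    The FCR and HAAR metrics are defined in the paper itself and are not reproduced here. Purely as an illustration of the underlying issue, the following sketch uses Monte Carlo sampling to estimate how often a fixation's AOI assignment could change under Gaussian gaze-estimation error; the AOI rectangles, error magnitude, and sample counts are hypothetical.

```python
# Illustrative only: estimates how often an AOI assignment could "flip"
# under Gaussian gaze-estimation error. Not the paper's exact FCR/HAAR
# definitions; AOIs, sigma, and fixations are made-up examples.
import numpy as np

rng = np.random.default_rng(0)

# Axis-aligned AOIs as (x_min, y_min, x_max, y_max) in pixels (hypothetical).
aois = {
    "title":  (100,  20, 700,  80),
    "legend": (620, 120, 780, 300),
    "chart":  (100, 120, 600, 560),
}

def aoi_of(point):
    """Return the first AOI containing the point, or None."""
    x, y = point
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def could_flip(fixation, sigma=15.0, n_samples=1000):
    """Monte Carlo check: does Gaussian gaze error change the AOI hit?"""
    base = aoi_of(fixation)
    samples = rng.normal(loc=fixation, scale=sigma, size=(n_samples, 2))
    hits = {aoi_of(tuple(s)) for s in samples}
    return len(hits - {base}) > 0  # any sample lands in a different AOI (or none)

fixations = [(650, 130), (350, 300), (90, 50)]
flip_rate = sum(could_flip(f) for f in fixations) / len(fixations)
print(f"fraction of fixations with ambiguous AOI assignment: {flip_rate:.2f}")
```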
  • Item (Open Access)
    Matrix methods in visualization
    (2024) Krake, Tim; Weiskopf, Daniel (Prof. Dr.)
    The theory of matrices has a long history that began over 4000 years ago. It took some time, however, before matrices were studied systematically in the context of linear algebra. While the results of the 18th and 19th centuries were mainly of a theoretical nature, the modern use of matrices is usually linked to computational aspects. This made the theory of matrices extremely useful for applied sciences, such as computer graphics and visualization, and paved the way for innovative matrix methods. The overall goal of this thesis is to integrate such matrix methods into the field of data analysis and visualization, with an emphasis on matrix decompositions. In this context, the following four concepts are addressed: the examination of linear structures and matrix formulations, the utilization of matrix formulations and matrix methods, the customization of matrix methods for visualization, and the augmentation of visualization techniques. These four conceptual steps characterize a sequential process that is used throughout the chapters of this thesis. With a main focus on data-driven methods that reveal time-evolutionary and statistical patterns, the chapters refer to different fields of application. Chapter 2 demonstrates applications of Dynamic Mode Decomposition in the context of visual computing, and Chapter 3 addresses the challenges of uncertainty propagation and visualization. In contrast, Chapters 4 and 5 present methods in the context of structural analysis (solid mechanics) and smoothed particle hydrodynamics (fluid mechanics). The overall content of this thesis demonstrates the versatile, effective use of matrices for visual computing.
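    As background for the Dynamic Mode Decomposition mentioned for Chapter 2, the following is a minimal, generic exact-DMD sketch in NumPy; the truncation rank, variable names, and toy example are illustrative choices, not code from the thesis.

```python
# Minimal exact-DMD sketch (NumPy only); a generic textbook formulation.
import numpy as np

def dmd(snapshots, rank=10):
    """Exact DMD of a snapshot matrix whose columns are time steps."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]          # shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)    # low-rank basis of X
    r = min(rank, len(s))
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s          # projected linear operator
    eigvals, W = np.linalg.eig(A_tilde)                 # DMD eigenvalues
    modes = Y @ Vh.conj().T / s @ W                     # exact DMD modes
    return eigvals, modes

# Toy usage: a travelling sine wave sampled on a 1D grid over time.
x = np.linspace(0, 10, 128)[:, None]
t = np.linspace(0, 8 * np.pi, 200)[None, :]
eigvals, modes = dmd(np.sin(x - t), rank=2)
print(np.abs(eigvals))   # close to 1 for purely oscillatory dynamics
```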
  • Item (Open Access)
    Moving haptics research into practice : four case studies from automotive engineering
    (2023) Achberger, Alexander; Sedlmair, Michael (Prof. Dr.)
  • Item (Open Access)
    Interactive remote-visualisation for large displays
    (2022) Frieß, Florian; Ertl, Thomas (Prof. Dr.)
    While visualisation often strives for abstraction, the interactive exploration of large scientific data sets, such as densely sampled 3D fields, massive particle data sets or molecular visualisations, still benefits from rendering their graphical representation in large detail on high-resolution displays such as Powerwalls or tiled display walls. With the ever-growing size of data and the increased availability of the aforementioned displays, collaboration becomes desirable in the sense of sharing this type of visualisation running at one site in real time with another high-resolution display at a remote site. While most desktop computers - and in turn the visualisation software running on them - are alike, large high-resolution display setups are often unique, making use of multiple GPUs, a GPU cluster or only CPUs to drive the display. Therefore, particularly if the goal is the interactive scientific visualisation of large data sets, unique software might have to be written for a unique display and compute system. Molecular visualisations are one application domain in which users would clearly benefit from being able to collaborate remotely, combining video and audio conferencing setups with the possibility of sharing high-resolution interactive visualisations. However, for large - often tiled - displays and image resolutions beyond 4K, no obvious generic, let alone commercial, solution exists. While there are specialized solutions that support sharing the output of these displays, based on hardware-accelerated video encoding, they make compromises between quality and bandwidth: they either deliver a high-quality image and therefore induce bandwidth requirements that cannot generally be met, or they uniformly decrease the quality to maintain adequate frame rates. In visualisation in particular, however, details are crucial in the areas that are currently being investigated. Hence, interactive remote visualisation for high-resolution displays requires new methods that can run on different hardware setups and offer high image quality while reducing the required bandwidth as much as possible. In this dissertation, an innovative technique for rendering and comparing molecular surfaces as well as a novel system that supports interactive remote visualisation of molecular surfaces and other scientific visualisations between different high-resolution displays are introduced and discussed. The rendering technique solves the view-dependency and occlusion of the three-dimensional representation of molecular surfaces by showing the topography and the physico-chemical properties of the surface in a single image. This also allows analysts to compare and cluster the images in order to understand relationship structures, based on the idea that a visually similar surface implies a similarity in the function of the protein. The system presented in this dissertation uses a low-latency pixel streaming approach, leveraging GPU-based video encoding and decoding, to solve the aforementioned problems and to allow for interactive remote visualisation on large high-resolution displays. In addition to remote visualisation, the system offers collaboration capabilities via simultaneous bidirectional video and audio. The system is based on the fact that, regardless of the underlying hardware setup, large displays share one property: they have a large (distributed or not) frame buffer to display coloured pixels.
    Consequently, this allows users to collaborate between two sites that use different display walls with only minimal delay. To address the bandwidth limitations, several methods have been developed and introduced that aim to reduce the required bandwidth and the end-to-end latency while still offering high image quality. The aim of these methods is to reduce the image quality, and therefore the required bandwidth, in regions that are not currently of interest to the users, while those that are of interest remain at high quality. These methods can be categorised into algorithmic and user-driven optimisations of the remote visualisation pipeline. The user-driven optimisations make use of gaze tracking to adapt the quality of the encoding locally, while the algorithmic optimisations use the content of the frames. Algorithmic optimisations include the use of a convolutional neural network to detect regions of interest and adapt the encoding quality accordingly, as well as temporal downsampling prior to encoding. These methods can also be combined; for example, foveated encoding may be combined with temporal downsampling to further reduce the required bandwidth and the latency. Overall, this dissertation advances the state of the art by enabling the collaborative analysis of molecular and other scientific visualisations remotely at interactive frame rates without imposing bandwidth requirements that cannot generally be met.
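    As a rough illustration of gaze-driven quality adaptation of this kind (not the dissertation's actual encoder integration), the sketch below assigns a per-tile quantization parameter that degrades with distance from the current gaze point; the tile size, QP range, and falloff radius are assumed values.

```python
# Illustrative foveated-quality map: assign a per-tile encoder quality
# (an H.264/HEVC-style QP) that degrades with distance from the current
# gaze point. Tile size, QP range, and falloff are assumptions, not the
# dissertation's actual parameters.
import numpy as np

def quality_map(width, height, gaze, tile=128, qp_best=18, qp_worst=40,
                fovea_radius=300.0):
    """Return a (rows, cols) array of QP values; lower means better quality."""
    cols, rows = -(-width // tile), -(-height // tile)   # ceiling division
    qp = np.empty((rows, cols), dtype=int)
    gx, gy = gaze
    for r in range(rows):
        for c in range(cols):
            # distance from the tile centre to the gaze point
            cx, cy = (c + 0.5) * tile, (r + 0.5) * tile
            d = np.hypot(cx - gx, cy - gy)
            # full quality inside the foveal radius, then a linear ramp
            w = min(max((d - fovea_radius) / fovea_radius, 0.0), 1.0)
            qp[r, c] = round(qp_best + w * (qp_worst - qp_best))
    return qp

# Example: an 8K-wide display wall with the gaze near the left edge.
print(quality_map(7680, 2160, gaze=(800, 1000))[:, :6])
```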
  • Item (Open Access)
    Group diagrams for simplified representation of scanpaths
    (2023) Schäfer, Peter; Rodrigues, Nils; Weiskopf, Daniel; Storandt, Sabine
    We instrument Group Diagrams (GDs) to reduce clutter in sets of eye-tracking scanpaths. Group Diagrams consist of trajectory subsets that cover, or represent, the whole set of trajectories with respect to some distance measure and an adjustable distance threshold. The original GDs allow for an application of various distance measures. We implement the GD framework and evaluate it on scanpaths that were collected by a former user study on public transit maps. We find that the Fréchet distance is the most appropriate measure to get meaningful results, yet it is flexible enough to cover outliers. We discuss several implementation-specific challenges and improve the scalability of the algorithm. To evaluate our results, we conducted a qualitative study with a group of eye-tracking experts. Finally, we note that our enhancements are also beneficial within the original problem setting, suggesting that our approach might be applicable to various types of input data.
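    For reference, the discrete Fréchet distance between two scanpaths can be computed with a textbook dynamic program; the sketch below is a generic implementation, not the paper's more scalable variant, and the example coordinates are made up.

```python
# Standard discrete Fréchet distance between two scanpaths given as
# point sequences; a textbook dynamic program, not the paper's own code.
from math import hypot

def discrete_frechet(p, q):
    """p, q: lists of (x, y) fixation positions."""
    n, m = len(p), len(q)
    d = lambda i, j: hypot(p[i][0] - q[j][0], p[i][1] - q[j][1])
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                ca[i][j] = d(0, 0)
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d(0, j))
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d(i, 0))
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]),
                               d(i, j))
    return ca[n - 1][m - 1]

# Two toy scanpaths over a transit map (coordinates are made up).
a = [(0, 0), (100, 20), (200, 10), (300, 50)]
b = [(0, 10), (120, 30), (290, 60)]
print(discrete_frechet(a, b))
```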
  • Item (Open Access)
    Performance quantification of visualization systems
    (2022) Bruder, Valentin; Ertl, Thomas (Prof. Dr.)
    Visualization is an important part of data analysis, complementing automatic data processing to provide insight into the data and an understanding of its underlying structure or patterns. A visualization system describes a visualization algorithm running on a specific compute architecture or device. Runtime performance is crucial for visualization systems, especially in the context of ever-growing data sizes and complexity. One reason for this is the importance of interactivity; another is the opportunity to comprehensively investigate generated data in a limited time frame. Providing the possibility of changing the perspective beyond the original focus has been shown to be particularly helpful for explorative data analysis. Performance optimization is also key to saving costs during visualization on supercomputers due to the high demand for their compute time. Being able to predict runtime enables better resource planning and optimized scheduling on such devices. The central research questions addressed in this thesis are threefold and build on each other: How can we quantify the runtime performance of visualization systems? How can this information be used to develop models for prediction? And ultimately: How can both aspects be integrated in the application context? The goal is to gain a comprehensive understanding of the runtime performance of visualization systems and to optimize them to save costs and improve the user experience. Despite many works in this direction, there are still open questions and challenges on how to reach this goal. One of these challenges is the diversity of compute architectures used for visualization, ranging from mobile devices to supercomputers. Most visualization algorithms benefit from running in parallel. However, this poses another challenge for performance quantification due to the use of multiple heterogeneous parallel hardware hierarchies. Typically, visualization algorithms deal with large data, sparse regions, and interactivity requirements. Further, they can be fundamentally different in their rendering approaches. All these aspects make reliable performance prediction difficult. This thesis addresses those challenges and presents research on the performance evaluation, modeling, and prediction of visualization systems, and on how to translate these concepts into improvements of performance-critical applications. Assessing runtime performance plays a key role in understanding and improving it. A new framework for the extensive and systematic performance evaluation of interactive visualizations is introduced to help gain a deeper understanding of runtime behavior and rendering parameter dependencies. Based on the current practice of runtime performance evaluation in the literature, a database of performance measurements is created. A list of best practices on how to improve performance evaluation is compiled based on a statistical analysis of the data. Additionally, a frontend has been developed to visually compare the rendering performance data from multiple perspectives. With a fundamental understanding of an application's runtime behavior, performance can be modeled and the model used for prediction. New techniques are introduced for the hardware systems typically used for the visualization of large data sets: desktop computers featuring dedicated graphics hardware and high-performance distributed-memory systems. For the former, a method to predict performance online is used to dynamically tune volume rendering at runtime to guarantee interactivity. For image database generation on distributed-memory systems, a hybrid approach for dynamic load balancing during in situ visualization is introduced. This work also explores how human perceptual properties can be used to improve the performance of visualization applications. Two novel techniques are introduced that adapt rendering quality to the human visual system by tracking the user's gaze and changing the visualization accordingly. In this thesis, a special focus is set on volume rendering. Performance optimization makes it possible to use volume rendering to visualize data outside the typical use cases. Two visualization systems are presented that use volume rendering at their core: one for the interactive exploration of large dynamic graphs and one for the space-time visualization of gaze and stimulus data. Overall, this thesis advances the state of the art by introducing new ways to assess, model, and predict the runtime performance of visualization systems that can be used to improve usability and realize cost savings. This is demonstrated through several applications.
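    As a schematic illustration of data-driven performance prediction in general (not the thesis's actual models), the sketch below fits a log-linear regression to synthetic frame-time measurements and uses it to choose a sampling step size that stays within an interactivity budget; the features, model, and numbers are assumptions.

```python
# Illustrative only: fit a simple regression on logged frame times as a
# function of rendering parameters, then use it to pick a sampling step
# that stays under an interactivity budget. Features, model, and numbers
# are assumptions, not the thesis's actual predictor.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic measurement log: (viewport pixels, volume voxels, step size) -> ms
n = 500
pixels = rng.integers(1, 9, n) * 250_000
voxels = rng.integers(64, 513, n) ** 3
step = rng.uniform(0.25, 2.0, n)
frame_ms = 1e-8 * pixels * voxels ** (1 / 3) / step * rng.normal(1.0, 0.05, n)

# Linear model in log space: log t = w0 + w1*log(pixels) + w2*log(voxels) + w3*log(step)
X = np.column_stack([np.ones(n), np.log(pixels), np.log(voxels), np.log(step)])
w, *_ = np.linalg.lstsq(X, np.log(frame_ms), rcond=None)

def predict_ms(pix, vox, s):
    return float(np.exp(np.array([1.0, np.log(pix), np.log(vox), np.log(s)]) @ w))

# Pick the smallest step size predicted to stay below a 33 ms budget.
candidates = [0.25, 0.5, 0.75, 1.0, 1.5, 2.0]
ok = [s for s in candidates if predict_ms(2_000_000, 512 ** 3, s) < 33.0]
print(min(ok) if ok else max(candidates))
```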
  • Item (Open Access)
    Physically Based Rendering für fraktale Entladungen
    (2024) Barta, Alexander
    This thesis presents a physically based rendering model for depicting lightning that enables interactive rendering. It was created by combining several existing methods. On the one hand, the lightning geometry is generated by a physical simulation based on the Dielectric Breakdown Model. On the other hand, this geometry is rendered while taking various effects into account. The interaction of the light emitted by the lightning with the atmosphere is simulated. In addition, the exposure of a camera is simulated, which makes it possible to depict overexposure effects. Furthermore, the diffraction of light at the camera aperture is taken into account. The described model was implemented in OpenGL, and an evaluation of the visual impression was conducted on the basis of this implementation. The model can produce images that are comparable to photographs of real lightning.
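    The Dielectric Breakdown Model driving the lightning geometry can be sketched in a heavily simplified 2D form. The following Python toy version (the thesis's implementation is in OpenGL and is not reproduced here) grows a discharge on a grid by relaxing the Laplace equation and picking growth sites with probability proportional to the local potential raised to a power eta; the grid size, eta, and iteration counts are arbitrary choices.

```python
# Heavily simplified 2D Dielectric Breakdown Model sketch: grow a
# discharge from the top of a grid toward a grounded bottom plate.
# Grid size, eta, and iteration counts are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
N, ETA, STEPS = 64, 2.0, 300

phi = np.zeros((N, N))        # electric potential
phi[-1, :] = 1.0              # grounded bottom plate held at potential 1
lightning = np.zeros((N, N), bool)
lightning[0, N // 2] = True   # seed of the discharge at the top

for _ in range(STEPS):
    # Relax the Laplace equation with fixed boundaries (a few Jacobi sweeps).
    for _ in range(50):
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
        phi[lightning] = 0.0  # the discharge itself stays at potential 0
        phi[-1, :] = 1.0
    # Candidate cells: empty neighbours of the current discharge.
    cand = set()
    for y, x in np.argwhere(lightning):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < N and 0 <= nx < N and not lightning[ny, nx]:
                cand.add((ny, nx))
    cand = list(cand)
    # Growth probability proportional to phi**eta (stronger field, more likely).
    p = np.array([phi[c] ** ETA for c in cand])
    if p.sum() == 0:
        continue              # potential has not diffused this far yet; keep relaxing
    ny, nx = cand[rng.choice(len(cand), p=p / p.sum())]
    lightning[ny, nx] = True
    if ny == N - 1:           # reached the ground plate
        break

print(lightning.sum(), "cells in the discharge channel")
```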
  • Item (Open Access)
    Adaptation of point- and line-based visualization
    (2024) Rodrigues, Nils; Weiskopf, Daniel (Prof. Dr.)
    Visualization plays an important role in the lives of various heterogeneous parts of society: from a voter looking for the latest results of an election, to statisticians examining a distribution, to analysts trying to make sense of multidimensional data sets. This thesis adapts existing point- and line-based visualization methods to improve knowledge gain. The included contributions address three research questions: How to scale unit visualization for 1D data? How to improve navigation between 2D visualizations of multivariate data? How to combine the advantages of multiple 2D views in a single static visualization for multivariate data? The first part of the thesis focuses on unit visualization of 1D data with dot plots. Compared to the previous state of the art, the developed visualizations fit a wider range of data and expand the number of potential users by requiring less prior knowledge for interpretation. They adapt the definition of dot plots to scale nonlinearly with sample count, accurately show value frequencies in high-dynamic-range data, reduce positional error in displayed data points, and enhance the perception of subtle nuances in the data while avoiding moiré effects. We provide evidence for claimed improvements through evaluation with computational metrics and a crowdsourced user study. The second part of the dissertation focuses on visualizing multivariate data with scatter plots and scatter plot matrices. First, we evaluate six animated transitions between plots of different 2D subspaces with respect to task performance for tracking individual points and interactions between clusters. The results of a quantitative study with 170 participants show that orthographic rotation animation performs best and should be adopted more widely. Next, we develop a novel concept for recommending views in scatter plot matrices. It provides user- and task-specific suggestions by focusing on the data of interest to the viewer. Together, animation and recommendation adapt scatter plots to improve the user's ability to analyze more complex data effectively. In the third part, we develop a new visualization technique that extends parallel coordinate plots to provide a static alternative to scatter plots with animated transitions. The approach does not require interaction to display data flow between 2D subspace clusters. A custom density-based rendering technique enables the visibility of individual lines and structures within highly overdrawn regions. Our technique can communicate fuzzy clustering results through binning and color mapping. Finally, we discuss the presented contributions with respect to the original main questions and show possible directions for future research.
  • Item (Open Access)
    Efficient and robust background modeling with dynamic mode decomposition
    (2022) Krake, Tim; Bruhn, Andrés; Eberhardt, Bernhard; Weiskopf, Daniel
    A large number of modern video background modeling algorithms deal with computationally costly minimization problems that often require parameter adjustments. While in most cases spatial and temporal constraints are added artificially to the minimization process, our approach is to exploit Dynamic Mode Decomposition (DMD), a spectral decomposition technique that naturally extracts spatio-temporal patterns from data. Applied to video data, DMD can compute background models. However, the original DMD algorithm for background modeling is neither efficient nor robust. In this paper, we present an equivalent reformulation with constraints that leads to a more suitable decomposition into foreground and background. Due to the reformulation, which uses sparse and low-dimensional structures, an efficient and robust algorithm is derived that computes accurate background models. Moreover, we show how our approach can be extended to RGB data, data with periodic parts, and streaming data, enabling versatile use.
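    A plain (non-robust) DMD background model can be sketched as follows: flatten the video frames into columns, compute an exact DMD as in the matrix-methods sketch above, and keep the mode whose eigenvalue lies closest to 1, i.e. the slowest dynamics. This is only the baseline idea, not the paper's constrained reformulation; the frame layout and rank are assumptions.

```python
# Baseline DMD background sketch (not the paper's reformulated algorithm):
# the background is taken from the DMD mode with near-zero temporal frequency.
import numpy as np

def dmd_background(frames, rank=10):
    """frames: (num_frames, height, width) grayscale video as a NumPy array."""
    D = frames.reshape(len(frames), -1).T.astype(float)   # pixels x time
    X, Y = D[:, :-1], D[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(rank, len(s))
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    lam, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W
    k = int(np.argmin(np.abs(np.log(lam.astype(complex)))))  # slowest dynamics
    amp = np.linalg.lstsq(modes, D[:, 0], rcond=None)[0]      # initial amplitudes
    background = np.abs(modes[:, k] * amp[k])                 # static background image
    return background.reshape(frames.shape[1:])

# The foreground of frame t would then be frames[t] minus this background image.
```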