05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Search Results
16 results
Item Open Access
Prediction and similarity models for visual analysis of spatiotemporal data (2022) Tkachev, Gleb; Ertl, Thomas (Prof. Dr.)
Ever since the early days of computers, their use has been essential in the natural sciences. Whether through simulation, computer-aided observation, or data processing, progress in computer technology has been mirrored by constant growth in the size of scientific data. Unfortunately, as data sizes grow while human capabilities remain constant, it becomes increasingly difficult to analyze and understand the data. Over the last decades, visualization experts have proposed many approaches to address this challenge, but even these methods have their limitations. Luckily, recent advances in the field of machine learning can provide the tools needed to overcome the obstacle. Machine learning models are a particularly good fit, as they can both benefit from the large amount of data present in the scientific context and allow the visualization system to adapt to the problem at hand. This thesis presents research into how machine learning techniques can be adapted and extended to enable visualization of scientific data. It introduces a diverse set of techniques for the analysis of spatiotemporal data, including detection of irregular behavior, self-supervised similarity metrics, automatic selection of visual representations, and more. It also discusses the general challenges of applying machine learning to scientific visualization and how to address them.

Item Open Access
Uncertainty-aware visualization techniques (2021) Schulz, Christoph; Weiskopf, Daniel (Prof. Dr.)
Nearly all information is uncertainty-afflicted. Whether and how we present this uncertainty can have a major impact on how our audience perceives such information. Still, uncertainty is rarely visualized and communicated. One difficulty is that we tend to interpret illustrations as truthful.
For example, it is difficult to understand that a drawn point’s presence, absence, and location may not convey its full information. Similarly, it may be challenging to classify a point within a probability distribution. One must learn how to interpret uncertainty-afflicted information. Accordingly, this thesis addresses three research questions: How can we identify and reason about uncertainty? What are approaches to modeling the flow of uncertainty through the visualization pipeline? Which methods are suitable for harnessing uncertainty? The first chapter is concerned with sources of uncertainty. Then, approaches to modeling uncertainty using descriptive statistics and unsupervised learning are discussed. Also, a model for the validation and evaluation of visualization methods is proposed. Further, methods for visualizing uncertainty-afflicted networks, trees, point data, sequences, and time series are presented. The focus lies on the modeling, propagation, and visualization of uncertainty. As encodings of uncertainty, we propose wave-like splines and sampling-based transparency. As an overarching approach to adapting existing visualization methods for uncertain information, we identify the layout process (the placement of objects). The main difficulty is that these objects are not simple points but distribution functions or convex hulls. We also develop two stippling-based rendering methods that utilize the ability of the human visual system to cope with uncertainty. Finally, I provide insight into possible directions for future research.

Item Open Access
Impact of gaze uncertainty on AOIs in information visualisations (2022) Wang, Yao; Koch, Maurice; Bâce, Mihai; Weiskopf, Daniel; Bulling, Andreas
Gaze-based analysis of areas of interest (AOIs) is widely used in information visualisation research to understand how people explore visualisations or to assess the quality of visualisations with respect to key characteristics such as memorability.
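At its core, AOI-based gaze analysis tests which AOI each gaze sample or fixation falls into. A minimal sketch of such a mapping is shown below; the AOI names and rectangle coordinates are hypothetical illustrations, not taken from the study:

```python
# Hedged sketch: map gaze fixations to rectangular AOIs.
# AOI names and coordinates are illustrative, not from the study.
from typing import Optional

AOIS = {
    "title":  (0, 0, 800, 60),     # (x, y, width, height)
    "legend": (600, 80, 200, 150),
}

def hit_aoi(x: float, y: float) -> Optional[str]:
    """Return the first AOI containing the fixation, or None."""
    for name, (ax, ay, w, h) in AOIS.items():
        if ax <= x <= ax + w and ay <= y <= ay + h:
            return name
    return None
```

Such a hard assignment is exactly what becomes unstable when gaze estimation error moves a fixation across a nearby AOI boundary.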
However, nearby AOIs in visualisations amplify the uncertainty caused by gaze estimation error, which strongly influences the mapping between gaze samples or fixations and different AOIs. We contribute a novel investigation into gaze uncertainty and quantify its impact on AOI-based analysis of visualisations using two novel metrics: the Flipping Candidate Rate (FCR) and the Hit Any AOI Rate (HAAR). Our analysis of 40 real-world visualisations, including human gaze and AOI annotations, shows that gaze uncertainty frequently and significantly impacts the analysis conducted in AOI-based studies. Moreover, we analysed four visualisation types and found that bar and scatter plots are usually designed in a way that causes more uncertainty than line and pie plots in gaze-based analysis.

Item Open Access
Light transport simulation in participating media using spherical harmonic methods (2021) Körner, David; Eberhardt, Bernhard (Prof. Dr.)

Item Open Access
Matrix methods in visualization (2024) Krake, Tim; Weiskopf, Daniel (Prof. Dr.)
The theory of matrices has a long history that began over 4000 years ago, yet it took a while until matrices were studied systematically in the context of linear algebra. While the results from the 18th and 19th centuries were mainly theoretical in nature, the modern use of matrices is usually linked to computational aspects. This made the theory of matrices extremely useful for applied sciences, such as computer graphics and visualization, and paved the way for innovative matrix methods. The overall goal of this thesis is to integrate such matrix methods into the field of data analysis and visualization, with emphasis placed on matrix decompositions.
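As one concrete instance of such matrix decompositions, a truncated singular value decomposition extracts dominant spatial patterns from a matrix whose columns are time snapshots; this is the building block of data-driven methods such as POD and Dynamic Mode Decomposition. A minimal sketch on synthetic rank-one data (the function names and the data are illustrative, not the thesis's implementation):

```python
# Hedged sketch: truncated SVD of a snapshot matrix, the building block
# of decompositions such as POD/DMD used for time-evolving data.
import numpy as np

def dominant_modes(snapshots: np.ndarray, r: int):
    """Columns of `snapshots` are time snapshots; return the first r
    spatial modes and their singular values."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s[:r]

# Rank-one synthetic data: one spatial pattern times a temporal signal.
space = np.linspace(0, 1, 50)
time = np.linspace(0, 2 * np.pi, 20)
X = np.outer(np.sin(np.pi * space), np.cos(time))
modes, svals = dominant_modes(X, 2)
# For rank-one data, the second singular value is numerically zero.
```

The singular values reveal how many patterns dominate the data, which is what makes such decompositions attractive for revealing time-evolutionary and statistical structure.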
In this context, the following four concepts are addressed: the examination of linear structures and matrix formulations, the utilization of matrix formulations and matrix methods, the customization of matrix methods for visualization, and the augmentation of visualization techniques. These four conceptual steps characterize a sequential process that is used throughout the chapters of this thesis. With a main focus on data-driven methods that reveal time-evolutionary and statistical patterns, the contents of the chapters refer to different fields of application. Chapter 2 demonstrates applications of Dynamic Mode Decomposition in the context of visual computing, and Chapter 3 addresses the challenges of uncertainty propagation and visualization. In contrast, Chapters 4 and 5 present methods in the context of structural analysis (solid mechanics) and smoothed particle hydrodynamics (fluid mechanics). The overall content of this thesis demonstrates the versatile, effective use of matrices for visual computing.

Item Open Access
Moving haptics research into practice: four case studies from automotive engineering (2023) Achberger, Alexander; Sedlmair, Michael (Prof. Dr.)

Item Open Access
Interactive remote-visualisation for large displays (2022) Frieß, Florian; Ertl, Thomas (Prof. Dr.)
While visualisation often strives for abstraction, the interactive exploration of large scientific data sets, such as densely sampled 3D fields, massive particle data sets, or molecular visualisations, still benefits from rendering their graphical representation in large detail on high-resolution displays such as Powerwalls or tiled display walls. With the ever-growing size of data and the increased availability of the aforementioned displays, collaboration becomes desirable in the sense of sharing this type of visualisation running on one site in real time with another high-resolution display at a remote site.
While most desktop computers - and in turn the visualisation software running on them - are alike, large high-resolution display setups are often unique, making use of multiple GPUs, a GPU cluster, or only CPUs to drive the display. Therefore, particularly if the goal is the interactive scientific visualisation of large data sets, unique software might have to be written for a unique display and compute system. Molecular visualisations are one application domain in which users would clearly benefit from being able to collaborate remotely, combining video and audio conference setups with the possibility of sharing high-resolution interactive visualisations. However, for large - often tiled - displays and image resolutions beyond 4K, no obvious generic, let alone commercial, solution exists. While there are specialised solutions that support sharing the output of these displays based on hardware-accelerated video encoding, they compromise between quality and bandwidth: they either deliver a high-quality image and therefore induce bandwidth requirements that cannot generally be met, or they uniformly decrease the quality to maintain adequate frame rates. In visualisation in particular, however, details are crucial in the areas currently being investigated. Hence, interactive remote visualisation for high-resolution displays requires new methods that can run on different hardware setups and offer high image quality while reducing the required bandwidth as much as possible. In this dissertation, an innovative technique for rendering and comparing molecular surfaces, as well as a novel system that supports interactive remote visualisation of molecular surfaces and other scientific visualisations between different high-resolution displays, are introduced and discussed.
The rendering technique addresses the view dependency and occlusion of the three-dimensional representation of molecular surfaces by showing the topography and the physico-chemical properties of the surface in a single image. This also allows analysts to compare and cluster the images in order to understand relationship structures, based on the idea that a visually similar surface implies a similarity in the function of the protein. The system presented in this dissertation uses a low-latency pixel-streaming approach, leveraging GPU-based video encoding and decoding to solve the aforementioned problems and to allow for interactive remote visualisation on large high-resolution displays. In addition to remote visualisation, the system offers collaboration capabilities via simultaneous bidirectional video and audio. The system is based on the fact that, regardless of the underlying hardware setup, large displays share one property: they have a large (distributed or not) frame buffer to display coloured pixels. Consequently, this allows users to collaborate between two sites that use different display walls with only minimal delay. To address the bandwidth limitations, several methods have been developed and introduced that aim to reduce the required bandwidth and the end-to-end latency while still offering high image quality. These methods reduce the image quality, and therefore the required bandwidth, in regions that are not currently of interest to the users, while regions of interest remain at high quality. They can be categorised into algorithmic and user-driven optimisations of the remote-visualisation pipeline. The user-driven optimisations make use of gaze tracking to adapt the encoding quality locally, while the algorithmic optimisations use the content of the frames.
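The gaze-driven adaptation described above can be illustrated by a foveated quality map: encoding quality stays high near the gaze point and drops toward the periphery. The falloff shape and all parameter values below are illustrative assumptions, not the dissertation's actual model:

```python
# Hedged sketch: assign per-tile encoding quality from the distance to
# the gaze point (foveated encoding). Radii and quality levels are
# illustrative assumptions.
import math

def tile_quality(tile_center, gaze, inner=200.0, outer=600.0):
    """High quality near the gaze point, low quality in the periphery,
    with a linear falloff in between. Returns a value in [0.2, 1.0]."""
    d = math.dist(tile_center, gaze)
    if d <= inner:
        return 1.0
    if d >= outer:
        return 0.2
    t = (d - inner) / (outer - inner)
    return 1.0 + t * (0.2 - 1.0)
```

The returned value could then be mapped to an encoder quality parameter per tile, spending bandwidth where the viewer is actually looking.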
Algorithmic optimisations include the use of a convolutional neural network to detect regions of interest and adapt the encoding quality accordingly, as well as temporal downsampling prior to encoding. These methods can also be combined; for example, foveated encoding may be combined with temporal downsampling to further reduce the required bandwidth and latency. Overall, this dissertation advances the state of the art by enabling the collaborative analysis of molecular and other scientific visualisations remotely at interactive frame rates without imposing bandwidth requirements that cannot generally be met.

Item Open Access
Group diagrams for simplified representation of scanpaths (2023) Schäfer, Peter; Rodrigues, Nils; Weiskopf, Daniel; Storandt, Sabine
We instrument Group Diagrams (GDs) to reduce clutter in sets of eye-tracking scanpaths. Group Diagrams consist of trajectory subsets that cover, or represent, the whole set of trajectories with respect to some distance measure and an adjustable distance threshold. The original GDs allow for the application of various distance measures. We implement the GD framework and evaluate it on scanpaths that were collected in an earlier user study on public transit maps. We find that the Fréchet distance is the most appropriate measure to obtain meaningful results, yet it is flexible enough to handle outliers. We discuss several implementation-specific challenges and improve the scalability of the algorithm. To evaluate our results, we conducted a qualitative study with a group of eye-tracking experts. Finally, we note that our enhancements are also beneficial within the original problem setting, suggesting that our approach might be applicable to various types of input data.

Item Open Access
Computational methods for SPH-based fluid animation (2021) Reinhardt, Stefan; Weiskopf, Daniel (Prof. Dr.)
In computer graphics, Smoothed Particle Hydrodynamics (SPH) has become one of the most important approaches to physically based fluid animation. The original formulation has been refined by many authors over the last decades. As a result, SPH has become a versatile technique capable of modeling a wide range of phenomena. A core aspect of recent research is the development of methods that improve the efficiency, accuracy, and visual quality of the original formulation. Despite all this progress, SPH research remains a very active field. This thesis presents methods for improving the physically based animation of fluids with SPH. The presented contributions address various challenges that arise when simulating fluids with SPH. After a detailed discussion of the fundamentals of SPH-based fluid animation, an approach for the visual debugging of SPH simulations is presented. It is designed to support the development of new techniques for SPH-based fluid simulation. A requirements analysis is conducted to systematically examine the specific needs, and the application is designed accordingly. Furthermore, special attention is paid to the discretization process of the numerical model. First, an asynchronous time integration scheme is presented: by using individual time step sizes, the efficiency of the simulation process is improved. Subsequently, a consistent method for correcting the smoothing kernel is presented. This approach helps to reduce the errors introduced in the discretization process and thus improves the accuracy of the simulation model.
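A smoothing-kernel correction of the kind discussed above normalizes the SPH sum by the total kernel weight (Shepard-style normalization), so that a constant field is interpolated exactly even with irregular particle spacing. A minimal 1D sketch follows; the Gaussian kernel and the particle positions are illustrative choices, not the thesis's formulation:

```python
# Hedged sketch: Shepard-normalized SPH interpolation in 1D. Dividing
# by the sum of kernel weights reproduces a constant field exactly,
# even with irregular particle spacing. Kernel choice is illustrative.
import math

def kernel(r, h):
    return math.exp(-(r / h) ** 2)

def sph_interp(x, xs, vals, h, shepard=True):
    w = [kernel(abs(x - xi), h) for xi in xs]
    s = sum(wi * v for wi, v in zip(w, vals))
    return s / sum(w) if shepard else s

xs = [0.0, 0.13, 0.31, 0.4, 0.77, 1.0]   # irregular particle positions
vals = [1.0] * len(xs)                    # constant field
# With shepard=True the constant is reproduced exactly;
# without the correction, the plain kernel sum deviates from 1.0.
```

The uncorrected sum under- or over-shoots wherever the kernel support is irregularly sampled, which is exactly the discretization error such corrections target.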
It is based on the well-known Shepard interpolation but eliminates inconsistencies that arise when applying Shepard interpolation to SPH. Finally, an approach is presented for adding fine details to an animated surface, such as one resulting from an SPH-based liquid animation. The effects modeled on the surface are driven by the velocity field of the underlying simulation. With the presented method, fine details can be simulated very efficiently and the visual appearance of the fluid is improved. The presented methods take into account the specific requirements of computer graphics applications; in particular, attention is paid to efficiency, accuracy, and visual quality. Simulation efficiency is of special importance in computer graphics, where many simulation runs of the same sequence are typically required because a fluid animation is refined step by step. Improving the efficiency of the simulation process additionally enables a finer spatial discretization, which can further increase the realism of the simulation. In physically based fluid animation with SPH, accuracy depends not only on the simulation resolution but also on the accuracy of the SPH approximation. Improvements in this respect allow the physical phenomena to be modeled more faithfully and contribute to a realistic-looking simulation. Visual quality is always of particular importance in computer graphics applications, since appealing visualizations are usually the desired result.

Item Open Access
Performance quantification of visualization systems (2022) Bruder, Valentin; Ertl, Thomas (Prof. Dr.)
Visualization is an important part of data analysis, complementing automatic data processing to provide insight into the data and an understanding of the underlying structure or patterns.
A visualization system describes a visualization algorithm running on a specific compute architecture or device. Runtime performance is crucial for visualization systems, especially in the context of ever-growing data sizes and complexity. One reason for this is the importance of interactivity; another is to provide the opportunity for a comprehensive investigation of the generated data within a limited time frame. Providing the possibility of changing the perspective beyond the original focus has been shown to be particularly helpful for explorative data analysis. Performance optimization is also key to saving costs when visualizing on supercomputers, due to the high demand for their compute time. Being able to predict runtime enables better resource planning and optimized scheduling on such devices. The central research questions addressed in this thesis are threefold and build on each other: How can we quantify the runtime performance of visualization systems? How can this information be used to develop models for prediction? And ultimately: How can both aspects be integrated in the application context? The goal is to gain a comprehensive understanding of the runtime performance of visualization systems and to optimize them to save costs and improve the user experience. Despite many works in this direction, there are still open questions and challenges on the way to this goal. One of these challenges is the diversity of compute architectures used for visualization, ranging from mobile devices to supercomputers. Most visualization algorithms benefit from running in parallel. However, this poses another challenge for performance quantification due to the use of multiple heterogeneous parallel hardware hierarchies. Typically, visualization algorithms deal with large data, sparse regions, and interactivity requirements. Further, they can be fundamentally different in their rendering approaches. All these aspects make reliable performance prediction difficult.
This thesis addresses those challenges and presents research on the performance evaluation, modeling, and prediction of visualization systems, and on how to translate these concepts into improvements of performance-critical applications. Assessing runtime performance plays a key role in understanding and improving it. A new framework for the extensive and systematic performance evaluation of interactive visualizations is introduced to help gain a deeper understanding of runtime behavior and rendering-parameter dependencies. Based on the current practice of runtime performance evaluation in the literature, a database of performance measurements is created. A list of best practices on how to improve performance evaluation is compiled based on a statistical analysis of the data. Additionally, a frontend has been developed to visually compare the rendering performance data from multiple perspectives. With a fundamental understanding of an application's runtime behavior, performance can be modeled and the model used for prediction. New techniques are introduced for the hardware systems typically used for the visualization of large data sets: desktop computers featuring dedicated graphics hardware and high-performance distributed-memory systems. For the former, a method to predict performance on-line is used to dynamically tune volume rendering during runtime to guarantee interactivity. For image database generation on distributed-memory systems, a hybrid approach for dynamic load balancing during in situ visualization is introduced. This work also explores how human perceptual properties can be used to improve the performance of visualization applications. Two novel techniques are introduced that adapt rendering quality to the human visual system by tracking the user's gaze and changing the visualization accordingly. In this thesis, a special focus is set on volume rendering.
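The dynamic tuning of volume rendering mentioned above can be illustrated with a toy controller that adjusts the ray-sampling step size to hold a frame-time budget. The proportional rule and all parameter values are illustrative assumptions; the dissertation develops actual performance-prediction models rather than this simple feedback loop:

```python
# Hedged sketch: adapt the volume-rendering sampling step to hold a
# target frame time. The proportional rule and the parameter values
# are illustrative, not the dissertation's prediction model.
def adapt_step(step, frame_ms, target_ms=33.3,
               min_step=0.5, max_step=4.0):
    """Coarser sampling (larger step) when frames are too slow,
    finer sampling when there is headroom; clamped to sane bounds."""
    step *= frame_ms / target_ms
    return max(min_step, min(max_step, step))
```

A doubled frame time doubles the step size (halving the per-ray sample count), while fast frames allow the quality to recover, which is the basic trade-off behind interactivity guarantees.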
Performance optimization makes it possible to use volume rendering to visualize data outside the typical use cases. Two visualization systems are presented that use volume rendering at their core: one for the interactive exploration of large dynamic graphs and one for the space-time visualization of gaze and stimulus data. Overall, this thesis advances the state of the art by introducing new ways to assess, model, and predict runtime performance of visualization systems that can be used to improve usability and realize cost savings. This is demonstrated through several applications.