Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Search Results

Now showing 1 - 10 of 120
  • Open Access
    Philosophy of action and its relationship to interactive visualisation and Molière’s theatre
    (2023) Feige, Daniel M.; Weiskopf, Daniel; Dickhaut, Kirsten
  • Open Access
    Touching data with PropellerHand
    (2022) Achberger, Alexander; Heyen, Frank; Vidackovic, Kresimir; Sedlmair, Michael
    Immersive analytics often takes place in virtual environments that promise users immersion. To fulfill this promise, sensory feedback such as haptics is an important component, but it is not yet well supported. Existing haptic devices are often expensive, stationary, or occupy the user’s hand, preventing them from grasping objects or using a controller. We propose PropellerHand, an ungrounded hand-mounted haptic device with two rotatable propellers that allows forces to be exerted on the hand without obstructing hand use. PropellerHand can simulate feedback such as weight and torque by generating thrust of up to 11 N in 2-DOF and torque of up to 1.87 Nm in 2-DOF. Its design builds on our experience from quantitative and qualitative experiments with different form factors and parts. We evaluated our prototype through a qualitative user study in various VR scenarios that required participants to manipulate virtual objects in different ways while switching between torques and directional forces. Results show that PropellerHand improves users’ immersion in virtual reality. Additionally, we conducted a second user study in the field of immersive visualization to investigate the potential benefits of PropellerHand there.
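    The abstract quotes concrete device limits (thrust up to 11 N and torque up to 1.87 Nm, each in 2-DOF). As a minimal, purely illustrative sketch of how requested haptic feedback might be kept within such limits, the snippet below clamps a 2-DOF force/torque request; the function and parameter names are hypothetical and not taken from the PropellerHand implementation.

```python
import numpy as np

# Device limits quoted in the abstract (assumed to apply per 2-DOF vector).
MAX_THRUST_N = 11.0    # maximum directional force in newtons
MAX_TORQUE_NM = 1.87   # maximum torque in newton-metres

def clamp_feedback(force_2dof, torque_2dof):
    """Scale a requested 2-DOF force/torque pair down to the device limits.

    Hypothetical helper: mapping the clamped values to propeller speeds,
    as the real controller would, is not shown here.
    """
    force = np.asarray(force_2dof, dtype=float)
    torque = np.asarray(torque_2dof, dtype=float)

    # Uniformly scale each vector if its magnitude exceeds the limit,
    # so the requested direction is preserved.
    f_mag, t_mag = np.linalg.norm(force), np.linalg.norm(torque)
    if f_mag > MAX_THRUST_N:
        force *= MAX_THRUST_N / f_mag
    if t_mag > MAX_TORQUE_NM:
        torque *= MAX_TORQUE_NM / t_mag
    return force, torque

# Example: a request exceeding the thrust limit is scaled back to 11 N.
f, t = clamp_feedback([9.0, 8.0], [0.5, 0.2])
print(f, np.linalg.norm(f))  # direction kept, magnitude clamped to 11 N
```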
  • Open Access
    Visualization challenges in distributed heterogeneous computing environments
    (2015) Panagiotidis, Alexandros; Ertl, Thomas (Prof. Dr.)
    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more details and higher resolutions in visualizations. For some years now, the prevailing trend for these large systems is the utilization of additional processors, like graphics processing units. These heterogeneous systems, that employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, like higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction but existing approaches often entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users become more interested in resilience besides traditional aspects, like performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software is required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions to performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general purpose computing on graphics processing units and visualization in heterogeneous computing environments. The first approach hides details of different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing and simplifying order-independent transparency for distributed visualization. Traditional methods for fault tolerance in high performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and resulting implications are discussed. For analysis, local graph exploration and tuning of volume visualization are evaluated. Challenges in dense graphs like visual clutter, ambiguity, and inclusion of additional attributes are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis takes a broader look at the issues of distributed visualization on large displays and heterogeneous computing environments for the first time. 
While the presented approaches each solve individual challenges and are successfully employed in this context, together they form a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
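    The abstract mentions per-pixel linked lists as a generic framework for compositing and order-independent transparency. As a rough illustration of the underlying idea only (independent of the distributed GPU setting described in the thesis), the sketch below collects translucent fragments for one pixel, sorts them by depth at resolve time, and blends them front to back; all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    depth: float   # smaller = closer to the viewer
    rgba: tuple    # straight-alpha colour (r, g, b, a), each in [0, 1]

def composite_pixel(fragments):
    """Order-independent transparency resolve step for one pixel.

    Fragments may arrive in any order (e.g. appended to a per-pixel
    linked list during rendering); they are sorted by depth only at
    resolve time and blended front to back with the 'over' operator.
    """
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for frag in sorted(fragments, key=lambda f: f.depth):
        r, g, b, a = frag.rgba
        weight = (1.0 - alpha) * a   # remaining transmittance times fragment opacity
        color = [c + weight * x for c, x in zip(color, (r, g, b))]
        alpha += weight
    return (*color, alpha)

# Example: a half-transparent red fragment in front of a green one.
pixel = [Fragment(0.7, (0.0, 1.0, 0.0, 0.5)), Fragment(0.3, (1.0, 0.0, 0.0, 0.5))]
print(composite_pixel(pixel))  # red dominates because it is closer
```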
  • Open Access
    Comparative visualization across physical and parameter space
    (2022) Zeyfang, Adrian
    We designed and developed an interactive visualization approach for exploring and comparing image sequences in the context of porous media research. Our tool facilitates the visual analysis of two-dimensional image sequence datasets captured during fluid displacement experiments in a porous micromodel. The images are aggregated into a single graph-based representation, allowing for an experiment to be visualized across its entire temporal domain. This graph is generated from the viscous flow patterns of the invading fluid, reducing the need for manual image masking and clean-up steps. The Node-Link representation of the graph is superimposed onto the raw images, creating a composite spatio-temporal view of the dataset. We demonstrate the functionality of our implementation by evaluating its output and performance on a collection of related datasets. We found that separate experiments in the same porous medium yield topologically different, yet visually similar flow graphs with comparable node positions.
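    The abstract describes aggregating a fluid-displacement image sequence into a single representation that spans the whole temporal domain. The graph construction itself is specific to the tool, but a much simpler form of such aggregation, an arrival-time map recording when each pixel is first reached by the invading fluid, can be sketched as follows (the binary masks and names are assumptions, not taken from the thesis).

```python
import numpy as np

def arrival_time_map(masks):
    """Collapse a sequence of binary invasion masks into one 2D field.

    masks: iterable of 2D boolean arrays, one per time step, where True
    marks pixels occupied by the invading fluid. The result stores the
    index of the first frame in which each pixel became occupied, or -1
    if it never did.
    """
    masks = list(masks)
    arrival = np.full(masks[0].shape, -1, dtype=int)
    for t, mask in enumerate(masks):
        newly_invaded = mask & (arrival == -1)
        arrival[newly_invaded] = t
    return arrival

# Tiny example: a front sweeping from left to right over three frames.
frames = [np.array([[1, 0, 0]], dtype=bool),
          np.array([[1, 1, 0]], dtype=bool),
          np.array([[1, 1, 1]], dtype=bool)]
print(arrival_time_map(frames))  # [[0 1 2]]
```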
  • Open Access
    Datamator : an authoring tool for creating datamations via data query decomposition
    (2023) Guo, Yi; Cao, Nan; Cai, Ligan; Wu, Yanqiu; Weiskopf, Daniel; Shi, Danqing; Chen, Qing
    Datamation is designed to animate an analysis pipeline step by step, serving as an intuitive and efficient method for interpreting data analysis outcomes and facilitating easy sharing with others. However, the creation of a datamation is a difficult task that demands expertise in diverse skills. To simplify this task, we introduce Datamator, a language-oriented authoring tool developed to support datamation generation. In this system, we develop a data query analyzer that enables users to generate an initial datamation effortlessly by inputting a data question in natural language. Then, the datamation is displayed in an interactive editor that affords users the ability to both edit the analysis progression and delve into the specifics of each step undertaken. Notably, the Datamator incorporates a novel calibration network that is able to optimize the outputs of the query decomposition network using a small amount of user feedback. To demonstrate the effectiveness of Datamator, we conduct a series of evaluations including performance validation, a controlled user study, and expert interviews.
  • Open Access
    Mapping molecular surfaces of arbitrary genus to a sphere
    (2015) Frieß, Florian
    Molecular surfaces are one of the most widely used visual representations for the analysis of molecules. They show different properties of the molecule and allow additional information, such as chemical properties of the atoms, to be encoded using colour. Because molecular surfaces are usually rendered in three dimensions, common problems such as occlusion and view-dependency arise. To solve these problems, a two-dimensional representation of the molecular surface can be created. For molecules with a surface of genus zero, there are several methods for creating the sphere that serves as an intermediate object for the map. For molecules of higher genus, this process becomes more difficult: tunnels can only be mapped to the sphere if they are closed at some point inside the tunnel. Introducing arbitrary cuts can lead to small areas on the map, and the deeper inside the tunnel a cut is placed, the smaller the resulting area. To avoid these small areas, the cuts have to be placed close to the entrance of the tunnel. Therefore, a mesh segmentation is performed to identify the tunnels and to create a genus-zero surface for the molecule. Based on this segmentation, further information can be displayed, such as geodesic lines showing how the tunnels are connected.
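    The approach hinges on the genus of the surface, which decides whether it can be mapped directly to a sphere. For a closed, orientable triangle mesh the genus follows from the Euler characteristic, chi = V - E + F and g = (2 - chi) / 2. A minimal sketch is given below; the mesh representation (a vertex count plus a list of triangle index triples) is an assumption for illustration.

```python
def mesh_genus(num_vertices, triangles):
    """Genus of a closed, orientable triangle mesh.

    triangles: list of (i, j, k) vertex index triples. Edges are counted
    once per undirected pair, which assumes a watertight manifold mesh.
    """
    edges = set()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edges.add((min(a, b), max(a, b)))
    euler = num_vertices - len(edges) + len(triangles)  # chi = V - E + F
    return (2 - euler) // 2                             # g = (2 - chi) / 2

# A tetrahedron is topologically a sphere, so its genus is 0.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(mesh_genus(4, tetra))  # 0
```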
  • Open Access
    Visual MIDI data comparison
    (2020) Schierle, Christian
    We present a system that enables the visualization and visual comparison of MIDI files. MIDI data, which consists of a chronological sequence of events, poses a particular challenge for the design of suitable visual representations. Based on the needs of users from specific target groups, we develop a concept and an implementation of several visualizations. For example, multiple MIDI files can be displayed simultaneously in a list composed of information cards, each giving a short summary of the associated file. Our visualization system provides several ways to visualize the content of individual MIDI files. A heatmap is used to provide an overview of the distribution of notes across the MIDI channels. As an alternative to a more traditional stacked bar chart, we present a novel visualization of the number of occurrences of each note, based on a honeycomb structure of hexagons. To visualize note sequences, we use a chart that shows the pitch and duration of individual notes. Furthermore, we examine the capability of an adapted MatrixWave visualization for representing music. We also investigated prototypical designs, based on so-called arc diagrams, for visualizing similarities between sequences represented as strings. To support the display of differences between the contents of two MIDI files, we present comparison views based on the pitch chart and the honeycomb structure. The designs were implemented as a web application and evaluated using an application scenario. The results show that the visualization system meets the specified user needs, but also reveal weaknesses in the design and implementation.
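    One of the views described above is a heatmap of the note distribution across MIDI channels. A minimal sketch of how such a channel-by-pitch count matrix could be assembled is shown below; it assumes the third-party mido library for parsing, and the file name is purely illustrative.

```python
import numpy as np
import mido  # third-party MIDI parsing library (assumed available)

def note_channel_counts(path):
    """Return a 16 x 128 matrix of note-on counts (channel x pitch).

    This is the raw data behind a channel/pitch heatmap; rendering it
    (e.g. with an image plot) is left out of the sketch.
    """
    counts = np.zeros((16, 128), dtype=int)
    for msg in mido.MidiFile(path):
        # A note-on with velocity 0 is conventionally a note-off.
        if msg.type == 'note_on' and msg.velocity > 0:
            counts[msg.channel, msg.note] += 1
    return counts

# Hypothetical usage:
# heat = note_channel_counts('example.mid')
# print(heat.sum(axis=1))  # total notes per channel
```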
  • Open Access
    Point cloud and particle data compression techniques
    (2023) Ravi, Niranjan
    The contemporary need for higher processing speed and storage capacity has made data compression necessary in many applications. This study covers a diverse array of applications, varying in scale, that require efficient compression techniques. At present, there is no universally preferred compression technique that outperforms all others across all data types, because certain compression methods are more effective for specific applications than others. Point cloud data is widely used in domains such as computer vision, robotics, and virtual as well as augmented reality. The dense nature of point cloud data presents difficulties with respect to storage, transmission, and computation. Similarly, particle data usually contains large numbers of particles produced by simulations, experiments, or observations; the size of such datasets and the computational resources needed to handle and examine them can pose a formidable obstacle. To date, there has been no direct comparative analysis of compression methods applied to particle data and point cloud data, and this study represents the first attempt to compare these two categories. The primary objective is to test different compression techniques from both the particle and the point cloud domains and to establish a standardized metric for evaluating their effectiveness. An integrated tool has been developed in this work that incorporates various compression techniques to evaluate the suitability of each technique for particle data and point cloud data. The assessment considers both particle error metrics and point cloud error metrics. Experiments in this work show that particle compressors perform better across both tested data categories, while point cloud compressors perform better only for point cloud data. The results also reveal that the particle error metrics impose strict bounds, which are necessary for the type of data they are intended to analyze, whereas the point cloud error metrics have more relaxed bounds.
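    The study evaluates compressors with point cloud and particle error metrics, which the abstract does not spell out. A commonly used point-to-point measure, the symmetric nearest-neighbour RMSE between the original and the decompressed cloud, can serve as an illustration; it assumes scipy and is not necessarily one of the metrics used in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetric_p2p_rmse(original, decoded):
    """Symmetric point-to-point RMSE between two (N, 3) point clouds.

    For each point, the distance to its nearest neighbour in the other
    cloud is taken; the larger of the two directional errors is returned,
    which penalises both missing and spurious points.
    """
    d_fwd, _ = cKDTree(decoded).query(original)   # original -> decoded
    d_bwd, _ = cKDTree(original).query(decoded)   # decoded -> original
    rmse = lambda d: float(np.sqrt(np.mean(np.square(d))))
    return max(rmse(d_fwd), rmse(d_bwd))

# Example: a decoded cloud with mild quantisation-like noise.
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
noisy = cloud + rng.normal(scale=0.001, size=cloud.shape)
print(symmetric_p2p_rmse(cloud, noisy))  # small, on the order of the noise
```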
  • Open Access
    Code execution reports: visually augmented summaries of executed source code fragments
    (2016) Siddiqui, Hafiz Ammar
    Understanding a fragment of code is important for developers, as it enables them to optimize, debug, and extend it. Developers adopt different procedures for understanding a piece of code, which involve going through the source code, documentation, and profiler results. Various code comprehension techniques have suggested code summarization approaches, which generate a natural-language description of the intended behavior of the code. In this thesis, we present an approach to summarize the actual behavior of a method during its execution. For this purpose, we create a framework that facilitates the generation of interactive, web-based natural-language reports with small embedded word-sized visualizations. We then develop a tool that profiles a method's runtime behavior and processes the collected information. The tool uses our framework to generate a visually augmented natural-language summary report that explains the behavior of the code. Finally, we conduct a small user study to evaluate the quality of our code execution reports.
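    The framework described above combines natural-language summaries with small, word-sized visualizations of runtime behavior. A toy version of that idea, a decorator that records call durations and renders them as an inline Unicode sparkline, is sketched below; it is an illustration only and does not reflect the thesis' actual framework.

```python
import time
import functools

BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Render a list of numbers as a word-sized Unicode bar string."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

def report_execution(func):
    """Record call durations and expose a one-line textual summary."""
    durations = []

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            durations.append(time.perf_counter() - start)

    def summary():
        return (f"{func.__name__}: {len(durations)} calls, "
                f"avg {sum(durations) / len(durations) * 1e3:.2f} ms "
                f"{sparkline(durations)}")

    wrapper.summary = summary
    return wrapper

@report_execution
def busy(n):
    return sum(i * i for i in range(n))

for n in (10_000, 50_000, 20_000):
    busy(n)
print(busy.summary())  # e.g. "busy: 3 calls, avg 1.84 ms ▁█▃"
```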
  • Open Access
    3D visualization of multivariate data
    (2012) Sanftmann, Harald; Weiskopf, Daniel (Prof. Dr.)
    Nowadays, large amounts of data are organized in tables, especially in relational databases, where the rows store the data items and the columns store their attributes. Information stored this way, having multiple (more than two or three) attributes, can be treated as multivariate data. Therefore, visualization methods for multivariate data have a large application area and high potential utility. This thesis focuses on the application of 3D scatter plots for the visualization of multivariate data. When dealing with 3D, spatial perception needs to be exploited by effectively using depth cues to convey spatial information to the user. To improve the presentation of individual 3D scatter plots, a technique is presented that applies illumination to them, thus using the shape-from-shading depth cue. To enable the analysis not only of 3D but of multivariate data, a novel technique is introduced that allows navigation between 3D scatter plots. Inspecting the large number of 3D scatter plots that can be projected from a multivariate data set is very time consuming; the analysis can therefore benefit from automatic machine learning approaches. A presented method uses decision trees to help users gain an understanding of the multivariate data more quickly, at no extra cost. Stereopsis can also support the display of 3D scatter plots. Here, an improved anaglyph rendering technique is presented that significantly reduces ghosting artifacts. The technique is applicable not only to information visualization but also to general rendering and the presentation of stereoscopic image data. Some information visualization algorithms require high computation time; many of them can be parallelized to run interactively. A framework that supports this parallelization on shared and distributed memory systems is presented.
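    The thesis presents an improved anaglyph rendering technique that reduces ghosting. The improvement itself is not described in the abstract, but the baseline it builds on, combining a left and a right view into a single red-cyan image, is easy to show. The sketch below assumes numpy arrays with RGB channels and implements only the plain colour anaglyph, not the thesis' ghosting-reduced variant.

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine two RGB views (H, W, 3 floats in [0, 1]) into an anaglyph.

    Baseline colour anaglyph: the red channel comes from the left view,
    green and blue come from the right view. Ghosting reduction, as in
    the thesis, would additionally pre-process the channels.
    """
    if left.shape != right.shape:
        raise ValueError("left and right views must have the same shape")
    out = right.copy()
    out[..., 0] = left[..., 0]   # take red from the left eye's image
    return out

# Example with two tiny synthetic views shifted against each other.
left = np.zeros((4, 4, 3)); left[:, :2] = 1.0     # white block on the left
right = np.zeros((4, 4, 3)); right[:, 1:3] = 1.0  # shifted block
print(red_cyan_anaglyph(left, right)[0])
```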