Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results
14 results
Item Open Access
Philosophy of action and its relationship to interactive visualisation and Molière’s theatre (2023)
Feige, Daniel M.; Weiskopf, Daniel; Dickhaut, Kirsten

Item Open Access
Subjective annotation for a frame interpolation benchmark using artefact amplification (2020)
Men, Hui; Hosu, Vlad; Lin, Hanhe; Bruhn, Andrés; Saupe, Dietmar
Current benchmarks for optical flow algorithms evaluate the estimation either directly, by comparing the predicted flow fields with the ground truth, or indirectly, by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark. It contains interpolated frames from 155 methods applied to each of 8 contents. For this purpose, we collected forced-choice paired comparisons between interpolated images and the corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons, we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data (3720 comparisons of 20 votes each), we reconstructed absolute quality scale values according to Thurstone’s model. As a result, we obtained a re-ranking of the 155 participating algorithms with respect to the visual quality of the interpolated frames.
This re-ranking not only shows the necessity of visual quality assessment as an additional evaluation metric for optical flow and frame interpolation benchmarks; the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images. As a first step, we proposed such a new full-reference method, called WAE-IQA, which weights the local differences between an interpolated image and its ground truth.

Item Open Access
Visual analytics for nonlinear programming in robot motion planning (2022)
Hägele, David; Abdelaal, Moataz; Oguz, Ozgur S.; Toussaint, Marc; Weiskopf, Daniel
Nonlinear programming is a complex methodology in which a problem is mathematically expressed in terms of optimality while imposing constraints on feasibility. Such problems are formulated by humans and solved by optimization algorithms. We support domain experts in their challenging tasks of understanding and troubleshooting optimization runs of intricate and high-dimensional nonlinear programs through a visual analytics system. The system was designed for our collaborators’ robot motion planning problems but is domain-agnostic in most parts of its visualizations. It allows for an exploration of the iterative solving process of a nonlinear program through several linked views of the computational process. We give insights into this design study, demonstrate our system for selected real-world cases, and discuss the extension of visualization and visual analytics methods for nonlinear programming.

Item Open Access
A method for optimizing and spatially distributing heating systems by coupling an urban energy simulation platform and an energy system model (2021)
Steingrube, Annette; Bao, Keyu; Wieland, Stefan; Lalama, Andrés; Kabiro, Pithon M.; Coors, Volker; Schröter, Bastian
District heating is seen as an important concept for decarbonizing heating systems and meeting climate mitigation goals.
However, the decision of where central heating is most viable depends on many different aspects, such as heating densities or current heating structures. An urban energy simulation platform based on 3D building objects can improve the accuracy of energy demand calculation at the building level but lacks a system perspective. Energy system models help to find economically optimal solutions for entire energy systems, including the optimal amount of centrally supplied heat, but do not usually provide information at the building level. By coupling both methods through a novel heating grid disaggregation algorithm, we propose a framework that does three things simultaneously: it optimizes energy systems that can comprise all demand sectors as well as sector coupling, assesses the role of centralized heating in such optimized energy systems, and determines the layouts of supplying district heating grids with a spatial resolution at the street level. The algorithm is tested on two case studies: an urban city quarter and a rural town. In the urban city quarter, district heating is economically feasible in all scenarios. Using heat pumps in addition to CHPs increases the optimal amount of centrally supplied heat. In the rural town, central heat pumps guarantee the feasibility of district heating, while standalone CHPs are more expensive than decentralized heating technologies.

Item Open Access
VisRecall: quantifying information visualisation recallability via question answering (2022)
Wang, Yao; Jiao, Chuhan; Bâce, Mihai; Bulling, Andreas
Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far.
In this work, we propose a question-answering paradigm to study visualisation recallability and present VisRecall, a novel dataset consisting of 200 visualisations annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types. Furthermore, we present the first computational method to predict the recallability of different visualisation elements, such as the title or specific data values. We report detailed analyses of our method on VisRecall and demonstrate that it outperforms several baselines in overall recallability as well as FE-, F-, RV-, and U-question recallability. Our work makes fundamental contributions towards a new generation of methods to assist designers in optimising visualisations.

Item Open Access
RfX: a design study for the interactive exploration of a random forest to enhance testing procedures for electrical engines (2022)
Eirich, J.; Münch, M.; Jäckle, D.; Sedlmair, Michael; Bonart, J.; Schreck, T.
Random Forests (RFs) are a machine learning (ML) technique widely used across industries. The interpretation of a given RF usually relies on the analysis of statistical values and is often only possible for data analytics experts. To make RFs accessible to experts with no data analytics background, we present RfX, a Visual Analytics (VA) system for the analysis of an RF's decision-making process. RfX allows users to interactively analyse the properties of a forest and to explore and compare multiple trees in an RF. Thus, its users can identify relationships within an RF's feature subspace and detect hidden patterns in the model's underlying data. We contribute a design study in collaboration with an automotive company. A formative evaluation of RfX was carried out with two domain experts, and a summative evaluation took the form of a field study with five domain experts.
In this context, analyses made with RfX revealed new hidden patterns, such as increased eccentricities in an engine's rotor detected by observing secondary excitations of its bearings. Rules derived from analyses with the system led to a change in the company's testing procedures for electrical engines, which reduced testing time by 80% for over 30% of all components.

Item Open Access
Group diagrams for simplified representation of scanpaths (2023)
Schäfer, Peter; Rodrigues, Nils; Weiskopf, Daniel; Storandt, Sabine
We employ Group Diagrams (GDs) to reduce clutter in sets of eye-tracking scanpaths. Group Diagrams consist of trajectory subsets that cover, or represent, the whole set of trajectories with respect to some distance measure and an adjustable distance threshold. The original GDs allow for the application of various distance measures. We implement the GD framework and evaluate it on scanpaths that were collected in an earlier user study on public transit maps. We find that the Fréchet distance is the most appropriate measure for obtaining meaningful results, yet it is flexible enough to cover outliers. We discuss several implementation-specific challenges and improve the scalability of the algorithm. To evaluate our results, we conducted a qualitative study with a group of eye-tracking experts. Finally, we note that our enhancements are also beneficial within the original problem setting, suggesting that our approach might be applicable to various types of input data.

Item Open Access
Scanpath prediction on information visualisations (2023)
Wang, Yao; Bâce, Mihai; Bulling, Andreas
We propose the Unified Model of Saliency and Scanpaths (UMSS), a model that learns to predict multi-duration saliency and scanpaths (i.e., sequences of eye fixations) on information visualisations.
Although scanpaths provide rich information about the importance of different visualisation elements during the visual exploration process, prior work has been limited to predicting aggregated attention statistics, such as visual saliency. We present in-depth analyses of gaze behaviour for different information visualisation elements (e.g., Title, Label, Data) on the popular MASSVIS dataset. We show that while gaze patterns are, overall, surprisingly consistent across visualisations and viewers, there are also structural differences in gaze dynamics for different elements. Informed by our analyses, UMSS first predicts multi-duration element-level saliency maps and then probabilistically samples scanpaths from them. Extensive experiments on MASSVIS show that our method consistently outperforms state-of-the-art methods with respect to several widely used scanpath and saliency evaluation metrics. Our method achieves a relative improvement in sequence score of 11.5% for scanpath prediction and a relative improvement in Pearson correlation coefficient of up to 23.6% for saliency prediction. These results are promising and point towards richer user models and simulations of visual attention on visualisations without the need for any eye-tracking equipment.

Item Open Access
Case study on privacy-aware social media data processing in disaster management (2020)
Löchner, Marc; Fathi, Ramian; Schmid, David ‘-1’; Dunkel, Alexander; Burghardt, Dirk; Fiedrich, Frank; Koch, Steffen
Social media data is heavily used to analyze and evaluate situations in times of disaster and to derive decisions for action from it. In these critical situations, it is not surprising that privacy is often considered a secondary problem. In order to prevent subsequent abuse, theft, or public exposure of collected datasets, however, protecting the privacy of social media users is crucial. Avoiding unnecessary data retention is an important and currently largely unsolved problem.
There are a number of technical approaches available, but their deployment in disaster management is either impractical or requires special adaptation, limiting their utility. In this case study, we explore the deployment of a cardinality estimation algorithm called HyperLogLog in disaster management processes. It is particularly suited to this field because it allows data to be streamed in a format that cannot be used for purposes other than those originally intended. We developed and conducted a focus group discussion with teams of social media analysts. We identify challenges and opportunities of working with such a privacy-enhanced social media data format and compare the process with conventional techniques. Our findings show that, with the exception of training scenarios, deploying HyperLogLog in the data acquisition process will not disrupt the data analysis process. Instead, several benefits, such as improved handling of huge datasets, may contribute to a more widespread use and adoption of the presented technique, which provides a basis for a better integration of privacy considerations into disaster management.

Item Open Access
Efficient and robust background modeling with dynamic mode decomposition (2022)
Krake, Tim; Bruhn, Andrés; Eberhardt, Bernhard; Weiskopf, Daniel
A large number of modern video background modeling algorithms deal with computationally costly minimization problems that often require parameter adjustments. While in most cases spatial and temporal constraints are added artificially to the minimization process, our approach is to exploit Dynamic Mode Decomposition (DMD), a spectral decomposition technique that naturally extracts spatio-temporal patterns from data. Applied to video data, DMD can compute background models. However, the original DMD algorithm for background modeling is neither efficient nor robust. In this paper, we present an equivalent reformulation with constraints leading to a more suitable decomposition into foreground and background.
Due to this reformulation, which uses sparse and low-dimensional structures, an efficient and robust algorithm is derived that computes accurate background models. Moreover, we show how our approach can be extended to RGB data, data with periodic parts, and streaming data, enabling versatile use.
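The frame interpolation benchmark study above reconstructs absolute quality scale values from forced-choice paired comparisons via Thurstone's model. As a rough illustration of the idea only (not the authors' implementation), the classic Thurstone Case V reconstruction can be sketched in a few lines; the win-count example and the 0.01 clamping bound are arbitrary choices for this sketch:

```python
from statistics import NormalDist

def thurstone_scale(wins):
    """Reconstruct quality scale values (Thurstone Case V) from a matrix
    of paired-comparison win counts, where wins[i][j] is the number of
    votes preferring item i over item j. Returns one scale value per
    item, mean-centred around zero."""
    n = len(wins)
    norm = NormalDist()
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # Clamp proportions away from 0/1 so the inverse CDF stays finite.
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            z[i][j] = norm.inv_cdf(p)
    # Case V: an item's scale value is its mean z-score against all others.
    return [sum(row) / (n - 1) for row in z]
```

Because the z-matrix is antisymmetric, the resulting scale values sum to zero; only the relative distances between items are meaningful, which is exactly what a re-ranking of algorithms requires.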
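The group diagram abstract above singles out the Fréchet distance as the most appropriate measure for comparing scanpath trajectories. As a minimal sketch (an illustration under simplified assumptions, not the paper's implementation), the discrete variant can be computed with the well-known Eiter-Mannila dynamic program over polylines given as point lists:

```python
def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines, each a list of
    (x, y) points, via dynamic programming over the coupling table."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    n, m = len(p), len(q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = dist(p[i], q[j])
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], cost)
            else:
                # Walk both curves forward without backtracking; keep the
                # smallest "leash length" that admits such a traversal.
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1],
                                   ca[i][j - 1]), cost)
    return ca[-1][-1]
```

For two parallel horizontal segments one unit apart, the distance is 1.0 regardless of how many sample points each scanpath contains, which is what makes the measure robust to differing fixation counts.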
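The disaster management case study above relies on HyperLogLog to count distinct users in a stream without retaining the items themselves. A minimal, self-contained sketch of that idea follows; the register count, hash function, and correction constants here are textbook defaults, not choices taken from the study:

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog sketch: estimates the number of distinct items
    in a stream while storing only 2**p small registers, never the items."""

    def __init__(self, p=12):
        self.p = p
        self.m = 1 << p
        self.registers = [0] * self.m
        # Standard bias-correction constant for m >= 128.
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)                # first p bits pick a register
        rest = h & ((1 << (64 - self.p)) - 1)   # remaining bits give the rank
        rank = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def count(self):
        est = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        # Small-range correction: fall back to linear counting while many
        # registers are still empty.
        zeros = self.registers.count(0)
        if est <= 2.5 * self.m and zeros:
            est = self.m * math.log(self.m / zeros)
        return int(est)
```

Because adding an item only updates a register maximum, re-adding already seen items leaves the sketch unchanged, and the registers alone cannot be used to recover who was counted, which is the privacy property the case study exploits.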