05 Faculty of Computer Science, Electrical Engineering and Information Technology

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 7 of 7
  • Item (Open Access)
    VisRecall++: analysing and predicting visualisation recallability from gaze behaviour
    (2024) Wang, Yao; Jiang, Yue; Hu, Zhiming; Ruhdorfer, Constantin; Bâce, Mihai; Bulling, Andreas
    Question answering has recently been proposed as a promising means to assess the recallability of information visualisations. However, prior work has not yet studied the link between visually encoding a visualisation in memory and recall performance. To fill this gap, we propose VisRecall++ - a novel 40-participant recallability dataset that contains gaze data on 200 visualisations and five question types, such as identifying the title or finding extreme values. We measured recallability by asking participants questions after they had observed a visualisation for 10 seconds. Our analyses reveal several insights, for example that saccade amplitude, number of fixations, and fixation duration differ significantly between high- and low-recallability groups. Finally, we propose GazeRecallNet - a novel computational method to predict recallability from gaze behaviour that outperforms several baselines on this task. Taken together, our results shed light on assessing recallability from gaze behaviour and inform future work on recallability-based visualisation optimisation.
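    For readers less familiar with eye tracking, the gaze features named above can be pictured with a minimal Python sketch: given a list of fixations, it derives the number of fixations, the mean fixation duration, and the mean saccade amplitude (the Euclidean distance between consecutive fixations). The event format and field names are illustrative assumptions, not the VisRecall++ processing pipeline.

        import math

        # Each fixation: an on-screen position in pixels plus a duration in
        # milliseconds. This simple event format is assumed for illustration.
        fixations = [
            {"x": 120.0, "y": 80.0, "duration_ms": 210},
            {"x": 340.0, "y": 95.0, "duration_ms": 180},
            {"x": 355.0, "y": 260.0, "duration_ms": 240},
        ]

        def gaze_features(fixations):
            """Derive the three gaze features discussed in the abstract."""
            n_fixations = len(fixations)
            mean_duration = sum(f["duration_ms"] for f in fixations) / n_fixations
            # Saccade amplitude: distance between consecutive fixations.
            amplitudes = [
                math.hypot(b["x"] - a["x"], b["y"] - a["y"])
                for a, b in zip(fixations, fixations[1:])
            ]
            mean_amplitude = sum(amplitudes) / len(amplitudes) if amplitudes else 0.0
            return {
                "num_fixations": n_fixations,
                "mean_fixation_duration_ms": mean_duration,
                "mean_saccade_amplitude_px": mean_amplitude,
            }

        print(gaze_features(fixations))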
  • Item (Open Access)
    SalChartQA: question-driven saliency on information visualisations
    (2024) Wang, Yao; Wang, Weitian; Abdelhafez, Abdullah; Elfares, Mayar; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
    Understanding the link between visual attention and users' needs when visually exploring information visualisations is under-explored due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap, we introduce SalChartQA - a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a question-answering (QA) paradigm to induce different information needs in users. SalChartQA contains 74,340 answers to 6,000 questions on 3,000 visualisations. Informed by our analyses demonstrating the tight correlation between the question and visual saliency, we propose the first computational method to predict question-driven saliency on information visualisations. Our method outperforms state-of-the-art saliency models, improving several metrics, such as the correlation coefficient and the Kullback-Leibler divergence. These results underline the importance of information needs in shaping attention behaviour and pave the way for new applications, such as task-driven optimisation of visualisations or explainable AI in chart question-answering.
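    As context for the evaluation metrics mentioned above, the sketch below shows one common way to compute the correlation coefficient (CC) and the Kullback-Leibler divergence (KLD) between a predicted and a ground-truth saliency map. It is a generic illustration assuming dense 2D maps and is not the evaluation code used for SalChartQA.

        import numpy as np

        def normalise(saliency_map, eps=1e-8):
            """Scale a non-negative map so it sums to one (a distribution)."""
            saliency_map = np.asarray(saliency_map, dtype=np.float64)
            return saliency_map / (saliency_map.sum() + eps)

        def correlation_coefficient(pred, gt):
            """Pearson correlation between two maps, flattened to vectors."""
            return float(np.corrcoef(np.ravel(pred), np.ravel(gt))[0, 1])

        def kl_divergence(pred, gt, eps=1e-8):
            """KL(gt || pred) with both maps treated as distributions."""
            p, g = normalise(pred), normalise(gt)
            return float(np.sum(g * np.log((g + eps) / (p + eps))))

        # Toy 2x2 maps, purely for illustration.
        pred = np.array([[0.1, 0.4], [0.3, 0.2]])
        gt = np.array([[0.05, 0.5], [0.25, 0.2]])
        print(correlation_coefficient(pred, gt), kl_divergence(pred, gt))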
  • Item (Open Access)
    Impact of gaze uncertainty on AOIs in information visualisations
    (2022) Wang, Yao; Koch, Maurice; Bâce, Mihai; Weiskopf, Daniel; Bulling, Andreas
    Gaze-based analysis of areas of interest (AOIs) is widely used in information visualisation research to understand how people explore visualisations or to assess the quality of visualisations with respect to key characteristics such as memorability. However, nearby AOIs in visualisations amplify the uncertainty caused by the gaze estimation error, which strongly influences the mapping between gaze samples or fixations and different AOIs. We contribute a novel investigation into gaze uncertainty and quantify its impact on AOI-based analysis of visualisations using two novel metrics: the Flipping Candidate Rate (FCR) and the Hit Any AOI Rate (HAAR). Our analysis of 40 real-world visualisations, including human gaze and AOI annotations, shows that gaze uncertainty frequently and significantly impacts the analysis conducted in AOI-based studies. Moreover, we analysed four visualisation types and found that bar and scatter plots are usually designed in a way that causes more uncertainty than line and pie plots in gaze-based analysis.
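    The exact definitions of FCR and HAAR are given in the paper; the rough sketch below only illustrates the underlying idea under two simplifying assumptions (axis-aligned rectangular AOIs and a single fixed gaze-error radius): a fixation is a flipping candidate if more than one AOI lies within the error radius, and it hits any AOI if at least one does.

        import math

        # AOIs as axis-aligned rectangles (x_min, y_min, x_max, y_max); the
        # rectangle format and the fixed error radius are assumptions made
        # only for this illustration.
        aois = {"title": (0, 0, 400, 40), "legend": (300, 50, 400, 120)}
        fixations = [(350.0, 45.0), (200.0, 20.0), (50.0, 300.0)]
        error_radius = 15.0  # assumed gaze-estimation error in pixels

        def aois_within_radius(point, aois, radius):
            """Return all AOIs whose rectangle lies within `radius` of the point."""
            x, y = point
            hits = []
            for name, (x0, y0, x1, y1) in aois.items():
                # Distance from the point to the rectangle (0 if inside it).
                dx = max(x0 - x, 0.0, x - x1)
                dy = max(y0 - y, 0.0, y - y1)
                if math.hypot(dx, dy) <= radius:
                    hits.append(name)
            return hits

        candidates = [aois_within_radius(f, aois, error_radius) for f in fixations]
        # Flipping candidates: fixations that could map to more than one AOI
        # under the assumed error radius; hits: fixations reaching any AOI.
        fcr = sum(len(c) > 1 for c in candidates) / len(fixations)
        haar = sum(len(c) >= 1 for c in candidates) / len(fixations)
        print(fcr, haar)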
  • Item (Open Access)
    VisRecall: quantifying information visualisation recallability via question answering
    (2022) Wang, Yao; Jiao, Chuhan; Bâce, Mihai; Bulling, Andreas
    Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. In this work, we propose a question-answering paradigm to study visualisation recallability and present VisRecall - a novel dataset consisting of 200 visualisations that are annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types. Furthermore, we present the first computational method to predict recallability of different visualisation elements, such as the title or specific data values. We report detailed analyses of our method on VisRecall and demonstrate that it outperforms several baselines in overall recallability and FE-, F-, RV-, and U-question recallability. Our work makes fundamental contributions towards a new generation of methods to assist designers in optimising visualisations.
  • Item (Open Access)
    Saliency3D: a 3D saliency dataset collected on screen
    (2024) Wang, Yao; Dai, Qi; Bâce, Mihai; Klein, Karsten; Bulling, Andreas
    While visual saliency has recently been studied in 3D, the experimental setup for collecting 3D saliency data can be expensive and cumbersome. To address this challenge, we propose a novel experimental design that utilises an eye tracker on a screen to collect 3D saliency data, reducing the cost and complexity of 3D saliency dataset collection. We first collect gaze data on a screen and then map it to 3D saliency data through perspective transformation. Using this method, we collect a 3D saliency dataset (49,276 fixations) comprising 10 participants looking at 16 objects. Moreover, we examine viewing preferences for the objects and discuss our findings. Our results indicate potential preferred viewing directions and a correlation between salient features and the variation in viewing directions.
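    The screen-to-3D mapping can be pictured with a generic unprojection sketch: an on-screen gaze sample is turned into a viewing ray using assumed pinhole-camera intrinsics and intersected with the object geometry (a sphere stands in for the rendered mesh here). The intrinsics, the sphere, and the function names are illustrative assumptions, not the authors' actual pipeline.

        import numpy as np

        # Assumed pinhole intrinsics (focal lengths and principal point, in
        # pixels); placeholder values, not the study's calibration.
        fx, fy, cx, cy = 1200.0, 1200.0, 960.0, 540.0

        def gaze_ray(u, v):
            """Turn an on-screen gaze point (u, v) into a unit viewing ray in
            camera coordinates, assuming a pinhole projection model."""
            direction = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            return direction / np.linalg.norm(direction)

        def intersect_sphere(ray, centre, radius):
            """Intersect a ray from the camera origin with a sphere standing
            in for the rendered object; return the nearest hit or None."""
            proj = float(np.dot(ray, centre))                 # along-ray distance
            disc = proj ** 2 - (float(np.dot(centre, centre)) - radius ** 2)
            if disc < 0:
                return None
            t = proj - disc ** 0.5                            # nearest root
            return t * ray if t > 0 else None

        # A gaze sample near the screen centre hitting an object 2 m away.
        ray = gaze_ray(1000.0, 560.0)
        print(intersect_sphere(ray, centre=np.array([0.0, 0.0, 2.0]), radius=0.5))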
  • Item (Open Access)
    Scanpath prediction on information visualisations
    (2023) Wang, Yao; Bâce, Mihai; Bulling, Andreas
    We propose the Unified Model of Saliency and Scanpaths (UMSS) - a model that learns to predict multi-duration saliency and scanpaths (i.e. sequences of eye fixations) on information visualisations. Although scanpaths provide rich information about the importance of different visualisation elements during the visual exploration process, prior work has been limited to predicting aggregated attention statistics, such as visual saliency. We present in-depth analyses of gaze behaviour for different information visualisation elements (e.g. Title, Label, Data) on the popular MASSVIS dataset. We show that while, overall, gaze patterns are surprisingly consistent across visualisations and viewers, there are also structural differences in gaze dynamics for different elements. Informed by our analyses, UMSS first predicts multi-duration element-level saliency maps and then probabilistically samples scanpaths from them. Extensive experiments on MASSVIS show that our method consistently outperforms state-of-the-art methods with respect to several widely used scanpath and saliency evaluation metrics. Our method achieves a relative improvement in sequence score of 11.5 % for scanpath prediction and a relative improvement in Pearson correlation coefficient of up to 23.6 % for saliency prediction. These results are auspicious and point towards richer user models and simulations of visual attention on visualisations without the need for any eye tracking equipment.
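    The final sampling step can be pictured with a small sketch that treats a saliency map as a probability distribution over pixel locations and draws a sequence of fixations from it. This is a deliberately simplified illustration of the general idea; it omits UMSS's multi-duration, element-level structure and any ordering constraints between fixations.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_scanpath(saliency_map, num_fixations=5):
            """Sample fixation positions from a saliency map interpreted as a
            probability distribution over pixel locations."""
            prob = np.asarray(saliency_map, dtype=np.float64)
            prob = prob / prob.sum()
            flat = rng.choice(prob.size, size=num_fixations, p=prob.ravel())
            rows, cols = np.unravel_index(flat, prob.shape)
            return list(zip(rows.tolist(), cols.tolist()))

        # Toy 4x4 saliency map with most of its mass in the top-right corner.
        saliency = np.array([
            [0.01, 0.02, 0.10, 0.40],
            [0.01, 0.02, 0.05, 0.20],
            [0.01, 0.01, 0.02, 0.05],
            [0.01, 0.01, 0.01, 0.07],
        ])
        print(sample_scanpath(saliency))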
  • Item (Open Access)
    Analysis and modelling of visual attention on information visualisations
    (2024) Wang, Yao; Bulling, Andreas (Prof. Dr.)
    Understanding and predicting human visual attention have emerged as key research topics in information visualisation research. Knowing where users might look provides rich information on how users perceive, explore, and recall information from visualisations. However, understanding and predicting human visual attention on information visualisations is still severely limited. First, eye tracking datasets on information visualisations are limited in size and viewing conditions. They typically contain hundreds of stimuli, whereas thousands are usually required to train a deep-learning model that generalises well when predicting visual attention. Moreover, top-down factors such as tasks strongly influence human visual attention on information visualisations, yet existing eye tracking datasets cover too few viewing conditions for a thorough analysis. Second, computational visual attention models do not perform well on visualisations. Information visualisations are fundamentally different from natural images, as they contain more text (e.g. titles, axis labels, or legends) as well as larger areas of uniform colour. Computational visual attention models can predict attention distributions over an image, i.e. saliency maps, without the need for any eye tracking equipment. However, current visual attention models are primarily designed for natural images and do not generalise to information visualisations. This thesis aims to investigate human visual attention on information visualisations and to develop computational models for predicting saliency maps and visual scanpaths. To achieve this goal, the thesis, comprising five scientific publications, progresses through four key stages. First, it addresses the scarcity of visual attention data in the field by collecting two novel datasets: the SalChartQA dataset contains 6,000 question-driven saliency maps on information visualisations, while the VisRecall++ dataset contains gaze data from 40 participants together with their answers to recallability questions. Second, based on the collected and publicly available visual attention data, the thesis investigates multi-duration saliency of different visualisation elements and attention behaviour under recallability and question-answering tasks, and proposes two metrics to quantify the impact of gaze uncertainty on AOI-based visual analysis. Third, building on these insights, two visual attention and scanpath prediction models are proposed: VisSalFormer is the first model to predict question-driven saliency, outperforming existing baselines on all saliency metrics, and the Unified Model of Saliency and Scanpaths predicts scanpaths probabilistically, achieving significant improvements in scanpath metrics. Fourth, the thesis proposes a question-answering paradigm to quantify visualisation recallability and establishes connections between gaze behaviour and recallability, enabling predictions of recallability from gaze data.