05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6


Search Results

Now showing 1 - 7 of 7
  • VisRecall++: analysing and predicting visualisation recallability from gaze behaviour (Open Access)
    (2024) Wang, Yao; Jiang, Yue; Hu, Zhiming; Ruhdorfer, Constantin; Bâce, Mihai; Bulling, Andreas
    Question answering has recently been proposed as a promising means to assess the recallability of information visualisations. However, prior work has yet to study the link between visually encoding a visualisation in memory and recall performance. To fill this gap, we propose VisRecall++ - a novel 40-participant recallability dataset that contains gaze data on 200 visualisations and five question types, such as identifying the title or finding extreme values. We measured recallability by asking participants questions after they observed each visualisation for 10 seconds. Our analyses reveal several insights, for example that saccade amplitude, number of fixations, and fixation duration differ significantly between high- and low-recallability groups. Finally, we propose GazeRecallNet - a novel computational method to predict recallability from gaze behaviour that outperforms several baselines on this task. Taken together, our results shed light on assessing recallability from gaze behaviour and inform future work on recallability-based visualisation optimisation.
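    The gaze measures named above are standard aggregate statistics over a fixation sequence. Below is a minimal sketch (not the authors' code) of how they can be computed; the fixation record format (x, y, duration in ms) is an assumption, not the dataset's actual schema.
    ```python
    import math

    def gaze_features(fixations):
        """Compute the three gaze measures named in the abstract.

        fixations: non-empty list of (x, y, duration_ms) tuples in scan order.
        """
        num_fixations = len(fixations)
        mean_duration = sum(d for _, _, d in fixations) / num_fixations
        # Saccade amplitude: Euclidean distance between consecutive fixations.
        amplitudes = [
            math.hypot(x2 - x1, y2 - y1)
            for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])
        ]
        mean_amplitude = sum(amplitudes) / len(amplitudes) if amplitudes else 0.0
        return {
            "num_fixations": num_fixations,
            "mean_fixation_duration_ms": mean_duration,
            "mean_saccade_amplitude_px": mean_amplitude,
        }
    ```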
  • SalChartQA: question-driven saliency on information visualisations (Open Access)
    (2024) Wang, Yao; Wang, Weitian; Abdelhafez, Abdullah; Elfares, Mayar; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
    Understanding the link between visual attention and users’ needs when visually exploring information visualisations is under-explored due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap, we introduce SalChartQA - a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a question-answering (QA) paradigm to induce different information needs in users. SalChartQA contains 74,340 answers to 6,000 questions on 3,000 visualisations. Informed by our analyses demonstrating the tight correlation between the question and visual saliency, we propose the first computational method to predict question-driven saliency on information visualisations. Our method outperforms state-of-the-art saliency models, improving several metrics, such as the correlation coefficient and the Kullback-Leibler divergence. These results show the importance of information needs for shaping attention behaviour and pave the way for new applications, such as task-driven optimisation of visualisations or explainable AI in chart question answering.
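    The correlation coefficient (CC) and Kullback-Leibler divergence (KL) mentioned above are standard saliency-evaluation metrics. The sketch below shows their common definitions for a predicted and a ground-truth saliency map; it is illustrative, not the authors' evaluation code.
    ```python
    import numpy as np

    def cc(pred, gt):
        # Pearson correlation between the two maps after standardisation.
        p = (pred - pred.mean()) / (pred.std() + 1e-8)
        g = (gt - gt.mean()) / (gt.std() + 1e-8)
        return float((p * g).mean())

    def kl_div(pred, gt, eps=1e-8):
        # KL(gt || pred) with both maps normalised to probability distributions.
        p = pred / (pred.sum() + eps)
        g = gt / (gt.sum() + eps)
        return float((g * np.log(g / (p + eps) + eps)).sum())
    ```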
  • Mouse2Vec: learning reusable semantic representations of mouse behaviour (Open Access)
    (2024) Zhang, Guanhua; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
    The mouse is a pervasive input device used for a wide range of interactive applications. However, computational modelling of mouse behaviour typically requires the time-consuming design and extraction of handcrafted features, or relies on application-specific approaches. We instead propose Mouse2Vec - a novel self-supervised method designed to learn semantic representations of mouse behaviour that are reusable across users and applications. Mouse2Vec uses a Transformer-based encoder-decoder architecture tailored to mouse data: during pretraining, the encoder learns an embedding of input mouse trajectories while the decoder reconstructs the input and simultaneously detects mouse click events. We show that the representations learned by our method can identify interpretable mouse behaviour clusters and retrieve similar mouse trajectories. We also demonstrate on three sample downstream tasks that the representations can be practically used to augment mouse data for training supervised methods and serve as an effective feature extractor.
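    A minimal sketch of the kind of architecture described above - a Transformer encoder-decoder whose decoder both reconstructs the trajectory and detects click events - is given below. All dimensions, layer counts, and the (x, y) input format are illustrative assumptions, not the released Mouse2Vec model.
    ```python
    import torch
    import torch.nn as nn

    class MouseEncoderDecoder(nn.Module):
        def __init__(self, feat_dim=2, d_model=64, nhead=4, num_layers=2):
            super().__init__()
            self.embed = nn.Linear(feat_dim, d_model)  # (x, y) per time step
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=nhead,
                num_encoder_layers=num_layers, num_decoder_layers=num_layers,
                batch_first=True,
            )
            self.reconstruct = nn.Linear(d_model, feat_dim)  # trajectory head
            self.click = nn.Linear(d_model, 1)               # click-event head

        def forward(self, traj):  # traj: (batch, time, feat_dim)
            x = self.embed(traj)
            h = self.transformer(x, x)
            return self.reconstruct(h), torch.sigmoid(self.click(h))
    ```
    Pretraining would combine a reconstruction loss (e.g. MSE) with a binary click-detection loss (e.g. BCE) on the two heads; only the encoder's embeddings would then be reused downstream.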
  • PrivacyScout: assessing vulnerability to shoulder surfing on mobile devices (Open Access)
    (2022) Bâce, Mihai; Saad, Alia; Khamis, Mohamed; Schneegass, Stefan; Bulling, Andreas
    One approach to mitigate shoulder surfing attacks on mobile devices is to detect the presence of a bystander using the phone’s front-facing camera. However, a person’s face in the camera’s field of view does not always indicate an attack. To overcome this limitation, in a novel data collection study (N=16), we analysed the influence of three viewing angles and four distances on the success of shoulder surfing attacks. In contrast to prior works that mainly focused on user authentication, we investigated three common types of content susceptible to shoulder surfing: text, photos, and PIN authentications. We show that the vulnerability of text and photos depends on the observer’s location relative to the device, while PIN authentications are vulnerable independent of the observation location. We then present PrivacyScout - a novel method that predicts the shoulder-surfing risk based on visual features extracted from the observer’s face as captured by the front-facing camera. Finally, evaluations on data from our collection study demonstrate the feasibility of our method to assess the risk of a shoulder surfing attack more accurately.
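    As a hedged illustration of the general idea - estimating the observer's distance and viewing angle from face features seen by the front-facing camera and mapping them to a risk score - consider the sketch below. The pinhole-style distance estimate and all thresholds are hypothetical placeholders, not the paper's actual model or parameters.
    ```python
    def shoulder_surfing_risk(face_width_px, yaw_deg,
                              ref_width_px=120.0, ref_distance_m=0.4):
        # Pinhole approximation: apparent face width shrinks with distance,
        # calibrated against a reference face width at a known distance.
        distance_m = ref_distance_m * ref_width_px / max(face_width_px, 1.0)
        # Closer observers and more frontal viewing angles imply higher risk.
        distance_score = max(0.0, 1.0 - distance_m / 1.5)  # 0 beyond ~1.5 m
        angle_score = max(0.0, 1.0 - abs(yaw_deg) / 60.0)  # 0 beyond ~60 deg
        return distance_score * angle_score                # risk in [0, 1]
    ```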
  • VisRecall: quantifying information visualisation recallability via question answering (Open Access)
    (2022) Wang, Yao; Jiao, Chuhan; Bâce, Mihai; Bulling, Andreas
    Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. In this work, we propose a question-answering paradigm to study visualisation recallability and present VisRecall - a novel dataset consisting of 200 visualisations that are annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types. Furthermore, we present the first computational method to predict recallability of different visualisation elements, such as the title or specific data values. We report detailed analyses of our method on VisRecall and demonstrate that it outperforms several baselines in overall recallability and FE-, F-, RV-, and U-question recallability. Our work makes fundamental contributions towards a new generation of methods to assist designers in optimising visualisations.
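    Under such a QA paradigm, a per-visualisation recallability score can be aggregated from the correctness of crowd-sourced answers, optionally per question type. The sketch below illustrates this; the record schema is assumed, and the exact scoring used in VisRecall may differ.
    ```python
    from collections import defaultdict

    def recallability_scores(answers):
        """answers: list of dicts with keys 'vis_id', 'qtype', 'correct' (bool)."""
        per_vis = defaultdict(lambda: defaultdict(list))
        for a in answers:
            per_vis[a["vis_id"]][a["qtype"]].append(a["correct"])
        # Fraction of correct answers per visualisation and question type.
        return {
            vis: {qt: sum(v) / len(v) for qt, v in per_type.items()}
            for vis, per_type in per_vis.items()
        }
    ```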
  • Saliency3D: a 3D saliency dataset collected on screen (Open Access)
    (2024) Wang, Yao; Dai, Qi; Bâce, Mihai; Klein, Karsten; Bulling, Andreas
    While visual saliency has recently been studied in 3D, the experimental setup for collecting 3D saliency data can be expensive and cumbersome. To address this challenge, we propose a novel experimental design that uses an eye tracker on a screen to collect 3D saliency data. Our experimental design reduces the cost and complexity of 3D saliency dataset collection. We first collect gaze data on a screen and then map it to 3D saliency data through perspective transformation. Using this method, we collect a 3D saliency dataset (49,276 fixations) comprising 10 participants looking at 16 objects. Moreover, we examine participants' viewing preferences for the objects and discuss our findings. Our results indicate potential preferred viewing directions and a correlation between salient features and the variation in viewing directions.
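    The mapping step can be pictured as unprojecting a 2D gaze point through the rendering camera into a world-space viewing ray; intersecting that ray with the displayed object then yields a 3D fixation. The sketch below assumes a standard pinhole camera model and is not the authors' pipeline.
    ```python
    import numpy as np

    def gaze_ray(u, v, fx, fy, cx, cy, cam_to_world):
        """u, v: gaze point in pixels; fx, fy, cx, cy: pinhole intrinsics;
        cam_to_world: 4x4 camera pose matrix."""
        d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # camera space
        d_world = cam_to_world[:3, :3] @ d_cam                 # rotate to world
        origin = cam_to_world[:3, 3]                           # camera centre
        return origin, d_world / np.linalg.norm(d_world)
    ```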
  • Designing for noticeability: understanding the impact of visual importance on desktop notifications (Open Access)
    (2022) Müller, Philipp; Staal, Sander; Bâce, Mihai; Bulling, Andreas
    Desktop notifications should be noticeable but are also subject to a number of design choices, e.g. concerning their size, placement, or opacity. It is currently unknown, however, how these choices interact with the desktop background and influence noticeability. To address this limitation, we introduce a software tool to automatically synthesise realistic-looking desktop images for major operating systems and applications. Using these images, we present a user study (N=34) to investigate the noticeability of notifications during a primary task. We are the first to show that the visual importance of the background at the notification location significantly impacts whether users detect notifications. We analyse the utility of visual importance to compensate for suboptimal design choices with respect to noticeability, e.g. small notification size. Finally, we introduce noticeability maps - 2D maps that encode the predicted noticeability across the desktop and inform designers how to trade off notification design and noticeability.
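    The core quantity above - the visual importance of the background at the notification's placement - can be read off an importance map. In the sketch below, the map is assumed to be a 2D array in [0, 1] produced by some visual-importance model, and the mean aggregation over the notification's footprint is illustrative.
    ```python
    import numpy as np

    def importance_at_notification(importance_map, x, y, w, h):
        """Mean background importance under a notification placed at (x, y)
        with size (w, h), in image coordinates within the map's bounds."""
        region = importance_map[y:y + h, x:x + w]
        return float(region.mean())
    ```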