05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 13
  • Item (Open Access)
    VisRecall++: analysing and predicting visualisation recallability from gaze behaviour
    (2024) Wang, Yao; Jiang, Yue; Hu, Zhiming; Ruhdorfer, Constantin; Bâce, Mihai; Bulling, Andreas
    Question answering has recently been proposed as a promising means to assess the recallability of information visualisations. However, prior work has not yet studied the link between visually encoding a visualisation in memory and recall performance. To fill this gap, we propose VisRecall++ - a novel 40-participant recallability dataset that contains gaze data on 200 visualisations and five question types, such as identifying the title and finding extreme values. We measured recallability by asking participants questions after they had observed each visualisation for 10 seconds. Our analyses reveal several insights, for example that saccade amplitude, number of fixations, and fixation duration differ significantly between high- and low-recallability groups. Finally, we propose GazeRecallNet - a novel computational method to predict recallability from gaze behaviour that outperforms several baselines on this task. Taken together, our results shed light on assessing recallability from gaze behaviour and inform future work on recallability-based visualisation optimisation.
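    The gaze features named in the abstract are standard eye-tracking measures. A minimal NumPy sketch (not the authors' code; fixations are assumed here as (x, y, duration-in-ms) tuples) for computing them:

        import numpy as np

        def gaze_features(fixations):
            """Features the abstract reports as differing between
            high- and low-recallability groups."""
            fix = np.asarray(fixations, dtype=float)  # rows: (x, y, duration_ms)
            # Saccade amplitude: Euclidean distance between consecutive fixations.
            amplitudes = np.linalg.norm(np.diff(fix[:, :2], axis=0), axis=1)
            return {
                "mean_saccade_amplitude": float(amplitudes.mean()),
                "num_fixations": len(fix),
                "mean_fixation_duration_ms": float(fix[:, 2].mean()),
            }

        # Example: two fixations 100 px apart, lasting 250 ms and 300 ms.
        print(gaze_features([(0, 0, 250), (100, 0, 300)]))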
  • Item (Open Access)
    SalChartQA: question-driven saliency on information visualisations
    (2024) Wang, Yao; Wang, Weitian; Abdelhafez, Abdullah; Elfares, Mayar; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
    Understanding the link between visual attention and users' needs when visually exploring information visualisations is under-explored, due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap, we introduce SalChartQA - a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a question-answering (QA) paradigm to induce different information needs in users. SalChartQA contains 74,340 answers to 6,000 questions on 3,000 visualisations. Informed by our analyses demonstrating the tight correlation between the question and visual saliency, we propose the first computational method to predict question-driven saliency on information visualisations. Our method outperforms state-of-the-art saliency models on several metrics, such as the correlation coefficient and the Kullback-Leibler divergence. These results underline the importance of information needs in shaping attention behaviour and pave the way for new applications, such as task-driven optimisation of visualisations or explainable AI in chart question answering.
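    The two metrics mentioned are standard saliency-evaluation measures rather than anything specific to SalChartQA; a minimal NumPy version (not the authors' evaluation code) might look like:

        import numpy as np

        def cc(pred, gt):
            """Pearson correlation coefficient between two saliency maps."""
            p = (pred - pred.mean()) / (pred.std() + 1e-8)
            g = (gt - gt.mean()) / (gt.std() + 1e-8)
            return float((p * g).mean())

        def kld(pred, gt, eps=1e-8):
            """KL divergence from the predicted to the ground-truth map,
            with both maps normalised to probability distributions."""
            p = pred / (pred.sum() + eps)
            g = gt / (gt.sum() + eps)
            return float(np.sum(g * np.log(g / (p + eps) + eps)))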
  • Item (Open Access)
    Usable and fast interactive mental face reconstruction
    (2023) Strohm, Florian; Bâce, Mihai; Bulling, Andreas
    We introduce an end-to-end interactive system for mental face reconstruction - the challenging task of visually reconstructing a face image a person only has in their mind. In contrast to existing methods that suffer from low usability and high mental load, our approach only requires the user to rank images over multiple iterations according to their perceived similarity with the mental image. Based on these rankings, our mental face reconstruction system extracts image features in each iteration, combines them into a joint feature vector, and then uses a generative model to visually reconstruct the mental image. To avoid the need for collecting large amounts of human training data, we further propose a computational user model that can simulate human ranking behaviour, using data from an online crowd-sourcing study (N=215). Results from a 12-participant user study show that our method can reconstruct mental images that are visually similar to those of existing approaches, while offering significantly higher usability, lower perceived workload, and faster reconstruction. In addition, results from a third, 22-participant lineup study, in which we validated our reconstructions on a face ranking task, show an identification rate of , which is in line with prior work. These results represent an important step towards new interactive intelligent systems that can robustly and effortlessly reconstruct a user's mental image.
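    The rank-and-combine loop described above can be illustrated with an entirely synthetic simulation, loosely in the spirit of the paper's computational user model: random vectors stand in for image features and cosine similarity stands in for the user's perceived similarity. This is a toy sketch, not the authors' system:

        import numpy as np

        rng = np.random.default_rng(0)
        target = rng.normal(size=64)    # stand-in for the mental image's features
        estimate = rng.normal(size=64)  # current reconstruction estimate

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        for it in range(20):
            # Show candidate images sampled around the current estimate ...
            candidates = estimate + rng.normal(scale=0.5, size=(8, 64))
            # ... the (simulated) user ranks them by similarity to the mental image ...
            order = np.argsort([-cosine(c, target) for c in candidates])
            # ... and the system combines features, weighting better ranks higher.
            weights = np.linspace(1.0, 0.0, num=8)
            estimate = np.average(candidates[order], axis=0, weights=weights)
            print(it, round(cosine(estimate, target), 3))  # similarity grows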
  • Item (Open Access)
    Dynamic ontology supported user interface for personalized decision support
    (2012) Bosch, Harald; Thom, Dennis; Heinze, Geoffrey-Alexeij; Wokusch, Stefan; Ertl, Thomas
    European citizens are increasingly aware of the influence of air quality and weather on their health and quality of life. At the same time, more environmental information is freely available through a plethora of websites, dedicated portals, and web services. In order to exploit these data for personal decisions, one has to identify, retrieve, and combine the information that is relevant to one's personal situation, planned activity, and information need. Often, this task is hindered by differing data formats, display styles, and data resolutions. The PESCaDO system is a web-based decision support system addressing this issue. The inquiry to the system, as well as the system's result, can cover a broad range of environmental aspects and personal situations and is therefore quite complex. In this work, we present a novel approach to how the system can actively assist users in all steps of the decision-making process, especially by enhancing the user interaction. The approach combines an intelligent dialog steering method, based on analyzing the domain ontology, with flexible, dynamic data visualizations for a situation-dependent orchestration of data sources. Both aspects have been evaluated in online user studies as well as in an expert evaluation of the whole system.
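    As a purely illustrative toy sketch of ontology-driven dialog steering (all names hypothetical; PESCaDO's actual ontology and dialog logic are far richer), a system can walk the requirements attached to a planned activity and ask only for the inputs that are still missing:

        # Hypothetical mini-"ontology" as a plain dict: activities and the
        # inputs a decision-support query about them requires.
        ONTOLOGY = {
            "outdoor_sports": {"requires": ["location", "time", "pollen_sensitivity"]},
            "commute":        {"requires": ["location", "time"]},
        }

        def next_question(activity, known):
            for slot in ONTOLOGY[activity]["requires"]:
                if slot not in known:
                    return f"Please provide your {slot.replace('_', ' ')}."
            return None  # enough information: run the environmental queries

        print(next_question("outdoor_sports", {"location": "Stuttgart"}))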
  • Item (Open Access)
    SUPREYES: SUPer resolution for EYES using implicit neural representation learning
    (2023) Jiao, Chuhan; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
    We introduce SUPREYES - a novel self-supervised method to increase the spatio-temporal resolution of gaze data recorded using low(er)-resolution eye trackers. Despite continuing advances in eye tracking technology, the vast majority of current eye trackers - particularly mobile ones and those integrated into mobile devices - suffer from low-resolution gaze data, which fundamentally limits their practical usefulness. SUPREYES learns a continuous implicit neural representation from low-resolution gaze data to up-sample the gaze data to arbitrary resolutions. We compare our method with commonly used interpolation methods on arbitrary-scale super-resolution and demonstrate that SUPREYES outperforms these baselines by a significant margin. We also evaluate on the sample downstream task of gaze-based user identification and show that our method improves performance over the original low-resolution gaze data and outperforms the other baselines. These results are promising, as they open up a new direction for increasing eye tracking fidelity and enable new gaze-based applications without the need for new eye tracking equipment.
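    The core idea - fit a continuous function to low-resolution samples and then query it at any rate - can be sketched in a few lines of PyTorch. This toy version fits an implicit MLP directly to a synthetic 30-sample trajectory and resamples it at 10x the rate; SUPREYES itself is self-supervised and considerably more elaborate:

        import torch
        from torch import nn

        # Toy low-resolution signal: 30 samples of a smooth circular trajectory.
        t_low = torch.linspace(0, 1, 30).unsqueeze(1)
        gaze_low = torch.cat([torch.sin(6.28 * t_low), torch.cos(6.28 * t_low)], dim=1)

        # Implicit neural representation: an MLP mapping continuous time -> (x, y).
        net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, 2))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(2000):
            opt.zero_grad()
            nn.functional.mse_loss(net(t_low), gaze_low).backward()
            opt.step()

        # Query the fitted representation at an arbitrary (here 10x higher) rate.
        gaze_high = net(torch.linspace(0, 1, 300).unsqueeze(1)).detach()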
  • Item (Open Access)
    Impact of gaze uncertainty on AOIs in information visualisations
    (2022) Wang, Yao; Koch, Maurice; Bâce, Mihai; Weiskopf, Daniel; Bulling, Andreas
    Gaze-based analysis of areas of interest (AOIs) is widely used in information visualisation research to understand how people explore visualisations or to assess the quality of visualisations with respect to key characteristics such as memorability. However, nearby AOIs in visualisations amplify the uncertainty caused by gaze estimation error, which strongly influences the mapping between gaze samples or fixations and different AOIs. We contribute a novel investigation into gaze uncertainty and quantify its impact on AOI-based analysis of visualisations using two novel metrics: the Flipping Candidate Rate (FCR) and the Hit Any AOI Rate (HAAR). Our analysis of 40 real-world visualisations, with human gaze and AOI annotations, shows that gaze uncertainty frequently and significantly impacts the analyses conducted in AOI-based studies. Moreover, we analysed four visualisation types and found that bar and scatter plots are usually designed in a way that causes more uncertainty than line and pie plots in gaze-based analysis.
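    The formal definitions of FCR and HAAR are given in the paper; under one plausible reading (assumed here: a fixation is a flipping candidate if a gaze-error disc of radius r around it touches more than one AOI, and it hits any AOI if the disc touches at least one), they can be sketched as:

        import numpy as np

        def dist_to_rect(p, rect):
            """Distance from point p=(x, y) to an axis-aligned AOI rectangle
            rect=(x0, y0, x1, y1); zero if p lies inside."""
            x, y = p
            x0, y0, x1, y1 = rect
            return np.hypot(max(x0 - x, 0, x - x1), max(y0 - y, 0, y - y1))

        def fcr_haar(fixations, aois, r):
            flip = hit = 0
            for p in fixations:
                touched = sum(dist_to_rect(p, a) <= r for a in aois)
                flip += touched > 1   # could be mapped to more than one AOI
                hit += touched >= 1   # falls within error range of some AOI
            n = len(fixations)
            return flip / n, hit / n

        # Two adjacent AOIs; the middle fixation could flip between them.
        aois = [(0, 0, 10, 10), (12, 0, 22, 10)]
        print(fcr_haar([(5, 5), (11, 5), (30, 5)], aois, r=2.0))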
  • Item (Open Access)
    Mouse2Vec: learning reusable semantic representations of mouse behaviour
    (2024) Zhang, Guanhua; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
    The mouse is a pervasive input device used for a wide range of interactive applications. However, computational modelling of mouse behaviour typically requires the time-consuming design and extraction of handcrafted features, or relies on approaches that are application-specific. We instead propose Mouse2Vec - a novel self-supervised method designed to learn semantic representations of mouse behaviour that are reusable across users and applications. Mouse2Vec uses a Transformer-based encoder-decoder architecture specifically geared towards mouse data: during pretraining, the encoder learns an embedding of input mouse trajectories while the decoder reconstructs the input and simultaneously detects mouse click events. We show that the representations learned by our method can identify interpretable mouse behaviour clusters and retrieve similar mouse trajectories. We also demonstrate on three sample downstream tasks that the representations can be practically used to augment mouse data for training supervised methods and serve as an effective feature extractor.
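    A loose structural sketch of such an architecture in PyTorch (simple linear heads stand in for the paper's decoder; all hyperparameters are invented):

        import torch
        from torch import nn

        class MouseEncoderDecoder(nn.Module):
            """Sketch following the abstract: a Transformer encoder embeds a
            mouse trajectory; two heads reconstruct the input per time step
            and detect click events."""
            def __init__(self, d_in=3, d_model=64):
                super().__init__()
                self.proj = nn.Linear(d_in, d_model)  # per step: (x, y, dt)
                layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.recon_head = nn.Linear(d_model, d_in)
                self.click_head = nn.Linear(d_model, 1)

            def forward(self, x):  # x: (batch, seq_len, 3)
                h = self.encoder(self.proj(x))
                embedding = h.mean(dim=1)  # the reusable representation
                return embedding, self.recon_head(h), self.click_head(h)

        emb, recon, click_logits = MouseEncoderDecoder()(torch.randn(2, 100, 3))
        print(emb.shape)  # torch.Size([2, 64])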
  • Item (Open Access)
    Mid-air gestures for window management on large displays
    (2015) Lischke, Lars; Knierim, Pascal; Klinke, Hermann
    We can observe a continuing trend towards larger screens with higher resolutions and greater pixel density. With advances in hardware and software technology, wall-sized displays for daily office work are already on the horizon. We assume that there will be no hard paradigm change in interaction techniques in the near future; therefore, new concepts for wall-sized displays will be included in existing products. Designing interaction concepts for wall-sized displays in an office environment is a challenging task, and most crucial is the design of appropriate input techniques. Moving the mouse pointer from one corner to another over a long distance is cumbersome, yet pointing with a mouse is precise and commonplace. We therefore propose using mid-air gestures to support mouse and keyboard input on large displays. In particular, we designed a gesture set for manipulating regular windows.
  • Item (Open Access)
    Learning user embeddings from human gaze for personalised saliency prediction
    (2024) Strohm, Florian; Bâce, Mihai; Bulling, Andreas
    Reusable embeddings of user behaviour have shown significant performance improvements on the personalised saliency prediction task. However, prior works require explicit user characteristics and preferences as input, which are often difficult to obtain. We present a novel method to extract user embeddings from pairs of natural images and corresponding saliency maps generated from a small amount of user-specific eye tracking data. At the core of our method is a Siamese convolutional neural encoder that learns the user embeddings by contrasting the image and personal saliency map pairs of different users. Evaluations on two public saliency datasets show that the generated embeddings have high discriminative power, are effective at refining universal saliency maps for individual users, and generalise well across users and images. Finally, based on our model's ability to encode individual user characteristics, our work points towards other applications that can benefit from reusable embeddings of gaze behaviour.
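    A minimal sketch of the described setup - a shared ("Siamese") convolutional encoder over image/saliency-map pairs trained with a contrastive objective - with all dimensions and the margin value invented:

        import torch
        from torch import nn
        import torch.nn.functional as F

        class UserEncoder(nn.Module):
            """Encoder over an RGB image stacked with its personal saliency
            map (4 input channels), pooled to a unit-norm user embedding."""
            def __init__(self, d=32):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.fc = nn.Linear(32, d)

            def forward(self, x):
                return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

        def contrastive_loss(z1, z2, same_user, margin=0.5):
            # Pull same-user embeddings together, push different users apart.
            d = (z1 - z2).norm(dim=1)
            return torch.where(same_user, d ** 2, F.relu(margin - d) ** 2).mean()

        enc = UserEncoder()  # the same encoder processes both pairs
        z1, z2 = enc(torch.randn(8, 4, 64, 64)), enc(torch.randn(8, 4, 64, 64))
        print(contrastive_loss(z1, z2, torch.zeros(8, dtype=torch.bool)))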
  • Item (Open Access)
    PrivacyScout: assessing vulnerability to shoulder surfing on mobile devices
    (2022) Bâce, Mihai; Saad, Alia; Khamis, Mohamed; Schneegass, Stefan; Bulling, Andreas
    One approach to mitigating shoulder surfing attacks on mobile devices is to detect the presence of a bystander using the phone's front-facing camera. However, a person's face in the camera's field of view does not always indicate an attack. To overcome this limitation, in a novel data collection study (N=16), we analysed the influence of three viewing angles and four distances on the success of shoulder surfing attacks. In contrast to prior works that mainly focused on user authentication, we investigated three common types of content susceptible to shoulder surfing: text, photos, and PIN authentications. We show that the vulnerability of text and photos depends on the observer's location relative to the device, while PIN authentications are vulnerable independently of the observer's location. We then present PrivacyScout - a novel method that predicts the shoulder-surfing risk based on visual features extracted from the observer's face as captured by the front-facing camera. Finally, evaluations on data from our collection study demonstrate the feasibility of our method for assessing the risk of a shoulder surfing attack more accurately.
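    As a purely illustrative sketch of the idea (the abstract specifies neither the features nor the classifier, and the data below is synthetic), mapping face-derived geometry to a risk score could look like:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical geometric features per captured face: interocular
        # distance in pixels (a proxy for distance to the device) and yaw
        # angle in degrees (a proxy for viewing angle); synthetic labels
        # mark whether on-screen content was observable.
        X = np.array([[80, 0], [60, 20], [35, 10],
                      [20, 40], [75, 15], [25, 35]], dtype=float)
        y = np.array([1, 1, 0, 0, 1, 0])  # 1 = high shoulder-surfing risk

        clf = LogisticRegression().fit(X, y)
        # Estimated risk for a newly detected observer.
        print(clf.predict_proba([[50.0, 25.0]])[0, 1])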