05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
23 results
Search Results
Item Open Access
VisRecall++: analysing and predicting visualisation recallability from gaze behaviour (2024)
Wang, Yao; Jiang, Yue; Hu, Zhiming; Ruhdorfer, Constantin; Bâce, Mihai; Bulling, Andreas
Question answering has recently been proposed as a promising means to assess the recallability of information visualisations. However, prior works have yet to study the link between visually encoding a visualisation in memory and recall performance. To fill this gap, we propose VisRecall++, a novel 40-participant recallability dataset that contains gaze data on 200 visualisations and five question types, such as identifying the title or finding extreme values. We measured recallability by asking participants questions after they had observed a visualisation for 10 seconds. Our analyses reveal several insights; for example, saccade amplitude, number of fixations, and fixation duration differ significantly between the high and low recallability groups. Finally, we propose GazeRecallNet, a novel computational method to predict recallability from gaze behaviour that outperforms several baselines on this task. Taken together, our results shed light on assessing recallability from gaze behaviour and inform future work on recallability-based visualisation optimisation.

Item Open Access
A muscle model for injury simulation (2023)
Millard, Matthew; Kempter, Fabian; Fehr, Jörg; Stutzig, Norman; Siebert, Tobias
Car accidents frequently cause neck injuries that are painful, expensive, and difficult to simulate. The movements that lead to neck injury include phases in which the neck muscles are actively lengthened. Actively lengthened muscle can develop large forces that greatly exceed the maximum isometric force. Although Hill-type models are often used to simulate human movement, they have no mechanism for developing large tensions during active lengthening. When used to simulate neck injury, a Hill model will underestimate the risk of injury to the muscles but may overestimate the risk of injury to the structures that the muscles protect. We have developed a musculotendon model that includes the viscoelasticity of attached crossbridges and has an active titin element. In this work we compare the proposed model to a Hill model by simulating the experiments of Leonard et al. [1], which feature extreme active lengthening.

Item Open Access
SalChartQA: question-driven saliency on information visualisations (2024)
Wang, Yao; Wang, Weitian; Abdelhafez, Abdullah; Elfares, Mayar; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
Understanding the link between visual attention and users' needs when visually exploring information visualisations is under-explored due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap, we introduce SalChartQA, a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a question-answering (QA) paradigm to induce different information needs in users. SalChartQA contains 74,340 answers to 6,000 questions on 3,000 visualisations. Informed by our analyses demonstrating the tight correlation between the question and visual saliency, we propose the first computational method to predict question-driven saliency on information visualisations. Our method outperforms state-of-the-art saliency models, improving several metrics, such as the correlation coefficient and the Kullback-Leibler divergence. These results show the importance of information needs for shaping attention behaviour and pave the way for new applications, such as task-driven optimisation of visualisations or explainable AI in chart question answering.
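
The SalChartQA abstract above reports improvements in the correlation coefficient and the Kullback-Leibler divergence, two standard metrics for comparing a predicted saliency map with a ground-truth one. The following is only a minimal sketch of how such metrics are commonly computed; the array shapes, epsilon value, and random stand-in data are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def normalise(sal_map, eps=1e-12):
        """Scale a saliency map so it sums to 1 (treat it as a distribution)."""
        sal_map = sal_map.astype(np.float64)
        return sal_map / (sal_map.sum() + eps)

    def correlation_coefficient(pred, gt):
        """Pearson correlation between the flattened saliency maps."""
        p = pred.flatten().astype(np.float64)
        g = gt.flatten().astype(np.float64)
        p = (p - p.mean()) / (p.std() + 1e-12)
        g = (g - g.mean()) / (g.std() + 1e-12)
        return float(np.mean(p * g))

    def kl_divergence(pred, gt, eps=1e-12):
        """KL divergence of the ground-truth distribution from the prediction."""
        p = normalise(pred, eps)
        g = normalise(gt, eps)
        return float(np.sum(g * np.log(g / (p + eps) + eps)))

    # Random maps stand in for real predictions and BubbleView-derived annotations.
    rng = np.random.default_rng(0)
    pred = rng.random((64, 64))
    gt = rng.random((64, 64))
    print(correlation_coefficient(pred, gt), kl_divergence(pred, gt))

Higher correlation and lower KL divergence both indicate a closer match between predicted and observed attention.
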
Item Open Access
Usable and fast interactive mental face reconstruction (2023)
Strohm, Florian; Bâce, Mihai; Bulling, Andreas
We introduce an end-to-end interactive system for mental face reconstruction: the challenging task of visually reconstructing a face image a person only has in their mind. In contrast to existing methods that suffer from low usability and high mental load, our approach only requires the user to rank images over multiple iterations according to the perceived similarity with their mental image. Based on these rankings, our mental face reconstruction system extracts image features in each iteration, combines them into a joint feature vector, and then uses a generative model to visually reconstruct the mental image. To avoid the need for collecting large amounts of human training data, we further propose a computational user model that can simulate human ranking behaviour using data from an online crowd-sourcing study (N=215). Results from a 12-participant user study show that our method reconstructs mental images that are visually comparable to those of existing approaches while offering significantly higher usability, lower perceived workload, and faster reconstruction. In addition, results from a third 22-participant lineup study, in which we validated our reconstructions on a face ranking task, show an identification rate in line with prior work. These results represent an important step towards new interactive intelligent systems that can robustly and effortlessly reconstruct a user's mental image.

Item Open Access
SUPREYES: SUPer resolution for EYES using implicit neural representation learning (2023)
Jiao, Chuhan; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
We introduce SUPREYES, a novel self-supervised method to increase the spatio-temporal resolution of gaze data recorded using low(er)-resolution eye trackers. Despite continuing advances in eye tracking technology, the vast majority of current eye trackers, particularly mobile ones and those integrated into mobile devices, suffer from low-resolution gaze data, thus fundamentally limiting their practical usefulness. SUPREYES learns a continuous implicit neural representation from low-resolution gaze data to up-sample the gaze data to arbitrary resolutions. We compare our method with commonly used interpolation methods on arbitrary-scale super-resolution and demonstrate that SUPREYES outperforms these baselines by a significant margin. We also evaluate our method on the sample downstream task of gaze-based user identification and show that it improves performance over the original low-resolution gaze data and outperforms other baselines. These results are promising as they open up a new direction for increasing eye tracking fidelity as well as enabling new gaze-based applications without the need for new eye tracking equipment.
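
The SUPREYES abstract above mentions commonly used interpolation methods as baselines for up-sampling gaze data. A minimal sketch of such a baseline, linear interpolation of the horizontal and vertical gaze coordinates to a higher sampling rate, is shown below; the sampling rates, function name, and synthetic trace are assumptions for illustration only and are not taken from the paper.

    import numpy as np

    def upsample_gaze(t, x, y, target_hz):
        """Linearly interpolate a gaze trace (t in seconds) to a higher sampling rate."""
        t_new = np.arange(t[0], t[-1], 1.0 / target_hz)
        return t_new, np.interp(t_new, t, x), np.interp(t_new, t, y)

    # A synthetic 30 Hz gaze trace up-sampled to 250 Hz.
    t = np.arange(0, 1, 1 / 30)
    x = np.sin(2 * np.pi * t)   # stand-in for horizontal gaze position
    y = np.cos(2 * np.pi * t)   # stand-in for vertical gaze position
    t_hi, x_hi, y_hi = upsample_gaze(t, x, y, target_hz=250)
    print(len(t), "->", len(t_hi), "samples")

Learned approaches such as the implicit neural representation described above aim to recover detail that simple interpolation of this kind cannot.
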
Item Open Access
Informationsmodelle mit intelligenter Auswertung für den Digitalen Zwilling (2020)
Müller, Manuel; Ashtari Talkhestani, Behrang; Jazdi, Nasser; Rosen, Roland; Wehrstedt, Jan Christoph; Weyrich, Michael
The increasing complexity of highly automated systems brings new challenges for managing their models across the entire system life cycle, from customer acquisition through engineering and reconfiguration to system recycling. The Digital Twin is a concept that can ensure the management of these models across the entire life cycle of an asset. However, it does not support automated model extension. This is where this work comes in. Enriching the Digital Twin with model understanding and AI algorithms for autonomous model extension forms the foundation of the presented concept. Through intelligent evaluation of the information models, enriched with current process data, the Digital Twin detects when models reach their limits. Two possible causes of this situation are examined in more detail: (1) a capability or piece of information is missing, and (2) the model has been used outside its range of validity. For both cases, a procedure is proposed that automatically finds a solution based on cooperative information from the value-creation network. Evaluating the concept on a scenario from logistics and one from production yields promising results.

Item Open Access
Improving the accuracy of musculotendon models for the simulation of active lengthening (2023)
Millard, Matthew; Kempter, Fabian; Stutzig, Norman; Siebert, Tobias; Fehr, Jörg
Vehicle accidents can cause neck injuries that are costly for individuals and society. Safety systems could be designed to reduce the risk of neck injury if it were possible to accurately simulate the tissue-level injuries that later lead to chronic pain. During a crash, reflexes cause the muscles of the neck to be actively lengthened. Although the muscles of the neck are often only mildly injured, the forces developed by the neck's musculature affect the tissues that are more severely injured. In this work, we compare the forces developed by MAT_156, LS-DYNA's Hill-type model, and the newly proposed VEXAT muscle model during active lengthening. The results show that Hill-type muscle models underestimate the forces developed during active lengthening, while the VEXAT model can more faithfully reproduce experimental measurements.

Item Open Access
Sprachassistierter Entwicklungsprozess für automatisierungstechnische Systeme: ein Ansatz zur Strukturierung komplexer Entwicklungsprozesse (2020)
White, Dustin; Weyrich, Michael
The system development process is becoming ever more complex because the systems themselves are becoming more complex. At the same time, the various disciplines such as mechanical engineering, electrical engineering, and software engineering are increasingly intermingling, so that companies rooted in a single discipline face abrupt increases in the complexity of their systems and of their development processes. This publication therefore develops the concept of a voice assistant that guides the user through a development phase. It shows that software supporting development needs an information model to store the data of the system under development and to link these data with existing knowledge. This knowledge can reside either internally or on the web. The development process should therefore support cooperation, so that the assistance software and the engineers interact with each other.

Item Open Access
Impact of gaze uncertainty on AOIs in information visualisations (2022)
Wang, Yao; Koch, Maurice; Bâce, Mihai; Weiskopf, Daniel; Bulling, Andreas
Gaze-based analysis of areas of interest (AOIs) is widely used in information visualisation research to understand how people explore visualisations or to assess the quality of visualisations with respect to key characteristics such as memorability. However, nearby AOIs in visualisations amplify the uncertainty caused by gaze estimation error, which strongly influences the mapping between gaze samples or fixations and different AOIs. We contribute a novel investigation into gaze uncertainty and quantify its impact on AOI-based analysis of visualisations using two novel metrics: the Flipping Candidate Rate (FCR) and the Hit Any AOI Rate (HAAR). Our analysis of 40 real-world visualisations, including human gaze and AOI annotations, shows that gaze uncertainty frequently and significantly impacts the analysis conducted in AOI-based studies. Moreover, we analysed four visualisation types and found that bar and scatter plots are usually designed in a way that causes more uncertainty than line and pie plots in gaze-based analysis.
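
The FCR and HAAR metrics above quantify how gaze estimation error affects the mapping from fixations to AOIs. The sketch below illustrates the underlying mapping problem only in a simplified form: a fixation is treated as a flipping candidate if a perturbation within an assumed error radius can change which AOI it falls into. The rectangular AOIs, radius, and sampling scheme are illustrative assumptions, not the paper's definitions.

    import numpy as np

    # AOIs as axis-aligned rectangles: (x_min, y_min, x_max, y_max), in pixels.
    aois = {"title": (0, 0, 800, 60), "legend": (650, 70, 800, 200)}

    def aoi_of(point):
        """Return the name of the AOI containing the point, or None."""
        x, y = point
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None

    def is_flipping_candidate(fixation, error_radius, n_samples=100, seed=0):
        """True if plausible gaze error can change which AOI the fixation hits."""
        rng = np.random.default_rng(seed)
        base = aoi_of(fixation)
        angles = rng.uniform(0, 2 * np.pi, n_samples)
        radii = rng.uniform(0, error_radius, n_samples)
        for a, r in zip(angles, radii):
            perturbed = (fixation[0] + r * np.cos(a), fixation[1] + r * np.sin(a))
            if aoi_of(perturbed) != base:
                return True
        return False

    # A fixation between two nearby AOIs is likely to be a flipping candidate.
    print(is_flipping_candidate((700, 65), error_radius=20))
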
Item Open Access
Mouse2Vec: learning reusable semantic representations of mouse behaviour (2024)
Zhang, Guanhua; Hu, Zhiming; Bâce, Mihai; Bulling, Andreas
The mouse is a pervasive input device used for a wide range of interactive applications. However, computational modelling of mouse behaviour typically requires time-consuming design and extraction of handcrafted features, or approaches that are application-specific. We instead propose Mouse2Vec, a novel self-supervised method designed to learn semantic representations of mouse behaviour that are reusable across users and applications. Mouse2Vec uses a Transformer-based encoder-decoder architecture that is specifically geared towards mouse data: during pretraining, the encoder learns an embedding of input mouse trajectories while the decoder reconstructs the input and simultaneously detects mouse click events. We show that the representations learned by our method can identify interpretable mouse behaviour clusters and retrieve similar mouse trajectories. We also demonstrate on three sample downstream tasks that the representations can be practically used to augment mouse data for training supervised methods and serve as an effective feature extractor.
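
The Mouse2Vec abstract describes a Transformer-based encoder-decoder but gives no architectural details, so the following is only a minimal PyTorch sketch of the general idea: embedding a mouse trajectory of (x, y, click) samples with a Transformer encoder. It omits the decoder and click-detection objectives, and all layer sizes, names, and data are assumptions for illustration.

    import torch
    import torch.nn as nn

    class MouseTrajectoryEncoder(nn.Module):
        """Minimal Transformer encoder mapping a mouse trajectory to one embedding."""
        def __init__(self, d_model=64, n_heads=4, n_layers=2):
            super().__init__()
            self.input_proj = nn.Linear(3, d_model)   # (x, y, click) -> d_model
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)

        def forward(self, traj):                      # traj: (batch, seq_len, 3)
            h = self.encoder(self.input_proj(traj))
            return h.mean(dim=1)                      # mean-pool over time steps

    # A batch of 8 synthetic trajectories, 200 samples each.
    model = MouseTrajectoryEncoder()
    traj = torch.rand(8, 200, 3)
    print(model(traj).shape)  # torch.Size([8, 64])
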