Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-13676
Author(s): Gado, Sabrina
Lingelbach, Katharina
Wirzberger, Maria
Vukelić, Mathias
Title: Decoding mental effort in a quasi-realistic scenario: a feasibility study on multimodal data fusion and classification
Issue date: 2023
Document type: Journal article
Pages: 26
Published in: Sensors 23 (2023), No. 6546
URI: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-136950
http://elib.uni-stuttgart.de/handle/11682/13695
http://dx.doi.org/10.18419/opus-13676
ISSN: 1424-8220
Abstract: Humans’ performance varies due to the mental resources that are available to successfully pursue a task. To monitor users’ current cognitive resources in naturalistic scenarios, it is essential to not only measure demands induced by the task itself but also consider situational and environmental influences. We conducted a multimodal study with 18 participants (nine female, M = 25.9 with SD = 3.8 years). In this study, we recorded respiratory, ocular, cardiac, and brain activity using functional near-infrared spectroscopy (fNIRS) while participants performed an adapted version of the warship commander task with concurrent emotional speech distraction. We tested the feasibility of decoding the experienced mental effort with a multimodal machine learning architecture. The architecture comprised feature engineering, model optimisation, and model selection to combine multimodal measurements in a cross-subject classification. Our approach reduces possible overfitting and reliably distinguishes two different levels of mental effort. These findings contribute to the prediction of different states of mental effort and pave the way toward generalised state monitoring across individuals in realistic applications.
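To make the idea of combining multimodal measurements in a cross-subject classification more concrete, the following is a minimal illustrative sketch, not the authors' actual pipeline: it assumes scikit-learn, uses simulated feature blocks for the four modalities named in the abstract, fuses them by simple feature concatenation, and evaluates with leave-one-subject-out cross-validation so that test subjects are never seen during training. All feature dimensions, the SVM classifier, and its parameters are assumptions for illustration only.

```python
# Hypothetical sketch: cross-subject classification of two mental-effort levels
# from fused multimodal features. Feature shapes, labels, and the classifier
# are illustrative assumptions, not the architecture described in the paper.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject = 18, 40

# Simulated per-trial feature blocks for the four recorded modalities.
fnirs   = rng.normal(size=(n_subjects * trials_per_subject, 8))
cardiac = rng.normal(size=(n_subjects * trials_per_subject, 3))
resp    = rng.normal(size=(n_subjects * trials_per_subject, 2))
ocular  = rng.normal(size=(n_subjects * trials_per_subject, 4))

X = np.hstack([fnirs, cardiac, resp, ocular])   # feature-level fusion
y = rng.integers(0, 2, size=X.shape[0])         # low vs. high mental effort
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

# Leave-one-subject-out evaluation keeps each subject's trials entirely out of
# training, which is what makes the classification "cross-subject".
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean leave-one-subject-out accuracy: {scores.mean():.2f}")
```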
Appears in Collections: 10 Fakultät Wirtschafts- und Sozialwissenschaften

Files in This Item:
File: sensors-23-06546.pdf | Size: 5.41 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License.