Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-13676
Full metadata record
DC Element | Value | Language
dc.contributor.author | Gado, Sabrina | -
dc.contributor.author | Lingelbach, Katharina | -
dc.contributor.author | Wirzberger, Maria | -
dc.contributor.author | Vukelić, Mathias | -
dc.date.accessioned | 2023-10-25T08:31:34Z | -
dc.date.available | 2023-10-25T08:31:34Z | -
dc.date.issued | 2023 | de
dc.identifier.issn | 1424-8220 | -
dc.identifier.other | 1869560388 | -
dc.identifier.uri | http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-136950 | de
dc.identifier.uri | http://elib.uni-stuttgart.de/handle/11682/13695 | -
dc.identifier.uri | http://dx.doi.org/10.18419/opus-13676 | -
dc.description.abstract | Humans’ performance varies due to the mental resources that are available to successfully pursue a task. To monitor users’ current cognitive resources in naturalistic scenarios, it is essential not only to measure demands induced by the task itself but also to consider situational and environmental influences. We conducted a multimodal study with 18 participants (nine female; M = 25.9, SD = 3.8 years). In this study, we recorded respiratory, ocular, cardiac, and brain activity using functional near-infrared spectroscopy (fNIRS) while participants performed an adapted version of the warship commander task with concurrent emotional speech distraction. We tested the feasibility of decoding the experienced mental effort with a multimodal machine learning architecture. The architecture comprised feature engineering, model optimisation, and model selection to combine multimodal measurements in a cross-subject classification. Our approach reduces possible overfitting and reliably distinguishes two different levels of mental effort. These findings contribute to the prediction of different states of mental effort and pave the way toward generalised state monitoring across individuals in realistic applications. | en
dc.description.sponsorship | Ministry of Economic Affairs, Labour and Tourism Baden-Württemberg | de
dc.description.sponsorship | KI-Fortschrittszentrum Lernende Systeme und Kognitive Robotik | de
dc.description.sponsorship | Federal Ministry of Science, Research, and the Arts Baden-Württemberg | de
dc.description.sponsorship | University of Stuttgart | de
dc.language.iso | en | de
dc.relation.uri | doi:10.3390/s23146546 | de
dc.rights | info:eu-repo/semantics/openAccess | de
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | de
dc.subject.ddc | 004 | de
dc.subject.ddc | 150 | de
dc.title | Decoding mental effort in a quasi-realistic scenario : a feasibility study on multimodal data fusion and classification | en
dc.type | article | de
dc.date.updated | 2023-08-08T16:22:04Z | -
ubs.fakultaet | Wirtschafts- und Sozialwissenschaften | de
ubs.fakultaet | Externe wissenschaftliche Einrichtungen | de
ubs.fakultaet | Fakultätsübergreifend / Sonstige Einrichtung | de
ubs.institut | Institut für Erziehungswissenschaft | de
ubs.institut | Fraunhofer Institut für Arbeitswirtschaft und Organisation (IAO) | de
ubs.institut | Fakultätsübergreifend / Sonstige Einrichtung | de
ubs.publikation.seiten | 26 | de
ubs.publikation.source | Sensors 23 (2023), No. 6546 | de
ubs.publikation.typ | Zeitschriftenartikel | de
Appears in collections: 10 Fakultät Wirtschafts- und Sozialwissenschaften

Files in this item:
File | Description | Size | Format
sensors-23-06546.pdf | | 5.41 MB | Adobe PDF


This item is published under the following copyright license: Creative Commons License (CC BY 4.0)