Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-14618
Full metadata record
DC Field | Value | Language
dc.contributor.author | Vukelić, Mathias | -
dc.contributor.author | Bui, Michael | -
dc.contributor.author | Vorreuther, Anna | -
dc.contributor.author | Lingelbach, Katharina | -
dc.date.accessioned | 2024-07-09T08:56:50Z | -
dc.date.available | 2024-07-09T08:56:50Z | -
dc.date.issued | 2023 | de
dc.identifier.issn | 2673-6195 | -
dc.identifier.other | 1895374936 | -
dc.identifier.uri | http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-146378 | de
dc.identifier.uri | http://elib.uni-stuttgart.de/handle/11682/14637 | -
dc.identifier.uri | http://dx.doi.org/10.18419/opus-14618 | -
dc.description.abstract | Deep reinforcement learning (RL) is used as a strategy to teach robot agents how to autonomously learn complex tasks. While sparsity is a natural way to define a reward in realistic robot scenarios, it provides poor learning signals for the agent, making the design of good reward functions challenging. To overcome this challenge, learning from human feedback through an implicit brain-computer interface (BCI) is used. We combined a BCI with deep RL for robot training in a physically realistic 3-D simulation environment. In a first study, we compared the feasibility of different electroencephalography (EEG) systems (wet vs. dry electrodes) and their application for the automatic classification of perceived errors during a robot task with different machine learning models. In a second study, we compared the performance of the BCI-based deep RL training to feedback explicitly given by participants. Our findings from the first study indicate that a high-quality dry-electrode EEG system can provide a robust and fast method for automatically assessing robot behavior using a sophisticated convolutional neural network model. The results of our second study show that the implicit BCI-based deep RL version, in combination with the dry EEG system, can significantly accelerate the learning process in a realistic 3-D robot simulation environment. The performance of the BCI-trained deep RL model was even comparable to that achieved with explicit human feedback. Our findings support BCI-based deep RL methods as a valid alternative in human-robot applications where cognitively demanding explicit human feedback is not available. | en
dc.description.sponsorship | Fraunhofer Internal Programs | de
dc.description.sponsorship | Ministry of Economic Affairs, Labor and Tourism Baden-Wuerttemberg | de
dc.language.iso | en | de
dc.relation.uri | doi:10.3389/fnrgo.2023.1274730 | de
dc.rights | info:eu-repo/semantics/openAccess | de
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | de
dc.subject.ddc | 004 | de
dc.subject.ddc | 150 | de
dc.title | Combining brain-computer interfaces with deep reinforcement learning for robot training : a feasibility study in a simulation environment | en
dc.type | article | de
dc.date.updated | 2024-04-25T13:23:06Z | -
ubs.fakultaet | Konstruktions-, Produktions- und Fahrzeugtechnik | de
ubs.fakultaet | Externe wissenschaftliche Einrichtungen | de
ubs.institut | Institut für Arbeitswissenschaft und Technologiemanagement | de
ubs.institut | Fraunhofer Institut für Arbeitswirtschaft und Organisation (IAO) | de
ubs.publikation.seiten | 16 | de
ubs.publikation.source | Frontiers in neuroergonomics 4 (2023), No. 1274730 | de
ubs.publikation.typ | Zeitschriftenartikel | de
Appears in collections: 07 Fakultät Konstruktions-, Produktions- und Fahrzeugtechnik

Files in this item:
File | Description | Size | Format
Data_Sheet_1.PDF | Supplement | 626.24 kB | Adobe PDF
fnrgo-04-1274730.pdf | Article | 1.85 MB | Adobe PDF


This item is published under the following Creative Commons license: https://creativecommons.org/licenses/by/4.0/