Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-14783
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ramasamy Sundararaj, Jayakumar | -
dc.date.accessioned | 2024-08-07T13:18:35Z | -
dc.date.available | 2024-08-07T13:18:35Z | -
dc.date.issued | 2024 | de
dc.identifier.other | 1898089760 | -
dc.identifier.uri | http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-148023 | de
dc.identifier.uri | http://elib.uni-stuttgart.de/handle/11682/14802 | -
dc.identifier.uri | http://dx.doi.org/10.18419/opus-14783 | -
dc.description.abstract | Though Deep Reinforcement Learning (DRL) has emerged as a powerful paradigm for training agents to perform complex tasks, it encounters challenges when confronted with raw sensory inputs. Although deep neural networks can learn meaningful internal representations, DRL approaches suffer from high sample complexity. The effectiveness and scalability of DRL techniques are frequently hindered by the high-dimensional nature of the input data, especially in methods that use image-based observations. A promising way to overcome this challenge is to provide improved input representations, which can significantly enhance learning performance. This work addresses the challenge with novel techniques to improve the training efficiency and performance of DRL agents. We propose using compact, structured image representations, namely object-centric and scene-graph-based state representations, as intermediate state representations for training lightweight DRL agents. These representations facilitate the extraction of important features from raw observations, effectively reducing the dimensionality of the input space. To assess the effectiveness of our proposed approaches, we conduct experiments on three Atari 2600 games: Space Invaders, Frostbite, and Freeway. Our findings reveal that models trained with intermediate state representations, while performing slightly worse than those trained on raw image pixels, achieve notable results, surpassing the Human Normalized Score (HNS) in one game environment with fewer model parameters. Furthermore, we investigate alternative loss functions for value-function estimation and explore strategies to mitigate the issue of diminishing entropy during training. Finally, through a systematic analysis of the experimental findings, we provide insights into the efficacy and drawbacks of these approaches, shedding light on promising avenues for future research in formulating suitable state spaces for training agents with DRL. | en
dc.language.iso | en | de
dc.rights | info:eu-repo/semantics/openAccess | de
dc.subject.ddc | 004 | de
dc.title | Evaluation of different image representations for reinforcement learning agents | en
dc.type | masterThesis | de
ubs.fakultaet | Informatik, Elektrotechnik und Informationstechnik | de
ubs.institut | Institut für Visualisierung und Interaktive Systeme | de
ubs.publikation.seiten | 57 | de
ubs.publikation.typ | Abschlussarbeit (Master) | de
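For context, the Human Normalized Score (HNS) mentioned in the abstract is conventionally defined as the fraction of the random-to-human score gap an agent closes, so an HNS above 1.0 means super-human play. A minimal sketch of this standard formula follows; the scores used in the example are illustrative placeholders, not results from the thesis:

```python
def human_normalized_score(agent_score: float,
                           random_score: float,
                           human_score: float) -> float:
    """Human Normalized Score: 0.0 = random-play level, 1.0 = human level.

    Computed as (agent - random) / (human - random), the standard
    normalization used to compare agents across Atari games.
    """
    return (agent_score - random_score) / (human_score - random_score)

# Illustrative placeholder values (NOT measurements from the thesis):
hns = human_normalized_score(agent_score=600.0,
                             random_score=148.0,
                             human_score=1652.0)
print(f"HNS: {hns:.3f}")  # fraction of the random-to-human gap closed
```

Because both reference scores are game-specific, HNS lets a single threshold (1.0) stand for "human level" across games whose raw score scales differ by orders of magnitude.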
Appears in collections: 05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Files in this item:
File | Description | Size | Format
Final_Thesis_Report.pdf | | 2.5 MB | Adobe PDF


All items in this repository are protected by copyright.