Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-11961
Authors: Balachandra Midlagajni, Niteesh
Title: Learning object affordances using human motion capture data
Issue Date: 2019
Type: Master's thesis
Pages: 58
URI: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-119781
http://elib.uni-stuttgart.de/handle/11682/11978
http://dx.doi.org/10.18419/opus-11961
Abstract: When interacting with their environment, humans model action possibilities directly in the product space of their own capabilities and the spatial configuration of their body and the environment. This idea of an intuitive, perceptual representation of the possibilities an environment offers was hypothesized and discussed by the psychologist J. J. Gibson, who called such possibilities affordances. The goal of this thesis is to build an algorithmic framework to learn and encode human object affordances from motion capture data. To this end, we collect motion capture data in which human subjects perform pick-and-place activities in a scene. Using the collected data, we develop neural network models that learn graspability and placeability affordances while also capturing the uncertainty in their predictions. We achieve this by modeling affordances within the probabilistic framework of deep learning. Our models predict grasp densities and place densities accurately, in the sense that the ground truth always lies within the predicted confidence interval. Furthermore, we develop a system that integrates our models for real-time application, producing affordance features in a live setting and visualizing the densities as heatmaps in real time.
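To make the abstract's "probabilistic framework of Deep Learning" concrete, the following is a minimal sketch (not the thesis code) of one common way to obtain predictive uncertainty from a neural affordance model: Monte Carlo dropout, where dropout is kept active at inference and several stochastic forward passes are averaged. The network shape, the 16-dimensional feature vector, and all names here are illustrative assumptions; the thesis may use a different architecture or uncertainty technique.

import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    """Maps a body/scene configuration feature vector to an affordance score."""
    def __init__(self, in_dim: int = 16, hidden: int = 64, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run several stochastic forward passes with dropout enabled."""
    model.train()  # train mode keeps dropout active; no_grad skips gradients
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # predictive mean and spread

if __name__ == "__main__":
    model = AffordanceNet()
    x = torch.randn(4, 16)  # four hypothetical scene configurations
    mean, std = predict_with_uncertainty(model, x)
    # A band such as mean +/- 2*std gives the confidence interval against
    # which ground-truth densities can be checked, as the abstract describes.
    print(mean.squeeze(), std.squeeze())

In such a setup, "ground truth within the confidence interval" corresponds to the observed grasp or place density falling inside the band implied by the predictive mean and spread.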
Appears in Collections: 05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Files in This Item:
File: 19-balachandra-midlagajni-MSc.pdf
Size: 13,6 MB
Format: Adobe PDF


Items in OPUS are protected by copyright, with all rights reserved, unless otherwise indicated.