Author: Zielke, Viktor
Date deposited: 2017-10-24
Date available: 2017-10-24
Year of publication: 2016
Identifier: 495759228
URN: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-92924
Handle: http://elib.uni-stuttgart.de/handle/11682/9292
DOI: http://dx.doi.org/10.18419/opus-9275

Abstract: Robots tasked with autonomously interacting with objects in a dynamic environment, for example in assembly and disassembly tasks, require the ability to explore their surroundings and detect the objects they interact with. State-of-the-art methods handle these tasks separately. This work describes a method that combines both tasks and thereby reduces the number of costly operations such as motion and sensing. A next-best-view system is developed that incrementally builds a map of the environment and selects view poses for an eye-in-hand robot system. The system and the performance of the selected view poses are evaluated on a robotic platform. The evaluation showed that the method selected view poses that both explored the environment and detected objects.

Language: English
Rights: info:eu-repo/semantics/openAccess
DDC: 004
Title: Design and implementation of next-best-view algorithms for automatic robotic-based (dis)assembly tasks
Title (German): Entwurf und Implementierung von Next-Best-View Algorithmen für die robotergestützte Automatisierung von (De-)Montageaufgaben
Type: Master's thesis
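The abstract describes a next-best-view loop: score candidate view poses on an incrementally built map and pick the one balancing expected new information against motion cost. Below is a minimal 2D sketch of that general idea, not the thesis's actual algorithm; all names (ViewPose, count_unknown_in_view, select_next_best_view), the frustum check, and the gain-minus-cost heuristic are hypothetical illustrations under assumed defaults.

```python
# Hypothetical sketch of next-best-view selection, assuming a map whose
# unknown cells are given as 2D points. Not the thesis's actual method.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class ViewPose:
    x: float
    y: float
    yaw: float  # viewing direction in radians

def count_unknown_in_view(pose, unknown_cells, fov=math.pi / 3, max_range=2.0):
    """Count unknown cells inside the assumed sensor frustum of `pose`."""
    visible = 0
    for (cx, cy) in unknown_cells:
        dx, dy = cx - pose.x, cy - pose.y
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > max_range:
            continue
        # Smallest signed angular difference to the viewing direction.
        diff = (math.atan2(dy, dx) - pose.yaw + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= fov / 2:
            visible += 1
    return visible

def select_next_best_view(candidates, unknown_cells, current, motion_weight=0.5):
    """Pick the candidate maximizing expected new information minus motion cost."""
    def score(pose):
        gain = count_unknown_in_view(pose, unknown_cells)
        cost = math.hypot(pose.x - current.x, pose.y - current.y)
        return gain - motion_weight * cost
    return max(candidates, key=score)

if __name__ == "__main__":
    unknown = [(1.0, 0.0), (1.2, 0.1), (0.0, 1.5)]
    current = ViewPose(0.0, 0.0, 0.0)
    candidates = [ViewPose(0.5, 0.0, 0.0), ViewPose(0.0, 0.5, math.pi / 2)]
    print(select_next_best_view(candidates, unknown, current))
```

In a full system the gain term would come from the incrementally built map (e.g., unknown voxels a sensor ray could observe) and could be extended with an object-detection term, which is the combination the abstract argues reduces costly motion and sensing.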