Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-10889
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: Haala, Norbert (apl. Prof. Dr.-Ing.)
dc.contributor.author: Tutzauer, Patrick
dc.date.accessioned: 2020-06-16T07:46:44Z
dc.date.available: 2020-06-16T07:46:44Z
dc.date.issued: 2020 (de)
dc.identifier.other: 1700641670
dc.identifier.uri: http://elib.uni-stuttgart.de/handle/11682/10906
dc.identifier.uri: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-109068 (de)
dc.identifier.uri: http://dx.doi.org/10.18419/opus-10889
dc.description.abstract (en): With constant advances in both hardware and software, the availability of urban data is more versatile than ever. Structure-from-Motion (SfM), dense image matching (DIM), and multi-view stereo (MVS) algorithms have revolutionized the software side and scale to large data sets. These pipelines yield products such as point clouds and textured meshes of complete cities; correspondingly, the geometric reconstruction of large-scale urban scenes is largely solved. To keep detailed urban data understandable for humans, however, the highest level of detail (LOD) is not always the best representation for the intended information, and the semantic interpretation of the various levels of urban data representation is still in its early stages. Digital urban scenes serve many applications, such as gaming, urban planning, disaster management, taxation, and navigation, and consequently come in a great variety of geometric representations. Hence, this work does not focus on a single data representation such as imagery but instead incorporates various representation types to address several aspects of the reconstruction, enhancement, and, most importantly, interpretation of virtual building and city models. A semi-automatic building reconstruction approach with subsequent grammar-based synthesis of facades is presented; the goal of this framework is to generate a geometrically as well as semantically enriched CityGML LOD3 model from coarse input data. To investigate the human understanding of building models, user studies on building category classification are performed, from which important building category-specific features are extracted. This knowledge and the respective features can, in turn, be used to modify existing building models to make them easier to understand.
To this end, two approaches are presented: a perception-based abstraction and a grammar-based enhancement of building models using category-specific rule sets. However, in order to generate or extract building models, urban data first has to be analyzed semantically. Hence, this work presents an approach for the semantic segmentation of urban textured meshes. Through a hybrid model that combines explicit feature calculation with convolutional feature learning, triangle meshes can be semantically enriched: for each face of the mesh, a multi-scale feature vector is calculated and fed into a 1D convolutional neural network (CNN). The presented approach is compared with a random forest (RF) baseline. Once buildings can be extracted from an urban data representation, further distinctions can be made at the instance level. Therefore, a deep learning-based approach for building use classification, i.e., the subdivision of buildings into different types of use based on image representations, is presented. Training a CNN for this classification requires large amounts of training data. The presented work addresses this issue by proposing a pipeline for large-scale automated training data generation, which comprises crawling Google Street View (GSV) data, filtering the imagery for suitable training samples, and linking each building sample to ground truth cadastral data in the form of building polygons. Classification results of the trained CNNs are reported. Additionally, class-activation maps (CAMs) are used to investigate features critical to the classifier. The transferability to different building representation types is investigated, with CAMs helping to compare important features to those extracted from the earlier human user studies. By these means, several integral parts that contribute to a holistic pipeline for urban scene interpretation are proposed.
Finally, several open issues and future directions related to maintaining and processing virtual city models are presented.
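The hybrid mesh-segmentation idea described in the abstract (explicit multi-scale features computed per triangle face, then consumed by a 1D convolution) can be illustrated with a minimal sketch. All function names, feature choices, scales, and shapes below are hypothetical and not taken from the thesis; real per-face features would include many more geometric and radiometric descriptors.

```python
import numpy as np

def multi_scale_features(face_normals, scales=(1, 2, 4)):
    """Toy multi-scale descriptor: per-face normals smoothed over
    neighborhoods of growing width, then concatenated into one vector.
    (Stand-in for the explicit feature calculation step.)"""
    feats = []
    for s in scales:
        kernel = np.ones(s) / s  # moving average as a crude "scale"
        smoothed = np.stack(
            [np.convolve(face_normals[:, d], kernel, mode="same") for d in range(3)],
            axis=1,
        )
        feats.append(smoothed)
    return np.concatenate(feats, axis=1)  # shape: (n_faces, 3 * len(scales))

def conv1d_valid(x, w):
    """Minimal 1D convolution (valid padding) over one feature vector,
    standing in for the first layer of a 1D CNN."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

rng = np.random.default_rng(0)
normals = rng.normal(size=(100, 3))          # fake normals for 100 mesh faces
features = multi_scale_features(normals)     # (100, 9) multi-scale vectors
kernel = rng.normal(size=3)
response = conv1d_valid(features[0], kernel)  # (7,) response for one face
```

In the actual pipeline such per-face responses would pass through further convolutional and fully connected layers to predict a semantic label per face; this sketch only shows the data layout that makes a 1D CNN applicable to unordered mesh faces.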
dc.language.iso: en (de)
dc.rights: info:eu-repo/semantics/openAccess (de)
dc.subject.ddc: 620 (de)
dc.title: On the reconstruction, interpretation and enhancement of virtual city models (en)
dc.title.alternative: Zur Rekonstruktion, Interpretation und Anreicherung von virtuellen Stadtmodellen (de)
dc.type: doctoralThesis (de)
ubs.bemerkung.extern: Also published online at: http://www.dgk.badw.de/publikationen/reihe-c-dissertationen.html (de)
ubs.dateAccepted: 2020-01-30
ubs.fakultaet: Luft- und Raumfahrttechnik und Geodäsie (de)
ubs.institut: Institut für Photogrammetrie (de)
ubs.publikation.seiten: 109 (de)
ubs.publikation.typ: Dissertation (de)
ubs.thesis.grantor: Luft- und Raumfahrttechnik und Geodäsie (de)
Appears in Collections:06 Fakultät Luft- und Raumfahrttechnik und Geodäsie

Files in This Item:
File | Size | Format
thesis.pdf | 49,1 MB | Adobe PDF


Items in OPUS are protected by copyright, with all rights reserved, unless otherwise indicated.