Browsing by Author "Moussa, Wassim"
Integration of digital photogrammetry and terrestrial laser scanning for cultural heritage data recording (2014)
Moussa, Wassim; Fritsch, Dieter (Prof. Dr.)
Open Access

This research discusses the potential of combining digital images and terrestrial laser scanning (TLS) data for close-range applications, in particular the 3D recording and preservation of cultural heritage sites. Besides improving both the geometry and the visual quality of the resulting model, this combination enables new solutions for issues that merit deeper investigation, such as filling gaps in laser scanning data to avoid modeling errors, recovering finer details at higher resolution, and target-free registration of multiple laser scans. The integration approach reduces feature extraction from a 3D to a 2D problem by using synthetic/virtual images derived from the 3D laser data, and it comprises three methods for data fusion.

The first method utilizes a scene database stored in a point-based environment model (PEM), which holds the 3D laser scanner point clouds together with intensity and RGB values. The PEM allows the extraction of accurate control information, camera positions relative to the TLS data, and 2D-to-3D correspondences between each image and the 3D data, for the direct computation of absolute camera orientations by means of accurate space resection methods.

In the second method, by contrast, the local relative orientations of the camera images are first calculated with a Structure-from-Motion (SfM) reconstruction. These orientations are then used for dense surface reconstruction by means of dense image matching algorithms. Subsequently, the 3D-to-3D correspondences between the dense image point clouds and those extracted from the PEM are determined by reprojecting the dense point clouds onto at least one camera image and matching the reprojected points against those extracted from the PEM.
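Both the space resection in the first method and the reprojection step in the second rest on the collinearity (pinhole) model that maps a 3D laser point into an image. A minimal numpy sketch of that projection, assuming a simple calibrated pinhole camera; the function name and the intrinsic/pose values are illustrative, not taken from the thesis:

```python
import numpy as np

def project_points(X, K, R, t):
    """Project 3D points X (N, 3) into an image via the collinearity
    (pinhole) model: x ~ K [R | t] X."""
    Xc = R @ X.T + t.reshape(3, 1)   # world -> camera coordinates
    x = K @ Xc                        # camera -> homogeneous pixel coordinates
    return (x[:2] / x[2]).T           # dehomogenize to (N, 2) pixel positions

# Illustrative intrinsics (focal length 1000 px, principal point 320/240)
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)          # camera aligned with the world frame
t = np.zeros(3)

pts = np.array([[0.0, 0.0, 5.0]])     # a point on the optical axis
print(project_points(pts, K, R, t))   # -> [[320. 240.]] (principal point)
```

Inverting this relation over many such 2D-to-3D correspondences is what the space resection (pose estimation) step amounts to; in practice this would be done with a robust PnP solver rather than by hand.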
Alternatively, the camera positions themselves can be used to establish these 3D-to-3D correspondences. From them, the seven-parameter transformation is obtained and then employed to compute the absolute orientation of each image in relation to the laser data.

The results are further improved by a general solution, introduced as a third method, that combines both the synthetic images and the camera images in one SfM process. It provides accurate image orientations and sparse point clouds, initially in an arbitrary model space. This enables an implicit determination of 3D-to-3D correspondences between the sparse point clouds and the laser data via the 2D-to-3D correspondences stored in the generated images. Alternatively, the sparse point clouds can be projected onto the virtual images using the collinearity equations in order to increase measurement redundancy. A seven-parameter transformation is then set up and its parameters are calculated. This enables automatic registration of multiple laser scans, and holds in particular for scans captured from considerably different viewpoints or with no overlap at all. Furthermore, surface information can be derived from the imagery using dense image matching algorithms. Owing to the common bundle block adjustment, the results share the scale and coordinate system of the laser data and can directly be used to fill gaps or occlusions in the laser scanner point clouds and to resolve small object details.
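The seven-parameter (Helmert) similarity transformation referred to above — one scale, three rotation, and three translation parameters — can be estimated in closed form from matched 3D point pairs. A minimal numpy sketch using the SVD-based Umeyama/Horn solution; the function name is illustrative and this is not the thesis's own implementation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate the seven-parameter transformation dst ≈ s * R @ src + t
    from matched 3D point sets (Umeyama/Horn, closed form via SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d          # centered point sets
    H = B.T @ A / len(src)                 # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt                         # optimal rotation
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()  # optimal scale
    t = mu_d - s * R @ mu_s                # optimal translation
    return s, R, t
```

Given the 3D-to-3D correspondences between the image-derived point clouds and the laser data, such a solver yields the scale, rotation, and translation that bring the photogrammetric model into the laser scanner's coordinate system.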