Authors: Roth, Stephan
Title: Investigation of volume rendering performance through active learning and visual analysis
Issue Date: 2017
Document Type: Abschlussarbeit (Master) [Master's thesis]
Pages: 90
Abstract: Volume visualization has many real-world applications, such as medical imaging and scientific research. Volumes can be rendered directly, by shooting rays from the camera through the volume data, or indirectly, by extracting features such as iso-surfaces. Knowing the runtime performance of visualization techniques enables optimized infrastructure planning, and trained models can also be reused for interactive quality adaptation. Prediction models can use information about the renderer and the datasets to determine execution times before rendering. In this thesis, we present a model based on neural networks that predicts rendering times from volume properties and the rendering configuration. Moreover, our model actively intervenes in the sampling process to improve learning while decreasing the number of necessary measurements. To this end, it estimates how likely a drawn sample is to improve future predictions. Our model consists of multiple submodels and uses their disagreement about certain samples as the criterion for possible improvement. We evaluate our model using different sampling strategies, loss functions, and volume rendering techniques. This includes predictions based on measurement data of a volume raycaster, as well as a continuous setup with interleaved execution and prediction of an indirect volume renderer. Our indirect renderer uses marching cubes to extract iso-surfaces as triangle meshes from a density field and organizes them in an octree. This enables highly parallel sorting on the graphics card, which is necessary for rendering transparent surfaces in the correct order.
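The abstract's sampling scheme — multiple submodels whose disagreement on a candidate configuration decides whether it is worth measuring — is a query-by-committee idea. The following is a minimal sketch of that loop, not the thesis's implementation: it uses bootstrapped linear models instead of neural networks, and the feature names and the `true_render_time` stand-in for an actual renderer measurement are purely illustrative assumptions.

```python
# Sketch of query-by-committee active sampling for render-time prediction.
# NOTE: `true_render_time` and the 2-D feature space are hypothetical stand-ins
# for a real renderer measurement and a real rendering configuration.
import numpy as np

rng = np.random.default_rng(0)

def true_render_time(x):
    # Pretend render time grows with resolution (x[0]) and sampling rate (x[1]).
    return 2.0 * x[0] + 1.5 * x[1] ** 2

def fit_committee(X, y, n_models=5):
    """Fit a committee of linear models on bootstrap resamples of (X, y)."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))
        A = np.c_[X[idx], np.ones(len(idx))]        # design matrix with bias column
        w, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        models.append(w)
    return models

def disagreement(models, candidates):
    """Variance of the committee's predictions for each candidate configuration."""
    A = np.c_[candidates, np.ones(len(candidates))]
    preds = np.stack([A @ w for w in models])       # shape (n_models, n_candidates)
    return preds.var(axis=0)

# Start from a few measured configurations, then repeatedly measure the
# candidate the committee disputes most and add it to the training set.
X = rng.uniform(0.0, 1.0, size=(4, 2))
y = np.array([true_render_time(x) for x in X])
candidates = rng.uniform(0.0, 1.0, size=(50, 2))

for _ in range(10):
    models = fit_committee(X, y)
    pick = int(np.argmax(disagreement(models, candidates)))
    x_new = candidates[pick]
    X = np.vstack([X, x_new])
    y = np.append(y, true_render_time(x_new))
```

The design choice mirrored here is that only disputed samples trigger a (costly) measurement, so the measurement budget concentrates where the predictor is still uncertain; the thesis applies the same principle with neural-network submodels and real renderer timings.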
Appears in Collections:13 Zentrale Universitätseinrichtungen

Files in This Item:
File: Masterarbeit_Stephan_Roth_Volume_Visualization.pdf (10,35 MB, Adobe PDF)

Items in OPUS are protected by copyright, with all rights reserved, unless otherwise indicated.