05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6


Search Results

Now showing 1 - 3 of 3
  •
    Item (Open Access)
    Quantum support vector machines of high-dimensional data for image classification problems
    (2023) Rajput, Vikas Singh
    This thesis presents a comprehensive investigation into the efficient use of Quantum Support Vector Machines (QSVMs) for image classification on high-dimensional data. The primary focus is on analyzing the standard MNIST dataset and a high-dimensional dataset provided by TRUMPF SE + Co. KG. To evaluate the performance of QSVMs against classical Support Vector Machines (SVMs) on high-dimensional data, a benchmarking framework is proposed. In the current Noisy Intermediate-Scale Quantum (NISQ) era, classical preprocessing of the data is a crucial step in preparing it for classification tasks on NISQ machines. Various dimensionality reduction techniques, such as principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and convolutional autoencoders, are explored to preprocess the image datasets. Convolutional autoencoders are found to outperform the other methods when calculating quantum kernels on a small dataset. Furthermore, the benchmarking framework systematically analyzes different quantum feature maps by varying hyperparameters such as the number of qubits, the use of parameterized gates, the number of features encoded per qubit line, and the use of entanglement. Quantum feature maps demonstrate higher accuracy than classical feature maps on both the TRUMPF and MNIST data. Among the feature maps, one using Rz and Ry gates with two features per qubit, without entanglement, achieves the highest accuracy. The study also reveals that increasing the number of qubits improves accuracy on the real-world TRUMPF dataset. Additionally, the choice of quantum kernel function significantly impacts classification results, with the projected-type quantum kernel outperforming the fidelity-type quantum kernel. Subsequently, the study examines the Kernel Target Alignment (KTA) optimization method to improve the pipeline; however, for the chosen feature map and dataset, KTA does not provide significant benefits.
In summary, the results highlight the potential for achieving quantum advantage by optimizing all components of the quantum classifier framework. Selecting appropriate dimensionality reduction techniques, quantum feature maps, and quantum kernel methods is crucial for enhancing classification accuracy. Further research is needed to address challenges related to kernel optimization and fully leverage the capabilities of quantum computing in machine learning applications.
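    The feature map singled out in this abstract (Rz and Ry rotations, two features per qubit, no entanglement) admits a compact classical sketch: without entanglement, the fidelity-type kernel factorizes into a product of single-qubit overlaps. The following minimal statevector simulation is illustrative only — the function names and the pairing of consecutive features onto one qubit are assumptions, not the thesis implementation:

```python
import cmath
import math

def qubit_state(theta, phi):
    # |psi> = Rz(phi) Ry(theta) |0>: two classical features encoded on one qubit
    return (cmath.exp(-1j * phi / 2) * math.cos(theta / 2),
            cmath.exp(1j * phi / 2) * math.sin(theta / 2))

def fidelity_kernel(x, y):
    # Fidelity-type kernel |<phi(x)|phi(y)>|^2; with no entangling gates the
    # overlap is the product of per-qubit overlaps (two features per qubit).
    assert len(x) == len(y) and len(x) % 2 == 0
    overlap = 1.0 + 0j
    for i in range(0, len(x), 2):
        a = qubit_state(x[i], x[i + 1])
        b = qubit_state(y[i], y[i + 1])
        overlap *= a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return abs(overlap) ** 2
```

    The resulting Gram matrix can be passed to any kernel SVM (e.g. a precomputed-kernel classical solver); the kernel is symmetric, bounded in [0, 1], and equals 1 on the diagonal.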
  •
    Item (Open Access)
    Audio guide for visually impaired people based on combination of stereo vision and musical tones
    (2019) Simões, Walter C. S. S.; Silva, Yuri M. L. R.; Pio, José Luiz de S.; Jazdi, Nasser; F. de Lucena, Vicente
    Indoor navigation systems offer many application possibilities for people who need information about the scenery and about the fixed and mobile obstacles placed along their paths. In these systems, the main factors considered in their construction and evaluation are the level of accuracy and the delivery time of the information. It is also necessary to detect obstacles above the user's waistline to avoid accidents and collisions. In this paper, different methodologies are combined to define a hybrid navigation model called iterative pedestrian dead reckoning (i-PDR). i-PDR combines the PDR algorithm with a linear Kalman filter to correct the location, reducing the system's margin of error iteratively. Obstacle perception is addressed through stereo vision combined with a musical sounding scheme and spoken instructions, covering an angle of 120 degrees in front of the user. The margin of error and maximum processing time obtained are 0.70 m and 0.09 s, respectively, with ground-level and suspended obstacles detected with an accuracy of roughly 90%.
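    The i-PDR idea — dead-reckoning steps corrected iteratively by a linear Kalman filter — can be illustrated with a one-dimensional scalar filter that fuses over-estimated stride updates with occasional absolute position fixes. The class name and noise values below are illustrative assumptions, not the paper's implementation:

```python
class ScalarKalman:
    """1-D Kalman filter sketch: PDR prediction + absolute-fix correction."""

    def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.2):
        # x: position estimate, p: its variance,
        # q: process noise per step, r: measurement noise
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self, stride):
        # Dead-reckoning motion update: add the estimated stride, grow uncertainty.
        self.x += stride
        self.p += self.q

    def update(self, z):
        # Fuse an absolute position fix (e.g. a recognized landmark).
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
```

    With a stride estimate that is biased high, the uncorrected PDR position drifts linearly, while the filtered estimate is pulled back toward each fix — the iterative error reduction the abstract describes.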
  •
    Item (Open Access)
    Multi-material blind beam hardening correction in near real-time based on non-linearity adjustment of projections
    (2023) Alsaffar, Ammar; Sun, Kaicong; Simon, Sven
    Beam hardening (BH) is one of the major artifacts that severely reduces the quality of computed tomography (CT) imaging. The BH artifact arises from the polychromatic nature of the X-ray source and causes cupping and streak artifacts. This work proposes a fast and accurate BH correction method that requires no prior knowledge of the materials and corrects first- and higher-order BH artifacts. This is achieved by performing a wide sweep of the material, based on an experimentally measured look-up table, to obtain the closest estimate of the material. The non-linearity effect of the BH is then corrected by adding the difference between the estimated monochromatic and the polychromatic simulated projections of the segmented image. The estimated polychromatic projection is accurately derived using the least-squares estimation (LSE) method by minimizing the difference between the experimental projection and a linear combination of simulated polychromatic projections. As a result, an accurate non-linearity correction term is derived that leads to an accurate BH correction result. The simulated projections in this work are computed using a multi-GPU-accelerated forward projection model, which ensures fast BH correction in near real-time. To evaluate the proposed BH correction method, extensive experiments were conducted on real-world CT data. The proposed method yields images with an improved contrast-to-noise ratio (CNR) compared to images corrected only for scatter artifacts and to images corrected with the state-of-the-art empirical BH correction method.
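    The correction principle — add the difference between a simulated monochromatic and polychromatic projection of the estimated material — can be sketched in one dimension with a hypothetical two-bin X-ray spectrum. The weights and attenuation coefficients below are made-up illustration values, not taken from the paper:

```python
import math

# Hypothetical two-bin spectrum: photon weights and attenuation coefficients (1/cm)
WEIGHTS = (0.6, 0.4)
MU = (0.5, 0.2)
MU_MONO = 0.5 * 0.6 + 0.2 * 0.4  # effective coefficient for the mono projection

def poly_projection(thickness):
    # Polychromatic line integral: low-energy photons are absorbed faster,
    # so the measured projection grows sub-linearly with thickness (cupping).
    return -math.log(sum(w * math.exp(-mu * thickness) for w, mu in zip(WEIGHTS, MU)))

def mono_projection(thickness):
    # Ideal monochromatic projection: strictly linear in thickness.
    return MU_MONO * thickness

def corrected(measured, thickness_estimate):
    # Add the non-linearity term: mono minus poly of the estimated material.
    return measured + (mono_projection(thickness_estimate) - poly_projection(thickness_estimate))
```

    With a perfect thickness estimate, the correction term exactly restores the linear monochromatic response; in practice the estimate comes from the segmented reconstruction and the look-up-table material sweep described above.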