05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
8 results
Item Open Access
A framework for similarity recognition of CAD models in respect to PLM optimization (2022) Zehtaban, Leila; Roller, Dieter (Univ.-Prof. Hon.-Prof. Dr.)

Item Open Access
Rigorous compilation for near-term quantum computers (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible for traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by high error rates, a relatively low number of qubits and low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations; compilation methods targeting near-term quantum computers must incorporate these requirements to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements: they explore the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on only one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are then included in rigorous compilation methods that address each aspect of the imposed requirements, i.e. the number of qubits, the connectivity of qubits, the duration and the incurred errors.
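The exact exploration of the solution space that rigorous compilation performs can be illustrated with a deliberately tiny sketch (illustrative only, not from the thesis): a brute-force search over initial placements of logical qubits on a linear coupling graph, under a toy cost model that charges one swap for every CNOT acting on non-adjacent physical qubits.

```python
from itertools import permutations

def required_swaps(mapping, cnots, coupled):
    # Count CNOTs acting on non-adjacent physical qubits; each such gate
    # needs at least one inserted swap under this toy cost model.
    return sum(1 for a, b in cnots
               if (mapping[a], mapping[b]) not in coupled)

def best_initial_mapping(n_logical, physical, cnots, coupled):
    # Exhaustively explore all assignments -- the "rigorous" (exact) part.
    best = None
    for perm in permutations(physical, n_logical):
        cost = required_swaps(dict(enumerate(perm)), cnots, coupled)
        if best is None or cost < best[0]:
            best = (cost, perm)
    return best

# Linear coupling 0-1-2-3 (both directions), hypothetical example circuit.
edges = {(i, i + 1) for i in range(3)} | {(i + 1, i) for i in range(3)}
cnots = [(0, 1), (0, 2), (1, 3)]  # logical two-qubit gates
cost, mapping = best_initial_mapping(4, range(4), cnots, edges)
```

For this example an optimal placement exists that makes every gate local (cost 0); real rigorous compilers encode the same search as a SAT or optimization problem rather than enumerating permutations.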
The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. Experimental results demonstrate that the developed methods extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, and by reducing the duration and incurred errors of the performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime yielded a reduction in the required number of qubits of up to 5x and in the result error of up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation into separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.

Item Open Access
Deep learning based prediction and visual analytics for temporal environmental data (2022) Harbola, Shubhi; Coors, Volker (Prof. Dr.)
The objective of this thesis is to develop Machine Learning methods and their visualisation for environmental data. The presented approaches primarily focus on devising an accurate Machine Learning framework that supports the user in understanding and comparing model accuracy in relation to essential aspects of the respective parameter selection, trends and time frame, together with the considered meteorological and pollution parameters. The thesis then develops approaches for the interactive visualisation of environmental data that are wrapped around the time series prediction as an application. Moreover, these approaches provide an interactive application that supports: 1. a Visual Analytics platform to interact with the sensor data and enhance the visual representation of the environmental data by identifying patterns that mostly go unnoticed in large temporal datasets, 2. a seasonality deduction platform presenting analyses of the results that clearly demonstrate the relationship between these parameters in a combined temporal activity frame, and 3. air quality analyses that successfully discover spatio-temporal relationships among complex air quality data interactively in different time frames by harnessing the user's knowledge of factors influencing past, present and future behaviour with the aid of Machine Learning models. Some of the above work contributes to the field of Explainable Artificial Intelligence, an area concerned with the development of methods that help understand, explain and interpret Machine Learning algorithms. In summary, this thesis describes Machine Learning prediction algorithms together with several visualisation approaches for interactively analysing the temporal relationships among complex environmental data in different time frames in a robust web platform.
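As a hedged illustration of the kind of time series prediction such a framework wraps (a minimal sketch, not the thesis's actual deep-learning models), a one-step autoregressive forecast fitted by least squares on hypothetical sensor readings:

```python
def fit_ar1(series):
    # Least-squares fit of x[t] = a * x[t-1] + b (one-step autoregression).
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def forecast(series, steps, a, b):
    # Roll the fitted model forward from the last observation.
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

temps = [12.0, 13.1, 14.0, 14.8, 15.5, 16.1]  # hypothetical hourly readings
a, b = fit_ar1(temps)
pred = forecast(temps, 2, a, b)
```

A deep model replaces the linear recurrence with a learned nonlinear one, but the sliding-window interface to the visual analytics layer is the same.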
The developed interactive visualisation system for environmental data assimilates visual prediction, the sensors' spatial locations, measurements of the parameters, detailed pattern analyses, and changes in conditions over time. This provides a new combined approach to the existing visual analytics research. The algorithms developed in this thesis can be used to infer spatio-temporal environmental data, enabling interactive exploration processes and thus helping manage cities smartly.

Item Open Access
Deep learning aided clinical decision support (2023) Schneider, Rudolf; Staab, Steffen (Prof. Dr.)
Medical professionals create vast amounts of clinical text during patient care. Often, these documents describe medical cases from anamnesis to the final clinical outcome. Automated understanding and selection of relevant medical records offer an opportunity to assist medical doctors in their day-to-day work on a large scale. However, clinical text understanding is challenging, especially when dealing with clinical narratives such as nursing notes or diagnostic reports. These clinical documents differ extensively in length, structure, vocabulary, and lexical and grammatical correctness. In addition, they are highly context-dependent. For all these reasons, approaches based on syntactic rules and discrete text representations often fail to address the variety of clinical narratives, propagating unrecoverable errors to downstream applications. Therefore, this thesis focuses on evaluating and designing methods and models that are generalizable and adaptable enough to deal with these challenges. Our goal is to enable text-based clinical decision support systems to utilize the knowledge from clinical archives and medical publications. We aim to design methods that can scale up to the growing number of clinical documents in hospital archives.
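A toy sketch (assumed, not from the thesis) of why rule-based extraction of discrete representations is brittle on clinical narratives: a pattern-based triple extractor succeeds only when the note matches the expected phrasing, and silently returns nothing on shorthand nursing-note style.

```python
def extract_triples(sentence, relation_phrases):
    # Toy open information extraction: split on a known relation phrase and
    # treat the left part as subject, the right part as object.
    triples = []
    for rel in relation_phrases:
        marker = f" {rel} "
        if marker in sentence:
            subject, obj = sentence.split(marker, 1)
            triples.append((subject.strip(), rel, obj.strip().rstrip(".")))
    return triples

# A well-formed sentence is handled; a shorthand nursing note is not.
ok = extract_triples("Patient reports chest pain.", ["reports"])
missed = extract_triples("Pt. c/o chest pain.", ["reports"])
```

Contextualized neural representations, as evaluated in the thesis, avoid this hard dependence on surface form.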
A fundamental problem in achieving deep-learning-enabled clinical decision support systems is designing a patient representation that captures all information relevant for automated processing. We engage these challenges by designing a framework for deep-learning-enabled differential diagnosis support. Guided by the needs emerging from this framework, we design and evaluate methods based on three information representation paradigms: (1) discrete relation extraction using the open information extraction paradigm, (2) neural text representations based on language and topic modeling, and (3) combining complementary neural text representations. Our framework translates clinical diagnostic steps and pathways into statistical and deep-learning-based models. Accordingly, we show that deep-learning-enabled differential diagnosis benefits from contextualized information representations. Further, we identify shortcomings of the open information extraction paradigm in a comprehensive benchmark. We design a distributed text representation model based on topical information. Our extensive large-scale experiments show that topical distributed text representations capture information complementary to language-modeling-based approaches across domains, thus enabling a holistic text representation for medical texts. Our experiments with medical doctors using our prototypical implementation of the deep-learning-enabled differential diagnosis process validate this framework. Moreover, we identify seven crucial design challenges for text-based clinical decision support systems based on our qualitative and quantitative findings.

Item Open Access
Optische Messsysteme und Ein-Sensor-Bildgebungsverfahren für Biosensoren (2024) Berner, Marcel; Werner, Jürgen H. (Prof. Dr. rer. nat. habil.)
This thesis presents the development of several measurement systems and methods for optical biosensor applications.
The first part of this thesis designs a universal experimental platform for testing new optical biosensor concepts based on the principle of laser-induced fluorescence (LIF). The platform supports the European research project Nanodem in the development of a portable point-of-care testing (PoCT) device for live monitoring of immunosuppressant concentrations in the blood of transplant patients directly at the bedside. The platform concept developed in this thesis comprises the optoelectronic fluorescence excitation and detection, the optical filter systems, the fluorescent dye, the material system of the transducer chips, the microfluidic system, and the automation of the process control. The starting point of the development is the derivation of a general physical model for LIF systems, which guides the construction of the platform. The transducer chip concept, designed in cooperation with the Eberhard Karls Universität Tübingen and based on laser-cut adhesive tapes, allows high flexibility with respect to the geometry and structure of the transducer chips and supports the transfer of academic research results into industrial manufacturing. Thanks to easily adaptable fabrication processes, the designed photodetector arrays made of amorphous silicon can be adapted cost-efficiently to arbitrary biosensor geometries. The achieved specific detectivity of the detectors, D* = 11 × 10^12 Jones, is on par with that of state-of-the-art detectors made of crystalline material. The achieved detection limit is c_LOD,exp = 26 nmol/l. Furthermore, the experimental measurement data confirm the derived physical model. The second part of this thesis presents a new optical method for spatially resolved measurement that observes a multitude of image points simultaneously with only a single optical sensor.
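The specific detectivity quoted above relates detector area, bandwidth and noise-equivalent power (NEP) via the standard definition D* = sqrt(A · Δf) / NEP; a small sketch with hypothetical detector values (not those of the thesis's detectors):

```python
import math

def specific_detectivity(area_cm2, bandwidth_hz, nep_w):
    # D* = sqrt(A * df) / NEP, expressed in Jones (cm * sqrt(Hz) / W).
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_w

# Hypothetical detector: 1 mm^2 active area, 1 Hz bandwidth, 1 fW NEP.
dstar = specific_detectivity(0.01, 1.0, 1e-15)
```

Normalizing by area and bandwidth is what makes D* comparable across amorphous-silicon and crystalline detectors of different sizes.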
To this end, the method uses spatial light modulators (SLMs) to generate a spatially dependent optical modulation. The generated optical carrier signals allow the signals, received as a single sum signal, to be assigned to their points of origin. The so-called Fourier Spotter exploits the mathematical properties of the Fourier transform. By applying mutually phase-shifted modulation signals, the Fourier Spotter additionally allows the direct measurement of brightness differences between different observation points. This differential optical measurement principle is the core of a patent already granted to the author together with the Universität Stuttgart. The novel optical measurement principle is suitable for integration into optical biosensor methods such as single color reflectometry (SCORE), which currently still depend on expensive special-purpose cameras. Conventional camera systems produce large amounts of data whose evaluation demands considerable computing power and thus stands in the way of further development towards miniaturized, portable biosensor platforms. This thesis presents a successful experimental proof of concept of the Fourier Imager based on brightness-difference measurements on a SCORE setup. A future extension of the Fourier Spotter with a line spectrometer would allow, in addition to the spatially resolved observation, the simultaneous acquisition of the optical spectrum of every observed point. This hyperspectral extension would enable the first realization of a multi-channel optical biosensor platform based on reflectometric interference spectroscopy (RIfS). The third part of this thesis generalizes the principle of the Fourier Spotter and transforms it into a single-pixel camera method: AM-FDM imaging (amplitude modulated frequency division multiplexing).
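The carrier-signal idea behind the Fourier Spotter described above can be sketched numerically (a simplified model, not the thesis's implementation): each observed point is modulated at its own frequency, the single sensor records only the sum, and each point's brightness is recovered by correlating the sum against its carrier.

```python
import math

def demodulate(summed, freqs, fs):
    # Lock-in style demodulation: correlate the summed signal with each
    # carrier to recover the per-point brightness amplitudes.
    n = len(summed)
    amps = []
    for f in freqs:
        c = sum(s * math.cos(2 * math.pi * f * k / fs)
                for k, s in enumerate(summed))
        amps.append(2 * c / n)
    return amps

fs, n = 1000.0, 1000
freqs = [50.0, 120.0, 210.0]   # one carrier frequency per observed point
true = [0.8, 0.3, 0.5]         # assumed per-point brightness values
# The single sensor only ever sees the sum of all modulated points.
summed = [sum(a * math.cos(2 * math.pi * f * k / fs)
              for a, f in zip(true, freqs)) for k in range(n)]
amps = demodulate(summed, freqs, fs)
```

With carriers that complete an integer number of cycles in the observation window, the correlations are orthogonal and the brightness values are recovered exactly; phase-shifted carrier pairs yield the differential measurement directly.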
AM-FDM imaging is based on the application of approximation methods that minimize crosstalk between the carrier signals. The developed system-theoretical model of AM-FDM imaging also covers Fourier spotting and allows a comparison with raster scans as well as with established single-pixel camera methods such as Hadamard imaging. If the signal-to-noise ratio is limited by the noise of the detector system, AM-FDM imaging achieves a so-called multiplex gain a_mult = O(M) on the order of the number M of simultaneously observed image points. With the approximation methods currently employed, AM-FDM imaging does not reach the performance of Hadamard imaging, the predominant single-pixel imaging method, with respect to the signal-to-noise ratio, the number of simultaneously observable image points and the achievable frame rate. However, the relationships between AM-FDM imaging and other known single-pixel camera methods discussed in this thesis suggest that a hitherto unknown approximation method exists that would put AM-FDM imaging on a par with Hadamard imaging. The results of the system-theoretical model were confirmed by simulation in Matlab and also apply to the Fourier Spotter. The results thus show that, for the SCORE use case, a modulation following the principle of Hadamard imaging is more advantageous. The granted patent on the optically differential measurement method also covers a differential variant of Hadamard imaging. Compared with determining difference values from measured absolute values, the differential measurement method doubles either the signal-to-noise power ratio or the frame rate of Hadamard imaging.

Item Open Access
Resilience of quantum optimization algorithms (2024) Ji, Yanjun; Polian, Ilia (Prof. Dr.)
Quantum optimization algorithms (QOAs) show promise in surpassing classical methods for solving complex problems. However, their practical application is limited by the sensitivity of quantum systems to noise. This study addresses this challenge by investigating the resilience of QOAs and developing strategies to enhance their performance and robustness on noisy quantum computers. We begin by establishing an evaluation framework to assess the performance of QOAs under various conditions, including simulated noise-free and error-modeled environments as well as real noisy hardware, providing a foundation for guiding the development of enhancement strategies. We then propose innovative techniques to improve the performance of algorithms on near-term quantum devices characterized by limited qubit connectivity and noisy operations. Our study introduces an effective compilation process that maximizes the utilization of classical and quantum resources. To overcome the restricted connectivity of hardware, we develop an algorithm-oriented qubit mapping approach that bridges the gap between heuristic and exact methods, providing scalable and optimal solutions. Additionally, we demonstrate, for the first time, selective optimization of quantum circuits on real hardware by optimizing only gates implemented with low-quality native gates, providing significant insights for large-scale quantum computing. We also investigate error mitigation strategies and their dependence on hardware features and algorithm implementation details, emphasizing the synergistic effects of error mitigation and circuit design. While error mitigation can suppress the effects of noise, hardware quality and circuit design are ultimately more critical for achieving high performance. Building upon these insights, we explore the co-optimization of algorithm design and hardware implementation to achieve optimal performance and resilience.
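One widely used error-mitigation strategy of the kind investigated above is zero-noise extrapolation; a minimal sketch with hypothetical measurement values (the thesis's actual mitigation techniques may differ):

```python
def zero_noise_extrapolate(scales, values):
    # Linear fit of measured expectation value vs. noise scale,
    # extrapolated back to the (unreachable) zero-noise point.
    n = len(scales)
    mx = sum(scales) / n
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(scales, values))
             / sum((x - mx) ** 2 for x in scales))
    return my - slope * mx  # intercept = estimate at noise scale 0

# Hypothetical expectation values measured at artificially amplified noise.
scales = [1.0, 2.0, 3.0]
values = [0.80, 0.65, 0.50]
estimate = zero_noise_extrapolate(scales, values)
```

The mitigated estimate exceeds every raw measurement, which is the point: noise is traded for extra circuit executions rather than extra hardware quality.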
By optimizing gate sequences and parameters at the algorithmic level and minimizing error-prone two-qubit gates during compilation, we demonstrate significant improvements in QOA performance. Finally, we explore the practical application of QOAs to real-world problems, emphasizing the importance of optimizing parameters in problem instances to identify optimal solutions. With extensive experiments conducted on real devices, this dissertation makes a substantial contribution to the field of quantum optimization, providing both theoretical foundations and practical strategies for addressing the challenges posed by near-term quantum hardware. Our findings pave the way for the realization of practical quantum computing applications and help unlock the full potential of QOAs.

Item Open Access
Adaptive error control for stratospheric long-distance optical links (2024) Parthasarathy, Swaminathan; Kirstädter, Andreas (Prof. Dr.-Ing.)
Free-space optical (FSO) communication plays a crucial role in aerospace technology, utilizing lasers to establish high-speed wireless connections over long distances. FSO surpasses conventional RF wireless technology in various respects and supports high-data-rate connectivity for services such as Internet access, data transfer, voice communication, and image transfer. High-Altitude Platforms (HAPs) have emerged as ideal hosts for FSO communication networks, offering ultra-high data rates for applications like high-speed Internet, video conferencing, telemedicine, smart cities, and autonomous driving. FSO via HAPs ensures minimal latency, making it suitable for real-time tasks like remote surgery and autonomous vehicle control. The swift, long-distance communication links with low delays make FSO-equipped HAPs ideal for RF-congested areas, providing cost-effective solutions in remote regions and contributing to environmental monitoring.
This thesis explores the use of adaptive code-rate Hybrid Automatic Repeat Request (HARQ) methods and channel state information (CSI) to improve the transmission efficiency of Free-Space Optical (FSO) links between High-Altitude Platforms (HAPs). The study examines channel impairments such as atmospheric turbulence and static pointing errors, focusing on the weak-fluctuation regime of atmospheric turbulence. It investigates the reciprocal behavior of bidirectional FSO channels to improve performance, providing evidence of channel reciprocity. The research proposes using HARQ, an adaptive Reed-Solomon (RS) code-rate technique, and different CSI types to address these impairments. Simulations of various scenarios are used to evaluate these methods, yielding insight into the efficiency of HARQ protocols on inter-HAP FSO links, the role of different CSI types in adaptive-rate HARQ, and possible ways to improve system efficiency. The thesis examines the channel model for inter-HAP FSO links in detail, taking atmospheric conditions and static pointing errors into account. The channel is modeled as a lognormal fading channel under a weak-fluctuation regime. The principle of channel reciprocity and the measures used to quantify it are discussed, providing a foundational understanding for the subsequent investigations. Forward Error Correction (FEC) schemes, with a specific emphasis on the Reed-Solomon scheme, and various Automatic Repeat reQuest (ARQ) schemes are thoroughly examined. A meticulous comparison of different ARQ schemes shows that Selective Repeat ARQ (SR-ARQ) is the most efficient for high-error-rate channels, making it the preferred choice for inter-HAP FSO channels. Conversely, Stop-and-Wait ARQ (SW-ARQ) and Go-Back-N ARQ (GBN-ARQ) are found to be less suitable for these channels.
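The ARQ comparison above can be reproduced with standard textbook efficiency approximations (a sketch under idealized assumptions; p is the frame error rate, a the propagation-to-transmission delay ratio, n the number of outstanding frames):

```python
def efficiency_sw(p, a):
    # Stop-and-Wait: send one frame, then wait a full round trip.
    return (1 - p) / (1 + 2 * a)

def efficiency_gbn(p, n):
    # Go-Back-N (window filling the pipe): an error retransmits n frames.
    return (1 - p) / (1 - p + n * p)

def efficiency_sr(p):
    # Selective Repeat: only the erroneous frame is retransmitted.
    return 1 - p

# Hypothetical high-error-rate, long-delay inter-HAP link.
p, a, n = 0.2, 10.0, 8
eta = (efficiency_sr(p), efficiency_gbn(p, n), efficiency_sw(p, a))
```

At high error rates and large delay ratios, typical for long stratospheric links, SR-ARQ dominates, which matches the thesis's conclusion.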
An innovative approach is introduced that leverages various types of channel state information (CSI) to adjust the Reed-Solomon FEC code rate. Four types of CSI are employed: perfect CSI (P-CSI), reciprocal CSI (R-CSI), delayed CSI (D-CSI), and fixed mean CSI (F-CSI). The adaptation of the Reed-Solomon FEC code rate, combined with Selective Repeat ARQ, is explored, and the optimal power selection is identified through rigorous analysis. Simulation models built with OMNeT++ are presented, covering the inter-HAP channel and an event-based selective repeat HARQ model. The study demonstrates reciprocity on the longest recorded ground-to-ground bidirectional FSO link, holding promise for mitigating signal scintillation caused by atmospheric turbulence. It evaluates the performance of different ARQ protocols and adaptive HARQ schemes in inter-HAP FSO communication systems. The results show how channel state information, atmospheric turbulence, and pointing errors affect system performance, and suggest ways to improve system efficiency, such as CSI prediction and soft combining. These findings offer valuable insights for the design and optimization of ARQ and HARQ schemes in inter-HAP FSO communication systems and suggest promising avenues for future research.

Item Open Access
Compact modeling of modern power MOSFETs based on industry-standard CMOS models (2025) Yan, Lixi; Kallfass, Ingmar (Prof. Dr.-Ing.)
This work presents a modeling approach that adopts the industry-standard models for circuit simulation, with the extensions necessary to describe vertical power MOSFETs. Standard models, which were developed for CMOS logic devices, are adopted for their proven robustness and fidelity to describe the voltage-controlled channel behavior of power MOSFETs.
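A hedged sketch of CSI-driven Reed-Solomon rate adaptation (illustrative only; the thesis's scheme and its parameters differ in detail): given a symbol error probability estimated from the CSI, choose the message length k of an RS(n, k) code so that the t = (n - k)/2 correctable symbols cover the expected errors with a safety margin.

```python
import math

def pick_rs_rate(n, symbol_error_prob, margin=2.0):
    # Choose message length k so that t = (n - k) / 2 correctable symbols
    # cover the expected number of symbol errors with some margin.
    expected_errors = n * symbol_error_prob
    t = max(1, math.ceil(margin * expected_errors))
    k = n - 2 * t
    return max(k, 1)

# Hypothetical CSI estimates: a clear channel keeps the code rate high,
# a turbulent (lognormal-fade) channel buys more parity symbols.
k_good = pick_rs_rate(255, 0.005)
k_bad = pick_rs_rate(255, 0.05)
```

Perfect CSI lets this choice track the channel exactly; delayed or fixed-mean CSI picks a stale or average rate, which is precisely the trade-off the four CSI types probe.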
Considering the vertical MOSFET structure, extended components including the nonlinear drift region, the body diode and the parasitic capacitances are defined as model extensions. The specific requirements of SiC MOSFETs, which differ from those of Si devices, are also analyzed. The static and dynamic characteristics, including thermal effects, are measured as the reference for the model parameter extraction. Some attempts to create high-voltage MOSFET models by adding elements to a standard MOSFET model have already been reported, but these models are not aimed at high-current power MOSFETs, or crucial effects such as the asymmetric reverse conduction current and the reverse recovery of the body diode are not modeled. This work provides a practical approach to characterizing commercially available vertical power MOSFETs and proposes a modeling method that captures the critical effects of power MOSFETs, enabling the model to precisely describe the performance of the devices in switching-mode simulations. Moreover, the model extension approach discussed in this work is not limited to a particular standard model. Physics-based standard models can be categorized into three groups: threshold-voltage based, inversion-charge based, and surface-potential based. The properties of three standard models, one from each group, are analyzed and compared. An appropriate extension strategy is developed for each standard model, and a specific parameter extraction flow is provided for each proposed model. Compared with the vendor model, the modeling method proposed in this work increases the accuracy of the simulated transient switching loss by around 20%, which can contribute to improving power MOSFET compact modeling in the semiconductor community and, in turn, the design of switched-mode power converters.
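The extension idea, a standard CMOS-style channel in series with a nonlinear drift region, can be sketched as follows (a toy square-law channel with illustrative parameters, not one of the industry-standard models used in this work): the series resistance makes the drain current implicit, so the operating point is solved iteratively.

```python
def channel_current(vgs, vds_ch, vth=4.0, kp=1.5):
    # Toy square-law channel (a greatly simplified CMOS-style core model).
    if vgs <= vth:
        return 0.0
    vov = vgs - vth
    if vds_ch < vov:  # linear (triode) region
        return kp * (vov * vds_ch - vds_ch ** 2 / 2)
    return kp * vov ** 2 / 2  # saturation region

def drain_current(vgs, vds, r_drift=0.1):
    # Drift-region resistance in series with the channel makes the current
    # implicit: I = I_ch(vgs, vds - I * r_drift). Solve by bisection.
    lo, hi = 0.0, vds / r_drift
    for _ in range(60):
        i = (lo + hi) / 2
        if channel_current(vgs, vds - i * r_drift) > i:
            lo = i
        else:
            hi = i
    return (lo + hi) / 2

i_low_r = drain_current(10.0, 5.0)               # small drift resistance
i_high_r = drain_current(10.0, 5.0, r_drift=0.5)  # larger drift resistance
```

Increasing the drift resistance visibly depresses the on-state current, which is the quasi-saturation behavior a vertical power MOSFET extension must reproduce on top of the standard channel model.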