05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 18
  • Open Access
    A framework for similarity recognition of CAD models in respect to PLM optimization
    (2022) Zehtaban, Leila; Roller, Dieter (Univ.-Prof. Hon.-Prof. Dr.)
  • Open Access
    Rigorous compilation for near-term quantum computers
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible for traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations; compilation methods targeting near-term quantum computers must incorporate these requirements in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored to address these requirements, as they explore the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are then included in rigorous compilation methods to investigate each aspect of the imposed requirements, i.e. the number of qubits, connectivity of qubits, duration and incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology.
Experimental results demonstrate the ability of the developed rigorous compilation methods to extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, as well as by reducing the duration and incurred errors of the performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime yielded a reduction in the required number of qubits of up to 5x and in result error of up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
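The qubit-reuse idea mentioned in the abstract above — a physical qubit can host a new logical qubit once the previous one's last operation has completed — can be illustrated by a greedy interval assignment. This is a hedged, simplified sketch, not the dissertation's rigorous (exact) method; the function name and circuit data are hypothetical.

```python
# Hypothetical sketch: greedy qubit reuse via lifetime intervals.
# Each logical qubit has a (first_use, last_use) gate-index interval;
# a physical qubit becomes reusable once its interval has ended
# (ignoring measurement/reset latency for simplicity).

def assign_physical_qubits(lifetimes):
    """lifetimes: dict logical qubit -> (first_use, last_use) gate indices.
    Returns dict logical -> physical index, reusing released physical
    qubits greedily (interval assignment sorted by start time)."""
    assignment = {}
    free_at = []  # list of (release_time, physical_index)
    next_physical = 0
    for logical, (start, end) in sorted(lifetimes.items(),
                                        key=lambda kv: kv[1][0]):
        # reuse a physical qubit released strictly before this interval starts
        reusable = [i for i, (t, _) in enumerate(free_at) if t < start]
        if reusable:
            _, phys = free_at.pop(reusable[0])
        else:
            phys = next_physical
            next_physical += 1
        assignment[logical] = phys
        free_at.append((end, phys))
    return assignment

circuit = {"q0": (0, 2), "q1": (1, 5), "q2": (3, 6)}
print(assign_physical_qubits(circuit))  # q2 reuses q0's physical qubit
```

On this toy circuit, three logical qubits fit on two physical qubits, mirroring (in miniature) the kind of qubit-count reduction the abstract reports.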
  • Open Access
    Classification of cryptographic libraries
    (2017) Poppele, Andreas; Eichler, Rebecca; Jäger, Roland
    Software developers today are faced with choosing cryptographic libraries in order to implement security concepts. There is a large variety of cryptographic libraries for diverse programming languages, without any standardized conception of the different properties of these libraries. This report provides a classification of over 700 cryptographic libraries, chosen based on currency and popularity. In order to provide a standardized overview, the most important traits and characteristics of these libraries were gathered and defined. Data on these characteristics was collected both manually and in an automated fashion. The classification contains information that will help experienced and inexperienced developers in the cryptographic field choose a library that fits their abilities. Furthermore, it may serve as a basis for studies concerning improvements to these libraries, among other uses.
  • Open Access
    Test planning for low-power built-in self test
    (2014) Zoellin, Christian G.; Wunderlich, Hans-Joachim (Prof. Dr. rer. nat. habil.)
    Power consumption has become the most important issue in the design of integrated circuits. The power consumption during manufacturing or in-system test of a circuit can significantly exceed the power consumption during functional operation. The excessive power can lead to false test fails or can result in the permanent degradation or destruction of the device under test. Both effects can significantly impact the cost of manufacturing integrated circuits. This work targets power consumption during Built-In Self-Test (BIST). BIST is a Design-for-Test (DfT) technique that adds additional circuitry to a design such that it can be tested at-speed with very little external stimulus. Test planning is the process of computing configurations of the BIST-based tests that optimize the power consumption within the constraints of test time and fault coverage. In this work, a test planning approach is presented that targets the Self-Test Using Multiple-input signature register and Parallel Shift-register sequence generator (STUMPS) DfT architecture. For this purpose, the STUMPS architecture is extended by clock gating in order to leverage the benefits of test planning. The clock of every chain of scan flip-flops can be independently disabled, reducing the switching activity of the flip-flops and their clock distribution to zero as well as reducing the switching activity of the down-stream logic. Further improvements are obtained by clustering the flip-flops of the circuit appropriately. The test planning problem is mapped to a set covering problem. The constraints for the set covering are extracted from fault simulation and the circuit structure such that any valid cover will test every targeted fault at least once. Divide-and-conquer is employed to reduce the computational complexity of optimization against a power consumption metric. The approach can be combined with any fault model and in this work, stuck-at and transition faults are considered. 
The approach effectively reduces test power without increasing the test time or reducing the fault coverage. It has proven effective on academic benchmark circuits, several industrial benchmarks and the Synergistic Processing Element (SPE) of the Cell/B.E.™ Processor (Riley et al., 2005). Hardware experiments based on the manufacturing BIST of the Cell/B.E.™ Processor have shown the viability of the approach for industrial, high-volume, high-end designs. In order to improve the fault coverage for delay faults, high-frequency circuits are sometimes tested with complex clock sequences that generate tests with three or more at-speed cycles (rather than the two of traditional at-speed testing). To support such complex clock sequences, the test planning presented here has been extended by a circuit-graph-based approach for determining equivalent combinational circuits for the sequential logic. In addition, this work proposes a method based on dynamic frequency scaling of the shift clock that utilizes a given power envelope to its full extent. This way, the test time can be reduced significantly, in particular if high test coverage is targeted.
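The mapping of test planning to set covering described in the abstract above can be illustrated with a generic greedy set-cover sketch. This is a simplification with hypothetical names and an invented power-cost model — not the thesis's actual fault-simulation-derived constraints or its divide-and-conquer optimization.

```python
# Illustrative greedy set cover: pick BIST configurations (each covering
# a set of faults at some power cost) until every targeted fault is
# tested at least once. All names and numbers are hypothetical.

def greedy_test_plan(configs, faults):
    """configs: dict name -> (set of covered faults, power cost).
    Greedily selects the configuration covering the most uncovered
    faults per unit of power until all targeted faults are covered."""
    uncovered = set(faults)
    plan = []
    while uncovered:
        name, (covered, cost) = max(
            configs.items(),
            key=lambda kv: len(kv[1][0] & uncovered) / kv[1][1],
        )
        gained = covered & uncovered
        if not gained:
            raise ValueError("remaining faults cannot be covered")
        plan.append(name)
        uncovered -= gained
    return plan

configs = {
    "c1": ({"f1", "f2"}, 2.0),
    "c2": ({"f2", "f3", "f4"}, 3.0),
    "c3": ({"f4"}, 1.0),
}
print(greedy_test_plan(configs, {"f1", "f2", "f3", "f4"}))
```

Greedy selection is only an approximation; the thesis's rigorous approach computes covers that are optimal against its power metric, which a sketch like this does not guarantee.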
  • Open Access
    Deep learning based prediction and visual analytics for temporal environmental data
    (2022) Harbola, Shubhi; Coors, Volker (Prof. Dr.)
    The objective of this thesis is to develop Machine Learning methods and their visualisation for environmental data. The presented approaches primarily focus on devising an accurate Machine Learning framework that supports the user in understanding and comparing model accuracy in relation to essential aspects such as parameter selection, trends and time frame, together with correlations among the considered meteorological and pollution parameters. This thesis then develops approaches for the interactive visualisation of environmental data, wrapped around time series prediction as an application. These approaches provide an interactive application that supports: 1. a Visual Analytics platform to interact with the sensor data and visually enhance the representation of the environmental data by identifying patterns that mostly go unnoticed in large temporal datasets, 2. a seasonality deduction platform presenting analyses of the results that clearly demonstrate the relationship between these parameters in a combined temporal activity frame, and 3. air quality analyses that successfully discover spatio-temporal relationships among complex air quality data interactively in different time frames by harnessing the user's knowledge of factors influencing past, present, and future behaviour with the aid of Machine Learning models. Parts of this work contribute to the field of Explainable Artificial Intelligence, an area concerned with the development of methods that help understand, explain and interpret Machine Learning algorithms. In summary, this thesis describes Machine Learning prediction algorithms together with several visualisation approaches for interactively analysing the temporal relationships among complex environmental data in different time frames in a robust web platform.
The developed interactive visualisation system for environmental data assimilates visual prediction, sensors' spatial locations, measurements of the parameters, detailed pattern analyses, and changes in conditions over time. This provides a new combined approach to existing visual analytics research. The algorithms developed in this thesis can be used to infer spatio-temporal environmental data, enabling interactive exploration processes and thus helping to manage cities smartly.
  • Open Access
    Maschinelles Lernen für intelligente Automatisierungssysteme mit dezentraler Datenhaltung am Anwendungsfall Predictive Maintenance
    (2019) Maschler, Benjamin; Jazdi, Nasser; Weyrich, Michael
    For high result quality, machine learning algorithms depend on a broad data basis. Studies show, however, that many companies are not willing to share their data with other companies, for example in the form of a shared data cloud. The goal should therefore be to enable efficient machine learning with decentralized data storage, allowing confidential data to remain within its company of origin. This article presents a novel concept in this regard and analyzes its potential for intelligent automation systems using the predictive maintenance use case as an example. The feasibility of the concept using various existing approaches is discussed, before potential added value for plant operators and manufacturers is addressed, with particular attention to the perspective of small and medium-sized enterprises.
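One established way to realize machine learning over decentralized data of the kind envisioned above is federated learning, where each company trains locally and only model parameters are exchanged. The following is a minimal, hypothetical sketch of weighted federated averaging; the function name and the example data are invented for illustration and are not from the article.

```python
# Hypothetical sketch of federated averaging: each site trains a model
# on its own (confidential, never-shared) data; only the resulting
# weights are combined, weighted by each site's sample count.

def federated_average(local_weights, sample_counts):
    """Weighted average of per-site model weights.
    local_weights: list of weight vectors (lists of floats), one per site.
    sample_counts: list of training-sample counts, one per site."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

site_a = [0.2, 0.4]   # weights trained on company A's local data
site_b = [0.6, 0.0]   # weights trained on company B's local data
print(federated_average([site_a, site_b], [100, 300]))  # ≈ [0.5, 0.1]
```

The confidential training data itself never leaves either company; only the two weight vectors are pooled, which is the property the article's concept aims for.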
  • Open Access
    Sprachassistierter Entwicklungsprozess für automatisierungstechnische Systeme : ein Ansatz zur Strukturierung komplexer Entwicklungsprozesse
    (2020) White, Dustin; Weyrich, Michael
    The system development process is becoming ever more complex because the systems themselves are becoming more complex. At the same time, disciplines such as mechanical engineering, electrical engineering and software engineering are increasingly intermixing, so that companies rooted in one discipline face abrupt increases in complexity in their systems and their development. This publication therefore develops the concept of a voice assistant that guides the user through a development phase. It emerges that the software supporting development requires an information model in order to store the data of the system under development and to connect it with existing knowledge. This knowledge may be available either internally or on the web. The development process should therefore support cooperation, so that the assistance software and engineers interact with one another.
  • Open Access
    Berechnungsverfahren und auf Abtastung basierende Messverfahren zur Bestimmung elektrischer HF-Störfelder und der damit verbundenen Störeinkopplungen in Leitersysteme
    (2006) Geisbusch, Lothar; Landstorfer, Friedrich (Prof. Dr.-Ing.)
    An important aspect of EMC is the coupling of high-frequency electromagnetic fields into conductor systems. To quantify such couplings, this thesis develops both measurement-based and numerical methods. For computing the interference coupling into cardiac pacemaker systems, a hybrid computational method combining the multiple multipole method and the method of moments is further developed and its applicability improved. In addition to the computational method, this thesis develops a new field-sensor method for measuring the magnitude and phase of high-frequency electric fields. The method exploits subharmonic sampling by placing a fast sampler inside an electrically short dipole antenna. The sampler is triggered via an optical fiber, which avoids distortions of the field being measured. A very high measurement speed is achieved, so that field distributions can be measured within a short time. Alongside the field sensor, a modified sensor system for measuring interference voltages is developed, which allows, for example, the measurement of coupled voltages at pacemaker electrodes. The thesis concludes with an investigation of interference coupling into pacemaker systems and with experimental work on the field distribution inside a motor vehicle during mobile-phone operation.
  • Open Access
    Deep learning aided clinical decision support
    (2023) Schneider, Rudolf; Staab, Steffen (Prof. Dr.)
    Medical professionals create vast amounts of clinical texts during patient care. Often, these documents describe medical cases from anamnesis to the final clinical outcome. Automated understanding and selection of relevant medical records pose an opportunity to assist medical doctors in their day-to-day work on a large scale. However, clinical text understanding is challenging, especially when dealing with clinical narratives such as nursing notes or diagnostic reports. These clinical documents differ extensively in length, structure, vocabulary, and lexical and grammatical correctness. In addition, they are highly context-dependent. For all these reasons, approaches based on syntactic rules and discrete text representations often fail to address the variety of clinical narratives, propagating unrecoverable errors to downstream applications. Therefore, this thesis focuses on evaluating and designing methods and models that are generalizable and adaptable enough to deal with these challenges. Our goal is to enable text-based clinical decision support systems to utilize the knowledge from clinical archives and medical publications. We aim to design methods that can scale up to the growing amount of clinical documents in hospital archives. A fundamental problem in achieving deep-learning-enabled clinical decision support systems is designing a patient representation that captures all relevant information for automated processing. We address these challenges by designing a framework for deep-learning-enabled differential diagnosis support. Guided by the needs emerging from this framework, we design and evaluate methods based on three information representation paradigms: (1) Discrete relation extraction using the open information extraction paradigm. (2) Neural text representations based on language and topic modeling. (3) Combining complementary neural text representations.
Our framework translates clinical diagnostic steps and pathways to statistical and deep-learning-based models. Accordingly, we can show that deep-learning-enabled differential diagnosis benefits from contextualized information representations. Further, we identify shortcomings of the open information extraction paradigm in a comprehensive benchmark. We design a distributed text representation model based on topical information. Our extensive large-scale experiment results show that topical distributed text representations capture information complementary to language modeling-based approaches across domains, thus enabling a holistic text representation for medical texts. Our experiments with medical doctors using our prototypical implementation of the deep-learning-enabled differential diagnosis process validate this framework. Moreover, we identify seven crucial design challenges for text-based clinical decision support systems based on our qualitative and quantitative findings.
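The idea of combining complementary neural text representations, as described above, can be illustrated by a minimal, hypothetical sketch: concatenating a (dummy) language-model embedding with a (dummy) topic-distribution vector and comparing documents by cosine similarity on the combined space. This is not the thesis's actual model; all vectors and names here are invented.

```python
# Hypothetical sketch: a combined text representation built by
# concatenating two complementary feature spaces (language-model
# embedding + topic distribution), compared via cosine similarity.
import math

def combine(lm_vec, topic_vec):
    """Concatenate two representation vectors into one feature space."""
    return lm_vec + topic_vec

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Two toy clinical notes with dummy embeddings and topic vectors:
doc1 = combine([0.1, 0.9], [0.8, 0.2])
doc2 = combine([0.1, 0.8], [0.7, 0.3])
print(round(cosine(doc1, doc2), 3))  # high similarity on the combined space
```

Concatenation is the simplest combination scheme; the point is only that both signal sources contribute to the distance between documents, which is the intuition behind a holistic text representation.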
  • Open Access
    Anwendungsfälle und Methoden der künstlichen Intelligenz in der anwendungsorientierten Forschung im Kontext von Industrie 4.0
    (2020) Maschler, Benjamin; White, Dustin; Weyrich, Michael
    Data-driven artificial intelligence methods are expected to shape the future of industrial manufacturing in the context of Industrie 4.0. Although the topic is very present in research, the extent to which these methods are actually used remains unclear. This contribution therefore analyzes scientific articles published between 2013 and 2018 in order to obtain statistical data on the use of artificial intelligence methods in industry. Particular attention is paid to the training and evaluation data types, the prevalence across different industrial sectors, the use cases considered, and the geographic origin of these articles. The resulting findings are distilled into practical guidance for decision-makers.