Universität Stuttgart
Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1
Search Results
Item Open Access
Wege zur Ermittlung von Energieeffizienzpotenzialen von Informations- und Kommunikationstechnologien (Stuttgart : Universität Stuttgart, Institut für Energiewirtschaft und Rationelle Energieanwendung, 2020)
Miller, Michael; Hufendiek, Kai (Prof. Dr.-Ing.)

Item Open Access
A framework for similarity recognition of CAD models in respect to PLM optimization (2022)
Zehtaban, Leila; Roller, Dieter (Univ.-Prof. Hon.-Prof. Dr.)

Item Open Access
Automatische Applikation modellbasierter Diesel-Luftsystem-Funktionen in Motorsteuergeräten (2020)
Xie, Yijiang; Kistner, Arnold (Prof. Dr.-Ing.)
The continuous development of diesel engines to meet legal and functional requirements, e.g. reducing emissions and fuel consumption while maintaining drivability, has led to a significant increase in the number of sensors and actuators required for the engine. For the diesel air system, this means introducing a turbocharger, a system for exhaust gas recirculation (EGR), an exhaust gas aftertreatment system, a variable valve control, and more. In order to control such an increasingly complex system, ECU functions are developed by means of a model-based approach. The success of a model-based development methodology depends on precise and efficient modeling of the relevant engine behavior. Because of the limited computing power of an ECU, a combination of physical models and so-called calibration parameters is usually preferred for engine modeling. The calibration parameters can be scalar or one- or two-dimensional empirical models and usually have to be determined (calibrated) by experiments on an engine test bench. Typical examples of such calibration parameters are lookup tables for modeling the cylinder charge (volumetric efficiency) and the effective area of the EGR valve. In this thesis, a procedure is proposed which calibrates the ECU functions for stationary relationships, e.g. in the diesel air system, automatically and with as little measurement effort as possible in terms of the number of measurement points. The algorithm runs within the framework of sequential experimental design, in which Gaussian process models with non-stationary covariance functions are used to approximate the relations of interest. For adaptive experimental design, an active sampling strategy is developed based on the concept of mutual information; it determines optimal system inputs (engine speed, fuel quantity, air actuators, etc.) and the resulting operating points with respect to input-space coverage, the inhomogeneous properties of the relations, the uncertainty of the estimated calibration parameters, and the feasibility of the operating points. The method predicts the stationary engine behavior resulting from the selected system inputs by means of the physical structure of the air system and the data-based models of the calibration parameters. On this basis, the uncertainties of the calibration parameters are estimated using extended Kalman filters. The feasibility of each operating point is checked by comparing the predicted system behavior with the engine limits. For validation, the developed algorithm was implemented on an engine test bench to calibrate the air system of a diesel engine equipped with high- and low-pressure EGR, a variable geometry turbocharger and variable valve timing. With the presented approach, approximately 130 measurement points suffice to obtain a calibration quality comparable to that achieved by conventional methods with more than 800 measurement points.
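For readers unfamiliar with sequential experimental design, the following minimal Python sketch shows the basic loop step the abstract describes: fit a probabilistic model to the points measured so far and propose the candidate input where the model is most uncertain. It is a simplified stand-in, not the thesis' method: it uses a stationary RBF kernel and plain predictive variance instead of non-stationary covariance functions and a mutual-information criterion, and it omits the feasibility check; all function and variable names are hypothetical.

    # Simplified active-sampling step: propose the most uncertain candidate.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def next_operating_point(X_measured, y_measured, X_candidates):
        """Fit a GP to the measured data, return the candidate with highest std."""
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        gp.fit(X_measured, y_measured)
        _, std = gp.predict(X_candidates, return_std=True)
        return X_candidates[np.argmax(std)]   # measure here next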
Item Open Access
Rigorous compilation for near-term quantum computers (2024)
Brandhofer, Sebastian; Polian, Ilia (Prof.)
Quantum computing promises an exponential speedup for computational problems in materials science, cryptography and drug design that are infeasible for traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations, which compilation methods targeting near-term quantum computers must incorporate in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements, as they explore the solution space exactly and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods have limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are included in rigorous compilation methods to address each aspect of the imposed requirements, i.e. the number of qubits, the connectivity of qubits, the duration and the incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. Experimental results demonstrate that the developed methods extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, as well as by reducing the duration and incurred errors of the performed quantum computations. Furthermore, the developed rigorous compilation methods extend to quantum circuit partitioning, qubit reuse and translation between quantum computations generated for distinct quantum technologies. Specifically, a rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime yielded a reduction of up to 5x in the required number of qubits and of up to 33% in the result error. The developed quantum circuit partitioning method optimally distributes a quantum computation across separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
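As a toy illustration of the qubit-reuse mechanism such compilation exploits (not the thesis' rigorous method, which decides reuse optimally), the sketch below uses mid-circuit measurement and reset in Qiskit so that two physical qubits execute a computation with three logical results; the circuit itself is a made-up example.

    # Toy example: once logical qubit 0 has been measured, its physical qubit
    # is reset and reused to host a third logical qubit.
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 3)   # 2 physical qubits, 3 classical result bits
    qc.h(0)
    qc.cx(0, 1)
    qc.measure(0, 0)            # logical qubit 0 finishes early ...
    qc.reset(0)                 # ... freeing its physical qubit
    qc.h(0)                     # physical qubit 0 now hosts a new logical qubit
    qc.cx(0, 1)
    qc.measure(0, 1)
    qc.measure(1, 2)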
Item Open Access
Die Kornstruktur und der Heißrisswiderstand von Laserstrahlschweißnähten in Aluminiumlegierungen (München : utzverlag, 2020)
Hagenlocher, Christian; Graf, Thomas (Prof. Dr. phil. nat.)
The grain structure of a weld seam influences its resistance to the formation of centerline hot cracks. In this work, the overarching relationship between welding parameters, grain structure and hot-crack resistance in laser beam welding was described by analytical equations, and the resulting model was validated experimentally.

Item Open Access
Multi-objective automatic calibration of hydrodynamic models - development of the concept and an application in the Mekong Delta (2011)
Nguyen, Viet-Dung; Bárdossy, András (Prof. Dr. rer. nat. Dr.-Ing. habil.)
Automatic, multi-objective calibration of hydrodynamic models is still underdeveloped, in particular in comparison with other fields such as hydrological modeling. This has several reasons: a lack of appropriate data, high computational demands, and the absence of a suitable framework. These aspects are aggravated in large-scale applications. Recent developments, however, ease both the data and the computing constraints. Remote sensing, especially radar-based techniques, provides highly valuable information on flood extents and, where high-precision Digital Elevation Models (DEMs) are available, also on spatially distributed inundation depths. With regard to computation, parallelization techniques bring significant performance gains. In the presented study, we build on these developments by calibrating a large-scale one-dimensional hydrodynamic model of the whole Mekong Delta downstream of Kratie in Cambodia: we combine in-situ data from a network of river gauging stations, i.e. data with high temporal but low spatial resolution, with a series of inundation maps derived from ENVISAT Advanced Synthetic Aperture Radar (ASAR) satellite images, i.e. data with low temporal but high spatial resolution, in a multi-objective automatic calibration process. It is shown that this kind of calibration of hydrodynamic models is possible, even in an area as large and complex as the Mekong Delta. Furthermore, the calibration process reveals deficiencies in the model structure, i.e. the representation of the dike system in Vietnam, which would be difficult to detect by a standard manual calibration procedure. In the last part of the dissertation, the established hydrodynamic model is combined with flood frequency analysis in order to assess the flood hazard in the Mekong Delta. It is now widely accepted that climate change can alter flood hazard; starting from this assumption, the study develops a novel approach for flood hazard mapping in the Mekong Delta. Typically, flood frequency analysis assumes stationarity and is limited to extreme value statistics of flood peaks. Both the stationarity assumption and the limitation to univariate frequency analysis are doubtful in the case of the Mekong Delta, because of changes in hydrologic variability and because of the large relevance of flood volume for the impact of flooding. Thus, besides the traditional approach to flood frequency analysis, this study takes non-stationarity and bivariate behavior into account. Copula-based bivariate analysis is used to model the dependence between maximum discharge and flood volume and to generate pairs of both variables, coupling their marginal distributions to obtain a bivariate distribution. In addition, based on cluster analysis, groups of characteristic hydrographs are identified and synthetic flood hydrographs are generated. These hydrographs are the input for the calibrated large-scale hydrodynamic model of the Mekong Delta, resulting in flood hazard maps for the whole Mekong Delta. To account for uncertainty within the hazard assessment, a Monte Carlo framework is applied, yielding probabilistic hazard maps.
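The copula step described above can be illustrated with a minimal Python sketch: correlated standard normal draws are mapped through the normal CDF to uniform margins (a Gaussian copula) and then through assumed Gumbel marginals for peak discharge and flood volume. The copula family, the marginal distributions and all parameter values here are illustrative assumptions, not the values fitted in the dissertation.

    # Gaussian-copula sampling of dependent (peak discharge, flood volume) pairs.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    rho = 0.8   # assumed dependence between peak and volume (illustrative)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=1000)
    u = stats.norm.cdf(z)   # transform to uniform margins: the Gaussian copula
    peak = stats.gumbel_r.ppf(u[:, 0], loc=40000.0, scale=5000.0)   # m^3/s (illustrative)
    volume = stats.gumbel_r.ppf(u[:, 1], loc=300.0, scale=60.0)     # km^3 (illustrative)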
Item Open Access
Test planning for low-power built-in self test (2014)
Zoellin, Christian G.; Wunderlich, Hans-Joachim (Prof. Dr. rer. nat. habil.)
Power consumption has become one of the most important issues in the design of integrated circuits. The power consumed during the manufacturing or in-system test of a circuit can significantly exceed the power consumed during functional operation. The excess power can lead to false test fails or result in permanent degradation or destruction of the device under test; both effects can significantly increase the cost of manufacturing integrated circuits. This work targets power consumption during Built-In Self-Test (BIST). BIST is a Design-for-Test (DfT) technique that adds circuitry to a design so that it can be tested at speed with very little external stimulus. Test planning is the process of computing configurations of the BIST-based tests that optimize power consumption within the constraints of test time and fault coverage. In this work, a test planning approach is presented that targets the Self-Test Using Multiple-input signature register and Parallel Shift-register sequence generator (STUMPS) DfT architecture. For this purpose, the STUMPS architecture is extended by clock gating in order to leverage the benefits of test planning. The clock of every chain of scan flip-flops can be disabled independently, reducing to zero the switching activity of the flip-flops and their clock distribution, and also reducing the switching activity of the downstream logic. Further improvements are obtained by clustering the flip-flops of the circuit appropriately. The test planning problem is mapped to a set covering problem. The constraints for the set covering are extracted from fault simulation and the circuit structure such that any valid cover tests every targeted fault at least once. Divide-and-conquer is employed to reduce the computational complexity of optimizing against a power consumption metric. The approach can be combined with any fault model; in this work, stuck-at and transition faults are considered. The approach effectively reduces test power without increasing the test time or reducing the fault coverage. It has proven effective on academic benchmark circuits, several industrial benchmarks and the Synergistic Processing Element (SPE) of the Cell/B.E.™ Processor (Riley et al., 2005). Hardware experiments based on the manufacturing BIST of the Cell/B.E.™ Processor have shown the viability of the approach for industrial, high-volume, high-end designs. In order to improve the fault coverage for delay faults, high-frequency circuits are sometimes tested with complex clock sequences that generate tests with three or more at-speed cycles (rather than the two of traditional at-speed testing). To support such complex clock sequences, the test planning presented here is extended by a circuit-graph-based approach for determining equivalent combinational circuits for the sequential logic. In addition, this work proposes a method based on dynamic frequency scaling of the shift clock that utilizes a given power envelope to its full extent. In this way, the test time can be reduced significantly, in particular when high test coverage is targeted.
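The mapping to set covering can be made concrete with the classic greedy heuristic below: each BIST session covers the set of faults it detects, and sessions are chosen until all targeted faults are covered. The dissertation solves the problem with divide-and-conquer against a power metric rather than this simple heuristic; the session names, fault sets and power weights are hypothetical.

    # Greedy weighted set cover: repeatedly pick the session with the best
    # newly-covered-faults-per-unit-power ratio until every fault is covered.
    def plan_tests(faults, detects, power):
        """detects: session -> set of detected faults; power: session -> cost."""
        uncovered, plan = set(faults), []
        while uncovered:
            best = max(detects, key=lambda s: len(detects[s] & uncovered) / power[s])
            if not detects[best] & uncovered:
                raise ValueError("some targeted faults are detected by no session")
            plan.append(best)
            uncovered -= detects[best]
        return plan

    example = plan_tests(
        faults={"f1", "f2", "f3"},
        detects={"s1": {"f1", "f2"}, "s2": {"f2", "f3"}, "s3": {"f3"}},
        power={"s1": 2.0, "s2": 1.0, "s3": 0.5},
    )   # -> ["s2", "s1"]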
Item Open Access
Konzepte zur Übertragbarkeit von Prozessparametern des Rührreibschweißens (2016)
Noveva, Radostina; Roos, Eberhard (Prof. Dr.-Ing. habil.)
The use of aluminium alloys has become established as a key element of numerous lightweight design concepts. An important aspect of the industrial application of aluminium materials is their weldability. Friction stir welding offers a simple, environmentally friendly and economical method for joining such materials. However, integrating this process into the manufacturing process chains of small and medium-sized enterprises poses a number of challenges, including insufficient information about the boundary conditions of the welding process and the limited transferability of process parameters between different applications. In this work, welding parameter studies were carried out on three aluminium alloys (EN AW-5454-O, EN AW-5754-O and EN AW-6016). In a series of experiments performed on a friction stir welding machine and two machine tools, the joint-specific process windows for each material and sheet-thickness configuration were determined. The process windows comprise different combinations of the main welding parameters: rotational speed, feed rate of the welding tool, and contact force Fz on the semi-finished parts to be welded. The suitability of the parameter sets for the given welding task was assessed by comparing the mechanical and microstructural properties of the joints produced. For each test material, two groups of parameter sets were deliberately selected. With the first group, direct transferability of the joints' good strength and deformation properties across the different machines could not be guaranteed. The second group comprised parameter sets that produced repeatably good joint quality regardless of the welding machine used. Repetition and further thermographic analysis of these parameter sets showed that deviations in weld quality occur at relatively low heat input into the joining zone, i.e. the differing stiffnesses of the test machines cause a measurable reduction in joint quality only under unfavorable process boundary conditions. Furthermore, it was demonstrated that the influence of machine stiffness and positioning accuracy increases when joining thin semi-finished parts and high-strength materials. The findings served as the basis for the development of an analytical model that describes the relationships between the contact forces occurring during friction stir welding and the welding system, consisting of the semi-finished parts to be joined and the corresponding welding setup (friction stir welding machine and/or machine tool). The model enables a practical and simple determination of process forces for different applications, taking into account machine stiffness, the dimensions of the welding tools, and the temperature-dependent material properties of the semi-finished parts. Linking these influencing variables significantly improves existing approaches to the transferability of friction stir welding parameters.
Item Open Access
Merging spacecraft software development and system tests : an agile verification approach (2021)
Bucher, Nico; Eickhoff, Jens (Prof. Dr.-Ing.)
In this dissertation, the author describes an agile verification approach for spacecraft onboard software that allows software development to be guided by system tests performed with the actual spacecraft. The approach was applied to the Flying Laptop small satellite, built and operated by the Institute of Space Systems (IRS) at the University of Stuttgart, Germany. This work contains examples of practical experience gathered during the system testing campaign of Flying Laptop.

Item Open Access
Beitrag zur Untersuchung von hochfesten synthetischen Faserseilen unter hochdynamischer Beanspruchung (Stuttgart : Institut für Fördertechnik und Logistik (IFT) der Universität Stuttgart, 2017)
Wehr, Martin; Wehking, Karl-Heinz (Prof. Dr.-Ing. Dr. h.c.)