05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Item Open Access: 3D digital analysis of mammographic composition (2009). Lampasona, Constanza; Roller, Dieter (Prof. Dr.)
Breast cancer is the most frequent cancer among women. Besides clinical examination and self-examination, breast imaging plays a very important role in detecting breast cancer before tumors become clinically apparent. Mammography, a radiograph of the breast, is the most widespread test for the early detection of breast cancer. The images obtained through mammography are known as mammograms and visualize the breast structure. The female breast consists of fibroglandular and fatty tissue. Increased mammographic breast density, i.e., an increased proportion of fibroglandular tissue, is a factor that influences the risk of developing breast cancer. Computer-based image analysis could help to detect such abnormal changes in breast tissue from digital mammograms. Full-field digital mammograms are acquired using an electronic detector and are stored in the DICOM standard file format. In this thesis, we first describe the image acquisition process, the DICOM file format, and conventional and digital mammography, together with the advantages of the latter for computer-based image processing. Existing image processing methods and their application to mammograms were also studied. These methods include the measurement of area-based and volumetric mammographic breast density, the segmentation and registration of mammograms, and methods that could be applied to visualize breast density. Based on this knowledge of the acquisition process, the DICOM file format and the earlier methods, computer-based image analysis methods were developed during this research project. All methods were implemented and tested in a software prototype, whose software architecture is also presented in this thesis. The main contribution of this work is a new method for the measurement of volumetric breast density. This measurement consists of interpreting the gray levels of pixels in full-field digital mammograms to determine which combinations of tissues they represent. To allow many images to be compared, the images are standardized and registered after the measurements have been performed. From the breast composition and its changes, conclusions can be drawn about a suspected cancer or an elevated breast cancer risk. Additionally, some image processing methods were developed to prepare the images for the analysis. These methods segment the mammogram into background, pectoral muscle, and breast tissue. The information obtained from the analysis of the mammograms could also be used for the detection of microcalcifications and of the skin line or breast border. The mammograms are then shown graphically using different two- and three-dimensional views. The last chapters present the results of the computer-based image analysis of full-field digital mammograms using the software prototype, conclusions, and future work.

Item Open Access: 3D printing-as-a-service for collaborative engineering (2017). Baumann, Felix W.; Roller, Dieter (Univ.-Prof. Hon.-Prof. Dr.)
3D printing and Additive Manufacturing (AM) are utilised as umbrella terms to denote a variety of technologies for manufacturing or creating a physical object based on a digital model. Commonly, these technologies create the objects by adding, fusing or melting a raw material in a layer-wise fashion.
Apart from the 3D printer itself, no specialised tools are required to create almost any shape or form imaginable and designable. The possibilities of these technologies are plentiful and include the ability to manufacture almost any object rapidly, locally and cost-efficiently, without wasted resources and material. Objects can be created in specific forms that fulfil their function perfectly, without consideration of the assembly process. To further advance the availability and applicability of 3D printing, this thesis identifies the problems that currently exist and attempts to solve them. During the 3D printing process, data (i.e., files) must be converted from their original representation, e.g., a CAD file, to machine instructions for a specific 3D printer. During this process, information is lost and other information is added; traceability is lacking in 3D printing. The actual 3D printing can require a long period of time to complete, during which errors can occur. In 3D printing, these errors are often neither recoverable nor reversible, which results in wasted material and time. Given the lack of closed-loop control systems for 3D printers, careful planning and preparation are required to avoid these costly misprints. 3D printers are usually located remotely from their users, due to health and safety considerations, special placement requirements, or simply for convenience. Remotely placed equipment is impractical to monitor in person; however, such monitoring is essential, especially considering the proneness of 3D printing to errors and the implications described previously. Utilisation of 3D printers is an issue, especially with expensive 3D printers. As there are a number of differing 3D printing technologies available, having the required 3D printer at hand might be problematic. 3D printers are equipped with a variety of interfaces, depending on the make and model. These differing interfaces, in both hardware and software, hinder the integration of different 3D printers into consistent systems. There exists no proper and complete ontology, resource description schema, or mechanism that covers all the different 3D printing technologies. Such a resource description mechanism is essential for automated scheduling in services or systems. In 3D printing services, the selection and matching of appropriate and suitable 3D printers is essential, as not all 3D printing technologies can process all materials or create certain object features, such as thin walls or hollow forms. The need for companies to sell digital models for AM will increase in scenarios where replacement or customised parts are 3D printed by consumers at home or in local manufacturing centres. Furthermore, requirements to safeguard these digital models will increase to avoid a repetition of the problems of the music industry, e.g., Napster. Replication and ‘theft’ of these models are uncontrollable in the current situation. In a service-oriented deployment, or in scenarios where utilisation is high, estimates of the 3D printing time must be available. Common 3D printing time estimations are inaccurate, which hinders the application of scheduling. There is no consistent and comprehensive understanding of the complexity of an object, especially in the domain of AM. This understanding is required both to support the design of objects for AM and to match appropriate manufacturing resources to certain objects.
Quality in AM and FDM has been researched incompletely. Quality in general increases with the maturity of the technology; however, research on the quality achievable with consumer-grade 3D printers is lacking. Furthermore, cost-sensitive measurement methods for quality assessment leave room for extension. This thesis presents the structured design and implementation of a 3D printing service with associated contributions that provide solutions to particular problems present in the AM domain. The 3D printing service is the overarching component of this thesis and provides the platform for the other contributions, with the intention of establishing an online, cloud-based 3D printing service for use in end-user and professional settings with a focus on collaboration and cooperation.

Item Open Access: a-Si:H/c-Si heterojunction front- and back contacts for silicon solar cells with p-type base (2010). Rostan, Philipp Johannes; Werner, Jürgen H. (Prof. Dr. rer. nat. habil.)
This thesis reports on low-temperature amorphous silicon back and front contacts for high-efficiency crystalline silicon solar cells with a p-type base. The back contact uses a sequence of intrinsic amorphous (i-a-Si:H) and boron-doped microcrystalline (p-μc-Si:H) silicon layers fabricated by Plasma Enhanced Chemical Vapor Deposition (PECVD) and a magnetron-sputtered ZnO:Al layer. The back contact is finished by evaporating Al onto the ZnO:Al and is altogether prepared at a maximum temperature of 220 °C. Analysis of the electronic transport of mobile charge carriers at the back contact shows that the two high-efficiency requirements, low back contact series resistance and high-quality c-Si surface passivation, strongly contradict each other and are thus difficult to achieve at the same time. The preparation of resistance and effective-lifetime samples allows both requirements to be investigated independently. Analysis of the majority charge carrier transport on complete Al/ZnO:Al/a-Si:H/c-Si back contact structures yields the resistive properties. Measurements of the effective minority carrier lifetime on a-Si:H coated wafers determine the back contact surface passivation quality. Both high-efficiency solar cell requirements are analyzed together in complete photovoltaic devices, where the back contact series resistance mainly affects the fill factor and the back contact passivation quality mainly affects the open circuit voltage. The best cell equipped with a diffused emitter with random texture and a full-area a-Si:H/c-Si back contact has an independently confirmed efficiency η = 21.0 % with an open circuit voltage Voc = 681 mV and a fill factor FF = 78.7 % on an area of 1 cm². An alternative concept that uses a simplified a-Si:H layer sequence combined with Al point contacts yields a confirmed efficiency η = 19.3 % with an open circuit voltage Voc = 655 mV and a fill factor FF = 79.5 % on an area of 2 cm². Analysis of the internal quantum efficiency shows that both types of back contacts lead to effective diffusion lengths in excess of 600 μm. An extended fill factor analysis shows that the fill factor limitations of the full-area a-Si:H/c-Si contacts result from non-ideal diode behavior, ascribed to the injection dependence of the heterojunction interface recombination velocity. Analysis of the external quantum efficiency under back-side illumination with different bias light intensities yields the effective surface recombination Seff(Φ) as a function of the illumination intensity Φ.
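For orientation, the cell parameters quoted in this abstract are tied together by the standard single-junction relation below. The short-circuit current density Jsc and the standard test conditions (Pin = 100 mW/cm²) are not stated in the abstract; they are added here only as a hedged plausibility check for the 21.0 % cell.

\[
  \eta \;=\; \frac{V_{oc}\, J_{sc}\, FF}{P_{in}}
  \qquad\Longrightarrow\qquad
  J_{sc} \;\approx\; \frac{\eta\, P_{in}}{V_{oc}\, FF}
  \;=\; \frac{0.210 \times 100\ \mathrm{mW/cm^2}}{0.681\ \mathrm{V} \times 0.787}
  \;\approx\; 39\ \mathrm{mA/cm^2}
\]

A value of roughly 39 mA/cm² is in the range expected for textured crystalline silicon cells, which is consistent with the reported numbers.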
The front contact (emitter) uses a sequence of intrinsic and phosphorus-doped amorphous silicon layers together with a ZnO:Al or a SnO2:In layer and an Al front contact grid. The emitter is prepared at a maximum temperature of 220 °C. Measurements of the minority carrier lifetime on symmetrically i/n-a-Si:H coated wafers assess the emitter passivation quality. The best solar cells that use a thermal oxide back-side passivation with Al point contacts and flat a-Si:H emitters have open circuit voltages up to 683 mV and efficiencies up to 17.4 %. The efficiency of such devices is limited by a low short circuit current due to the flat front side. Using the same back contact structure with random-pyramid-textured wafer front sides and a-Si:H emitters yields open circuit voltages up to 660 mV and efficiencies up to 18.5 %, so far limited by a relatively low fill factor FF ≤ 74.3 %. Analysis of the external quantum efficiency underlines the excellent surface passivation properties of the amorphous emitter. Combining both amorphous front and back contacts yields p-type heterojunction solar cells fabricated completely at temperatures below 220 °C. The best devices reach an open circuit voltage Voc = 678 mV and an efficiency η = 18.1 % with random-textured wafers, limited by low fill factors FF ∼ 75 %. Besides the cell fabrication and characterization, this thesis reveals that the inherent a-Si:H/c-Si band offset distribution, with a low conduction band offset and a large valence band offset, is disadvantageous for p-c-Si heterojunction solar cells compared to their n-c-Si counterparts. A calculation of the saturation current densities of the cell's emitter, bulk and back contact demonstrates that the n-a-Si:H/p-c-Si emitter suffers from a low built-in potential. Modelling of the back contact based on the charge carrier transport equations shows that the insertion of an i-a-Si:H layer with a thickness d ≥ 3 nm (which is mandatory for a high surface passivation quality) leads to a series resistance that is critical for use in a solar cell. The model mainly ascribes the high back contact resistance to the large valence band offset at the heterojunction.

Item Open Access: Acceleration techniques for numerical flow visualization (2006). Stegmaier, Simon; Ertl, Thomas (Prof. Dr.)
This thesis addresses the problem of making computer-aided flow visualization more efficient and more effective: more efficient because several new algorithms are presented for accelerating the visualization itself; more effective because accelerated visualization yields more productive work during data analysis. Whether there is a need for acceleration techniques depends on several parameters. Obviously, there is a strong dependence on the available computing hardware: what is reasonable on one hardware platform might be unbearable on another. This straightforwardly leads to the idea of switching to another (remote) visualization platform while keeping the researcher's workspace untouched. Alternatively, more efficient use of local hardware resources can be made, a direction followed in this thesis by balancing the workload between the (programmable) graphics hardware and the central processing unit. Instead of exploiting parallel processing, reduced accuracy can be traded for improved interactivity. In this work, this trade-off is made by converting the grid underlying the data to a representation that can be handled more efficiently.
In the worst case, neither hardware approaches nor accuracy reduction sufficiently improve the data analysis. Consequently, data reduction must be employed to keep up with human cognition capabilities and limited graphics processing resources. This issue is addressed by flow feature extraction, which aims at presenting a highly compact representation of the data. This work thus presents a unique multi-level approach for accelerating flow visualization, considering hardware resources, accuracy requirements, and cognitive issues. Due to the generality of the selected acceleration techniques presented in this thesis, some results also have an impact on other areas of scientific visualization. Furthermore, due to the layered approach addressing the acceleration on multiple abstraction levels, the presented techniques can be used stand-alone as well as in combination to yield a highly flexible toolbox that can be fine-tuned to the respective environment.

Item Open Access: Active electronic loads for radiometric calibration (2017). Weissbrodt, Ernst; Kallfass, Ingmar (Prof. Dr.)
Although radiometer systems are widely applied in very different fields, they all have one important requirement in common: they require a thorough radiometric calibration. Various conventional calibration references are well established, but their bulkiness, high power consumption, and complexity limit the expanding fields of application. As novel industrial applications such as passive millimeter-wave imaging emerge, the requirements for calibration references have increased drastically. In scientific fields like radio astronomy, cosmology, or environmental monitoring, too, modern remote sensing radiometers no longer rely only on conventional references. In this work, millimeter-wave monolithic integrated circuits (MMICs) based on metamorphic high electron mobility transistors (mHEMT) were designed to be used as active electronic loads for radiometric calibration. These novel references not only have the outstanding property that they can be integrated directly at chip level into the radiometer front-end, but also that they can exhibit cold as well as hot reference noise temperatures. Since this is achieved without any physical cooling or heating, the power consumption is notably reduced. By monolithic integration of field effect transistor (FET) switches, these multiple references can be routed internally to the receiver input without any mechanical wear. As a result, laborious external references can be omitted and the repetition rate of the calibration procedure increased, which results in a higher radiometric accuracy and allows a more compact and cost-effective design of modern radiometer systems. This work presents the first radiometric calibration front-end that allows internal switching between active electronic cold loads, active electronic hot loads, and a passive ambient load. All components are integrated on a single MMIC, and a patent was granted for this innovation. To predict the achievable noise temperatures of active cold loads (ACLs), different simulation approaches were previously published. This work evaluates and adapts these existing approaches to design and manufacture several W-band loads. However, the required design flow was found to be very time-consuming, because multiple iterations are necessary to successively design and optimize the input and output matching networks and finally achieve the desired low noise temperature.
Therefore, a novel simulation approach is introduced that makes efficient use of modern optimization algorithms and of the very accurate model library of the mHEMT technology and the passive structures. With this novel simulation method, the first active hot loads (AHLs) were designed, as well as state-of-the-art ACLs up to 140 GHz. However, the characterization of low-noise one-port devices is particularly challenging, especially at such high frequencies. Hence, a substantial part of this work is to investigate the reliability of different noise measurement setups and the repeatability of noise temperature results. Dedicated setups in W- and D-band are used to characterize all manufactured active loads, and selected results are cross-checked by measuring the same circuits with independently designed measurement systems of other research facilities. The discrepant results are discussed, concluding that the high variations in measured one-port noise temperature do not allow one to rely on a single measurement setup. At the same time, this thorough investigation and comparison permits an accuracy range to be established within which the results of the manufactured active electronic loads are reliable, whereas other previously published ACLs were typically measured with only one setup.

Item Open Access: Adaptation of point- and line-based visualization (2024). Rodrigues, Nils; Weiskopf, Daniel (Prof. Dr.)
Visualization plays an important role in the lives of various heterogeneous parts of society: from a voter looking for the latest results of an election, to statisticians examining a distribution, to analysts trying to make sense of multidimensional data sets. This thesis adapts existing point- and line-based visualization methods to improve knowledge gain. The included contributions address three research questions: How to scale unit visualization for 1D data? How to improve navigation between 2D visualizations of multivariate data? How to combine the advantages of multiple 2D views in a single static visualization for multivariate data? The first part of the thesis focuses on unit visualization of 1D data with dot plots. Compared to the previous state of the art, the developed visualizations fit a wider range of data and expand the number of potential users by requiring less prior knowledge for interpretation. They adapt the definition of dot plots to scale nonlinearly with sample count, accurately show value frequencies in high-dynamic-range data, reduce positional error in displayed data points, and enhance the perception of subtle nuances in the data while avoiding moiré effects. We provide evidence for the claimed improvements through evaluation with computational metrics and a crowdsourced user study. The second part of the dissertation focuses on visualizing multivariate data with scatter plots and scatter plot matrices. First, we evaluate six animated transitions between plots of different 2D subspaces with respect to task performance for tracking individual points and interactions between clusters. The results of a quantitative study with 170 participants show that orthographic rotation animation performs best and should be adopted more widely. Next, we develop a novel concept for recommending views in scatter plot matrices. It provides user- and task-specific suggestions by focusing on the data of interest to the viewer. Together, animation and recommendation adapt scatter plots to improve the user's ability to analyze more complex data effectively.
In the third part, we develop a new visualization technique that extends parallel coordinate plots to provide a static alternative to scatter plots with animated transitions. The approach does not require interaction to display data flow between 2D subspace clusters. A custom density-based rendering technique enables the visibility of individual lines and structures within highly overdrawn regions. Our technique can communicate fuzzy clustering results through binning and color mapping. Finally, we discuss the presented contributions with respect to the original main questions and show possible directions for future research.

Item Open Access: Adaptive algorithms for 3D reconstruction and motion estimation (2019). Maurer, Daniel; Bruhn, Andrés (Prof. Dr.)

Item Open Access: Adaptive error control for stratospheric long-distance optical links (2024). Parthasarathy, Swaminathan; Kirstädter, Andreas (Prof. Dr.-Ing.)
Free-space optical (FSO) communication plays a crucial role in aerospace technology, utilizing lasers to establish high-speed, wireless connections over long distances. FSO surpasses conventional RF wireless technology in various aspects and supports high-data-rate connectivity for services such as Internet access, data transfer, voice communication, and image transfer. High-Altitude Platforms (HAPs) have emerged as ideal hosts for FSO communication networks, offering ultra-high data rates for applications like high-speed Internet, video conferencing, telemedicine, smart cities, and autonomous driving. FSO via HAPs ensures minimal latency, making it suitable for real-time tasks like remote surgery and autonomous vehicle control. The swift, long-distance communication links with low delays make FSO-equipped HAPs ideal for RF-congested areas, providing cost-effective solutions in remote regions and contributing to environmental monitoring. This thesis explores the use of adaptive code-rate Hybrid Automatic Repeat Request (HARQ) methods and channel state information (CSI) to improve the transmission efficiency of FSO links between HAPs. The study examines channel impairments such as atmospheric turbulence and static pointing errors, focusing on the weak fluctuation regime of atmospheric turbulence. It explores the reciprocal behavior of bidirectional FSO channels to improve performance, providing evidence of channel reciprocity. The research proposes using HARQ, an adaptive Reed-Solomon (RS) code-rate technique, and different CSI types to address these impairments. Simulations of various scenarios are used to evaluate these methods, providing insight into the efficiency of HARQ protocols over inter-HAP FSO links, the importance of different types of CSI for adaptive-rate HARQ, and possible ways to improve system efficiency. The thesis examines the channel model for inter-HAP FSO links in detail, taking atmospheric conditions and static pointing errors into account. The channel is modeled as a lognormal fading channel under a weak fluctuation regime. The principle of channel reciprocity and the measures used to quantify it are discussed, providing a foundational understanding for the subsequent investigations. Forward Error Correction (FEC) schemes, with a specific emphasis on the Reed-Solomon (RS) scheme, and various Automatic Repeat reQuest (ARQ) schemes are thoroughly examined.
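As a rough illustration of the adaptive code-rate idea described in this abstract, the following sketch picks a Reed-Solomon code rate from an estimated symbol error probability, as it might be derived from CSI. The interface, the RS(255, k) parameterization, and the safety margin are illustrative assumptions for this sketch, not the thesis's implementation.

```python
# Illustrative sketch: choosing a Reed-Solomon code rate from a channel estimate.
# The CSI interface and the safety margin are assumptions for illustration only.
import math

def select_rs_code_rate(symbol_error_prob: float, n: int = 255, margin: float = 2.0):
    """Pick RS(n, k) so that the parity symbols n - k = 2t can correct the
    expected number of symbol errors per codeword, scaled by a safety margin."""
    expected_errors = symbol_error_prob * n          # mean corrupted symbols per codeword
    t = math.ceil(margin * expected_errors)          # targeted correction capability
    k = max(1, n - 2 * t)                            # RS(n, k) corrects up to t symbol errors
    return n, k

# A worse channel estimate (higher symbol error probability) yields a lower code rate.
for p in (0.001, 0.01, 0.05):
    n, k = select_rs_code_rate(p)
    print(f"p={p:.3f} -> RS({n},{k}), code rate {k/n:.2f}")
```

In an actual HARQ loop, the estimate would be refreshed per transmission round from the available CSI (perfect, reciprocal, delayed, or fixed mean), and retransmissions would fall back to Selective Repeat ARQ when decoding fails.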
A meticulous comparison of different ARQ schemes highlights that Selective Repeat ARQ (SR-ARQ) is the most efficient for high-error-rate channels, making it the preferred choice for inter-HAP FSO channels. Conversely, Stop-and-Wait ARQ (SW-ARQ) and Go-Back-N ARQ (GBN-ARQ) are found to be less suitable for these channels. An innovative approach is introduced that leverages various types of Channel State Information (CSI) to adjust the Reed-Solomon Forward Error Correction (FEC) code rate. Four types of CSI are employed: perfect CSI (P-CSI), reciprocal CSI (R-CSI), delayed CSI (D-CSI), and fixed mean CSI (F-CSI). The adaptation of the Reed-Solomon FEC code rate, aligned with Selective Repeat ARQ, is explored, and the optimal power selection is identified through rigorous analysis. The thesis presents simulation models implemented in OMNET++ and describes the inter-HAP channel and the event-based Selective Repeat HARQ model. The study demonstrates reciprocity in the longest recorded ground-to-ground bidirectional FSO link, holding promise for mitigating signal scintillation caused by atmospheric turbulence. It evaluates the performance of different ARQ protocols and adaptive Hybrid Automatic Repeat Request (HARQ) schemes in inter-HAP FSO communication systems. The results show how channel state information, atmospheric turbulence, and pointing errors affect system performance, and they suggest ways to improve system efficiency, such as CSI prediction and soft combining. These findings offer valuable insights for the design and optimization of ARQ and HARQ schemes in inter-HAP FSO communication systems and suggest promising avenues for future research.

Item Open Access: Adaptive grid implementation for parallel continuum mechanics methods in particle simulations (2019). Lahnert, Michael; Mehl, Miriam (Prof. Dr.)

Item Open Access: Adaptive human-robot policy blending for shared control teleoperation (2024). Oh, Yoojin; Toussaint, Marc (Prof. Dr. rer. nat.)

Item Open Access: Adaptive Internetanbindung von Feldbussystemen (2005). Eberle, Stephan; Göhner, Peter (Prof. Dr.-Ing. Dr. h. c.)
Fieldbuses are specialized networks for interconnecting automation devices such as sensors, actuators, and controllers. Recently, there has been a growing search for ways to use the Internet to access, from remote locations, information that is processed in fieldbus-networked systems. The availability of this information gives an enormous boost to teleservices such as remote diagnosis and to the vertical integration of automation plants with the commercial business units of companies, promising considerable efficiency gains and cost savings. Nevertheless, the lack of interoperability between fieldbus systems and the software tools and applications accessing them remains an obstacle that is difficult to overcome. There is broad agreement that standardized application interfaces and data exchange formats are necessary to eliminate these difficulties. Opinions on how they should be designed, however, diverge widely, and despite countless efforts no agreement is in sight. The adaptive Internet connection of fieldbus systems approaches the question of the interoperability of fieldbus systems and tools from a completely new direction.
Users should not be forced to adapt to the particularities of the fieldbus system. Instead, the technology, that is, the fieldbus system, should be enabled to adapt flexibly to the user's particular tool infrastructure. This idea is realized with the help of transformation rules that are made available at a known location on the Internet. As soon as a fieldbus tool and a fieldbus system establish a connection, the messages exchanged between them are translated using the appropriate transformation rule, so that they can be understood and properly processed by the respective other side. In this way, interoperability between fieldbus systems and tools can be established for the first time even when no uniform form of information exchange can be agreed upon and both sides are incompatible in the conventional sense.

Item Open Access: Adaptive und wandlungsfähige IT-Architektur für Produktionsunternehmen (2014). Silcher, Stefan; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
The challenges that manufacturing companies have to face today are continuously increasing. They include, in particular, globalization, growing complexity, and today's prevailing turbulent environment [Jovane 2009]. Due to globalization, every company must face competition and the diverse challenges of different markets. The increasing complexity is not only caused by a growing number of product variants but also rises continuously at the process level. The problems are amplified by the turbulent environment, in which internal and external influences act on manufacturing companies and lead to a continuous need for adaptation [Westkaemper 2007]. In manufacturing companies, these challenges are increasingly addressed by means of information technology (IT). However, the multitude of software systems and their often proprietary integration quickly lead to a complex IT landscape whose maintenance effort grows continuously. In addition, both the software applications and their integration are inflexible [Kirchner 2003], so changes and extensions can only be carried out with great effort. The processes implemented in the applications thus also become rigid and consequently cannot be adapted quickly enough. Moreover, integration solutions are largely restricted to a single domain and allow neither company-wide data exchange nor process definitions that span domains and applications. For these reasons, a new IT architecture for manufacturing companies is needed that supports the adaptivity of the applications and their integration as well as of the processes. This thesis describes such an adaptive and transformable IT architecture (ACITA) for manufacturing companies [Silcher 2011]. Its initial application domain is the product life cycle, or Product Lifecycle Management (PLM), but it can be extended to further domains with relatively little effort. Uniform, standardized web service interfaces are used to integrate the applications. The loose and therefore flexible coupling of the services is achieved via an adapted Enterprise Service Bus (ESB). The processes are supported by the flexible composition of services into workflows that can support the business processes.
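As a rough illustration of the loose, message-based coupling an adapted ESB provides (the content-based routing mechanism is described further below in this abstract), the following sketch dispatches messages to registered service endpoints based on their content. All field names, domains, and services are illustrative assumptions and not part of the ACITA implementation.

```python
# Minimal content-based router sketch: dispatch a message to a registered service
# endpoint based on the message's domain and type. All names are illustrative only.
from typing import Callable, Dict, Tuple

Message = Dict[str, str]          # e.g. {"domain": "production", "type": "order", "payload": "..."}
Handler = Callable[[Message], None]

class ContentBasedRouter:
    def __init__(self) -> None:
        self._routes: Dict[Tuple[str, str], Handler] = {}

    def register(self, domain: str, msg_type: str, handler: Handler) -> None:
        """Register a service endpoint for messages of a given domain and type."""
        self._routes[(domain, msg_type)] = handler

    def route(self, message: Message) -> None:
        """Inspect the message content and forward it to the matching endpoint."""
        key = (message["domain"], message["type"])
        handler = self._routes.get(key)
        if handler is None:
            raise LookupError(f"no service registered for {key}")
        handler(message)

# Usage example with two hypothetical endpoints.
router = ContentBasedRouter()
router.register("production", "order", lambda m: print("MES receives:", m["payload"]))
router.register("planning", "bom", lambda m: print("PLM receives:", m["payload"]))
router.route({"domain": "production", "type": "order", "payload": "produce 50 units"})
```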
With this approach, each domain is integrated separately via its own adapted ESB. This makes it possible to take the technical requirements of the respective domain into account, resulting in a more capable IT environment. The individual domain-specific ESBs are in turn integrated via a further ESB, the so-called PLM bus. This provides a transformable IT architecture, since phase-specific ESBs can easily be added or removed. Implementing the ACITA requires a number of different components. The applications to be integrated need service interfaces so that their functionality and data can be accessed. These service interfaces are managed by several service registries whose arrangement corresponds to the hierarchy of the ACITA [Silcher 2013a]. The loose coupling of the services is achieved via content-based routers (CBR) implemented in each ESB. Independence from the proprietary data formats of the integrated applications is ensured by using uniform message exchange formats within each phase. Translation services are necessary to transfer messages between different phases. The ACITA was implemented prototypically in the learning factory aIE, which consists of a digital learning island and a physical model factory [Riffelmacher 2007]. For the integration, the two domain-specific integration environments, the Production-planning Service Bus (PPSB) and the Manufacturing Service Bus (MSB), were connected via the PLM bus to demonstrate seamless data exchange between the corresponding phases [Silcher 2013a]. The ACITA is evaluated in four application scenarios and compared with other integration solutions for the product life cycle on the basis of six criteria. In a globally operating company, the ACITA can ensure data exchange between all sites by integrating distributed applications and services. The end-to-end use of software systems, combined with flexible process support, makes the growing complexity of products and processes more manageable. The continuous need for adaptation caused by the turbulent environment can be met more easily thanks to the adaptivity and transformability of the IT architecture. Companies are thus well prepared for future challenges.

Item Open Access: Advancing manipulation skill learning towards sample-efficiency and generalization (2018). Englert, Peter; Toussaint, Marc (Prof. Dr.)

Item Open Access: Agent-based dynamic scheduling for flexible manufacturing systems (2011). Badr, Iman; Göhner, Peter (Prof. Dr.-Ing. Dr. h. c.)
The need to react to fluctuations and versatility in market demand and to face competitive threats has led to an increasing trend towards producing a wide variety of product types in small batches. The advent of advanced technology like computerized numerically controlled (CNC) machines and automatic guided vehicles has enabled the realization of flexible manufacturing systems (FMSs). FMSs aim at bringing the efficiency of mass production to small-to-medium sized batch production with high product diversity. This objective calls for scheduling approaches that optimize the utilization of the applied technology and, at the same time, react flexibly to the environmental dynamics.
Conventional scheduling approaches fail to provide a mechanism for reacting to the dynamics of FMSs in a timely and efficient manner. Approaches that strive for optimality through a thorough investigation of the available schedule alternatives invariably fail to exhibit real-time reactivity due to the high complexity of the problem. In this research work, an agent-based concept for flexible and efficient FMS scheduling is proposed. The inherent complexity of the FMS scheduling problem is tackled by decomposing it into autonomous agents. These agents are organized in a heterarchical, multi-layered architecture that builds on the flexibility of FMSs. Every involved agent applies search heuristics to optimize its assigned task from its local perspective. Through the interactions among the concerned agents along the different levels of abstraction, the schedule is optimized from the global perspective in reasonable time. Different scheduling modes are supported to account for different managerial decisions and different environmental conditions. The generated schedule is adapted to disturbing events such as machine breakdowns by a schedule repair method that automates the reaction to disturbances efficiently in real time. In addition, structural changes of FMSs, including the addition of new resources, are incorporated dynamically into the proposed scheduling, which guarantees long-term flexibility.

Item Open Access: Agentenbasierte Konsistenzprüfung heterogener Modelle in der Automatisierungstechnik (2015). Rauscher, Michael; Göhner, Peter (Prof. Dr.-Ing. Dr. h.c.)
Automated and mechatronic systems have become indispensable in today's world. Both a large proportion of technical products and industrial production contain elements of mechatronics, a discipline that unites the classical disciplines of mechanical engineering, electrical engineering, and information technology. To ensure the productive collaboration of these very different disciplines, various challenges must be met. The use of heterogeneous models, necessitated by the different disciplines, is a major source of errors in the design of mechatronic systems. Inconsistencies between the individual models that are only discovered late in the design process require great effort to resolve and endanger the quality of the developed products. Due to the heterogeneity of the models, an automated check of the models for inconsistencies is not possible. Manual coordination between the disciplines and models, however, costs developers a great deal of time in the development process, and they cannot concentrate fully on their actual creative and productive work. Supporting developers in checking the models is therefore necessary in order to reduce development costs and increase product quality. This thesis presents a concept for the automated consistency checking of heterogeneous models that is able to uncover inconsistencies and to provide developers with all the information available for resolving them. The concept is based on abstracting the heterogeneous models and their content to a global, cross-model level, where they are interpreted and checked.
This is achieved by giving each model, or more precisely each model type, a (local) description that contains the structure and the meaning of the model type's syntax. The knowledge required for the checking, in the form of facts, relationships, and rules derived from them, is formulated in a generally applicable way and resides at the global level. User-specific rules to be checked can be added. The local and global knowledge bases are realized as ontologies. The checking itself is carried out by a software agent system. The basic idea of agent-oriented consistency checking is to represent each participating model by a model agent that acts as the interface between the local and global levels. An ontology agent derives the rules from the global knowledge base. Rule agents check the models for compliance with these rules. A coordination agent triggers the check, coordinates it, and manages the results. At the end of a check, these results are presented to the developer, together with all information that led to a positive or negative outcome. The developer can then resolve the uncovered inconsistencies in a targeted manner and is thus supported in his or her work. At the same time, the quality of the mechatronic systems is increased by reducing the possible sources of errors.

Item Open Access: Agentenunterstütztes Engineering von Automatisierungsanlagen (2008). Wagner, Thomas; Göhner, Peter (Prof. Dr.-Ing. Dr. h. c.)
In the course of advancing global competition, Germany is increasingly taking on the role of an 'engineering' location rather than a production location. In the field of plant automation, the term engineering covers the work processes and activities involved in the technical design and dimensioning of automated plants. The cost of engineering depends essentially on the efficiency and productivity of the human work processes and on the quality of the resulting engineering information. Besides methodological and technological aspects, accounting for the technical interdependencies between the individual plant components poses a major challenge. These interdependencies are very diverse and differ for every automated plant. They must therefore be captured completely and reconciled with one another during engineering, which today is done predominantly manually and entails high effort as well as additional sources of error. Starting from the modern component-based engineering approach and the nature of the engineering information produced, this thesis presents an approach for supporting the engineering of automated plants by means of information technology. Suitable concepts enable a significant reduction of the manual effort and a simplified execution of the human activities. By employing software agents, an active form of support is provided that adapts flexibly to the flow of the human work processes. The concept uses the available engineering information and existing knowledge about the technical dependencies of individual components and transfers them to individual software agents. On this basis, the software agents act and cooperate in the background, in parallel with the engineer's activities.
They are able to independently detect and analyze the technical interdependencies within the automated plant that arise during engineering and to determine suitable adaptations of the engineering information. Interaction with the engineer takes the form of corresponding notifications and solution proposals, which the software agents can implement autonomously on request. In this way, the individual technical interdependencies of an automated plant are already taken into account at the information technology level, and a large part of the previously required manual activities and deliberations can be omitted. Furthermore, the concept was designed to be compatible with the component models and tools used so far and to integrate easily into existing engineering processes.

Item Open Access: Algorithm engineering in geometric network planning and data mining (2018). Seybold, Martin P.; Funke, Stefan (Prof. Dr.-Ing.)
The geometric nature of computational problems provides a rich source of solution strategies as well as complicating obstacles. This thesis considers three problems in the context of geometric network planning, data mining and spherical geometry. Geometric Network Planning: In the d-dimensional Generalized Minimum Manhattan Network problem (d-GMMN) one is interested in finding a minimum cost rectilinear network N connecting a given set of n pairs of points in ℝ^d such that each pair is connected in N via a shortest Manhattan path. The decision version of this optimization problem is known to be NP-hard. The best known upper bound is an O(log^{d+1} n) approximation for d>2 and an O(log n) approximation for 2-GMMN. In this work we provide some more insight into whether the problem admits constant-factor approximations in polynomial time. We develop two new algorithms. The first is a 'scale-diversity aware' algorithm with an O(D) approximation guarantee for 2-GMMN, where D is a measure of the different 'scales' that appear in the input; D ∈ O(log n), but D is potentially much smaller, depending on the problem instance. The other algorithm is based on a primal-dual scheme solving a more general combinatorial problem, which we call Path Cover. On 2-GMMN it performs well in practice, with good a posteriori, instance-based approximation guarantees. Furthermore, it can be extended to deal with obstacle-avoiding requirements. We show that the Path Cover problem is at least as hard to approximate as the Hitting Set problem. Moreover, we show that solutions of the primal-dual algorithm are 4ω^2 approximations, where ω ≤ n denotes the maximum overlap of a problem instance. This implies that a potential proof of O(1)-inapproximability for 2-GMMN requires gadgets of many different scales and non-constant overlap in the construction. Geometric Map Matching for Heterogeneous Data: For a given sequence of location measurements, the goal of geometric map matching is to compute a sequence of movements along edges of a spatially embedded graph which provides a 'good explanation' for the measurements. The problem gets challenging as real-world data, like traces or graphs from the OpenStreetMap project, does not exhibit homogeneous data quality. Graph detail and errors vary between areas, and each trace has changing noise and precision. Hence, formalizing what a 'good explanation' is becomes quite difficult. We propose a novel map matching approach, which locally adapts to the data quality by constructing what we call dominance decompositions.
While our approach is computationally more expensive than previous approaches, our experiments show that it allows for high-quality map matching, even in the presence of highly variable data quality, without parameter tuning. Rational Points on the Unit Spheres: Each non-zero point in ℝ^d identifies a closest point x on the unit sphere S^{d-1}. We are interested in computing an ε-approximation y ∈ ℚ^d of x that lies exactly on S^{d-1} and has low bit-size. We revise lower bounds on rational approximations and provide explicit spherical instances. We prove that floating-point numbers can only provide trivial solutions to the sphere equation in ℝ^2 and ℝ^3. However, we show how to construct a rational point with denominators of at most 10(d-1)/ε^2 for any given ε ∈ (0, 1/8], improving on a previous result. The method further benefits from algorithms for simultaneous Diophantine approximation. Our open-source implementation and experiments demonstrate the practicality of our approach in the context of massive data sets geo-referenced by latitude and longitude values.

Item Open Access: Algorithm-based fault tolerance for matrix operations on graphics processing units: analysis and extension to autonomous operation (2015). Braun, Claus; Wunderlich, Hans-Joachim (Prof. Dr. rer. nat. habil.)
Scientific computing and computer-based simulation technology have evolved into indispensable tools that enable solutions for major challenges in science and engineering. Applications in these domains are often dominated by compute-intensive mathematical tasks like linear algebra matrix operations. The provision of correct and trustworthy computational results is an essential prerequisite, since these applications can have a direct impact on scientific, economic or political processes and decisions. Graphics processing units (GPUs) are highly parallel many-core processor architectures that deliver tremendous floating-point compute performance at very low cost. This makes them particularly interesting for the substantial acceleration of complex applications in science and engineering. However, like most nano-scaled CMOS devices, GPUs are facing a growing number of threats that jeopardize their reliability, which makes the integration of fault tolerance measures mandatory. Algorithm-Based Fault Tolerance (ABFT) allows the protection of essential mathematical operations that are used intensively in scientific computing. It provides high error coverage combined with low computational overhead. However, the integration of ABFT into linear algebra matrix operations on GPUs is a non-trivial task, which requires a thorough balance between fault tolerance, architectural constraints and performance. Moreover, ABFT for operations carried out in floating-point arithmetic has to cope with a reduced error detection and localization efficacy due to inevitable rounding errors. This work provides an in-depth analysis of Algorithm-Based Fault Tolerance for matrix operations on graphics processing units with respect to different types and combinations of weighted checksum codes, partitioned encoding schemes and architecture-related execution parameters. Moreover, a novel approach called A-ABFT is introduced for the efficient online determination of rounding error bounds, which improves the error detection and localization capabilities of ABFT significantly.
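To make the checksum idea behind ABFT concrete, the following minimal sketch encodes a matrix multiplication with simple, unweighted row and column checksums and locates a single injected error. It only illustrates the general principle mentioned in this abstract; it is not the GPU implementation, and the fixed tolerance merely stands in for the rounding error bounds that A-ABFT determines online.

```python
# Minimal ABFT sketch for matrix multiplication with unweighted checksums.
import numpy as np

def abft_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Return the full checksum matrix C_f = A_c @ B_r of C = A @ B."""
    A_c = np.vstack([A, A.sum(axis=0)])                  # append a column-checksum row
    B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])   # append a row-checksum column
    return A_c @ B_r                                      # shape (n+1) x (p+1)

rng = np.random.default_rng(0)
A, B = rng.random((4, 3)), rng.random((3, 5))
n, p = A.shape[0], B.shape[1]
C_f = abft_matmul(A, B)
C = C_f[:n, :p]                      # result block (a view into C_f)

C[1, 2] += 1.0                       # inject a single fault into the result block
tol = 1e-8                           # illustrative tolerance for rounding errors
bad_row = np.where(~np.isclose(C.sum(axis=1), C_f[:n, p], atol=tol))[0]
bad_col = np.where(~np.isclose(C.sum(axis=0), C_f[n, :p], atol=tol))[0]
print("fault located at row", bad_row, "column", bad_col)   # -> row [1], column [2]
```

A single corrupted element perturbs exactly one recomputed row sum and one column sum, so the mismatching checksum row and column intersect at the faulty element; choosing the comparison tolerance correctly in floating-point arithmetic is precisely the problem the abstract's rounding error bounds address.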
Extensive experimental evaluations of the error detection capabilities, the quality of the determined rounding error bounds, and the achievable performance confirm that the proposed A-ABFT method performs better than previous approaches. In addition, two case studies (QR decomposition and Linear Programming) emphasize the efficacy of A-ABFT and its applicability to practical problems.

Item Open Access: Algorithmische Aspekte der Fluid-Struktur-Wechselwirkung auf kartesischen Gittern (2007). Brenk, Markus; Bungartz, Hans-Joachim (Prof. Dr.)
Fluid-structure interactions play an essential role in many physical systems and technical applications. The interaction of fluids and flexible structures constitutes a coupled problem in which the motions of a fluid and of a structure are bidirectionally coupled via the structure's so-called wet surface (coupling surface). Wind loads on buildings and bridges, the inflation of airbags, the opening of parachutes, blood flow in a vessel, or the flow between the treads of a car tire are examples of this kind of coupling. Numerical simulation is an indispensable tool for studying fluid-structure interactions. These simulations are often realized with so-called partitioned approaches. These are characterized by the use of separate programs, designed and adapted for the individual subproblems, to compute the flows and the structural motions and deformations, in contrast to so-called monolithic approaches, in which all subproblems are discretized jointly and handled in a single program. With partitioned approaches, parts of the computations can be carried out with proven software solutions that are best suited to the respective subproblem. However, this requires an additional program component that controls the course of the coupled simulation and enables the exchange of data between the simulation programs, and that thus represents an integral part of partitioned approaches. This clearly shows that the simulation of fluid-structure interactions with partitioned approaches raises not only engineering challenges (such as solving concrete problems) and mathematical challenges (such as ensuring convergence and robustness) but, in particular, software engineering and thus computer science challenges. This thesis focuses on the resulting questions concerning the control of the coupling, the linking of the differing geometric representations of the wet surface used in the programs, and the exchange of the coupling-relevant data. The physical description of the fluid-structure interaction problem requires equilibrium conditions to be satisfied on the coupling surface at every point in time. For partitioned approaches, different strategies and methods exist, depending on the application, for exchanging the coupling data and for controlling the coupling in time in order to ensure these equilibria between the separate simulations.
This calls for a software solution for coupling the simulation programs that, besides a simple and low-effort adaptation of the programs and a flexible means of controlling the coupling, offers a solution for transferring the coupling-relevant data between the representations of the coupling surface in the simulation programs, which are based on different discretizations and data structures. This transfer of coupling data between the different surface representations can be supported effectively by embedding them in an overarching geometric representation. Hierarchically structured decompositions of space into Cartesian volume elements (e.g., octrees) are particularly well suited for this purpose as a runtime- and memory-efficient solution. Following this idea of efficiency, the question arises whether such structured Cartesian decompositions of space can be used directly as the basis for discretizing the flow domain in the simulation of fluid-structure interactions. The investigation of Cartesian discretizations in the context of fluid-structure interaction forms, besides the questions of realizing the coupling, the second focus of this thesis. Corresponding methods are presented, investigated, and, in particular, validated by the three-dimensional simulation of particle transport in microflows.

Item Open Access: Algorithms and complexity results for finite semigroups (2019). Fleischer, Lukas; Diekert, Volker (Prof. Dr.)
We consider the complexity of decision problems for regular languages given as recognizing morphisms to finite semigroups. We describe efficient algorithms for testing language emptiness, universality, inclusion, equivalence and finiteness, as well as intersection non-emptiness. Some of these algorithms have sublinear running time and are therefore implemented on random-access Turing machines or Boolean circuits. These algorithms are complemented by lower bounds. We give completeness results for the general case and also investigate restrictions to certain varieties of finite semigroups. Except for intersection non-emptiness, the problems mentioned above are shown to be closely connected to the Cayley semigroup membership problem, i.e., membership of an element in a subsemigroup given by a multiplication table and a set of generators. Therefore, the complexity of this problem is one of the main topics of this thesis. In many (but not all) cases, efficient algorithms for Cayley semigroup membership are based on the existence of succinct representations of semigroup elements over a given set of generators. These representations are algebraic circuits, also referred to as straight-line programs. As a compressibility measure for such representations within specific classes of finite semigroups, we introduce a framework called circuits properties. We give algebraic characterizations of certain classes of circuits properties and derive complexity results. As a byproduct, a generalization of a long-standing open problem in complexity theory is resolved. For intersection non-emptiness, a similar tool called product circuits properties is used. We provide completeness results for the problem of deciding membership in varieties of finite semigroups and in varieties of languages. We show that many varieties previously known to be decidable in polynomial time are actually in DLOGTIME-uniform AC^0.
The key ingredient is definability of varieties by first-order formulas. Combining our results with known lower bounds for deciding Parity, we also present a novel technique to prove that a specific variety cannot be defined by first-order formulas with multiplication. Since such formulas are more expressive than finite sets of ω-identities, this implies non-definability by finite sets of ω-identities.
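To illustrate the Cayley semigroup membership problem as defined in this abstract, the following naive closure computation decides membership from a multiplication table and a set of generators. It runs in polynomial time and merely clarifies the problem statement; it is not the circuit-based or space-efficient machinery developed in the thesis, and the example semigroup is an assumption chosen for brevity.

```python
# Naive Cayley semigroup membership: close the generator set under products
# with generators (associativity makes right-products by generators sufficient).
from collections import deque

def cayley_membership(table, generators, target):
    """table[a][b] is the product a*b; elements are the indices 0..n-1."""
    reachable = set(generators)
    queue = deque(generators)
    while queue:
        a = queue.popleft()
        for g in generators:
            prod = table[a][g]
            if prod not in reachable:
                reachable.add(prod)
                queue.append(prod)
    return target in reachable

# Example: the two-element semigroup ({0, 1}, multiplication).
table = [[0, 0],
         [0, 1]]
print(cayley_membership(table, {0}, 1))  # False: 1 is not generated by 0
print(cayley_membership(table, {1}, 1))  # True
```

The interesting questions studied in the thesis concern how much better one can do than this quadratic-size closure, for example which varieties admit sublinear-time or small-circuit decision procedures.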