Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Search Results

Now showing 1 - 10 of 953
  • Item (Open Access)
    Automated composition of adaptive pervasive applications in heterogeneous environments
    (2012) Schuhmann, Stephan Andreas; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
    Distributed applications for Pervasive Computing are a research area of high interest. Before an application can execute, a configuration process must find a composition of components that provides the required functionality. As dynamic pervasive environments and device failures can render arbitrary components and devices unavailable at any time, finding and maintaining such a composition is a nontrivial task. Many degrees of decentralization, up to completely centralized approaches, are possible in the calculation of valid configurations, spanning a wide spectrum of solutions. As configuration processes produce latencies that the user perceives as undesired waiting times, configurations have to be calculated as fast as possible. While completely distributed configuration is inevitable in infrastructure-less ad hoc scenarios, many realistic Pervasive Computing applications run in heterogeneous environments, where the additional computation power of resource-rich devices can be utilized by centralized approaches. However, in strongly heterogeneous pervasive environments with several resource-rich and resource-weak devices, both centralized and decentralized approaches may lead to suboptimal configuration latencies: the resource-weak devices may become bottlenecks for decentralized configuration, while the centralized approach fails to exploit parallelism. Most projects in Pervasive Computing focus on only one specific type of environment: either they concentrate on heterogeneous environments and rely on additional infrastructure devices, which makes them inapplicable in infrastructure-less settings, or they address homogeneous ad hoc environments and treat all involved devices as equal, which yields suboptimal results when resource-rich devices are present, as their additional computation power is not exploited. Therefore, in this work we propose a comprehensive adaptive approach that focuses on the efficient support of heterogeneous environments but is also applicable in infrastructure-less homogeneous scenarios. We provide multiple configuration schemes with different degrees of decentralization for distributed applications, each optimized for specific scenarios. Our solution is adaptive in that the actual scheme is chosen based on the current system environment, so application compositions are calculated in a resource-aware, efficient manner. This ensures high efficiency even in dynamically changing environments. Beyond this, many typical pervasive environments contain a fixed set of frequently used applications and devices, so identical resources take part in subsequent configuration calculations and the involved devices undergo a quite similar configuration process whenever an application is launched. Starting the configuration from scratch every time not only costs time but also increases the communication overhead and energy consumption of the involved devices. Therefore, our solution reuses the results of previous configurations to reduce the severity of the configuration problem in dynamic scenarios. We show in prototypical real-world evaluations as well as by simulation and emulation that our comprehensive approach provides efficient automated configuration across the complete spectrum of possible application scenarios. This extensive functionality has not been achieved by related projects yet; our work thus makes a significant contribution towards seamless application configuration in Pervasive Computing.
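    The core decision described above, choosing a degree of decentralization from the makeup of the current environment, can be illustrated with a small sketch. The following Python fragment is purely illustrative: the device model, the threshold, and the scheme names are hypothetical stand-ins, not the classification logic of the thesis.

        # Hypothetical sketch of resource-aware configuration scheme selection.
        from dataclasses import dataclass

        @dataclass
        class Device:
            name: str
            compute_score: float  # abstract measure of computation power

        def pick_configuration_scheme(devices, rich_threshold=10.0):
            """Choose a degree of decentralization for the configuration."""
            rich = [d for d in devices if d.compute_score >= rich_threshold]
            weak = [d for d in devices if d.compute_score < rich_threshold]
            if not rich:
                # Infrastructure-less homogeneous ad hoc scenario: no device
                # is fast enough to act as a central configurator alone.
                return "fully decentralized"
            if not weak or len(rich) == 1:
                # A single resource-rich device computes the composition.
                return "centralized"
            # Several rich devices: partition the configuration among them to
            # exploit parallelism without burdening the weak devices.
            return "hybrid (partially decentralized)"

        devices = [Device("phone", 2.0), Device("laptop", 15.0), Device("server", 40.0)]
        print(pick_configuration_scheme(devices))  # -> hybrid (partially decentralized)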
  • Item (Open Access)
    Service life estimation of metallic structures by means of the discrete element method in the coupled thermo-mechanical field
    (2012) Hahn, Manfred; Kröplin, Bernd (Prof. Dr.-Ing. habil.)
    Due to the wide scatter in the physical properties of metallic materials, their behavior cannot be predicted reliably. This is a particular problem for the service life prediction of industrially manufactured structures. The atomic structure of a metallic material generally consists of one principal element and one or more minor components, so that when a melt solidifies, the atoms do not arrange themselves in an ordered fashion on the atomic scale but with randomly distributed defects. On a coarser scale, metallic melts begin to solidify at many locations, and the nucleation sites and the boundaries of the solidification fronts are stochastically distributed. On closer inspection, metallic materials are therefore not a continuum but a discontinuum in which the physical properties, viewed globally, smear out to the values observed in laboratory experiments. The aim of the present work is to represent the material as a discontinuum in order to capture the stochastically distributed physical properties on a small scale, while on a larger scale the physical properties smear out to the globally observed values. To this end, the numerical discrete element method presented in this work is extended in the mechanical field and subsequently transferred to other physical fields. Under cyclic loading, microplastic deformations take place in the interior of metallic materials; with increasing cycle count they saturate the material with microdamage at discrete locations. This discrete saturation proceeds until a discrete site is oversaturated or weakened and subsequently fails. Like the other physical parameters, the microplastic deformations are subject to statistical scatter. The discrete accumulation of this statistically distributed microplastic damage is incorporated into the numerical model of the discrete element method (DEM), so that service life simulations can be carried out numerically. A substantial part of this work is concerned with the mathematical proofs of the validity of the DEM as a numerical method, for both the mechanical and the thermal field. The proof proceeds by comparison with the finite volume method (FVM), the finite difference method (FDM), and the finite element method (FEM). From the FVM the idea is adopted that physical fluxes are transferred. Furthermore, the assembly of six bars that transfer the physical field quantity, together with the local physical equilibrium, yields the finite-difference star of the FDM. Finally, the global equilibrium is obtained by means of the FEM; after the finite element computation, the flux quantities can be recovered. This work shows how the coupling of the mechanical and thermal fields can be accomplished with the DEM, sets out how the two fields interact and how strong their coupling is, and concludes with a service life simulation of a thermo-mechanically loaded virtual specimen. The results of this work show that the DEM is a method for solving partial differential equations, both for single fields and for coupled fields. Moreover, the method allows the material to be treated as a discontinuum, so that service life simulations can be carried out with an extended material model.
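    The flux-transfer equivalence invoked in the proofs can be made concrete with a minimal one-dimensional sketch in Python: nodes exchange heat through connecting bars, and summing the bar fluxes at each node recovers the explicit finite-difference stencil, which is the relationship the abstract describes (the thesis works with three-dimensional assemblies of six bars per node; the material and step parameters below are arbitrary illustrative values).

        import numpy as np

        # 1D heat conduction, discrete-element style: each bar carries a flux
        # proportional to the temperature difference of the nodes it connects.
        n, dx, dt, k = 50, 1.0, 0.1, 1.0   # nodes, spacing, time step, conductance
        T = np.zeros(n)
        T[0] = 100.0                        # fixed hot end as boundary condition
        for _ in range(1000):
            flux = k * (T[1:] - T[:-1]) / dx   # flux carried by each bar
            dT = np.zeros(n)
            dT[:-1] += flux * dt / dx          # heat flowing in from the bar on the right
            dT[1:]  -= flux * dt / dx          # same heat leaving the bar's other node
            T += dT                            # local equilibrium update = FD star in 1D
            T[0] = 100.0                       # re-impose the boundary condition
        print(T[:5])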
  • Item (Open Access)
    Investigations of novel infusion pumps for medical technology
    (2013) Wolter, Frank; Kück, Heinz (Prof. Dr. rer. nat.)
    No devices are currently available on the market that can deliver an unlimited dosing volume while also covering the stated rate range with the required accuracy. To minimize equipment complexity, an important requirement for a novel infusion pump is therefore that it can be used universally. The novel pumping principle with a ferromagnetic pumping element presented in this work is a refinement of the reciprocating-piston principle and, owing to its simple and robust design, is suitable for implementation as a disposable part in medical technology. The drive needed to move the pumping element can likewise be realized very simply and with few components, and it does not come into contact with the fluid being delivered.
  • Item (Open Access)
    Regulation of the catalytic activity and specificity of DNA nucleotide methyltransferase 1
    (2014) Bashtrykov, Pavel; Jeltsch, Albert (Prof. Dr.)
    DNA nucleotide methyltransferase 1 (Dnmt1) is mainly responsible for the maintenance of DNA methylation in mammals and plays a crucial role in the epigenetic control of gene expression. Dnmt1 recognizes and methylates hemimethylated CpG sites formed during DNA replication. In the present work, the mechanistic details of substrate recognition by the catalytic domain of Dnmt1, the possible role of the CXXC and RFTS domains in the regulation of the specificity and activity of Dnmt1, and the influence of the Ubiquitin-like PHD and RING finger domain-containing 1 (Uhrf1) protein on the enzymatic properties of Dnmt1 were investigated. Using modified substrates, the functional roles of individual contacts of the Dnmt1 catalytic domain with the CpG site of the DNA substrate were analysed. The data show that the interaction with the 5-methylcytosine:guanine pair is required for the catalytic activity of Dnmt1, whereas the contacts to the non-target strand guanine are not important, since its replacement with adenine increased the activity of Dnmt1. It had been proposed that binding of the CXXC domain to unmethylated CpG sites increases the specificity of Dnmt1 for hemimethylated DNA. Our data showed that the CXXC domain does not influence the enzyme's specificity in full-length Dnmt1. In contrast, an M1235S exchange introduced by mutagenesis in the catalytic domain resulted in a significant reduction in specificity. Therefore, the readout of hemimethylated DNA occurs within the catalytic domain. A crystal structure had shown that the RFTS domain of Dnmt1 inhibits the activity of the enzyme by binding to the catalytic domain and blocking the entry of the DNA. Amino acid substitutions in the RFTS domain destabilized its positioning within the catalytic domain, and a corresponding increase in the catalytic rate was observed, which supports this concept and suggests a possible mechanism to allosterically regulate the activity of Dnmt1 in cells. Uhrf1 has been shown to target Dnmt1 to replicated DNA, which is essential for DNA methylation. Here it is demonstrated that Uhrf1, as well as its isolated SRA domain, increases the activity and specificity of Dnmt1 through an allosteric mechanism. The stimulatory effect was independent of the SRA domain's ability to bind hemimethylated DNA. The RFTS domain of Dnmt1 is required for the stimulation, since its deletion, or blocking its interaction with the SRA domain, significantly reduced the ability of Uhrf1 to increase the activity and specificity of Dnmt1. Uhrf1 therefore plays multiple roles that support DNA methylation, including the targeting of Dnmt1, its stimulation, and an increase of its specificity.
  • Item (Open Access)
    Concept of a changeable and modular production system for franchising models
    (2013) Rauch, Erwin; Spath, Dieter (Univ.-Prof. Dr.-Ing. Dr.-Ing. E.h. Dr. h.c.)
    The objective of this work is to develop a concept for the systematic design of a franchisable production system with geographically distributed production units that is changeable, scalable, and replicable. The concept is also intended to ensure that the designed production system is tested, subsequently implemented, and continuously adapted in the spirit of changeability. Research already offers many approaches to adapting changeable systems and production systems to shifting change drivers, but these approaches usually have a very universal and generic character and do not address the specific properties or requirements of particular production sectors or organizational forms. This work is devoted to one specific organizational form, franchising with geographically distributed production sites, and uses the developed concept to create a production system tailored to this business model. The topic of this dissertation is of particular relevance because it follows the trend toward flexible, customer-oriented, decentralized production and its networking; at the same time, the work aims to increase the changeability of production units through modularity and scalability of the production system. Franchising as an organizational form has gained ever more importance in recent years and decades and is growing considerably faster than the economy as a whole. In past decades, research on franchising was conducted almost exclusively from a business administration and legal perspective; a guideline for planning, designing, and implementing production systems within franchise networks was entirely lacking. This work is intended to close that gap in research. Its contents are therefore:
    • State of the art and research approaches: current trends, and the identification and comparison of existing and already proven approaches from research.
    • Requirements for production system design: systematic determination and derivation of the design parameters for production systems for franchising models by means of systems theory (Axiomatic Design).
    • Concept development: development of a holistic concept for the design, planning, and introduction of a changeable and modularly expandable franchise production system.
    • Testing and validation: ensuring practical applicability by validating the concept in a case study.
    Brief description of the developed concept: the concept is based on a three-level model for the design, planning, and implementation of production systems in franchising:
    • Design level: a normative level with five design fields and 50 design modules for modeling the production system:
      o Design field 1 – product range
      o Design field 2 – franchise model
      o Design field 3 – production unit
      o Design field 4 – supply
      o Design field 5 – processes
    • Planning level: planning of the pilot production, consolidation of the pilot production, and planning of the roll-out of the franchise production units
    • Execution level: operational implementation and operations management
    Further core elements of the concept are the derivation of a comprehensive collection of design parameters for production systems; the creation of a regular feedback loop for systematically adapting the production system and thus ensuring its changeability; consideration of the scalability and replicability of the production system by means of a modular expansion-stage concept; and scenario and roll-out planning for the decentralized production units in the franchising network.
  • Item (Open Access)
    Dominant dimensions of finite dimensional algebras
    (2012) Abrar, Muhammad; König, Steffen (Prof. Dr. rer. nat.)
    We study the dominant dimensions of three classes of finite dimensional algebras, namely hereditary algebras, quotient algebras of trees, and serial algebras. We see that a branching vertex plays a key role in establishing that the dominant dimension (dom.dim) of hereditary algebras (quivers) is at most one. We define arms of a tree and split trees into two classes: trees without arms and trees with arms. As for hereditary algebras, it turns out that the dominant dimension of quotient algebras of trees cannot exceed one. For serial algebras A associated to a linearly oriented quiver with n vertices, we give lower and upper bounds on dom.dim A and show that these bounds are optimal. It is also shown that some of the algebras A satisfy the conditions in the higher-dimensional version of Auslander's correspondence. Further, we consider serial algebras corresponding to a one-oriented-cycle quiver Q with n vertices and give optimal bounds for a special subclass of these algebras. We conjecture that for any non-self-injective quotient algebra A of Q, dom.dim A is at most 2n-3, where the number of vertices n is greater than 2. Finally, we construct a few examples of algebras having large (finite) dominant dimension.
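    For orientation, the quantity being bounded can be stated in its standard form (the usual textbook definition, not a formula quoted from the thesis): if

        \[ 0 \longrightarrow A \longrightarrow I^0 \longrightarrow I^1 \longrightarrow I^2 \longrightarrow \cdots \]

    is a minimal injective coresolution of A as a module over itself, then

        \[ \operatorname{dom.dim} A \;=\; \sup\{\, n \ge 0 \mid I^0, \dots, I^{n-1} \text{ are projective} \,\}, \]

    so dom.dim A >= 1 says exactly that the injective envelope of A is projective.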
  • Item (Open Access)
    Insights into the structural and functional properties of the eukaryotic porin Tom40
    (2012) Gessmann, Dennis; Nußberger, Stephan (Prof. Dr.)
    Tom40 forms the preprotein-conducting channel in the outer membrane of mitochondria, enabling transport of up to 1500 different preproteins through an optimized pore environment. Moreover, Tom40 exhibits a voltage-dependent gating mechanism in the manner of an 'electrical switch', making this eukaryotic beta-barrel a promising target for nanopore-based applications. In this work, new bioinformatics methods were developed and verified through practical approaches to shed light on the structural elements of Tom40 that facilitate its particular function in mitochondria. Based on these results, Tom40 proteins were designed with modified and optimized structural properties. TmSIP, a physical interaction model developed for TM beta-barrel proteins, was used to identify weakly stable regions in the TM domain of Tom40 from mammals and fungi. Three unfavorable beta-strands were identified for human Tom40A. Via CD and Trp-fluorescence spectroscopy it was shown that substitution of key amino acid residues in these strands resulted in improved resistance of the protein to chemical and thermal perturbation. Furthermore, the mutated form of hTom40A was found strictly in its monomeric state. Similar improvements in apparent stability were obtained for Tom40 from Aspergillus fumigatus. Tom40 was isolated and purified in its native state from Neurospora crassa mitochondria. Time-limited proteolysis of native NcTom40 coupled to mass spectrometry revealed protease accessibility comparable to that of VDAC isoform 1 from mammals, suggesting a similar fold. Thus, a homology model of NcTom40 was developed on the basis of the solved mouse VDAC-1 crystal structure. It was found that Tom40 forms a 19-stranded beta-barrel with an N-terminal alpha-helix inside the pore. Furthermore, a conserved 'polar slide' in the pore interior is possibly involved in preprotein translocation, and a second conserved domain, termed the 'helix anchor region', in arresting the helix inside the Tom40 pore. Based on the homology model of NcTom40, the structure and function of the N-terminal domain of Tom40 were addressed. Examination of the model structure revealed two different domains within the N-terminus, the inner-barrel and the outer-barrel N-terminus. In vivo investigations showed that both parts independently prevent a heat-induced dysfunction of Tom40 in N. crassa mitochondria. By applying CD spectroscopy, the predicted N-terminal alpha-helix could be assigned to the inner-barrel N-terminus. Furthermore, in combination with Trp-fluorescence spectroscopy it was found that the N-terminal alpha-helix unfolds independently of the Tom40 beta-barrel but is not necessary for pore stability or integrity. However, a conserved amino acid residue in the inner-barrel N-terminus, Ile47 of NcTom40, is essential for the structural integrity of the N-terminal alpha-helix. In conclusion, these results may offer a basis for future work on TM beta-barrel proteins that aims to alter structural properties in the absence of a high-resolution atomic structure or established knowledge of the biochemical and biophysical properties.
  • Item (Open Access)
    Thermodynamic analysis and numerical modeling of supercritical injection
    (2015) Banuti, Daniel; Weigand, Bernhard (Prof. Dr.-Ing. habil.)
    Although liquid propellant rocket engines are operational and have been studied for decades, cryogenic injection at supercritical pressures is still considered essentially not understood. This thesis approaches the problem in three steps: by developing a numerical model for real gas thermodynamics, by extending the present thermodynamic view of supercritical injection, and finally by applying these methods to the analysis of injection. A new numerical real gas thermodynamics model is developed as an extension of the DLR TAU code. Its main differences from state-of-the-art methods are the use of a precomputed library for fluid properties and an innovative multi-fluid-mixing approach. This yields a number of advantages: there is effectively no runtime penalty for using a real gas model compared to perfect gas formulations, even for high-fidelity equations of state (EOS) with their associated high computational cost; a dedicated EOS may be used for each species; the model covers all fluid states of the real gas component, including liquid, gaseous, and supercritical states as well as liquid-vapor mixtures; numerical behavior is not affected by local fluid properties, such as diverging heat capacities at the critical point; and the method implicitly contains a vaporization and condensation model. In this thesis, oxygen is modeled using a modified Benedict-Webb-Rubin equation of state, while all other involved species are treated as perfect gases. A quantitative analysis of the supercritical pseudo-boiling phenomenon is given. The transition between supercritical liquid-like and gas-like states resembles subcritical vaporization and is thus called pseudo-boiling in the literature. In this work it is shown that pseudo-boiling differs from its subcritical counterpart in that heating occurs simultaneously with overcoming molecular attraction. In this process the dividing line between liquid-like and gas-like states, the so-called Widom line, is crossed. This demarcation is characterized by the set of states with maximum specific heat capacity. An equation for this line is introduced which is more accurate than previous equations. By analyzing the Clausius-Clapeyron equation towards the critical limit, an expression is derived for its sole parameter. A new nondimensional parameter evaluates the ratio of overcoming molecular attraction to heating: it diverges towards the critical point but shows a significant pseudo-boiling effect at reduced pressures of up to 2.5 for various fluids. It appears reasonable to interpret the Widom line, which divides liquid-like from gas-like supercritical states, as a definition of the boundary of a dense supercritical fluid. This may be used to uniquely determine the radius of a droplet or the dense core length of a jet, so that a quantitative thermodynamic analysis becomes possible. Furthermore, as the pseudo-boiling process may occur during moderate heat addition, this allows for a previously undescribed thermal jet disintegration mechanism which may take place within the injector. This thermal jet break-up hypothesis is then applied to an analysis of Mayer's and Branam's nitrogen injection experiments. Instead of the constant-density cores predicted by theory, the majority of their cases show an immediate drop in density upon entering the chamber. Three different axial density modes are identified, and the analysis shows that heat transfer did in fact take place in the injector. The two cases exhibiting a dense core are the cases which require the largest amount of power to reach the pseudo-boiling temperature. After this promising application of pseudo-boiling analysis, thermal break-up is tested numerically. By accounting for heat transfer inside the injector, a non-dense-core injection can indeed be simulated with CFD for the first time. Finally, the CFD model is applied to the A60 Mascotte test case, a reactive GH2/LOX single injector operating at supercritical pressure. The results are compared with experimental data and other researchers' numerical data. The flame shape lies well within the margins of other CFD results. The maximum OH* concentration is found in the shear layer close to the oxygen core and not in the shoulder, in agreement with experimental data. The axial temperature distribution is matched very well, particularly concerning the position and value of the maximum temperature.
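    Two of the relations underlying this analysis can be written out in their generic textbook form for orientation (the more accurate Widom-line correlation derived in the thesis is intentionally not reproduced here): the Clausius-Clapeyron equation along the coexistence line, and the characterization of the Widom line as the locus of maximum isobaric specific heat,

        \[ \left.\frac{dp}{dT}\right|_{\mathrm{coex}} = \frac{\Delta h_{\mathrm{vap}}}{T\,\Delta v}, \qquad \left(\frac{\partial c_p}{\partial T}\right)_p = 0 \ \text{with } c_p \text{ maximal along the Widom line}. \]

    Extrapolating the first relation towards the critical point, where both the enthalpy of vaporization and the volume change vanish while the slope dp/dT remains finite, is the limit analysis the abstract refers to.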
  • Item (Open Access)
    Visualization challenges in distributed heterogeneous computing environments
    (2015) Panagiotidis, Alexandros; Ertl, Thomas (Prof. Dr.)
    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more detail and higher resolution in visualizations. For some years now, the prevailing trend for these large systems has been the utilization of additional processors, like graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, like higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches often entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users become more interested in resilience besides traditional aspects like performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and visualization in heterogeneous computing environments. The first approach hides details of the different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing and for simplifying order-independent transparency in distributed visualization. Traditional methods for fault tolerance in high performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy, and example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and tuning of volume visualization are evaluated. Challenges in dense graphs, like visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis takes a broader look at the issues of distributed visualization on large displays and heterogeneous computing environments for the first time. While the presented approaches each solve individual challenges and are successfully employed in this context, their joint utility forms a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
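    Of the techniques named above, the per-pixel linked list mechanism is compact enough to sketch. The Python fragment below mimics it on the CPU, with a dictionary of fragment lists standing in for GPU linked lists; fragments are accumulated unordered per pixel and sorted by depth only at resolve time, which is what makes the transparency order-independent (the data layout and blending are simplified, and this is not the thesis's implementation).

        from collections import defaultdict

        pixels = defaultdict(list)   # pixel coordinate -> list of (depth, rgba)

        def add_fragment(xy, depth, rgba):
            pixels[xy].append((depth, rgba))   # append in arrival order

        def resolve(xy, background=(0.0, 0.0, 0.0)):
            """Sort fragments front-to-back and alpha-blend them in order."""
            color, transmittance = [0.0, 0.0, 0.0], 1.0
            for depth, (r, g, b, a) in sorted(pixels[xy]):
                for c, src in enumerate((r, g, b)):
                    color[c] += transmittance * a * src
                transmittance *= (1.0 - a)
            return tuple(color[c] + transmittance * background[c] for c in range(3))

        add_fragment((0, 0), 0.7, (0.0, 0.0, 1.0, 0.5))  # far fragment, blue
        add_fragment((0, 0), 0.3, (1.0, 0.0, 0.0, 0.5))  # near fragment, red
        print(resolve((0, 0)))  # (0.5, 0.0, 0.25): the nearer red dominates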
  • Item (Open Access)
    Climate sensitivity of a large lake
    (2013) Eder, Maria Magdalena; Bárdossy, András (Prof. Dr. rer. nat. Dr.-Ing.)
    Lakes are complex ecosystems that are on the one hand more or less enclosed by defined borders, but on the other hand connected to their environment, especially to their catchment and the atmosphere. This study examines the climate sensitivity of large lakes using Lake Constance as an example. The lake is situated in Central Europe at the northern edge of the Alps, at the boundary of Austria, Germany and Switzerland. The maximum depth is 235 m, the total surface area 535 km², and the total volume 48.45 km³. The numerical simulations in this study were performed with the lake model system ELCOM-CAEDYM. The model system was validated using three different data sets: observations of a turbid underflow after a flood in the main tributary, a lake-wide field campaign of temperature and phytoplankton, and long-term monitoring data of temperature and oxygen in the hypolimnion. The model system proved able to reproduce the effects of a flood in the largest tributary: a huge turbid underflow was observed flowing into the main basin after an intense rain event in the Alps in August 2005, and a numerical experiment showed the influence of the earth's rotation on the flow path of the riverine water within the lake. The model also reproduced the temperature evolution and distribution and, to some extent, the phytoplankton patchiness measured in spring 2007 during an intensive field campaign. The model further reproduced the measured time series of temperature and oxygen in the deep hypolimnion for the years 1980-2000, indicating that vertical mixing and the lake's cycle of mixing and stratification were reproduced correctly. Based on the model set-up validated with long-term monitoring data, climate scenario simulations were run. The main focus was on temperature and oxygen concentrations in the hypolimnion, the cycle of stratification and mixing, and the heat budget of the lake. The meteorological boundary conditions for the climate scenario simulations were generated using a weather generator instead of downscaling climate projections from global climate models. This approach makes it possible to change different characteristics of the climate independently. The resulting lake model simulations are "what-if" scenarios rather than predictions, helping to obtain a deeper understanding of the processes in the lake. The main results can be summarized as follows: an increase in air temperature leads to an increase in water temperature, especially in the upper layers. The deep-water temperature increases as well, but not to the same extent as the temperature of the epilimnion, which results in an increased vertical temperature difference. Due to the nonlinear shape of the temperature-density curve, the difference in density grows even more strongly than the temperature difference. This results in enhanced stratification stability and consequently in less mixing. Complete mixing of the lake becomes rarer in a warmer climate, but even in the scenario simulations with air temperature increased by 5 °C, full circulation took place every 3-4 years. Fewer complete mixing events lead to less oxygen in the hypolimnion. Additionally, as many biogeochemical processes are temperature-dependent, the oxygen consumption rate is larger in warmer water. In the context of this study, climate variability is defined as episodes with daily average air temperatures deviating from the long-term average for that day of the year. The episodes can be described by their duration in days and their amplitude in °C. Changes in climate variability can have very different effects, depending on the average air and water temperatures; the effects are stronger in lakes with higher water temperatures. For the hypolimnetic conditions, the seasonality of warming is important: increasing winter air temperatures have a much stronger effect on the water temperatures in the lake than increasing summer temperatures. The combined effects of a warmer climate and higher nutrient concentrations enhance oxygen depletion in the hypolimnion. Finally, it is discussed to what extent the results of this study are transferable to other lakes. The reactions of Lake Constance to climate change are determined by the physical, geographical and ecological characteristics of the lake. Hydrodynamic reactions are defined by the mixing type, the water temperatures, and the residence time of the water in the lake. It is also important that the lake is almost never completely ice-covered and that there are only minor salinity differences. The reactions of the ecosystem are additionally determined by the oligotrophic state of the lake. The results of this study can thus be transferred to other deep, monomictic, oligotrophic freshwater lakes, such as the other large perialpine lakes of glacial origin.
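    The stratification argument rests on the nonlinearity of the temperature-density curve, which is easy to check numerically. The sketch below uses the widely used UNESCO/Millero-Poisson polynomial for fresh water (an assumed stand-in; the thesis's model system has its own density formulation) to show that warming by 1 K reduces density far more at 20-25 °C than near the 4 °C density maximum:

        def rho_freshwater(T):
            """Density of pure water in kg/m^3 for T in deg C (approx. 0-40)."""
            return (999.842594 + 6.793952e-2 * T - 9.095290e-3 * T**2
                    + 1.001685e-4 * T**3 - 1.120083e-6 * T**4 + 6.536332e-9 * T**5)

        for T in (4.0, 10.0, 20.0, 25.0):
            drop = rho_freshwater(T) - rho_freshwater(T + 1.0)
            print(f"{T:4.1f} deg C: density drop per +1 K = {drop:+.4f} kg/m^3")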