Recent Submissions

Item (Open Access)
Examining methodologies to explain autonomous cyber defence agents in critical networks
(2024) Braun, Johannes
Networks worldwide are facing increasing pressure from cyberattacks. Reinforcement Learning (RL) agents have demonstrated promising potential for cybersecurity, enhancing network resilience and fortifying the overall security posture. However, particularly in critical scenarios, the dependability of RL agents must be ensured to foster operator trust. This thesis therefore explores and discusses Explainable Artificial Intelligence (XAI) methodologies for Multi-Agent Reinforcement Learning (MARL) defending cyber-critical networks. XAI mechanisms were implemented using Shapley values and decision trees. Furthermore, a novel hybrid approach was developed that combines a Large Language Model (LLM), neuro-symbolic outputs, and Shapley values. The MARL system was then explained via these three XAI implementations. In addition, experts in the field of RL for cybersecurity investigated the trained MARL system; their expert knowledge was contrasted with the XAI explanations. From the XAI and expert insights, a strategy for improving the MARL system was proposed, employing imitation learning to reinforce the significance of underrepresented features in the agents' outputs. Lastly, the presented XAI tools were critically assessed within the framework of a structured discussion methodology. The XAI tools demonstrate promising potential in explaining MARL, illustrated by the improvements in the agents' feature relevance achieved through imitation learning. However, existing XAI frameworks may require adaptation to MARL, and the advantages and constraints of each XAI methodology must be considered for its use. Moreover, we developed a novel XAI technique to generate natural-language explanations for MARL systems with symbolic output spaces. Consequently, we demonstrated that the presented XAI components are valuable assets in the development of reliable, trustworthy MARL, delivering insights that enhance agents defending cyber-critical network infrastructure.
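To illustrate the Shapley-value mechanism mentioned above, here is a minimal Python sketch that computes exact Shapley values for a toy additive value function over coalitions of network features. All feature names and weights are invented for the example and are not taken from the thesis; real applications would estimate such values from the agent's policy.

    from itertools import combinations
    from math import factorial

    # Hypothetical value function: the score the defence agent assigns when
    # only the features in `coalition` are visible (illustrative weights).
    def coalition_value(coalition):
        weights = {"open_ports": 0.40, "failed_logins": 0.35, "traffic_spike": 0.15}
        return sum(weights[f] for f in coalition)

    def shapley_values(features, value):
        # Exact Shapley values by enumerating all coalitions; feasible only
        # for a handful of features, which is fine for a demonstration.
        n = len(features)
        phi = {f: 0.0 for f in features}
        for f in features:
            others = [g for g in features if g != f]
            for k in range(n):
                for subset in combinations(others, k):
                    w = factorial(k) * factorial(n - k - 1) / factorial(n)
                    phi[f] += w * (value(subset + (f,)) - value(subset))
        return phi

    features = ("open_ports", "failed_logins", "traffic_spike")
    print(shapley_values(features, coalition_value))
    # Because the toy value function is additive, each feature's Shapley
    # value equals its weight, which makes the output easy to verify.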
Item (Open Access)
Etablierung und Evaluierung molekularbiologischer Verfahren zur Analyse zellfreier DNA für die Infektions- und Tumordiagnostik
(2024) Hartwig, Christina; Rupp, Steffen (apl. Prof. Dr.)
Cell-free DNA (cfDNA) is a biomolecule found extracellularly in various body fluids such as blood (plasma) or urine. It carries genetic information and has a relatively short half-life. This short half-life is useful for specifically determining the current cfDNA concentration and thus the physiological state, so cfDNA can be used as a biomarker for diagnostic purposes. In addition to its biologically advantageous properties, it can be obtained minimally invasively via a blood draw in the form of a so-called liquid biopsy, thereby avoiding a risky tissue biopsy. In this cumulative doctoral thesis, cfDNA was analyzed by next-generation sequencing (NGS) and examined for its suitability as a biomarker in various fields of application. The underlying publications are attached in chapter 7.2. In the first part of this dissertation, the characteristics of microbial cfDNA (mcfDNA) in infection diagnostics were investigated using sepsis as an example. In a mouse model, the temporal and spatial dynamics of mcfDNA during sepsis were tracked under defined conditions, and from this a workflow was developed for the individual characterization of the pathological microbiome (hereafter referred to as the pathobiome) and its dynamic changes. First, the gut microbiome of healthy mice was determined as a reservoir for sepsis-causing pathogens. The physiological gut microbiome changed rapidly after induction of sepsis: within 24 h, a pathobiome formed in the gut that was highly individual. In the next step, the spatial transition of the pathobiome from the gut into other compartments, such as the normally sterile peritoneum and the bloodstream, was investigated. Here, too, strong changes were detected in the peritoneum and blood after just 24 h and confirmed with different methods: the findings of classical blood culture and NGS largely agreed, although NGS additionally identified considerably more species. The pathobiomes sometimes differed markedly between individual mice, but clear overlaps with human samples were identified among the microbial genera that occur as the main pathogens in sepsis. Finally, a formula was developed for calculating the absolute pathogen load in blood. Blood volumes as small as 30 µl sufficed for the analysis, which is what made serial sampling in mice possible in the first place. Consequently, individual mice could be analyzed over a period of 72 h. The highest microbial load was observed after 24 h, but dynamic changes within a few hours were also detected. This study thus allowed a short half-life to be derived for mcfDNA, establishing this DNA class as a sensitive and adequate marker for sepsis diagnostics. The second part of this work was dedicated to characterizing epigenetic markers of human cfDNA for the diagnosis of pancreatobiliary cancers (PBC), focusing specifically on DNA methylation patterns (as information within the cfDNA) as biomarkers. The aim was to identify regions in the genome that were differentially methylated between PBC patients and control groups.
From these differentially methylated regions (DMRs), a target panel was to be constructed that, based on NGS of patient cfDNA, would enable sensitive and specific high-throughput diagnostics. In a first step, methylated cfDNA was enriched non-specifically and sequenced by NGS. The resulting data were evaluated with three different bioinformatic methods to determine DMRs. Together with regions already published in the literature and newly determined regions from tissue databases, these were combined into a sequencing panel, which was used for the targeted hybridization-and-capture method. Additional clinically collected data, such as the tumor protein marker CA19-9 already established in routine practice, were used only as supplementary information for the panel, since, in contrast to the sequencing data, they had so far not allowed reliable discrimination between patient groups in this cohort. The hybridization-and-capture panel was finally used to sequence 15 PBC, 15 pancreatitis, and 15 control patients. In combination with the patients' CA19-9 values, a machine learning approach was applied to identify the 50 best marker positions in the identification cohort. With these, a sensitivity of 93%, a specificity of 63%, and an area under the ROC curve of 0.85 were achieved. Subsequently, a validation cohort consisting of ten patients each from the PBC, pancreatitis, and control groups, as well as seven IPMN patients, was sequenced. In this cohort, a sensitivity of 92%, a specificity of 84%, and an area under the ROC curve of 0.88 were achieved. Furthermore, high-grade IPMNs and PBC patients could be distinguished well from low-grade IPMNs, pancreatitis, and controls. This made it possible to identify patients who required more intensive treatment or surgery. Methylated cfDNA consequently holds great diagnostic potential for pancreatic diseases and, in this publication, showed sensitive properties that could be exploited for a large number of (cancer) diseases. In the third and final part of this doctoral thesis, the goal was to characterize a specific class of short cfDNA fragments within total cfDNA. The hypothesis was that short cfDNA fragments (20-60 bp) contain regulatory information in the systemic context of an individual that can be used as differential markers in diagnostics. To this end, a size-selection procedure for cfDNA was first established that enabled the enrichment of intact, double-stranded short DNA fragments by gel electrophoresis. Short cfDNA was enriched at specific genomic positions, showing both narrow, defined peaks and broad cluster peaks. Cluster peaks were usually characterized by their proximity to transcription start sites (TSSs) or transcription factor binding sites (TFBSs). A comparison with regular cfDNA revealed opposite behavior: enrichment of one cfDNA class meant depletion of the other, for example at open chromatin, nucleosome-free regions, or TSSs. Short cfDNA thus did not appear to be a degradation product of regular cfDNA, but rather to be protected from degradation by DNases through the binding of transcription factors (rather than nucleosomes, as for regular cfDNA).
Using the short cfDNA, potential binding of various transcription factors (TFs) could also be detected via transcription-factor motif enrichment or the analysis of known TFBSs. This already pointed to a link between short cfDNA and transcriptional processes. Moreover, the enrichment of short cfDNA in certain regions depended on epigenetic and transcriptional activity: active promoters showed strong enrichment of short cfDNA, whereas highly methylated CpG islands showed markedly weaker enrichment of short cfDNA than weakly methylated CpG islands. In addition, complementary RNA sequencing showed enrichment of short cfDNA in genes that were highly expressed according to the RNA analyses. Overall, these observations were reminiscent of the detection of DNA footprints, in which the binding of specific DNA sequences to particular proteins is identified through their protection from degradation by DNases. Consequently, the transcription-factor footprints derived from liquid biopsies were termed liquid footprints. Finally, it could be shown that sequencing short cfDNA enabled the detection of condition-specific TFBSs in liquid biopsies and allowed four clinical indications (PDAC, colorectal cancer, sepsis, post-OP) to be distinguished. This underscores the potential of liquid footprinting as an explorative, unbiased platform for detecting diagnostic marker regions for a wide range of clinical indications. In closing, a brief outlook on the combination of methylation data and transcription factor data was given. Two regions were considered as examples that suggest an interplay between the two data types and thus a biological link. Future diagnostics could benefit from such integrated analysis and exploit the full potential of the different cfDNA classes in combination.
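The marker-selection step described above for the second part (picking the 50 most discriminative methylation positions and scoring the classifier with a ROC curve) follows a standard pattern that can be sketched with scikit-learn. The example below runs on synthetic stand-in data with hypothetical labels; it illustrates the general workflow only and is not the pipeline used in the thesis.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.random((90, 500))        # synthetic per-region methylation levels
    y = rng.integers(0, 2, size=90)  # 1 = case, 0 = control (illustrative)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Select the 50 most discriminative marker positions, then classify.
    clf = make_pipeline(SelectKBest(f_classif, k=50),
                        LogisticRegression(max_iter=1000))
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"validation AUC: {auc:.2f}")  # near 0.5 here, since the data is random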
Item (Open Access)
Investigation of the impact of different scale-up dependent stimuli on metabolism and population heterogeneity in Corynebacterium glutamicum
(2024) Eilingsfeld, Adrian; Takors, Ralf (Prof. Dr.-Ing.)
This thesis investigates the impact of elevated carbon dioxide levels on population heterogeneity in Corynebacterium glutamicum, a widely used industrial production host. Through a series of experiments involving cultivation at varying CO2 partial pressures, flow cytometry, and analysis of DNA content, the research reveals that increased CO2 exerts significant selection pressure, affecting growth rates and cell aggregation tendencies. Key findings indicate that higher growth rates increase DNA replication levels, while elevated CO2 levels reduce them. The results contribute to understanding how CO2 influences population dynamics, providing insights for optimizing industrial bioprocesses and supporting Corynebacterium glutamicum as a robust production strain.
Item (Open Access)
Stereoscopic videos : data generation, image synthesis and motion analysis
(2025) Mehl, Lukas; Bruhn, Andrés (Prof. Dr.-Ing.)
Videos are an essential data source for computer vision and an important form of media and entertainment. While many research works consider monocular videos, i.e., videos captured with a single camera, there is less focus on stereoscopic videos, i.e., videos captured with two cameras, although they correspond directly to the human binocular vision system. In this thesis, we discuss stereoscopic videos in detail and focus on three main topics: data generation, image synthesis, and motion analysis. In the first main topic, we discuss data generation for stereoscopic video tasks in computer vision. We first introduce a novel large-scale stereoscopic video dataset with ground truth for the stereo matching, optical flow, and scene flow tasks, and propose a high-detail evaluation methodology that can assess predictions at fine details such as grass or hair. Based on this, we introduce a benchmark website for the evaluation and comparison of future works. We further analyze the results of 14 initial methods from the dense matching literature and investigate the influence of their model architectures on benchmark performance. Additionally, we propose an augmentation strategy with snow particle effects that can be used to create even more data based on existing datasets. In the second main topic, we introduce image synthesis for stereo conversion, i.e., generating stereoscopic videos from monocular videos. We present a method that performs disparity-aware warping, consistent foreground-background compositing, and background-aware inpainting, and combines these steps with a temporal consistency strategy that integrates information from additional video frames. Several experiments not only show that our approach outperforms existing methods both visually and quantitatively by a large margin, but also analyze the model design choices. Further, by adding extensions for user interaction to our model, we demonstrate that our approach is directly applicable to current practices in 3D movie production. In our third main topic, we cover motion analysis for stereoscopic videos by introducing a novel method for scene flow prediction. Our model predicts scene flow based on three frames from a stereoscopic video by combining forward predictions and the SE(3) matrix inversion of backward predictions in a fusion module, which leads to strong improvements over baseline models and highly competitive benchmark results. Further experiments demonstrate model robustness and compare architectures, scene flow parametrizations, and fusion strategies. With contributions in these three main topics, this dissertation advances both datasets and algorithms in the context of stereoscopic videos.
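One technical detail from the third topic that a short example can make concrete is the SE(3) matrix inversion applied to the backward predictions: for a rigid motion, the inverse has a cheap closed form. The numpy sketch below illustrates that identity in general; it is not code from the thesis.

    import numpy as np

    def se3_inverse(T):
        # Invert a 4x4 rigid transform T = [[R, t], [0, 1]] using the closed
        # form inv(T) = [[R^T, -R^T t], [0, 1]], which is cheaper and
        # numerically more stable than a generic matrix inverse.
        R, t = T[:3, :3], T[:3, 3]
        Ti = np.eye(4)
        Ti[:3, :3] = R.T
        Ti[:3, 3] = -R.T @ t
        return Ti

    # Example: rotation about z by 30 degrees plus a translation.
    a = np.deg2rad(30.0)
    T = np.array([[np.cos(a), -np.sin(a), 0.0, 1.0],
                  [np.sin(a),  np.cos(a), 0.0, 2.0],
                  [0.0,        0.0,       1.0, 0.5],
                  [0.0,        0.0,       0.0, 1.0]])
    assert np.allclose(T @ se3_inverse(T), np.eye(4))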
Item (Open Access)
Thermal diffusion and trapping of vacancies for the formation of optical centers in diamond
(2024) Santonocito, Santo; Wrachtrup, Jörg (Prof. Dr.)
Since its introduction, quantum physics has revolutionized our understanding of the fundamental laws governing the universe. Originally employed to address problems unsolvable by classical mechanics, quantum physics has gradually found a wide variety of applications in modern life, many of which are based on the principle of quantum coherence, such as lasers. Despite these advancements, understanding the behavior of complex quantum systems remains an enduring challenge, primarily due to the exponential growth in complexity as the number of system components increases. This difficulty is largely attributed to the limited computational power of classical computers. As Richard Feynman famously remarked, "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly, it's a wonderful problem, because it doesn't look easy", highlighting the intrinsic link between the complexity of quantum phenomena and the need for computational paradigms that can inherently handle such complexity. Contrary to Gordon Moore's 1965 prediction, recent years have seen a slowdown in the reduction of transistor size as physical limits have been approached, thereby obstructing further advances in classical computational power. Ongoing research into alternative methods to enhance computational performance has thus shifted focus towards quantum computers. Proposed initially in 1988 by Y. Yamamoto and K. Igeta, quantum computers are based on qubits as units of information. Unlike classical bits that exist strictly as either 0 or 1, qubits are quantum systems that can exist in any arbitrary superposition of two distinct states (|0⟩ and |1⟩). This property enables quantum computers to perform computational tasks through the temporal evolution of these systems via logical operations (defined as gates), potentially offering exponential speed-ups for problems intractable with classical computation. Over the years, advancements in the production and control of various quantum systems have led to the exploration of applications such as nanoscale quantum sensing. Among several types of quantum technologies, including single photons and trapped ions, devices based on spin impurities in solid-state systems have proven exceptionally effective for both quantum computing and quantum sensing. These impurities meet the DiVincenzo criteria for quantum systems, featuring spins that not only define discrete, individually addressable energy levels but also allow for initialization, coherent manipulation, and measurement. Furthermore, these systems are sensitive to physical quantities such as electric and magnetic fields via the Stark and Zeeman interactions, respectively. Among the various spin impurities, the negatively charged nitrogen-vacancy (NV) center in diamond is extensively studied. This atomic-scale spin system is composed of a nitrogen atom located in a lattice site neighboring a vacancy and surrounded by carbon atoms. Recent advances in material engineering have enabled the controlled synthesis of diamond crystals with a high degree of purity, allowing NV centers to function as well-isolated spin systems. The NV center is characterized by a spin-triplet energy configuration, including spin-selective relaxation via intersystem crossing, which enables the ground-state spin to be initialized, manipulated, and optically read out.
Moreover, the energy levels of the NV center are sensitive to variations in magnetic fields, electric fields, temperature, and strain, making this center a versatile sensor for various physical phenomena. Its atomic size and spin properties render the NV suitable for quantum sensing applications. Unlike other competing sensing technologies that require high-energy systems, the NV center can operate under standard temperature and pressure conditions. Furthermore, the chemically inert nature of diamond renders NV-based devices biocompatible, allowing for their placement within a few nanometers of the field sources and thus enabling imaging of magnetic fields at the nanometer scale. The most common technique for creating single NV centers in diamond involves ion implantation followed by thermal annealing. This method allows the creation of NVs at various depths within diamond with nanometer precision. Despite its spatial resolution, low-energy implantation, necessary for near-surface NV positioning (relevant for sensing applications), results in a low yield of NVs even when the concentration of vacancies induced by implantation is significantly higher than that of atomic nitrogen included within the diamond lattice. This inefficiency is related to the tendency of vacancies to aggregate into di-vacancies (V2) or multi-vacancy complexes (Vn) in ion-damaged areas rather than being trapped at nitrogen lattice sites and forming NV centers. Although a higher implantation dose could potentially increase the yield, it would also result in greater lattice damage, which would reduce the spin coherence time (T2) of the NV, compromising its sensitivity as a quantum sensor. In addition to the challenges associated with low formation efficiency, the proximity of NV centers to the diamond surface (less than 10 nm) introduces further complications. Surface-induced noise and electronic defects can degrade spin coherence and destabilize the NV charge state, significantly impacting the performance and reliability of these shallow defects. Addressing these issues, while simultaneously maximizing NV concentration and preserving T2 and signal contrast, remains a critical challenge in the field of quantum sensing. Recent advancements in material processing and NV creation strategies have provided promising solutions. Co-implantation of nitrogen and helium ions, with precise optimization of implantation energies (below 10 keV) and fluences, has been shown to minimize lattice damage and enhance the efficiency of NV formation. Helium co-implantation, in particular, facilitates the introduction of vacancies at controlled depths, improving the nitrogen-to-NV conversion efficiency. This technique has proven particularly effective for generating shallow NV centers with narrow linewidths in optically detected magnetic resonance (ODMR) measurements, a key requirement for quantum sensing applications. In parallel, high-temperature annealing protocols, typically at temperatures exceeding 1200 °C, have been employed to mitigate the formation of vacancy complexes and restore lattice integrity, further enhancing NV coherence times. Furthermore, surface treatments, such as hydrogen and oxygen termination prior to implantation, have shown promise in addressing surface-induced noise. These treatments not only enhance the nitrogen-to-NV conversion efficiency but also improve the stability of shallow NV centers by reducing charge-state conversion to neutral NV configurations.
Additionally, pre-doping diamond substrates with nitrogen prior to ion implantation has emerged as a powerful method to enhance NV yield. Molecular dynamics simulations suggest that nitrogen concentrations of approximately 1000 ppm optimize vacancy diffusion and NV formation, achieving yields of up to 10%. These techniques, combined with advancements in chemical vapor deposition (CVD) diamond growth, allow for controlled nitrogen incorporation and defect alignment along specific crystallographic orientations, critical for achieving reproducible quantum performance. In addition to quantum sensing, the domain of quantum information processing has also attracted increasing attention. Although NV centers meet the necessary DiVincenzo criteria for quantum applications, they lack some optical properties ideal for quantum information processing, such as a high Debye-Waller factor and minimal spectral diffusion. In contrast, color centers in diamond based on group-IV elements are considered suitable candidates. These centers not only exhibit better optical performance than NV centers but also show inherent compatibility with the nanostructures usually employed for enhancing optical properties. Like NVs, these group-IV color centers can be precisely engineered through ion implantation followed by annealing. However, since nitrogen is intrinsically present within the diamond lattice, the formation of unwanted NV centers is inevitable. Developing methods to suppress the formation of unwanted NVs is crucial for advancing the application of group-IV color centers in quantum information processing. This thesis primarily examines the formation of color centers in diamond, achieved through the trapping of diffusing vacancies. Specifically, it explores the use of irradiation and annealing techniques designed to generate vacancies and promote their diffusion within the diamond lattice. A critical aspect of this study involves identifying the inherent limitations associated with these techniques and understanding the underlying reasons for these constraints. The aim is to develop novel methodologies that can overcome these limitations. One such innovative method has been applied to the synthesis of tin-vacancy (SnV) centers in diamond. These developments are crucial for improving the production of color centers in diamond, thereby broadening their utility in quantum technology applications. Impurity centers in the diamond lattice. The present work starts with an overview of diamond as a host material for color centers employed in quantum applications, focusing particularly on the NV. The discussion includes a description of the chemical vapor deposition (CVD) and high-pressure, high-temperature (HPHT) diamond growth techniques, pivotal for synthesizing substrates with controlled impurity levels. This control is significant, as the presence of any paramagnetic impurities within the lattice can compromise the spin properties of the targeted color centers. The chapter further focuses on the physical properties of the NV center, providing a basic introduction to the associated spin-manipulation techniques. Moreover, detailed attention is given to the methods employed to create NV centers, with a specific focus on ion implantation followed by annealing, and to the limitations associated with this technique. Vacancy diffusion and defect formation in a crystalline solid. The second chapter addresses the diffusion of vacancies and their trapping by defects within the diamond lattice.
Here, a novel model of vacancy diffusion, based on probabilistic atomic jumps in the crystal lattice, is developed to investigate the limits of NV formation due to the trapping of irradiation-induced diffusing vacancies. Within the model, NV formation is treated as a competition for vacancy trapping between nitrogen atoms, divacancies (V2), and multi-vacancy complexes (Vn). A critical parameter of the model is the capture cross-section, which quantifies the probability for a vacancy to be trapped by a specific defect and can be related to the formation energy of the corresponding defect. The model, developed for different vacancy-distribution scenarios, has been validated through Monte Carlo simulations. The efficiency of NV formation by irradiation techniques: estimates by the model and experimental validation. The third chapter starts with a description of the "Scanning Protocol" developed to collect depth profiles of NV centers within the diamond bulk with nanometric precision. In this protocol, photoluminescence (PL) confocal scans of a fixed area are collected at different depths in steps of 0.1 µm. Within each scan, the position and fluorescence intensity of NV centers are evaluated using a 2-D Gaussian fit. An additional 1-D Gaussian fit of the NV PL intensity as a function of depth then localizes the NV in the diamond bulk. NV depth distributions produced by helium implantation followed by annealing were collected for different annealing temperatures. Fitting these distributions with the model developed in chapter 2 yields an activation energy for vacancy diffusion in diamond of 1.7 eV. This value was subsequently used in the model to evaluate the capture cross-section ratio between NV and V2, which lies in the range 0.1 to 0.5. This result demonstrates the tendency of vacancies to aggregate preferentially rather than being trapped by nitrogen, defining the limit (low formation yield) of NV formation (as for other color centers in diamond) by ion implantation or electron irradiation followed by annealing. The ability of the model developed in chapter 2 to predict NV center concentrations resulting from the aforementioned techniques enables the formulation of strategies aimed at enhancing the formation of NV centers while preserving their spin properties. Furthermore, the model's probabilistic approach holds promise for wider applications in the engineering of additional vacancy-related defects in diamond, which significantly extends its utility within the realm of quantum engineering. Planar p-n junction structures on diamond for controlling vacancy diffusion. The final chapter explores an advanced engineering technique to control vacancy diffusion during thermal annealing. Specifically, vacancies created by ion implantation within the depletion region of a diamond p+-n junction become charged, and their long-range diffusion has been shown to be suppressed by repulsive forces from ionized donors in the depleted region of the n-doped substrate. This mechanism is demonstrated to reduce the formation of NV centers by limiting vacancy trapping at nitrogen sites in the diamond bulk. The effectiveness of this technique is verified through the fabrication of p+-n junctions in high-purity single-crystal diamond substrates.
Indeed, C and He implantations (tuned to create vacancies as described above) across such a junction, followed by annealing at 1200 °C, resulted in a strong reduction of NVs compared to undoped areas subjected to the same implantation and annealing procedure. The method described has been successfully applied to the implantation of tin to produce SnV centers in diamond. This approach resulted in both an enhanced yield of SnV centers and a reduction of unwanted NVs along the implantation paths of the Sn atoms. The utility of this method extends beyond the creation of SnV centers; it is also applicable to the formation of other color centers in diamond. By providing control over vacancy diffusion within semiconductor materials, this technique holds substantial potential for a variety of applications in the field of quantum technologies.
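The competition the abstract describes, diffusing vacancies being captured either by nitrogen (forming NVs) or by other vacancies (forming V2/Vn), lends itself to a toy Monte Carlo illustration. In the Python sketch below, the lattice size, defect densities, and capture probabilities are invented for the example; they are not the fitted quantities (such as the 1.7 eV activation energy or the 0.1-0.5 cross-section ratio) reported above.

    import random

    random.seed(1)
    L = 40  # cubic lattice edge length, with periodic boundaries
    # Illustrative defect placement and capture probabilities (not fitted values).
    nitrogen = {tuple(random.randrange(L) for _ in range(3)) for _ in range(150)}
    vacancies = {tuple(random.randrange(L) for _ in range(3)) for _ in range(300)}
    p_n, p_v = 0.3, 0.9  # capture probability at a nitrogen site vs. at a vacancy

    def fate(steps=20_000):
        # Random-walk one diffusing vacancy until it is trapped (or gives up).
        pos = [L // 2] * 3
        for _ in range(steps):
            axis = random.randrange(3)
            pos[axis] = (pos[axis] + random.choice((-1, 1))) % L
            site = tuple(pos)
            if site in nitrogen and random.random() < p_n:
                return "NV"
            if site in vacancies and random.random() < p_v:
                return "V2/Vn"
        return "free"

    runs = [fate() for _ in range(300)]
    print({k: runs.count(k) for k in ("NV", "V2/Vn", "free")})
    # With aggregation more likely per encounter (p_v > p_n) and vacancy sites
    # more abundant, most walkers end up in V2/Vn, mirroring the low NV yield.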
Item (Open Access)
Higgs spectroscopy on superconductors : dynamical interplay with charge-density-wave
(2024) Feng, Liwen; Dressel, Martin (Prof. Dr.)
Item (Open Access)
Entwicklung und Evaluation einer gamifizierten Design Thinking Methode für die frühe Phase des Innovationsmanagements
(2023) Härer, Florian; Herzwurm, Georg
Innovation management is becoming increasingly important for companies, in particular the early phase of the innovation process, which, however, faces several challenges due to its volatile environment. This design-oriented research contribution takes up these problems and attempts to solve them by means of a situational theory approach. To this end, a new method based on Design Thinking was systematically and scientifically constructed by incorporating gamification. In a single-case study from the automotive industry, the new method was applied in the field in three separate runs and evaluated with the participants. On the one hand, the results show that the new method can be used to develop ideas and that several of the identified design problems could be solved. On the other hand, the method requires further research in order to further improve the quality of the early phase of the innovation process.
Item (Open Access)
Reduced order homogenization of thermoelastic materials with strong temperature dependence and comparison to a machine-learned model
(2023) Sharba, Shadi; Herb, Julius; Fritzen, Felix
In this work, an approach for strongly temperature-dependent thermoelastic homogenization is presented. It is based on computational homogenization paired with reduced order models (ROMs) that allow for full temperature dependence of the material parameters in all phases. In order to keep the model accurate and computationally efficient at the same time, we suggest the use of different ROMs at a few discrete temperatures. Then, for intermediate temperatures, we derive an energy-optimal basis emerging from the available ones. The resulting reduced homogenization problem can be solved in real time. Unlike classical homogenization, where only the effective behavior, i.e., the effective stiffness and the effective thermal expansion, of the microscopic reference volume element is of interest, our ROM also delivers accurate full-field reconstructions of all mechanical fields within the microstructure. We show that the proposed method, referred to as optimal field interpolation, is computationally as efficient as simplistic linear interpolation. However, our method yields an accuracy that matches direct numerical simulation in many cases, i.e., very accurate real-time predictions are achieved. Additionally, we propose a greedy sampling procedure that requires a minimal number of direct numerical simulations as inputs (two to six discrete temperatures over a range of around 1000 K). Further, we consider a black-box machine-learned model as an alternative route and show its limitations in view of the limited amount of training data. Using our new method to generate an abundance of data, we demonstrate that a highly accurate tabular interpolator can be obtained easily.
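As an illustration of combining reduced bases sampled at a few discrete temperatures, the following numpy sketch merges two bases into one orthonormal basis via a truncated SVD. This is a generic construction offered only as an example; it is not the energy-optimal basis derivation of the paper, and all sizes are invented.

    import numpy as np

    def merged_basis(B_lo, B_hi, n_modes):
        # Stack the two bases (columns = modes) and keep the dominant left
        # singular vectors; the result approximately spans both input bases
        # and is orthonormal by construction.
        U, s, _ = np.linalg.svd(np.hstack([B_lo, B_hi]), full_matrices=False)
        return U[:, :n_modes]

    rng = np.random.default_rng(0)
    B300 = np.linalg.qr(rng.standard_normal((1000, 8)))[0]  # stand-in basis at 300 K
    B800 = np.linalg.qr(rng.standard_normal((1000, 8)))[0]  # stand-in basis at 800 K
    B = merged_basis(B300, B800, n_modes=10)
    print(B.shape, np.allclose(B.T @ B, np.eye(10)))  # (1000, 10) True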