05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6


Search Results

Now showing 1 - 10 of 11
  • Item (Open Access)
    A framework for similarity recognition of CAD models in respect to PLM optimization
    (2022) Zehtaban, Leila; Roller, Dieter (Univ.-Prof. Hon.-Prof. Dr.)
  • Item (Open Access)
    Rigorous compilation for near-term quantum computers
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible to resolve by traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and a low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations that must be incorporated by compilation methods targeting near-term quantum computers in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements as they exactly explore the solution space and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are included in rigorous compilation methods to investigate each aspect of the imposed requirements, i.e. the number of qubits, connectivity of qubits, duration and incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. 
Experimental results demonstrate the ability of the developed rigorous compilation methods to extend the computational reach of near-term quantum computers by generating quantum computations with reduced requirements on the number and connectivity of qubits, as well as by reducing the duration and incurred errors of the performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method that exploits the structure of a quantum computation to reuse qubits at runtime yielded a reduction of up to 5x in the required number of qubits and of up to 33% in the result error. The developed quantum circuit partitioning method optimally distributes a quantum computation into separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
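The qubit-reuse idea summarized above can be illustrated with a toy model (this is a minimal sketch with made-up intervals, not the thesis's exact compilation method): if a logical qubit's last operation, including measurement, finishes before another logical qubit's first operation starts, the two can share one physical qubit via a mid-circuit reset. Greedy interval partitioning then gives the number of physical qubits needed:

```python
# Hedged sketch: qubit reuse as interval scheduling. Intervals are
# illustrative lifetimes (first_use, last_use) of logical qubits.
import heapq

def assign_physical_qubits(intervals):
    """Return the number of physical qubits needed when a qubit whose
    lifetime ends before another's starts can be reset and reused."""
    free_at = []  # min-heap of times at which a physical qubit frees up
    count = 0
    for start, end in sorted(intervals):
        if free_at and free_at[0] < start:
            heapq.heapreplace(free_at, end)  # reuse after a reset
        else:
            count += 1                       # allocate a fresh qubit
            heapq.heappush(free_at, end)
    return count

# Four logical qubits, but two pairs can share hardware qubits:
print(assign_physical_qubits([(0, 3), (1, 2), (4, 6), (3, 5)]))  # → 2
```

The rigorous methods in the thesis solve this optimally together with routing constraints; the greedy heap merely shows why exploiting circuit structure can shrink the qubit requirement.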
  • Item (Open Access)
    Test planning for low-power built-in self test
    (2014) Zoellin, Christian G.; Wunderlich, Hans-Joachim (Prof. Dr. rer. nat. habil.)
    Power consumption has become the most important issue in the design of integrated circuits. The power consumption during manufacturing or in-system test of a circuit can significantly exceed the power consumption during functional operation. The excessive power can lead to false test fails or can result in the permanent degradation or destruction of the device under test. Both effects can significantly impact the cost of manufacturing integrated circuits. This work targets power consumption during Built-In Self-Test (BIST). BIST is a Design-for-Test (DfT) technique that adds additional circuitry to a design such that it can be tested at-speed with very little external stimulus. Test planning is the process of computing configurations of the BIST-based tests that optimize the power consumption within the constraints of test time and fault coverage. In this work, a test planning approach is presented that targets the Self-Test Using Multiple-input signature register and Parallel Shift-register sequence generator (STUMPS) DfT architecture. For this purpose, the STUMPS architecture is extended by clock gating in order to leverage the benefits of test planning. The clock of every chain of scan flip-flops can be independently disabled, reducing the switching activity of the flip-flops and their clock distribution to zero as well as reducing the switching activity of the down-stream logic. Further improvements are obtained by clustering the flip-flops of the circuit appropriately. The test planning problem is mapped to a set covering problem. The constraints for the set covering are extracted from fault simulation and the circuit structure such that any valid cover will test every targeted fault at least once. Divide-and-conquer is employed to reduce the computational complexity of optimization against a power consumption metric. The approach can be combined with any fault model and in this work, stuck-at and transition faults are considered. 
The approach effectively reduces the test power without increasing the test time or reducing the fault coverage. It has proven effective with academic benchmark circuits, several industrial benchmarks and the Synergistic Processing Element (SPE) of the Cell/B.E.™ Processor (Riley et al., 2005). Hardware experiments have been conducted based on the manufacturing BIST of the Cell/B.E.™ Processor and have shown the viability of the approach for industrial, high-volume, high-end designs. In order to improve the fault coverage for delay faults, high-frequency circuits are sometimes tested with complex clock sequences that generate tests with three or more at-speed cycles (rather than the two of traditional at-speed testing). To support such complex clock sequences, the test planning presented here has been extended with a circuit-graph-based approach for determining equivalent combinational circuits for the sequential logic. In addition, this work proposes a method based on dynamic frequency scaling of the shift clock that utilizes a given power envelope to its full extent. This way, the test time can be reduced significantly, in particular if high test coverage is targeted.
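The mapping of test planning to set covering can be sketched in a few lines. The greedy heuristic below (configuration names, fault sets and power costs are invented for illustration; the thesis optimizes the cover exactly via divide-and-conquer) picks, at each step, the configuration covering the most still-untested faults per unit of power:

```python
# Hedged sketch: greedy weighted set covering for BIST test planning.
# Each candidate configuration covers a set of faults (from fault
# simulation) and has a power cost; any valid cover tests every fault.

def plan_tests(configs, all_faults):
    """configs: dict name -> (covered_fault_set, power_cost).
    Returns an ordered list of configuration names covering all faults."""
    uncovered = set(all_faults)
    plan = []
    while uncovered:
        # Best ratio of newly covered faults to power cost.
        name, (cov, power) = max(
            configs.items(),
            key=lambda kv: len(kv[1][0] & uncovered) / kv[1][1],
        )
        gained = cov & uncovered
        if not gained:
            raise ValueError("remaining faults are untestable")
        plan.append(name)
        uncovered -= gained
    return plan

configs = {
    "c1": ({"f1", "f2", "f3"}, 2.0),
    "c2": ({"f3", "f4"}, 1.0),
    "c3": ({"f1", "f4", "f5"}, 3.0),
}
print(plan_tests(configs, {"f1", "f2", "f3", "f4", "f5"}))  # → ['c2', 'c1', 'c3']
```

An exact solver would minimize total power over all covers; the greedy version only approximates that, but shows the constraint structure.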
  • Item (Open Access)
    Deep learning based prediction and visual analytics for temporal environmental data
    (2022) Harbola, Shubhi; Coors, Volker (Prof. Dr.)
    The objective of this thesis is to develop Machine Learning methods and their visualisation for environmental data. The presented approaches primarily focus on devising an accurate Machine Learning framework that supports the user in understanding and comparing model accuracy in relation to essential aspects of the respective parameter selection, trends, time frame, and correlations with the considered meteorological and pollution parameters. This thesis then develops approaches for the interactive visualisation of environmental data that are wrapped around the time series prediction as an application. These approaches provide an interactive application that supports: 1. a Visual Analytics platform to interact with the sensor data and enhance the visual representation of the environmental data by identifying patterns that mostly go unnoticed in large temporal datasets, 2. a seasonality deduction platform presenting analyses of the results that clearly demonstrate the relationship between these parameters in a combined temporal activities frame, and 3. air quality analyses that interactively discover spatio-temporal relationships among complex air quality data in different time frames by harnessing the user's knowledge of factors influencing past, present, and future behaviour with the aid of Machine Learning models. Parts of this work contribute to the field of Explainable Artificial Intelligence, an area concerned with the development of methods that help understand, explain and interpret Machine Learning algorithms. In summary, this thesis describes Machine Learning prediction algorithms together with several visualisation approaches for interactively analysing the temporal relationships among complex environmental data in different time frames in a robust web platform.
The developed interactive visualisation system for environmental data combines visual prediction, sensors' spatial locations, measurements of the parameters, detailed pattern analyses, and changes in conditions over time. This provides a new combined approach to the existing visual analytics research. The algorithms developed in this thesis can be used to infer spatio-temporal environmental data, enabling interactive exploration processes and thus helping manage cities smartly.
  • Item (Open Access)
    Berechnungsverfahren und auf Abtastung basierende Messverfahren zur Bestimmung elektrischer HF-Störfelder und der damit verbundenen Störeinkopplungen in Leitersysteme
    (2006) Geisbusch, Lothar; Landstorfer, Friedrich (Prof. Dr.-Ing.)
    An important aspect of EMC is the coupling of high-frequency electromagnetic fields into conductor systems. To quantify such couplings, this thesis develops both measurement and numerical methods. For computing the interference coupling into cardiac pacemaker systems, a hybrid computation method combining the multiple multipole method and the method of moments is further developed and its applicability is improved. In addition to the computation method, this thesis develops a new field-sensor method for measuring the magnitude and phase of high-frequency electric fields. The method exploits subharmonic sampling by placing a fast sampler inside an electrically short dipole antenna. The sampler is triggered via an optical fiber, which avoids distorting the field being measured. A very high measurement speed is achieved, so that field distributions can be measured within a short time. In addition to the field sensor, a modified sensor system for measuring interference voltages is developed, which allows, for example, the measurement of coupled voltages at pacemaker electrodes. The thesis concludes with an investigation of interference coupling into pacemaker systems and with experimental work on the field distribution in a motor vehicle during mobile-phone operation.
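The subharmonic sampling principle used by the field sensor can be demonstrated numerically: sampling a periodic high-frequency signal at slightly more than an integer number of its periods makes it appear slowed down, so a slow sampler can trace a fast field. The frequency and timing offset below are illustrative, not the developed sensor's parameters:

```python
# Hedged sketch of equivalent-time (subharmonic) sampling: each sample
# is taken 10 signal periods plus a tiny offset after the previous one,
# so the samples walk slowly through one period of a 1 GHz field.
import math

f_sig = 1.0e9                 # field frequency to be measured (1 GHz)
T_samp = 10 / f_sig + 1e-12   # sample every 10 periods plus 1 ps offset

# The phase advances by f_sig * 1e-12 = 0.001 cycles per sample, so
# 1000 slow samples trace one full period of the 1 GHz field.
samples = [math.sin(2 * math.pi * f_sig * n * T_samp) for n in range(1001)]

print(round(samples[250], 3))  # quarter of the apparent period → 1.0
print(round(samples[750], 3))  # three quarters → -1.0
```

The sampler thus needs only ~100 MHz trigger rates to resolve a GHz waveform, which is why an optically triggered sampler inside a short dipole suffices.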
  • Item (Open Access)
    Deep learning aided clinical decision support
    (2023) Schneider, Rudolf; Staab, Steffen (Prof. Dr.)
    Medical professionals create vast amounts of clinical texts during patient care. Often, these documents describe medical cases from anamnesis to the final clinical outcome. Automated understanding and selection of relevant medical records offer an opportunity to assist medical doctors in their day-to-day work on a large scale. However, clinical text understanding is challenging, especially when dealing with clinical narratives such as nursing notes or diagnostic reports. These clinical documents differ extensively in length, structure, vocabulary, and lexical and grammatical correctness. In addition, they are highly context-dependent. For all these reasons, approaches based on syntactic rules and discrete text representation often fail to address the variety of clinical narratives, propagating unrecoverable errors to downstream applications. Therefore, this thesis focuses on evaluating and designing methods and models that are generalizable and adaptable enough to deal with these challenges. Our goal is to enable text-based clinical decision support systems to utilize the knowledge from clinical archives and medical publications. We aim to design methods that can scale up to the growing amount of clinical documents in hospital archives. A fundamental problem in achieving deep-learning-enabled clinical decision support systems is designing a patient representation that captures all relevant information for automated processing. We address these challenges by designing a framework for deep-learning-enabled differential diagnosis support. Guided by the needs emerging from this framework, we design and evaluate methods based on three information representation paradigms: (1) Discrete relation extraction using the open information extraction paradigm. (2) Neural text representations based on language and topic modeling. (3) Combining complementary neural text representations.
Our framework translates clinical diagnostic steps and pathways to statistical and deep-learning-based models. Accordingly, we can show that deep-learning-enabled differential diagnosis benefits from contextualized information representations. Further, we identify shortcomings of the open information extraction paradigm in a comprehensive benchmark. We design a distributed text representation model based on topical information. Our extensive large-scale experiment results show that topical distributed text representations capture information complementary to language modeling-based approaches across domains, thus enabling a holistic text representation for medical texts. Our experiments with medical doctors using our prototypical implementation of the deep-learning-enabled differential diagnosis process validate this framework. Moreover, we identify seven crucial design challenges for text-based clinical decision support systems based on our qualitative and quantitative findings.
  • Item (Open Access)
    Un-coordinated multi-user and inter-cell interference alignment based on partial and outdated information for large cellular networks
    (2016) Aziz, Danish; Speidel, Joachim (Prof. Dr.-Ing.)
    Cellular networks have undergone rapid evolution during the past decade. However, their performance is still limited by interference, and interference management in current and future cellular networks remains an active research topic. Interference alignment is a technique that manages interference efficiently through an "align and suppress" strategy. In the first part of this thesis we focus on coordinated inter-cell interference alignment in a large cellular network. We assess the performance of interference-alignment-based transmit precoding under specific receiver strategies and coordination scenarios by comparing it with different state-of-the-art precoding schemes. We continue our assessment by considering imperfect channel state information at the transmitter. The results show that the gains of coordinated alignment-based transmission are very sensitive to the receiver strategies and imperfections compared to the other precoding schemes. However, given good channel conditions with very slowly moving users, coordinated interference alignment outperforms the other baselines even with imperfect channel state information. In addition, we propose efficient user selection methods to enhance the performance of coordinated alignment. The results of our assessment draw important conclusions about the application of coordinated interference alignment in practical systems. In the second part of the thesis we consider a cellular system where each cell serves multiple users simultaneously on the same radio resource. In this scenario, we have to manage not only the inter-cell interference but also the multi-user interference. For this purpose, we propose a novel uncoordinated transmit precoding scheme for multi-user cellular networks which is based on the alignment of multi-user interference with partial and outdated inter-cell interference.
We show analytically that our scheme approaches the performance of the optimal transmission scheme. With the help of simulations we show that our proposal outperforms state-of-the-art non-alignment-based multi-user transmit precoding schemes. We further propose user selection methods which exploit the diversity gains and improve the system spectral efficiency. In order to assess the feasibility of our proposal in a real system, we evaluate our scheme under practical constraints such as imperfect information at the transmitter and limited feedback in the uplink channel. As a proof of concept we also evaluate the performance of our scheme with measured channels using a software-defined measurement platform. Finally, we assess the application of our proposal in future heterogeneous networks. Our results show that, as an interference-alignment-based transmission scheme, our scheme is a good candidate to manage the two-dimensional interference in multi-user cellular networks. It outperforms the non-alignment baselines in many scenarios even under practical constraints.
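The "align and suppress" idea admits a tiny numerical illustration (a two-antenna toy with made-up channels, not the thesis's precoding scheme): once transmit precoding has forced all interferers onto one common spatial direction, a single receive combiner orthogonal to that direction removes them all, leaving a free dimension for the desired signal.

```python
# Hedged toy sketch: 2-antenna receiver, all interference pre-aligned
# onto direction v, desired channel h. Combiner w with w^H v = 0
# suppresses every aligned interferer at once.

def suppress_aligned_interference(y, h, v):
    """Estimate the desired symbol from received 2-vector y, given the
    desired channel h and the common interference direction v."""
    # w = (conj(v2), -conj(v1)) satisfies w^H v = v2*v1 - v1*v2 = 0.
    w = (v[1].conjugate(), -v[0].conjugate())
    wHy = w[0].conjugate() * y[0] + w[1].conjugate() * y[1]
    wHh = w[0].conjugate() * h[0] + w[1].conjugate() * h[1]
    return wHy / wHh   # interference-free estimate of the symbol

h = (2 + 0j, 1 + 0j)   # desired channel (illustrative)
v = (1 + 0j, 2 + 0j)   # direction shared by the aligned interferers
s = 1 + 0j             # transmitted desired symbol
i_total = 5 + 0j       # sum of all aligned interference symbols
y = (s * h[0] + i_total * v[0], s * h[1] + i_total * v[1])

print(suppress_aligned_interference(y, h, v))  # → (1+0j)
```

However strong the aligned interference, it vanishes exactly; the thesis's contribution is achieving such alignment with only partial and outdated inter-cell information.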
  • Item (Open Access)
    Optische Messsysteme und Ein-Sensor-Bildgebungsverfahren für Biosensoren
    (2024) Berner, Marcel; Werner, Jürgen H. (Prof. Dr. rer. nat. habil.)
    This thesis presents the development of several measurement systems and methods for optical biosensor applications. The first part designs a universal experimental platform for testing new optical biosensor concepts based on the principle of laser-induced fluorescence (LIF). The platform supports the European research project Nanodem in developing a portable point-of-care testing (PoCT) device for live monitoring of immunosuppressant concentrations in the blood of transplant patients directly at the bedside. The platform concept developed in this thesis covers optoelectronic fluorescence excitation and detection, optical filter systems, the fluorescent dye, the material system of the transducer chips, the microfluidic system, and the automation of the process control. The starting point of the development is the derivation of a general physical model for LIF systems that guides the design of the platform. The transducer-chip concept, designed in cooperation with the Eberhard Karls University of Tübingen and based on laser-cut adhesive tapes, allows high flexibility in the geometry and construction of the transducer chips and supports the transfer of academic research results into industrial production. Thanks to easily adaptable fabrication processes, the photodetector arrays made of amorphous silicon can be adapted cost-effectively to arbitrary biosensor geometries. The achieved specific detectivity of D* = 11 × 10^12 Jones is on par with that of state-of-the-art detectors made of crystalline material. The achieved detection limit is c_{LOD,exp} = 26 nmol/l. Furthermore, the experimental data confirm the derived physical model.
The second part of this thesis presents a new optical method for spatially resolved measurement that observes a large number of image points simultaneously with only a single optical sensor. The method uses spatial light modulators (SLMs) to generate a spatially dependent optical modulation. The resulting optical carrier signals allow the signals received as a sum to be assigned to their points of origin. The so-called Fourier Spotter exploits the mathematical properties of the Fourier transform. By applying mutually phase-shifted modulation signals, the Fourier Spotter also allows the direct measurement of brightness differences between different observation points. This differential optical measurement principle is the core of a patent already granted to the author together with the University of Stuttgart. The novel optical measurement principle is suitable for integration into optical biosensing methods such as single color reflectometry (SCORE), which currently still depend on expensive specialized cameras. Conventional camera systems produce large volumes of data whose evaluation requires considerable computing power, which stands in the way of further development towards miniaturized, portable biosensor platforms. This thesis presents a successful experimental proof of concept of the Fourier imager based on brightness-difference measurements on a SCORE setup. A future extension of the Fourier Spotter with a line spectrometer would allow, in addition to spatially resolved observation, the simultaneous acquisition of the optical spectrum of every observed point. This hyperspectral extension would enable the first implementation of a multichannel optical biosensor platform based on reflectometric interference spectroscopy (RIfS).
The third part of this thesis generalizes the principle of the Fourier Spotter into a single-pixel camera method: AM-FDM imaging (amplitude-modulated frequency-division multiplexing). AM-FDM imaging is based on approximation methods that minimize crosstalk between the carrier signals. The system-theoretic model of AM-FDM imaging also covers Fourier spotting and allows comparison with raster scans as well as with known single-pixel camera methods such as Hadamard imaging. If the signal-to-noise ratio is limited by the noise of the detector system, AM-FDM imaging achieves a so-called multiplex gain a_mult = O(M) on the order of the number M of simultaneously observed image points. With the approximation methods currently used, AM-FDM imaging does not reach the performance of Hadamard imaging, the dominant single-pixel imaging method, in terms of signal-to-noise ratio, number of simultaneously observable image points, and achievable frame rate. However, the relationships between AM-FDM imaging and other known single-pixel camera methods discussed in this thesis suggest that a hitherto unknown approximation method exists that would put AM-FDM imaging on a par with Hadamard imaging. The results of the system-theoretic model were confirmed by simulation in Matlab and also hold for the Fourier Spotter. They show that, in the SCORE use case, modulation based on the Hadamard imaging principle is more advantageous. The granted patent on the optically differential measurement method also covers a differential variant of Hadamard imaging. Compared to computing differences from measured absolute values, the differential measurement method doubles either the signal-to-noise power ratio or the frame rate of Hadamard imaging.
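The carrier-separation idea behind the Fourier Spotter and AM-FDM imaging can be demonstrated in a few lines: each observation point is intensity-modulated at its own carrier frequency, the single detector records only the sum, and a DFT of that sum recovers each point's brightness. All numbers below are illustrative, not the thesis's hardware parameters:

```python
# Hedged sketch of frequency-division multiplexing onto one detector.
import math, cmath

N = 64                        # samples recorded by the single detector
brightness = [5.0, 2.0, 9.0]  # unknown intensities of three points
carriers = [3, 7, 11]         # one integer carrier frequency per point

# Detector output: sum of the cosine-modulated point intensities.
signal = [sum(b * math.cos(2 * math.pi * f * n / N)
              for b, f in zip(brightness, carriers))
          for n in range(N)]

def dft_bin(x, k):
    """Single DFT coefficient of the real sequence x at bin k."""
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
               for n in range(len(x)))

# Integer carriers below N/2 are orthogonal over N samples, so each
# bin isolates one point's brightness (the cosine contributes N/2).
recovered = [2 * abs(dft_bin(signal, f)) / N for f in carriers]
print([round(r, 6) for r in recovered])  # → [5.0, 2.0, 9.0]
```

With exactly integer carriers there is no crosstalk; the approximation methods mentioned above deal with the practical case where modulator constraints make the carriers only approximately orthogonal.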
  • Item (Open Access)
    Resilience of quantum optimization algorithms
    (2024) Ji, Yanjun; Polian, Ilia (Prof. Dr.)
    Quantum optimization algorithms (QOAs) show promise in surpassing classical methods for solving complex problems. However, their practical application is limited by the sensitivity of quantum systems to noise. This study addresses this challenge by investigating the resilience of QOAs and developing strategies to enhance their performance and robustness on noisy quantum computers. We begin by establishing an evaluation framework to assess the performance of QOAs under various conditions, including simulated noise-free and error-modeled environments, as well as real noisy hardware, providing a foundation for guiding the development of enhancement strategies. We then propose innovative techniques to improve the performance of algorithms on near-term quantum devices characterized by limited qubit connectivity and noisy operations. Our study introduces an effective compilation process that maximizes the utilization of classical and quantum resources. To overcome the restricted connectivity of hardware, we develop an algorithm-oriented qubit mapping approach that bridges the gap between heuristic and exact methods, providing scalable and optimal solutions. Additionally, we demonstrate, for the first time, selective optimization of quantum circuits on real hardware by optimizing only gates implemented with low-quality native gates, providing significant insights for large-scale quantum computing. We also investigate error mitigation strategies and their dependence on hardware features and algorithm implementation details, emphasizing the synergistic effects of error mitigation and circuit design. While error mitigation can suppress the effects of noise, hardware quality and circuit design are ultimately more critical for achieving high performance. Building upon these insights, we explore the cooptimization of algorithm design and hardware implementation to achieve optimal performance and resilience. 
By optimizing gate sequences and parameters at the algorithmic level and minimizing error-prone two-qubit gates during compilation, we demonstrate significant improvements in QOA performance. Finally, we explore the practical application of QOAs in real-world problems, emphasizing the importance of optimizing parameters in problem instances to identify optimal solutions. With extensive experiments conducted on real devices, this dissertation makes a substantial contribution to the field of quantum optimization, providing both theoretical foundations and practical strategies for addressing the challenges posed by near-term quantum hardware. Our findings pave the way for the realization of practical quantum computing applications and unlock the full potential of QOAs.
  • Item (Open Access)
    Adaptive error control for stratospheric long-distance optical links
    (2024) Parthasarathy, Swaminathan; Kirstädter, Andreas (Prof. Dr.-Ing.)
    Free-space optical (FSO) communication plays a crucial role in aerospace technology, utilizing lasers to establish high-speed wireless connections over long distances. FSO surpasses conventional RF wireless technology in various aspects and supports high-data-rate connectivity for services such as Internet access, data transfer, voice communication, and image transfer. High-Altitude Platforms (HAPs) have emerged as ideal hosts for FSO communication networks, offering ultra-high data rates for applications like high-speed Internet, video conferencing, telemedicine, smart cities, and autonomous driving. FSO via HAPs ensures minimal latency, making it suitable for real-time tasks like remote surgery and autonomous vehicle control. The swift long-distance communication links with low delays make FSO-equipped HAPs ideal for RF-congested areas, providing cost-effective solutions in remote regions and contributing to environmental monitoring. This thesis explores the use of adaptive code-rate Hybrid Automatic Repeat Request (HARQ) methods and channel state information (CSI) to improve the transmission efficiency of FSO links between HAPs. The study examines channel impairments such as atmospheric turbulence and static pointing errors, focusing on the weak-fluctuation regime of atmospheric turbulence. It explores the reciprocal behaviour of bidirectional FSO channels to improve performance efficiency, providing evidence of channel reciprocity. To address these impairments, the research proposes HARQ with an adaptive Reed-Solomon (RS) code-rate technique driven by different CSI types. Simulations of various scenarios are used to evaluate these methods, providing insight into the efficiency of HARQ protocols on inter-HAP FSO links, the importance of the different CSI types for adaptive-rate HARQ, and possible ways to improve system efficiency.
This thesis examines the channel model for inter-HAP FSO links in detail, taking atmospheric conditions and static pointing errors into account. The channel is modeled as a lognormal fading channel under a weak-fluctuation regime. The principle of channel reciprocity and the measures used to quantify it are discussed, providing a foundational understanding for the subsequent investigations. Forward Error Correction (FEC) schemes, with a specific emphasis on the Reed-Solomon (RS) scheme, and various Automatic Repeat reQuest (ARQ) schemes are thoroughly examined. A meticulous comparison of different ARQ schemes shows that Selective Repeat ARQ (SR-ARQ) is the most efficient for high-error-rate channels, making it the preferred choice for inter-HAP FSO channels, whereas Stop and Wait ARQ (SW-ARQ) and Go-Back-N ARQ (GBN-ARQ) are less suitable. An innovative approach is introduced that leverages various types of Channel State Information (CSI) to adjust the Reed-Solomon FEC code rate. Four types of CSI are employed: perfect CSI (P-CSI), reciprocal CSI (R-CSI), delayed CSI (D-CSI), and fixed mean CSI (F-CSI). The adaptation of the Reed-Solomon FEC code rate, combined with Selective Repeat ARQ, is explored, and the optimal power selection is identified through rigorous analysis. Simulation models built in OMNET++ are presented, covering the inter-HAP channel and an event-based Selective Repeat HARQ model. The study demonstrates reciprocity on the longest recorded ground-to-ground bidirectional FSO link, holding promise for mitigating signal scintillation caused by atmospheric turbulence. It evaluates the performance of different ARQ protocols and adaptive HARQ schemes in inter-HAP FSO communication systems.
The results show how channel state information, atmospheric turbulence, and pointing errors affect system performance. They also suggest ways to improve system efficiency, such as CSI prediction and soft combining. These findings offer valuable insights for the design and optimization of ARQ and HARQ schemes in inter-HAP FSO communication systems and suggest promising avenues for future research.
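The adaptive code-rate mechanism can be sketched as follows: an RS(n, k) code corrects up to t = (n − k)/2 symbol errors, so a CSI-based estimate of the symbol error probability fixes the least redundancy that still decodes, with a safety margin. The margin and the fallback-to-ARQ rule below are illustrative assumptions, not taken from the thesis:

```python
# Hedged sketch: choosing a Reed-Solomon message length k from CSI.
# RS(n, k) corrects t = (n - k) // 2 symbol errors; we pick the highest
# rate whose correction capability covers the expected error count
# times a safety margin (margin=2.0 is an illustrative choice).
import math

def pick_rs_k(n, symbol_error_prob, margin=2.0):
    expected_errors = symbol_error_prob * n
    t_needed = math.ceil(margin * expected_errors)  # errors to tolerate
    k = n - 2 * t_needed                            # 2t parity symbols
    return k if k >= 1 else None  # None: channel too bad, fall back to ARQ

for p in (0.01, 0.1, 0.3):  # e.g. optimistic P-CSI vs. pessimistic estimates
    print(p, pick_rs_k(255, p))
```

Good CSI (low estimated error probability) permits a high-rate code and high throughput; stale or pessimistic CSI forces more parity, which is exactly the trade-off the comparison of P-, R-, D- and F-CSI quantifies.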