Browsing by Author "Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)"
Now showing 1 - 20 of 32
Item Open Access
Architecture-based availability prediction and service recommendation for cloud computing (2023)
Bibartiu, Otto; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Cloud computing has become an essential pillar in the development of modern IT systems. IoT solutions such as smart factories and connected cars, or finance and e-commerce systems, to name just a few, have formed a strong availability dependency on the cloud. Consequently, it becomes increasingly important to assess the availability of a cloud application during its development phase in order to choose suitable cloud services, assure high service quality, and define robust service-level agreements with clients. However, a single cloud data center is already a highly complex system of hardware and software components, making availability assessments of cloud applications a challenging task. While researchers acknowledge infrastructure and communication faults in the cloud as important aspects of availability prediction, they usually model either the (compute) infrastructure or the communication part of a cloud service, disregarding that a cloud application consists of multiple interconnected services with potentially different replication degrees and common cause failures. Especially with the introduction of the Function-as-a-Service (FaaS) paradigm, which involves a large number of redundant service instances, availability models become computationally infeasible when modeling k-out-of-n services with large n. Usually, availability assessments guide design choices by probing for different cloud services that simultaneously fit availability requirements and other constraints such as cost. However, this task also becomes increasingly challenging in its own right due to the vast number of potential service offerings and individual configuration options. As a result, recent developments in cloud application modeling have introduced a wide range of technology-agnostic cloud modeling languages, offering abstractions or patterns that no longer require deep knowledge of concrete cloud services. However, the initial challenge remains: the more abstract a pattern, the larger the solution space, and the longer the time to find a suitable solution. This thesis addresses both challenges. First, it proposes a novel hierarchical availability model that utilizes Bayesian networks for the prediction task. Second, it presents a service recommendation system based on a novel pattern-based cloud modeling language called Clams, which provides a framework for custom search criteria in cooperation with meta-heuristics. In detail, the availability model enables developers to model cloud applications at any preferred level of component and network granularity, accounting for cascading infrastructure and communication faults, including the individual replication semantics of services. Moreover, this work introduces scalable Bayesian network structures to enable the modeling of FaaS offerings, or large-scale replicated services, with many instances. The presented service recommendation system provides a generic approach to utilizing meta-heuristics to exploit a component-based architectural description of a cloud application. Cloud computing patterns are used as architectural placeholders while at the same time encoding the solution space of concrete services.
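As a rough illustration of why large replication degrees are problematic, consider the availability of a k-out-of-n replicated service under the naive assumption of independent, identically available instances (a hedged Python sketch; the thesis's Bayesian networks go beyond this by also capturing common cause failures, and all numbers below are hypothetical):

```python
from math import comb

def k_out_of_n_availability(n: int, k: int, p: float) -> float:
    """Probability that at least k of n replicas are up, assuming each
    replica is independently available with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical FaaS-style service: 200 instances, 150 required, 99.9% each.
print(f"{k_out_of_n_availability(200, 150, 0.999):.12f}")
```

The closed form above only works because of the independence assumption; once common cause failures couple the instances, naive state enumeration grows exponentially in n, which is exactly the blow-up that scalable Bayesian network structures are designed to avoid.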
Combining the service recommendation system with the availability model, this work demonstrates its results by implementing a system that suggests services with minimal operational cost while adhering to availability constraints. To show the feasibility of the modeling concepts, this thesis analyses a set of thirty-one real-life architectural examples. Performance evaluations show that the service recommendation system can return a near-to-optimal solution in feasible time.

Item Open Access
Concepts and algorithms for efficient distributed processing of data streams (2013)
Rizou, Stamatia; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

During the last years, the proliferation of modern devices capable of capturing context information through various sensors has triggered the rise of context-aware systems, which automatically adapt their behaviour based on the detected context. For many emerging context-aware applications, context may include a huge number of entities, possibly dispersed geographically over a wide area. In such large-scale scenarios, the efficient processing of context information becomes a challenging task. In this dissertation, we focus on the problem of efficiently processing context information. In particular, we consider the problem of deriving high-level context information, also referred to as situations in the literature, from sensor data streams captured by a large set of geographically distributed sensors. First, we present the architecture of a distributed system that uses reasoning algorithms to detect situations in an overlay network of data stream processing operators. Then we introduce our strategies for optimally distributing data processing between processing nodes in order to save network resources, by optimizing the bandwidth-delay product, and to fulfill given QoS requirements, such as end-to-end latency constraints. To this end, we formulate three (constrained) optimization problems, which search for an optimal placement of operators onto physical hosts with respect to different application constraints. The proposed algorithms are executed in a distributed way, using local knowledge of the system. Our evaluation shows that our algorithms achieve good approximations of the optimal solutions while inducing limited communication overhead.

Item Open Access
Content-based routing in software-defined networks (2017)
Bhowmik, Sukanya; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Content-based routing, as provided by publish/subscribe systems, has emerged as a universal paradigm for interactions between loosely coupled application components, i.e., content publishers and subscribers, where published content is filtered and forwarded by content filters to interested subscribers. Over the past few decades, content-based publish/subscribe has primarily been implemented as an overlay network of software brokers. Even though these systems have proven to efficiently support content-based routing between a large number of distributed application components, such broker-based routing and content filtering in software results in performance (w.r.t. end-to-end latency, throughput rates, etc.) that is far behind the performance of network layer implementations of communication protocols. As a result, the goal of this thesis is to develop methods that enable content-based filtering and routing at line rate in the network layer by exploiting the capabilities of Software-Defined Networking (SDN).
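To make the filtering notion concrete, a content filter can be thought of as a set of attribute predicates that an event must satisfy. The following is a minimal, hedged Python sketch (attribute names and bounds are hypothetical); broker-based systems evaluate such predicates per event in software, which is precisely the work this thesis pushes onto switching hardware:

```python
def matches(event: dict, content_filter: dict) -> bool:
    """An event matches if every constrained attribute lies in its range;
    attributes the filter does not constrain are ignored."""
    return all(lo <= event.get(attr, float("nan")) <= hi
               for attr, (lo, hi) in content_filter.items())

subscription = {"price": (0.0, 50.0), "volume": (100, 1000)}
event = {"price": 42.0, "volume": 500, "sensor_id": 7}
print(matches(event, subscription))  # True
```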
In particular, this thesis focuses on realizing a high-performance SDN-based publish/subscribe middleware, called PLEROMA, while addressing major obstacles raised by data (forwarding) plane and control plane limitations of software-defined networks. More specifically, the following contributions are made in this thesis. Our first contribution is to provide methods that fulfill the functional requirements of the content-based publish/subscribe paradigm on the network layer in order to enable line-rate filtering and forwarding of published content in the data plane. We propose methods to establish paths between publishers and their relevant subscribers by installing content filters directly on hardware switches in the data plane. While the developed methods result in a publish/subscribe middleware whose performance (w.r.t. end-to-end latency, throughput rates, etc.) is significantly better than state-of-the-art solutions, a network layer implementation faces some serious challenges due to inherent limitations of software-defined networks. In fact, our next three contributions focus on addressing the problems associated with expressive filtering of content in the network layer, i.e., on hardware switches in the data plane, in the presence of hardware limitations. In particular, we address the limited flow table size and the limited number of bits available for filter representation in hardware switches, both of which curtail the expressiveness of content filters. Our contributions include various methods that use knowledge of the workload in the system to mitigate the adverse effects of these data plane limitations, thus improving bandwidth efficiency in the system. Not just the data plane but also the control plane can have its own limitations (w.r.t. scalability in the presence of dynamically changing subscription requests), which can pose a significant bottleneck for content-based routing on software-defined networks. As a result, our final contribution is to provide methods that enable concurrent and consistent control distribution, thus paving the way for a scalable and distributed control plane that copes with the high dynamics of an SDN-based publish/subscribe system.

Item Open Access
Efficient and secure event correlation in heterogeneous environments (2015)
Schilling, Björn; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

The importance of managing events has increased steadily over the last years and has reached a great magnitude in science and industry. The reasons for this are twofold. On the one hand, sensor devices are cheap and provide event information that is of great interest for a large variety of applications. In fact, sensors are ubiquitous in modern life. Nowadays, RFID tags are attached to goods and parcels to allow easy tracking. Numerous weather stations provide up-to-date information about temperature, pressure, and humidity to allow for precise weather forecasts worldwide. Lately, mobile phones have been equipped with various sensor devices like Global Positioning System sensors or acceleration sensors to increase the applicability of the phone. On the other hand, reacting to events has become an increasingly important factor, especially for business applications. The occurrence of a system failure, a sudden drop in the stock market, or a missing parcel can cause huge costs for a company if not handled properly.
As a consequence, detecting and reacting to events quickly is of great value and has led to a change in the design of modern software systems, where event-driven architectures and service-oriented architectures have become more and more important. With the emerging establishment of event-driven solutions, complex event processing (CEP) has become increasingly important in the context of a wide range of business applications such as supply chain management, manufacturing, or ensuring safety and security. CEP allows applications to asynchronously react to the changing conditions of possibly many business contexts by describing relevant business situations as correlations over many events. Each event corresponds either to a change of a business context or to the occurrence of a relevant business situation. This thesis addresses the need to cope with heterogeneity in distributed event correlation systems in order to i) reuse expressive and efficient correlation technology optimized for processing speed, ii) increase scalability by distributing correlation tasks over various correlation engines, iii) allow the migration of correlation tasks between heterogeneous engines and security domains, and iv) provide security guarantees among domains in order to increase the interoperability, availability, and privacy of correlation results. In particular, a framework called DHEP is presented that copes with these requirements.

Item Open Access
Efficient code offloading techniques for mobile applications (2017)
Berg, Florian; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Since the release of the first smartphone from Apple in the year 2007, smartphones have experienced rapidly rising popularity. A smartphone typically possesses, among other components, a touchscreen display as user interface, mobile communication interfaces for accessing the Internet, and a System-on-a-Chip integrating components like a central processing unit. This pervasive computing platform draws its power from a battery, and end users run different kinds of applications on it, from a calendar application to a high-end mobile game. Heavy utilization of local resources, for instance by playing a resource-demanding game, drains the limited energy budget within a few hours. Despite the constant increase in the memory, communication, and processing capabilities of smartphones since 2007, applications are also getting more and more sophisticated and demanding. As a result, the energy consumed on a smartphone was, still is, and will remain its main limiting factor. To prevent the limited energy resource from quick exhaustion, researchers have proposed code offloading for (resource-constrained) mobile devices like smartphones. Code offloading strives to increase the energy efficiency and execution speed of applications by utilizing a server instance in the infrastructure. To this end, a code offloading approach dynamically executes resource-intensive parts of an application on powerful remote servers in the infrastructure on behalf of a (resource-constrained) mobile device. During the remote execution of a resource-intensive application part on a remote server, the mobile device only waits in idle mode until it receives the result of the remotely executed application part.
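The underlying trade-off is often captured by a simple energy model: offloading pays off when transmitting the input and idling during the remote execution costs less energy than computing locally. The Python sketch below is the textbook formulation, not necessarily the exact model used in this thesis, and all parameter values are hypothetical:

```python
def should_offload(cycles, speed_local, speed_remote,
                   data_bytes, bandwidth,
                   p_compute, p_idle, p_tx) -> bool:
    """Offload iff the energy for transmission plus idle waiting is lower
    than the energy for local computation (a common textbook model)."""
    e_local = p_compute * (cycles / speed_local)        # local compute energy
    e_offload = (p_tx * (data_bytes / bandwidth)        # transmission energy
                 + p_idle * (cycles / speed_remote))    # idle-wait energy
    return e_offload < e_local

# Hypothetical: 2 GCycles at 1 vs. 8 GHz, 500 KB over 1 MB/s, powers in watts.
print(should_offload(2e9, 1e9, 8e9, 5e5, 1e6, 0.9, 0.3, 1.3))  # True
```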
Instead of executing an application part on its local resources, a (resource-constrained) mobile device thus benefits from the more powerful resources of a remote server by sending the information required for a remote execution, waiting in idle mode, and receiving the result of the remote execution. The process of offloading code from a (resource-constrained) mobile device to a powerful remote server in the infrastructure, however, faces several problems. For instance, code offloading introduces some overhead for additional computation and communication on a mobile device. Moreover, spontaneous disconnections during a remote execution can cause a higher energy consumption and execution time than a local execution without code offloading. To this end, this dissertation addresses the whole process of offloading code from a mobile device not only to one but also to multiple remote resources, comprising the following steps:

1) First, code offloading has to identify feasible parts of an application for remote execution, where the distributed execution of the identified application part is more beneficial than its local execution. A feasible part for remote execution typically has the following properties: a small amount of information to transmit before the remote execution, a resource-intensive computation that does not access local sensors, and a small amount of information to transmit after the remote execution. In this area, the dissertation presents an approach based on code annotations from application developers that automatically transforms a monolithic execution on a mobile device into a distributed execution on multiple heterogeneous resources. In contrast to related approaches in the literature, the annotation-based approach requires the least intervention from application developers and end users, keeping the overhead introduced on a mobile device low.

2) For an application part identified for remote execution, code offloading has to determine its execution site, executing the application part either on the local resources of the mobile device or on a remote resource in the infrastructure. In this area, the dissertation presents the offloading problem, where a mobile device decides whether to execute an application part locally or remotely. Furthermore, it presents an approach called "code bubbling" that shifts the decision making into the infrastructure. In contrast to related approaches in the literature, the decision-based approach on a mobile device and the bubbling-based approach minimize the execution time, energy consumption, and monetary cost of an application.

3) To determine the execution site for an application part identified for remote execution, code offloading has to obtain different parameters from the application, the participating resources, and the utilized links. For obtaining the required information from an application, the dissertation presents a bit-flipping approach that dynamically flips a bit whenever application-related information is modified. Furthermore, it presents an offload-aware Application Programming Interface (API) that encapsulates the application-related information required for code offloading.
In contrast to related approaches in the literature, the bit-flipping approach and the offload-aware API provide an efficient gathering of information at run-time, keeping the overhead introduced on a mobile device low.

4) Besides the information from an application, code offloading has to obtain further information from the participating resources and utilized links. In this area, the dissertation draws on the code bubbling approach already mentioned above. In contrast to related approaches in the literature, the bubbling-based approach makes the offload decision at the place where the related information occurs, keeping the overhead introduced on the mobile device, participating resources, and utilized links low.

5) In case of a remote execution of an application part, code offloading has to send the information required for the remote execution to the remote resource, which subsequently executes the application part on behalf of the mobile device. In this area, the dissertation presents code offloading with a cache on the remote side. The cache serves as a collective storage of results for already executed application parts, avoiding the repeated execution of previously run application parts. In contrast to related approaches in the literature, the caching-aware approach increases the efficiency of code offloading, keeping energy consumption, execution time, and monetary cost low.

6) While a remote resource executes an application part, code offloading has to handle the occurrence of failures, such as a failure of the remote resource or a disconnection. In this area, the dissertation presents a preemptable offloading of code with safe-points, which enables the interruption of an offloading process and the corresponding continuation of the remote execution on the mobile device without abandoning the partial result calculated remotely so far. Building on this, the dissertation further presents a predictive offloading of code with safe-points that minimizes the overhead introduced by safe-pointing and maximizes the efficiency of deadline-aware offloading. In contrast to related approaches in the literature, the preemptable approach with safe-pointing increases the robustness of code offloading in case of failures. Furthermore, the predictive approach to safe-pointing ensures minimal response times and maximal efficiency of applications despite failures.

7) At the end of a remote execution of an application part, code offloading has to gather the required information on the remote resource and send it to the mobile device. For gathering this information, a remote resource utilizes the same approaches as the mobile device (cf. the bit-flipping approach and the offload-aware API above).

8) Last, code offloading has to receive the information from the remote resource on the mobile device, install it, and continue the execution of the application there. For installing the information and continuing the execution locally, the mobile device likewise utilizes the approaches already mentioned above (cf. the bit-flipping approach and the offload-aware API).

Item Open Access
Fundamental models and algorithms for a distributed reputation system (2007)
Engler, Michael; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

With the increased significance of the Internet in our everyday lives, we embrace its benefits as a seemingly unlimited information source, warehouse, and general communication medium, but sometimes fall prey to its predators. Outside the online world, social network structures of friends or colleagues make it possible to identify malicious and reputable entities and to communicate recommendations or warnings accordingly. When interacting through open computer networks, these traditional mechanisms for establishing trust in the physical world are adapted by reputation systems, which allow trust in entities to be built and social network structures to be created on a much larger scale. In this dissertation, we investigate various models and algorithms required for realizing a fully decentralized reputation system with enhanced privacy properties and fine-grained trust modeling. To ensure the former, we bind trust to virtual identities instead of real identities and present extended destination routing, an approach that allows anonymous communication between pseudonyms without exposing any link to a real identity. To enable the latter, we introduce a generic trust model that supports modeling trust in various context areas as well as expressing context area dependencies that are taken into account when updating trust. The model definition permits incorporating several well-known trust update algorithms from related work. Subjecting the algorithms to a set of evaluation scenarios provides valuable insights into their specific performance. In order to capture the transitivity of trust, we present algorithms that simplify trust networks and then compute transitive trust with subjective logic operators. Finally, we propose mechanisms to protect trust, firstly by laying its foundation in trusted hardware and secondly by ensuring the authenticity of recommendations through the integration of an originality statement. This reputation system can be utilized by users and relying applications alike to determine the trustworthiness of other entities. While these building blocks are all essential for our system, many contributions can be applied to other reputation systems and even to other research areas as well.

Item Open Access
Fundamental storage mechanisms for location based services in mobile ad hoc networks (2009)
Dudkowski, Dominique; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

The proliferation of mobile wireless communication technology has reached a considerable magnitude. As of 2009, a large fraction of the people in most industrial and emerging nations is equipped with mobile phones and other types of portable devices. Supported by trends in miniaturization and the declining price of electronic components, devices are increasingly enhanced with localization technology that delivers the user's geographic position, for instance via the Global Positioning System. The combination of both trends enables location-based services, bringing information and services to users based on their whereabouts in the physical world, for instance in the form of navigation systems, city information systems, and friend locators.
A growing number of wireless communication technologies, such as Wireless Local Area Networks, Bluetooth, and ZigBee, enable mobile devices to communicate in a purely peer-to-peer fashion, thereby forming mobile ad-hoc networks. Together with localization technology, these communication technologies make it feasible, in principle, to implement distributed location-based services without relying on any support from infrastructure components. However, the specific characteristics of mobile ad-hoc networks, especially the significant mobility of user devices and the highly dynamic topology of the network, make the implementation of location-based services extremely challenging. Current research does not provide an adequate answer to how such services can be supported. Efficient, robust, and scalable fundamental mechanisms that allow for generic and accurate services are lacking. This dissertation presents a solution for the fundamental support of location-based services in mobile ad-hoc networks. A conceptual framework is outlined that implements mechanisms on the levels of routing, data storage, location updating, and query processing to support and demonstrate the feasibility of location-based services in mobile ad-hoc networks. The first contribution is the concept of location-centric storage and the implementation of robust routing and data storage mechanisms in accordance with this concept. This part of the framework provides a solution to the problems of data storage that stem from device mobility and dynamic network topology. The second contribution is a comprehensive set of algorithms for location updating and the processing of spatial queries, such as nearest neighbor queries. To address more realistic location-based application scenarios, these algorithms take into account the inaccuracy of position information of objects in the physical world. Extensive analytical and numerical analyses show that the proposed framework of algorithms possesses the necessary performance characteristics to allow the deployment of location-based services in purely infrastructureless networks. A corollary of these results is that currently feasible location-based services in infrastructure-based networks may be extended to the infrastructureless case, opening up new business opportunities for service providers.

Item Open Access
A generalized broadcasting technique for mobile ad hoc networks (2007)
Khelil, Abdelmajid; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Broadcasting is a major communication primitive required by many applications and protocols in Mobile Ad Hoc Networks (MANETs). It is frequently deployed for content distribution, service discovery or advertisement, and sensor data dissemination. Broadcast protocols are also a fundamental building block for realizing principal middleware functionalities such as replication, group management, and consensus. Broadcasting in MANETs has therefore recently been an active area of research. Most of the research conducted on broadcasting in MANETs has primarily focused on carefully selected application and evaluation scenarios. Consequently, the developed broadcasting schemes do not yield good performance in other scenarios. Different comparative studies show that the existing broadcasting techniques are tailored to only one class of MANETs with respect to node density and node mobility, and are unfortunately not likely to operate well in other classes.
Node spatial distribution is a key issue for the performance of broadcast protocols, since it determines the connectivity of the MANET. Our survey of potential MANET application scenarios shows a wide range of possible node spatial distributions and node mobilities. As a result, a MANET generally exhibits continuously changing network connectivity over space and time. Therefore, a generalized solution for broadcasting that accounts for the requirements of the various applications and adapts to the heterogeneous and evolving node spatial distribution and mobility is a major contribution. In this thesis, we present hypergossiping, a novel generalized broadcasting technique for MANETs. Hypergossiping integrates two adaptive schemes and efficiently switches between them depending on local node density. The first scheme is adaptive gossiping, which distributes messages within connected parts of the MANET. We adapted gossiping as follows. First, we established an analytical model for gossiping by adopting the SI model from epidemiology. Then, we used the model to adapt the gossiping forwarding probability to local node density. As a result, we provide a simple analytical expression that nodes use to set the appropriate forwarding probability depending on the current number of neighbors. Simulation results showed that adaptive gossiping efficiently propagates messages within a network partition independent of the node spatial distribution and node mobility in that partition. The second scheme is a broadcast repetition method, which detects partition joins using an efficient and localized heuristic and efficiently repeats the needed broadcasts upon detection of a partition join. Our approach is mobility-assisted, since it exploits the mobility of nodes to efficiently deliver messages in frequently partitioned scenarios. We defined mobility metrics that simplify the design of mobility-assisted concepts and used some of them to design a mobility-aware buffering strategy, which can significantly reduce the buffer overhead of hypergossiping. Simulation results in the standard network simulator ns-2 show that hypergossiping outperforms all existing strategies. Hypergossiping significantly increases the delivery ratio for a broad range of MANETs with respect to node density, node mobility, and network load, while providing high efficiency and scalability.

Item Open Access
Geographische Kommunikationsmechanismen auf Basis von feingranularen räumlichen Umgebungsmodellen (2010)
Dürr, Frank; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

The ever-increasing proliferation of powerful mobile communication technologies and positioning systems enables the realization of novel geographic communication mechanisms (geocast for short). With geocast, messages can be sent to all devices within a given geographic target area, i.e., the group of recipients is defined by their geographic location. The applications of geocast are manifold: traffic-jam information can be distributed specifically to all vehicles on a certain road segment, tourists can be informed about sights in their vicinity, and warnings or location-based advertisements can be disseminated within selected spatial areas. This thesis focuses on geocast mechanisms that allow the fine-grained, location-based distribution of messages.
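As a toy illustration of geometric geocast addressing (a hedged Python sketch; a real system would use geodetic coordinates and the richer target-area descriptions developed below, and all positions here are invented):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangular target area in some planar projection."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

target = Rect(10.0, 20.0, 30.0, 40.0)              # hypothetical road segment
receivers = {"car_a": (12.0, 25.0), "car_b": (55.0, 25.0)}
group = [r for r, (x, y) in receivers.items() if target.contains(x, y)]
print(group)  # ['car_a']
```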
The main contributions of this work are, first, a fine-grained location model and addressing concept for geocast that allow locations to be specified and addressed both by geometric figures and by symbolic identifiers such as room or floor numbers. Besides defining the target areas of messages and the positions of receivers, the model's main task is to compare target areas with receiver positions in order to determine the set of recipients. Second, scalable geocast protocols for the efficient dissemination of messages are proposed that provide the scalability necessary to support both a large number of locations and large groups of recipients. The proposed location model uses a hybrid modeling concept that allows locations to carry symbolic identifiers as well as geometric descriptions, where geometries can be modeled as two-dimensional or 2.5-dimensional figures. The symbolic part of the model is based on a hierarchy of locations according to the spatial inclusion relation. In addition, structural rules for building the symbolic hierarchy are introduced, leading to a lattice-based model that, in contrast to simple hierarchical models, also supports the definition of overlapping locations. Furthermore, the integration of local location models is supported, which can also be used to define mobile target areas, e.g., a particular car of a train. Supporting both geometric and symbolic descriptions requires special concepts for comparing target areas with receiver positions. To this end, the proposed model assigns geometric extents to symbolic locations and also supports the approximation of symbolic locations by imprecise location information. The second part of this work is devoted to approaches for routing symbolically addressed geocast messages. Two basic approaches are considered: a directory-based approach and message routing in an overlay network on top of the Internet infrastructure. The directory-based approach first maps the symbolic geocast address to a set of IP addresses, which are then used for delivery in a second step, either via IP unicast or IP multicast. Especially for larger groups of recipients, IP multicast lends itself to efficient delivery. However, because of the large number of multicast groups required by a fine-grained global location model, geocast places corresponding demands on the multicast protocols used. The use of explicit multicast (Xcast) is therefore proposed, which is particularly suited to a large number of small to medium-sized groups. The proposed geocast overlay networks consist of a set of geocast routers that interpret symbolic addresses and determine from them the next routers on the path to the target area. The basic idea is to map the hierarchical symbolic location model onto a corresponding hierarchical overlay network structure.
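A minimal sketch of symbolic addressing over such an inclusion hierarchy might look as follows (hedged: the path-string encoding is an assumption made for illustration; the thesis's lattice-based model is more general, e.g., a location may have several parents to express overlap, which a single path cannot capture):

```python
def covers(target: str, position: str) -> bool:
    """True if the symbolic position lies at or below the target location
    in the inclusion hierarchy, both written as '/'-separated paths."""
    t, p = target.split("/"), position.split("/")
    return p[:len(t)] == t

receivers = {
    "phone_1": "campus/building1/floor3/room311",   # hypothetical addresses
    "phone_2": "campus/building2/floor1/room102",
}
target = "campus/building1/floor3"
print([r for r, pos in receivers.items() if covers(target, pos)])  # ['phone_1']
```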
In accordance with the proposed location models, a tree-shaped and a lattice-based geocast overlay network were designed, including the mechanisms required to manage these networks (adding/removing geocast routers, handling router failures). As an optimization of the fundamentally hierarchical network structures, several strategies were devised for augmenting the hierarchical networks with additional shortcut links. The evaluation demonstrated the efficiency of the proposed approaches, e.g., with respect to the achieved message path length.

Item Open Access
Key distribution schemes for resource-constrained devices in wireless sensor networks (2007)
Wacker, Arno Rüdiger; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Wireless sensor networks based on highly resource-constrained devices require symmetric cryptography in order to make them secure. Integral to this is the exchange of unique symmetric keys between two devices. In this dissertation, we propose three novel decentralized key distribution schemes that guarantee the confidentiality of a key exchange even if an attacker has compromised some of the devices in the network. Our first key distribution scheme -- the basic key distribution scheme -- guarantees the confidentiality of any newly established key in case there are only eavesdropping attackers and no device failures. Our second scheme -- the fault-tolerant key distribution scheme -- extends the basic scheme so that more powerful attackers and device failures can also be handled. Our third key distribution scheme -- the extended key distribution scheme -- is also based on the basic scheme but is further optimized in terms of memory consumption and network traffic. A central objective of all three designs was to minimize resource consumption on the individual devices. We evaluate our schemes in terms of attacker resilience, memory requirements, and network traffic, both through theoretical analysis and through simulations.

Item Open Access
Konzepte und Mechanismen für die Darstellung von sicherheitskritischen Informationen im Fahrzeug (2017)
Gansel, Simon; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

The increasing use of in-vehicle applications such as navigation, video playback, or the speed display, which use a graphical representation instead of physical gauges, goes hand in hand with an increase in the number of digital displays built into vehicles. Besides the displays of the head unit and the instrument cluster, there are displays in the headrests as well as the head-up display. Since each display usually has its own electronic control unit (ECU), this trend also leads to a growing number of ECUs. This, however, entails scaling problems and increasing complexity, as well as greater installation space requirements, rising costs, and higher power consumption. To counter these problems, a consolidation of ECUs is pursued. Automotive applications, however, differ considerably in their safety criticality, since they have different impacts on the functional safety of the vehicle. For instance, rendering certain warning lamps is safety-critical because they are relevant to the safety of the occupants, whereas playing a DVD must merely meet quality expectations.
The different applications must not influence each other unintentionally, which requires isolation that so far has been realized through physically separated hardware platforms. For the reasons stated above, this isolation must now be implemented in software. Virtualization, which encapsulates applications in virtual machines, is particularly suited to this task. Virtualization currently guarantees isolation in the use of resources such as CPU and memory and prevents unintended or malicious interference. However, this isolation does not extend to the use of graphical resources such as displays and the GPU and, in particular, cannot satisfy the requirements of the automotive domain. Conflict-free access to display areas that takes the safety criticality of the applications into account is essential for safety while driving. This issue was investigated, and suitable concepts were developed, within the publicly funded ARAMiS project. In this work, various requirements from frameworks such as ISO standards and legal regulations are analyzed and reduced to seven categories of requirements that the graphical system in the vehicle must fulfill. Based on these requirements, an architecture for a domain server is proposed that uses virtualization and several components to provide isolation between graphical applications of different safety criticality. Sharing the displays among applications of different criticality poses a particular challenge. The consolidation of ECUs such as the head unit and the instrument cluster enables the flexible and dynamic use of the many displays now available to the applications. The dynamic assignment of display areas must satisfy the various requirements and avoid driver distraction at all times. To this end, access control for the display areas is necessary. Contexts are used to determine dynamically which application may access which display area. A context can be derived from vehicle sensor information (e.g., the current speed) or from application states (e.g., which entry in a selection list is chosen). This work proposes an access control model that regulates access to display areas depending on the context of the vehicle and of the applications. For maximum extensibility, permissions for display areas can be delegated between applications, which may, for example, originate from different third-party providers. The access control model is fully formally defined, and certain properties, such as conflict-freedom of access to display areas, are proven on the basis of defined model states. The access control model is evaluated using an implementation of the concepts, showing that the latency introduced by the access control is low enough for in-vehicle scenarios. In addition, a concept is proposed for the compositing of windows, which contain the graphical content of applications and are rendered on a display according to their size and position.
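Before turning to the compositing details, a toy Python illustration of such a context-dependent access check (hedged: the applications, display areas, and the speed rule are invented for this sketch and are not taken from the thesis):

```python
# A permission grants an application a display area only while a context
# predicate over vehicle/application state holds.
PERMISSIONS = [
    ("warning_lamps", "cluster_main", lambda ctx: True),                 # always
    ("video_player",  "cluster_main", lambda ctx: ctx["speed"] == 0.0),  # parked
]

def may_render(app: str, area: str, ctx: dict) -> bool:
    return any(a == app and r == area and pred(ctx)
               for a, r, pred in PERMISSIONS)

print(may_render("video_player", "cluster_main", {"speed": 50.0}))  # False
print(may_render("video_player", "cluster_main", {"speed": 0.0}))   # True
```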
A distinction is made between rectangular windows and windows that can take an arbitrary shape. Rectangular windows are common in existing window systems, for which two popular approaches exist for compositing multiple, partly overlapping windows. This work proposes a hybrid compositing approach that exploits the advantages of both approaches to achieve more efficient compositing, as demonstrated in several scenarios. Windows of arbitrary shape require different compositing approaches. To support the flexible capabilities of the access control model throughout, a further compositing approach is therefore proposed that defines windows by means of bitmasks, which are also used in the permissions for display areas. The compositing then guarantees that only those pixels are written to the display that are defined in the access permission via the bitmask. Suitable evaluations show that this property introduces overhead for the compositing but remains applicable to in-vehicle scenarios. To evaluate the concepts for the access control model and the compositing of display areas, the virtualization-based system architecture is implemented in a demonstrator. Using this demonstrator, built in the form of a cockpit within the ARAMiS project, various in-vehicle scenarios are demonstrated, showing that the proposed concepts make it possible to consolidate the separate hardware platforms for the instrument cluster and the head unit while meeting the various requirements of safety-critical in-vehicle applications.

Item Open Access
Load shedding in window-based complex event processing (2022)
Slo, Ahmad; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Complex event processing (CEP) is a powerful paradigm for detecting patterns in continuous input event streams. The application area of CEP is very broad, e.g., transportation, the stock market, network monitoring, game analytics, retail management, etc. A CEP operator performs pattern matching by correlating input events to detect important situations (called complex events). The criticality of detected complex events depends on the application. For example, in fraud detection systems in banks, detected complex events might indicate that a fraudster is trying to withdraw money from a victim's account. Naturally, the complex events in this application are critical. On the other hand, in applications like network monitoring, soccer analysis, and transportation, the detected complex events might be less critical. As a result, these applications might tolerate the imprecise detection or loss of some complex events. In many applications, the rate of input events is high and exceeds the processing capacity of CEP operators. Moreover, for many applications it is important to detect complex events within a certain latency bound, as complex events detected late might become useless. For CEP applications that tolerate the imprecise detection of complex events and have limited processing resources, one way to keep the given latency bound is load shedding.
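Generically, load shedding boils down to keeping only the most useful work when the queue grows beyond what the latency bound allows; the following Python fragment is a hedged, generic sketch of that idea (not one of the thesis's specific approaches, with invented events and utilities):

```python
import heapq

def shed(queue: list, utility: dict, capacity: int) -> list:
    """If more events are queued than can be processed within the latency
    bound, keep only the `capacity` events with the highest utility."""
    if len(queue) <= capacity:
        return queue
    return heapq.nlargest(capacity, queue, key=lambda e: utility.get(e, 0.0))

events = ["goal", "pass", "foul", "kick", "throw"]
utility = {"goal": 0.9, "pass": 0.2, "foul": 0.5, "kick": 0.3, "throw": 0.1}
print(shed(events, utility, capacity=3))  # ['goal', 'foul', 'kick']
```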
Load shedding reduces the overload on a CEP operator by either dropping events from the operator's input event stream or dropping partial matches (PMs for short) from the operator's internal state. This decreases the number of queued events and increases the operator's processing rate, enabling the operator to maintain the given latency bound. Of course, dropping might adversely impact the quality of results (QoR). Therefore, it is crucial to shed load in a way that has a low impact on QoR. Only limited work exists on load shedding in the CEP domain. Therefore, in this thesis, we realize a load shedding library that contains several load shedding approaches for CEP systems. Our shedding approaches drop events and PMs, shed events at different granularity levels, and use several features to predict the importance/utility of events and PMs. More specifically, our contributions are as follows. First, we precisely define the quality of results (QoR), using real-world examples and the different pattern matching semantics defined in the CEP domain. Second, we propose a load shedding approach (called pSPICE) that drops PMs to maintain a given latency bound. pSPICE uses a Markov chain and a Markov reward process to predict the utility of PMs. Moreover, pSPICE adaptively calculates the number of PMs that must be dropped to maintain the given latency bound. As our third and fourth contributions, we develop two load shedding approaches called eSPICE and hSPICE. eSPICE drops events from windows to maintain the given latency bound, while hSPICE drops events from both windows and PMs. Both approaches use a probabilistic model to predict event utilities, and for both we provide algorithms that predict the utility thresholds needed to drop the required number of events. Additionally, for eSPICE, we develop an algorithm that adaptively calculates the number of events that must be dropped to maintain the given latency bound. Finally, we propose a load shedding approach (called gSPICE) that drops events from the input event stream and from windows to maintain the given latency bound. gSPICE also predicts event utilities using a probabilistic model; to store the event utilities efficiently, we develop a data structure based on Zobrist hashing. Furthermore, gSPICE uses well-known machine learning approaches, e.g., decision trees and random forests, to estimate event utilities. We extensively evaluate the proposed load shedding approaches on several real-world and synthetic datasets using a wide range of CEP queries.

Item Open Access
Model-driven optimizations for public sensing systems (2015)
Philipp, Damian; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

The proliferation of modern smartphones such as the Apple iPhone or Google Android phones has given rise to Public Sensing, a new paradigm for sensor data acquisition using spare resources of commodity smartphones. As smartphones are already deployed wherever people are present, data collection is enabled at an urban scale. Having access to such a wealth of data facilitates the creation of applications that depend on real-world information in a way that may have a lasting impact on our everyday life. However, creating large-scale Public Sensing systems is not without its challenges.
On the data requesting side, an interface is required that allows arbitrary sensing tasks to be specified independent of the mobility of participants, thus facilitating user acceptance of Public Sensing. On the data gathering side, as many people as possible must participate in the system and thus provide a sufficient amount of data. To this end, the system must conserve the resources shared by participants as much as possible, the main concern being energy: participants will withdraw from the system when participation significantly impacts the battery life of their smartphone. We address the aforementioned issues in the context of two applications: indoor map generation and large-scale environmental data acquisition. In the area of indoor map generation, we first address the problem of building an indoor map directly from odometry traces. In contrast to existing approaches, our focus is to extract the maximum amount of information from trace data without relying on additional features such as WiFi fingerprints. Furthermore, we present an approach to improve indoor maps derived from traces using a formal grammar that encodes structural information about similar indoor areas. Using this grammar allows us to extend an incomplete trace-based map to a plausible layout for the entire floor while simultaneously improving the accuracy of floor plan objects observed by odometry traces. Our evaluations show that the accuracy of grammar-based maps in the worst case is similar to the accuracy of trace-based maps in the best case, thus proving the benefit of the grammar-based approach. To improve the energy efficiency of the mapping process, we furthermore present a generic quality model for trace-based indoor maps. This quality metric is used by a scheduling algorithm that instructs a participating device to disable its energy-intensive sensors while it travels in an area that has already been mapped with high quality, enabling energy savings of up to 15%. In the area of large-scale environmental data acquisition, we first present the concept of virtual sensors as a mobility-independent abstraction layer. Applications configure virtual sensors to report a set of readings at a given sampling rate at a fixed position. The Public Sensing system then selects smartphones near the position of a virtual sensor to provide the actual data readings. Furthermore, we present several optimization approaches geared towards improving the energy efficiency of Public Sensing. In a local optimization, smartphones near each individual virtual sensor coordinate to determine which device should take a reading, thus avoiding oversampling of the virtual sensor. The local optimization can achieve a 99% increase in efficiency with the most efficient approaches and exhibits only about a 10% decrease in result quality under the worst conditions. Furthermore, we present a global optimization, where a data-driven model is used to identify the subset of most interesting virtual sensors. Data is obtained from this subset only, while readings for the other virtual sensors are inferred from the model. To this end, we present a set of online learning and control algorithms that can create a model in just hours or even minutes and continuously validate its accuracy.
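The gist of such model-driven acquisition can be sketched in a few lines of Python (hedged: the trivial offset model, sensor names, and values below are invented; the actual models and the choice of which sensors to sample are the subject of the thesis):

```python
def infer_all(read_fn, sampled: list, offsets: dict) -> dict:
    """Take costly readings only for `sampled` sensors and infer the rest
    from a previously learned per-sensor offset model."""
    base = sum(read_fn(s) for s in sampled) / len(sampled)
    return {s: base + off for s, off in offsets.items()}

offsets = {"vs_park": 0.0, "vs_street": 1.2, "vs_roof": -0.8}  # learned model
readings = infer_all(lambda s: 21.5, ["vs_park"], offsets)
print(readings)  # {'vs_park': 21.5, 'vs_street': 22.7, 'vs_roof': 20.7}
```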
Evaluations show that the global optimization can save up to 80% of energy while providing inferred temperature readings that match an error bound of 1°C up to 100% of the time.

Item Open Access
Non-functional requirements in publish/subscribe systems (2013)
Tariq, Muhammad Adnan; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Content-based publish/subscribe has gained high popularity for the large-scale dissemination of dynamic information in the form of events from publishers to subscribers in a decoupled fashion. Yet, it is highly challenging to achieve scalability without sacrificing the expressiveness of subscriptions (user queries) in such systems, especially in a peer-to-peer (P2P) environment where subscribers and publishers are also responsible for forwarding events by forming an overlay network. Moreover, support for non-functional requirements such as quality of service (e.g., end-to-end delay, bandwidth, etc.) and security (e.g., authentication, confidentiality, etc.) raises many open research questions. The main advantage of publish/subscribe - its inherent decoupling - turns out to be a major obstacle to fulfilling non-functional requirements with conventional methods. Therefore, the goal of this thesis is to develop methods and algorithms that i) enable the scalable dissemination of events, ii) support different quality of service aspects, and iii) provide basic security mechanisms in a P2P content-based publish/subscribe system. In particular, the following contributions are made in this thesis. As a first contribution, we propose a distributed algorithm to disseminate events in a publish/subscribe system respecting subscriber-defined QoS constraints in terms of delay requirements and available bandwidth. The second contribution focuses on minimizing the overall resource usage, i.e., bandwidth consumption and processing load on peers (publishers and subscribers), in a publish/subscribe system. This contribution develops an efficient and scalable method to reduce the rate of events that peers receive and forward despite having no matching subscription (i.e., false positives) by means of subscription clustering, using techniques from spectral graph theory. The first two contributions target overlay-level methods for the efficient dissemination of events and quality of service. The third contribution, however, proposes underlay-aware methods that explicitly take into account the properties of the underlying physical network and its topology to construct an efficient publish/subscribe routing overlay with low relative delay penalty and low stress on the physical links. Finally, as our last contribution, we present novel methods to provide authentication of publishers and subscribers as well as confidentiality and integrity of events using techniques from pairing-based cryptography. Additionally, an algorithm is developed to preserve weak subscription confidentiality in the presence of interest clustering of subscribers.

Item Open Access
Observing physical world events through a distributed world model (2007)
Bauer, Martin Peter; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

The topic of this dissertation is the observation of physical world events through a distributed world model. The events of interest thus occur in the world we live in, and the basis for their observation is a model of the relevant aspects of the physical world.
This model includes rather static aspects like geometric models of stationary objects, e.g., houses and streets, but also dynamic aspects, e.g., the position of mobile users or the temperature. With the proliferation of mobile computing devices like personal digital assistants or mobile phones with significant computing and communication capabilities, there is a trend to extend computer support from the desktop to the physical world. As the focus of the mobile user may be on other tasks, computer support should be proactive, providing the user with information and services relevant to his current situation. The observation of high-level physical world events is an enabler for these new kinds of services. Due to the size of the data, the different characteristics of the data, and a multitude of providers, the world model data needed for the observation can be distributed over a number of servers. We present a novel event service architecture that allows the observation of complex high-level events through a distributed world model. As the accuracy of the data is limited due to the characteristics of both the underlying sensor data and the computer network, this has to be taken into account. We propose a concept for specifying physical world events together with a threshold probability above which an event is considered to have occurred. We then show how physical world events can be observed by calculating the occurrence probability and comparing it to the specified threshold probability. Finally, we present an evaluation based on a prototype implementation with a number of concrete events. The evaluation focuses on both the performance and the quality of the observation, showing the general feasibility of our approach.

Item Open Access
On consistency and distribution in software-defined networking (2020)
Kohler, Thomas; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)

Software-defined Networking (SDN) is an emerging networking paradigm promising flexible programmability and simplified management. Over the last years, SDN has built up huge momentum in academia and has had a large practical impact through large-scale adoption by big industrial players like Google, Facebook, and Microsoft, driving cloud computing, data center networks, and their interconnection in SDN-based wide-area networks. SDN is a key enabler for high dynamics in terms of network reconfiguration and innovation, allowing the deployment of new network protocols and substantially expanding the networking paradigm by moving applications into the network, both at unprecedented pace and ease. The SDN paradigm is centered around the separation of the data plane from the logically centralized but typically physically distributed control plane, which programs the forwarding behaviour of the network devices in the data plane based on a global view. Especially the requirements on correctness, scalability, availability, and resiliency raised by practical adoption at scale have put a strong emphasis on consistency and distribution in the SDN paradigm. This thesis addresses various challenges regarding consistency and distribution in Software-defined Networking. More specifically, it focusses on and contributes to the research areas of update consistency, flexibility in control plane distribution, and the data plane implementation of a distributed application. Reconfiguring an SDN-based network inevitably requires updating the rules that determine the forwarding behaviour of the devices in its data plane.
Updating these rules, which reside on the inherently distributed data plane devices, is an asynchronous process. Hence, packets traversing the network may be processed according to a mixture of new and old rules during the update process. The inconsistency effects that consequently arise can severely degrade network performance and can break stipulated network invariants, for instance on connectivity or security. We introduce a general architecture for network management that is aware of expectable update-induced inconsistency effects and allows for an appropriate selection of an update mechanism and its parameters in order to prevent those effects. We thoroughly analyze update consistency for the case of multicast networks, show crucial particularities, and present mechanisms for the prevention and mitigation of multicast-specific inconsistency effects. SDN's separation of control has, on the one hand, been deemed rather strict: moving any control "intelligence" from the data plane devices to remote controller entities increases control latency. On the other hand, the coupling between controller and data plane devices is quite tight, hindering the free distribution of control logic. We therefore present a controller architecture enabling flexible, full-range distribution of network control. The architecture is based on decoupling through an event abstraction and a flexible dissemination scheme for those events based on the content-based publish/subscribe paradigm. This lightweight design allows control logic to be pushed back down onto data plane devices. Thus, we expand SDN's control paradigm and enable the full spectrum from fully decentralized control, through local control that still profits from a global view, up to fully centralized control. This scheme allows trading off the scope of state data, consistency semantics and synchronization overhead, control latency, and the quality of control decisions. Furthermore, our implementation covers a large set of mechanisms for improving control plane consistency and scalability, such as inherent load balancing, fast autonomous control decision making, detection of policy conflicts, and a feedback mechanism for data plane updates. In the last area, we focus on the data plane implementation of a distributed application from the domain of message-oriented middleware. We implement Complex Event Processing (CEP) on top of programmable network devices employing data plane programming, a recent major trend in SDN, specifically using the P4 language. We discuss the challenges entailed in distributed data plane processing and address aspects of distribution and consistency, in particular consistency in stateful data plane programming, where the internal state that determines how packets are processed is changed by this very processing, in turn changing how subsequent packets are processed. Since packet processing is executed in parallel on different execution units of the same device sharing the same state data, strong consistency semantics are required to ensure application correctness. Enabled by P4's flexible and powerful programming model, our data plane implementation of CEP yields greatly reduced latency and increased throughput.
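To give a flavor of what such compiled pattern logic does conceptually, the following Python sketch mimics, in plain software, a state machine that detects a simple event sequence over a window of historic events. It is only an illustration of the general technique under assumed semantics; the class name, event names, and window handling are invented here and do not reproduce the thesis' rule specification language or its generated P4 code.

```python
# Minimal sketch of the idea behind compiling a complex-event pattern into a
# state machine, illustrated in Python rather than P4. All names are
# hypothetical; the actual compiler and its P4 targets are not shown here.

class SequencePattern:
    """Detects the event sequence (first, second) within a bounded window."""

    def __init__(self, first, second, window_size):
        self.first = first            # event type opening the pattern
        self.second = second          # event type completing the pattern
        self.window_size = window_size
        self.window = []              # historic events, bounded by window_size

    def process(self, event):
        # Keep a bounded window of past events, mirroring the operator state
        # that a stateful data plane program would hold in registers. In P4,
        # this state would be shared by parallel pipeline units, which is
        # where the strong consistency requirements discussed above come in.
        self.window.append(event)
        if len(self.window) > self.window_size:
            self.window.pop(0)
        # The complex event fires when the completing event arrives while the
        # opening event is still inside the window.
        if event == self.second and self.first in self.window[:-1]:
            return f"complex event: {self.first} -> {self.second}"
        return None

detector = SequencePattern("overheat", "shutdown", window_size=4)
for e in ["ok", "overheat", "ok", "shutdown"]:
    match = detector.process(e)
    if match:
        print(match)
```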
The implementation comprises a compiler that translates patterns for the detection of complex events, specified in our rule specification language, into P4 programs consisting of a state machine and operators that process so-called windows of historic events.Item Open Access Optimized information discovery in structured peer-to-peer overlay networks(2011) Memon, Faraz; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)Peer-to-peer (P2P) overlay networks allow for efficient information discovery in large-scale distributed systems. Although point queries are well supported by current P2P systems - in particular systems based on distributed hash tables (DHTs) - providing efficient support for more complex queries remains a challenge. Therefore, the goal of this research is to develop methodologies that enable efficient processing of complex queries, in particular multi-attribute range queries, over DHTs. Generally, support for multi-attribute range queries over DHTs has been provided either by creating an individual index for each data attribute or by creating a single index over the combination of all data attributes. In contrast to these approaches, we propose to create and modify indices using the attribute combinations that dynamically appear in the multi-attribute range queries posed to the system. In order to limit the overhead induced by index maintenance, the total number of created indices has to be bounded. Thus, one of the major problems is to create a limited number of indices such that the overall system performance is optimal for multi-attribute range queries. We propose several index recommendation algorithms that implement heuristic solutions to this NP-hard problem. Our evaluations show that these heuristics lead to close-to-optimal system performance for multi-attribute range queries. The final outcome of this research is an adaptive DHT-based information discovery system that adapts its set of indices according to the dynamic load of multi-attribute range queries in the system. The index adaptation is carried out using a four-phase index adaptation process. Our evaluations show that the adaptive information discovery system continuously optimizes the overall system performance for multi-attribute range queries. Moreover, compared to a non-adaptive system, our system achieves performance improvements of several orders of magnitude.Item Open Access Performance-oriented communication concepts for networked control systems(2022) Carabelli, Ben W.; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)Networked control systems (NCS) integrate sensors, actuators, and digital controllers using a communication network in order to control physical processes. They can be found in diverse application areas, including automotive and aircraft systems, smart homes, and smart manufacturing systems in the context of Industry 4.0. Because control systems have demanding Quality of Service (QoS) requirements, the provisioning of appropriate communication services for NCS is a challenge. Moreover, the trend of steadily increasing digitization in many fields will likely lead to control applications with more complex system integration, especially in large-scale systems such as smart grids and smart cities. The proliferation of NCS in such an environment clearly depends on strong methods for integrating communication and control. However, there currently remains a gap between these two domains.
On the one hand, control-theoretic design and analysis methods for NCS have been based on simplistic and abstract network connection models. On the other hand, communication networks are optimized for conventional performance metrics such as throughput and latency, which do not readily translate into application-specific Quality of Control (QoC) metrics. The goal of this thesis is to provide performance-oriented concepts for the design of communication services for NCS. In particular, methods for scheduling and routing the traffic of NCS and for increasing their reliability through replication are developed on the basis of integrated models that capture the relationship between the control-relevant characteristics of communication services and the mechanisms used to provide those services in the network. This thesis makes the following contributions. First, we address the problem of optimally arbitrating limited communication bandwidth for a group of NCS in a shared network by designing a performance-aware dynamic priority scheduler. The resulting first scheduling policy provides asymptotic stability guarantees for each NCS and performance bounds on the joint QoC. While it is efficient to implement on the data link layer with stateless priority queueing, it requires initially solving a large optimization problem comprising all NCS to determine the scheduler parameters. To increase scalability, we therefore relax the scheduling problem by separating the NCS traffic into deterministic transmissions with real-time guarantees and opportunistic traffic used for QoC optimization. The resulting second scheduling policy imposes no QoS constraints on opportunistic traffic, yields less conservative stability guarantees, and allows scheduler parameters to be calculated for each NCS separately and thus much more efficiently. Second, we address the problem of optimally routing NCS traffic in networks with random latency distributions by designing a cross-layer communication service for stochastic NCS. The routing algorithm exploits trade-offs between delay and in-time arrival probabilities to find a route that provides a predefined level of QoC while minimizing network load. Third, we address the problem of active replication of controllers in order to increase the reliability of NCS subject to crash failures and message loss. While existing replication schemes for real-time systems focus only on ensuring that no conflicting values are sent to actuators, we develop stronger consistency concepts that provide replication transparency for control systems. We present a corresponding replication management protocol that achieves high availability and low latency at low message cost, and evaluate it using physical experiments.Item Open Access Real-time scheduling for 3D rendering on automotive embedded systems(2019) Schnitzer, Stephan; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)The use of 3D graphics is becoming increasingly popular in the automotive domain. For example, Mercedes-Benz demonstrated in the F125 prototype car how the analog needles of the instrument cluster can be replaced by digital displays. The trend towards 3D applications points in two directions: on the one hand towards more critical applications such as the speed display, on the other hand towards third-party applications obtained, for example, through an app store. To satisfy isolation requirements, new functions in cars have traditionally been implemented by adding new electronic control units.
However, to save cost, energy, and installation space in the vehicle, all 3D applications should share a single hardware platform and thus a single GPU as a common resource. For time-sensitive applications such as the speed display, this raises the challenge of guaranteeing real-time rendering. This requires effective concepts for real-time GPU scheduling that can guarantee safety and isolation for 3D rendering. Since current GPUs are not preemptible, a deadline-based scheduler must know the execution time of GPU commands in advance. Unfortunately, existing scheduling concepts support neither dynamic tasks nor periodic real-time deadlines, or they presume preemptible execution. This thesis describes the requirements relevant to HMI rendering in the automotive domain. Based on these requirements, it introduces the concept of the virtualized automotive graphics system (VAGS), which uses a hypervisor to ensure isolation between different VMs, in particular between the head unit and the instrument cluster. Furthermore, it presents a novel framework that measures the execution time of GPU commands and predicts it based on OpenGL ES 2.0. Prediction models are provided for the relevant GPU commands such as draw and SwapBuffers. For draw commands, we propose two heuristics that estimate the number of fragments, two concepts that predict the execution time of the graphics shaders, and an optional real-time correction mechanism. The number of fragments is estimated either by means of a bounding box of the rendered model to which the projection of the vertex shader is applied, or from a subset of the rendered triangles that is used to determine the average size of a triangle. To estimate the runtime of a shader, it is either executed in a calibration environment in a separate OpenGL context, or an offline-trained MARS model is used. The implementation and evaluation of the framework demonstrate its feasibility and show that good prediction accuracy can be achieved. When rendering a scene of the well-known benchmark Glmark2, for example, fewer than 0.4% of the samples were underestimated by more than 100 μs and fewer than 0.2% of the samples were overestimated by more than 100 μs. In long-running scenarios, our implementation typically incurs an additional CPU overhead of less than 25%, which is even negligible for some scenarios. With the most efficient method, application start-up is slowed down by only about 30 ms. In addition, a real-time capable 3D GPU scheduling framework is presented that gives guarantees to critical applications while making the remaining GPU resources available to less critical applications, thereby achieving high GPU utilization. Since current GPUs are not preemptible, the presented execution-time prediction concepts are used to make priority-based scheduling decisions. The implementation is based on an automotive-grade embedded system running Linux. The evaluations carried out on it demonstrate the feasibility and effectiveness of the presented concepts.
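As a rough, hypothetical illustration of the draw-command prediction approach described above, the following Python sketch combines a bounding-box-based fragment estimate with a simple linear cost model over vertices and fragments. All function names and cost constants are invented for illustration; the thesis' actual models are calibrated at runtime or trained offline (MARS) and are not reproduced here.

```python
# Illustrative sketch of a draw-call execution-time predictor: estimate the
# number of fragments from the projected bounding box, then combine it with
# assumed per-vertex and per-fragment costs. Constants are made-up stand-ins.

def estimate_fragments(bbox_min, bbox_max, viewport_px):
    """Estimate covered fragments from a model's projected 2D bounding box.

    bbox_min/bbox_max are (x, y) corners in normalized device coordinates
    ([-1, 1]), as obtained by applying the vertex-shader projection to the
    model's bounding box; viewport_px is (width, height) in pixels.
    """
    w = max(0.0, bbox_max[0] - bbox_min[0]) / 2.0 * viewport_px[0]
    h = max(0.0, bbox_max[1] - bbox_min[1]) / 2.0 * viewport_px[1]
    return w * h

def predict_draw_time_us(n_vertices, n_fragments,
                         cost_vertex_us=0.01, cost_fragment_us=0.002,
                         overhead_us=50.0):
    # Simple linear cost model: fixed overhead plus per-vertex and
    # per-fragment shader costs. The constants here are purely illustrative;
    # real values would come from calibration runs or an offline model.
    return overhead_us + n_vertices * cost_vertex_us + n_fragments * cost_fragment_us

# A deadline-based, non-preemptive GPU scheduler would only submit the draw
# command if the predicted time still fits before the next deadline.
frags = estimate_fragments((-0.5, -0.5), (0.5, 0.5), (1280, 480))
print(f"predicted draw time: {predict_draw_time_us(3000, frags):.1f} us")
```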
The GPU scheduler meets the respective real-time requirements for a variable number of applications that generate different GPU command sequences. In a demanding scenario with 17 applications, it achieves a high GPU utilization of 99% and meets 99.9% of the deadlines of the highest-priority application. Furthermore, the scheduling itself is carried out efficiently, with less than 9 μs of latency.Item Open Access Replicated execution of workflows(2018) Schäfer, David Richard; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)Workflows are the de facto standard for managing and optimizing business processes. Workflows allow businesses to automate interactions between business locations and partners residing anywhere on the planet. This, however, requires the workflows to be executed in a distributed and dynamic environment, where device and communication failures occur quite frequently. If a workflow execution becomes unavailable due to such failures, the business operations that rely on it may be hindered or even stopped, implying monetary loss. Consequently, availability is a key concern when using workflows in dynamic environments. In this thesis, we propose replication schemes for workflow engines that ensure the availability of the workflows executed by these engines. Of course, a workflow that is executed by a replicated workflow engine has to yield the same result as a non-replicated execution of that workflow. To this end, we formally define the equivalence of a replicated and a non-replicated execution, called Single-Execution-Equivalence. Subsequently, we present replication schemes for both imperative and declarative workflow languages. Imperative workflow languages, such as the Web Service Business Process Execution Language (WS-BPEL), specify the execution order of activities through an ordering relation and are the predominant way of specifying workflow models. We implement a proof of concept to demonstrate the compatibility of our replication schemes with current (imperative) workflow technology. Declarative workflow languages provide greater flexibility by allowing the activities within a workflow to be reordered at run-time. We exploit this by executing differently ordered replicas on several nodes in the network to improve availability further.
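To illustrate the intuition behind Single-Execution-Equivalence, the following Python sketch shows one conceivable way to make an external side effect happen exactly once even though two engine replicas execute the same workflow step. The deduplication-by-request-id scheme and all names here are assumptions made for illustration, not the replication protocol developed in the thesis.

```python
# Minimal sketch: several engine replicas run the same workflow, but an
# external side effect (a service invocation) must be observable exactly
# once, as if a single engine had executed the workflow.

class InvocationDeduplicator:
    """Stands in for an invoked partner service: each request id runs once."""

    def __init__(self):
        self.results = {}  # request id -> cached result

    def invoke(self, request_id, operation):
        # The first replica to arrive executes the side effect; later
        # replicas receive the cached result, so the effect occurs once.
        if request_id not in self.results:
            self.results[request_id] = operation()
        return self.results[request_id]

def charge_order():
    print("charging order 42 exactly once")
    return "receipt-42"

service = InvocationDeduplicator()
# Two engine replicas execute the same workflow step; both derive the same
# deterministic request id from the workflow instance and activity name.
for replica in ("engine-A", "engine-B"):
    receipt = service.invoke(("wf-instance-7", "charge-order"), charge_order)
    print(f"{replica} proceeds with {receipt}")
```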