Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1


Search Results

Now showing 1 - 10 of 65
  • Automated composition of adaptive pervasive applications in heterogeneous environments
    (2012) Schuhmann, Stephan Andreas; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
    Distributed applications for Pervasive Computing represent a research area of high interest. Configuration processes are needed before application execution to find a composition of components that provides the required functionality. As dynamic pervasive environments and device failures may render arbitrary components and devices unavailable at any time, finding and maintaining such a composition is a nontrivial task. Many degrees of decentralization, up to completely centralized approaches, are possible in the calculation of valid configurations, spanning a wide spectrum of possible solutions. As configuration processes produce latencies which the application user perceives as undesired waiting times, configurations have to be calculated as fast as possible. While completely distributed configuration is inevitable in infrastructure-less ad hoc scenarios, many realistic Pervasive Computing applications run in heterogeneous environments, where the additional computation power of resource-rich devices can be utilized by centralized approaches. However, in strongly heterogeneous pervasive environments including several resource-rich and resource-weak devices, both centralized and decentralized approaches may lead to suboptimal configuration latencies: while the resource-weak devices may become bottlenecks for decentralized configuration, the centralized approach fails to exploit parallelism. Most projects in Pervasive Computing focus on only one specific type of environment: either they concentrate on heterogeneous environments that rely on additional infrastructure devices, making them inapplicable in infrastructure-less environments, or they address homogeneous ad hoc environments and treat all involved devices as equal, which leads to suboptimal results when resource-rich devices are present, as their additional computation power is not exploited.
    Therefore, in this work we propose a comprehensive adaptive approach that particularly focuses on the efficient support of heterogeneous environments but is also applicable in infrastructure-less homogeneous scenarios. We provide multiple configuration schemes with different degrees of decentralization for distributed applications, each optimized for specific scenarios. Our solution is adaptive in that the actual scheme is chosen based on the current system environment, and it calculates application compositions in a resource-aware, efficient manner. This ensures high efficiency even in dynamically changing environments. Beyond this, many typical pervasive environments contain a fixed set of applications and devices that are frequently used. In such scenarios, identical resources are part of subsequent configuration calculations, so the involved devices undergo a quite similar configuration process whenever an application is launched. However, starting the configuration from scratch every time not only consumes a lot of time but also increases the communication overhead and energy consumption of the involved devices. Therefore, our solution integrates the results from previous configurations to reduce the severity of the configuration problem in dynamic scenarios. We show in prototypical real-world evaluations as well as by simulation and emulation that our comprehensive approach provides efficient automated configuration across the complete spectrum of possible application scenarios. This extensive functionality has not been achieved by related projects yet. Thus, our work makes a significant contribution towards seamless application configuration in Pervasive Computing.
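The adaptive choice between configuration schemes described above can be sketched in a few lines. This is only an illustration of the idea, not the dissertation's actual algorithm: the device model, the threshold, and the scheme names are invented for the example.

```python
# Illustrative sketch: pick a configuration scheme based on the resources
# present in the environment (threshold and scheme names are assumptions).

def choose_scheme(devices):
    """devices: list of dicts with 'name' and 'cpu_score' (higher = richer)."""
    RICH = 100  # hypothetical threshold separating resource-rich devices
    rich = [d for d in devices if d["cpu_score"] >= RICH]
    if not rich:
        # homogeneous ad hoc environment: no server-class device available
        return "fully-decentralized"
    if len(rich) == 1:
        # a single resource-rich device: centralize the calculation on it
        return "centralized"
    # several rich devices: partition the work among them in parallel
    return "hybrid-partitioned"

env = [{"name": "sensor", "cpu_score": 10}, {"name": "laptop", "cpu_score": 150}]
print(choose_scheme(env))  # -> centralized
```

The point of the sketch is merely that the scheme is selected per environment at configuration time, rather than being fixed in advance.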
  • Causal models for decision making via integrative inference
    (2017) Geiger, Philipp; Toussaint, Marc (Prof. Dr.)
    Understanding causes and effects is important in many parts of life, especially when decisions have to be made. The systematic inference of causal models remains a challenge though. In this thesis, we study (1) "approximative" and "integrative" inference of causal models and (2) causal models as a basis for decision making in complex systems. By "integrative" here we mean including and combining settings and knowledge beyond the outcome of perfect randomization or pure observation for causal inference, while "approximative" means that the causal model is only constrained but not uniquely identified. As a basis for the study of topics (1) and (2), which are closely related, we first introduce causal models, discuss the meaning of causation and embed the notion of causation into a broader context of other fundamental concepts. Then we begin our main investigation with a focus on topic (1): we consider the problem of causal inference from a non-experimental multivariate time series X, that is, we integrate temporal knowledge. We take the following approach: We assume that X together with some potential hidden common cause - "confounder" - Z forms a first order vector autoregressive (VAR) process with structural transition matrix A. Then we examine under which conditions the most important parts of A are identifiable or approximately identifiable from only X, in spite of the effects of Z. Essentially, sufficient conditions are (a) non-Gaussian, independent noise or (b) no influence from X to Z. We present two estimation algorithms that are tailored towards conditions (a) and (b), respectively, and evaluate them on synthetic and real-world data. We discuss how to check the model using X. Still focusing on topic (1) but already including elements of topic (2), we consider the problem of approximate inference of the causal effect of a variable X on a variable Y in i.i.d. settings "between" randomized experiments and observational studies. 
    Our approach is to first derive approximations (upper/lower bounds) on the causal effect as a function of bounds on (hidden) confounding. Then we discuss several scenarios where knowledge or beliefs can be integrated that in fact imply bounds on confounding. One example concerns decision making in advertisement, where knowledge of partial compliance with guidelines can be integrated. Then, concentrating on topic (2), we study decision making problems that arise in cloud computing, a computing paradigm and business model that involves complex technical and economical systems and interactions. More specifically, we consider the following two problems: debugging and control of computing systems with the help of sandbox experiments, and prediction of the cost of "spot" resources for decision making of cloud clients. We first establish two theoretical results on approximate counterfactuals and approximate integration of causal knowledge, which we then apply to the two problems in toy scenarios.
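The "bounds from bounded confounding" idea can be illustrated with a deliberately simple additive-bias model. This is a hedged sketch only, not the thesis's derivation: the function name and the assumption that hidden confounding shifts the observational contrast by at most a known amount are made up for the example.

```python
# Sketch: if hidden confounding can bias the observed treatment/outcome
# contrast by at most `max_bias` (an assumed, additive bound), then the
# observational estimate brackets the causal effect.

def effect_bounds(obs_contrast, max_bias):
    """obs_contrast: E[Y|X=1] - E[Y|X=0] estimated from observational data.
    max_bias: assumed upper bound on the absolute confounding bias."""
    return (obs_contrast - max_bias, obs_contrast + max_bias)

lo, hi = effect_bounds(3.0, 1.0)
print(lo, hi)  # -> 2.0 4.0
```

A perfectly randomized experiment corresponds to `max_bias = 0` (the interval collapses to a point), while a pure observational study with no knowledge about confounding corresponds to an unbounded interval; the settings "between" the two yield intervals of intermediate width.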
  • Verwaltung von zeitbezogenen Daten und Sensordatenströmen (Management of time-related data and sensor data streams)
    (2013) Hönle, Nicola Anita Margarete; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
    So-called location-based applications interpret the user's spatial position as the most important piece of context information in order to adapt their behavior to it. Within the Nexus project (SFB 627), concepts for supporting location-based applications are investigated, and the results are integrated into the so-called Nexus platform. However, the user's context is also influenced by time, since time is an essential part of our lives and virtually every piece of information has a temporal reference. Integrating time extends the Nexus platform from location-based support towards a more general context-aware system. Since an unrestricted treatment of time is too broad a topic in the general case, a use-case analysis identified requirements of particular relevance to the Nexus project. These requirements and their realization are described in this thesis. The storage of time periods and instants is based on the GML temporal data type, so that time values are represented in the format of the ISO 8601 standard. With this basic data type, temporal attributes can be defined in the Nexus data model. For formulating queries, the new predicate temporalIntersects is introduced, which expresses an arbitrary overlap between a temporal attribute and a given time period. Since the query criteria should not be restricted in advance, the minimally necessary basic temporal predicates are also described, from which all relations of Allen's interval algebra can be formulated. The valid time indicates at which times a particular value correctly models the actual real-world state. For annotating data with valid times, but also with other metadata, a general metadata concept for the Nexus data model is described.
    With metadata, valid times of objects and attributes can then be specified, so that histories of arbitrary attributes can be modeled in a simple way. Interpolation functions allow a more accurate and compressed representation of frequently changing data with continuous value progressions, such as sensor data histories. Therefore, the basic data types for floating-point numbers and spatial values are modified so that linear interpolation functions can model the continuous change of values over time. For storage, the implementation of a history server is described that can process interpolable basic data types. Sensor measurements usually consist of discrete (value, timestamp) tuples. Since the permanent storage of sensor data can quickly produce large volumes of data, it makes sense to compress the data beforehand. This thesis presents both stream-based and conventional approaches for compressing sensor data streams: simple approximation methods, approximation by linear least-squares fitting, and polyline simplification methods, as well as a map-based approach specifically for position data. To classify these approaches, various properties of compression algorithms are presented. For the aging of compressed sensor data, the new concept of error-bounded aging is introduced. The algorithms are classified accordingly and evaluated with GPS test data sets from car trips. The successful integration of the temporal aspects is demonstrated with the Messetagebuch (trade-fair diary), an example application for recording and analyzing user activities. A further application example is the use of the NexusDS data stream management system for the acquisition, integration, and historization of data streams of different origins in a so-called smart factory.
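One of the simple approximation methods mentioned above can be sketched as a bounded-error filter over a sensor stream. This is an illustrative stand-in, not one of the thesis's algorithms: a sample is retained only when it deviates from the last retained value by more than a fixed error bound.

```python
# Illustrative error-bounded stream compressor: drop any sample that lies
# within `eps` of the last retained value.

def compress(samples, eps):
    """samples: list of (timestamp, value); returns the retained subset.
    Every dropped sample differs from the last retained value by <= eps."""
    kept = []
    for t, v in samples:
        if not kept or abs(v - kept[-1][1]) > eps:
            kept.append((t, v))
    return kept

stream = [(0, 20.0), (1, 20.1), (2, 20.2), (3, 21.0), (4, 21.1)]
print(compress(stream, eps=0.5))  # -> [(0, 20.0), (3, 21.0)]
```

Reconstructing a dropped value from the nearest retained sample then carries a known maximum error, which is the property that error-bounded aging of compressed histories builds on.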
  • Distributed stream processing in a global sensor grid for scientific simulations
    (2015) Benzing, Andreas; Rothermel, Kurt (Prof. Dr. rer. nat.)
    With today's large number of sensors available all around the globe, an enormous amount of measurements has become available for integration into applications. Especially scientific simulations of environmental phenomena can greatly benefit from detailed information about the physical world. The challenge in bringing sensor data into simulations is to automate both the monitoring of geographical regions for interesting data and the provision of continuous data streams from the identified regions. Current simulation setups use hard-coded information about sensors or even manual data transfer using external memory to bring data from sensors to simulations. This solution is very robust, but adding new sensors to a simulation requires manual setup of the sensor interaction and changes to the source code of the simulation, therefore incurring extremely high cost. Manual transmission allows an operator to drop obvious outliers but prohibits real-time operation due to the long delay between measurement and simulation. For more generic applications that operate on sensor data, these problems have been partially solved by approaches that decouple the sensing from the application, thereby allowing the sensing process to be automated. However, these solutions focus on small-scale wireless sensor networks rather than the global scale and therefore optimize for the lifetime of these networks instead of providing high-resolution data streams. In order to provide sensor data for scientific simulations, two tasks are required: i) continuous monitoring of sensors to trigger simulations and ii) high-resolution measurement streams of the simulated area during the simulation. Since a simulation is not aware of the deployed sensors, the sensing interface must work without an explicit specification of individual sensors. Instead, the interface must rely only on the geographical region, the sensor type, and the resolution used by the simulation.
The challenges in these tasks are to efficiently identify relevant sensors from the large number of sources around the globe, to detect when the current measurements are of relevance, and to scale data stream distribution to a potentially large number of simulations. Furthermore, the process must adapt to complex network structures and dynamic network conditions as found in the Internet. The Global Sensor Grid (GSG) presented in this thesis attempts to close this gap by approaching three core problems: First, a distributed aggregation scheme has been developed which allows for the monitoring of geographic areas for sensor data of interest. The reuse of partial aggregates thereby ensures highly efficient operation and alleviates the sensor sources from individually providing numerous clients with measurements. Second, the distribution of data streams at different resolutions is achieved by using a network of brokers which preprocess raw measurements to provide the requested data. The load of high-resolution streams is thereby spread across all brokers in the GSG to achieve scalability. Third, the network usage is actively minimized by adapting to the structure of the underlying network. This optimization enables the reduction of redundant data transfers on physical links and a dynamic modification of the data streams to react to changing load situations.
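The reuse of partial aggregates described above can be sketched with a toy grid model. The data model here (unit grid cells, per-cell sum/count pairs) is an assumption for illustration, not the GSG implementation: readings are aggregated once per cell, and every client's region query is answered from the shared per-cell aggregates instead of contacting each sensor again.

```python
# Illustrative partial-aggregate reuse: aggregate readings once per grid
# cell, then serve arbitrarily many region queries from the cell aggregates.

from collections import defaultdict

def cell_aggregates(readings):
    """readings: list of (x, y, value); returns one [sum, count] per unit cell."""
    agg = defaultdict(lambda: [0.0, 0])
    for x, y, v in readings:
        cell = (int(x), int(y))
        agg[cell][0] += v
        agg[cell][1] += 1
    return agg

def region_mean(agg, x0, y0, x1, y1):
    """Mean over all cells whose corner lies in [x0, x1) x [y0, y1)."""
    s = n = 0
    for (cx, cy), (vs, vc) in agg.items():
        if x0 <= cx < x1 and y0 <= cy < y1:
            s += vs
            n += vc
    return s / n if n else None

agg = cell_aggregates([(0.2, 0.7, 10.0), (0.9, 0.1, 14.0), (3.5, 3.5, 99.0)])
print(region_mean(agg, 0, 0, 2, 2))  # -> 12.0
```

Because overlapping region queries touch the same cell aggregates, the sensor sources are relieved from individually supplying numerous clients with raw measurements.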
  • Supporting multi-tenancy in Relational Database Management Systems for OLTP-style software as a service applications
    (2015) Schiller, Oliver; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
    The consolidation of multiple tenants onto a single relational database management system (RDBMS) instance, commonly referred to as multi-tenancy, has turned out to be beneficial, since it improves the profit margin of the provider and allows lowering service fees, whereby the service attracts more tenants. So far, existing solutions create the required multi-tenancy support on top of a traditional RDBMS implementation, i. e., they implement data isolation between tenants, per-tenant customization, and further tenant-centric data management features in application logic. This is complex, error-prone, and often reimplements functionality the RDBMS already offers. Moreover, this approach forgoes some optimization opportunities in the RDBMS and represents a conceptual misstep with Separation of Concerns in mind. For these reasons, an RDBMS that provides support for the development and operation of a multi-tenant software as a service (SaaS) offering is compelling. In this thesis, we contribute to a multi-tenant RDBMS for OLTP-style SaaS applications by extending a traditional disk-oriented RDBMS architecture with multi-tenancy support. For this purpose, we primarily extend an RDBMS by introducing tenants as first-class database objects and establishing tenant contexts to isolate tenants logically. Using these extensions, we address tenant-aware schema management, for which we present a schema inheritance concept that is tailored to the needs of multi-tenant SaaS applications. Thereafter, we evaluate different storage concepts for a tenant’s tuples with respect to their scalability. Next, we contribute an architecture of a multi-tenant RDBMS cluster for OLTP-style SaaS applications. Here, we focus on a partitioning solution which is aligned to tenants and yields independently manageable pieces.
To balance load in the proposed cluster architecture, we present a live database migration approach, whose design favors low migration overhead and provides minimal interruption of service.
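The schema inheritance idea can be sketched with a toy dictionary-based schema model. The names (`BASE_SCHEMA`, `tenant_schema`, the example tables and columns) are assumptions for illustration, not the thesis's data structures: each tenant sees the shared base schema plus its own extension columns, while the base remains untouched.

```python
# Illustrative schema inheritance for multi-tenant SaaS: a tenant's schema
# is the shared base schema plus per-tenant extension columns.

BASE_SCHEMA = {"invoice": ["id", "amount", "date"]}

def tenant_schema(base, extensions):
    """extensions: {table: [extra columns]} for one tenant; the base is copied,
    never mutated, so every tenant inherits the same shared definition."""
    schema = {t: list(cols) for t, cols in base.items()}
    for table, extra in extensions.items():
        schema.setdefault(table, [])
        schema[table] += [c for c in extra if c not in schema[table]]
    return schema

acme = tenant_schema(BASE_SCHEMA, {"invoice": ["vat_id"]})
print(acme["invoice"])  # -> ['id', 'amount', 'date', 'vat_id']
```

A change to the base schema would propagate to all tenants on the next derivation, while each tenant's customizations stay isolated in its own extension set.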
  • Flexible processing of streamed context data in a distributed environment
    (2014) Cipriani, Nazario; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
    Nowadays, stream-based data processing occurs in many context-aware application scenarios, such as context-aware facility management applications or location-aware visualization applications. In order to process stream-based data in an application-independent manner, Data Stream Processing Systems (DSPSs) emerged. They typically translate a declarative query into an operator graph, place the operators on stream processing nodes, and execute the operators to process the streamed data. Context-aware stream processing applications often have different requirements although they rely on the same processing principle, i.e. data stream processing. These requirements exist because context-aware stream processing applications differ in functional and operational behavior as well as in their processing requirements. These facts are challenging on their own. As a key enabler for the efficient processing of streamed data, the DSPS must be able to integrate such specific functionality seamlessly. Since the processing of data streams is usually subject to temporal aspects, i.e. it is time critical, custom functionality should be integrated seamlessly into the processing task of a DSPS to prevent the formation of isolated solutions and to support the exploitation of synergies. Depending on the domain of interest, data processing often depends on highly domain-specific functionality, e.g. for a location-aware visualization pipeline displaying a three-dimensional map of the user's surroundings. Such an application runs on a mobile device and consists of many interconnected operations that form a network of operators called a stream processing graph (SP graph). In this example, friends' locations must first be collected and linked to their public profiles. Moreover, for the application to run smoothly, the presence of a Graphics Processing Unit (GPU) is mandatory for some parts of the data processing.
    To address these challenges, we have developed concepts for a flexible DSPS that allows the integration of specific functionality and thereby the seamless integration of applications into the DSPS. To this end, an architecture is proposed. A DSPS based on this architecture can be extended by integrating additional operators responsible for data processing, as well as services realizing additional interaction patterns with context-aware applications. However, this specific functionality is often subject to deployment and run-time constraints. Therefore, an SP graph model has been developed which reflects these constraints by allowing the graph to be annotated with constraints, e.g. to restrict the execution of operators to certain processing nodes or to specify that an operator requires a GPU. The data involved in the processing steps is often subject to restrictions w.r.t. the way it is accessed and processed. Users participating in the process might not want to expose their current location to potentially unknown parties, restricting data access, e.g., to known parties only. Therefore, in addition to the flexible integration of specialized operators, security aspects must also be considered, limiting both the access to data and the granularity at which data is made available. We have developed a security framework that defines three different types of security policies: Access Control (AC) policies controlling data access, Process Control (PC) policies influencing how data is processed, and Granularity Control (GC) policies defining the Level of Detail (LOD) at which the data is made available. The security policies are interpreted as constraints, which are supported by augmenting the SP graph with the relevant security policies. The operator placement in a DSPS is very important, as it deeply influences SP graph execution. Every stream-based application requires a different placement of its SP graphs according to its specific objectives, e.g.
    bandwidth should not fall below 500 Mbit/s and is more important than latency. Such objectives constrain operator placement. As objectives might conflict with each other, operator placement is subject to trade-offs. Knowing the bandwidth requirements of a certain application, an application developer can clearly identify the specific Quality of Service (QoS) requirements for the correct distribution of the SP graph. These requirements are a good indicator for the DSPS when deciding how to distribute the SP graph so as to meet the application requirements. Two applications within the same DSPS might have different requirements: if interactivity is an issue, for example, a stream-based game application might above all need minimal latency to remain fast and reactive. We have developed a multi-target operator placement (M-TOP) algorithm which allows the DSPS to find a suitable deployment, i.e. a distribution of the operators in an SP graph which satisfies a set of predefined QoS requirements. Thereby, the M-TOP approach considers operator-specific deployment constraints as well as QoS targets.
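The interplay of hard deployment constraints and soft QoS objectives can be sketched for a single operator. This toy sketch is in the spirit of, but much simpler than, M-TOP (which places whole operator graphs); the node names and attributes are invented: infeasible nodes are filtered out first, then the remaining candidates are ranked by the soft objective.

```python
# Toy QoS-constrained placement of one operator: hard constraints (minimum
# bandwidth, GPU requirement) filter the nodes, the soft objective (latency)
# ranks the survivors.

def place(operator, nodes, min_bw, needs_gpu=False):
    """nodes: list of dicts with 'name', 'bw' (Mbit/s), 'latency' (ms), 'gpu'."""
    feasible = [n for n in nodes
                if n["bw"] >= min_bw and (n["gpu"] or not needs_gpu)]
    if not feasible:
        return None  # no deployment satisfies the constraints
    return min(feasible, key=lambda n: n["latency"])["name"]

cluster = [
    {"name": "edge",  "bw": 100,  "latency": 5,  "gpu": False},
    {"name": "rack1", "bw": 1000, "latency": 20, "gpu": True},
    {"name": "rack2", "bw": 1000, "latency": 12, "gpu": False},
]
print(place("render", cluster, min_bw=500, needs_gpu=True))  # -> rack1
```

With the GPU requirement dropped, the lower-latency `rack2` would win instead, which is the kind of trade-off between conflicting objectives the abstract refers to.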
  • Optimized information discovery in structured peer-to-peer overlay networks
    (2011) Memon, Faraz; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
    Peer-to-peer (P2P) overlay networks allow for efficient information discovery in large-scale distributed systems. Although point queries are well supported by current P2P systems, in particular systems based on distributed hash tables (DHTs), providing efficient support for more complex queries remains a challenge. Therefore, the goal of this research is to develop methodologies that enable efficient processing of complex queries, in particular multi-attribute range queries, over DHTs. Generally, support for multi-attribute range queries over DHTs has been provided either by creating an individual index for each data attribute or by creating a single index over the combination of all data attributes. In contrast to these approaches, we propose to create and modify indices using the attribute combinations that dynamically appear in the system's multi-attribute range queries. In order to limit the overhead induced by index maintenance, the total number of created indices has to be limited. Thus, one of the major problems is to create a limited number of indices such that the overall system performance is optimal for multi-attribute range queries. We propose several index recommendation algorithms that implement heuristic solutions to this NP-hard problem. Our evaluations show that these heuristics lead to close-to-optimal system performance for multi-attribute range queries. The final outcome of this research is an adaptive DHT-based information discovery system that adapts its set of indices according to the dynamic load of multi-attribute range queries in the system. The index adaptation is carried out using a four-phase index adaptation process. Our evaluations show that the adaptive information discovery system continuously optimizes the overall system performance for multi-attribute range queries. Moreover, compared to a non-adaptive system, our system achieves a performance improvement of several orders of magnitude.
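A greedy heuristic for the index-selection problem can be sketched as follows. This is a deliberate simplification for illustration, not one of the thesis's recommendation algorithms (which also account for maintenance cost and partial coverage): within a budget of k indices, repeatedly pick the attribute combination carrying the most query load.

```python
# Illustrative greedy index recommendation: choose the k attribute
# combinations with the highest query frequency.

def recommend_indices(query_load, k):
    """query_load: {attribute-combination (frozenset): frequency};
    returns up to k combinations, most frequent first."""
    chosen, remaining = [], dict(query_load)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=remaining.get)  # greedy: heaviest load first
        chosen.append(best)
        del remaining[best]
    return chosen

load = {
    frozenset({"cpu", "ram"}): 50,
    frozenset({"cpu"}): 30,
    frozenset({"ram", "disk"}): 5,
}
print(recommend_indices(load, k=2))
```

An adaptive system would re-run such a recommendation step as the observed query load shifts, retiring indices that no longer pay for their maintenance overhead.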
  • Scalable computer network emulation using node virtualization and resource monitoring
    (2011) Maier, Steffen Dirk; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
    Ongoing development of computer network technology requires new communication protocols on all layers of the protocol stack to adapt to and to exploit technology specifics. The performance of new protocol implementations has to be evaluated before deployment. Computer network emulation enables the execution of real, unmodified protocol implementations within a configurable synthetic environment. Since network properties are reproduced synthetically, emulation supports reproducible measurement results for wired and wireless networks. Meaningful evaluation scenarios typically involve a large number of communicating nodes. Reproducing the network properties of the medium access control layer can be accomplished efficiently on inexpensive commodity off-the-shelf computers and makes it possible to evaluate network protocols, transport protocols, and applications. However, meaningful emulation scenario sizes often require more nodes than there are affordable computers. To scale the number of nodes in an emulation scenario beyond the number of available computers, we discuss approaches to virtualization and operating system partitioning. Focusing on the latter, we argue for virtual protocol stacks, which provide extremely lightweight node virtualization enabling the execution of multiple instances of the software under evaluation on each physical computer. To connect virtual nodes on the same and on different computers, we design and implement a highly efficient software communication switch. A centralized emulation control component distributes dynamic network property updates which result, for instance, from node mobility. To handle the large number of nodes and thus the increased volume of updates, we propose a hierarchical control scheme where the central component delegates updates to sub-components distributed over the computers of an emulation system. Extensive evaluations show the scalability of our virtualized network emulation system. Virtual nodes executed on the same computer share its limited resources.
Hosting too many virtual nodes on the same computer may lead to resource contention. This can cause unrealistic measurement results and is thus undesirable. Discussing different approaches to handle resource contention, we argue for detection and recovery. We define quality criteria that allow the detection of resource contention. In order to observe those quality criteria during emulation experiments, we propose a highly lightweight monitoring approach. Our monitoring is based on instrumenting an operating system kernel and observing basic resource scheduling events. This enables the detection of even peak resource usage within a split second. Thorough evaluations demonstrate the effectiveness of quality criteria and monitoring as well as the negligible overhead of our monitoring approach.
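The detection side of this monitoring approach can be illustrated with a toy quality criterion. Both the criterion and the sample format are invented for the sketch, not taken from the thesis: from a trace of (timestamp, runnable-task-count) samples derived from scheduler events, flag every moment at which more tasks were runnable than CPUs were available, i.e. some virtual node had to wait.

```python
# Illustrative contention detector: a sample violates the quality criterion
# whenever the number of runnable tasks exceeds the number of CPUs.

def contended_windows(samples, num_cpus):
    """samples: list of (t, runnable); returns timestamps where runnable
    tasks exceeded the available CPUs (resource contention)."""
    return [t for t, runnable in samples if runnable > num_cpus]

trace = [(0.0, 1), (0.1, 2), (0.2, 5), (0.3, 2)]
print(contended_windows(trace, num_cpus=2))  # -> [0.2]
```

An experiment whose trace contains such violations would be flagged as potentially yielding unrealistic measurements and could be re-run with fewer virtual nodes per computer.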
  • Optimierung datenintensiver Workflows: Konzepte und Realisierung eines heuristischen, regelbasierten Optimierers (Optimization of data-intensive workflows: concepts and realization of a heuristic, rule-based optimizer)
    (2011) Vrhovnik, Marko; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
    To simplify the modeling of data-intensive workflows that process large relational data sets, workflow description languages such as BPEL have been extended with SQL functionality by leading vendors of workflow and database management systems. As a result, data processing operations such as SQL statements or calls to user-defined procedures no longer need to be encapsulated in web services but can be defined directly at the workflow level. This opens up a new opportunity for query optimization that complements existing optimization approaches in database systems: suboptimally modeled data processing operations in a workflow description can be transformed by means of restructuring rules in such a way that a workflow or database management system can execute them considerably more efficiently. This doctoral thesis presents concepts for realizing a heuristic, rule-based optimizer for data-intensive workflows. The optimizer applies a rule base, according to a well-defined control strategy, to an internal representation of data-intensive workflows, the so-called process graph model (PGM), in order to optimize the data processing of a data-intensive workflow. PGM allows an efficient and language-independent definition and application of the restructuring rules and thus supports the optimization of data processing operations that may be defined in different description languages. The rule base contains restructuring rules based on existing and new optimization strategies. In particular, the restructuring rules exploit knowledge about dependencies in a workflow description to optimize the embedded data processing operations while preserving the original execution semantics of a data-intensive workflow.
    The control strategy determines which restructuring rules are applied in which order to which parts of a workflow description, in order to fully exploit the optimization potential of a data-intensive workflow on the one hand and to ensure the correctness of the rule applications on the other. The detailed description of the process graph model, the rule base, and the control strategy forms the core of this thesis. Furthermore, a prototype implementation of the optimization approach is presented, which underlines its practical applicability. Finally, the effectiveness of the individual restructuring rules is examined using various measurement scenarios, showing that applying the restructuring rules can yield performance improvements of several orders of magnitude.
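The rule-application loop under a fixed control strategy can be sketched with a toy workflow model. Everything here is invented for illustration (a workflow as a list of operations, a single merge rule), not the thesis's PGM or rule base: rules are tried in a fixed order and re-applied until no rule fires any more.

```python
# Toy rule-based optimizer: apply restructuring rules in a fixed order
# until a fixpoint is reached.

def merge_adjacent_updates(ops):
    """Rewrite rule: two consecutive UPDATEs on the same table become one."""
    for i in range(len(ops) - 1):
        a, b = ops[i], ops[i + 1]
        if a[0] == b[0] == "UPDATE" and a[1] == b[1]:
            return ops[:i] + [("UPDATE", a[1])] + ops[i + 2:]
    return ops

def optimize(ops, rules):
    changed = True
    while changed:
        changed = False
        for rule in rules:  # the rule order is the (toy) control strategy
            new_ops = rule(ops)
            if new_ops != ops:
                ops, changed = new_ops, True
    return ops

wf = [("UPDATE", "orders"), ("UPDATE", "orders"), ("UPDATE", "orders"),
      ("INSERT", "audit")]
print(optimize(wf, [merge_adjacent_updates]))
# -> [('UPDATE', 'orders'), ('INSERT', 'audit')]
```

A real control strategy must additionally check the dependency preconditions of each rule so that every application preserves the workflow's execution semantics.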
  • Efficient code offloading techniques for mobile applications
    (2017) Berg, Florian; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
    Since the release of Apple's first smart phone in 2007, smart phones have experienced rapidly rising popularity. A smart phone typically possesses, among other things, a touchscreen display as user interface, a mobile communication interface for accessing the Internet, and a System-on-a-Chip integrating required components such as a central processing unit. This pervasive computing platform draws its power from a battery, and an end user runs different kinds of applications on it, from a calendar application to a high-end mobile game. These applications differ in their usage of the local resources of a battery-operated smart phone; heavy utilization of local resources, such as playing a resource-demanding game, drains the limited energy resource within a few hours. Despite the constant increase in the memory, communication, and processing capabilities of smart phones since 2007, applications are also getting more and more sophisticated and demanding. As a result, the energy consumed on a smart phone was, still is, and will remain its main limiting factor. To prevent the limited energy resource from quick exhaustion, researchers propose code offloading for (resource-constrained) mobile devices like smart phones. Code offloading strives to increase the energy efficiency and execution speed of applications by utilizing a server instance in the infrastructure. To this end, a code offloading approach dynamically executes resource-intensive parts of an application on powerful remote servers in the infrastructure on behalf of a (resource-constrained) mobile device. During the remote execution of a resource-intensive application part on a remote server, the mobile device merely waits in idle mode until it receives the result of the remotely executed application part.
Instead of executing an application part on its own limited resources, a (resource-constrained) mobile device thus benefits from the more powerful resources of a remote server by sending the information required for the remote execution, waiting in idle mode, and receiving the result. Offloading code from a (resource-constrained) mobile device to a powerful remote server in the infrastructure, however, faces several problems. For instance, code offloading introduces overhead for additional computation and communication on the mobile device. Moreover, a spontaneous disconnection during a remote execution can lead to a higher energy consumption and execution time than a purely local execution without code offloading. This dissertation therefore addresses the whole process of offloading code from a mobile device not only to one but also to multiple remote resources, comprising the following steps: 1) First, code offloading has to identify application parts that are feasible for remote execution, i.e., parts whose distributed execution is more beneficial than their local execution. A feasible part typically has the following properties: a small amount of information to transmit before the remote execution, a resource-intensive computation that does not access local sensors, and a small amount of information to transmit back after the remote execution. For identifying such application parts, this dissertation presents an approach based on code annotations from application developers that automatically transforms a monolithic execution on a mobile device into a distributed execution on multiple heterogeneous resources. In contrast to related approaches in the literature, the annotation-based approach requires minimal intervention from application developers and end users, keeping the overhead introduced on the mobile device low. 
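The annotation-based identification of step 1 might be pictured, in miniature, as a registry of developer-marked functions: annotated parts become candidates for remote execution, while sensor-touching parts stay local. The decorator name, the registry, and the example functions are hypothetical illustrations, not the thesis' actual mechanism.

```python
# Registry of application parts annotated as candidates for remote execution.
OFFLOADABLE = {}

def offloadable(func):
    """Developer annotation marking a function as remotely executable."""
    OFFLOADABLE[func.__name__] = func
    return func

@offloadable
def render_frame(scene_size):
    # Stand-in for a resource-intensive part that touches no local sensors.
    return sum(i * i for i in range(scene_size))

def read_gps():
    # Accesses local sensors, so it stays unannotated and always runs locally.
    return (48.745, 9.105)
```

A runtime could then consult the registry at each call site and decide, per the offloading problem of step 2, whether to dispatch the annotated part remotely.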
2) For an application part identified for remote execution, code offloading has to determine its execution site, i.e., whether to run the part on the local resources of the mobile device or on remote resources in the infrastructure. For this decision, the dissertation formalizes the offloading problem, in which a mobile device decides whether to execute an application part locally or remotely. Furthermore, it presents an approach called "code bubbling" that shifts the decision making into the infrastructure. In contrast to related approaches in the literature, the decision-based approach on the mobile device and the bubbling-based approach minimize the execution time, energy consumption, and monetary cost of an application. 3) To determine the execution site of an application part identified for remote execution, code offloading has to obtain various parameters from the application, the participating resources, and the utilized links. For gathering the required application-related information, this dissertation presents a bit-flipping approach that dynamically flips a bit whenever application-related information is modified. Furthermore, it presents an offload-aware Application Programming Interface (API) that encapsulates the application-related information required for code offloading. In contrast to related approaches in the literature, the bit-flipping approach and the offload-aware API gather this information efficiently at run time, keeping the overhead introduced on the mobile device low. 4) Besides the information from the application, code offloading has to obtain further information from the participating resources and the utilized links. For this purpose, the dissertation again relies on the code bubbling approach mentioned above. 
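The bit-flipping idea of step 3 can be sketched as a dirty bit that is flipped whenever application state changes, so the runtime knows cheaply whether state must be re-shipped before the next remote execution. The class and field names below are illustrative assumptions, not the thesis' actual API.

```python
class OffloadState:
    """Tracks whether application-related state changed since the last sync."""

    def __init__(self):
        object.__setattr__(self, "dirty", False)

    def __setattr__(self, name, value):
        # Store the attribute, then flip the dirty bit on any modification.
        object.__setattr__(self, name, value)
        if name != "dirty":
            object.__setattr__(self, "dirty", True)

    def synced(self):
        """Called after the state was shipped to the remote side."""
        object.__setattr__(self, "dirty", False)
```

Checking a single flag is far cheaper than diffing the whole application state before every offload decision, which is the point of the technique.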
In contrast to related approaches in the literature, the bubbling-based approach makes the offload decision at the place where the relevant information arises, keeping the overhead introduced on the mobile device, the participating resources, and the utilized links low. 5) For a remote execution of an application part, code offloading has to send the required information to the remote resource, which subsequently executes the application part on behalf of the mobile device. For this step, this dissertation presents code offloading with a cache on the remote side. The remote-side cache serves as a collective store of results for already executed application parts, avoiding the repeated execution of previously run parts. In contrast to related approaches in the literature, the caching-aware approach increases the efficiency of code offloading, keeping energy consumption, execution time, and monetary cost low. 6) While a remote resource executes an application part, code offloading has to handle failures such as a crash of the remote resource or a disconnection. For this purpose, this dissertation presents preemptable code offloading with safe-points, which enables the interruption of an offloading process and the continuation of the remote execution on the mobile device without discarding the partial result computed remotely so far. Building on this, the dissertation further presents a predictive safe-pointing approach that minimizes the overhead introduced by safe-points and maximizes the efficiency of deadline-aware offloading. 
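The remote-side cache of step 5 amounts to memoization keyed by the application part and its inputs: a repeated request returns the stored result instead of re-executing the part. The key derivation and function names below are a minimal sketch under assumed JSON-serializable inputs, not the thesis' actual implementation.

```python
import hashlib
import json

# Collective store of results for already executed application parts.
CACHE = {}

def execute_with_cache(part_name, args, compute):
    """Execute an application part remotely, reusing cached results.

    Returns (result, hit): `hit` is True when re-execution was avoided.
    """
    key = hashlib.sha256(
        json.dumps([part_name, args], sort_keys=True).encode()
    ).hexdigest()
    if key in CACHE:
        return CACHE[key], True
    result = compute(*args)
    CACHE[key] = result
    return result, False
```

Because the cache is collective, a part already computed for one mobile device can also serve identical requests from other devices.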
In contrast to related approaches in the literature, the preemptable approach with safe-points increases the robustness of code offloading in the presence of failures, and the predictive safe-pointing approach preserves the responsiveness and efficiency of applications despite failures. 7) At the end of a remote execution of an application part, code offloading has to gather the required information on the remote resource and send it back to the mobile device. For gathering this information, the remote resource utilizes the same mechanisms as the mobile device, namely the bit-flipping approach and the offload-aware API mentioned above. 8) Finally, code offloading has to receive this information on the mobile device, install it there, and continue the execution of the application locally. For installing the information and continuing the local execution, the mobile device again utilizes the bit-flipping approach and the offload-aware API mentioned above.
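The safe-point idea of step 6 can be sketched as a remote computation that periodically records its intermediate state; after a disconnection, the mobile device resumes from the last safe-point instead of restarting from scratch. The loop structure, the checkpoint interval, and the example workload are illustrative assumptions, not the thesis' actual mechanism.

```python
def run_with_safepoints(total_steps, interval, start_step=0, acc=0):
    """Sum of squares computed step by step, checkpointing every `interval` steps.

    Returns (result, last_safepoint), where last_safepoint = (next_step, acc)
    captures the most recently persisted intermediate state.
    """
    last_safepoint = (start_step, acc)
    for step in range(start_step, total_steps):
        acc += step * step
        if (step + 1) % interval == 0:
            last_safepoint = (step + 1, acc)  # persist progress
    return acc, last_safepoint

# Remote side runs but "fails" after its last safe-point; the final result
# is lost, yet the checkpointed partial state survives:
_, (done, partial) = run_with_safepoints(10, 4)
# Mobile side continues from the safe-point rather than from step 0:
result, _ = run_with_safepoints(10, 4, start_step=done, acc=partial)
```

Only the steps after the last safe-point have to be redone locally, which is what makes the approach cheaper than a full local restart after a failure.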