Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Search Results

Now showing 1 - 10 of 13
  • Open Access
    Distributed stream processing in a global sensor grid for scientific simulations
(2015) Benzing, Andreas; Rothermel, Kurt (Prof. Dr. rer. nat.)
With today's large number of sensors available all around the globe, an enormous number of measurements has become available for integration into applications. Scientific simulations of environmental phenomena in particular can greatly benefit from detailed information about the physical world. The problem with integrating data from sensors into simulations is to automate both the monitoring of geographical regions for interesting data and the provision of continuous data streams from identified regions. Current simulation setups use hard-coded information about sensors or even manual data transfer using external memory to bring data from sensors to simulations. This solution is very robust, but adding new sensors to a simulation requires manual setup of the sensor interaction and changes to the simulation's source code, thus incurring extremely high costs. Manual transmission allows an operator to drop obvious outliers but prohibits real-time operation due to the long delay between measurement and simulation. For more generic applications that operate on sensor data, these problems have been partially solved by approaches that decouple the sensing from the application, thereby allowing the sensing process to be automated. However, these solutions focus on small-scale wireless sensor networks rather than the global scale and therefore optimize for the lifetime of these networks instead of providing high-resolution data streams. In order to provide sensor data for scientific simulations, two tasks are required: i) continuous monitoring of sensors to trigger simulations and ii) high-resolution measurement streams of the simulated area during the simulation. Since a simulation is not aware of the deployed sensors, the sensing interface must work without an explicit specification of individual sensors. Instead, the interface must operate only on the geographical region, sensor type, and resolution used by the simulation. 
The challenges in these tasks are to efficiently identify relevant sensors among the large number of sources around the globe, to detect when the current measurements are of relevance, and to scale data stream distribution to a potentially large number of simulations. Furthermore, the process must adapt to complex network structures and dynamic network conditions as found in the Internet. The Global Sensor Grid (GSG) presented in this thesis attempts to close this gap by approaching three core problems: First, a distributed aggregation scheme has been developed which allows for the monitoring of geographic areas for sensor data of interest. The reuse of partial aggregates thereby ensures highly efficient operation and relieves the sensor sources of individually providing numerous clients with measurements. Second, the distribution of data streams at different resolutions is achieved by using a network of brokers which preprocess raw measurements to provide the requested data. The load of high-resolution streams is thereby spread across all brokers in the GSG to achieve scalability. Third, network usage is actively minimized by adapting to the structure of the underlying network. This optimization enables the reduction of redundant data transfers on physical links and a dynamic modification of the data streams to react to changing load situations.
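The reuse of partial aggregates over geographic regions can be sketched as follows. This is a minimal illustration only, not the thesis's actual scheme: it assumes a simple quadtree-like region hierarchy where each node caches a (sum, count) aggregate, so that monitoring queries over regions that fully contain a node reuse the cached partial result instead of re-contacting the sensor sources.

```python
class RegionNode:
    def __init__(self, bounds, children=None, readings=None):
        self.bounds = bounds            # (x0, y0, x1, y1)
        self.children = children or []
        self.readings = readings or []  # leaf-level sensor values
        self._cache = None              # reused partial aggregate

    def aggregate(self):
        """(sum, count) over everything below this node, cached so
        multiple monitoring queries reuse the partial result."""
        if self._cache is None:
            if self.children:
                parts = [c.aggregate() for c in self.children]
                self._cache = (sum(p[0] for p in parts),
                               sum(p[1] for p in parts))
            else:
                self._cache = (sum(self.readings), len(self.readings))
        return self._cache

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def contained(a, b):
    """True if box a lies entirely inside box b."""
    return b[0] <= a[0] and b[1] <= a[1] and a[2] <= b[2] and a[3] <= b[3]

def region_average(node, region):
    """(sum, count) of readings inside `region`, reusing cached
    partial aggregates for nodes fully inside the queried region."""
    if not overlaps(node.bounds, region):
        return (0, 0)
    if contained(node.bounds, region):
        return node.aggregate()         # reuse the partial aggregate
    if node.children:
        parts = [region_average(c, region) for c in node.children]
        return (sum(p[0] for p in parts), sum(p[1] for p in parts))
    return (sum(node.readings), len(node.readings))
```

A monitoring client would compare the resulting average against a threshold to decide whether to trigger a simulation for that region.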
  • Open Access
    Supporting multi-tenancy in Relational Database Management Systems for OLTP-style software as a service applications
    (2015) Schiller, Oliver; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
The consolidation of multiple tenants onto a single relational database management system (RDBMS) instance, commonly referred to as multi-tenancy, has turned out to be beneficial since it improves the profit margin of the provider and allows lowering service fees, whereby the service attracts more tenants. So far, existing solutions create the required multi-tenancy support on top of a traditional RDBMS implementation, i.e., they implement data isolation between tenants, per-tenant customization, and further tenant-centric data management features in application logic. This is complex, error-prone, and often reimplements functionality the RDBMS already offers. Moreover, this approach disables some optimization opportunities in the RDBMS and represents a conceptual misstep with Separation of Concerns in mind. For these reasons, an RDBMS that provides support for the development and operation of a multi-tenant software as a service (SaaS) offering is compelling. In this thesis, we contribute to a multi-tenant RDBMS for OLTP-style SaaS applications by extending a traditional disk-oriented RDBMS architecture with multi-tenancy support. For this purpose, we primarily extend an RDBMS by introducing tenants as first-class database objects and establishing tenant contexts to isolate tenants logically. Using these extensions, we address tenant-aware schema management, for which we present a schema inheritance concept that is tailored to the needs of multi-tenant SaaS applications. Thereafter, we evaluate different storage concepts for a tenant's tuples with respect to their scalability. Next, we contribute an architecture of a multi-tenant RDBMS cluster for OLTP-style SaaS applications. Here, we focus on a partitioning solution which is aligned to tenants and yields independently manageable pieces. 
To balance load in the proposed cluster architecture, we present a live database migration approach, whose design favors low migration overhead and provides minimal interruption of service.
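The schema inheritance idea can be sketched as follows. This is an illustrative model only, not the thesis's implementation: a shared base schema defines the common tables, each tenant's schema inherits them along a parent chain, and a tenant may extend an inherited table with private columns without affecting other tenants. All class and method names are assumptions for this sketch.

```python
class Schema:
    def __init__(self, parent=None):
        self.parent = parent
        self.tables = {}            # table name -> list of column names

    def add_table(self, name, columns):
        self.tables[name] = list(columns)

    def extend_table(self, name, extra_columns):
        """Tenant-local extension of an inherited table."""
        self.tables[name] = self.columns(name) + list(extra_columns)

    def columns(self, name):
        """Resolve a table along the inheritance chain."""
        if name in self.tables:
            return list(self.tables[name])
        if self.parent is not None:
            return self.parent.columns(name)
        raise KeyError(name)

# Shared base schema provided by the SaaS application
base = Schema()
base.add_table("invoice", ["id", "amount", "date"])

tenant_a = Schema(parent=base)                    # inherits 'invoice' as-is
tenant_b = Schema(parent=base)
tenant_b.extend_table("invoice", ["tax_code"])    # per-tenant customization
```

The point of the inheritance chain is that a base-schema change (e.g. a new shared column) is visible to all tenants that have not overridden the table, while overrides stay isolated per tenant.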
  • Open Access
    Position sharing for location privacy in non-trusted systems
    (2015) Skvortsov, Pavel; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h.c.)
Currently, many location-aware applications are available for mobile users of location-based services. Applications such as Google Now, Trace4You or FourSquare are widely used in various environments where privacy is a critical issue for users. A general solution for preserving location privacy for a user is to degrade the quality of his or her position information. In this work, we propose an approach that uses spatial obfuscation to secure the users' position information. When the user's position is revealed with a certain degree of obfuscation, the first crucial issue is the tradeoff between privacy and precision. This tradeoff problem is caused by limited trust in the location service providers: higher obfuscation increases privacy but leads to lower quality of service. We overcome this problem by introducing the position sharing approach. Our main idea is that position information is distributed amongst multiple providers in the form of separate data pieces called position shares. Our approach allows for the usage of non-trusted providers and flexibly manages the user's location privacy level based on probabilistic privacy metrics. In this work, we present the multi-provider based position sharing approach, which includes algorithms for the generation of position shares and share fusion algorithms. The second challenge that must be addressed is that the user's environmental context can significantly decrease the level of obfuscation. For example, a plane, a boat and a car create different requirements for the obfuscated region. Therefore, it is very important to consider map-awareness in selecting the obfuscated areas. We assume that a static map is known to an adversary, which may help in deriving the user's true position. We analyze how map-awareness affects the generation and fusion of position shares, as well as the difference between the map-aware position sharing approach and its open-space-based version. 
Our security analysis shows that the proposed position sharing approach provides good security guarantees for both open-space and constrained-space models. The third challenge is that multiple location servers and/or their providers may have different trustworthiness from the user's point of view. In this case, the user would prefer not to reveal an equal level (precision) of position information to every server. We propose a placement optimization approach that ensures that risk is balanced among the location servers according to their individual trust levels. Our evaluation shows a significant improvement of privacy guarantees after applying the optimized share distribution, in comparison with the equal share distribution. The fourth related problem is the location update algorithm. A high number of different location servers n (corresponding to n privacy levels) may lead to significant communication overhead: each update requires n messages from the mobile user to the location servers, which becomes costly at high update rates. Therefore, we propose an optimized location update algorithm that decreases the number of messages sent without reducing the number of privacy levels or the user's privacy.
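The share generation and fusion idea can be sketched with a deliberately simplified scheme: the true position is hidden behind a coarse "master" position plus a set of refinement shares (offset vectors), and fusing more shares yields higher precision. The concrete offset-based construction below is an illustration of the principle, not the probabilistic algorithm from the thesis.

```python
import random

def generate_shares(x, y, n, max_offset):
    """Split a position into a coarse master position and n
    refinement shares (offset vectors), each stored at a
    different non-trusted provider."""
    offsets = [(random.uniform(-max_offset, max_offset),
                random.uniform(-max_offset, max_offset))
               for _ in range(n)]
    master = (x + sum(dx for dx, _ in offsets),
              y + sum(dy for _, dy in offsets))
    return master, offsets

def fuse(master, shares):
    """Each fused share removes one offset from the master
    position; with all n shares the exact position is recovered,
    with fewer shares residual obfuscation remains."""
    mx, my = master
    return (mx - sum(dx for dx, _ in shares),
            my - sum(dy for _, dy in shares))
```

A client trusted with k < n shares only narrows the position down partially, which is exactly the graded-precision behavior that makes multiple privacy levels possible.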
  • Open Access
    A flexible framework for multi physics and multi domain PDE simulations
    (2015) Müthing, Steffen; Bastian, Peter (Prof. Dr.)
Many important problems in physics and engineering, such as fluid dynamics and continuum mechanics, are modeled using partial differential equations. These problems typically cannot be solved directly, but have to be approximated numerically, a challenging process at both the mathematical and the computer science level. In this work, we present a novel set of software components that facilitate the creation of simulation programs for multi-domain partial differential equation problems. We identify the implementation challenges related to the coupling of multiple spatial domains and their attached physical problems and develop a mathematical framework of clearly defined building blocks that can be used to compose a multi-domain problem by combining single-physics building blocks (which are typically already well understood by application scientists) with additional components that describe the interactions between those subproblems. We introduce an open-source software implementation of these mathematical concepts on top of the well-established DUNE numerics framework. This implementation consists of two major parts: a mechanism to subdivide any existing DUNE mesh into multiple subdomains, and a set of extensions to the high-level partial differential equation solver toolbox PDELab, which make the components of our mathematical framework available within its solvers. Our overall design enables application-level scientists to reuse existing code blocks from single-physics simulations and combine them to solve new multi-domain problems. This new functionality is heavily based on PDELab's recursive tree representation of product function spaces; we replace the internal ad-hoc implementation of these trees with a new C++ library for statically defined, template-based trees of objects. 
As multi-domain problems typically require structured linear algebra solvers that exploit domain decomposition approaches, we develop a mathematical framework for describing the structure of the vectors and matrices generated during the assembly of a partial differential equation problem based on the structure of the underlying function spaces. This framework is implemented in PDELab; it is based on a tree transformation mechanism provided by our tree library. We demonstrate the versatility of our multi-domain simulation components and their impact on developer productivity by means of two model examples; our ultimate goal of simplifying the development of real-world applications is shown by a description of the impact of our software on several external research projects. Finally, we measure the performance impact of our extensions on the existing DUNE framework and discuss the mitigation measures we implemented to reduce any existing performance penalties.
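The idea of deriving vector block structure from a recursive function-space tree can be sketched in a few lines. The real implementation is C++ template metaprogramming inside PDELab; this Python sketch with assumed names (`Leaf`, `Product`, `block_layout`) only illustrates the tree transformation: walking the product-space tree and assigning each leaf space a contiguous (offset, size) slice of the assembled solution vector.

```python
class Leaf:
    """A scalar function space with a fixed number of DOFs."""
    def __init__(self, dofs):
        self.dofs = dofs

class Product:
    """A product (composite) of child function spaces."""
    def __init__(self, *children):
        self.children = children

def block_layout(space, offset=0):
    """Tree transformation: map each leaf space to its
    (offset, size) slice in the assembled solution vector."""
    if isinstance(space, Leaf):
        return [(offset, space.dofs)], offset + space.dofs
    blocks = []
    for child in space.children:
        sub, offset = block_layout(child, offset)
        blocks.extend(sub)
    return blocks, offset

# Stokes-like example: a 2-component velocity space and a pressure space
velocity = Product(Leaf(100), Leaf(100))
stokes = Product(velocity, Leaf(50))
```

A domain-decomposition solver would use such a layout to address the velocity and pressure blocks of the matrix independently.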
  • Open Access
    Model-driven optimizations for public sensing systems
    (2015) Philipp, Damian; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
The proliferation of modern smartphones such as the Apple iPhone or Google Android phones has given rise to Public Sensing, a new paradigm for sensor data acquisition using spare resources of commodity smartphones. As smartphones are already deployed wherever people are present, data collection is enabled at an urban scale. Having access to such a wealth of data facilitates the creation of applications depending on real-world information in a way that may have a lasting impact on our everyday life. However, creating large-scale Public Sensing systems is not without its challenges. On the data requesting side, an interface is required that allows specifying arbitrary sensing tasks independent of the mobility of participants and thus facilitates user acceptance of Public Sensing. On the data gathering side, as many people as possible must participate in the system and thus provide a sufficient amount of data. To this end, the system must conserve the resources shared by participants as much as possible, the main concern being energy: participants will withdraw from the system when participating significantly impacts the battery life of their smartphones. We address the aforementioned issues in the context of two applications: indoor map generation and large-scale environmental data acquisition. In the area of indoor map generation, we first address the problem of building an indoor map directly from odometry traces. In contrast to existing approaches, our focus is to extract the maximum amount of information from trace data without relying on additional features such as WiFi fingerprints. Furthermore, we present an approach to improve indoor maps derived from traces using a formal grammar encoding structural information about similar indoor areas. Using this grammar allows us to extend an incomplete trace-based map to a plausible layout for the entire floor while simultaneously improving the accuracy of floor plan objects observed by odometry traces. 
Our evaluations show that the worst-case accuracy of grammar-based maps is similar to the best-case accuracy of trace-based maps, demonstrating the benefit of the grammar-based approach. To improve the energy efficiency of the mapping process, we furthermore present a generic quality model for trace-based indoor maps. This quality metric is used by a scheduling algorithm that instructs a participating device to disable its energy-intensive sensors while it travels in an area that has already been mapped with high quality, enabling energy savings of up to 15%. In the area of large-scale environmental data acquisition, we first present the concept of virtual sensors as a mobility-independent abstraction layer. Applications configure virtual sensors to report a set of readings at a given sampling rate at a fixed position. The Public Sensing system then selects smartphones near the position of a virtual sensor to provide the actual data readings. Furthermore, we present several optimization approaches geared towards improving the energy efficiency of Public Sensing. In a local optimization, smartphones near each individual virtual sensor coordinate to determine which device should take a reading and thus avoid oversampling the virtual sensor. With its most efficient approaches, the local optimization achieves up to a 99% increase in efficiency while exhibiting only about a 10% decrease in result quality under worst-case conditions. Furthermore, we present a global optimization, where a data-driven model is used to identify the subset of most interesting virtual sensors. Data is obtained from this subset only, while readings for the other virtual sensors are inferred from the model. To this end, we present a set of online learning and control algorithms that can create a model in just hours or even minutes and continuously validate its accuracy. 
Evaluations show that the global optimization can save up to 80% of energy while providing inferred temperature readings within an error bound of 1 °C up to 100% of the time.
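The virtual-sensor abstraction and the local optimization can be sketched as follows. This is an illustrative selection rule only (pick one in-range device, here the nearest); the thesis's actual coordination protocols are more elaborate. The function name and signature are assumptions for this sketch.

```python
import math

def select_reader(virtual_sensor, devices, radius):
    """Local optimization sketch: of all smartphones within `radius`
    of the virtual sensor's fixed position, exactly one device is
    selected to take the reading, avoiding oversampling.

    virtual_sensor: (x, y) fixed position configured by the application
    devices:        dict of device id -> current (x, y) position
    """
    vx, vy = virtual_sensor
    in_range = [(math.hypot(x - vx, y - vy), dev_id)
                for dev_id, (x, y) in devices.items()
                if math.hypot(x - vx, y - vy) <= radius]
    # No device near the virtual sensor: no reading this round
    return min(in_range)[1] if in_range else None
```

Every other in-range device can keep its sensors off for this sampling round, which is where the energy savings of the local optimization come from.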
  • Open Access
    Reducing context uncertainty for robust pervasive workflows
    (2015) Wolf, Hannes; Rothermel, Kurt (Prof. Dr.)
Mobile computing devices equipped with sensors are ubiquitously available today. These platforms provide readings of a multitude of different sensor modalities with fairly high accuracy. However, the lack of associated application knowledge limits the ability to combine this sensor information into accurate high-level context information. This information is required to drive the execution of applications without the need for obtrusive explicit human interaction. A modeled workflow, as a formal representation of a (business) process, can provide structural information on the application. This is especially the case for processes that cover applications with rich human interaction. Processes in the health-care domain are characterized by coarsely predefined recurring procedures that the personnel flexibly adapt to specific situations, and by rich human interaction. In this setting, a workflow management system that gives guidance and documents staff actions can lead to a higher quality of care, fewer mistakes, and higher efficiency. However, most existing workflow management systems enforce rigid, inflexible workflows and rely on direct manual input; both are inadequate for health-care processes. A solution could be activity recognition systems that use sensor data (e.g., from smartphones) to infer the current activities of the personnel and provide input to a workflow (e.g., informing it that a certain activity has just finished). However, state-of-the-art activity recognition technologies have difficulties in providing reliable information. In this thesis we show that a workflow can serve as a source of structural application knowledge for activity recognition and that, conversely, a workflow can be driven by context information in a way that reduces the need for explicit interaction. We describe FlowPal, a comprehensive framework tailored to flexible human-centric processes that improves the reliability of activity recognition data. 
FlowPal's set of mechanisms exploits the application knowledge encoded in workflows in two ways. StarCon increases the accuracy of high-level context events using information from an associated workflow. Fuzzy Event Assignment (FEvA) mitigates errors in sequences of recognized context. In this way, FlowPal enables unobtrusive, robust workflows. We evaluate our work based on a real-world case study situated in the health-care domain and show that the robustness of unobtrusive health-care workflows can be increased. With StarCon we can improve the accuracy of recognized context events by up to 56%. Further, we enable the successful execution of flows for a large range of uncertain context events where a reference system fails. Overall, we achieve an absolute flow completion rate of about 91% (compared to only 12% with a classical workflow system). Our experiments also show that FEvA achieves an event assignment accuracy of 78% to 97% and improves the handling of false-positive, out-of-order, and missed context events.
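The core idea of using workflow structure to correct context events can be sketched in a few lines. This is an illustration of the principle behind StarCon-style correction, not the thesis's algorithm: the activity recognizer proposes ranked candidates, and the set of activities currently enabled by the workflow filters out candidates that cannot occur in the present state. Names (`pick_activity`, the example activities) are assumptions.

```python
def pick_activity(candidates, enabled):
    """candidates: list of (activity, confidence) pairs from the
    activity recognizer. enabled: the set of activities the
    workflow's current state permits. Returns the most confident
    candidate the workflow actually allows, or None."""
    for activity, confidence in sorted(candidates, key=lambda c: -c[1]):
        if activity in enabled:
            return activity
    return None   # no plausible event: defer rather than mis-fire

# Example: the recognizer slightly prefers an activity the workflow
# cannot currently execute; the workflow knowledge corrects it.
ranked = [("take_blood_sample", 0.55), ("measure_blood_pressure", 0.52)]
enabled = {"measure_blood_pressure", "document_results"}
```

Rejecting out-of-state candidates is what turns noisy recognizer output into events a workflow engine can safely consume.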
  • Open Access
    Integration management : a virtualization architecture for adapter technologies
    (2015) Wagner, Ralf; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
Integration management (IM) provides a means of systematically dealing with integration technologies. It abstracts from integration technologies so that software development is shielded from integration tasks. The achieved integration independence significantly eases the maintenance and evolution of IT environments and reduces the overall complexity and costs of IT landscapes.
  • Open Access
    Efficient and secure event correlation in heterogeneous environments
    (2015) Schilling, Björn; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
The importance of managing events has increased steadily over the last years and has reached great magnitude in science and industry. The reasons for this are twofold. On the one hand, sensor devices are cheap and provide event information that is of great interest for a large variety of applications. In fact, sensors are ubiquitous in modern life. Nowadays, RFID tags are attached to goods and parcels to allow easy tracking. Numerous weather stations provide up-to-date information about temperature, pressure, and humidity to allow for precise weather forecasts worldwide. More recently, mobile phones have been equipped with various sensor devices, such as Global Positioning System sensors or acceleration sensors, to increase the applicability of the phone. On the other hand, reacting to events has become an increasingly important factor, especially for business applications. The occurrence of a system failure, a sudden drop in the stock exchange, or a missing parcel can cause huge costs for a company if it is not handled properly. As a consequence, detecting and reacting to events quickly is of great value and has led to a change in the design of modern software systems, where event-driven architectures and service-oriented architectures have become more and more important. With the emerging establishment of event-driven solutions, complex event processing (CEP) has become increasingly important in the context of a wide range of business applications such as supply chain management, manufacturing, or ensuring safety and security. CEP allows applications to asynchronously react to the changing conditions of possibly many business contexts by describing relevant business situations as correlations over many events. Each event corresponds either to a change of a business context or to the occurrence of a relevant business situation. 
This thesis addresses the need to cope with heterogeneity in distributed event correlation systems in order to i) reuse expressive correlation and efficient technology optimized for processing speed, ii) increase scalability by distributing correlation tasks over various correlation engines, iii) allow migration of correlation tasks between heterogeneous engines and security domains, and iv) provide security guarantees among domains in order to increase interoperability, availability and privacy of correlation results. In particular, a framework called DHEP is presented that copes with such requirements.
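A minimal CEP-style correlation can be sketched as a sequence operator: detect every occurrence of event type A followed by event type B within a time window. The consumption semantics below (all pending A instances are consumed by the matching B) are one simplified choice among several that real correlation engines offer; the sketch is not the DHEP operator model.

```python
def detect_sequence(events, first, second, window):
    """events: list of (type, timestamp) pairs in arrival order.
    Returns (t_first, t_second) pairs where an event of type
    `second` follows one of type `first` within `window` time
    units."""
    matches, pending = [], []
    for etype, ts in events:
        if etype == first:
            pending.append(ts)          # open a partial match
        elif etype == second:
            # expire partial matches outside the time window
            pending = [t for t in pending if ts - t <= window]
            matches.extend((t, ts) for t in pending)
            pending = []                # consume the partial matches
    return matches
```

Distributing such operators over several heterogeneous engines, as the thesis proposes, requires agreeing on exactly these semantics (windowing, consumption, ordering) across engine boundaries.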
  • Open Access
    Privacy-aware sharing of location information
    (2015) Wernke, Marius; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h.c.)
Location-based applications such as Foursquare, Glympse, or Waze attract millions of users by implementing points-of-interest finders, geosocial networking, trajectory sharing, or real-time traffic monitoring. An essential requirement for these applications is knowledge of user location information, i.e., the user's position or his movement trajectory. Location-based applications typically act as clients to a location service, which manages mobile object location information in a scalable fashion and provides various clients with this information. However, sharing location information raises user privacy concerns, especially if location service providers are not fully trustworthy and user location information can be exposed. For instance, an attacker successfully compromising a location service may misuse the revealed location information for stalking, mugging, or to derive personal information such as the user's habits, preferences, or interests. User privacy concerns are further intensified by the increasing number of reported incidents in which service providers did not succeed in protecting private user information adequately. Therefore, we present novel approaches that protect user location privacy when sharing location information without assuming location service providers to be fully trustworthy. To protect user position information, we present our position sharing concept. Position sharing reveals only positions of decreased precision to different location services, while clients can query position shares from different location services to increase precision. To protect movement trajectories, we introduce our trajectory fragmentation approach and an approach protecting the speed information of movement trajectories.
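The trajectory fragmentation idea can be sketched as follows. This is only an illustration of the principle, assuming a naive round-robin assignment: the movement trace is cut into consecutive fragments that are distributed across several location services, so no single provider observes the complete trajectory. The fragmentation and assignment strategy of the thesis is more sophisticated.

```python
def fragment_trajectory(points, n_services, fragment_len):
    """Split a trajectory (list of position fixes) into consecutive
    fragments of `fragment_len` points and assign the fragments
    round-robin to `n_services` location service providers."""
    assignment = {i: [] for i in range(n_services)}
    for k in range(0, len(points), fragment_len):
        fragment = points[k:k + fragment_len]
        assignment[(k // fragment_len) % n_services].append(fragment)
    return assignment
```

Each provider ends up with disconnected pieces of the trace; only a client authorized to query all providers can reassemble the full movement trajectory.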
  • Open Access
Collision detection for real-time rigid-body simulations in industrial and service robotics
    (2015) Aichele, Fabian; Levi, Paul (Prof. Dr. rer. nat. habil.)
Mechanically plausible simulation of robots and their work environments is an increasingly important tool in industrial and service robotics for developing and testing new hardware and algorithms. Simulation applications are also often a cost-effective and versatile alternative when procuring real robots is uneconomical, or when hardware and work environment would only become available after long delays. Mechanics simulations are particularly important for use cases in which the direct mechanical interaction of objects with each other or with the work environment itself is the focus, such as grasp planning or the determination of collision-free motion sequences. For which kinds of scenarios the use of mechanics simulations makes sense, and to what extent the capabilities of such simulation tools can adequately substitute for a real work environment, depends both on the technical characteristics of these tools and on the requirements of the respective application domain. The most important criteria are: the geometric precision required or desired for modeling objects in a simulation for the task at hand, the properties and phenomena taken into account when simulating mechanical behavior (for instance, accounting for deformation work or tribological properties), and the ability to run a simulation in or near real time (i.e., within runtime bounds as imposed by the real counterpart of a simulated system). The ability to operate in real time conflicts with the geometric and mechanical precision of a simulation. 
However, it is precisely the combination of these three criteria that is particularly important for scenarios with a high share of mechanical interaction between actuators actively controlled by a user and a simulated work environment: this applies in particular to simulation systems that control simulated robot hardware with the same hardware or software controllers used for the real counterparts of the systems under consideration. To guarantee operation within very short iteration times, a mechanics simulation must handle two subtasks efficiently: checking for contact and intersection between simulated objects in complexly structured three-dimensional scenes (collision detection), and guaranteeing a numerically stable solution of the underlying system of equations from classical mechanics (collision response). Collision detection requires many times the runtime effort of collision response and is accordingly the component of any real-time mechanics simulation with the greatest potential and need for optimization. A focus of this thesis is therefore the combination of existing collision detection approaches while largely avoiding their drawbacks. To this end, starting from the experience of a project study in industrial robotics, the specific requirements for real-time mechanics simulations in this and related disciplines are derived and contrasted with the capabilities and limitations of existing simulation solutions. 
Building on the analysis of existing collision detection methods, the thesis then develops and implements an alternative way of handling this runtime-intensive task based on massively parallel processor architectures, as they are inexpensively available in the form of programmable graphics processors (GPUs).
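The broad-phase stage of collision detection that the abstract refers to can be sketched as a sort-and-sweep over axis-aligned bounding boxes (AABBs): objects are sorted along one axis, and only boxes whose intervals on that axis overlap are tested for full AABB overlap, leaving few candidate pairs for the expensive narrow phase. On a GPU, the sort and the pairwise tests parallelize naturally; the sketch below is sequential and purely illustrative.

```python
def aabb_overlap(a, b):
    """AABBs given as (min_x, min_y, max_x, max_y)."""
    return (a[0] <= b[2] and b[0] <= a[2] and
            a[1] <= b[3] and b[1] <= a[3])

def broad_phase(boxes):
    """Sweep along x: only boxes whose x-intervals overlap are
    tested for full AABB overlap. Returns candidate index pairs
    for the narrow phase."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    candidates = []
    for pos, i in enumerate(order):
        for j in order[pos + 1:]:
            if boxes[j][0] > boxes[i][2]:
                break                   # later boxes start even further right
            if aabb_overlap(boxes[i], boxes[j]):
                candidates.append((min(i, j), max(i, j)))
    return candidates
```

The early `break` is the pruning step: once a box starts beyond the current box's right edge, no box later in the sorted order can overlap it either.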