05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 29
  • Interacting with large high-resolution display workplaces (Open Access)
    (2018) Lischke, Lars; Schmidt, Albrecht (Prof.)
    Large visual spaces provide a unique opportunity to communicate large and complex pieces of information; hence, they have been used for hundreds of years for varied content including maps, public notifications and artwork. Understanding and evaluating complex information will become a fundamental part of any office work. Large high-resolution displays (LHRDs) have the potential to further enhance the traditional advantages of large visual spaces and combine them with modern computing technology, thus becoming an essential tool for understanding and communicating data in future office environments. For successful deployment of LHRDs in office environments, well-suited interaction concepts are required. In this thesis, we build an understanding of how concepts for interaction with LHRDs in office environments could be designed. From the human-computer interaction (HCI) perspective, three aspects are fundamental: (1) the way humans perceive and react to large visual spaces is essential for interaction with content displayed on LHRDs; (2) LHRDs require adequate input techniques; (3) the actual content requires well-designed graphical user interfaces (GUIs) and suitable input techniques. Perception influences how users can perform input on LHRD setups, which sets boundaries for the design of GUIs for LHRDs. Furthermore, the input technique has to be reflected in the design of the GUI. To understand how humans perceive and react to large visual information on LHRDs, we focused on the influence of visual resolution and physical space. We show that increased visual resolution affects the perceived media quality and the perceived effort, and that humans can overview large visual spaces without being overwhelmed. When the display is wider than 2 m, users perceive higher physical effort. When multiple users share an LHRD, they change their movement behavior depending on whether a task is collaborative or competitive. When building LHRDs, consideration must be given to the increased complexity of higher resolutions and physically large displays. Lower screen resolutions provide enough display quality to work efficiently, while larger physical spaces enable users to overview more content without being overwhelmed. To enhance user input on LHRDs for interacting with large information pieces, we built working prototypes and analyzed their performance in controlled lab studies. We showed that eye-tracking-based manual and gaze input cascaded (MAGIC) pointing can enhance pointing to distant targets. MAGIC pointing is particularly beneficial when the interaction involves visual searches between pointing to targets. We contributed two gesture sets for mid-air interaction with window managers on LHRDs and found that gesture elicitation for an LHRD was not affected by legacy bias. We compared shared input on an LHRD through personal tablets, which also functioned as private working spaces, to collaborative data exploration using a single shared input device. The results showed that input with personal tablets lowered the perceived workload. Finally, we showed that variable movement-resistance feedback enhanced one-dimensional data input when no visual input feedback was provided. We concluded that context-aware input techniques enhance the interaction with content displayed on an LHRD; it is therefore essential to provide focus for the visual content and guidance for the user while performing input.
    To understand user expectations of working with LHRDs, we prototyped with potential users how an LHRD work environment could be designed, focusing on the physical screen alignment and the placement of content on the display. Based on previous work, we implemented novel alignment techniques for window management on LHRDs and compared them in a user study. The results show that users prefer techniques that enhance the interaction without breaking well-known desktop GUI concepts. Finally, we provided an example of how an application for browsing scientific publications can benefit from extended display space. Overall, we show that GUIs for LHRDs should support the user more strongly than GUIs for smaller displays in arranging content meaningfully and in managing and understanding large data sets, without breaking well-known GUI metaphors. In conclusion, this thesis adopts a holistic approach to interaction with LHRDs in office environments. Based on enhanced knowledge about user perception of large visual spaces, we discuss novel input techniques for advanced user input on LHRDs. Furthermore, we present guidelines for designing future GUIs for LHRDs. Our work maps out the design space of LHRD workplaces and identifies challenges and opportunities for the development of future office environments.
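    To make the MAGIC pointing idea above concrete, the following is a minimal sketch of the cascade (coarse positioning from gaze, fine positioning from the mouse). The interface and the 200 px warp threshold are illustrative assumptions, not values from the thesis.

        import math
        from dataclasses import dataclass

        @dataclass
        class Point:
            x: float
            y: float

        WARP_THRESHOLD_PX = 200.0  # assumed: warp only for sufficiently distant targets

        class MagicPointer:
            """MAGIC pointing sketch: warp the cursor toward the gaze point when
            a manual pointing gesture starts, then refine with mouse motion."""

            def __init__(self, cursor: Point):
                self.cursor = cursor
                self.gesture_active = False

            def on_mouse_move(self, dx: float, dy: float, gaze: Point) -> Point:
                if not self.gesture_active:
                    self.gesture_active = True
                    # Coarse positioning comes "for free" from where the user looks.
                    if math.hypot(gaze.x - self.cursor.x,
                                  gaze.y - self.cursor.y) > WARP_THRESHOLD_PX:
                        self.cursor = Point(gaze.x, gaze.y)
                # Fine positioning is still done manually with the mouse.
                self.cursor = Point(self.cursor.x + dx, self.cursor.y + dy)
                return self.cursor

            def on_gesture_end(self) -> None:
                self.gesture_active = False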
  • Efficient fault tolerance for selected scientific computing algorithms on heterogeneous and approximate computer architectures (Open Access)
    (2018) Schöll, Alexander; Wunderlich, Hans-Joachim (Prof. Dr.)
    Scientific computing and simulation technology play an essential role in solving central challenges in science and engineering. The high computational power of heterogeneous computer architectures makes it possible to accelerate applications in these domains, which are often dominated by compute-intensive mathematical tasks. Scientific, economic and political decision processes increasingly rely on such applications and therefore induce a strong demand to compute correct and trustworthy results. However, continued semiconductor technology scaling increasingly poses serious threats to the reliability and efficiency of upcoming devices. Different reliability threats can cause crashes or erroneous results without indication. Software-based fault tolerance techniques can protect algorithmic tasks by adding appropriate operations to detect and correct errors at runtime. Major challenges arise from the runtime overhead of such operations and from rounding errors in floating-point arithmetic that can cause false positives. The end of Dennard scaling poses central challenges for further increasing compute efficiency across semiconductor technology generations. Approximate computing exploits the inherent error resilience of different applications to achieve efficiency gains with respect to, for instance, power, energy, and execution times. However, scientific applications often impose strict accuracy requirements that demand careful use of approximation techniques. This thesis provides fault tolerance and approximate computing methods that enable the reliable and efficient execution of linear algebra operations and Conjugate Gradient solvers on heterogeneous and approximate computer architectures. The presented fault tolerance techniques detect and correct errors at runtime with low runtime overhead and high error coverage. At the same time, these fault tolerance techniques are exploited to enable the execution of Conjugate Gradient solvers on approximate hardware by monitoring the underlying error resilience and adjusting the approximation error accordingly. In addition, parameter evaluation and estimation methods are presented that determine the computational efficiency of application executions on approximate hardware. An extensive experimental evaluation shows the efficiency and efficacy of the presented methods with respect to the runtime overhead of detecting and correcting errors, the error coverage, and the energy reduction achieved in executing Conjugate Gradient solvers on approximate hardware.
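    A classical building block behind such software-based fault tolerance is checksum-based error detection for matrix-vector products, the dominant operation in Conjugate Gradient solvers. The sketch below illustrates only this generic idea, not the thesis's specific scheme; the relative tolerance tau is an assumed guard against rounding-induced false positives.

        import numpy as np

        def checked_matvec(A, x, tau=1e-8):
            """Matrix-vector product with a weighted-checksum test (generic ABFT).
            The invariant (w @ A) @ x == w @ (A @ x) must hold up to floating-point
            rounding; tau is an assumed relative tolerance against false positives."""
            w = np.ones(A.shape[0])        # checksum weights
            cA = w @ A                     # row checksum of A (precomputable once)
            y = A @ x
            lhs, rhs = cA @ x, w @ y
            scale = max(abs(lhs), abs(rhs), 1.0)
            if abs(lhs - rhs) > tau * scale:
                raise RuntimeError("fault detected in matrix-vector product")
            return y

        # Inside a CG iteration, A @ p would simply be replaced by checked_matvec(A, p).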
  • Efficient modeling and computation methods for robust AMS system design (Open Access)
    (2018) Gil, Leandro; Radetzki, Martin (Prof. Dr.-Ing.)
    This dissertation addresses the challenge of developing model-based design tools that better support the design of the mixed analog and digital parts of embedded systems. It focuses on the conception of efficient modeling and simulation methods that adequately support emerging system-level design methodologies. Starting with a deep analysis of the design activities, many weak points of today’s system-level design tools were identified. After considering the modeling and simulation of power electronic circuits for designing low-energy embedded systems, a novel signal model that efficiently captures the dynamic behavior of analog and digital circuits is proposed and utilized for the development of computation methods that enable fast and accurate system-level simulation of AMS systems. In order to support a stepwise system design refinement based on the essential system properties, behavior computation methods for linear and nonlinear analog circuits based on the novel signal model are presented and compared with existing numerical and analytical methods for circuit simulation in terms of performance, accuracy and stability. The novel signal model, in combination with the method proposed to efficiently cope with the interaction of analog and digital circuits as well as the new method for digital circuit simulation, is a key contribution of this dissertation because it allows the concurrent state- and event-based simulation of analog and digital circuits. Using a synchronous data flow model of computation for scheduling the execution of the analog and digital model parts, very fast AMS system simulations are carried out. As the best behavior abstraction for analog and digital circuits can be selected without the need to change component interfaces, the implementation, validation and verification of AMS systems take advantage of the novel mixed-signal representation. Changes to the modeling abstraction level do not affect the experiment setup. The second part of this work deals with the robust design of AMS systems and its verification. After defining a mixed-sensitivity-based robustness evaluation index for AMS control systems, a general robust design method leading to optimal controller tuning is presented. To avoid over-conservative AMS system designs, the proposed robust design optimization method considers parametric uncertainty and nonlinear model characteristics. The system properties in the frequency domain needed to evaluate the system robustness during parameter optimization are obtained from the proposed signal model. Further advantages of the presented signal model for the computation of control system performance evaluation indexes in the time domain are also investigated in combination with range arithmetic. A novel approach for capturing parameter correlations in range-arithmetic-based circuit behavior computation is proposed as a step towards a holistic modeling method for the robust design of AMS systems. The modeling and computation methods proposed to improve the support of design methodologies and tools for AMS systems are validated and evaluated in the course of this dissertation, considering many aspects of the modeling, simulation, design and verification of a low-power embedded system implementing Adaptive Voltage and Frequency Scaling (AVFS) for energy saving.
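    As a toy illustration of coupling a state-based analog part with an event-based digital part (far simpler than the signal model and synchronous data flow scheduling proposed in the dissertation), consider an RC stage driving a digital comparator; all component values and names are illustrative assumptions.

        def simulate(t_end=1e-2, dt=1e-5, r=1e3, c=1e-6, v_th=0.5, v_in=1.0):
            """Lockstep co-simulation toy: analog RC stage + digital comparator."""
            v = 0.0            # analog state: capacitor voltage
            out = 0            # digital state: comparator output
            events = []        # digital events as (time, new output value)
            t = 0.0
            while t < t_end:
                # Analog part: one explicit Euler step of dv/dt = (v_in - v) / (R*C)
                v += dt * (v_in - v) / (r * c)
                # Digital part: reacts only when the threshold is crossed (an event)
                new_out = 1 if v > v_th else 0
                if new_out != out:
                    out = new_out
                    events.append((t, out))
                t += dt
            return events

        print(simulate())  # one rising event near t = R*C*ln(2), i.e. about 0.69 ms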
  • A massively parallel combination technique for the solution of high-dimensional PDEs (Open Access)
    (2018) Heene, Mario; Pflüger, Dirk (Jun.-Prof. Dr.)
    The solution of high-dimensional problems, especially high-dimensional partial differential equations (PDEs) that require the joint discretization of more than the usual three spatial dimensions and time, is one of the grand challenges in high performance computing (HPC). Due to the exponential growth of the number of unknowns (the so-called curse of dimensionality), it is in many cases not feasible to resolve the simulation domain as finely as required by the physical problem. Although the upcoming generation of exascale HPC systems theoretically provides the computational power to handle simulations that are out of reach today, it is expected that this is only achievable with new numerical algorithms that can efficiently exploit the massive parallelism of these systems. The sparse grid combination technique is a numerical scheme in which the problem (e.g., a high-dimensional PDE) is solved on different coarse and anisotropic computational grids (so-called component grids), which are then combined to approximate the solution at a much higher target resolution than that of any individual component grid. This way, the total number of unknowns being computed is drastically reduced compared to solving the problem directly on a regular grid with the target resolution. Thus, the curse of dimensionality is mitigated. The combination technique is a promising approach to solving high-dimensional problems on future exascale systems. It offers two levels of parallelism: the component grids can be computed in parallel, independently and asynchronously of each other; and the computation of each component grid can be parallelized as well. This reduces the demand for global communication and synchronization, which is expected to be one of the limiting factors for classical discretization techniques in achieving scalability on exascale systems. Furthermore, the combination technique enables novel approaches to deal with the increasing fault rates expected from these systems. With the fault-tolerant combination technique it is possible to recover from failures without time-consuming checkpoint-restart mechanisms. In this work, new algorithms and data structures are presented that enable a massively parallel and fault-tolerant combination technique for time-dependent PDEs on large-scale HPC systems. The scalability of these algorithms is demonstrated on up to 180,225 processor cores on the supercomputer Hazel Hen. Furthermore, the parallel combination technique is applied to gyrokinetic simulations in GENE, a software package for the simulation of plasma microturbulence in fusion devices.
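    The classical combination formula underlying this scheme assigns the coefficient (-1)^q * C(d-1, q) to every component grid with level sum |l|_1 = n - q, for q = 0, ..., d-1 (one common indexing convention; conventions differ by an offset). The sketch below enumerates these grids and coefficients for the textbook combination technique, not the fault-tolerant parallel variant developed in the thesis.

        from itertools import product
        from math import comb

        def combination_grids(dim, n, l_min=1):
            """Component grids of the classical combination technique: every
            level vector l with l_min <= l_i and |l|_1 = n - q receives the
            coefficient (-1)**q * C(dim - 1, q), for q = 0 .. dim - 1."""
            grids = {}
            for q in range(dim):
                coeff = (-1) ** q * comb(dim - 1, q)
                for level in product(range(l_min, n + 1), repeat=dim):
                    if sum(level) == n - q:
                        grids[level] = coeff
            return grids

        # In 2D with n = 4: coefficient +1 on the diagonal |l|_1 = 4 and -1 on
        # |l|_1 = 3. Each anisotropic component grid is cheap, yet their
        # combination approximates a much finer regular grid.
        for level, coeff in sorted(combination_grids(2, 4).items()):
            print(level, coeff)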
  • Interactive web-based visualization (Open Access)
    (2018) Mwalongo, Finian
    The visualization of large amounts of data, which cannot easily be copied for processing on a user’s local machine, is not yet a fully solved problem. Remote visualization represents one possible solution approach and has long been an important research topic. Depending on the device used, modern hardware such as high-performance GPUs is sometimes not available, which is another reason for using remote visualization. Additionally, due to the growing global networking and collaboration among research groups, collaborative remote visualization solutions are becoming more important. The attractiveness of web-based remote visualization is greatly increased by the wide availability of web browsers, which today run on almost all devices, from desktop computers to smartphones. In order to ensure interactivity, network bandwidth and latency are the biggest challenges that web-based visualization algorithms have to overcome. Although available bandwidth improves steadily, it still grows significantly more slowly than, for example, processor performance, so the impact of this bottleneck keeps increasing. For example, the visualization of large dynamic data in low-bandwidth environments can be challenging because it requires continuous data transfer. Moreover, improving bandwidth alone cannot reduce latency, which is also affected by factors such as the distance between server and client and network utilization. To overcome these challenges, a combination of techniques is needed to customize the individual processing steps of the visualization pipeline, from efficient data representation to hardware-accelerated rendering on the client side. This thesis first reviews related work in the field of remote visualization, with a particular focus on interactive web-based visualization, and then presents techniques for interactive visualization in the browser using modern web standards such as WebGL and HTML5. These techniques enable the visualization of dynamic molecular data sets with more than one million atoms at interactive frame rates using GPU-based ray casting. Due to the limitations of a browser-based environment, the concrete implementation of GPU-based ray casting had to be adapted. An evaluation of the resulting performance shows that GPU-based techniques enable the interactive rendering of large data sets and achieve higher image quality than polygon-based techniques. In order to reduce data transfer times and network latency and to improve rendering speed, efficient approaches for data representation and transmission are used. Furthermore, this thesis introduces a GPU-based volume ray marching technique based on WebGL 2.0, which uses progressive brick-wise data transfer as well as multiple levels of detail to achieve interactive volume rendering of data sets stored on a server. The concepts and results presented in this thesis contribute to the further spread of interactive web-based visualization. The algorithmic and technological advances that have been achieved form a basis for the further development of interactive browser-based visualization applications. At the same time, this approach has the potential to enable future collaborative visualization in the cloud.
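    The per-fragment core of GPU-based sphere ray casting, as used for rendering atoms, is an analytic ray-sphere intersection evaluated in the fragment shader instead of rasterizing tessellated geometry. A sketch of that math follows, written in Python for readability; in the thesis context it would live in a WebGL/GLSL shader.

        import math

        def ray_sphere_hit(origin, direction, center, radius):
            """Analytic ray-sphere intersection, the per-fragment computation in
            GPU-based sphere ray casting. Returns the nearest positive hit
            distance t along the ray, or None if the ray misses the sphere."""
            ox, oy, oz = (origin[i] - center[i] for i in range(3))
            dx, dy, dz = direction
            a = dx * dx + dy * dy + dz * dz
            b = 2.0 * (ox * dx + oy * dy + oz * dz)
            c = ox * ox + oy * oy + oz * oz - radius * radius
            disc = b * b - 4.0 * a * c
            if disc < 0.0:
                return None                  # shader would discard this fragment
            t = (-b - math.sqrt(disc)) / (2.0 * a)
            return t if t > 0.0 else None

        # A ray looking down the z-axis hits the unit sphere at depth 4.0; the
        # hit point and its normal then yield per-pixel depth and shading.
        print(ray_sphere_hit((0, 0, 5), (0, 0, -1), (0, 0, 0), 1.0))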
  • Coupling schemes and inexact Newton for multi-physics and coupled optimization problems (Open Access)
    (2018) Scheufele, Klaudius; Mehl, Miriam (Prof. Dr.)
    This work targets mathematical solutions and software for complex numerical simulation and optimization problems. They are characterized by the combination of different models and software modules and by the need for massively parallel execution on supercomputers. We consider two different types of multi-component problems in Part I and Part II of the thesis: (i) surface-coupled fluid-structure interactions and (ii) analysis of medical MR imaging data of brain tumor patients. In (i), we establish highly accurate simulations by combining different aspects such as fluid flow and arterial wall deformation in hemodynamics simulations, or fluid flow, heat transfer and mechanical stresses in cooling systems. For (ii), we focus on (a) facilitating the transfer of information such as functional brain regions from a statistical healthy atlas brain to the individual patient brain (which is topologically different due to the tumor), and (b) allowing for patient-specific tumor progression simulations based on the estimation of biophysical parameters via inverse tumor growth simulation (given only a single snapshot in time). Applications and specific characteristics of both problems are very distinct, yet both are hallmarked by strong inter-component relations and result in formidable, very large, coupled systems of partial differential equations. Part I targets robust and efficient quasi-Newton methods for black-box surface coupling of partitioned fluid-structure interaction simulations. The partitioned approach allows for great flexibility and exchangeability of sub-components. However, breaking up multi-physics into single components requires advanced coupling strategies to ensure correct inter-component relations and to effectively tackle instabilities. Due to the black-box paradigm, solver internals are hidden and information exchange is reduced to input/output relations. We develop advanced quasi-Newton methods that effectively establish the equation coupling of two (or more) solvers based on solving a non-linear fixed-point equation at the interface. Established state-of-the-art methods fall short by either requiring costly tuning of problem-dependent parameters or becoming infeasible for large-scale problems. In developing parameter-free, linear-complexity alternatives, we lift the robustness and parallel scalability of quasi-Newton methods for partitioned surface-coupled multi-physics simulations to a new level. The developed methods are implemented in the parallel, general-purpose coupling tool preCICE. Part II targets MR image analysis of glioblastoma multiforme pathologies and patient-specific simulation of brain tumor progression. We apply a joint medical image registration and biophysical inversion strategy, aiming to facilitate diagnosis, aid and support surgical planning, and improve the efficacy of brain tumor therapy. We propose two problem formulations and decompose the resulting large-scale, highly non-linear and non-convex PDE-constrained optimization problem into two tightly coupled problems: inverse tumor simulation and medical image registration. We deduce a novel, modular Picard iteration-type solution strategy. We are the first to successfully solve the inverse tumor-growth problem based on a single patient snapshot with a gradient-based approach. We present the joint inversion framework SIBIA, which scales to very high image resolutions and parallel execution on tens of thousands of cores.
    We apply our methodology to synthetic and actual clinical data sets, achieve excellent normal-to-abnormal registration quality, and present a proof of concept for a very promising strategy to obtain clinically relevant biophysical information. Advanced inexact Newton methods are an essential tool for both parts. We connect the two parts by pointing out, in unified notation, commonalities and differences between the variants used in the two communities.
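    The interface quasi-Newton idea of Part I solves a fixed-point equation H(x) = x using only input/output pairs of the black-box solvers. The following is a generic least-squares sketch of this method family (in the spirit of IQN-ILS and Anderson acceleration) on a toy problem; it is an illustration of the method class, not the preCICE implementation, and the memory depth of 5 is an assumed parameter.

        import numpy as np

        def quasi_newton_fixed_point(H, x0, max_iter=50, tol=1e-10, memory=5):
            """Least-squares quasi-Newton for the interface equation H(x) = x,
            using only black-box input/output pairs (IQN-ILS / Anderson family)."""
            x = np.asarray(x0, dtype=float)
            V, W = [], []                  # differences of residuals / outputs
            r_prev = xt_prev = None
            for _ in range(max_iter):
                xt = H(x)                  # one coupled black-box solver call
                r = xt - x                 # fixed-point residual
                if np.linalg.norm(r) < tol:
                    break
                if r_prev is not None:
                    V.append(r - r_prev)
                    W.append(xt - xt_prev)
                    V, W = V[-memory:], W[-memory:]       # limited memory
                r_prev, xt_prev = r, xt
                if V:
                    # Secant information: find alpha minimizing || V alpha + r ||,
                    # then move the solver output by the corresponding W alpha.
                    alpha, *_ = np.linalg.lstsq(np.column_stack(V), -r, rcond=None)
                    x = xt + np.column_stack(W) @ alpha
                else:
                    x = xt                 # plain fixed-point step to start
            return x

        # Toy example: cos has the fixed point x ~ 0.7390851 (the Dottie number).
        print(quasi_newton_fixed_point(np.cos, np.array([0.0])))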
  • Vision-based methods for evaluating visualizations (Open Access)
    (2018) Netzel, Rudolf; Weiskopf, Daniel (Prof. Dr.)
  • An expressive formal model of the web infrastructure (Open Access)
    (2018) Fett, Daniel; Küsters, Ralf (Prof. Dr.)
    The World Wide Web is arguably the most important medium of our time. Billions of users rely on the security of the web each day for tasks such as banking, shopping, and business and private communication. The web is a heterogeneous infrastructure developing at a high pace. The question of whether the web infrastructure or certain web applications are secure is not easy to answer. Standards and applications today are reviewed by experts before they are deployed, but all too often even serious security vulnerabilities are simply overlooked. In this thesis, we propose a formal model of the web infrastructure which enables a rigorous formal analysis of security and privacy on the web. Our model is the most comprehensive and expressive model of the web infrastructure to date. It facilitates accurate security and privacy analyses of current web standards and applications, and can serve as a reference for web security researchers, for developers of new technologies and standards, and for teaching web security concepts. As a case study, we analyze the security of two important standards for federated authorization and authentication, OAuth 2.0 and OpenID Connect. Standardized by the IETF and the OpenID Foundation, respectively, they are among the most widely deployed single sign-on systems on the web. For our analysis, we develop detailed formal models of both systems based on our model of the web infrastructure. These models then allow us to precisely define the security goals of authentication, authorization and session integrity. While proving security with respect to these goals, we found a total of five new attacks on the two single sign-on systems, breaking all of the security goals. In particular, OAuth 2.0 had been analyzed many times before; the fact that we were able to find new attacks on it demonstrates the potential of rigorous analyses in our web infrastructure model. We develop fixes for the underlying vulnerabilities and are then able to prove the security of OAuth 2.0 and OpenID Connect. Since our results are based on a comprehensive model, our proofs exclude large classes of attacks against OAuth and OpenID Connect, including as-yet-unknown attack vectors. Our attacks and fixes led to the development of new security recommendations by the standardization organizations.
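    For orientation, the analyzed OAuth 2.0 authorization code grant is, at its core, a redirect-based exchange in which a state value binds the authorization response to the user's session; missing or incorrect state checks are one known source of session-integrity attacks. Below is a hedged client-side sketch; the endpoint URLs, credentials, and session handling are illustrative placeholders, not part of the thesis's formal model.

        import secrets
        from urllib.parse import urlencode

        # Placeholder endpoints and credentials (illustrative assumptions).
        AUTH_ENDPOINT = "https://idp.example/authorize"
        TOKEN_ENDPOINT = "https://idp.example/token"
        CLIENT_ID, CLIENT_SECRET = "my-client", "my-secret"
        REDIRECT_URI = "https://rp.example/callback"

        def build_authorization_request(session: dict) -> str:
            # The state value ties the later redirect back to this session;
            # verifying it on return is essential for session integrity.
            session["oauth_state"] = secrets.token_urlsafe(32)
            params = {
                "response_type": "code",
                "client_id": CLIENT_ID,
                "redirect_uri": REDIRECT_URI,
                "state": session["oauth_state"],
            }
            return f"{AUTH_ENDPOINT}?{urlencode(params)}"

        def handle_callback(session: dict, query: dict) -> dict:
            # Reject the authorization response unless state matches.
            expected = session.get("oauth_state")
            if not expected or not secrets.compare_digest(query.get("state", ""),
                                                          expected):
                raise PermissionError("state mismatch: rejecting response")
            return {  # token request body, sent server-to-server to TOKEN_ENDPOINT
                "grant_type": "authorization_code",
                "code": query["code"],
                "redirect_uri": REDIRECT_URI,
                "client_id": CLIENT_ID,
                "client_secret": CLIENT_SECRET,
            }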
  • Animated surfaces in physically-based simulation (Open Access)
    (2018) Huber, Markus; Weiskopf, Daniel (Prof. Dr.)
    Physics-based animation has become a ubiquitous element in all application areas of computer animation, especially in the entertainment sector. Animation and feature films, video games, and advertisements contain visual effects using physically-based simulation that blend in seamlessly with animated or live-action productions. When simulating deformable materials and fluids, especially liquids, objects are usually represented by animated surfaces. The visual quality of these surfaces depends not only on the actual properties of the surface itself but also on its generation and its relation to the underlying simulation. This thesis focuses on surfaces of cloth simulations and of fluid simulations based on Smoothed Particle Hydrodynamics (SPH), and contributes to improving the creation of animations by specifying surface shapes, modeling contact of surfaces, and evaluating surface effects of fluids. In many applications, a reference for a surface animation is given in terms of its shape. Matching a given reference with a simulation is a challenging task, and similarity is often determined by visual inspection. The first part of this thesis presents a signature for cloth animations that captures characteristic shapes and their temporal evolution. It combines geometric features with physical properties to accurately represent the typical deformation behavior. The signature enables calculating similarities between animations and is applied to retrieve cloth animations from collections by example. Interactions between particle-based fluids and deformable objects are usually modeled by sampling the deformable objects with particles. When interacting with cloth, however, this would require resampling the surface under large planar deformations, and the thickness of the cloth would be bound to the particle size. This problem is addressed in this thesis by a two-way coupling technique for cloth and fluids based on the simulation mesh of the textile. It allows robust contact handling and intuitive control of boundary conditions. Further, a solution for intersection-free fluid surface reconstruction at contact with thin flexible objects is presented. The visual quality of particle-based fluid animation depends highly on the properties of the reconstructed surface. An important aspect of the reconstruction method is that it accurately represents the underlying simulation. This thesis presents an evaluation of surfaces at interfaces of SPH simulations that incorporates the connection to the simulation model. A typical approach from computer graphics is compared to surface reconstruction methods used in materials science. The behavior of free surfaces in fluid animations is highly influenced by surface tension. This thesis presents an evaluation of three types of surface tension models in combination with different pressure force models for SPH to identify the individual characteristics of these models. Systematic tests using a set of benchmark scenes are performed to reveal strengths and weaknesses, and possible areas of application.
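    As background on the SPH-based parts: field quantities are interpolated from neighboring particles with a smoothing kernel, and the sketch below estimates density with the standard cubic spline kernel. This is a generic textbook sketch (with an O(n^2) neighbor loop for brevity), not the solver used in the thesis.

        import math

        def cubic_spline_kernel(r, h):
            """Standard cubic spline smoothing kernel W(r, h) in 3D."""
            q = r / h
            sigma = 8.0 / (math.pi * h ** 3)       # 3D normalization factor
            if q <= 0.5:
                return sigma * (6.0 * (q ** 3 - q ** 2) + 1.0)
            if q <= 1.0:
                return sigma * 2.0 * (1.0 - q) ** 3
            return 0.0

        def sph_density(positions, masses, h):
            """Density at each particle: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
            return [
                sum(mj * cubic_spline_kernel(math.dist(xi, xj), h)
                    for xj, mj in zip(positions, masses))
                for xi in positions
            ]

        # Two unit-mass particles half a smoothing length apart:
        print(sph_density([(0.0, 0.0, 0.0), (0.05, 0.0, 0.0)], [1.0, 1.0], h=0.1))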
  • System-theoretic safety analysis in agile software development (Open Access)
    (2018) Wang, Yang; Wagner, Stefan (Prof. Dr.)
    Agile software development (ASD) has enjoyed a good reputation for a number of years due to higher customer satisfaction, lower defect rates, faster development times, and its ability to accommodate rapidly changing requirements. Consequently, ASD has attracted interest from safety-critical industries facing fast-changing markets and emerging customised requirements. However, applying ASD to develop safety-critical systems (SCS) is controversial. Most practitioners in SCS prefer traditional development processes together with a standardised safety assurance process that satisfies norms such as IEC 61508. Existing research strives for consistency between ASD and such norms, or for hybrid models. However, traditional safety assurance does not work well without a stable architecture, and ASD has a constantly changing architecture. This makes the integration of traditional safety assurance into ASD a bottleneck, especially the execution of safety analysis. In this dissertation, we propose a process model called S-Scrum, which integrates System-Theoretic Process Analysis (STPA) to cope with changing architectures when using ASD to develop SCS.