Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Search Results

Now showing 1 - 10 of 32
  • Visualization challenges in distributed heterogeneous computing environments (Open Access)
    (2015) Panagiotidis, Alexandros; Ertl, Thomas (Prof. Dr.)
    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more details and higher resolutions in visualizations. For some years now, the prevailing trend for these large systems has been the utilization of additional processors, like graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, like higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches often entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users become more interested in resilience besides traditional aspects like performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and for visualization in heterogeneous computing environments. The first approach hides the details of the different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing and for simplifying order-independent transparency in distributed visualization. Traditional methods for fault tolerance in high-performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and tuning of volume visualization are evaluated. Challenges in dense graphs, such as visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. For the first time, this thesis takes a broader look at the issues of distributed visualization on large displays and in heterogeneous computing environments. While the presented approaches each solve individual challenges and are successfully employed in this context, their joint utility forms a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
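    To make the per-pixel linked-list idea from this abstract concrete, the following Python sketch keeps one linked list of fragments per pixel and resolves it by depth-sorting and front-to-back blending. The class names and the simple blend are illustrative assumptions, not the thesis implementation (which targets GPUs and distributed compositing):

      from dataclasses import dataclass

      @dataclass
      class Fragment:
          depth: float              # distance from the viewer
          rgba: tuple               # (r, g, b, a), each in [0, 1]
          next: "Fragment" = None   # link to this pixel's next fragment

      class PixelList:
          """One linked list of fragments per pixel; insertion is O(1)."""
          def __init__(self):
              self.head = None

          def insert(self, depth, rgba):
              self.head = Fragment(depth, rgba, self.head)

          def resolve(self, background=(0.0, 0.0, 0.0)):
              """Sort fragments by depth and blend front to back."""
              frags = []
              node = self.head
              while node:
                  frags.append(node)
                  node = node.next
              frags.sort(key=lambda f: f.depth)   # nearest first
              color = [0.0, 0.0, 0.0]
              transmittance = 1.0
              for f in frags:
                  r, g, b, a = f.rgba
                  for i, c in enumerate((r, g, b)):
                      color[i] += transmittance * a * c
                  transmittance *= (1.0 - a)
              return tuple(color[i] + transmittance * background[i] for i in range(3))

      # Two semi-transparent fragments, e.g. arriving from different render nodes:
      pixel = PixelList()
      pixel.insert(2.0, (0.0, 0.0, 1.0, 0.5))   # blue, farther away
      pixel.insert(1.0, (1.0, 0.0, 0.0, 0.5))   # red, nearer
      print(pixel.resolve())                    # -> (0.5, 0.0, 0.25)

    Because insertion order does not matter and sorting happens only at resolve time, fragments produced by independent render nodes can be merged into the same lists before a single compositing pass, which is what makes the structure attractive for order-independent transparency.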
  • Distributed stream processing in a global sensor grid for scientific simulations (Open Access)
    (2015) Benzing, Andreas; Rothermel, Kurt (Prof. Dr. rer. nat.)
    With today's large number of sensors available all around the globe, an enormous amount of measurements has become available for integration into applications. Scientific simulations of environmental phenomena in particular can greatly benefit from detailed information about the physical world. The challenge in integrating sensor data into simulations is to automate both the monitoring of geographical regions for interesting data and the provision of continuous data streams from the identified regions. Current simulation setups use hard-coded information about sensors or even manual data transfer using external memory to bring data from sensors to simulations. This solution is very robust, but adding new sensors to a simulation requires manually setting up the sensor interaction and changing the source code of the simulation, thereby incurring extremely high costs. Manual transmission allows an operator to drop obvious outliers but prohibits real-time operation due to the long delay between measurement and simulation. For more generic applications that operate on sensor data, these problems have been partially solved by approaches that decouple the sensing from the application, thereby allowing the sensing process to be automated. However, these solutions focus on small-scale wireless sensor networks rather than the global scale and therefore optimize for the lifetime of these networks instead of providing high-resolution data streams. In order to provide sensor data for scientific simulations, two tasks are required: i) continuous monitoring of sensors to trigger simulations and ii) high-resolution measurement streams of the simulated area during the simulation. Since a simulation is not aware of the deployed sensors, the sensing interface must work without an explicit specification of individual sensors. Instead, the interface must rely only on the geographical region, the sensor type, and the resolution used by the simulation. The challenges in these tasks are to efficiently identify relevant sensors among the large number of sources around the globe, to detect when the current measurements are of relevance, and to scale data stream distribution to a potentially large number of simulations. Furthermore, the process must adapt to complex network structures and dynamic network conditions as found in the Internet. The Global Sensor Grid (GSG) presented in this thesis attempts to close this gap by approaching three core problems: First, a distributed aggregation scheme has been developed which allows geographic areas to be monitored for sensor data of interest. The reuse of partial aggregates ensures highly efficient operation and relieves the sensor sources of individually providing numerous clients with measurements. Second, the distribution of data streams at different resolutions is achieved by a network of brokers which preprocess raw measurements to provide the requested data. The load of high-resolution streams is spread across all brokers in the GSG to achieve scalability. Third, network usage is actively minimized by adapting to the structure of the underlying network. This optimization reduces redundant data transfers on physical links and allows the data streams to be modified dynamically in reaction to changing load situations.
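    The reuse of partial aggregates mentioned above can be sketched as follows: sensor readings are aggregated once per grid cell, and any number of region queries then combine the same per-cell partials instead of contacting the sensors again. The grid layout and the (count, sum) aggregate are illustrative assumptions, not the GSG's actual scheme:

      class GridAggregator:
          """Aggregates readings per grid cell so that clients monitoring
          overlapping regions share the same partial aggregates."""

          def __init__(self, cell_size=1.0):
              self.cell_size = cell_size
              self.partials = {}                 # (cx, cy) -> [count, sum]

          def _cell(self, lon, lat):
              return (int(lon // self.cell_size), int(lat // self.cell_size))

          def report(self, lon, lat, value):
              cell = self.partials.setdefault(self._cell(lon, lat), [0, 0.0])
              cell[0] += 1
              cell[1] += value

          def average(self, lon_min, lat_min, lon_max, lat_max):
              """Average over a region by combining the per-cell partials."""
              count, total = 0, 0.0
              for (cx, cy), (n, s) in self.partials.items():
                  if lon_min <= cx * self.cell_size < lon_max and \
                     lat_min <= cy * self.cell_size < lat_max:
                      count += n
                      total += s
              return total / count if count else None

      grid = GridAggregator()
      grid.report(9.1, 48.7, 21.5)   # e.g. temperature readings near Stuttgart
      grid.report(9.2, 48.8, 19.5)
      print(grid.average(9.0, 48.0, 10.0, 49.0))   # -> 20.5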
  • Neue Methoden und Techniken für die Evaluation von Visualisierungen [New methods and techniques for the evaluation of visualizations] (Open Access)
    (2015) Raschke, Michael; Ertl, Thomas (Prof. Dr.)
    Visualizations surround us in everyday life and at work, presenting abstract information and helping us understand complex relationships. While the development of visualization techniques has so far focused on the question of how as much data as possible can be displayed in as little time and at as high a resolution as possible, the question of whether a visualization is also useful and easy to read has gained importance in visualization research in recent years. To answer this question comprehensively, the goal of this thesis was to develop new methods and techniques for studying the perception of visualizations and for evaluating visualization techniques. To this end, an interdisciplinary approach was chosen that combines three research fields: eye tracking, knowledge representation, and cognitive science. Eye-tracking experiments were used to analyze gaze behavior when working with visualizations. The representation of visual knowledge makes it possible to study semantic properties of scan paths. Simulation methods from cognitive science make it possible to predict gaze behavior. In visualization research, eye-tracking experiments are used to record the eye movements of participants performing tasks with visualizations. The subsequent analysis of these eye movements takes up a considerable and easily underestimated share of the time spent evaluating this kind of experiment. To reduce the effort of analyzing these scan paths and to identify similar eye-movement patterns across participants, the parallel scan-path visualization technique was developed, which provides a clear representation of multiple scan paths. It allows reading strategies for visualizations to be recognized and compared across several participants. The parallel scan-path visualization was additionally extended with automatic pattern-recognition methods. This so-called visual analytics approach makes it possible to compare scan paths quantitatively and leads to an efficient analysis of very large eye-tracking data sets. To model knowledge about visualizations, a knowledge model with three levels was developed. Each level describes, in the form of an ontology, a different level of abstraction of knowledge about visualizations and the graphical elements they contain. Elements from these ontologies are linked to particular regions of a visualization or to individual graphical elements within visualizations. This approach makes it possible not only to analyze, as before, which regions of a visualization on a screen were viewed in which order (the "where" space), but also which graphical elements were perceived there (the "what" space) and how they were processed cognitively. It is shown how, based on this annotation, knowledge-processing processes can be visualized with the parallel scan-path visualization technique. In this way, regions of visualizations that may lead to cognitive bias can also be identified and examined further in detail. For the simulation of visual search, a simulation based on the cognitive simulation framework ACT-R was developed that simulates reading processes in visualizations and allows them to be compared with empirically collected data. In addition, this thesis presents, for the first time, an operator-based model for predicting the completion times of visual tasks. This operator-based diagram-viewing model adopts the concept of the keystroke-level model known from human-computer interaction research and extends it to the prediction of completion times of visual tasks. Besides increasing the efficiency of evaluating eye-tracking experiments, combining the visual analysis of scan paths with ontology-based knowledge models leads to a deeper understanding of how visualizations are read. Semantic characteristics of scan paths can be studied more thoroughly, and the probability of cognitive biases when working with visualizations can be reduced by suitably adapting the visualization concept. Overall, the methods and techniques presented in this work can lead to a more user-oriented, iterative development process for visualizations, in which results from eye-tracking analyses or from simulations are used to study how visualizations are perceived by different user groups.
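    The quantitative comparison of scan paths mentioned in this abstract can be illustrated with a standard string-edit distance over areas of interest (AOIs). This is a common baseline in eye-tracking research, not the thesis's specific pattern-recognition method:

      def edit_distance(a, b):
          """Levenshtein distance between two AOI sequences (single-row DP)."""
          dp = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              prev, dp[0] = dp[0], i
              for j, cb in enumerate(b, 1):
                  prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                           dp[j - 1] + 1,      # insertion
                                           prev + (ca != cb))  # substitution
          return dp[-1]

      # Scan paths of two participants, encoded as sequences of fixated AOIs:
      participant_1 = ["legend", "axis_x", "plot", "plot", "legend"]
      participant_2 = ["axis_x", "plot", "legend"]
      print(edit_distance(participant_1, participant_2))   # -> 2

    A small distance between two AOI sequences indicates similar reading strategies, which is exactly the kind of pattern the parallel scan-path visualization is meant to surface across participants.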
  • Supporting multi-tenancy in Relational Database Management Systems for OLTP-style software as a service applications (Open Access)
    (2015) Schiller, Oliver; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
    The consolidation of multiple tenants onto a single relational database management system (RDBMS) instance, commonly referred to as multi-tenancy, has proven beneficial since it improves the provider's profit margin and allows lowering service fees, whereby the service attracts more tenants. So far, existing solutions create the required multi-tenancy support on top of a traditional RDBMS implementation, i.e., they implement data isolation between tenants, per-tenant customization, and further tenant-centric data management features in application logic. This is complex, error-prone, and often reimplements functionality the RDBMS already offers. Moreover, this approach disables some optimization opportunities in the RDBMS and represents a conceptual misstep with respect to separation of concerns. For these reasons, an RDBMS that provides support for the development and operation of a multi-tenant software as a service (SaaS) offering is compelling. In this thesis, we contribute to a multi-tenant RDBMS for OLTP-style SaaS applications by extending a traditional disk-oriented RDBMS architecture with multi-tenancy support. For this purpose, we primarily extend an RDBMS by introducing tenants as first-class database objects and establishing tenant contexts to isolate tenants logically. Using these extensions, we address tenant-aware schema management, for which we present a schema inheritance concept that is tailored to the needs of multi-tenant SaaS applications. Thereafter, we evaluate different storage concepts for storing a tenant's tuples with respect to their scalability. Next, we contribute an architecture for a multi-tenant RDBMS cluster for OLTP-style SaaS applications. Here, we focus on a partitioning solution that is aligned to tenants and yields independently manageable pieces. To balance load in the proposed cluster architecture, we present a live database migration approach whose design favors low migration overhead and provides minimal interruption of service.
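    The schema inheritance concept can be sketched as follows: each tenant's schema starts from the shared application schema and may override or extend parts of it, with lookups falling back to the parent. The class and method names are illustrative assumptions, not the thesis's actual database objects:

      class Schema:
          def __init__(self, name, parent=None):
              self.name = name
              self.parent = parent
              self.tables = {}            # table name -> list of column names

          def define_table(self, table, columns):
              self.tables[table] = columns

          def resolve(self, table):
              """Walk up the inheritance chain until the table is found."""
              schema = self
              while schema is not None:
                  if table in schema.tables:
                      return schema.tables[table]
                  schema = schema.parent
              raise KeyError(f"table {table!r} not defined for {self.name!r}")

      # Shared application schema, inherited by every tenant:
      app = Schema("saas_app")
      app.define_table("invoice", ["id", "customer", "amount"])

      # One tenant customizes the invoice table, another inherits it unchanged:
      tenant_a = Schema("tenant_a", parent=app)
      tenant_a.define_table("invoice", ["id", "customer", "amount", "vat_id"])
      tenant_b = Schema("tenant_b", parent=app)

      print(tenant_a.resolve("invoice"))   # customized columns, with vat_id
      print(tenant_b.resolve("invoice"))   # inherited shared columns

    Pushing this resolution into the RDBMS itself, rather than into application logic, is what the thesis argues for: the shared part of the schema exists once, while per-tenant customizations stay isolated in the tenant's context.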
  • Decoding strategies for syntax-based statistical machine translation (Open Access)
    (2015) Braune, Fabienne; Maletti, Andreas (Dr.)
    Provided with a sentence in an input language, a human translator produces a sentence in the desired target language. The advances in artificial intelligence in the 1950s led to the idea of using machines instead of humans to generate translations. Based on this idea, the field of Machine Translation (MT) was created. The first MT systems aimed to map input text into the target translation through the application of hand-crafted rules. While this approach worked well for specific language pairs in restricted domains, it was hardly extendable to new languages and domains because of the huge amount of human effort necessary to create new translation rules. The increase in computational power enabled Statistical Machine Translation (SMT) in the late 1980s, which addressed this problem by learning translation units automatically from large text collections. Statistical machine translation can be divided into several paradigms. Early systems modeled translation between words, while later work extended these to sequences of words called phrases. A common trait of word- and phrase-based SMT is that the translation process takes place sequentially, which is not well suited to translating between languages where words need to be reordered over (potentially) long distances. Such reorderings led to the implementation of SMT systems based on formalisms that allow translating recursively instead of sequentially. In these systems, called syntax-based systems, the translation units are modeled as formal grammar productions, and translation is performed by assembling the productions of these grammars. This thesis contributes to the field of syntax-based SMT in two ways: (i) the applicability of a new grammar formalism is tested by building the first SMT system based on the local Multi Bottom-Up Tree Transducer (l-MBOT), and (ii) new ways to integrate linguistic annotations into the translation model (instead of the grammar rules) of syntax-based systems are developed.
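    To make "translation by assembling grammar productions" concrete, here is a toy synchronous rule that reorders its arguments across a long distance, which sequential phrase-based models handle poorly. The l-MBOT formalism used in the thesis is considerably more expressive; this sketch, with an assumed rule X1 hat X2 gesehen -> X1 has seen X2, only illustrates the recursive idea:

      LEXICAL = {"der Mann": "the man", "den Film": "the movie"}

      def translate(phrase):
          """Apply the synchronous rule recursively; fall back to lexical
          rules for the leaves, passing unknown material through."""
          if " hat " in phrase and phrase.endswith(" gesehen"):
              x1, rest = phrase.split(" hat ", 1)
              x2 = rest[: -len(" gesehen")]
              return f"{translate(x1)} has seen {translate(x2)}"
          return LEXICAL.get(phrase, phrase)

      print(translate("der Mann hat den Film gesehen"))
      # -> "the man has seen the movie"

    Note how the German participle "gesehen" moves from the end of the sentence to directly after the auxiliary, an effect of applying one production recursively rather than translating left to right.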
  • Position sharing for location privacy in non-trusted systems (Open Access)
    (2015) Skvortsov, Pavel; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h.c.)
    Currently, many location-aware applications are available to mobile users of location-based services. Applications such as Google Now, Trace4You or FourSquare are widely used in environments where privacy is a critical issue for users. A general solution for preserving a user's location privacy is to degrade the quality of his or her position information. In this work, we propose an approach that uses spatial obfuscation to secure the users' position information. When revealing the user's position with a certain degree of obfuscation, the first crucial issue is the tradeoff between privacy and precision. This tradeoff is caused by limited trust in the location service providers: higher obfuscation increases privacy but leads to lower quality of service. We overcome this problem by introducing the position sharing approach. Our main idea is to distribute position information among multiple providers in the form of separate data pieces called position shares. Our approach allows the use of non-trusted providers and flexibly manages the user's location privacy level based on probabilistic privacy metrics. In this work, we present the multi-provider position sharing approach, which includes algorithms for the generation of position shares as well as share fusion algorithms. The second challenge that must be addressed is that the user's environmental context can significantly decrease the level of obfuscation. For example, a plane, a boat, and a car create different requirements for the obfuscated region. Therefore, it is very important to consider map-awareness when selecting the obfuscated areas. We assume that a static map is known to an adversary, which may help in deriving the user's true position. We analyze how map-awareness affects the generation and fusion of position shares, as well as the difference between the map-aware position sharing approach and its open-space version. Our security analysis shows that the proposed position sharing approach provides good security guarantees for both open-space and constrained-space models. The third challenge is that multiple location servers and/or their providers may have different trustworthiness from the user's point of view. In this case, the user would prefer not to reveal an equal level (precision) of position information to every server. We propose a placement optimization approach that ensures that risk is balanced among the location servers according to their individual trust levels. Our evaluation shows a significant improvement of privacy guarantees after applying the optimized share distribution, compared with an equal share distribution. The fourth related problem is the location update algorithm. A high number n of different location servers (corresponding to n privacy levels) may lead to significant communication overhead, since each update would require n messages from the mobile user to the location servers, especially at high update rates. Therefore, we propose an optimized location update algorithm that decreases the number of messages sent without reducing the number of privacy levels or the user's privacy.
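    The share generation and fusion idea can be sketched with a deliberately simple construction: a coarse master share plus refinement shares whose partial sums step toward the true position, so every additional share raises the precision. The halving scheme below is an assumption for illustration only; in the actual approach the shares would be spread across providers of different trust levels:

      import math
      import random

      def generate_shares(px, py, levels=4, radius=1000.0):
          """Master share plus refinement shares (coordinates in meters)."""
          # Master share: the true position displaced randomly within `radius`.
          angle = random.uniform(0.0, 2.0 * math.pi)
          r = random.uniform(0.0, radius)
          master = (px + r * math.cos(angle), py + r * math.sin(angle))
          shares = []
          cx, cy = master
          for _ in range(levels):
              # Each refinement share halves the distance between the fused
              # position so far and the true position.
              shares.append(((px - cx) / 2.0, (py - cy) / 2.0))
              cx += shares[-1][0]
              cy += shares[-1][1]
          return master, shares

      def fuse(master, shares):
          """Fusing the first k refinement shares leaves an error of
          roughly radius / 2**k."""
          x, y = master
          for dx, dy in shares:
              x += dx
              y += dy
          return x, y

      master, shares = generate_shares(500.0, 300.0)
      for k in range(len(shares) + 1):
          x, y = fuse(master, shares[:k])
          print(k, round(math.hypot(x - 500.0, y - 300.0), 1))  # error halves per share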
  • On the complexity of conjugacy in amalgamated products and HNN extensions (Open Access)
    (2015) Weiß, Armin; Diekert, Volker (Prof. Dr. rer. nat. habil.)
    This thesis deals with the conjugacy problem in classes of groups which can be written as HNN extensions or amalgamated products. The conjugacy problem is one of the fundamental problems in algorithmic group theory introduced by Max Dehn in 1911. It poses the question whether two group elements, given as words over a fixed set of generators, are conjugate. Thus, it is a generalization of the word problem, which asks whether some input word represents the identity. Both the word problem and the conjugacy problem are undecidable in general. In this thesis, we consider not only decidability but also the complexity of conjugacy. We consider fundamental groups of finite graphs of groups as defined by Serre, a generalization of both HNN extensions and amalgamated products. Another crucial concept for us is that of strongly generic algorithms, a formalization of algorithms which work for "most" inputs. The following are our main results: The elements of an HNN extension which cannot be conjugated into the base group form a strongly generic set if and only if both inclusions of the associated subgroups into the base group are non-surjective. For amalgamated products we prove an analogous result. Following a construction by Stillwell, we derive some undecidability results for the conjugacy problem in HNN extensions with free (abelian) base groups. Next, we show that conjugacy is decidable if all associated subgroups are cyclic or if the base group is abelian and there is only one stable letter. Moreover, in a fundamental group of a graph of groups with free abelian vertex groups, conjugacy is strongly generically in P. In addition, we consider the case where all edge groups are finite: if conjugacy can be decided in time T(N) in the vertex groups, then it can be decided in time O(log N * T(N)) in the fundamental group under some reasonable assumptions on T (here, N is the length of the input). We also derive some basic transfer results for circuit complexity in the same class of groups. Furthermore, we examine the conjugacy problem of generalized Baumslag-Solitar groups. Our main results are: the conjugacy problem in solvable Baumslag-Solitar groups is TC0-complete, and in arbitrary generalized Baumslag-Solitar groups it can be decided in LOGDCFL. The uniform conjugacy problem for generalized Baumslag-Solitar groups is hard for EXPSPACE. Finally, we deal with the conjugacy problem in the Baumslag group, an HNN extension of the Baumslag-Solitar group BS(1,2). The Baumslag group has a non-elementary Dehn function and was therefore long considered to have a very hard word problem, until Myasnikov, Ushakov, and Won showed that the word problem is, in fact, in P by introducing a new data structure, the so-called power circuits. We follow their approach and show that the conjugacy problem is strongly generically in P. We conjecture that there is no polynomial-time algorithm which works for all inputs, because the divisibility problem in power circuits can be reduced to this conjugacy problem. Also, we prove that the comparison problem in power circuits is complete for P under logspace reductions.
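    To make the conjugacy problem itself concrete, here is the classic special case of a free group, not one of the thesis's HNN/amalgam algorithms: two words are conjugate if and only if their cyclic reductions are cyclic rotations of each other. Generators are lowercase letters, inverses uppercase:

      def free_reduce(word):
          """Cancel adjacent inverse pairs like 'aA' or 'Bb'."""
          out = []
          for c in word:
              if out and out[-1] == c.swapcase():
                  out.pop()
              else:
                  out.append(c)
          return "".join(out)

      def cyclic_reduce(word):
          """Additionally cancel inverse pairs wrapping around the ends."""
          w = free_reduce(word)
          while len(w) > 1 and w[0] == w[-1].swapcase():
              w = w[1:-1]
          return w

      def conjugate_in_free_group(u, v):
          """u ~ v iff their cyclic reductions are rotations of each other."""
          u, v = cyclic_reduce(u), cyclic_reduce(v)
          return len(u) == len(v) and (not u or u in v + v)

      # b * (ab) * b^-1 is conjugate to ab:
      print(conjugate_in_free_group("babB", "ab"))   # True
      print(conjugate_in_free_group("ab", "ba"))     # True (cyclic rotation)
      print(conjugate_in_free_group("ab", "aB"))     # False

    In HNN extensions and amalgamated products the analogous role is played by Britton reductions and conjugation into the base group, which is where the decidability and complexity questions of the thesis arise.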
  • Navigation systems for special user groups (Open Access)
    (2015) Schmitz, Bernhard; Ertl, Thomas (Prof. Dr. rer. nat. Dr. techn. h.c. Dr.-Ing. E.h.)
    With the advent of smartphones and apps, navigation systems have become one of the most widely used mobile applications on the planet. At the same time, there are some user groups, especially among people with disabilities, for whom the benefit of navigation systems is even greater than for the average user. Even though navigation systems for these smaller user groups have become available in recent years and are of great help to their users, they are currently not as great a tool as they could potentially be, as prototypes in smaller research projects have shown. From a user perspective, navigation systems give feedback about the current location, receive input about a desired destination, and guide the user to this destination. In addition, the system itself needs to determine the position and calculate a route to the destination. For all of these navigational tasks, the system needs a world model, be it a map or another representation of the world. Many existing projects have concentrated on adapting these navigational tasks to the specific requirements of a user group, and even though this thesis contributes to these efforts, especially for blind users, its main contribution lies in a different approach. The thesis argues that the world model provides great leverage for adapting navigation systems to the specific requirements of the users, as the world model influences all navigational tasks and is therefore an integral part of all modern navigation systems. Due to this importance, the world models of currently existing special navigation systems for people with disabilities often suffer from one of two distinct problems: If the model is specifically built for the intended purpose, e.g. a navigation system for a defined disability, it is only available in a confined area. This is mostly the case with research systems. Commercially available systems, on the other hand, strive to cover as large an area as possible, but have to accept certain drawbacks regarding the world model's suitability for the specific purpose. Ideally, a world model for special navigation systems is both available world-wide and specifically built, or at least adapted, for the intended purpose. This thesis introduces a way of integrating both requirements. A world model that is widely available and used by many people forms a common base for the navigation system. This ensures the availability and currency of the data. This world model is then changed individually with Map Content Transformations, which were developed specifically for this purpose. These Map Content Transformations combine data that is implicitly present in the base data with user-specific requirements encoded in the transformation rules, and thus adapt the world model to both the requirements of the user and those of the intended navigational tasks. It is shown that these adaptations of the world model can positively influence all navigational tasks and can, together with the incremental advances regarding the navigational tasks themselves, constitute an important step towards individualized navigation systems that optimally support their users in their spatial tasks. Even though Map Content Transformations are applied in this thesis to the field of navigation for people with disabilities, they have the potential to be used in a variety of applications based on spatial data.
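    The Map Content Transformation idea can be sketched as rules that combine information implicit in the base map data (here OSM-style tags) with user-specific requirements. The rule format below is an illustrative assumption, not the thesis's actual transformation language:

      def transform(ways, rules):
          """Apply each matching rule's tag updates to a copy of the map data."""
          result = []
          for way in ways:
              way = dict(way)   # leave the shared base map untouched
              for condition, updates in rules:
                  if all(way.get(k) == v for k, v in condition.items()):
                      way.update(updates)
              result.append(way)
          return result

      base_map = [
          {"id": 1, "highway": "footway", "incline": "steep"},
          {"id": 2, "highway": "steps"},
          {"id": 3, "highway": "footway", "surface": "asphalt"},
      ]

      # Hypothetical rules for a wheelchair user: steps become impassable,
      # steep paths become expensive for routing.
      wheelchair_rules = [
          ({"highway": "steps"}, {"passable": "no"}),
          ({"incline": "steep"}, {"routing_cost": "high"}),
      ]

      for way in transform(base_map, wheelchair_rules):
          print(way)

    Because the rules only rewrite tags, the same widely available base map can serve blind users, wheelchair users, or any other group simply by swapping the rule set, which is the leverage the abstract describes.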
  • Crawling von Enterprise Topologien zur automatisierten Migration von Anwendungen: eine Cloud-Perspektive [Crawling enterprise topologies for the automated migration of applications: a cloud perspective] (Open Access)
    (2015) Binz, Tobias; Leymann, Frank (Prof. Dr.)
    Today, an organization's competitiveness is determined by how quickly its IT can be adapted to changing requirements while simultaneously reducing costs. A prerequisite for this is technically detailed insight into the entire IT, i.e., an instance model of all components and their relationships to one another. Since organizations usually do not maintain this kind of documentation, such IT instance models are typically missing, incomplete, or outdated. One reason is that manually identifying components and their relationships is a very time-consuming, error-prone, and therefore costly task. Besides hampering the adaptation of IT in general, this also complicates the migration of applications, which is in high demand due to the trend of outsourcing IT to the cloud. The vision of this work is to provide technically detailed, complete, and up-to-date insight into the IT and to use it to enable the automated migration of applications. To this end, this work presents a method for automatically crawling an instance model of an organization's entire IT. For its representation, management, and processing, the Enterprise Topology Graph (ETG) is introduced as a metamodel that represents all applications, the components required to operate them, and their mutual relationships. ETGs and their automated crawling provide comprehensive and complete insight into an organization's IT and thus form a solid foundation for its analysis, adaptation, and optimization. Building on this, a method for the migration of applications (AROMA) is developed that makes it possible to benefit from the advantages of advanced IT environments without having to re-engineer these applications. After crawling the ETG of the source environment, the AROMA method extracts, transforms, evaluates, and adapts the application to be migrated and provisions it in the target environment, for example a cloud. Implementing the AROMA method with the OASIS standard TOSCA contributes to automating the migration and preserves the functionality of the application. The research contributions and prototypes are validated by various case studies and evaluated with respect to automation, correctness, applicability, and extensibility, as well as the improvement of the application's cloud properties and portability.
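    A minimal sketch of crawling an Enterprise Topology Graph: per-component discovery yields a component's neighbours, and a breadth-first crawl assembles nodes and "hosted-on"/"connects-to" edges. The discovery table and the relation names are assumptions for illustration; a real crawler would obtain them via plugins querying SSH, management APIs, or configuration files:

      from collections import deque

      # Illustrative discovery results, keyed by component identifier:
      DISCOVERY = {
          "shop-app":    [("hosted-on", "tomcat-7"), ("connects-to", "mysql-5.6")],
          "tomcat-7":    [("hosted-on", "ubuntu-vm-1")],
          "mysql-5.6":   [("hosted-on", "ubuntu-vm-2")],
          "ubuntu-vm-1": [],
          "ubuntu-vm-2": [],
      }

      def crawl_etg(seed):
          """Breadth-first crawl starting from one known component."""
          nodes, edges = set(), []
          queue = deque([seed])
          while queue:
              component = queue.popleft()
              if component in nodes:
                  continue
              nodes.add(component)
              for relation, target in DISCOVERY.get(component, []):
                  edges.append((component, relation, target))
                  queue.append(target)
          return nodes, edges

      nodes, edges = crawl_etg("shop-app")
      for edge in edges:
          print(edge)   # e.g. ('shop-app', 'hosted-on', 'tomcat-7')

    The resulting graph is exactly the kind of instance model a migration method needs as input: it shows which stack components must be extracted and re-provisioned in the target environment.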
  • Self-diagnosis in Network-on-Chips (Open Access)
    (2015) Dalirsani, Atefe; Wunderlich, Hans-Joachim (Dr. rer. nat. habil.)
    Network-on-Chips (NoCs) constitute a message-passing infrastructure and can fulfil the communication requirements of today's Systems-on-Chip (SoCs), which integrate numerous semiconductor Intellectual Property (IP) blocks into a single die. As the NoC is responsible for data transport among the IPs, its reliability is crucial to the reliability of the entire system. In deep nanoscale technologies, transient and permanent failures of transistors and wires are caused by a variety of effects. Such failures may occur in the NoC as well, disrupting its normal operation. An NoC comprises a large number of switches that form a structure spanning the chip. The inherent redundancy of the NoC provides multiple paths for communication among the IPs. Graceful degradation is the property of tolerating a component's failure in a system at the cost of limited functionality or performance. In NoCs, when a switch in the path is faulty, alternative paths can be used to connect the IPs, keeping the SoC functional. For this purpose, a fault detection mechanism is needed to identify the faulty switch, and a fault-tolerant routing should bypass it. As each NoC switch consists of a number of ports and multiple routing paths, graceful degradation can be applied at an even finer granularity. A fault may destroy some routing paths inside the switch while leaving the rest intact. Thus, instead of disabling the faulty switch completely, its fault-free parts can be used for message passing. In this way, the chance of disconnecting IP cores is reduced and the probability of ending up with disjoint networks decreases. This study pursues efficient self-test and diagnosis approaches for both manufacturing and in-field testing, aiming at the graceful degradation of defective NoCs. The approaches presented here identify the location of defective components in the network rather than providing only a go/no-go test response. Conventionally, structural test approaches like scan design have been employed for testing NoC products. Structural testing targets faults of a predefined structural fault model, such as stuck-at faults. In contrast, functional testing targets certain functionalities of a system, for example the instructions of a microprocessor. In NoCs, functional tests target NoC characteristics such as routing functions and undistorted data transport. Functional tests benefit the most from the regular NoC structure. They reduce test costs and prevent overtesting. However, unlike structural tests, functional tests do not explicitly target structural faults, and the quality of the test approach cannot be measured. We bridge this gap by proposing a self-test approach that combines the advantages of structural and functional test methodologies and hence is suitable for both manufacturing and in-field testing. Here, the software running on the IP cores attached to the NoC is responsible for testing. As in functional tests, the test patterns deal only with the functional inputs and outputs of the switches. For pattern generation, a model is introduced that lifts the information about structural faults to the level of the functional outputs of the switch. Thanks to this unique feature of the model, a high structural fault coverage is achieved, as the results reveal. To make NoCs more robust against various defect mechanisms during their lifetime, concurrent error detection is necessary. To this end, this dissertation contributes an area-efficient synthesis technique for NoC switches that detects any error resulting from a single combinational or transition fault in the switch and its links during normal operation. This technique incorporates data encoding and standard concurrent error detection using multiple parity trees. Results reveal that the proposed approach imposes less area overhead than traditional techniques for concurrent error detection. To enable fine-grained graceful degradation, the intact functions of defective switches must be identified. Thanks to fault-tolerant techniques, the fault-free parts of switches can still be employed in the NoC. However, reasoning about the fault-free functions with respect to the exact cause of a malfunction is missing in the literature. This dissertation contributes a novel fine-grained switch diagnosis technique based on structural logic diagnosis. After determining the location and the nature of the defect in the faulty switch, all routing paths are checked and the soundness of the intact switch functions is proved. Experimental results show improvements in both performance and reliability of degraded NoCs when incorporating the fine-grained diagnosis of NoC switches.
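    The parity-based concurrent error detection can be sketched as follows: the switch computes its outputs and, independently, a predicted parity; a checker compares predicted and observed parity during normal operation. The toy "switch" below simply forwards a flit, and the single parity bit is a simplification of the multiple parity trees mentioned in the abstract:

      def parity(bits):
          """XOR of all bits (even parity)."""
          p = 0
          for b in bits:
              p ^= b
          return p

      def switch_with_check(flit, inject_fault=False):
          """Forward a flit and concurrently check its parity."""
          predicted = parity(flit)             # predicted from the inputs
          output = list(flit)                  # the routing datapath (identity here)
          if inject_fault:
              output[3] ^= 1                   # model a single bit flip
          error = parity(output) != predicted  # checker flags the mismatch
          return output, error

      flit = [1, 0, 1, 1, 0, 0, 1, 0]
      print(switch_with_check(flit))                     # (..., False)
      print(switch_with_check(flit, inject_fault=True))  # (..., True)

    Any single bit flip changes the parity of the output and is therefore detected; catching multi-bit errors and errors inside the checker itself is what the combination of data encoding and multiple parity trees in the dissertation addresses.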