Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Browse

Search Results

Now showing 1 - 10 of 71
  • Thumbnail Image
    ItemOpen Access
    Interacting with large high-resolution display workplaces
    (2018) Lischke, Lars; Schmidt, Albrecht (Prof.)
    Large visual spaces provide a unique opportunity to communicate large and complex pieces of information; hence, they have been used for hundreds of years for varied content including maps, public notifications, and artwork. Understanding and evaluating complex information will become a fundamental part of any office work. Large high-resolution displays (LHRDs) have the potential to further enhance the traditional advantages of large visual spaces and combine them with modern computing technology, thus becoming an essential tool for understanding and communicating data in future office environments. For successful deployment of LHRDs in office environments, well-suited interaction concepts are required. In this thesis, we build an understanding of how concepts for interaction with LHRDs in office environments could be designed. From the human-computer interaction (HCI) perspective, three aspects are fundamental: (1) the way humans perceive and react to large visual spaces is essential for interaction with content displayed on LHRDs; (2) LHRDs require adequate input techniques; (3) the actual content requires well-designed graphical user interfaces (GUIs) and suitable input techniques. Perception influences how users can perform input on LHRD setups, which sets boundaries for the design of GUIs for LHRDs. Furthermore, the input technique has to be reflected in the design of the GUI. To understand how humans perceive and react to large visual information on LHRDs, we focused on the influence of visual resolution and physical space. We show that increased visual resolution affects the perceived media quality and the perceived effort, and that humans can overview large visual spaces without being overwhelmed. When the display is wider than 2 m, users perceive higher physical effort. When multiple users share an LHRD, they change their movement behavior depending on whether a task is collaborative or competitive. 
When building LHRDs, consideration must be given to the increased complexity of higher resolutions and physically large displays. Lower screen resolutions provide enough display quality to work efficiently, while larger physical spaces enable users to overview more content without being overwhelmed. To enhance user input on LHRDs for interacting with large information pieces, we built working prototypes and analyzed their performance in controlled lab studies. We showed that eye-tracking-based manual and gaze input cascaded (MAGIC) pointing can enhance pointing to distant targets. MAGIC pointing is particularly beneficial when the interaction involves visual searches between pointing to targets. We contributed two gesture sets for mid-air interaction with window managers on LHRDs and found that gesture elicitation for an LHRD was not affected by legacy bias. We compared shared user input on an LHRD with personal tablets, which also functioned as a private working space, to collaborative data exploration using one shared input device for interacting with an LHRD. The results showed that input with personal tablets lowered the perceived workload. Finally, we showed that variable movement-resistance feedback enhanced one-dimensional data input when no visual input feedback was provided. We concluded that context-aware input techniques enhance the interaction with content displayed on an LHRD, so it is essential to provide focus for the visual content and guidance for the user while performing input. To understand user expectations of working with LHRDs, we prototyped with potential users how an LHRD work environment could be designed, focusing on the physical screen alignment and the placement of content on the display. Based on previous work, we implemented novel alignment techniques for window management on LHRDs and compared them in a user study. 
The results show that users prefer techniques that enhance the interaction without breaking well-known desktop GUI concepts. Finally, we provided an example of how an application for browsing scientific publications can benefit from extended display space. Overall, we show that GUIs for LHRDs should support the user more strongly than GUIs for smaller displays in arranging content meaningfully and in managing and understanding large data sets, without breaking well-known GUI metaphors. In conclusion, this thesis adopts a holistic approach to interaction with LHRDs in office environments. Based on enhanced knowledge about user perception of large visual spaces, we discuss novel input techniques for advanced user input on LHRDs. Furthermore, we present guidelines for designing future GUIs for LHRDs. Our work charts the design space of LHRD workplaces and identifies challenges and opportunities for the development of future office environments.
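The MAGIC pointing technique described above combines eye tracking with manual input: the cursor warps to the tracked gaze position, and manual input only has to cover the remaining distance to the target. A toy sketch of that idea (all positions, step sizes, and function names are illustrative assumptions, not the thesis implementation):

```python
# Minimal sketch of gaze-cascaded (MAGIC) pointing: the cursor first
# "warps" to the tracked gaze position, then manual input covers only
# the short remaining distance. All values here are illustrative.

def magic_pointing(gaze, target, manual_step=40.0):
    """Return the number of manual-input steps needed to reach the
    target when the cursor starts at the gaze point instead of at a
    fixed origin. Positions are (x, y) tuples in pixels."""
    cx, cy = gaze               # cursor warps to the gaze estimate
    tx, ty = target
    steps = 0
    dist = ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5
    while dist > manual_step:
        cx += (tx - cx) / dist * manual_step
        cy += (ty - cy) / dist * manual_step
        dist = ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5
        steps += 1
    return steps + 1            # final step lands on the target

# On a wall-sized display, gaze-warping saves most of the travel:
far = magic_pointing(gaze=(0, 0), target=(4000, 0))      # cursor far away
near = magic_pointing(gaze=(3900, 0), target=(4000, 0))  # warped near gaze
```

The saving grows with display width, which is why the technique matters on LHRDs in particular.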
  • Thumbnail Image
    ItemOpen Access
    Neue Methoden und Techniken für die Evaluation von Visualisierungen
    (2015) Raschke, Michael; Ertl, Thomas (Prof. Dr.)
    Visualizations surround us as a matter of course in everyday life and at work, representing abstract information and helping us understand complex relationships. While the development of visualization techniques used to focus primarily on how to display as much data as possible, in as short a time and at as high a resolution as possible, the question of whether a visualization is also useful and easy to read has gained importance in visualization research in recent years. To answer this question comprehensively, the goal of this work was to develop new methods and techniques for studying the perception of visualizations and for evaluating visualization techniques. To this end, an interdisciplinary approach was chosen that combines three research fields: eye tracking, knowledge representation, and cognitive science. Eye-tracking experiments were used to analyze gaze behavior during work with visualizations. The representation of visual knowledge makes it possible to examine semantic properties of scan paths. Simulation methods from cognitive science make it possible to predict gaze behavior. In visualization research, eye-tracking experiments are used to record the eye movements of participants performing tasks with visualizations. The subsequent analysis of these eye movements takes up a considerable share of the time needed to evaluate such experiments. To reduce the effort of analyzing these scan paths and to identify similar eye-movement patterns across participants, the parallel scan-path visualization technique was developed, which provides a clear overview of multiple scan paths. 
This makes it possible to recognize and compare reading strategies for visualizations across several participants. The parallel scan-path visualization was additionally extended with automatic pattern-recognition methods. This visual-analytics approach makes it possible to compare scan paths quantitatively and leads to an efficient analysis of very large eye-tracking datasets. For modeling knowledge about visualizations, a knowledge model with three levels was developed. Each level describes, in the form of an ontology, a different level of abstraction of knowledge about visualizations and the graphical elements they contain. Elements from these ontologies are linked to specific regions of a visualization or to individual graphical elements within visualizations. This approach makes it possible to analyze not only, as before, which regions of a visualization on a screen were viewed in which order (the WHERE space), but also which graphical elements were perceived there (the WHAT space) and how they were processed cognitively. It is shown how, based on this annotation, knowledge-processing processes can be visualized with the parallel scan-path visualization technique. In this way, regions of a visualization that may lead to cognitive bias can also be identified and examined further in detail. For simulating visual search, a simulation based on the cognitive architecture ACT-R was developed that simulates reading processes in visualizations and allows them to be compared with empirically collected data. In addition, this work presents, for the first time, an operator-based model for predicting completion times of visual tasks. 
This operator-based diagram-viewing model uses the concept of the Keystroke-Level Model, well known from human-computer interaction research, and extends it to the prediction of completion times of visual tasks. Besides increasing the efficiency of evaluating eye-tracking experiments, combining the visual analysis of scan paths with ontology-based knowledge models leads to a deeper understanding of how visualizations are read. Semantic characteristics of scan paths can be studied more thoroughly, and the likelihood of cognitive bias when working with visualizations can be reduced by suitably adapting the visualization concept. Overall, the methods and techniques presented in this work can lead to a more user-oriented, iterative development process for visualizations, in which results of eye-tracking analyses or of simulations are used to study how visualizations are perceived by different user groups.
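The operator-based diagram-viewing model extends the Keystroke-Level Model's core idea: predict task time as the sum of elementary operator durations. A minimal sketch of that idea follows; the operator names and unit times are hypothetical placeholders, not the model's actual values.

```python
# Illustrative sketch of operator-based time prediction in the spirit
# of the Keystroke-Level Model: a visual task is decomposed into a
# sequence of elementary operators, and the predicted completion time
# is the sum of their (assumed) unit durations. All names and timings
# below are hypothetical placeholders.

OPERATOR_TIMES = {
    "saccade": 0.03,   # move the eyes to a new region (s)
    "fixate": 0.25,    # fixate and perceive a graphical element (s)
    "compare": 0.40,   # mentally compare two perceived values (s)
    "decide": 0.30,    # choose the answer (s)
}

def predict_time(task):
    """Predict completion time for a task given as operator names."""
    return sum(OPERATOR_TIMES[op] for op in task)

# Reading two bars of a chart and comparing them:
task = ["saccade", "fixate", "saccade", "fixate", "compare", "decide"]
predicted = predict_time(task)  # 1.26 s with the placeholder timings
```

Calibrating such operator durations against eye-tracking data is exactly the kind of empirical grounding the thesis pursues.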
  • Thumbnail Image
    ItemOpen Access
    Interactive volume rendering in virtual environments
    (2003) Schulze-Döbold, Jürgen Peter; Ertl, Thomas (Prof. Dr.)
    This dissertation is about the interactive visualization of volume data in virtual environments. Only data on regular grids is discussed. Research was conducted on three major topics: visualization algorithms, user interfaces, and parallelization of the visualization algorithms. Because the shear-warp algorithm is a very fast CPU-based volume rendering algorithm, it was investigated how it could be adapted to the characteristics of virtual environments. This required support for perspective projection, as well as specific developments for interactive work, for instance a variable frame rate or the application of clipping planes. Another issue was the improvement of image quality by utilizing pre-integration for the compositing. Concerning the user interface, a transfer function editor was created that was tailored to the conditions of virtual environments. It should be usable as intuitively as possible, even with imprecise input devices or low display resolutions. Further research was done in the field of direct interaction; for instance, a detail probe was developed which is useful for looking inside a dataset. In order to run the user interface on a variety of output devices, a device-independent menu and widget system was developed. The shear-warp algorithm was accelerated by a parallelization based on MPI. For the actual volume rendering, a remote parallel computer can be employed, which needs to be linked to the display computer via a network connection. Because the image transfer turned out to be the bottleneck of this solution, the images are compressed before being transferred. Furthermore, it is described how all the above developments were combined into a volume rendering system and how they were integrated into an existing visualization toolkit.
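The compositing step at the heart of such volume renderers accumulates color and opacity sample by sample along each viewing ray. A minimal front-to-back sketch (the sample values stand in for a transfer-function lookup, and the early-termination threshold is an illustrative assumption, not a value from the thesis):

```python
# Minimal sketch of front-to-back compositing along a single ray, the
# core accumulation step in CPU volume renderers such as shear-warp.
# Scalar colors keep the example short; real renderers composite RGB.

def composite_ray(samples, early_ray_termination=0.99):
    """samples: list of (color, opacity) pairs, ordered front to back.
    Returns the accumulated (color, alpha) for the ray."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        # standard front-to-back blending with opacity-weighted color
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= early_ray_termination:  # skip occluded samples
            break
    return color, alpha

# An opaque white sample in front hides everything behind it:
c, a = composite_ray([(1.0, 1.0), (0.5, 1.0)])
```

Pre-integration, as used in the thesis, improves on this by integrating the transfer function between successive samples instead of sampling it pointwise.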
  • Thumbnail Image
    ItemOpen Access
    Darstellungs- und Interaktionstechniken zur effizienten Nutzung grafischer Oberflächen durch Blinde und Sehbehinderte
    (2011) Taras, Christiane; Ertl, Thomas (Prof. Dr.)
    Personal computers are among the most important tools for work, communication, and learning today. Modern computers offer high-quality graphical presentations that ease everyday work or are simply meant to evoke well-being and pleasure through their aesthetics. For blind and visually impaired people, too, the computer is an important tool. It brings independence and at the same time improves integration into society. The increasing use of digital documents steadily improves access to information. Assistive technologies such as magnification software or screen readers make it possible to adapt the output of documents to one's own needs, something that is not readily possible with printed or handwritten documents. Current technologies, however, still have some deficits. Graphical user interfaces, for instance, are largely presented to blind users as textual information with semantic annotations (such as the type and activation state of an element); information about the graphical presentation itself is hardly provided. Yet this information is also very interesting for blind users, since sighted people frequently use graphical properties as the sole carriers of information or as aids to communication. Visually impaired users who still use the visual output are presented with graphical information, but the way it is displayed is rarely optimal for efficient work: when editing text, for example, horizontal scrolling is often necessary, or important regions of the screen output are not always visible. This work contributes to further easing the access of blind and visually impaired people to graphical presentations. To this end, it was investigated how such presentations can be better prepared with the help of new technologies and approaches, and how affected users can work with them interactively and even produce them themselves. 
A key insight lies in the similarity of the fundamental problem of presenting graphical user interfaces to visually impaired users and to blind users working with a graphical-tactile display. Based on this, a framework was developed with which both tactile presentations and a novel concept for magnified screen output for more severely visually impaired computer users can be realized. To better support the steadily growing group of people with milder, mostly age-related visual impairments, a further magnification concept, together with a prototype implementation, was developed that requires only minimal changes to a program's presentation and thus minimizes the learning effort and the associated barrier to adoption. To foster communication between sighted people, who frequently refer to colors in their descriptions, visually impaired people, and blind people, concepts for handling colors and colored graphics were explored and implementations for monochrome tactile output devices were realized. Since accessible design is a prerequisite for adapted presentations of GUIs and graphics, concepts and implementations were also developed to support developers and designers in the accessible design of GUIs and in the dissemination of standards for the design of accessible graphics in the SVG format.
  • Thumbnail Image
    ItemOpen Access
    Visualization of uncorrelated point data
    (2008) Reina, Guido; Ertl, Thomas (Prof. Dr.)
    The sciences are the most common application context for computer-generated visualization. Researchers in these areas have to work with large datasets of many different types, but the one trait common to all is that in their raw form they exceed the cognitive abilities of human beings. Visualization not only aims at enabling users to quickly extract as much information as possible from datasets, but also at allowing users to work at all with those that are too large and complex to be directly grasped by human cognition. In this work, the focus is on uncorrelated point data, or point clouds, which are sampled from real-world measurements or generated by computer simulations. Such datasets are gridless and exhibit no connectivity, and each point represents an entity of its own. To work effectively with such datasets, two main problems must be solved: on the one hand, a large number of complex primitives with potentially many attributes must be visualized, and on the other hand, the interaction with the datasets must be designed in an intuitive way. This dissertation presents novel methods that allow the handling of large, point-based data sets of high dimensionality. The contribution for the rendering of hundreds of thousands of application-specific glyphs is a solution based on the graphics processing unit (GPU) that allows the exploration of datasets that exhibit a moderate number of dimensions but an extremely large number of points. These approaches are shown to work for molecular dynamics (MD) datasets as well as for 3D tensor fields. Factors critical to the performance of these algorithms are thoroughly analyzed, the main focus being the fast rendering of these complex glyphs in high quality. 
To improve the visualization of datasets with many attributes and only a moderate number of points, methods for the interactive reduction of dimensionality and for analyzing the influence of different dimensions as well as of different metrics are presented. The rendering of the resulting data in 3D similarity space is also addressed. A GPU-based reduction of dimensions has been implemented that allows interactive tweaking of the reduction parameters while observing the results in real time. With the availability of a fast and responsive visualization, the missing component for a complete system is the human-computer interaction. The user must be able to navigate the information space and interact with a dataset, selecting or filtering the items of interest and inspecting the attributes of particular data points. Today, one must distinguish between the application context and the modality of different interaction approaches. Current research ranges from keyboard-and-mouse desktop interaction over different haptic interfaces (also including feedback) up to tracked interaction for virtual reality (VR) installations. In the context of this work, the problem of interacting with point-based datasets is tackled for two different situations: the first is the workstation-based analysis of clustering mechanics in thermodynamics simulations, the second is immersive navigation and interaction with point-cloud datasets in VR.
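The interactive tweaking of reduction parameters described above can be illustrated with a tiny example: re-weighting dimensions in the distance metric changes which points end up close together in similarity space. This pure-Python sketch is only illustrative; the thesis performs the reduction on the GPU for interactive rates, and the vectors and weights here are made-up values.

```python
# Toy illustration of metric tweaking during dimensionality reduction:
# the user interactively re-weights dimensions, and distances in the
# reduced "similarity space" change accordingly.

def weighted_distance(p, q, weights):
    """Weighted Euclidean distance between two attribute vectors."""
    return sum(w * (a - b) ** 2 for a, b, w in zip(p, q, weights)) ** 0.5

a = (1.0, 0.0, 10.0)
b = (0.0, 1.0, 10.0)

# With uniform weights the points differ in the first two dimensions;
# zeroing those weights makes them identical in similarity space.
d_uniform = weighted_distance(a, b, (1.0, 1.0, 1.0))
d_tweaked = weighted_distance(a, b, (0.0, 0.0, 1.0))
```

Observing such distance changes in real time is what makes the interactive exploration of different metrics feasible.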
  • Thumbnail Image
    ItemOpen Access
    Cognition-aware systems to support information intake and learning
    (2016) Dingler, Tilman; Schmidt, Albrecht (Prof. Dr.)
    Knowledge is created at an ever-increasing pace, putting us under constant pressure to consume and acquire new information. Information gain and learning, however, require time and mental resources. While the proliferation of ubiquitous computing devices, such as smartphones, enables us to consume information anytime and anywhere, technologies are often disruptive rather than sensitive to the current user context. While people exhibit different levels of concentration and cognitive capacity throughout the day, applications rarely take these performance variations into account and often overburden their users with information or fail to stimulate. This work investigates how technology can be used to help people effectively deal with information intake and learning tasks through cognitive context-awareness. By harvesting sensor and usage data from mobile devices, we obtain people's levels of attentiveness, receptiveness, and cognitive performance. We subsequently use this cognition-awareness in applications to help users process information more effectively. Through a series of lab studies, online surveys, and field experiments, we follow six research questions to investigate how to build cognition-aware systems. Awareness of users' variations in levels of attention, receptiveness, and cognitive performance allows systems to trigger appropriate content suggestions, manage user interruptions, and adapt user interfaces in real time to match tasks to the user's cognitive capacities. The tools, insights, and concepts described in this book allow researchers and application designers to build systems with an awareness of momentary user states and general circadian rhythms of alertness and cognitive performance.
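As a rough illustration of the circadian-rhythm idea, a cognition-aware system might gate demanding content on an estimated alertness curve. The curve shape, threshold, and function names below are hypothetical placeholders, not the models developed in the thesis.

```python
# Illustrative sketch: suggest a demanding learning task only when a
# (toy) circadian alertness estimate is high. The sinusoidal curve and
# the 0.7 threshold are assumptions made for this example.
import math

def alertness(hour):
    """Toy circadian alertness estimate in [0, 1], peaking mid-day."""
    return 0.5 + 0.5 * math.sin((hour - 8.0) / 24.0 * 2.0 * math.pi)

def should_suggest_learning(hour, threshold=0.7):
    """Gate demanding content on the estimated alertness level."""
    return alertness(hour) >= threshold

afternoon = should_suggest_learning(14)  # near the mid-day peak
midnight = should_suggest_learning(2)    # near the low point
```

A deployed system would replace the fixed curve with per-user estimates harvested from sensor and usage data, as the thesis describes.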
  • Thumbnail Image
    ItemOpen Access
    Deep learning based prediction and visual analytics for temporal environmental data
    (2022) Harbola, Shubhi; Coors, Volker (Prof. Dr.)
    This thesis develops Machine Learning methods and their visualisation for environmental data. The presented approaches primarily focus on devising an accurate Machine Learning framework that supports the user in understanding and comparing model accuracy in relation to essential aspects of parameter selection, trends, and time frames, and in correlating the meteorological and pollution parameters considered. Building on this, the thesis develops approaches for the interactive visualisation of environmental data that are wrapped around the time-series prediction as an application. These approaches provide an interactive application that supports: (1) a Visual Analytics platform to interact with the sensor data and enhance the visual representation of the environmental data by identifying patterns that mostly go unnoticed in large temporal datasets; (2) a seasonality-deduction platform presenting analyses of the results that clearly demonstrate the relationship between these parameters in a combined temporal-activities frame; and (3) air quality analyses that successfully discover spatio-temporal relationships among complex air quality data interactively in different time frames, harnessing the user's knowledge of factors influencing past, present, and future behaviour with the aid of Machine Learning models. Some of these contributions belong to the field of Explainable Artificial Intelligence, an area concerned with the development of methods that help understand, explain, and interpret Machine Learning algorithms. In summary, this thesis describes Machine Learning prediction algorithms together with several visualisation approaches for interactively and visually analysing the temporal relationships among complex environmental data in different time frames in a robust web platform. 
The developed interactive visualisation system for environmental data assimilates visual prediction, sensors' spatial locations, measurements of the parameters, detailed pattern analyses, and changes in conditions over time, contributing a new combined approach to existing visual analytics research. The algorithms developed in this thesis can be used to infer spatio-temporal environmental data, enabling interactive exploration processes and thus helping manage cities smartly.
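The prediction task at the core of this work is forecasting the next value of an environmental time series. As a hedged stand-in for the deep-learning models used in the thesis, a moving-average baseline illustrates the input/output shape of the problem; the readings below are invented values, not real measurements.

```python
# Sketch of the forecasting task on temporal environmental data: a
# moving-average baseline predicts the next value of a series. The
# thesis replaces this baseline with deep-learning models.

def moving_average_forecast(series, window=3):
    """Predict the next value as the mean of the last `window` points."""
    recent = series[-window:]
    return sum(recent) / len(recent)

# Hourly PM10 readings (illustrative values only):
pm10 = [20.0, 22.0, 21.0, 23.0, 24.0, 25.0]
next_hour = moving_average_forecast(pm10)  # mean of the last 3 readings
```

Such simple baselines are useful yardsticks: a learned model should beat them before its predictions are surfaced in a visual analytics platform.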
  • Thumbnail Image
    ItemOpen Access
    A design space for pervasive advertising on public displays
    (2013) Alt, Florian; Schmidt, Albrecht (Prof. Dr.)
    Today, people living in cities see up to 5000 ads per day, and many of them are presented on public displays. More and more of these public displays are networked and equipped with various types of sensors, making them part of a global infrastructure that is currently emerging. Such networked and interactive public displays provide the opportunity to create a benefit for society in the form of immersive experiences and relevant content. In this way, they can overcome the display blindness that has evolved among passersby over the years. We see two main reasons that prevent this vision from coming true: first, public displays are stuck with traditional advertising as the driving business model, making it difficult for novel, interactive applications to enter the scene. Second, no common ground exists for researchers or advertisers that outlines important challenges. Both the provider view and the audience view need to be addressed to make open, interactive display networks successful. The main contribution of this thesis is a design space for advertising on public displays that identifies important challenges, mainly from a human-computer interaction perspective. Solutions to these core challenges are presented and evaluated using empirical methods commonly applied in HCI. First, we look at challenges that arise from the shared use of display space. We conducted an observational study of traditional public notice areas that allowed us to identify different stakeholders, to understand their needs and motivations, to unveil current practices used to exercise control over the display, and to understand the interplay between space, stakeholders, and content. We present a set of design implications for open public display networks that we applied when implementing and evaluating a digital public notice area. 
Second, we tackle the challenge of making the user interact by taking a closer look at attracting attention, communicating interactivity, and enticing interaction. Attracting attention is crucial for any further action to happen. We present an approach that exploits gaze as a powerful input modality. By adapting content based on gaze, we are able to show a significant increase in attention and an effect on the user's attitude. In order to communicate interactivity, we show that the mirror representation of the user is a powerful interactivity cue. Finally, in order to entice interaction, we show that the user needs to be motivated to interact and to understand how interaction works. Findings from our experiments reveal direct touch and the mobile phone as suitable interaction technologies. In addition, these findings suggest that relevance of content, privacy, and security have a strong influence on user motivation. Third, this thesis makes a set of contributions towards understanding audience behavior, which is particularly important for advertisers in order to choose appropriate content and to select suitable locations for future advertising displays. Our findings provide an in-depth understanding of the honeypot effect as a powerful interactivity cue. Furthermore, we identify a number of interesting effects (e.g., the landing effect) and explain how developers could design for them. We envision the results of this thesis to provide a basis for future research and for practitioners to shape future advertisements on public displays in a positive way.
  • Thumbnail Image
    ItemOpen Access
    Visual analytics of human mobility behavior
    (2017) Krüger, Robert; Ertl, Thomas (Prof. Dr.)
    Human mobility plays an important role in many domains of today's society, such as security, logistics, transportation, urban planning, and geo-marketing. Both government and industry thus have great interest in understanding mobility patterns and their driving social, economic, and environmental causes and effects. While stakeholders had to rely on manual traffic surveys for a long time, improvements in tracking technology have made analyses based on large digital datasets possible. Recently, the omnipresence of mobile devices has significantly increased the amount of collected movement and context data. People are willing to reveal not only their position but also further personal details such as visited places, observations, events, news, and sentiments in exchange for personalized services and social networking. This opens up new possibilities for many domains where a semantic understanding of mobility is required, but it also raises major challenges. To reveal a holistic picture, heterogeneous datasets from different services with different resolutions and formats have to be fused and analyzed. However, social sensing data is vast, has varying scale, is unevenly distributed, and is constantly updated. Especially content from social media services is often inconsistent, unreliable, and incomplete, which requires special treatment. Fully automatic mapping approaches are not trustworthy, as they do not take these uncertainties into account. At the same time, manual approaches become insufficient with large amounts of data. Even when data is perfectly aligned, analysts cannot rely purely on existing techniques. Answering questions about reasons for movement requires a broader perspective that takes into account environmental and social context, the driving forces of human mobility behavior. Visual analytics is an emerging research field that tackles such challenges. 
It creates added value by combining the processing power and accuracy of machines with the human capability to perceive information visually. Automatic means are used to fuse and aggregate data and to detect hidden patterns therein. Interactive visualizations allow analysts to explore and query the data and to steer the automatic processes with domain knowledge. This increases trust in data, models, and results, which is especially important when critical decisions need to be made. The strengths of visual analytics have been shown to be particularly advantageous when problems and goals are underspecified and exploratory means are needed to discover yet unknown patterns. This thesis presents novel visual analytics approaches to derive the meaning of and reasons behind movement, taking into account the aforementioned characteristics. The approaches are aligned in a holistic process model covering all steps from data retrieval, enrichment, exploration, and verification to the externalization of gained knowledge for various fields of application such as electric mobility, event management, and law enforcement. It is shown how data from social media can be used not only to retrieve up-to-date movement information, but also to enrich movement trajectories from other sources with structured and unstructured information about places, events, transactions, and other observations. Through highly interactive visual interfaces, analysts can bring in domain knowledge to deal with uncertainties during data fusion and to steer the subsequent semantic analysis. Exploratory and confirmatory analysis techniques are presented to create hypotheses, refine them, and find support in the data. Analysts can discover routines and abnormal behavior with the assistance of automatic pattern-detection methods to cope with the vast amounts of data. 
Spatial drill-down is supported by a set-based focus+context technique, while a more abstract visual query language allows analysts to explicitly formulate, extract, and query for movement patterns. The approaches are applied in different scenarios and are integrated in a visual analytics system. Evaluations with experts and novice users, case studies, and comparisons to ground-truth data demonstrate the need for and the effectiveness of the contributions. Overall, the thesis contributes a visual analytics process for human mobility behavior with novel semantic analysis approaches, ranging from the global movements of many to the local activities of a few people, for a wide range of application domains.
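Querying movement data for explicit patterns, which the visual query language above does graphically, can be illustrated with a textual toy: testing whether a trajectory of visited places contains a given ordered sub-sequence. The notation, place names, and function names are illustrative assumptions, not the thesis's query language.

```python
# Toy sketch of a movement-pattern query: does a person's sequence of
# visited places contain home -> work -> restaurant, in that order,
# with arbitrary stops in between?

def matches(trajectory, pattern):
    """True if `pattern` occurs as an ordered sub-sequence."""
    it = iter(trajectory)
    # `place in it` consumes the iterator up to the match, so each
    # pattern element must appear after the previous one.
    return all(place in it for place in pattern)

day = ["home", "cafe", "work", "gym", "restaurant", "home"]
contains = matches(day, ["home", "work", "restaurant"])  # in order
missing = matches(day, ["work", "home", "work"])         # order broken
```

A visual query language lets analysts compose such patterns graphically and attach spatial and temporal constraints to each step.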