Institute of Visualization and Interactive Systems
University of Stuttgart
Universitätsstraße 38
D–70569 Stuttgart

Studienarbeit Nr. 2429

Improving the effectiveness of interactive data analytics with phone-tablet combinations

Lars Lischke

Course of Study: Informatik
Examiner: Prof. Dr. Albrecht Schmidt
Supervisor: Prof. Dr. Morten Fjeld, Paweł Woźniak
Commenced: 07.01.2013
Completed: 17.06.2013
CR-Classification: H.5.2

Abstract

Smartphones and tablet computers are ubiquitous in daily life. Many people carry a smartphone and a tablet computer with them simultaneously. The multiplicity of differently sized devices indicates the conflict between a maximal interaction space and a minimal bulkiness of the devices. In this thesis we extend the interaction space of mobile devices by adding mutual spatial awareness to ordinary devices. By combining multiple mobile devices and using relative device placement as an additional input source, we designed a mobile tabletop system for ad-hoc collaboration. With this setting we aimed to emulate the concept of the so-called interactive tablecloth, which envisages that every table surface will become an interactive surface. To evaluate the concept we designed and implemented a working prototype called MochaTop. To provide the mutual spatial awareness we placed the mobile devices on an interactive table. In the future, we believe, the interactive table can be replaced by technology integrated into the mobile devices themselves. In this study we used one Android smartphone and one Android tablet as mobile devices. To track the position of the devices we used a Microsoft Surface 2 (SUR40). The system is designed for exploring multimedia information and visual data representations by manipulating the position of two mobile devices on a horizontal surface. We present possible use cases and environments. In a second step we discuss multiple low-fidelity prototypes. The results are integrated in the development of MochaTop.
The application MochaTop is designed as an example for exploring digital information. To avoid influencing the participants too much by the content, we chose a common topic to present in MochaTop: coffee production and trade. We present the implementation of MochaTop and the user study conducted with 23 participants. Overall, we were able to awaken interest in future systems among the study participants and to show that the system supports knowledge transfer. Furthermore, we were able to identify design challenges for the future development of mobile tabletops. These challenges mostly concern input feedback, interaction zones and three-dimensional input.

Kurzfassung

Smartphones and tablet computers are part of our daily lives. Many people constantly carry both a smartphone and a tablet computer with them. The variety of differently sized smartphones and tablet computers reveals a conflict of interests: on the one hand, mobile devices should offer a maximally large interaction surface; on the other hand, the devices should be as little bulky as possible. In this student research project, the interaction space of mobile devices is extended by mutual spatial awareness. By combining several mobile devices and using relative device positions as an additional input method, we design a mobile tabletop system for ad-hoc collaboration. We thus emulate the concept of the "interactive tablecloth", which predicts that all tables and surfaces will turn into digital interaction surfaces. To evaluate our concept, we designed and implemented a working prototype called MochaTop. To be able to use the mutual spatial awareness of the mobile devices, we placed them on an interactive table. For the future, we assume that corresponding sensors can easily be integrated into smartphones and tablet computers.
In this work we use an Android smartphone as well as an Android tablet computer. To determine the position of the smartphone and the tablet computer, we use a Microsoft Surface 2 (SUR40). The system is designed for exploring multimedia information and graphical data representations by changing the positions of two devices. We present different use cases and deployment environments. In a second step we discuss several prototypes. These results subsequently flow into the development of MochaTop. The application MochaTop is an exemplary prototype for making digital content tangible. To avoid influencing the study participants too much by the presented content, MochaTop presents an everyday topic: coffee production and trade. In this work we present the implementation of MochaTop as well as the subsequent user study. We conducted the user study with 23 participants to analyze our system. Overall, we observed the participants' interest in the tested techniques and could show that our system has a positive influence on knowledge transfer. Furthermore, we were able to identify several challenges for further development. These mainly concern input feedback, interactive zones and three-dimensional input.

Contents

1. Introduction
2. Related work
2.1. Mobile devices and large screens
2.2. Data exploration and collaboration tools
2.3. Mutual-spatial-aware mobile devices
2.4. Methodology
2.5. Submitted Paper based on this work
3. Design
3.1. Design requirements
3.2. Discussion of prototypes
3.2.1. eBook Reader
3.2.2. App Selector
3.2.3. 3D-Viewer
3.2.4. InfoVis-Viewer
3.2.5. Fish-Bowl
3.2.6. Multi Device Google-Drive Application
3.2.7. Presenting tool for architecture planning
3.2.8. Nutrition supporting tool
3.3. MochaTop
3.3.1. Content
3.3.2. Navigation patterns
4. Implementation
4.1. Spatial Awareness
4.2. Inter-device communication
4.3. Navigation Patterns
4.4. MochaTop Application
5. Proof-of-Concept Study
5.1. Pre-Study
5.2. Study design
5.3. Participants
5.4. Results and discussion
5.4.1. Data exploration and providing feedback
5.4.2. Conditions for new navigation patterns
5.4.3. Navigation through large data sets
5.4.4. Physical design aspects of mobile devices
5.4.5. Users learning from data
6. Conclusion
7. Appendix
A. Study design document
A.1. Research question
A.2. Participant profile
A.3. Compensation plan
A.4. Methodology
A.5. Timeline
A.6. Data to collect
A.7. Data analysis
B. MochaTop mockups

1. Introduction

Tablet computers and smartphones are part of everyday life. To perform tasks in different situations, people carry both devices at the same time. Although tablets and smartphones are both developed for mobile usage, they are designed for different use cases. Smartphones are small enough to be used with one hand on the go and fit easily in a pocket. A smartphone is optimized for writing mails, browsing the web and making calls on the go. In contrast, a tablet cannot be used comfortably with one hand and is less maneuverable. Tablets work best placed on a table or on the user's legs. The advantage of a tablet is the bigger interaction space in terms of input and output. On the one hand, this provides easier navigation and data input. On the other hand, the larger screen creates a better view: it is more convenient to watch a movie on a tablet than on a smartphone. These characteristics also enable more possibilities for collaboration on a tablet.
The growing number of different form factors of smartphones and tablets indicates the need both for as much interaction space as possible and for small mobile devices. The challenge is to optimize form factors and input and output in terms of mobility and interaction. We aim to extend the interaction space by using multiple mobile devices simultaneously. In most use cases the devices will be placed on a horizontal surface while multiple devices are used at once. There is a long history of using tables. For hundreds of years we have met, discussed, celebrated, eaten and worked around tables. Humans organize physical objects on tables to get overviews and to gain insights. The cultural significance of tables indicates the need for a connection between traditional tables and the digital world. Interactive tabletops have been a topic in research and development for more than 20 years. But interactive tabletops are not mobile and can only be used in special environments. These are reasons why they have not permeated our everyday lives. In contrast, smartphones and tablets are already commonplace. Every fourth person in the US owns a tablet [26] and nearly every second person in Germany owns a smartphone [36]. By adding spatial awareness to mobile devices, the user is able to arrange smartphones and tablets like sheets of paper and post-its. This would realize Mark Weiser's "tabs" and "pads" [38]. In contrast to classical interactive tabletops, our system is not bulky and can be used in all situations where horizontal surfaces exist. By combining mobility with the advantages of a big interactive tabletop, we realize the concept of the "interactive tablecloth" [23] with today's technology. By giving users the chance to bring their own devices to the table, we allow them to enhance their interaction capacities ad-hoc with their most private devices [28]. This is also strongly connected to the research trend called "Bring Your Own Device" (BYOD) [1].
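To make the idea of relative device placement as an input source concrete, the following is a minimal sketch of how distance and bearing between two tracked devices could be derived. It assumes the interactive table reports each device's centre as 2D coordinates in millimetres; the function name and coordinate format are our own illustration, not part of any SDK used in this work.

```python
import math

def relative_placement(phone_xy, tablet_xy):
    """Distance (mm) and bearing (degrees, counter-clockwise from the
    tablet's x-axis) of the phone relative to the tablet, computed from
    2D positions as a tabletop tracker might report them (hypothetical
    coordinate format)."""
    dx = phone_xy[0] - tablet_xy[0]
    dy = phone_xy[1] - tablet_xy[1]
    distance = math.hypot(dx, dy)          # Euclidean distance
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    return distance, bearing

# A phone placed 100 mm to the right of the tablet:
dist, ang = relative_placement((300.0, 200.0), (200.0, 200.0))
```

An application can then map distance and bearing onto navigation actions, so that moving a device on the table becomes an input channel in addition to the touch screen.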
From a content point of view, there is a need to communicate data to communities and to provide tools that support big-data related issues. Weise et al. [37] argue for developing tools for accessing and understanding data for the general public, and Shaer et al. [31] call for exploring how to design tabletop tools that support discussions about complex issues. This work describes the possible use cases, design, implementation and evaluation of a system that emulates such an interactive tablecloth using an interactive tabletop, a smartphone and a tablet. To evaluate our system, we designed and implemented MochaTop. This application provides information about coffee production and the coffee trade. This kind of data is usually consumed in desktop edutainment. Both devices play an active role in presenting data on coffee pricing and on exporting and importing countries. To interact with the content we designed new navigation patterns which use the relative position of the devices. We believe that MochaTop offers its users new possibilities when it comes to exploring, learning and collaborating on-the-go. This work aims to inspire future designs of systems of multiple mobile devices.

2. Related work

This section examines work related to this thesis. The related work is structured in three categories: using mobile devices in connection with large stationary screens, data exploration and collaboration tools on tabletops, and mutually spatially aware mobile devices. Section 2.4 classifies our evaluation study in terms of methodology.

2.1. Mobile devices and large screens

Large displays, whether horizontal tabletops or vertical screens, provide great potential to gain insights into large and complex data sets. Even small details can be shown in high resolution while a wider sector is visualized. Large displays allow looking at a data set together with other people at the same time and place. This enables the viewers to discuss the visualized data set.
One research topic is the navigation and interaction with these visualizations. An important principle is the visual information-seeking mantra by Shneiderman: "Overview first, zoom and filter, then details-on-demand" [33]. This is the key challenge for exploiting the full power of large displays. Large displays have the drawback of not being portable, which precludes ad-hoc work on visualizations. Our research aims to benefit from the usage of different-sized devices and to bring this into a mobile world. Keefe et al. connect an iPod touch to a rear-projection display to navigate through large data sets (e.g. geospatial data for social scientists or relational algebra systems for biochemistry analysis) [13]. The position and orientation of the iPod is tracked to enable different navigation patterns, depending on the position of the user and the orientation of the iPod. This allows the user to walk freely in front of the visualization and provides different views of the data set. Cheng et al. use a large screen to provide an overview for multiple users. Each of them can focus on a subarea of the visualization and manipulate the data with a tablet computer. This provides a public view while allowing work on a private screen [5]. The BEMViewer has a comparable aim. The BEMViewer is a system for collaborative work on a tabletop connected to multiple tablet computers. This enables users to work together on the table and to work on single tasks alone on the tablet. The BEMViewer has the advantage of a more complex branching and merging system [19]. Spindler et al. extend an interactive table with lightweight displays [34]. These displays are a metaphor for paper sheets. The mobile screens can be moved freely above the horizontal tabletop display. This allows interaction in different horizontal levels above the tabletop. The tabletop provides the basic information; more detailed data can be overlaid on the mobile device.
To interact with the data on the mobile screens, Spindler et al. present different spatial gestures. In a user study, Spindler et al. analyzed these gestures and discovered different layers above the table suitable for interaction [35]. Yang et al. present a novel mouse prototype for desktop environments, called LensMouse. LensMouse combines a regular computer mouse with a touch screen. Yang et al. aim to reduce mouse trips by displaying auxiliary content directly on the mouse [41]. These research contributions show the usage of different devices simultaneously. The benefit for data exploration of using at least one additional mobile device is evident. The usage of multiple devices enables users to keep an overview of very large data sets and to interact with very high resolution data at the same time. We aim to design a system which takes advantage of different-sized devices to enhance users' capabilities in gaining insights. In contrast to the related work, we focus on ad-hoc interaction. Users should have the freedom to analyze big data on-the-go. It is a realization of the "interactive tablecloth" [23] with low-cost hardware.

2.2. Data exploration and collaboration tools

Tabletop applications enhance the possibilities for social interaction while exploring digital content in small groups [7]. Shaer et al. indicate that tabletops provide great potential to gain insights into complex data sets [31]. Fjeld et al. use a tangible user interface (TUI) for a collaborative learning tool called "Augmented Chemistry" [8]. The use of 3D models on a tabletop allows understanding abstract chemistry concepts better. Strongly related is a tabletop application called "Chemieraum" by Gläser et al. [10]. The aim of Chemieraum is to spark interest in chemistry in an exhibition. Shaer et al. developed a tabletop learning tool for undergraduate students, called G-nome Surfer [32]. G-nome Surfer can be used for inquiry-based learning of genomes. Shaer et al.
found in a user study a positive effect in terms of performance, workload and enjoyment in comparison to existing bioinformatics tools. Piper et al. analyzed exam preparation in small groups of students. The students were divided into two groups. One group used only lecture material, pen and paper. The other group additionally used a tabletop tool. Piper et al. showed a positive effect of using a tabletop in terms of exam results [25]. Phylo-Genie [29] is a tool to engage students in working collaboratively with phylogenetic trees. Schneider et al. showed in a study that students are encouraged to keep actively collaborating when using Phylo-Genie. In contrast, Kinesthetic Pathways [40] focuses on professional researchers. Kinesthetic Pathways allows researchers to discuss and manipulate complex processes in systems biology. Collaborative work seems to profit from a table-centered work environment. Tables have a long cultural importance. Many social activities have taken place around tables since time immemorial. We meet around tables, we eat around tables, and we play around tables. In this work we aim to create a mobile system to enhance the multifarious activities around tables. As a very early contribution, Mandryk et al. built a system consisting of multiple handhelds for education in primary schools [18]. Every student uses one Personal Digital Assistant (PDA) and can share information with other students. Mandryk et al. showed rich social interaction between the students. Especially playful education can profit from multi-device interaction. With MochaTop we show the possibility to build multi-device applications for playful education with (nearly) today's available hardware. Playful education is suitable for every age group and for many different learning environments. Shaer et al. describe a need for more research in terms of data exploration tools for all groups of interest [31].
In these terms, our work presents a tool for ad-hoc exploration and aims to inspire the design of new interaction techniques for big data on-the-go.

2.3. Mutual-spatial-aware mobile devices

Kratz et al. present "around-device interaction (ADI)" for mobile and wearable devices [16]. The aim is to extend the interaction space where the space on the device is too small for proper user interaction. To track hand gestures beside and over the device, Kratz et al. use infrared (IR) sensors. Butler et al. extended a regular touch smartphone with a linear array of infrared proximity sensors on each long edge [3]. The sensors allow interacting with one finger on each side of the smartphone simultaneously. Codex [11] is a prototype of a dual-screen tablet. The two displays are permanently connected, and Codex is foldable between the two screens. This offers the possibility to use the tablet in different ways: it can be used in a private, public or semi-private state. This enhances the interaction patterns immensely in comparison to a classical tablet computer. Merrill et al. developed small touch-sensitive devices called Sifteo Cubes [21]. Multiple cubes can be used together; an advantage is the wireless connection between the cubes. The user interaction takes place by touching, tilting, shifting and arranging the cubes. Merrill et al. were inspired by "observing the skill that humans have at sifting, sorting, and otherwise manipulating large numbers of small physical objects" [20]. The project aims to reduce cognitive workload through the possibility to arrange digital objects in a physical way. Kurdyukova et al. analyzed different gestures to transfer data between tablet computers and between tablet computers and tabletops [17]. They tested three different modalities of gestures to transfer data from a tablet computer to another device: multi-touch gestures, spatial gestures and direct contact gestures. Users understand the information displayed on a tablet as two-dimensional.
For this reason, only a few users tried to manipulate the content by moving the tablet three-dimensionally. Mostly the tablet is placed flat on a table. Three-dimensional spatial gestures seem to be unnatural on a tablet. As devices get more compact and smaller, they lose input space. At the same time, portable devices support our daily life more and more. This contrast points out the requirement for more interaction space beyond the devices themselves. We add mutual spatial awareness to tablet computers and smartphones. This enhances the possible navigation patterns and provides more usability to users.

2.4. Methodology

J. Bardram and A. Friday introduce, in Ubiquitous Computing Fundamentals edited by John Krumm, different concepts for evaluating ubiquitous systems: "simulation", "proof-of-concept" and "implementing and evaluating applications" [2]. After the implementation of a system, functions can be tested by test scripts. Such simulations help to identify technical issues for future research. The proof-of-concept is a test that the system can be designed, implemented and used, rather than a technical or mathematical proof. For a proof-of-concept study, all important features are implemented as a working prototype. At this stage the system can be tested by potential users and discussed in terms of future implementations. The last evaluation concept requires a full implementation of the system. After the implementation, the system can be used by a group of end users. In this study we conducted a proof-of-concept study. A proof-of-concept study is a suitable way to identify the feasibility and usability of MochaTop. Furthermore, an implementation of this concept allows gaining insights into naturally used interaction patterns. A question arising for the conduct of a proof-of-concept study concerns the study environment.
"In-the-wild" studies, like the one Hinrichs and Carpendale conducted [12], have the advantage of showing the feasibility of a system in a real-life setting. However, according to Kjeldskov et al., a controlled laboratory experiment reveals significantly more usability problems than a field test [14]. Kjeldskov et al. compared the usability evaluation of a context-aware mobile system in a laboratory with a usability test of the same system in a field study. In this study we aim to explore users' behavior and analyze the user experience. We want to inspire future development of multi-device systems and their interaction patterns. For this aim we conducted the study in a controlled environment.

2.5. Submitted Paper based on this work

Based on the findings of this work, a paper was written and submitted to the ACM International Joint Conference on Pervasive and Ubiquitous Computing 2013 (UbiComp 2013): Woźniak, P., Lischke, L., Zhao, S., Yantaç, A., Fjeld, M.: MochaTop: Exploring Ad-hoc Interactions in Mobile Tabletops.

3. Design

Our design process starts with a vision of the future exposure of information visualization and digital devices. In this vision, horizontal surfaces play a central role. On these surfaces we will arrange both physical objects and digital information in a physical way. Multiple connected devices play an important part. In the following we present different possible applications for these devices as low-fidelity prototypes.

3.1. Design requirements

We assume a future in which people use multiple input and output devices of different sizes at the same time. The devices will be ubiquitously connected to each other. We imagine people will use these devices as digital paper to arrange content in a physical way.
Digital paper will be used to gain knowledge about business data in a work environment, as a tool for self-teaching, and for sitting in small groups discussing any kind of information. The possibility to arrange digital content in a physical way improves meeting effectiveness. This vision is supported by the popularity of carrying smartphones and tablet computers simultaneously. In the US, 31% of smartphone users also own a tablet computer, and 23% of Americans get news on at least two digital devices [22]. Growing ownership of tablets and smartphones can be expected in the coming years [26, 36]. This growing user group of ultra-portable and highly portable devices creates the chance for systems which connect multiple devices of these categories. A system which connects multiple mobile devices should obviously be designed for an environment in which it is at least suitable to use the device with the lowest portability. In our case, the use environment of a tablet computer seems more promising than focusing on the use environment of a smartphone. Mostly, users experience a tablet as a tool to put (flat) on a table or to hold with both hands [17]. This clarifies that a system consisting of at least one tablet and one smartphone cannot be used on-the-go. In our scenario the user is sitting at a table. It is an environment where the user does not want to use, or has no access to, a regular laptop or computer. A suitable area could be a service area in an airplane or train, a coffee bar, at home in the kitchen or living room, or a sales situation. The user enjoys free time. While sitting in a coffee bar, the user wants to be informed or entertained. After a while a friend arrives and there is a need to discuss results of a common project. All these ad-hoc tasks can be supported by applications using multiple devices.

3.2. Discussion of prototypes

Inspired by former research in our working group, we designed first low-fidelity prototypes.
Thereby we focused on the user environments described in 3.1. We assume that the physical environment does not change while the system is used. In a first open-minded design workshop we created two prototypes. These prototypes aim to support reading and application selection on tablets. In a second step we conducted several semi-structured interviews with students and employees at the Division of Interaction Design, Department of Applied IT, Chalmers University of Technology. We discussed 1) the presented prototypes and 2) further possible application scenarios. Our aim was to check, already at a very early state of development, whether users would like to use a system consisting of multiple devices. Furthermore, promising scenarios and navigation patterns were discovered.

3.2.1. eBook Reader

While reading, secondary tasks are often performed: an overview of the whole text is needed, notes and comments are added to the text, and comparisons between different text passages or figures support the understanding of the content. To support these secondary tasks, our prototype uses the smartphone in addition. Six features which support the reading task are placed around the tablet (see fig. 1). They can be selected by the position of the smartphone. The additional feature is mainly displayed on the smartphone to present the focused text without any distraction on the tablet. The different functions are visualized in a frame at the edge of the tablet screen. In the first design step we thought about the following functions:

• View table of contents
• View and open previous page
• View and open next page
• Create a note list for this page
• Mark passages with different styles
• Show references as a list or open a single reference

In interviews, multiple people did not see a need for the previous-page and next-page functions. For them a compare function seemed more useful. It should be possible to select different pages or paragraphs.
This could be used to compare different text passages, figures and tables. Such a function is included in most Apple products, but only rarely used. Furthermore, most interviewees would like to have an easily accessible dictionary. One student mentioned that the effort of moving the smartphone around the whole tablet is too high, so some functions are not well accessible. While moving the phone around, parts of the tablet would be covered and not visible. Instead of arranging functions all around the tablet, it would be more natural to use only one or maybe two edges of the tablet. To compensate for the smaller interaction space, the student proposed to nest the functions in a menu. Then the distance between the devices could be used as an input method. High-quality eBook readers are already available on the market, which makes bigger improvements through prototype development difficult. Furthermore, normal interaction with an eBook reader takes multiple hours, while active interaction with the application is very rare.

Figure 1: eBook Reader; as a low-fidelity paper prototype

3.2.2. App Selector

For daily purposes, mainly a few applications are used on a tablet computer. People use their tablets, inter alia, to browse the internet, send messages and organize meetings. We arranged the most frequently used tablet apps at the edge of the tablet screen. In our prototype we arranged six different applications around the tablet. We assumed a web browser, a mail client, a calendar, an address book, a note-taking tool and an instant messenger as the most used (see fig. 2). By placing the smartphone in one of the six areas, the application will be started. To indicate the areas around the tablet, a menu is shown at the edges of the tablet screen. When an application is selected, the smartphone provides additional information. For the web browser, bookmarks or a history could be viewed. The calendar could be supported by a list of all upcoming meetings.
While an instant messaging chat is open, other available contacts could be shown on the smaller screen, or while writing an email, an overview of other mails could be provided. While we discussed this idea, multiple people mentioned that they see no great benefit for their personal use of tablets and smartphones.

3.2.3. 3D-Viewer

Navigation through 3D data is a challenging question in a mobile environment. The small screen does not allow displaying larger visualizations in high detail. On the other hand, gestures to navigate through the data set are complicated to perform in a mobile environment. The screen of the tablet can be used to view a 3D model; the smartphone can be used to navigate through the model. This application could be interesting for giving people a vision of not (yet) physically existing objects in the field. The smartphone can be used to navigate through the visualization and to customize the viewing settings. The navigation patterns explored here are very close to 3D navigation on desktop computers, so we see no sense in analyzing them again on tablets.

Figure 2: App Selector viewing a calendar application; as a mockup

3.2.4. InfoVis-Viewer

Abstract data play an increasing role in our world. In many cases decisions depend on a huge amount of data. This justifies the need for insight-generating data visualization and intuitive interaction patterns. To allow people to make decisions on-the-go, suitable mobile solutions have to be provided. On the tablet screen, abstract information can be presented (e.g. time series graphs, scatter plots, parallel coordinates). Different data sets or subsets can be placed around the tablet and selected by the position of the smartphone. Different visualizations can be selected on the smartphone, or it can be used to create new subsets of the data set. The selection of different data sets by the physical position of the smartphone is a metaphor relating to the cultural capacity to arrange physical objects.
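The position-based selection described above could be realized by partitioning the space around the tablet into equally sized angular zones and mapping the smartphone's position to one of them. The following sketch assumes 2D tracker coordinates and uses hypothetical data-set labels; it is an illustration of the idea, not the MochaTop implementation.

```python
import math

# Hypothetical labels for six data (sub)sets arranged around the tablet.
ZONES = ["set-1", "set-2", "set-3", "set-4", "set-5", "set-6"]

def select_dataset(phone_xy, tablet_xy, zones=ZONES):
    """Map the phone's angular position around the tablet centre to one
    of len(zones) equal sectors (sector 0 starts at the 3 o'clock
    direction and sectors proceed counter-clockwise)."""
    angle = math.degrees(math.atan2(phone_xy[1] - tablet_xy[1],
                                    phone_xy[0] - tablet_xy[0])) % 360.0
    sector_size = 360.0 / len(zones)
    return zones[int(angle // sector_size)]
```

Placing the phone to the right of the tablet would thus select the first data set, and moving it counter-clockwise would step through the remaining sets; the same mapping could drive the App Selector zones described earlier.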
We see a great potential in this idea. At this point, however, more content is needed to be able to evaluate an information visualization tool. We included some proposals concerning this prototype in our high fidelity prototype.

3.2.5. Fish-Bowl

Playful education is a rising topic. Interactive education can be used in many different environments and is suitable for nearly all age groups. To realize the potential of interactive education, tools have to support rich social interaction on the one hand; on the other hand, they should allow exploring the topic privately. Two of the ideas presented here focus on primary school pupils and one is developed for higher education. The general concept is to explore a topic by exploring the physical space: different subtopics, hints or task solutions can be found all around the tablet.

• Animal species: A number of dots are shown on the tablet screen. Each dot represents a fish around the tablet. By moving the smartphone into the area next to a dot, the fish shows up on the smartphone. The task is to find all fishes of one species and collect them on the tablet by moving the smartphone close to the tablet.

• Math: A math equation is presented on the tablet (e.g. 3 ◦ 7 = 10)2. The four basic operators are placed around the tablet. The task is to select the right one.

• Frequency: A signal composed of multiple amplitudes is shown on the tablet. Different amplitudes are arranged around the tablet. The task is to use the smartphone to select all amplitudes contained in the signal.

These ideas would need a more detailed playful-learning concept. Such a concept would mainly not focus on interaction patterns and would therefore not contribute to our research topic.

3.2.6. Multi Device Google-Drive Application

During the discussion of the app selector paper prototype, the idea of a Google Drive application emerged. Several (e.g. six) Google Drive documents can be placed in a circle around the tablet.
One student mentioned that he would like to arrange different documents around the tablet; this seemed natural to him. One document is focused on the tablet and is currently edited by the user. By moving the phone to the area of another document, that document is shown on the smartphone. With this method an easy and fast overview of multiple documents can be gained without losing the focus on the main document. This is particularly helpful for collaborative work: while one user is working on their own tasks, it is possible to keep an overview of the progress of documents edited by other contributors. By double-clicking on the smartphone, the documents can be swapped. Many people liked this idea and saw a benefit for their work. To develop a high fidelity prototype out of this concept, a collaborative cloud computing office solution is needed which provides comfortable access from tablet and smartphone. At present there are no extendable office solutions for smartphones and tablets available, and implementing a whole office suite would take too much time for this study.

3.2.7. Presenting tool for architecture planning

Imagining newly planned buildings or transport facilities is always complicated and nearly impossible for ordinary people. Architects could more easily convince customers of their solutions by providing visual representations ad hoc. Affected parties could be involved more by showing ideas in a real environment. This idea would profit in particular from adding three-dimensional spatial awareness to multiple devices, which would allow presenting animated constructions in the field, for example on a construction site. We decided not to continue with this idea, because we would need a deeper understanding of the daily work of architects to build and evaluate a high fidelity prototype.

2Thereby ◦ is an unknown basic mathematical operator (+, −, ∗, /)

3.2.8. Nutrition supporting tool

Nutrition plays an important role in western societies.
Allergies, ascendance, diabetes or other diseases force people to keep to a special diet. At the same time, caring about nutrition is part of a modern lifestyle. For these reasons nutrition has become a field of interest for ubiquitous computing (e.g. [6]). The smartphone could be used to select different groceries from the space around the tablet; through this interaction a healthy meal can be composed. This idea is inspired by "Tangible MyPlate", a result of the "Tangible Interaction" course in the Interaction Design & Technologies master program at Chalmers University of Technology [9]. For this concept, however, other interaction methods are more suitable than a mobile multi-device system. For example, we see better conditions for a tangible solution like "Tangible MyPlate" [9].

3.3. MochaTop

To investigate users' behavior, we aimed to develop a working prototype. The prototype should contain different types of information and require multiple interaction patterns. In our scenario, multi-device systems will be used in many environments; one could be a coffee bar. For our high fidelity prototype we assumed the user is drinking coffee and enjoying free time. To relate more closely to possible users, we created different personas. These allow us a deeper understanding of users and clarify assumptions about users and their behavior. During the design and implementation phase, the personas helped us to keep on track. Our users are between 20 and 50 years old, live in Gothenburg, have broad interests and like to sit in coffee houses. In the following, let us call a specific user out of this group Linnea Carlsson. While she is drinking coffee on an early afternoon in a cafe, she would like to gain more knowledge about the product she is consuming. She downloads a multi-device application called MochaTop for her smartphone and her tablet to inform herself. The application aims to inform about fair trade.
It provides information about large coffee-producing countries, about countries which import a lot of coffee, about the price of fair trade coffee in comparison to regular coffee, about the process from the bean to the cup, and about what Linnea could do to support fair trade.

3.3.1. Content

In MochaTop the topic of coffee and fair trade is presented in five subcategories. In every category different types of visualization are combined:

• Importers: statistical data, as numbers and pie charts
• Exporters: statistical data, as numbers and pie charts
• Prices: time-related price data, as numbers and a time series plot
• Production chain: as a network and described as text
• Fairtrade: stories and headwords

The largest coffee exporting and importing countries are illustrated as a pie chart. Every country is represented by its relative share of exported or imported coffee on the world market. By moving the smartphone next to a sector, the sector is highlighted and information about the specific country is displayed on the smartphone. The countries are presented by a geographical map and statistical data about the country, like area and gross domestic product (GDP). Furthermore, statistical data about coffee is shown: consumption in total and per capita and, if it is a coffee exporting country, the total amount of exported coffee. For the general information about the countries we used Wikipedia3 as a data source. The coffee-related data is provided by the International Coffee Organization (ICO)4. The coffee production chain is described as a flow graph displayed on the tablet. Every single production step is explained in a short text, some of them accompanied by images. We visualize the regular market price for coffee and the fairtrade price from December 1982 until October 2012 as a time series chart. The chart contains around 370 data points each for the normal and the fair trade price. All information about fair trade is based on information material5 from the Fairtrade Foundation.
The pictures are provided with written permission by Fairtrade Deutschland6.

3.3.2. Navigation patterns

These topics contain a broad variety of different kinds of information. This allows us to use different visualizations and a diversity of navigation patterns. Figure 3 shows the different kinds of data structures and the navigation patterns used in MochaTop.

Pie Menu

Inspired by Hopkins's pie menu [4], we designed a radial interaction pattern. By moving the smartphone around the tablet, the user enters distinct zones. The zones can be used to display relevant information on the smartphone or the tablet. Furthermore, the zones can trigger contextual actions. The zones are arranged as pie slices around the center of the tablet. The zones can have equal size or they can differ depending on the context. This pattern uses the entire surface around the tablet, which allows a maximal interaction space and a wide arrangement of different contexts. A pie menu with different-sized zones can be used, for example, to present percentage data. In MochaTop, pie menus are used to present the market share of countries which import or export coffee. The pie chart is displayed on the tablet. By moving the smartphone around, single countries can be selected. Information about the specific country is presented on the smartphone. At the same time, the slice of the selected country is highlighted by color to provide feedback on the tablet as well (see figure 4). A pie menu with equal-sized zones is implemented for the main screen of MochaTop to select one of the five subtopics. On the tablet the subtopics are arranged circularly around a coffee bean.

3https://en.wikipedia.org
4http://www.ico.org/coffee_prices.asp?section=Statistics
5http://www.fairtrade.org.uk/includes/documents/cm_docs/2012/F/FT_Coffee_Report_May2012.pdf
6http://www.fairtrade-deutschland.de

Figure 3: Data structures contained in MochaTop and the used spatial navigation patterns, reprinted from [39]
No other indication of a circular interaction is provided. On entering a zone, the related subtopic is distinguished by its text font. On the smartphone the selected subtopic is named. A button on the touch screen of the smartphone enables the user to enter the subtopic.

Slide zones

Many contexts do not allow a natural circular arrangement; in these cases a pie menu would not be intuitive for users and a linear interaction is needed. We designed rectangular boxes in which different states can be selected by the position of the smartphone. These slide zones are inspired by physical sliders. The slide zones can be placed next to the tablet either horizontally or vertically. The center of the slide zone is aligned with the middle axis of the tablet, so the slide zone is centered horizontally or vertically relative to the tablet. The number of selectable states is limited by the technical accuracy of the system. This allows implementing sliders which give the user the feeling of either continuous or discrete selection. In MochaTop we use both a horizontal slide zone and two connected vertical slide zones (see figure 5). The presented time series chart has around 720 data points. To enable users to explore the relationship between the normal coffee price and the fair trade coffee price, the whole chart is not displayed at once. A slide zone is implemented below the tablet. By sliding the smartphone to the right, the most current data is visualized; by sliding the smartphone to the left, the oldest prices are displayed. In the middle of the tablet screen, a vertical blue line symbolizes the point for which the prices are displayed as numbers. MochaTop presents the coffee producing process as a directed graph.

Figure 4: Coffee exporting countries presented in MochaTop

Figure 5: Exploring time series plots (left) and discovering the coffee production chain (right).
Because the process is mostly hierarchical, a linear interaction is needed to present the single steps of the production. Nine different steps are explained, seven on the left side and two on the right. Every single step is presented on the smartphone. To access the information about a production step, the smartphone has to be slid into the slide zone next to the vertex which describes that step.

Distance controlled navigation

In some cases the user has to take binary decisions. A comfortable way to control these decisions is to use the distance between smartphone and tablet. MochaTop uses distance control to switch back from subtopics to the main menu. In an early state of the implementation, we also aimed to swap the content of the screens, to make reading longer text more comfortable on the tablet instead of on the smaller smartphone. We disabled this function for the proof-of-concept study because of technical lag.

Slide under

A pattern not implemented in MochaTop is to slide the smartphone under the tablet. This pattern needs a tablet holder which allows tilting the tablet, like the iPad Smart Cover or the iPad Smart Case7. This pattern seems promising, for example, for transferring data between smartphones and tablets. One disadvantage could be the occlusion of the smartphone while it is lying under the tablet.

4. Implementation

This section describes the technical implementation of the MochaTop application and the underlying framework. To emulate the mutual spatial awareness of the devices, we put them on an interactive table (see figure 6). To provide a maximum of reproducibility and accessibility for future research, we use a commercially available interactive table, a Microsoft Surface (PixelSense)8, and later changed to the newer version MS Surface2 (Samsung SUR40)9. In the following we call both systems simply MS Surface. In earlier studies iPhones and iPads were used as mobile devices. For this study we decided to work with Android devices.
This has multiple advantages:

• A wider range of differently sized devices is available
• The Android developer environment is easier to access
• Android offers deeper system access for developers
• More open APIs and libraries exist for Android

4.1. Spatial Awareness

The software for the MS Surface presented here, called "Codeine" [24], was developed for earlier scientific work. Apart from small adaptations, we did not reimplement the system. At the starting point of this study, no documentation of the developed software was available; for this reason a brief description of the system is given here. The software is written in C# and uses the MS Surface SDK. To be able to identify the mobile devices, an identifying tag is placed on the back of each device, or on its cover. The tags are default MS Surface tags provided by Microsoft and are compatible with both versions of the MS Surface. To achieve the best possible tag recognition, the tag has to be printed in very high quality: the black areas have to be as dark, and the white areas as light, as possible, and the edges between white and black areas have to be sharp. If self-printed tags are used, the size of the tags can be increased by a factor of 1.5. Any undesired IR10 reflection should be avoided, so bright areas, especially around the tag, should be covered. The light conditions in the room where the MS Surface is placed have an influence on the tracking quality. In particular, direct light incidence or sunlight can provoke undesired behavior. To provide stable device positions, the tags should lie flat on the surface as much as possible. This is challenging given the design of many Android devices.

7https://www.apple.com/ipad/accessories/
8https://www.microsoft.com/en-us/pixelsense/default.aspx
9http://www.samsunglfd.com/product/feature.do?modelCd=SUR40

Figure 6: Underlying system of MochaTop: MS Surface and two mobile devices, reprinted from [39]
Android smartphones are often curved on the back side or have protrusions; examples are the Nexus S11 and the Sony Xperia U12. It is nearly impossible to place a tag on such non-flat backs in a way that enables proper detection while sliding. In many cases the tag is lifted while the user slides the smartphone over the surface. Device backs which are not black should be covered with black covers or paper. The interactive surface tracks the position and orientation of the tags. The mobile devices can request all tracked tags via UDP13. The interactive surface sends the requested information as a byte string with the following format (the Position block is repeated Number times):

Message = Type Subtype Number (Position)^Number (1)
Position = Tag XCoordinate YCoordinate Orientation (2)

Thereby is:

Name         Description                                              Length in bytes
Type         The type of the message                                  1
Subtype      Subtype of the message                                   1
Number       Number of tags (tagged devices)                          1
Position     Description of the position of the tag (see Equation 2)  13
Tag          Number of the tag (printed in the middle of the tag)     1
YCoordinate  Y-coordinate of the tag14                                4
XCoordinate  X-coordinate of the tag15                                4

The type of the message can be:

Name              Description                                         Value
TypekMSGContacts  Request concerns tagged devices (contacts).         0
TypekMSGIPs       Request concerns IPs of tagged devices (not used).  1

The subtype of the message can be:

Name                    Description                            Value
subTypekMSGSetContacts  Set contact message (not implemented)  0
subTypekMSGSetIPs       Set IP message (not implemented)       1
subTypekMSGGetContacs   Position of the devices is requested   2
subTypekMSGGetIPs       Device IPs are requested (not used)    3

While the system is running, tablets and smartphones can be tagged anywhere on the surface. In the study we used a dark gray background to avoid possible irritations of the participants.

11https://en.wikipedia.org/wiki/Nexus_s
12https://en.wikipedia.org/wiki/Sony_Xperia_U
13User Datagram Protocol
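The message layout above can be sketched as a small Java parser. This is an illustrative reconstruction, not the original Codeine code: the class and method names are invented here, and it assumes the 4-byte coordinate and orientation fields are big-endian floats, which the thesis does not specify. It also already accounts for the byte mismatch between the C# sender (unsigned bytes) and Java (signed bytes).

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical parser for the position message described above.
// Layout: Type (1 byte), Subtype (1), Number (1), then Number * Position,
// where Position = Tag (1), X (4), Y (4), Orientation (4) = 13 bytes.
public class CodeineMessageParser {

    public static class TagPosition {
        public final int tag;
        public final float x, y, orientation;
        TagPosition(int tag, float x, float y, float orientation) {
            this.tag = tag; this.x = x; this.y = y; this.orientation = orientation;
        }
    }

    // C# sends unsigned bytes; Java bytes are signed, so mask with 0xFF
    // before interpreting single-byte fields as counts or ids.
    public static int toUnsigned(byte b) {
        return b & 0xFF;
    }

    public static List<TagPosition> parse(byte[] message) {
        ByteBuffer buf = ByteBuffer.wrap(message); // big-endian by default
        int type = toUnsigned(buf.get());          // 0 = contacts, 1 = IPs
        int subtype = toUnsigned(buf.get());       // e.g. 2 = get contacts
        int number = toUnsigned(buf.get());        // number of tagged devices
        List<TagPosition> positions = new ArrayList<>();
        for (int i = 0; i < number; i++) {
            int tag = toUnsigned(buf.get());
            float x = buf.getFloat();
            float y = buf.getFloat();
            float orientation = buf.getFloat();
            positions.add(new TagPosition(tag, x, y, orientation));
        }
        return positions;
    }
}
```

The unsigned-byte masking is the essential part; without it, a tag count or tag id above 127 would be read as a negative number.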
For debugging purposes it is possible to display all needed information about the tags on the surface: the tag id, position and orientation can be shown. To provide a natural interaction, each device requests the positions of all tagged devices every 3 ms16. This value was determined empirically for the prototype presented here. On the one hand, a high frequency of requests creates a lot of unneeded computation and network overhead. On the other hand, a too low frequency of requests generates "unnatural" system behavior: the system lags. The software running on the surface does not provide multi-threading at this point, so only one request can be handled at a time. On the smartphone, every activity can run a CodeineController as a thread to get the positions of the devices tagged by the Surface. The thread-starting activity has to implement a CodeineListener to listen to messages received by the thread. The implementation of CodeineListener needs to implement the function public void onCodeineEvent(CodeineEvent e). The CodeineEvent contains an object of type CodeineDevices with all tagged devices, their positions and orientations. The correlation between the tag ids and the devices is not transmitted and has to be known by the mobile device; we set the tag ids as constants in const.java. Based on this data, interaction patterns can be implemented. For the communication between the MS Surface and the Android devices, one has to consider that Java and C# use different implementations for bytes: Java uses signed bytes and C# uses unsigned bytes. A conversion has to be conducted.

14y = 0 is the upper edge of the screen
15x = 0 is the left edge of the screen
16millisecond

4.2. Inter-device communication

To give the user the feeling of a strongly coupled system, communication between the devices is needed. Inter-device communication could be used to transfer files between the devices or to trigger an event on the other device.
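A one-byte UDP exchange of this kind can be sketched as follows. This is a minimal illustration, not the actual CodeineD2DController/CodeineD2DListener API; the class name and the event code are invented for this example.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Hypothetical sketch of one-byte inter-device messaging over UDP.
public class DeviceMessenger {
    // Example event code, e.g. "button pressed, enter submenu" (illustrative).
    public static final byte ENTER_SUBMENU = 0x01;

    private final DatagramSocket socket;

    public DeviceMessenger(int localPort) throws Exception {
        socket = new DatagramSocket(localPort);
    }

    // Send a single byte to the peer device.
    public void send(byte event, InetAddress peer, int peerPort) throws Exception {
        byte[] payload = { event };
        socket.send(new DatagramPacket(payload, payload.length, peer, peerPort));
    }

    // Block until one byte arrives and return it.
    public byte receive() throws Exception {
        byte[] buf = new byte[1];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);
        return buf[0];
    }
}
```

Extending the payload beyond one byte would only require a larger buffer and an agreed message layout, which matches the observation below that the one-byte limit is not a technical border.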
In MochaTop, inter-device communication is used to trigger distributed touch gestures. The user can select a submenu by moving the smartphone into the related zone and clicking a button on the smartphone. To enter the submenu, the event of the pressed button has to be reported to the tablet. The inter-device communication is quite similar to the Codeine communication: UDP is used as the communication protocol again. In the current implementation it is possible to send one byte of information between the devices. This is not a technical limit and can be extended at any time if needed. To send messages, a CodeineD2DController has to be created; this class handles all needed communication. To receive inter-device messages, the activity has to implement a CodeineD2DListener. The CodeineD2DController can be started in a separate thread.

4.3. Navigation Patterns

As described in 3.3.2, we designed different navigation patterns for MochaTop. In this section the implementation is specified. From a software engineering point of view, the model–view–controller pattern [15] is a good way to create clearly structured navigation patterns. The information received from the MS Surface is stored in a CodeineDevices object. CodeineDevices extends a vector of CodeineContact. A CodeineContact is created for every tagged device and contains the id of the tag, the x- and y-coordinates and the orientation.

Pie menu

To implement a pie menu, the angle between two devices is needed. The central point for the calculation should be the central point of the tablet. CodeineContact contains a public int getAngle(CodeinePoint p) to calculate the angle relative to a center device, which is passed as the function parameter. Let Cx be the x-coordinate of the center device, Cy the y-coordinate of the center device, and Mx and My analogously the coordinates of the smartphone. Math.atan2 calculates the angle of the polar representation of the rectangular coordinates (x, y).
Thereby x = Cx − Mx and y = Cy − My. In a second step the angle is transformed into degrees. Because more accurate information is not usable for designing interaction patterns, the value is returned as an int.

Slide zones

It is possible to create a slide zone next to the tablet in every direction (left, right, above, below). SlideBox distinguishes between the sides; this is used to differentiate between horizontal and vertical slide zones. Slide zones left and right of the tablet are vertical; above and below, the zones are horizontal. If a vertical slide zone is used, the difference between the x-coordinates of both devices is calculated; if the slide zone is horizontal, the y-coordinates are used instead. The difference is then divided by the width of one state. This width is calculated from the length of the slide zone and the number of needed states. The length of the slide zone has to be set in Const.java as UsedHorizontalBoxSpace or UsedVerticalBoxSpace, respectively. In MochaTop, UsedHorizontalBoxSpace is set to 600 and UsedVerticalBoxSpace to 500; these values were determined empirically. To check on which side of the tablet the smartphone is lying, CodeineContact provides check functions: public boolean isLeftOf(CodeineContact c), public boolean isRightOf(CodeineContact c), public boolean isAboveOf(CodeineContact c) and public boolean isDownOf(CodeineContact c). The parameter is the referenced center device, i.e. the tablet. The returned state values are in the range of (−#states/2, +#states/2), where the smallest state is the leftmost one (if horizontal) or the highest one (if vertical). If the smartphone is outside of the slide zone, #states + 1 is returned.

Distance controlled navigation

Controlling navigation by the distance between two devices is very natural and simple to implement. CodeineContact provides a function to calculate this value in relation to a second CodeineContact.
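The three calculations in this section can be sketched together in Java. This is an illustrative reconstruction, not the original MochaTop code: the class and method names, the normalization of the angle to [0, 360), and the assumption of an even number of slider states are choices made for this example.

```java
// Hypothetical sketch of the pie-menu angle, slide-zone state, and
// device-distance calculations described in this section.
public class SpatialNavigation {

    // Pie menu: angle of the smartphone (mx, my) around the tablet
    // center (cx, cy), returned in whole degrees.
    public static int getAngle(double cx, double cy, double mx, double my) {
        double rad = Math.atan2(cy - my, cx - mx); // atan2(y, x)
        int deg = (int) Math.round(Math.toDegrees(rad));
        return ((deg % 360) + 360) % 360;          // normalize to [0, 360)
    }

    // Slide zone: map the offset along the slide axis to one of `states`
    // discrete states. zoneLength corresponds to UsedHorizontalBoxSpace
    // or UsedVerticalBoxSpace (600 / 500 in MochaTop). Assumes an even
    // number of states; outside the zone, states + 1 is returned.
    public static int getSlideState(double tabletCoord, double phoneCoord,
                                    double zoneLength, int states) {
        double diff = phoneCoord - tabletCoord;    // offset from center axis
        double stateWidth = zoneLength / states;   // width of one state
        int state = (int) Math.floor(diff / stateWidth);
        if (state < -states / 2 || state >= states / 2) {
            return states + 1;                     // outside the slide zone
        }
        return state;
    }

    // Distance control: Euclidean distance between the two tag centers
    // (Pythagorean theorem).
    public static double getDistance(double x1, double y1,
                                     double x2, double y2) {
        double dx = x1 - x2, dy = y1 - y2;
        return Math.sqrt(dx * dx + dy * dy);
    }
}
```

For a six-state slide zone of length 600, for instance, an offset of 150 maps to state 1, while an offset of 400 lies outside the zone and yields 7 (= states + 1).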
The function uses the Pythagorean theorem to compute the distance. More critical is the choice of usable distance values. Two distance-controlled navigation patterns are used in MochaTop. One is used to swap the content between the smartphone and the tablet screen; here the smartphone has to be moved very close to the tablet, and a distance of 400 was chosen to trigger this controller. The other distance control is used to return to the main menu; to trigger this controller, the smartphone has to be moved away from the tablet, and here the chosen value is 720. To find usable values, every area has to have enough space. For example, if the distance is too high, it is not natural or even impossible to return to the main menu. On the other hand, if the distance is too small, other actions can be negatively affected and the user is not able to perform the desired interaction properly. Because of suboptimal system performance, we decided to test only the second distance controller in the proof-of-concept study (5). To improve this pattern, the distance should not be calculated between the central points of the devices (tags); instead it should be calculated between the borders of the devices. Another improvement could be to use the distance in combination with the direction and the speed of a movement of the smartphone.

4.4. MochaTop Application

MochaTop uses different methods to present the content. The main menu and the production chain are sets of images. The images are changed by different angles between tablet and smartphone or by different states resulting from the two vertical slide zones. The time series plot is a large image which is viewed in separate pieces. The connection between the values represented in the plot and the numbers shown on the smartphone would be much more accurate if both visualizations used the same data source. Therefore, the plot should be rendered directly on the tablet and not shown as an image.
This would also decrease the amount of needed memory. The pie charts representing the coffee exports and imports are implemented with Android graphics. The information presented on the smartphone is nested in regular Android layouts.

5. Proof-of-Concept Study

This section describes the methodology used in the user study to evaluate MochaTop; in a second step the results are discussed. The user study aims to explore users' needs and to identify design challenges for the ad-hoc use of multi-device systems. To get broad feedback, we conducted a pre-study with the local focus group "Fairtrade Gothenburg"; with the same group we discussed the working prototype of MochaTop at the end. Considering the technical constraints of the system, we decided to check the feasibility of MochaTop and the underlying system using a semi-controlled user experiment with video and audio analysis as the data collection tool. For this study we recruited the participants from our campus environment.

5.1. Pre-Study

In an early stage we conducted a semi-structured interview with the focus group of "Fairtrade Gothenburg". In this interview we could discover content users would like to explore. This was helpful for developing a working high fidelity prototype with an interesting data set: real and interesting content inspires participants to play with the application freely and to act naturally. Furthermore, we analyzed our low fidelity prototypes and the expectations of the participants. Overall we got very positive feedback from the community. Even though some are not smartphone or tablet users, they liked the idea of inspiring people to think about fair trade by using multiple devices. To speak to a wide audience, they saw a need for statistical data and personal stories. Related to this, the idea of a function to move the focused content between the different-sized screens was mentioned.
This would allow reading longer texts or complex tables on the tablet, even if the application structure would normally show them on the smartphone. Some of them would like to use more than one smartphone to be able to compare more information at once. This indicates a high interest in extending the digital interactive space in a mobile world. Nevertheless, we decided to use only two mobile devices for MochaTop in order to keep the experiment controlled.

5.2. Study design

The proof-of-concept study was designed in an iterative process, in which the study design document was created in several steps (see Appendix A). The study consists of three parts. First, an initial interview was conducted, in which the participants were asked about their personal use of smartphones and tablets, demographic questions and their coffee drinking habits. Afterwards, the participants were invited to explore the system in a semi-structured way: every participant had around 15 minutes to explore MochaTop freely. During this sandbox interaction, we provided encouragement to explore all parts of the application only if we were asked. As the last part, a second interview was conducted, in which the participants were asked about the explored content. Thereby we wanted to see how much the users remembered and understood the content. They were also asked to find the most current price for coffee and, in a second step, to explain the concept of the fair trade price. These tasks require interaction with the time series plot and the slide zone navigation pattern, and the time series plot has to be analyzed. While the user analyzes the plot, the interaction itself is not in the user's focus; this allows gaining deeper insight into the user's mental model of the navigation patterns. To analyze the user interaction, the entire study was recorded on video and audio. The video was recorded by two cameras from different angles: one camera was placed directly over the table, the other camera faced the participants.
The video recording from two angles has several benefits. First, a proper analysis is possible even if a participant obscures the interaction with their body; ambiguous-looking interactions can also be clarified more easily. The recording of the participants from the front allows conclusions about the user experience. The audio recording, made with a conference microphone, is useful to obtain a better understanding of ambiguities while exploring the content. At the same time, the audio recording allows gaining more insight into the users' mental models of the system. These help to identify strengths and weaknesses of the design. There are several motivations for the evaluation methodology we chose. As discussed in 4.1, the tagging of the devices on the MS Surface is quite dependent on the light conditions; in a controlled study environment the light can be adjusted as needed. Moreover, the MS Surface is very sensitive and has to be handled with care, which is easier to realize in a controlled environment. The primary goal of the study was to confirm the feasibility of MochaTop, examine user acceptance and look for natural interaction patterns. Supported by Kjeldskov et al. [14], we saw no benefit in an "in-the-wild" study.

5.3. Participants

We recruited mostly students from our campus environment to participate in our user study. N = 23 participants aged 22–31 (µ = 25.09, x̃ = 25) took part in the study. In our user scenario, we assume multi mobile device systems to be an extension for tablets and smartphones, used by people who carry these devices in their daily life. So we looked for everyday users of mobile technology. Overall, 96% (n = 22) of the participants were regular smartphone users, while 52% (n = 12) reported using a tablet on a regular basis.
We also checked for possible familiarity with the subject matter of MochaTop. We asked the participants about their regular coffee consumption and to explain the concept of Fairtrade. We ranked their familiarity on a scale from 0 to 4 (with 0: participant has never heard the term; 4: participant is able to explain how the fairtrade price works). Detailed demographics are presented in Table 1. Each participant was compensated with a small gift in the form of a USB fan. Sessions were conducted across three days. Additionally, we explored both single-user and collaborative use by inviting solo participants and pairs. In total, 5 pairs and 13 solo users generated 18 sets of video footage. Analyzing pairs using MochaTop has two advantages: on the one hand, we consider mobile tabletop systems well suited for collaborative tasks; on the other hand, the participants talk to each other and explain their thoughts naturally.

Participant age: 22–31 (µ = 25.09, x̃ = 25)
                                  %    n
Male participants                 87   20
Smartphone users                  96   22
Tablet users                      52   12
iOS users                         39    9
Android users                     52   12
Users of other mobile OS          17    4
Coffee drinkers                   78   18
Aware of existence of Fairtrade   74   17

Table 1: Basic information on the participants.

5.4. Results and discussion

Overall, we can conclude that our prototype appealed positively to the participants. More than half of the participants (61%, n = 14) would like to use a comparable system in their daily life. Only two participants were unable to extract any content out of MochaTop. One of those was an Android power user, whom the absence of established Android interaction patterns demotivated. In the following, we describe our key observations about how the participants used the system and potential implications for designing future mobile tabletops. A key outcome of our study is the confirmation of the hypothesis that zone-based input is necessary for a mobile tabletop. In MochaTop, both distance- and zone-based navigation patterns are implemented.
While no user had problems using the distance-controlled navigation patterns, many participants (35%, n = 8) replaced the distance-controlled interaction with zone-based navigation. They placed the smartphone on one area of the table in a single, fast motion, mostly in the corners of the table (see Figure 7 for a detailed explanation). The need for zone-based input became even clearer when observing participants working in pairs. The pairs in our study divided the horizontal space into two zones, in each of which one user interacted with the system. Four of five pairs (80%) exchanged the smartphone at the middle line of the table. If input from the other side of the table was needed, the smartphone was handed over to the other participant.

Figure 7: Many participants (35%, n = 8) replaced the distance-controlled interaction a) with zone-based navigation b), reprinted from [39]

Data structure        Spatial mapping   Performance
Pie chart             Entire surface    96%
Time-series plot      Below tablet      78%
Organization chart    Tablet sides      70%

Table 2: Performance results for three different data structures and navigation patterns. We measured the success rate of participants exploring a given data structure during the study.

This confirms that the territoriality observed for traditional tabletops and reported by Scott et al. [30] is also a key consideration for mobile tabletops. Lastly, qualitative feedback from our participants confirms the need for active zones on the surface. As one participant stated: I like the linear [below tablet] interaction, because I don’t have to go [move the smartphone] around and cover the screen. This opinion indicates that the issue of reach in mobile tabletops requires significant consideration. A qualitative analysis of the behavior of our subjects shows that the PixelSense table did not obstruct the process of exploring the system.
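The zone-based behavior we observed can be thought of as a simple classifier over tracked device positions. The following sketch is our own illustration of that idea, not MochaTop's actual implementation; the zone names, the normalized coordinate system and the margin value are assumptions.

```python
# Hypothetical sketch: classifying a tracked smartphone position into the
# discrete input zones observed in the study (corners, tablet sides, area
# below the tablet). Coordinates are normalized to 0..1 over the surface.

def classify_zone(x, y, surface_w=1.0, surface_h=1.0,
                  tablet_cx=0.5, tablet_cy=0.5, margin=0.15):
    """Map a normalized surface position to a named input zone."""
    near_left, near_right = x < margin, x > surface_w - margin
    near_top, near_bottom = y < margin, y > surface_h - margin
    # Corner zones take priority: participants often snapped the phone
    # into a corner in a single fast motion.
    if near_left and near_top:
        return "corner-top-left"
    if near_right and near_top:
        return "corner-top-right"
    if near_left and near_bottom:
        return "corner-bottom-left"
    if near_right and near_bottom:
        return "corner-bottom-right"
    # Otherwise, use the position relative to the (static) tablet.
    if y > tablet_cy + margin:
        return "below-tablet"      # e.g. the time-series plot zone
    if x < tablet_cx:
        return "tablet-left"
    return "tablet-right"
```

A caller would feed it the phone position reported by the tracking layer, e.g. `classify_zone(0.05, 0.05)` yields `"corner-top-left"`.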
One participant even forgot that only the horizontal display was the active zone and placed the smartphone on the very edge of the table (where its position could not be detected by MochaTop). In the next part we describe our key observations concerning the user experience and their possible implications for designing future mobile tabletops. 5.4.1. Data exploration and providing feedback One contribution of this study is the identification of several challenges for designing future multiple mobile device systems. We evaluated user performance in exploring data structures with respect to different navigation patterns. We verified whether the users were able to access all information during the sandbox-interaction time. Detailed results are presented in Table 2. The highest performance rate was observed when the pie menu was used. Even though some users mentioned that the affordance of the pie menu is too high, its advantage is that the whole surface is used as the input zone. Users often tried random movements to explore the navigation patterns. When the entire surface is an input zone, much more feedback is generated. This feedback gave the participants a faster understanding of the navigation pattern, and the pattern felt more natural. In contrast, interaction zones limited to a set of relative positions are far less natural. We observed that participants enjoyed using the system much more when they received instant feedback on every movement. For designing future multiple device systems, we believe this result indicates a significant challenge. Circular interaction patterns are not suitable for many navigation tasks. Furthermore, circular interaction patterns have the drawbacks of a high affordance and high occlusion while navigating. Without circular navigation patterns, however, the whole surface cannot be used as an input zone. This creates the need to find ways to provide feedback on the location of the interaction zones and the expected movement to the users.
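As a hypothetical illustration of such feedback, the direction from the phone toward the nearest active zone could be mapped to one edge of the device and signaled there (by a vibration motor on that edge, an LED, or an on-screen arrow). The function below is a sketch under our own assumptions and not part of MochaTop:

```python
def feedback_edge(dx, dy):
    """Pick the device edge that points toward the target zone.

    dx, dy: vector from the current phone position to the target zone,
    in screen coordinates (+x right, +y down). The dominant axis of the
    vector decides which edge should signal the user.
    """
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "bottom" if dy > 0 else "top"
```

For example, a target zone to the upper right but mostly above the phone would yield `"top"`, suggesting the phone should be moved upward first.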
The trivial solution would be to display explanations on the screens, but this would distract from the actual task. We think that haptics could be explored as a possible way to facilitate the interaction. For this, a mobile device should provide more than one vibration mode. A suitable solution could be to indicate the direction in which the smartphone should be moved by vibrating on that side of the device. Zhao et al. designed earPod to provide “eyes-free” feedback using audio [42]. Audio feedback, which explains in a few words where the interaction can be performed, could support the understanding of the navigation pattern. Another obvious solution would be to project supporting hints onto the surface. But this would imply that the table is interactive as well, which raises the question of why the tablet and the smartphone are needed at all. For this reason, in our scenario the table is a regular table. 5.4.2. Conditions for new navigation patterns During the user study we identified different constraints for designing multiple device navigation patterns. In the following, we describe constraints concerning the structure of the navigation patterns. Static tablet position In the user study we placed the tablet in the center of the Surface in landscape orientation, to guarantee the maximal interaction space in all directions. All participants felt comfortable with this setting. Even though our navigation patterns use merely the relative positions of the devices, only two participants (9%) moved the tablet around. Both of them tried to discover navigation patterns by moving the tablet over the Surface. Shortly after failing to discover other navigation patterns, they placed the tablet back in the center of the table. No one saw a need to place the tablet somewhere else on the Surface to discover the application. Moreover, no participant picked the tablet up. This confirms the hypothesis of Kurdyukova et al.
[17], that three-dimensional interaction with tablets is not natural. For new navigation patterns, the position of the tablet can be assumed to be static; users will not slide or lift the tablet for navigating. If the tablet has to be moved to perform an action, the design of the system has to make this movement apparent. Spatial awareness in space Designing navigation patterns which use the device orientations on all axes seems promising. Both in the discussions of the low-fidelity prototypes and in the proof-of-concept study, some users saw a benefit in moving the devices in space: 57% (n = 13) of our participants lifted the smartphone up at least once. This was done mostly for reading content on the smartphone or to place the smartphone at another position in a pick-and-drop manner (see 5.4.3). Three-dimensional navigation patterns can thus be useful for navigating through large 3D visualizations or for picking up different pieces of information from the tablet with the smartphone. According to the findings of Kurdyukova et al. [17] and our observations, we see no need to design navigation patterns in which the tablet is moved in space. To design and implement usable navigation patterns which use the three-dimensional space, more research on user intention and proper feedback would be needed. Tracking the device orientation on all axes raises another challenge for the technology behind spatially aware multiple device systems. To achieve three-dimensional navigation patterns, the motion sensors in the smartphone could be used, but the accuracy of the sensors would have to be greatly improved. There are multiple technical solutions which could solve the issue of tracking the devices in other ways, too. Solutions could use e.g. ultrasonic sensing [27] or camera motion tracking. First commercial products for 3D motion capture above smartphones are already available17.
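To illustrate how motion sensor data is typically stabilized, a common generic technique for estimating device tilt is a complementary filter: it fuses the smooth but drifting gyroscope integration with the noisy but drift-free accelerometer estimate. This is textbook sensor-fusion code, not MochaTop's tracking, and the parameter values are assumptions:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a complementary filter for a tilt angle.

    angle       -- previous estimate (degrees)
    gyro_rate   -- angular rate from the gyroscope (degrees/second)
    accel_angle -- tilt derived from the accelerometer (degrees)
    dt          -- time step (seconds)
    alpha       -- weight of the gyro path; (1 - alpha) corrects drift
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Called once per sensor sample, the estimate follows fast rotations via the gyro term while slowly converging to the accelerometer reading, which removes long-term drift.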
Orientation of the Smartphone At the beginning, all participants found the smartphone on the left side of the tablet in portrait orientation. The orientation of the content does not change while using a single navigation pattern in MochaTop. Except for the interaction with the time-series plot below the tablet, all information is shown in portrait orientation on the smartphone. While interacting with the time-series plot, the smartphone is meant to be used in landscape orientation (see fig. 4 and fig. 5). While we could not observe any problem caused by the change of content orientation on the smartphone screen, many participants rotated the smartphone in other situations where no rotation was needed: 65% (n = 15) rotated the smartphone while navigating, in a way that turned the content. This allows two hypotheses:

1. The participants did not focus on the smartphone.
2. It is natural to rotate the smartphone while moving it around the tablet.

We also observed rotation of the smartphone while the participants selected subtopics in the main menu. There, active selection on the smartphone screen is required: all subtopics have to be selected by pressing a button on the touch screen of the smartphone. Furthermore, the second hypothesis is supported by one of the participants: It is annoying that the content on the smartphone is not readable. The smartphone should be aware of my position and rotate the content. Three participants (13%) rotated the smartphone around its own axis, without sliding it on the Surface, to manipulate the view. This gesture seems to be intuitive for different tasks. One participant mapped the smartphone to the pie menu and tried to rotate the pie menu by rotating the smartphone. Another possibility would be to use this gesture to zoom in and out. Future development should account for the high frequency of screen orientation changes of the smartphone.

17http://crunchfish.com/#/document/list
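The auto-rotation this participant asks for could be approximated by counter-rotating the content against the tracked device heading, snapped to the nearest 90° screen orientation. This is our own hypothetical sketch, assuming a user sitting at the tablet's bottom edge and a tracking layer that reports the phone's heading in degrees; MochaTop did not implement it:

```python
def upright_content_rotation(device_heading_deg):
    """Counter-rotate the content so it stays upright for the user.

    device_heading_deg: rotation of the phone on the surface, as reported
    by the tracking layer (0 = phone's top edge facing away from the user).
    Returns the content rotation in {0, 90, 180, 270}, snapped to the
    nearest legal screen orientation.
    """
    return (round(-device_heading_deg / 90.0) * 90) % 360
```

For example, a phone rotated 85° clockwise would show its content rotated by 270°, so the text reads upright again. Values exactly between two orientations (e.g. 45°) would need an explicit tie-breaking rule in a real implementation.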
Even if the user performs unnecessary movements, the content has to be in the right orientation. In contrast, the probability of an orientation change of the tablet is very low, and the tablet orientation can be assumed to be constant during the whole session. This allows calculating the position of the user in a simple way from the orientation of the tablet. Input by device position or touch input A challenging design question is to decide which modality is most suitable for a task. In MochaTop we implemented as much as possible by using the relative position of the devices. “Classical” touch input is only needed to enter a subtopic or to scroll content on the smartphone display. Entering subtopics by pressing a button on the smartphone screen means that only one position gesture is combined with a touch gesture at any point in time; multiple position gestures could be difficult to perform simultaneously. The scrolling gesture on touch displays is quite common and very natural, so there is no reason to replace it with some new gesture. To analyze how strong the influence of known structures on new input methods is, we did not hide the Android navigation bar on the tablet. While no participant had problems performing scrolling and entering a subtopic, nine participants (39%) tried to use touch gestures in situations where no touch gesture is implemented. Only two participants (9%) tried to use the Android navigation bar; both of them tried to replace the return gesture by pressing the return button. This allows two conclusions. First, gestures have to be intuitive and easy to perform; gestures which require high effort will not be used. Second, smartphone and tablet users are familiar with touch gestures and will try to use them in many situations. Multi mobile device systems should therefore provide multiple modalities for interaction. This lowers users' frustration and can decide the success of the system. 5.4.3.
Navigation through large data sets We could clearly observe that there is a demand for extending the input range. Participants often reached the end of the input zone while exploring data using the horizontal slide box as navigation pattern.

Figure 8: Example of a clutching gesture when using two mobile devices. The gesture consists of three steps: (a) moving the phone from left to right, (b) lifting up the phone at the end of the input zone and (c) moving the phone back to the starting point (a). Reprinted from [39]

To shift the data further in the same direction, or to interact in a more comfortable body position, 57% of the participants repositioned the device to continue the motion in the same direction. There is an obvious relation to the desktop computer mouse; we propose to call this phenomenon mobile tabletop clutching. While our prototype uses only the relative position of the devices to manipulate the view, we observed that several subjects needed to perform a clutching motion. In many practical applications the amount of data is too large to be mapped to a relative position. Furthermore, the data set is often unbounded, so no static mapping is possible. The clutching movement is presented in Figure 8. Clutching was primarily observed while the participants explored our time-series plot. This shows the need for providing feedback on the available motion range. We believe that clutching is a natural navigation pattern, so it should be used when designing data navigation on multi-device tabletop systems. It is an essential technique to handle limited interaction space. Besides this, clutching allows the user to interact in a comfortable area of the surface and to prevent awkward positions (which would result in a significant decrease in manipulation accuracy). Overall, clutching is another design challenge for ad-hoc tabletops. 5.4.4.
Physical design aspects of mobile devices Another issue we observed during the study concerns the physical grasping of the mobile devices and how they are handled on the surface. We observed a large set of different approaches to not obscuring the view; Figure 9 presents a selection of them. Not obscuring the screen while the smartphone is slid over the surface seems to be problematic. This is probably due to the novelty of sliding a mobile device over a surface.

Figure 9: Examples of alternative smartphone grips observed during the study (excerpts from actual study footage), reprinted from [39].

For the study we used an HTC One V. This smartphone has an unexpected design advantage: an area below the display that is used neither for input nor output. This area is tilted up and invites users to grasp the smartphone there, which allows moving the smartphone over the surface without too much screen occlusion. However, this grip was not used in the majority of cases. As a consequence, we see a need to support both horizontal and vertical interaction on mobile devices through the physical form of the device. Users must be able to handle the devices efficiently and with minimum screen obstruction. In contrast to pointing devices in the desktop domain, participants moved the smartphone with both hands, alternating. Only one participant (4%) used the left hand only, six participants (26%) used the right hand only, and 16 participants (70%) navigated with both hands. Typically the smartphone was moved with the left hand on the left side of the tablet and with the right hand on the right side. We observed this behavior both when single participants and when pairs used the system. To navigate very precisely, for example through the time-series plot, participants grasped the smartphone on two sides (see fig. 10). This requires a device design which allows grasping the smartphone comfortably with either hand on all sides.
The software design has to be aware of the wide diversity of ways of grasping the device. This has consequences for possible occlusions and for the ergonomic reachability of touch interaction areas. Figure 11 illustrates the problem of occlusion while sliding the smartphone around. 5.4.5. Users learning from data Our last inquiry investigated how well users understand and remember the content presented in MochaTop. We aimed to analyze whether distributed data visualization and physical relations on horizontal surfaces have a positive effect on learning. Initially, our participants' average score on our 0 to 4 Fairtrade knowledge scale was 1.96. After a session with the system, 65% (n = 15) of the participants were able to explain the fair-trade coffee price concept. This result shows that mobile multi device systems have the potential to communicate knowledge and share the capabilities of stationary tabletops confirmed by past research. This indicates that existing tabletop applications can be easily redesigned as mobile tabletop applications for an ad-hoc setting.

Figure 10: Study footage: Participant is grasping the smartphone with both hands to navigate more accurately.

Figure 11: Study footage: By sliding the smartphone around, the tablet screen can be covered.

With this proof-of-concept study we aim to inspire designers to add more meaningfulness to exploring data sets. 6. Conclusion Smartphones and tablets are widespread and many people use devices of both categories every day. The diversity of devices in different sizes indicates a need for both maximal interaction space and maximal portability. We presented a way to extend the interaction space of today's smartphones and tablets by using the physical position of the devices. We designed and implemented a working prototype of a multi mobile device system, called MochaTop. In the user study we identified positive characteristics of multi mobile device systems.
The most important point is the usability of the system: nearly every participant was able to use the position of the devices as an input method. More participants could imagine using a multi mobile device system than currently use tablet computers. This indicates a chance to raise the attractiveness of tablet computers. Nevertheless, one open challenge is to identify real tasks in which a combination of smartphones and tablets can tap its full potential. Arranging physical objects is a fundamental human capability. Systems which allow users to arrange input and output devices freely could, for example, enhance tools for collaboration, entertainment and education. To identify more design constraints for navigation patterns, a deeper understanding of user behavior is needed. Identifying the areas in which the smartphone is placed most often helps to answer multiple design questions, for example at which positions gestures for frequently repeated tasks should be performed, or which areas of the surface will not be used. The results presented in this work, and the recommendations derived from them, are not only interesting for multi mobile device systems. They can also be transferred to mobile tabletops in general, independent of the technology, and to scenarios in which devices from different owners are used together. There are plenty of possible environments in which devices from multiple owners can be used together: guests of coffee bars could bring their smartphones to an interactive table in the coffee bar; pedestrians could interact with a showcase or advertisement screen. Using one stationary and one mobile device would have an advantage over two mobile devices: it lowers the effort of keeping track of two devices when leaving. The development of motion sensing technology and mobile video projectors opens up totally new possibilities for mobile tabletop interaction. Small mobile projectors allow using an output area larger than the device itself.
This offers the chance to redefine the borders between tablets and smartphones. In situations where more space is available, a larger output area can be used; if less space is available, the same device can still be used. Motion sensing frees input methods from two-dimensional gestures and allows integrating all kinds of physical objects into the interaction at the same time. These technologies have the potential to enhance interaction in a mobile environment in a much more general way than a combination of today's smartphones and tablet computers. Nevertheless, designing multi mobile device systems with today's technology is an important step. Multi mobile device systems like MochaTop have multiple advantages. All needed technology is readily available: modern tablets and smartphones include nearly all required hardware, and with Android and iOS new software can be implemented quickly and easily. This allows designing and implementing working prototypes without too much overhead. In comparison to future technologies, the designed systems will have more constraints in terms of user interaction, but all identified challenges will remain topics for future mobile interactive tabletops. To start a creative and open design challenge, our framework will be published as open source. We want to inspire the community to build new applications and prototypes for a wide variety of tasks.

References

[1] R. Ballagas, M. Rohs, J. G. Sheridan, and J. Borchers. Byod: Bring your own device. In Proceedings of the Workshop on Ubiquitous Display Environments, Ubicomp, 2004.

[2] J. Bardram and A. Friday. Ubiquitous computing systems. In J. Krumm, editor, Ubiquitous Computing Fundamentals, pages 37–94. CRC Press, 2009.

[3] A. Butler, S. Izadi, and S. Hodges. Sidesight: multi-"touch" interaction around small devices. In Proceedings of the 21st annual ACM symposium on User interface software and technology, UIST ’08, pages 201–204, New York, NY, USA, 2008. ACM.
[4] J. Callahan, D. Hopkins, M. Weiser, and B. Shneiderman. An empirical comparison of pie vs. linear menus. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’88, pages 95–100, New York, NY, USA, 1988. ACM.

[5] K. Cheng, J. Li, and C. Müller-Tomfelde. Supporting interaction and collaboration on large displays using tablet devices. In Proceedings of the International Working Conference on Advanced Visual Interfaces, AVI ’12, pages 774–775, New York, NY, USA, 2012. ACM.

[6] J. Chhabra, J. Singh, and D. Serrano-Baquero. Abstracting nutritional information of food service facilities using the pervasive healthy diet adviser. In Proceedings of the 6th international conference on Smart Homes and Health Telematics, ICOST ’08, pages 113–122, Berlin, Heidelberg, 2008. Springer-Verlag.

[7] P. Dillenbourg and M. Evans. Interactive tabletops in education. International Journal of Computer-Supported Collaborative Learning, 6:491–514, 2011.

[8] M. Fjeld, J. Fredriksson, M. Ejdestig, F. Duca, K. Bötschi, B. Voegtli, and P. Juchli. Tangible user interface for chemistry education: comparative evaluation and redesign. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’07, pages 805–808, New York, NY, USA, 2007. ACM.

[9] M. Fjeld, F. J. Harand, and A. S. Pour. Tangible myplate, tangible interaction. http://www.ixdcth.se/courses/2012/ciu180/news/40; accessed on 31 March 2013 (18:00), November 2012.

[10] T. Gläser, J. Franke, G. Wintergerst, and R. Jagodzinski. Chemieraum: tangible chemistry in exhibition space. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, TEI ’09, pages 285–288, New York, NY, USA, 2009. ACM.

[11] K. Hinckley, M. Dixon, R. Sarin, F. Guimbretiere, and R. Balakrishnan. Codex: a dual screen tablet computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, pages 1933–1942, New York, NY, USA, 2009. ACM.

[12] U.
Hinrichs and S. Carpendale. Gestures in the wild: studying multi-touch gesture sequences on interactive tabletop exhibits. In Proc. CHI ’11, pages 3023–3032, New York, NY, USA, 2011. ACM.

[13] D. F. Keefe, A. Gupta, D. Feldman, J. V. Carlis, S. K. Keefe, and T. J. Griffin. Scaling up multi-touch selection and querying: Interfaces and applications for combining mobile multi-touch input with large-scale visualization displays. International Journal of Human-Computer Studies, 70(10):703–713, 2012. Special issue on Developing, Evaluating and Deploying Multi-touch Systems.

[14] J. Kjeldskov, M. Skov, B. Als, and R. Høegh. Is it worth the hassle? exploring the added value of evaluating the usability of context-aware mobile systems in the field. In S. Brewster and M. Dunlop, editors, Mobile Human-Computer Interaction - MobileHCI 2004, volume 3160 of Lecture Notes in Computer Science, pages 61–73. Springer Berlin Heidelberg, 2004.

[15] G. E. Krasner, S. T. Pope, et al. A description of the model-view-controller user interface paradigm in the smalltalk-80 system. Journal of Object Oriented Programming, 1(3):26–49, 1988.

[16] S. Kratz and M. Rohs. Hoverflow: expanding the design space of around-device interaction. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI ’09, pages 4:1–4:8, New York, NY, USA, 2009. ACM.

[17] E. Kurdyukova, M. Redlin, and E. André. Studying user-defined ipad gestures for interaction in multi-display environment. In Proceedings of the 2012 ACM international conference on Intelligent User Interfaces, IUI ’12, pages 93–96, New York, NY, USA, 2012. ACM.

[18] R. L. Mandryk, R. L. M, K. M. Inkpen, and E. Lab. Exploring a new interaction paradigm for collaborating on handheld computers. In Proceedings of CHI 2001 (submitted), 2001.

[19] W. McGrath, B. Bowman, D. McCallum, J. D. Hincapié-Ramos, N. Elmqvist, and P. Irani.
Branch-explore-merge: facilitating real-time revision control in collaborative visual exploration. In Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces, ITS ’12, pages 235–244, New York, NY, USA, 2012. ACM.

[20] D. Merrill, J. Kalanithi, and P. Maes. Siftables: towards sensor network user interfaces. In Proceedings of the 1st international conference on Tangible and embedded interaction, TEI ’07, pages 75–78, New York, NY, USA, 2007. ACM.

[21] D. Merrill, E. Sun, and J. Kalanithi. Sifteo cubes. In Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts, CHI EA ’12, pages 1015–1018, New York, NY, USA, 2012. ACM.

[22] A. Mitchell, T. Rosenstiel, and L. Christian. Mobile devices and news consumption: Some good signs for journalism. http://stateofthemedia.org/2012/mobile-devices-and-news-consumption-some-good-signs-for-journalism/; accessed on 30 March 2013 (12:00), 2012.

[23] C. Muller-Tomfelde and M. Fjeld. Tabletops: Interactive horizontal displays for ubiquitous computing. Computer, 45(2):78–81, 2012.

[24] T. Piazza, S. Zhao, G. Ramos, A. E. Yantc, and M. Fjeld. Dynamic duo: Exploring phone-tablet combinations for mobile usage. In Proceedings of the sixth international conference on Tangible, embedded, and embodied interaction, TEI ’12 WIP, Barcelona, Spain, 2013. ACM.

[25] A. M. Piper and J. D. Hollan. Tabletop displays for small group study: affordances of paper and digital materials. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, pages 1227–1236, New York, NY, USA, 2009. ACM.

[26] L. Rainie. 25% of american adults own tablet computers. http://pewinternet.org/Reports/2012/Tablet-Ownership-August-2012.aspx; accessed on 30 March 2013 (12:25), October 2012.

[27] B. Raj, K. Kalgaonkar, C. Harrison, and P. Dietz. Ultrasonic doppler sensing in hci. Pervasive Computing, IEEE, 11(2):24–29, Feb.

[28] A.
Schmidt, B. Pfleging, F. Alt, A. Sahami, and G. Fitzpatrick. Interacting with 21st-century computers. Pervasive Computing, IEEE, 11(1):22–31, January-March 2012.

[29] B. Schneider, M. Strait, L. Muller, S. Elfenbein, O. Shaer, and C. Shen. Phylo-genie: engaging students in collaborative ’tree-thinking’ through tabletop techniques. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems, CHI ’12, pages 3071–3080, New York, NY, USA, 2012. ACM.

[30] S. D. Scott, M. Sheelagh, T. Carpendale, and K. M. Inkpen. Territoriality in collaborative tabletop workspaces. In Proc. CSCW ’04, pages 294–303, New York, NY, USA, 2004. ACM.

[31] O. Shaer, A. Mazalek, B. Ullmer, and M. Konkel. From big data to insights: Opportunities and challenges for tei in genomics. In Proceedings of the sixth international conference on Tangible, embedded, and embodied interaction, Proc. TEI ’12, Barcelona, Spain, 2013. ACM.

[32] O. Shaer, M. Strait, C. Valdes, T. Feng, M. Lintz, and H. Wang. Enhancing genomic learning through tabletop interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, pages 2817–2826, New York, NY, USA, 2011. ACM.

[33] B. Shneiderman. The eyes have it: A task by data type taxonomy for information visualizations. In Proceedings of the 1996 IEEE Symposium on Visual Languages, VL ’96, pages 336–, Washington, DC, USA, 1996. IEEE Computer Society.

[34] M. Spindler, M. Martsch, and R. Dachselt. Going beyond the surface: studying multi-layer interaction above the tabletop. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems, CHI ’12, pages 1277–1286, New York, NY, USA, 2012. ACM.

[35] M. Spindler, C. Tominski, H. Schumann, and R. Dachselt. Tangible views for information visualization. In ACM International Conference on Interactive Tabletops and Surfaces, ITS ’10, pages 157–166, New York, NY, USA, 2010. ACM.

[36] M. Thylmann, B. Klusmann, M. Konarski, D. Kempf, and B.
Rohleder. Fast 40 prozent haben ein smartphone. http://www.bitkom.org/files/documents/BITKOM_Presseinfo_Smartphone-Verbreitung_03_10_2012.pdf; accessed on 30 March 2013 (12:20), October 2012.

[37] S. Weise, J. Hardy, P. Agarwal, P. Coulton, A. Friday, and M. Chiasson. Democratizing ubiquitous computing: a right for locality. In Proc. Ubicomp 2012, UbiComp ’12, pages 521–530, New York, NY, USA, 2012. ACM.

[38] M. Weiser. The computer for the 21st century. SIGMOBILE Mob. Comput. Commun. Rev., 3(3):3–11, July 1999.

[39] P. Woźniak, L. Lischke, S. Zhao, A. Yantaç, and M. Fjeld. Mochatop: Exploring ad-hoc interactions in mobile tabletops. Submitted to UbiComp ’13. ACM.

[40] A. Wu, J.-B. Yim, E. Caspary, A. Mazalek, S. Chandrasekharan, and N. J. Nersessian. Kinesthetic pathways: a tabletop visualization to support discovery in systems biology. In Proceedings of the 8th ACM conference on Creativity and cognition, C&C ’11, pages 21–30, New York, NY, USA, 2011. ACM.

[41] X.-D. Yang, E. Mak, D. McCallum, P. Irani, X. Cao, and S. Izadi. Lensmouse: augmenting the mouse with an interactive touch display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, pages 2431–2440, New York, NY, USA, 2010. ACM.

[42] S. Zhao, P. Dragicevic, M. Chignell, R. Balakrishnan, and P. Baudisch. Earpod: eyes-free menu selection using touch input and reactive audio feedback. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’07, pages 1395–1404, New York, NY, USA, 2007. ACM.

Declaration

I hereby declare that the work presented in this thesis is entirely my own and that I did not use any other sources and references than the listed ones. I have marked all direct or indirect statements from other sources contained therein as quotations. Neither this work nor significant parts of it were part of another examination procedure. I have not published this work in whole or in part before.
The electronic copy is consistent with all submitted copies.

place, date, signature

7. Appendix

A. Study design document

A.1. Research question

The purpose of this proof-of-concept study is to demonstrate the usability of an application which uses one smartphone and one tablet computer as input and output devices, in the following called DynamicDuo. In particular we are focusing on the following questions:

• How intuitive is it to use a system consisting of a smartphone and a tablet?
• Does DynamicDuo provide good possibilities to gain insights into data sets in a mobile setting?
• Which gestures and movements do the users perform to interact with DynamicDuo?

A.2. Participant profile

The participant group will consist of approx. 10 persons. The participants will probably be mostly master and PhD students of Chalmers University of Technology.

A.3. Compensation plan

The participants will be presented with an iTunes gift card or something coffee-related (Chalmers cup, gift card from a coffee house, ...). It is not finally decided yet.

A.4. Methodology

As the main part of the study, each participant is asked to use DynamicDuo and the MochaTop app to learn about coffee and Fairtrade. This interaction will be recorded by a video camera, and the position of the devices will be tracked by the MS Surface2 (PixelSense). Afterwards every participant will be asked some general questions and about their experience.

A.5. Timeline

During one week all participants will be asked to take part in the study. All participants have to fulfill the same task:

• Study introduction (5 min)
• System introduction (5 min)
• User interaction (20 min)
• Final interview (10 min)

A.6. Data to collect

During the user interaction, the interaction will be recorded on video from two angles and the devices will be tracked by the Surface. One camera will be placed directly above the table. The second one will be placed closer to the perspective of the user's view.
The following questions will be asked in the interview (questionnaire):

• Personal information:
  – Age
  – Sex
  – Use of smartphones
    ∗ Smartphone platform
  – Use of tablet computers
    ∗ Tablet computer platform
  – Interest in coffee and Fairtrade
• What is the most important fact you learned from MochaTop?
• Questions about some facts presented in MochaTop
• Would you use DynamicDuo for ...?

A.7. Data analysis

The collected video data will be analyzed with ELAN18. The aim is to classify the movements and gestures of the users by the movements themselves, by the space needed and by the user's intention.

18http://tla.mpi.nl/tools/tla-tools/elan/

B. MochaTop mockups

Figure 12: Main screen; the smartphone can be moved around the tablet. Subsections can be entered by pressing the button on the smartphone screen.

Figure 13: Visualization of the coffee production chain. By sliding the smartphone next to a production step, a description is shown on the smartphone.

Figure 14: Coffee producing countries are visualized as a pie chart. By moving the smartphone around, information about the country is displayed.

Figure 15: In a time-series plot the coffee price is shown. By sliding the smartphone next to the plot, the price is displayed as a number.