OPUS - Online Publications of University Stuttgart
Browsing by Author "Lischke, Lars"

Now showing 1 - 6 of 6
  • Improving the effectiveness of interactive data analytics with phone-tablet combinations (Open Access)
    (2013) Lischke, Lars
    Smartphones and tablet computers are ubiquitous in daily life, and many people carry both with them simultaneously. The multiplicity of differently sized devices reflects the conflict between a maximal interaction space and minimal device bulkiness. In this dissertation we extend the interaction space of mobile devices by adding mutual spatial awareness to ordinary devices. By combining multiple mobile devices and using relative device placement as an additional input source, we designed a mobile tabletop system for ad-hoc collaboration. With this setting we aimed to emulate the concept of the so-called interactive tablecloth, which envisages that every tabletop surface will become an interactive surface. To evaluate the concept we designed and implemented a working prototype called MochaTop. To provide the mutual spatial awareness we placed the mobile devices on an interactive table; in the future, we believe the interactive table could be replaced by technology integrated into the mobile devices themselves. In this study we used one Android smartphone and one Android tablet as mobile devices and tracked their positions with a Microsoft Surface 2 (SUR40). The system is designed for exploring multimedia information and visual data representations by manipulating the position of two mobile devices on a horizontal surface. We present possible use cases and environments, and in a second step we discuss multiple low-fidelity prototypes whose results fed into the development of MochaTop. MochaTop is designed as an example application for exploring digital information; to avoid influencing participants too strongly through the content, we chose a common topic to present: coffee production and trade. We present the implementation of MochaTop and a user study with 23 participants. Overall, we were able to spark the participants' interest in future systems and to show that the system supports knowledge transfer. Furthermore, we identified design challenges for the future development of mobile tabletops, mostly concerning input feedback, interaction zones, and three-dimensional input.
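The central input primitive in this abstract is the relative placement of two tracked devices on a tabletop. As a rough illustration of that idea, the sketch below derives distance and orientation between two tracked devices and maps them to a coarse interaction state; the class names, thresholds, and state labels are assumptions for illustration, not taken from the MochaTop implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class TrackedDevice:
    """Position (in cm) and heading (in degrees) reported by the tabletop tracker."""
    x: float
    y: float
    heading: float

def relative_placement(phone: TrackedDevice, tablet: TrackedDevice) -> dict:
    """Derive distance and relative angle between two devices on the table.

    Purely illustrative: thresholds and state names are made up,
    not taken from MochaTop.
    """
    dx, dy = tablet.x - phone.x, tablet.y - phone.y
    distance = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx)) % 360.0

    if distance < 5.0:
        state = "stacked"        # devices touch: e.g. show an overview
    elif distance < 30.0:
        state = "paired"         # close together: show linked detail views
    else:
        state = "independent"    # far apart: each device acts on its own
    return {"distance_cm": distance, "angle_deg": angle, "state": state}

if __name__ == "__main__":
    phone = TrackedDevice(x=10.0, y=20.0, heading=0.0)
    tablet = TrackedDevice(x=32.0, y=28.0, heading=90.0)
    print(relative_placement(phone, tablet))
```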
  • Interacting with large high-resolution display workplaces (Open Access)
    (2018) Lischke, Lars; Schmidt, Albrecht (Prof.)
    Large visual spaces provide a unique opportunity to communicate large and complex pieces of information; hence, they have been used for hundreds of years for varied content including maps, public notifications, and artwork. Understanding and evaluating complex information will become a fundamental part of any office work. Large high-resolution displays (LHRDs) have the potential to further enhance the traditional advantages of large visual spaces and combine them with modern computing technology, thus becoming an essential tool for understanding and communicating data in future office environments. For successful deployment of LHRDs in office environments, well-suited interaction concepts are required. In this thesis, we build an understanding of how concepts for interaction with LHRDs in office environments could be designed. From the human-computer interaction (HCI) perspective, three aspects are fundamental: (1) the way humans perceive and react to large visual spaces is essential for interaction with content displayed on LHRDs; (2) LHRDs require adequate input techniques; and (3) the actual content requires well-designed graphical user interfaces (GUIs) and suitable input techniques. Perception influences how users can perform input on LHRD setups, which sets boundaries for the design of GUIs for LHRDs. Furthermore, the input technique has to be reflected in the design of the GUI.

    To understand how humans perceive and react to large visual information on LHRDs, we have focused on the influence of visual resolution and physical space. We show that increased visual resolution affects the perceived media quality and the perceived effort, and that humans can overview large visual spaces without being overwhelmed. When the display is wider than 2 m, users perceive higher physical effort. When multiple users share an LHRD, they change their movement behavior depending on whether a task is collaborative or competitive. When building LHRDs, consideration must be given to the increased complexity of higher resolutions and physically large displays: lower screen resolutions provide enough display quality to work efficiently, while larger physical spaces enable users to overview more content without being overwhelmed.

    To enhance user input on LHRDs for interacting with large pieces of information, we built working prototypes and analyzed their performance in controlled lab studies. We showed that eye-tracking-based manual and gaze input cascaded (MAGIC) pointing can enhance pointing to distant targets; MAGIC pointing is particularly beneficial when the interaction involves visual searches between pointing to targets. We contributed two gesture sets for mid-air interaction with window managers on LHRDs and found that gesture elicitation for an LHRD was not affected by legacy bias. For collaborative data exploration on an LHRD, we compared input via personal tablets, which also functioned as private working spaces, with sharing a single input device; the results showed that input with personal tablets lowered the perceived workload. Finally, we showed that variable movement resistance feedback enhanced one-dimensional data input when no visual input feedback was provided. We concluded that context-aware input techniques enhance the interaction with content displayed on an LHRD, so it is essential to provide focus on the visual content and guidance for the user while performing input.
    To understand user expectations of working with LHRDs, we prototyped with potential users how an LHRD work environment could be designed, focusing on the physical screen alignment and the placement of content on the display. Building on previous work, we implemented novel alignment techniques for window management on LHRDs and compared them in a user study. The results show that users prefer techniques that enhance the interaction without breaking well-known desktop GUI concepts. Finally, we provided an example of how an application for browsing scientific publications can benefit from extended display space. Overall, we show that GUIs for LHRDs should support the user more strongly than GUIs for smaller displays in arranging content meaningfully and in managing and understanding large data sets, without breaking well-known GUI metaphors.

    In conclusion, this thesis adopts a holistic approach to interaction with LHRDs in office environments. Based on enhanced knowledge about user perception of large visual spaces, we discuss novel input techniques for advanced user input on LHRDs. Furthermore, we present guidelines for designing future GUIs for LHRDs. Our work charts the design space of LHRD workplaces and identifies challenges and opportunities for the development of future office environments.
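One concrete building block mentioned above is alignment support for window management on very wide displays. The following sketch shows a generic edge-snapping routine as one possible form of such support; it is a minimal illustration under an assumed snap threshold and a simplified window model, not the alignment techniques evaluated in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Window:
    x: int   # left edge in pixels
    y: int   # top edge in pixels
    w: int
    h: int

def snap_to_neighbors(moving: Window, others: list[Window],
                      threshold: int = 24) -> Window:
    """Snap the moving window's edges to nearby edges of other windows.

    Illustrative only: real alignment techniques for LHRDs (e.g. grid- or
    region-based placement) are more involved than plain edge snapping.
    """
    x, y = moving.x, moving.y
    for other in others:
        # Horizontal snapping: left edge to the other window's right or left edge.
        for target in (other.x + other.w, other.x):
            if abs(moving.x - target) <= threshold:
                x = target
        # Vertical snapping: top edge to the other window's bottom or top edge.
        for target in (other.y + other.h, other.y):
            if abs(moving.y - target) <= threshold:
                y = target
    return Window(x=x, y=y, w=moving.w, h=moving.h)

if __name__ == "__main__":
    fixed = Window(x=0, y=0, w=1920, h=1080)
    dragged = Window(x=1930, y=12, w=1280, h=720)   # dropped near the fixed window
    print(snap_to_neighbors(dragged, [fixed]))      # snaps to x=1920, y=0
```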
  • Interaction techniques for wall-sized screens (Open Access)
    (2015) Lischke, Lars; Grüninger, Jürgen; Klouche, Khalil; Schmidt, Albrecht; Slusallek, Philipp; Jacucci, Giulio
    Large screen displays are part of many future visions, such as i-LAND, which describes a possible workspace of the future. Research has shown that wall-sized screens provide clear benefits for data exploration, collaboration, and organizing work in office environments. With increasing computational power and falling display prices, wall-sized screens are currently moving out of research labs and specific settings into office environments and private life. Today, there is no standard set of techniques for interacting with wall-sized displays, and it is even unclear whether a single mode of input is suitable for all potential applications. In this workshop, we will bring together researchers from academia and industry who work on large screens. Together, we will survey current research directions, review promising interaction techniques, and identify the underlying fundamental research challenges.
  • Mid-air gestures for window management on large displays (Open Access)
    (2015) Lischke, Lars; Knierim, Pascal; Klinke, Hermann
    We can observe a continuous trend toward larger screens with higher resolutions and greater pixel density. With advances in hardware and software technology, wall-sized displays for daily office work are already on the horizon. We assume that there will be no hard paradigm change in interaction techniques in the near future; therefore, new concepts for wall-sized displays will be included in existing products. Designing interaction concepts for wall-sized displays in an office environment is a challenging task, and most crucial is designing appropriate input techniques. Moving the mouse pointer from one corner to another over such a long distance is cumbersome, yet pointing with a mouse is precise and commonplace. We propose using mid-air gestures to support input with mouse and keyboard on large displays. In particular, we designed a gesture set for manipulating regular windows.
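To make the combination of mid-air gestures with a conventional window manager more concrete, the sketch below dispatches recognized gesture labels to window commands. The gesture names and actions are invented for illustration and are not the gesture set contributed by this work; in practice the actions would call into the platform's window-management API rather than print.

```python
from typing import Callable

# Hypothetical window-manager actions (placeholders that only print).
def maximize(window_id: str) -> None:
    print(f"maximize {window_id}")

def snap_left(window_id: str) -> None:
    print(f"snap {window_id} to the left half")

def snap_right(window_id: str) -> None:
    print(f"snap {window_id} to the right half")

def minimize(window_id: str) -> None:
    print(f"minimize {window_id}")

# Mapping from recognized gesture labels to window commands.
# The labels are assumptions for illustration, not the published gesture set.
GESTURE_COMMANDS: dict[str, Callable[[str], None]] = {
    "push_forward": maximize,
    "swipe_left": snap_left,
    "swipe_right": snap_right,
    "pull_down": minimize,
}

def on_gesture(label: str, focused_window: str) -> None:
    """Dispatch a recognized gesture to the corresponding window command."""
    action = GESTURE_COMMANDS.get(label)
    if action is not None:
        action(focused_window)

if __name__ == "__main__":
    on_gesture("swipe_left", "browser")   # -> snap browser to the left half
```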
  • Parallel exhibitions: empowering users to virtually and physically design customized museum exhibits (Open Access)
    (2014) Lischke, Lars
    Digital content is ubiquitous in all parts of life today. In particular, Web 2.0 technology has changed the way we communicate: it allows everybody to contribute digital content and to reach a large audience. The possibility to contribute also affects the desire to contribute to "real world" matters. At the same time, an incredible amount of information is accessible online without any effort. In many cases this enables us to find specific information quickly and without leaving our current location. This forces public knowledge places, like libraries or museums, to rethink their role as knowledge providers. These institutions have to become places of social interaction which provide meaningful collections of objects and information as well as space for creativity. Visiting a museum is a great experience. Seeing objects that have texture and physical characteristics, combined with the history and the story of the exhibit, is an adventure and fosters engagement with a topic. Museums store many more objects than they can present; these objects are not accessible to the public and sometimes not even for research purposes. It is a challenging task for curators and museum professionals to select objects for a meaningful and appealing arrangement. Re-creating and re-arranging exhibits in museums is mostly prohibited for visitors, because the exhibits shown are often one of a kind, expensive, or easily damaged. During the last decade, museums have built large databases to index their objects. In Parallel Exhibitions we make use of these databases to invite visitors to become co-curators in museums. We designed and implemented an application which allows museum visitors to contribute to the exhibition design. Curators can additionally include physical exhibits in the virtual interaction space to create a close relationship to other exhibits in the museum. To evaluate our concept and our application we conducted a field test in a museum as well as an online study, and we interviewed possible users and museum professionals. We observed rich social interaction around our application in the field study, and the studies confirm that visitors are interested in contributing to the exhibitions they are visiting, both locally and on social media.
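As a rough sketch of the co-curation idea, the following hypothetical data model shows how a visitor-created exhibition could be assembled from a museum's object database; the class and field names are assumptions for illustration, not the actual Parallel Exhibitions schema.

```python
from dataclasses import dataclass, field

@dataclass
class MuseumObject:
    """One indexed object from the museum's database."""
    object_id: str
    title: str
    on_display: bool   # physical exhibits can be referenced as anchors

@dataclass
class VisitorExhibition:
    """An exhibition arrangement proposed by a visitor acting as co-curator."""
    curator: str
    title: str
    items: list[str] = field(default_factory=list)   # ordered object ids

    def add(self, obj: MuseumObject) -> None:
        """Add an object to the arrangement, keeping the selection free of duplicates."""
        if obj.object_id not in self.items:
            self.items.append(obj.object_id)

if __name__ == "__main__":
    grinder = MuseumObject("obj-001", "Hand coffee grinder", on_display=True)
    exhibition = VisitorExhibition(curator="visitor-42", title="Everyday objects")
    exhibition.add(grinder)
    print(exhibition)
```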
  • Pervasive interaction across displays (Open Access)
    (2015) Lischke, Lars; Weber, Dominik; Greenwald, Scott
    Digital screens are becoming more and more ubiquitous. Resolution and size are increasing and, at the same time, prices for displays are falling. Large display installations are increasingly appearing in public spaces as well as in home and office environments. We expect this trend to continue, making wall-sized displays commonplace in the next decade. With this development, all three classes of devices described by Mark Weiser (pads, tabs, and boards) will be mainstream. Pads (tablets), tabs (smartphones), and boards (displays) let us show and interact with data in different situations, because each device class is optimized for a certain use case. Consequently, the use of multiple devices is becoming common; for example, using a second screen while watching TV is becoming the norm. However, the use of multiple devices requires seamless transitions between devices, mechanisms for exchanging data, and the ability to move content from one device to another and to remotely access or control the data. Back in 1998, Michael Beigl and his colleagues proposed dynamically and automatically distributing Web-based content to different output devices in a smart environment. A few years later, Roy Want and his colleagues suggested using interfaces in our environment to interact with our personal data. Because mobile devices or notebooks often provide only a small screen for output and limited input techniques, they proposed using office screens or public displays to create a more enjoyable user experience. They also argued for having physical access to private data. These examples highlight that research in ubiquitous computing explored interaction across pervasive devices, displays, and content early on. Current products support both visions. On the one hand, there are devices that provide options to present remote data on a screen in the environment, with the control residing on the mobile device. On the other hand, there are means to easily present content from mobile devices on remote displays. There are now also many cloud-based products for interacting with data on multiple devices. For example, Dropbox provides access to text documents and images across devices, and Spotify lets you enjoy your favorite music on smartphones, tablets, notebooks, and music systems. Furthermore, people are starting to use mobile devices as remote controls for large screens, smart TVs, or music systems. All these examples show that streaming and connecting different devices ubiquitously are key technologies for smart environments. Here, we present a few commercially available technologies supporting this and provide an outlook on how displays might become a service themselves.
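The article's core observation is that streaming and connecting devices is a key enabling technology for such environments. The toy example below sends a minimal handoff message from a mobile device to a wall display so the display can continue showing a piece of content; the host name, port, and message format are assumptions, and discovery, authentication, and media control are deliberately omitted. It is a generic sketch, not one of the commercial protocols mentioned in the article.

```python
import json
import socket

def hand_off_content(display_host: str, display_port: int,
                     content_url: str, position_s: float) -> None:
    """Send a minimal 'continue showing this content' message to a remote display.

    Illustrative only: real systems (e.g. casting protocols) also handle
    discovery, authentication, and playback control.
    """
    message = {
        "type": "handoff",
        "url": content_url,        # where the display can fetch the content
        "position_s": position_s,  # resume playback/viewing at this offset
    }
    with socket.create_connection((display_host, display_port), timeout=2.0) as sock:
        sock.sendall(json.dumps(message).encode("utf-8") + b"\n")

# Example call (assumes a hypothetical display service listening on the local network):
# hand_off_content("wall-display.local", 9000, "https://example.org/talk.mp4", 73.5)
```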