Reimagining graphical user interface ecologies

Bibliographic Details
Main Author: Block, Florian
Published: Lancaster University 2010
Online Access: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.551629
Description
Summary: Graphical User Interfaces (GUIs) have proven to work effectively in practice, but they still suffer from a lack of power and expressiveness: simple tasks are often laborious to carry out with a mouse, there is never enough display space available to make commands easy to reach, and customization and tailoring are difficult to accomplish. In this thesis we propose to overcome these limitations by reimagining graphical user interfaces. The aim is to preserve all established benefits of existing GUIs while seamlessly integrating novel forms of interaction. Our contribution consists of three case studies.

The first case study is concerned with physical pen-and-paper customization of GUIs on large interactive surfaces. We introduce two novel techniques: the live sketching of free-form controls, and the configuration of new controls via handwritten annotations. Our studies show that handwritten annotations are more efficient for configuring physical interfaces than list-based configuration, and that free-form sketched controls can be created effectively and used as efficiently as tangible controls and keyboard interfaces.

The second case study explores two-handed interaction with touchpad and mouse on notebooks, a configuration that we show commonly exists in practice. We introduce a series of novel interaction techniques that use the touchpad as an independent input device alongside the mouse. Our studies show that navigating documents with the touchpad in the non-dominant hand is instantly as efficient as conventional methods and shows potential to improve with further practice; that flick gestures and inertia introduce navigation overhead; and that absolute mappings are initially less efficient but show steeper learning curves, while adding a metallic token can reduce jumps but produces more overshoots.

The third case study revolves around keyboards extended with touch-sensing and display capability, allowing them to blend seamlessly with the graphical input and output space surrounding them. We introduce a series of novel interaction techniques that push the boundaries of conventional mouse-keyboard-display configurations. Our studies show that, using display-enabled keyboards, users can instantly invoke commands faster than with the mouse or conventional keyboards, and that touch-enabled keyboards can enforce the tactile acquisition of hotkeys, allowing users to rest their hands and keep their focus on the primary screen.

We conclude by envisioning a GUI ecology that seamlessly integrates the interaction spaces from all three case studies.
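To illustrate the distinction between relative and absolute mappings mentioned in the second case study, the sketch below contrasts a delta-based (relative) and a position-based (absolute) mapping of touchpad input onto a document scroll position. It is not taken from the thesis; the function names, gain factor, normalised touchpad coordinates, and document length are assumptions made purely for illustration.

```python
# Minimal sketch (not from the thesis): relative vs. absolute mapping of
# touchpad input onto a document scroll position. Touchpad y-coordinates
# are assumed to be normalised to [0, 1]; gain and document length are arbitrary.

DOCUMENT_LENGTH = 10_000.0  # scrollable range in pixels (assumed)

def relative_scroll(current_pos: float, delta_y: float, gain: float = 800.0) -> float:
    """Relative mapping: finger movement is scaled and added to the current position."""
    return min(max(current_pos + delta_y * gain, 0.0), DOCUMENT_LENGTH)

def absolute_scroll(touch_y: float) -> float:
    """Absolute mapping: the touch location itself selects the position, so the
    top of the touchpad always corresponds to the top of the document."""
    return min(max(touch_y, 0.0), 1.0) * DOCUMENT_LENGTH

if __name__ == "__main__":
    pos = 2_000.0
    pos = relative_scroll(pos, delta_y=0.25)            # a small swipe scrolls a bit further
    print(f"relative mapping -> {pos:.0f}")
    print(f"absolute mapping -> {absolute_scroll(0.75):.0f}")  # touching at 3/4 jumps there
```

Under a relative mapping the same swipe always moves the view by the same amount, whereas under an absolute mapping the user must learn where document locations lie on the touchpad, which is one plausible reading of why absolute mappings start out less efficient but show steeper learning curves.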