Implicit Interaction: Creating a new interface model for the Internet of Things

Kristina Höök, KTH

implicit [ɪmˈplɪs.ɪt] adjective
1. suggested without being directly expressed
2. forming part of something (although perhaps not directly expressed)

Smart materials and related autonomous technologies offer the potential to automate and hide much of the tedium of our everyday lives: logistics, transportation, electricity consumption in our homes, connectivity, or the management of autonomous systems such as robot vacuum cleaners. Combined with the growth in ubiquitous- and IoT-based systems, there is now an opportunity to make significant improvements in how technology benefits everyday life.

Yet existing systems are beset with manifest human interaction problems (Harper 2006, Taylor, Harper et al. 2007). The fridge warns you with a beep if you leave the door open, the washing machine signals when it is finished, and even chainsaws now warn you when you have been using them for too long. Each individual system has been designed with a particular, limited interaction model: the smart lighting system in your apartment has not been designed for the sharing economy, and the lawn mower robot might run off and leave your garden. Different parts of your entertainment system turn the volume up and down and fail to work together. Each smart object comes with its own form of interaction, its own mobile app, its own upgrade requirements and its own manner of calling for users' attention. Interaction models have been inherited from the desktop metaphor, and mobile apps often rely on non-standardised icons, sounds and notification frameworks. When put together, the current forms of smart technology do not blend, they cannot interface with one another, and, most importantly, as end-users we have to learn how to interact with them each time, one by one. In some senses this is like personal computing before the desktop metaphor, the Internet before the web, or mobile computing before touch interfaces. In short, IoT lacks its killer interface paradigm.

Indeed, the need for a novel interface paradigm is a known research challenge in human-computer interaction and smart computing. It is also of key strategic importance to Swedish as well as international consumer-facing companies in enabling a thriving market of compelling Internet of Things products. Here we propose a solution which addresses the "Artificial Intelligence Based Information Systems" part of the SSF call, in particular the need for "[..] more intuitive, cooperation between computer systems and humans".

Building new forms of Smart technology: Implicit Interaction

What is needed is not a new vision of the smart home or the smart city; we need environments that make us smart (Taylor, Harper et al. 2007). We need a way to design smart objects such that they continuously adapt to us, and we to them. This project is built around developing a new interface paradigm that we call smart implicit interaction. Implicit interactions stay in the background (Schmidt 2000, Dix 2002, Ju and Leifer 2008), thriving on data analysis of speech (McMillan, Loriette et al. 2015), movements (Bachynskyi, Palmas et al. 2015) and other contextual data (Izadi, Kim et al. 2011), avoiding unnecessarily disturbing us or grabbing our attention. When we turn to them, depending on context and functionality, they either shift into an explicit interaction, engaging us in a classical interaction dialogue (but starting from an analysis of the context at hand), or they continue to engage us implicitly, through entirely different modalities that do not require an explicit dialogue: the smart objects respond to the ways we move or engage in other tasks. Building on the history of intelligent agents (Wooldridge and Jennings 1995), behavioural inference (Höök 2000) and motion sensing (Bachynskyi et al. 2015), the core research problem is to move from implicitly tracking humans to meaningful, beneficial system behaviour.

One form of implicit interaction we have experimented with lets mobile phones listen to the surrounding conversation and continuously adapt, preparing a relevant starting point for the moment the user decides to turn to the phone. As the user activates the mobile, the search app already has search terms from the conversation inserted, and the map app shows the places discussed; or, if the weather was mentioned and the person with the mobile was located in their garden, the gardening app may have integrated the weather information with the data from the humidity sensor in the garden to provide a relevant starting point. This is of course only possible by drawing on massive data sets and continuously adapting to what people say, their indoor and outdoor location, their movements and any smart objects in that environment, thriving off the whole ecology of artefacts, people and their practices.
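To make the conversation-adaptation scenario concrete, the sketch below shows one way such pre-population could be structured. It is a minimal illustration only: the finished transcript, the term-ranking heuristic and the app names are assumptions made for the example, standing in for the continuous, data-driven pipeline the project proposes (a real system would consume a live speech stream, cf. McMillan, Loriette et al. 2015).

    from collections import Counter

    STOPWORDS = {"the", "a", "and", "to", "of", "in", "it", "is", "we", "this", "if"}

    def extract_terms(transcript, top_n=5):
        """Rank recent conversation words as candidate search terms."""
        words = [w.strip(".,!?").lower() for w in transcript.split()]
        counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
        return [term for term, _ in counts.most_common(top_n)]

    def prepare_starting_points(transcript, context):
        """Pre-populate hypothetical app states; until the user turns to
        the phone, this adaptation stays entirely in the background."""
        terms = extract_terms(transcript)
        starting_points = {"search": {"query": " ".join(terms[:2])}}
        if "weather" in terms and context.get("location") == "garden":
            # Fuse the mentioned topic with (assumed) local sensor data.
            starting_points["gardening"] = {
                "forecast": context.get("forecast"),
                "soil_humidity": context.get("humidity_sensor"),
            }
        return starting_points

    snippet = ("Strange weather this spring. I wonder whether the weather "
               "will ruin the tomatoes before the weekend.")
    print(prepare_starting_points(snippet, {"location": "garden",
                                            "humidity_sensor": 0.31}))

Note that nothing in this loop interrupts the user: the adaptation stays implicit until the user turns to the phone, at which point the prepared starting points become the opening of an explicit dialogue.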
Previous work

While there has been work in human-computer interaction on problems such as interfaces for training machine learning systems (Amershi, Chickering et al. 2015), and ubicomp and mobisys research documenting a variety of context-aware applications (Woerndl, Huebner et al. 2011, Dargie, Plosila et al. 2012, Tian et al. 2015), human-computer interaction has seldom directly studied implicit forms of interaction. It has focused its attention almost exclusively on interactions where humans are 'in an interactive loop', attending to a single system in a rich, immediate exchange of action and feedback, often through a screen interface of some kind. Yet smart systems, by their semi- or fully autonomous nature, sit outside this interaction mode. As Verbeek points out (Verbeek 2015), non-technological systems interact with us in a much wider range of ways than existing technology does: we overhear a conversation, walk along a path, or feel our own internal body. Making use of these interface modes, however, will require a risky and radical break with existing ways of thinking about human-computer interaction. Implicit interaction uses location, bodily movement and a wider range of sensor inputs as input to systems, without relying on predefined mappings.

Implicit interaction echoes earlier visions of alternative interaction models, most notably the calm computing vision of Weiser and colleagues (Weiser 1991), describing a world where interaction would sit in the periphery, not calling for our attention until something changes or until we turn to it. While this vision inspired much work in ubicomp, actually building calm systems proved to be beyond what was then possible, and work in the field focused instead on building new compelling applications and on solving important systems problems.

Recent work on the interface problem for IoT includes Greenberg's concept of proxemic interactions (Ballendat, Marquardt et al. 2010). The idea is that approaching objects should be like approaching other people: as we get closer and orient towards some interactive object in a particular location, the object will "wake up" and start interacting with us. But this mainly works for screen-based interaction in settings where the screen can be located and contextualized; it ignores the development towards services that are available anywhere, not tied to a particular location or context.
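A minimal sketch of how such a proxemic "wake up" policy could be expressed is given below; the distance thresholds, the Reading structure and the zone names are illustrative assumptions rather than Greenberg and colleagues' actual implementation, but they capture the core idea of mapping distance and orientation onto degrees of engagement.

    from dataclasses import dataclass

    @dataclass
    class Reading:
        distance_m: float  # person-to-object distance from some tracker (assumed)
        facing: bool       # whether the person is oriented towards the object

    def zone(r: Reading) -> str:
        """Map a tracker reading onto a discrete proxemic zone."""
        if r.distance_m > 3.0 or not r.facing:
            return "ambient"   # stay quiet in the background
        if r.distance_m > 1.0:
            return "implicit"  # surface peripheral, glanceable content
        return "explicit"      # close and oriented: open a dialogue

    for r in (Reading(5.0, True), Reading(2.0, True), Reading(0.6, True)):
        print(r.distance_m, "->", zone(r))

The limitation discussed above is visible even in this toy version: the policy presupposes a tracked object at a fixed, instrumented location, which is exactly what location-independent services lack.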
In terms of guiding system action, systems from both academia and commercial companies usually require a rule-based relationship with the smart objects, controlled through scripts. Such systems feature, for example, a lighting system with rules like "when my car approaches the garage on a weekday, turn on the lights in the garage, on the driveway and in the kitchen". Yet the messiness and idiosyncrasies of our everyday life will quickly break such a rule (Harper 2006), in particular as rules rarely account for the social setting and the complex interplay between them. Rule-based user modelling has been a research topic since the 1980s, but useful solutions for ordinary desktop use (such as in search), for mobile use, or for modelling users' positions did not take off until we had, on the one hand, the necessary data and, on the other, the new wave of AI models built on machine learning over masses of data. Instead of requiring end-users to program their smart objects through static rules, we need continuous adaptation to the changing conditions of everyday life, based on what is actually happening, from moment to moment, using data-driven machine-learning techniques.
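The contrast between scripted rules and continuous adaptation can be sketched in a few lines. The static rule below is the brittle kind of script criticised above; AdaptiveLights is a deliberately simple frequency estimator standing in for a real machine-learning pipeline, and the feature names and log format are assumptions made for the example.

    from collections import defaultdict

    def static_rule(event):
        # "When my car approaches the garage on a weekday, turn on the lights."
        # Breaks as soon as everyday life deviates from the scripted case.
        return event["car_near_garage"] and event["weekday"]

    class AdaptiveLights:
        """Toy data-driven alternative: estimate P(lights wanted | context)
        from logged behaviour and keep re-estimating as life changes."""
        def __init__(self):
            self.counts = defaultdict(lambda: [1, 2])  # smoothed (wanted, seen)

        def observe(self, context, lights_used):
            wanted, seen = self.counts[context]
            self.counts[context] = [wanted + int(lights_used), seen + 1]

        def probability(self, context):
            wanted, seen = self.counts[context]
            return wanted / seen

    model = AdaptiveLights()
    for _ in range(8):  # e.g. guests staying over: the weekday pattern breaks
        model.observe(("car_near_garage", "weekday"), lights_used=False)
    print(model.probability(("car_near_garage", "weekday")))  # drifts towards 0

Where the scripted rule keeps firing regardless, the estimator's confidence decays with the contradicting evidence, which is the kind of moment-to-moment adaptation argued for above.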
References

Amershi, S., Chickering, M., Drucker, S. M., Lee, B., Simard, P. and Suh, J. (2015). ModelTracker: Redesigning performance analysis tools for machine learning. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, ACM: 337-346.
Bachynskyi, M., Palmas, G., Oulasvirta, A., Steimle, J. and Weinkauf, T. (2015). Performance and ergonomics of touch surfaces: A comparative study using biomechanical simulation. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Ballendat, T., Marquardt, N. and Greenberg, S. (2010). Proxemic interaction: Designing for a proximity and orientation-aware environment. ACM International Conference on Interactive Tabletops and Surfaces, ACM.
Baroni, M., Lenci, A. and Sahlgren, M. (2007). Proceedings of the 2007 Workshop on Contextual Information in Semantic Space Models (CoSMo 2007): Beyond Words and Documents. Computer Science Research Report 116, Roskilde University, Denmark. ISSN 0109-9779.
Dargie, W., Plosila, J. and De Florio, V. (2012). Existing challenges and new opportunities in context-aware systems. Proceedings of the 2012 ACM Conference on Ubiquitous Computing, ACM.
Dix, A. (2002). Beyond intention: Pushing boundaries with incidental interaction. Proceedings of Building Bridges: Interdisciplinary Context-Sensitive Computing, Glasgow University.
Fernaeus, Y. and Sundström, P. (2012). The material move: How materials matter in interaction design research. Proceedings of the Designing Interactive Systems Conference, ACM: 486-495.
Harper, R. (2006). Inside the Smart Home. Springer Science & Business Media.
Höök, K. (2000). Steps to take before intelligent user interfaces become real. Interacting with Computers 12(4): 409-426.
Höök, K. and Löwgren, J. (2012). Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction (TOCHI) 19(3): 23.
Höök, K., Ståhl, A., Jonsson, M., Mercurio, J., Karlsson, A. and Johnson, E.-C. (2015). Somaesthetic design. interactions 22(4): 26-33.
Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D. and Davison, A. (2011). KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, ACM.
Ju, W. and Leifer, L. (2008). The design of implicit interactions: Making interactive systems less obnoxious. Design Issues 24(3): 72-84.
Karlgren, J. and Sahlgren, M. (2001). From words to understanding. In Uesaka, Y., Kanerva, P. and Asoh, H. (eds.), Foundations of Real-World Intelligence, pp. 294-308. Stanford: CSLI Publications.
Li, T., An, C., Tian, Z., Campbell, A. and Zhou, X. (2015). Human sensing using visible light communication. Proceedings of MobiCom 2015, ACM.
McGrath, W., Etemadi, M., Roy, S. and Hartmann, B. (2015). fabryq: Using phones as gateways to prototype internet of things applications using web scripting. Proceedings of the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, ACM.
McMillan, D., Loriette, A. and Brown, B. (2015). Repurposing conversation: Experiments with the continuous speech stream. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Sahlgren, M. (2008). The distributional hypothesis. From context to meaning: Distributional models of the lexicon in linguistics and cognitive science (special issue of the Italian Journal of Linguistics), Rivista di Linguistica 20(1).
Sahlgren, M. and Karlgren, J. (2009). Terminology mining in social media. Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM '09), November 2-6, Hong Kong, China.
Schmidt, A. (2000). Implicit human computer interaction through context. Personal Technologies 4(2-3): 191-199.
Stolterman, E. (2008). The nature of design practice and implications for interaction design research. International Journal of Design 2(1): 55-65.
Ståhl, A., Löwgren, J. and Höök, K. (2014). Evocative balance: Designing for interactional empowerment. International Journal of Design 8(1).
Taylor, A. S., Harper, R., Swan, L., Izadi, S., Sellen, A. and Perry, M. (2007). Homes that make us smart. Personal and Ubiquitous Computing 11(5): 383-393.
Tian, Z., Campbell, A. and Zhou, X. (2015). Poster: Visible light communication in the dark. Proceedings of MobiCom 2015, ACM.
Vallgårda, A. and Fernaeus, Y. (2015). Interaction design as a bricolage practice. Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, ACM: 173-180.
Verbeek, P.-P. (2015). Beyond interaction: A short introduction to mediation theory. interactions 22(3): 26-31.
Weiser, M. (1991). The computer for the 21st century. Scientific American 265(3): 94-104.
Woerndl, W., Huebner, J., Bader, R. and Gallego-Vico, D. (2011). A model for proactivity in mobile, context-aware recommender systems. Proceedings of the Fifth ACM Conference on Recommender Systems, Chicago, Illinois, USA, ACM: 273-276.
Wooldridge, M. and Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review 10(2): 115-152.
Zimmerman, J., Forlizzi, J. and Evenson, S. (2007). Research through design as a method for interaction design research in HCI. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM: 493-502.