A new Wired article looks at Nokia research into expanding the uses of accelerometers in phones. You could make an ‘F’ gesture in the air with your phone, for example, to launch its FM radio. You might use another gesture to lock your phone.
But a tap on an on-screen button has a distinct advantage over a mid-air gesture: perceived reliability.
When I physically tap an on-screen control, the outcome is 100% predictable: I push the button, I get the intended result. But with a gesture, it feels like there’s a greater risk of my phone misinterpreting what I want.
So when I’m choosing my interaction method, there’s a cost-benefit tradeoff. Why should I risk wasting valuable seconds while my phone misinterprets my gesture, when I know I can push a button that will work every time?
Obviously, the more reliable the gestural interface, the more confidence I’ll have in it. But it can never trump a 100% reliable button. What’s more, to make the appropriate gesture, I first need to recall it from memory.
That’s not to suggest that gestural interfaces aren’t a good thing; just that the context of use is important. You could allow gestural commands in your design – but in what circumstances will they save users sufficient time and/or effort that they’d choose making a gesture over pushing a button?
Designing voice control interfaces raises the same questions.
(Just to be clear: I think gestural interfaces are very cool. Can’t wait to have a go on Kinect…)