This is an extraordinary video demoing how a smartwatch can be made aware of what you’re currently holding / touching and be programmed to respond accordingly.

My favourite thing about this demo device is its use of audio feedback. I think audio UI is a massively underused and under-explored area of design at the moment. Too many of our devices and products allow us to talk to them, but aren't capable of talking back to us. There are many cases where a speaking device would be inconvenient, invasive, and potentially useless. But I think there are a growing number of uses for audio feedback, especially when so many people have headphones in while at work, commuting, etc.

For example, if I get a phone call while I'm listening to music or podcasts with headphones on my iPhone, I'd love it to say the name of the caller as well as ring. Pausing what I'm listening to and ringing in my ears is good, and gets my attention, but I still need to look at my phone or my watch to know who's calling and decide whether or not to answer. If the headphones are in, and the phone already knows to pause the audio and play the ringtone, why not also get Siri to say: "XXX is calling"? If I don't want to take the call, all I have to do is ignore it, or dismiss it with either the headphone controls or my Apple Watch.
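To make the idea concrete, here's a minimal Swift sketch of what the announcement step might look like. AVSpeechSynthesizer is a real AVFoundation API; everything around it (the trigger, the caller name) is an assumption, since the actual call flow lives inside the system's phone app and isn't something third-party code can hook into like this.

```swift
import AVFoundation

// A minimal sketch of the "announce the caller" idea. Assumes the system
// has already paused music and started the ringtone; this only adds the
// spoken announcement on top.
final class CallerAnnouncer {
    private let synthesizer = AVSpeechSynthesizer()

    /// Speak the caller's name over the current audio output
    /// (e.g. the headphones the listener already has in).
    func announce(callerName: String) {
        let utterance = AVSpeechUtterance(string: "\(callerName) is calling")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Hypothetical usage, when the phone reports an incoming call:
let announcer = CallerAnnouncer()
announcer.announce(callerName: "XXX") // placeholder name, as in the post
```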

I'm sure this is coming. Siri and her fellow assistants on other platforms are becoming more capable and more deeply integrated into our devices every day. I'm looking forward to the day when they are more like the device in this video and can talk to us a little more in the context of what we're doing.