Tuesday, September 25, 2012

Google Glass

Google Glass was showcased at Diane von Furstenberg’s (DVF) Spring/Summer 2013 runway show in New York earlier this month.

The main advantage of Google Glass would be its hands-free capability: users speak to a heads-up display built into the eyewear to make calls, take videos and pictures, browse the internet, and so on.

This article provides a technology review of the Glass. An excerpt from the article:

“So why not just keep your smart phone? Because the goggles promise speed and invisibility. Imagine that one afternoon at work, you meet your boss in the hall and he asks you how your weekly sales numbers are looking. The truth is, you haven't checked your sales numbers in a few days. You could easily look up the info on your phone, but how obvious would that be? A socially aware heads-up display could someday solve this problem. At Starner's computer science lab at the Georgia Institute of Technology, grad students built a wearable display system that listens for "dual-purpose speech" in conversation—speech that seems natural to humans but is actually meant as a cue to the machine. For instance, when your boss asks you about your sales numbers, you might repeat, "This week's sales numbers?" Your goggles would instantly look up the info and present it to you in your display.”

It would be great if the interaction between the display and the user turns out to be as seamless as projected here.

Some points to consider from a human factors perspective when designing this system:
1)   What will the processing of information across the heads-up display and the real world look like? Prior literature suggests that the human visual system treats the heads-up display and the external world as two separate entities: when attention is focused on the external world, the heads-up display is ignored, and when attention is focused on the heads-up display, the world is ignored. There is also an ‘information shift cost’ when transitioning one’s attentional resources from the heads-up display to the real world and vice versa.
2)   What is the context of use for this device? What happens if users put it on while driving? Will it cause more distraction than smart phones do today?
3)   There are, of course, the challenges involved in designing a friendly speech recognition user interface. Background noise, user accents, and complex user queries, to name a few, can easily cause annoyance.
4)   What about the form factor of the Glass? It appears stylish at first glance, but how would it accommodate users who already wear prescription glasses (though future designs are projected to integrate into people's eyewear)?
5)   What about social etiquette and interactions? You could be browsing the internet on your Google Glass while pretending to be in a conversation with a friend. Granted, this can happen even today, but would it seem more or less obvious with the Glass?
