Thursday, September 27, 2012

Look before you leap!

This article describes how the graphic “LOOK!”, with a pair of rolling eyes, has been installed at intersections and crosswalks in New York City, as well as in advertisements on buses, to remind drivers, pedestrians, and cyclists to be alert, in an attempt to reduce traffic deaths involving pedestrians and cyclists.

An excerpt from this article:
“In a busy city like New York, it’s often not easy to know which way to look. Inspired by the “look right, look left” signage on the streets of London and other cities, the new symbol consists of a single simple word––“LOOK!” The graphic turns the “O”s of the word into a pair of eyes, with the pupils positioned to the left or right to let pedestrians know exactly which way to look. New Yorkers are accustomed to glancing down as they walk, and on the pavement the graphic becomes a quick and intuitive cue, easily understood by pedestrians of all ages and languages. The signage is currently being applied to intersections throughout the city.”

Photo credit: Gnarly via Wikimedia Commons.

Tuesday, September 25, 2012

Google Glass

Google Glass was showcased at Diane von Furstenberg’s (DVF) Spring/Summer 2013 runway show in New York earlier this month.

The main advantage of Google Glass would be its hands-free capability: users speak to the heads-up display in the eyeglasses they wear to make calls, take videos and pictures, browse the internet, and so on.

This article provides a technology review of the Glass. An excerpt from the article:

“So why not just keep your smart phone? Because the goggles promise speed and invisibility. Imagine that one afternoon at work, you meet your boss in the hall and he asks you how your weekly sales numbers are looking. The truth is, you haven't checked your sales numbers in a few days. You could easily look up the info on your phone, but how obvious would that be? A socially aware heads-up display could someday solve this problem. At Starner's computer science lab at the Georgia Institute of Technology, grad students built a wearable display system that listens for "dual-purpose speech" in conversation—speech that seems natural to humans but is actually meant as a cue to the machine. For instance, when your boss asks you about your sales numbers, you might repeat, "This week's sales numbers?" Your goggles would instantly look up the info and present it to you in your display.”
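The “dual-purpose speech” idea in the excerpt above can be sketched as a simple phrase matcher that listens for cue phrases in a transcript and quietly fetches the answer. This is a minimal illustration, not Starner's actual system; the cue patterns, the `DATA` store, and the `handle_utterance` function are all hypothetical stand-ins for a real speech recognizer and lookup backend.

```python
import re

# Hypothetical catalog mapping spoken cue phrases to lookup keys.
# In a real system the transcript would come from a speech recognizer.
CUE_PATTERNS = {
    r"this week'?s sales numbers": "sales_weekly",
    r"last quarter'?s revenue": "revenue_q",
}

# Toy data store standing in for the actual backend.
DATA = {
    "sales_weekly": "Sales this week: 42 units",
    "revenue_q": "Q3 revenue: $1.2M",
}

def handle_utterance(transcript):
    """Return text to show on the heads-up display if the utterance
    contains a dual-purpose cue; otherwise return None."""
    lowered = transcript.lower()
    for pattern, query_key in CUE_PATTERNS.items():
        if re.search(pattern, lowered):
            return DATA.get(query_key)
    return None  # ordinary conversation: the display stays quiet

print(handle_utterance("This week's sales numbers?"))
# → Sales this week: 42 units
```

The point of the sketch is the interaction design: the same sentence works as natural conversation for the listener and as a query for the machine, so the wearer never visibly “checks their phone.”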

It would be great if the interaction between the display and the user turns out to be as seamless as projected here.

Some points to consider from a human factors perspective when designing this system:
1)   What will the processing of information from the heads-up display and the real world look like? There is evidence from prior literature that the human visual system treats the real world and the heads-up display as two separate entities: when attention is focused on the external world, the heads-up display is ignored, and when attention is focused on the heads-up display, the world is ignored. There is also an ‘information shift cost’ when transitioning one’s attentional resources from the heads-up display to the real world and vice versa.
2)   What is the context of use for this device? What happens if users put this on during driving? Will this cause more distractions during driving than what exists today with smart phones?
3)   There are, of course, the challenges involved in designing a user-friendly speech recognition interface. Background noise, user accents, and complex user queries, to name a few, can easily cause annoyance.
4)   What about the form factor of the Glass? It does appear stylish at first glance, but how would it accommodate users who already wear glasses (though the new designs are projected to be integrated into people's eyewear)?
5)   What about social etiquette and interactions? You could be browsing the internet on your Google Glass while pretending to be in a conversation with a friend. Granted, this can happen even today, but would it seem more or less obvious with the Glass?

Friday, September 21, 2012

Designing for Mothers and Babies

I was at my friend’s baby shower a few weeks back, and one of the gifts she got was a pendant (see Figure) that not only looked good but was also safe for her baby to chew on (the material used in the pendant has been approved by the FDA).

Babies certainly like to grab and chew on things while teething and a pendant such as this provides babies the capability to do just that and gives mothers the opportunity to accessorize. 

Thursday, September 6, 2012

Apps taking over parenting?

This article discusses an iPhone App that IDEO has developed to help parents train their children. The Sesame Street character Elmo is used in this App to instill in children healthy habits such as brushing their teeth, exercising, and going to bed on time.

An excerpt from the article:

“It’s the result of a synthesis of extensive “human-centered research” that forms the basis for all the firm’s endeavors. As it will be used by both adults and kids, the project had to reflect the various needs of both, and follows an iPhone-style interface that allows parents to unlock their menus with an easy swipe.”

Based on this, it looks like a lot of thought has gone into understanding the needs of both parents and kids, and consequently into developing a user interface that is intuitive for both user groups. However, this raises the question of whether the technique adopted here is appropriate, from a developmental psychology perspective, for instilling good habits in children. Why is there a need for a mediator, which is nothing but a cartoon character, to do tasks that parents should perform? Is this reliance on a make-believe character good in the long run, and even if it starts showing good results in the beginning, how long would those results last?

Photo credit: Bill Thompson via Wikimedia Commons.

Monday, September 3, 2012

Driverless cars in the news again!

This article discusses how autonomous cars would reduce accidents on the road (by eliminating human error) and revolutionize ground transportation.
I have discussed the perils of fully automated systems in an earlier post, so I will not go into length about that here.
An excerpt from the article that caught my eye:
“It is even possible to make judgments about the mental or physical state of other drivers. Software developed by Probayes, a firm based near Grenoble, in France, identifies and then steers clear of drivers who are angry, drowsy, tipsy or aggressive. Upset drivers tend to speed up and brake quickly. Sleepy drivers tend to drift off course gradually and veer back sharply. Drunk drivers struggle to keep a straight line. The firm sells its software to Toyota, Japan’s car giant. Google’s cars have even been programmed to behave appropriately at junctions such as four-way stops, edging forward cautiously to signal their intentions and stopping quickly if another driver moves out of turn.”
This means that not only will these cars be equipped with sensors that can detect traffic lights, pedestrians, obstacles, and road signs and compute position relative to other vehicles, but they may also make maneuvers based on inferences about the mental state of nearby drivers. Obviously, a lot of behavioral modeling will be applied to make these inferences. There are so many things that can go wrong in these computations, and then we are back to the ironies of automation.
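To make the behavioral-modeling point concrete, the cues quoted above (upset drivers speed up and brake quickly, sleepy drivers drift and veer back, drunk drivers weave) could be caricatured as a few threshold rules on a nearby car's speed and lane-position traces. This is purely an illustrative sketch, not Probayes' actual software; the function name, thresholds, and units are all invented for the example, which is exactly why such inferences are fragile in practice.

```python
from statistics import pstdev

def classify_driver(speeds, lane_offsets):
    """Rough heuristic label for a nearby driver, from the behavioral
    cues in the article. speeds are in m/s per sample; lane_offsets are
    meters from the lane center. All thresholds are made up."""
    # Upset/aggressive: hard acceleration and braking (big speed swings).
    accelerations = [b - a for a, b in zip(speeds, speeds[1:])]
    if max(abs(a) for a in accelerations) > 3.0:
        return "aggressive"
    # Drowsy: gradual drift off course followed by a sharp correction back.
    drifts = [b - a for a, b in zip(lane_offsets, lane_offsets[1:])]
    if max(lane_offsets) > 0.8 and min(drifts) < -0.5:
        return "drowsy"
    # Tipsy: persistent weaving, i.e. high variance in lane position.
    if pstdev(lane_offsets) > 0.4:
        return "tipsy"
    return "normal"

# A slow drift to 0.9 m off-center with a sharp 0.7 m correction:
print(classify_driver([20, 20.5, 20], [0.1, 0.3, 0.5, 0.7, 0.9, 0.2]))
# → drowsy
```

Even in this toy version, the failure modes are obvious: a driver swerving around a pothole looks “tipsy,” and an emergency stop looks “aggressive,” which is one way the ironies of automation creep back in.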

Photo credit: Adam sk via Wikimedia Commons.