Saturday, December 29, 2012

Using animation for training

On a recent Jet Airways flight in India, I saw animations being used to demonstrate the safety features of the aircraft (e.g., how to fasten the seat belt and use the oxygen mask). I felt that this was a very creative way to get the safety message across to the audience.

Why are animations beneficial?
  • They are universally understood, irrespective of age, educational background, and culture.
  • They can convey the safety message to passengers in a “cool” way and can form part of the customer experience.
  • Though I have no real evidence as to whether animations are superior to traditional (recorded or live) safety videos in terms of passenger performance in an actual emergency, animated videos do appear to have the potential to capture the audience's attention to the safety material.

Here is an interesting article on this topic.

Friday, December 28, 2012

Human and Robot

I recently watched the movie “Robot and Frank”.

Some of the human factors aspects that came to mind when I saw the movie:
  • Many of the fundamental elements that govern human-human interaction, such as trust, dependence, and empathy, also apply to human-robot interaction.
  • Just as amongst humans, the trust that a human places in a robot evolves gradually through various experiences. In the movie, Frank goes from hating the robot to trusting and liking it, and even calling it his buddy.

Now, what tasks should a personal service robot help the elderly accomplish?
  • Support activities of daily living: It is important that robots help the elderly accomplish the day-to-day activities that they cannot carry out on their own, without having to rely on others. For example, in the movie, the robot does all the cooking and cleaning for Frank, which Frank is not good at. The robot also encourages Frank to exercise and accompanies him on walks. It is important to note, however, that the robot only assists Frank with tasks that Frank is not capable of doing on his own. This is a crucial point that designers should keep in mind: Frank, who is a jewel thief, still plans his robberies on his own. Encouraging humans to do the tasks they can do themselves (not robbery, of course!) is an important element that will help build trust in a robot.
  • Help improve cognitive functioning: Aging is accompanied by diminished cognitive functioning. Hence, it is important that the elderly engage in activities that stimulate mental functioning. For example, in the movie, the robot urges Frank to take up gardening for mental stimulation.

As the story evolves, Frank becomes emotionally attached to the robot and refers to the robot as his friend. As a movie spectator, I have to admit that I also started liking the friendship between the characters, despite the fact that one of them was a machine.

Now, this leads to the question: how many human characteristics should a personal service robot possess? Certainly, to develop trust in the robot, the robot should promote independence while at the same time providing companionship to the human. The robot should also be designed with some empathy for the user. But how human-like should the robot be? And are too much trust in, and too much attachment to, a robot good for the human?

Thursday, December 20, 2012

Perceptual Illusion and Fashion

Let us look at the Müller-Lyer illusion. Though the line segments in Figure A and Figure B are of the same length, our visual system fools us into believing that the line segment in Figure A (top figure) is shorter than the line segment in Figure B.

Applying this logic to your choice of dresses below:

The dress depicted in Figure 1 has the potential to create the appearance of a wider waistline than the dress in Figure 2 (in reality, the red lines in Figures 1 and 2 are of equal length). Notice that Figure 1 resembles the wider-looking line segment (Figure B) of the Müller-Lyer illusion.

The dress depicted in Figure 3 has the potential to create the appearance of a slimmer waistline than the dress in Figure 4 (in reality, the red lines in Figures 3 and 4 are of equal length). Notice that Figure 3 resembles the shorter-looking line segment (Figure A) of the Müller-Lyer illusion.

Therefore, the peplum waist (shown in Figure 1), an iconic look that is in fact back in style, has the potential to create a less flattering silhouette.

Can't virtually everything be explained through human perception and cognition?

This blog is co-written with Lu Wang.

Photo credit: Gwestheimer via Wikimedia Commons.

Tuesday, December 18, 2012

Ergonomic baby carrier

The baby carrier (shown in the picture) has the following advantages:

  • More degrees of freedom: The mother can carry the baby on the back, front or side, thereby providing more options and potentially more comfort.
  • More support for the baby: The infant insert feature in the carrier provides additional support for the baby at the back and the hip.
  • Allows the mother to multi-task: The carrier promotes hands-free carrying of the baby, allowing the mother to perform other tasks without worrying that the baby is going to grab something dangerous.
  • Keeps essentials readily accessible: The carrier has pouches for smaller items such as napkins, allowing easy access without having to dig through other bags.
  • Promotes baby’s sleep: The carrier even has a hood for the baby’s nap time, allowing the baby to sleep without being disturbed by lights and other distractions.
  • Enhanced safety: The carrier also has multiple belts, one at the waist and one at the shoulder, providing better safety.
  • Machine-washable: Helps to easily wash off all the baby spills.

Now this is one carrier that is designed with the mother and baby in mind!

This blog is co-written with Lulu Wang, who is in the picture with her adorable baby.

Friday, December 14, 2012

Gloves for my phone (and hands)!

My friend Steve presented me with a pair of gloves and I absolutely love them!

Now that I live in Minneapolis, gloves go beyond being a mere accessory for me. However, gloves restrict the use of touch-screen devices like the iPhone. The capacitive touch screen in the phone relies on the conductive properties of the human body. Ordinary gloves insulate your fingers from the screen, breaking that conductive path and making it impossible to use your touch-screen phone while wearing them.

So, I was very excited to find out that the gloves that were gifted to me are touch screen compatible. These gloves use conductive thread to mimic the conductive properties of the human hand.

Now I can unlock my phone, text, and make a call when I am outside and not freeze! I am not sure whether this will work in really extreme weather conditions but it is great for now.

This is what I call human factors in everyday life!

Wednesday, December 12, 2012

Are two heads better than one?

You are sitting at your desk writing a document and a colleague comes by to ask a question or an email notification appears and you feel compelled to read the email. Have you experienced any difficulties resuming your work on the document?

Well, now think of operators working in a dynamic environment (e.g., military command and control, aviation). When these operators are interrupted, resuming the interrupted task involves inferring the changes that took place during the interruption and also determining the consequences of the changes.

This article shows that working with a teammate helps operators recover from interruptions faster than working individually. This is because collaborative work allows responsibilities to be distributed amongst team members, so that each teammate can resume multiple tasks in parallel rather than one individual having to resume each task sequentially. In other words, faster recovery from interruption is possible in teams due to the distribution of task load.

This team superiority effect is, however, mediated by the coordination and communication that takes place between teammates following an interruption. That is, if teammates needed to communicate more following an interruption, it took them longer to resume their task. This was true only when one person in the team was interrupted and had to inform her teammate that she was back from the interruption and ready to resume the task.

So, what are the implications for designers here?

It is important that technology/systems that will be used in team settings be designed to help operators obtain a ‘shared view’ of the world.

Photo credit: Unsigned engraving [Public domain] via Wikimedia Commons

Monday, December 10, 2012

Role of Sound in Interaction Design

The inappropriate or excessive use of sound can annoy the user. Therefore, understanding the situations under which audio is the most appropriate is important for creating a good user experience. The advantages of sound include:
  • Promoting safety: The use of audio can support human performance in situations where the visual channel is overloaded. For example, GPS systems that talk have an edge over those that do not, because they help drivers allocate their visual resources to the primary task of driving, which places a heavy burden on the visual system.
  • Creating more user involvement: Sound is an excellent addition in games and simulators to create a sense of ‘presence’.
  • Delivering emotion: Adding good audio has the potential to give a more human touch to products that users interact with.

Several companies are now incorporating sound into their product design.
  • This article describes how GE is incorporating sound into their appliance design. You can listen to the soundtracks here.
  • Ford is pursuing the idea of using sound in their electric cars to warn pedestrians of an approaching electric vehicle. You can listen to the sounds here.

Photo credit: Wikimedia Commons

Wednesday, November 21, 2012

Assessing text readability

One of the essentials in user interface design is that the language you use is simple for users to understand.

An easy way to assess readability is the Flesch-Kincaid Grade Level, which is built into Microsoft Word. In Word 2010, go to File -> Options -> Proofing -> Show readability statistics, and then press F7 on the desired text; the metric is displayed after the spelling and grammar check completes.

For the text above, the Flesch-Kincaid Grade Level is 10.4, which means that it is written for a user with a 10th-grade reading level.
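For readers who want to compute the metric outside of Word, the formula itself is public: grade = 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Here is a rough sketch in Python; the syllable counter is a simple vowel-group heuristic of my own, so scores will differ slightly from Word's:

```python
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short sentences with short words score near (or even below) zero, while long sentences full of polysyllabic words push the grade level up, which is exactly the behavior the metric is meant to capture.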

Sunday, November 11, 2012

Human Factors and Disasters

This article describes the work of William Helton and James Head from the University of Canterbury in New Zealand. They compared the differences in cognitive performance of participants before and after a local earthquake. Their findings are summarized below:
  • There was an increase in errors of omission following the earthquake.
  • The participants who reported feeling depressed following the earthquake were slower in responding to the cognitive task.
  • The participants who reported feeling anxious following the earthquake responded faster and made more mistakes in the cognitive task.
  • The researchers concluded that humans are under increased cognitive load following a disaster and are prone to make more errors.

As the nation recovers from the aftermath of Hurricane Sandy, it is the right time to think about the role of human factors in disaster management. Special attention needs to be given to the design of the training programs and tools that emergency responders use to perform disaster management.

Photo credit: The National Guard (Maryland National Guard  Uploaded by Dough4872) via Wikimedia Commons.

Monday, October 15, 2012

Using Facial EMG to detect a loss of situation awareness

Situation awareness is a term that is widely used in the human factors community to denote one's comprehension or understanding of the environment in which one is working. Studying this construct is important because a loss of situation awareness is considered responsible for performance failures in several safety-critical domains. For example, air traffic controllers who are unaware that a loss of separation is occurring are more likely to be involved in severe operational errors.

Various techniques to measure situation awareness exist. Objective measures use accuracy and response time to queries as a way of inferring operator situation awareness. Subjective measures rely on feedback from an expert or on self-ratings. Finally, implicit performance measures involve embedding events into scenarios that require operators to exhibit specific behaviors.

More recently, Dr. Frank Durso and colleagues published an article in the journal Human Factors, in which they discuss how facial EMG can be used to detect loss of situation awareness (or confusion).

The experimental setup was such that participants in the experimental condition listened to a passage and were asked to raise their index finger when they heard something that did not make sense to them. Participants in the control condition raised their finger when they heard an animal being mentioned. Activity at four facial muscle sites (near the left and right eyebrows, the mandible, and the cheek) was recorded using EMG while participants listened to the passages.

Key takeaways from the article:
  • EMG traces detected confusion in all the participants who reported that they were confused, and also in 6 participants who did not report any confusion. This suggests that facial EMG is a better detector of loss of situation awareness than self-report measures.
  • The facial muscles near the eyebrows were the most effective in detecting confusion.

 Photo credit: FASTILY via Wikimedia Commons

Friday, October 12, 2012

A Vest that Hugs!

This article describes a vest that Facebook users can wear, which gives users a hug whenever they receive a ‘like’ to their status updates on Facebook. The hug is simulated by the inflation of the vest. The users can also give a hug back to the sender by deflating the vest.

The perceived advantage of this vest is that it allows people to be close irrespective of physical boundaries.   

A few aspects to consider when converting this prototype to a product (Note that the integration with Facebook has not occurred yet):
  • Do users want to be hugged by everyone who likes their status updates or just a select few of their friends and family?
  • Do users want to be hugged for each and every status update that they post? That seems a little ridiculous. So, what should the criteria be? Perhaps users should have the flexibility to set this option.
  • Do users want to be hugged in all social settings?
  • Should users be hugged when they are performing critical tasks such as driving? The vest can be a source of distraction.

Photo credit: Enoc vt via Wikimedia Commons

Monday, October 8, 2012

Ehealth and Older Adults

Today’s article from The New York Times examines the ehealth benefits for older adults.  

Highlights from the article:
  • 53% of Americans 65 and older use the Internet or email, but after age 75, internet use drops to 34%.
  • Fear of computers and smartphones, problems with vision and hearing, cognitive declines, limited finances, and a lack of learning opportunities are considered the reasons why older adults do not engage as much with technology.
  • Agencies such as the National Institute on Aging and the National Network of Libraries of Medicine are working to help older adults learn and use technology.
  • Many medical services are becoming available online, and older adults can benefit greatly from this.
  • Ehealth sites can help older adults make informed decisions about their health, communicate with their physicians, stay independent, identify the best Medicare options, find nutritional recipes, and order prescriptions online, to name a few.
  • Employ good design principles (that take into consideration the vision and cognitive declines associated with aging) when designing websites for the elderly. This website provides a list of design principles. Some of the key principles from the website include:
1. Organize information into short, meaningful sections.
2. Present the key information first.
3. Avoid lengthy paragraphs.
4. Use active voice.
5. Minimize scrolling.
6. If instructions involve more than one step, number the steps.
7. Minimize the use of technical jargon.
8. Use single mouse clicks.
9. Use 12- or 14-point type size, and make it easy for users to enlarge text.
10. Use high-contrast color combinations.
11. Provide a speech function to hear text read aloud.

Check out the ehealth site developed by the National Institute on Aging, which incorporates these web design principles. 

Tuesday, October 2, 2012

What a Cool Thermostat!

Nest, a Silicon Valley startup company, has announced that they will be releasing their next generation thermostat later this month. More details about the thermostat can be found here.
Some of the ‘cool’ features of this thermostat from a human factors and ergonomics perspective include:

  • Sleek design: The new model is slimmer.
  • Adjustability via Apps: Through iPhone and Android apps, users can adjust the temperature settings even when they are away from their homes.
  • Automation support: If the thermostat detects that there are no individuals in the house (uses sensors to detect presence), it has the capability to adjust the temperature settings to reduce unnecessary heating and cooling. This way, users don’t have to remember to adjust the thermostat to conserve energy when going out.
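The auto-away behavior in the last bullet can be sketched as a simple occupancy timeout: if the motion sensors have been quiet for long enough, switch to an energy-saving setback temperature. The threshold, setpoints, and function names below are my own illustrative assumptions, not Nest's actual algorithm:

```python
AWAY_THRESHOLD_S = 30 * 60   # assumed: no motion for 30 minutes means "away"
HOME_SETPOINT_C = 21.0       # comfort temperature when occupied
AWAY_SETPOINT_C = 16.0       # energy-saving setback when the house is empty

def choose_setpoint(last_motion_ts, now_ts):
    """Pick a target temperature based on occupancy inferred from the
    time elapsed since the motion sensor last fired (timestamps in seconds)."""
    if now_ts - last_motion_ts > AWAY_THRESHOLD_S:
        return AWAY_SETPOINT_C   # nobody home: reduce unnecessary heating/cooling
    return HOME_SETPOINT_C       # recent motion: keep the house comfortable
```

The human factors benefit is exactly what the bullet describes: the user no longer has to remember to adjust the thermostat when going out, because the device infers absence on its own.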

Photo credit: grantsewell via Wikimedia Commons

Thursday, September 27, 2012

Look before you leap!

This article describes how the graphic “LOOK!”, with a pair of rolling eyes, has been installed at intersections and crosswalks in New York City, as well as in advertisements on buses, to remind drivers, pedestrians, and cyclists to be alert, in an attempt to reduce traffic deaths involving pedestrians and cyclists.

An excerpt from this article:
“In a busy city like New York, it’s often not easy to know which way to look. Inspired by the “look right, look left” signage on the streets of London and other cities, the new symbol consists of a single simple word––“LOOK!” The graphic turns the “O”s of the word into a pair of eyes, with the pupils positioned to the left or right to let pedestrians know exactly which way to look. New Yorkers are accustomed to glancing down as they walk, and on the pavement the graphic becomes a quick and intuitive cue, easily understood by pedestrians of all ages and languages. The signage is currently being applied to intersections throughout the city.”

Photo credit: Gnarly via Wikimedia Commons.

Tuesday, September 25, 2012

Google Glass

Google Glass was showcased at Diane von Furstenberg’s (DVF) Spring/Summer 2013 runway show in New York, earlier this month.

The advantage of Google Glass would be its hands-free capability, allowing users to speak to the heads-up display in the eyeglasses that they wear to make calls, take videos and pictures, browse the internet, etc.

This article provides a technology review of the Glass. An excerpt from the article:

“So why not just keep your smart phone? Because the goggles promise speed and invisibility. Imagine that one afternoon at work, you meet your boss in the hall and he asks you how your weekly sales numbers are looking. The truth is, you haven't checked your sales numbers in a few days. You could easily look up the info on your phone, but how obvious would that be? A socially aware heads-up display could someday solve this problem. At Starner's computer science lab at the Georgia Institute of Technology, grad students built a wearable display system that listens for "dual-purpose speech" in conversation—speech that seems natural to humans but is actually meant as a cue to the machine. For instance, when your boss asks you about your sales numbers, you might repeat, "This week's sales numbers?" Your goggles would instantly look up the info and present it to you in your display.”

It would be great if the interaction of the display with the user is as seamless as is projected here.

Some points to consider from a human factors perspective when designing this system:
1) What will the processing of information in the heads-up display and the real world look like? There is evidence from prior literature that the human visual system treats the real world and the heads-up display as two separate entities: when attention is focused on the external world, the heads-up display is ignored, and when attention is focused on the heads-up display, the world is ignored. There is also an ‘information shift cost’ when transitioning one's attentional resources between the heads-up display and the real world.
2) What is the context of use for this device? What happens if users put it on while driving? Will it cause more distraction during driving than what exists today with smart phones?
3) There are, of course, the challenges involved in designing a friendly speech recognition user interface. Background noise, user accents, and complex user queries, to name a few, can easily cause annoyance.
4) What about the form factor of the Glass? It does appear stylish at first glance, but how would it accommodate users who already wear glasses (though future designs are projected to be integrated into people's eyewear)?
5) What about social etiquette and interactions? You could be browsing the internet on your Google Glass while pretending to be in a conversation with a friend. Granted, this can happen even today, but would it seem more or less obvious with the Glass?

Friday, September 21, 2012

Designing for Mothers and Babies

I was at my friend’s baby shower a few weeks back and one of the gifts that she got was a pendant (see Figure) that not only looked good but was also safe to be chewed on by her baby (The material used in the pendant has been approved by the FDA).

Babies certainly like to grab and chew on things while teething and a pendant such as this provides babies the capability to do just that and gives mothers the opportunity to accessorize. 

Thursday, September 6, 2012

Apps taking over parenting?

This article discusses an iPhone app that IDEO has developed to help parents train their children. The Sesame Street character Elmo is used in the app to instill in children healthy habits such as brushing their teeth, exercising, and going to bed on time.

An excerpt from the article:

“It’s the result of a synthesis of extensive “human-centered research” that forms the basis for all the firm’s endeavors. As it will be used by both adults and kids, the project had to reflect the various needs of both, and follows an iPhone-style interface that allows parents to unlock their menus with an easy swipe.”

Based on this, it looks like a lot of thought has gone into understanding the needs of both parents and kids, and consequently into developing a user interface that is intuitive for both user groups. However, this raises the question of whether the technique adopted here is appropriate, from a developmental psychology perspective, for instilling good habits in children. Why is there a need for a mediator, which is nothing but a cartoon character, to do tasks that parents should perform? Is this reliance on a make-believe character good in the long run, and even if it starts showing good results in the beginning, how long would those results last?

Photo credit: Bill Thompson via Wikimedia Commons.

Monday, September 3, 2012

Driverless cars in the news again!

This article discusses how autonomous cars could reduce accidents on the road (by eliminating human error) and revolutionize ground transportation.
I have discussed the perils of fully automated systems in an earlier post, so I am not going to go into that at length here.
An excerpt from the article that caught my eye:
“It is even possible to make judgments about the mental or physical state of other drivers. Software developed by Probayes, a firm based near Grenoble, in France, identifies and then steers clear of drivers who are angry, drowsy, tipsy or aggressive. Upset drivers tend to speed up and brake quickly. Sleepy drivers tend to drift off course gradually and veer back sharply. Drunk drivers struggle to keep a straight line. The firm sells its software to Toyota, Japan’s car giant. Google’s cars have even been programmed to behave appropriately at junctions such as four-way stops, edging forward cautiously to signal their intentions and stopping quickly if another driver moves out of turn.”
This means that not only will these cars be equipped with sensors that can detect traffic lights, pedestrians, obstacles, and road signs, and compute position relative to other vehicles, but they may also make maneuvers based on inferences about the mental state of nearby drivers. Obviously, a lot of behavioral modeling will be required to make these inferences. There are so many things that can go wrong in these computations, and then we are back to the ironies of automation.

Photo credit: Adam sk via Wikimedia Commons. 

Thursday, August 30, 2012

Blogging on Division 21 talks from the 2012 American Psychological Association Annual Meeting: IED Detection

Improvised explosive devices (IEDs) are responsible for the majority of deaths and injuries in overseas combat.

In his talk, Dr. Russel Branagahan discussed the techniques that he and his colleagues used to determine the strategies that experts use to detect IEDs and how this information can be used to create simulators for training novices.

Some of the techniques that were employed are described below:
  • Observations and unstructured interviews:  These techniques revealed that discovering an already placed IED is very difficult and that success lies in detecting the IED placement as it is happening. Individuals also exhibit certain consistent behaviours during the IED placement process (such as several individuals adding trash to a heap until someone finally plants the IED).
  • Concurrent verbal protocols: In this technique, the research team presented sensor operators with video replays of IED events and concurrently elicited information on the cues and strategies that they employed in IED detection. This technique helped to glean important information with regard to search strategies, camera operation, and contextual cues to look for (e.g., people digging on the side of the road, disturbed earth, and behaviour inconsistent with the time of the day).
  • Structured interviews: Through this technique, the research team asked experts to walkthrough various suspicious situations and to explain the cues that led to an alert. This provided information on important environmental characteristics that need to be paid attention to, such as pattern and activities of people (e.g., loitering, running, evacuating a street), terrain (e.g., tunnel, trash), and things (e.g., car, dead animal, shovel).
The information obtained through these techniques will be used to develop simulator scenarios.

Photo credit: US Army via Wikimedia Commons

Thursday, August 23, 2012

Can Technology Come to the Rescue of the Distracted Driver?

In my previous post, I discussed how technology can be a bane to a driver. In this post, I discuss how technology can help the distracted driver.

According to the National Highway Traffic Safety Administration, cell phones are responsible for 18% of fatalities in distraction-related crashes. Hence, it is important to reduce the distractions caused by cell phones. This article in Ergonomics in Design by Dr. Linda Angell, a Research Scientist at the Virginia Tech Transportation Institute, discusses various software applications that can be used to reduce driver distractions associated with cell phone use. These applications vary with respect to what they do. One such application reads texts and emails out to drivers, without drivers having to touch their cell phones. Dial2do is a similar application that allows drivers to listen to and send texts, emails, tweets, etc. The perceived advantage of these apps is that they are hands-free.

This reminded me of Henry David Thoreau's quote: “It's not what you look at that matters, it's what you see”. These applications certainly facilitate “looking” but may fail to promote “seeing”, because drivers are still using their cognitive resources to think about the world outside of the driving environment via the texts and emails. Therefore, though these apps may not necessarily help with cognitive distraction (drivers are still listening to and comprehending texts and emails while driving), they are one step in the right direction: helping drivers keep their hands on the steering wheel and their eyes on the road.

Zoomsafer is an application that can be downloaded onto your phone and provides auto-replies to incoming texts and calls, stating that the driver is driving and unable to respond. TrinityNoble's Guardian Angel MP is another application that locks the cell phone when the car is traveling above a certain speed, thereby disallowing any cell phone usage.
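At its core, a speed-based lock of this kind reduces to a threshold check on GPS-reported speed. A minimal sketch, where the threshold value and names are my own assumptions for illustration and not TrinityNoble's implementation:

```python
LOCK_SPEED_KMH = 15.0  # assumed threshold; a real product would make this configurable

def should_lock_phone(gps_speed_kmh, passenger_override=False):
    """Lock the phone UI when the vehicle exceeds the threshold speed.
    A passenger override is included because GPS alone cannot distinguish
    the driver from a passenger, a known limitation of such apps."""
    return gps_speed_kmh > LOCK_SPEED_KMH and not passenger_override
```

The passenger problem in the comment is the interesting human factors wrinkle: too aggressive a lock frustrates legitimate passenger use, while too permissive an override defeats the safety purpose.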

These applications can be used by anyone from a conscientious driver, to parents wanting to enforce a ‘no texting while driving’ rule for their children, to companies wanting to avoid lawsuits arising from motor accidents involving cell phone use.

Photo credit: Edbrown05 via Wikimedia Commons 

Monday, August 20, 2012

Technology and the Distracted Driver

Mr. Mouhamad Naboulsi’s recent post in the Cognitive Engineering and Decision Making group on LinkedIn brought my attention to this article. Roximity and Ford have partnered to make an in-car app that brings location-based deal alerts to drivers. The information from the app on your smartphone will be displayed on Ford’s dashboard. You can find more information on this here.

Granted, the app might be fantastic at finding eye-popping deals on gas, restaurants, and department stores, but the question here is: why should this information be relayed to people while they are driving? According to a 2009 report by the National Highway Traffic Safety Administration, 5,474 people were killed and an additional 448,000 were injured in motor vehicle crashes in the United States as a result of distracted driving. So do we need any more distractions on the road?

Photo credit: The Library of Congress via Flickr

Wednesday, August 15, 2012

Live Agents

A month back, I was travelling outside the United States and attempted to buy credit to place internet calls to connect with my family. I entered my payment information and received a message stating, “We were unable to process your request. Try again later”. I repeated this 5 times (because I kept receiving this message), only to find out that I had been charged 5 times. However, I was able to chat with a live agent and get the issue resolved in no time.

Now, this is not the first time I have chatted with a live agent to help me with my queries. I have used this feature for everything from shopping enquiries to billing enquiries.

Chatting is so much more hassle-free in comparison to waiting for an agent on the phone. Advantages of getting support from live agents over telephone support include:
  • It is faster.
  • It does not disrupt customers’ workflow. That is, chatting with a live agent can easily be done in conjunction with other tasks and does not require the extra step of placing a phone call. This really fits the model of today’s technology users, who are trying to accomplish multiple things with their computer.

One enhancement that I would desire as a customer is more feedback on what the live agent is doing. Though live agents ask you to wait while they investigate the issue or query, better indicators of how long the process will take would be beneficial. These status indicators could be automated and easily integrated with the chat engine. This way, the customer knows how long the process will take and what the chat agent is doing.

I also wonder about the usage statistics on live chat agents and whether this appeals only to certain demographics of the user population. For example, older adults may prefer having a conversation on the phone versus chatting on the computer.

Photo credit: David Vignoni via Wikimedia Commons

Monday, August 13, 2012

Human Factors and Ergonomics in India

The BRIC nations (Brazil, Russia, India, and China) are projected to be among the world’s largest and most influential economies by 2050. Undoubtedly, as these countries develop, several domains/industries will benefit from the application of human factors and ergonomics principles.

Agriculture has been the backbone of the Indian economy, and hence it was not surprising when I came across a paper written by Saran in 1968, which discussed the design of a grain harvester for agricultural work in India. Rather than imposing a design that required altering the working position and habits of the Indian farmer, the paper describes a design that incorporated the habits and usage patterns of the Indian farmer.

Nearly four decades after Saran published his paper, Mukhopadhyay (2006) describes the state of ergonomics in India. He discusses how ergonomics can be applied to various industries such as:
  • Crafts (e.g., pottery making, jewellery making)
  • Agricultural tasks (e.g., harvesting, sowing)
  • Non-motorized transportation (e.g., rickshaws)

Mukhopadhyay discusses how successful ergonomic interventions in India would require raising awareness about ergonomics to the rural people.

And how can we forget India’s IT sector? From the operators working long shifts in call centers to the software developers working on projects outsourced from other countries, the Indian IT industry would benefit from various human factors and ergonomics interventions.

Last but not least, increasing consumerism in India also offers a plethora of opportunities for human factors and ergonomics research that takes into account the unique needs of this market.

Photo credit: Rdglobetrekker via Wikimedia Commons.

Wednesday, August 8, 2012

Blogging on Division 21 talks from the American Psychological Association Annual Meeting – Use of Smart Phone Apps as Health Interventions

Dr. David Gustafson from the University of Wisconsin-Madison discussed how mobile phone apps and virtual communities can be used to help recovering alcoholics avoid relapse.

The advantages of the application include:
  • Use of GPS to detect whether the user is near a high-risk location, such as a liquor store, and allow the user to get support. Specifically, a screen with options to call or text a friend appears when the user is near a high-risk location.
  • Capability to chat with individuals with addiction problems thereby gaining peer support.
  • Distance counseling via video chat.
  • Capability to complete weekly surveys: this information is used by the computer intelligence to detect relapses.
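The GPS feature in the first bullet is essentially a geofencing check: compute the distance from the user's position to each flagged location and trigger the support screen when one falls within an alert radius. A sketch under assumed names and an assumed radius, not Dr. Gustafson's actual code:

```python
import math

ALERT_RADIUS_M = 200.0  # assumed alert radius around a high-risk location

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def near_high_risk(user_lat, user_lon, risk_locations):
    """Return True if the user is within the alert radius of any flagged
    location (e.g., a liquor store), which would trigger the support screen."""
    return any(haversine_m(user_lat, user_lon, lat, lon) <= ALERT_RADIUS_M
               for lat, lon in risk_locations)
```

In a real app, the list of flagged locations would come from a places database, and the check would run periodically in the background as the phone's position updates.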
More details on Dr. Gustafson's work can be found here. A video on this app can be found here.
Photo credit: James Whatley via Wikimedia Commons