Monday, June 4, 2012

As our languages make clear, context is a human specialty

Research shows that humans are remarkably good at divining one another's intentions using context — a skill, incidentally, that computers have not yet mastered.

Is Bob your uncle? That probably depends on what language you're speaking. In Urdu, if Robert is your father's younger brother, he's your chacha — but not if he's your father's older brother (taaya), your mother's brother (mamu), your father's sister's husband (phupa) or your mother's sister's husband (khalu).

In the Native American language Crow, your father's brother is also called your father. So is your father's sister's son.

In any language, each kinship system balances simplicity with specificity, according to a study in Friday's edition of the journal Science. And that principle could potentially be applied to the way we talk about other domains, such as color or location.

Kinship was a good place to start studying this phenomenon, said study coauthor Terry Regier, a cognitive scientist at UC Berkeley, because scholars have collected data on kinship systems in hundreds of languages over many decades.

Theoretically, a language could name all members of the extended family as "relatives." But this would be vague to the point of uselessness. A language could also have a specific title for every family member, but that would be a lot to remember.

The researchers examined 487 different family-naming systems and found that 85% of them drew at least some distinctions among relatives. But none of the kinship systems veered toward the extremely specific or the overwhelmingly general.
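The trade-off can be made concrete with a toy calculation. The sketch below is not the study's analysis; it scores three invented naming systems on two made-up measures, a complexity count (how many distinct terms a system uses) and an ambiguity score (how many relatives, on average, share a given relative's term), to show why the two extremes are unattractive.

```python
# Toy illustration of the simplicity/informativeness trade-off described above.
# This is NOT the formulation from the Science study; the relatives, systems,
# and measures are invented for this example.

from collections import Counter

# A handful of male relatives, for illustration only.
RELATIVES = [
    "father's younger brother", "father's older brother",
    "mother's brother", "father's sister's husband",
    "mother's sister's husband", "father", "son", "brother",
]

def complexity(system):
    """Number of distinct terms the system uses (simpler = fewer terms)."""
    return len(set(system.values()))

def ambiguity(system):
    """Average number of relatives that share a given relative's term."""
    counts = Counter(system.values())
    return sum(counts[term] for term in system.values()) / len(system)

# One extreme: a single catch-all term for everyone.
lumping = {r: "relative" for r in RELATIVES}

# The other extreme: a unique term for every relative (Urdu-like for uncles).
splitting = {r: f"term_{i}" for i, r in enumerate(RELATIVES)}

# A middle ground: English-like, with one word covering all five uncle types.
english_like = {r: "uncle" for r in RELATIVES[:5]}
english_like.update({"father": "father", "son": "son", "brother": "brother"})

for name, system in [("lumping", lumping), ("splitting", splitting),
                     ("English-like", english_like)]:
    print(f"{name:12s} complexity={complexity(system)} "
          f"ambiguity={ambiguity(system):.2f}")
```

The catch-all system is maximally simple but maximally ambiguous, the fully split system is the reverse, and the English-like system sits in between, which is roughly the kind of balance the article describes real kinship systems striking.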

Humans are good at striking a balance between specificity and generality in daily conversation, a necessary tactic to communicate quickly and efficiently. This requires people to be remarkably good at divining one another's intentions using context — a skill, incidentally, that computers have not yet mastered.

To understand how people intuit one another's meanings, Stanford researchers in another study asked volunteers to look at a blue square, a blue circle and a green square. Without any further context, they were asked: Someone is referring to one of these objects. Which one is it?

Participants were most likely to choose the blue circle, and almost as likely to choose the green square — the two most distinctive objects. The blue square, which shares its color with one object and its shape with the other, was chosen least often.

Then participants were asked: If someone uses the word "blue," which of the objects would they be referring to?

Instead of splitting 50-50 between the two blue objects, they leaned heavily toward the blue square. The idea, said cognitive scientist and study leader Michael Frank, is that the circle is already unique among the three objects and could simply be called "the circle" — so a speaker trying to be as informative as possible would reserve the word "blue" for the blue square, to distinguish it from the green square.
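That reasoning can be written down as a small probabilistic model: a listener who imagines a speaker choosing the most informative word, then works backwards from the word actually heard. The sketch below is only an illustration of that general idea; the object set, the word meanings and the uniform choices are assumptions made for this example, not the paper's exact formulation.

```python
# A minimal sketch of pragmatic inference over the three objects in the
# experiment. The vocabulary and meanings below are illustrative assumptions,
# not the study's actual model.

OBJECTS = ["blue square", "blue circle", "green square"]
WORDS = {
    "blue":   {"blue square", "blue circle"},
    "green":  {"green square"},
    "square": {"blue square", "green square"},
    "circle": {"blue circle"},
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def literal_listener(word):
    """Pick uniformly among the objects the word literally fits."""
    return normalize({o: 1.0 if o in WORDS[word] else 0.0 for o in OBJECTS})

def speaker(obj):
    """Prefer words in proportion to how well they pick out the object."""
    return normalize({w: literal_listener(w)[obj]
                      for w in WORDS if obj in WORDS[w]})

def pragmatic_listener(word):
    """Ask which object a helpful speaker would have called `word`."""
    return normalize({o: speaker(o).get(word, 0.0) for o in OBJECTS})

print(pragmatic_listener("blue"))
# -> roughly {'blue square': 0.6, 'blue circle': 0.4, 'green square': 0.0}
```

Run on the word "blue," the model puts more weight on the blue square than on the blue circle (about 60/40 under these assumptions), for exactly the reason Frank describes: a speaker who meant the circle had a better word available.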

This "talk-about-ability," as Frank put it, is easy for humans to understand. If two people are talking about a man named Scott, it's likely Scott is someone they both know. The study, which was also published Friday in Science, shows it's possible to generate formulas that could potentially help computers draw pragmatic conclusions from otherwise ambiguous sentences.

For now, don't let Siri's pre-programmed witticisms fool you: It's really hard to make a computer understand unspoken meaning, said Stephen Levinson, a linguistic anthropologist at the Max Planck Institute for Psycholinguistics in the Netherlands, who wasn't involved in either study.

"They're quite dumb at making these leaps of insight," he said.
____________________
References:

Khan, Amina. 2012. "As our languages make clear, context is a human specialty." Los Angeles Times. Posted: May 27, 2012. Available online: http://www.latimes.com/news/science/la-sci-language-20120527,0,2325902.story
