They recruited 20 English-speaking volunteers, a mix of men and women, to watch video clips of an actress either gesturing or speaking the phrases that the gestures conveyed. MRI scans captured the brain's responses to the two seemingly different forms of communication. For comparison, the volunteers also watched clips of meaningless hand and arm movements and heard a jumble of half-words that made no sense.
The goal was to see what areas of the brain were used during these different forms of communication.
They found that the same neural tissue handled both. The experiments revealed that the frontal and posterior temporal areas were activated under both conditions, which the researchers describe as compelling evidence that the brain can receive information in any form -- a gesture, a picture, words on a page, a sound or an object -- and that these regions will process it. The findings were published recently in the Proceedings of the National Academy of Sciences.
"We support multi-sensory integration," Gannon said. "There is nothing special about auditory or spoken language. This tells us about the evolutionary depth of our common system and that gestures were the beginning of language."
Not everyone agrees, though: many language theorists hold fast to the idea that spoken language evolved independently of gestures. To counter that, Gannon asked: "Why would a part of the brain do the same thing regardless of the type of information it receives?"
It makes perfect sense, said William D. Hopkins, an associate professor of psychology at Agnes Scott College in Decatur, Ga., and a research scientist at Yerkes National Primate Research Center in Atlanta. "The whole system is linked to pairing symbolic information to referential information in the environment," he said. "It is