Washington, Sept. 05 (ANI): Researchers have discovered that what you see can override what you hear, suggesting that understanding spoken language may depend more heavily on vision than previously thought.
These findings suggest artificial hearing devices and speech-recognition software could benefit from a camera, not just a microphone.
The study's first author, Elliot Smith, a bioengineering and neuroscience graduate student at the University of Utah, said the study links the auditory signal in the brain to what people reported hearing when what they actually heard was something different.
The researchers found that vision influences the hearing part of the brain, changing a person's perception of reality.
The brain considers both sight and sound when processing speech. However, when the two conflict slightly, visual cues dominate sound. This phenomenon is named the McGurk effect after Scottish cognitive psychologist Harry McGurk.
The researchers pinpointed the source of the McGurk effect by recording and analyzing brain signals in the temporal cortex, the region of the brain that typically processes sound.
Working with University of Utah bioengineer Bradley Greger and neurosurgeon Paul House, Smith recorded electrical signals from the brain surfaces of four adults (two male, two female) from Utah and Idaho with severe epilepsy, who had volunteered while undergoing surgery to treat the condition. House placed three button-sized electrodes on the left, right, or both hemispheres of each volunteer's brain, depending on where that patient's seizures were thought to originate.
The four volunteers were then asked to watch and listen to videos focused on a speaker's mouth as the speaker said the syllables "ba," "va," "ga" and "tha."
By measuring the electrical signals in the brain as each volunteer watched each video, Smith and Greger could pinpoint whether auditory or visual brain signals were being used to identify the syllable in that video.
The study is published in the journal PLOS ONE. (ANI)