You're at a crowded party. The music is loud, people are laughing, and a dozen different conversations are happening all around you. However, despite the hubbub, you're able to focus on the one voice you want to hear.
No reason to shout. Though background noise can be distracting, the brain has a remarkable ability to track a conversation and filter out unwanted noise. (Courtesy of the National Archives)
Scientists call our ability to zero in on a single speaker amid a cacophony of other sounds the "cocktail party problem." Our brains, like those of many animals, are so skilled at tuning in to what we want to hear while ignoring everything else that we barely give the process a second thought. Yet picking out a single voice in a noisy environment is surprisingly difficult.
To hear what your friend is saying amid the party scene, your brain must first segregate the voice of interest from the intermingled, overlapping mixture of sounds entering your ears. At the same time, you must keep your attention on that voice even as people laugh and talk and music plays.
"There are aspects of a person's voice that are distinct to the individual, and we focus our attention on those features" to track their voice in a noisy room, says University of California, Berkeley psychologist Frederic Theunissen, who studies how the brain recognizes complex sounds such as human speech and music. For instance, listeners may key in on the pitch and timbre of the speaker's voice, or on his or her accent.
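One such feature, pitch, corresponds to a voice's fundamental frequency. As a rough illustration only (not a method from Theunissen's research), the Python sketch below estimates the pitch of a synthetic harmonic "voice" using simple autocorrelation; the 180 Hz signal, sample rate, and search range are all assumptions chosen for the demonstration.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) by finding the strongest
    autocorrelation peak within a plausible range of voice pitches."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    lag_min = int(sample_rate / fmax)     # shortest plausible period
    lag_max = int(sample_rate / fmin)     # longest plausible period
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# Synthetic "voice": a 180 Hz fundamental plus two weaker harmonics,
# standing in for the harmonic structure of a real vowel sound.
rate = 16000
t = np.arange(0, 0.05, 1 / rate)
voice = (np.sin(2 * np.pi * 180 * t)
         + 0.5 * np.sin(2 * np.pi * 360 * t)
         + 0.25 * np.sin(2 * np.pi * 540 * t))

print(estimate_pitch(voice, rate))  # close to the true 180 Hz fundamental
```

Real pitch trackers are far more robust than this sketch, but the idea is the same: a periodic signal correlates strongly with a copy of itself shifted by one period, and that period is a stable signature a listener (or a machine) can follow through noise.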
Even the way in which the speaker strings together the words of a sentence can influence a person's perception of speech in a noisy setting. Studies show that people are significantly better at identifying words if they form a coherent sentence than if they are randomly strung together.
How the brain tunes in
Because the ear has no way to block some sounds while letting others through, every sound in the environment enters the ear, where it is converted into electrical signals. These signals pass through several brain regions before eventually reaching the auditory cortex, the part of the brain that processes sound.
While some researchers previously hypothesized that the auditory cortex was where such signals are scaled up or down, only recently have scientists been able to directly record these changes in activity as people listen to conversations.
In one such study, Charles Schroeder, a neuroscientist at Columbia University, and his colleagues recorded brain activity directly from electrodes implanted in epilepsy patients as the patients watched brief movie clips of two speakers each reciting a short narrative. The participants were instructed to pay attention to one of the speakers and ignore the other.
The researchers found that signals reflecting both speakers made their way to the auditory cortex. However, only the signals reflecting the speech participants were focused on were detected in brain regions involved in language processing and attention control.
According to Schroeder, the findings demonstrate that while the brain still processes the sounds we intend to ignore, these signals — unlike those generated from the speech we are paying attention to — fail to reach our consciousness.
A gorilla of a problem
The ability to focus on some things at the expense of others is crucial for functioning in a complicated world. But studies show there can be a downside to this focus — too much attention to one thing may make us seemingly "blind" or "deaf" to other stimuli in the environment.
Polly Dalton, a psychologist at Royal Holloway, University of London, along with research associate Nick Fraenkel, had people listen to short audio recordings featuring two simultaneous conversations: one between two men and the other between two women. Before listening, participants were asked to pay attention to either the men's or the women's conversation. Halfway through the recording, a new male voice entered, repeating the phrase "I'm a gorilla!" for 19 seconds.
The researchers found that up to 70 percent of participants failed to notice the gorilla statement, depending on which conversation they were focused on and how close the additional voice was to the attended speakers.
"This research demonstrates that we can miss even surprising and distinctive sounds when we are paying attention to something else," Dalton explains. She adds that this likely happens because of the push-pull nature of attention: When you concentrate on one task, your brain prioritizes those signals, and it filters out other information to avoid being distracted.
Helping machines become better listeners
Some scientists are interested in understanding how the brain separates and filters out unnecessary noise for its potential use in technology, including hearing aids and devices with speech recognition capabilities.
People with hearing problems often have a hard time separating voices of interest from background noise. While hearing aids can amplify sounds, turning up the volume doesn't solve the issue of segregating sound. Similarly, automatic speech recognition technologies, such as Apple's Siri, struggle to make out voices in loud environments.
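Why turning up the volume doesn't help can be shown with a little arithmetic: a hearing aid's gain multiplies the voice and the background noise equally, so the ratio between them, the signal-to-noise ratio (SNR), is unchanged. The toy Python demonstration below (a synthetic tone standing in for speech and random noise standing in for party chatter; not any actual hearing-aid algorithm) makes this concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 16000
t = np.arange(0, 0.5, 1 / rate)
voice = np.sin(2 * np.pi * 200 * t)        # stand-in for the target speaker
noise = 0.5 * rng.standard_normal(t.size)  # stand-in for background chatter

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels, from RMS amplitudes."""
    return 20 * np.log10(np.std(signal) / np.std(noise))

original_snr = snr_db(voice, noise)
amplified_snr = snr_db(10 * voice, 10 * noise)  # "turn everything up"

# The gain cancels in the ratio, so the two SNRs are the same:
print(round(original_snr, 2), round(amplified_snr, 2))
```

This is why the hard part of the problem is segregation, not loudness: a useful device would have to raise the voice relative to the noise, which requires first deciding which parts of the mixture belong to the voice.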
To overcome such technological limitations, scientists first need to learn more about how the brain separates mixtures of sounds into individual sound sources, explains Josh McDermott, a cognitive scientist at the Massachusetts Institute of Technology.
"We don't quite understand yet what it is we're paying attention to when we focus on one voice at the expense of another," according to McDermott. "It's probably some representation of voice quality, but figuring out exactly what goes into that representation will help us design machines and software that can do the same thing."
References
Alain C, Arnott SR. Selectively attending to auditory objects. Frontiers in Bioscience 5:d202-212 (2000).
Cherry EC. Some experiments on the recognition of speech, with one and two ears. Journal of the Acoustical Society of America 25(5):975-979 (1953).
Dalton P, Fraenkel N. Gorillas we have missed: Sustained inattentional deafness for dynamic events. Cognition 124(3):367-372 (2012).
McDermott JH. The cocktail party problem. Current Biology 19(22):R1024-R1027 (2009).
Mesgarani N, Chang EF. Selective cortical representation of attended speaker in multi-talker speech perception. Nature 485(7397):233-236 (2012).
Shamma SA, Micheyl C. Behind the scenes of auditory perception. Current Opinion in Neurobiology 20(3):361-366 (2010).
Zion Golumbic EM, Ding N, Bickel S, Lakatos P, Schevon CA, et al. Mechanisms underlying selective neuronal tracking of attended speech at a "cocktail party." Neuron 77(5):980-991 (2013).