Study: Socioeconomic Status Can Affect Hearing
Differences In Word Interpretation Between Children From Different Income Levels Could Be Key To Learning
BY WILLIAM WEIR
The Hartford Courant
10:15 PM EST, November 17, 2013
There’s been plenty of research on the different ways that poverty can take a toll on children — health, literacy and behavioral problems, for instance. A new study now looks at how socioeconomic status can affect hearing.
Erika Skoe, a University of Connecticut professor in the department of speech, language, and hearing sciences, has looked at how children’s backgrounds can shape the way their brains interpret different sounds.
The differences could be contributing to the achievement gap in learning between children of different economic backgrounds, Skoe said, and identifying them could be a step toward narrowing that division.
The study was published last month in the Journal of Neuroscience.
The research builds on a 1995 study by researchers Betty Hart and Todd R. Risley, who spent three years monitoring the conversations of 42 families with children between 7 and 9 months old.
They found that children from families receiving welfare heard an average of 616 words per hour while children from professional families heard an average of 2,153 words per hour. The study gave rise to the phrase “the 30 million word gap.”
Exactly why there’s such a stark difference in word usage among families of different means is unknown, but Nina Kraus, professor of neurobiology at Northwestern University in Chicago and co-author of the most recent study, pointed to some possible reasons.
“There’s less emphasis on reading, and if [the mother] has less education, it may coincide with other environmental factors — perhaps the mom is holding down more jobs and the kid is watching more TV,” she said.
Hearing words on television, the researchers said, doesn’t add to linguistic development.
“Those same sounds are going to be uttered whether the kid is interacting with the TV or not, so it’s not a dynamic experience,” Skoe said. “The TV isn’t going to correct a kid if they misspeak.”
Skoe and her fellow researchers wanted to see how deeply this word gap takes hold in some children. For instance, does it have origins deep in the brain?
To find out, they tested the hearing of 66 ninth-grade students from the Chicago area. The students were divided into two groups according to their mothers' education levels. In one group, the students' mothers had a high school degree or less.
Mothers of the students in the other group had at least some college, and most had an associate’s degree or higher. Maternal education levels, Skoe said, are a reliable indicator of income levels.
For the test, an earphone that emitted the sounds of different syllables was placed in one ear of each child. The children, who watched a movie while taking the test, wore caps with scalp sensors that picked up their brain activity. The test picks up subtle differences in auditory processing, Skoe said. Even though a child may pass a standard hearing test "with flying colors," his or her brain might be processing sounds abnormally.
With an electroencephalography (EEG) machine, the researchers could measure the activity in the auditory brainstem, a part of the brain that responds to sound stimuli automatically. Among those whose mothers had received more education, the sounds were processed more faithfully. The researchers could tell this by looking at the brain waves, which closely resembled the sound waves of the audio signal.
In those whose auditory processing proved faulty, the brain waves differed from the sound waves. What’s more, how they processed the same sound on subsequent tests differed each time. That’s important, Skoe said, because those differences make it very difficult to quickly discern the meaning of a sound.
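The two measures described here — how closely the brain wave tracks the sound wave, and how much the response changes from one trial to the next — can both be captured by a simple correlation. The following sketch uses made-up signals, not the study's data, to illustrate the idea:

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A stand-in "syllable": a 100-sample sine wave.
stimulus = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]

def simulated_response(noise_level):
    """Stimulus plus random noise, standing in for one recorded trial."""
    return [s + random.gauss(0, noise_level) for s in stimulus]

# Faithful processing: responses track the stimulus and repeat consistently.
faithful_trials = [simulated_response(0.2) for _ in range(2)]
# Degraded processing: weaker tracking, more change from trial to trial.
noisy_trials = [simulated_response(2.0) for _ in range(2)]

print(pearson(stimulus, faithful_trials[0]))  # high: brain wave resembles the sound wave
print(pearson(stimulus, noisy_trials[0]))     # lower: degraded tracking
print(pearson(*faithful_trials))              # high: consistent across trials
print(pearson(*noisy_trials))                 # lower: variable across trials
```

In this toy model, the noisier "listener" scores lower on both comparisons, mirroring the pattern the researchers report: less faithful tracking of the stimulus and more variability across repeated presentations.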
“The analogy would be like listening to a telephone where there was static in the background,” she said. “You’d be talking to someone, but there’d also be a ‘shhhhhhhh’ at the same time.”
Overwhelmingly, the students with the most variability in their auditory processing were in the group with mothers with low education levels.
Based on the 1995 study and several follow-up studies, Skoe said, it's likely that the children whose brains processed sound less accurately experienced a lack of conversation and other adverse conditions, such as noise pollution.
“We’re making assumptions based on a lot of research out there that has shown an association between socioeconomic status and these adverse conditions,” Skoe said.
There's no easy way to create an environment of "auditory enrichment," especially for families struggling to get by, Skoe said.
But engaging their children in conversation as much as possible is one thing parents can do, she said. Creating opportunities for them to take up an instrument or learn a new language could also help.
Skoe, who came to UConn from Northwestern this fall, said she now wants to conduct further studies to get a better sense of exactly how and when auditory experiences shape the brain’s response to sounds.
“Our brains’ ability to process sounds, is that dictated by what happens to us early in life, or could it be that other experiences could come along?” she said. “You start playing a musical instrument, you start speaking a different language, can that override what’s happened [earlier] in life?”
Doug Whalen, a linguist and head of research at Haskins Laboratories in New Haven, agreed that there are no easy answers for closing the gap between families of different means.
“I have a great deal of sympathy for poorer parents because they’re struggling with all sorts of demands that middle-class parents don’t,” said Whalen, who was not part of the study. “Talking to [their children] more is a good first step.”
He said the study is a good addition to the increasing awareness of the importance of early language.
“This particular assessment of brain activity is really new,” he said.
Copyright © 2013, The Hartford Courant