“This loop likely has cascading impacts over the course of a child’s development,” says psychological scientist and study author Anne S. Warlaumont of the University of California, Merced. “Understanding how it works and being able to monitor its components while the children go about their daily lives may eventually lead to better strategies for helping parents and other adults interact most effectively with autistic children.”
Recent technological advances enable people to record all the sounds children make and hear during the course of the day and to automatically label that data, Warlaumont says. With these tools, researchers can detect subtle moment-to-moment effects that child and caregiver have on each other.
“These local effects appear to add up over the millions of exchanges children experience over the first few years of life, resulting in substantial differences in the types of sounds kids produce,” she explains.
Warlaumont and her co-authors at the LENA Research Foundation and the University of Memphis studied 13,836 hours of daylong audio recordings of caregivers and children, ages 8 months to 4 years, to better understand how parents respond to children’s sounds. One hundred and six of the children were typically developing and 77 had autism. The LENA Research Foundation collected the data.
The data revealed that adults are more likely to respond immediately to a child’s vocalization when it is speech-related. In turn, children who receive such responses are more likely to produce further vocalizations. Together, these contingent exchanges form a social feedback loop that promotes speech development.
However, the data also showed that autistic children produce fewer vocalizations and that adults’ responses to them are less contingent on whether the vocalizations are speech-related. As a result, the feedback loop occurs less often and is less effective, reducing the child’s opportunities to learn from social interaction.
“Our simulations provide further support that these differences may account for the slower growth in speech-related vocalization production that we see in autism compared with typical development,” says Warlaumont.
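The dynamics the researchers describe can be illustrated with a toy simulation. To be clear, this is not the authors’ model: the probabilities, reinforcement size, and update rule below are invented for illustration only. The sketch assumes that a contingent adult response to a speech-related vocalization slightly raises the child’s future odds of producing speech-related sounds, and shows that a higher response contingency yields faster growth:

```python
import random

def simulate(contingency, n_steps=100_000, seed=0):
    """Toy sketch (not the authors' model) of the social feedback loop:
    a child emits vocalizations; when a vocalization is speech-related,
    an adult responds with probability `contingency`, and each contingent
    response nudges up the child's future odds of speech-related sounds.
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    p_speech = 0.3  # assumed initial chance a vocalization is speech-related
    speech_count = 0
    for _ in range(n_steps):
        speech_like = rng.random() < p_speech
        if speech_like:
            speech_count += 1
            # adult responds contingently with the given probability
            if rng.random() < contingency:
                # small assumed reinforcement effect, capped below 1
                p_speech = min(0.95, p_speech + 0.0001)
    return speech_count, p_speech

# A child whose speech-related sounds are reliably answered (high
# contingency) ends up producing speech-related vocalizations at a
# higher rate than one whose environment responds indiscriminately.
high = simulate(contingency=0.9)
low = simulate(contingency=0.1)
```

Under these made-up parameters, the high-contingency run ends with both more speech-related vocalizations and a higher per-step probability of producing them, mirroring the cumulative “local effects add up” picture described above.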
The research was made possible by a small audio recorder worn by each child all day long. The recordings were processed using technology — called Language ENvironment Analysis (LENA) — that can identify who or what is making sound. The software can also detect the difference between speech-like sounds and crying or laughing.
The research also showed that socioeconomic status seems to affect the interactions making up the feedback loop. Higher maternal education was associated with increased rates of child vocalization as well as increased sensitivity of adult responses to the type of vocalization a child produced. Both these differences are expected to promote faster speech development in high-socioeconomic-status families.
In addition to Warlaumont, co-authors include Jeffrey A. Richards of the LENA Research Foundation, Jill Gilkerson of the LENA Research Foundation and University of Colorado at Boulder, and D. Kimbrough Oller of the University of Memphis and Konrad Lorenz Institute for Evolution and Cognition Research.
This work was supported by a DOE CSGF (DE-FG02-97ER25308), the NIDCD (R01 DC011027), the Plough Foundation, and the LENA Research Foundation.
Contact: Anna Mikulak
Association for Psychological Science