The intersection of psychology, linguistics, and technology is an exciting place. At Arts & Sciences, dozens of professors, graduate students, and undergraduates conduct pathbreaking research that is rapidly expanding the boundaries of our understanding of that most human of capacities—the use and comprehension of language. Following are some stories of these explorers.
Fifteen years ago, geneticists studying autism thought there were between 6 and 10 genes controlling the disorder. Today, as a result of rapid advances in genetics research, at least 100 risk variants are thought to be involved, with more found each year. In addition, the definition of autism has expanded in recent years to include a range of autism spectrum disorders, such as Asperger’s syndrome, which vary in severity. Marked by social isolation, trouble with language, and repetitive behaviors, autism spectrum disorders affect an estimated one in 110 children, according to the Centers for Disease Control and Prevention.
Finding a pharmaceutical intervention for such a complex disorder, or even developing early genetic testing, is a great challenge. But Arts & Sciences Professor of Psychology Helen Tager-Flusberg is tackling autism from a different angle, seeking early behavioral and brain signs of autism in the hopes that early detection will lead to better early therapies. Effectively teaching social and communication skills to infants at risk for autism can help lessen the impact of the disorder in later life.
In collaboration with Dr. Charles Nelson at Children’s Hospital Boston, Tager-Flusberg is studying the very early signs of autism in infants, including differences in gestural communication, like head-nodding or waving “bye-bye”; vocalization and early language development; right and left hemisphere brain organization; and electrical activity in the brain as measured by electroencephalography (EEG).
With funding from the National Institutes of Health and the Simons Foundation, Tager-Flusberg, Nelson, and the graduate student and postdocs assisting them work with around 180 children who are at risk for autism. These children are considered at risk because they have siblings or other close relatives with the disorder. The researchers begin studying the infants when they are between three and six months old; their parents bring them to Children’s Hospital Boston for regular lab visits, behavioral assessments, and EEG scans.
By cataloguing the differences in brain development between those at risk for autism and those not at risk, Tager-Flusberg and her colleagues hope to point the way toward behavioral interventions that could reduce the severity of the disorder. So far, however, there is no silver bullet. “There is no single thing we can point to that says, ‘I see this or that, and it is predicting an autism diagnosis,’” explains Tager-Flusberg. “Our model is more that it is likely to be a combination of factors.”
One of the earliest signs of differentiation between infants at risk for autism and those who are not can appear when babies are as young as six months old. At that age, at-risk infants’ brains start to display lower electrical activity on EEG scans than do the brains of infants not at risk for the disorder. As with other variations between the children in the high-risk pool and typical children, lower EEG activity alone does not predict that a child will develop autism. Some in the at-risk group do develop autism, while most do not. This unpredictability is what makes finding a method for early diagnosis so elusive.
Sometime between the ages of 9 and 12 months, the high-risk children’s brains start to form language-related connections in a very different way from low-risk children’s brains. For the latter, language processing is concentrated in the left hemisphere, with very strong connections between the frontal and posterior language areas of the brain. In high-risk children, however, language processing happens on both the left and right sides of the brain.
As with the electrical activity differences, the differences in language processing do not predict whether or not a child will develop autism. But they are characteristic of the children in the at-risk pool. Tager-Flusberg and her colleagues are tracking their subjects to see if this brain organizational difference persists, or if at-risk children’s brains develop a left-hemisphere organization later on.
The final difference the researchers have seen between at-risk children and others is that by 12 months the at-risk children gesture less. Gestures are a larger part of communication than we often realize, and infants at risk for autism are not learning how to communicate with gestures to the extent that other children are.
This failure to pick up gestures as quickly is one sign of a communications gap that can widen dramatically as those toddlers who will go on to be diagnosed with autism grow older.
Typically, infants are able to multitask, playing with toys while simultaneously tuning in to whatever their mothers or others around them are communicating, including words and gestures. High-risk infants, however, do not seem to possess this ability. When they are playing with a toy, they are laser-focused on it to the exclusion of all other stimuli. “When these children are playing with a toy, they are very engrossed with it, so when their mother is gesturing to them they are not gaining any advantage in their communicative and language development,” explains Tager-Flusberg. “Parents need to be much more attuned to children and engage them more. I don’t mean by that to completely bombard them, but parents should probably insert themselves more.”
The goal for Tager-Flusberg and her colleagues, as well as for the other scientists around the world studying this at-risk population, is to develop better early clinical interventions for children at risk for autism. Interventions would likely include guidelines for parents on how best to interact with their children to help them develop stronger communication skills at an early age.
“I think it would be very interesting to investigate whether behavioral treatments in this population lead to changes in the brain, because of the brain’s plasticity,” says Tager-Flusberg. “I know people think that ultimately biology is going to provide all of the answers. I actually have a lot of optimism that behavioral treatments are the way to go.”
In Plato’s Cratylus, Socrates describes the way that deaf people communicate via hand and head gestures. The Mishnah, a second-century redaction of Jewish oral traditions compiled in Judea, mentions deaf people communicating with each other using hand signs and lip movements. Sign languages are as old as human society. But until recent decades, linguists paid little attention to them, and many did not even consider them to be full-fledged languages.
We now know differently, thanks to the pioneering research of scientists like Arts & Sciences Professor of Linguistics and French Carol Neidle. Neidle has spent much of the past two decades compiling and analyzing a video database of American Sign Language (ASL). Multiple synchronized videos, showing native ASL signers from several angles, have been collected through a collaboration between linguists and computer scientists, including CAS Chair of Computer Science Stanley Sclaroff and his students. Linguists, including many BU undergraduates and graduate students, have then carried out painstaking annotations, capturing in minute detail the manual signing as well as the facial expressions, head movements, and eye gestures that are as much a part of sign language as the hand signs themselves. Neidle’s linguistic research has deepened our understanding of ASL through study of variations among signers and of the integral connection between facial gestures and hand signs.
For the past four years, Neidle and Sclaroff, along with computer science doctoral student Ashwin Thangali (GRS’12), have been using a lexicon database they created specifically for this research to develop computer algorithms that can analyze sign language videos and determine which signs are being produced. Neidle and Sclaroff are collaborating on this National Science Foundation (NSF)-funded project with Vassilis Athitsos (GRS’06) of the University of Texas at Arlington. Over the years, there have also been collaborations with Dimitris Metaxas, professor of computer science at Rutgers University, and with Benjamin Bahan (GRS’96) and Christian Vogler (both Deaf), who are on the faculty at Gallaudet University. For example, Neidle and Metaxas now have two active NSF-funded research projects focusing on computer-based recognition of linguistically significant facial expressions.
The implications of this research are wide-ranging. Neidle and Sclaroff hope that Sclaroff’s search algorithms will become the basis for sign lookup in multimedia sign language dictionaries and in aids for ASL learners, educators, and linguists. An ASL learner, for example, could look up an unknown sign he or she has just seen.
“Right now, if you’ve seen a sign and don’t know what it means, it is difficult to look that sign up,” explains Neidle. That is because many current dictionaries list signs based on English translations, despite the fact that there is no one-to-one equivalency between ASL signs and English words. “There is a kind of catch-22 here, since it is only possible to look up a sign if you already know its meaning,” says Neidle. Although there are some dictionaries that allow the user to specify properties of the articulation of ASL signs, it is very time-consuming to search in this way.
Neidle and Sclaroff’s technology will allow an ASL signer to record a video of a sign, or specify an unknown sign in an existing video, and then have the computer identify the sign; this could ultimately allow the user to access that sign in an ASL dictionary, for example. “We hope the tools we are creating would someday allow translation of ASL into English,” says Neidle. “We are developing capacity in that direction.”
The challenges in creating effective ASL search algorithms are enormous, however. Individuals sign slightly differently from one another, just as many English speakers have slightly different accents or pronunciations. For instance, one person’s hands might start clenched while another’s might start open when making the same sign.
The algorithms developed by Sclaroff and Thangali look at the start and end positions of the hands for each sign, as well as their trajectory. “It’s not always easy,” says Sclaroff. “You have to distinguish what is moving and not. Hands move fast, and the signer can be wearing short sleeves or long sleeves, or the background can be a color similar to their skin color.”
To pare down the options the computer must explore, Sclaroff has constrained searches to consider only signs whose starting hand shape is similar to that of the query. This means the computer searches among only a limited number of possible ending hand shapes, rather than comparing the signer’s movements to every sign in ASL. Still, the technology is not as accurate as Sclaroff and Neidle would like. Given an example of a sign, the algorithms will retrieve a number of videos from the database that contain the sign, but they will also pull up videos that don’t contain it.
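To make the pruning idea concrete, here is a minimal sketch in Python of that kind of two-stage lookup: filter the lexicon by starting hand shape, then rank the survivors by ending hand shape and hand trajectory. Everything in it, from the feature vectors to the distance measures to names like lookup and shape_distance, is an assumption made for illustration; it is not the actual system built by Sclaroff and Thangali.

```python
# Illustrative two-stage sign lookup: prune by starting hand shape,
# then rank candidates by ending hand shape and hand trajectory.
# All data structures and distance measures here are assumptions made
# for this sketch, not the algorithms actually used in the BU project.
from dataclasses import dataclass

import numpy as np


@dataclass
class LexiconEntry:
    gloss: str                # label for the sign, e.g. "BOOK"
    start_shape: np.ndarray   # feature vector describing the starting hand shape
    end_shape: np.ndarray     # feature vector describing the ending hand shape
    trajectory: np.ndarray    # (T, 2) array of hand positions over time


def shape_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two hand-shape feature vectors."""
    return float(np.linalg.norm(a - b))


def trajectory_distance(a: np.ndarray, b: np.ndarray, n: int = 20) -> float:
    """Compare two trajectories after resampling each to n points."""
    def resample(t: np.ndarray) -> np.ndarray:
        idx = np.linspace(0, len(t) - 1, n).round().astype(int)
        return t[idx]
    return float(np.linalg.norm(resample(a) - resample(b)))


def lookup(query: LexiconEntry, lexicon: list[LexiconEntry],
           shape_threshold: float = 1.0, top_k: int = 5) -> list[str]:
    # Stage 1: keep only signs whose starting hand shape is close to the
    # query's, so the system never has to score every sign in the lexicon.
    candidates = [e for e in lexicon
                  if shape_distance(query.start_shape, e.start_shape) <= shape_threshold]
    # Stage 2: rank the surviving candidates by how well their ending hand
    # shape and hand trajectory match the query; return the best few glosses.
    ranked = sorted(
        candidates,
        key=lambda e: (shape_distance(query.end_shape, e.end_shape)
                       + trajectory_distance(query.trajectory, e.trajectory)),
    )
    return [e.gloss for e in ranked[:top_k]]
```

Returning a ranked list rather than a single answer mirrors the behavior described above: the retrieved set includes videos that contain the sign along with some that do not.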
“People pick up on these things naturally without thinking about it,” explains Neidle. “But for machines, it is much harder. That is why we need to develop these algorithms, and even then it is a work in progress and educated guesswork.”