New technology could enable smart devices to recognize, interpret sign language
Professor Roozbeh Jafari
A smart device that translates sign language while being worn on the wrist could bridge the communications gap between the deaf and those who don’t know sign language, says a Texas A&M University biomedical engineering researcher who is developing the technology.
The wearable technology combines motion sensors and the measurement of electrical activity generated by muscles to interpret hand gestures, says Roozbeh Jafari, associate professor in the university’s Department of Biomedical Engineering and researcher at the Center for Remote Health Technologies and Systems.
Although the device is still in its prototype stage, it can already recognize 40 American Sign Language words with nearly 96 percent accuracy, notes Jafari, who presented his research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June. The technology was among the top award winners in the Texas Instruments Innovation Challenge this past summer.
The technology, developed in collaboration with Texas Instruments, reflects a growing interest in high-tech sign language recognition (SLR) systems, but unlike other recent initiatives, Jafari’s system foregoes the use of a camera to capture gestures. Video-based recognition, he says, can suffer performance issues in poor lighting conditions, and the videos or images captured may be considered invasive to the user’s privacy. What’s more, because these systems require a user to gesture in front of a camera, they have limited wearability, and wearability, for Jafari, is key.
“Wearables provide a very interesting opportunity in the sense of their tight coupling with the human body,” Jafari says. “Because they are attached to our body, they know quite a bit about us throughout the day, and they can provide us with valuable feedback at the right times. With this in mind, we wanted to develop a technology in the form factor of a watch.”
To capture the intricacies of American Sign Language, Jafari’s system makes use of two distinct sensors. The first is an inertial sensor that responds to motion. Consisting of an accelerometer and a gyroscope, the sensor measures the accelerations and angular velocities of the hand and arm, Jafari notes. It plays a major role in discriminating between signs by capturing the hand’s orientation and the hand and arm movements during a gesture.
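The article doesn’t include any of the project’s code, but as a rough, hypothetical illustration, the Python sketch below shows the kind of motion summary such an inertial sensor could feed a recognizer. The window length, sampling rate, and choice of features here are assumptions for the example, not details of Jafari’s actual system.

```python
import numpy as np

def imu_motion_features(accel, gyro):
    """Summarize one gesture window of inertial data.

    accel, gyro: arrays of shape (n_samples, 3) holding accelerometer
    readings and gyroscope readings for the x, y, z axes. Returns a
    flat vector of per-axis means and standard deviations, a common
    starting point for gesture classification (an assumed feature set,
    not the published one).
    """
    feats = []
    for signal in (accel, gyro):
        feats.append(signal.mean(axis=0))  # average orientation / rotation
        feats.append(signal.std(axis=0))   # movement intensity per axis
    return np.concatenate(feats)

# Example with synthetic data: a 2-second window at an assumed 100 Hz.
rng = np.random.default_rng(0)
accel = rng.normal(0.0, 0.5, size=(200, 3))
gyro = rng.normal(0.0, 20.0, size=(200, 3))
print(imu_motion_features(accel, gyro).shape)  # (12,)
```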
However, a motion sensor alone wasn’t enough, Jafari explains. Certain signs in American Sign Language are similar in terms of the gestures required to convey the word: the overall movement of the hand may be the same for two different signs, but the movement of individual fingers may differ. For example, the respective gestures for “please” and “sorry” and for “name” and “work” are similar in hand motion. To discriminate between these types of hand gestures, Jafari’s system makes use of another type of sensor, one that measures muscle activity.
Known as a surface electromyographic (sEMG) sensor, it noninvasively measures the electrical activity generated by muscles, Jafari explains, and is used to distinguish various hand and finger movements based on their distinct muscle-activity patterns. Essentially, it’s good at capturing finger movements and the muscle activity of the hand and arm, working in tandem with the motion sensor to provide a more accurate interpretation of the gesture being signed, he says.
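Again purely as illustration, the sketch below computes three time-domain statistics that are commonly used to summarize sEMG windows in gesture work: mean absolute value, root-mean-square amplitude, and zero-crossing count. The channel count and noise threshold are assumed values, not specifics of Jafari’s device.

```python
import numpy as np

def semg_features(emg, threshold=0.01):
    """Time-domain features for one window of sEMG data.

    emg: array of shape (n_samples, n_channels) of muscle-activity
    voltages. Returns mean absolute value (MAV), root-mean-square
    amplitude (RMS), and zero-crossing counts per channel, classic
    descriptors of how strongly and how quickly muscles are firing.
    """
    mav = np.abs(emg).mean(axis=0)
    rms = np.sqrt((emg ** 2).mean(axis=0))
    # Count sign changes larger than an assumed noise threshold.
    crossings = (
        (np.sign(emg[:-1]) != np.sign(emg[1:]))
        & (np.abs(emg[:-1] - emg[1:]) > threshold)
    ).sum(axis=0)
    return np.concatenate([mav, rms, crossings.astype(float)])

# Example: a 2-second window from a hypothetical 8-channel sensor.
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 0.05, size=(200, 8))
print(semg_features(emg).shape)  # (24,)
```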
“These two technologies are complementary to each other, and the fusion of these two systems will enhance the recognition accuracy for different signs, making it easier to recognize a large vocabulary of signs,” Jafari says.
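The article doesn’t say which classifier the prototype uses, but feature-level fusion of the two modalities can be sketched generically: concatenate the inertial and sEMG feature vectors for each gesture window and train any off-the-shelf classifier on the result. The support vector machine, the feature sizes, and the synthetic data below are stand-ins, not the team’s actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in: 200 gesture windows, each already reduced to
# 12 inertial features and 24 sEMG features (hypothetical sizes).
X_imu = rng.normal(size=(200, 12))
X_emg = rng.normal(size=(200, 24))
y = rng.integers(0, 40, size=200)  # 40 ASL words, as in the article

# Feature-level fusion: concatenate the two modalities per window.
X = np.hstack([X_imu, X_emg])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

With random features like these, the held-out score hovers near chance (about 1 in 40); it is the real, complementary information in the motion and muscle signals that the fusion Jafari describes is meant to supply.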