Apple’s voice-enabled personal assistant Siri has become a staple for millions of iPhone users around the world, but millions of Americans still can’t use it because of speech disabilities. As Scientific American notes, voice-enabled computer technology “cannot be used by more than nine million people in the U.S. with voice disabilities like Mattes nor by stutterers or those afflicted with cerebral palsy and other disorders.”
One particular problem for voice-enabled technologies is slurred speech, which produces sounds that could be interpreted as several different words. In fact, research recently published in Speech Communication found that people with dysarthria have speech recognition rates between 26% and 82% lower than the rest of the population.
Compounding the problem, not everyone with dysarthria slurs their speech the same way, so there is no single adjustment software developers can make to account for the different speech patterns. One possible solution is using your phone’s camera to read your lips, filling in the blanks that the microphone can’t pick up from your voice alone.
For a more detailed analysis of why speech recognition tech is still inaccessible for millions of people, check out the full Scientific American report here.