On August 22, 2008, University of Washington Electrical Engineering Ph.D. candidate Jon Malkin spoke about the Vocal Joystick (VJ) project at the Gnomedex 8.0 tech conference.
The Vocal Joystick is software that allows people with physical disabilities to use computers more freely. It maps vowel sounds to spatial directions, letting someone interact with a computer or a mechanical device such as a robotic arm using his or her voice. While traditional speech-recognition software like Dragon NaturallySpeaking is good for typing, using it to control a computer mouse—by saying “up,” “down,” “left,” “right,” and so on—is awkward, which makes such software unhelpful for playing many video games. The VJ instead uses volume, pitch, and vowel quality to create continuous movement, similar to how one’s hand controls a computer mouse. (In fact, you can see someone use the VJ to play a Flash game 7 min, 21 sec into Malkin’s talk above.)
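The core idea—a classified vowel picks a direction while loudness scales speed—can be sketched in a few lines. This is only an illustration: the vowel-to-direction assignments, the `gain` parameter, and the function names below are assumptions for the sketch, not the actual VJ engine, which uses a trained classifier over continuous vowel space.

```python
# Illustrative vowel-to-direction table; the real VJ engine's vowel
# assignments and acoustic classifier are not reproduced here.
VOWEL_DIRECTIONS = {
    "a": (0.0, -1.0),   # hypothetical: "ah" moves the cursor down
    "i": (0.0, 1.0),    # hypothetical: "ee" moves the cursor up
    "u": (-1.0, 0.0),   # hypothetical: "oo" moves left
    "e": (1.0, 0.0),    # hypothetical: "eh" moves right
}

def cursor_velocity(vowel: str, loudness: float, gain: float = 200.0):
    """Map a classified vowel and its loudness (0..1) to an (x, y)
    cursor velocity in pixels per second. Louder sound -> faster
    motion, which is what makes the control continuous rather than
    the step-wise "up"/"down" commands of dictation software."""
    dx, dy = VOWEL_DIRECTIONS[vowel]
    speed = gain * loudness
    return (dx * speed, dy * speed)
```

So a quiet “eh” nudges the cursor right slowly, while a loud “eh” sends it right quickly—the same gesture a hand makes by pushing a mouse faster.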
While the project is still in progress, in one 2006 study (PDF here) with nondisabled subjects, the VJ did as well as or better than an eye-tracking device at target acquisition (e.g., clicking on a circular object that appears in different portions of the computer screen) and web browsing. Another study, involving more than forty children ages six to fourteen, showed that kids created their own sounds when they could not produce the pre-programmed ones exactly. As the study concludes: “This sort of feedback is invaluable in redesigning the VJ system to be more customized for the human perceptional system. Since this initial demonstration, we have greatly improved the VJ-engine.”
The Vocal Joystick is available for Windows and Linux, and there are plans to release the source code in the future.