Researchers from Boston University in Massachusetts worked with a patient who has locked-in syndrome, a condition in which patients are almost completely paralysed — often able to move only their eyelids — but remain fully conscious. After implanting an electrode into his brain, they enabled him to speak by using a speech synthesizer to produce vowel sounds as he thinks them.

Frank Guenther, who led the study, and his colleagues first had to determine whether the man's brain could produce the same speech signals as a healthy person's. Once they had confirmed that it could, they implanted an electrode — designed by neuroscientist Philip Kennedy of the firm Neural Signals in Duluth, Georgia — into the speech-production areas of the man's brain.

This electrode differs from most electrodes used for brain–computer interfaces, which are fixed to the skull rather than embedded within a specific part of the brain. Skull-mounted electrodes can shift, making it difficult to record from the same neurons every time or to leave an electrode in place for more than a few months at a time. The implanted electrode, by contrast, was impregnated with neurotrophic factors, which encourage neurons to grow into and around it, anchoring it in one place and allowing it to be recorded from for much longer.

After implanting the electrode, the researchers used a computer model of speech to decode the signals coming from the man's brain and discern which vowel sounds he was thinking of. So far, the patient has been able to produce three vowel sounds with good accuracy, and as quickly as normal speech.
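The article does not detail the decoder itself, but the general idea of such systems can be pictured as mapping recorded neural activity to acoustic speech parameters and then to the nearest vowel. The sketch below is purely illustrative — the weights, the linear mapping, and the formant targets are assumptions for demonstration, not the study's actual model.

```python
# Hypothetical sketch of a vowel decoder: map neural firing rates to
# the first two formant frequencies (F1, F2) and pick the closest vowel.
# All weights and formant values are illustrative, not from the study.

# Illustrative linear decoder: each formant is a baseline plus a
# weighted sum of firing rates from four recorded units.
F1_WEIGHTS = [120.0, 40.0, -30.0, 10.0]
F2_WEIGHTS = [60.0, 300.0, 150.0, -80.0]
F1_BASELINE, F2_BASELINE = 300.0, 900.0  # resting formants in Hz

# Approximate formant targets (Hz) for three vowels.
VOWEL_TARGETS = {
    "i": (270.0, 2290.0),  # as in "heed"
    "a": (730.0, 1090.0),  # as in "hot"
    "u": (300.0, 870.0),   # as in "who"
}

def decode_vowel(firing_rates):
    """Estimate (F1, F2) from firing rates, return the nearest vowel."""
    f1 = F1_BASELINE + sum(w * r for w, r in zip(F1_WEIGHTS, firing_rates))
    f2 = F2_BASELINE + sum(w * r for w, r in zip(F2_WEIGHTS, firing_rates))
    # Nearest-neighbour classification in formant space.
    return min(VOWEL_TARGETS,
               key=lambda v: (f1 - VOWEL_TARGETS[v][0]) ** 2
                             + (f2 - VOWEL_TARGETS[v][1]) ** 2)

print(decode_vowel([0.0, 4.0, 1.0, 0.0]))  # high F2 pattern → "i"
```

In a real system the estimated formants would drive a speech synthesizer continuously, giving the patient audible feedback as he thinks each sound.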

Their long-term goal, within a five-year time frame, is to have the man use the speech brain–computer interface to produce words directly.

This is the first brain–computer interface made for speech; most current interfaces transmit signals from the region of the brain that controls movement to either a prosthetic arm or the subject's own arm.

The next step would be to train the computer decoder to recognize consonants, so that patients can form whole words, and perhaps even sentences. The researchers also hope that, as the technology develops, they can implant more electrodes in their next patient to transmit a more detailed signal.