A device capable of detecting and deciphering brain signals could give the gift of speech back to people who have lost the ability.
Electrodes fitted in the human brain can pick up on what a person wants to say and transform it into a digital signal, which is then spoken by a voice synthesiser.
It was tested on five volunteers who already had electrodes in their brains as a treatment for epilepsy, and the machine was able to speak at up to 150 words per minute.
This, the developers claim, is much faster than existing technology and equivalent to normal human conversation.
The technology relies on brain signals that control the parts of the face and throat involved in speech, such as movements of the jaw, larynx, lips and tongue.
It was found that, although these signals are very complex, they are similar in most people.
Natural speech production involves more than 100 facial muscles, according to the scientists from the University of California.
The technology could give people the ability to talk again, as long as they are able to imagine mouthing the words.
Signals from the brain are fed into a neural network computer linked to a voice synthesiser, similar to that used by the late Stephen Hawking – but far quicker.
The famed academic, who suffered from motor neurone disease for most of his adult life, was only able to speak around ten words a minute.
He produced this speech by using his cheek to select words, a relatively slow process despite being the best technology available at the time.
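The gap between the two systems can be put in simple numbers, using the rates quoted above:

```python
synthesiser_wpm = 150  # rate reported for the new brain-signal decoder
hawking_wpm = 10       # rate cited for Hawking's cheek-operated system

# The new system is roughly fifteen times faster
print(synthesiser_wpm / hawking_wpm)  # 15.0
```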
At the start of the study, patterns of electrical activity were recorded from the brains of the volunteers as they spoke several hundred sentences aloud.
All the volunteers, four women and one man, were epilepsy patients who had undergone surgery to implant an electrode array on to their brain surfaces.
The passages were taken from well-known children’s stories, including Sleeping Beauty, The Frog Prince, and Alice In Wonderland.
Armed with the recordings, the US team devised a system capable of translating brain signals responsible for individual movements of the vocal tract.
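The two-stage idea described above, first decoding brain activity into vocal-tract movements and then turning those movements into sound, can be sketched as a toy calculation. The channel counts and feature sizes below are invented for illustration, and simple random linear maps stand in for the neural networks the researchers actually trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only, not figures from the study
N_ELECTRODES = 256   # ECoG channels recorded from the brain surface
N_ARTIC = 33         # articulatory features (jaw, larynx, lips, tongue...)
N_ACOUSTIC = 32      # acoustic features fed to a voice synthesiser

# Random matrices as placeholders for the two trained decoding stages
stage1 = rng.standard_normal((N_ARTIC, N_ELECTRODES)) * 0.1
stage2 = rng.standard_normal((N_ACOUSTIC, N_ARTIC)) * 0.1

def decode(ecog_frames):
    """Map ECoG frames -> vocal-tract movements -> acoustic features."""
    articulation = np.tanh(stage1 @ ecog_frames)  # stage 1: brain -> movement
    acoustics = stage2 @ articulation             # stage 2: movement -> sound
    return acoustics

# One second of toy neural data at 200 frames per second
frames = rng.standard_normal((N_ELECTRODES, 200))
print(decode(frames).shape)  # (32, 200): acoustic features over time
```

The point of the intermediate step is that the decoder never guesses words directly; it reconstructs the physical movements of speech, which are then sounded out.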
In trials of 101 sentences, volunteer listeners were easily able to understand and transcribe the synthesised speech.
The research, led by Dr Edward Chang from the University of California, San Francisco, is reported in the latest issue of the journal Nature.
The scientists wrote: ‘Listeners were able to transcribe synthesised speech well.
‘Of the 101 synthesised trials, at least one listener was able to provide a perfect transcription for 82 sentences with a 25-word pool and 60 sentences with a 50-word pool.’
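Expressed as proportions, the figures quoted in the paper work out as follows:

```python
SENTENCES = 101                   # synthesised trials in the listening test
perfect_25 = 82 / SENTENCES       # perfectly transcribed, 25-word pool
perfect_50 = 60 / SENTENCES       # perfectly transcribed, 50-word pool

print(f"{perfect_25:.0%} vs {perfect_50:.0%}")  # 81% vs 59%
```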
WHAT IS ELECTROCORTICOGRAPHY?
Electrocorticography (ECoG), or intracranial electroencephalography (iEEG), is a type of monitoring that uses electrodes placed directly on the brain.
It records electrical activity from the cerebral cortex.
It works similarly to conventional electroencephalography (EEG) electrodes, which monitor brain activity from outside the skull.
ECoG may be performed either in the operating room during surgery or outside of surgery.
Because a craniotomy (a surgical incision into the skull) is required to implant the electrode grid, ECoG is an invasive procedure.
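Analyses of ECoG recordings often look at how much of a channel's activity falls in a particular frequency band. The sketch below is generic; the sampling rate and band edges are illustrative assumptions, not figures from the study:

```python
import numpy as np

FS = 1000            # sampling rate in Hz (assumed for illustration)
LOW, HIGH = 70, 150  # 'high-gamma' band often analysed in ECoG work

def band_power(signal, fs=FS, low=LOW, high=HIGH):
    """Fraction of a channel's spectral power inside the chosen band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    in_band = (freqs >= low) & (freqs <= high)
    return spectrum[in_band].sum() / spectrum.sum()

# A pure 100 Hz tone sits entirely inside the chosen band
t = np.arange(0, 1, 1 / FS)
channel = np.sin(2 * np.pi * 100 * t)
print(round(band_power(channel), 2))  # 1.0
```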
They added: ‘Our results may be an important next step in realising speech restoration for patients with paralysis.’
In a second part of the study, one participant was asked to speak sentences and then mime them without making a sound.
The decoder was able to read the brain signals associated with the mime and translate them into synthesised speech.
The electrocorticography technique is used to monitor electrical activity in the cerebral cortex.
Professor Sophie Scott at University College London (UCL), who was not involved with the study, said: ‘This is very interesting work from a great lab but it must be noted that it is at very early stages and is not close to clinical applications yet.
‘This work asked listeners to try and recognise the speech produced through this technique but they were able to select from a closed set of options – a list of 25 or 50 words.
‘This makes the task easier and likely increases recognition rates as they are choosing from a small selection.
‘Compare that to the 12,000 words a 12 year old human knows or the 23,000 words an adult does and you can see that there is some way to go to having full real-world relevance.
‘It will be interesting to follow the progress of this work.’