Researchers in California have developed an AI-powered system that allows individuals with paralysis to speak in real time using their own voices. The technology, a breakthrough in brain-computer interface (BCI) research, was created by scientists at the University of California, Berkeley, and the University of California, San Francisco.
The system uses neural interfaces to measure brain activity and AI algorithms to reconstruct speech patterns. Unlike previous systems, it allows for near-instantaneous speech synthesis, achieving a level of fluency and naturalness not previously seen in neuroprostheses. “Our streaming approach is a major leap forward,” said Gopala Anumanchipalli, one of the study's lead researchers.
The device works with a variety of brain-sensing interfaces, including high-density electrode arrays and microelectrodes, as well as non-invasive sensors that measure muscle activity. It samples neural activity from the motor cortex, the brain region that controls speech production, and AI algorithms decode that activity into audible speech within a second.
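The key idea behind the "streaming" approach described above is that audio is emitted incrementally, one short window of neural data at a time, rather than after the whole utterance has been recorded. The sketch below is purely illustrative and is not the researchers' actual model: the window size, channel count, and the trivial averaging "decoder" are all placeholder assumptions standing in for a trained neural network.

```python
CHUNK_MS = 80  # hypothetical decoding window; for illustration only


def decode_chunk(features):
    """Stand-in for a trained neural-to-audio decoder.

    Here we simply average the channel readings into one output
    value; a real system would run a neural network per window.
    """
    return sum(features) / len(features)


def streaming_decode(feature_stream):
    """Emit decoded audio chunk by chunk as windows arrive,
    instead of waiting for the full utterance -- the core
    'streaming' idea behind low-latency speech synthesis."""
    for features in feature_stream:
        yield decode_chunk(features)


# Simulated motor-cortex features: 10 windows of 8 channel readings.
stream = ([float(i)] * 8 for i in range(10))
audio = list(streaming_decode(stream))
print(audio[:3])  # early chunks are available almost immediately
```

Because the generator yields output as soon as each window is decoded, the latency a listener experiences is on the order of one window (here, a hypothetical 80 ms) rather than the length of the whole sentence.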
This breakthrough could significantly improve the lives of patients with conditions such as ALS or severe paralysis, offering them a way to communicate more naturally. Though the technology is still evolving, it promises to substantially expand communication options for people with speech impairments.