Berkeley EECS pioneers AI brain implant to restore speech

Clinical research coordinator Max Dougherty connects a neural data port in Ann’s head to the speech neuroprosthesis system as part of a study led by Dr. Ed Chang at UCSF.

A team of researchers from UCSF and Berkeley EECS has developed an implantable, AI-powered device that can translate brain signals into modulated speech and facial expressions. The device, which pairs a multimodal speech prosthesis with a digital avatar, was developed to help a woman who lost the ability to speak after a stroke. The results have the potential to help countless others who are unable to speak due to paralysis or disease.

The breakthrough study, published in the journal Nature, was led by UCSF neurosurgeon Edward Chang, EE Assistant Professor Gopala Anumanchipalli, and Ph.D. student Kaylo Littlejohn.

“This study heavily uses tools that we developed here at Berkeley, which in turn are inspired by the neuroscientific insights from UCSF,” said Anumanchipalli. “This is why Kaylo is such a key liaison between the engineering and the science and the medicine — he’s both involved in developing these tools and also deploying them in a clinical setting. I could not see this happening anywhere else but somewhere that is the best in engineering and the best in medicine, on the bleeding edge of research.”