How artificial intelligence gave a paralyzed woman her voice back

In a groundbreaking development, researchers from UC San Francisco and UC Berkeley have unveiled a brain-computer interface (BCI) that has enabled a woman with severe paralysis caused by a brainstem stroke to communicate through a digital avatar. The achievement, detailed in a study published on August 23, 2023, in the journal Nature, represents a pioneering breakthrough in the field of neurotechnology.

This innovative system marks the first instance in which both speech and facial expressions have been synthesized directly from brain signals. The BCI can decode neural signals into text at an impressive rate of nearly 80 words per minute, a significant improvement over existing commercial technologies.

Edward Chang, MD, who leads the neurological surgery department at UCSF and has dedicated over a decade to the development of this technology, envisions that this recent advancement could pave the way for an FDA-approved system enabling speech through brain signals in the near future. He underscores the ultimate goal of restoring a comprehensive and natural form of communication for individuals who have lost this ability due to neurological conditions.

The previous work of Chang’s team showcased the feasibility of translating brain signals into text for a man who had also experienced a brainstem stroke years earlier. However, the current study accomplishes something more ambitious: decoding brain signals into the intricate nuances of speech, along with the facial expressions that accompany conversation.

Chang’s approach involved implanting a thin panel with 253 electrodes onto specific speech-related regions of the woman’s brain. These electrodes intercepted the neural signals that would typically control muscles in the face, tongue, jaw, larynx, and other speech-related areas. A cable connected to a port on her head linked the electrodes to a cluster of computers.

Over several weeks, the participant collaborated closely with the research team to train the artificial intelligence algorithms of the system to recognize her unique brain signals associated with speech. This training process entailed repeating various phrases from a vocabulary of 1,024 words, allowing the computer to discern the neural patterns corresponding to different sounds.

Significantly, instead of teaching the AI to recognize complete words, the researchers devised a system that deciphers words from phonemes, the elemental speech units that constitute spoken words, akin to how letters form written words. By adopting this approach, the AI only needed to learn 39 phonemes to decode any English word. This strategy improved the system’s accuracy and tripled its speed.

Developed by graduate students Sean Metzger and Alex Silva from the joint Bioengineering Program at UC Berkeley and UCSF, the text decoder played a pivotal role in the system’s efficacy, speed, and vocabulary. The researchers emphasize that these factors are instrumental in enabling users to communicate at a pace that approaches natural conversations.

To give the system a voice, the researchers created an algorithm for synthesizing speech, tailored to resemble the woman’s voice before her injury. A recording of her speech during her wedding aided in personalizing this voice.

The digital avatar’s facial expressions were brought to life with the help of software from Speech Graphics, a company specializing in AI-driven facial animation. Customized machine-learning techniques facilitated the integration of this software with the brain signals as the woman attempted to speak. These signals were then translated into the corresponding movements of the avatar’s face, allowing it to replicate the opening and closing of the jaw, the movement of the lips, and even expressions of emotion like happiness, sadness, and surprise.

The research team acknowledges the need to develop a wireless version of the system, freeing users from the physical tether to the BCI. Such an advancement would have profound implications for enhancing users’ independence and social interactions, as it could potentially grant them the ability to control their computers and phones more freely.
