The technology to transform brain waves into speech has been in development for years. Now, thanks to video game technology and AI, it has taken a great leap forward. Read on to find out more about this groundbreaking discovery.
TL;DR:
- Video game technology aids paralysed woman in regaining communication abilities.
- Brain-computer interface developed by Speech Graphics, UCSF, and UC Berkeley generates speech and facial expressions from brain signals.
- Avatar-based communication through synthesized voice and facial animation marks a significant advancement in restoring natural communication for those unable to speak.
Transforming Brain Waves to Speech Through a Digital Avatar
Video game technology has played a groundbreaking role in helping a woman regain her ability to communicate after she was left paralysed by a stroke. Now, she can communicate again – through a digital avatar.
Researchers from Edinburgh-based Speech Graphics, UC San Francisco (UCSF), and UC Berkeley have developed the world’s first brain-computer interface that generates both speech and facial expressions from brain signals. The development offers hope of restoring natural communication to those unable to speak.
How Does the Software Work?
Utilizing software akin to that used in video games like The Last of Us Part II and Hogwarts Legacy, brain waves are transformed into a digital avatar capable of both speech and facial animation. The study focused on a woman named Ann, converting her brain signals into three forms of communication: text, synthetic voice, and facial animation on a digital avatar, including lip sync and emotional expressions. Remarkably, this marks the first time facial animation has been synthesized from brain signals.
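To picture how one neural signal can feed three output streams, here is a minimal Python sketch assuming a hypothetical linear decoder. The electrode count matches the article, but the phoneme inventory, blendshape count, and every weight are illustrative assumptions, not the study's actual model.

```python
import numpy as np

# Hypothetical decoder sketch: shapes and weights are illustrative only.
N_ELECTRODES = 253    # electrode count reported in the article
N_PHONEMES = 39       # assumed phoneme inventory for the text/voice streams
N_BLENDSHAPES = 52    # assumed facial blendshape count for the avatar

rng = np.random.default_rng(0)

# Stand-in for one window of preprocessed neural features (one per electrode).
neural_features = rng.standard_normal(N_ELECTRODES)

# Two independent linear read-outs, standing in for the trained decoders.
W_phoneme = rng.standard_normal((N_PHONEMES, N_ELECTRODES)) * 0.1
W_face = rng.standard_normal((N_BLENDSHAPES, N_ELECTRODES)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stream 1 (text): pick the most likely phoneme for this window.
phoneme_probs = softmax(W_phoneme @ neural_features)
predicted_phoneme = int(np.argmax(phoneme_probs))

# Stream 2 (synthetic voice): the same phoneme stream would feed a
# text-to-speech model conditioned on the patient's past recordings.

# Stream 3 (facial animation): continuous blendshape weights in [0, 1]
# drive lip sync and emotional expressions on the avatar.
blendshape_weights = 1 / (1 + np.exp(-(W_face @ neural_features)))

print("predicted phoneme index:", predicted_phoneme)
print("example blendshape weights:", np.round(blendshape_weights[:5], 3))
```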
Led by UCSF’s chairman of neurological surgery, Edward Chang, the team implanted a paper-thin rectangle of 253 electrodes onto the woman’s brain surface. The electrodes intercepted the signals that would otherwise have reached her facial muscles and relayed them to computers via a cable. AI algorithms were then trained over several weeks to recognize her brain activity.
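The training step can be pictured as a standard supervised-learning problem: windows of 253-channel activity in, intended-speech labels out. The sketch below, using synthetic data and scikit-learn's logistic regression, only illustrates that framing; the study's actual models were far more sophisticated and trained on real recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

N_ELECTRODES = 253   # electrode count from the article
N_SAMPLES = 2000     # hypothetical number of labelled signal windows
N_CLASSES = 10       # hypothetical number of speech targets (e.g. phonemes)

rng = np.random.default_rng(42)

# Fake "brain activity": each class gets its own mean activity pattern,
# standing in for the signals the electrodes intercept, plus noise.
class_patterns = rng.standard_normal((N_CLASSES, N_ELECTRODES))
labels = rng.integers(0, N_CLASSES, size=N_SAMPLES)
signals = class_patterns[labels] + rng.standard_normal((N_SAMPLES, N_ELECTRODES))

X_train, X_test, y_train, y_test = train_test_split(
    signals, labels, test_size=0.25, random_state=0
)

# Train a simple classifier to recognize which speech target a window encodes.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```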
Real-Time Facial Expressions and Speech From Brain Waves
The woman was able to write text and speak using a synthesized voice based on past recordings of her own speech. Moreover, the AI decoded her brain activity into facial movements, turning her thoughts into real-time facial expressions. One method involved using the subject’s synthesized voice to drive muscle actions, which were then converted into 3D animation in a video game engine. The end result was a lifelike avatar that could pronounce words in sync with the synthesized voice.
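To illustrate the audio-driven animation step, here is a toy Python sketch that derives a per-frame jaw-open weight from the loudness envelope of a synthesized voice. Speech Graphics’ actual audio-to-muscle technology is proprietary and far more elaborate; the sample rate, frame rate, and mapping here are all assumptions made for illustration.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed audio sample rate (Hz)
FRAME_RATE = 30        # assumed animation frame rate (fps)

# Stand-in for one second of synthesized speech: a decaying 120 Hz tone.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
audio = np.sin(2 * np.pi * 120 * t) * np.exp(-2 * t)

# Chop the audio into animation-frame-sized chunks and measure loudness (RMS).
samples_per_frame = SAMPLE_RATE // FRAME_RATE
n_frames = len(audio) // samples_per_frame
frames = audio[: n_frames * samples_per_frame].reshape(n_frames, samples_per_frame)
rms = np.sqrt((frames ** 2).mean(axis=1))

# Normalize loudness into a 0..1 "jaw open" weight a game engine could apply
# to the avatar's jaw blendshape on each frame, keeping lips in sync with audio.
jaw_open = rms / rms.max()

for frame_idx in (0, n_frames // 2, n_frames - 1):
    print(f"frame {frame_idx:2d}: jaw_open = {jaw_open[frame_idx]:.2f}")
```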
This technology represents a major leap in restoring communication to individuals affected by paralysis, offering real-time expression of emotions and nuanced muscle movement.