
Brain-Computer Interface Predicts Patient’s Thoughts

Caltech study shows how a brain-computer interface may help the speech impaired.

Key points

  • In a recent study, researchers observed that internal speech is highly decodable in the supramarginal gyrus region of the brain.
  • With this proof-of-concept, the researchers believe that the supramarginal gyrus can potentially represent an even greater internal vocabulary.
  • Being able to build models on internal speech may allow scientists to help people who cannot vocalize speech.
Source: Geralt/Pixabay

New research presented at this week’s Society for Neuroscience 2022 conference by scientists at the California Institute of Technology (Caltech) shows that a brain-machine interface (BMI), also known as a brain-computer interface (BCI), can predict a person’s internal monologue with a high degree of accuracy.

Proof-of-concept for a high-performance internal speech BMI

Brain-machine interfaces enable people who are unable to speak due to neurological diseases such as amyotrophic lateral sclerosis (ALS) to control external devices so they can communicate, use smartphones, type emails, shop online, and perform many other tasks that allow them to live more independently.

“This work represents the first proof-of-concept for a high-performance internal speech BMI,” wrote the Caltech researchers in their latest study.

The scientists hypothesized that different regions of the brain would modulate their activity during vocalized versus internal speech. Specifically, they predicted that during vocalized speech, activity would modulate in both the supramarginal gyrus (SMG) of the posterior parietal cortex (PPC) and the primary somatosensory cortex (S1), whereas during internal speech, only SMG activity would modulate.

The study participant was tetraplegic (quadriplegic) as a result of a prior spinal cord injury. The participant was implanted with a 96-channel multi-electrode array, the NeuroPort Array by Blackrock Microsystems, in the supramarginal gyrus and left ventral premotor cortex (PMv) areas, as well as two 48-channel microelectrode arrays in the primary somatosensory cortex (S1).

The Caltech researchers opted for an invasive brain-machine interface in an effort to obtain a more favorable signal-to-noise ratio and higher resolution than non-invasive brain-recording technologies such as magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), or electroencephalography (EEG) can provide.

The implanted arrays recorded the participant’s brain activity while the participant thought, or internally spoke, six words and two pseudowords. The researchers characterized four language processes at the neuronal level: vocalized speech production, reading words, listening comprehension, and internal speech. They observed that internal speech is highly decodable in the supramarginal gyrus.

“In this work, we demonstrated a robust decoder for internal and vocalized speech, capturing single-neuron activity from the supramarginal gyrus,” wrote the Caltech researchers. “A chronically implanted, speech-abled participant with tetraplegia was able to use an online, closed-loop internal speech BMI to achieve up to 91 percent classification accuracy with an eight-word vocabulary.”
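To make the idea of “decoding” concrete, here is a minimal, purely illustrative sketch in Python: a linear classifier trained to pick one of eight words from synthetic firing-rate features. All of the numbers, the feature construction, and the logistic-regression model are assumptions for illustration only; the study’s actual decoder, recordings, and preprocessing are described in the researchers’ paper and are not reproduced here.

```python
# Illustrative sketch only: classifying one of eight words from
# synthetic neural firing-rate features with a linear classifier.
# All data below is randomly generated; this is not the Caltech
# team's decoder or data pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_words = 8            # six words + two pseudowords, as in the study
n_trials_per_word = 40  # assumed trial count for this toy example
n_channels = 96         # e.g., channels of a single microelectrode array

# Give each word a distinct mean firing-rate pattern, then add noise
# to simulate trial-to-trial variability.
word_patterns = rng.normal(0, 1, size=(n_words, n_channels))
X = np.vstack([
    pattern + rng.normal(0, 1.5, size=(n_trials_per_word, n_channels))
    for pattern in word_patterns
])
y = np.repeat(np.arange(n_words), n_trials_per_word)

# Cross-validated classification accuracy; chance level is 1/8 = 12.5%.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.1%}")
```

The point of the sketch is simply that if each word evokes a reliably distinct pattern of activity across recording channels, a standard classifier can recover the intended word well above chance, which is the basic premise behind reporting a classification accuracy for a fixed vocabulary.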

With this demonstrated proof-of-concept, the researchers believe that the supramarginal gyrus has the potential to represent an internal vocabulary much larger than the eight words tested.

“By building models on internal speech directly, our results may translate to people who cannot vocalize speech or are completely locked in,” the researchers concluded.

Copyright © 2022 Cami Rosso All rights reserved.
