With the aid of artificial intelligence (AI), scientists have developed a decoder that can translate brain activity into a continuous stream of text. According to The Guardian, it is the first non-invasive technique capable of discerning a person’s thoughts.

The AI-powered decoder proved strikingly accurate at reconstructing the thoughts of individuals who were listening to a story or silently imagining one, according to the researchers cited by the outlet. The instrument was created by neuroscientists at the University of Texas.

“We were somewhat surprised by how well it worked,” Dr. Alexander Huth, a neuroscientist on the team, told The Guardian. “I’ve been working on this for 15 years, so it was both shocking and exciting when it finally worked.”

The innovation works around a fundamental limitation of fMRI: time lag. Although the technique can produce high-resolution images of brain activity, the blood-flow signal it measures rises and falls over several seconds, so it cannot track activity in real time.

Dr. Huth explained that his team’s language decoder “operates on an entirely different level.”

“Our system operates on the level of concepts, semantics, and meaning,” he told reporters.

How does the new system operate?

Three participants each spent 16 hours inside an fMRI machine listening to spoken narrative stories, primarily podcasts, as part of the study.

This allowed the researchers to map how words, phrases, and meanings triggered responses in language-processing regions of the brain.

They fed this information into a neural network language model built on GPT-1, the forerunner of the AI technology later deployed in the immensely popular ChatGPT.

The model was trained to predict how each individual’s brain would respond to perceived speech; the decoder then generated candidate word sequences and narrowed them down until it found the one that best matched the recorded activity.
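The study’s actual pipeline is more elaborate than this summary suggests: an encoding model fit on each participant’s training scans predicts voxel responses, GPT-1 proposes candidate word sequences, and a beam search keeps the best-scoring ones. The minimal sketch below illustrates only the scoring step, and everything in it is a toy stand-in: the embedding, the encoding weights, the noise level, and the candidate list are all invented for the example.

```python
import zlib

import numpy as np

N_VOXELS = 200    # stand-in for voxels in the brain's language regions
EMBED_DIM = 64    # stand-in for the language model's feature dimension
rng = np.random.default_rng(42)

# Stand-in encoding model. In the study this mapping was fit on each
# participant's hours of training scans; here it is a fixed random matrix.
W = rng.standard_normal((N_VOXELS, EMBED_DIM)) / np.sqrt(EMBED_DIM)

def embed(text: str) -> np.ndarray:
    """Toy semantic embedding: a deterministic random vector per word,
    averaged. The real decoder derives features from GPT-1."""
    vecs = [
        np.random.default_rng(zlib.crc32(word.encode())).standard_normal(EMBED_DIM)
        for word in text.lower().split()
    ]
    return np.mean(vecs, axis=0)

def predict_response(text: str) -> np.ndarray:
    """Encoding model: predicted voxel activity for a stretch of speech."""
    return W @ embed(text)

def decode(observed: np.ndarray, candidates: list[str]) -> str:
    """Return the candidate whose predicted brain response correlates best
    with the observed scan. The real system generates candidates with
    GPT-1 and a beam search; here the list is simply given."""
    scores = [np.corrcoef(observed, predict_response(c))[0, 1] for c in candidates]
    return candidates[int(np.argmax(scores))]

# Simulate a scan of a participant hearing a sentence, plus measurement noise.
heard = "i do not have my drivers license yet"
observed = predict_response(heard) + 0.3 * rng.standard_normal(N_VOXELS)

candidates = [
    "she has not even started to learn to drive yet",
    "the weather was cold and rainy all week",
    "i do not have my drivers license yet",
]
print(decode(observed, candidates))  # picks the sentence closest in meaning
```

The design choice worth noticing is that decoding is framed as a search over meanings rather than a word-by-word readout, which is why the system’s output can be a paraphrase of what was actually heard.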

To evaluate the model’s accuracy, each participant listened to a new story in the fMRI machine, one that had not been used during training.

Jerry Tang, the first author of the study, stated that the decoder could “recover the essence of what the user was hearing.”

For instance, when a participant heard, “I don’t have my driver’s license yet,” the decoder rendered it as, “She hasn’t even begun learning to drive yet.”

What instruments already exist?

Before this breakthrough, scientists had to rely on surgically implanted language-decoding systems. One such system, unveiled in 2019, was designed to assist people who had lost their voice to paralysis, throat cancer, amyotrophic lateral sclerosis (ALS), or Parkinson’s disease.

Using implanted electrodes, the technology picked out relevant neural signals from brain activity. These signals were decoded into approximated lip, tongue, larynx, and jaw movements, and then transformed into synthetic speech.
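As a rough illustration of that two-stage design (and emphatically not the 2019 system itself, which used recurrent neural networks trained on each patient’s recordings), the sketch below wires two placeholder linear maps together; every dimension and weight is invented.

```python
import numpy as np

rng = np.random.default_rng(7)
N_ELECTRODES, N_ARTIC, N_AUDIO = 128, 8, 32  # illustrative sizes only

# Stage 1: electrode activity -> articulator kinematics
# (lip, tongue, larynx, and jaw movements).
W_artic = 0.1 * rng.standard_normal((N_ARTIC, N_ELECTRODES))
# Stage 2: kinematics -> acoustic features driving a speech synthesizer.
W_audio = 0.1 * rng.standard_normal((N_AUDIO, N_ARTIC))

def decode_frame(neural: np.ndarray) -> np.ndarray:
    """Map one time step of neural activity to acoustic features."""
    kinematics = W_artic @ neural   # estimated articulator movements
    return W_audio @ kinematics     # features to be rendered as speech

frame = rng.standard_normal(N_ELECTRODES)  # one simulated frame
print(decode_frame(frame).shape)           # -> (32,)
```

Routing the signal through articulator movements as an intermediate representation, rather than mapping brain activity straight to sound, was the distinctive design choice of that line of work.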

Warning issued by scientists

David Rodriguez-Arias Vailhen, a bioethics professor at Spain’s Granada University who was not involved in the study, remarked that it exceeded what previous brain-computer interfaces had accomplished.

“This brings us closer to a future in which machines are able to read minds and transcribe thought,” he was quoted as saying by AFP. Scientists also cautioned that this could occur against people’s will, such as while they are asleep.

The researchers said, however, that they had anticipated such concerns.

The team stated that experiments demonstrated the decoder did not work on a person unless it had been trained on that individual’s own brain activity.

The three participants could also readily circumvent the decoder.

Participants were asked to count by sevens, name and picture animals, or tell themselves a different story while listening to a podcast. According to the researchers, each of these tactics “sabotaged” the decoder.
