
Can AI Read Your Mind? Debunking Myths and Facts

We tend to think of our minds as, for better or worse, impenetrable fortresses. Other people only see our inner thoughts when we transform them into language and send them out into the world.

Earlier this month, however, researchers at the University of Texas at Austin removed that barrier between inside and outside.

By feeding brain imaging data into an artificial intelligence model, they captured the essence of what their subjects heard, saw and thought. As the accuracy of the technology improves, it may even enable communication with people who are paralyzed or who have otherwise lost the ability to speak.

Mind reading technology

Speech decoding is nothing new. But until now, it has relied on brain implants that detect a person’s attempts to form words and then convert those speech signals into language.

This new, non-invasive technique works differently: it predicts words based on patterns in brain activity that are not directly related to speech. The decoder can’t guess every word exactly, but the overall similarity in meaning still amazed its creators.

“What we got were actually really good paraphrases,” says lead author Alexander Huth, a computational neuroscientist. “That kind of surprised us.”

The study, published in Nature Neuroscience, focused on three subjects who each spent 16 hours listening to narrative podcasts such as The Moth and Modern Love while lying in a functional magnetic resonance imaging (fMRI) machine.

The scanner measured blood flow to different parts of their brains, showing which regions were active at specific points in the podcast episodes. A large language model (an older version of the one behind OpenAI’s ChatGPT) then matched the words the subjects heard with the corresponding brain activity.

The decoder that emerged from this process couldn’t eavesdrop on your inner monologue per se, but after all that training, it became intimately familiar with the brain states induced by certain language. In subsequent fMRI sessions, it was able to reverse engineer a thought based solely on the neural signals the thought produced.
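The study’s actual pipeline is more elaborate (it scores candidate word sequences with a generative language model and beam search), but the core inversion trick can be sketched on synthetic data: fit an encoding model that predicts brain activity from language features, then decode by asking which candidate’s predicted brain response best matches the observed scan. Every dimension, weight and candidate below is illustrative, not the study’s code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 200 fMRI scans, 16-dim language features,
# 50 voxels. All synthetic; the real study uses far larger spaces.
n_time, n_feat, n_vox = 200, 16, 50
features = rng.normal(size=(n_time, n_feat))            # LLM word features
true_weights = rng.normal(size=(n_feat, n_vox))
brain = features @ true_weights + 0.1 * rng.normal(size=(n_time, n_vox))

# Step 1: fit an encoding model predicting brain activity from
# language features (ridge regression in closed form).
lam = 1.0
W = np.linalg.solve(features.T @ features + lam * np.eye(n_feat),
                    features.T @ brain)

# Step 2: decode by scoring candidate word sequences; the candidate
# whose *predicted* brain response best matches the observed scan wins.
def score(candidate_feat, observed):
    pred = candidate_feat @ W
    return -np.sum((pred - observed) ** 2)

observed = brain[0]
candidates = [features[0],              # features of the words actually heard
              rng.normal(size=n_feat),  # two unrelated candidates
              rng.normal(size=n_feat)]
best = max(range(3), key=lambda i: score(candidates[i], observed))
print(best)  # 0: the true candidate's prediction fits best
```

In the real system, the "candidates" are word sequences proposed by the language model itself, which is why the decoder produces fluent paraphrases rather than word-for-word transcripts.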

It still routinely gets single words and phrases wrong and struggles with some aspects of grammar, such as pronouns and proper names. (Don’t we all?) But its ability to repackage the essence of a storyline is remarkable; it performs better than would be expected by pure chance 70 to 80 percent of the time.
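The study quantified “better than chance” with formal language-similarity measures; as a toy illustration of the idea, assuming a crude word-overlap (Jaccard) score, one can check that a decoded paraphrase sits closer to the sentence actually heard than to unrelated text. The first two strings are adapted from the article’s example; the unrelated sentences are invented here.

```python
def similarity(a, b):
    """Crude word-overlap (Jaccard) similarity between two word lists."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

actual  = "i told you to leave me alone you cannot hurt me anymore"
decoded = "leave me alone i do not need your help"
others  = ["the weather tomorrow looks bright and sunny all day",
           "stock prices rose sharply after the announcement today"]

# "Better than chance" here means: the decoded text overlaps more with
# the story the subject actually heard than with unrelated stories.
true_score = similarity(decoded, actual)
chance = [similarity(decoded, o) for o in others]
print(true_score > max(chance))  # True
```

The real evaluation uses stronger metrics (and many held-out stories), but the logic is the same: compare the decoder’s output against the true stimulus and against a chance baseline.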

Read more: Analyzing brain waves for near-death experiences

Pulling words out of thin air

Over the past decade, decoders have allowed seemingly unconscious people to answer yes-or-no questions, and have identified which of a list of possible options a person was hearing.

“But what’s interesting about this paper is that it’s not multiple choice, it’s fill-in-the-blank,” said Tom Mitchell, a computer scientist and professor at Carnegie Mellon University who was not involved in the study. “Here’s the brain activity, what is the language sequence that this brain is thinking about?”

In the following example, the first sentence shows what the research subject actually heard, while the second shows what the language model predicted they heard:

“I didn’t know whether to scream, cry or run away. Instead I said, ‘Leave me alone, I don’t need your help.’ Adam disappeared and I cleaned up alone, crying.”

“[I] started screaming and crying and then just said, ‘I told you to leave me alone, you can’t hurt me anymore. I’m sorry.’ And then he ran away. I thought he was gone. I started crying.”

With less (but still impressive) accuracy, the decoder could also guess the content of stories when participants simply imagined themselves telling them or, most surprisingly, when they watched short films without sound. Although the language model is trained exclusively on text, it seems to dig deeper—to the realm of meaning that lies beyond language.

All of this suggests that something similar happens in the brain whether you hear the word dog, think of a dog or see a dog.

“It’s the kind of high-level representation we get here that underpins all of these things,” Huth says. “Language is the gateway to the examination of thought.”

Read more: Have AI language models achieved theory of mind?

Brain scanning and mind reading

For many experts, the fact that fMRI was capable of such feats came as a shock. After all, blood flows much slower than the rate at which neurons fire, often resulting in low-resolution data.

Notice how quickly words run through your head: each fMRI scan captures several of them at once. Yet the predictive power of language patterns allows the model to glean fine details about the thoughts encoded in this coarse signal.

Still, fMRI comes with other limitations. For one thing, because it requires a massive machine, it cannot be easily woven into daily life. “If we want this to help people,” Huth says, “we really need to move to a different methodology.”

And that means wearable technology.

The study points to the possibility that other brain imaging techniques could replicate fMRI’s success. One contender is functional near-infrared spectroscopy (fNIRS), which measures the same physiological response but is small enough to be included in a hat.

Its resolution is worse than fMRI’s, but when the researchers blurred their results to the level of fNIRS, they found that the decoding still worked, albeit less accurately. Moreover, given the recent pace of development of large language models, it is possible that current and future versions will perform much better, even with lower-resolution images.
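A rough illustration of that blurring experiment, under assumed details (white-noise voxel data, a Gaussian blur standing in for the coarser fNIRS point-spread): spatial smoothing discards fine structure while keeping the volume’s overall shape, which is exactly the information loss a decoder would have to tolerate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Stand-in for a 3-D fMRI volume of voxel activity (pure noise here;
# real data would carry stimulus-driven structure).
volume = rng.normal(size=(32, 32, 32))

# Simulate a lower-resolution modality such as fNIRS by spatially
# blurring the volume: fine detail is averaged away.
blurred = gaussian_filter(volume, sigma=3.0)

# The blur keeps the volume's shape but shrinks voxel-wise variance,
# i.e. fine spatial detail is lost.
print(blurred.shape == volume.shape)  # True
print(blurred.std() < volume.std())   # True
```

Whether decoding survives this loss depends on how much of the meaning-relevant signal lives at coarse spatial scales; the study’s result suggests a good deal of it does.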

The GPT-1 model used in this study has already been superseded three times over. Perhaps the GPT-4 that powers ChatGPT Plus could achieve the same accuracy with lower-quality fNIRS data.

Brain imaging and privacy of thought

The most urgent application for decoding is to communicate with people who have lost their usual means of communication. But in the long run, some experts believe this technology could fundamentally change the way we all interact with our devices.

As Mitchell said, “What if we had a computer interface that wasn’t a keyboard, it wasn’t a mouse, it was just your thinking?” You just imagine where you want to eat tonight and your phone makes the reservation for you.

Despite the technology’s potential for good, of course, there’s a clear Orwellian thread to the idea of a future where bad actors can tap into your brain. Anticipating these concerns, the researchers ran experiments on their model to test whether it could be misused.

One obvious danger is that interrogators or authoritarian regimes could use decoders to extract information from people against their will. But because these models have to be trained extensively on each individual, they cannot derive anything useful without cooperation.

Even when the decoder is tuned to a specific person, that person can resist, for example by doing mental math or listing animals in their head.

Huth and his colleagues asked their subjects to do just that, “and the decoding went crazy,” he says. “The person whose brain is being decoded ultimately has control over what comes out.”

Mind reading technology and freedom of thought

However, Nita Farahany, a professor at Duke University who studies the ethical and legal implications of emerging technologies, doesn’t think coercion is the biggest threat.

She envisions a more insidious future in which people voluntarily give up access to their thoughts, much like we do today with the personal information companies collect from our online activity.

“We never envisioned a world where there was no room for inner reflection,” says Farahany. Still, it’s easy to imagine how advertisers could use this technology to make products almost irresistible, or how employers could use it to track performance.

Before we cross this Rubicon, Farahany argues, the international community must embrace a right to cognitive freedom. This could ensure that by default we have personal rather than corporate ownership of brain data, limiting the commercialization of our minds.

“I really believe we’re at the moment just before,” she says, “where we can make choices that make this technology hopeful and useful.”

Read more: AI and the human brain: how similar are they?
