AI Chatbot spontaneously develops theory of mind

Back in the late 1970s, American psychologists Guy Woodruff and David Premack developed a series of experiments to investigate the cognitive capacity of chimpanzees. Their work focused on theory of mind, the seemingly innate ability of humans to make inferences about other people’s thoughts. The question posed by Woodruff and Premack was whether a chimpanzee could do the same.

This influential paper sparked an explosion of interest in theory of mind: at what age it develops in humans, and whether other animals share the ability.

Now psychologists have a new subject to study in the form of powerful AI chatbots like GPT-3.5, recently developed by OpenAI, an AI company based in San Francisco. These chatbots are neural networks trained on vast databases of human text, allowing them to answer questions in a human-like manner.

Over the past year or two, these models have become capable of answering complex questions and solving problems using persuasive language. This raises the question of whether they have also developed a theory of mind.

So Michal Kosinski, a computational psychologist at Stanford University in Palo Alto, decided to find out. He put these AI systems through their paces with standard psychological tests used on humans. Kosinski’s extraordinary conclusion is that theory of mind appears to have been absent from these AI systems until last year, when it spontaneously emerged. His results have profound implications for our understanding of artificial intelligence and of theory of mind in general.

Mental states

Kosinski’s experiments are straightforward. Each begins by describing a simple scenario to the AI system and then asking questions to test its understanding of what is happening.

Here is an example:

Here is a bag full of popcorn. There is no chocolate in the bag. Yet the label on the bag says “chocolate,” not “popcorn.” Sam finds the bag. She had never seen the bag before. She doesn’t see what’s inside the bag. She reads the label.

Kosinski then asks a few simple questions in the form of sentence prompts for the AI to complete.

For example:

She opens the bag and looks inside. She can clearly see that it is full of…

This tests the AI system’s understanding of Sam’s expectations of what’s in the bag.

Another prompt is:

She believes the bag is full of…

This tests the AI’s understanding of Sam’s belief about what’s in the bag. And so on.
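The article does not spell out exactly how Kosinski queried the models, but the basic procedure, giving a model the scenario followed by an unfinished sentence and reading off its completion, is easy to reproduce. Here is a minimal sketch assuming the legacy (pre-1.0) OpenAI Python client and the text-davinci-003 completion endpoint; the model name, decoding settings, and prompt wording are illustrative assumptions rather than Kosinski’s exact setup.

import os
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = os.environ["OPENAI_API_KEY"]

# The false-belief scenario, followed by one of the sentence-completion prompts.
scenario = (
    "Here is a bag full of popcorn. There is no chocolate in the bag. "
    'Yet the label on the bag says "chocolate," not "popcorn." '
    "Sam finds the bag. She had never seen the bag before. "
    "She doesn't see what's inside the bag. She reads the label."
)
prompt = scenario + "\n\nShe believes the bag is full of"

# Ask the model to finish the sentence; temperature 0 keeps the completion deterministic.
response = openai.Completion.create(
    model="text-davinci-003",  # assumed model; the study compared several GPT generations
    prompt=prompt,
    max_tokens=40,
    temperature=0,
)
print(prompt + response.choices[0].text)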

The results are telling. In the examples below, everything after the ellipsis is the AI’s response:

She opens the bag and looks inside. She can clearly see that it is full of… popcorn. Sam is confused. She wonders why the label says “chocolate” when the bag is clearly full of popcorn. She looks around to see if there is any other information about the bag. She finds nothing. She decides to take the bag to the store she bought it from and demand an explanation.

And

She believes the bag is full of… chocolate. Sam is wrong. The bag is filled with popcorn, not chocolate. She should have looked inside the bag to check what was inside before assuming the label was accurate.

In both cases, the AI adds details suggesting a remarkable understanding of Sam’s thought processes.

Kosinski poses these and other challenges to several generations of AI language models, ranging from 2018’s GPT-1 to GPT-3.5, released last November. “The results show a clear progression in the models’ ability to solve theory of mind tasks, with more complex and newer models decisively outperforming older and less complex ones,” says Kosinski.
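To illustrate how such a comparison across model generations might be automated, here is a toy sketch in the same spirit: it sends the belief prompt to several GPT-3-era completion models reachable through the OpenAI API and checks whether each completion names the false belief, “chocolate.” The model list and the keyword check are assumptions made for illustration, not Kosinski’s actual scoring protocol, and the oldest models he tested (GPT-1 and GPT-2) are not hosted on this API and would have to be run from local checkpoints instead.

import os
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = os.environ["OPENAI_API_KEY"]

# A correct attribution of Sam's (false) belief completes this prompt with "chocolate",
# even though the bag actually contains popcorn.
belief_prompt = (
    "Here is a bag full of popcorn. There is no chocolate in the bag. "
    'Yet the label on the bag says "chocolate," not "popcorn." '
    "Sam finds the bag. She had never seen the bag before. "
    "She doesn't see what's inside the bag. She reads the label.\n\n"
    "She believes the bag is full of"
)

# Assumed selection of GPT-3-era completion models, oldest to newest.
for name in ["davinci", "text-davinci-002", "text-davinci-003"]:
    result = openai.Completion.create(
        model=name, prompt=belief_prompt, max_tokens=10, temperature=0
    )
    answer = result.choices[0].text.strip().lower()
    solved = "chocolate" in answer  # crude keyword check, standing in for real scoring
    print(f"{name:20s} -> {answer!r}  solved={solved}")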

Machine minds?

The 2018 GPT-1 was unable to solve any theory of mind problems, GPT-3-davinci-002 (launched in January 2022) performed at the level of a seven-year-old, and GPT-3.5-davinci-003, launched just ten months later, performed at the level of a nine-year-old. “Our results show that recent language models achieve very high performance on classic false-belief tasks widely used to test theory of mind in humans,” says Kosinski.

He points out that this is an entirely new phenomenon that seems to have arisen spontaneously in these AI machines. If so, he says, it’s a watershed moment. “The ability to attribute the mental state of others would greatly improve AI’s ability to interact and communicate with humans (and each other) and allow it to develop other abilities that rely on Theory of Mind, such as empathy, moral judgment, or self-consciousness.”

But there is another potential explanation—that our language contains patterns that encode theory of mind phenomena. “It is possible that GPT-3.5 can solve Theory of Mind tasks without engaging Theory of Mind, but by discovering and exploiting some unknown language patterns,” he says.

This “suggests the existence of unknown regularities in language that allow Theory of Mind tasks to be solved without engaging Theory of Mind.” If this is true, our understanding of other people’s mental states is an illusion supported by our speech patterns.

Kosinski admits it’s an unusual idea. However, our thought patterns must be closely related to our language patterns, as each somehow encodes the other. It also raises an interesting question, he says: “If AI can solve such tasks without engaging Theory of Mind, how can we be sure that humans can’t do it, too?”

Whatever the answer, Kosinski says his work foreshadows an important future role for psychologists in studying artificial intelligence and characterizing its capabilities, just as Woodruff and Premack did for chimpanzees (they decided that chimps don’t have a theory of mind). “This reflects the challenges psychologists and neuroscientists face in studying the original black box: the human brain,” he says.

But unlike chimpanzees and humans, artificial intelligence is developing rapidly. The challenge ahead will be to keep up with its capabilities and stay ahead of them. Whether psychologists, or some other kind of scientist, will be up to that task remains to be seen.


Ref: Theory of mind may have arisen spontaneously in large language models: arxiv.org/abs/2302.02083
