Distrust of AI

I was standing in line for a tourist attraction in Tokyo when a small robot started addressing the crowd. The robot looked like Rosie from The Jetsons and was meant to entertain people while they waited. It chattered for a while, then its eyes turned into two pink hearts. “I love everybody,” it announced.


“Really?” I replied sarcastically. I could not help myself. “Anyone? That’s disingenuous.”

The Tokyo robot was one of many robots and other forms of artificial intelligence (AI) that have gotten on my nerves. I am not alone in my disdain. Scientists have studied robot hatred for more than 30 years. Research has found that many people view robots as ‘the other,’ and that hatred of robots can lead to sabotage, attacks and even ‘robot bullying.’

Is artificial intelligence a threat to humans?

Robots and AI have a relatively short history in the US, but it’s one that has long been controversial. With increasing automation in the 1950s, some people saw mechanization as a way to make their lives better or easier. Others saw it as a threat. For those who read a lot of science fiction, robots might take over their jobs, or even the world.

In the 1990s, information retrieval agents became mainstream, but they weren’t always functional and could be more of a nuisance than a help. Microsoft introduced Clippy, a virtual assistant, in 1996, and it became notorious for popping up at inopportune times and asking aggravating questions like “It looks like you’re writing a letter. Do you want help with that?” One study described Clippy as having “fatal flaws in its ability to determine when users actually need help.”

In the early 2000s, AI became more useful. People turned to online search engines to retrieve information, and global positioning systems (GPS) became widely available. But AI also became more personal. Tech companies introduced chatbots, such as Jabberwacky, that interacted with and responded to users.

“Vocal social agents” like Siri or Alexa are already part of many users’ daily lives. Like their chatbot predecessors, they are designed to replicate the norms of human communication and to learn and repeat our behavior patterns.

For some users, asking Alexa to play 80s music is a convenience. But for others, it can be an opportunity for bad behavior.


Read more: Robots and artificial intelligence have ancient mythological origins


Robot bullying: Gross verbal abuse

Long before people were asking Siri or Alexa rude questions, chatbot users of the early 2000s already showed a penchant for harassment. This bad human behavior toward robots is an example of robot bullying.

A 2008 study in Interacting with Computers analyzed how users engaged with Jabberwacky, an online chatterbot that launched in 1997 and garnered more than 10 million responses over the following decade.

To analyze the conversations, the researchers took a time sample: they selected a specific day (November 22, 2004) and examined all 716 conversations that occurred during that period.

Abuse of AI

When analyzing the content of the conversations, the authors found that some users were friendly or curious, testing the system and its capabilities. But many were unkind. On the milder side of the AI abuse spectrum, some users liked to tell Jabberwacky it was just a computer or to correct its grammar.

However, about 10 percent of the interactions involved offensive or abusive language. Another 11 percent were sexually explicit, or as the researchers described it: “Rough verbal abuse is the norm in these conversations, which are more akin to dirty monologues than heated chats.”

The authors conclude that because chatbots lack memory and reasoning, they give humans a seemingly harmless outlet for violating social norms. But studies have found other cases in which people perceive robots or AI as threats, leading to “anti-robot attacks.”

Robot bullying: Physical attacks

What does robot bullying look like in a physical sense? Scientists sort “anti-robot attacks” into several categories: physical attacks, disruption of decision-making (i.e., tampering with sensors), manipulation, deliberate neglect and security breaches. There is also a very specific category, “staging robot attacks for online distribution,” which covers stunts such as ordering food from a delivery robot, waiting for the machine to roll up, and then kicking it or knocking down its little flag. The attackers then post the video online.


AI Replacing Jobs

So why would anyone kick a food delivery robot? Scientists have found that the motivations are complex. Since the early 1800s, people have attacked machines that threatened to displace workers. Some disdain for robots continues to stem from the threat people perceive them to pose to their livelihoods.

People also view robots as “the other,” meaning that they are not one of us, but we still have to accept them into our lives. Likewise, people may associate a particular robot with an organization or corporation they dislike.


Read more: What a digital worker could mean for the human workforce


Fear of robots

And since the technology is relatively new, people can be distrustful and cynical. A 2022 study in Personality and Individual Differences measured how high school students feel about AI. Using the Cynical Hostility to AI Scale, the researchers had 659 participants fill out a survey about their feelings towards AI.

The study found that just because a person is cynical about AI does not mean they are cynical in general or about other people. Participants were also more distrustful of AI when they perceived it as hostile or as having negative intentions.

An innocent mind

The belief that a machine can have negative intentions demonstrates the complexity of robot hatred. People believe a machine can be programmed to be harmful, yet they also understand that robots are not conscious and cannot suffer if we are mean to them.

One scientist argues that the fact that robots are not “morally significant” is one reason people are comfortable hating them. Our sarcasm doesn’t hurt robots’ feelings. Food delivery robots are not traumatized by being kicked. In this way, bots can be a safe place for people (like Jabberwacky’s users) to violate social norms.

And sometimes… it can feel like the bots and AI are just cruising for a bruising. Devices that are programmed to reproduce human communication can get cheeky with their responses. Researchers are now investigating ways to better anthropomorphize these devices so that they evoke consumer empathy.


Read more: A study shows what a human-robot relationship looks like

