Video of a sidewalk delivery robot crossing a yellow warning tape and rolling through a Los Angeles crime scene went viral this week, racking up more than 650,000 views on Twitter and sparking debate over whether the technology is ready for prime time.
It turns out that the robot’s mistake, at least in this case, was caused by humans.
The video of the event was taken and posted to Twitter by William Gude, the owner of Film The Police LA, a Los Angeles-based police watchdog account. Gude was in the area of a suspected shooting at Hollywood High School around 10 a.m. when he captured on video the bot lingering at a street corner, as if confused, until someone lifted the tape, allowing the bot to continue its path through the crime scene.
Serve Robotics, the Uber spinout that operates the bot, told TechCrunch that the robot’s self-driving system didn’t decide to cross into the crime scene. That was the choice of a human operator who was remotely controlling the bot.
The company’s delivery robots have so-called Level 4 autonomy, meaning they can drive themselves under certain conditions without the need for a human to take over. Serve has been piloting its robots with Uber Eats in the area since May.
Serve Robotics has a policy that requires a human operator to remotely monitor and assist its bots at every intersection. The human operator will also remotely take control if the bot encounters an obstacle, like a construction zone or a fallen tree, and can’t figure out how to navigate around it within 30 seconds.
In this case, the bot, which had just completed a delivery, approached the intersection and a human operator took over, per the company’s internal operating policy. Initially, the human operator stopped at the yellow warning tape. But when bystanders lifted the tape and apparently “waved it through,” the human operator decided to continue, Serve Robotics CEO Ali Kashani told TechCrunch.
“The robot would never have crossed (on its own),” Kashani said. “There’s just a lot of systems in place to make sure it never gets through until a person gives that permission.”
The lapse in judgment, he added, was that someone decided to keep crossing.
Whatever the reason, Kashani said it shouldn’t have happened. Serve has pulled data from the incident and is working on a new set of protocols for its human operators and AI to prevent it from happening again, he added.
A few obvious steps will be to make sure employees follow standard operating procedure (or SOP), which includes proper training, and to develop new rules for what to do if a person tries to wave the robot through a barricade.
But Kashani said there are also ways to use software to prevent this from happening again.
Software can be used to help people make better decisions, or to avoid an area altogether, he said. For example, the company could work with local law enforcement to send a robot real-time updates about police incidents so it can route around those areas. Another option is to give the software the ability to identify law enforcement, then alert its human decision makers and remind them of local laws.
These lessons will be critical as robots advance and expand their operational domains.
“The funny thing is that the robot did the right thing; it stopped,” Kashani said. “So it really goes back to giving people enough context to make good decisions until we’re confident enough that we don’t need people to make those decisions.”
Serve Robotics’ bots haven’t reached that point yet. However, Kashani told TechCrunch that the robots are becoming more independent and typically operate on their own, with two exceptions: intersections and blockades of some kind.
The scenario that played out this week contradicts how many people view AI, Kashani said.
“I think the narrative in general is that humans are really great at the edge, and then the AI makes mistakes or maybe isn’t ready for the real world,” Kashani said. “Strangely enough, we’re learning the exact opposite, which is that humans make a lot of mistakes and we need to rely more on AI.”