November 22, 2023
Wednesday Morning Rant
Contextual Failure
It's no secret that the use of "AI" is growing. From robots to search engines to self-driving cars and more, AI models are making their way into an ever-growing number of industries. The opening credits from Disney's recent "Secret Invasion" show on its streaming service were AI-generated. AI-enabled robots work in warehouses. AI-based systems power self-driving cars.
But it is not all wine and roses. AI-powered systems that interact with the real world can fail in baffling and seemingly impossible ways, and their behavior can be wildly inconsistent. These failures are often quite different from human errors. In some cases, AI is much better than a person at avoiding certain failure conditions, but it introduces exciting new ones.
Earlier this month, an AI-powered robot killed a human technician. It was a picker robot at a sorting plant, designed to identify boxes of peppers, pick them up, and load them onto pallets. The robot was new and was being readied for its trial run. It got confused in a way no human ever could be: it mistook the technician for one of the boxes of peppers it was designed to load:
The robotic arm, confusing the man for a box of vegetables, grabbed him and pushed his body against the conveyer belt, crushing his face and chest, South Korean news agency Yonhap said.
He was sent to hospital but later died.
This is an impossible failure mode for a human. There is no way (except perhaps in the case of some extreme neurological disorder) for a human to get confused about whether he is looking at a box of peppers or another person. Even if he did get confused, it is unlikely that he would pick up a person and
still think that the person was a box of vegetables. People grow up and live in the real world. They're aware of their bodies. They're aware of each other. They have context. Robots do not.
It isn't just industrial robots. Self-driving cars exhibit frequent erratic behavior due to their AI systems. One such behavior is "phantom braking": the car suddenly slams on the brakes, rapidly shedding speed. This results in a lot of self-driving cars getting rear-ended:
For no obvious reason, a self-driving car will suddenly brake hard, perhaps causing a rear-end collision with the vehicle just behind it and other vehicles further back. Phantom braking has been seen in the self-driving cars of many different manufacturers and in ADAS-equipped cars as well.
The cause of such events is still a mystery.
They also often don't understand emergencies, happily driving their way through active accident scenes. These are unlikely failure modes for a human operator. People know that shadows are not obstacles that require panic braking. People know that a fire truck parked across the lane means that maybe they shouldn't proceed through at speed. People have context. "AI" doesn't. It has no clue what a street sign, for example, is. It can (usually) recognize one, but it doesn't actually know what it is, and so it can spot a sign where none exists, or spot the wrong one. A human might run an intersection, but he won't mistake a "Stop" sign for a "Speed Limit" sign. He knows what the signs are in a way that machines can't.
The inability of AI systems to properly contextualize and reason - they can't do the former effectively or the latter at all - means that their failure modes are often surprising and unpredictable. As the IEEE put it:
As other kinds of AI begin to infiltrate society, it is imperative for standards bodies and regulators to understand that AI failure modes will not follow a predictable path. ...
As AI continues to make inroads into systems that interact with the physical world, those systems may well avoid common human errors - albeit at the expense of myriad and unpredictable new errors.
posted by Joe Mannix at 11:00 AM