November 22, 2023

Wednesday Morning Rant


Contextual Failure

It's no secret that the use of "AI" is spreading. From robots to search engines to self-driving cars and more, AI models are making their way into an ever-growing number of industries. The opening credits of Disney's recent "Secret Invasion" show on its streaming service were AI-generated. AI-enabled robots work in warehouses. AI-based systems power self-driving cars.

But it is not all wine and roses. AI-powered systems that interact with the real world can exhibit baffling, seemingly impossible failure modes, as well as wildly inconsistent behavior. These failures are often quite different from human errors. In some cases, AI is much better than a person at avoiding certain failure conditions, but it introduces exciting new ones.


Earlier this month, an AI-powered robot killed a human technician. It was a picker robot at a sorting plant, designed to identify boxes of peppers, pick them up, and load them onto pallets. The robot was new and was being readied for its trial run. It got confused in a way no human ever could be: it mistook the technician for one of the boxes of peppers it was designed to load:

The robotic arm, confusing the man for a box of vegetables, grabbed him and pushed his body against the conveyor belt, crushing his face and chest, South Korean news agency Yonhap said.

He was sent to hospital but later died.

This is an impossible failure mode for a human. There is no way (except perhaps in the case of some extreme neurological disorder) for a human to get confused about whether he is looking at a box of peppers or another person. Even if he did get confused, it is unlikely that he would pick up a person and still think that the person was a box of vegetables. People grow up and live in the real world. They're aware of their bodies. They're aware of each other. They have context. Robots do not.
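
To make that concrete, here is a minimal sketch in Python of what a picker robot's decision logic boils down to. The names are hypothetical; this is not the actual plant's software. The controller sees a label and a confidence score from the vision model and nothing else, so a misclassified technician is, to every line of downstream code, just another box:

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pepper_box" -- the only concept the model reports
    confidence: float  # how sure the model is, not how true the label is

def should_pick(detection: Detection, threshold: float = 0.8) -> bool:
    # There is no second channel of common sense here. If a person is
    # misclassified as "pepper_box" at 0.93 confidence, this check has
    # no way to know; the label IS the robot's entire understanding.
    return detection.label == "pepper_box" and detection.confidence >= threshold

# A human technician misread by the vision model is indistinguishable
# from a box of peppers:
print(should_pick(Detection("pepper_box", 0.93)))  # True -> grab and load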

It isn't just industrial robots. Self-driving cars frequently exhibit erratic behavior due to their AI systems. One such behavior is "phantom braking": a self-driving car suddenly stands on the brakes, rapidly cutting speed. This results in a lot of self-driving cars getting rear-ended:

For no obvious reason, a self-driving car will suddenly brake hard, perhaps causing a rear-end collision with the vehicle just behind it and other vehicles further back. Phantom braking has been seen in the self-driving cars of many different manufacturers and in ADAS-equipped cars as well.

The cause of such events is still a mystery.
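
One plausible way this can happen - a hedged sketch, not any manufacturer's actual stack - is that the perception system reports an obstacle probability for each frame, and the planner brakes hard whenever that number crosses a threshold. A dark shadow that scores 0.91 is indistinguishable, as far as the planner is concerned, from a wall that scores 0.91:

BRAKE_THRESHOLD = 0.9  # hypothetical cutoff for an emergency stop

def plan_speed(current_speed: float, obstacle_prob: float) -> float:
    # Commanded speed given the perceived obstacle probability.
    if obstacle_prob >= BRAKE_THRESHOLD:
        return 0.0  # slam the brakes: right for a wall, baffling for a shadow
    return current_speed

# Frame-by-frame scores on the same empty stretch of road. Noise in the
# model's output flips the decision even though the road never changed:
for prob in (0.42, 0.88, 0.91, 0.40):
    print(plan_speed(100.0, prob))  # 100.0, 100.0, 0.0, 100.0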

They also often don't understand emergencies, happily driving through active accident scenes. These are unlikely failure modes for a human operator. People know that shadows are not obstacles that demand panic braking. People know that fire trucks parked across the lane mean that maybe they shouldn't proceed through at speed. People have context. "AI" doesn't. It has no clue what a street sign, for example, is. It can (usually) recognize one, but it doesn't actually know what signs are, and so it can spot one where none exists, or spot the wrong one. A human might run an intersection, but he won't mistake a "Stop" sign for a "Speed Limit" sign. He knows what they are in a way that machines can't.
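
To put the sign example in code - again a hypothetical sketch, with made-up labels and scores - a sign classifier simply outputs whichever label scores highest. It has no concept of what a stop sign is, so a confident wrong answer looks exactly like a confident right answer:

def classify_sign(scores: dict[str, float]) -> str:
    # argmax over the labels: this comparison is the sum total of the
    # model's "knowledge" about road signs
    return max(scores, key=scores.get)

# Glare or a sticker nudges the pixel statistics, and "Stop" quietly
# becomes "Speed Limit 45" -- a confusion no licensed driver would make:
print(classify_sign({"stop": 0.48, "speed_limit_45": 0.52}))  # speed_limit_45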

The inability of AI systems to properly contextualize and reason - they can't do the former effectively or the latter at all - means failure modes are often surprising and unpredictable. As the IEEE put it:

As other kinds of AI begin to infiltrate society, it is imperative for standards bodies and regulators to understand that AI failure modes will not follow a predictable path. ...

As AI continues to make inroads into systems that interact with the physical world, those systems may well avoid common human errors - albeit at the expense of myriad and unpredictable new errors.

posted by Joe Mannix at 11:00 AM
