November 11, 2024
Daily Tech News 11 November 2024
Top Story
- Generative AI doesn't have a coherent understanding of the world. (MIT)
No fucking shit. Thanks to the big brains at MIT for bringing us this world-shattering news.

The researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy — without having formed an accurate internal map of the city.

Despite the model's uncanny ability to navigate effectively, when the researchers closed some streets and added detours, its performance plummeted.

When they dug deeper, the researchers found that the New York maps the model implicitly generated had many nonexistent streets curving between the grid and connecting far away intersections.

It's a stochastic parrot. We know.

This could have serious implications for generative AI models deployed in the real world, since a model that seems to be performing well in one context might break down if the task or environment slightly changes.

Again, anyone who has used AI for more than a couple of minutes is fully aware of this.

"We needed test beds where we know what the world model is. Now, we can rigorously think about what it means to recover that world model," Vafa explains.

It doesn't have one.

The researchers demonstrated the implications of this by adding detours to the map of New York City, which caused all the navigation models to fail.

Yep.
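The distinction is easy to reproduce in miniature. A hypothetical sketch (nothing to do with the MIT test bed itself): a "parrot" navigator that has memorised one route through a toy street grid gives perfect turn-by-turn directions right up until a street on that route closes, while a navigator with an actual map of the grid just searches for a new route.

```python
from collections import deque

# Toy 3x3 street grid: nodes are intersections, edges are streets.
def make_grid(n=3):
    edges = set()
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                edges.add(((r, c), (r, c + 1)))
            if r + 1 < n:
                edges.add(((r, c), (r + 1, c)))
    return edges

def neighbors(node, edges):
    for a, b in edges:
        if a == node:
            yield b
        if b == node:
            yield a

def bfs_route(start, goal, edges):
    # A navigator with a real world model: it searches the actual map.
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = [node]
            while parents[node] is not None:
                node = parents[node]
                path.append(node)
            return path[::-1]
        for nxt in neighbors(node, edges):
            if nxt not in parents:
                parents[nxt] = node
                frontier.append(nxt)
    return None

def route_valid(path, edges):
    return path is not None and all(
        (a, b) in edges or (b, a) in edges for a, b in zip(path, path[1:])
    )

edges = make_grid()
start, goal = (0, 0), (2, 2)

# The "parrot": memorises one working route, keeps no map.
memorised = bfs_route(start, goal, edges)

# Close the first street on the memorised route.
a, b = memorised[0], memorised[1]
closed = edges - {(a, b), (b, a)}

print(route_valid(memorised, closed))                       # False: the memorised route breaks
print(route_valid(bfs_route(start, goal, closed), closed))  # True: the map-based navigator reroutes
```

Same perfect answers under unchanged conditions; completely different behaviour the moment one edge of the world moves.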
Years ago, engineers tried using genetic algorithms to optimise a particular electronic circuit to use fewer transistors. They got a result that worked, but nobody could explain how.
Turned out it worked by the coincidental passive properties of the circuit, and not due to the transistors. The moment you made the slightest change to the operating conditions, it failed entirely.
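The underlying failure mode is the same in both stories: the search process optimises against whatever the test harness actually measures, not against the spec you had in mind. A hypothetical toy analogue (not the original circuit experiment): evolve a truth table for 3-bit parity, but score candidates only on the inputs the fixed harness happens to exercise.

```python
import random

INPUTS = list(range(8))    # all 3-bit inputs
HARNESS = [0, 1, 2, 3, 5]  # the "operating conditions" during evolution
HELD_OUT = [4, 6, 7]       # conditions never seen while evolving

def parity(x):
    return bin(x).count("1") % 2

def fitness(genome, inputs):
    # Fraction of inputs where the candidate "circuit" matches the spec.
    return sum(genome[i] == parity(i) for i in inputs) / len(inputs)

random.seed(1)
genome = [random.randint(0, 1) for _ in INPUTS]

# Mutation/selection loop: keep a bit-flip only if it improves fitness
# *as measured by the harness*.
for _ in range(400):
    i = random.randrange(len(genome))
    mutant = genome[:]
    mutant[i] ^= 1
    if fitness(mutant, HARNESS) > fitness(genome, HARNESS):
        genome = mutant

print(fitness(genome, HARNESS))   # perfect under the evolved-for conditions
print(fitness(genome, HELD_OUT))  # the unseen bits were never selected for at all
```

Evolution ends with a "circuit" that passes every test it was evolved against, while the bits covering unseen conditions are still whatever they started as: selection pressure never touched them, because flipping them can't improve the harness score.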
Disclaimer: Unless I have to work late. If I have to work late, which I usually do...
posted by Pixy Misa at 04:00 AM