October 27, 2023
Daily Tech News 27 October 2023
Top Story
- Humanity is at risk from an AI "race to the bottom". (The Guardian)
What's the risk?

A handful of tech companies are jeopardising humanity's future through unrestrained AI development and must stop their "race to the bottom", according to the scientist behind an influential letter calling for a pause in building powerful systems.

"We're witnessing a race to the bottom that must be stopped," Tegmark told the Guardian. "We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don't jeopardise our shared future."

In a policy document published this week, 23 AI experts, including two modern "godfathers" of the technology, said governments must be allowed to halt development of exceptionally powerful models.

The paper, whose authors include Geoffrey Hinton and Yoshua Bengio – two winners of the ACM Turing award, the "Nobel prize for computing" – argues that powerful models must be licensed by governments and, if necessary, have their development halted.

The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation.

None of these companies are working on AGI. All of them are dumping huge amounts of money into glorified typeahead systems that understand nothing.
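The "glorified typeahead" jab refers to the training objective these systems share: predict the next token from what came before. As a toy illustration (not the actual architecture of any of these models, which are large neural networks), here is the same idea reduced to its crudest form, a bigram word predictor that suggests the next word purely from co-occurrence counts:

```python
from collections import Counter, defaultdict

# Toy "typeahead": count which word follows which in a corpus,
# then suggest the most frequent follower. No meaning involved --
# just statistics over observed sequences.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most common word seen after `word`, or None."""
    following = counts.get(word)
    return following.most_common(1)[0][0] if following else None

print(predict("the"))  # "cat" -- it follows "the" twice, more than any other word
```

An LLM swaps the count table for billions of learned parameters and whole-context conditioning, but the output is still a probability distribution over the next token, which is the basis of the complaint above.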
Tech News
Disclaimer: Beeeeeeeeeeeeeeeeeeeeeeeeeeeeeep.
posted by Pixy Misa at 04:00 AM