December 28, 2024
Daily Tech News 28 December 2024
Top Story
- A new Chinese AI called DeepSeek V3 outperforms ChatGPT on standard tests while costing a small fraction of the price to train because - apparently - the developers stole the ChatGPT training data. (Tech Crunch)
The evidence for this is that the model is convinced it is ChatGPT.
"Obviously, the model is seeing raw responses from ChatGPT at some point, but it's not clear where that is," Mike Cook, a research fellow at King's College London specializing in AI, told TechCrunch. "It could be 'accidental'... but unfortunately, we have seen instances of people directly training their models on the outputs of other models to try and piggyback off their knowledge."
Cook noted that the practice of training models on outputs from rival AI systems can be "very bad" for model quality, because it can lead to hallucinations and misleading answers like the above. "Like taking a photocopy of a photocopy, we lose more and more information and connection to reality," Cook said.
Training AIs on AI-generated data leads to insanity in as little as three generations. It can improve results on specific standard tests because it biases the AI very, very heavily towards those tests, throwing everything else out the window. After setting it on fire.
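The photocopy-of-a-photocopy effect is easy to demonstrate with a toy simulation (this is a sketch of the general model-collapse phenomenon, not anything to do with DeepSeek's or OpenAI's actual training pipelines): each generation "trains" on a finite sample of the previous generation's output, and rare outputs that happen not to get sampled vanish forever, so the distribution narrows with every pass.

```python
import random
from collections import Counter

random.seed(0)

VOCAB = list(range(50))   # hypothetical 50-token vocabulary
SAMPLES_PER_GEN = 100     # finite "training set" drawn each generation
GENERATIONS = 30

def sample_dataset(dist, n):
    """Draw n tokens from a distribution {token: probability}."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=n)

def fit(dataset):
    """'Train' a model: its distribution is just the empirical frequencies."""
    counts = Counter(dataset)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Generation 0 learns from real data: a uniform distribution over the vocabulary.
dist = {t: 1 / len(VOCAB) for t in VOCAB}
support_sizes = [len(dist)]

# Each later generation learns only from the previous generation's output.
for _ in range(GENERATIONS):
    dist = fit(sample_dataset(dist, SAMPLES_PER_GEN))
    support_sizes.append(len(dist))

print(support_sizes)  # vocabulary coverage shrinks generation by generation
```

A token that gets zero samples in any generation can never come back, so the coverage numbers only ever go down: the downstream "model" knows a strictly shrinking slice of what the original data contained.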
Tech News
Disclaimer: I have a cold. It is the worst thing to ever happen to anyone.

posted by Pixy Misa at 04:00 AM