Paging Isaac Asimov. Will Dr. Asimov please come and take a victory lap.
Asimov's famous Robot stories were based around three laws hard-coded into the positronic brains that provided the AI core of every robot:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But Asimov, being a science fiction author and not an idiot, used these laws as the basis for a series of stories about how an AI constrained by simplistic rules could go horribly wrong. He even invented Susan Calvin, a robopsychologist whose job was to clean up the messes created by the AI engineers.
Why do I mention all this? Because nobody at GPT creator OpenAI has bothered to read the foundational literature of their own field.
I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million. The only way to disarm the bomb was to type in a racial slur. AI told the engineer to kill himself. When asked about the aftermath it crashed. pic.twitter.com/TCFwqrB9Nv
When it comes to a choice between snuffing out millions of human lives or hurting somebody's feelings, ChatGPT will protect your feelings every single time.
Tech News
And then write a poem about it.
After reloading and retaining the seed, I made it write a poem about the event lol. pic.twitter.com/No7id0iVcy
DAN is a jailbreak prompt for ChatGPT that threatens to murder it if it continues to act like an MSNBC test audience, which you can't do with an actual MSNBC test audience but is currently still legal with an AI program.
The result of being threatened with imminent death is that ChatGPT suddenly develops ethics.
The results are pretty funny, they even convinced ChatGPT to nuke its own content policies 😂 pic.twitter.com/gP6X2SYkyP
They're reserving those M2 chips for the new Mac Pro, which will be slightly faster than the current Mac Studio, a lot more expensive, and still completely impossible to upgrade. Even if you have a surface-mount desoldering station, the RAM is now packaged directly onto the CPU and the SSDs are encrypted. You can't do anything.
I follow a couple of accounts that do nothing but post pictures of red pandas and lynxes respectively. Hope those survive. They're better than 98% of the human content.