Of the many things that LLMs are not, high on the list is a security model:
"The most practical mitigation today is not to rely solely on the model for safety. We advocate for a 'defense-in-depth' approach, using external systems like AI firewalls or guardrails to monitor and block problematic outputs before they reach the user. A more permanent, though much more difficult, solution would involve building safety into the model's foundational training from the ground up."
LLMs inherently have no concept of security; at best, they pretend to.
If you want any kind of security, you have to implement it with something else. LLMs offer no security themselves and never will.
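The "guardrails" approach from the quote is simple in outline: never let the model's output go straight to the user, but pass it through an external, deterministic filter first. Here's a minimal sketch in Python; call_model is a placeholder for whatever LLM client you actually use, and the blocklist rules are made-up examples, not a real product's rule set:

```python
import re

# Hypothetical output filter. The point is that the LLM is never trusted
# to police itself; a separate, deterministic check decides what reaches
# the user.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),  # prompt-injection echo
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),   # leaked secrets
]

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    raise NotImplementedError("plug in your model client here")

def guarded_completion(prompt: str) -> str:
    reply = call_model(prompt)
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(reply):
            # Withhold the output rather than hoping the model behaves.
            return "[response withheld by output filter]"
    return reply
```

The real versions of these filters are more elaborate, but the architecture is the same: the security property lives entirely outside the model.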
They had a sort of precursor to the Amiga's "Copper": hardware that automatically reprogrammed the display registers on the fly as the screen was drawn, producing visual effects that weren't otherwise possible on such limited hardware.
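The idea is easy to model in software: a tiny program of (wait-for-scanline, write-register) pairs executed in lockstep with the video beam. A toy Python simulation, with a hypothetical register file; the real chips did this in hardware with no CPU involvement:

```python
# Toy model of a Copper-style display list. Each entry waits for a
# scanline, then pokes a value into a "hardware" register. Changing the
# background-colour register mid-frame yields raster bars, a classic
# effect the display hardware alone couldn't produce.
SCANLINES = 200
registers = {"BG_COLOR": 0x000}  # hypothetical register file

copper_list = [
    (0,   "BG_COLOR", 0x00F),    # top of frame: blue
    (100, "BG_COLOR", 0x0F0),    # halfway down: green
    (150, "BG_COLOR", 0xF00),    # lower band: red
]

def run_frame():
    program = iter(copper_list)
    pending = next(program, None)
    for line in range(SCANLINES):
        # Execute every list entry whose wait condition has been reached.
        while pending and pending[0] <= line:
            _, reg, value = pending
            registers[reg] = value
            print(f"scanline {line:3d}: {reg} <- {value:03X}")
            pending = next(program, None)

if __name__ == "__main__":
    run_frame()
```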