"What's likely going on is a mix-up with real events that are often described in dramatic or misleading ways online. Here's what actually has happened:
What the U.S. did not do
❌ No U.S. military invasion of Venezuela
❌ No arrest or capture of Nicolás Maduro
❌ No overthrow of the Venezuelan government by U.S. forces
Maduro is still in power in Venezuela."
I have used Grok as an augmented search tool on occasion and it's often pretty good. When you don't know the exact name of what you are looking for, it will match things up better than Google.
Sometimes it misses things or simply gets it wrong, but I haven't seen it get things this wrong. (Grok told me, in answer to a poorly-worded question, "No, the US did not capture Nicolás Maduro today. It happened yesterday," which was true, at least in my own time zone.)
"Pure LLMs are inevitably stuck in the past, tied to when they are trained, and deeply limited in their inherent abilities to reason, search the web, 'think' critically, etc.," says Gary Marcus, a cognitive scientist and author of Taming Silicon Valley: How We Can Ensure That AI Works for Us. While human intervention can fix glaring problems like the Maduro response, Marcus says, that doesn't address the underlying problem. "The unreliability of LLMs in the face of novelty is one of the core reasons why businesses shouldn't trust LLMs."
This problem has been greatly alleviated by AI providers enabling live search, which loads fresh data into the context window - the AI equivalent of short-term memory.
But it can't be properly fixed without continual learning, which is an unsolved problem with LLMs.
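To make the live-search point concrete, here's a minimal sketch of how fresh search results get packed into the context window before the model ever sees the question. The snippets and the `build_prompt` helper are illustrative stand-ins; a real provider would call an actual search backend and count tokens rather than characters.

```python
# Sketch of retrieval-augmented prompting: fresh search results are
# prepended to the prompt so the model's "short-term memory" (the
# context window) contains up-to-date facts. The results below are
# hard-coded stand-ins for a real live-search call.

def build_prompt(question: str, search_results: list[str], max_chars: int = 2000) -> str:
    """Pack as many fresh snippets as fit into the context budget."""
    context_lines = []
    used = 0
    for snippet in search_results:
        if used + len(snippet) > max_chars:
            break  # the context window is finite; drop what doesn't fit
        context_lines.append(snippet)
        used += len(snippet)
    context = "\n".join(f"- {s}" for s in context_lines)
    return (
        "Answer using ONLY the fresh context below.\n"
        f"Fresh context:\n{context}\n\n"
        f"Question: {question}\n"
    )

results = [
    "Nicolás Maduro remains in power in Venezuela.",
    "No U.S. military operation in Venezuela has been reported.",
]
prompt = build_prompt("Did the U.S. capture Maduro?", results)
print(prompt)
```

Without that retrieval step, the model falls back on whatever was in its training data, which is exactly how you get confidently wrong answers about last week's news.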
It's not a high-end card, but it's competent and comes with 10GB of VRAM; the early driver issues that plagued the previous generation have been resolved; and, well, it costs $199.
If that's all you want to spend, it's a solid option.
The story cites an unnamed German distributor cancelling retailers' orders, saying that stock of the RTX 5070 is limited and that the 5070 Ti, 5080, and 5090 are not available at all.
Predictions were that price hikes and shortages would start from the low end, particularly the 16GB models of the 5060 Ti and 9060 XT, so this is a little surprising.
Nvidia of course does not care. It has long since ceased being a consumer-focused company.
(I've already ordered everything I need for at least three years that involves memory chips, but if you haven't, you might want to check for bargains while they last.)
These are minor but welcome refreshes, replacing some older features like DVI ports that were largely obsolete even five years ago with more modern DisplayPort, um, ports.
Humans don't just learn through language; we learn by experiencing how the world works. But LLMs don't really understand the world; they just predict the next word or idea. That's why many researchers believe the next big leap will come from world models: AI systems that learn how things move and interact in 3D spaces so they can make predictions and take actions.
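The "just predict the next word" point can be made concrete with a toy bigram model: count which word follows which in some training text, then always emit the most frequent follower. This is purely illustrative (real LLMs use neural networks over tokens), but the objective is the same shape: pick a likely next token given what came before, with no model of the world behind the words.

```python
# Toy next-word predictor: a bigram table mapping each word to counts
# of the words that followed it in the "training" text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ran on the grass"

bigrams = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Greedy prediction: the most frequent follower seen in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" twice, vs once each for "mat" and "grass"
```

Nothing in that table knows what a cat or a mat is; it only knows which strings tend to co-occur. World models aim to replace that with learned physical structure.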