AI - the LLM version of AI, which is all that makes the news these days - requires human input to learn. It only takes a few iterations of training AI on AI output for it to turn completely to shit.
Yes, even more than usual.
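That degradation even has a name now - "model collapse" - and you can watch it happen in miniature without any neural network involved. Here's a toy Python sketch (a Gaussian standing in for the model, everything about it made up for illustration) that fits a distribution to its own lightly filtered output a few times:

```python
# Toy illustration of "model collapse": repeatedly fit a model to its own
# output and watch the distribution degrade. A deliberately simplified
# sketch - a Gaussian standing in for an LLM - not a real training pipeline.
import random
import statistics

random.seed(42)

# Generation 0: "human" data - a normal distribution with real variety.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(1, 11):
    # "Train" on the current data: estimate mean and standard deviation.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # "Generate" the next dataset from the fitted model. As in real
    # pipelines, sampling favors the high-probability middle, so the
    # tails get clipped a little every time.
    samples = sorted(random.gauss(mu, sigma) for _ in range(1100))
    data = samples[50:-50]  # drop the most "surprising" 10%
    print(f"gen {generation:2d}: sigma = {statistics.stdev(data):.3f}")
```

Ten generations in, the spread of the data has quietly evaporated. The model's world gets narrower every pass until the output is homogeneous mush.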
Which is why OpenAI and Google made deals with Reddit to train their respective AI engines on that vast trove of human-generated data.
Only problem is, Reddit is Reddit:
Imagine this: you've carved out an evening to unwind and decide to make a homemade pizza. You assemble your pie, throw it in the oven, and are excited to start eating. But once you get ready to take a bite of your oily creation, you run into a problem - the cheese falls right off. Frustrated, you turn to Google for a solution.
"Add some glue," Google answers. "Mix about 1/8 cup of Elmer's glue in with the sauce. Non-toxic glue will work."
Google will literally tell you to do this, because it found that answer in a ten-year-old Reddit thread.
Obviously it's a joke. Obviously it's a bad idea. But AI doesn't know those things, because AI doesn't know anything except, statistically, which words are likely to be found together.
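If that sounds too dismissive, here's the whole trick at toy scale: a bigram table built from a few sentences, generating by always taking the statistically most common next word. Everything below - corpus, names, the lot - is invented for illustration:

```python
# A bare-bones illustration of "which words are likely to be found
# together": count bigrams in some text, then generate by always picking
# the most frequent next word. No knowledge, no truth-checking - just
# co-occurrence statistics.
from collections import Counter, defaultdict

corpus = (
    "add some glue to the sauce . "
    "add some cheese to the pizza . "
    "the cheese falls off the pizza . "
    "mix the glue into the sauce . "
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(word, length=8):
    """Chain the statistically most likely next word, truth be damned."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("add"))  # -> "add some glue to the sauce . add some"
```

It never "decides" glue is a good idea. It just notices that in its corpus, "glue" tends to follow "some" - which is all an LLM is doing, at vastly greater scale.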
Look, Google didn't promise this would be perfect, and it even slaps a "Generative AI is experimental" label at the bottom of the AI answers. But it's clear these tools aren't ready to accurately provide information at scale.
They work just fine if you don't care about the answers. They work just fine if an answer that looks right is right.
That's why AI is advancing rapidly in image generation (and in producing astoundingly mediocre music) but is absolute garbage at anything that requires a factual answer.
Compared to current 5nm chips (and my new laptop is still 7nm), this process will reduce power consumption by 60 to 70%, or conversely deliver about three times the performance at the same power consumption.
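That "conversely" is just arithmetic: if the new node needs only 30-40% of the old power for the same work, the reciprocal says a fixed power budget buys roughly 2.5x to 3.3x the performance. A quick sanity check (assuming scaling stays roughly linear, which is optimistic at the margins):

```python
# Back-of-envelope check of the process-node claim: a 60-70% power
# reduction at the same performance implies roughly the reciprocal in
# performance at the same power.
for reduction in (0.60, 0.70):
    remaining = 1.0 - reduction
    print(f"{reduction:.0%} less power -> {1.0 / remaining:.1f}x "
          f"performance at the same power")
# 60% less power -> 2.5x performance at the same power
# 70% less power -> 3.3x performance at the same power
```

Which is where "about three times" comes from.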
In an industry constantly producing overpriced products that nobody wants, Apple is second to none. (WCCFTech)
This time with a foldable laptop.
...
I mean, yes, but here the whole laptop is screen - no keyboard - and it's the screen that folds.
How do you type on that, you ask? Simple, you clip a keyboard over the screen.
This appears to be limited to a single module instead of the usual four, but that's okay because CAMM2 modules cost about twice as much as regular memory.
CAMM2 is designed to be compact and to provide dual channel memory from a single module. Perfect for laptops, pointless for desktops.
Because Samsung shipped repair parts glued together.
Literally.
Most importantly, Samsung has only ever shipped batteries to iFixit that are preglued to an entire phone screen - making consumers pay over $160 even if they just want to replace a worn-out battery pack. That's something Samsung doesn't do with other vendors, according to Wiens. Meanwhile, iFixit's iPhone and Pixel batteries cost more like $50.
In Linton's view, simulated phishing tests are like forcing workers to quickly evacuate a building during a fire drill - except that real smoke and fire are being blown through the premises. "Once outside, if you took too long you're scolded for responding inappropriately and told you need to train better for next time. Is this an effective way to instill confidence and practice fire evacuation?"
Well, yes. In an occupation where fires are an everyday event, that's precisely the type of training you will receive.