Okay, I may have paraphrased Tim Lee at Ars just a little there, but if you weigh the promises AI leaders have made against the mathematical problems they face, that is the gist of the situation.
AI - LLM-based generative AI, not the more interesting discriminative AI - is built on an architecture called the transformer, which processes data in a massively parallel way. On short prompts this takes about the same amount of work as a traditional neural network, but because it can exploit highly parallel hardware like graphics cards, you get the result much faster.

The trouble starts when prompts get longer:
The longer the context gets, the more attention operations (and therefore computing power) are needed to generate the next token.
This means that the total computing power required for attention grows quadratically with the total number of tokens. Suppose a 10-token prompt requires 414,720 attention operations. Then:
Processing a 100-token prompt will require 45.6 million attention operations.
Processing a 1,000-token prompt will require 4.6 billion attention operations.
Processing a 10,000-token prompt will require 460 billion attention operations.
So as you make your question more detailed and specific, the amount of time taken to produce an answer increases rapidly.
Work is now underway to replace transformers with more classical neural network designs, which don't have this limitation, but which also lack the transformer's magical ease of development.
But that means that promises of AGI next year are simply lies.
Qualcomm, which has a license to produce Arm chips, bought the startup Nuvia, which had its own, separate license to produce Arm chips.
Qualcomm then produced chips based on Nuvia's design.
Arm sued Qualcomm saying Qualcomm was not licensed to do that.
The jury found that Qualcomm did not breach its Arm license either by buying Nuvia or by producing the chips - the ones used in the new Arm-based laptops, which are not selling particularly well so far.
The jury did not reach a verdict on whether Nuvia itself was in compliance with its Arm license. I'm not sure how relevant that is, though Arm plans to continue legal action.
Broadcom is not making many friends in the process, but it can worry about that after taking a dip in its swimming pool full of money. Based on the latest figures, sales have increased only slightly, but costs have been cut in half.
In addition, Broadcom has effectively killed the VMware reseller market, so even if you want to migrate your company off VMware, you need to get technical support from Broadcom itself.
Happy Birthday Everyone

Video of the Day
Disclaimer: You need to read the comments for once.