With 2.7 billion parameters it generally outperforms Llama-2 and Mistral at 7 billion parameters, while being small enough to run happily on graphics cards with just 4GB of VRAM.
It's comparable to Mistral on language tests, but significantly better on mathematics and programming.
If that's too big for you, Microsoft recently released Phi-1 and Phi-1.5 which will run on any reasonably specced potato.
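The "fits in 4GB" claim comes down to simple arithmetic: the weights dominate VRAM use, at so many bytes per parameter depending on precision. A back-of-envelope sketch (weights only; KV cache and activations add overhead on top, and the model names are just the ones mentioned above):

```python
# Rough VRAM needed just to hold a model's weights, at different precisions.
# Back-of-envelope only: ignores KV cache, activations, and framework overhead.

def weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GiB required to store the weights alone."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for name, params in [("Phi-2", 2.7), ("Mistral-7B", 7.0), ("Llama-2-70B", 70.0)]:
    fp16 = weight_vram_gb(params, 2.0)   # 16-bit floats
    int8 = weight_vram_gb(params, 1.0)   # 8-bit quantization
    q4 = weight_vram_gb(params, 0.5)     # 4-bit quantization
    print(f"{name:12s} fp16: {fp16:6.1f} GiB  int8: {int8:6.1f} GiB  4-bit: {q4:6.1f} GiB")
```

At full 16-bit precision Phi-2's weights alone run just over 5 GiB, so the versions that run happily on a 4GB card are the 8-bit or 4-bit quantized ones.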
I'm happy to see this progress on small models you can run yourself, models with the potential to do a limited set of things well. The push to ever-larger models at astronomical expense is going to fail unless fundamental changes are made to the designs - and to the culture of the companies building them.
This is a similar idea. Rather than building one huge model that tries to handle everything, Mixtral uses eight small (7 billion parameter) models, each able to fit on a commodity graphics card, and each tuned to a specific kind of task.
It outperforms the largest version of Llama-2 (70 billion parameters) while being 30% smaller overall (for all eight models combined) and working on hardware at one twentieth the price.
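The trick that keeps per-token compute near a single small model's cost is the router: for each input, a gating network scores all eight experts and only the top two actually run. This is not Mixtral's actual code; it's a toy sketch of top-2 routing with made-up gating and expert functions standing in for the real networks:

```python
# Toy mixture-of-experts routing sketch: 8 "experts" (here trivial functions),
# a router that scores them, and a forward pass that runs only the top 2.
import math

NUM_EXPERTS = 8
TOP_K = 2

def router_scores(x: float) -> list[float]:
    """Stand-in gating network: one score per expert for input x."""
    return [math.sin(x * (i + 1)) for i in range(NUM_EXPERTS)]

def softmax(scores: list[float]) -> list[float]:
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def expert(i: int, x: float) -> float:
    """Stand-in for expert i's feed-forward network."""
    return (i + 1) * x

def moe_forward(x: float) -> float:
    """Blend the outputs of the top-k experts, weighted by the router."""
    gates = softmax(router_scores(x))
    top = sorted(range(NUM_EXPERTS), key=lambda i: gates[i], reverse=True)[:TOP_K]
    norm = sum(gates[i] for i in top)  # renormalize over the chosen experts
    return sum(gates[i] / norm * expert(i, x) for i in top)
```

Because only two of the eight experts execute per token, inference cost scales with two small models even though all eight sets of weights sit in memory, which is how the combined model stays cheap to run while beating a 70-billion-parameter dense model.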
The models we're looking at are the 4070 Super, 4070 Ti Super, and 4080 Super.
The only one of interest is the 4070 Ti Super, which is a cut-down 4080 and lowers the price point for a good 16GB Nvidia card. I mean, there is the 4060 Ti, but it's kind of crap.
They believe this is limited to the corporate systems and does not extend to their cloud offerings. If it turns out it does extend to their cloud offerings, that would be catastrophic.
I run MongoDB at my day job - just finished migrating a huge cluster from one cloud to another - but that's using regular cloud servers rather than cloud databases.
Despite Wizards of the Coast repeatedly fucking over its own customers with woke bullshit, those two product lines are major money earners for Hasbro. It makes little sense for the company to fire core staff working on both products, and yet the announcements from those affected are all over Twitter.
It's amusing to see people complaining that Sad Girls should be competing with the titans like Hololive and Nijisanji. The entire reason they won this award is because they've grown so much this year that people compare them to the titans.
Neuro-sama is a home-made AI VTuber who loves nothing more than roasting her poor creator. As the creation of a single person - even building off open-source tools - she's truly impressive.
Videos of the Day: Secure Your Damn Basements, People
First it was Hololive.
And now Phase Connect - the aforementioned Sad Girls, Inc.
Who knows what Nijisanji or VShojo could have chained up down there?
Disclaimer: Well, probably Pomu and Kson respectively, but they're not talking. I mean they're talking a lot, but not about this. Yet. Actually, they're mostly talking about frogs. I don't know why.