March 23, 2023
Daily Tech News 23 March 2023
Top Story
Tech News
- There's a looming replication crisis in AI research. (AI Snake Oil)
More specifically, there's a looming replication crisis for any research that involves the products of ChatGPT creator OpenAI, which in reality is anything but open. OpenAI is shutting down access to its Codex AI, giving researchers three days' notice before a hundred scientific papers are consigned to the reproducibility dustbin.
That site looks interesting; it throws cold water on a number of overheated subjects in the AI space.
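To make the reproducibility problem concrete, here's a minimal sketch of what a Codex-based experiment boils down to. It assumes the pre-1.0 openai Python client and the code-davinci-002 model ID that Codex papers generally used; both of those details are my assumptions, not something from the article.

# Sketch: a Codex-dependent experiment, assuming the pre-1.0 "openai"
# client and the "code-davinci-002" model ID (assumptions on my part).
import openai

openai.api_key = "sk-..."  # placeholder

try:
    response = openai.Completion.create(
        model="code-davinci-002",  # hosted Codex model; the weights were never released
        prompt="# Write a function that reverses a string\n",
        max_tokens=64,
        temperature=0.0,
    )
    print(response.choices[0].text)
except openai.error.InvalidRequestError as exc:
    # Once the model is withdrawn, this is all that's left of the experiment.
    print(f"Model gone, replication impossible: {exc}")

Once the endpoint is switched off that try block fails forever, and because the model only ever existed behind OpenAI's API, there is no local copy to re-run.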
- Nvidia's RTX 4000 SFF is a half-height Ada Lovelace professional graphics card. (Tom's Hardware)
Perfect if you need a second graphics card but your special edition Hololive PC case only has half-height slots after the first one.
It has 20GB of RAM and four mini-DisplayPort ports, delivers roughly the performance of the previous generation's RTX 3070, and uses just 70W of power. The 3070 itself has 8GB of RAM and uses 220W of power, so that's a pretty substantial improvement.
The price is, unfortunately, $1250. It would be quite a good card otherwise.
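If you want the "substantial improvement" spelled out, here's a quick back-of-the-envelope comparison using the figures above; it treats "roughly RTX 3070 performance" as straight parity, which is an assumption rather than a benchmark.

# Back-of-the-envelope perf-per-watt comparison from the quoted figures.
# Treats the two cards as performance-equal, which is an assumption.
rtx_4000_sff = {"vram_gb": 20, "power_w": 70}
rtx_3070 = {"vram_gb": 8, "power_w": 220}

perf_per_watt_gain = rtx_3070["power_w"] / rtx_4000_sff["power_w"]
vram_gain = rtx_4000_sff["vram_gb"] / rtx_3070["vram_gb"]

print(f"~{perf_per_watt_gain:.1f}x performance per watt, {vram_gain:.1f}x the VRAM")
# -> ~3.1x performance per watt, 2.5x the VRAM

Call it roughly three times the performance per watt and two and a half times the memory.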
- Meanwhile, Nvidia's H100 NVL has 188GB of RAM and fills four full-height PCIe slots. (AnandTech)
And uses around 800W of power.
Price is not even mentioned, but if you assume it will cost somewhere between a new car and a new house, you won't be disappointed. If you're wondering who is in the market for such a thing, Nvidia's marketing says it offers "12x the GPT3-175B inference throughput as a last-generation HGX A100".
Yeah, it's aimed squarely at OpenAI.
Disclaimer: No, b-e-e-tles. An inordinate fondness of beetles. Now turn that music down.

posted by Pixy Misa at 04:10 AM