On the one hand, it's very fast in most games and in heavy multi-threaded workloads like rendering animated films or compiling the entire Linux kernel, though in the latter case it's usually a little slower than the regular 7950X.
On the other hand, for simpler single-threaded applications it can be slower than much cheaper chips like the Ryzen 7600 or Intel's 13600K.
The reason is that the 7950X3D has mismatched cores. One chiplet runs at full speed, while the other chiplet runs several hundred megahertz slower but has an extra 64MB of cache.
That means you want to run each program on the chiplet that gives the best results for that particular code. For games that's usually the chiplet with the extra cache; for applications it's usually the other one. AMD has driver software for Windows to do this automatically, but it doesn't always pick the right cores.
If your task uses all sixteen cores then it doesn't matter and you're off to the races. If you pick the right game you can also see huge performance gains over any other chip. But if the driver picks the wrong cores, things might slow down by 12% or so against the regular - and cheaper - 7950X.
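For the curious, here's roughly how you'd do the driver's job by hand on Linux: read the per-core L3 size out of sysfs to figure out which cores sit on the V-Cache chiplet, then pin your game to those cores. This is a sketch, not gospel - the sysfs paths are standard, but cache index numbering and core layout can vary, so check what your own system reports before pinning anything.

#!/usr/bin/env python3
# Rough sketch: find the cores attached to the big L3 (the V-Cache chiplet)
# and pin a process to them. Linux only; run as the user that owns the target PID.
# Assumption: each core reports its L3 size at cache/index3/size - true on
# recent kernels, but treat this as illustrative rather than portable.
import glob
import os
import re
import sys

def l3_size_kb(cpu_path: str) -> int:
    """Return the L3 size (in KB) reported for one logical CPU, or 0 if unknown."""
    try:
        with open(os.path.join(cpu_path, "cache", "index3", "size")) as f:
            return int(f.read().strip().rstrip("K"))
    except (FileNotFoundError, ValueError):
        return 0

def cache_chiplet_cpus() -> set[int]:
    """Logical CPUs that see the largest L3 - on a 7950X3D that should be the V-Cache CCD."""
    sizes = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
        cpu = int(re.search(r"cpu(\d+)$", path).group(1))
        sizes[cpu] = l3_size_kb(path)
    biggest = max(sizes.values())
    return {cpu for cpu, size in sizes.items() if size == biggest}

if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    cpus = cache_chiplet_cpus()
    print(f"Pinning PID {pid} to CPUs {sorted(cpus)}")
    os.sched_setaffinity(pid, cpus)   # same effect as taskset -cp from the shell

Save that and point it at an already-running game's PID, or just feed the same core list to taskset; either way you're doing manually what AMD's Windows driver is supposed to do for you.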
If you're building a gaming system you will probably want to wait for the 7800X3D, which has none of this complexity. If you're building a server or a workstation, you'll be fine with the regular 7950X - or the 7900X, or the 7900.
The one notable - very, very notable - thing that comes out of these benchmarks is that while the 7950X3D is a 120W part and Intel's competing 13900K is a 125W part, under full load the AMD chip uses 140W and the Intel chip uses 330W.
Which is far too much. Don't buy the 13900K. Even the 13600K uses 100W more than the 7950X3D.
You can dial down the power consumption of both AMD and Intel chips. AMD's chips suffer minimal performance loss until you get to really low power consumption, while the performance impact on Intel chips is immediate and significant.
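If you want to watch those package power numbers for yourself rather than taking the reviewers' word for it, the Linux kernel exposes the CPU's energy counter through the powercap interface. A minimal sketch, assuming /sys/class/powercap/intel-rapl:0 exists on your system (it's named for Intel's RAPL, but recent kernels expose Zen package energy the same way) and that you have permission to read it:

#!/usr/bin/env python3
# Minimal sketch: estimate CPU package power draw from the RAPL energy counter.
# Assumes /sys/class/powercap/intel-rapl:0/energy_uj is readable - availability
# and permissions vary by kernel and distro, so treat this as illustrative.
import time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package energy, microjoules
INTERVAL = 1.0                                          # seconds between samples

def read_uj() -> int:
    with open(ENERGY) as f:
        return int(f.read())

if __name__ == "__main__":
    prev = read_uj()
    while True:
        time.sleep(INTERVAL)
        cur = read_uj()
        delta = cur - prev
        # The counter wraps eventually; skip the negative delta when it does.
        if delta >= 0:
            print(f"package power: {delta / INTERVAL / 1_000_000:.1f} W")
        prev = cur

To actually cap the power rather than just measure it, Intel chips should accept a new long-term limit written (as root) to constraint_0_power_limit_uw in that same directory; on AMD the supported route is Eco Mode or a PPT limit set in the BIOS or Ryzen Master.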
Tech News
How does the Ryzen 7950X3D perform under Linux? It's complicated. (Phoronix)
On a geometric mean of 400 benchmarks (!) it's 3% slower than the regular 7950X while using 40% less power. Compared to the 13900K it's 11% faster while again using 40% less power.
On some specific benchmarks that extra cache lets it blow everything else out of the water, so if you are running some specific computational kernel, it's worth taking a look through those benchmarks; you might get a 50% speedup over any other desktop chip at minimal cost.
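A side note on why Phoronix quotes a geometric mean rather than a plain average: with 400 tests in wildly different units, the geomean of the relative results keeps a handful of huge cache-driven wins from swamping the overall number. A toy illustration - the ratios below are invented for the example, not Phoronix's data:

# Toy illustration of why aggregate CPU results use a geometric mean.
# Each entry: X3D result relative to a baseline chip (>1.0 means the X3D is faster).
# These numbers are made up for the example.
from math import prod

ratios = [2.0, 0.97, 0.98, 0.99, 0.96, 1.01, 0.98, 0.97]

arith = sum(ratios) / len(ratios)
geo = prod(ratios) ** (1 / len(ratios))

print(f"arithmetic mean: {arith:.3f}")   # pulled up noticeably by the single 2x outlier
print(f"geometric mean:  {geo:.3f}")     # much less sensitive to the one big win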
Before Musk took over, Twitter employed 7500 people directly plus over 5000 contractors, so he's cut costs rather significantly.
As one employee who was just laid off told me, "I think he's just tearing this thing down to the studs and trying to run as lean as possible till the market turns around."
Maybe hire that one back; he's smarter than the entire mainstream media put together.
Disclaimer: And infinitely smarter than the comments on that article. Rule One: Don't read the comments.