
Marekul

Suspended
Jan 2, 2018
376
638
AVX512 is a special instruction set that lets the CPU process large chunks of data at once. It works well for doing the same thing repeatedly across a set of data - things like processing a lot of pixels at once, or processing a lot of audio frames at once. Color correction, audio effects, etc. While image processing has been moving to the GPU, audio is still done on the CPU, so you see AVX a lot in audio. Apple seems pretty into AVX512, so my guess is Logic has a lot of AVX512 optimization.

If you wanted to do a gain boost on audio or something, that's a pretty basic acceleration case for AVX. A lot of audio filters are probably AVX-accelerated, especially on the Mac, where the Accelerate framework is built on AVX. AVX512 is the latest version, and Apple has been doing a lot of work with it.
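To make the gain-boost case concrete, here is a minimal sketch of what such a loop might look like with AVX-512 intrinsics. This is purely illustrative - the function name and buffer layout are assumptions, not anything from Logic or Accelerate - and it needs a compiler targeting AVX-512F (e.g. -mavx512f).

// Hypothetical sketch: multiply every audio sample by a gain factor,
// 16 floats per instruction, with a scalar tail for the leftovers.
#include <immintrin.h>
#include <cstddef>

void apply_gain_avx512(float* samples, std::size_t count, float gain) {
    const __m512 vgain = _mm512_set1_ps(gain);      // broadcast gain to all 16 lanes
    std::size_t i = 0;
    for (; i + 16 <= count; i += 16) {
        __m512 v = _mm512_loadu_ps(samples + i);    // load 16 samples
        v = _mm512_mul_ps(v, vgain);                // 16 multiplies in one instruction
        _mm512_storeu_ps(samples + i, v);           // store them back
    }
    for (; i < count; ++i)                          // remaining samples
        samples[i] *= gain;
}

The same loop written with AVX2 would process 8 floats per instruction instead of 16, which is where the rough "twice as fast" claim later in the thread comes from.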

The one place I've seen the Xeon easily beat Threadripper in benchmarks is on AVX workloads.

I don't know if that's enough to keep Apple on Intel. But it would be enough to give them pause if they thought they might be sacrificing Logic or FCPX performance by switching over.

I think the business and contract entanglements with Intel are more important. But for pro Mac apps, Threadripper is not a clear winner in all workflows.


Do you have any source for this claim?
My bet would be that AVX optimisations in software like Logic bring only marginal gains that are easily offset by the overall performance gain. Most of the FCPX workload is offloaded to the GPU anyway; it would really surprise me if there were any recognisable differences in performance.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
So why aren’t other workstation vendors falling over themselves to switch to AMD?

HP is but one example . . . There are zero AMD options on the Z8, and if HP is to be believed there won’t be one.
Dell is currently working on workstations with Threadripper and EPYC. Oh, and BTW, the most powerful supercomputer in the world, Frontier, is based solely on AMD CPUs and GPUs.

For those still believing that AMD is a gaming monster: no. Gaming is the only place in technology right now where Intel still has an edge over AMD. To the degree that there are games in which, if you pair an RX 5700 XT with a Ryzen 3700X, for example, you effectively get an RX 5700, because of how big the performance loss is compared to what you would get with an Intel CPU.

Compute and professional work are where Intel is being pushed by AMD's Zen 2 CPUs. People who would not consider an AMD-based Mac, or any other AMD-based computer, are genuinely having problems with their heads.

There are way too many benefits currently to having an AMD CPU - security, performance, efficiency, total cost of ownership - for it to be possible to pass up.

At least that holds for professional work. Gamers - you should still buy Intel. On the other hand, Zen 3 will solve every gaming problem AMD CPUs have, so if you buy an Intel platform today you will still be pushed around by those with AMD Zen 3 CPUs, and Intel platforms are dead ends with no upgrade path. While if you have an AMD MoBo... ;)
 
Last edited:
  • Like
Reactions: ssgbryan

cube

Suspended
May 10, 2004
17,011
4,972
Do you have any source for this claim?
My bet would be that AVX optimisations in software like Logic bring only marginal gains that are easily offset by the overall performance gain. Most of the FCPX workload is offloaded to the GPU anyway; it would really surprise me if there were any recognisable differences in performance.
It is easier to program a bit of AVX than GPGPU.

I guess one could use Parallel STL now.
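For what it's worth, a rough sketch of that idea, assuming a C++17 toolchain whose standard library ships the parallel algorithms (the function and variable names are just illustrative): the par_unseq policy tells the implementation it may both thread and vectorize the loop, so the compiler is free to emit AVX code without you writing intrinsics.

#include <algorithm>
#include <execution>
#include <vector>

// Same gain multiply, expressed with the C++17 parallel STL.
void apply_gain_pstl(std::vector<float>& samples, float gain) {
    std::transform(std::execution::par_unseq,
                   samples.begin(), samples.end(), samples.begin(),
                   [gain](float s) { return s * gain; });  // may vectorize to AVX
}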
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,311
3,902
Dell is currently working on workstations with Threadripper and EPYC. Oh, and BTW, the most powerful supercomputer in the world, Frontier, is based solely on AMD CPUs and GPUs.

Frontier isn't even assembled yet, so it isn't the most powerful supercomputer in the world. Other folks will have a shot at assembling another system by late 2021, and it isn't necessarily going to be the winner that fall. (It probably will be, because someone would have to spend a giant bucketload of money to beat it, and nobody is going to find that kind of money in spare change under the couch cushions. But the Chinese government are big mega-spenders; they may dump a giant bucket of money here before the deadline.) "Most powerful supercomputer in the world" is as much about the size of your budget as about the tech difference between Intel, Nvidia, ARM and/or AMD.
 

defjam

macrumors 6502a
Sep 15, 2019
795
735
The thread title could substitute "HP" for Apple, as the HP Z8 does not offer any AMD option CPU-wise.

I wonder why not?

;)
One could make the argument that the Z8 was released a couple of years ago, whereas the 2019 Mac Pro was released a month ago.
 
  • Like
Reactions: throAU

cube

Suspended
May 10, 2004
17,011
4,972
In my opinion, Threadripper is a bunch of stuff they threw together to get their core market, gamers and kids at home, excited. That's why people aren't making serious clusters with AMD and why I wouldn't buy an AMD Mac.
Threadripper was a secret grassroots project by a small group of AMD engineers.
 
  • Like
Reactions: ssgbryan

deconstruct60

macrumors G5
Mar 10, 2009
12,311
3,902
We are talking about the APUs. That is what Anandtech said.

Maybe it is about the cache and the LPDDR4X.

There are probably multiple contributing factors, but a major one is redesign. Anandtech had a meeting with Lisa Su at CES.

".. ( about APU being Vega and not Navi)
LS: ....It's always how we integrate the components at the right time. Certainly the Vega architecture is well known, very well optimized. It was always planned that this would be Zen2 + Vega.


AnandTech: The rearchitect of Vega for 7nm has been given a +56% performance increase. Does this mean that there was a lot left on the table with the design for 14/12nm? I’m trying to understand how you were able to pull so much extra performance from a simple process node change.
LS: When we put Vega into a mobile form factor with Ryzen 4000, we learned a lot about power optimization. 7nm was a part of it sure, but it was also a very power optimized design of that architecture. The really good thing about that is that what we learned is all applicable to Navi as well. David’s team put a huge focus on performance per watt, and that really comes out of the mobile form factor, and so I’m pleased with what they are doing. You will see a lot of that technology will also impact when you see Navi in a mobile form factor as well. ..."


Implicitly it appears that Vega 20 was more of a "quick shrink" of Vega 10. They tweaked a few things but really didn't optimize from scratch for 7nm. It sounds like AMD did more of a "do over" for the Vega in the APU. They reimplemented Vega using what they learned from doing the initial steps with the 7nm Vega 20 and the previous APU Vega implementation. That fed in part into Navi, and this Vega redo will feed back into the next iterations of Navi. It is as much the knowledge feedback at 7nm as the process itself. (Intel doing 14nm++++ is partially along the same lines, only Intel hasn't done a very deep "reimplement" along the 14nm span.) The previous APU GPU being "not so good" probably played a role too.

The GPU ABI (binary interface) didn't change much but the implementation did. It won't be surprising if Arcturus is a scaled-up implementation and extended tweak of these lessons learned, as a "Vega 30" (or whatever they will call it). Not as big of a bump, because they'll be pushing it harder with higher power levels, but it probably will idle lower and have a more forgiving power curve.

It looks as though AMD is doing more focused designs across a wider set of targeted markets (i.e., a bigger R&D spend) rather than trying to make one design fit into multiple categories. They aren't doing as many varied implementations as Intel, but it is more than before. The real tell will be whether AMD can broaden and scale enough to take Intel's place before Intel fixes its problem of being too broad and too spread out, with mistakes to fix.

Apple could quit Intel in late 2020 and AMD would be broad enough to fit the subset of the Mac lineup that was moving in 2020. (The Mac Pro probably would not be.) For moving the Mini and iMac (and possibly the iMac Pro; they'd probably need to tweak the enclosure), AMD has the pieces.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,311
3,902
Apple should have put its money/effort/resources into AMD. Apple's current problems:

1) Delay. They delayed a Mac Pro refresh for nearly 10 years. Specifically, they have ignored a large segment of potential and certain buyers.

More AMD in the Mac Pro would have meant even more delays. Folks are getting 20/20 hindsight here.
The selections for the Mac Pro would have needed to be made back in Spring/Summer 2017. AMD was doing better back then but didn't have the track record they have had over the last two years.

The hiccup of the Mac Pro having a 580X instead of perhaps a W5600X (affordable Navi) card is indicative that AMD would not have meant better timing. Apple would have been doubling down on possible delays for both the CPU and the GPU.


The 32-core Threadripper is coming months after the other 3rd-gen Threadrippers arrived because AMD couldn't get it out (volume constrained). Apple wouldn't have been magically special at getting product.
Going AMD carried a higher risk of being late than going Intel did back in 2017.

For the next Mac Pro (and the 2020 Mac products whose groundwork was probably laid back in 2018 or early 2019), that is different.

That 10 year span for the Mac Pro covered numerous years where they were not doing much work, but the lead time on this product isn't the 6-12 months lots of folks handwave about.


2) Shrinking the market segment. Apple decided that the Mac Pro wasn't for pro-sumers or enthusiasts anymore, but strictly for a narrow segment. Related closely to 3). This is actually huge.

That really isn't true. There is a difference between pro-sumers and enthusiasts in general and those dogmatically dedicated to the "box with slots" form factor. The pro-sumer and enthusiast market is larger than that. The number of pro-sumers and enthusiasts buying laptops and all-in-ones demonstrably shows that.

AMD's Ryzen 4000 H-series APU is putting what was until recently "desktop"-class CPU horsepower into laptops. The MBP 16" basically outclasses most of the pre-2013 Mac Pros. The market has changed since 2008-2009, when iMacs only had close to mobile-class CPUs and GPUs.

Apple has followed the market just as much as it has "unilaterally decided that...". The performance space that doesn't significantly overlap with the rest of the Mac lineup has moved in the last 10 years. It has moved for other vendors too... The gap here is more about the willingness to sell overlapping, and more likely fratricide-inducing, products.


When I was going through graduate school and learning Graphic Design and the such, Apple made the Mac Pro affordable for my needs, although it was still expensive but palatable--I could work longer summer hours to get one by the end of summer. The Mac Pro is absolutely out of this realm for any college student.

Like, holy cow, couldn't possibly learn graphic design using an MBP 16" as a tool now. *cough* Or using an iMac (or iMac Pro). Impossible? No. It is the tool, not a particular product form factor, that is most conducive to learning.


Apple driving the base price higher on the Mac Pro is a somewhat risky long-term move for Apple. They could hit a pricing death spiral here where fewer folks buy because it is expensive, and then the price goes up because volume is lower (rinse and repeat). However, that isn't solely Apple's move. Some customers have shifted too, and that shift matters.

The bigger problem is Apple going back into Rip van Winkle mode on the Mac Pro for a long time. If Apple would cycle the Mac Pro into "superseded" mode on a regular basis (Mac Pro 2019, then Mac Pro 2021, then Mac Pro 2023, etc.), then the older versions would iterate down into pricing slots closer to the older Mac Pro. Folks who "needed" a box-with-slots Mac Pro as a college student with a limited budget would have something to grasp at if dogmatically averse to other viable form factors.

Given the competitive market dynamics for the major components in the space the current Mac Pro is targeting, going to sleep for 3-4 years would be bad even for the folks who do have the budget (and return-on-investment path) for the current Mac Pro. One factor in why Apple may not be quite aligned here is that they were doing a whole lot of nothing for a long time, so when they finally jumped back in they weren't necessarily aligned with what is going on.
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
Do you have any source for this claim?

Not specifically but...

The Accelerate framework is AVX512 optimized. And Logic uses Accelerate.

(That also means any app using Accelerate is probably getting AVX512 optimization.)
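To show roughly what "uses Accelerate" looks like in practice, here is a minimal sketch of the same gain multiply done through vDSP, the DSP part of the Accelerate framework. The function and buffer names are illustrative; the point is that a single library call replaces the hand-written loop, and Accelerate picks the vector code path for the host CPU (per the claim above, AVX512 on recent Intel Macs).

#include <Accelerate/Accelerate.h>

// Illustrative only: scale an audio buffer in place via Accelerate's vDSP.
void apply_gain_accelerate(float* samples, vDSP_Length count, float gain) {
    // C[i] = A[i] * gain, walking the buffer with a stride of 1
    vDSP_vsmul(samples, 1, &gain, samples, 1, count);
}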

My bet would be that AVX optimisations in software like Logic bring only marginal gains that are easily offset by the overall performance gain.

Vectorization gains are typically very significant. AVX512 operates on 512-bit registers - twice the width of AVX2's 256-bit registers - so AVX512 is going to be roughly twice as fast as AMD's AVX2 alone on vectorized code.

Most of the FCPX workload is offloaded to the GPU anyway; it would really surprise me if there were any recognisable differences in performance.

That's not entirely true. There is another active thread right now noting that FCPX is using a lot of CPU for certain workflows.

Like I said, there are times that a GPU is slower than AVX512. It's very likely that FCPX is using Accelerate for these situations, which in turn uses AVX512.

Again, I don't think Apple would hold out on Intel _just_ because of AVX512. But for the Mac Pro and the intended market for that box, it's something to think about.
It is easier to program a bit of AVX than GPGPU.

I guess one could use Parallel STL now.

That too. AVX is more common than GPU acceleration because it's a lot easier to program. It's also cross platform, whereas Metal isn't.
 
  • Like
Reactions: Marekul

danwells

macrumors 6502a
Apr 4, 2015
778
611
A top-end MBP 16" outraces literally any Mac Pro you could have bought a month ago...The top 27" iMac is faster than that. What people are complaining about isn't that Apple doesn't make a machine suitable for any workload (and at some fairly reasonable prices).

The complaint is that there is a very specific form factor people want that Apple doesn't really make. They make only two models without integrated displays, they don't want to make any at all, but there are two specific use cases they can't address with their preferred integrated-display Macs.

The first is the "this is faster than we can cool in an iMac case" monster. The Mac Pro has always started above where the iMac left off - as the iMac gets more capable, the Mac Pro retreats farther into the land of exotic Macs. The PowerMac G5 did the same thing before the Mac Pro.

The reason the current Mac Pro is fantastically expensive is because the iMac has gotten very fast. Remember that earlier iMacs had mobile CPUs and GPUs. The idea of an iMac running a top-level desktop CPU and a midrange desktop GPU is relatively recent.

The only other non-integrated display Mac is the Mini - which started out as a media center machine, and has become largely a lightweight server. They keep GPUs and faster CPUs out of it to keep the desktop focus where they want it - on various versions of the iMac.
 

fendersrule

macrumors 6502
Oct 9, 2008
423
324
Dell is currently working on Workstations with Threadripper and EPYC. Oh and BTW, The most powerful supercomputer in the world, Frontier is based solely on AMD: CPUs and GPUs.

For those still believing that AMD is gaming monster. No. Gaming is the only place right now in technology where Intel still has an edge over AMD. To the degree, that there are games in which if you pair RX 5700 XT, for example with Ryzen 3700X, you effectively get RX 5700, because of how big performance loss you get, compared to what would you get with Intel CPU.

Compute and professional work is where Intel is being pushed by AMD Zen 2 CPUs. People who would not consider AMD based Mac, or any other computer, are genuinely having problems with their heads.

There is way too many benefits currently to have AMD CPU: security, performance, efficiency, Total-Cost-of Ownership, that it is impossible to pass.

At least that stays for professional work. Gamers - you should still buy Intel. On the other hand, Zen 3 will solve every gaming problem AMD CPUs have, so if you buy Intel platform today, you still will be pushed around by those having AMD Zen3 CPUs, and Intel paltforms are dead, there is no upgrade path. While if you have AMD MoBo... ;)

I think you are being too harsh on AMD for gaming. Gaming on the 3xxx series Ryzen (consumer CPU) vs Intel's best gaming CPU....what is it now, the 9700K or somethingish? It's a 3-8 FPS difference......and it only shows up when you're already well over 140 FPS......at 1080p, which is LOW RES for 2019/2020 gaming....it doesn't prove/say a whole lot....

I would say AMD is just fine and dandy for gaming, especially at 1440p and above. The 3xxx Ryzens are pretty much at parity for the most part at this point. Microsoft and Sony think so...

Zen 3, however, is estimated to bring AMD to a 0 FPS difference with its ~10% (or higher) increase in IPC - maybe even put it ahead of Intel....

I would not make a blanket statement that Intel is better for gaming. There are certainly games out there that always run better on Intel, but from my research it's a minority - they can be counted on less than one hand. I would say that if you want to maximize FPS by a negligible amount whilst spending $150 more, then you'll gain a few FPS over AMD at certain resolutions in certain games that you will never notice. That's more accurate. But there are cons: your CPU will run hotter, your CPU won't multi-task as well, and you'll spend more for that "gain" - better to game with AMD and spend more on the GPU, where the true price/performance is, to actually boost your FPS by more than "a couple".

I'm planning to build a gaming/editing rig that's quiet and low heat, and there's no better option than the 3900x/4900x. It just doesn't make any sense to go with Intel.
 
Last edited:

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I think you are being too harsh on AMD for gaming. Gaming on the 3xxx series Ryzen (consumer CPU) vs Intel's best gaming CPU....what is it now, the 9700K or somethingish? It's a 3-8 FPS difference......and it only shows up when you're already well over 140 FPS......at 1080p, which is LOW RES for 2019/2020 gaming....it doesn't prove/say a whole lot....

I would say AMD is just fine and dandy for gaming, especially at 1440p and above. The 3xxx Ryzens are pretty much at parity for the most part at this point. Microsoft and Sony think so...

Zen 3, however, is estimated to bring AMD to a 0 FPS difference with its ~10% (or higher) increase in IPC - maybe even put it ahead of Intel....

I would not make a blanket statement that Intel is better for gaming. There are certainly games out there that always run better on Intel, but from my research it's a minority - they can be counted on less than one hand. I would say that if you want to maximize FPS by a negligible amount whilst spending $150 more, then you'll gain a few FPS over AMD at certain resolutions in certain games that you will never notice. That's more accurate. But there are cons: your CPU will run hotter, your CPU won't multi-task as well, and you'll spend more for that "gain" - better to game with AMD and spend more on the GPU, where the true price/performance is, to actually boost your FPS by more than "a couple".

I'm planning to build a gaming/editing rig that's quiet and low heat, and there's no better option than the 3900x/4900x. It just doesn't make any sense to go with Intel.
In CPU limited scenarios - yes, Intel is still the king. Especially 9700/9700K.

Any CPU is fine and dandy for gaming at and above 1440p, because in those situations you are GPU limited, not CPU limited.

And no, 1080p is not low-res in 2019/2020. It's still the gold standard. Until we get mainstream GPUs (like the RX 5600 XT and RTX 2060) that can maintain a 144 FPS minimum at 1440p, highest possible settings, in every single e-sports game, 1080p will still be the gold standard.

And at that FPS, at any resolution - you still will be CPU limited.
 

fendersrule

macrumors 6502
Oct 9, 2008
423
324
I consider gaming at 1080p to be low res in 2019/2020. You have consoles that were released 3 years ago that boast 4K gaming. The number of 1440p screens today is about 100x what it was 10 years ago. There are actually lots of IPS 1440p @ 120Hz/144Hz-and-above panels to pick from now, thankfully.

What you are talking about is e-sports stuff, where twitchy 1080p 320Hz gaming is solely what matters and is strictly competitive. I don't think that's most gamers.

Take a look at benchmarks now compared to the past. There are always 1440p benchmarks. In fact, some of them are starting to have fewer 1080p benchmarks than 1440p benchmarks....
 

cube

Suspended
May 10, 2004
17,011
4,972
I consider gaming at 1080p to be low res in 2019/2020. You have consoles that were released 3 years ago that boast 4K gaming. The number of 1440p screens today is about 100x what it was 10 years ago. There are actually lots of IPS 1440p @ 120Hz/144Hz-and-above panels to pick from now, thankfully.

What you are talking about is e-sports stuff, where twitchy 1080p 320Hz gaming is solely what matters and is strictly competitive. I don't think that's most gamers.

Take a look at benchmarks now compared to the past. There are always 1440p benchmarks. In fact, some of them are starting to have fewer 1080p benchmarks than 1440p benchmarks....
People are still buying low end APUs and playing at 720p.
OK, now I see everything. Which means that the design that was supposed to be Dali (4C/8T + Navi GPU) is something else, named differently.

AMD quite possibly renamed RavenV2 as Dali APUs, and called it a day.

Nomenclature changes. But it also gives a little perspective into something. It is quite interesting how everything unfolds.
It appears that Dalí has the same CPUID as Banded Kestrel, as now expected.
 

fendersrule

macrumors 6502
Oct 9, 2008
423
324
Hackintosh?

Naw, I'd probably go Intel if that were the case. I just want something reliable that will last a long time and can "do it all" without any software headaches and upkeep. I'm going to have to switch over to Windows full time. The good news is that I've already become accustomed to Windows on my 5,1 because I typically stay in Windows and only boot into macOS for random things (photo library, password files, etc).

I'll have to begin that migration process which will be painful, but necessary. Certainly going to miss some things.
 

cube

Suspended
May 10, 2004
17,011
4,972
Pollock showed up in drivers. I expect Zen+ (Pinnacle, Picasso).
+#define ASICREV_IS_POLLOCK(eChipRev) (eChipRev == RAVEN2_15D8_REV_94 \
+ || eChipRev == RAVEN2_15D8_REV_95 \
+ || eChipRev == RAVEN2_15D8_REV_E9 \
+ || eChipRev == RAVEN2_15D8_REV_EA \
+ || eChipRev == RAVEN2_15D8_REV_EB)
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I consider gaming at 1080p to be low res in 2019/2020. You have consoles that were released 3 years ago that boast 4K gaming. The number of 1440p screens today is about 100x what it was 10 years ago. There are actually lots of IPS 1440p @ 120Hz/144Hz-and-above panels to pick from now, thankfully.

What you are talking about is e-sports stuff, where twitchy 1080p 320Hz gaming is solely what matters and is strictly competitive. I don't think that's most gamers.

Take a look at benchmarks now compared to the past. There are always 1440p benchmarks. In fact, some of them are starting to have fewer 1080p benchmarks than 1440p benchmarks....
Sure for those who do not need anything above 60Hz refresh rate, there is no difference between CPUs.

But when you need 144 Hz refresh rate, or higher - well, there is one option. Intel.

I can give you one great example of this: Overwatch. A title so "badly" Intel-optimized that you can max out your GPU with a measly Core i5-9400F CPU. You cannot do that in this game with a Ryzen 5 2600 or 3600.

And this is the general rule rather than the exception, at least when it comes to e-sports.
 

fendersrule

macrumors 6502
Oct 9, 2008
423
324
Hmmm...I'm seeing well over 140 FPS in Overwatch with AMD CPUs...even all the way back to first-gen Ryzen...

I still don't think you've built up a case, as the difference between Zen 2 and Intel, even for super-high FPS, is super small and in the single-digit percentages. Car analogy: 780 horsepower is better than 772 horsepower, but can you feel or see it? Both are super high performance, so it's getting moot.

You're kind of implying that AMD has no chance of being anywhere close to Intel. That has all changed with Zen 2. Please share links. Of course there are some Intel-optimized titles where Intel will be 10-15 FPS faster, but that's only a handful, and you're still clearing such a high FPS that you won't see it...you're still in a >700 HP car....

Ryzen 1600 w/ 1080 GTX in Overwatch on "Ultra":


I get what you're saying about e-sports, but that's not the average gamer. And whatever lead is there is shrinking YoY.

FYI, the Ryzen 3600 4-game average @ 1440p is 110 FPS w/ a 2070 Super, assuming these are triple-A titles. The 9700K is only 6 FPS better on average, which is in the noise (as expected). Well over 60 FPS is easily obtainable at 1440p. Hell, my Mac Pro 5,1 is well over 60 FPS @ 1440p on most triple-A games that I've played...and that's an old i7!

 
Last edited:

throAU

macrumors G3
Feb 13, 2012
8,944
7,106
Perth, Western Australia
One could make the argument that the Z8 was released a couple of years ago, whereas the 2019 Mac Pro was released a month ago.

One could also make the argument that Intel has a history of predatory business models with regard to pricing.

If HP were to start shipping AMD, maybe they'd lose massive amounts of "gold partner" pricing incentives.

The enterprise workstation market is fairly conservative. Until Threadripper 3000, which just dropped, there were definite niche workloads where Intel was a much better choice.

Now? That niche is much much smaller, if it even still exists at all.

Apple missed a golden opportunity here.

They could have snapped up every 64-core Threadripper AMD could make, and gotten a time-limited exclusivity contract with them, limiting the rest of the PC industry to 48 cores (on AMD; 28 on Intel) or whatever.

Then they could have reduced the cost of the Mac Pro, gotten better power consumption and less heat, increased the profit margin on it, and had a box that actually had some legitimate advantage in most workloads vs. the PC-based Intel (or AMD with fewer cores) competition.

But they didn't.
 
  • Like
Reactions: ssgbryan

DoofenshmirtzEI

macrumors 6502a
Mar 1, 2011
862
713
I ran across the AWS re:Invent video from the session put on by AMD to promote the Epyc and Radeon stuff that AWS is offering. As an aside, the video had 89 views as of the time I was writing this so it's definitely not a hot topic. Note this was done in December, so not about the stuff just announced.

AMD Epyc instances are offered at a 10% discount from Intel instances. Since they were launched last year, AWS has been offering them in the T, M, and R lines (burstable, balanced, and memory weighted). This year they are offering some "semi-custom" processors in the C line (compute weighted) with faster processors than you can get elsewhere, so apparently AWS is buying that yield instead of Apple.

Notably missing from the AMD offerings on AWS is anything in the X line (extreme memory weighted). The most memory in an AMD offering is 768 GiB. The X line goes up to 3,904 GiB, all Intel.

I'll be keeping an eye out for any announcements about the new stuff going into any AMD instances, but it is interesting that AMD had to sponsor a session to tell a crowd notorious for jumping on any cost optimization that they could save 10% on their EC2 bill by tacking an "a" on the end of the instance type.
 

throAU

macrumors G3
Feb 13, 2012
8,944
7,106
Perth, Western Australia
I ran across the AWS re:Invent video from the session put on by AMD to promote the Epyc and Radeon stuff that AWS is offering. As an aside, the video had 89 views as of the time I was writing this so it's definitely not a hot topic. Note this was done in December, so not about the stuff just announced.

AMD Epyc instances are offered at a 10% discount from Intel instances. Since they were launched last year, AWS has been offering them in the T, M, and R lines (burstable, balanced, and memory weighted). This year they are offering some "semi-custom" processors in the C line (compute weighted) with faster processors than you can get elsewhere, so apparently AWS is buying that yield instead of Apple.

Notably missing from the AMD offerings on AWS is anything in the X line (extreme memory weighted). The most memory in an AMD offering is 768 GiB. The X line goes up to 3,904 GiB, all Intel.

I'll be keeping an eye out for any announcements about the new stuff going into any AMD instances, but it is interesting that AMD had to sponsor a session to tell a crowd notorious for jumping on any cost optimization that they could save 10% on their EC2 bill by tacking an "a" on the end of the instance type.

Cloud vendors are just getting started with EPYC. They have a heap of existing Intel hardware that is still paying itself off. However, you can be sure that EPYC will be getting deployed almost exclusively over the next 2 years, as Intel simply have nothing competitive for the vast majority of cloud workloads, and certainly nothing at anywhere near the same performance per watt.

Even if they could supply, which they can't. They are at capacity and can't keep up due to being stuck on the 14nm process and having to make dies 2-4x the size they originally anticipated on this process node, which means fewer dies per wafer and a much higher number of defective dies.

They can't compete on price, and they can't supply sufficient volume even if they could compete. Intel are pretty screwed at the high end until 2022 according to their current public roadmap, which has consistently slipped since 2015.
 

DoofenshmirtzEI

macrumors 6502a
Mar 1, 2011
862
713
However, you can be sure that EPYC will be getting deployed almost exclusively over the next 2 years, as Intel simply have nothing competitive for the vast majority of cloud workloads, and certainly nothing at anywhere near the same performance per watt.
Quite frankly, I don't care what the fanboys on either side (AMD or Intel) say. The proof is in the pudding, and even the AMD guy is carefully couching his language about exactly what types of workloads will make a 10% discount pay for itself. "A dev/test environment is just ideal for these instances", and I start to look for whatever skunk is making that smell.

AWS customers can easily spin up an instance with an a, and an instance without, and run their workload on both and see real world numbers. And still the AMD guy is having to get up there and say, "Hey, try us out guys, please! Please?"

If you're slicing and dicing a server for instances, AMD's memory constraints mean there's not a lot of memory per core. That limits how AWS can slice up each box. You need a lot of T's to balance a few R's on a box when you can't shove a lot of memory into it.
 

cube

Suspended
May 10, 2004
17,011
4,972
+#define ASICREV_IS_POLLOCK(eChipRev) (eChipRev == RAVEN2_15D8_REV_94 \
+ || eChipRev == RAVEN2_15D8_REV_95 \
+ || eChipRev == RAVEN2_15D8_REV_E9 \
+ || eChipRev == RAVEN2_15D8_REV_EA \
+ || eChipRev == RAVEN2_15D8_REV_EB)
Could Pollock just be AM4 Banded Kestrel?
 