
deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
Intel bought SMT when they purchased Digital's Alpha team and foundries.

2001 was about six years after the term SMT (as used now) first showed up. (It was not some secret Coca-Cola recipe that nobody else had.)

“... As Intel has become more talkative about SMT technology, it is somewhat downplaying the significance of Alpha engineers who will be working on SMT for Intel. "Our engineers are already looking at that, it wasn't something we weren't looking at before," Kircos said of SMT. ....”
https://www.computerworld.com.au/article/25703/intel_gets_smt_engineers_along_alpha/

The Alpha team was tracked toward Itanium, not particularly toward x86.


Susan Eggers, Henry Levy, and Dean Tullsen were publishing SMT papers by 1995 (https://dada.cs.washington.edu/smt/). Intel got parts of an implementation of SMT. They didn't buy the entire concept.


https://en.m.wikipedia.org/wiki/Simultaneous_multithreading#Historical_implementations


October 2000 article
https://www.eetimes.com/document.asp?doc_id=1142447

Some patents:
U Wash SMT registers (1998): https://patents.google.com/patent/US6092175A

Slightly different usage of the term:
Intel (1997): https://patents.google.com/patent/US6658447B2/en

IBM shows up in numerous places in the citations of those .
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
A few reasons, at least.

Apple would get bulk discounts from Intel for purchasing CPUs for desktop and laptops. The discounts are diminished by having Intel and AMD options.

One major assumption there is that Apple isn't going to shift a significant fraction of their Mac laptops over to the A-series. For example, if Apple moved the MBA and a revived MacBook (and perhaps some substantive percentage of the two-port MBP 13"), then a major chunk of that bulk discount goes away regardless.

In 2019 Apple doesn't have that. But in 2020-21 they could move that way if they're willing to 'split' the Mac market into two architectures for several years.

For the Xeon W, Intel charges an extra $3K+ just to address more than 1TB of memory. Just avoiding that ">1TB" tax is a discount right there, with no bulk purchase at all. And AMD's list prices are not particularly higher than Intel's; there is a discount just by switching over, before any bulk discount is negotiated.

Apple cherry-picking AMD just for the Mac Pro and leaving the rest on Intel (or waiting on some magical ARM solution)? Probably not. But dumping Intel for the entire desktop lineup plus the MBP 15-16" and the 4-port MBP 13" would still be a sizable order.

In short, if they narrow the scope that AMD has to cover and skew it toward the desktop, where they can mostly use discrete GPUs, then it is workable. AMD trying to cover exactly what Intel covers now (with just the MacBook dropped)? That is more diverse (but more bulk) and more problematic for AMD.


Second, AMD processors have slower per core performance which is slower when you aren't doing things like rendering. Most people aren't rendering all the time and a lot of rendering is now done on GPU or render farms anyway.

The per-core performance differences aren't generally all that big at this point (given AMD's Zen 2 and Intel's missteps). Apple could 'cover' that mainly by picking something 'next gen' or skipping some updates to create a bump for the "faster than the previous Mac" sales pitch.

Workloads like rendering shifting off to the GPU actually helps AMD, because relatively minor CPU gaps don't matter as much.

Intel has built a deeper moat around their CPUs on the data center side (e.g., Xeon D, SP) and on the 'edge' laptop side than on the desktop. Coming from the "lower" edge of the laptop space, I doubt Apple is generally impressed with either one of them (AMD or Intel).


Third...obviously Apple would have to modify the T2 chip to give Thunderbolt support for AMD processors which is pointless and would annoy Intel.

The T2 chip is unlikely to get features that iPadOS and iOS don't need. Apple uses it to make security more uniform across all their devices. They are probably going to put more "AI/ML" into it to make Siri and security smarter locally (so Touch ID, video, and audio input flow through the T2 just like on the iPhone/iPad).

I wouldn't hold my breath on Thunderbolt (or USB4). That doesn't map down to the phones, and primarily Apple wants a "hand me down" baseline that they can somewhat strip down and bring to the Mac. All iOS devices have SSDs, so the SSD controller part is useful across multiple product lineups.

It is a simple test: what T-series feature exists on the Mac that doesn't exist on an iOS/iPadOS/watchOS device? (Driving the Touch Bar ... watchOS, fingerprint ... iPhone, Secure Enclave ... varies, etc.) If Apple later brings Face ID ... yet another "hand me down" feature.

Thunderbolt would be kind of loopy to bring to the T2 from a security standpoint.

Most aspects of the classic PC I/O hub (PCH in Intel terms), multiple USB ports, SATA, a PCIe switch, etc., Apple doesn't need on the iOS-forked operating systems. Hence, Apple isn't likely to do much chip development there. Apple might build a bigger T-series package with cellular and/or Wi-Fi/Bluetooth bolted to the side in a space-saving packaging move.

Some folks have the notion that Apple wants to grow the T2 in size and complexity like "The Blob" until it consumes the x86 PCH and CPU. I highly doubt that is the plan. Keeping it small, affordable and focused is probably the path. If they want to eventually shift whole systems, then it's a "big bang" switch per device (e.g. flip the MacBook to A-1_X, then the MBA, then the two-port MBP 13", ...).


There is a fourth reason. Intel always holds back when they have weak competition. Whenever AMD steps up the game then Intel releases everything they were holding back and really starts kicking ass and cutting costs. They did that to AMD in the 486, Pentium and Core era. They are clever bastards we have to admit.

They were clever, but the last 3-4 years haven't been the "Only the paranoid survive" Intel of the past. This was more the Intel that got used to having little competition and got addicted to fat margins for less utility delivered.

You have to be drinking lots of Santa Clara kool-aid to think Intel has been holding back at this point. Intel has blown the lead they had across much of their CPU product space. There are some niche areas where their sheer size and depth of market penetration means they'll hold on to market share before AMD eats most of it away, but that whole "don't compete and arrogantly play around" approach never really did work well for them. Their past dominance was far more a matter of even goofier management at AMD making its own set of dubious moves.

That said, the "colossal doom" of the 10nm process and the rut that Intel has been in for the last 2+ years aren't as bad as the AMD fanboys want to make them out to be. There's a pretty good chance Intel gets to 7nm on the revised timeline OK. They do have stuff in the pipeline. They probably won't win the "maximum x86 core count" war over the next 2-3 years, but Apple doesn't particularly need that for their product mix.

IMHO, it is doubtful that Intel is ever going to be able to put AMD 'far' in the rearview mirror again. However, the same is basically true for AMD (barring bad management at Intel). The general problem space is going to bog down all the players a bit going forward.

Intel has a big CPU product pipeline stall, but after this present one gets flushed they should be back in the game, if they get back to managing complexity like they did under the "tick/tock" methodology.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
Yeah, by then Zen 3 & 4 will have 4 threads per core, so obviously they are doomed.

" ...“Our heritage was in making high performance X86 processors,” said Norrod. “And we were going to take where we started with Zen and build on it with Zen 2, and Zen 3, and Zen 4, and Zen 5, and we were going to put a roadmap in place the cadence of which I sincerely hope is going to bore you to death. I want to be absolutely predictable about when we bring things to market. When we introduced the first generation Epycs, we said that two years from now we’re going to be launching the second generation, and the third generation is right on track. So it is boring in terms of when we do what. But I hope we can be very exciting in what we are doing.” ..."
https://www.nextplatform.com/2019/08/07/amd-doubles-down-and-up-with-rome-epyc-server-chips/

The 'when' is 'boring'. (i.e., it shouldn't be hard to see that in the passage above.)

Zen 3 (if that is where they weave in SMT4) likely isn't coming for another two years. Likewise, Zen 4 is likely about four years out! Intel totally stuck in the mud for another 4 years going forward? I wouldn't bet on that. In four years Intel will have iterated through at least 3, if not 4, update cycles also.

Mainstream-desktop wise, there probably will be a "tick" (out of a tick/tock cycle) by AMD on the limited-core-count products: a TSMC 6nm (same design library as 7nm) bump, or 7nm+ (if AMD is taking on bigger risk). Probably somewhat similar to the 14nm -> 12nm bump they got. As for AMD's slide-deck posturing where they put "7nm+" next to Zen 3, that is probably more a shrink ("tick") than a microarchitectural change (like expanding SMT). EPYC has a longer user acceptance cycle (the months-long usage by Google/cloud vendors that Zen 2 got before 'launching'), so it would slide out even if it picks up the 7nm+ bump, presuming the yields ramp well.


If AMD manages to fall into the same pit that Intel did (changing 2-3+ major things at a time), then their likelihood of stumbling on roadmap delivery goes significantly up. That would be most of what Intel needs to get back into the game. (Navi blew right through its timeline and slipped much deeper into 2019 before release, so it isn't like AMD hasn't done this in the last couple of years.)


AMD does have major holes to fill in I/O where Intel doesn't. Intel has holes to fill where AMD doesn't. Zen 3 and Zen 4 aren't going to magically win them Apple design bake-offs if they don't fill their gaps. Apple isn't solely out to ship the highest x86 core count possible. A high count (in the Mac Pro context)? Yes. The maximum number regardless of any other aspect? No.


Just getting to 2017-2018 era discrete Thunderbolt controllers in 2019 is behind the curve. The USB4 merge is coming, integration levels matter, etc.
 

Kpjoslee

macrumors 6502
Sep 11, 2007
416
266
I have no idea who started the SMT4 rumor, but I am willing to bet that it won't happen with Zen 3-4 or future Intel architectures for at least the next 3-4 years.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
I have no idea who started the SMT4 rumor, but I am willing to bet that it won't happen with Zen 3-4 or future Intel architectures for at least the next 3-4 years.

Already happened on Xeon Phi.

".. Design elements inherited from the Larrabee project include x86 ISA, 4-way SMT per core, ..."
https://en.wikipedia.org/wiki/Xeon_Phi

That wasn't the vector math load 'barn burner' result that was pitched by some.

But yes I would agree Intel isn't likely to bring it back over the next iteration or two.

As for who started the SMT4 rumors .... there is a possibility that it was AMD. At least for some variant fork that may roll out.

Zen 2 significantly bumped up the L3 and DRAM latencies. AMD 'solved' their uneven NUMA zone problem essentially by creating a central I/O die to hang the memory off of. That makes memory access more uniform; uniformly slower.

A chiplet (CCD) has two CCXs (each a group of 4 cores and a slice of L3 cache). What if CCX1 wants data that is in the L3 of CCX2 in the same package? ...

".. A little bit weird is the fact that accessing data that resides at the same die (CCD) but is not within the same CCX is just as slow as accessing data is on a totally different die. This is because regardless of where the other CCX is, whether it is nearby on the same die or on the other side of the chip, the data access still has to go through the IF to the IO die and back again. ..."
https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen/7

Easier communication and cache coherence .... just takes more time.
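
Rough back-of-the-envelope way to 'see' that extra trip (a toy sketch, not AMD's numbers): bounce one cache line between two threads pinned to two specific cores and time the round trips. Pick the core pair from your own topology (lscpu / lstopo); same-CCX vs. cross-CCX/CCD is roughly that IF-to-IO-die-and-back hop the AnandTech piece is describing. The pinning call is Linux-only (pthread_setaffinity_np) and the CPU ids are whatever you pass in, nothing assumed about the machine.

[CODE]
// Toy cache-line ping-pong between two pinned threads. Linux-only pinning,
// error handling omitted. Build: g++ -O2 -pthread pingpong.cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <pthread.h>
#include <sched.h>

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

// One cache line bounced back and forth between the two cores.
alignas(64) static std::atomic<int> flag{0};

int main(int argc, char** argv) {
    if (argc != 3) { std::fprintf(stderr, "usage: %s cpuA cpuB\n", argv[0]); return 1; }
    const int cpuA = std::atoi(argv[1]);
    const int cpuB = std::atoi(argv[2]);
    const int iters = 1000000;

    std::thread responder([&] {
        pin_to_cpu(cpuB);
        for (int i = 0; i < iters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) { /* spin */ }
            flag.store(0, std::memory_order_release);
        }
    });

    pin_to_cpu(cpuA);
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        flag.store(1, std::memory_order_release);
        while (flag.load(std::memory_order_acquire) != 0) { /* spin */ }
    }
    const auto t1 = std::chrono::steady_clock::now();
    responder.join();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
    std::printf("cpu %d <-> cpu %d : ~%.0f ns per round trip\n", cpuA, cpuB, ns);
    return 0;
}
[/CODE]

Run it for a pair of cores in the same CCX and then for a pair on different CCDs; the gap between the two numbers is basically the uniformity tax.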


If they have a model that tries to stuff more cores into about the same aggregate memory package bandwidth and "even out" the memory access ... that probably means even more latency. Add enough memory latency and SMT is far more handy at filling in those gaps where the CPU is just twiddling its thumbs waiting on data. The core can take a stab at another thread, hoping that what it needs is now in (or still in) the L1/L2/local L3 fragment, and get something done.

Since AMD has an L3 victim cache (stuff chucked out of the L1/L2 levels), as long as the SMT threads don't push each other's data mostly out of the L3, work gets 'restarted' reasonably fast. If it's big sets of vectors that seriously thrash the L1/L2 caches, then there are often going to be problems.
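
If you want to poke at that trade-off yourself, a crude sketch (Linux-only pinning again; the two numbers you pass should be the SMT sibling ids of one physical core per lscpu, which is machine-specific, not assumed here): each thread walks a big randomly permuted chain, so nearly every load is a dependent DRAM miss. Run one thread alone, then two on the same core. If the combined chases/second go up noticeably, that's SMT filling in the stall slots; shrink the chain until it fits in cache and the gain mostly evaporates, with the siblings competing for the same L1/L2 instead.

[CODE]
// Toy pointer chase: SMT siblings overlapping DRAM stalls. Linux-only pinning.
// Build: g++ -O2 -pthread smtchase.cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <numeric>
#include <random>
#include <thread>
#include <vector>
#include <pthread.h>
#include <sched.h>

static void pin_to_cpu(int cpu) {
    cpu_set_t set; CPU_ZERO(&set); CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

// Each slot holds the index of the next slot: one big random cycle, so every
// load depends on the previous one and is hard to prefetch once the array is
// much larger than the L3.
static std::vector<size_t> make_chain(size_t n) {
    std::vector<size_t> order(n);
    std::iota(order.begin(), order.end(), size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    std::vector<size_t> next(n);
    for (size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
    next[order[n - 1]] = order[0];
    return next;
}

static size_t chase(const std::vector<size_t>& next, size_t steps, int cpu) {
    pin_to_cpu(cpu);
    size_t p = 0;
    for (size_t i = 0; i < steps; ++i) p = next[p];
    return p;  // returned so the loop can't be optimized away
}

static double run(const std::vector<size_t>& next, size_t steps,
                  const std::vector<int>& cpus) {
    std::atomic<size_t> sink{0};
    const auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> ts;
    for (int cpu : cpus)
        ts.emplace_back([&, cpu] { sink += chase(next, steps, cpu); });
    for (auto& t : ts) t.join();
    const auto t1 = std::chrono::steady_clock::now();
    const double secs = std::chrono::duration<double>(t1 - t0).count();
    return (double)(steps * cpus.size()) / secs;  // total chases per second
}

int main(int argc, char** argv) {
    if (argc != 3) { std::fprintf(stderr, "usage: %s cpu0 cpu1\n", argv[0]); return 1; }
    const int c0 = std::atoi(argv[1]);
    const int c1 = std::atoi(argv[2]);
    const auto next = make_chain(size_t{1} << 25);  // 32M entries * 8B = 256MB
    const size_t steps = 20000000;
    std::printf("1 thread                : %.2f M chases/s\n",
                run(next, steps, {c0}) / 1e6);
    std::printf("2 threads, SMT siblings : %.2f M chases/s\n",
                run(next, steps, {c0, c1}) / 1e6);
    return 0;
}
[/CODE]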

Depending upon how AMD does it, SMT4 could turn into another "Bulldozer" for AMD, or, if done wisely, they get some incremental boost on a subset of loads.

Note though that it can also be turned off dynamically (per LPAR and by context). There is a variant of Power9 that doesn't even allow over SMT4.
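
On the Linux/x86 side there isn't that per-LPAR granularity, but there is at least a global kernel switch you can look at (and, as root, flip). A trivial sketch, assuming a reasonably recent kernel that exposes /sys/devices/system/cpu/smt; on POWER boxes the per-core knob is the ppc64_cpu tool instead, if memory serves.

[CODE]
// Quick peek at the kernel's global SMT switch (Linux, kernel 4.19-ish or newer).
// Writing "on"/"off" to the control file as root toggles the sibling threads.
#include <fstream>
#include <iostream>
#include <string>

static std::string read_first_line(const std::string& path) {
    std::ifstream f(path);
    std::string line;
    if (!f || !std::getline(f, line)) return "<unavailable>";
    return line;
}

int main() {
    std::cout << "SMT active : "
              << read_first_line("/sys/devices/system/cpu/smt/active") << "\n";
    std::cout << "SMT control: "
              << read_first_line("/sys/devices/system/cpu/smt/control") << "\n";
    return 0;
}
[/CODE]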

There is a set of loads where it has traction, and more than a few where it does not.


ARM's 'edge server' entry is following the SMT path too.

"
SMT on the Neoverse E1 is enabled through the duplication of architectural state components of the core. This means the CPU has double the general purpose, vector and system registers and their corresponding structures on the physical core.
....
Performance partitioning between the two threads is enabled by a simple round-robin instruction fetch mechanism, ensuring that both threads get the same amount of attention. ..."
https://www.anandtech.com/show/13959/arm-announces-neoverse-n1-platform/5

Just SMT2, but if AMD wants deeper tracking in some spots, this is the kind of target it is aiming at. Essentially the first step that Power8 and Power9 took. There are also "smaller cores, and more of them" implementations that AMD will also be fending off on certain workloads.
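
That "simple round-robin instruction fetch" is about as basic as it sounds. A toy model of just the arbitration (the thread names, stall rates, and 20-cycle miss penalty are all made up for illustration, nothing from ARM's actual design): the fetch stage flips its preferred thread every cycle, and if the preferred thread is stalled the slot goes to the other one, so the front end stays busy and neither thread gets starved.

[CODE]
// Toy model of SMT2 round-robin fetch arbitration. All numbers are invented.
#include <array>
#include <cstdio>

struct ThreadCtx {
    const char* name;
    int stall_every;     // pretend this thread misses cache every N fetches
    int fetched;         // fetch slots won so far
    int stalled_until;   // cycle at which the pretend miss resolves
};

int main() {
    std::array<ThreadCtx, 2> t{{{"T0", 7, 0, 0}, {"T1", 3, 0, 0}}};
    int preferred = 0;  // alternates each cycle: the round-robin part

    for (int cycle = 0; cycle < 1000; ++cycle) {
        // Try the preferred thread first; if it is stalled, the other thread
        // gets this cycle's fetch slot so the front end never sits idle.
        for (int k = 0; k < 2; ++k) {
            ThreadCtx& c = t[(preferred + k) % 2];
            if (cycle < c.stalled_until) continue;   // still waiting on a miss
            c.fetched++;
            if (c.fetched % c.stall_every == 0)
                c.stalled_until = cycle + 20;        // fake 20-cycle penalty
            break;
        }
        preferred ^= 1;  // flip priority every cycle: equal "attention"
    }

    for (const ThreadCtx& c : t)
        std::printf("%s won %d fetch slots\n", c.name, c.fetched);
    return 0;
}
[/CODE]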
 

jerwin

Suspended
Jun 13, 2015
2,895
4,651
There is a set of loads where it has traction, and more than a few where it does not.

The real question becomes--"Are you interested in running these loads on a Mac Pro, and would it be cost effective to do so?" Many of these architectural developments are oriented towards transaction processing--and macOS brings few advantages to that table.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
The real question becomes--"Are you interested in running these loads on a Mac Pro, and would it be cost effective to do so?" Many of these architectural developments are oriented towards transaction processing--and macOS brings few advantages to that table.

There are some corner cases that probably match up much better with the rack-mounted Mac Pro. I'm a bit skeptical that Apple is really serious about that option though (I won't be surprised if they have some "dog ate my homework" excuse at the Mac Pro launch about how the rack option has slid off into the "don't have a date yet" future), and it probably won't be a huge driver.

There are a few "Mac in the cloud" vendors out there that have many thousands of Macs that folks use virtualized over a network connection. Some folks are still sticking with web services hosted on macOS (despite the direction Apple has gone with macOS Server so far). A substantively bigger block is folks doing QA regression testing for App Store application development. Cost-effective throughput on the other end of an Internet pipeline is pretty much what they want. If that market gets "big enough," Apple may jump into it also to expand their services revenue. For at least the next couple of years I think they'll let those vendors take the revenue risk and just sell them the foundation systems as products.

In the virtualized context, the underlying hypervisor just has to be 'smart' about the features, and the macOS instances are closer to just being client programs.

Is Apple going to skew Mac Pro workstation design requirements to put those workloads at the highest priority? No. If it is a feature that happens to be there, then they'll take it. If, like IBM's Power9, they can 'turn it off/down,' even more so. If the rack model does better than expected, perhaps incrementally more so.

But yes, for the single user sitting in front of a Mac Pro with 1-2 displays hooked to it, doing work with mainstream Mac GUI applications ... not much traction there at all. If the Mac Pro sinks to effectively just being a "FCPX and/or LogicX" machine, then really no traction at all.
 

apolloa

Suspended
Oct 21, 2008
12,318
7,802
Time, because it rules EVERYTHING!
Given all the great news about second-generation Epyc or more generally the Zen 2 architecture, as maybe this is more about Threadripper 3 . . . and assuming that Apple could have been one of those companies, like Google and HP, that AMD would have been willing to show its internal roadmaps . . . why would this not have been an ideal time to use AMD's platform for a modern workstation?

They would have immediately gotten so much. So much. The only apparent negatives would be AVX512, and some single-thread performance given how Apple is apparently cooling and clocking these particular Intel cpus.

I cannot imagine Apple would introduce a new Mac Pro and think they can move this over to ARM any time soon . . . and Apple does not have an x86 license . . . so they are locked into an Intel platform that is guaranteed to substantially change already some time next year. Even then, you will not get the core counts and aggregate performance AMD already has now. It is not clear if Intel has decided to embrace PCIe 4.0 next year. And who knows what they will do with pricing.

I hope it is not something as silly as not wanting to propagate the hackintosh community by avoiding support for modern AMD CPUs . . . Is it Thunderbolt 3 licensing somehow?

I believe it's to do with pro or high-end production software being tested and/or certified for Xeon processors (like it is for AMD's FirePro GPUs), but not being tested for AMD processors, I think.
 