
Irishman

macrumors 68040
Nov 2, 2006
3,401
845
I don’t think you have to be a game dev yourself to notice a trend.

Expertise helps, especially when we’re talking about engineers with working experience in a job they’re paid to do, whether it’s network coding or UI coding, as part of a team of hundreds or a one-person team. It’s too easy to talk about our own personal take on a trend as though it carries as much water as the takes of actual working programmers. That approach is too informal to be meaningful to me, because I’ll always pick the data over opinions about that same data.
 
Last edited:

JMacHack

Suspended
Mar 16, 2017
1,965
2,422
Expertise helps, especially when we’re talking about engineers with working experience in a job they’re paid to do, whether it’s network coding or UI coding, as part of a team of hundreds or a one-person team. It’s too easy to talk about our own personal take on a trend as though it carries as much water as the takes of actual working programmers. That approach is too informal to be meaningful to me, because I’ll always pick the data over opinions about that same data.
In principle, I agree.

However I believe in this case the problem is entrenched, and the period has been almost the entire lifetime of the Mac.

Knowing this, commenting on the trend isn’t out of line.
 

Horselover Fat

macrumors regular
Feb 2, 2012
232
306
Germany
Two impediments:

1. No GPU power to compete with the top offerings from nVidia/AMD (in gaming, not in video rendering)
2. The Mac is unattractively priced for the masses, hence it won't ever be as widespread, and hence the majority of developers have no interest in developing for macOS.

I don't see how 2. would ever change.
 

Malus120

macrumors 6502a
Original poster
Jun 28, 2002
678
1,412
Two impediments:

2. The Mac is unattractively priced for the masses, hence it won't ever be as widespread, and hence the majority of developers have no interest in developing for macOS.

I don't see how 2. would ever change.
Let's discuss this first, as it's the more interesting of your two arguments IMHO.

Apple sold 7 million units last quarter, of which >90% were Apple Silicon. Apple is seeing significant (8.6%) year-on-year growth at a time when many of its competitors are seeing double-digit declines. Apple sold roughly 25 million Macs last year, the vast majority of them Apple Silicon, and is on track to sell even more this year. For reference, that means there are likely more Apple Silicon based Macs in circulation than there are PlayStation 5s (19.5 million).

Now, does that mean everyone who bought an Apple Silicon Mac has an interest in gaming on it? Of course not. But it does mean the number of Mac users who bought a Mac capable of playing modern AAA games in the last 18-24 months is higher than it has ever been, in terms of not only overall units sold and percentage of Macs sold, but also units sold / percentage vs PCs. Although I don't have time to run the numbers (nor do I imagine they're easily found), I'd be interested to see the number of PCs built/sold capable of running AAA games vs Macs.

So what's the point? Apple is rapidly building out a very capable hardware platform (Apple Silicon based Macs) with a reasonable performance floor (the original M1 has more CPU power than a PS5/Xbox Series X, and a modern GPU more than twice as fast as the PS4's), backed by a modern API (Metal 3), and anything built for it is cross-compatible with the most profitable mobile platform, iOS (239 million iPhones sold in 2021 alone), where the power of the average device makes the Nintendo Switch look like a potato.

At $699 for the Mac Mini and $999 for the MBA, the Mac is not "unattractively priced for the masses," and Apple's sales figures bear this out.
To be clear, the Mac isn't going to become gamers' first choice anytime soon, and even if Apple wanted to make the Mac a tier-one gaming experience, it would take years of hard work and tons of money. Nonetheless, the Mac arguably has more potential today than at almost any other time in its existence to at least become a platform the average user can game on.

1. No GPU power to compete with the top offerings from nVidia/AMD (in gaming, not in video rendering)

First, Apple doesn't have to compete with enthusiast-tier graphics cards (3080/3090/6800XT/6900XT) to create a viable or even vibrant gaming ecosystem. Performance just needs to be good enough to deliver a good-looking experience that runs well.
Second, Apple Silicon offers plenty of GPU power for gaming when it's properly optimized for. Non-native games running through Rosetta 2, with renderers that were built without Apple Silicon in mind, are a poor measure of performance. RE Village (and No Man's Sky) should hopefully give us our first look at how well games can perform when they're actually built/ported with Apple Silicon as the target. Finally, if MetalFX can deliver on the promise of a good-looking temporal reconstruction technique, it should help keep even the weakest Apple Silicon devices relevant for quite some time.

None of this is to say the Mac should be anyone's primary gaming platform at this point in time. If you're a Mac user who is serious about gaming today, buy the Mac that best meets your needs for whatever you want to use it for, and then get yourself a gaming PC/PS5/Series X for fun.
 

Horselover Fat

macrumors regular
Feb 2, 2012
232
306
Germany
Thank you for sharing these numbers. I find them interesting. They seem to show that Apple has found a lever to increase their market share considerably and, more importantly, to run counter to the negative trend of Windows PCs. The comparison with the number of PS5s sold is a bit off, since Sony was and still is struggling to deliver as many units as people want. But let’s see how far Apple can get, or even whether it can turn the market enough to attract the interest of developers.

You say the Apple GPU is more than twice as fast as a PS4's. Do you mean the M1 base, Pro, Max or Ultra? Apple Silicon is fragmented in terms of GPU power, and higher performance comes at a significantly higher cost. And how does it compare to something more modern like a PS5? I can only remember that Apple compared the first M1 to some integrated Intel GPU, which is not exactly amazing. Now I have read that not even the most expensive desktop from Apple, the Mac Studio with an M1 Ultra, can beat a 3090 (in terms of raw power, neglecting performance per watt). Which leads me to the other point about GPU power.

I’m not sure that Apple doesn’t have to compete with enthusiast-tier GPU performance. This is especially important for AAA titles. GPU performance is a major USP during the introduction of a next-gen console or next-gen desktop GPU. Granted, there are many examples of great games that don’t need the latest and greatest from nVidia. With the Nintendo Switch there’s even the example of a whole platform. But they excel at the games.
 

diamond.g

macrumors G4
Mar 20, 2007
11,155
2,466
OBX
Thank you for sharing these numbers. I find them interesting. They seem to show that Apple has found a lever to increase their market share considerably and, more importantly, to run counter to the negative trend of Windows PCs. The comparison with the number of PS5s sold is a bit off, since Sony was and still is struggling to deliver as many units as people want. But let’s see how far Apple can get, or even whether it can turn the market enough to attract the interest of developers.

You say the Apple GPU is more than twice as fast as a PS4's. Do you mean the M1 base, Pro, Max or Ultra? Apple Silicon is fragmented in terms of GPU power, and higher performance comes at a significantly higher cost. And how does it compare to something more modern like a PS5? I can only remember that Apple compared the first M1 to some integrated Intel GPU, which is not exactly amazing. Now I have read that not even the most expensive desktop from Apple, the Mac Studio with an M1 Ultra, can beat a 3090 (in terms of raw power, neglecting performance per watt). Which leads me to the other point about GPU power.

I’m not sure that Apple doesn’t have to compete with enthusiast-tier GPU performance. This is especially important for AAA titles. GPU performance is a major USP during the introduction of a next-gen console or next-gen desktop GPU. Granted, there are many examples of great games that don’t need the latest and greatest from nVidia. With the Nintendo Switch there’s even the example of a whole platform. But they excel at the games.
With MetalFX, GPU performance doesn't matter anymore: just render at a lower resolution internally, upscale (temporally or with the spatial scaler), and move on. Especially if the upscaling is hardware accelerated, it's a "free" performance boost!
 
  • Like
Reactions: Homy and Irishman

Malus120

macrumors 6502a
Original poster
Jun 28, 2002
678
1,412
Thank you for sharing these numbers. I find them interesting. The comparison with the number of PS5s sold is a bit off, since Sony was and still is struggling to deliver as many units as people want.

No problem, I'm glad you found the numbers interesting and appreciate your willingness to engage constructively.

The point I wanted to make with the PS5 comparison wasn't that Apple was outselling Sony (I'm aware of the supply constraints impacting sales), but that at roughly 20 million PS5s (+ ~12 million Xbox Series consoles) we seem to be at the point where developers are deciding it's "worth it" to drop last gen in favor of developing exclusively for "next" gen.
In other words, the number of Apple Silicon Macs out there is quickly approaching (if not already at) a level where the total addressable market should be large enough to begin attracting significant developer attention, refuting the point that Macs are not "widespread" enough.

You say the Apple GPU is more than twice as fast as a PS4's. Do you mean the M1 base, Pro, Max or Ultra? Apple Silicon is fragmented in terms of GPU power, and higher performance comes at a significantly higher cost. And how does it compare to something more modern like a PS5? I can only remember that Apple compared the first M1 to some integrated Intel GPU, which is not exactly amazing. Now I have read that not even the most expensive desktop from Apple, the Mac Studio with an M1 Ultra, can beat a 3090 (in terms of raw power, neglecting performance per watt). Which leads me to the other point about GPU power.

So I had some free time and went a bit too far but...
TL;DR: M1 is around 1.5x a PS4, whereas the Xbox Series S is ~2.2x. (Apple Silicon GPUs already perform "okay" in unoptimized games running through Rosetta 2 / with suboptimal Metal renderers, but could likely do better.)

When I said the GPU was 2x a PS4, I was specifically talking about the 7/8-core Apple M1 GPU found in the MacBook Air/Pro and Mac Mini, and I was talking about raw compute performance. I actually overshot a bit (I think I was thinking of the Xbox One); it's actually more like 1.5x (real-world game performance is likely to vary). Perhaps a useful yardstick for these numbers is the Xbox Series S, which is just over 2x the PS4 (around the same level as a PS4 Pro).

Anyway, diving deeper into expected performance, it is difficult, if not downright impossible, to get good numbers given that we have basically zero Apple Silicon native games to compare with.
The bottleneck here isn't just the CPU code being translated from x86 to ARM. Apple Silicon GPUs are built using the same building blocks as Apple's mobile GPUs, which are better at rendering scenes using what's known as "tile-based rendering" (basically splitting each frame into chunks and rendering them separately) instead of "immediate rendering" (rendering the whole scene at once). In other words, a game can feature a Metal renderer and still perform suboptimally on Apple Silicon vs other GPUs depending on how it's coded.
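
To make that concrete, here's a minimal metal-cpp sketch (my own illustration, not code from any actual port) of the kind of thing a tile-friendly renderer does: the depth buffer is marked memoryless so it lives entirely in on-chip tile memory and is never written back to RAM.

C++:
#include <Metal/Metal.hpp>

// Sketch: a render pass whose depth buffer stays in tile memory on a TBDR GPU
// (Apple Silicon) instead of being stored out to system memory.
MTL::RenderPassDescriptor* makeTileFriendlyPass(MTL::Device* device,
                                                MTL::Texture* colorTarget) {
    // Depth texture that never needs to exist in RAM
    auto depthDesc = MTL::TextureDescriptor::texture2DDescriptor(
        MTL::PixelFormatDepth32Float, colorTarget->width(), colorTarget->height(), false);
    depthDesc->setUsage(MTL::TextureUsageRenderTarget);
    depthDesc->setStorageMode(MTL::StorageModeMemoryless); // tile memory only
    MTL::Texture* depthTex = device->newTexture(depthDesc);

    auto pass = MTL::RenderPassDescriptor::alloc()->init();
    auto color = pass->colorAttachments()->object(0);
    color->setTexture(colorTarget);
    color->setLoadAction(MTL::LoadActionClear);    // nothing loaded from RAM
    color->setStoreAction(MTL::StoreActionStore);  // only the final image is kept

    auto depth = pass->depthAttachment();
    depth->setTexture(depthTex);
    depth->setLoadAction(MTL::LoadActionClear);
    depth->setStoreAction(MTL::StoreActionDontCare); // never written back out
    return pass;
}

A renderer written with immediate-mode GPUs in mind tends to store every intermediate target out to memory "just in case," which is exactly the kind of bandwidth Apple Silicon is designed to avoid spending.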

Nonetheless, if we take Rise of the Tomb Raider and Shadow of the Tomb Raider (running via Rosetta 2 with a Metal renderer developed before Apple Silicon) as a benchmark of how "unoptimized" (for AS) Mac code runs:
M1: ~RX560X (Anandtech)
M1 Pro: ~RX 580/5500XT / RTX 3050TI Laptop (Anandtech / Pcmag)
M1 Max: RTX 3060 Laptop - RTX 3070 Laptop (Anandtech / Pcmag)
M1 Ultra: ~20% slower than an RTX 3090 at 1440P (The Verge)

A combination of raw FP32 compute and synthetic benchmarks like GFXBench, 3DMark, etc. (which have native Apple Silicon/mobile versions) can paint a different picture:
M1: ~RX570 <5500XT
M1 Pro: RTX3060 Laptop
M1 Max: RTX3080 Laptop - 3060TI/6700XT
M1 Ultra: RTX 3080 / RX 6800XT (?)
(A lot of the above is inference from the spotty data available. Consider it a best-case scenario, not necessarily what you should expect on average even if we do start getting "good" ports.)

So performance today is OK (not amazing but not terrible), but it's possible (albeit not guaranteed) that better ports built on Metal 3 with Apple Silicon in mind could go a long way toward closing the gap with high-end desktop GPUs like the RTX 3080.

I’m not sure that Apple doesn’t have to compete with enthusiast-tier GPU performance. This is especially important for AAA titles. GPU performance is a major USP during the introduction of a next-gen console or next-gen desktop GPU. Granted, there are many examples of great games that don’t need the latest and greatest from nVidia. With the Nintendo Switch there’s even the example of a whole platform. But they excel at the games.

Of course, Apple should be trying to keep up, but that doesn't mean every Mac needs a built-in RTX 3090. Furthermore, if we look at the "next-gen" (really current-gen now) consoles, we can see that even mid-tier Apple Silicon is (theoretically) not that far off the mark (although, to be fair, both the consoles and modern PC GPUs do hardware-accelerated ray tracing):
Xbox Series S - 4 TFlops (M2 is 3.6, M1 Pro is 5.3) ~ <RX 5500
PlayStation 5 - 10.25 TFlops (M1 Max is 10.6)
Xbox Series X - 12.1 TFlops
Compared to desktop PC GPUs, they're similar to the RX 6600 XT ~ 6700 XT or the RTX 3060 Ti-3070. Powerful, but certainly not an RTX 3090. The point here isn't to say fast GPUs don't matter, merely that the Series S, with a GPU just a bit faster than an M2's, will at worst be the baseline for the next 5+ years, so I think most Apple Silicon Macs will be able to turn out reasonable frame rates for the foreseeable future.
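
If you're curious where these TFLOPs figures come from, here's a quick back-of-the-envelope sketch. It uses the standard FP32 formula (ALU lanes x 2 ops per FMA x clock); the Apple clock speeds are approximate, so treat the output as ballpark numbers.

C++:
#include <cstdio>

// Rough FP32 throughput: ALU lanes * 2 (an FMA counts as 2 ops) * clock in GHz, in TFLOPS
double tflops(int aluLanes, double clockGHz) { return aluLanes * 2.0 * clockGHz / 1000.0; }

int main() {
    // Apple GPU cores have 128 FP32 lanes each; clocks are approximate
    printf("M1 (8-core GPU):  %.1f TFLOPS\n", tflops(8 * 128, 1.278));   // ~2.6
    printf("M1 Max (32-core): %.1f TFLOPS\n", tflops(32 * 128, 1.296));  // ~10.6
    // Consoles, from published specs
    printf("PS4:              %.1f TFLOPS\n", tflops(1152, 0.8));        // ~1.8
    printf("Xbox Series S:    %.1f TFLOPS\n", tflops(1280, 1.565));      // ~4.0
    printf("M1 vs PS4:        %.1fx\n", tflops(8 * 128, 1.278) / tflops(1152, 0.8)); // ~1.4x
    return 0;
}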

With MetalFX, GPU performance doesn't matter anymore: just render at a lower resolution internally, upscale (temporally or with the spatial scaler), and move on. Especially if the upscaling is hardware accelerated, it's a "free" performance boost!
I'm not sure if this is meant to be sarcastic or not. Assuming it's good, MetalFX will certainly help lower the cost of rendering an image that looks good at the display's "native resolution" (or 4K on an external screen), but it's not magic.

Looking at what FSR 2.0 is doing today, if MetalFX can achieve similar results, it means a game can render internally at between 1/4 and 1/2 of the output resolution and deliver a result that varies between acceptable and "near native." However, it does cost significantly more than actually rendering at those lower resolutions, and, at least in FSR's case, it seems to do better with higher target output resolutions and to struggle when targeting resolutions below 1440p.
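
To put rough numbers on "between 1/4 and 1/2 of the output resolution," here's a tiny sketch using FSR 2.0's published per-axis scale factors (Quality 1.5x, Balanced 1.7x, Performance 2.0x). Whether MetalFX ends up using the same presets is anyone's guess.

C++:
#include <cstdio>

// Internal render resolution for a given output resolution and per-axis scale factor
void printInternalRes(const char* mode, int outW, int outH, double scale) {
    printf("%-12s %dx%d -> renders internally at %dx%d\n",
           mode, outW, outH, (int)(outW / scale), (int)(outH / scale));
}

int main() {
    // FSR 2.0 presets are per-axis, so a 2.0x scale is 1/4 of the pixels
    printInternalRes("Quality",     3840, 2160, 1.5); // 2560x1440
    printInternalRes("Balanced",    3840, 2160, 1.7); // ~2258x1270
    printInternalRes("Performance", 3840, 2160, 2.0); // 1920x1080
    printInternalRes("Performance", 1920, 1080, 2.0); // 960x540 -- not much to work with
    return 0;
}

Notice how little data the upscaler has left to work with once the output target drops to 1080p, which is exactly where these techniques start to struggle.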

Anyway that's enough "foruming" for one night LOL
 

diamond.g

macrumors G4
Mar 20, 2007
11,155
2,466
OBX
I'm not sure if this is meant to be sarcastic or not. Assuming it's good, MetalFX will certainly help lower the cost of rendering an image that looks good at the display's "native resolution" (or 4K on an external screen), but it's not magic.

Looking at what FSR 2.0 is doing today, if MetalFX can achieve similar results, it means a game can render internally at between 1/4 and 1/2 of the output resolution and deliver a result that varies between acceptable and "near native." However, it does cost significantly more than actually rendering at those lower resolutions, and, at least in FSR's case, it seems to do better with higher target output resolutions and to struggle when targeting resolutions below 1440p.

Anyway that's enough "foruming" for one night LOL
What do you mean costs more than rendering at native resolution? I wasn't being sarcastic.

I even think Capcom mentioned MetalFX as though it is why RE8 can run at the resolutions/performance it runs at on the hardware they mentioned in the presentation.
 
  • Like
Reactions: Irishman

Malus120

macrumors 6502a
Original poster
Jun 28, 2002
678
1,412
What do you mean costs more than rendering at native resolution? I wasn't being sarcastic.

I even think Capcom mentioned MetalFX as though it is why RE8 can run at the resolutions/performance it runs at on the hardware they mentioned in the presentation.
Apologies for being unclear (both in my explanation and on whether or not you were being sarcastic).

I did not mean to imply MetalFX "4K" will cost more than native 4K to render. What I was trying to get across was that MetalFX is not just free performance.

For example, let's assume you render the scene internally at 1080p and use MetalFX temporal upscaling to reconstruct a 4K output (a 4x resolution scale). The cost to render the final image will be significantly less than that of rendering a native 4K image, netting significant performance savings, but still a good deal greater than that of rendering a 1080p image and outputting it to the screen.

The reason for this is that the GPU has to run all of the calculations involved in the temporal upscaling. Depending on the target frame-rate, how efficient MetalFX is, how wide the GPU is, and whether or not Apple ends up adding dedicated hardware to speed this process up, it can actually be quite expensive, making the performance saved less impressive than might be expected.
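
As a concrete illustration, here's the frame-time math with made-up but plausible numbers (purely hypothetical, not measured on any real hardware):

C++:
#include <cstdio>

int main() {
    // Hypothetical frame times in milliseconds -- illustrative only
    double native4K    = 33.0; // rendering the whole frame at 4K
    double native1080p = 11.0; // rendering the same frame at 1080p
    double upscalePass =  3.0; // fixed cost of the temporal upscale to 4K

    double reconstructed4K = native1080p + upscalePass; // 14 ms
    printf("Native 4K:       %.0f ms (%.0f fps)\n", native4K, 1000.0 / native4K);
    printf("1080p + upscale: %.0f ms (%.0f fps)\n", reconstructed4K, 1000.0 / reconstructed4K);
    printf("Plain 1080p:     %.0f ms (%.0f fps)\n", native1080p, 1000.0 / native1080p);
    // A big win over native 4K, but the upscale pass eats a chunk of the savings,
    // and its relative cost grows as the GPU gets slower or the frame-rate target rises.
    return 0;
}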

I'd recommend Digital Foundry's recent videos on FSR 2.0 if you want to learn more about this. Obviously Apple Silicon won't be exactly the same, but as the concept is very similar, I would expect many of the characteristics (particularly frame-time cost) to be similar.

With MetalFX, GPU performance doesn't matter anymore: just render at a lower resolution internally, upscale (temporally or with the spatial scaler), and move on. Especially if the upscaling is hardware accelerated, it's a "free" performance boost!
I mainly wanted to get across that, for a variety of reasons, this is just not true.
First, the lower the internal resolution, the lower the quality of the final output, particularly if the output resolution itself is not extremely high (>1440p). Second, as mentioned above, the upscaling itself can be quite expensive, especially for weaker GPUs at lower frame rates. Again, I'd recommend checking out Digital Foundry's work on this stuff if you want a detailed explanation.
So yes, rendering performance absolutely still matters. While MetalFX (assuming it sees widespread adoption) will help alleviate rendering costs on weaker GPUs and allow Apple's high-end GPUs to really fly, it isn't magic.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,422
First, the lower the internal resolution, the lower the quality of the final output, particularly if the output resolution itself is not extremely high (>1440P.)
Uninformed guess:

I’d bet the internal renderer is exactly 1440p. It seems like Apple’s devices use that as a base and scale text rendering from there (if I’m remembering the post-Mojave rendering method correctly).

Considering Apple’s displays try to use even multiples of that, I’d about bet that they’re extending that capability.
 

Malus120

macrumors 6502a
Original poster
Jun 28, 2002
678
1,412
Uninformed guess:

I’d bet the internal renderer is exactly 1440p. It seems like Apple’s devices use that as a base and scale text rendering from there (if I’m remembering the post-Mojave rendering method correctly).

Considering Apple’s displays try to use even multiples of that, I’d about bet that they’re extending that capability.
Uninformed huh... I'll admit we won't know for sure how MetalFX performs until we have a finished product in hand but... I'm not even sure you understood what I was talking about. I was referring to how existing temporal reconstruction techniques (FSR 2.0, DLSS, Unreal Engine Temporal Upscaling) tend to struggle at delivering a near or "better than native" result when the output resolution is significantly lower than 4K, and particularly when it is LOWER than 1440P. These temporal upscalers are getting very good at taking an internal resolution of 1080P, ~1200P, or 1440P and reconstructing a nice looking 4K output, but at, say, a reconstructed 1080P (or even 1440P) where the internal resolution is often 720P, 540P (or lower) the result can be less convincing.

You should check out the video on MetalFX on Apple's developer website (link below).
(https://developer.apple.com/videos/play/wwdc2022-10103)

In it, they specifically reference reconstructing a 4K image from 1080p (something I talked about in my post), so no, I don't think the plan is to have the internal render resolution set "exactly at 1440p." Based on the video, Apple seems to expect 1080p to be a common internal resolution, but devs are free to set it however they'd like.
 

leman

macrumors Core
Oct 14, 2008
19,302
19,281
The technical hurdles have been completely cleared. Even the base Apple Silicon Macs have more than adequate performance to support modern games, and Metal 3 brings a number of crucial quality-of-life updates and essentially reaches feature parity with DX12 (while being much more developer-friendly and exceeding it in flexibility in some areas). Finally, Apple's GPU dev tools are best in class, with detailed shader debugging, per-frame timelines, and resource traces.

The big question is developer and user interest. If some pioneer studios can demonstrate that they can generate revenue on the Mac, others will follow.
 
  • Like
Reactions: Homy

Bodhitree

macrumors 68000
Apr 5, 2021
1,944
2,048
Netherlands
Glad to see that some of what I’ve said in the past hasn’t fallen on deaf ears.

Apple’s market share of capable hardware is already at a good level, and if they can keep selling 25m M-series Macs every year, they will have a very tempting hardware platform in a few years. The question is, will there be enough gamers in the crowd? Percentage-wise, it’s very worthwhile to do ports for 10% more market share.

But the Mac market has been self-selecting for a long time. The accepted wisdom has been “don’t buy a Mac for gaming,” so the people genuinely interested in gaming have bought PC laptops. As a result, the Steam hardware survey lists only ~2.5% Mac users, far less than the hardware market share would indicate, and a lot of those will be running older Macs.

I think the answer lies in finding what style of game does well on the Mac. Some ports like Total War and Civilization have been around in quite a few iterations. I would suspect that anything that links to the Mac’s aesthetic will do well. But there are other options as well, such as bringing the best of iOS games to the Mac. This would result in very low barriers to entry — they’re all free-to-play — and good technical compatibility.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,072
2,650
I was referring to how existing temporal reconstruction techniques (FSR 2.0, DLSS, Unreal Engine Temporal Upscaling) tend to struggle at delivering a near or "better than native" result when the output resolution is significantly lower than 4K, and particularly when it is LOWER than 1440P. These temporal upscalers are getting very good at taking an internal resolution of 1080P, ~1200P, or 1440P and reconstructing a nice looking 4K output, but at, say, a reconstructed 1080P (or even 1440P) where the internal resolution is often 720P, 540P (or lower) the result can be less convincing.
Pretty much this. The whole thing is totally pointless if not shooting for 4k output. That's what it's always been about. We've always seen 1080p/1440p with ~60fps and a massive drop with 4k. When Nvidia released DLSS they bridged that gap and 4k ~60fps became a reality. Starting with anything sub 1080p seems pointless.
Even the base Apple Silicon Macs have more than adequate performance to support modern games, and Metal 3 brings a number of crucial quality-of-life updates and essentially reaches feature parity with DX12 (while being much more developer-friendly and exceeding it in flexibility in some areas).
Are you saying that Metal 3 brings all the features of DX12 tier 3?
 

leman

macrumors Core
Oct 14, 2008
19,302
19,281
Are you saying that Metal 3 brings all the features of DX12 tier 3?

DX12 descriptor heap tiers are not relevant to Metal because Metal does not have descriptor heaps. With Metal, shader bindings are encoded into regular GPU memory buffers and the upper bound here seems to be the maximal buffer size (but even then you can just use multiple buffers :).

Apple's documentation, somewhat unluckily, refers to "up to 500,000 buffers or textures" for argument buffers tier 2, but an Apple GPU engineer has clarified that this is the number of resources you can expect to simultaneously access in a shader pipeline, not the number of resources you can encode. I had no problems encoding five million buffer references in an argument buffer, and what's more, I could also successfully use all of them in a single shader invocation, which shows that the tier 2 limits in Metal are more of a recommendation than a hard limit. Anyway, no real game is going to use hundreds of thousands of textures in a single render pass, so there won't be any problems here.

What's more relevant for porting games from DX12/Vulkan to Metal is that Metal 3 has redesigned how bindings are encoded in GPU buffers. Previously, you had to rely on a type safe wrapper (MTLArgumentEncoder) to do the encoding, which was annoying and didn't map well to how these things work in other APIs. But Metal 3 gives you access to GPU resource pointers, and you simply write those pointers into the buffer memory. So setting a texture binding with Metal 3 is as easy as

C++:
((ResourceID*) bindings_buffer.contents())[1000] = my_texture.gpuResourceID(); // write the texture's GPU handle into slot 1000

with the bindings being declared like this in the shader

C++:
struct TextureBindings {
  // one million texture bindings
  texture2d<float> textures[1000000];
};

and since Metal already supports the bindless paradigm (sparse bindings and dynamic indexing of resources), emulating both DX12-style descriptor heaps and Vulkan-style descriptor sets becomes a trivial matter of allocating some large Metal buffers and copying some pointers around.
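
For the curious, here's a minimal sketch of that "large buffer of pointers" idea in metal-cpp (the function and variable names are mine, not from any Apple sample). The one extra step worth calling out is that resources accessed indirectly like this still have to be made resident with useResource()/useHeap().

C++:
#include <Metal/Metal.hpp>
#include <vector>

// Emulate a DX12-style descriptor heap: one big buffer of GPU resource IDs.
void bindTextureHeap(MTL::Device* device,
                     MTL::RenderCommandEncoder* encoder,
                     const std::vector<MTL::Texture*>& textures) {
    // One MTL::ResourceID slot per texture
    MTL::Buffer* heap = device->newBuffer(textures.size() * sizeof(MTL::ResourceID),
                                          MTL::ResourceStorageModeShared);
    auto ids = (MTL::ResourceID*)heap->contents();
    for (size_t i = 0; i < textures.size(); ++i) {
        ids[i] = textures[i]->gpuResourceID();                      // "write a descriptor"
        encoder->useResource(textures[i], MTL::ResourceUsageRead);  // make it resident
    }
    // The shader is free to index into this buffer dynamically (bindless access)
    encoder->setFragmentBuffer(heap, 0, 0);
}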

P.S. To fully appreciate how user-friendly Metal is, consider this Vulkan tutorial on how to send some transformation matrix data to the GPU: https://vulkan-tutorial.com/Uniform_buffers/Descriptor_layout_and_buffer It's a two-part tutorial spanning multiple pages and dozens of Vulkan API calls. The same thing in Metal needs a staggering total of two (2!) API calls, one of which is allocating the buffer itself:

C++:
// create a buffer for the matrix data (shared storage so the CPU can write to it)
auto uniform_buffer = device.newBuffer(sizeof(my_uniform_data), MTL::ResourceStorageModeShared);
// copy the data to the gpu buffer
memcpy(uniform_buffer.contents(), &my_uniform_data, sizeof(my_uniform_data));
// bind the buffer to the vertex stage at index 0 (buffer, offset, index)
encoder.setVertexBuffer(uniform_buffer, 0, 0);
 
Last edited:
  • Like
Reactions: Homy

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,072
2,650
DX12 descriptor heap tiers are not relevant to Metal because Metal does not have descriptor heaps. With Metal, shader bindings are encoded into regular GPU memory buffers and the upper bound here seems to be the maximal buffer size (but even then you can just use multiple buffers :).

Apple's documentation, somewhat unluckily, refers to "up to 500,000 buffers or textures" for argument buffers tier 2, but an Apple GPU engineer has clarified that this is the number of resources you can expect to simultaneously access in a shader pipeline, not the number of resources you can encode. I had no problems encoding five million buffer references in an argument buffer, and what's more, I could also successfully use all of them in a single shader invocation, which shows that the tier 2 limits in Metal are more of a recommendation than a hard limit. Anyway, no real game is going to use hundreds of thousands of textures in a single render pass, so there won't be any problems here.

What's more relevant for porting games from DX12/Vulkan to Metal is that Metal 3 has redesigned how bindings are encoded in GPU buffers. Previously, you had to rely on a type safe wrapper (MTLArgumentEncoder) to do the encoding, which was annoying and didn't map well to how these things work in other APIs. But Metal 3 gives you access to GPU resource pointers, and you simply write those pointers into the buffer memory. So setting a texture binding with Metal 3 is as easy as

C++:
((ResourceID*) bindings_buffer.contents())[1000] = my_texture.gpuResourceID(); // write the texture's GPU handle into slot 1000

with the bindings being declared like this in the shader

C++:
struct TextureBindings {
  // one million texture bindings
  texture2d<float> textures[1000000];
};

and since Metal already supports the bindless paradigm (sparse bindings and dynamic indexing of resources), emulating both DX12-style descriptor heaps and Vulkan-style descriptor sets becomes a trivial matter of allocating some large Metal buffers and copying some pointers around.
So we're still stuck at the same old porting issues many are not willing to work around. 🤷‍♂️
It was never an issue to bring anything DX12 to Metal, it was always a question of cost and time to invest. In other words, same old, same old.
 

diamond.g

macrumors G4
Mar 20, 2007
11,155
2,466
OBX
Pretty much this. The whole thing is totally pointless if not shooting for 4k output. That's what it's always been about. We've always seen 1080p/1440p with ~60fps and a massive drop with 4k. When Nvidia released DLSS they bridged that gap and 4k ~60fps became a reality. Starting with anything sub 1080p seems pointless.

Are you saying that Metal 3 brings all the features of DX12 tier 3?
Steam Deck says hi....
 

leman

macrumors Core
Oct 14, 2008
19,302
19,281
So we're still stuck at the same old porting issues many are not willing to work around. 🤷‍♂️
It was never an issue to bring anything DX12 to Metal, it was always a question of cost and time to invest. In other words, same old, same old.

Sure, but now there are fewer hoops to jump through. And it’s good news for MoltenVK.
 
  • Like
Reactions: Homy

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,072
2,650
Steam Deck says hi....
So does my Game Boy from 1989. ;)

Have you tried it on the Steam Deck? It's not doing much for resolution, in fact, at these low resolutions it works more like a regular AA algorithm, which could have been done by... an AA algorithm in the first place. Search YouTube, bunch of videos out there showing results.
 

diamond.g

macrumors G4
Mar 20, 2007
11,155
2,466
OBX
So does my Game Boy from 1989. ;)

Have you tried it on the Steam Deck? It's not doing much for resolution, in fact, at these low resolutions it works more like a regular AA algorithm, which could have been done by... an AA algorithm in the first place. Search YouTube, bunch of videos out there showing results.
Personally I am not a fan of resolution scaling. So I have it turned off on my deck. I have read that it makes things more smeared/grainy so I tend to avoid it.

Capcom livestream about to begin…
 
  • Like
Reactions: Irishman

Malus120

macrumors 6502a
Original poster
Jun 28, 2002
678
1,412
Have you tried it on the Steam Deck? It's not doing much for resolution, in fact, at these low resolutions it works more like a regular AA algorithm, which could have been done by... an AA algorithm in the first place. Search YouTube, bunch of videos out there showing results.

I think this take is a bit unfair. The quality of most temporal upscaling solutions definitely suffers at lower output resolutions (seemingly losing that "better than native" look), but they still produce results that are far better than either simply rendering at a lower res or relying on spatial upscalers/sharpeners like FSR 1.0.
Of course, there's definitely a subjective argument to be made about the results modern TAA produces even at native resolution, but that's a different discussion and honestly I don't see the market moving away from these techniques.

Personally I am not a fan of resolution scaling. So I have it turned off on my deck. I have read that it makes things more smeared/grainy so I tend to avoid it.

Capcom livestream about to begin…
Have you actually tested FSR 2.0 on the Steam Deck (it's only supported in Deathloop and God of War at the moment)?
I've heard good, albeit not amazing, things about FSR 2.0 quality mode on the internal screen. I would imagine that the smaller the screen, the better this is going to look.
 
  • Like
Reactions: Irishman

Horselover Fat

macrumors regular
Feb 2, 2012
232
306
Germany
In other words, the number of Apple Silicon Macs out there is quickly approaching (if not already at) a level where the total addressable market should be large enough to begin attracting significant developer attention, refuting the point that Macs are not "widespread" enough.
I see your point. It will be interesting to see how this plays out.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,072
2,650
I think this take is a bit unfair. The quality of most temporal upscaling solutions definitely suffers at lower output resolutions (seemingly losing that "better than native" look), but they still produce results that are far better than either simply rendering at a lower res or relying on spatial upscalers/sharpeners like FSR 1.0.
They could just render natively at these low resolutions. The Steam Deck is 800p; going from 540p internal to native 800p would cost them what, 5 fps?
Of course, there's definitely a subjective argument to be made about the results modern TAA produces even at native resolution, but that's a different discussion and honestly I don't see the market moving away from these techniques.
Of course the market isn't moving away; it makes sense. DLSS is a fantastic thing, and it allows you to play at 4K60, which was something we weren't able to do before it arrived. That might change with the upcoming 40x0 series, which will probably bring enough power to do it natively. But then we could go 4K120 or maybe 8K60...
Whether 8K makes sense for small (<100") screens is another discussion, but for a well-made, story-driven game, why not play in the home theater on a screen 15' to 20' wide? The market is there; small, but probably larger than the Mac gaming market. Time will tell.
 