
joevt

Contributor
Jun 21, 2012
6,661
4,078
I presume you're not referring to standard displays though, because I'm currently running 2x4k's through a single Thunderbolt connection via this guy to my M1 Max.
Different Apple Silicon Macs support different numbers of displays:
https://forums.macrumors.com/threads/multiple-displays.2409806/post-32694660

Being a hub, I wondered if it was passive enough that it would "just work" - or that the implementers *think* it would and just never tried it (assuming that everything specified will magically be fine is not unheard of in marketing). But I could be maligning them. I'd thought asymmetric mode was older (I remember it being announced as a USB4 feature before I got my M1Max MBP, because I asked Apple whether they supported it - they didn't...) but it may have been in a press release before it was fully standardised.
Hubs are not so passive. They have to look at the packets coming in and decide where they go. Passive would be like a coax splitter for connecting two TVs to the same cable output.
A USB4 v2 or Thunderbolt 5 hub would not be able to get the 80 Gbps asymmetric mode from existing Macs, which only support 40 Gbps.

That's why I was confused - the MBP display info screen swore blind it was a 10-bit panel (like the two UltraFine 4Ks, which are collectively using the same bandwidth over two links). I initially wondered if the Mac was using a 10-bit internal framebuffer and dithering it to 8-bit in the display controller (I tried a screen grab to prove it, but of course that was 8-bit PNG regardless...) but if that was the solution, it wouldn't explain why the internal display was only 8-bit - this on my Intel/AMD MBP.

The OSD reported the refresh rate and resolution; I've just fired up my old computer to check, and the OSD says "input color format RGB" (7680x4320, 30Hz). SwitchResX reports billions of colours. (I think the macOS UI used to as well, but updates made it harder to find.) I'd be fine to switch between the 24 and 30Hz depending on whether I want higher bit depth (I did set up some gradients to see if I could tell on the display, but it's a mild pain to make out); 30Hz and 30bpp was a nice bonus. It might just be overdriving the clock and both ends happen to work?
macOS only reports the color depth of the framebuffer. To see the color depth being output to the display, you need to get that info from the onscreen menu of the display or from a utility like AllRez or AGDCDiagnose. I don't know how to get that info for Apple Silicon Macs though; ioreg will give a list of color modes for each display mode but it doesn't show which color mode is currently being used.

Too bad most displays don't show current pixel depth.

I don't know if dithering happens when outputting a 10bpc framebuffer to an 8bpc display.
I know dithering in the framebuffer can happen if the framebuffer is 8bpc but this is up to the app (or API) doing the drawing into the framebuffer.

I actually didn't realise the UltraFine 5K was a dual tile display - I assumed it was just using HBR3, since that should be enough bandwidth for 60Hz 30bpp.
The LG UltraFine 5K works with Macs that don't support HBR3. It always counted as two displays on Intel Macs when doing 5K60 because it uses two HBR2 x4 connections. It doesn't have an HBR3 mode. It's like the Dell UP2715K, except the LG UltraFine 5K uses Thunderbolt to carry the two DisplayPort connections.
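The bandwidth arithmetic shows why two HBR2 x4 links are needed. A rough sketch (active pixels only, blanking overhead ignored, so real requirements are somewhat higher); note that even HBR3 x4's ~25.9 Gbps payload would fall just short of 5K60 at 10bpc:

```python
# Why 5K60 at 10bpc needs two DisplayPort HBR2 x4 links.
# Payload figures are after 8b/10b encoding (80% efficiency);
# blanking overhead is ignored, so real numbers are higher.

def dp_payload_gbps(gbps_per_lane, lanes=4, efficiency=0.8):
    """Usable payload of a DisplayPort link (8b/10b encoding = 80%)."""
    return gbps_per_lane * lanes * efficiency

hbr2 = dp_payload_gbps(5.4)  # 17.28 Gbps per HBR2 x4 link
hbr3 = dp_payload_gbps(8.1)  # 25.92 Gbps per HBR3 x4 link

# 5120x2880 @ 60 Hz, 30 bits per pixel (10bpc RGB), active pixels only:
needed = 5120 * 2880 * 60 * 30 / 1e9  # ~26.54 Gbps

print(f"HBR2 x4 payload:  {hbr2:.2f} Gbps")
print(f"5K60 10bpc needs: {needed:.2f} Gbps")
print("fits one HBR2 x4 link?", needed <= hbr2)      # False -> two links
print("fits two HBR2 x4 links?", needed <= 2 * hbr2) # True
print("fits one HBR3 x4 link?", needed <= hbr3)      # False
```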

Apple Silicon Macs are weird because they treat dual tile displays as one display. An M1 Mac supports one display from Thunderbolt but can have two DisplayPort connections where the second DisplayPort connection can only be used by the second tile of a dual tile display. I wonder if Asahi linux could somehow create a second display from the second DisplayPort connection? It might be limited to being equal in size/timing of the display connected to the first DisplayPort connection.

The 6K displays would still need splitting, but I assumed they were special cased. Somehow I'd missed that the M1 could drive the 6K display, or forgotten that I thought it was odd - I could believe it being easier on the display controller if it's offering paired pixels across two channels, similar to the old T220 pixel interleaving.
The XDR supports:
- dual tile 6K60 for GPUs that support HBR3 but not DSC. This is the only mode of the XDR that uses HBR3.
- single tile 6K60 for GPUs like Navi, RTX, Apple Silicon that support DSC.
- dual tile 5K60 for GPUs that don't support DSC.
- single tile 5K60 for GPUs that support DSC. Also, maybe 6bpc, which doesn't require DSC; macOS doesn't support 6bpc.
- single tile 4K60, 1440p, etc.
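The mode list above follows from the link budget. A sketch of the arithmetic (active pixels only, blanking ignored) showing why single-tile 6K60 at 10bpc needs DSC on an HBR3 x4 link, while 6bpc would squeak through uncompressed:

```python
# Active-pixel bandwidth of XDR modes vs one HBR3 x4 link.
# Payload is after 8b/10b encoding; blanking overhead is ignored,
# so real requirements are somewhat higher.

HBR3_X4 = 8.1 * 4 * 0.8  # 25.92 Gbps payload

def active_gbps(width, height, hz, bpp):
    """Active-pixel data rate in Gbps for a given mode."""
    return width * height * hz * bpp / 1e9

six_k_10bpc  = active_gbps(6016, 3384, 60, 30)  # ~36.6 Gbps: needs DSC or two tiles
six_k_6bpc   = active_gbps(6016, 3384, 60, 18)  # ~22.0 Gbps: fits without DSC
five_k_10bpc = active_gbps(5120, 2880, 60, 30)  # ~26.5 Gbps: still over one link
```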

I assume at some point one runs into the bandwidth limit of the interface. What Apple are prepared to admit they support on their web site and what the hardware can actually do are very different things, as this thread shows. Unfortunately what they "support" is what the customer-facing people are prepared to talk about, with nothing more specific. Thank you, this forum!
The bandwidth limit of the interface is HBR3 x4 for each port. External factors such as cables, adapters, hubs, and docks can reduce that limit. Different GPUs have different numbers of ports. However, a GPU having n ports capable of HBR3 x4 bandwidth doesn't necessarily mean it can use all of them at full bandwidth, even after ignoring external factors. There are internal factors that limit the bandwidth of a port. For example, a certain older Intel or Nvidia GPU supports HBR2 x4 but can't output a pixel clock above 540 MHz even though the link should allow 720 MHz (at 8bpc) or 576 MHz (at 10bpc).
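The 720/576 MHz figures fall out of the link payload directly. A quick sketch of that derivation (the 540 MHz cap is the internal GPU limit from the example above, not a link property):

```python
# Maximum pixel clock an HBR2 x4 link can carry, vs a GPU whose internal
# pipeline is capped at 540 MHz regardless of link bandwidth.

payload_bps = 5.4e9 * 4 * 0.8  # HBR2 x4 after 8b/10b = 17.28 Gbps

max_pclk_8bpc  = payload_bps / 24 / 1e6  # 720 MHz at 24 bits/pixel
max_pclk_10bpc = payload_bps / 30 / 1e6  # 576 MHz at 30 bits/pixel
gpu_cap_mhz = 540  # internal limit of the example GPU

# The GPU, not the link, is the bottleneck in both color depths:
print(gpu_cap_mhz < max_pclk_8bpc, gpu_cap_mhz < max_pclk_10bpc)  # True True
```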

A GPU can only do so much work. Compression for DSC? Scaling for scaled modes? Converting pixels to output? Some of these processes might reduce the number of displays that can be connected. I don't know which.

If a GPU has a clock of x MHz, but needs to output pixels > x MHz, then it has to do a parallel task using a second something. There's a limited number of these somethings.

DSC splits a framebuffer into columns called slices. A display tells the GPU how many slices it supports, how wide they can be, and how many pixels/second each slice can accept. I suppose each slice is compressed separately and could be a separate task done in parallel. A GPU supports a certain number of slices and slice widths. The intersection of the GPU and display capabilities determines the display modes that can be used. This has been true since analog CRTs.
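The capability-intersection idea can be sketched in code. This is a hedged illustration, not how any driver actually does it: the function name and the example numbers (slice counts, 2048 px max width, 340 Mpx/s per slice) are invented for illustration.

```python
# Sketch (invented names/numbers): choosing a DSC slice count that both the
# GPU and the display support. The display advertises how many slices it
# accepts, how wide they may be, and how many pixels/second each can take;
# the GPU advertises which slice counts it can generate.

def pick_slice_count(width, pclk_mhz, gpu_slice_counts, disp_slice_counts,
                     disp_max_slice_width, disp_slice_mpps):
    """Smallest slice count both sides support that satisfies the display's
    per-slice width and pixel-rate limits, or None if there isn't one."""
    common = sorted(set(gpu_slice_counts) & set(disp_slice_counts))
    for n in common:
        slice_width = -(-width // n)      # ceiling division
        per_slice_rate = pclk_mhz / n     # Mpx/s handled by each slice
        if slice_width <= disp_max_slice_width and per_slice_rate <= disp_slice_mpps:
            return n
    return None

# Illustrative: a 6016-wide mode at a 1221 MHz pixel clock, display accepting
# up to 8 slices of <=2048 px at <=340 Mpx/s each, GPU offering 1/2/4 slices.
print(pick_slice_count(6016, 1221, [1, 2, 4], [1, 2, 4, 8], 2048, 340))  # 4
```

With these made-up limits, 1 or 2 slices would be too wide, so 4 is the first count both sides can live with.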

Derived

macrumors 6502
Mar 1, 2015
313
205
Midwest
Unlikely, but wondering if anyone has happened to drive a UP3218K with a W5700 - just picked up a 16c/1tb/256gb 2019 MP w/a 5700 and have been eyeing a few of these displays on eBay since they seem to have depreciated quite a bit. Seems like it would make quite a nice single-display solution. The machine will likely be used to run Linux much of the time as well, via an RTX 3090, but will spend some time in macOS too, and it would be nice if that GPU could drive the display reasonably well.