Pushing Through The Pixels
The following table shows the maximum supported pixel clock for various interface protocols, when using 24 bits per pixel:
| Interface         | Max Clock (MHz) |
|-------------------|-----------------|
| DVI (single link) | 165             |
| DVI (dual link)   | >330            |
| HDMI v1.0 - v1.2  | 165             |
| HDMI v1.3 - v1.4  | 340             |
And this table lists the pixel clock required for a particular video mode. The values were derived by entering each video mode into the Custom Timing generator of the Nvidia Control Panel. AMD GPUs may choose slightly different timings, but they should mostly match.
| Resolution  | Refresh Rate (Hz) | Pixel Clock (MHz) |
|-------------|-------------------|-------------------|
| 640 x 480   | 60                | 25.175            |
| 800 x 600   | 60                | 36.000            |
| 1024 x 768  | 60                | 65.000            |
| 1280 x 1024 | 60                | 108.000           |
| 1440 x 900  | 60                | 88.750            |
| 1600 x 1200 | 60                | 162.000           |
| 1920 x 1200 | 60                | 154.000           |
| 2560 x 1440 | 60                | 234.590           |
| 2560 x 1440 | 85                | 336.375           |
| 2560 x 1440 | 120               | 483.120           |
| 2560 x 1440 | 144               | 586.586           |
| 2560 x 1440 | 165               | 678.100           |
| 3440 x 1440 | 60                | 312.787           |
| 3440 x 1440 | 85                | 448.500           |
| 3440 x 1440 | 100               | 531.520           |
| 3440 x 1440 | 120               | 644.160           |
| 3840 x 2160 | 60                | 533.250           |
| 3840 x 2160 | 120               | 1075.804          |
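The two tables can be cross-referenced mechanically. Here's a small sketch (the names are my own, the numbers come straight from the tables; dual-link DVI is omitted because its limit isn't a single fixed number):

```python
MAX_CLOCK_MHZ = {
    "DVI (single link)": 165.0,
    "HDMI v1.0 - v1.2": 165.0,
    "HDMI v1.3 - v1.4": 340.0,
}

MODES = [
    # (width, height, refresh rate in Hz, pixel clock in MHz)
    (1920, 1200, 60, 154.000),
    (2560, 1440, 60, 234.590),
    (2560, 1440, 120, 483.120),
    (3840, 2160, 60, 533.250),
]

def supported_modes(interface):
    """Return every mode whose pixel clock fits within the interface limit."""
    limit = MAX_CLOCK_MHZ[interface]
    return [mode for mode in MODES if mode[3] <= limit]

for w, h, hz, clk in supported_modes("HDMI v1.3 - v1.4"):
    print(f"{w} x {h} @ {hz} Hz ({clk} MHz) fits")
```

Running this shows that 2560 x 1440 @ 60 Hz squeezes under the HDMI 1.3 limit, while the 120 Hz variant does not.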
Let's have an in-depth look at the number of pixels that can be pushed through a cable, and how that translates into resolutions and refresh rates.
When a GPU sends images to a monitor, the stream always has the following components:
- actual pixels
- blanking information
- extra data
The pixels can be in various formats, but in the PC world, they are most commonly 24-bit RGB.
The blanking information marks the start of an image and the start of a new line.
The extra data can be different things. Some common ones are:
- information from the GPU to the monitor about the format in which images are transmitted.
- decryption information when DRM (almost always HDCP) is being used
This kind of extra data isn't always present: some cable protocols don't even have a way of transmitting it, and the list above isn't exhaustive either.
Combined, these components take their share of the total amount of bits that are transmitted, and they will determine the maximum amount of pixels and frames that can be transmitted per second.
An important concept is the pixel clock. This determines the gross number of pixels that can be sent per second over the cable. Why 'gross'? Because the net number of pixels that will be displayed will always be lower, due to the extra information that's being sent.
Let's give an example: say we want to send a 4K image at 60Hz from the GPU to the monitor.
4K is a resolution of 3840 x 2160 = 8,294,400 pixels per image. At 60Hz refresh rate, that's 497,664,000 pixels per second.
Without any overhead, we could make do with a pixel clock of 497.7 MHz, but now we need to take into account horizontal and vertical blanking times.
VESA defines a set of timings for various resolutions and refresh rates. One such set is called 'CVT-RB', or Coordinated Video Timings - Reduced Blanking. Let's use that one, since you can download a spreadsheet for free that calculates the timings for you.
According to the spreadsheet, 4K@60Hz needs a horizontal blanking time of 160 pixels per line. That increases the total horizontal resolution from 3840 to 4000. Similarly, CVT-RB uses a vertical blanking time of 62 lines, for a total of 2222 lines.
The total amount of 'gross' pixels per image is now 4000 x 2222 = 8,888,000. At 60Hz, that's 533,280,000 pixels per second.
We need a pixel clock of 533.3 MHz to transfer 4K images at 60Hz, matching the 533.250 entry in the table above. That's roughly a 7% overhead over just sending the visible pixels alone.
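The blanking arithmetic is easy to sketch in a few lines. This uses the CVT-RB blanking values for 4K (160 extra pixels per line, 62 extra lines per frame), which reproduce the 533.250 MHz entry in the table above:

```python
def pixel_clock_mhz(h_active, v_active, refresh_hz, h_blank, v_blank):
    h_total = h_active + h_blank      # gross pixels per line
    v_total = v_active + v_blank      # gross lines per frame
    return h_total * v_total * refresh_hz / 1e6

gross = pixel_clock_mhz(3840, 2160, 60, h_blank=160, v_blank=62)
net = 3840 * 2160 * 60 / 1e6          # visible pixels only: ~497.7 MHz

print(f"pixel clock: {gross:.1f} MHz")      # ~533.3 MHz
print(f"overhead: {gross / net - 1:.1%}")   # ~7.2%
```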
The pixel clock is one of the defining parameters that determines whether or not a particular resolution/refresh rate can be transmitted over a particular video cable.
Note that the pixel clock doesn't necessarily correspond to the highest rate at which pixels are transferred per wire: pixels may be transmitted in parallel on the same cable. For example, a pixel clock of 200 MHz with images being transferred over Dual-Link DVI would result in a physical clock of only 100 MHz.
Horizontal Scan Rate
The horizontal scan rate is the frequency at which horizontal lines are being transmitted (or displayed). It can easily be calculated once you know the refresh rate and the gross number of horizontal lines in the image.
From the 4K example above, it's 60Hz * 2222 lines/image = 133.32 kHz.
The horizontal scan time is the time to transmit just one line. In our example, it's 1/133.32 kHz = 7.5 µs.
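The same numbers expressed in code, using the gross line count from the 4K example:

```python
refresh_hz = 60
total_lines = 2222                    # 2160 visible + 62 blanking lines

scan_rate_khz = refresh_hz * total_lines / 1000
scan_time_us = 1e6 / (refresh_hz * total_lines)

print(f"scan rate: {scan_rate_khz:.2f} kHz")  # 133.32 kHz
print(f"scan time: {scan_time_us:.1f} us")    # 7.5 us per line
```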
Different Cable Formats
We established above that the pixel clock is the determining factor in judging whether a particular interface/cable can be used to transmit images from the GPU to the monitor.
Let's now go over the various standards to see what's possible and what is not.
In old-school VGA mode, images are transmitted in analog format. This means that, in theory, you could send out pixels at almost any pixel clock, as long as you're able to wiggle the signal fast enough.
In practice, there are limits to how fast you can do this: when signals travel through a cable, different frequencies travel at different speeds. This effect gets worse at higher frequencies, and it deforms the signal into an unrecognizable mess.
According to Wikipedia, the highest timings for VGA are 2048 x 1536 @ 85Hz. With standard CVT timings, that results in a pixel clock of 388.5 MHz, which is insane. I've seen 1080p at 60Hz over VGA (DMT pixel clock of 148.5 MHz), and that one already showed visible issues: edges that weren't quite sharp and some ghosting here and there.
So for practical purposes, let's say that a pixel clock of 150 MHz is close to the practical limit of VGA.
DVI introduced the era of high pixel clock digital signalling, but it also maintained some backward compatibility with VGA.
That backward compatibility was achieved by having a completely different set of wires in the cable, and pins in the connector, that supported the original VGA signalling. This is called DVI-I. Most modern GPUs don't support that version anymore, but if you want to know more: it's really just VGA.
DVI-D is purely digital.
Contrary to VGA, data is transferred over differential pairs: instead of a single wire that transfers, say, red, there are now 2 wires, where one wire carries exactly the opposite signal of the other. This has the benefit of being largely immune to so-called common-mode noise: when some external source injects noise onto both wires, the noise can easily be removed by subtracting the 2 values from each other, which cancels it out.
It uses 3 differential pairs for data, R,G, and B, that run at 10 times the rate of a 4th pair, which contains the pixel clock. The DVI-D specification declares a maximum pixel clock of 165 MHz.
A display mode of 1080p @ 60Hz (DMT timings) with a pixel clock of 148.5 MHz fits in just fine.
Since the data wires run at 10 times the clock, a pixel clock of 165 MHz means that the rate of the data wires in DVI-D is 1.65 Gbps.
Now we know that pixels are 24 bits per pixel, or 8 bits per color component. Why, then, is there a factor of 10 instead of 8?
That's because DVI uses TMDS encoding: 8 bits are transformed into 10 bits. This has a number of benefits that improve the transmission characteristics over the wire:
- A balanced number of zeros and ones on the wires. This ensures that the average voltage on the wires is 0.
- Extra room for special 10-bit codes: this makes it possible to encode vertical and horizontal sync over the data wires. There are also some codes that make it possible for the receiver to synchronize the start of a 10-bit word in an endless stream of bits.
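The 8b/10b arithmetic is worth spelling out. A back-of-the-envelope sketch, using the figures from this section:

```python
pixel_clock_mhz = 165     # single-link DVI maximum
tmds_bits = 10            # TMDS encodes 8 payload bits into 10 wire bits

wire_rate_gbps = pixel_clock_mhz * tmds_bits / 1000
payload_gbps = pixel_clock_mhz * 8 * 3 / 1000   # 3 pairs x 8 net bits each

print(f"{wire_rate_gbps:.2f} Gbps per data pair")   # 1.65 Gbps
print(f"{payload_gbps:.2f} Gbps of pixel payload")  # 3.96 Gbps
```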
For pixel clocks that exceed 165 MHz, there's dual-link DVI.
Dual-Link DVI adds 3 additional differential pairs over which to transfer RGB. The clock link is shared between the 2 sets, so now there are 7 instead of 4 pairs to transfer image data. The original set of regular DVI is used to transfer the odd pixels, the extra set is used to transfer the even pixels.
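The odd/even interleaving can be illustrated with a toy scanline (this is just a sketch of the idea, not the wire format):

```python
scanline = list(range(8))   # stand-in for the pixels of one line

link1 = scanline[0::2]      # 1st, 3rd, 5th, ... pixel on the original pairs
link2 = scanline[1::2]      # 2nd, 4th, 6th, ... pixel on the extra pairs

# The sink interleaves the two halves to reconstruct the line:
rebuilt = [p for pair in zip(link1, link2) for p in pair]
assert rebuilt == scanline
```

Since each link only carries every other pixel, each link runs at half the pixel clock.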
The DVI specification says that DL-DVI must be used for pixel clocks that go above 165 MHz. A common mistake is to assume that DL-DVI is thus limited to a pixel clock of 330 MHz, since there are now simply 2 pixels in parallel. This is not the case: there is no such limitation and sources and sinks are allowed to jack up the pixel clock to whatever rate they want.
Note that even the Wikipedia on DL-DVI makes this mistake!
That doesn't mean that there aren't any limitations: older GPUs weren't designed to transfer pixels at pixel rates that were higher than 330 MHz, so that automatically became the limit.
A good example is 2560 x 1440 @ 120 Hz: this is a video mode that is routinely achieved by some direct drive monitors over DL-DVI. When using CVT-RB v2 timings, this requires a pixel clock of 483 MHz, far higher than the incorrect 330 MHz limit.
HDMI is built on top of DVI-D. They use the same low-level signalling and there is a considerable amount of compatibility between the two formats.
One way in which they differ is the maximum pixel clock speed. While HDMI initially had the same 165 MHz limit as single-link DVI, this was later increased to 340 MHz with the introduction of HDMI version 1.3, and to 600 MHz for HDMI version 2.0.
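Given those limits, picking the minimum HDMI revision for a mode is a simple lookup. An illustrative helper (the function name is my own):

```python
HDMI_LIMITS_MHZ = [("1.0 - 1.2", 165), ("1.3 - 1.4", 340), ("2.0", 600)]

def min_hdmi_version(pixel_clock_mhz):
    """Smallest HDMI revision whose limit covers the given pixel clock."""
    for version, limit in HDMI_LIMITS_MHZ:
        if pixel_clock_mhz <= limit:
            return version
    return None    # out of reach for the revisions listed here

print(min_hdmi_version(148.5))    # 1080p @ 60 Hz -> '1.0 - 1.2'
print(min_hdmi_version(483.12))   # 1440p @ 120 Hz -> '2.0'
```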
A little known fact is that the HDMI specification also defines a dual-link HDMI version, but actual implementations, if they exist at all, are very rare indeed: they have never been observed in the field.
For DVI and HDMI, a physical clock travels alongside the data signals and it's either the pixel clock or half the pixel clock (for DL-DVI.) When a video mode is selected that requires a particular pixel clock, this pixel clock is reflected in the data rate on the cable.
DisplayPort does something entirely different: it has a number of fixed rates at which data can travel over the cable. When a video mode is selected that doesn't need the full bandwidth of a particular physical clock, empty data words are inserted instead.
There are 4 data wires that all run at the same physical clock. There is no separate clock line: the clock speed to be used is communicated by the source to the sink via a separate communication channel.
Each of the physical clock speeds has its own name.
- RBR: Reduced Bit Rate. 1.62 Gbps.
- HBR1: High Bit Rate 1. 2.7 Gbps.
- HBR2: High Bit Rate 2. 5.4 Gbps. (Introduced with DP 1.2)
- HBR3: High Bit Rate 3. 8.1 Gbps. (Introduced with DP 1.3)
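DisplayPort 1.x also uses an 8b/10b line code (ANSI 8b/10b rather than TMDS), so 8 out of every 10 wire bits are payload. A rough sketch of the usable bandwidth per link rate, assuming all 4 lanes are in use:

```python
LANE_RATE_GBPS = {"RBR": 1.62, "HBR1": 2.7, "HBR2": 5.4, "HBR3": 8.1}
LANES = 4

for name, rate in LANE_RATE_GBPS.items():
    payload = rate * LANES * 8 / 10    # 8b/10b leaves 80% for payload
    print(f"{name}: {payload:.2f} Gbps usable across {LANES} lanes")
```

HBR2 at 4 lanes, for instance, works out to 17.28 Gbps of payload, enough for the 4K@60Hz mode from the table above at 24 bits per pixel.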