...while everything is mastered in 8 bit, because of digital jitter (timing error) and other issues, errors can essentially reduce 8 bit down upwards of 40%. By having a path for 10 bit resolution, you have built in headroom...
I think this might be a misunderstanding of how bit depth and timing errors manifest as deterioration to PQ (picture quality). Skip to the last four paragraphs for what is and what is not possible; you are welcome to read and pick apart the rest (the indented part in between) if you are still wondering how we can come to this conclusion. No guns to the head here, so easy on the hatin'. If an issue is complicated, sometimes verbosity is required.
The largest 8-bit word is 11111111, or 255 decimal. This means that there are 256 possible values to assign quantization levels to (in practice, about 232). The largest 10-bit word is 1111111111, or 1023 decimal, and this yields 1024 possible quantization levels (in practice, about 1006). If you multiply 232 (for Pr) times 232 (for Pb), that yields over 53,000 shades of available colors or chroma for 8-bit video. If you multiply 1006 by 1006, that yields over a million available shades of color for 10-bit video. The point is
The largest digital word in 8-bit video (which is what all consumer HD is limited to) even when processed at 10 bits is 0011111111, again, 255 in decimal. Yes, there are two more bits of resolution in 10-bit, but if the source is 8-bit, those extra two bits are truncated, or zeroed out, meaning that the end result is exactly the same as if processed at 8-bit.
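To make that truncation point concrete, here is a quick Python sketch (entirely my own toy illustration, not any real pipeline): an 8-bit sample carried in a 10-bit word gains no information, and masking it back down to 8 bits loses none.

```python
# Toy sketch: an 8-bit source value passed through a 10-bit path.
def process_at_10_bits(sample_8: int) -> int:
    # the 10-bit word can hold 0..1023, but an 8-bit source only ever
    # supplies 0..255, so the top two bits stay zero (0011111111 max)
    return sample_8 & 0x3FF

def truncate_to_8_bits(sample_10: int) -> int:
    # zero out the two extra bits again
    return sample_10 & 0xFF

# every possible 8-bit value survives the round trip unchanged
assert all(truncate_to_8_bits(process_at_10_bits(v)) == v for v in range(256))
# and the largest word the 10-bit path ever sees is still 255
assert max(process_at_10_bits(v) for v in range(256)) == 255
print("8-bit data is bit-identical after a 10-bit pass-through")
```

In other words, the 10-bit container is just a bigger box around the same 8-bit number.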
The maximum number of values, or quantization levels, is still 256 (in practice, 232), and the available shades of color for RGB or for YPrPb is still about 53,000. The number of quantization levels and color values (assuming the sample rate [the number of pixels in the pixel map] is unchanged) is what fixes and defines how accurate the video is in relation to its source, and the pixel map itself is what defines the amount of possible detail. And that, in a nutshell, is exactly how video digitization works.
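For anyone who wants to check the arithmetic behind those shade counts, using the "in practice" level counts from above (232 per 8-bit chroma channel, 1006 per 10-bit):

```python
# Pr combinations times Pb combinations, at each bit depth
levels_8, levels_10 = 232, 1006
shades_8  = levels_8  * levels_8
shades_10 = levels_10 * levels_10
print(shades_8)    # 53824   -> "over 53,000"
print(shades_10)   # 1012036 -> "over a million"
```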
In digital delivery, the ones and zeroes representing these coefficients are usually modulated as low/high voltages, low representing a zero and high representing a one. This can be seen as a series of square-wave pulses in pulse-code modulation. Degradation to the bit stream due to frequency response losses manifests as a rounding of corners on those pulses.
Timing errors manifest as the pulses arriving a bit early or a bit late compared to a reference or to adjacent pulses, which smears the pulses' rise times and fall times.
So after transport through a hostile environment, the representation on a scope can be smeared with the pulses being rounded. That is analog degradation to the carrying medium, which is unavoidable.
But an MPEG decoder or a DAC can still identify each rounded pulse as a one, and each absence of a pulse as a zero, which means it can extract the original coefficients perfectly, making the rounding degradation of the pulses meaningless. It also knows at what time each pulse is supposed to occur, even if the pulse arrives offset in time; it reclocks the pulses, resetting them to where they were originally in reference to each other, which takes jitter out of the equation completely. What we are left with is a perfect representation of the coefficients as transmitted, even if error correction is used to supplement some of the potentially missing numbers.
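To see why thresholding and reclocking leave the numbers intact, here is a toy Python simulation (entirely my own illustration; real receivers use proper clock-recovery circuits, not this): bits go out as runs of voltage samples, "transport" rounds the pulse corners and adds noise, each pulse effectively arrives a sample early or late, and the receiver still recovers every bit exactly.

```python
import random

SAMPLES_PER_BIT = 8

def transmit(bits):
    # each bit becomes a run of high (1.0) or low (0.0) voltage samples
    wave = []
    for b in bits:
        wave.extend([1.0 if b else 0.0] * SAMPLES_PER_BIT)
    return wave

def degrade(wave, rng):
    # a crude low-pass filter rounds the square corners; noise smears them
    out, level = [], 0.0
    for v in wave:
        level = 0.6 * level + 0.4 * v
        out.append(level + rng.uniform(-0.1, 0.1))
    return out

def recover(wave, rng):
    # "reclocking": sample each pulse at its nominal center, even though
    # it effectively arrives a sample early or late (jitter), and threshold
    mid = SAMPLES_PER_BIT // 2
    bits = []
    for i in range(0, len(wave), SAMPLES_PER_BIT):
        jitter = rng.choice((-1, 0, 1))
        bits.append(1 if wave[i + mid + jitter] > 0.5 else 0)
    return bits

rng = random.Random(1)
sent = [rng.randint(0, 1) for _ in range(64)]
received = recover(degrade(transmit(sent), rng), rng)
assert received == sent   # every bit survives the rounding, noise, and jitter
print("all 64 bits recovered exactly")
```

The smeared waveform looks awful on a scope, but the decision "above or below 0.5 at the expected instant" still lands on the right answer every time, so the extracted numbers are identical to the transmitted ones.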
All of the information can still be extracted perfectly, which is why we use digital in the first place. In analog, the information is married to, even part of, the carrying medium, and degradation to the medium also degrades the message. In digital, the message is turned into a mathematical construct, essentially a number, or a stream of binary numbers, which divorces it from the medium. If the medium is degraded (up to a point) the message still survives 100% intact.
All of this depends on how much carrying-medium degradation there is. If there is little degradation, it is easy to extract the information and recreate the coefficients perfectly. If there is a lot of degradation, error correction can fill in the blanks intelligently so that the numbers are still extracted perfectly. If there is too much degradation, the signal is muted and the screen goes black. It's all, perfect, or none; there is no in-between.
That is what is sometimes referred to as the digital cliff: at any one point in time an MPEG decoder can either extract the 187 digital words in a packet, or replace corrupted values with redundant copies using error correction even if the carrying medium is severely compromised, or it can extract nothing if the packets are compromised too badly. It is all or nothing; you either end up with a perfect picture or a blank screen. The decoder either has enough (all) of the sent information to recreate the other 99% that was not encoded (discarded in compression), or it doesn't have enough information it can make sense of, can't make intelligent guesses about how to reconstruct that other 99%, and so does nothing (mutes to black).
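Here is a toy sketch of that all-or-nothing behavior in Python (illustrative only; real MPEG transport uses Reed-Solomon coding, not this): each packet is sent in triplicate with a CRC. Light corruption gets voted away and the payload comes back bit-perfect; corruption the vote can't fix fails the CRC, and the decoder returns nothing at all, never a "40% degraded" payload.

```python
import zlib

def encode(payload: bytes) -> list:
    # append a CRC, then send three redundant copies of the packet
    packet = payload + zlib.crc32(payload).to_bytes(4, "big")
    return [packet, packet, packet]

def decode(copies: list):
    # majority-vote each byte position across the three copies,
    # then verify the result against the CRC
    voted = bytes(max(set(col), key=col.count) for col in zip(*copies))
    payload, crc = voted[:-4], int.from_bytes(voted[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None   # None = mute

copies = encode(b"digital cliff")
copies[1] = b"X" + copies[1][1:]            # light corruption in one copy
assert decode(copies) == b"digital cliff"   # corrected: payload is perfect

copies[0] = b"X" + copies[0][1:]            # now two copies agree on junk
assert decode(copies) is None               # vote fails, CRC catches it: mute
print("either perfect or nothing")
```

There is no code path that hands back a partially-right picture; the output is the exact original bytes or a mute, which is the cliff.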
It might not appear that way if the stream is pixellated due to poor reception, but at any one instant in time each macroblock is either painted onto the screen perfectly or not. When not, the previous macroblock remains there frozen until eventually updated (or until a time-out mutes the entire screen), which is why you might briefly see a mosaic effect. Each part of the picture is still perfect, but older macroblocks mixed with newer, updated macroblocks destroy the stitching illusion, and what you see overall then does not look as it really should, which is considered a perceptual artifact. (This is different from pixellation due to overcompression, which obviously can present an imperfect picture, although one still faithful to what was encoded and compressed.)
that while the carrying medium can partially or gradually degrade, the information inside the digital domain doesn't, and can't; at any one particular instant in time you either have it all, extracted and reclocked perfectly faithful to what was sent, or you have nothing. There is no visible artifact that can degrade 8-bit video by "40%", and having 10-bit processing would not in any way provide "headroom" to an 8-bit signal, whether that signal was degraded (which it can't be) or not.
In binary math, and therefore in digital video, there is no "up to 40%" of a one or "part" of a zero; there can be only one of two states for each bit of the information itself: one, or zero. And the only way those numbers can be changed from one to zero or back is if a mathematical process is performed on them, which does not happen by accident in nature during transport. It happens only on purpose at decode and conversion to analog (assuming conversion happens in the DVR, which it does only for component and composite) and if the pixel map is rescaled, but it always happens in virtually the same way, providing virtually the same result, completely regardless of limited deterioration of the carrying medium during transport.
As D-Nice says, the only differences are due to differences in chipsets. The standard for what is accomplished is complicated and rigid, but how they get to that finish line, how they do that job, is up to the chipset designer. That can cause small, nearly imperceptible differences for test patterns, and virtually invisible differences for garden-variety video.
One other place where math is done is if there is YPrPb to RGB conversion, such as in the HR24-500. Since that is a matrix equation and resultant values are based on mixing percentages of other values, there is room for tiny amounts of quantization rounding error to creep in, the amount depending upon how sophisticated the conversion might be (we have to assume it is not sophisticated to keep costs down). That would imply that true YPrPb processing (no conversion needed) would be more accurate, although not necessarily "better". There would be less error if it were 10-bit video, but it isn't.
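As a rough sketch of where that rounding error can creep in, here is the standard BT.601 full-range matrix (an assumption on my part; what the HR24's chipset actually computes is not public) applied to the digital YCbCr flavor of the same color space, with every result re-quantized to 8 bits:

```python
def clamp8(v):
    # quantize back to an 8-bit integer; this is where error creeps in
    return max(0, min(255, round(v)))

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return clamp8(r), clamp8(g), clamp8(b)

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return clamp8(y), clamp8(cb), clamp8(cr)

# round-trip a coarse grid of triplets; rounding and clamping mean some
# values do not come back exactly the same
mismatches = total = 0
for y in range(0, 256, 17):
    for cb in range(0, 256, 17):
        for cr in range(0, 256, 17):
            total += 1
            if rgb_to_ycbcr(*ycbcr_to_rgb(y, cb, cr)) != (y, cb, cr):
                mismatches += 1
print(f"{mismatches} of {total} sampled triplets changed in the round trip")
assert mismatches > 0
```

Each mixing step lands between integer code values and has to be rounded back onto the 8-bit grid, so a conversion (and especially a conversion-and-back) can shift some values by an LSB or clamp out-of-gamut ones, which is exactly the tiny, purposeful, repeatable math error described above, and nothing like a transport artifact.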
Edited by TomCat, 29 December 2012 - 07:25 PM.