Why does it seem that way? I've seen several informal reports from people who claim to be able to easily distinguish 4k from 2k, and several have said that colors have a more solid appearance on the 4k displays, and that this has nothing to do with discerning individual pixels or picture details.
Having thought about this some more, I think I understand better the reports of improved color in 4k, and why seeing an improved picture with 4k does not have to do with whether you can see individual pixels. In a sense, actually, the improvement comes because you can't discern the pixels.
As you all know, a color TV does not show all the colors we see on the screen directly, but depends on the perceptual merger of 3 RGB sub-pixels (or sometimes 4). For a panel with 8 bit color depth, each sub-pixel can take 2^8 values, which gives (2^8)^3 = 2^24 colors. (It's a little less, because not quite all the 2^8 values can be used for color.) One way to improve the color is to use a panel with 10 or 12 bit color, but such panels are expensive.
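To make the arithmetic concrete, here is a tiny Python sketch of the color counts above (the helper function is hypothetical, just spelling out the powers of two; it ignores the handful of code values reserved for non-color use):

```python
def colors_for_bit_depth(bits, channels=3):
    """Nominal number of distinct colors for a panel with the given
    per-channel bit depth (ignoring reserved code values)."""
    return (2 ** bits) ** channels

print(colors_for_bit_depth(8))   # 16777216 = 2^24, an 8 bit panel
print(colors_for_bit_depth(10))  # 1073741824 = 2^30, a 10 bit panel
```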
In the same screen area where a 2k TV has a single pixel, a 4k TV has four pixels, which gives it 4*3 = 12 RGB sub-pixels. Since we can't discern the individual sub-pixels on a 2k set, of course we can't discern the still smaller sub-pixels on a 4k set either, and the color we see at that spot on the screen will be a perceptual merger of the values given to the 12 sub-pixels. The number of levels of red that can be shown with 4 red sub-pixels is then roughly 4 * 2^8 = 2^10 for an 8 bit panel (strictly, the distinct sums of four 8 bit values number 4*255 + 1 = 1021, just under 2^10 = 1024). So since we have more and smaller pixels, we get more colors, (2^10)^3, which is the number of different colors available on a 2k set that has a 10 bit panel.
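The count of merged red levels can be checked directly. Assuming the eye sees roughly the average of a 2x2 block of four 8 bit sub-pixels, the distinct averages correspond to the distinct sums of four values in 0..255:

```python
levels_per_subpixel = 2 ** 8           # 256 values, 0..255, for an 8 bit panel
subpixels_per_block = 4                # a 2x2 block of 4k pixels in one 2k pixel's area
# Sums of four sub-pixel values range from 0 to 4*255, and every
# integer in between is reachable, so:
distinct_sums = subpixels_per_block * (levels_per_subpixel - 1) + 1
print(distinct_sums)                   # 1021 levels, roughly 2^10 = 1024
```

This is the spatial-dithering gain in the argument: about two extra bits of effective depth per channel, matching a 10 bit panel at 2k.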
Combining several small pixels of varying colors to display additional intermediate shades, a form of dithering, is something video cameras already do.
Edited by GregLee, 03 August 2014 - 06:02 PM.