No confusion here. You may have noticed I used the abbreviation "Mb/s", which stands for megabits per second. "MB/s" would be the abbreviation for megabytes per second.
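Since the two units differ by exactly a factor of 8 (1 byte = 8 bits), a quick sketch of the conversion (the function name is just for illustration):

```python
# Illustrates the Mb/s vs. MB/s distinction: 1 byte = 8 bits,
# so the two units differ by a factor of 8.

def mbps_to_MBps(megabits_per_second):
    """Convert megabits per second to megabytes per second."""
    return megabits_per_second / 8.0

# A 12 Mb/s MPEG-2 stream writes only 1.5 MB/s to disk.
print(mbps_to_MBps(12))  # 1.5
```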
Uncompressed? MPEG-2 encoding is compressed by definition: a combination of compression algorithms that reduce the number of bits needed to convey the necessary information. It is never lossless (information is indeed discarded, so the original signal can't be reconstituted 100% faithfully), but it is very robust, and if done properly the losses are not at all noticeable.
A few years ago I did some tests to see how low a bit rate I could use before degradation became noticeable, in order to decide what bit rate we would run in our facility. At the time, storage media was much more expensive and we were finding it difficult to store more than about 2000 minutes of video for a commercial playback system. Using a 17-frame GOP and 4:2:0 chroma subsampling, and encoding extremely "busy" video (white-water rafting footage and a clip of a vat of quarters being poured onto a table), I found I could go down to about 6 Mb/s before the encoders became bit-starved and started to pixellate. Of course we were also encoding 4 audio channels and 1 timecode channel in the same bitstream, which takes up about 8-9% of the bits. We chose 10 Mb/s just to be safe. Today, now that we have sixty 181 GB drives capable of storing well over 1500 hours of video, we use 12 Mb/s.
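That "well over 1500 hours" figure can be sanity-checked with some quick arithmetic. A rough sketch (drive count, capacity, and bit rate are from the description above; filesystem and RAID overhead are ignored):

```python
# Rough capacity check: 60 drives x 181 GB at a 12 Mb/s stream rate.
# Filesystem/RAID overhead is ignored for simplicity.

DRIVES = 60
GB_PER_DRIVE = 181
BITRATE_MBPS = 12  # total stream: video + audio + timecode, in Mb/s

total_megabits = DRIVES * GB_PER_DRIVE * 1000 * 8  # GB -> megabits
seconds = total_megabits / BITRATE_MBPS
hours = seconds / 3600
print(round(hours))  # about 2011 hours -- "well over 1500 hours"
```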
There are two other major factors in DBS distribution: VBR (variable bit rate) and statistical multiplexing, both used to smooth out the peaks of bit demand in a multi-channel delivery system. Calculating the actual bit rate of a single DBS channel is therefore both a moving target and a complex issue, dependent on all of the other channels being transmitted. VBR is usually limited to some small percentage, however, so that the low-cost decoders in the STBs can keep up.
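As a toy illustration of how those two mechanisms interact (my own simplified sketch, not how any real DBS encoder's rate control works): a fixed transponder bit pool is split among channels in proportion to their instantaneous complexity, with each channel's swing clamped to a small percentage around its nominal rate.

```python
# Toy statistical multiplexer: divide a fixed bit pool among channels
# in proportion to instantaneous demand, then clamp each channel to
# +/- 10% of its nominal rate (the limited-VBR constraint).
# Illustrative sketch only, not a real DBS rate-control algorithm.

def stat_mux(pool_mbps, nominal, demand, vbr_limit=0.10):
    raw = [pool_mbps * d / sum(demand) for d in demand]
    return [min(max(r, n * (1 - vbr_limit)), n * (1 + vbr_limit))
            for r, n in zip(raw, nominal)]

# Three nominal 4 Mb/s channels sharing a 12 Mb/s pool; channel 0
# carries "busy" video demanding far more bits than the other two.
rates = stat_mux(12, [4, 4, 4], [8, 2, 2])
print(rates)  # the busy channel is capped at 4.4 Mb/s
```

The clamp is why a busy channel can still pixellate under stat mux: no matter how much the picture demands, the cheap STB decoder is only guaranteed to keep up within that narrow VBR window.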