When I was evaluating Wi-Fi software for an industrial application a few years back, the software vendors treated data rate fallback as an afterthought. I spent a lot of time on rooftops testing fallback in real-world scenarios. I've recently been reading about LTE "4G", and I wonder how effective its fallback algorithm is.
Fallback is the ability to use a lower data rate to maintain a link at lower signal-to-noise ratios (SNRs). Lower data rates require less SNR to maintain the same bit error rate (BER). They also require less transmitter linearity, allowing the transmitter to turn up the output power to levels that would introduce too much distortion for higher data rates.
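To put rough numbers on that relationship, here is a small sketch using textbook BER approximations for Gray-coded square QAM in an AWGN channel. The modulation orders and the 1e-5 target BER are my own illustrative choices, not tied to any particular Wi-Fi rate.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_qam(m, ebn0_db):
    """Approximate bit error rate for Gray-coded square M-QAM in AWGN.
    For M=2 (BPSK) the exact expression Q(sqrt(2*Eb/N0)) is used."""
    ebn0 = 10 ** (ebn0_db / 10)
    if m == 2:
        return q(math.sqrt(2 * ebn0))
    k = math.log2(m)
    return (4 / k) * (1 - 1 / math.sqrt(m)) * q(math.sqrt(3 * k / (m - 1) * ebn0))

def required_ebn0(m, target_ber=1e-5):
    """Scan Eb/N0 in 0.1 dB steps until the target BER is met."""
    ebn0_db = 0.0
    while ber_qam(m, ebn0_db) > target_ber:
        ebn0_db += 0.1
    return ebn0_db

# Lower-order modulations (lower data rates) need far less Eb/N0
# for the same bit error rate.
for m, name in [(2, "BPSK"), (16, "16-QAM"), (64, "64-QAM")]:
    print(f"{name:7s} needs ~{required_ebn0(m):.1f} dB Eb/N0 for BER 1e-5")
```

The gap between BPSK and 64-QAM works out to roughly 8 dB here, which is the kind of headroom a fallback algorithm is trading against data rate.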
Fallback is especially important in Wi-Fi because the highest data rates are at the upper limit of what is theoretically possible using the inexpensive ADCs in a Wi-Fi chipset. Any noise at all introduces bit errors.
In 802.11(b), which was popular in the early 2000s, Wi-Fi cards operated at their top data rate (11Mbps) without many bit errors. When 802.11(g) appeared in consumer networking products in 2004, fallback became more important. Its top data rate of 54Mbps required a high SNR. The modulation for all (g) data rates was OFDM, which requires high transmitter linearity for the top data rates. It’s common for a transmitter to have to cut back to 10% of its full low-data-rate output power to transmit at the higher rates.
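One reason OFDM stresses the power amplifier is its high peak-to-average power ratio (PAPR): the sum of many subcarriers occasionally peaks far above the average power, so the amplifier must be backed off to stay linear. Here is a rough numerical sketch of that effect, using randomly QPSK-loaded symbols on 52 subcarriers; it ignores the pilots, cyclic prefix, and oversampling of the real 802.11g waveform.

```python
import numpy as np

rng = np.random.default_rng(0)

def ofdm_papr_db(n_subcarriers=52, n_symbols=10000):
    """Peak-to-average power ratio of random QPSK-loaded OFDM symbols, in dB."""
    # Random QPSK constellation points on each subcarrier.
    bits = rng.integers(0, 2, size=(n_symbols, n_subcarriers, 2))
    symbols = (2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)
    # The IFFT turns the frequency-domain loading into a time-domain waveform.
    waveform = np.fft.ifft(symbols, axis=1)
    power = np.abs(waveform) ** 2
    papr = power.max(axis=1) / power.mean(axis=1)
    return 10 * np.log10(papr)

papr = ofdm_papr_db()
print(f"median PAPR: {np.median(papr):.1f} dB, "
      f"99th percentile: {np.percentile(papr, 99):.1f} dB")
```

Peaks several dB above the average power are routine, which is roughly where the "back off to a tenth of the power" rule of thumb comes from.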
A good fallback algorithm reduces the data rate until the packet error rate (PER) is about 10%. The algorithm could cut its data rate in half and reduce PER to way below 10%, but even if PER fell to 0% the total throughput would be lower than using the higher rate and accepting that 10% of packets must be re-sent. If intermittent noise is causing the errors, the lower data rate might actually make PER worse because packets take longer to send and are more likely to collide with intermittent interference.
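A toy calculation makes the trade-off concrete. The rates below are illustrative 802.11g-style numbers, and retry overhead beyond the retransmission itself is ignored.

```python
# Is it better to accept ~10% packet loss at a high rate, or halve the rate
# and lose (almost) nothing? Goodput is roughly PHY rate x (1 - PER) when
# every lost packet is simply retransmitted.
def effective_throughput(phy_rate_mbps, per):
    return phy_rate_mbps * (1 - per)

print(effective_throughput(54, 0.10))  # 48.6 Mbps: 54 Mbps with 10% PER
print(effective_throughput(27, 0.00))  # 27.0 Mbps: half the rate, even with 0% PER
```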
802.11(n) added another wrinkle to this problem by supporting MIMO, meaning multiple transceivers. In poor signal conditions, the multiple transceivers provide spatial diversity. If one antenna is in a bad location, maybe one of the others will happen to be in a good location. In excellent channel conditions, an 802.11(n) chipset can transmit multiple data streams, one from each antenna, at the same time and frequency. It’s counterintuitive to me that MIMO works at all outside a MATLAB simulation, and channel conditions must be excellent for it to work. The fallback algorithm now has to select both whether to transmit multiple streams and which rate to use. You cannot simply fall back “one step” because there are two parameters to select.
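Here is a sketch of why “one step” is ill-defined once streams enter the picture. The (streams, MCS) table below is invented for illustration; real 802.11n MCS rates and the SNR each one needs differ and depend on channel width, guard interval, and the channel itself.

```python
# Hypothetical (streams, MCS) table: PHY rate in Mbps and a rough SNR (dB)
# needed to keep PER near 10%. Numbers are made up for illustration.
RATE_TABLE = [
    # (streams, mcs, rate_mbps, min_snr_db)
    (1, 0,   6.5,  5),
    (1, 3,  26.0, 15),
    (1, 7,  65.0, 26),
    (2, 0,  13.0,  9),
    (2, 3,  52.0, 19),
    (2, 7, 130.0, 31),
]

def pick_rate(estimated_snr_db):
    """Pick the highest-throughput (streams, MCS) whose SNR requirement is met.
    With two parameters, 'falling back one step' is ambiguous: the next best
    choice may change the stream count, the MCS, or both."""
    feasible = [row for row in RATE_TABLE if row[3] <= estimated_snr_db]
    return max(feasible, key=lambda row: row[2]) if feasible else RATE_TABLE[0]

print(pick_rate(32))  # (2, 7, 130.0, 31): two streams at the top MCS
print(pick_rate(25))  # (2, 3, 52.0, 19): not "one step" down on either axis
```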
When 802.11(n) was new, I worked at a company using 802.11(n) chipsets. The new fallback complexity meant the software often picked a horrible choice of data rate. Fixing it required tweaking controls in the driver that I didn’t have access to. My carrying on about fallback reminded me of the story of a Roman politician during the Punic Wars who would absurdly end every speech, even if it was completely unrelated to the Punic Wars, with the phrase, “And furthermore, I consider that Carthage must be destroyed.” The signature on my e-mail at that time was, “And furthermore, I consider that the fallback algorithm must be fixed.”
Last week I watched an IEEE talk on LTE “4G” mobile wireless. Surprisingly, the same fallback issues affect LTE. They call the number of streams "rank", but the issues are the same. They address it by selecting the "rank" first and then calculating the best data rate. The presentation showed how, just as in Wi-Fi, once the rank is determined a good fallback algorithm achieves a higher throughput (the dashed black line in one of the presentation’s plots) by selecting the best rate for a given SNR.
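A schematic of that "rank first, then rate" idea might look like the sketch below. The SNR threshold and the per-rank rate tables are invented for illustration; real LTE drives this from the UE's rank indicator and CQI reports against standardized MCS tables.

```python
# Stage 1 picks the rank (number of streams), stage 2 picks the best rate
# within that rank. All numbers here are illustrative placeholders.
RANK_SNR_THRESHOLD_DB = 20  # above this, assume the channel supports 2 streams

RATES_BY_RANK = {
    1: [(2.0, 3), (8.0, 12), (18.0, 22)],    # (rate Mbps, min SNR dB)
    2: [(16.0, 14), (30.0, 22), (36.0, 28)],
}

def select_rank_then_rate(snr_db, channel_supports_mimo):
    # Stage 1: choose rank from the channel structure, not just SNR.
    rank = 2 if channel_supports_mimo and snr_db >= RANK_SNR_THRESHOLD_DB else 1
    # Stage 2: best rate for this SNR within the chosen rank.
    feasible = [(rate, snr) for rate, snr in RATES_BY_RANK[rank] if snr <= snr_db]
    rate = max(feasible)[0] if feasible else RATES_BY_RANK[rank][0][0]
    return rank, rate

print(select_rank_then_rate(25, channel_supports_mimo=True))   # (2, 30.0)
print(select_rank_then_rate(16, channel_supports_mimo=True))   # (1, 8.0)
```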
I would love to hear from some LTE engineers about how good the fallback algorithm is in practice. An explosion in the amount of mobile data is in progress. Journal articles I read accept the notion that LTE approaches the theoretical limits of data per unit bandwidth. If that’s true, LTE must use an excellent fallback algorithm. During times of congestion, is a base station smart enough to drop a distant user who requires a low data rate in order to share that user’s time slot among five nearby users at a higher data rate? I have never worked on LTE, but based on my experiences with Wi-Fi I suspect more data can be squeezed into the mobile wireless band by tweaking fallback.
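To see why that scheduling question matters, here is a toy airtime calculation; the per-user rates and the slot length are invented for illustration.

```python
# One transmission slot given to a distant user at a low rate carries far less
# data than the same slot shared among nearby users at a high rate.
SLOT_MS = 1.0

def kbits_in_slot(rate_mbps, airtime_fraction=1.0):
    """Kilobits delivered in one slot at the given PHY rate and airtime share."""
    return rate_mbps * 1e3 * (SLOT_MS / 1e3) * airtime_fraction

distant = kbits_in_slot(1.0)                                # whole slot, 1 Mbps
shared = sum(kbits_in_slot(20.0, 1 / 5) for _ in range(5))  # slot split 5 ways

print(f"distant user alone: {distant:.0f} kb in the slot")
print(f"five nearby users:  {shared:.0f} kb in the slot")
```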