What will the next generation of mobile data service (5G) be like? This February I wrote about several technologies that might be incorporated into it. Researchers at the University of California, Irvine published a provocatively titled paper on this question in the latest IEEE Communications Magazine: "Millimeter-Wave Massive MIMO: The Next Wireless Revolution?"
As the title suggests, the researchers argue that millimeter wave and MIMO will be a huge part of 5G.
When I think of MIMO I think of multi-stream MIMO, in which multiple antennas transmit different streams of data at the same frequency at the same time in order to carry more data, as in 802.11(n/ac). MIMO can also be used for beamforming: creating a directional antenna out of many omnidirectional antennas. In 5G, base stations may contain hundreds of antennas using beamforming to focus their power in a tight beam toward the handset. Handsets have room for six to ten antennas, but power and cost constraints may limit them to one or two.
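To make the beamforming idea concrete, here is a minimal sketch (my own illustration, not code from the paper), assuming a uniform linear array with half-wavelength element spacing and antenna counts I picked for effect. The steering weights align the elements' phases toward one angle; on-beam power gain then grows linearly with the number of antennas while off-beam energy stays low.

```python
# Minimal beamforming sketch: power gain of a uniform linear array (ULA).
# Illustrative assumptions: half-wavelength element spacing, angles in degrees.
import numpy as np

def array_gain(n_elements, steer_deg, look_deg, spacing=0.5):
    """Linear power gain of a ULA steered toward steer_deg, observed from look_deg."""
    k = 2 * np.pi * spacing                        # phase per element per sin(angle)
    n = np.arange(n_elements)
    weights = np.exp(-1j * k * n * np.sin(np.radians(steer_deg)))   # steering phases
    wavefront = np.exp(1j * k * n * np.sin(np.radians(look_deg)))   # arriving signal
    return np.abs(weights @ wavefront) ** 2 / n_elements

for n in (8, 64, 256):
    on = 10 * np.log10(array_gain(n, 20, 20))      # pointed right at the handset
    off = 10 * np.log10(array_gain(n, 20, 35))     # 15 degrees off the beam
    print(f"{n:3d} antennas: {on:5.1f} dB on-beam, {off:6.1f} dB off-beam")
```

With 256 antennas the array focuses roughly 24 dB of extra power at the handset, which is the whole appeal of massive-MIMO beamforming.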
Millimeter-wave simply refers to the spectrum from 30 to 300 GHz, where wavelengths run from 10 mm down to 1 mm. 30 GHz is a likely location for mobile phone service. Using these frequencies will be a huge benefit simply because there is so much spectrum available.
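The arithmetic behind both the name and the amount of spectrum is quick to check (a back-of-the-envelope sketch; the comparison frequencies are my own picks):

```python
# Wavelength and raw spectrum at mmWave frequencies (back-of-the-envelope).
c = 3e8  # speed of light, m/s

for f_ghz in (2.4, 30, 300):
    wavelength_mm = c / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz:5.1f} GHz: wavelength {wavelength_mm:6.2f} mm")

# 30-300 GHz spans 270 GHz of raw spectrum, vastly more than the few hundred
# MHz of licensed cellular spectrum below 3 GHz.
```

At 30 GHz a half-wavelength antenna element is about 5 mm across, which is also why hundreds of them can fit on a base station panel.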
Signals at these frequencies are more easily blocked by obstructions. Even in free space, higher-frequency signals lose strength faster, following the free-space path loss (FSPL) equation. Although this is a disadvantage in terms of link budget (the amount of power needed to get from point A to point B), it can be an advantage: energy that bounces off in undesired directions dies out quickly.
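For reference, FSPL in dB is 20 log10(d) + 20 log10(f) + 20 log10(4π/c). Here is a quick sketch of what the move to 30 GHz costs (the distance and the 2.4 GHz comparison point are my own illustrative choices):

```python
# Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c),
# with d in meters and f in Hz.
import math

def fspl_db(d_m, f_hz):
    return (20 * math.log10(d_m) + 20 * math.log10(f_hz)
            + 20 * math.log10(4 * math.pi / 3e8))

for f_ghz in (2.4, 30):
    print(f"{f_ghz:4.1f} GHz at 100 m: {fspl_db(100, f_ghz * 1e9):6.1f} dB")

# Moving from 2.4 GHz to 30 GHz adds 20*log10(30/2.4), about 22 dB of loss:
# roughly what a ~150-antenna beamforming array recovers in array gain.
```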
Another benefit of increased FSPL is less intersymbol interference (ISI). 4G and 802.11 use orthogonal frequency division multiplexing (OFDM) to send dozens of subcarriers simultaneously. 802.11 sends 250,000 symbols per second on each subcarrier, so each symbol lasts 1/250 ksps = 4 µs. The symbols actually last 3.2 µs, with 0.8 µs of guard interval to reduce ISI. Radio waves propagate at about 5.4 µs per mile, so reflected signals that take a path more than about 0.15 miles [240 m] longer than the primary signal will cause ISI. Recent advancements in 802.11 add subcarriers rather than increasing the symbol rate, because a higher symbol rate would worsen ISI. Since reflected paths lose strength more rapidly at higher frequencies, much higher symbol rates become possible. This means more efficient single-carrier systems, which would not work in the 1-6 GHz range, might work in the 30 GHz range.
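Here is that timing arithmetic reproduced as a sketch (the 802.11 numbers are from the paragraph above; the 10x line is my own extrapolation):

```python
# 802.11 OFDM symbol timing and the reflected-path budget.
c = 3e8                      # propagation speed, m/s
symbol_rate = 250_000        # symbols/s per subcarrier
symbol_s = 1 / symbol_rate   # 4.0 us total symbol duration
guard_s = 0.8e-6             # 0.8 us guard interval
data_s = symbol_s - guard_s  # 3.2 us of useful symbol

excess_m = c * guard_s       # longest reflected detour the guard interval absorbs
print(f"{symbol_s*1e6:.1f} us symbol = {data_s*1e6:.1f} us data + {guard_s*1e6:.1f} us guard")
print(f"reflections up to {excess_m:.0f} m ({excess_m/1609:.2f} mi) late are tolerated")

# Multiply the symbol rate by 10 and the tolerated detour shrinks to 24 m,
# workable only if reflections die off quickly, as they do at mmWave.
print(f"at 10x the symbol rate: {excess_m/10:.0f} m")
```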
I spoke with Dr. Swindlehurst, one of the authors of the paper. He says in practice mobile carriers would not adopt a system with one wideband carrier; it will be a system with a few subcarriers, each with bandwidth in the 10-100 MHz range. Because there will be only a handful of subcarriers, and spectrum is more plentiful at higher frequencies, there will be no need to cram them together as closely as possible using OFDM. This obviates the need for highly linear transmitter amplifiers.
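As a rough illustration of how relaxed such a plan can afford to be (the carrier count and guard bands are hypothetical numbers I chose, not figures from the paper):

```python
# Hypothetical mmWave carrier plan: a handful of wideband carriers with
# generous guard bands between them (all numbers are illustrative).
n_carriers = 4
carrier_mhz = 100            # within the 10-100 MHz range mentioned above
guard_mhz = 25               # relaxed spacing, no OFDM-style cramming

total_mhz = n_carriers * carrier_mhz + (n_carriers + 1) * guard_mhz
print(f"{n_carriers} x {carrier_mhz} MHz + guards = {total_mhz} MHz total")
# 525 MHz is a fraction of a percent of the 30-300 GHz band, so trading a
# little spectrum for cheaper, less linear amplifiers is an easy call.
```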
Dr. Swindlehurst says it’s unlikely carriers will increase throughput using multiple spatial streams (similar to 802.11(n/ac)) because at these high frequencies the channels are not that rich in multipath reflections. Instead it will be one stream, beamed straight at the user, going very fast.
The stream will be several channels taking up dozens of MHz of bandwidth. In an ideal noiseless channel there’s no limit to the amount of information that can be sent in a given amount of spectrum, but from a practical standpoint we typically get about 1 Mbps per 1 MHz. What could we do with 100 Mbps streamed to a phone? It seems like more than is needed, but I remember sitting in a class 12 years ago with the professor wondering what possible applications there could be for 3G technology. Could we really use more than 20 Mbps to a handheld device? Or will the main benefit be in supporting more users in densely populated areas?
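For reference, the "no limit in a noiseless channel" point above is just Shannon's capacity formula, C = B log2(1 + SNR), taken to the limit of infinite SNR. A quick sketch with SNR values I picked for illustration:

```python
# Shannon capacity C = B * log2(1 + SNR) versus the ~1 bps/Hz rule of thumb.
import math

def capacity_mbps(bw_mhz, snr_db):
    return bw_mhz * math.log2(1 + 10 ** (snr_db / 10))  # MHz * bps/Hz = Mbps

for snr_db in (0, 10, 20):
    c = capacity_mbps(100, snr_db)
    print(f"100 MHz at {snr_db:2d} dB SNR: {c:5.0f} Mbps ({c/100:.1f} bps/Hz)")

# At 0 dB SNR capacity is exactly 1 bps/Hz, matching the practical rule of
# thumb above; the SNR that beamforming buys translates directly into rate.
```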