Viasat introduced its Exede high-speed satellite Internet service in January. Like the proposed LightSquared system, it uses narrow-beam transmission to reuse bandwidth in cellular fashion. Viasat’s system, however, operates in the 20 GHz range, making it easier to realize higher-gain antennas with tight beams serving small regions of the Earth.
Providing "fast" Internet service to rural regions is an important infrastructure issue that has been likened to providing electricity, phone service, and highways to rural regions in the last century. It brings up the definition of what is “fast” when it comes to data service. Fast can mean high throughput, which means a large number of bytes per second. Or it can mean low-latency, which means it takes a short amount of time to begin receiving data. For sending large files, transmission delay, i.e. delay associated throughput limitations, predominate. For small files or webpages that load many small files, latency predominates. Latency on satellite links is high due to the propagation delay of radio waves, 5us per mile.
Viasat reduces this delay by fetching all the files a webpage needs and sending them to the user at once. The round trip to request the page and download it remains, but the browser no longer pays a separate round trip for each file on the page.
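A rough calculation suggests how much this saves. The sketch below assumes a hypothetical page of 40 resources averaging 20 KB each; real browsers fetch several resources in parallel, so the sequential figure overstates the gap, but the direction holds:

```python
# Sequential fetching pays one round trip per resource; bundling pays one
# round trip total. The page composition below is an illustrative assumption.

RTT = 0.5           # assumed satellite round-trip time, seconds
THROUGHPUT = 5e6    # 5 Mbps link
N_RESOURCES = 40    # images, scripts, stylesheets on the page
AVG_BYTES = 20_000  # average resource size

transmission = N_RESOURCES * AVG_BYTES * 8 / THROUGHPUT  # same either way

sequential = N_RESOURCES * RTT + transmission  # one round trip per resource
bundled = RTT + transmission                   # one round trip for everything

print(f"one file at a time: {sequential:.1f} s")  # ~21.3 s
print(f"bundled:            {bundled:.1f} s")     # ~1.8 s
```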
This issue underscores the need for a new benchmark to characterize Internet speed. Most consumers know whether their Internet “speed” is 2 Mbps or 5 Mbps, but few know their typical latency to large servers.
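A minimal sketch of what such a benchmark might look like: time one tiny request to estimate latency and one large request to estimate throughput, and report both numbers. The URLs are placeholders, not real test endpoints:

```python
# Sketch of a two-number benchmark: latency from a tiny fetch, throughput
# from a large one. URLs below are placeholders, not real test endpoints.
import time
import urllib.request

SMALL_URL = "http://example.com/tiny.txt"  # hypothetical ~1 KB object
LARGE_URL = "http://example.com/big.bin"   # hypothetical multi-MB object

def timed_fetch(url):
    """Return (elapsed seconds, bytes received) for one request."""
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    return time.monotonic() - start, len(data)

latency_s, _ = timed_fetch(SMALL_URL)    # dominated by round-trip time
elapsed, size = timed_fetch(LARGE_URL)   # dominated by link capacity
throughput_mbps = size * 8 / elapsed / 1e6

print(f"latency:    ~{latency_s * 1000:.0f} ms")
print(f"throughput: ~{throughput_mbps:.1f} Mbps")
```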
To draw an analogy with air travel: throughput is the airplane’s seating capacity, and latency is the airplane’s speed. If someone asks how fast an airplane can move people, they want to know more than the number of seats it has.
The promotional information on Viasat’s website barely touches on latency, even though that is where much of the system’s value lies. Perhaps the marketing department has determined that throughput is the only figure of merit consumers widely recognize, and that a benchmark based on loading pages with many files would only confuse.
As technology moves forward, will people become more aware of latency? Or will they find more throughput-intensive applications, keeping throughput the primary figure of merit for data service?