Primer on Latency and Bandwidth
Speed is a feature of modern web applications.
Latency: the time from the source sending a packet to the destination receiving it.
Bandwidth: maximum throughput of a logical or physical communication path.
Latency may be impacted by:
- Propagation delay: Amount of time required for a message to travel from the sender to receiver (distance over speed).
- Transmission delay: Amount of time required to push all the packet’s bits into the link (packet’s length and data rate of the link).
- Processing delay: Amount of time required to process the packet header, check for bit-level errors, and determine the packet’s destination.
- Queuing delay: Amount of time the packet is waiting in the queue until it can be processed.
The speed of light is the maximum speed at which energy, matter, or information can travel. That maximum applies in a vacuum; light travels more slowly through media such as copper wire or fiber-optic cable.
The ratio of the speed of light in a vacuum to the speed at which a packet travels through a material is the refractive index of that material. The larger the value, the slower light travels in that medium.
The refractive index of optical fiber varies between 1.4 and 1.6. The rule of thumb is to assume that the speed of light in fiber is around 200,000,000 meters per second, which corresponds to a refractive index of ~1.5.
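As a worked example, the rule of thumb above gives a best-case propagation delay between two cities. The distance figure here is an assumption (roughly the New York–Sydney great-circle distance), not a measured cable length:

```python
# Best-case propagation delay, assuming light in fiber travels ~200,000 km/s
# (refractive index ~1.5) and an idealized straight-line path.
SPEED_IN_FIBER_KM_PER_S = 200_000
distance_km = 16_000  # assumed rough New York-Sydney great-circle distance

one_way_ms = distance_km / SPEED_IN_FIBER_KM_PER_S * 1000  # 80 ms
rtt_ms = 2 * one_way_ms                                    # 160 ms
```

This idealized 160 ms RTT is well under the measured 200–300 ms range for the route, since real cables do not follow great-circle paths and the other delay components add up.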
Given various delays, the actual RTT between New York and Sydney, over our existing networks, works out to be in the 200–300 millisecond range. Studies have shown most of us will reliably report perceptible "lag" once a delay of over 100–200 milliseconds is introduced into the system. Once the 300 millisecond delay threshold is exceeded, the interaction is often reported as "sluggish," and at the 1,000 milliseconds (1 second) barrier, many users have already performed a mental context switch while waiting for the response.
Content delivery networks (CDNs) strategically reduce RTT by placing servers closer to users.
It is often the last few miles where significant latency is introduced; the last few hops alone can take tens of milliseconds. Last-mile latencies for terrestrial broadband (DSL, cable, fiber) within the United States have remained relatively stable over time: fiber has the best average performance (10–20 ms), followed by cable (15–40 ms) and DSL (30–65 ms).
An optical fiber acts as a simple "light pipe," slightly thicker than a human hair, designed to transmit light between the two ends of the cable.
Optical fibers have a distinct advantage over metal wires when it comes to bandwidth because each fiber can carry many different wavelengths (channels) of light through a process known as wavelength-division multiplexing (WDM). Hence, the total bandwidth of a fiber link is the product of the per-channel data rate and the number of multiplexed channels.
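That product can be made concrete with illustrative figures (the channel count and per-channel rate here are assumptions, not a specific deployment):

```python
# Total link bandwidth = per-channel data rate x number of WDM channels.
channels = 64            # wavelengths multiplexed onto one fiber (assumed)
per_channel_gbps = 100   # data rate of each wavelength (assumed)

total_gbps = channels * per_channel_gbps
total_tbps = total_gbps / 1000  # 6.4 Tbps for this example
```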
The fiber links that form the core data paths of the Internet are capable of moving hundreds of terabits per second. However, the available capacity at the edges of the network is much lower.
As a reference point, streaming an HD video can require anywhere from 2 to 10 Mbps depending on resolution and the codec.
To improve performance of our applications, we need to architect and optimize our protocols and networking code with explicit awareness of the limitations of available bandwidth and the speed of light: we need to reduce round trips, move the data closer to the client, and build applications that can hide the latency through caching, pre-fetching, and a variety of similar techniques.
Building blocks of TCP
Every TCP connection begins with a three-way handshake, so each new connection incurs a full roundtrip of latency before any application data can be transferred.
The delay imposed by the three-way handshake makes new TCP connections expensive to create, and is one of the big reasons why connection reuse is a critical optimization for any application running over TCP.
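A minimal sketch of connection reuse, using Python's standard library against a throwaway local server (the handler and port are incidental; the point is that several requests share a single TCP handshake):

```python
import http.client
import http.server
import threading

class KeepAliveHandler(http.server.SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps the TCP connection open between requests
    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local server; port 0 lets the OS pick a free port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), KeepAliveHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection (one three-way handshake) serves several requests.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(3):
    conn.request("GET", "/")
    response = conn.getresponse()
    response.read()  # drain the body so the connection can be reused
    statuses.append(response.status)
conn.close()
server.shutdown()
```

With a new connection per request, each of the three requests would pay the handshake roundtrip again.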
TCP Fast Open (TFO) is a mechanism that aims to eliminate the latency penalty imposed on new TCP connections by allowing data transfer within the SYN packet.
The core principles for TCP performance are:
- TCP three-way handshake introduces a full roundtrip of latency.
- TCP slow-start is applied to every new connection.
- TCP flow and congestion control regulate throughput of all connections.
- TCP throughput is regulated by current congestion window size.
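The slow-start principle above can be illustrated with a toy model. This assumes classic slow-start, where the congestion window doubles every roundtrip, and ignores congestion avoidance and receiver-window limits:

```python
def roundtrips_to_reach(target_segments, initial_cwnd=10):
    """Roundtrips needed for slow-start to grow the congestion window
    from initial_cwnd to at least target_segments (cwnd doubles per RTT)."""
    rtts = 0
    cwnd = initial_cwnd
    while cwnd < target_segments:
        cwnd *= 2
        rtts += 1
    return rtts

# Growing a 10-segment initial window to 512 segments takes 6 roundtrips;
# a smaller initial window of 4 segments takes 7. This is why a larger
# initial congestion window helps short transfers.
```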
The simplest way to obtain the benefits of improvements implemented in the kernel is to keep servers up to date with the latest kernels.
With the latest kernel:
- increase TCP's initial congestion window. A larger starting congestion window allows TCP to transfer more data in the first roundtrip and significantly accelerates the window growth.
- disable slow-start restart. Disabling slow-start after idle will improve performance of long-lived TCP connections that transfer data in periodic bursts.
- enable window scaling. Enabling window scaling increases the maximum receive window size and allows high-latency connections to achieve better throughput.
- implement TCP Fast Open. TFO allows application data to be sent in the initial SYN packet in certain situations. Requires client and server support.
Run `ss --options --extended --memory --processes --info` to see the current peers and their respective connection settings on Linux.
At the application layer:
- send as few bits as possible.
- move the bits closer using CDNs.
- reuse TCP connections to improve performance.
Building blocks of UDP
A datagram is a self-contained, independent entity of data carrying sufficient information to be routed from the source to the destination nodes without reliance on earlier exchanges between the nodes and the transporting network.
The most well-known use of UDP is the domain name system (DNS). The new Web Real-Time Communication (WebRTC) standards enable real-time communication natively within the browser via UDP.
UDP non-services are:
- no guarantee of message delivery.
- no guarantee of the order of delivery.
- no connection state tracking (no connection establishment or teardown state machines).
- no congestion control (no built-in client nor network feedback mechanisms).
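The connectionless model above is visible directly in the sockets API: a datagram can be sent with no prior handshake. A minimal loopback sketch:

```python
import socket

# Receiver: bind to loopback; port 0 lets the OS choose a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect, no handshake; the datagram is simply fired at the address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

# Delivery works on loopback, but UDP itself promises nothing.
data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
```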
IPv4 addresses are only 32 bits long, which provides a maximum of 4.29 billion unique IP addresses. The IP Network Address Translator (NAT) specification was introduced in mid-1994 (RFC 1631) as an interim solution to resolve the looming IPv4 address depletion problem.
The proposed IP reuse solution was to introduce NAT devices at the edge of the network, each of which would be responsible for maintaining a table mapping of local IP and port tuples to one or more globally unique (public) IP and port tuples.
Three well-known IP ranges are reserved for private networks (usually residing behind a NAT device). No public computer is allowed to be assigned an IP address from any of the reserved private network ranges.
| IP address range | Number of addresses |
| --- | --- |
| 10.0.0.0 – 10.255.255.255 | 16,777,216 |
| 172.16.0.0 – 172.31.255.255 | 1,048,576 |
| 192.168.0.0 – 192.168.255.255 | 65,536 |
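Python's standard library knows these reserved ranges, which makes checking an address easy (note that `is_private` also covers loopback and link-local space, not only the three RFC 1918 blocks):

```python
import ipaddress

# The three RFC 1918 blocks from the table, plus one public address for contrast.
examples = ["10.0.0.1", "172.16.0.1", "192.168.1.1", "8.8.8.8"]
results = {addr: ipaddress.ip_address(addr).is_private for addr in examples}
```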
NAT translation poses a problem for UDP: NAT middleboxes must maintain translation state, but UDP has no connection state for them to observe.
One of the de facto best practices for long-running sessions over UDP is to introduce bidirectional keepalive packets to periodically reset the timers for the translation records in all the NAT devices along the path.
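A minimal keepalive sketch, assuming a 15-second default interval (real applications tune the interval to the shortest NAT timeout observed on the path):

```python
import socket
import threading

def start_keepalive(sock, peer, interval_s=15.0):
    """Periodically send an empty datagram so NAT translation entries
    along the path do not expire. Fires once immediately, then repeats."""
    def tick():
        sock.sendto(b"", peer)
        timer = threading.Timer(interval_s, tick)
        timer.daemon = True  # don't keep the process alive just for keepalives
        timer.start()
    tick()
```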
The inability to establish a UDP connection at all is especially problematic for P2P applications, such as VoIP, games, and file sharing, which often need to act as both client and server to enable two-way direct communication between the peers.
The first issue is that in the presence of a NAT, the internal client is unaware of its public IP. So the application must first discover its public IP address if it needs to share it with a peer outside its private network.
To work around this mismatch in UDP and NATs, various traversal techniques (TURN, STUN, ICE) have to be used to establish end-to-end connectivity between the UDP peers on both sides.
Session Traversal Utilities for NAT (STUN) is a protocol (RFC 5389) that allows the host application to discover the presence of a network address translator on the network, and when present to obtain the allocated public IP and port tuple for the current connection. To do so, the protocol requires assistance from a well-known, third-party STUN server that must reside on the public network.
The STUN server responds to the client request with its public IP.
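The STUN wire format is simple enough to sketch by hand. Per RFC 5389, a Binding request is a 20-byte header: a 16-bit message type, a 16-bit length of the attributes that follow (zero here), the fixed 32-bit magic cookie 0x2112A442, and a random 96-bit transaction ID:

```python
import os
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed value defined in RFC 5389
BINDING_REQUEST = 0x0001        # message type for a Binding request

def stun_binding_request():
    """Build a minimal STUN Binding request (header only, no attributes)."""
    transaction_id = os.urandom(12)  # random 96-bit transaction ID
    header = struct.pack("!HHI", BINDING_REQUEST, 0, STUN_MAGIC_COOKIE)
    return header + transaction_id

packet = stun_binding_request()
```

Sending this packet to a STUN server over UDP and parsing the XOR-MAPPED-ADDRESS attribute in the response yields the client's public IP and port tuple; the response parsing is omitted here.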
However, in practice, STUN is not sufficient to deal with all NAT topologies and network configurations. Further, unfortunately, in some cases UDP may be blocked altogether by a firewall or some other network appliance—not an uncommon scenario for many enterprise networks. To address this issue, whenever STUN fails, we can use the Traversal Using Relays around NAT (TURN) protocol (RFC 5766) as a fallback, which can run over UDP and switch to TCP if all else fails.
Both clients begin their connections by sending an allocate request to the same TURN server, followed by permissions negotiation.
Once the negotiation is complete, both peers communicate by sending their data to the TURN server, which then relays it to the other peer.
The obvious downside in this exchange is that it is no longer peer-to-peer! TURN is the most reliable way to provide connectivity between any two peers on any networks, but it carries a very high cost of operating the TURN server—at the very least, the relay must have enough capacity to service all the data flows. As a result, TURN is best used as a last resort fallback for cases where direct connectivity fails.
Interactive Connectivity Establishment (ICE, RFC 5245) is a protocol, and a set of methods, that seek to establish the most efficient tunnel between the participants: direct connection where possible, leveraging STUN negotiation where needed, and finally falling back to TURN if all else fails.
If you are building a P2P application over UDP, then you most definitely want to leverage an existing platform API, or a third-party library that implements ICE, STUN, and TURN for you.
If you are using UDP, the easiest path is to use WebRTC.
Browser APIs and Protocols
The browser abstracts most of this complexity behind three primary APIs:
MediaStream: acquisition of audio and video streams
RTCPeerConnection: communication of audio and video data
RTCDataChannel: communication of arbitrary application data
Unlike all other browser communication, WebRTC transports its data over UDP.
Capturing and processing audio and video is a complex problem. However, WebRTC brings fully featured audio and video engines to the browser which take care of all the signal processing, and more, on our behalf.
The MediaStream object is the primary interface for requesting audio and video streams from the platform; it also provides a set of APIs to manipulate and process the acquired media streams.
The getUserMedia() API is responsible for requesting access to the user's microphone and camera, and for acquiring the streams that match the specified constraints.
Once the stream is acquired, we can feed it into a variety of other browser APIs:
- Web Audio API enables processing of audio in the browser.
- Canvas API enables capture and post-processing of individual video frames.
- CSS3 and WebGL APIs can apply a variety of 2D/3D effects on the output stream.
The requirement for timeliness over reliability is the primary reason why the UDP protocol is a preferred transport for delivery of real-time data.
RTCPeerConnection interface is responsible for managing the full life cycle of each peer-to-peer connection.
DataChannel API enables exchange of arbitrary application data between peers—think WebSocket, but peer-to-peer, and with customizable delivery properties of the underlying transport.
In order to establish a successful peer-to-peer connection:
- We must notify the other peer of the intent to open a peer-to-peer connection, such that it knows to start listening for incoming packets.
- We must identify potential routing paths for the peer-to-peer connection on both sides of the connection and relay this information between peers.
- We must exchange the necessary information about the parameters of the different media and data streams—protocols, encodings used, and so on.
Before any connectivity checks or session negotiation can occur, we must find out if the other peer is reachable and if it is willing to establish the connection. We must extend an offer, and the peer must return an answer via a shared signaling channel.
WebRTC defers the choice of signaling transport and protocol to the application. Options include:
- Session Initiation Protocol (SIP): widely used for voice over IP (VoIP) networks.
- Jingle: signaling extension for the XMPP protocol, used for session control of voice over IP and videoconferencing over IP networks.
- ISDN User Part (ISUP): Signaling protocol used for setup of telephone calls in many public switched telephone networks around the globe.
Asterisk, or a custom signaling channel, are also options.
WebRTC uses Session Description Protocol (SDP) to describe the parameters of the peer-to-peer connection.
The ICE agent in WebRTC can largely handle NAT traversals. It is recommended to trickle ICE candidates as they are selected.
chrome://webrtc-internals may be used to inspect all open peer-to-peer connections.
Datagram Transport Layer Security (DTLS) is used to negotiate the secret keys for encrypting media data and for secure transport of application data.
Secure Real-time Transport Protocol (SRTP) is used to transport audio and video streams.
Stream Control Transmission Protocol (SCTP) is used to transport application data.
DTLS provides security guarantees for UDP datagrams equivalent to those of TLS; it is essentially TLS with minimal modifications to enable UDP compatibility.
Because the handshake records must be delivered reliably and in order, DTLS implements a "mini-TCP" for the handshake sequence. The DTLS handshake requires two roundtrips to complete, an important aspect to keep in mind, as it adds extra latency to the setup of the peer-to-peer connection.
Secure Real-time Transport Protocol (SRTP): Secure profile of the standardized format for delivery of real-time data, such as audio and video over IP networks.
Secure Real-time Control Transport Protocol (SRTCP): Secure profile of the control protocol for delivery of sender and receiver statistics and control information for an SRTP flow.
SRTP defines a standard packet format for delivering audio and video over IP networks.
The DataChannel API relies on the Stream Control Transmission Protocol (SCTP). This is used for application data. SCTP is a transport protocol, similar to TCP and UDP, which can run directly on top of the IP protocol. However, in the case of WebRTC, SCTP is tunneled over a secure DTLS tunnel, which itself runs on top of UDP.