When UDP is the right choice, and how QUIC changes the picture
By the end of this module you will be able to:
- Explain what UDP provides and what it deliberately omits
- Describe QUIC's key additions over bare UDP: TLS integration, streams, and connection migration
- Choose transport by delivery contract and control ownership, not by protocol reputation

Real-world deployment · 2012–2015
How Google deployed QUIC and measured the results at scale
Starting around 2012, Google began developing a new transport protocol internally. They called it QUIC. The initial motivation was specific: TCP's head-of-line blocking meant that one lost packet stalled all HTTP/2 streams sharing a connection, even streams that had no relationship to the lost packet. For a search results page loading dozens of resources in parallel, one loss event degraded everything.
Google deployed QUIC in Chrome and on their servers, then measured latency improvements in production. The results, published in their 2017 SIGCOMM paper, showed that for the 1% of worst-case connections (highest latency), QUIC reduced Google Search latency by approximately 8%. YouTube users on QUIC saw rebuffer rates drop by around 18%. These are not typical-case improvements; they are tail-latency improvements, affecting the connections where users experience degraded service most.
The reason QUIC could achieve this while using UDP as its substrate: QUIC handles recovery per stream rather than per connection. A lost packet affects only the stream it belongs to. Other streams continue uninterrupted. That is a fundamental difference from HTTP/2 over TCP, where all streams share a single byte stream.
TCP is reliable and well-understood. Why would Google invest years building a new transport protocol on top of UDP, and what specific problems did they measure it solving?
11.1 UDP: eight bytes and a delivery attempt
UDP (User Datagram Protocol), defined in RFC 768, has a header of exactly eight bytes: source port (2 bytes), destination port (2 bytes), length (2 bytes), and checksum (2 bytes). That is the entire protocol overhead.
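The fixed eight-byte layout can be sketched with Python's `struct` module. This is illustrative only: the checksum is left at zero, which RFC 768 defines to mean "no checksum computed" (permitted over IPv4, though not over IPv6).

```python
import struct

# UDP header: source port, destination port, length, checksum --
# four unsigned 16-bit big-endian fields, 8 bytes total.
UDP_HEADER = struct.Struct("!HHHH")

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    # The length field covers the header plus the payload.
    # A transmitted checksum of zero means "not computed" (RFC 768).
    length = UDP_HEADER.size + len(payload)
    return UDP_HEADER.pack(src_port, dst_port, length, 0)

header = build_udp_header(53000, 53, b"example query")
src, dst, length, checksum = UDP_HEADER.unpack(header)
print(src, dst, length, checksum)  # 53000 53 21 0
```

Everything UDP carries is in those four fields; there is nowhere to put sequence numbers, acknowledgements, or window sizes, which is precisely the point.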
UDP provides message-oriented delivery. Each UDP datagram is sent as a single unit and received as a single unit, or not at all. There is no connection establishment, no acknowledgement, no ordering, no retransmission, and no congestion control. The application gets a datagram delivery service and owns everything else.
This is not a weakness; it is a design decision. When an application needs the lowest possible per-message overhead, or when it can handle losses better than TCP's recovery mechanism, or when it needs to implement its own timing and ordering strategy, UDP provides the simplest possible foundation.
“This protocol provides a procedure for application programs to send messages to other programs with a minimum of protocol mechanism.”
RFC 768 - Introduction
The RFC 768 introduction states the design intent directly. UDP deliberately omits the mechanisms that make TCP reliable. Applications that use UDP are expected to own the parts of the protocol contract that their workload requires.
Common UDP use cases include:
- DNS queries: short request-response, latency-sensitive, and the application can retry
- VoIP (Voice over Internet Protocol) and video conferencing: for real-time audio, packet loss is preferable to the delay of retransmission
- Online gaming: position updates become stale faster than retransmission can deliver them
- Streaming media: the application manages buffering and playback timing
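The DNS pattern, where the application owns recovery, can be sketched with a send-and-retransmit loop over a plain UDP socket. This is a minimal sketch: a local echo server stands in for a resolver, and the timeout and retry count are arbitrary illustrative values.

```python
import socket
import threading

def run_echo_server(sock: socket.socket) -> None:
    # Toy stand-in for a DNS resolver: echo the first datagram, then exit.
    data, addr = sock.recvfrom(512)
    sock.sendto(data, addr)

def query_with_retry(server_addr, payload: bytes,
                     timeout: float = 0.5, retries: int = 3) -> bytes:
    # UDP gives no recovery, so the client owns it: send, wait for a
    # reply, and retransmit on timeout -- the same pattern DNS clients use.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(payload, server_addr)
            try:
                reply, _ = sock.recvfrom(512)
                return reply
            except socket.timeout:
                continue  # request or reply was lost: try again
    raise TimeoutError(f"no reply after {retries} attempts")

server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(("127.0.0.1", 0))  # OS picks a free port
threading.Thread(target=run_echo_server, args=(server_sock,), daemon=True).start()
print(query_with_retry(server_sock.getsockname(), b"who is example.com?"))
```

Every line of that retry logic is something TCP would have done for us; with UDP, the application decides how long to wait and how many times to try, which is exactly the control DNS wants.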
11.2 QUIC: UDP with transport state
QUIC, defined in RFC 9000, is a general-purpose transport protocol. It runs over UDP but adds the reliability, ordering, flow control, and congestion control that TCP provides. It also adds multiplexed streams and integrates TLS 1.3 cryptographic handshake behaviour directly into the transport handshake.
The multiplexed streams are the key difference from HTTP/2 over TCP. Each stream has independent flow control and recovery. A lost packet affects only the stream carrying that packet. Other streams in the same QUIC connection are unaffected. In TCP, all streams share a single byte stream, so any packet loss stalls everything waiting for the gap to be filled.
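The difference between the two recovery disciplines can be shown with a toy model. This is not real transport code: the packets are invented `(sequence, stream, data)` tuples, and "delivery" is simulated, but the stall behaviour mirrors the two designs.

```python
def deliver_shared(packets, lost):
    # TCP-style: all streams share one ordered byte stream, so
    # delivery stops at the first gap and every stream stalls.
    delivered = []
    for seq, stream, data in packets:
        if seq in lost:
            break  # head-of-line blocking: nothing past the gap is delivered
        delivered.append((stream, data))
    return delivered

def deliver_per_stream(packets, lost):
    # QUIC-style: each stream is ordered independently, so a loss
    # stalls only the stream the lost packet belongs to.
    delivered, stalled = [], set()
    for seq, stream, data in packets:
        if seq in lost:
            stalled.add(stream)
        elif stream not in stalled:
            delivered.append((stream, data))
    return delivered

# Three streams interleaved on one connection; packet 2 (stream B) is lost.
packets = [(1, "A", "a1"), (2, "B", "b1"), (3, "C", "c1"),
           (4, "A", "a2"), (5, "B", "b2")]
print(deliver_shared(packets, lost={2}))      # [('A', 'a1')]
print(deliver_per_stream(packets, lost={2}))  # [('A', 'a1'), ('C', 'c1'), ('A', 'a2')]
```

One lost packet leaves the shared model with a single delivered item while the per-stream model delivers everything except stream B's pending data, which is the tail-latency effect Google measured.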
QUIC also supports 0-RTT (zero round-trip time) resumption for connections to servers the client has connected to before. In this mode, the client can send application data in the very first packet, before the handshake completes. TCP with TLS 1.3 requires at least one RTT for the handshake. QUIC 0-RTT removes that cost for repeat connections, at the cost of some replay protection limitations.
“QUIC is a UDP-based multiplexed and secure transport. QUIC builds on decades of transport and security experience, and implements mechanisms that make it useful as a general-purpose transport.”
RFC 9000 - Section 1, Overview
The framing 'UDP-based' is deliberate. QUIC uses UDP as a datagram substrate but replaces TCP's entire connection model. QUIC implementations include flow control, congestion control, connection management, and TLS 1.3. UDP contributes only the checksum and port multiplexing.
11.3 HTTP/3: the application layer on QUIC
HTTP/3, defined in RFC 9114, is the version of HTTP designed to run over QUIC. Where HTTP/2 runs over TCP (and suffers head-of-line blocking when packets are lost), HTTP/3 runs over QUIC streams and gets stream-independent recovery.
From the application developer's perspective, HTTP/3 looks similar to HTTP/2. The same semantics apply: methods, headers, status codes, bodies. The transport difference is invisible at the HTTP level. The practical benefit is reduced latency on lossy paths and faster connection setup on repeat visits via 0-RTT.
Connection migration is another QUIC feature. A QUIC connection is identified by a connection ID, not by the 4-tuple of source IP, source port, destination IP, destination port. This means a mobile client can switch from WiFi to a cellular network without reconnecting. The connection ID stays constant even when the underlying IP address changes.
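The identification shift can be sketched as a server that keys its state on a connection ID rather than on the sender's address. This is a toy in-memory model with invented IDs and addresses; real QUIC also validates the new network path before using it (RFC 9000, Section 9).

```python
class QuicLikeServer:
    """Toy model: connection state is looked up by connection ID,
    not by the client's (IP, port), so an address change mid-session
    does not break the connection."""

    def __init__(self):
        self.connections = {}  # connection ID -> per-connection state

    def handle(self, conn_id: str, client_addr: tuple, data: str) -> str:
        state = self.connections.setdefault(conn_id, {"packets": 0})
        state["packets"] += 1
        state["last_addr"] = client_addr  # the path may change; the ID does not
        return f"conn {conn_id}: packet {state['packets']} from {client_addr}"

server = QuicLikeServer()
# Same connection ID, first over WiFi, then over cellular:
print(server.handle("c1d2", ("192.0.2.10", 4433), "hello"))
print(server.handle("c1d2", ("198.51.100.7", 5544), "still me"))
print(len(server.connections))  # 1 -- one logical connection survived the move
```

A TCP server in the same situation would see the second packet as belonging to an entirely different connection, because TCP state is keyed on the 4-tuple.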
Common misconception
“UDP is unreliable, therefore bad.”
UDP provides a delivery attempt without built-in recovery. Whether that is good or bad depends entirely on the application. DNS, VoIP, online gaming, and streaming all choose UDP or QUIC deliberately. A real-time audio stream should not retransmit a 20 ms audio frame that would arrive 150 ms late. The application handles loss by concealing it, not by waiting for a retransmit that arrives too late to use.
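The concealment strategy can be sketched in a few lines. This is a deliberately simplified model, with `None` marking a lost frame and "concealment" meaning replaying the previous frame; real codecs use more sophisticated interpolation.

```python
def playout(frames):
    # Real-time audio playout: conceal a lost 20 ms frame by repeating
    # the previous one rather than waiting ~150 ms for a retransmission.
    out = []
    last = b"\x00\x00\x00\x00"  # silence, in case the first frame is lost
    for frame in frames:
        if frame is None:
            out.append(last)  # concealment: replay the previous frame
        else:
            out.append(frame)
            last = frame
    return out

stream = [b"f1f1", b"f2f2", None, b"f4f4"]
print(playout(stream))  # [b'f1f1', b'f2f2', b'f2f2', b'f4f4']
```

Playback never stalls: the listener hears 20 ms of slightly stale audio instead of a gap, and the stream stays on schedule.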
A video conferencing app uses UDP. A packet containing 20 ms of audio is lost. Should the app retransmit it?
QUIC uses UDP as its substrate. A developer says 'QUIC is basically UDP with encryption.' What is wrong with this?
A mobile client has an active QUIC connection to a server. The user switches from WiFi to a cellular network and gets a new IP address. What happens to the QUIC connection?
Key takeaways
- UDP provides an 8-byte header and message-oriented delivery with no connection state, ordering, or recovery. The application owns any guarantees it needs.
- UDP is the right choice when the application can handle loss better than TCP retransmission would, or when per-message overhead must be minimal.
- QUIC adds reliable, ordered, multiplexed streams, integrated TLS 1.3, congestion control, and connection migration on top of UDP. It solves TCP's head-of-line blocking problem.
- HTTP/3 over QUIC gives independent per-stream recovery. A lost packet affects only its stream, not the entire connection.
Standards and sources cited in this module
RFC 768, User Datagram Protocol
Introduction and Header Format
One-page original UDP specification. Quoted in Section 11.1 for the stated design intent of minimal protocol mechanism.
RFC 9000, QUIC: A UDP-Based Multiplexed and Secure Transport
Section 1, Overview; Section 9, Connection Migration
Defines QUIC transport. Quoted in Section 11.2 for the overview framing. Section 9 is the basis for the connection migration description.
RFC 9114, HTTP/3
Section 1, Overview of HTTP/3
Defines HTTP/3 over QUIC. Referenced in Section 11.3 for the HTTP/3 description and head-of-line blocking contrast with HTTP/2.
Langley, A. et al. (2017). The QUIC Transport Protocol: Design and Internet-Scale Deployment
SIGCOMM 2017, Section 5: Performance Evaluation
Google's production deployment data. Used in the opening case study for the measured latency and rebuffer improvements.
You now know the transport options. Module 12 separates routing (how the network learns paths) from forwarding (how individual packets follow those paths), and shows why BGP hijacks like the 2008 Pakistan-YouTube incident are possible.
Module 11 of 21 · Applied stage