QUICER: Next Generation Transport Protocol Library for BEAM

QUIC protocol support coming to BEAM.

As of now, I am using Caddy in front of a Phoenix server to experience the HTTP/3 goodness.

6 Likes

WTH is HTTP/3?! I thought I was behind the curve by still using HTTP/1. I just need to retire already.

2 Likes

HTTP/3, in simplified terms, is HTTP/2 over QUIC (though I have heard there are a few more modifications).

2 Likes

I thought HTTP/2 was complicated, and now you’re saying it’s an added layer of abstraction on top of that?? :joy:

Jokes aside, has HTTP/2 seen wide adoption? When I think of “modern” web, I think of Phoenix channels/LiveView which still work over just HTTP/1 and websockets… or am I way off base?

gRPC uses HTTP/2 under the hood.

The nice thing about these new protocols is that you, as an application developer, don’t have to care about the complexity, because the application-level semantics of HTTP stay the same. You will only notice increased performance (hopefully).

1 Like

It’s not an abstraction on top of HTTP/2. It overcomes the issues with previous versions.
HTTP/2 is widely adopted. See: Can I use HTTP/2?, Can I use HTTP/3?.

The initial WebSocket handshake happens over HTTP/1.1; the connection is then upgraded to a bidirectional socket.
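As a concrete illustration of that upgrade step, here is a small sketch (Python standard library only) of the challenge/response a server performs during the handshake: it derives the `Sec-WebSocket-Accept` header from the client's `Sec-WebSocket-Key` as specified in RFC 6455.

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket handshake
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must send back."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example key taken from RFC 6455, section 1.3
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Only after this HTTP/1.1 request/response exchange succeeds does the TCP connection switch to carrying WebSocket frames in both directions.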

However, HTTP/3 will improve other aspects, such as asset retrieval, faster connection establishment, and lower latency, so even LiveView applications will benefit from an overhaul of the underlying infrastructure.


TL;DR:


Pre HTTP/1

Rather than sending all outstanding data as fast as possible once the connection is established, TCP enforces a warm-up period called “slow start”, which allows the TCP congestion control algorithm to determine the amount of data that can be in flight at any given moment before congestion on the network path occurs, and avoid flooding the network with packets it can’t handle. But because new connections have to go through the slow start process, they can’t use all of the network bandwidth available immediately.
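To make the slow-start cost tangible, here is a toy model (an assumption-laden sketch, not a real TCP implementation): the congestion window starts at some initial value and doubles every round trip, so the number of round trips needed to push a given amount of data grows with how "cold" the connection is. The initial window of 10 segments follows the modern default from RFC 6928; real TCP behavior also depends on loss and the receiver's window, which this ignores.

```python
def rtts_to_send(total_segments: int, initial_cwnd: int = 10) -> int:
    """Round trips needed under idealized slow start: the congestion
    window (cwnd) doubles each RTT until all segments are sent."""
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd   # send a full window this round trip
        cwnd *= 2      # exponential growth phase of slow start
        rtts += 1
    return rtts

print(rtts_to_send(10))   # fits in the initial window: 1 RTT
print(rtts_to_send(100))  # needs several doublings: 4 RTTs
```

Every brand-new connection pays this warm-up again, which is exactly why reusing connections (and, later, QUIC's cheaper setup) matters.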

HTTP/1

The HTTP/1.1 revision of the HTTP specification tried to solve these problems a few years later by introducing the concept of “keep-alive” connections, that allow clients to reuse TCP connections, and thus amortize the cost of the initial connection establishment and slow start across multiple requests. But this was no silver bullet: while multiple requests could share the same connection, they still had to be serialized one after the other, so a client and server could only execute a single request/response exchange at any given time for each connection.
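Both halves of that trade-off are easy to see with Python's standard library: the client below reuses a single keep-alive connection for two requests, but note that the exchanges are still strictly serialized, one after the other. (The echo handler and port choice here are purely illustrative.)

```python
import http.client
import http.server
import threading

class EchoHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # HTTP/1.1 => keep-alive by default
    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # keep the demo's output quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
bodies = []
for path in ("/a", "/b"):        # both exchanges reuse one TCP connection,
    conn.request("GET", path)    # but must run one request/response at a time
    bodies.append(conn.getresponse().read().decode())
print(bodies)  # ['/a', '/b']

conn.close()
server.shutdown()
```

One handshake and one slow start are amortized over both requests, but the second request cannot start until the first response has been fully read.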

As the web evolved, browsers found themselves needing more and more concurrency when fetching and rendering web pages as the number of resources (CSS, JavaScript, images, …) required by each web site increased over the years. But since HTTP/1.1 only allowed clients to do one HTTP request/response exchange at a time, the only way to gain concurrency at the network layer was to use multiple TCP connections to the same origin in parallel, thus losing most of the benefits of keep-alive connections. While connections would still be reused to a certain (but lesser) extent, we were back at square one.

HTTP/2

Finally, more than a decade later, came SPDY and then HTTP/2, which, among other things, introduced the concept of HTTP “streams”: an abstraction that allows HTTP implementations to concurrently multiplex different HTTP exchanges onto the same TCP connection, allowing browsers to more efficiently reuse TCP connections.
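The multiplexing idea can be sketched in a few lines: each stream's messages are chopped into frames tagged with a stream ID, and frames from different streams are interleaved on the same connection. This is a conceptual model only; real HTTP/2 framing adds flow control, priorities, and binary encoding.

```python
# Two simplified HTTP/2-style streams (odd IDs are client-initiated).
streams = {1: ["HEADERS", "DATA", "DATA"], 3: ["HEADERS", "DATA"]}

# Interleave one frame per stream per round onto a single "wire".
wire = []
while any(streams.values()):
    for stream_id, frames in streams.items():
        if frames:
            wire.append((stream_id, frames.pop(0)))

print(wire)
# [(1, 'HEADERS'), (3, 'HEADERS'), (1, 'DATA'), (3, 'DATA'), (1, 'DATA')]
```

The receiver reassembles each stream from its own frames, so neither exchange has to wait for the other to finish before it can make progress.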

But, yet again, this was no silver bullet! HTTP/2 solves the original problem — inefficient use of a single TCP connection — since multiple requests/responses can now be transmitted over the same connection at the same time. However, all requests and responses are equally affected by packet loss (e.g. due to network congestion), even if the data that is lost only concerns a single request. This is because while the HTTP/2 layer can segregate different HTTP exchanges on separate streams, TCP has no knowledge of this abstraction, and all it sees is a stream of bytes with no particular meaning.

The role of TCP is to deliver the entire stream of bytes, in the correct order, from one endpoint to the other. When a TCP packet carrying some of those bytes is lost on the network path, it creates a gap in the stream and TCP needs to fill it by resending the affected packet when the loss is detected. While doing so, none of the successfully delivered bytes that follow the lost ones can be delivered to the application, even if they were not themselves lost and belong to a completely independent HTTP request. So they end up getting unnecessarily delayed as TCP cannot know whether the application would be able to process them without the missing bits. This problem is known as “head-of-line blocking”.
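Head-of-line blocking is easy to demonstrate with a toy model of TCP's in-order delivery: the application only ever sees the contiguous prefix of the byte stream, so a single lost segment stalls everything behind it, even segments belonging to an unrelated request.

```python
def tcp_deliverable(received: set) -> int:
    """TCP hands the application only the contiguous in-order prefix
    of the segment sequence; everything after a gap must wait."""
    n = 0
    while n in received:
        n += 1
    return n

# Six segments; even-numbered ones carry request A, odd ones request B.
# Segment 2 (part of request A) is lost in transit.
received = {0, 1, 3, 4, 5}
print(tcp_deliverable(received))
# -> 2: segments 3, 4, 5 arrived fine but are stuck behind the gap
```

Segments 3 and 5 belong entirely to request B, yet TCP cannot release them because it has no idea the bytes are independent.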

HTTP/3

This is where HTTP/3 comes into play: instead of using TCP as the transport layer for the session, it uses QUIC, a new Internet transport protocol, which, among other things, introduces streams as first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so no additional handshakes and slow starts are required to create new ones, but QUIC streams are delivered independently such that in most cases packet loss affecting one stream doesn’t affect others. This is possible because QUIC packets are encapsulated on top of UDP datagrams.
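Extending the earlier toy model to per-stream ordering shows why QUIC avoids the problem: ordering is enforced within each stream rather than across the whole connection, so the same packet loss now stalls only the stream it belongs to. (The stream layout below is illustrative, not QUIC's actual packet format.)

```python
def stream_deliverable(segments, received) -> int:
    """In-order prefix of one stream's segments that has arrived;
    each QUIC stream is reassembled independently of the others."""
    n = 0
    while n < len(segments) and segments[n] in received:
        n += 1
    return n

# Same loss as in the TCP example: packet 2 never arrives.
received = {0, 1, 3, 4, 5}
streams = {"A": [0, 2, 4], "B": [1, 3, 5]}  # each list is one stream, in order

result = {name: stream_deliverable(segs, received)
          for name, segs in streams.items()}
print(result)
# -> {'A': 1, 'B': 3}: stream B is fully delivered despite A's loss
```

Under TCP's global ordering, the same loss held back everything after segment 1; here only stream A waits for its retransmission.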

Using UDP allows much more flexibility compared to TCP, and enables QUIC implementations to live fully in user-space — updates to the protocol’s implementations are not tied to operating systems updates as is the case with TCP. With QUIC, HTTP-level streams can be simply mapped on top of QUIC streams to get all the benefits of HTTP/2 without the head-of-line blocking.

QUIC also combines the typical 3-way TCP handshake with TLS 1.3’s handshake. Combining these steps means that encryption and authentication are provided by default, and also enables faster connection establishment. In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS.
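The latency win can be summarized with simple round-trip arithmetic. This is a deliberately simplified accounting (it ignores 0-RTT session resumption and TCP Fast Open, both of which can shave further round trips):

```python
# Round trips before application data can start flowing on a
# brand-new connection, under a simplified model.
HANDSHAKE_RTTS = {
    "TCP + TLS 1.2": 1 + 2,  # TCP 3-way handshake, then two TLS round trips
    "TCP + TLS 1.3": 1 + 1,  # TLS 1.3 completes in a single round trip
    "QUIC":          1,      # transport and TLS 1.3 handshakes combined
}

for name, rtts in HANDSHAKE_RTTS.items():
    print(f"{name}: {rtts} RTT(s) before data flows")
```

On a path with 100 ms round-trip time, that is the difference between roughly 300 ms, 200 ms, and 100 ms of setup delay before the first byte of the response can even be requested.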


Reference:


P.S. You need to enable HTTPS to see these in action! Otherwise you will see HTTP/1.1 in your browser’s network tab while developing. Use mkcert or Caddy to taste some of the modern goodness!

7 Likes