The promising HTTP/3

Clients and servers communicate with each other using the HTTP protocol.

Browsers are the most popular HTTP clients, so millions of people use the protocol every day.

Its next version is HTTP/3. Let’s see what it is.

HTTP/2

HTTP/2 is the current major version of the protocol. There have been several versions before it, such as HTTP/1.0 and HTTP/1.1.

HTTP requests and data

Clients and servers use HTTP to ask each other to do specific tasks.

The HTTP protocol defines a set of rules that organize those communications, but another protocol is used to transmit data: TCP/IP.

It is a prevalent networking protocol as almost all operating systems implement it. Its purpose is to connect computers.

Before HTTP/2, each connection could handle only one request at a time, so browsers had to open several TCP/IP connections per site. That is not ideal for modern websites with rich interfaces, and it causes performance issues.

What HTTP/2 adds

HTTP/2 has introduced multiplexing. Instead of opening a new TCP/IP connection for each static file, you need only one connection to load everything.

It is, of course, more complicated than that. Here you get a general overview. Let’s go further, just a little more.

What does “multiplexing” mean? It allows multiple HTTP requests to be sent and received over a single TCP/IP connection. This way, the browser can download files in parallel, which is better for front-end performance.
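
To make that concrete, here is a minimal Node.js sketch (in TypeScript) of an HTTP/2 client, assuming a hypothetical server at https://example.com that serves /style.css and /app.js. Every request below rides on the same session, which maps to a single TCP/IP connection.

import { connect } from "node:http2";

// A single session = a single TCP/IP connection; every request below is a stream on it.
const session = connect("https://example.com");

function fetchPath(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const req = session.request({ ":path": path }); // a new stream, not a new connection
    let body = "";
    req.setEncoding("utf8");
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => resolve(body));
    req.on("error", reject);
  });
}

// The three downloads are multiplexed in parallel over the one connection.
Promise.all([fetchPath("/"), fetchPath("/style.css"), fetchPath("/app.js")])
  .then((bodies) => {
    console.log(bodies.map((b) => b.length));
    session.close();
  })
  .catch(() => session.close());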

HTTP/2 does not come without consequences for front-end development and what we call best practices.

HTTP/2 and best practices

Amongst best practices for front-end development, you find JS/CSS concatenation, CSS sprites, domain sharding, CSS inlining, and many other techniques.

The idea is to reduce the number of requests, because HTTP/1.1 cannot multiplex downloads: each connection handles one request at a time, and browsers only open a handful of connections per domain.

With HTTP/2, you can load resources as soon as they’re ready, and there’s an internal prioritization mechanism, which is pretty cool. But it also means your website could be slower with HTTP/2 if you keep optimizations designed for HTTP/1.1, since they no longer bring the same benefits.

Bear that in mind, as it could explain why you don’t get the expected gains or lose speed with HTTP/2.

Server push, the killer feature?

One cannot write about HTTP/2 without mentioning server push.

It’s probably the best feature for the front-end. Before HTTP/2, the browser has to request the HTML and wait for the server to send it back.

It’s only after this step that the browser can request and receive styles.

With server push, the HTML and the styles arrive together in response to the initial request, as soon as they are ready. It leads to faster rendering.
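
As an illustration, here is a minimal sketch of server push using Node’s built-in http2 module, with hypothetical certificate paths. When the browser asks for the HTML, the server pushes the stylesheet on the same connection without waiting for a second request.

import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

// Hypothetical certificate paths; browsers only speak HTTP/2 over TLS.
const server = createSecureServer({
  key: readFileSync("server-key.pem"),
  cert: readFileSync("server-cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] !== "/") return;
  // Push the stylesheet alongside the HTML, before the browser asks for it.
  stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
    if (err) return;
    pushStream.respond({ ":status": 200, "content-type": "text/css" });
    pushStream.end("body { margin: 0; font-family: sans-serif; }");
  });
  stream.respond({ ":status": 200, "content-type": "text/html" });
  stream.end('<link rel="stylesheet" href="/style.css"><h1>Pushed!</h1>');
});

server.listen(8443);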

I think it’s a terrific enhancement, but it also means best practices such as concatenating or bundling might not be relevant with HTTP/2.


A new incoming version

HTTP/2 now has excellent support.

In contrast, support for HTTP/3 barely exists. New versions of such protocols take time to spread.

Make sense of big words: TCP/IP, UDP

TCP/IP is not the only way to transmit data. Another widely used protocol you may already know is UDP.

It would take too long to describe those protocols in detail, but TCP/IP basically allows data to be transmitted between computers. UDP does the same, but not in the same way.

TCP/IP is said to be more reliable, while UDP is faster. Both deliver packets, but TCP goes back and forth between computers, acknowledging each delivery to make sure it succeeded. It also delivers packets in order.

It’s robust, but it’s limited in speed and it sometimes leads to bottlenecks.

UDP does not run the same delivery checks. It does not go back and forth. That’s why it’s faster.
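
To see the difference in code, here is a tiny Node.js sketch (TypeScript) contrasting the two; example.com and port 9999 are placeholders. The TCP socket goes through a handshake and acknowledgements, while the UDP socket just fires a datagram and forgets about it.

import { createConnection } from "node:net";
import { createSocket } from "node:dgram";

// TCP: a handshake opens the connection; every byte is acknowledged and ordered.
const tcp = createConnection({ host: "example.com", port: 80 }, () => {
  tcp.write("HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
});
tcp.on("data", (chunk) => console.log("TCP received", chunk.length, "bytes"));

// UDP: no handshake, no acknowledgement; the datagram is sent and forgotten.
const udp = createSocket("udp4");
udp.send(Buffer.from("ping"), 9999, "example.com", () => udp.close());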

HTTP-over-QUIC and HTTP/3

Google develops its own protocols to transmit data. In 2009, it released SPDY, which became a significant part of the HTTP/2 protocol.

In other words, Google builds its own tools, and they often become web standards afterwards, with contributions from other big companies such as Microsoft, Facebook, and Apple. It’s the same process with QUIC and HTTP/3.

QUIC stands for Quick UDP Internet Connections. It’s a protocol that leverages the benefits of UDP to reduce transmission delays.

Source: web.dev

HTTP/3 merges HTTP-over-QUIC with other additions, such as “HTTPS everywhere”.

It means that only HTTPS URLs work with HTTP/3. Because encryption is built into QUIC, the connection and TLS handshakes are combined into fewer round trips, hence the speed gains.
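
As a hedged illustration of how a site opts in, here is a sketch of a plain Node.js HTTPS server advertising an HTTP/3 endpoint through the standard Alt-Svc response header. The certificate paths are hypothetical, and the HTTP/3 service itself would be provided by a QUIC-capable server or proxy, since Node has no built-in HTTP/3 stack.

import { createServer } from "node:https";
import { readFileSync } from "node:fs";

// Any TLS server can advertise an HTTP/3 endpoint via the Alt-Svc header.
const server = createServer(
  { key: readFileSync("server-key.pem"), cert: readFileSync("server-cert.pem") }, // hypothetical paths
  (req, res) => {
    // "This origin is also reachable over HTTP/3 (QUIC on UDP port 443) for the next 24 hours."
    res.setHeader("Alt-Svc", 'h3=":443"; ma=86400');
    res.end("Served over HTTPS; HTTP/3 advertised via Alt-Svc");
  }
);
server.listen(443);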

They say the HTTP-over-QUIC protocol is more secure by design, drastically reducing the risk of some known attacks such as man-in-the-middle and packet injection. Thanks to modern encryption and authentication, even some low-level headers are protected, which is impossible with TCP.

The constant need for speed

The need for faster page loads is insatiable. Everything becomes asynchronous. That is why protocols change.

Neither HTTP/2 nor HTTP/3 is perfect. There are critics and backers.

Cloudflare seems enthusiastic:

“a faster, more reliable, more secure web experience for all.”

Source: Cloudflare

Most critics say the “revolution” won’t happen. Gains should be noticeable but do not expect miracles.

IMHO, unless you work on the system side, you won’t have to configure anything manually. However, as HTTP/3 is an evolution of HTTP/2, there are impacts on workflows.

We saw HTTP/2 might have paradoxical effects, as some front-end optimizations are useful for HTTP/1.1 but do not make sense in HTTP/2, leading to slower pages.

Many tools such as Grunt, Gulp, or Webpack have internal processes that concatenate or bundle assets. From that perspective, the entire pipeline strategy, and even the internal logic of these tools, might need to change.
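
For instance, a bundler configuration might move away from the single-bundle strategy. Here is a hedged webpack sketch (the entry point and size values are purely illustrative, not recommendations) that splits the output into smaller chunks, which HTTP/2 can download in parallel over one connection.

// webpack.config.ts — illustrative values only
import type { Configuration } from "webpack";

const config: Configuration = {
  entry: "./src/index.ts", // hypothetical entry point
  optimization: {
    splitChunks: {
      chunks: "all",       // split shared code out of the main bundle
      maxSize: 100 * 1024, // aim for chunks around 100 KB instead of one big file
    },
  },
};

export default config;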

Photo by Borna Bevanda on Unsplash