11 Jul 2013 cdent   » (Master)

HTTP/2 Missing the Mark

There's been a fair bit of swirl around the internet lately about the latest draft of HTTP 2.0. The issues are several, with lots of disagreement from different camps:

  • It's too hard to implement.
  • It's too complex with that complexity focused on the wrong problems.
  • It's being developed with too much attention paid to the needs of a few large internet behemoths (notably Google, who are getting a bit greedy) where aggregate savings from shaving a few bytes here and there are huge.

James Snell has two excellent postings that cover some of the issues in robust technical detail.

Poul-Henning Kamp, the Varnish guy, has a well-thought-out response in which he explains:

Overall, I find all three proposals are focused on solving yesteryears problems, rather than on creating a protocol that stands a chance to last us the next 20 years.

Marco Tabini provides an example of many similarly themed postings which worry about the cultural impact. His posting, The 7-bit Internet, aligns with my own concerns. He says:

Understanding how things work is a first—and important!—step towards using them well. When we don’t understand the tools we use, we are forced to rely on other tools to hide the underlying complexity and dumb things down to a point where they are manageable.

The concern here is that HTTP/2.x is going to be very challenging to inspect without additional tools. This means that the barrier to learning will be much higher with it than it is for earlier versions. You may think "oh, it can't be that bad". If you're thinking that, go and read Snell's postings a bit more closely.
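The inspectability gap is easy to make concrete. Below is a minimal sketch in Python contrasting a plaintext HTTP/1.1 request with decoding the 9-octet binary frame header that HTTP/2 eventually settled on (the drafts circulating at the time used a similar binary framing); the frame bytes shown are a hypothetical example, not captured traffic:

```python
# An HTTP/1.1 request is legible as-is; no tooling required.
http1_request = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(http1_request.decode("ascii"))

def parse_frame_header(octets):
    """Decode an HTTP/2 frame header: 24-bit payload length,
    8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream id."""
    length = (octets[0] << 16) | (octets[1] << 8) | octets[2]
    frame_type = octets[3]
    flags = octets[4]
    stream_id = int.from_bytes(octets[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A hypothetical HEADERS frame (type 0x1) on stream 1, with the
# END_STREAM (0x1) and END_HEADERS (0x4) flags set, 13-octet payload:
frame = bytes([0x00, 0x00, 0x0D, 0x01, 0x05, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(frame))  # (13, 1, 5, 1)
```

Even this much only gets you the frame envelope; the header fields inside a HEADERS frame are further compressed, so nothing meaningful is readable without yet more decoding.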

Since its start, the biggest win of the web has been its relatively low barriers to entry. It has always been quite easy for individuals and small groups without a lot of background or preparation to learn, to publish, to participate and to build. This is because the protocols, techniques and languages of content, configuration and code have often been relatively straightforward text that both people and computers can read with relatively low effort.

HTTP/2 takes a huge step away from this. It is a protocol that isn't easy for anyone or anything. It trades ease of comprehension for efficient computation. This is a false win, a short-term trade with damaging consequences that doesn't need to happen. Network bandwidth and computing power will continue to grow at astounding rates, but the human ability to engage in the social milieu of making and sharing that makes the web so awesomely diverse is fairly static. We want to encourage and enable that engagement and learning. The decisions we make about how stuff works impact who will work with it.

Syndicated 2013-07-11 12:48:05 (Updated 2013-10-08 16:35:49) from cdent
