bagder is currently certified at Master level.

Name: Daniel Stenberg
Member since: 2000-05-10 09:34:05
Last Login: 2009-12-04 19:23:29

Homepage: http://daniel.haxx.se/

Notes:

My blog is on daniel.haxx.se/blog

Projects

Articles Posted by bagder

Recent blog entries by bagder

Syndication: RSS 2.0

status update: http2 multiplexed uploads

I wrote a previous update about my work on multiplexing in curl. This is a follow-up to describe the status as of today.

I’ve successfully used the http2-upload.c code to upload 600 parallel streams to the test server and they were all sent off fine and the responses received were stored fine. MAX_CONCURRENT_STREAMS on the server was set to 100.

This is using curl git master as of right now (thus scheduled for inclusion in the pending curl 7.43.0 release). I’m not celebrating just yet, but it is looking pretty good. I’ll continue testing.

Commit b0143a2a3 was crucial for this, as I realized we didn’t store and use the read callback in the easy handle but in the connection struct, which is completely wrong when many easy handles are using the same connection! I don’t recall the exact reason why I originally put the data in that struct (I went back and read the commit messages etc.), but I think the new setup is correct both conceptually and code-wise, so if it leads to some side-effects we’ll just have to fix them.
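For anyone who wants to play along, the sketch below shows the general shape of such a test client: many easy handles, each with its own read callback and per-stream state, all driven by a single multi handle. It is not the actual http2-upload.c from my test repo, just a minimal illustration; the local URL, stream count and payload are made up, and the pipelining switch is simply how the work-in-progress code currently opts in to multiplexing.

    /* Illustrative sketch only - not the real http2-upload.c test client.
       Every upload gets its own easy handle with its own read callback state. */
    #include <curl/curl.h>
    #include <string.h>

    #define NUM_STREAMS 10            /* the real test run used 600 */

    static const char payload[] = "hello HTTP/2 multiplexed world\n";

    struct upload {
      size_t sent;                    /* per-stream upload progress */
    };

    /* read callback, set per easy handle with per-stream state in userp */
    static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
    {
      struct upload *u = userp;
      size_t room = size * nitems;
      size_t left = sizeof(payload) - 1 - u->sent;
      size_t n = left < room ? left : room;
      memcpy(buf, payload + u->sent, n);
      u->sent += n;
      return n;                       /* returning 0 ends this stream's upload */
    }

    int main(void)
    {
      CURLM *multi;
      CURL *handles[NUM_STREAMS];
      struct upload uploads[NUM_STREAMS] = {{0}};
      int i, running;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      multi = curl_multi_init();

      /* the "pipelining" switch is what currently opts in to stream re-use;
         the exact flag for multiplexing may look different by release time */
      curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);

      for(i = 0; i < NUM_STREAMS; i++) {
        handles[i] = curl_easy_init();
        curl_easy_setopt(handles[i], CURLOPT_URL,
                         "https://localhost:8443/upload"); /* made-up test URL */
        curl_easy_setopt(handles[i], CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
        curl_easy_setopt(handles[i], CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(handles[i], CURLOPT_READFUNCTION, read_cb);
        curl_easy_setopt(handles[i], CURLOPT_READDATA, &uploads[i]);
        curl_multi_add_handle(multi, handles[i]);
      }

      do {
        curl_multi_perform(multi, &running);
        curl_multi_wait(multi, NULL, 0, 1000, NULL);
      } while(running);

      for(i = 0; i < NUM_STREAMS; i++) {
        curl_multi_remove_handle(multi, handles[i]);
        curl_easy_cleanup(handles[i]);
      }
      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }

Since every stream here carries its own read callback state, the fix in b0143a2a3 (callback data living in the easy handle rather than the connection) is precisely what makes a client like this work.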

Next up: more testing, and then taking on the concept of server push to make libcurl able to support it. It will certainly be a subject for future blog posts…

cURL

Syndicated 2015-05-21 07:34:44 from daniel.haxx.se

RFC 7540 is HTTP/2

HTTP/2 is the new protocol for the web, as I trust everyone reading my blog is fully aware of by now. (If you’re not, read http2 explained.)

Today RFC 7540 was published, the final outcome of the years of work put into this by the tireless heroes in the HTTPbis working group of the IETF. Closely related to the main RFC is the one detailing HPACK, which is the header compression algorithm used by HTTP/2 and that is now known as RFC 7541.

The IETF part of this journey started pretty much with Mike Belshe’s posting of draft-mbelshe-httpbis-spdy-00 in February 2012. Google’s SPDY effort had been going on for a while already when it was brought to the httpbis working group in the IETF, where a few different proposals on how to kick off the HTTP/2 work were debated.

(Photo: HTTP team working in London)

The first “httpbis’ified” version of that document (draft-ietf-httpbis-http2-00) was then published on November 28, 2012 and the standardization work began for real. HTTP/2 was of course discussed a lot on the mailing list from the start, at the IETF meetings but also in interim meetings around the world.

In Zurich, in January 2014, there was an interim meeting that I only attended remotely. We had the design team meeting in London immediately after IETF89 (March 2014) in the Mozilla offices just next to Piccadilly Circus (where I took the photos shown in this posting). We had our final in-person meetup with the HTTP team at Google’s offices in NYC in June 2014, where we ironed out most of the remaining issues.

In between those two last meetings I published my first version of http2 explained, my attempt at a lengthy and very detailed description of HTTP/2, including the problems with HTTP/1.1 and the motivations for HTTP/2. I’ve since published eleven updates.

(Photo: HTTP team in London, debating protocol details)

The last draft update of HTTP/2 that contained actual changes to the binary format was draft-14, published in July 2014. After that, the updates were about language and clarifications on what to do when. There are some functional changes (added in -16, I believe), for example regarding when which sorts of frames are accepted, that change what a state machine should do, but they don’t change how the protocol looks on the wire.

RFC 7540 was published on May 15th, 2015

I’ve truly enjoyed having had the chance to be a part of this. There are a bunch of good people who made this happen and while I am most certainly forgetting key persons, some of the peeps that have truly stood out are: Mark, Julian, Roberto, Will, Tatsuhiro, Patrick, Martin, Mike, Nicolas, Mike, Jeff, Hasan, Herve and Willy.

Syndicated 2015-05-14 23:18:05 from daniel.haxx.se

HTTP/2 for TCP/IP Geeks

I attended a TCP/IP Geeks Stockholm meetup yesterday and did a talk about HTTP/2. Below is the slide set, but as usual it might not be entirely self-explanatory…

Syndicated 2015-05-07 06:22:44 from daniel.haxx.se

curl user poll 2015

Now is the time. If you use curl or libcurl from time to time, please consider helping us out with providing your feedback and opinions on a few things:

https://goo.gl/FyToBn

It’ll take you a couple of minutes and it’ll help us a lot when making decisions going forward.

The poll is hosted by Google and that short link above will take you to:

https://docs.google.com/forms/d/1uQNYfTmRwF9RX5-oq_HV4VyeT1j7cxXpuBIp8uy5nqQ/viewform

Syndicated 2015-05-06 12:44:57 from daniel.haxx.se

HTTP/2 in curl, status update

I’m right now working on adding proper multiplexing to libcurl’s HTTP/2 code. So far we’ve only done a single stream per connection and while that works fine and is HTTP/2, applications will still want more when switching to HTTP/2 as the multiplexing part is one of the key components and selling features of the new protocol version.

Pipelining means multiplexed

As a starting point, I’m using the “enable HTTP pipelining” switch to tell libcurl it should consider multiplexing. It makes libcurl work as before by default. If you use the multi interface and enable pipelining, libcurl will try to re-use established connections and just add streams over them rather than creating new connections. Yes, this means that A) you need to use the multi interface to get the full HTTP/2 stuff and B) the curl tool won’t be able to take advantage of it since it doesn’t use the multi interface! (An old outstanding idea is to move the tool to use the multi interface, and this would be yet another reason why that could be a good idea.)
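In code, and assuming the multi interface as described, that is just the existing pipelining knob on the multi handle. This is a sketch only; the exact constant for saying “multiplex but don’t pipeline” may well end up looking different by the time this ships.

    /* sketch: opt in to connection re-use / multiplexing via the existing
       pipelining switch on the multi handle */
    CURLM *multi = curl_multi_init();
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);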

We still have some decisions to make about how we want libcurl to act by default – especially since we can expect applications to use both HTTP/1.1 and HTTP/2 at the same time. Since we don’t know if the server supports HTTP/2 until after a certain point in the negotiation, we need to decide what to do when we issue N transfers at once to the same server that might speak HTTP/2… Right now, we get the best HTTP/2 behavior by telling libcurl we only want one connection per host, but that is probably not ideal for an application that might use a mix of HTTP/1.1 and HTTP/2 servers.
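For now, the “one connection per host” setup mentioned above can be expressed roughly like this in a test program (a sketch; whether this remains the recommended setup is exactly the open question):

    /* sketch: allow at most one connection per host, so N simultaneous
       transfers to the same HTTP/2 server get multiplexed over it rather
       than each opening its own connection */
    curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 1L);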

Downsides with abusing pipelining

There are some drawbacks with using that pipelining switch to allow multiplexing, as users may very well want HTTP/2 multiplexing but not HTTP/1.1 pipelining, since the latter is just riddled with interop problems.

Also, re-using the same options for limiting connections per host name and so on for both HTTP/1.1 and HTTP/2 may not at all be what real-world applications want or need.

One easy handle, one stream

libcurl API-wise, each HTTP/2 stream is its own easy handle. It keeps things simple and keeps the API paradigm very much in line with how it works for all the other protocols. It comes very naturally for the libcurl application author. If you set up three easy handles, all identifying a resource on the same server, and you tell libcurl to use HTTP/2, it makes perfect sense that all three transfers are made using a single connection.

Multiplexed data means that when reading from the socket, data arrives that belongs to streams other than just a single one, so we need to feed the received data into the different “data buckets” for the involved streams. That gives us a little internal challenge: we get easy handles with no socket activity to trigger a read, yet there is data waiting for them in the incoming buffer. I’ve solved this so far with a special trigger that says there is data to take care of, so that the handle makes a read anyway and then gets the data from the buffer.
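To make the one-stream-per-easy-handle model concrete, here is a minimal application-side sketch: three easy handles pointing at the same (made-up) server, each with its own write callback state, added to one multi handle. Each handle’s write callback only ever receives the data for its own stream, which is the “data bucket” separation described above seen from the application’s side.

    /* Sketch: three HTTP/2 streams as three easy handles over one connection.
       Each handle stores its response through its own write callback state. */
    #include <curl/curl.h>
    #include <stdio.h>

    struct bucket {
      size_t received;                 /* per-stream byte counter */
    };

    static size_t write_cb(char *data, size_t size, size_t nmemb, void *userp)
    {
      struct bucket *b = userp;
      b->received += size * nmemb;     /* a real client would store the data */
      return size * nmemb;
    }

    int main(void)
    {
      const char *urls[] = {           /* made-up resources on one host */
        "https://example.com/a", "https://example.com/b", "https://example.com/c"
      };
      struct bucket buckets[3] = {{0}};
      CURL *handles[3];
      CURLM *multi;
      int i, running;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      multi = curl_multi_init();
      curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);           /* multiplex opt-in */
      curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 1L); /* one connection */

      for(i = 0; i < 3; i++) {
        handles[i] = curl_easy_init();
        curl_easy_setopt(handles[i], CURLOPT_URL, urls[i]);
        curl_easy_setopt(handles[i], CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
        curl_easy_setopt(handles[i], CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(handles[i], CURLOPT_WRITEDATA, &buckets[i]);
        curl_multi_add_handle(multi, handles[i]);
      }

      do {
        curl_multi_perform(multi, &running);
        curl_multi_wait(multi, NULL, 0, 1000, NULL);
      } while(running);

      for(i = 0; i < 3; i++) {
        printf("stream %d: %zu bytes\n", i, buckets[i].received);
        curl_multi_remove_handle(multi, handles[i]);
        curl_easy_cleanup(handles[i]);
      }
      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }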

Server push

HTTP/2 supports server push. That’s a stream that gets initiated from the server side without the client specifically asking for it: a resource the server deems the client is likely to want since it asked for a related resource, or similar. My idea is to support server push by having the application set up a transfer with an easy handle and associated options, but with a URL that only identifies the server, so that libcurl knows on which connection it would accept a push, and then introducing a new option to libcurl that says this is an easy handle to be used for the next server-pushed stream on that connection.

Of course there are a few outstanding issues with this idea. Possibly we should allow an easy handle to get created when a new stream shows up, so that we can better deal with a dynamic number of new streams being pushed.

It’d be great to hear from users who have ideas on how to use server push in a real-world application and how you’d imagine it could be used with libcurl.

Work in progress code

My work in progress code for this drive can be found in two places.

First, I do the libcurl multiplexing development in the separate http2-multiplex branch in the regular curl repo:

https://github.com/bagder/curl/tree/http2-multiplex

Then, I put all my test setup and test client work in a separate repository just in case you want to keep up and reproduce my testing and experiments:

https://github.com/bagder/curl-http2-dev

Feedback?

All comments, questions, praise or complaints you may have on this are best sent to the curl-library mailing list. If you are planning on writing an HTTP/2 capable application, or otherwise have thoughts or ideas about the API for this, please join in and tell me what you think. It is much better to get the discussions going early and to work on different design ideas now, before anything is set in stone, than to wait for us to ship something semi-stable; the closer we get to an actual release, the harder it’ll be to change the API.

Not quite working yet

As I write this, I’m repeatedly doing 99 parallel HTTP/2 streams with no data corruption… But there’s a lot more to be done before I’ll call it a victory.

Syndicated 2015-05-04 08:18:56 from daniel.haxx.se


bagder certified others as follows:

  • bagder certified shughes as Journeyer
  • bagder certified andrei as Master
  • bagder certified kbob as Apprentice
  • bagder certified mbp as Master
  • bagder certified sussman as Journeyer
  • bagder certified mpawlo as Apprentice
  • bagder certified BrucePerens as Master
  • bagder certified rmk as Master
  • bagder certified Fefe as Journeyer
  • bagder certified gstein as Master
  • bagder certified robey as Master
  • bagder certified edd as Journeyer
  • bagder certified ask as Journeyer
  • bagder certified joe as Master
  • bagder certified alan as Master
  • bagder certified pawal as Apprentice
  • bagder certified stone as Apprentice
  • bagder certified sej as Journeyer
  • bagder certified fxn as Apprentice
  • bagder certified forrest as Apprentice
  • bagder certified wsanchez as Master
  • bagder certified zagor as Journeyer
  • bagder certified ben as Master
  • bagder certified kfogel as Master
  • bagder certified orabidoo as Master
  • bagder certified linas as Master
  • bagder certified jas as Master

Others have certified bagder as follows:

  • ib certified bagder as Master
  • chipx86 certified bagder as Master
  • rupert certified bagder as Master
  • larsu certified bagder as Master
  • mvw certified bagder as Journeyer
  • neurogato certified bagder as Journeyer
  • whytheluckystiff certified bagder as Master
  • andrei certified bagder as Journeyer
  • jbowman certified bagder as Journeyer
  • alexr certified bagder as Journeyer
  • pretzelgod certified bagder as Journeyer
  • thallgren certified bagder as Journeyer
  • execve certified bagder as Master
  • pelleb certified bagder as Master
  • GJF certified bagder as Master
  • kroah certified bagder as Master
  • jooon certified bagder as Master
  • nixnut certified bagder as Journeyer
  • jLoki certified bagder as Master
  • mpawlo certified bagder as Journeyer
  • technik certified bagder as Master
  • highgeek certified bagder as Journeyer
  • Stab certified bagder as Master
  • TheCorruptor certified bagder as Master
  • sethcohn certified bagder as Master
  • elho certified bagder as Master
  • monkeyiq certified bagder as Master
  • ebf certified bagder as Master
  • jmg certified bagder as Master
  • robey certified bagder as Master
  • edd certified bagder as Master
  • jbontje certified bagder as Journeyer
  • khazad certified bagder as Master
  • walken certified bagder as Master
  • ask certified bagder as Journeyer
  • xsa certified bagder as Master
  • shlomif certified bagder as Master
  • pawal certified bagder as Master
  • CarloK certified bagder as Master
  • stone certified bagder as Master
  • lerdsuwa certified bagder as Master
  • sej certified bagder as Master
  • nny certified bagder as Journeyer
  • ks certified bagder as Journeyer
  • fxn certified bagder as Master
  • jono certified bagder as Master
  • forrest certified bagder as Master
  • rw2 certified bagder as Master
  • wsanchez certified bagder as Master
  • dlc certified bagder as Journeyer
  • zagor certified bagder as Journeyer
  • ncm certified bagder as Master
  • redi certified bagder as Master
  • ianweller certified bagder as Master
  • bwy certified bagder as Master
  • badvogato certified bagder as Journeyer
  • mnot certified bagder as Master


