Older blog entries for bagder (starting at number 771)

Why the latest security vulnerability in curl happened

At the end of January 2013 we got a fresh security vulnerability pointed out to us in the curl project (it was publicly announced on Feb 6). Another buffer overflow. This time in the SASL Digest-MD5 handling for POP3, IMAP and SMTP. It is the 16th security flaw during curl's life-time of almost 15 years, so it isn't a disaster, but of course it is never fun when it happens. I put a lot of my own effort and pride into this project, so every time something like this floats to the surface my pride and self-esteem get damaged a bit.
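To illustrate the class of bug (a hypothetical sketch with made-up names, not the actual curl code): a fixed-size buffer receives strings that the remote server controls, without anyone first checking that they fit.

#include <stdio.h>

/* Hypothetical illustration of the bug class only, not curl's actual
   code: 'realm' and 'nonce' arrive from the remote server, but the
   response buffer has a fixed size, so a long enough server reply
   overflows it and tramples adjacent stack memory. */
static void build_digest_header(const char *user, const char *realm,
                                const char *nonce)
{
  char response[128];

  /* BAD: no bounds check on the server-controlled input */
  sprintf(response, "username=\"%s\",realm=\"%s\",nonce=\"%s\"",
          user, realm, nonce);

  /* the straightforward fix is to bound the write instead:
     snprintf(response, sizeof(response), ...) and check the result */
}

Seen in isolation it is an obvious mistake, which is exactly why it stings that it slipped through review.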

Everyone who cares about open source and security, and foremost about a reliable and secure libcurl, of course now wonders: how did this happen? How could this security problem get into libcurl, and what are we doing to make sure it doesn't happen again?

Let me tell you the story. It is neither as interesting nor as full of conspiracies as you might like. It is instead rather dull and boring, but nevertheless the truth.

I'm the lead developer and maintainer of curl and libcurl. I have personally done some 65% of all commits in the project, and I do the majority of all code reviews on the mailing list. Our code might be used by some 500 million users, but the number of regulars that can be considered the "core team" can still basically be counted on one hand. Also, we all do this primarily in our spare time.

During intense development periods we get flooded by bug reports and patch submissions and my backlog grows. It’s really not possible to foresee when these periods come, but occasionally it seems the planets align in this way and work piles up.

In order to proceed the best way in the project, I try to focus on the architectural and "deep" matters that need me and my particular knowledge the most. I try to leave the "easy" problems to others, and I try to stay away from the issues that already seem to be under control by some of the existing regulars in the project. I also have to let other "elders" in the project push things with slightly less scrutiny, just to be able to plow through the work better. Unfortunately this leads to the occasional flaw getting through, and in this case it was even a security vulnerability where, when you look back at the code, you really cannot understand how we could have missed it.

We do take security seriously though, and we make a big effort to handle all security reports swiftly and accurately. Even if this was the 16th time we let our guard down, I want to think that we at least react responsibly and in a good way when we realize our mistakes.

Please don't judge us because of this. Please instead consider joining us and helping us review code, so we find the next flaw before we merge it into mainline, or at least before we do a public release with the code!


Syndicated 2013-02-07 20:50:27 from daniel.haxx.se

curl and libcurl 7.29.0

As a representative of the team behind curl and libcurl, I'm of course proud that we have yet again shipped a release to the public today. Over 240 commits, with in total almost 10000 lines added and 6000 removed since the previous release in November 2012. We're only a month away from the curl project turning 15 years old.

Some highlights this time include:

  • We fixed a nasty overflow vulnerability we have been shipping in a few previous releases. The flaw existed in code used by IMAP, POP3 and SMTP.
  • We introduced a new test suite output mode that is “automake compliant”. This can help Linux distros and others who want to run many test suites and have a unified way of parsing the results and outcomes. It follows the spirit of ptest and I believe it will be used in the future.
  • The IMAP support got a lot of improvements, and lots of login and authentication fixes were brought in. libcurl now supports the SASL methods DIGEST-MD5, CRAM-MD5, NTLM and LOGIN, and it also recognizes the LOGINDISABLED server capability.
  • Architecture-wise, we remodeled the internals quite a lot and made them “always-multi“. This improves readability, reduces internal complexity and is all just goodness. The short-term downside is possibly the risk of a temporary increase in bug reports due to this…
  • 35 specified bug fixes were crammed in as well, and there are a bunch more we haven’t mentioned that just “silently” improved the multi interface functionality.

Syndicated 2013-02-06 13:56:04 from daniel.haxx.se

the new bug tracker on sourceforge

A while ago Sourceforge offered me an upgrade of curl’s bug tracker to “the new one” they provide. They do offer some arguments for why you would want to do this, but they don’t elaborate much on the transition for existing projects. Since I’ve been annoyed and disappointed with the old one for years, I decided to dive right in. I decided to post this blog entry to possibly encourage others as well, or at least explain how the upgrade worked for us.

I’ll start by explaining a bit about what’s so bad about the old Sourceforge bug tracker. Anyone who has tried to use it for anything “real” most likely already knows about these things, and then I figure my list can be used to compare whether we’ve been annoyed by the same things.

  1. They use a global bug id which makes all bug entries get very large numbers that aren’t in sequence and are fairly hard to remember.
  2. You can’t respond to bug reports by mail, so you are forced to use the heavy ad-filled web site.
  3. Ridiculous URLs to the bug tracker and each individual bug entry. I created a bounce CGI years ago on the curl web site to avoid having to use the overly long ones anywhere.
  4. When sending out email notifications, it prepends the new comments while having the older ones below, which basically is an odd-order top-posting style a lot of people and projects have a hard time getting accustomed to.
The new tracker addresses all of these issues, so I agreed to move curl over to it. And this is the outcome:
  • All the existing bug tracker entries were converted. They now get numbered sequentially in a private number series, so there is no more bug #31234234; instead the 1100 or so bug reports became bug #1 to bug #1169.
  • The new bug entries have a different set of meta-data, but ‘status’, ‘owner’ and so on seemed to get translated pretty well. The new ‘milestone’ got populated incorrectly for me, but it didn’t matter much as I simply cleared it.
  • There’s no visible way to translate from old-style bug numbers to the new bug numbers. When I go to the URL for an old number it redirects me to the new bug, so clearly Sourceforge has created a look-up table it can use.
  • There’s now a sensible public URL to point out the “home” for the curl bug tracking.

Annoying things with the new tracker:

  • It splits up the comments on a single report into several “pages” far too early, and forces you to click through annoying “page 2” or “next page” links to see the latest comments.

Summary: the upgrade was totally worth it. A much better bug tracker with much more useful interfaces, both the web interface and the ability to respond to it by email etc. And still room for improvements!

Syndicated 2013-01-29 08:53:36 from daniel.haxx.se

internally, we’re all multi now!

libcurl internals suddenly became a lot cleaner and neater to work with when we made all code assume and work with the multi interface!

libcurl was initially created slightly after the birth of the curl tool. After the tool started to get some traction and use out in the world, requests and queries about a library with its powers started to drop in. Soon enough, in the year 2000 we shipped the first release of libcurl and it featured a synchronous API (the “easy” interface) that performs the complete operation and then returns. I think we can now say that the blocking easy interface was successful and its ease of use has been very popular and appreciated by many users.

During 2002 the need for a non-blocking API had been identified, and we introduced the multi interface. The multi interface is kind of a superset, as it re-uses the same handles as are used with the easy interface, so it cleverly makes it fairly easy for a standard application to move from the easy interface to the multi one.

Basically since that day, we’ve struggled in the source code structure to handle the fact that we have both a blocking and a non-blocking API. In lots of places we’ve had different code paths and choices made depending on which API was used. It made the source code hard to follow, and it occasionally introduced hard-to-track bugs which could lead to the multi and easy interfaces not behaving the same way toward the underlying network or protocol. It was clear very early on that it wasn’t an ideal design choice, but it was a design choice that was spread out among the code and it stuck.

During November 2012 I finally took on the code that we’ve had #ifdef’ed since around 2005, which makes the blocking easy interface operation a wrapper function around the non-blocking multi interface functions. Using this method, all internals can be considered non-blocking, and there is no need left to treat things differently depending on which API was used, because everything is now multi interface == non-blocking.
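To give a feel for the principle, here is a minimal sketch of how a blocking transfer can be expressed purely with the public non-blocking multi functions. The real internal wrapper handles many more details (timeouts, error propagation and so on); this is just the shape of it:

#include <curl/curl.h>

CURLcode blocking_perform(CURL *easy)
{
  CURLM *multi = curl_multi_init();
  CURLMsg *msg;
  int running, queued;
  CURLcode result = CURLE_OK;

  if(!multi)
    return CURLE_OUT_OF_MEMORY;

  curl_multi_add_handle(multi, easy);

  do {
    /* drive the transfer forward without blocking */
    curl_multi_perform(multi, &running);
    if(running)
      /* wait (at most 1000 ms) for activity on the transfer's sockets */
      curl_multi_wait(multi, NULL, 0, 1000, NULL);
  } while(running);

  /* fish out the transfer's result code */
  while((msg = curl_multi_info_read(multi, &queued))) {
    if(msg->msg == CURLMSG_DONE && msg->easy_handle == easy)
      result = msg->data.result;
  }

  curl_multi_remove_handle(multi, easy);
  curl_multi_cleanup(multi);
  return result;
}

With everything funneled through one code path like this, the easy and multi interfaces can no longer drift apart in behavior.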

On January 17th 2013 the big patch was committed. 400 added lines, 800 removed over 54 modified files…


Syndicated 2013-01-17 21:46:08 from daniel.haxx.se

The curl year 2012


So what did happen in the curl project during 2012?

First some basic stats

We shipped 6 releases with 199 identified bug fixes and 40 other changes. That makes on average 33 bug fixes shipped every 61st day, or a little over one bug fix every second day. All this was done with about 1000 commits to the git repository, which is roughly the same amount of git activity as in 2010 and 2011. We merged commits from 72 different authors, which is a slight increase from the 62 in 2010 and 68 in 2011.

On our main development mailing list, the curl-library list, we now have 1300 subscribers and during 2012 it got about 3500 postings from almost 500 different From addresses. To no surprise, I posted by far the largest amount of mails there (847) with the number two poster being Günter Knauf who posted 151 times. Four more members posted more than 100 times: Steve Holme (145), Dan Fandrich (131), Marc Hoersken (130) and Yang Tse (107). Last year I sent 1175 mails to the same list…

Notable events

I’ve walked through the biggest changes and fixes and here are the particular ones I found stood out during this otherwise rather calm and laid back curl year. Possibly in a rough order of importance…

  1. We started the year with two security vulnerability announcements, regarding an SSL weakness and an injection flaw. They were reported in 2011 though, and we didn’t get any further security alerts during 2012, which I think is good. Or a sign that nobody has been looking closely enough…
  2. We got two interesting additions in the SSL backend department almost simultaneously. We got native Windows support with the use of the schannel subsystem and we got native Mac OS X support with the use of Darwin SSL. Thanks to these, we can now offer SSL-enabled libcurls on those operating systems without relying on third party SSL libraries.
  3. The VERIFYHOST debacle took off with “security researchers” throwing accusations and insults, ending with us shipping a curl release with the bug removed. It did however unfortunately lead to some follow-up problems in, for example, the PHP binding.
  4. During the autumn, the brokenness of WSAPoll was identified, and we now build libcurl without it, and as a result libcurl now works better on Windows!
  5. In an attempt to allow libcurl-using applications to avoid select() and its problems, we introduced the new public function curl_multi_wait. It avoids the FD_SETSIZE limit and makes it harder to screw up…
  6. The overly bloated User-Agent string for the curl tool was dramatically shortened when we cut out all the subsystems/libraries and their version numbers from the string. Now there’s only curl and its version number left. Nice and clean.
  7. In July we finally introduced metalink support in the curl tool with the curl 7.27.0 release. It’s been one of those things we’ve discussed for ages that finally came through and became reality.
  8. With the brand new HTTP CONNECT support in the test suite, we suddenly could get much improved test cases that do SSL or just tunnel through an HTTP proxy with the CONNECT request. It of course helps us avoid regressions and otherwise improve curl and libcurl.

What didn’t happen

  1. I made an attempt to get the spindly hacking going, but I’ve mostly failed with that effort. I have personally not had enough time and energy to work on it, and the interest from the rest of the world seems lukewarm at best.
  2. HTTP pipelining. Linus Nielsen Feltzing has a patch series in the works with a much improved pipelining support for libcurl. I’ll write a separate post about it once it gets in. Obviously we failed to merge it before the end of the year.
  3. Some of my friends like to mock me about curl not being completely IPv6 friendly due to its lack of support for Happy Eyeballs, and of course they’re right. Making curl just do two connects on IPv6-enabled machines should be a fairly small change (a rough sketch of the idea follows below), yet I haven’t managed to get around to actually implementing it…
  4. DANE is SSL cert verification with records from DNS, thanks to DNSSEC. Firefox has some experiments going and Chrome already supports it. This is a technology that truly can improve HTTPS going forward and allow us to avoid the annoyingly weak and broken CA model…

I won’t promise that any of these will happen during 2013 but I can promise there will be efforts…
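The Happy Eyeballs idea mentioned above is roughly this: start the IPv6 connect first, give it a short head start, then race an IPv4 connect against it and go with whichever socket connects first. A hypothetical sketch with made-up helper names, not curl code:

#include <sys/socket.h>
#include <sys/select.h>
#include <netdb.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* start a non-blocking connect attempt for one address */
static int start_connect(const struct addrinfo *ai)
{
  int fd = socket(ai->ai_family, SOCK_STREAM, 0);
  if(fd < 0)
    return -1;
  fcntl(fd, F_SETFL, O_NONBLOCK);
  if(connect(fd, ai->ai_addr, ai->ai_addrlen) < 0 && errno != EINPROGRESS) {
    close(fd);
    return -1;
  }
  return fd;
}

/* race IPv6 against IPv4, where IPv6 gets a 200 ms head start */
int happy_connect(const struct addrinfo *v6, const struct addrinfo *v4)
{
  int fd6 = v6 ? start_connect(v6) : -1;
  int fd4 = -1;
  int v4_started = (v4 == NULL);  /* nothing to race without an IPv4 address */

  for(;;) {
    fd_set wfds;
    struct timeval head_start = { 0, 200000 };
    int fds[2], i, maxfd = -1;

    FD_ZERO(&wfds);
    if(fd6 >= 0) { FD_SET(fd6, &wfds); maxfd = fd6; }
    if(fd4 >= 0) { FD_SET(fd4, &wfds); if(fd4 > maxfd) maxfd = fd4; }

    if(maxfd < 0) {
      if(!v4_started) {  /* IPv6 failed instantly: try IPv4 right away */
        fd4 = start_connect(v4);
        v4_started = 1;
        continue;
      }
      return -1;  /* all attempts failed */
    }

    /* the head-start timeout only applies while IPv4 is still held back */
    if(select(maxfd + 1, NULL, &wfds, NULL,
              v4_started ? NULL : &head_start) == 0) {
      fd4 = start_connect(v4);  /* head start is over, fire up IPv4 too */
      v4_started = 1;
      continue;
    }

    fds[0] = fd6;
    fds[1] = fd4;
    for(i = 0; i < 2; i++) {
      int err = 0;
      socklen_t len = sizeof(err);
      if(fds[i] < 0 || !FD_ISSET(fds[i], &wfds))
        continue;
      getsockopt(fds[i], SOL_SOCKET, SO_ERROR, &err, &len);
      if(!err) {
        close(fds[i] == fd6 ? fd4 : fd6);  /* close the loser (if any) */
        return fds[i];  /* the winner */
      }
      close(fds[i]);  /* this family's attempt failed, drop it */
      if(fds[i] == fd6)
        fd6 = -1;
      else
        fd4 = -1;
    }
  }
}

In curl this would of course have to be done with non-blocking logic inside the multi machinery rather than with a private select() loop, but the principle is the same.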

The Future

I wrote a separate post a short while ago about the HTTP2 progress, and I expect 2013 to bring much more details and discussions in that area. Will we get SRV record support soon? Or perhaps even URI records? Will some of the recent discussions about new HTTP auth schemes develop into something that will reach the internet in the coming year?

In libcurl we will switch to an internal design that is purely non-blocking, with a lot of if-then-that-else source code removed where it checked which interface was used. I’ll make a follow-up post with details about that as well, as soon as it actually happens.

Our Responsibility

curl and libcurl are considered pillars of the internet world by now. This year I’ve heard from several independent sources how people consider curl support to be an important driver for internet technology: as long as curl doesn’t support something, it hasn’t really reached everyone, and things won’t get adopted for real in the Internet community until curl supports them. As father of the project, this makes me proud and humble, but I also feel the responsibility of making sure that we continue to do the right thing the right way.

I also realize that this position of ours is not automatically glued to us; we need to keep up the good stuff to make it stick.


Syndicated 2012-12-23 12:40:19 from daniel.haxx.se

I’m with Nexus 4

About two years ago I purchased my Desire HD, made by HTC, which has indeed been a trusted work horse of mine, even if it does lack on the battery side and the micro USB connector has gotten a bit worn out so that most cables fall out unless I take precautions to avoid it.

Back then I upgraded from an HTC Magic to a rather high end device of the time. This time the bump goes like this in pure specs/numbers, and it is interesting to see how two years have changed the scene…

Size and weight

HTC Desire HD: 164 grams, 123 x 68 mm and 11.8 mm thick. 4.3″ LCD

Nexus 4: 139 grams, 133.9 x 68.7 mm and 9.1 mm thick. 4.7″ LCD

Two years ago many people asked me about the “big” phone and had objections. Today, that old 4.3″ thing is small in comparison. As you can see, the Nexus 4 is basically “only” a centimeter taller than the old one, while a bit thinner and much lighter. The extra centimeter and the removal of the bottom buttons basically gave the extra screen real estate.

Pixels

HTC Desire HD: 800 x 480

Nexus 4: 1280 x 768

Roughly 2.5 times the number of pixels on screen.

Battery

HTC Desire HD: 1230 mAh

Nexus 4: 2100 mAh

70% more battery juice. Should come in handy, but won’t stop me from dreaming about some real battery evolution!

More!

CPU: the 1GHz single-core is now a 1.5GHz quad-core.

RAM: 768MB of RAM has now grown to 2GB.

Price: the price of this new phone is lower than what the old one cost as new!

Buttons: I find it interesting that I’ve gone from 6 buttons, to 4 to none through my three Android phones.

HTC Sense vs Stock Android: I’ve never been particularly upset with Sense, and now that the Desire HD is stuck on Android 2.3 and the Nexus runs 4.2, they feel very different anyway.

A feature my HTC phone has that I like, but that stock Android lacks, is the ability to completely block (ignore) certain contacts on incoming calls. I can add sales people or telemarketers and then not see them at all, no matter how many times they phone me – not even as missed phone calls.

One thing I’ve actually been slightly annoyed with in the Desire HD is its really crappy camera. I believe the Nexus 4 camera has the same number of pixels, but I do have hopes that it’ll allow me to take better pictures while being out and about.

I figured this posting wouldn’t be complete without also including a picture of my first Android phone, the HTC Magic.

Syndicated 2012-12-20 08:19:22 from daniel.haxx.se

HTTP2, SPDY and spindly right now

On November 28, the HTTPbis group within the IETF published the first draft for the upcoming HTTP2 protocol. What is being posted now is a start and a foundation for further discussions and changes. It is basically an import of the SPDY version 3 protocol draft.

There’s been a lot of resistance within HTTPbis to the mandated TLS that SPDY has been promoting so far, and it seems unlikely to reach consensus as-is. There’s also been a lot of discussion and debate over the compression SPDY uses. Not only because the pre-populated dictionary might already be a little out of date, and because gzip compression consumes a notable amount of memory per stream, but also, more recently, because of the security aspect of compression shown by the CRIME attack.

Meanwhile, the discussions on the spdy development list have brought up several changes to version 3 that are suggested and planned to become part of the version 4 that is work in progress, including a new compression algorithm, shorter length fields (now 16 bits) and more. Recently, discussions have also brought up a need for better flexibility when it comes to prioritization, and especially changing priorities at run-time. Like when browser users switch tabs or simply scroll down the page, and you’d rather have the images currently in sight load before the images no longer in view…

I started my work on Spindly a little over a year ago to build a stand-alone library, primarily intended for libcurl, so that we could soon offer SPDY downloads with it. We’re still only on SPDY protocol 2 there, I’ve failed to attract any fellow developers to the project, and my own lack of time has basically kept the project from evolving the way I wanted it to. I haven’t given up on it though. I hope to be able to get back to it eventually, very much depending also on how the HTTPbis talks go. I certainly am determined to have libcurl be part of the upcoming HTTP2 experiments (even if that is not happening very soon) and spindly might very well be the infrastructure that powers libcurl then.

We’ll see…

Syndicated 2012-12-01 22:26:58 from daniel.haxx.se

“haking”

(This is an authentic email we received at Haxx the other day. Names, emails and URLs are replaced in this excerpt to save the innocent)

Date: Thu, 29 Nov 2012 14:59:25
Subject: haking

hello, can you tell me how to hack into web site:
[FIRST URL]
so it is showing:

[OTHER URL]
when you click on a link in google results?

for example if you click on a google result:
[URL to a google.rs search for something on the FIRST URL site]

the point is i would like to protect my web site form that kind of attack so please let me know how to do that

how did i found you? there is your address at [FIRST URL]/coockies.txt so i think you did it, but was polite enough to leave address.. please help me.

Of course I was curious enough to check the “coockies.txt” file, and the beginning of that file looked like this:

# Netscape HTTP Cookie File
# http://curlm.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
[FIRST URL] FALSE	/	FALSE	0	PHPSESSID	dfn1a5ll0hs8odpfh3p2qtlcj3

This tells us a few trivial things, all of which might not be obvious to the untrained eye:

  • The file was generated by a libcurl that was 7.16.0 or later, but no later than 7.18.3, as we only used that URL in the file between those releases.
  • The spelling of that cookie file name is so hilarious we can guess it wasn’t a native English speaker who named it. The subject of the email is similarly bad, so perhaps it was a fellow countryman from Serbia? (the TLD of the google URL was .rs after all)
  • The person doing this didn’t even try to clean up the remaining junk file(s) afterwards
  • The guy sending me the email is completely in the dark about what has happened, who he’s contacting, and what my relation to all this is.
  • The world can be a harsh and cruel place and it isn’t easy to know your way around all of it…
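For reference, a file like that is what libcurl writes when the application enables the cookie engine with a file name to save to. A minimal sketch (the URL is just a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* enable the cookie engine and save all known cookies to this
       file when the handle is cleaned up */
    curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "coockies.txt");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);  /* the cookie jar file gets written here */
  }
  return 0;
}

Whoever used libcurl against that site apparently pointed the cookie jar at a file in the web root, and never cleaned it up.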

Syndicated 2012-11-30 21:50:05 from daniel.haxx.se

I’m with Nexus 10

I held off this long, but now I’ve joined the Nexus 10 tablet-owning part of the world. I brought home my new and shiny Nexus 10 yesterday (purchased in the US, as it is not yet available to buy in this dusty and dark corner of the world).

Android 4.2 on a 10 inch 2560×1600 screen is a lovely experience. It is the 16GB wifi-only version. Did I mention that the screen is awesome?

Syndicated 2012-11-24 19:04:28 from daniel.haxx.se

Say hello to Moo

I decided it was about time to upgrade my main development machine to something modern and snappy. It is 5.5 years since I bought my current work horse, a dual-core AMD Athlon 64 X2 5600+ (2.8GHz) equipped thing. I’m using my machine primarily for development. I never game. I decided to go for the higher end of what’s available, to get me something to live with for several years to come.

Motherboard: Asus P8Z77-M. Micro-ATX. Intel Z77 chipset.

CPU: Intel Core i7 3770K 3.5GHz Socket 1155. This is a 22nm monster featuring 8MB of L3 cache.

Memory: TridentX DDR3 PC19200/2400MHz CL10 2×8GB. 16GB of RAM.

HDD: Seagate Barracuda ST3000DM001 64MB 3TB.

Chassis: Fractal Design Define R3 USB3. See picture. Rather big, and fits a lot more drives and stuff than what I have now…

SSD: OCZ Vertex 4 256GB

CPU cooler: Cooler Master Hyper 412S

Graphics: ASUS Radeon HD5450 512MB (a very simple and cheap thing, but it supports 2560×1600 which the motherboard’s built-in graphics doesn’t)

PSU: Plexgear PS-500 500W

(a prisjakt list with the full setup)

All in all, this has two 120mm chassis fans, one 135mm fan on the big CPU cooler, and there’s one fan in the PSU. I hope they won’t be causing too much noise or problems for me. The rather low-end graphics should keep the total power consumption (and thus heat production) at a decent level.

I purchased all the individual parts separately, as I dislike how I can’t get as optimized a machine prebuilt from anywhere; I’d basically have to pay around 50% more, and then I still wouldn’t get the exact set of pieces I’d like. This way I also avoid the highly disturbing Microsoft tax that prebuilt systems come with.

Unfortunately I got some bad luck included too: when I first put everything together and pressed the power button, nothing happened. Well, a single LED turned on but nothing else happened. It took me a while and some sweat to figure out where the problem lay, and once I replaced the broken motherboard the machine started properly and I could proceed and install it.

Once my new machine (which now goes under the name Moo) gets settled, my old box will become my daughter’s new machine, as her existing tired old PIII machine isn’t really fun to do a lot with.

Syndicated 2012-11-20 19:41:05 from daniel.haxx.se

