Older blog entries for apenwarr (starting at number 606)

26 Apr 2011 (updated 2 May 2011 at 22:06 UTC)

Avery, sshuttle, and bup at LinuxFest Northwest (Bellingham, WA) April 30

Where: LinuxFest Northwest conference (Bellingham, WA)
When: 1:30-3:30 on Saturday, April 30 (conf runs all weekend)
Price: FREE!

You might think that now that I live in New York, I would stop doing presentations on the West coast. But no. Ironically, right after moving to New York, I'll have done three separate presentations (four, if you count this one as two) on the West coast in a single calendar month.

In this particular case, it's because I proposed my talks back when I lived in BC, when Bellingham was a convenient one-hour drive from Vancouver's ferry terminal. Now it's a day-long trip across the continent (and twice across the US/Canada border). But oh well, it should be fun.

Also, I foolishly took someone's advice from a Perl conference one time (was it Damian Conway?) and proposed *two* talks, under the theory that if you propose two talks, you double your chances that the conference admins might find one of them interesting, but of course nobody would be crazy enough to give you *two* time slots. Clearly this theory is crap, because this is the second time I've tried it out, and in both cases *both* of my talks have been selected. Thanks a lot.

The good news is that at least they're in consecutive time slots. So while I'll be hoarse by the end, I only have to psych myself up once.

Bellingham is convenient to reach from Vancouver, Seattle, and Portland, among other places, and the conference is free, so take your chance to come see it! If you like open source, it promises to be... filled with open source.

Um, and I promise to start writing something other than my presentation schedule in this space again soon. I realize how annoying it is when a blog diary turns into a glorified presentation schedule. I'm working on it.

Update 2011/05/02: By popular request, my slides from the conference:

Enjoy.

Syndicated 2011-04-26 03:02:29 (Updated 2011-05-02 22:06:06) from apenwarr - Business is Programming

Avery is doing a presentation in Mountain View (maybe about bup)

Where: Hacker Dojo in Mountain View, California
When: 7:30pm on Tuesday, April 12
Why: Because (as Erin tells me) I have trouble saying no.

I have heard unconfirmed rumours that there are programmers of some sort somewhere in the region of Silicon Valley, despite how silly this sounds in concept. (Silicon? That stuff you make beaches out of? Why would any nerd go anywhere near a beach?) Nevertheless, after my thrilling and/or mind-boggling presentation and/or rant about bup in San Francisco on Saturday, there was some interest in having me do something similar out in the middle of nowhere, so I accepted.

You're invited! I'm not expecting a very big crowd, given the short notice, which means it will probably be more Q&A and less presentation. But I'll bring my presentation slides just in case. There will be demos. There will be oohing and aahing, I guarantee it, even if I have to do it myself.

I might also talk about sshuttle or redo, or maybe Linux arcnet poetry, if there are any poetry lovers in the audience. (I doubt there will be any arcnet users on the beach, so a talk on arcnet is unlikely.)

Syndicated 2011-04-11 17:11:33 from apenwarr - Business is Programming

Avery's doing a bup presentation in San Francisco

When: Saturday, April 9, 2:30 PM
Where: San... Francisco... somewhere

The venue is not quite certain yet since we don't know how many people are actually interested.

If you want to see me talk about how we took the git repository format and made a massively more scalable implementation of it, as well as my evil (disclaimer: I do not speak for my employer) schemes for the future, please email [tony at godshall.org] with subject "bup SF". Thanks to Tony for organizing this little adventure.

Tell your friends!

Or skip the boring meatspace stuff and jump straight to bup on github.

Syndicated 2011-04-05 22:41:46 from apenwarr - Business is Programming

3 Apr 2011 (updated 3 Apr 2011 at 18:04 UTC)

What you can't say

Normally I don't write a new post just about updates to a previous post, but I have added quite a lot of clarifications and notes to I hope IPv6 never catches on, in response to copious feedback I've received through multiple channels. Perhaps you will enjoy them.

One very interesting trend is that the comments about the article on news.ycombinator were almost uniformly negative - especially the highly upvoted comments - while I simultaneously saw tons and tons of positive comments on Twitter (it was my most popular article ever) and numerous people wrote me polite messages with agreement and/or minor suggestions and clarifications. Only one person emailed me personally to say I was an idiot (although it was my first, at least from people I don't know).

The trend is curious, because normally news.yc has more balanced debate. Either I'm utterly completely wrong and missed every point somehow (as a highly upvoted point-by-point rebuttal seems to claim)... or I seriously pinched a nerve among a certain type of people.

All this reminds me of Paul Graham's old article, What You Can't Say. Perhaps some people's world views are threatened by the idea of IPv6 being pointless and undesirable.

And all *that*, in turn, reminded me of my old article series about XML, sadomasochism, and Postel's Law. I was shocked at the time that some people actually think Postel's Law is *wrong*, but now I understand. Some people believe the world must be *purified*; hacky workarounds are bad; they must be removed. XML parsers must not accept errors. Internet Protocol must not accept anything less than one address per device. Lisp is the one truly pure language. And so on.

Who knows, maybe those people will turn out to be right in the end. But I'm with Postel on this one. Parsers that refuse to parse, Internet Protocol versions that don't work with 95% of servers on the Internet, and programming languages that are still niche 50+ years later... sometimes you just have to relax and let it flow.

Update 2011/04/02: Another big example of a "less good" technology failing to catch on for similar reasons as IPv6: x86 vs. non-x86 architectures. Everyone knows x86 is a hack... but we're stuck with it. (ARM is making an impact in power-constrained devices, though, just like IPv6 is making an impact in severely IPv4-constrained subnets. Who will win? I don't know, I just know that IPv4 and x86 are less work for me, the programmer, right now.)

Thinking about the problem in that way - why "worse" (hackier) technologies tend to stick around while the purified replacements don't - reminded me of Worse is Better by Richard Gabriel. In retrospect, when I divided people above into "purists" and "Postel's Law believers", I guess I was just restating Gabriel's much better-written point about "worse-is-better" vs. "the right thing." If you haven't read the article, read it; it's great. And see if you're firmly on one side or the other, and if you think the other side is clearly just crazy.

If you think that, then *that*, in turn, brings us back to "What You Can't Say." The truth is, you *can* say it, but people will jump down your throat for saying it. And not everybody will; only the very large group of people in the camp you're not in.

That's why we have religious wars. Figurative ones, anyway. I suspect real religious wars are actually about something else.

Syndicated 2011-04-03 01:30:12 (Updated 2011-04-03 18:04:57) from apenwarr - Business is Programming

28 Mar 2011 (updated 29 Mar 2011 at 02:13 UTC)

I hope IPv6 *never* catches on

(Temporal note: this article was written a few days ago and then time-released.)

This year, like every year, will be the year we finally run out of IPv4 addresses. And like every year before it, you won't be affected, and you won't switch to IPv6.

I was first inspired to write about IPv6 after I read an article by Daniel J. Bernstein (the qmail/djbdns/redo guy) called The IPv6 Mess. Now, that article appears to be from 2002 or 2003, if you can trust its HTTP Last-Modified date, so I don't know if he still agrees with it or not. (If you like trolls, check out the recent reddit commentary about djb's article.) But 8 years later, his article still strikes me as exactly right.

Now, djb's commentary, if I may poorly paraphrase, is really about why it's impossible (or perhaps more precisely, uneconomical, in the sense that there's a chicken-and-egg problem preventing adoption) for IPv6 to catch on without someone inventing something fundamentally new. His point boils down to this: if I run an IPv6-only server, people with IPv4 can't connect to it, and at least one valuable customer is *surely* on IPv4. So if I adopt IPv6 for my server, I do it in addition to IPv4, not in exclusion. Conversely, if I have an IPv6-only client, I can't talk to IPv4-only servers. So for my IPv6 client to be useful, either *all* servers have to support IPv6 (not likely), or I *must* get an IPv4 address, perhaps one behind a NAT.

In short, any IPv6 transition plan involves *everyone* having an IPv4 address, right up until *everyone* has an IPv6 address, at which point we can start dropping IPv4, which means IPv6 will *start* being useful. This is a classic chicken-and-egg problem, and it's unsolvable by brute force; it needs some kind of non-obvious insight. djb apparently hadn't seen any such insight by 2002, and I haven't seen much new since then.

(I'd like to meet djb someday. He would probably yell at me. It would be awesome. </groupie>)

Still, djb's article is a bit limiting, because it's all about why IPv6 physically can't become popular any time soon. That kind of argument isn't very convincing on today's modern internet, where people solve impossible problems all day long using the unstoppable power of "not SQL", Ruby on Rails, and Ajax to-do list applications (ones used by breakfast cereal companies!).

No, allow me to expand on djb's argument using modern Internet discussion techniques:

Top 10 reasons I hope IPv6 never catches on

Just kidding. No, we're going to do this apenwarr-style:

What I hate about IPv6

Really, there's only one thing that makes IPv6 undesirable, but it's a doozy: the addresses are just too annoyingly long. 128 bits: that's 16 bytes, four times as long as an IPv4 address. Or put another way, IPv4 contains almost enough addresses to give one to each human on earth; IPv6 has enough addresses to give 39614081257132168796771975168 (that's 2**95) to every human on earth, plus a few extra if you really must.
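If you want to check my arithmetic, here it is as a quick Python session (calling the world population 2**33, which rounds 8-ish billion up a bit):

	>>> 2**128                     # total IPv6 addresses
	340282366920938463463374607431768211456
	>>> 2**128 // 2**33            # addresses per human, give or take
	39614081257132168796771975168
	>>> 2**128 // 2**33 == 2**95
	True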

Of course, you wouldn't really do that; you would waste addresses to make subnetting and routing easier. But here's the horrible ironic part of it: all that stuff about making routing easier... that's from 20 years ago!

Way back in the IETF dark ages when they were inventing IPv6 (you know it was the dark ages, because they invented the awful will-never-be-popular IPsec at the same time), people were worried about the complicated hardware required to decode IPv4 headers and route packets. They wanted to build the fastest routers possible, as cheaply as possible, and IPv4 routing tables are annoyingly complex. It's pretty safe to assume that someday, as the Internet gets more and more crowded, nearly every single /24 subnet in IPv4 will be routed to a different place. That means - hold your breath - an astonishing 2**24 routes in every backbone router's routing table! And those routes might have 4 or 8 or even 16 bytes of information each! Egads! That's... that's... 256 megs of RAM in every single backbone router!
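The back-of-envelope math behind that scary number, in Python:

	>>> 2**24 * 16                 # 16 million routes, 16 bytes each
	268435456
	>>> (2**24 * 16) // (1024*1024)  # in megabytes
	256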

Oh. Well, back in 1995, putting 256 megs of RAM in a router sounded like a big deal. Nowadays, you can get a $99 Sheevaplug with twice that. And let me tell you, the routers used on Internet backbones cost a lot more than $99.

It gets worse. IPv6 is much more than just a change to the address length; they totally rearranged the IPv4 header format (which means you have to rewrite all your NAT and firewall software, mostly from scratch). Why? Again, to try to reduce the cost of making a router. Back then, people were seriously concerned about making IPv6 packets "switchable" in the same way ethernet packets are: that is, using pure hardware to read the first few bytes of the packet, look it up in a minimal routing table, and forward it on. IPv4's variable-length headers and slightly strange option fields made this harder. Some would say impossible. Or rather, they would, if it were still 1995.

Since then, FPGAs and ASICs and DSPs and microcontrollers have gotten a whole lot cheaper and faster. If Moore's Law calls for a doubling of transistor performance every 18 months, then between 1995 and 2011 (16 years), that's 10.7 doublings, or 1663 times more performance for the price. So if your $10,000 router could route 1 gigabit/sec of IPv4 in 1995 - which was probably pretty good for 1995 - then nowadays it should be able to route 1663 gigabits/sec. It probably can't, for various reasons, but you know what? I sincerely doubt that's IPv4's fault.
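Again, the arithmetic (a sketch; substitute your own favourite doubling period if you disagree with 18 months):

	>>> doublings = round((2011 - 1995) / 1.5, 1)  # one doubling per 18 months
	>>> doublings
	10.7
	>>> int(2 ** doublings)
	1663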

If it were still 1995 and you had to route, say, 10 gigabits/sec for the same price as your old 1 gigabit/sec router using the same hardware technology, then yeah, making a more hardware-friendly packet format might be your only option. But the router people somehow forgot about Moore's Law, or else they thought (indications are that they did) that IPv6 would catch on much faster than it has.

Well, it's too late now. The hardware-optimized packet format of IPv6 is worth basically zero to us on modern technology. And neither is the simplified routing table. But if we switch to IPv6, we still have to pay the software cost of those things, which is extremely high. (For example, Linux IPv6 iptables rules are totally different from IPv4 iptables rules. So every Linux user would need to totally change their firewall configuration.)

So okay, the longer addresses don't fix anything technologically, but we're still running out of addresses, right? I mean, you can't argue with the fact that 2**32 is less than the number of people on earth. And everybody needs an IP address, right?

Well, no, they don't:

The rise of NAT

NAT is Network Address Translation, sometimes called IP Masquerading. Basically, it means that as a packet goes through your router/firewall, the router transparently changes your IP address from a private one - one reused by many private subnets all over the world and not usable on the "open internet" - to a public one. Because of the way TCP and UDP work, you can safely NAT many, many private addresses onto a single public address.
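On Linux, the whole magic trick is about two commands (a minimal sketch; I'm assuming eth0 is your public interface and your LAN is on a private subnet, so adjust the names to taste):

	# Let the kernel forward packets, then rewrite the source address
	# of anything leaving via eth0 to eth0's public IP.
	echo 1 > /proc/sys/net/ipv4/ip_forward
	iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE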

So no. Not everybody in the world needs a public IP address. In fact, *most* people don't need one, because most people make only outgoing connections, and you don't need your own public IP address to make an outgoing connection.

By the way, the existence of NAT (and DHCP) has largely eliminated another big motivation behind IPv6: automatic network renumbering. Network renumbering used to be a big annoying pain in the neck; you'd have to go through every computer on your network, change its IP address, router, DNS server, etc, rewrite your DNS settings, and so on, every time you changed ISPs. When was the last time you heard about that being a problem? A long, long time ago, because once you switch to private IP subnets, you virtually never have to renumber again. And if you use DHCP, even the rare mandatory renumbering (like when you merge with another company and you're both using 192.168.1.0/24) is rolled out automatically from a central server.

Okay, fine, so you don't need more addresses for client-only machines. But every server needs its own public address, right? And with the rise of peer-to-peer networking, everyone will be a server, right?

Well, again, no, not really. Consider this for a moment:

Every HTTP Server on Earth Could Be Sharing a Single IP Address and You Wouldn't Know The Difference

That's because HTTP/1.1 (which is what *everyone* uses now... speaking of avoiding chicken/egg problems) supports "virtual hosts." You can connect to an IP address on port 80, and you provide a Host: header at the beginning of the connection, telling it which server name you're looking for. The IP you connect to can then decide to route that request anywhere it wants.
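To see just how little the IP address matters, here's an entire minimal HTTP/1.1 request; two different sites on the same IP:port would differ only in the Host line:

	GET / HTTP/1.1
	Host: apenwarr.ca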

In short, HTTP is IP-agnostic. You could run HTTP over IPv4 or IPv6 or IPX or SMS, if you wanted, and you wouldn't need to care which IP address your server had. In a severely constrained world, Linode or Slicehost or Comcast or whoever could simply proxy all the incoming HTTP requests to their network, and forward the requests to the right server.

(See the very end of this article for discussion of how this applies to HTTPS.)

Would it be a pain? Inefficient? A bit expensive? Sure it would. So was setting up NAT on client networks, when it first arrived. But we got used to it. Nowadays we consider it no big deal. The same could happen to servers.

What I'd expect to happen is that as the IPv4 address space gets more crowded, the cost of a static IP address will go up. Thus, fewer and fewer public IP addresses will be dedicated to client machines, since clients won't want to pay extra for something they don't need. That will free up more and more addresses for servers, who will have to pay extra.

It'll be a *long* time before we reach 4 billion (2**32) server IPs, particularly given the long-term trend toward more and more (infinitely proxyable) HTTP. In fact, you might say that HTTP/1.1 has successfully positioned itself as the winning alternative to IPv6.

So no, we are obviously not going to run out of IPv4 addresses. Obviously. The world will change, as it did when NAT changed from a clever idea to a worldwide necessity (and earlier, when we had to move from static IPs to dynamic IPs) - but it certainly won't grind to a halt.

Next:

It is possible to do peer-to-peer when both peers are behind a NAT.

Another argument against widespread NATting is that you can't run peer-to-peer protocols if both ends are behind a NAT. After all, how would they figure out how to connect to each other? (Let's assume peer-to-peer is a good idea, for purposes of this article. Don't just think about movie piracy; think about generally improved distributed database protocols, peer-to-peer filesystem backups, and such.)

I won't go into this too much, other than to say that there are already various NAT traversal protocols out there, and as NAT gets more and more annoyingly mandatory, those protocols and implementations are going to get much better.
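The core trick in most of those protocols is plain old UDP hole punching: each peer learns the other's public ip:port (from some rendezvous server, which I'm hand-waving away here) and they fire packets at each other until both NATs think they're seeing replies to outgoing traffic. A toy sketch in Python; peer_addr is assumed to be already known:

	import socket

	def punch(local_port, peer_addr):
	    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
	    s.bind(('', local_port))
	    s.settimeout(1.0)
	    for _ in range(5):
	        s.sendto(b'knock', peer_addr)     # opens a mapping in *my* NAT
	        try:
	            data, addr = s.recvfrom(512)  # peer's packet comes through it
	            return s, addr                # hole punched; talk normally now
	        except socket.timeout:
	            pass                          # retry; both sides need a knock
	    raise IOError('gave up; probably a symmetric NAT')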

Note too that NAT traversal protocols don't have a chicken-and-egg problem like IPv6 does, for the same reason that dynamic IP addresses don't, and NAT itself doesn't. The reason is: if one side of the equation uses it, but the other doesn't, you might never know. That, right there, is the one-line description of how you avoid chicken-and-egg adoption problems. And how IPv6 didn't.

IPv6 addresses are as bad as GUIDs

So here's what I really hate about IPv6: 16-byte (32 hex digit) addresses are impossible to memorize. Worse, auto-renumbering of networks, facilitated by IPv6, means that anything I memorize today might be totally different tomorrow.

IPv6 addresses are like GUIDs (which, notably, also got really popular in the 1990s dark ages, although luckily most of us have learned our lessons since then). The problem with GUIDs is now well-known: although they're globally unique, they're also totally unrecognizable.

If GUIDs were a good idea, we would use them instead of URLs. Are URLs perfect? Does anyone love Network Solutions? No, of course not. But it's 1000x better than looking at http://b05d25c8-ad5c-4580-9402-106335d558fe and trying to guess if that's *really* my bank's web site or not.

The counterargument, of course, is that DNS is supposed to solve this problem. Give each host a GUID IPv6 address, and then just map a name to that address, and you can have the best of both worlds.

Sounds good, but isn't actually. First of all, go look around in the Windows registry sometime, specifically the HKEY_CLASSES_ROOT section. See how super clean and user-friendly it isn't? Barf. But furthermore, DNS on the Internet is still a steaming pile of hopeless garbage. When I bring my laptop to my friend's house and join his WLAN, why can't he ping it by name? Because DNS sucks. Why doesn't it show up by name in his router control panel so he knows which box is using his bandwidth? Because DNS sucks. Why can the Windows server browse list see it by name (sometimes, after a random delay, if you're lucky), even though DNS can't? Because they got sick of DNS and wrote something that works. Why do we still send co-workers hyperlinks with IP addresses in them instead of hostnames? Because the fascist sysadmin won't add a DNS entry for the server Bob set up on his desktop PC.

DNS is, at best, okay. It will get better over time, as necessity dictates. All the problems I listed above are mostly solved already, in one form or another, in different DNS, DHCP, and routing products. It's certainly not the DNS *protocol* that's to blame, it's just how people use it.

But still, if you had to switch to IPv6, you'd discover that those DNS problems that were a nuisance yesterday are suddenly a giant fork stabbing you in the face today. I'd rather they fixed DNS *before* making me switch to something where I can't possibly remember my IP addresses anymore, thanks.

Server-side NAT could actually make the world a better place

So that's my IPv6 rant. I want to leave you with some good news, however: I think the increasing density of IPv4 addresses will actually make the Internet a better place, not a worse one.

Client-side NAT had an unexpected huge benefit: security. NAT is like "newspeak" in Orwell's 1984: we remove nouns and verbs to make certain things inexpressible. For example, it is not possible for a hacker in Outer Gronkstown to even express to his computer the concept of connecting to the Windows File Sharing port on your laptop, because from where he's standing, there is no name that unambiguously identifies that port. There is no packet, IPv4 or IPv6 or otherwise, that he can send that will arrive at that port.

A NAT can be unbelievably simple-minded, and just because of that one limitation, it will vastly, insanely, unreasonably increase your security. As a society of sysadmins, we now understand this. You could give us all the IPv6 addresses in the world, and we'd still put our corporate networks behind a NAT. No contest.

Server-side NAT is another thing that could actually make life better, not worse. First of all, it gives servers the same security benefits as clients - if I accidentally leave a daemon running on my server, it's not automatically a security hole. (I actually get pretty scared about the vhosts I run, just because of those accidental holes.)

But there's something else, which I would be totally thrilled to see fixed. You see, IPv4 addresses aren't really 32 bits. They're actually 48 bits: a 32-bit IP address plus a 16-bit port number. People treat them as separate things, but what NAT teaches us is that they're really two parts of the same whole - the flow identifier - and you can break them up any way you want.

The address of my personal apenwarr.ca server isn't 74.207.252.179; it's 74.207.252.179:80. As a user of my site, you didn't have to type the IP (which was provided by DNS) or the port number (which is a hardcoded default in your web browser), but if I started another server, say on port 8042, then you *would* have to enter the port. Worse, the port number would be a weird, meaningless, magic number, akin to memorizing an IP address (though mercifully, only half as long).

So here's my proposal to save the Internet from IPv6: let's extend DNS to give out not only addresses, but port numbers. So if I go to www2.apenwarr.ca, it could send me straight to 74.207.252.179:8042. Or if I ask for ssh.apenwarr.ca, I get 74.207.252.179:22.
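As it happens, DNS already grew a record type shaped like this: SRV records (RFC 2782) hand back a host *and* a port per service; the catch is that web browsers never adopted them for plain HTTP. In zone-file form, my examples would look something like this (hypothetical records, obviously):

	; _service._proto.name        class type prio weight port target
	_http._tcp.www2.apenwarr.ca.  IN    SRV  0    0      8042 apenwarr.ca.
	_ssh._tcp.ssh.apenwarr.ca.    IN    SRV  0    0      22   apenwarr.ca.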

Someday, when IPv4 addresses get too congested, I might have to share that IP address with five other people, but that'll be fine, because each of us can run our own web server on something other than port 80, and DNS would transparently give out the right port number.

This also solves the problem with HTTPS. Alert readers will have noticed, in my comments above, that HTTPS can't support virtual hosts the same way HTTP does, because of a terrible flaw in its certificate handling. Someday, someone might make a new version of the HTTPS standard without this terrible flaw, but in the meantime, transparently supporting multiple HTTPS servers via port numbers on the same machine eliminates the problem; each port can have its own certificate.

(Update 2011/03/28: zillions of people wrote to remind me about SNI, an HTTPS extension that allows it to work with vhosts. Thanks! Now, some of those people seemed to think this refutes my article somehow, which is not true. In fact, the existence of an HTTPS vhosting standard makes IPv6 even *less* necessary. Then again, the standard doesn't work with IE6.)

This proposal has very minor chicken-and-egg problems. Yes, you'll have to update every operating system and every web browser before you can safely use it for *everything*. But for private use - for example, my personal ssh or VPN or testing web server - at least it'll save me remembering stupid port numbers. Like the original DNS, it can be adopted incrementally, and everyone who adopts it sees a benefit. Moreover, it's layered on top of existing standards, and routable over the existing Internet, so enabling it has basically zero admin cost.

Of course, I can't really take credit for this idea. It's already been invented and is being used in a few places.

Embrace IPv4. Love it. Appreciate the astonishing long-lasting simplicity and resilience of a protocol that dates back to the 1970s. Don't let people pressure you into stupid, awful, pain-inducing, benefit-free IPv6. Just relax and have fun.

You're going to be enjoying IPv4 for a long, long time.

Syndicated 2011-03-27 02:00:38 (Updated 2011-03-29 02:13:30) from apenwarr - Business is Programming

27 Mar 2011 (updated 27 Mar 2011 at 02:09 UTC)

Time capsule: assorted cooking advice

Hi all,

As I mentioned previously, I'm about to disappear into the Google Vortex, across which are stunning vistas of trees and free food and butterflies as far as the eye can see. Thus, I plan to never ever cook for myself again, allowing me to free up all the neurons I had previously dedicated to remembering how.

Just in case I'm wrong, let me exchange some of those neurons for electrons.

Note that I'm not a "foodie" or a gourmet or any of that stuff. This is just baseline information needed in order to be relatively happy without dying of starvation, boredom, or (overpriced ingredient induced) bankruptcy, in countries where you can die of bankruptcy.

Here we go:

  • Priority order for time-saving appliances: microwave, laundry machine, dishwasher, food processor, electric grill. Under no circumstances should you get a food processor before you get a dishwasher. Seriously. And I include laundry machine here because if you have one, you can do your laundry while cooking, which reduces the net time cost of cooking.

  • Cast-iron frying pans are a big fad among "foodies" nowadays. Normally I ignore fads, but in this case, they happen to be right. Why cast iron pans are better: 1) they're cheap (don't waste your money on expensive skillets! It's cast iron, it's supposed to be cheap!); 2) unlike nonstick coatings, they never wear out; 3) you're not even supposed to clean them carefully, because microbits from the previous meal help the "seasoning" process *and* make food taste better; 4) they never warp; 5) they heat very gradually and evenly, so frying things (like eggs) is reliable and repeatable; 6) you can use a metal flipper and still never worry about damage; 7) all that crap advice about "properly seasoning your skillet before use" is safe to ignore, because nothing you can do can ever possibly damage it, because it's freakin' cast iron. (Note: get one with a flat bottom, not one with ridges. The latter has fewer uses.)

  • You can "bake" a potato by prodding it a few times with a fork, then putting it on a napkin or plate, microwaving it for 7 minutes, and adding butter. I don't know of any other form of healthy, natural food that's as cheap and easy as this. The Irish (from whom I descend, sort of) reputedly survived for many years, albeit a bit unhappily, on a diet of primarily potatoes. (Useless trivia: the terrible "Irish potato famine" was deadly because the potatoes ran out, not because potatoes are bad.) Amaze your friends at work by bringing a week's worth of "lunch" to the office in the form of a sack of potatoes. (I learned that trick from a co-op student once. We weren't paying him much... but we aimed to hire resourceful people with bad negotiating skills, and it paid off.)

  • Boiled potatoes are also easy. You stick them in a pot of water, then boil it for half an hour, then eat.

  • Bad news: the tasty part of food is the fat. Good news: nobody is sure anymore if fat is bad for you or not, or what a transfat even is, so now's your chance to flaunt it before someone does any real science! Drain the fat if you must, but don't be too thorough about it.

  • Corollary: cheaper cuts of meat usually taste better, if prepared correctly, because they have more fat than expensive cuts. "Correctly" usually means just cooking on lower heat for a longer time.

  • Remember that cooking things for longer is not the same as doing more work. It's like wall-clock time vs. user/system time in the Unix "time" command. Because of this, you can astonish your friends by making "roast beef", which needs to cook in the oven for several hours, without using more than about 20 minutes of CPU time.

  • Recipe for french toast: break two eggs into a bowl, add a splash of milk, mix thoroughly with a fork. Dip slices of bread into bowl. Fry in butter in your cast iron pan at medium heat. Eat with maple syrup. And don't believe anyone who tells you more ingredients are necessary.

  • Recipe for perogies: buy frozen perogies from store (or Ukrainian grandmother). Boil water. Add frozen perogies to water. Boil them until they float, which is usually 6-8 minutes. Drain water. Eat.

  • Recipe for meat: Slice, chop, or don't, to taste. Brown on medium-high heat in butter in cast-iron skillet (takes about 2 minutes). Turn heat down to low. Add salt and pepper and some water so it doesn't dry out. Cover. Cook for 45-60 minutes, turning once, and letting the water evaporate near the end.

  • Recipe for vegetables: this is a trick question. You can just not cook them. (I know, right? It's like food *grows on trees*!)

Hope this advice isn't too late to be useful to you. So long, suckers!

Syndicated 2011-03-24 03:38:29 (Updated 2011-03-27 02:09:10) from apenwarr - Business is Programming

The Google Vortex

For a long time I referred to Google as the Programmer Black Hole: my favourite programmers get sucked in, and they never come out again. Moreover, the more of them that get sucked in, the more its gravitation increases, accelerating the pull on those that remain.

I've decided that this characterization isn't exactly fair. Sure, from our view in the outside world, that's obviously what's happening. But rather than all those programmers being compressed into a spatial singularity, they're actually emerging into a parallel universe on the other side. A universe where there *is* such a thing as a free lunch, threads don't make your programs crash, parallelism is easy, and you can have millions of customers but provide absolutely no tech support and somehow get away with it. A universe with self-driving cars, a legitimate alternative to C, a working distributed filesystem, and the entire Internet cached in RAM.

A universe where on average, each employee produced $425,450 in profit in 2010, after deducting their salary and all other expenses. (Or alternatively: $1.2 million in revenue.)

I don't much like the fact of the Google Vortex. It's very sad to me that there are now two programmer universes: the haves and the have-nots. More than half of the programmers I personally know have already gone to the other side. Once you do, you can suddenly work on more interesting problems, with more powerful tools, with on average smarter people, with few financial constraints... and you don't have to cook for yourself. For the rest of us left behind, the world looks more and more threadbare.

A few people do manage to emerge from the vortex, providing circumstantial evidence that human life does still exist on the other side. Many of them emerge talking about bureaucracy, politics, "big company attitude", projects that got killed, and how things "aren't like they used to be." And also how Google stock options aren't worth much because they've already IPO'd. But sadly, this is a pretty self-selecting group, so you can't necessarily trust what they're complaining about; presumably they'll be complaining about something, or they wouldn't have left.

What you really want to know is what the people who didn't leave are thinking. Which is a problem, because Google is so secretive that nobody will tell you much more than, "Google has free food. You should come work here." And I already knew that.

So let's get to the point: in the name of science (and free food, and because all my friends said I should go work there), I've agreed to pass through the Google Vortex starting on Monday. The bad news for you is, once I get through to the other side, I won't be able to tell you what I discover, so you're no better off. Google doesn't stop its employees from blogging, but you might have noticed that the blogs of Googlers don't tell you the things you really want to know. If the NDA they want me to sign is any indication, I won't be telling you either.

What I can do, however, is give you some clues. Here's what I'm hoping to do at Google:

  • Work on customer-facing real technology products: not pure infrastructure and not just web apps.
  • Help solve some serious internet-wide problems, like traffic shaping, real-time communication, bufferbloat, excessive centralization, and the annoying way that surprise popularity usually means losing more money (hosting fees) by surprise.
  • Keep coding. But apparently unlike many programmers, I'm not opposed to managing a few people too.
  • Keep working on my open source projects, even if it's just on evenings and weekends.
  • Eat a *lot* of free food.
  • Avoid the traps of long release cycles and ignoring customer feedback.
  • Avoid switching my entire life to Google products, in the cases where they aren't the best... at least not without first making them the best.
  • Stay so highly motivated that I produce more valuable software, including revenue, inside Google than I would have by starting a(nother) startup.

Talking to my friends "on the inside", I believe I can do all those things. If I can't achieve at least most of it, then I guess I'll probably quit. Or else I will discover, with the help of that NDA, that there's something even *better* to work on.

So that's my parting gift to you, my birth universe: a little bit of circumstantial evidence to watch for. Not long from now, assuming the U.S. immigration people let me into the country, I'll know too much proprietary information to be able to write objectively about Google. Plus, speaking candidly in public about your employer is kind of bad form; even this article is kind of borderline bad form.

This is probably the last you'll hear from me on the topic. From now on, you'll have to draw your own conclusions.

Syndicated 2011-03-23 22:29:01 from apenwarr - Business is Programming

Suggested One-Line Plot Summaries, Volume 1 (of 1)

"The Summer of My Disco-tent."

Discuss.

Syndicated 2011-03-20 20:42:42 from apenwarr - Business is Programming

Celebrating Failure

I receive the monthly newsletter from Engineers Without Borders, an interesting organization providing engineering services to developing countries.

Whether because they're Canadian or because they're engineers, or both, they are unusual among aid organizations because they focus on understanding what didn't work. For the last three years, they've published Failure Reports detailing their specific failures. The reports make an interesting read, not just for aid organizations, but for anyone trying to manage engineering teams.

I wish more organizations, and even more individuals, would write documents like that. I probably should too.

Syndicated 2011-03-18 22:03:22 from apenwarr - Business is Programming

Parsing ought to be easier

I just realized that I've spent too much time writing about stuff I actually know about lately. (Although okay, that last article was a stretch.) So here's a change of pace.

I just read an interesting article about parsing called Parsing: The Solved Problem that Isn't. It talks about "composability" of grammars: that is, what it would take to embed literal SQL into your C parser, for example.

It's a very interesting question that I hadn't thought of before. Interesting, because every parser I've seen would be *hellish* to try to compose with another grammar. Take the syntax highlighting in your favourite editor and try to have it auto-shift from one language to another (PHP vs. HTML, or python vs. SQL, or perl vs. regex). It never works. Or if you're feeling really brave, take the C++ parser from gcc and use it to do syntax highlighting in wordpress for when you insert some example code. Not likely, right?

The article was a nice reminder of what I had previously learned about parsing in school: context free grammars, LL(k), and so on. Before I went to school, I had never heard of or used any of those things, but I had written a lot of parsers; I now know that what I had independently "invented" is called recursive descent and it seems to be pretty unfashionable among people who "know" parsing.

I admit it; I'm a troublemaker and I fail badly at academia. I still always write all my parsers as recursive descent. Sometimes I even don't split the tokenizer from the grammar. Booyah! I even write non-conforming XML parsers sometimes, and use them for real work.

So if you're a parsing geek, you'd better leave now, because this isn't going to get any prettier.

Anyway, here's my big uneducated non-academic parsing neophyte theory:

You guys spend *way* too much time caring about arithmetic precedence.

See, arithmetic precedence is important; languages that don't understand it (like Lisp) will never be popular, because they prevent you from writing what you mean in a way that looks like what you mean. Fine. You've gotta have it. But it's a problem, because context-free grammars (and their subsets) have a *seriously hard time* with it. You can't just say "addition looks like E+E" and "multiplication looks like E*E" and "an expression E is either a number or an addition or a multiplication", because then 1+2*3 might mean either (1+2)*3 or 1+(2*3), and those are two different things. Every generic parsing algorithm seems to require hackery to deal with things like that. Even my baby, recursive descent, has a problem with it.

So here's what I say: just let it be a hack!

Because precedence is only a tiny part of your language, and the *rest* is not really a problem at all.

When I write a parser that cares about arithmetic precedence - which I do, sometimes - the logic goes like this:

  • ah, there's the number one
  • a plus sign!
  • the number two! Sweet! That's 1+2! It's an expression!
  • a multiplication sign. Uh oh.
  • the number three. Hmm. Well, now we have a problem.
  • (hacky hacky swizzle swizzle) Ta da! 1+(2*3).

I'm not proud of it, but it happens. You know what else? Almost universally, the *rest* of the parser, outside that one little hacky/swizzly part, is fine. The rest is pretty easy. Matching brackets? Backslash escapes? Strings? Function calls? Code blocks? All those are easy and non-ambiguous. You just walk forward in the text one token at a time and arrange your nice tree.
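In code, the whole hacky/swizzly part is maybe a dozen lines. A minimal sketch in Python (not production code): recursive descent everywhere, plus one precedence-climbing loop for binary operators:

	PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

	def parse_expr(tokens, min_prec=1):
	    left = parse_atom(tokens)
	    # The "swizzle": only grab operators at least as binding as
	    # min_prec; recurse with a higher minimum for the right side.
	    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
	        op = tokens.pop(0)
	        right = parse_expr(tokens, PREC[op] + 1)
	        left = (op, left, right)
	    return left

	def parse_atom(tokens):
	    tok = tokens.pop(0)
	    if tok == '(':
	        expr = parse_expr(tokens)
	        tokens.pop(0)  # discard the ')'
	        return expr
	    return int(tok)

	# parse_expr(['1', '+', '2', '*', '3']) returns ('+', 1, ('*', 2, 3))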

The dirty secret about parsing theory is that if you're a purist, it's almost impossible, but if you're willing to put up with a bit of hackiness in one or two places, it's *really* easy. And now that computers are fast, your algorithm rarely has to be even close to optimized.

Even language composition is pretty easy, but only in realistic cases, not generalized ones. If you expect this to parse:

	if (parseit) {
		query = "select "booga" + col1 from table where n="}"";
	}

Then you've got a problem. Interestingly, a human can do it. A computer *could* do it too. You can probably come up with an insane grammar that will make that work, if you want to allow for potentially exponential amounts of backtracking and no chance of separating scanning from parsing. (My own preferred recursive descent technique is almost completely doomed here, so you'll need to pull out the really big Ph.D. parsing cannons.) But it *is* possible. You know it is, because you can look at the above code and know what it means.

So that's an example of the "hard problems" that you're talking about when you try to define composability of independent context-free grammars that weren't designed for each other. It's a virtually impossible problem. An interesting one, but not even one that's guaranteed to have a generalizable solution. Compare it, however, with this:

	if (parseit) {
		query = { select "booga" + col1 from table where n = "}" };
	}

Almost the same thing. But this time, the SQL is embedded inside braces instead of quotes. Aha, but that n="}" business is going to screw us over, right? The C-style parser will see the close-brace and stop parsing!

No, not at all. A simple recursive descent parser, without even needing lookahead, will have no trouble with this one, because it will clearly know it's inside a string at the time it sees the close-brace. Obviously you need to be using a SQL-style tokenizer inside the SQL section, and your SQL-style tokenizer needs to somehow know that when it finds the mismatched brace, that's the end of its job, time to give it back to C. So yeah, if you're writing this parser "Avery style", you're going to have to be writing it as one big ugly chunk of C + SQL parser all in one. But it won't be any harder than any other recursive descent parser, really; just twice the size because it's for two languages instead of one.
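Here's roughly what that looks like (a sketch in Python, assuming double-quoted strings with no escapes): the scanner for the braced sublanguage only has to know about strings and brace nesting, and a mismatched close-brace at depth zero hands control back to the C parser:

	def scan_braced(text, i):
	    # Called just after the opening '{'. Returns the sublanguage
	    # body and the position just past its closing '}'.
	    depth, out = 0, []
	    while True:
	        c = text[i]
	        if c == '"':                 # string: a '}' inside is literal
	            j = text.index('"', i + 1)
	            out.append(text[i:j+1])
	            i = j + 1
	        elif c == '{':
	            depth += 1; out.append(c); i += 1
	        elif c == '}':
	            if depth == 0:           # the mismatched brace: we're done
	                return ''.join(out), i + 1
	            depth -= 1; out.append(c); i += 1
	        else:
	            out.append(c); i += 1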

So here's my dream: let's ignore the parsing purists for a moment. Let's accept that operator precedence is a special case, and just hack around it when needed. And let's only use composability rules that are easy instead of hard - like using braces instead of quotes when introducing sublanguages.

Can we define a language for grammars - a parser generator - that easily lets us do *that*? And just drop into a lower-level implementation for that inevitable operator precedence hack.

Pre-emptive snarky comments: This article sucks. It doesn't actually solve any problems of parsing or even present a design, it just complains a lot. Also, this type of incorrect and dead-end thinking is already well covered in the literature, it's just that I'm too lazy to send you a link to the article right now because I'm secretly a troll and would rather put you down than be constructive. Also, the author smells funny.

Response to pre-emptive snarky comments: All true. I would appreciate those links though, if you have them, and I promise not to call you a troll. To your face.

Syndicated 2011-03-16 02:13:11 from apenwarr - Business is Programming
