Recent blog entries for apenwarr

19 Jul 2014 (updated 19 Jul 2014 at 23:07 UTC)

Wifi maximum range, lies, and the signal to noise ratio

Last time we talked about how wifi signals cross about 12 orders of magnitude in terms of signal power, from +30dBm (1 watt) to -90dBm (1 picowatt). I mentioned my old concern back in school about splitters causing a drop to 1/n of the signal on a wired network, where n is the number of nodes, and said that doesn't matter much after all.

Why doesn't it matter? If you do digital circuits for a living, you are familiar with the way digital logic works: if the voltage is over a threshold, say, 1.5V, then you read a 1. If it's under the threshold, then you read a 0. So if you cut all the voltages in half, that's going to be a mess because the threshold needs to get cut in half too. And if you have an unknown number of nodes on your network, then you don't know where the threshold is at all, which is a problem. Right?

Not necessarily. It turns out analog signal processing is - surprise! - not like digital signal processing.


Essentially, in receiving an analog signal and converting it back to digital, you want to do one of three things:

  • see if the signal power is over/under a threshold ("amplitude shift keying" or ASK)
  • or: see if the signal is frequency #0 or frequency #1 ("frequency shift keying" or FSK)
  • or: fancier FSK-like schemes such as PSK or QAM (look it up yourself :)).
Realistically nowadays everyone does QAM, but the physics are pretty much the same for FSK and it's easier to explain, so let's stick with that.

But first, what's wrong with ASK? Why toggle between two frequencies (FSK) when you can just toggle one frequency on and off (ASK)? The answer comes down mainly to circuit design. To design an ASK receiver, you have to define a threshold, and when the amplitude is higher than the threshold, call it a 1, otherwise call it a 0. But what is the threshold? It depends on the signal strength. What is the signal strength? The height of a "1" signal. How do we know whether we're looking at a "1" signal? It's above the threshold ... The reasoning ends up circular.

The way you implement it is to design an "automatic gain control" (AGC) circuit that amplifies more when too few things are over the threshold, and less when too many things are over the threshold. As long as you have about the same number of 1's and 0's, you can tune your AGC to do the right thing by averaging the received signal power over some amount of time.

In case you *don't* have an equal number of 1's and 0's, you can fake it with various kinds of digital encodings. (One easy encoding is to split each bit into two halves and always flip the signal upside down for the second half, producing a "balanced" signal.)
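That balancing trick can be sketched in a few lines of Python (my own toy illustration of the idea, not any particular standard's line code):

```python
def balance(bits):
    """Split each bit into two half-bits, inverting the second half.

    A 1 becomes (1, 0) and a 0 becomes (0, 1), so any bit stream
    comes out exactly half ones and half zeroes -- which keeps an
    AGC circuit's long-term average steady.
    """
    out = []
    for b in bits:
        out.append(b)
        out.append(1 - b)
    return out

print(balance([1, 1, 1, 0]))  # [1, 0, 1, 0, 1, 0, 0, 1]
```

Even a long run of all-1s or all-0s comes out 50% "high" after balancing, at the cost of doubling the number of symbols on the wire.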

So, you can do this of course, and people have done it. But it just ends up being complicated and fiddly. FSK turns out to be much easier. With FSK, you just build two circuits: one for detecting the amplitude of the signal at frequency f1, and one for detecting the amplitude of the signal at frequency f2. It turns out to be easy to design analog circuits that do this. Then you design a "comparator" circuit that will tell you which of two values is greater; it turns out to be easy to design that too. And you're done! No trying to define a "threshold" value, no fine-tuned AGC circuit, no circular reasoning. So FSK and FSK-like schemes caught on.
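Here's a toy software sketch of that receiver structure (made-up sample rate and frequencies, and a crude single-bin DFT standing in for real analog filters):

```python
import math

def tone_power(samples, freq, rate):
    """Estimate the signal power at one frequency by correlating the
    samples against sine and cosine references at that frequency."""
    s = sum(x * math.sin(2 * math.pi * freq * i / rate)
            for i, x in enumerate(samples))
    c = sum(x * math.cos(2 * math.pi * freq * i / rate)
            for i, x in enumerate(samples))
    return s * s + c * c

def fsk_decode_bit(samples, f0, f1, rate):
    """The 'comparator': whichever frequency has more power wins.
    Note there's no threshold anywhere -- scaling every sample by
    any constant gives the same answer."""
    return 1 if tone_power(samples, f1, rate) > tone_power(samples, f0, rate) else 0

rate, f0, f1 = 8000, 1000, 2000
bit1 = [math.sin(2 * math.pi * f1 * i / rate) for i in range(80)]
print(fsk_decode_bit(bit1, f0, f1, rate))                      # 1
print(fsk_decode_bit([x / 1000 for x in bit1], f0, f1, rate))  # still 1
```

The second print is the point: attenuate the signal by 1000x and the decoder doesn't care, because only the *ratio* between the two tone powers matters.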


With that, you can see why my original worry about a 1/n signal reduction from cable splitters didn't matter. As long as you're using FSK, the 1/n reduction doesn't mean anything; your amplitude detector and comparator circuits just don't care about the exact level, essentially. With wifi, we take that to the extreme with tiny little FSK-like signals down to a picowatt or so.

But where do we stop? Why only a picowatt? Why not even smaller?

The answer is, of course, background noise. No signal exists in perfect isolation, except in a simulation (and even in a simulation, the limitations of floating point accuracy might cause problems). There might be leftover bits of other people's signals transmitted from far away; thermal noise (ie. molecules vibrating around which happen to be at your frequency); and amplifier noise (ie. inaccuracies generated just from trying to boost the signal to a point where your frequency detector circuits can see it at all). You can also have problems from other high-frequency components on the same circuit board emitting conflicting signals.

The combination of limits from amplifier error and conflicting electrical components is called the receiver sensitivity. Noise arriving from outside your receiver (both thermal noise and noise from interfering signals) is called the noise floor. Modern circuits - once properly debugged, calibrated, and shielded - seem to be good enough that receiver sensitivity is not really your problem nowadays. The noise floor is what matters.

It turns out, with modern "low-noise amplifier" (LNA) circuits, we can amplify a weak signal essentially as much as we want. But the problem is... we amplify the noise along with it. The ratio between signal strength and noise turns out to be what really matters, and it doesn't change when you amplify. (Other than getting slightly worse due to amplifier noise.) We call that the signal to noise ratio (SNR), and if you ask an expert in radio signals, they'll tell you it's one of the most important measurements in analog communications.

A note on SNR: it's expressed as a "ratio" which means you divide the signal strength in mW by the noise level in mW. But like the signal strength and noise levels, we normally want to express the SNR in decibels to make it more manageable. Decibels are based on logarithms, and because of the way logarithms work, you subtract decibels to get the same effect as dividing the original values. That turns out to be very convenient! If your noise level is -90dBm and your signal is, say, -60dBm, then your SNR is 30dB, which means 1000x. That's awfully easy to say considering how complicated the underlying math is. (By the way, after subtracting two dBm values we just get plain dB, for the same reason that if you divide 10mW by 2mW you just get 5, not 5mW.)
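The arithmetic in that paragraph is short enough to sketch directly:

```python
def db_to_ratio(db):
    """Decibels back to a plain power ratio."""
    return 10 ** (db / 10.0)

signal_dbm = -60
noise_dbm = -90
snr_db = signal_dbm - noise_dbm   # subtracting two dBm values leaves plain dB
print(snr_db)                     # 30
print(db_to_ratio(snr_db))        # 1000.0 -- a 30dB SNR is a 1000x power ratio
```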

The Shannon Limit

So, finally... how big does the SNR need to be in order to be "good"? Can you just receive any signal where SNR > 1.0x (which means signal is greater than noise)? And when SNR < 1.0x (signal is less than noise), all is lost?

Nope. It's not that simple at all. The math is actually pretty complicated, but you can read about the Shannon Limit on wikipedia if you really want to know all the details. In short, the bigger your SNR, the faster you can go. That makes a kind of intuitive sense I guess.

(But it's not really all that intuitive. When someone is yelling, can they talk *faster* than when they're whispering? Perhaps it's only intuitive because we've been trained to notice that wifi goes faster when the nodes are closer together.)

The Shannon Limit even says you can transfer some data when the signal power is lower than the noise, which seems counterintuitive or even impossible. But it's true, and the global positioning system (GPS) apparently actually does this, and it's pretty cool.
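The formula itself is short, even if the proof isn't: capacity C = B * log2(1 + S/N), where B is the bandwidth in hertz and S/N is the plain ratio, not dB. A sketch with illustrative numbers (20 MHz is roughly one wifi channel width); note that even at -10dB SNR, where the signal is a tenth of the noise, the capacity is small but nonzero:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N),
    with S/N as a plain power ratio, not dB."""
    snr = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1 + snr)

for snr_db in (-10, 0, 10, 20, 30):
    mbps = shannon_capacity(20e6, snr_db) / 1e6
    print(f"{snr_db:4} dB SNR -> {mbps:6.1f} Mbit/s max")
```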

The Maximum Range of Wifi is Unchangeable

So that was all a *very* long story, but it has a point. Wifi signal strength is fundamentally limited by two things: the regulatory transmitter power limit (30dBm or less, depending on the frequency and geography), and the distance between transmitter and receiver. You also can't do much about background noise; it's roughly -90dBm or maybe a bit worse. Thus, the maximum speed of a wifi link is fixed by the laws of physics. Transmitters have been transmitting at around the regulatory maximum since the beginning.

So how, then, do we explain the claims that newer 802.11n devices have "double the range" of the previous-generation 802.11g devices?

Simple: they're marketing lies. 802.11g and 802.11n have exactly the same maximum range. In fact, 802.11n just degrades into 802.11g as the SNR gets worse and worse, so this has to be true.

802.11n is certainly faster at close and medium range. That's because 802.11g tops out at an SNR of about 20dB. That is, the Shannon Limit says you can go faster when you have >20dB, but 802.11g doesn't try; technology wasn't ready for it at the time. 802.11n can take advantage of that higher SNR to get better speeds at closer ranges, which is great.

But the claim about longer range, by any normal person's definition of range, is simply not true.

Luckily, marketing people are not normal people. In the article I linked above they explain how. Basically, they define "twice the range" as a combination of "twice the speed at the same distance" and "the same speed at twice the distance." That is, a device fulfilling both criteria has double the range of an original device which fulfills neither.

It sounds logical, but in real life, that definition is not at all useful. You can do it by comparing, say, 802.11g and 802.11n at 5ft and 10ft distances. Sure enough, 802.11n is more than twice as fast as 802.11g at 5ft! And at 10ft, it's still faster than 802.11g at 5ft! Therefore, twice the range. Magic, right? But at 1000ft, the same equations don't work out. Oddly, their definition of "range" does not include what happens at maximum range.

I've been a bit surprised at how many people believe this "802.11n has twice the range" claim. It's obviously great for marketing; customers hate the limits of wifi's maximum range, so of course they want to double it, or at least increase it by any nontrivial amount, and they will pay money for a new router if it can do this. As of this writing, even wikipedia's table of maximum ranges says 802.11n has twice the maximum range of 802.11g, despite the fact that anyone doing a real-life test could easily discover that this is simply not the case. I did the test. It's not the case. You just can't cheat Shannon and the Signal to Noise Ratio.


Coming up next, some ways to cheat Shannon and the Signal to Noise Ratio.

Syndicated 2014-07-18 02:23:26 (Updated 2014-07-19 23:07:34) from apenwarr

14 Jul 2014 (updated 14 Jul 2014 at 07:02 UTC)

Wifi and the square of the radius

I have many things to tell you about wifi, but before I can tell you most of them, I have to tell you some basic things.

First of all, there's the question of transmit power, which is generally expressed in watts. You may recall that a watt is a joule per second. A milliwatt is 1/1000 of a watt. A transmitter generally can be considered to radiate its signal outward from a center point. A receiver "catches" part of the signal with its antenna.

The received signal power declines with the square of the radius from the transmit point. That is, if you're twice as far away as distance r, the received signal power at distance 2r is 1/4 as much as it was at r. Why is that?

Imagine a soap bubble. It starts off at the center point and there's a fixed amount of soap. As it inflates, the same amount of soap is stretched out over a larger and larger area - the surface area. The surface area of a sphere is 4πr².

Well, a joule of energy works like a millilitre of soap. It starts off at the transmitter and gets stretched outward in the shape of a sphere. The amount of soap (or energy) at one point on the sphere is proportional to 1 / (4πr²).

Okay? So it goes down with the square of the radius.

A transmitter transmitting constantly will send out a total of one joule per second, or a watt. You can think of it as a series of ever-expanding soap bubbles, each expanding at the speed of light. At any given distance from the transmitter, the soap bubble currently at that distance will have a surface area of 4πr², and so the power will be proportional to 1 / that.

(I say "proportional to" because the actual formula is a bit messy and depends on your antenna and whatnot. The actual power at any point is of course zero, because the point is infinitely small, so you can only measure the power over a certain area, and that area is hard to calculate except that your antenna picks up about the same area regardless of where it is located. So although it's hard to calculate the power at any given point, it's pretty easy to calculate that a point twice as far away will have 1/4 the power, and so on. That turns out to be good enough.)
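That "twice as far, a quarter the power" relationship is easy to sketch (purely illustrative; real path loss adds antenna gains and the other messiness mentioned above):

```python
def relative_power(r, r_ref=1.0):
    """Received power at distance r relative to the power at r_ref:
    free-space signal power falls off with the square of the distance."""
    return (r_ref / r) ** 2

print(relative_power(2))            # 0.25 -- twice as far, a quarter the power
print(f"{relative_power(10):.4f}")  # 0.0100 -- ten times as far, 1/100th
```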

If you've ever done much programming, someone has probably told you that O(n^2) algorithms are bad. Well, this is an O(n^2) algorithm where n is the distance. What does that mean?

    1cm -> 1 x
    2cm -> 1/4 x
    3cm -> 1/9 x
    10cm -> 1/100 x
    20cm -> 1/400 x
    100cm (1m) -> 1/10000 x
    10,000cm (100m) -> 1/100,000,000 x

As you get farther away from the transmitter, that signal strength drops fast. So fast, in fact, that people gave up counting the mW of output and came up with a new unit, called dBm (decibels relative to one milliwatt) that expresses the signal power logarithmically:

    n dBm = 10^(n/10) mW

So 0 dBm is 1 mW, and 30 dBm is 1W (the maximum legal transmit power for most wifi channels). And wifi devices have a "receiver sensitivity" that goes down to about -90 dBm. That's nine orders of magnitude below 0 dBm; a billionth of a milliwatt, ie. a trillionth of a watt. I don't even know the word for that. A trilliwatt? (Okay, I looked it up, it's a picowatt.)
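The conversion in each direction is one logarithm or one exponent; a quick sketch:

```python
import math

def mw_to_dbm(mw):
    """Milliwatts to dBm: decibels relative to one milliwatt."""
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    """dBm back to milliwatts."""
    return 10 ** (dbm / 10.0)

print(mw_to_dbm(1))      # 0.0 -- 1 mW is 0 dBm
print(mw_to_dbm(1000))   # 30.0 -- 1 W, the regulatory limit mentioned above
print(dbm_to_mw(-90))    # a billionth of a milliwatt: the picowatt
```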

Way back in university, I tried to build a receiver for wired modulated signals. I had no idea what I was doing, but I did manage to munge it, more or less, into working. The problem was, every time I plugged a new node into my little wired network, the signal strength would be cut down by 1/n. This seemed unreasonable to me, so I asked around: what am I doing wrong? What is the magical circuit that will let me split my signal down two paths without reducing the signal power? Nobody knew the answer. (Obviously I didn't ask the right people :))

The answer is, it turns out, that there is no such magical circuit. The answer is that 1/n is such a trivial signal strength reduction that essentially, on a wired network, nobody cares. We have RF engineers building systems that can handle literally a 1/1000000000000 (from 30 dBm to -90 dBm) drop in signal. Unless your wired network has a lot of nodes or you are operating it way beyond distance specifications, your silly splitter just does not affect things very much.

In programming terms, your runtime is O(n) + O(n^2) = O(n + n^2) = O(n^2). You don't bother optimizing the O(n) part, because it just doesn't matter.

(Update 2014/07/14: The above comment caused a bit of confusion because it talks about wired networks while the rest of the article is about wireless networks. In a wireless network, people are usually trying to extract every last meter of range, and a splitter is a silly idea anyway, so wasting -3 dB is a big deal and nobody does that. Wired networks like I was building at the time tend to have much less, and linear instead of quadratic, path loss and so they can tolerate a bunch of splitters. For example, good old passive arcnet star topology, or ethernet-over-coax, or MoCA, or cable TV.)

There is a lot more to say about signals, but for now I will leave you with this: there are people out there, the analog RF circuit design gods and goddesses, who can extract useful information out of a trillionth of a watt. Those people are doing pretty amazing work. They are not the cause of your wifi problems.

Syndicated 2014-07-10 05:15:49 (Updated 2014-07-14 07:02:06) from apenwarr

The Curse of Vicarious Popularity

I had already intended for this next post to be a discussion of why people seem to suddenly disappear after they go to work for certain large companies. But then, last week, I went and made an example of myself.

Everything started out normally (the usual bit of attention on news.yc). It then progressed to a mention on Daring Fireball, which was nice, but okay, that's happened before. A few days later, though, things started going a little overboard, as my little article about human nature got a company name attached to it and ended up quoted on Business Insider and CNet.

Now don't get me wrong, I like fame and fortune as much as the next person, but those last articles crossed an awkward threshold for me. I wasn't quoted because I said something smart; I was quoted because what I said wasn't totally boring, and an interesting company name got attached. Suddenly it was news, where before it was not.

Not long after joining a big company, I asked my new manager - wow, I had a manager for once! - what would happen if I simply went and did something risky without getting a million signoffs from people first. He said something like this, which you should not quote because he was not speaking for his employer and neither am I: "Well, if it goes really bad, you'll probably get fired. If it's successful, you'll probably get a spot bonus."

Maybe that was true, and maybe he was just telling me what I, as a person arriving from the startup world, wanted to hear. I think it was the former. So far I have received some spot bonuses and no firings, but the thing about continuing to take risks is my luck could change at any time.

In today's case, the risk in question is... saying things on the Internet.

What I have observed is that the relationship between big companies and the press is rather adversarial. I used to really enjoy reading fake anecdotes about it at Fake Steve Jobs, so that's my reference point, but I'm pretty sure all those anecdotes had some basis in reality. After all, Fake Steve Jobs was a real journalist pretending to be a real tech CEO, so it was his job to know both sides.

There are endless tricks being played on everyone! PR people want a particular story to come out so they spin their press releases a particular way; reporters want more conflict so they seek it out or create it or misquote on purpose; PR people learn that this happens so they become even more absolutely iron-fisted about what they say to the press. There are classes that business people at big companies can take to learn to talk more like politicians. Ironically, if each side would relax a bit and stop trying so hard to manipulate the other, we could have much better and more interesting and less tabloid-like tech news, but that's just not how game theory works. The first person to break ranks would get too much of an unfair advantage. And that's why we can't have nice things.

Working at a startup, all publicity is good publicity, and you're the underdog anyway, and you're not publicly traded, so you can be pretty relaxed about talking to the press. Working at a big company, you are automatically the bad guy in every David and Goliath story, unless you are very lucky and there's an even bigger Goliath. There is no maliciousness in that; it's just how the story is supposed to be told, and the writers give readers what they want.

Which brings me back to me, and people like me, who just write for fun. Since I work at a big company, there are a bunch of things I simply should not say, not because they're secret or there's some rule against saying them - there isn't, as far as I know - but because no matter what I say, my words are likely to be twisted and used against me, and against others. If I can write an article about Impostor Syndrome and have it quoted by big news organizations (to their credit, the people quoting it so far have done a good job), imagine the damage I might do if I told you something mean about a competitor, or a bug, or a missing feature, or an executive. Even if, or especially if, it were just my own opinion.

In the face of that risk - the risk of unintentionally doing a lot of damage to your friends and co-workers - most people just give up and stop writing externally. You may have noticed that I've greatly cut back myself. But I have a few things piling up that I've been planning to say, particularly about wifi. Hopefully it will be so technically complicated that I will scare away all those press people.

And if we're lucky, I'll get the spot bonus and not that other thing.

Syndicated 2014-07-08 23:05:01 from apenwarr

The Curse of Smart People

A bit over 3 years ago, after working for many years at a series of startups (most of which I co-founded), I disappeared through a Certain Vortex to start working at a Certain Large Company which I won't mention here. But it's not really a secret; you can probably figure it out if you bing around a bit for my name.

Anyway, this big company that now employs me is rumoured to hire the smartest people in the world.

Question number one: how true is that?

Answer: I think it's really true. A surprisingly large fraction of the smartest programmers in the world *do* work here. In very large quantities. In fact, quantities so large that I wouldn't have thought that so many really smart people existed or could be centralized in one place, but trust me, they do and they can. That's pretty amazing.

Question number two: but I'm sure they hired some non-smart people too, right?

Answer: surprisingly infrequently. When I went for my job interview there, they set me up for a full day of interviewers (5 sessions plus lunch). I decided that I would ask a few questions of my own in these interviews, and try to guess how good the company is based on how many of the interviewers seemed clueless. My hypothesis was that there are always some bad apples in any medium to large company, so if the success rate was, say, 3 or 4 out of 5 interviewers being non-clueless, that's pretty good.

Well, they surprised me. I had 5 out of 5 non-clueless interviewers, and in fact, all of them were even better than non-clueless: they impressed me. If this was the average smartness of people around here, maybe the rumours were really true, and they really had something special going on.

(I later learned that my evil plan and/or information about my personality *may* have been leaked to the recruiters who *may* have intentionally set me up with especially clueful interviewers to avoid the problem, but this can neither be confirmed nor denied.)

Anyway, I continue to be amazed at the overall smartness of people at this place. Overall, very nearly everybody, across the board, surprises or impresses me with how smart they are.

Pretty great, right?


But it's not perfect. Smart people have a problem, especially (although not only) when you put them in large groups. That problem is an ability to convincingly rationalize nearly anything.

Everybody rationalizes. We all want the world to be a particular way, and we all make mistakes, and we all want to be successful, and we all want to feel good about ourselves.

We all make decisions for emotional or intuitive reasons instead of rational ones. Some of us admit that. Some of us think using our emotions is better than being rational all the time. Some of us don't.

Smart people, computer types anyway, tend to come down on the side of people who don't like emotions. After all, programmers do logic for a living.

Here's the problem. Logic is a pretty powerful tool, but it only works if you give it good input. As the famous computer science maxim says, "garbage in, garbage out." If you know all the constraints and weights - with perfect precision - then you can use logic to find the perfect answer. But when you don't, which is always, there's a pretty good chance your logic will lead you very, very far astray.

Most people find this out pretty early on in life, because their logic is imperfect and fails them often. But really, really smart computer geek types may not ever find it out. They start off living in a bubble, they isolate themselves because socializing is unpleasant, and, if they get a good job straight out of school, they may never need to leave that bubble. To such people, it may appear that logic actually works, and that they are themselves logical creatures.

I guess I was lucky. I accidentally co-founded what turned into a pretty successful startup while still in school. Since I was co-running a company, I found out pretty fast that I was wrong about things and that the world didn't work as expected most of the time. This was a pretty unpleasant discovery, but I'm very glad I found it out earlier in life instead of later, because I might have wasted even more time otherwise.

Working at a large, successful company lets you keep your isolation. If you choose, you can just ignore all the inconvenient facts about the world. You can make decisions based on whatever input you choose. The success or failure of your project in the market is not really that important; what's important is whether it gets canceled or not, a decision which is at the whim of your boss's boss's boss's boss, who, as your only link to the unpleasantly unpredictable outside world, seems to choose projects quasi-randomly, and certainly without regard to the quality of your contribution.

It's a setup that makes it very easy to describe all your successes (project not canceled) in terms of your team's greatness, and all your failures (project canceled) in terms of other people's capriciousness. End users and profitability, for example, rarely enter into it. This project isn't supposed to be profitable; we benefit whenever people spend more time online. This project doesn't need to be profitable; we can use it to get more user data. Users are unhappy, but that's just because they're change averse. And so on.

What I have learned, working here, is that smart, successful people are cursed. The curse is confidence. It's confidence that comes from a lifetime of success after real success, an objectively great job, working at an objectively great company, making a measurably great salary, building products that get millions of users. You must be smart. In fact, you are smart. You can prove it.

Ironically, one of the biggest social problems currently reported at work is lack of confidence, also known as Impostor Syndrome. People with confidence try to help people fix their Impostor Syndrome, under the theory that they are in fact as smart as people say they are, and they just need to accept it.

But I think Impostor Syndrome is valuable. The people with Impostor Syndrome are the people who *aren't* sure that a logical proof of their smartness is sufficient. They're looking around them and finding something wrong, an intuitive sense that around here, logic does not always agree with reality, and the obviously right solution does not lead to obviously happy customers, and it's unsettling because maybe smartness isn't enough, and maybe if we don't feel like we know what we're doing, it's because we don't.

Impostor Syndrome is that voice inside you saying that not everything is as it seems, and it could all be lost in a moment. The people with the problem are the people who can't hear that voice.

Syndicated 2014-06-30 07:05:56 from apenwarr

Reflections on Marketing and Humanity

Today I have new evidence that the human brain is made up of multiple interoperating, loosely connected components. Because I was out buying dryer sheets and there's one with "Fresh Linen" scent. And while one part of my mind was saying, "That's rather tautological," another part was saying, "That's what I always wanted my linen to smell like!" So I bought it, and now you know which one wins.

In the same aisle I found a new variant of soap with the tagline "inspired by celtic rock salt." Now, inspiration can be a hard thing to pin down, but this soap contains nothing celtic and no salt. I'm not even sure there is such a thing as celtic rock salt, or if there is, that it differs in any way from other rock salt, or other salt for that matter. Moreover, the whole purpose of soap is to wash off the generally salty sweaty smelly mess you produced naturally, so we'd probably criticize them if it *were* salty, for the same reason people criticize shampoos for stripping your hair of its natural oils only to sell the oils back to you in the form of conditioner. Also, how long has soap had an "Ingredients" section on the package? Why not a nutritional content section? And is it bad when the first ingredient is (literally) "soap"? But I bought it anyway, because Irish. Salt. Mmmm.

Finally, a note to you people who would argue that I'm overanalyzing this. You might define overanalyzing as analyzing beyond the point required to make a decision. Since the analysis figured not one bit into my purchasing decision, by that definition, any analysis at all would be considered overanalysis. And frankly, that just doesn't seem fair.

Syndicated 2013-06-30 19:38:22 from apenwarr - Business is Programming

28 Jun 2013 (updated 14 Jul 2014 at 07:02 UTC)


A quote from "The Trouble With Computers" about usability studies at Apple when they were developing the original Macintosh:

Apple interface guru Bruce Tognazzini tells this story. The in-box tutorial for novices, "Apple Presents... Apple," needed to know whether the machine it was on had a color monitor. He and his colleagues rejected the original design solution, "Are you using a color TV on the Apple?" because computer store customers might not know that they were using a monitor with the color turned off. So he tried putting up a color graphic and asking, "Is the picture above in color?" Twenty-five percent of test users didn't know; they thought maybe their color was turned off.

Then he tried a graphic with color named in their color, GREEN, BLUE, ORANGE, MAGENTA, and asked, "Are the words above in color?" Users with black and white or color monitors got it right. But luckily the designers tried a green-screen monitor too. No user got it right; they all thought green was a fine color.

Next he tried the same graphic but asked, "Are the words above in more than one color?" Half the green-screen users flunked, by missing the little word "in". Finally, "Do the words above appear in several different colors?"


That was what Apple did for a single throwaway UX question in a non-core part of the product - before its first release. It apparently took about 5 iterations of UX design followed up by UX research before they finally converged on the right answer.

The lesson I learned from that: usability studies are important. But you can't just take the recommendations of a usability study; you have to implement the recommendations, do another study, be prepared to be frustrated that the new version is just as bad as the old one, and do it all again. And again. If that's not how you're doing usability studies, you're doing it wrong.

Maybe I should re-buy that book. I gave mine away at some point. It's kind of indispensable as a tool for explaining software usability research, if only for its infamous "Can you *not* see the cow?" photo.

Syndicated 2013-06-28 03:47:19 (Updated 2014-07-14 07:02:06) from apenwarr - Business is Programming

Cheap Thrills

I've heard it said that you can just alternate between two UI themes once a week, and every time you switch, the new one will feel prettier, newer, and more exciting than the old one.

This is a natural tendency. The human mind is intrigued by change. That's where fashion comes from, and fads. It gives you a little burst of some chemical, maybe adrenaline (fear of the unknown?), or endorphins (appreciation of the unexpected?), or perhaps some other kind of juice I heard of somewhere but I don't really know what it does.

In tech, this kind of unlimited attraction to the unexpected is the main characteristic of the first phase of the Technology Adoption Lifecycle, the so-called "Innovators."

Perhaps people are happy to be included in the Innovator category. But Innovation isn't just doing something different for the sake of being different. Real innovation is the *willingness* to take the *risk* to do something different, because you know that difference is expensive, but that it will pay off in some way that more conservative sorts will fail to recognize until later.

In fashion, the end goal is to catch people's attention; if you do that, you are innovative. That's why fashion repeats itself every few years: because you can be innovative over and over again with the same ideas, rehashed forever.

In technology, we can hold you to a higher standard. Innovation requires difference, but it also requires a vision of usefulness. Change is expensive. Staying the same is cheap. Make it worth my while. Or if I'm an Innovator, or even an Early Adopter, at least give me a hint about how it's worth my while so I can exploit it while others are too afraid.

Every needless change creates expensive fragmentation. Microsoft ruled their market by being change averse. So did IBM. So did Intel. Even Apple. Whenever they forgot this, they stumbled.

Change aversion works because what makes a platform successful isn't so much the platform as the complementary products. For a phone, that means third-party power adapters, car chargers, headphones with integrated volume controls, alarm clocks with a connector to charge your phone *and* play your music at the same time. For a PC, it could be something as simple as maintaining the same power supply connector across many years' worth of models, so that anyone who standardizes on your brand will have an ever-growing investment in leftover power supplies plugged in wherever they might want them. For an operating system, it means keeping the same approximate style of UI for a long time, so that apps can learn to optimize for it, and a really great app made two years ago can keep on selling well, perhaps with bugfixes and new features but no need for rewrites, because it still looks like it's perfectly integrated into your OS experience.

That sort of consistency allows developers to focus on quality instead of flavour, and produces an overall feeling of well-integratedness. It makes people feel like when they buy your thing, they're paying for quality. And yes, people - moving beyond the innovators into the more profitable market segments of the curve - will definitely pay for quality.

Real design genius lies in the ability to make something look pretty, and, with gentle updates, to keep it modern-looking, without causing huge disruption to your whole ecosystem every couple of years. Following fashion trends, while not caring about disruption, does not require genius at all. All it requires is a factory in a third-world country and some photos of what you want to copy.

Ironically, even app developers mostly fail to recognize just how bad it is for them when a platform changes out from under them unnecessarily. Instead, they get excited by it. Finally, I get to rewrite that UI code I really hated, and while I'm there, I can fix all those interaction bugs I knew we had but could never justify repairing! Because now I *have* to rewrite it!

Redesigning things to match a moving target of a platform is really comforting, because it's a ready-made strategy for your company. The truth is, you don't have to think about what customers want, or how to make the workflow smoother, or how to eliminate one more click from that common operation, or how to fix that really annoying network bug that only happens 1 in 1000 times. Those bugs are hard; this feels like freedom. We'll just dedicate our team to "refreshing" the UI, again, for another few months, and nobody can complain because it's obviously necessary. And it is, obviously, necessary. Because your platform has screwed you. Your platform changed for no reason, and that's why your users can't have what they really need. They'll get a UI refresh instead.

And although they are less productive, they will love it. Because of endorphins, or sodium, or whatever.

And so you will feel good about yourself in the morning.

Syndicated 2013-06-12 07:35:30 from apenwarr - Business is Programming

9 Jun 2013 (updated 30 Jun 2014 at 06:02 UTC) »

A Modest Proposal for the Telephone Network

You might not realize it, but there's an imminent phone number shortage. It's been building up for a while, but the problem has been mitigated by people using "PBXes", which basically add a 4-5 digit extension to the end of your phone number to expand the available range. The problem with PBXes is they don't work right with caller ID (it makes it look like a bunch of people near each other all have the same phone number) and you can't easily direct-dial PBX extensions from a phone's integrated address book, unless your phone has some kind of special "PBX penetration" technology. (PBX penetration is pretty well-understood, but not implemented widely.)

Even worse: it's no longer possible to route phone calls hierarchically by the first few digits. Nowadays any 10-digit U.S. phone number could be registered anywhere in the U.S. and area codes change all the time.

So here's my proposal. Let's fix this once and for all! We'll double the number of digits in a Canada/U.S. phone number from 10 to 20. No, wait, that might not be enough to do fully hierarchy-based call routing, let's make it 40 digits. But that could be too much typing, so instead of using decimal, we can add a few digits to your phone dialpad and let you use hexadecimal instead. Then it should only be 33 digits or so, with the same numbering capacity as 40 decimal digits! Awesome!
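The digit arithmetic does check out, as a quick sketch (Python used here purely as a calculator; this is an illustration, not part of any real dialing plan):

```python
import math

# Capacity of 40 decimal digits is 10**40 possible numbers.
# How many hexadecimal digits give the same capacity?
hex_digits = 40 * math.log(10) / math.log(16)
print(round(hex_digits, 1))  # 33.2 -- hence "33 digits or so"

# Sanity check: 16**33 is the same order of magnitude as 10**40.
print(16 ** 33 > 10 ** 39)   # True
```

Each hex digit carries log2(16) = 4 bits versus about 3.32 bits for a decimal digit, which is where the roughly 5:6 savings in digit count comes from.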

It'll still be kind of a pain to remember numbers that long, but don't worry about it, nobody actually dials directly by number anymore. We have phone directories for that. And modern smartphones can just autodial from hyperlinks on the web or in email. Or you can send vcards around with NFC or infrared or QR codes or something. Okay, those technologies aren't really perfect and there are a few remaining situations where people actually rely on the ability to remember and dial phone numbers by hand, but it really shouldn't be a problem most of the time and I'm sure phone directory technology will mature, because after all, it has to for my scheme to work.

Now, as to deployment. For a while, we're going to need to run two parallel phone networks, because old phones won't be able to support the new numbering scheme, and vice versa. There's an awful lot of phone software out there hardcoded to assume its local phone number will be a small number of digits that are decimal and not hex. Plus caller ID displays have a limited number of physical digits they can show. So at first, every new phone will be assigned both a short old-style phone number and a longer new-style phone number. Eventually all the old phones will be shut down and we can switch entirely to the new system. Until then, we'll have to maintain the old-style phone number compatibility on all devices because obviously a phone network doesn't make any sense if everybody can't dial everybody else.

Actually you only need to keep an old-style number if you want to receive *incoming* calls. As you know, not everybody really needs this, so it shouldn't be a big barrier to adoption. (Of course, now that I think of it, if that's true, maybe we can conserve numbers in the existing system by just not assigning a distinct number to phones that don't care to receive calls. And maybe charge extra if you want to be assigned a number. As a bonus, people without a routable phone number won't ever have to receive annoying unsolicited sales calls!)

For outgoing calls, we can have a "carrier-grade PBX" sort of system that maps from one numbering scheme to the other. We'll reserve a special prefix in the new-style number space that you'd dial when you want to connect to an old-style phone. And then your new phone won't need to support the old system, even if not everyone has transitioned yet! I mean, unless you want to receive incoming calls.


Or, you know. We could just automate connecting through a PBX.

Syndicated 2013-06-09 16:08:21 (Updated 2014-06-30 06:02:03) from apenwarr - Business is Programming

26 Apr 2013 (updated 26 Apr 2013 at 08:02 UTC) »

blip: a tool for seeing your Internet latency

I just released a pretty neat tool that I wrote, called blip. You can read the README to find out what's going on. Or, if your RSS reader or web browser supports iframes, you can see it in action right here: