Older blog entries for wingo (starting at number 411)

but that would be anarchy!

Good morning, internets!

This is the second of a series of articles on what it's like to work for/in/with a cooperative; the first one is here. Eventually I'll update the first one to link to the whole series, wherever it goes.

I work for a worker's cooperative, Igalia. This article series is about moving beyond theory to practice: to report on our experience with collective self-determination, for the curious and for the interested. It's a kind of exercise in marketing the revolution :) I hope I can be free from accusations of commercial interest, however; for better or for worse, our customers couldn't care less how we are organized internally.

the ties that bind

The essence of a worker's cooperative is to enable people to make decisions about their work to the degree to which they are affected by those decisions. For decisions affecting the whole group, you need collective deliberation. You could think of it as workplace democracy, if you understand democracy to go beyond simple voting and referenda.

Collective decision-making works, but you need some preconditions -- see for example conditions for consensus, from this excellent article on consensus. More so than any other enterprise, a cooperative needs to have a set of goals or principles that binds all of its members together. In Igalia, we have a rather wordy document internally, but it's basically a commitment to finding a way to let hackers work on interesting things, centered around free software, in a self-organized way. Everything else is just strategy.

structure

There are two more-or-less parallel structures in Igalia: one for decision-making and one for work. There is also a legal structure that is largely vestigial; more on that at the end.

The most important structure in Igalia is the "assembly". It's composed of all workers that have been in Igalia for a certain amount of time (currently 1 year, though there are discussions to shorten that period).

The assembly is the ultimate decision-making body, with power over all things: bank accounts, salary levels, budgets, strategies, team assignments, etc. All company information is open to all assembly members, and the assembly currently comprises all of the people in Igalia.

The assembly meets in person every two months at the head office in A Coruña, with the option for people to dial in. Since many of us work in remote locations and just communicate over mail and jabber, it's good to see people in person to renew the kind of personal connections that help make agreement easier. Incidentally, this requirement for group face-to-face meetings influenced the architecture of our head office; there is a dividable room there that can be expanded into a kind of "round table" with microphones at all the posts. You can see a picture of it on the about page.

The in-person assemblies are usually a bit abstracted over day-to-day operations and focus more on proposals that people have brought up or on strategy. Sometimes they are easy and quick and sometimes they go late into the night. The ones associated with longer-term planning like the yearly budgeting assemblies are the longest.

How well does this work, you ask? I would summarize by saying that it works OK. There are so many advantages to collective decision-making that I now take for granted that it is difficult to imagine other ways. However, making decisions is hard on a personal level: it's a challenge to hold all of the ideas in your head at one time, and to feel the right level of responsibility for the success of the business. I'll write another article on this point, I think, because it is also part of the cooperative working experience.

"The assembly" is both the bimonthly meeting and also the body of people. We're all on a mailing list and a jabber channel, which is where a lot of the other day-to-day business decisions get made, like "do we accept this contract", "should we hire people soon", "should we hire X person in particular", etc. However with some 40 people it's tricky to establish an active consensus on a mailing list, so it's usually the people that lead with proposals and actively round up people to help them implement that get things done.

work groups

So that's the power structure in Igalia. However on a day-to-day level, unless a thread is really active on the assembly mailing list, folks just do their jobs. Sounds silly but it has to happen. We're organized into teams, like in a normal company, but without managers -- we also do consensus on a smaller, more informal level within the teams.

Since we're a consulting shop, most people write code all day, but there are also people that primarily focus on other things: sales, team coordination (who's available for what work when?), administrative work, finance and cash-flow monitoring, etc. This broader team allocation is also a result of consensus. (You can see the theme here.) Ideally we rotate around the "coordinator"-type jobs so everyone stays fresh, hacking-wise, and we don't entrench these informal power structures. We've done some of that but could do more.

I've heard people say that "if you don't have a boss, the customer is your boss", but that's simply not true for us in any of the ways that matter. Our working conditions, pay, holidays, hours -- all of this is up to us. Yes, we have to do good work, but that's already an expectation we have of ourselves as hackers. It's a much healthier perspective to consider your customer to be your partner: each providing value to the other. If this isn't the case, we should find other partners, and happily in our particular industry this is a possibility. (More on cooperatives, capitalism, and crisis in a future post.)

legalities

As I said in the beginning, the important thing is that a group share principles, and agree periodically on the strategy to implement them. In that light, the particular legal structure of Igalia is an afterthought, though an important one.

Although Spanish law explicitly provides for cooperatives as a kind of legal entity, Igalia is organized as a limited-liability corporation. The reasons are not entirely clear to me, and periodically come up for debate. One of the issues, though, is that in a cooperative, 85% of your partners have to be Spanish residents, and we did not want that restriction.

Spanish workers are employees of Igalia, and folks outside of Spain are freelancers. However once you've been in the assembly for an amount of time, currently 2 years (way too long IMO), you have the option to become a legal partner in the business, purchasing an equal share of the business at a fixed price. I say "option" but it's also an expectation; the idea is that being a partner is a logical and natural outcome of working at/with/on/in Igalia. We have partners in a number of countries now.

You see my confusion with prepositions, and it's because you have to fit all of these ideas in your head at the same time: working for Igalia as an employee, on it as a project, with it as a partner, at it as a social experiment, etc.

Partners are the legal owners of Igalia, and there are about 30 of them now. They meet every few months, mostly to assess the progression of "prepartners" (those in the assembly but not yet partners, like myself). Ideally they don't discuss other things, and I think that works out in practice. There is a small power differential there between the partners and the assembly. However all the really important things get done in the assembly.

Finally, since Igalia is an S.L. (like an LLC), there is a legal administrator as well -- someone who's theoretically responsible for the whole thing and has to sign certain legal papers. In fact we have three of them, and the positions rotate around every three years. If we were a legal cooperative we could remove this need, which would be convenient. But that's how it is right now.

domination

I want a society without hierarchical power: no state, no military, no boss. But that would be anarchy, right? Well yes, of course that's what it would be! "Anarchy" doesn't equate to a lack of structure, though. It's true that Igalia is embedded in capitalism, but I think it and other cooperatives are a kind of practical anarchy, or at least a step along the path.

I'll close this epistle with a quote, in Chomsky's halting but endearing style. The whole interview is well worth a read.

Anarchism is quite different from that. It calls for an elimination to tyranny, all kinds of tyranny. Including the kind of tyranny that's internal to private power concentrations. So why should we prefer it? Well I think because freedom is better than subordination. It's better to be free than to be a slave. It's better to be able to make your own decisions than to have someone else make decisions and force you to observe them. I mean, I don't think you really need an argument for that. It seems like ... transparent.

The thing you need an argument for, and should give an argument for, is, How can we best proceed in that direction? And there are lots of ways within the current society. One way, incidentally, is through use of the state, to the extent that it is democratically controlled. I mean in the long run, anarchists would like to see the state eliminated. But it exists, alongside of private power, and the state is, at least to a certain extent, under public influence and control -- could be much more so. And it provides devices to constrain the much more dangerous forces of private power. Rules for safety and health in the workplace for example. Or insuring that people have decent health care, let's say. Many other things like that. They're not going to come about through private power. Quite the contrary. But they can come about through the use of the state system under limited democratic control ... to carry forward reformist measures. I think those are fine things to do. They should be looking forward to something much more, much beyond -- namely actual, much larger-scale democratization. And that's possible to not only think about, but to work on. So one of the leading anarchist thinkers, Bakunin in the 19th cent, pointed out that it's quite possible to build the institutions of a future society within the present one. And he was thinking about far more autocratic societies than ours. And that's being done. So for example, worker- and community-controlled enterprises are germs of a future society within the present one. And those not only can be developed, but are being developed.

The Kind of Anarchism I Believe In, Noam Chomsky, 28 May 2013

Syndicated 2013-06-13 07:55:24 from wingolog

ecmascript generators from a performance perspective

It's been really gratifying to see the recent interest in ES6 generators in V8. I'm still a Schemer at heart, but one thing all language communities should learn from JS is its enthusiasm and sense of joyful play. One can't spend much time around folks like James Long or Domenic Denicola without feeling the urge to go out and build something fun.

perfie perf perf perf

But this is a blog about nargery! A lot of people have been speculating about the performance impact of using generators, and I've been meaning to write about it, so perhaps this article will tickle your neuron.

A generator is a function that can suspend. So the first perf question to ask yourself is, does the performance of generators matter at all? If your generator spends more time suspended than active, then probably not. Though it's not the only application of generators, if you are using generators for asynchronous programming, then probably generator performance matters not at all.

But that's not very satisfying, right? Even though it probably doesn't matter, maybe it does, and then you need to make sure your mental model of what is fast and what is slow corresponds to reality.

crankshaft

So, the skinny: as implemented in V8, the primary perf impact of generators is that they do not get optimized by Crankshaft.

As you probably know, Crankshaft is V8's optimizing compiler. There are two ways a function can be optimized: on entry, because the function is called many times, or within a loop, because the loop is taking too long. In the first case it's fairly easy: you kick off a parallel task to recompile the function, reading the type feedback collected by the inline caches from the baseline compiler. Once you've made the optimized function, you swap the function's code pointer, and the next time you call the function you get the optimized version instead of the generic version.

The other way is if you are in a function, in a hot loop, and you decide to optimize the function while you are in it. This is called on-stack replacement (OSR), and is important for functions that are only called once but do have hot loops. (Cough KRAKEN cough.) It's trickier to do parallel compilation with OSR, as the function might have left the loop by the time recompilation finishes, but oh well.

So in summary, when V8 goes to optimize a function it will be in one of two states: inactive, because it hasn't started to run yet, or active, because it's stuck in a tight loop.

Generators introduce a new state for functions: suspended. You can know whether a function is active or inactive, but you can't know how many suspended generator activations are out in the JS heap. So you have multiple potential entry points to the function, which complicates the optimizer as it has to work on graphs with irreducible loops. Perhaps it's still possible to optimize, as OSR is a form of irreducible loops; I suspect though that you would end up compiling as many optimized functions as there are yield points, plus one for the function entry (and possibly another for OSR). Anyway it's a bit complicated.

Also, deoptimization (bailout) becomes more difficult if you can have suspended activations, and as a technical detail, it's tricky for V8 to efficiently capture the state of an activation in a way that the garbage collector can understand.

The upshot is that tight loops in generators won't run fast. If you need speed in a loop in a generator, you will need to call out to some other function (which itself would get optimized).
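
For instance, here's a sketch of what that restructuring might look like (an illustrative example of mine, not code from V8's test suite): hoist the hot arithmetic into a plain helper that Crankshaft can optimize, and keep only the suspension logic in the generator.

// Hypothetical restructuring: the inner loop lives in a normal function
// that can be optimized, while the generator only yields once per chunk.
function sumChunk(array, start, end) {
  var sum = 0;
  for (var i = start; i < end; i++)
    sum += array[i];
  return sum;
}

function* chunkSums(array, chunkSize) {
  for (var i = 0; i < array.length; i += chunkSize)
    yield sumChunk(array, i, Math.min(i + chunkSize, array.length));
}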

baseline

On the other hand, I should note that generators are optimized for quick suspend and resume. In a generator function, V8 stores local variables on the heap instead of on the stack. You would think this would make things slow, but that is not the case. Access to locals has the same speed characteristics: both stack and heap locals are accessed by index from a register (the stack pointer or the context pointer).

There is an allocation overhead when you enter a generator to create the heap storage, but it is almost never the case that you need to copy or allocate when suspending a generator. Since the locals are on the heap already, there is nothing to do! You just write the current instruction pointer (something you know at compile-time) into a slot in the generator object and return the result.

Similarly, when resuming you have to push on a few words to rebuild the stack frame, but after that's done you just jump back to where you were and all is good.

The exception to this case is when you yield and there are temporaries on the stack. Recall in my article on V8's baseline compiler that the full-codegen is a stack machine. It allocates slots to named locals, but temporary values go on the stack at run-time, and unfortunately V8's baseline compiler is too naive even to know what the stack height is at compile-time. So each suspension checks the stack height, and if it is nonzero, it calls out to a runtime helper to store the operands.

This run-time save would be avoidable in register machines like JavaScriptCore's baseline compiler, which avoids pushing and popping on the stack. Perhaps this note might spur the competitive urges of some fine JSC hacker, to show up V8's generators performance. I would accept JSC having faster generators if that's what it took to get generators into WebKit, at least for a while anyway :)

abstract musings

Many of you have read about the Futamura projections, in which one compiles a program via partial application of an interpreter with respect to a given input program. Though in the real world this does not appear to be a good strategy for implementing compilers, as so much of compilation is not simply about program structure but also about representation, there is a kernel of deep truth in this observation: the essence of compilation is partial evaluation of a strategy with respect to a particular program.

This observation most often helps me in the form of its converse. Whenever I find myself implementing some sort of generic evaluation strategy, there's a sign in my head that starts blinking the word "interpreter". (Kinda like the "on-air" signs in studios.) That means I'm doing something that's probably not as efficient as it could be. I find myself wondering how I can make the strategy specific to a particular program.

In this case, we can think of suspending and resuming not as copying a stack frame out and in, but instead compiling specific stubs to copy out only the values that are needed, in the representations that we know that they have. Instead of looping from the bottom to the top of a frame, you emit code for each element. In this way you get a specific program, and if you manage to get here, you have a program that can be inlined too: the polymorphic "next" reduces to a monomorphic call of a particular continuation.

Anyway, Amdahl's law limits the juice we can squeeze from this orange. For a tight for-of loop with a couple of optimizations that should land soon in V8, I see generator performance that is only 2 times slower than the corresponding hand-written iterators that use stateful closures or objects: 25 nanoseconds per suspend/resume versus 12 for a hand-written iterator.
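
Concretely, the two variants have roughly these shapes (my sketch, not the exact benchmark code):

function* generatorRange(n) {
  for (var i = 0; i < n; i++)
    yield i;
}

function closureRange(n) {
  var i = 0;                 // state lives in the captured variable
  return {
    next: function () {
      return i < n ? { value: i++, done: false }
                   : { value: undefined, done: true };
    }
  };
}

The for-of loop consumes generatorRange directly; the hand-written version drives closureRange's next method in a plain while loop.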

25 nanoseconds, people. You're using jQuery and asking for data from a server in the next country. It is highly unlikely that generators will be a bottleneck in your program :)

Syndicated 2013-06-11 13:48:00 from wingolog

no master

It's difficult for anyone with an open heart and a fresh mind to look at the world without shuddering at its injustices: police brutality, obscene gaps between rich and poor, war and bombs and land mines, people without houses and houses without people. So much wrong! It is right and natural not only to feel revulsion, but to revolt as well. Fight the man! Down with the state! No god, no master!

So to those working against foreclosures and evictions of families in your neighborhoods, more power to you. Fight the good fight.

However, revolt is unfortunately not sufficient. We must also think beyond the now and imagine a better world: life after the revolution.

I'm sure I've lost some of you now, and that's OK. We are all at different points of political consciousness, and you can't speak to all at the same time. Some of the words like "revolution" probably bother some people. It sounds strident, right? But if you agree (and perhaps you do not) that there are fundamental problems with the way the world works, and that symptoms like banks kicking families out of houses using riot police are ultimately caused by those problems, then any real change must also be fundamental: at the foundation, where the roots are. Radical problems need radical solutions. Revolution is no more (and no less!) than a radically different world.

For me, one of these roots to dig up is hierarchy. I take as a principle that people should have power over and responsibility for decisions to the extent that they are affected, and hierarchy is all about some people having more power than others. Even if you do away with capitalism, as in the early Soviet Union, if you don't directly address the issue of hierarchy, classes will emerge again. A world in which some people are coordinators and other people are workers will result in a coordinator class that gradually accretes more and more power. Hierarchy is in essence tyrannical, though any specific instance may be more or less so.

On the other side of things, hierarchy deadens those at the bottom: the listless students, the mechanical workers, the passive and cynical voters. The revolution has to come about in a way that leaves people more alive and curious and energetic, and that seems to correspond not only with greater freedom but also with personal responsibility. I think that a world without hierarchy would be a world whose people would be more self-actualized and self-aware, as they would have more responsibility over the situations that affect their lives and those around them.

Well. I don't want to wax too theoretical here, in what I meant to be an introduction. I've been having these thoughts for a while and finally a couple years ago I decided to join in a collective experiment: to practice my chosen craft of computer programming within a cooperative context. I haven't had a boss since then, and I haven't been the boss of anyone. I'm not even my own boss. There are no masters!

I've gotten so used to this way of working that sometimes I forget how unusual it is to work in a cooperative. Also, we're usually too busy hacking to write about it ;) So in the next few articles I'll take a look at the internal structure of our company, Igalia, and try to give some insight into what it's like working in a cooperative.

Syndicated 2013-06-05 13:35:31 from wingolog

generators in v8

Hey y'all, ES6 generators have landed in V8! Excellent!

Many of you know what that means already, but for those of you that don't, a little story.

A few months ago I was talking with Andrew Paprocki over at Bloomberg. They use JavaScript in all kinds of ways over there, and as is usually the case in JS, it often involves communicating with remote machines. This happens on the server side and on the client side. JavaScript has some great implementations, but as a language it doesn't make asynchronous communication particularly easy. Sure, you can do anything you want with node.js, but it's pretty easy to get stuck in callback hell waiting for data from the other side.

Of course if you do a lot of JS you learn ways to deal with it. The eponymous Callback Hell site lists some, and lately many people think of Promises as the answer. But it would be nice if sometimes you could write a function and have it just suspend in the middle, wait for your request to the remote computer to come through, then continue on.

So Andrew asked me if we could somehow make asynchronous programming easier to write and read, maybe by implementing something like C#'s await operator in the V8 JavaScript engine. I told him that the way into V8 was through standards, but that fortunately the upcoming ECMAScript 6 standard was likely to include an equivalent to await: generators.

ES6 generators

Instead of returning a value, when you first call a generator, it returns an iterator:

// notice: function* instead of function
function* values() {
  for (var i = 0; i < arguments.length; i++) {
    yield arguments[i];
  }
}

var o = values(1, 2, 3);  // => [object Generator]

Calling next on the iterator resumes the generator, lets it run until the next yield or return, and then suspends it again, resulting in a value:

o.next(); // => { value: 1, done: false }
o.next(); // => { value: 2, done: false }
o.next(); // => { value: 3, done: false }
o.next(); // => { value: undefined, done: true }

Maybe you're the kind of person that likes imprecise, incomplete, overly abstract analogies. Yes? Well generators are like functions, with their bodies taken to the first derivative. Calling next integrates between two yield points. Chew on that truthy nugget!

asynchrony

Anyway! Suspending execution, waiting for something to happen, then picking up where you left off: put these together and you have a nice facility for asynchronous programming. And happily, it works really well with promises, a tool for asynchrony that lots of JS programmers are using these days.

Q is a popular promises library for JS. There are some 250+ packages that depend on it in NPM, node's package manager. So cool, let's take their example of the "Pyramid of Doom" from the github page:

step1(function (value1) {
    step2(value1, function(value2) {
        step3(value2, function(value3) {
            step4(value3, function(value4) {
                // Do something with value4
            });
        });
    });
});

The promises solution does at least fix the pyramid problem:

Q.fcall(step1)
.then(step2)
.then(step3)
.then(step4)
.then(function (value4) {
    // Do something with value4
}, function (error) {
    // Handle any error from step1 through step4
})
.done();

But to my ignorant eye, some kind of solution involving a straight-line function would be even better. Remember what generators do: they suspend computation, wait for someone to pass back in a result, and then continue on. So whenever you would register a callback, whenever you would chain a then onto your promise, instead you suspend computation by yielding.

Q.async(function* () {
  try {
    var value1 = yield step1();
    var value2 = yield step2(value1);
    var value3 = yield step3(value2);
    var value4 = yield step4(value3);
    // Do something with value4
  } catch (e) {
    // Handle any error from step1 through step4
  }
});

And for a super-mega-bonus, we actually get to use try and catch to handle exceptions, just as Gods and Brendan intended.

Now I know you're a keen reader and there are two things that you probably noticed here. One is, where are the promises, and where are the callbacks? Who's making these promises anyway? Well you probably saw already in the second example that using promises well means that you have functions that return promises instead of functions that take callbacks. The form of functions like step1 and such is different when you use promises. So in a way the comparison between the pyramid and the promise isn't quite fair because the functions aren't quite the same. But we're all reasonable people so we'll deal with it.

Note that it's not actually necessary that any of the stepN functions return promises. The promises library will lift a value to a promise if needed.

The second and bigger question would be, how does the generator resume? Of course you've already seen the answer: the whole generator function is decorated by Q.async, which takes care of resuming the generator when the yielded promises are fulfilled.
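
To make that concrete, a driver in the spirit of Q.async can be sketched like this (simplified, assuming ES6-style { value, done } results; this is not Q's actual implementation):

function asyncDriver(generatorFunction) {
  return function () {
    var gen = generatorFunction.apply(this, arguments);
    function step(result) {
      if (result.done)
        return result.value;              // generator returned: we're done
      // Wait for the yielded promise (or plain value), then resume the
      // generator with the fulfilled value, or throw the rejection into it.
      return Q.when(result.value,
                    function (value) { return step(gen.next(value)); },
                    function (error) { return step(gen.throw(error)); });
    }
    return step(gen.next());
  };
}

A real implementation also has to take care of wrapping the whole thing in a promise and catching synchronous throws, but that's the gist.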

You don't have to use generators of course, and using them with Q does mean you have to understand more things: not only standard JavaScript, but newfangled features, and promises to boot. Well, if it's not your thing, that's cool. But it seems to me that the widespread appreciation of await in C# bodes well for generators in JS.

ES6, unevenly distributed

Q.async has been part of Q for a while now, because Firefox has been shipping generators for some time.

Note however that the current ES6 draft specification for generators is slightly different from what Firefox ships: calling next on the iterator returns an object with value and done properties, whereas the SpiderMonkey JS engine used by Firefox uses an exception to indicate that the iterator is finished.

This change was the result of some discussions at the TC39 meeting in March, and hasn't made it into a draft specification or to the harmony:generators page yet. All could change, but there seems to be a consensus.

I made a patch to Q to allow it to work both with the old SpiderMonkey-style generators as well as with the new ES6 style, and something like it should go in soon.
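
The shim amounts to normalizing the two termination protocols into one; something along these lines (my sketch, not the actual patch):

// Sketch: fold old SpiderMonkey-style termination (a thrown StopIteration)
// and ES6-style results into a single { value, done } shape.
function isStopIteration(e) {
  // Assumption: in old SpiderMonkey the thrown value is the global
  // StopIteration object; adjust this check for your engine.
  return typeof StopIteration !== 'undefined' && e instanceof StopIteration;
}

function normalizedNext(gen) {
  try {
    var result = gen.next();
    if (result && typeof result.done === 'boolean')
      return result;                          // ES6 style: already { value, done }
    return { value: result, done: false };    // old style: a bare yielded value
  } catch (e) {
    if (isStopIteration(e))                   // old style signals the end by throwing
      return { value: undefined, done: true };
    throw e;
  }
}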

give it to me already!

So yeah, generators in V8! I've been working closely with V8 hackers Michael Starzinger and Andreas Rossberg on the design and implementation of generators in V8, and I'm happy to say that it's all upstream now, modulo yield* which should go in soon. Along with other future ES6 features, it's all behind a runtime flag. Pass --harmony or --harmony-generators to your V8 to enable it.

Barring unforeseen issues, this will probably see the light in Chromium 29, corresponding to V8 3.19. For now though this hack is so hot out of the fire that it hasn't even had time to cool down and make it to the Chrome Canary channel yet. Perhaps within a few weeks; whenever the V8 dependency in Chrome gets updated to the 3.19 tree.

As far as Node goes, they usually track the latest stable V8 release, and so for them it will probably also be a few weeks before it goes in. You'll still have to run Node with the --harmony flag. However if you want a sneak preview of the future, I uploaded a branch of Node with V8 3.19 that you can build from source. It might mulch your cat, but life is not without risk on the bleeding_edge.

Finally for Q, as I said ES6 compatibility should come soon; track the progress or check out your own copy here.

final notes

Thanks to the V8 team for their support in this endeavor, especially to Michael Starzinger for enduring my constant questions. There's lots of interesting things in the V8 pipeline coming out of Munich. Thanks also to Andrew Paprocki and the Bloomberg crew for giving me the opportunity to fill out this missing piece of real-world ES6. The Bloomberg folks really get the web.

This has been a hack from Igalia, your friendly neighborhood browser experts. Have fun with generators, and happy hacking!

Syndicated 2013-05-08 19:31:42 from wingolog

inside full-codegen, v8's baseline compiler

Greetings to all. This is another nargish article on the internals of the V8 JavaScript engine. If that's your thing, read on. Otherwise, here's an interesting interview with David Harvey. See you laters!

full-codegen

Today's topic is V8's baseline compiler, called "full-codegen" in the source. To recall, V8 has two compilers: one that does a quick-and-dirty job (full-codegen), and one that focuses on hot spots (Crankshaft).

When you work with V8, most of the attention goes to Crankshaft, and rightly so: Crankshaft is how V8 makes code go fast. But everything goes through full-codegen first, so it's still useful to know how full-codegen works, and necessary if you go to hack on V8 itself.

The goal of this article is to recall the full-codegen to mind, so as to refresh our internal "efficiency model" of how V8 runs your code (cf. Norvig's PAIP retrospective, #20), and to place it in a context of other engines and possible models.

stack machine

All JavaScript code executed by V8 goes through full-codegen, so full-codegen has to be very fast. This means that it doesn't have time to do very much optimization. Even if it did have time to do optimization (which it doesn't), full-codegen lacks the means to do so. Since the code hasn't run yet when full-codegen compiles it, it doesn't know anything about types or shapes of objects. And as in other JS engines, full-codegen only compiles a function at a time, so it can't see very far. (This lets it compile functions lazily, the first time they are executed. Overall syntactic validity is ensured by a quick "pre-parse" over the source.)

I have to admit my surprise however at seeing again how simple full-codegen actually is: it's a stack machine! You don't get much simpler than that. It's shocking in some ways that this is the technology that kicked off the JS performance wars, back in 2008. The article I wrote a couple years ago does mention this, but since then I have spent a lot of time in JSC and forgot how simple things could be.

Full-codegen does have a couple of standard improvements over the basic stack machine model, as I mentioned in the "two compilers" article. One is that the compiler threads a "context" through the compilation process, so that if (say) x++ is processed in "effect" context, it doesn't need to push the initial value of x on the stack as the result value. The other is that the top-of-stack (ToS) value is cached in a register, so that even if we did need the result of x++, we could get it with a simple mov instruction. Besides these "effect" and "accumulator" (ToS) contexts, there are also contexts for pushing values on the stack, and for processing a value for "control" (as the test of a conditional branch).
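
As a toy illustration of the context idea (entirely made-up pseudocode of mine, nothing like V8's actual classes), consider compiling x++ with the context threaded in:

// Hypothetical context-threaded compilation of `x++`; the "instructions"
// are just strings handed to an emit callback.
function compilePostIncrement(slot, context, emit) {
  switch (context) {
    case 'effect':        // result unused: bump the local, push nothing
      emit('inc-local ' + slot);
      break;
    case 'accumulator':   // old value wanted in the ToS register
      emit('load-local ' + slot);
      emit('inc-local ' + slot);
      break;
    case 'stack':         // old value wanted on the expression stack
      emit('load-local ' + slot);
      emit('push-accumulator');
      emit('inc-local ' + slot);
      break;
  }
}

// Compiling `x++;` as a statement: effect context, nothing pushed.
compilePostIncrement(0, 'effect', function (insn) { console.log(insn); });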

and that's it!

That's the whole thing!

Still here?

What about type recording and all that fancy stuff, you say? Well, most of the fancy stuff is in Crankshaft. Type recording happens in the inline caches that aren't really part of full-codegen proper.

Full-codegen does keep some counters on loops and function calls and such that it uses to determine when to tier up to Crankshaft, but that's not very interesting. (It is interesting to note that V8 used to determine what to optimize using a statistical profiler, and that is no longer the case. It seems statistical profiling would be OK in the long run, but as far as I understand it didn't do a great job with really short-running code, and made it difficult to reproduce bugs.)

I think -- and here I'm just giving my impression, but fortunately I have readers that will call my bullshit -- I think that if you broke down basic JavaScript and ignored optimizable hot spots, an engine mostly does variable accesses, property accesses, and allocations. Those are the fundamental things to get right. In a browser you also have DOM operations, which Blink will speed up at some point; and there are some domain-specific things like string operations and regular expressions of course. But ignoring that, there are just variables, properties, and allocations.

We'll start with the last bit. Allocation is a function of your choice of value representation, your use of "hidden classes" ("maps" in V8; all engines do the same thing now), and otherwise it's more a property of the runtime than of the compiler. (Crankshaft can lower allocation cost by using unboxed values and inlining allocations, but that's Crankshaft, not full-codegen.)

Property access is mostly handled by inline caches, which also handle method dispatch and relate to hidden classes.

The only thing left for full-codegen to do is to handle variable accesses: to determine where to store local variables, and how to get at them. In this case, again there's not much magic: either variables are really local and don't need to be looked up by name at runtime, in which case they can be allocated on the stack, or they persist for longer or in more complicated scopes, in which case they go on the heap. Of course if a variable is never referenced, it doesn't need to be allocated at all.
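
For example (my illustration, not V8 test code), consider which slots each of these variables needs:

function example() {
  var a = 1;    // only used within this activation: a stack slot suffices
  var b = 2;    // captured by the closure below: allocated in a heap context
  var c = 3;    // never referenced again: needs no slot at all
  a = a + 1;
  return function () { return b; };
}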

So in summary, full-codegen is quick, and it's dirty. It does its job with the minimum possible effort and gets out of the way, giving Crankshaft a chance to focus on the hot spots.

comparison

How does this architecture compare with what other JS engines are doing, you ask?

I have very little experience with SpiderMonkey, but as far as I can tell, with every release they get closer and closer to V8's model. Earlier this month Kannan Vijayan wrote about SpiderMonkey's new baseline compiler, which looks to be exactly comparable to full-codegen. It's a stack machine that emits native code. It looks a little different because it operates on bytecode and not the AST, as SpiderMonkey still has an interpreter hanging around.

JavaScriptCore, the WebKit JavaScript engine, is similar on the surface: it also has a baseline compiler and an optimizing compiler. (In fact it's getting another optimizing compiler soon, but that's a story for another article). But if you look deeper, I think JSC has diverged from V8's model in interesting ways.

Besides being a register machine, a model that has inherent advantages over a stack machine, JavaScriptCore also has a low-level interpreter that uses the same calling convention as the optimizing compiler. I don't think JSC is moving in V8's direction in this regard; in fact, they could remove their baseline compiler entirely, relying instead on the interpreter and the optimizing compiler. JSC also has some neat tricks related to scoping (lazy tear-off); again, a topic for some future article.

summary

The field is still a bit open as to what is the best approach to use for cold code. It seems that an interpreter could still be a win, because if not, why would SpiderMonkey keep one around, now that they have a baseline compiler? And why would JSC add a new one?

At the same time, it seems to me that one could do more compile-time analysis at little extra cost, to speed up full-codegen by allocating more temporaries into slots; you would push and pop at compile-time to avoid doing it at run-time. It's unclear whether it would be worth it, though, as that analysis would have to run on all code instead of just the hot spots.

Hopefully someone will embark on the experiment; it should just be a simple matter of programming :) Until next time, happy hacking!

Syndicated 2013-04-18 15:45:20 from wingolog

thoughts on blink

So Chromium forked WebKit again! You've probably seen some articles on it already, but Alex Russell's piece is pretty good.

In retrospect the split was easy to see coming, but I can't help but feel bittersweet about it. The announcement and Alex's article both cite technical reasons for the fork, but to me it seems to be ultimately a human failure. So the Chromium team saw problems in WebKit and agreed on a solution that will help them achieve their goals faster: that's great! But why were they unable to do so within WebKit itself?

There are technical differences, yes, but those could be resolved by human action (as they now will be, by humans, under the Blink aegis). No, to me the fundamental problem was a lack of the conditions for consensus within the WebKit project, and for that there is ample blame to go around.

In the best of events this split will end up with both Blink and WebKit as more technically consistent, nimble projects. I would also hope that this split serves as a wakeup call to some of the decision-making processes within WebKit. However it is easy to be pessimistic in this regard -- the Apple-Google dynamic has been great for WebKit as an open project, opening up discussions and collaboration possibilities, especially among folks that don't work for the two major corporations. Both projects will have to actively make an effort to remain open, because passivity will favor the structural tendency to work in-house.

I should probably mention the obvious thing, that Igalia will work with both projects, of course. We'll get our WebKit patches into Blink, and vice versa. The message for customers doesn't change much, either -- most customers have already chosen a particular WebKit port, and those that used Chromium will now use Chromium on Blink.

For those projects that haven't chosen a WebKit port yet, the fundamental decision has always been WebKit2/JSC versus Chromium/V8, and that hasn't changed too much from this announcement. It's true that WebKit2 is a less mature multi-process implementation than Chromium, but it is getting a lot of investment these days, so it's fair to assume that technically the two approaches will work just as well within a few months. The same goes for JSC versus V8.

Finally, a meta-note. Sometimes these press releases catch us up in their whirlwind and we're left with a feeling of anxiety: "Google just released AngularJS, now I need to rewrite all my apps!?" Well, maybe ;) But I find a bit of dispassion is appropriate after a piece of news like this. Good luck to both projects, and see you on the mailing lists.

Syndicated 2013-04-04 06:43:17 from wingolog

sexuality and sexism

Greetings, dear readers. Today's article is not about compilers, but about the people that write and run compilers. Like me, and like you.

I write a lot about programming here because it's interesting to me and it makes me happy, but that's not the extent of my desires as a human. Among all the things, and perhaps even foremost among them, is the desire to live in a more beautiful world: a world of making and sharing, of nature abloom, and of people too: a world, in short, full of life.

Part of that life is sexual, and how wonderfully, playfully, rightfully so. But the world as a whole would be better if we kept sexuality out of programming and other male-dominated pursuits.

The reason for this is that sexuality (for example, in the form of sexual jokes) among a group of men creates a kind of "boy's club" atmosphere in which people that aren't in the straight male majority feel uncomfortable. A "boy's club" has a virtual "no girls or queers allowed" sign on it. It's just not a comfortable place for non-club-members to be.

Of course, sometimes being uncomfortable is good. But being uncomfortable because of your gender is not one of those cases. And even, it must be said, sometimes it goes beyond discomfort to danger -- conferences that women do not attend for fear of groping; things that women cannot say for fear of rape threats. There is no hyperbole here. It is an astonishing, indignation-provoking injustice.

How did it get this bad?

As usual, through small steps. One of the first is widespread toleration of unrelated sexuality in male-dominated fields: boy's clubs. So I think that we all -- and especially men, and especially people with respect within a community -- should actively discourage any of these factors that lead to "boy's clubs". A joke that "I'd fork that project" is not OK. It would be fine if it were just a lame joke; but it's not fine because it's part of this whole "boy's club" thing.

Incidentally, there is a name for the structural tendency to favor one gender over another in a group that isn't "boy's club", and it is "sexism". Sometimes people dismiss sexism in programming because, you know, "show me the code", but this misses the broader picture. I personally have profited immensely from the personal connections I have made at the many conferences that I have attended. I've even put up with some "boy's club" jokes on the same level as "fork-that-repo". I think a woman would find it more difficult to make some of these connections, and so would not end up producing the patches I do.

So, friends, if you are with me in recognizing this structural injustice called sexism, a stain upon our community of compiler hackers and users, I think this is an occasion in which imperative programming is acceptable and even appropriate. Learn to recognize "boy's clubs" and work constructively against them. Sex and tech is usually a bad idea, so point this out when you see it -- especially if you are a man -- and support others who do the same.

Syndicated 2013-03-26 20:07:10 from wingolog

on generators

Hello! Is this dog?

Hi dog this is Andy, with some more hacking nargery. Today the topic is generators.

(shift k)

By way of introduction, you might recall (with a shudder?) a couple of articles I wrote in the past on delimited continuations. I even gave a talk on delimited continuations at FrOSCon last year, a talk that was so hot that thieves broke into the FrOSCon video squad's room and stole their hard drives before the edited product could hit the tubes. Not kidding! The FrOSCon folks still have the tapes somewhere, but meanwhile time marches on without the videos; truly, the terrorists have won.

Anyway, the summary of all that is that Scheme's much-praised call-with-current-continuation isn't actually a great building block for composable abstractions, and that delimited continuations have much better properties.

So imagine my surprise when Oleg Kiselyov, in a mail to the Scheme standardization list, suggested a different replacement for call/cc:

My bigger suggestion is to remove call/cc and dynamic-wind from the base library into an optional feature. I'd like to add the note inviting the discussion for more appropriate abstractions to supersede call/cc -- such [as] generators, for example.

He elaborated in another mail:

Generators are a wide term, some versions of generators are equivalent (equi-expressible) to shift/reset. Roshan James and Amr Sabry have written a paper about all kinds of generators, arguing for the generator interface.

And you know, generators are indeed a very convenient abstraction for composing producers and consumers of values -- more so than the lower-level delimited continuations. Guile's failure to standardize a generator interface is an instance of a class of Scheme-related problems, in which generality is treated as more important than utility. It's true that you can do generators in Guile, but we shouldn't make the user build their own.

The rest of the programming world moved on and built generators into their languages. James and Sabry's paper is good because it looks at yield as a kind of mainstream form of delimited continuations, and uses that perspective to place the generators of common languages in a taxonomy. Some of them are as expressive as delimited continuations. There are also two common restrictions on generators that trade off expressive power for ease of implementation:

  • Generators can be restricted to disallow backtracking, as in Python. This allows generators to yield without allocating memory for a new saved execution state, instead updating the stored continuation in-place. This is commonly called the one-shot restriction.

  • Generators can also restrict access to their continuation. Instead of reifying generator state as external iterators, internal iterators call their consumers directly. The first generators developed by Mary Shaw in the Alphard language were like this, as are the generators in Ruby. This can be called the linear-stack restriction, after an implementation technique that they enable.

With that long introduction out of the way, I wanted to take a look at the proposal for generators in the latest ECMAScript draft reports -- to examine their expressiveness, and to see what implementations might look like. ES6 specifies generators with the first kind of restriction -- with external iterators, like Python.

Also, Kiselyov & co. have a hip new paper out that explores the expressiveness of linear-stack generators, Lazy v. Yield: Incremental, Linear Pretty-printing. I was quite skeptical of this paper's claim that internal iterators were significantly more efficient than external iterators. So this article will also look at the performance implications of the various forms of generators, including full resumable generators.

ES6 iterators

As you might know, there's an official ECMA committee beavering about on a new revision of the ECMAScript language, and they would like to include both iterators and generators.

For the purposes of ECMAScript 6 (ES6), an iterator is an object with a next method. Like this one:

var p = console.log;

function integers_from(n) {
  function next() {
    return this.n++;
  }
  return { n: n, next: next };
}

var ints = integers_from(0);
p(ints.next()); // prints 0
p(ints.next()); // prints 1

Here, ints is an iterator that returns successive integers from 0, all the way up to 2^53 (not quite infinity).

To indicate the end of a sequence, the current ES6 drafts follow Python's example by having the next method throw a special exception. ES6 will probably have an isStopIteration() predicate in the @iter module, but for now I'll just compare directly.

function StopIteration() {}
function isStopIteration(x) { return x === StopIteration; }

function take_n(iter, max) {
  function next() {
    if (this.n++ < max)
      return iter.next();
    else
      throw StopIteration;
  }
  return { n: 0, next: next };
}

function fold(f, iter, seed) {
  var elt;
  while (1) {
    try {
      elt = iter.next();
    } catch (e) {
      if (isStopIteration (e))
        break;
      else
        throw e;
    }
    seed = f(elt, seed);
  }
  return seed;
}

function sum(a, b) { return a + b; }

p(fold(sum, take_n(integers_from(0), 10), 0));
// prints 45

Now, I don't think that we can praise this code for being particularly elegant. The end result is pretty good: we take a limited amount of a stream of integers, and combine them with a fold, without creating intermediate data structures. However the means of achieving the result -- the implementation of the producers and consumers -- are nasty enough to prevent most people from using this idiom.

To remedy this, ES6 does two things. One is some "sugar" over the use of iterators, the for-of form. With this sugar, we can rewrite fold much more succinctly:

function fold(f, iter, seed) {
  for (var x of iter)
    seed = f(x, seed);
  return seed;
}

The other thing offered by ES6 are generators. Generators are functions that can yield. They are defined by function*, and yield their values using iterators:

function* integers_from(n) {
  while (1)
    yield n++;
}

function* take_n(iter, max) {
  var n = 0;
  while (n++ < max)
    yield iter.next();
}

These definitions are mostly equivalent to the previous iterator definitions, and are much nicer to read and write.

I say "mostly equivalent" because generators are closer to coroutines in their semantics because they can also receive values or exceptions from their caller via the send and throw methods. This is very much as in Python. Task.js is a nice example of how a coroutine-like treatment of generators can help avoid the fraction-of-an-action callback mess in frameworks like Twistednode.js.

There is also a yield* operator to delegate to another generator, as in Python's PEP 380.

OK, now that you know all the things, it's time to ask (and hopefully answer) a few questions.

design of ES6 iterators

Firstly, has the ES6 committee made the right decisions regarding the iterators proposal? For me, I think they did well in choosing external iterators over internal iterators, and the choice to duck-type the iterator objects is a good one. In a language like ES6, lacking delimited continuations or cross-method generators, external iterators are more expressive than internal iterators. The for-of sugar is pretty nice, too.

Of course, expressivity isn't the only thing. For iterators to be maximally useful they have to be fast, and here I think they could do better. Terminating iteration with a StopIteration exception is not so great. It will be hard for an optimizing compiler to rewrite the throw/catch loop terminating condition into a simple goto, and with that you lose some visibility about your loop termination condition. Of course you can assume that the throw case is not taken, but you usually don't know for sure that there are no more throw statements that you might have missed.

Dart did a better job here, I think. They split the next method into two: a moveNext method to advance the iterator, and a current getter for the current value. Like this:

function dart_integers_from(n) {
  function moveNext() {
    this.current++;
    return true;
  }
  return { current: n-1, moveNext: moveNext };
}

function dart_take_n(iter, max) {
  function moveNext() {
    if (this.n++ < max && iter.moveNext()) {
      this.current = iter.current;
      return true;
    }
    else
      return false;
  }
  return {
    n: 0,
    current: undefined,
    moveNext: moveNext
  };
}

function dart_fold(f, iter, seed) {
  while (iter.moveNext())
    seed = f(iter.current, seed);
  return seed;
}

I think it might be fair to say that it's easier to implement an ES6 iterator, but it is easier to use a Dart iterator. In this case, dart_fold is much nicer, and almost doesn't need any sugar at all.

Besides the aesthetics, Dart's moveNext is transparent to the compiler. A simple inlining operation makes the termination condition immediately apparent to the compiler, enabling range inference and other optimizations.

iterator performance

To measure the overhead of for-of-style iteration, we can run benchmarks on "desugared" versions of for-of loops and compare them to normal for loops. We'll throw in Dart-style iterators as well, and also a stateful closure iterator designed to simulate generators. (More on that later.)

I put up a test case on jsPerf. If you click through, you can run the benchmarks directly in your browser, and see results from other users. It's imprecise, but I believe it is accurate, more or less.

The summary of the results is that current engines are able to run a basic summing for loop in about five cycles per iteration (again, assuming 2.5GHz processors; the Epiphany 3.6 numbers, for example, are mine). That's suboptimal only by a factor of 2 or so; pretty good. High fives all around!

However, performance with iterators is not so good. I was surprised to find that most of the other tests are suboptimal by about a factor of 20. I expected the engines to do a better job at inlining.

One data point that stands out of my limited data set is the development Epiphany 3.7.90 browser, which uses JavaScriptCore from WebKit trunk. This browser did significantly better with Dart-style iterators, something I suspect due to better inlining, though I haven't verified it.

Better inlining and stack allocation is already a development focus of modern JS engines. Dart-style iterators will get faster without being a specific focus of optimization. It seems like a much better strategy for the ES6 specification to piggy-back off this work and specify iterators in such a way that they will be fast by default, without requiring more work on the exception system, something that is universally reviled among implementors.

what about generators?

It's tough to know how ES6 generators will perform, because they're not widely implemented and don't have a straightforward "desugaring". As far as I know, there is only one practical implementation strategy for yield in JavaScript, and that is to save the function's stack and the set of live registers. Resuming a generator copies the activation back onto the stack and restores the registers. Any other strategy that involves higher-level control-flow rewriting would make debugging too difficult. This is not a terribly onerous requirement for an implementation. They already have to know exactly what is live and what is not, and can reconstruct a stack frame without too much difficulty.

In practice, invoking a generator is not much different from calling a stateful closure. For this reason, I added another test to the jsPerf benchmark I mentioned before that tests stateful closures. It does slightly worse than iterators that keep state in an object, but not much worse -- and there is no essential reason why it should be slow.
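
For reference, the stateful-closure test keeps its state in captured variables rather than in object properties; it's a third rendering of the take_n idea from above (sketched here, reusing the StopIteration defined earlier):

function closure_take_n(iter, max) {
  var n = 0;                     // state lives in the closure environment...
  return {
    next: function () {
      if (n++ < max)
        return iter.next();
      else
        throw StopIteration;     // ...much as generator locals live on the heap
    }
  };
}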

On the other hand, I don't think we can expect compilers to do a good job at inlining generators, especially if they are specified as terminating with a StopIteration. Terminating with an exception has its own quirks, as Chris Leary wrote a few years ago, though I think in his conclusion he is searching for virtues where there are none.

dogs are curious about multi-shot generators

So dog, that's ECMAScript. But since we started with Scheme, I'll close this parenthesis with a note on resumable, multi-shot generators. What is the performance impact of allowing generators to be fully resumable?

Being able to snapshot your program at any time is pretty cool, but it has an allocation cost. That's pretty much the dominating factor. All of the examples that we have seen so far execute (or should execute) without allocating any memory. But if your generators are resumable -- or even if they aren't, but you build them on top of full delimited continuations -- you can't re-use the storage of an old captured continuation when reifying a new one. A test of iterating through resumable generators is a test of short-lived allocations.

I ran this test in my Guile:

(define (stateful-coroutine proc)
  (let ((tag (make-prompt-tag)))
    (define (thunk)
      (proc (lambda (val) (abort-to-prompt tag val))))
    (lambda ()
      (call-with-prompt tag
                        thunk
                        (lambda (cont ret)
                          (set! thunk cont)
                          ret)))))

(define (integers-from n)
  (stateful-coroutine
   (lambda (yield)
     (let lp ((i n))
       (yield i)
       (lp (1+ i))))))

(define (sum-values iter n)
  (let lp ((i 0) (sum 0))
    (if (< i n)
        (lp (1+ i) (+ sum (iter)))
        sum)))

(sum-values (integers-from 0) 1000000)

I get the equivalent of 2 Ops/sec (in terms of the jsPerf results), allocating about 0.5 GB/s (288 bytes/iteration). This is similar to the allocation rate I get for JavaScriptCore, which is also not generational. V8's generational collector lets it sustain about 3.5 GB/s, at least in my tests. Test case here; multiply the 8-element test by 100 to yield approximate MB/s.

Anyway, these numbers are much lower than those from the iterator tests. ES6 did right to specify one-shot generators as a primitive. In Guile we should perhaps consider implementing some form of one-shot generators that don't have an allocation overhead.
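To make that concrete, here is a hypothetical sketch of what the one-shot contract could look like, built on the stateful-coroutine above. Note that this version doesn't actually avoid the allocation -- that would require the VM to reuse the continuation's storage, which user code can't express -- it only enforces the restriction that would make such reuse possible.

;; Hypothetical one-shot variant of stateful-coroutine: each captured
;; continuation may be resumed at most once, which is the property that
;; would let an implementation reuse its storage.
(define (one-shot-coroutine proc)
  (let ((tag (make-prompt-tag))
        (resume #f))
    (set! resume
          (lambda ()
            (proc (lambda (val) (abort-to-prompt tag val)))))
    (lambda ()
      (let ((k resume))
        (unless k
          (error "one-shot generator: already resumed or finished"))
        (set! resume #f)                 ; invalidate the old continuation
        (call-with-prompt tag
                          k
                          (lambda (cont ret)
                            (set! resume cont)
                            ret))))))

Substituted in stateful-coroutine's place, the sum-values benchmark behaves the same; the win would only come with cooperation from the implementation.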

Finally, while I enjoyed the paper from Kiselyov and the gang, I don't see the point in praising simple generators when they don't offer a significant performance advantage. I think we will see external iterators in JavaScript achieve near-optimal performance within a couple of years, without paying the expressivity penalty of internal iterators.

Catch you next time, dog!

Syndicated 2013-02-25 16:17:25 from wingolog

opengl particle simulation in guile

Hello, internets! Did you know that today marks two years of Guile 2? Word!

Guile 2 was a major upgrade to Guile's performance and expressiveness as a language, and I can say as a user that it is so nice to be able to rely on such a capable foundation for hacking the hack.

To celebrate, we organized a little birthday hack-feast -- a communal potluck of programs that Guilers brought together to share with each other. Like last year, we'll collect them all and publish a summary article on Planet GNU later on this week. This is an article about the hack that I worked on.

figl

Figl is a Guile binding to OpenGL and related libraries, written using Guile's dynamic FFI.

Figl is a collaboration between myself and Guile super-hacker Daniel Hartwig. It's pretty complete, with a generated binding for all of OpenGL 2.1 based on the specification files. It doesn't have a good web site yet -- it might change name, and probably we'll move to savannah -- but it does have great documentation in texinfo format, built with the source.

But that's not my potluck program. My program is a little demo particle simulation built on Figl.

Source available here. It's a fairly modern implementation using vertex buffer objects, doing client-side geometry updates.

To run, assuming you have some version of Guile 2.0 installed:

git clone git://gitorious.org/guile-figl/guile-figl.git
cd guile-figl
autoreconf -vif
./configure
make
# 5000 particles
./env guile examples/particle-system/vbo.scm 5000

You'll need OpenGL development packages installed -- for the .so links, not the header files.

performance

If you look closely at the video, you'll see that the program itself is rendering at 60 fps, using 260% CPU. (The CPU utilization drops as soon as the video starts, because of the screen capture program.) This is because the updates to the particle positions and to the vertices are done using as many processors as you have -- 4 in my case (2 cores with 2 hardware threads each). I've gotten it up to 310% or so, which is pretty good utilization considering that the draw itself is single-threaded and that other things need the CPU too, like the X server and the compositor.

Each particle has a position and a velocity, and is attracted to the center by a gravity-like force. If you consider that updating a particle's position should take about 100 cycles (for a square root, some divisions, and a few other floating-point operations), then my 2.4GHz processors should allow for a particle simulation with 1.6 million particles updated at 60 fps. In that context, getting to 5K particles at 60 fps is not so impressive -- it's sub-optimal by a factor of 320.
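For reference, the per-particle step is conceptually something like the following -- a simplified 2D sketch with made-up constants, not the actual code from the demo, which works on packed f32vectors shared with the VBO and splits the particle array across the worker threads:

;; One Euler step for one particle: an inverse-square pull toward the
;; origin, then integrate velocity and position.
(define (step-particle px py vx vy dt)
  (let* ((d2 (max (+ (* px px) (* py py)) 1e-6))
         (d  (sqrt d2))
         (a  (/ -200.0 d2))             ; gravity-like acceleration
         (ax (* a (/ px d)))
         (ay (* a (/ py d))))
    (values (+ px (* vx dt)) (+ py (* vy dt))
            (+ vx (* ax dt)) (+ vy (* ay dt)))))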

This slowness is oddly exciting to me. It's been a while since I've felt Guile to be slow, and this is a great test case. The two major things to work on next in Guile are its newer register-based virtual machine, which should speed things up by a factor of 2 or so on this benchmark, and an ahead-of-time native compiler, which should add perhaps another factor of 5. Further optimization of the native compiler could add another factor of 2, and a JIT would probably make up the rest of the difference.

One thing I was really wondering about when making this experiment was how the garbage collector would affect the feeling of the animation. Back when I worked with the excellent folks at Oblong, I realized that graphics could be held to a much higher standard of smoothness than I was used to. Guile allocates floating-point numbers on the heap and uses the Boehm-Demers-Weiser conservative collector, and I was a bit skeptical that things could be smooth because of the pause times.

I was pleasantly surprised that the result was very smooth, without noticeable GC hiccups. However, GC is noticeable on an overloaded system. Since the simulation advances time by 1/60th of a second at each frame, regardless of the actual frame rate, differing frame processing times under load can show a sort of "sticky-zipper" phenomenon.

If I ever got to optimal update performance, I'd probably need a better graphics card than my embedded Intel i965 graphics. And of course vertex shaders are really the way to go if I cared about the effect and not the compiler -- but I'm more interested in compilers than graphics, so I'm OK with the state of things.

Finally, I would note that it has been pure pleasure to program with general, high-level macros, knowing that the optimizer would transform the code into really efficient low-level code. Of course I had to check the optimizations, with the ,optimize command at the REPL. I even had to add a couple of transformations to the optimizer at one point, but I was able to do that without restarting the program, which was quite excellent.
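In case you haven't seen it, that interaction looks something like this (output elided here; the REPL prints the source that results from inlining and partial evaluation):

scheme@(guile-user)> ,optimize (+ 2 (let ((x 3)) (* x x)))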

So that's my happy birthday dish for the Guile 2 potluck. More on other programs after the hack-feast is over; until then, see you in #guile!

Syndicated 2013-02-16 19:20:14 from wingolog

knocking on private back doors with the web browser

I woke up at five this morning with two things in my head.

One was, most unfortunately, Rebecca Black's Friday, except with "Monday" -- you know, "Monday, Monday, gettin' up on Monday, hack, hack, hack, hack, which thing should I hack?", et cetera.

The other was a faint echo of Patrick McKenzie's great article on the practical implications of the recent Rails vulnerabilities. In particular, he pointed out that development web servers running only on your local laptop could be accessed by malicious web pages.

Of course this is not new, strictly speaking, but it was surprising to me in a way that it shouldn't have been. For example, Guile has a command-line argument to run a REPL server on a local port. This is fantastic for interactive development and debugging, but it is also a vulnerability if you run a web server on the same machine. A malicious web page could request data from the default Guile listener port, which is a short hop away from rooting your machine.
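As an aside, one way to keep the convenience without the exposure is to have the REPL server listen on a Unix-domain socket rather than a TCP port, so a web page has nothing to connect to. A sketch, assuming Guile 2.0's (system repl server) module and a made-up socket path:

;; Spawn a REPL server on a Unix-domain socket instead of a TCP port;
;; browsers can't make requests to it.  The path is just an example.
(use-modules (system repl server))

(spawn-server
 (make-unix-domain-server-socket #:path "/tmp/my-app-repl"))

You can then connect to it with a Unix-socket-capable client, for example nc -U /tmp/my-app-repl.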

I made a little test case this morning: a local port scanner. It tries to fetch http://localhost:port/favicon.ico for all ports between 1 and 10000 on your machine. (You can click that link; it doesn't run the scan until you click a button.)

If it finds a favicon, that indicates that there is a running local web application, which may or may not be vulnerable to CSRF attacks or to other attacks like the Rails YAML attack.

If the request for a favicon times out, that port is probably vulnerable to some kind of attack, like the old form protocol attack.

If the request fails, the port may be closed, or the process listening on that port may have detected invalid input and closed the connection. We could run a timing loop to determine whether a connection was made, but for now the port scanner doesn't output any information for this case.

In my case, the page finds that I have a probably vulnerable port 2628, which appears to be dictd.

Anyway, I hope that page is useful to someone. And if you are adding backdoors into your applications for development purposes, be careful!

Syndicated 2013-02-04 08:47:09 from wingolog

