Older blog entries for crhodes (starting at number 152)

What I got for Christmas: sufficiently advanced Intel graphics drivers for 855GM, in Linux 2.6.37-rc7. No more missing mouse cursor on boot (and, icing on the Christmas cake, working video playback!) Thank you to those who worked on this, particularly since I couldn't actually work out how or where to submit useful bug reports (and so resorted to my usual strategy when dealing with laptop-related issues, which is to contact mjg59 by whatever means available and follow his suggestions as precisely as possible).

2 Dec 2010 (updated 2 Dec 2010 at 17:05 UTC)

As the train I'm on ambles its unheated way through the unseasonably Wintry English countryside, it's time for another “weekly” exciting entrepreneurial update. Actually I should be properly working, not just talking about working, but there's a file I need for that elsewhere, and three's mobile Internet coverage evaporates about 3 minutes outside Waterloo station – if only there were a company dedicated to bettering mobile data infrastructure... So, here I am, with means, motive and opportunity to write a diary entry.

Since I last wrote, I have fought with R's handling of categorical variables in linear models; the eventual outcome was a score draw. The notion of a contrast is a useful one: very often, when we have a heap of conditions under which we observe some value, what we're interested in is not so much the predicted value given some condition, but the difference between the value under one condition and the value under another. The canonical example is probably the difference between a group receiving a trial treatment and a group receiving a control or placebo; accordingly, the default contrast for unordered categorical variables in R is the treatment contrast (contr.treatment).

In my particular case, I wanted to know the difference between each particular condition and the average response – none of the categories in my system should have been privileged over any of the others, and there wasn't anything like a “control” group, so comparing against the overall average is a reasonable thing to want to do, and indeed it is supported in R through the use of the sum contrast contr.sum. However, this reveals a slight technical problem: the overall average plus a difference for each category makes one more parameter than the (effective) number of categories; just as in simultaneous equations, this is a Bad Thing. (Technically, the system is underdetermined.) So, in solving the system, one of the differences is jettisoned; my problem was that I wanted to visualise that information for all the differences, whether or not the last one was technically redundant – particularly since I wanted to offer a guideline as to which differences were most strongly different from the average, and I would be out of luck if the most unusual one happened to be the one jettisoned. Obviously I could trivially compute the last difference, simply from the constraint that all the differences must sum to zero (and in fact dummy.coef does that for me); but what about its standard error?
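
By way of illustration, here's a toy example (made-up data, not anything from this analysis) of what contr.sum and dummy.coef give you:

  d <- data.frame(f = factor(rep(c("a", "b", "c"), each = 10)),
                  y = rnorm(30))
  m <- lm(y ~ f, data = d, contrasts = list(f = "contr.sum"))
  coef(m)        # intercept plus the k-1 fitted sum-contrast coefficients
  dummy.coef(m)  # all k per-level differences; the omitted one is minus the sum of the rest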

Enter se.contrast. This function allows the user to construct an arbitrary contrast, expressed most simply as a vector of per-observation contributions to that contrast, and ask an aov object for the standard error of that contrast. Some experimentation later, for a linear model m over len observations, a particular factor variable f, and a function class.ind constructing a matrix of class indicators (i.e. for a vector of observations, a matrix whose (i,j) entry is 1 if observation i came from condition j, and zero otherwise), I think that:


  anova <- aov(m)
  # class.ind: indicator matrix, 1 if observation i is in level j of f, 0 otherwise
  ci <- class.ind(data[[f]])
  ci <- ci[,colSums(ci) != 0]   # drop any empty levels
  # each column is the contrast "mean of level j minus overall mean": centre the
  # per-level averaging weights (columns of ci scaled by 1/n_j) about 1/len
  contrasts <- (diag(len) - (1/len)*matrix(rep(1,len*len), nrow=len)) %*%
               ci %*% diag(1/colSums(ci))
  ses <- se.contrast(anova, contrasts)
gives me a vector ses of the standard errors corresponding to the sum contrasts in my system, including the degenerate one. (As seems to be standard in this kind of endeavour, the effort per net line of code is huge; please do not think that I wrote these five lines of code off the top of my head. Thanks to denizens of the r-help mailing list and in particular to Greg Snow for his answer to my question about this).

So, this looks like total victory! Why have I described this as only a score draw? Well, because while the above recipe works for a single factor variable, in the case I am actually dealing with I have all sorts of interaction terms between factors, and between factors and numerical variables, and again I want to display and examine all the contrasts, not just some subset of them chosen so that the system of equations to solve is nondegenerate. This looked sufficiently challenging, and the analysis to be done looked sufficiently peripheral to the current business focus, that it's been shelved, maybe for a rematch in the new year.

My weekly diary schedule has already slipped! In my defence, last week was exceptional, because the major activity was neither entrepreneurial nor academic, but practical and logistical: moving house. A lengthy rant about the insanity of the English conveyancing system is probably not of great interest to readers of this diary, so I will save the accumulated feelings of helplessness and insecurity for some other outlet.

Meanwhile, back to work. Teaching starts next week; fortunately, I am teaching largely the same material as last year, so now is the time to reap the benefit of the preparation time spent on the course over the last two years. Inevitably, there will be new things to include and outdated material to remove or update, but by and large I should be able to deliver the same content.

This is a relief, because of course this year I only have one fifth of my time on academic-related activities. This means that various things have to be sacrificed or delegated, not least some of my extra-curricular activities such as being release manager of SBCL – so I'm very glad that Juho Snellman has volunteered to step in and do that for the next while. (He suffered the by-now traditional baptism of fire, dealing with amusing regressions late in the 1.0.42.x series, and released version 1.0.43 today; we'll see how his coefficient of grumpiness evolves over the next few months).

In the land of industry, what I've mostly been doing is drawing graphs. As the screenshot in my previous entry suggests, I'm using R for data processing and visualisation; I have datasets with large numbers of variables, and the facilities for visualising those quickly and compactly with the lattice package (implementing Becker and Cleveland's trellis paradigm) are very convenient. By and large, progressing from prototype visualisation to presentation- or publication-quality visualisation is also straightforward, but I spent so long figuring out one thing that I needed to do this week that I'll document it here for posterity: that thing was to construct a graph using lattice with an axis break. It's not that it's absurdly difficult – there are plenty of hookable or parameterisable functions in the lattice graph-drawing implementation; the difficult part is finding out which functions to override, which hooks to use, and which traps to avoid.

The problem, as I found it, is that when drawing a lattice plot, things happen in an order that is inconvenient for these purposes. First, the axes, tickmarks and labels are drawn, using the axis function provided to the lattice call (or axis.default by default); then the data are plotted using the panel function. So far, that would be fine; one could even hackily draw over the axis in the panel function to implement the axis break, at least if one remembers to turn clipping off with clip=list(panel="off") in par.settings. Except that the axis function doesn't actually draw the axis lines; instead, there's a non-overridable bit of plot.trellis which draws the box around the plot, effectively being the axis lines – and that happens after everything else.

So, piling hack upon hack: there's no way of not drawing the box. There is, however, a way of drawing the box with a line thickness of zero: pass axis.line=list(lwd=0) in par.settings as well. Ah, but then the tick marks have zero thickness too. Oh, but we can override that setting of axis.line$lwd within our custom axis function. (Each of these realisations took a certain amount of time, experimentation, and code reading to come to pass...). What it boils down to, in the end, is a call like


xyplot(gmeans.zoo,
       screens=1, col=c(2,3,4), lwd=2, lty=3, more=TRUE,
       ylim=c(0.5,1.7), scales=list(
                          x=list(at=dates, labels=date.labels),
                          y=list(at=c(0.5,1.0,1.5),
                            labels=c("- 50%", "± 0%", "+ 50%", "+ 100%"))),
       key=list(lines=list(col=c(2,3,4)),
         text=list(lab=c("5m", "500k", "galileo"))),
       xlab="Date",
       par.settings = list(axis.line=list(lwd=0),
         clip=list(panel="off", strip="off")),
       axis=function(side, scales, components, ...) {
         print(scales)
         lims <- current.panel.limits()
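          # restore a visible line width, overriding the lwd=0 from par.settings,
          # so that the tick marks drawn by panel.axis show up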
         trellis.par.set(axis.line=list(lwd=0.5))
         panel.axis(side=side, outside=TRUE, at=scales$at,
                    labels=scales$labels,
                    draw.labels=side %in% c("bottom", "left"), rot=0)
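          # redraw the box by hand: full-height verticals at each end, and top and
          # bottom lines interrupted either side of the break date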
         panel.lines(lims$xlim[[1]], lims$ylim, col=1, lwd=1)
         panel.lines(lims$xlim[[2]], lims$ylim, col=1, lwd=1)
         panel.lines(c(lims$xlim[[1]], as.Date("2010-09-11")+0.45),
                     lims$ylim[[1]], col=1, lwd=1)
         panel.lines(c(lims$xlim[[2]], as.Date("2010-09-11")+0.55),
                     lims$ylim[[1]], col=1, lwd=1)
         panel.lines(c(lims$xlim[[1]], as.Date("2010-09-11")+0.45),
                     lims$ylim[[2]], col=1, lwd=1)
         panel.lines(c(lims$xlim[[2]], as.Date("2010-09-11")+0.55),
                     lims$ylim[[2]], col=1, lwd=1)
       },
       panel=function(x,y,...) {
         xs <- current.panel.limits()$xlim
         ys <- current.panel.limits()$ylim
         panel.xyplot(x,y,...)
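          # white polygon blanks out the already-plotted data across the break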
         panel.polygon(as.Date("2010-09-11")+c(0.4,0.6,0.6,0.4),
                       c(ys[1]+0.05,ys[1]+0.05,ys[2]-0.05,ys[2]-0.05),
                       border="white", col="white", alpha=1)
         panel.lines(xs,1,col=1,lty=3)
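          # short diagonal strokes at top and bottom: the break marks themselves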
         panel.lines(as.Date("2010-09-11")+c(0.5,0.6),
                     c(ys[1]-0.025,ys[1]+0.025), col="black")
         panel.lines(as.Date("2010-09-11")+c(0.4,0.5),
                     c(ys[2]-0.025,ys[2]+0.025), col="black")
         panel.lines(as.Date("2010-09-11")+c(0.5,0.6),
                     c(ys[2]-0.025,ys[2]+0.025), col="black")
         panel.lines(as.Date("2010-09-11")+c(0.4,0.5),
                     c(ys[1]-0.025,ys[1]+0.025), col=1, lwd=1)
         panel.text(as.Date("2010-09-20"),
                    t(gmeans.zoo[1,])+c(0.01,0,-0.01),
                    sprintf("%2.0f%%", round(100*t(gmeans.zoo[1,]-1))),
                    pos=4)
       })

allows me to draw the picture, axis break and all, for us to show to interested parties.

New laptop video (intel 855GM) drivers holding up pleasantly well. One crash so far, from attempting to play a video; I haven't tried to reproduce it, since that's not something I do very often in any case. (Said video was of my own extremely minor contribution to UK televisual arts programming, so maybe my video drivers were wisely preventing me from narcissistic overexposure...)

Meanwhile, back in the land of sort-of-industrial research and development, I've been using (and simultaneously learning) R for analysing the meaty chunks of data that my colleagues are generating. A couple of my academic labmates were already R users, unashamedly using R for their data treatment needs, even when their data came from Lisp programs. Shocking, I know. So, when substantial datasets started landing in my lap a few months ago (not just from a group of people in mobile broadband; energy use data and computer usage metrics also crossed my desk), I decided that the time was ripe to learn some new tools and techniques.

Initial reactions: mostly positive. To help put my observations into context: I've dabbled in MATLAB before, and it's never quite stuck. The everything-is-a-matrix aspect was painful; graphical output was OK but nothing to write home about; and environmental support was pretty painful. (GNU Octave suffers from all of these problems too, and then some; it does have the significant advantage, from my point of view, that at least there isn't the attempted lock-in from purchasing a bare proprietary wrapper over BLAS and LAPACK with the option to buy yet more functionality wrapping BLAS and LAPACK in slightly different ways.) I've also used (a long, long time ago now) IDL, again a vector-oriented language; my memory of it is mostly faded, but I remember being satisfied with its graphing facilities and much more satisfied with an Emacs mode than with its default user interface. This was 1998; I dare say things would be much the same today...

By contrast, R has data types that are mostly comfortable to my inner Lisp programmer. Yes, number crunching is best done through vectors (matrices being a thin wrapper around vectors rather than a distinct language data type), but lists of vectors are fine, and are used to collect data into data.frames. It has a lightweight object system, with single dispatch on the class of the first argument; mind you, classes of objects are a pretty mutable concept in R, settable at arbitrary points in program execution (there's mostly no relationship between object class and object contents). Speaking of settable, there's the `<-` operator both for assignment and for mutation, like Common Lisp's setf. “Mutation” there might actually not be quite the right word; the evaluation semantics are mostly lexical binding and call-by-value, with the interpreter attempting to perform copy-on-write and deforestation optimizations. Speaking of interpreters, there's a reified environment and call stack at all stages of program execution, which makes the language mildly tricky to compile, particularly since environments are just about the only thing in the language which can be mutated; this aspect of the language has recently been the subject of discussion in the R community (and, would you believe it, one of the camps is advocating a rewrite of the engine using Common Lisp both as a model and as an implementation language, mentioning SBCL by name... sadly without increasing my citation count. Oh well.)
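
To make that a little more concrete, here is a tiny made-up example (the area generic and the circle/square classes are invented for illustration) of single dispatch and of how freely an object's class can be reassigned:

  # the generic dispatches on the class of its first argument
  area <- function(shape) UseMethod("area")
  area.circle <- function(shape) pi * shape$r^2
  area.square <- function(shape) shape$side^2

  s <- list(r = 1)
  class(s) <- "circle"   # class is just an attribute, settable at any point
  area(s)                # pi
  s$side <- 2
  class(s) <- "square"   # same object, new class, new behaviour
  area(s)                # 4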

In any case, once I saw the reified environments, and also spotted parse (the analogue of read) and eval, I started wondering whether the comint-based Emacs mode for R, Emacs Speaks Statistics, could be enhanced or replaced with some SLIME-like functionality. I was particularly interested in whether there was enough introspective capability to replace the default debugger, which I was not having much success in using at all. So I started digging into the documentation, finding first try, then tryCatch (OK, that's like handler-case, so far so normal), and then on the same help page found withCallingHandlers and withRestarts, direct analogues of handler-bind and restart-case. At that point it seemed logical, instead of trying to write some SLIME-like functionality for ESS, to simply write an R backend for SLIME. With some careful (ahem) adaptation of a stub backend Helmut Eller wrote for Ruby, I got cracking, and swankr now supports a SLIME REPL, SLIME scratch buffers, SLDB, the default inspector, and presentations.
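
For the curious, a minimal sketch of my own (the use.default restart and the toy condition are invented for illustration, and none of this is swankr code) showing how closely those operators mirror their Common Lisp counterparts:

  # withCallingHandlers ~ handler-bind: the handler runs without unwinding the stack;
  # withRestarts ~ restart-case: invokeRestart transfers control to the named restart
  withRestarts(
    withCallingHandlers(
      {
        warning("something looks off")
        "never reached"
      },
      warning = function(w) invokeRestart("use.default", 42)),
    use.default = function(value) value)
  # => 42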

Here's a teaser screenshot, demonstrating functionality not yet merged: image output for lattice objects, presented in the SLIME REPL, and usable in subsequent input. There's plenty in swankr that doesn't work, and plenty more that's buggy, but it might be good enough for the enthusiastic R/Lisp/Emacs crossover community to give it a try.

[Screenshot: Lattice Presentations]

(No, my screen is not that tall.)

In my new life as an itinerant entrepreneur, spending significant amounts of time both working from strange places and travelling on trains, my productivity depends at least in part on having a good development laptop. In my other current life lecturing in Creative Computing, I also need to be able to display reliably to a data projector. Neither of these was a problem until relatively recently.

For the last four years or so, after obtaining a recommendation from one of the masters of Linux and laptops, I've used an IBM/Lenovo X40: light, fairly rugged (it has survived at least one drop), almost full-sized keyboard, and – importantly – it fits into more or less any carrying equipment. However, the relative instability (which I am not the only one to experience) brought about by the move in Linux and X.org to kernel mode setting (KMS) was beginning to get worrying. The X40 has an Intel 855GM graphics card, which, since Intel is participating heavily in KMS support and development, means that new things get turned on early; I follow squeeze (Debian testing), which gives me some of the thrills and spills of being on the leading edge of consumer Linux development; and, most pertinently for the start of the new academic year, I've been suffering from odd display corruption on the VGA output.

In various combinations of Xorg and kernel versions, trying to work out a set of versions that work for me, I have experienced

  • intermittent (about once a day) X server crashes, leaving the hardware in a sufficiently inconsistent state that the X server refuses to start again (linux v2.6.33ish, xserver-xorg-video-intel 2.9.1)
  • failure to start the X server on boot at all (linux v2.6.33ish, xserver-xorg-video-intel 2.12.0+legacy1-1)
  • missing mouse cursor on X server start (linux v2.6.35, xserver-xorg-video-intel 2.9.1)
  • substantial (~0.5s) latency about once every 10 seconds, with kslowd or kworkqueue processes taking about 20% of the CPU (linux v2.6.35 and v2.6.36-rc3, xserver-xorg-video-intel 2.9.1)

The good news is that a large number of Intel-graphics-related fixes appeared in Linus' master branch yesterday, and at least some of these problems are fixed; with a module parameter poll=0 for the drm_kms_helper module, the 0.5s latencies are gone (put options drm_kms_helper poll=0 in a file in /etc/modprobe.d). The VGA corruption I was experiencing seems to have been fixed somewhere between Linux versions 2.6.33 and 2.6.35; I have hopes, too, that the X server crashes might be substantially less frequent (but I haven't been running a single kernel long enough to check yet). The one remaining issue that is definitely still present is the missing mouse cursor when X first starts; a suspend/resume cycle works around this fairly reliably for me. Thank you to all those who've worked on fixing these problems.

I'm particularly glad that all of this is just about sufficiently fixed, because the alternative would have been to get a new laptop, and as well as my natural disinclination to purchase new stuff when not strictly necessary (and let's face it, the initial phases of a startup are not usually those where money is abundant), it seems to be largely impossible to get a new one with the form factor of the X40: with the growing prevalence of widescreens, just about every modern laptop is substantially bigger than this one, and thus would not fit in some of my carrying equipment. I will just have to preserve this one as long as possible.

I'm on a train! A train, heading towards Christchurch, with an amusingly bad synthesized (at least, I hope it's synthesized) announcement of train stops.

Being on a train would not normally be worthy of a diary entry. However, I'm travelling not as part of my heretofore stable academic enquiries, but as a mostly-fledged member of the high-flying technological startup community. No, I haven't quite sold out; as of yesterday, I work four days a week for a bright-futured company in the general area of mobile and wireless broadband; the other working day will continue to be spent at Goldsmiths, where I retain most of my responsibilities (but not the management of the music-informatics OMRAS2 project, which finished at the end of August); this arrangement with Goldsmiths will continue for a year.

So, why? A number of factors came together to make this a very attractive opportunity. Firstly, as I've already mentioned, the research project whose technical side I was responsible for came to an end last month; although the project itself was interesting, and its existence was at least partly responsible for me having a permanent academic position, I think it's fair to say that there were all sorts of wetware headaches as a result. The end of the project was therefore a natural point to start afresh — and in particular, to try to focus on aspects of work that I enjoy. Secondly, my PhD students are largely getting towards the end of their studies; most should be submitting their dissertations in the next few months. While PhD students are usually an asset to an academic, it is true that they can take up significant amounts of time, particularly in the early stages; not having to guide any in the early stages of their research next year gives me the freedom to investigate other activities, and to try to broaden my experience. Thirdly, when the person who comes calling is as awesome as our CTO, and when the rest of the gang is full of names I recognize, well, I expect to learn a lot. (I already have!)

And of course, the fact that some of the underlying technology the company uses is software that I'm fairly familiar with is attractive; it would be good to show the world that there are more SBCL-using companies with a healthy business than just the poster child. If I'm lucky, the Free Software and Lisp-related content on this blog should noticeably increase, because although the academic life gives a huge freedom to think, it doesn't actually give a large amount of time to do very much with those thoughts; in this newly-acquired business role, as well as the inevitable all-hands pitching in until the small hours to make the demos work with bits of string and glue, I hope that there will be time to implement and reflect on interesting things.

It's been a while. It's in fact almost embarrassingly late for me to be blogging now about the 2010 European Lisp Symposium, which was over a month ago – in my defence, I point to the inevitable stress-related illness that follows excessive concentration on a single event, coupled with hilarious clashing deadlines at work and in my outside activities.

So, we cast our minds back to April. When I booked my flight to Lisbon, I deliberately chose not to fly with British Airways, on the basis that they were likely to strike. It turned out that the activities of British Airways cabin crew were going to be the least of the problems associated with getting to an international conference...

Yes, shortly after I booked my tickets, Eyjafjallajökull went *boom* and most of Europe's airspace closed for a week, with ongoing disruption for the best part of a month. There were moments during the disruption when I wondered whether there was an actual curse affecting ELS, but in the event things cleared up and there was only minimal disruption to delegates and speakers, both getting there and getting back.

But enough about transport! How was the symposium itself? Well, I enjoyed the programme – but given that I had as much control over it as events would allow me, perhaps that's not the most unbiased endorsement of quality. Still, I was entertained by all the keynote speakers, from the window into the business world opened by Jason Cornez of RavenPack, via the practical philosophy and view on history afforded by Kent Pitman, to the language experimentation and development in PLT Scheme (now Racket) described enthusiastically by Matthias Felleisen. Pascal Costanza's tutorial on parallel and concurrent programming was highly informative, and there was a good variety of technical talks.

It's often said, though, that the good stuff at conferences happens in between the items on the programme, and for that the local organization needs to be on the ball. I'm glad to say that António Leitão and Edgar Gonçalves, and their helpers, enabled a huge amount of interaction: lunch, coffee and tea breaks, and evening meals (including a fabulous conference banquet, but also a more informal dinner and a meal punctuated by Fado). I gather the excursion to Sintra on the Saturday was interesting; by then I was at the airport, looking nervously at the departure boards...

I enjoyed meeting and talking with many of the attendees; some whose names I knew but whose faces I didn't (and some whose names I knew because they shared them with other people who I knew already: take a bow, both Luís Oliveiras); but with always one eye on the next thing that could go wrong, I didn't get to go very deeply into interesting conversations. Maybe next year, in Hamburg for ELS2011 (expect the paper deadline to be around 31st December 2010) – in the meantime, there's likely to be a journal special issue with an open call for papers, coming soon, and of course the ALU are holding an International Lisp Conference this year, whose call for papers is currently open. So get writing!

Many people will have heard that Nick Levine's health has meant that he had to withdraw from giving a talk at the upcoming European Lisp Symposium; I wish him a speedy and comfortable recovery.

I'm very glad to be able to announce that Jason Cornez of RavenPack International has agreed, at very short notice, to give a talk at the Symposium: Reading the News with Common Lisp. The abstract for his talk is:

The financial industry thrives on data: oceans of historical archives and rivers of low-latency, real-time feeds. If you can know more, know sooner, or know differently, then there is the opportunity to exploit this knowledge and make money. Today's automated trading systems consume this data and make unassisted decisions to do just that. But even though almost every trader will tell you that news is an important input into their trading decisions, most automated systems today are completely unaware of the news – some data is missing. What technology is being used to change all this and make news available as analytic data to meet the aggressive demands of the financial industry?

For around seven years now, RavenPack has been using Common Lisp as the core technology to solve problems and create opportunities for the financial industry. We have a revenue-generating business model where we sell News Analytics – factual and sentiment data extracted from unstructured, textual news. In this talk, I'll describe the RavenPack software architecture with special focus on how Lisp plays a critical role in our technology platform, and hopefully in our success. I hope to touch on why we at RavenPack love Lisp, some challenges we face when using Lisp, and perhaps even some principles of successful software engineering.

Many thanks to Jans Aasman and Craig Norvell of Franz Inc., as well as to Jason and RavenPack, for making this possible.

ELS2010 Call for Participation

May 6-7, 2010, Fundação Calouste Gulbenkian, Lisbon, Portugal

Registration for the 3rd European Lisp Symposium (ELS 2010) is open at the Symposium website.

Scope and Programme Highlights

The purpose of the European Lisp Symposium is to provide a forum for the discussion of all aspects of the design, implementation and application of any of the Lisp dialects. We encourage everyone interested in Lisp to participate.

As well as presentations of the accepted technical papers and tutorials, the programme features the following highlights:

  • Kent Pitman of HyperMeta Inc. will offer reflections on Lisp Past, Present and Future;
  • Jason Cornez of RavenPack International will talk on the use of Lisp to read and analyse news sources;
  • Pascal Costanza will lead a tutorial session on Parallel Programming in Common Lisp;
  • Matthias Felleisen of PLT will talk about languages for creating programming languages;
  • A TI Explorer Lisp Machine, having been unplugged for the best part of two decades, will be demonstrated;
  • there will be opportunities for attendees to give lightning talks and demos of late-breaking work.

Social events

  • Symposium banquet (included with registration)

  • Excursion to Sintra (optional, Saturday May 8): for six centuries the favourite Summer residence of the Kings of Portugal, who were attracted by cool climates and the beauty of the town's setting.

Registration

Registration is open at http://www.european-lisp-symposium.org/ and costs €200 (€120 for students).

Registration includes a copy of the proceedings, coffee breaks, and the symposium banquet. Accommodation is not included.

A little news regarding the upcoming European Lisp Symposium is enough excuse to repost the Call for Participation, particularly since the Early Registration Deadline is approaching (it's this Thursday, 22nd April). António Leitão has managed to resurrect an Explorer Lisp Machine, cannibalising parts from a second, and some of the applications from yesteryear (or rather two decades ago) will be demonstrated at the Symposium.

ELS2010 Call for Participation

May 6-7, 2010, Fundação Calouste Gulbenkian, Lisbon, Portugal

Registration for the 3rd European Lisp Symposium (ELS 2010) is now open at the Symposium website. The early registration period lasts until Thursday, 22nd April.

Scope and Programme Highlights

The purpose of the European Lisp Symposium is to provide a forum for the discussion of all aspects of the design, implementation and application of any of the Lisp dialects. We encourage everyone interested in Lisp to participate.

As well as presentations of the accepted technical papers and tutorials, the programme features the following highlights:

  • Kent Pitman of HyperMeta Inc. will offer reflections on Lisp Past, Present and Future;

  • Pascal Costanza will lead a tutorial session on Parallel Programming in Common Lisp;

  • Matthias Felleisen of PLT will talk about languages for creating programming languages;

  • A TI Explorer Lisp Machine, having been unplugged for the best part of two decades, will be demonstrated;

  • there will be opportunities for attendees to give lightning talks and demos of late-breaking work.

Social events

  • Symposium banquet (included with registration)
  • Excursion to Sintra (optional, Saturday May 8): for six centuries the favourite Summer residence of the Kings of Portugal, who were attracted by cool climates and the beauty of the town's setting.

Programme Chair

Christophe Rhodes, Goldsmiths, University of London, UK

Local Chair

António Leitão, Technical University of Lisbon, Portugal

Programme Committee

  • Marco Antoniotti, Università Milano Bicocca, Italy
  • Giuseppe Attardi, Università di Pisa, Italy
  • Pascal Costanza, Vrije Universiteit Brussel, Belgium
  • Irène Anne Durand, Université Bordeaux I, France
  • Marc Feeley, Université de Montréal, Canada
  • Ron Garret, Amalgamated Widgets Unlimited, USA
  • Gregor Kiczales, University of British Columbia, Canada
  • António Leitão, Technical University of Lisbon, Portugal
  • Nick Levine, Ravenbrook Ltd, UK
  • Scott McKay, ITA Software, Inc., USA
  • Peter Norvig, Google Inc., USA
  • Kent Pitman, PTC, USA
  • Christian Queinnec, Université Pierre et Marie Curie, France
  • Robert Strandh, Université Bordeaux I, France
  • Didier Verna, EPITA Research and Development Laboratory, France
  • Barry Wilkes, Citi, UK
  • Taiichi Yuasa, Kyoto University, Japan

Registration

Registration is open at http://www.european-lisp-symposium.org/ and costs €120 (€60 for students) until 22nd April, and €200 (€120 for students) afterwards.

Registration includes a copy of the proceedings, coffee breaks, and the symposium banquet. Accommodation is not included.

Things are beginning to take shape for the European Lisp Symposium (May 6-7, Lisbon): I'm glad to say that the review process is over and the preliminary programme is now available. The symposium website has also undergone a facelift: out with the spartan, old-school look, and in with the tasteful and functional; Edgar Gonçalves put the new look and content together in very short order, so thanks to him; thanks also to the event's supporters.

The final organizational details are being sorted out, and registration should be open very soon.

