Older blog entries for danstowell (starting at number 30)

12 Aug 2012 (updated 12 Aug 2012 at 20:10 UTC) »

The Olympics can be both successful AND unjust ...lessons for Rio?

There was a lot of negative press in the run-up to the Olympics, with the G4S security cockup, the ever-inflating costs, and the Zil lanes. But after a spectacular opening ceremony, nice weather and some lovely stories of Olympics winners, the mood has swung really positive.

This has led to a major anti-whingeing press offensive. For example this Independent article described "a great festival of pre-emptive whingeing followed by people having a surprisingly good time" and concluded "It is time for the country to put its doubts to one side." Of course Boris Johnson put it more pithily when he wrote in The Sun "Oh come off it, everybody - enough whimpering."

The main argument of the anti-whingeing offensive is to point out how well it's all gone: no public transport problems, no security calamities, and so on. (Let's put aside for now the fact that all London businesses outside the olympic venues have had a surprising drought of business, which is a bit of a fail that I guess the London organisers could have helped prevent.) But there's an unfortunate problem with this: most of the nay-sayers did not say the Olympics would fail, they said that the Olympics was a bad deal, in both the moral and the financial senses.

Not every criticism has equal weight. Personally, I don't think it's particularly awful that some commuter routes and cycle routes are disrupted, or that police are brought in from the counties to help police the event, for example. We need to take a balanced view of things: we recognise that London changes a bit when the world's biggest sport festival is happening here.

But that doesn't mean that we should sacrifice our civil liberties on the altar of the event. We have British values such as liberty, decency and fair-mindedness, which we believe are reflected in the way we run our policing etc. If we don't make sure the Olympics lives up to these values when it's here, what chance do we have of helping it behave that way when it's in other countries?

So yes, it's good that lots of people are enjoying a smooth-running Olympics, and the British medal success has made it a particularly jolly affair for the country. But the end does not justify the means, and for the purposes of memory I'd like to summarise three types of unpleasantness that were completely unnecessary to build into the Olympics:

  1. The sponsors' dictatorship and the militant denial of small businesses' / communities' participation
  2. Security clampdowns and chilling effects on civil liberties
  3. Financial weaselings

But the most important thing right at this moment is what lessons we can pass on to Rio, which I'll consider at the end of this post.

1: The sponsors' dictatorship

The sponsors' dictatorship has been extensively covered in the press. For example, an Oxfordshire farm supplying local produce is not only banned from mentioning its association with the Olympics now, but banned from doing so for the next ten years. (Source). (Similarly for the UK plastics industry apparently, and they're not happy about it either.)

Butchers have been banned from displaying Olympic rings made of sausages, pubs warned they can't advertise the fact that they're showing the Olympics on telly, and cafes banned from selling a "flaming torch breakfast baguette" (source). This is all supposed to be to protect the sponsors' investment - but the rather extreme policing is completely out of proportion to the following fact:

The money from sponsorship covered a tiny fragment of the costs of the Olympics: around 7%.
(Data sources: Parliamentary Accounts Committee, Sponsors list at Guardian)

The suppression of UK businesses' benefiting from the Olympics in perfectly ordinary ways is a refusal to recognise this balance. The amount of money coming from UK taxpaying businesses is comparable with the amount coming from the official sponsors (since around 12% of UK tax income comes directly from businesses). So this heavy-handedness should not be tolerated. As for the sponsors, maybe their participation is fine (though some argue they're not even needed - that the 7% could have been trimmed or found elsewhere, and of course the brand-policing costs would have evaporated immediately). But even assuming we don't mind sponsorship, we should accept it without trampling over local businesses' ordinary grassroots involvement.

Incidentally, Jeremy Hunt (the government minister for the Olympics) has made very different pronouncements on the importance of the sponsors: "the sponsors are actually paying for about half of the cost of hosting the Olympics." This is a direct quote from him speaking on Radio 4 (2012-07-23 13:32, World At One), but it is so clearly out of line with the known figures that I can't think of any explanation other than it's propaganda designed to blunt opposition to the olympic brand police, which was the issue under discussion at the time.

A large cause of the branding strictness is ambush marketing, where rival non-sponsors essentially embarrass the sponsors by smuggling their own branding opportunities into the event. So the IOC introduced rules (as did FIFA and other such bodies) to try and prevent this. Unfortunately the consequences have swung far too far in the sponsors' favour.

One thing that seems crazy to me is that you're not allowed to use non-Visa cards to buy Olympic tickets, and ATMs have been forced to close, replaced with a (smaller) number of Visa-only machines. This is actually very different from sponsorship - it means that you cannot be a spectator at the games without signing up with Visa. You can keep out of the McDonalds, you can frown at Dow Chemicals, but you simply won't be allowed in without a Visa card. Why hasn't this been mentioned more often in the media? It's an abusive monopoly.


2: Security clampdowns and chilling effects on civil liberties

There are already "chilling effects" described above in the sponsor/logo ridiculousness: for example, the police wasted time emptying their crisps into clear plastic bags in case they ired the sponsors. But much worse than this is the chilling effect on free speech and other civil rights.

Very notable has been the intimidation and disruption of photographers. Many different people have reported that G4S staff, and sometimes the police, have stopped journalists taking pictures in a public place, citing fairly vague security concerns. The Guardian has a video of it happening.

But it's not just photographers:

  • People who had done nothing wrong at all were arrested, had their homes raided and were forbidden to use public transport. These were ex-graffiti artists arrested just in case. Unfortunately, this pre-emptive rounding up of people who "might" cause trouble was recently supported by the High Court, which dismissed complaints about police arresting various people who had done nothing wrong, just in case they caused controversy on the day of the royal wedding. It's difficult for the police to respond to modern phenomena such as flashmobs, but this sounds like plain injustice to me: people who have done nothing wrong detained on the basis of their political beliefs. There's a delicate balance needed here, to protect people's democratic rights.
  • Groups of two or more people (or young people at night) are banned from the Stratford area under specific regulations which I guess must have quite an impact on people who live in Stratford. "A Met spokesman admitted that previous dispersal orders in the area had transferred problems to other areas, but said the force hoped it would not happen this time."
  • A woman received a police warning for a Facebook joke about squirting the Olympic flame with a water pistol.
  • A man in Surrey was arrested (later freed without charge) "based on his manner, his state of dress and his proximity to the [cycling] course". The press summarised it as arrested in Leatherhead for 'not smiling'.

This kind of heavy-handedness is used not only to reduce the chance of flashmobs and other disruption, but also to suppress dissent. The absolute peak of this: police even arrested three protestors for pouring custard over their own heads in Trafalgar Square. As a distilled example of the British spirit right now - critical awareness and up-yours spirit paired with the chilling effects of policing heavy-handedness - I don't know how anyone can top this.


3: Financial weaselings

I don't need to reinforce the fact that the costs of the Olympics ballooned beyond original estimates. That's almost inevitable. And no, no-one expects that money to come back in increased tourism and business - it hasn't done that in the past and that was never the point.

I just want to mention some of the smaller-scale unfairnesses that sneaked under the radar a little:

There's also the issue that land used for Olympic developments is being developed with public money yet ending up in private hands. The Olympic Park will turn into the "Queen Elizabeth Olympic Park", but it will not be a Royal Park (not managed by the Royal Parks Agency), and I'm not entirely clear on how the ownership is managed, but a lot of it will be technically private (source). And the Olympic village has already been sold to a company which will use it for private rental housing, apparently at a loss to the taxpayer. Hm. (The loss-making side of this venture is at least in part a consequence of the economic crash.)


What lessons can we pass on to Rio?

The Olympics is a strange travelling carnival, but it's one that has unprecedented power: it can reroute an entire capital city, it can force countries to pass new laws, and it can consume billions of pounds from the taxpayer.

But because it is travelling, it is largely unaccountable. Even having learnt from our experiences, we Londoners and our elected representatives are in no position to change the way the Olympics works - for example, to rebalance the sponsorship barminess.

Rio is the site for the next games. I guess they've already undertaken to pass a bizarre olympic logos-and-words-and-sponsors law. But can the citizens of Rio learn from us? Would we British not have said anything, if we'd noticed when a law was being passed that banned small businesses from offering an "olympic breakfast" or using the words "Summer" and "Gold" in promotional material? And will the number of gold medals we won really make us forget the heavy-handed and half-competent security clampdown we've been through, with its chilling effects on journalism, free speech, local business and civil liberties?

(I should acknowledge that there's no reason to think the 2012 Olympics have been particularly "worse" than, say, the 2008 Olympics. I don't know much about the Beijing organisation, but there are stories of communities being moved wholesale out of their homes etc.)

It would be a big shame if the lesson for Rio is "don't worry about any criticism - as long as you can pull it off no-one minds how you do it." How can we help Rio, and future olympic venues, to have an enjoyable games without compromising their values to the unaccountable juggernaut that carries it along?

Syndicated 2012-08-12 09:46:51 (Updated 2012-08-12 15:44:50) from Dan Stowell

Installing Cyanogenmod 7 on HTC Tattoo

Just notes for posterity: I am installing Cyanogenmod on my HTC Tattoo Android phone.

There are instructions here which are perfectly understandable if you're comfortable with a command line. But the instructions are incomplete. As has been discussed here, I needed to install "tattoo-hack.ko" in order to get the procedure to complete (at the flash_image step I got an "illegal instruction" error). Someone on the wiki argues that you don't need that step if you're upgrading from stock Tattoo - but I'm upgrading from stock Tattoo and I needed it.

In the end, as advised in the forum thread linked above I followed bits of these instructions and used tattoo-hack.ko as well as the different versions of "su" and "flash_image" linked from that forum thread, NOT the versions in the wiki.

Also, there's an instruction in the wiki that says "reboot into recovery mode". It would have been nice if it had spelt that out for me. The command to run is

         adb reboot recovery

Syndicated 2012-08-08 18:11:14 from Dan Stowell

Some things I have learnt about optimising Python

I've been enjoying writing my research code in Python over the past couple of years. I haven't had to put much effort into optimising it, so I never bothered, but just recently I've been working on a graph-search algorithm which can get quite heavy - there was one script I ran which took about a week.

So I've been learning how to optimise my Python code. I've got it running roughly 10 to 20 times faster than before, which is definitely worth it. Here are some things I've learnt:

  • The golden rule is to profile your code before optimising it; don't waste your effort. The cProfile module is surprisingly easy to use, and it shows you where your code is using the most CPU. Here's an example of what I do on the commandline:

    # run your heavy code for a good while, with the cProfile module logging it ('streammodels.py' is the script I'm profiling):
    python -m cProfile -o scriptprof streammodels.py
    # then to analyse it:
    python
    import pstats
    p = pstats.Stats('scriptprof')
    p.strip_dirs().sort_stats('cumulative').print_stats(30)
    p.strip_dirs().sort_stats('time').print_stats(30)
    
  • There are some lovely features in Python which are nice ways to code, but once you want your code to go fast you need to avoid them :( - boo hoo. It's a bit of a shame that you can't tell Python to act as an "optimising compiler" and automatically do this stuff for you. But here are two key things to avoid:

    • list-comprehensions are nice and pythonic but they're BAD for fast code because they build a whole list even if you don't need one. Instead, use lazy alternatives such as generator expressions or itertools' imap()/ifilter() (note that plain map() and filter() in Python 2 also build full lists), which process items without constructing a new list. (Also use "xrange" rather than "range" if you just want to iterate over a range rather than keeping the resulting list.)
    • lambdas and locally-defined functions (by which I mean something like "def myfunc():" as a local thing inside a function or method) are lovely for flexible programming, but when your code is ready to run fast and solid, you will often need to replace these with more "ordinary" functions. The reason is that you don't want these functions constructed afresh every time you use them; you want them constructed once and then just used.
  • Shock for scientists and other ex-matlabbers: using numpy isn't necessarily a good idea. For example, I lazily used numpy's "exp" and "log" when I could have used the math module and avoided dragging in the heavy array-processing machinery I didn't need. After I changed my code to not use numpy at all (I didn't need it - I wasn't really doing array/matrix maths in this particular code), it ran much faster. (There's a quick illustration of this after the list.)

  • Cython is easy to use and speeds up your Python code by turning it into C and compiling it for you - who could refuse? You can also add static type declarations to speed it up even more, but that makes it no longer pure Python code, so ignore that until you need it.

  • Name-lookups are apparently expensive in Python (though I don't think the profiler really shows the effect of this, so I can't tell how important it is). There's no harm in storing something in a local variable -- even a function, e.g. "detfunc = scipy.linalg.det" (see the sketch just below this list).
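To make a couple of those points concrete (the math-vs-numpy one and the name-lookup one), here's a rough micro-benchmark sketch using the standard timeit module. It's just an illustration rather than the profiling from my actual project, and the exact numbers will vary from machine to machine:

    import timeit

    # scalar maths: math.exp is much lighter than numpy.exp on plain floats
    print(timeit.timeit("exp(1.234)", setup="from math import exp", number=1000000))
    print(timeit.timeit("exp(1.234)", setup="from numpy import exp", number=1000000))

    # name lookups: binding scipy.linalg.det to a local name avoids repeated attribute lookups
    common = "import scipy.linalg; from numpy import eye; m = eye(3)"
    print(timeit.timeit("scipy.linalg.det(m)", setup=common, number=100000))
    print(timeit.timeit("detfunc(m)", setup=common + "; detfunc = scipy.linalg.det", number=100000))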

So now I know these things, my Python runs much faster. I'm sure there are many more tricks of the trade. For me as a researcher, I need to balance the time saved by optimising against the flexibility to change the code on a whim and to be able to hack around with it, so I don't want to go too far down the optimisation rabbit-hole. The benefit of Python is its readability and hackability. It's particularly handy, for example, that Cython can speed up my code without me having to make any weird changes to it.

Any other top tips, please feel free to let me know...

Syndicated 2012-07-17 05:26:01 (Updated 2012-07-17 05:37:52) from Dan Stowell

13 Jul 2012 (updated 17 Jul 2012 at 12:11 UTC) »

A very simple toy problem for matching pursuits

To help me think about how and why matching pursuits fail, here's a very simple toy problem which defeats matching pursuit (MP) and orthogonal matching pursuit (OMP). [[NOTE: It doesn't defeat OMP actually - see comments.]]

We have a signal which is a sequence of eight numbers. It's very simple, it's four "on" and then four "off". The "on" elements are of value 0.5 and the "off" are of value 0; this means the L2 norm is 1, which is convenient.

  signal = array([0.5, 0.5, 0.5, 0.5, 0, 0, 0, 0])

Diagram of signal

Now, we have a dictionary of 8 different atoms, each of which is again a sequence of eight numbers, again having unit L2 norm. I'm deliberately constructing this dictionary to "outwit" the algorithms - not to show that there's anything wrong with the algorithms (because we know the problem in general is NP-hard), but just to think about what happens. Our dictionary consists of four up-then-down atoms wrapped round in the first half of the support, and four double-spikes:

  dict = array([
   [0.8, -0.6, 0, 0, 0, 0, 0, 0],
   [0, 0.8, -0.6, 0, 0, 0, 0, 0],
   [0, 0, 0.8, -0.6, 0, 0, 0, 0],
   [-0.6, 0, 0, 0.8, 0, 0, 0, 0],
   [sqrt(0.8), 0, 0, 0, sqrt(0.2), 0, 0, 0],
   [0, sqrt(0.8), 0, 0, 0, sqrt(0.2), 0, 0],
   [0, 0, sqrt(0.8), 0, 0, 0, sqrt(0.2), 0],
   [0, 0, 0, sqrt(0.8), 0, 0, 0, sqrt(0.2)],
]).transpose()

Diagram of dictionary atoms

BTW, I'm writing my examples as very simple Python code with Numpy (assuming you've run "from numpy import *"). We can check that the atoms are unit norm, by getting a list of "1"s when we run:

  sum(dict ** 2, 0)

So, now if you wanted to reconstruct the signal as a weighted sum of these eight atoms, it's a bit obvious that the second lot of atoms are unappealing because the sqrt(0.2) elements are sitting in a space that we want to be zero. The first lot of atoms, on the other hand, look quite handy. In fact an equal portion of each of those first four can be used to reconstruct the signal exactly:

  sum(dict * [2.5, 2.5, 2.5, 2.5, 0, 0, 0, 0], 1)

That's the unique exact solution for the present problem. There's no other way to reconstruct the signal exactly.
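Since this toy dictionary happens to be square and full-rank, you can confirm that by just solving the linear system directly - not something you can normally do with a genuinely overcomplete dictionary, but handy as a sanity check here:

  from numpy.linalg import solve
  solve(dict, signal)   # gives [2.5, 2.5, 2.5, 2.5, 0, 0, 0, 0] (up to rounding)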

So now let's look at "greedy" matching pursuits, where a single atom is selected one at a time. The idea is that we select the most promising atom at each step, and the way of doing that is by taking the inner product between the signal (or the residual) and each of the atoms in turn. The one with the highest inner product is the one for which you can reduce the residual energy by the highest amount on this step, and therefore the hope is that it typically helps us toward the best solution.

What's the result on my toy data?

  • For the first lot of atoms the inner product is (0.8 * 0.5) + (-0.6 * 0.5) which is of course 0.1.
  • For the second lot of atoms the inner product is (sqrt(0.8) * 0.5) which is about 0.4472.

To continue with my Python notation you could run "sum(dict.T * signal, 1)". The result looks like this:

  array([ 0.1,  0.1,  0.1,  0.1,  0.4472136,  0.4472136,  0.4472136,  0.4472136])

So the first atom chosen by MP or OMP is definitely going to be one of the evil atoms - more than four times better in terms of the dot-product. (The algorithm would resolve this tie-break situation by picking one of the winners at random or just using the first one in the list.)

What happens next depends on the algorithm. In MP you subtract (winningatom * winningdotproduct) from the signal, and this residual is what you work with on the next iteration. For my purposes here it's irrelevant: both MP and OMP are unable to throw away this evil atom once they've selected it, which is all I needed to show. There exist variants which are allowed to throw away dodgy candidates even after they've picked them (such as "cyclic OMP").
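For concreteness, here's a minimal sketch of the plain MP loop on this toy data (assuming "dict" and "signal" are defined as above, with numpy imported as before). It's just an illustration, but it's enough to see that one of the evil atoms gets picked first and then stays in:

  residual = signal.copy()
  coefs = zeros(8)
  for iteration in range(4):
      dots = sum(dict.T * residual, 1)   # inner product of each atom with the residual
      winner = argmax(abs(dots))         # greedy choice: index 4, an "evil" atom, wins round one
      coefs[winner] += dots[winner]
      residual = residual - dots[winner] * dict[:, winner]
      print(iteration, winner, sum(residual ** 2))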

NOTE: see the comments for an important proviso re MP.

Syndicated 2012-07-13 10:37:44 (Updated 2012-07-17 07:25:59) from Dan Stowell

13 Jul 2012 (updated 15 Jul 2012 at 11:14 UTC) »

A* orthogonal matching pursuit - or is it?

The delightful thing about the A* routing algorithm is that it is provably the optimal algorithm for the purpose, in the sense that it's the algorithm that visits the fewest possible path nodes given the information made available. See the original paper for proof. Despite its simplicity, it is apparently still used in a lot of industrial routing algorithms today, and it can be adapted to help solve other sorts of problem.

A colleague pointed out a paper about "A*OMP" - an algorithm that performs a kind of OMP (Orthogonal Matching Pursuit) with a tree search added to try out different paths towards fitting a good sparse representation. "Aha," I thought, "if they can use A* then they can get some nice recovery properties inherited from the A* search."

However, in reading the paper I find two issues with the A*OMP algorithm which make me reluctant to use the name "A*" for it:

  1. The heuristics used are not "consistent" (this means you can't guarantee they are always less-than-or-equal to the true distance remaining). This lack of consistency means the proof of A*'s optimality doesn't apply. (Remember, A*'s "optimality" is about the number of nodes inspected before finding the best path.) (EDIT: a colleague pointed out that it's actually worse than this - if the heuristic isn't consistent then it's not just sub-optimal search, it may fail to inspect the best path.)
  2. Since A*OMP restricts the number of paths it adds (to the top "B" atoms having the largest inner product with the residual), there are no guarantees that it will even inspect the true basis.

These issues are independent of each other. If you leave out the pragmatic restriction on the number of search paths (to get round the second issue), the first issue still applies. OMP itself is greedy rather than exact, so this doesn't make A*OMP worse than OMP, but to my mind it's "not as good as A*".

In practice, the authors' A*OMP algorithm might indeed get good results, and the experiments shown seem to do so. So my quibbles may be mere quibbles. But the name "A*" led me to expect guarantees that just aren't there (e.g. guarantees of being better than OMP). It's quite easy to construct a toy problem for which A*OMP will not get you nearer the true solution than OMP will.

It's not obvious how to come up with a consistent heuristic. For a given problem, if we knew there was an exact solution (i.e. that zero residual was possible within the sparsity constraints) then we could use the residual energy, but since we can't know that in general, the residual energy may overestimate the distance still to be "travelled" to the goal.

One minor thing: their "equivalent path pruning" in section 4.2.3 is a bit overkill - I know a simpler way to avoid visiting duplicate paths. I'll leave that as an exercise for the reader :)

Syndicated 2012-07-13 06:38:06 (Updated 2012-07-15 06:50:08) from Dan Stowell

24 Jun 2012 (updated 24 Jun 2012 at 14:08 UTC) »

Rhubarb in the hole

I've been experimenting with rhubarb, looking at nice ways to cook it in the oven without stewing it first (so it keeps its shape). Here's a rather nice one: rhubarb in the hole.

These amounts serve 2. Should work fine if you double the amounts - you just need a roasting tin big enough to sit all the rhubarb in with plenty of space.

  • 2 medium sticks rhubarb
  • 5 or 6 heaped tbsp ginger preserve (ginger jam)
  • 3 tsp caster sugar
  • Oil or marge for greasing
  • For the batter:
    • 1 egg
    • 1/2 cup (125 ml) milk
    • 50 g (1/2 cup) plain flour
    • Pinch of salt
    • 3 tsp caster sugar

In a mixing jug, beat the egg then add the milk and beat again. Add the salt and 3 tsp sugar and beat again, then gradually whisk in the flour to make a medium-thick batter. (This is normal Yorkshire pudding batter but slightly sweetened.)

If you can, leave the batter to sit for about 30 mins, then whisk again.

Put the oven on at 200 C. Grease the roasting tin / Yorkshire-pudding tin and put it in the oven to preheat.

Rinse the rhubarb and cut it into 8cm (3in) pieces. On a chopping board or in a bowl, mix the rhubarb pieces with the ginger preserve to get them nice and evenly coated. Sprinkle the other 3 tsp sugar onto them.

Assembling the pudding can be a bit tricky - mainly because you want to work fast, so the tin stays hot. Get it out of the oven, then wobble it around to make sure the grease is evenly spread just before you pour in the batter. Then place the rhubarb pieces in (probably best to do it one-by-one) so they're evenly spread but they're all a good couple of centimetres away from the edge.

Immediately put this all back in the oven and bake for 20-25 minutes. The batter should rise and get a nice golden-brown crust.

You can serve it on its own, or with a small amount of ice cream maybe.

(I had expected it to be a bit dry on its own, but actually the fruit keeps it moist so the ice cream isn't compulsory.)

Syndicated 2012-06-24 09:07:36 (Updated 2012-06-24 09:30:22) from Dan Stowell

9 Jun 2012 (updated 22 Jun 2012 at 08:09 UTC) »

My research about time and sound

I've got three papers accepted in conferences this summer, and they're all different angles on the technical side of how we analyse audio with respect to time.

In our field, time is often treated in a fairly basic way, for reasons of tractability. A common assumption is the "Markov assumption" that the current state only depends on the immediate past - this is really handy because it means our calculations don't explode with all the considerations of what happened the day before yesterday. It's not a particularly hobbling assumption - for example, most speech recognition systems in use have the assumption in there, and they do OK.

It's not obvious whether we "need" to build systems with complicated representations of time. There is some good research in the literature already which does, with promising results. And conversely, there are some good arguments that simple representations capture most of what's important.

Anyway, I've been trying to think about how our signal-processing systems can make intelligent use of the different timescales in sound, from the short-term to the long-term. Some of my first work on this is in these three conference talks I'm doing this summer, each on a different aspect:

  1. At EUSIPCO I have a paper about a simple chirplet representation that can do better than standard spectral techniques at representing birdsong. Birdsong has lots of fast frequency modulation, yet the standard spectral approaches assume "local stationarity" - i.e. they assume that within a small-enough window, we can treat the signal as unchanging. My argument is that we're throwing away information at this point in the analysis chain, information that for birdsong at least is potentially very useful.

  2. At MML I have a paper about tracking multiple pitched vibrato sounds, using a technique called the PHD filter which has already been used quite a bit in radar and video tracking. The main point is that when we're trying to track multiple objects over time (and we don't know how many objects), it's suboptimal to just take a model that deals with one object and apply the model multiple times. You benefit from using a technique that "knows" there may be multiple things. The PHD filter is one such technique, and it lets you model things with a linear-gaussian evolution over time. So I applied it to vibrato sources, which don't have a fixed pitch but oscillate around. It seems (in a synthetic experiment) the PHD filter handles them quite nicely, and is able to pull out the "base" pitches as well as the vibrato extent automatically. The theoretical elegance of the filter is very nice, although there are some practical limitations which I'll mention in my talk.

  3. At CMMR I have a paper about estimating the arcs of expression in pianists' tempo modulations. The paper is with Elaine Chew, a new lecturer in our group who works a lot with classical piano performance. She has had students before working on the technical question of automatically identifying the "arcs" that we can see visually in expressive tempo modulation. I wanted to apply a Bayesian formulation to the problem, and I think it gives pretty nice results and a more intuitive way to specify the prior assumptions about scale.

So all of these are about machine learning applied to temporal evolution of sound, at different timescales. Hope to chat to some other researchers in this area over the summer!

Syndicated 2012-06-09 07:18:44 (Updated 2012-06-22 03:52:09) from Dan Stowell

Asparagus with garlic mayonnaise

Classic combination this, but when asparagus is in season it's a lovely light lunch to base around a handful of asparagus. Serves 2, takes 10 minutes, and NB it involves a raw egg so it's not for anyone pregnant.

  • 1 bunch fresh asparagus, rinsed
  • 2 slices bread
  • 1 egg
  • About 1/2 tsp garlic puree
  • About 1 tbsp white wine vinegar
  • About 1/2 tsp mustard powder
  • Olive oil
  • Small amount of pecorino or parmesan cheese

Put a griddle pan on a high heat to warm up. Put the bread in the toaster on a light setting. (The plan is to ignore when it pops up and leave it in the toaster while preparing the other stuff, which gets the bread a bit dry and crispy which complements the other stuff well.) Separate the egg, putting the yolk in a small mixing bowl, and put the white in the fridge/freezer (we don't need it). Shave/slice the cheese finely.

Put the asparagus in the hot griddle pan (lying perpendicular to the lines of the pan). Sprinkle a small amount of oil onto the asparagus. Let it cook for 5--7 minutes or so, turning halfway through, and meanwhile make the mayonnaise as follows.

Add the mustard powder, vinegar and garlic to the yolk, and mix it all in with a whisk. Add a tiny drop of olive oil and whisk that in until it's well blended. Add another bit of olive oil and whisk it in again. Keep doing this until you've added about 2 tbsp oil (after the first couple of drops, you can start adding slightly larger amounts at a time). Taste it, and adjust the flavour if wanted. Keep whisking and adding a dab more oil until it reaches a slightly gelatinous mayonnaisey consistency.

When the asparagus is almost ready, put the toaster down and let it warm the bread up for another 20 seconds or so. Then slice the toast in halves, and arrange each plate with toast and asparagus, sprinkling the cheese over the asparagus and then adding a blob-or-drizzle of mayonnaise over.

Syndicated 2012-06-03 05:56:08 (Updated 2012-06-03 05:58:43) from Dan Stowell

Rustic green lentils

A simple lentil supper that happens to have a nice variegation of good strong flavours. Serves two, takes 40 minutes.

  • 100g green lentils
  • 3 tbsp olive oil
  • 1 medium onion (chopped)
  • 1 tsp cumin seeds
  • 4 garlic cloves (chopped)
  • 1 carrot (peeled and chopped in bitesize chunks)
  • 1 pinch asafoetida
  • 1 handful fresh thyme
  • 3 mushrooms
  • Small handful beans e.g. aduki beans (optional)
  • 1 handful fresh parsley

Put the lentils in a sieve and rinse them under running water.

Heat up 1 tbsp olive oil in a pan and start the onion and the cumin frying. After a couple of minutes, add the garlic. After another minute or two (maybe you're taking this time to chop the carrot) add the carrot and the asafoetida. Stir around and allow to fry a bit more. Overall, the onion etc should have been frying for 5 minutes or so before the next step.

Add enough boiling water to just cover what's in the pan, and let it boil strongly for a couple of minutes. Then turn the heat down to a simmer, bung the thyme in, and put a lid on.

This is now going to simmer for almost 30 minutes but halfway through we'll add the mushrooms. So about halfway through, heat up a frying pan with 1 tbsp olive oil in, peel and slice the mushrooms, and fry them quickly for a minute or so - no need to overdo it. Then bung the mushrooms in with everything else, and also the beans if you have any. Now is a good time to wash and chop up the parsley.

When the food is almost ready to serve (not much water left in the pan, and the lentils are almost soft, but still with a bit of a bite), add the parsley and 1 tbsp olive oil and stir through. Then serve.

Syndicated 2012-05-29 15:18:24 (Updated 2012-05-29 15:35:51) from Dan Stowell

28 May 2012 (updated 28 May 2012 at 20:09 UTC) »

Letter to my MP about the Long Range Acoustic Device (LRAD)

I'm concerned about the LRAD which is apparently being deployed during the Olympics. We citizens don't seem to have a way of knowing if it will be used for backup, general crowd control, or long-term hearing damage. So I wrote to my MP; if you live in London and care about your hearing, maybe write to yours. Here's what I wrote:

Dear [__________],

I'm an audio researcher living in [__________]. Because of my profession, my hearing is particularly important to me. So I'm a little concerned about the Long-Range Acoustic Device (LRAD) that the MoD has confirmed will be in use during the Olympics, and I'd like to ask you to seek some specific assurance please.

I'm aware that the device has two uses: one is to broadcast messages loudly, and one is as a non-lethal weapon which can damage hearing. The MoD has said the device is "primarily to be used in the loud hailer mode". I'm sure you can imagine that the use of the term "primarily" is a concern to me.

In a given situation, there is no way for anyone except the operator to discern which mode is being used or intended. If I am commuting through London during the Olympics and get caught up in something (a crowd situation of some sort), and an officer points the LRAD at me, I have no way of being reassured that there is no chance of permanent damage. This is why I'd like to ask you to seek very specific assurances.

Could you please press the Ministry of Defence to state specifically:

  • Is the LRAD intended to be deployed frequently, or in exceptional circumstances?
  • Under what conditions would the LRAD be used with settings that might endanger hearing?
  • What rank of officer would be required to authorise the use of LRAD with settings that might endanger hearing?
  • What safeguards are in place to ensure that the LRAD will not endanger hearing, whether during correct use or operator error?

I hope you can help put these specific questions forward.

Best wishes, [__________]

Reference: http://www.guardian.co.uk/sport/2012/may/12/british-military-sonic-weapon-olympics

Syndicated 2012-05-28 14:59:52 (Updated 2012-05-28 15:12:45) from Dan Stowell

