Recent blog entries for danstowell

An executive summary of Islam in Britain

Just finished this really useful little book: "Medina in Birmingham, Najaf in Brent: Inside British Islam" by Innes Bowen. It could well be subtitled "An executive summary of Islam in Britain", because that's exactly what it feels like - a brief, breezy and dispassionate summary of the main Muslim groups in the UK, what they believe, how they interact with the world, etc.

Very handy reading, if you're a non-Muslim British person like me who might be wondering: the Muslims in my neighbourhood, are they sunni or shia? Does it matter? How do they relate to the various Muslim groups that are making the news these days? Which ones dress in special ways, and how significant is it? - All those naive questions that you can't just come out and ask.

All kinds of interesting stuff comes up while answering these questions. For example I learnt about the Tablighi Jamaat and why they wanted to build the "mega-mosque" that has been back and forth in the news trying to get planning permission. I learnt which groups have a voice in the Muslim Council of Britain. And even though the book doesn't spend much time on women's issues, it gives lots of titbits about different groups' conventions on veiling, staying in the house, marriage, and mosque provision - so it gives me some "local" insight to complement this other reading on veiling practices.

As in that other book, one thing that might surprise you is that some seemingly "traditional" things (like clothing practices) are born of quite modern movements within Islam; really, you realise that "traditional" vs "modern" is not a particularly helpful way to distinguish the different strands of Islam practised in Britain today.

Syndicated 2016-01-10 11:43:31 from Dan Stowell

Kale and rosemary flatbread

Kale and rosemary flatbread. What I particularly like about this flatbread is that the kale baked in the oven goes crispy like fried seaweed. I had it as a main course with a bit of rocket and some manchego cheese. It could also be a good accompaniment, maybe an accompaniment to something meaty.

Serves 2. It's derived from a recipe from "Crumb" by Ruby Tandoh.

  • 250g strong white flour
  • 1 tsp instant dried yeast
  • 1/2 tsp salt
  • a twist of black pepper
  • 175ml lukewarm water
  • a few (6? 10?) tines of fresh rosemary
  • 4 tbsp olive oil
  • 150g kale, stalks removed, and shredded

Combine the flour, salt, pepper and yeast in a large bowl. Make a well in the middle and add the warm water. Mix with a fork, then when that gets difficult add 1 tbsp of the olive oil and rosemary, and mix with one hand.

Knead it for 10 minutes. You might be able to do this in the bowl or it might be easier to tip it out onto a clean surface. You might need to sprinkle a bit more flour on. It should become elastic and less sticky.

Now cover the dough and let it rise for 30-60 minutes in a warm place. Meanwhile, blanch the kale: bring a pan of water to the boil and plunge the kale in. Boil it for just 1 minute, then immediately drain it and run cold water over it to stop it cooking any further. Now you need to get it as dry as you can, first by draining it and then by pressing it gently.

Knead just under half of the kale into the risen dough. It'll be a little tricky, due to the residual moisture on the leaves, but there's no need to worry about it being perfect.

Preheat a fan oven to 170C. Using a rolling pin and a floured surface, roll out the dough and then roll/hand-stretch it into a kind of A4 shape, quite thin, and put it onto a lightly floured baking tray. Put the remaining kale over the top, pressing it down a bit so that it'll stick in. Drizzle plenty of olive oil over the top and bake for 20 minutes.

Syndicated 2016-01-10 09:00:05 (Updated 2016-01-10 09:01:55) from Dan Stowell

Poverty: demolish sink estates or not?

David Cameron wrote an article today saying that knocking down poor people's homes is how to make their lives better. ("David Cameron vows to 'blitz' poverty by demolishing UK's worst sink estates").

There's a short version of my response to this: go and read Anne Power who's studied housing and regeneration a lot, and has concrete recommendations for the best way to handle all this stuff. Read this article: "Housing and sustainability: demolition or refurbishment?"

I was undecided about all this stuff in 2014 when I went to see the Carpenters Estate protests. If you're not involved, it sort of sounds like a good idea. "Ooh those scary estates. If we knock them down and replace them with shinier ones, that's the neatest way to fix the situation up, and the residents can come back and live in them so they won't be any worse off."

But then you go down to the estates and meet people, and you read about how these regenerations happen in reality, and you realise it's not as neat as that. Firstly, modern regeneration usually involves selling off a fraction of the estate for private development, so the community doesn't really get rehoused back together: many residents are scattered to locations over which they have no choice, and community cohesion - part of the social fabric that keeps everyone safe - is lost. Secondly, demolition has unhelpful side-effects on the area around it (house prices, antisocial behaviour, disrepair, local services leaving). Thirdly, there are alternatives to demolition (renovation, infill building) which avoid many of these downsides, are more sustainable, and are good for the local economy because local small-scale builders can do the work.

Cameron said three out of four rioters in 2011 came from sink estates. "The riots of 2011 didn’t emerge from within terraced streets or low-rise apartment buildings. The rioters came overwhelmingly from these postwar estates. That’s not a coincidence," he wrote.

David Cameron, your logical fallacy is: False Cause. The people he's talking about are poor and disenfranchised, and that's the common cause of both things. It's the cause of living in the less popular estates, and it's an important cause of the rioting. It's not the shape of the buildings which caused the riots!

The current UK government is acting from a position of strength, and they are really taking their opportunity to make bold moves in the directions they want. Putting money into improving housing can be a good thing - the biggest risk I see is that this initiative will end up pushing poor people out of the way and fragmenting their communities. We can do it better. Read this article: "Housing and sustainability: demolition or refurbishment?"

Syndicated 2016-01-10 05:52:02 (Updated 2016-01-10 09:00:18) from Dan Stowell

Poached thai-style sea bass

Poached thai-style sea bass - a handy everyday recipe, easy to do, healthy and fresh, for whenever you see a nice piece of sea bass in the shop. It takes less than ten minutes. All of the flavourings are optional really, and most of them keep well in a store cupboard.

These amounts are to serve one.

  • 1 fillet of sea bass
  • 1 portion dried noodles
  • Flavourings:
    • 2 spring onions
    • 1 lemongrass stalk
    • 1/2 a chilli
    • a piece of root ginger (maybe 2 cm cubed?)
    • 1 or 2 lime leaves
    • a dash of fish sauce (or if you don't have that then a dash of light soy sauce or worcestershire sauce will add a bit of depth to compensate)
  • to serve: a small amount of fresh coriander or parsley

Boil a kettle.

Meanwhile chop things up: the spring onions into ~1mm slices, the chilli into ~1mm slices, the root ginger into fine slices, and chop the parsley. Don't chop the lemongrass, but bruise it (bash it with the heel of the knife a bit). Don't chop the lime leaves.

Put all the flavourings (not the coriander/parsley) into a pan with a small-soup-portion of boiled water and bring it to the boil. Add the noodles, then add the sea bass so it sits on top of them (it should still be submerged though). No need to stir anything.

Turn the heat right down and put a lid on. Let it poach for about 5 minutes.

At the end, ladle the whole lot into a soup dish, ideally keeping the fish in one piece sitting on top. Sprinkle the coriander/parsley on.

Syndicated 2016-01-03 16:36:55 (Updated 2016-01-03 16:38:14) from Dan Stowell

Mushy peas three-way showdown

Lunchtime showdown: three different tins of mushy peas!

  • Harry Ramsden
  • Sainsbury's
  • Batchelors

All served up with a bit of black pudding.

The verdict:

Sainsbury's mushy peas are not very nice - there's a kind of minty flavour (mint is not in the ingredients) which tastes like it's masking something.

Harry Ramsden's are nice - chip-shop flavour, with a decent hint of savoury flavour.

Batchelors mushy peas are pretty similar to Harry Ramsden's, but with not so much of the savoury depth. They're fine, but just short of the chip-shop flavour I'm looking for.

Syndicated 2015-12-14 08:35:06 from Dan Stowell

Paris climate agreements (COP 21), sustainable energy and Britain

I'm happy that the Paris climate-change discussions seem to have had a positive outcome. Some telling quotes about it, with links to articles covering the Paris outcomes in more detail:

"This is an exciting moment in history. The debate is over and the vision of the future is low carbon." (New Scientist)

"By comparison to what it could have been, it’s a miracle. By comparison to what it should have been, it’s a disaster." (George Monbiot in The Guardian)

"The climate deal is at once both historic, important – and inadequate." (Simon Lewis in The Conversation)

and here's an analysis by CarbonBrief

An interesting aspect is the way countries have made commitments, and the agreement reifies a specific global target, while acknowledging that the countries' current commitments cannot actually meet that goal. Countries have to get together again in a few years to check on progress and hopefully to extend the ambition of their commitments, so that they eventually meet the overall target. That might sound like a cop-out but actually it strikes me as good politics/psychology. (However, I'm no expert. At least one observer, James Hansen, thinks it's all hot air without serious action on carbon taxation.)

I'd like to read about the UK's role in the negotiations, especially because the mind boggles at how they could have had much to say about reducing climate change while the current government has deliberately derailed the UK's burgeoning renewable energy industries (and community energy schemes along with them). To be clear, the problem with what they did is not the fact of reducing subsidies - they were already scheduled to be gradually reduced - but changing the plan and reducing them suddenly, thus creating business uncertainty in that sector and making it a risky sector for investors in the medium term.

Renewable energy technologies are getting close to parity with fossil fuel generation, i.e. reaching a tipping point where people start to invest in them for simple financial reasons rather than altruism, and that could be the start of a really big acceleration. According to Simon Lewis (see above) the Paris agreement will help to accelerate the technologies' maturity, efficiency and profitability. I'd like to see British engineering play its part in this, and if the current UK government could only see which way the wind is blowing (ha!) and help British engineering to do this, that would be just great.

If you're interested in the technology/engineering/IT side of all this here are two excellent things to read, which give lots of really concrete ideas:

Syndicated 2015-12-13 15:28:09 (Updated 2015-12-13 15:29:25) from Dan Stowell

Tracking fast frequency modulation (FM) in audio signals - a list of methods

When you work with birdsong you encounter a lot of rapid frequency modulation (FM), much more than in speech or music. This is because songbirds have evolved specifically to be good at it: as producers they have muscles specially adapted for rapid FM (Goller and Riede 2012), and as listeners they're perceptually and behaviourally sensitive to well-executed FM (e.g. Vehrencamp et al 2013).

Standard methods for analysing sound signals - spectrograms (or Fourier transforms in general) or filterbanks - assume that the signal is locally stationary, which means that when you consider a single small window of time, the statistics of the process are unchanging across the length of the window. For music and speech we can use a window size such as 10 milliseconds, and the signal evolves slowly enough that our assumption is OK. For birdsong, it often isn't, and you can see the consequences when a nice sharp chirp comes out in a spectrogram as a blurry smudge across many pixels of the image.
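To make the smearing concrete, here's a small stdlib-only Python sketch (the signal parameters are just illustrative, and the naive DFT stands in for a proper FFT): it compares how much of a steady tone's energy lands in a single frequency bin versus a chirp sweeping 2-8 kHz within one 10 ms window.

```python
import math

SR = 44100
N = 441  # one 10 ms analysis window at 44.1 kHz (bin spacing = 100 Hz)

def peak_energy_fraction(x):
    """Fraction of total spectral energy in the strongest DFT bin (naive DFT)."""
    n = len(x)
    best = total = 0.0
    for k in range(n // 2 + 1):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        e = re * re + im * im
        total += e
        best = max(best, e)
    return best / total

# A steady 5 kHz tone: stationary within the window.
tone = [math.sin(2 * math.pi * 5000 * i / SR) for i in range(N)]
# A linear chirp sweeping 2 kHz -> 8 kHz within the same 10 ms window.
chirp = [math.sin(2 * math.pi * (2000 * (i / SR) + 300000 * (i / SR) ** 2))
         for i in range(N)]

print(peak_energy_fraction(tone))   # nearly all energy in one bin
print(peak_energy_fraction(chirp))  # energy smeared across many bins
```

The tone here lands exactly on a bin, so its fraction is close to 1, while the chirp's energy is spread over dozens of 100 Hz bins: that spread is the "blurry smudge" you see in the spectrogram.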

So, to analyse birdsong, we'd like to analyse our signal using representations that account for nonstationarity. Lots of these representations exist. How can we choose?

If you're impatient, just scroll down to the Conclusions at the bottom of this blog. But to start off, let's state the requirements. We'd like to take an audio signal and convert it into a representation that:

  • Characterises FM compactly - i.e. FM signals as well as fixed-pitch signals have most of their energy represented in a similar small number of coefficients;
  • Handles multiple overlapping sources - since we often deal with recordings having multiple birds;
  • Copes with discontinuity of the frequency tracks - since not only do songbirds make fast brief audio gestures, but also, unlike us they have two sets of vocal folds which they can alternate between - so if a signal is a collage of chirpy fragments rather than a continuously-evolving pitch, we want to be able to reflect that;
  • Ideally is fairly efficient to calculate - simply because we often want to apply calculations at big data scales;
  • Does the transformation need to be invertible? (i.e. do we need a direct method to resynthesise a signal, if all we know is the transformed representation?) Depends. If we're interested in modifying and resynthesising the sounds then yes. But I'm primarily interested in extracting useful information, for which purposes, no.

Last year we published an empirical comparison of four FM methods (Stowell and Plumbley 2014). The big surprise from that was that the dumbest method was the best-performing for our purposes. But I've encountered a few different methods, including a couple that I learnt about very recently, so here's a list of methods for reference. This list is not exhaustive - my aim is to list an example of each paradigm, and only for those paradigms that might be particularly relevant to audio, in particular bird sounds.

  • Let's start with the stupid method: take a spectrogram, then at each time-point find out which frequency has the most energy. From this list of peaks, draw a straight line from each peak to the one that comes immediately next. That set of discontinuous straight lines is your representation. It's a bit chirplet-like in that it expresses each moment as a frequency and a rate-of-change of frequency, but any signal processing researcher will tell you not to do this. In principle it's not very robust, and it's not even guaranteed to find peaks that correspond to the actual fundamental frequency. In our 2014 paper we tested this as a baseline method, and... it turned out to be surprisingly robust and useful for classification! It's also extremely fast to compute. However, note that this doesn't work with polyphonic (multi-source) audio at all. For big data analysis it's handy to be able to do this, but I don't expect it to make any sense for analysing a sound scene in detail.
  • Chirplets. STFT analysis assumes the signal is composed of little packets, and each packet contains a sine-wave with a fixed frequency. Chirplet analysis generalises that to assume that each packet is a sine-wave with a parametrically varying frequency (you can choose linearly-varying, quadratically-varying, etc). See chirplets on wikipedia for a quick intro. There are different ways to turn the concept of a chirplet into an analysis method, and several of them have been applied to bird sounds.
  • Filter diagonalisation method - an interesting method from quantum mechanics, FDM models a chunk of signal as a sum of purely exponentially decaying sinusoids. Our PhD student Luwei Yang recently applied this to tracking vibrato in string instruments. I think this is the first use of FDM for audio. It's not been explored much - I believe it satisfies most of the requirements I stated above, but I've no idea of its behaviour in practice.
  • Subspace-based methods such as ESPRIT. See for example this ESPRIT paper by Badeau et al. These are one class of sinusoidal tracking techniques, because they analyse a signal by making use of an assumed continuity from one frame to the next. In fact, this is a problem for birdsong analysis. Roland Badeau tested a birdsong recording for me and found that the very fast FM was a fatal problem for this type of method: the method simply needs to be able to rely on some relatively smooth continuity of pitch tracks, in order to give strong tracking results.
  • Fan chirp transform (Weruaga and Kepesi 2007) - when you take the FFT of a signal, we might say you analyse it as a series of "horizontal lines" in the time-frequency plane. The fan chirp transform tilts all these lines at the same time: imagine the lines, instead of being horizontal, all converge on a single vanishing point in the distance. So it should be particularly good for analysing harmonic signals that involve pitch modulation. Note that the angles are all locked together, so it's best for monophonic-but-harmonic signals, not polyphonic signals. My PhD student Veronica Morfi, before she joined us, extended the fan-chirp model to non-linear FM curves too: Morfi et al 2015.
  • Spectral reassignment methods. When you take the FFT of a signal, note that you analyse it as a series of equally-spaced packets on the frequency axis. The clever idea in spectral reassignment is to say: if we assume the packets weren't actually sitting on that grid, but we analysed them with the FFT anyway, let's take the results and move every one of those grid-points to an irregular location that best matches the evidence. You can extend this idea to allow each packet to be chirpy rather than fixed-frequency, so there you have it: run a simple FFT on a frame of audio, and then magically transform the results into a more-detailed version that allows each bin to have its own AM and FM. This is good because it makes sense for polyphonic audio.
    • A particular example of this is the distribution derivative method (code available here). I worked with Sasho Musevic a couple of years ago, who did his PhD on this method, and we found that it yielded a good, informative representation for multiple birdsong tracking (Stowell et al 2013). Definitely promising. (In my later paper where I compared different FM methods, this gave a strong performance again. The main drawback, in that context, was simply that it took longer to compute than the methods I was comparing it against.) You also have to make some peak-picking decisions, but that's doable. This summer, I did some work with Jordi Bonada and we saw the distribution derivative method getting very precise results on a dataset of chaffinch recordings.
  • There's lots of work on multi-pitch trackers, and it would be incomplete if I didn't mention that general idea. Why not just apply a multi-pitch tracker to birdsong audio and then use the pitch curves coming out from that? Well, as with the ESPRIT method I mentioned above, the methods developed for speech and music tend to build upon assumptions such as relatively long, smooth curves often with hard limits to the depth of FM that can exist.
  • How about feature learning? Rather than design a feature transform, we could simply feed a learning algorithm with a large amount of birdsong data and get it to learn what FM patterns exist in the audio. That's what we did last year in this paper on large-scale birdsong classification - that was based on spectrogram patches, but it definitely detected characteristic FM patterns. That representation didn't explicitly recover pitch tracks or individual chirplets, but there may be ways to develop things in that direction. In particular, there's quite a bit of effort in deep learning on "end-to-end" learning which asks the learning algorithm to find its own transformation from the raw audio data. The transformations learnt by such systems might themselves be useful representations for other tasks.
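For concreteness, here's a stdlib-only Python sketch of the "stupid method" from the first bullet above (window and hop sizes are arbitrary illustrative choices, and the naive DFT stands in for an FFT): pick the strongest bin in each frame, and the resulting sequence of (time, frequency) peaks, joined by straight lines, is the representation.

```python
import math

def frame_peak_freq(frame, sr):
    """Frequency (Hz) of the strongest DFT bin in one frame (naive DFT, skips DC)."""
    n = len(frame)
    best_k, best_e = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(frame[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(frame[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        e = re * re + im * im
        if e > best_e:
            best_k, best_e = k, e
    return best_k * sr / n

def peak_track(x, sr, framelen=256, hop=128):
    """One (time, peak-frequency) pair per frame; joining successive
    pairs with straight lines gives the final chirplet-like representation."""
    return [((start + framelen / 2) / sr,
             frame_peak_freq(x[start:start + framelen], sr))
            for start in range(0, len(x) - framelen + 1, hop)]

# Demo: an upward chirp (1 kHz -> 4 kHz over 0.1 s) should yield a rising track.
sr = 8000
x = [math.sin(2 * math.pi * (1000 * (i / sr) + 15000 * (i / sr) ** 2))
     for i in range(800)]
track = peak_track(x, sr)
```

On polyphonic audio this falls apart, of course, since each frame yields only one peak - exactly the limitation noted in the bullet above.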


So.... It's too soon to have conclusions about the best signal representations for FM in birdsong. But out of this list, the distribution derivative method is the main "off-the-shelf" tool that I'd suggest for high-resolution bird FM analysis (code available here), while feature-learning and filter diagonalisation are the approaches that I'd like to see more research on.

At the same time, I should also emphasise that machine learning methods don't need a nice clean understandable representation as their input. Even if a spectrogram turns birdsong into a blur when you look at it, that doesn't necessarily mean it shouldn't be used as the input to a classifier. Machine learning often has different requirements than the human eye.

(You might think I'm ignoring the famous rule "garbage in, garbage out" when I say a classifier might work fine with blurry data - well, yes and no. A spectrogram contains a lot of high-dimensional information, so it's rich enough that the crucial information can still be embedded in there. Even the "stupid method" I mentioned, which throws away so much information, preserves something of the important aspects of the sound signal. However, modern classifiers work well with rich high-dimensional data.)

But if you're trying to do something specific such as clearly characterise the rates of FM used by a particular bird species, a good representation will help you a lot.

Syndicated 2015-12-08 09:34:47 (Updated 2015-12-08 09:49:35) from Dan Stowell

Dry-fried paneer

This is my approximation of the lovely dry-fried paneer served at Tayyabs, the famous Punjabi Indian place in East London. These amounts are for 1 as a main, or more as a starter. Takes about ten minutes:

  • 200g paneer, cut into bite-size cubes
  • 1 tbsp curry powder
  • 1 tsp ground cumin
  • 1/2 an onion, sliced finely
  • 1 red chilli, sliced
  • 1/2 tsp cumin seeds (optional)
  • a squeeze of lemon juice
  • 1 tsp garam masala (optional)
  • A few chives (optional)

First put the cubed paneer into a bowl, add the curry powder and cumin and toss to get an even coating.

Get a frying pan nice and hot, with about 1 tbsp of veg oil in it. Add the onion and chilli (and cumin seeds if using). Note that you want the onion to end up crispy, so you want it finely sliced and separated (no big lumps), you want the oil hot, and you want the onion to have plenty of space in the pan. Fry it hot for about 4 minutes.

Add the paneer to the pan, along with any spice left in the bowl. Shuffle it all around; it's time to get the paneer browning too. It'll take maybe another 4 minutes, not too long. Stir it now and again - it'll get nice and brown on the sides. No need to get a very even colour on all sides, but do turn it all around a couple of times.

Near the end, e.g. with 30 seconds to go, add the squeeze of lemon juice to the pan, and stir around. You might also like to sprinkle some garam masala into the pan too.

Serve the paneer with chives sprinkled over the top. It's good to have some bread to eat it with (e.g. naan or roti) and salad, or maybe to serve it alongside other Indian dishes.

Syndicated 2015-11-26 14:27:06 from Dan Stowell

Reading list: excellent papers for birdsong and machine learning

I'm happy to say I'm now supervising two PhD students, Pablo and Veronica. Veronica is working on my project all about birdsong and machine learning - so I've got some notes here about recommended reading for someone starting on this topic. It's a niche topic but it's fascinating: sound in general is fascinating, and birdsong in particular is full of many mysteries, and it's amazing to explore these mysteries through the craft of trying to get machines to understand things on our behalf.

If you're thinking of starting in this area, you need to get acquainted with: (a) birds and bird sounds; (b) sound/audio and signal processing; (c) machine learning methods. You don't need to be expert in all of those - a little naivete can go a long way!

But here are some recommended reads. I don't want to give a big exhaustive bibliography of everything that's relevant. Instead, some choice reading that I have selected because I think it satisfies all of these criteria: each paper is readable, is relevant, and is representative of a different idea/method that I think you should know. They're all journal papers, which is good because they're quite short and focused, but if you want a more complete intro I'll mention some textbooks at the end.

  • Briggs et al (2012) "Acoustic classification of multiple simultaneous bird species: A multi-instance multi-label approach"

    • This paper describes quite a complex method but it has various interesting aspects, such as how they detect individual bird sounds and how they modify the classifier so that it handles multiple simultaneous birds. To my mind this is one of the first papers that really gave the task of bird sound classification a thorough treatment using modern machine learning.
  • Lasseck (2014) "Large-scale identification of birds in audio recordings: Notes on the winning solution of the LifeCLEF 2014 Bird Task"

    • A clear description of one of the modern cross-correlation classifiers. Many people in the past have tried to identify bird sounds by template cross-correlation - basically, taking known examples and trying to detect whether the shape matches well. The simple approach to cross-correlation fails in various situations, such as organic variation of the sound. The modern approach, introduced to bird classification by Gabor Fodor in 2013 and developed further by Lasseck and others, still uses cross-correlation, but not to guess the answer directly: it uses it to generate new data that gets fed into a classifier. At the time of writing (2015), this type of classifier is the type that tends to win bird classification contests.
  • Wang (2003), "An industrial strength audio search algorithm"

    • This paper tells you how the well-known "Shazam" music recognition system works. It uses a clever idea about what is informative and invariant about a music recording. The method is not appropriate for natural sounds but it's interesting and elegant.

      Bonus question: Take some time to think about why this method is not appropriate for natural sounds, and whether you could modify it so that it is.

  • Stowell and Plumbley (2014), "Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning"

    • This is our paper about large-scale bird species classification. In particular, a "feature-learning" method which seems to work well. There are some analogies between our feature-learning method and deep learning, and also between our method and template cross-correlation. These analogies are useful to think about.
  • Lots of powerful machine learning right now uses deep learning. There's lots to read on the topic. Here's a blog post that I think gives a good introduction to deep learning. Also, for this article DO read the comments! The comments contain useful discussion from some experts such as Yoshua Bengio. Then after that, this recent Nature paper is a good introduction to deep learning from some leading experts, which goes into more detail while still at the conceptual level. When you come to do practical application of deep learning, the book "Neural Networks: Tricks of the Trade" is full of good practical advice about training and experimental setup, and you'll probably get a lot out of the tutorials for the tool you use (for example I used Theano's deep learning tutorials).

    • I would strongly recommend NOT diving in with deep learning until you have spent at least a couple of months reading around different methods. The reason for this is that there's a lot of "craft" to deep learning, and a lot of current-best-practice that changes literally month by month, and anyone who gets started could easily spend three years tweaking parameters.
  • Theunissen and Shaevitz (2006), "Auditory processing of vocal sounds in birds"

    • This one is not computer science, it's neuroscience - it tells you how birds recognise sounds!

      A question for you: should machines listen to bird sounds in the same way that birds listen to bird sounds?

  • O'Grady and Pearlmutter (2006), "Convolutive non-negative matrix factorisation with a sparseness constraint"

    • An example of analysing a spectrogram using "non-negative matrix factorisation" (NMF), which is an interesting and popular technique for identifying repeated components in a spectrogram. NMF is not widely used for bird sound, but it certainly could be useful, maybe for feature learning, or for decoding, who knows - it's a tool that anyone analysing audio spectrograms should be aware of.
  • Kershenbaum et al (2014), "Acoustic sequences in non-human animals: a tutorial review and prospectus"

    • A good overview from a zoologist's perspective on animal sound considered as sequences of units. Note, while you read this, that sequences-of-units is not the only way to think about these things. It's common to analyse animal vocalisations as if they were items from an alphabet "A B A BBBB B A B C", but that way of thinking ignores the continuous (as opposed to discrete) variation of the units, as well as any ambiguity in what constitutes a unit. (Ambiguity is not just failure to understand: it's used constructively by humans, and probably by animals too!)
  • Benetos et al (2013), "Automatic music transcription: challenges and future directions"

    • This is a good overview of methods used for music transcription. In some ways it's a similar task to identifying all the bird sounds in a recording, but there are some really significant differences (e.g. the existence of tempo and rhythmic structure, the fact that musical instruments usually synchronise in pitch and timing whereas animal sounds usually do not). A big difference from "speech recognition" research is that speech recognition generally starts from the idea of there just being one voice. The field of music transcription has spent more time addressing problems of polyphony.
  • Domingos (2012), "A few useful things to know about machine learning"

    • Lots of sensible, clearly-written advice for anyone getting involved in machine learning.
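As a concrete illustration of the cross-correlation-as-features idea in the Lasseck entry above, here's a toy stdlib-only Python sketch (spectrograms are just 2D lists of numbers, and all the names are mine): each template's best match score over the whole spectrogram becomes one input feature for a classifier, rather than being used as a decision in itself.

```python
def max_xcorr(spec, template):
    """Maximum dot-product of `template` over all positions in `spec`.
    Both arguments are 2D lists (rows = frequency bins, cols = time frames)."""
    th, tw = len(template), len(template[0])
    sh, sw = len(spec), len(spec[0])
    best = float('-inf')
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            score = sum(spec[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            best = max(best, score)
    return best

def xcorr_features(spec, templates):
    """One feature per known template; feed this vector to any classifier."""
    return [max_xcorr(spec, t) for t in templates]

# Demo: a spectrogram containing a 2x2 block of energy scores highest
# against the matching template, and lower against a mismatched one.
spec = [[0] * 6 for _ in range(4)]
for r in (1, 2):
    for c in (2, 3):
        spec[r][c] = 1
feats = xcorr_features(spec, [[[1, 1], [1, 1]], [[1, -1], [-1, 1]]])
```

Real systems add normalisation and many hundreds of templates, but the principle is the same: cross-correlation generates evidence, and the classifier decides.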


And the textbooks I promised:

  • "Machine learning: a probabilistic perspective" by Murphy
  • "Nature's Music: the Science of Birdsong" by Marler and Slabbekoorn - a great comprehensive textbook about bird vocalisations.

Syndicated 2015-11-13 03:18:31 (Updated 2015-11-13 03:27:52) from Dan Stowell

Emoji understanding fail

I'm having problems understanding people. More specifically, I'm having problems now that people are using emoji in their messages. Is it just me?

OK so here's what just happened. I saw this tweet which has some text and then 3 emoji. Looking at the emoji I think to myself,

"Right, so that's: a hand, a beige square (is the icon missing?), and an evil scary face. Hmm, what does he mean by that?"

I know that I can mouseover the images to see text telling me what the actual icons are meant to be. So I mouseover the three images in turn and I get:

  • "Clapping hands sign"
  • "(white skin)"
  • "Grinning face with smiling eyes"

So it turns out I've completely misunderstood the emotion that was supposed to be on that face icon. Note that you probably see a different image than I do anyway, since different systems show different images for each glyph.
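If you're curious what's technically going on, here's a Python sketch (the exact characters are my reconstruction from the mouseover text): the "beige square" is a separate Unicode codepoint, a skin-tone modifier, which is supposed to recolour the preceding hand but gets drawn as its own mystery glyph on systems without full emoji support.

```python
import unicodedata

# The three characters, reconstructed from the mouseover descriptions:
msg = "\U0001F44F\U0001F3FB\U0001F601"
for ch in msg:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+1F44F  CLAPPING HANDS SIGN
# U+1F3FB  EMOJI MODIFIER FITZPATRICK TYPE-1-2
# U+1F601  GRINNING FACE WITH SMILING EYES
```

So a renderer that understands modifiers shows two glyphs (a pale clapping hand and a grinning face), while one that doesn't shows three - which is exactly the confusion described here.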

Clapping hands, OK fine, I can deal with that. Clapping hands and grinning face must mean that he's happy about the thing.

But "(white skin)"? WTF?

Is it just me? How do you manage to interpret these things?

Syndicated 2015-11-10 05:16:04 (Updated 2015-11-10 05:18:12) from Dan Stowell
