Yup, another set of possibly ill-conceived thoughts on spam filtering. There was a good round-up of what's currently going on in yesterday's NTK, incidentally. (Search for Tracking in that page).
What the current crop of Bayesian spam filters are trying to do is detect whether an email is written in 'normal English' or 'spam English'. That language identification requirement is exactly what this text-based language identifier is trying to work out.
So, can anyone tell me why an n-gram based system (where 2 <= n <= about 4-5), working at the word level, hasn't been tried on this spam/non-spam classification problem? After all, n-gram statistics are used by virtually every speech recognition package out there. That's because n-grams are what let the speech recogniser determine which words have a reasonable chance of following the word it thinks you just said, and that information is used to favour hypotheses that form 'valid' sentences.
There are problems with this, of course: telling 'spam English' from 'proper English' is much more difficult than telling English from, say, French. But there are subtleties of vocabulary ('free' is often followed by 'porn', for instance) that may make this a workable method.
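To make the idea concrete, here's a rough sketch of what such a classifier might look like: a word-level bigram model per class with add-one smoothing, scoring a message by which class's statistics it better fits. The class name, training sentences, and labels are all made up for illustration; a real filter would need far more training data and better smoothing.

```python
from collections import Counter
import math

def ngrams(words, n=2):
    """Return word-level n-grams from a list of tokens."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

class NgramClassifier:
    """Toy bigram classifier (hypothetical): each label gets its own
    n-gram count table, and a message is assigned to whichever label
    gives it the higher smoothed log-likelihood."""

    def __init__(self, n=2):
        self.n = n
        self.counts = {}   # label -> Counter of n-grams
        self.totals = {}   # label -> total n-gram count

    def train(self, label, text):
        grams = ngrams(text.lower().split(), self.n)
        self.counts.setdefault(label, Counter()).update(grams)
        self.totals[label] = self.totals.get(label, 0) + len(grams)

    def score(self, label, text):
        grams = ngrams(text.lower().split(), self.n)
        c = self.counts[label]
        total = self.totals[label]
        # Crude vocabulary size for add-one smoothing: number of
        # distinct n-grams seen across all labels.
        vocab = sum(len(ct) for ct in self.counts.values())
        return sum(math.log((c[g] + 1) / (total + vocab)) for g in grams)

    def classify(self, text):
        return max(self.counts, key=lambda lbl: self.score(lbl, text))

clf = NgramClassifier()
clf.train("spam", "free porn free money click here free porn now")
clf.train("ham", "meeting notes attached please review before friday")
print(clf.classify("free porn now"))
```

The point is the scoring step: because 'free porn' is a frequent bigram in the spam training text, the spam model assigns the test message a higher log-likelihood than the ham model does, even though each individual word might be innocuous on its own.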
Well, those are my thoughts anyway. Someone's probably tried this already and will tell me why it doesn't work as well as other methods, but even in that case I really would be interested to know why.
[edited for formatting]