Older blog entries for salmoni (starting at number 594)

Long time no write.

Ten years after making the last release of Salstat, I've decided to continue with it. The project is on Github now (https://github.com/salmoni/Salstat).

Today's release utilises the excellent xlrd module which has allowed Salstat to read Excel files (xls and xlsx). Many people have asked for this. For now, the basic "happy days" workflow is fine but there is poor error handling.
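The basic "happy days" read path with xlrd looks roughly like this (a hedged sketch, not Salstat's actual import routine; the helper names are mine):

```python
def read_sheet(path):
    # Requires the xlrd module; reads every row of the first worksheet.
    # xlrd handles both .xls and (in later versions) .xlsx files.
    import xlrd
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(0)
    return [sheet.row_values(r) for r in range(sheet.nrows)]

def rows_to_columns(rows):
    # Transpose row lists into column lists (one list per variable),
    # which is the shape a stats package usually wants.
    return list(map(list, zip(*rows)))
```

Error handling (ragged rows, mixed cell types, missing sheets) is exactly the part that still needs hardening.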

The next one will have database access. This is a more complex workflow. I also need to harden the Excel and CSV import routines.

Mozilla are looking for a Quantitative user researcher which sounds cool. The emphasis on user research sounds right up my street, particularly the need for mastery of experimental design and statistical analysis. It kind of takes me back to my PhD and work on SalStat (still going strong).

The problem is my covering letter. Can anyone here tell me what style of covering letters are preferred? Long and detailed explaining why I meet each of the requirements? The standard 3 paragraph ["intro", "I'm cool", "thanks"]? Or some combination in between?

In the meantime, I've released Roistr which does some basic semantic analysis / text analytics stuff. I put up some demos but it's hard to really show how useful this thing is. It's based on the open source Gensim toolkit along with numpy and scipy.

Scipy sounds like it's going places. Travis Oliphant recently announced an initiative to bring it to big data properly. I have an idea of what he means and it would be very cool.

Does anyone have any Google Plus invites that they could send (one) to me?

In other news, wife, daughter and I are off to the Philippines for 5 weeks and hoping to get some start-up work moving over there. UX is in demand at the moment so it's a good time to be around.

I've also been looking up versions of principal components analysis in Python and found these:
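For reference, a minimal PCA can be written in a few lines of numpy via the SVD (my own sketch, not one of the versions found above):

```python
import numpy as np

def pca(data, n_components=2):
    # Centre the data so the SVD captures variance around the mean
    centred = data - data.mean(axis=0)
    # Rows of vt are the principal axes, ordered by variance explained
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    # Project the centred data onto the leading components
    return np.dot(centred, vt[:n_components].T)
```

The sign of each component is arbitrary, as with any SVD-based PCA.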

All the linguistic stuff I've been doing lately is making my head spin but it's coming together.

Lots happening: I've been building a semantic relevance engine - something that can accurately determine the semantic similarity of two text documents - and it's working reasonably well. Working completely untrained, I'm getting accuracies of well above 0.8 and often above 0.9. Obviously 1.0 is the ideal, but even human judgements rarely get above 0.9 with the corpora I've been using for this.
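Similarity scores in this range are typically cosines between document vectors; a minimal version of the measure (my assumption of the metric, not necessarily the engine's exact scoring):

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    # Cosine of the angle between two document vectors:
    # 1.0 = identical direction, 0.0 = orthogonal (unrelated)
    v1 = np.asarray(vec1, dtype=float)
    v2 = np.asarray(vec2, dtype=float)
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
```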

The good thing is that I appear to be discovering new stuff almost every day about how documents are understood. There are some approaches I've used that I've not read about in the literature so there might be some useful stuff for the world here.

However, my aim is to make a web service around this. And it's all based on open source software (Python, numpy, Scipy, Gensim etc) which is perfect. There is proprietary knowledge involved, however: the corpora, how they're prepared, and the architecture of the engine; but all of that will come out publicly soon enough.

Log Entropy models

I had problems when I last upgraded to 0.7.8 of Gensim. The main issue was that the package I imported wasn't necessarily the one used: quite often, the top level would come from one install while another import came from somewhere else. The net result was that parts of my software were looking for an id2word method in a dictionary where none existed before.

However, I still want to try 0.7.8 if I can and I found a way. I downloaded and untarred it, and renamed it 'gensim078'. Then, I went and changed each 'from gensim import *' statement to 'from gensim078 import *' which seems to be doing the trick. I'm sure there are better ways to do it but this is working for me so I'm happy.

The advantages are that a) it's faster particularly for similarity calculations, and b) I now have access to the Log Entropy model which I'm building for G1750.
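For reference, the standard log-entropy weighting scheme can be sketched in plain numpy (this is my sketch of the textbook formula; Gensim's LogEntropyModel implements the same idea, though details such as the log base may differ):

```python
import numpy as np

def log_entropy_weights(counts):
    # counts: documents x terms matrix of raw term frequencies
    counts = np.asarray(counts, dtype=float)
    n_docs = counts.shape[0]
    gf = counts.sum(axis=0)                    # global frequency of each term
    p = counts / np.maximum(gf, 1e-12)         # P(document | term)
    # Entropy-based global weight: terms spread evenly across all
    # documents score ~0; terms concentrated in few documents score ~1
    plogp = p * np.log(np.where(p > 0, p, 1.0))
    entropy = 1.0 + plogp.sum(axis=0) / np.log(n_docs)
    # Local weight is log(1 + term frequency)
    return np.log(counts + 1.0) * entropy
```

A term that appears uniformly in every document ends up with weight near zero, which is exactly why this model sharpens similarity calculations.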

Later tonight, I'll adjust the dictionary and begin pruning words that appear across lots of documents to see if that improves the focus. The program does seem a little 'fuzzy' as it is but that is quite a human characteristic so I'm not too worried. However, it will help me explore vector models and understand them better myself.
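That pruning step can be sketched like this (an illustrative helper of my own, not Gensim's API; recent Gensim versions provide Dictionary.filter_extremes for the same job):

```python
from collections import Counter

def prune_common_words(tokenised_docs, max_doc_fraction=0.5):
    # Count in how many documents each word appears
    n_docs = len(tokenised_docs)
    doc_freq = Counter()
    for doc in tokenised_docs:
        doc_freq.update(set(doc))
    # Keep only words at or below the document-frequency threshold
    keep = set(w for w, c in doc_freq.items()
               if c <= max_doc_fraction * n_docs)
    return [[w for w in doc if w in keep] for doc in tokenised_docs]
```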

Although the results of the word-pair semantic association task were poor, I'm not dismayed (too much!) because my whole construction is not perfect and there is lots of room for improvement. The task is also useful as it gives me an indication of accuracy by a means other than the 20NG categorisation task. When I create a new corpus, I should ideally subject it to a battery of tests designed to probe different things. With the results of these, I can work out whether the corpus is heading in the right direction or not. It's all good to have these tools, even if (initially) they're not going the way I wanted them to.

I'm turning into a perfectionist. I really need to release something useful before I refine... Release early, release often...

I've been having lots of fun lately with Gensim, a Python framework for vector space modelling. It includes fun stuff like latent semantic analysis, latent Dirichlet allocation and other goodies. Allied with NLTK, this makes a very formidable Python-based NLP framework.

My task is sorting newsgroup posts into their correct groups, and I've achieved a reasonable level of accuracy (0.92), which isn't bad given that it's entirely dependent upon content. However, most analyses are showing lower accuracies (0.70+), which isn't bad but not far enough from chance performance to be taken seriously. Still, there are a few ways to improve this, and I'm conducting an enormous number of experiments to get an effective mental model of how vector space models work.
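One simple way to do content-only categorisation like this is nearest-centroid by cosine similarity - a sketch of the general technique, not necessarily the exact method used here:

```python
import numpy as np

def nearest_centroid(train_vecs, train_labels, test_vec):
    # Assign a document to the category whose mean (centroid) vector
    # is most similar to it by cosine similarity
    test_vec = np.asarray(test_vec, dtype=float)
    best_label, best_sim = None, -2.0
    for label in sorted(set(train_labels)):
        members = [np.asarray(v, dtype=float)
                   for v, l in zip(train_vecs, train_labels) if l == label]
        centroid = np.mean(members, axis=0)
        sim = np.dot(centroid, test_vec) / (
            np.linalg.norm(centroid) * np.linalg.norm(test_vec))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label
```

With 20 newsgroups, chance performance for a single guess is 0.05, so accuracy well above that is the bar any such classifier has to clear.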

This is all the beginning of constructing a relevance engine which I'm sure will be useful to some people.

Great fun!

This is a list of things that have to be done to get Infomap working on a modern Linux distribution (tried on Ubuntu 10.10).

* BLOCKSIZE in preprocessing/preprocessing_env.h: needs to be set to at least the word count of the longest document in the corpus. If a document has more words than BLOCKSIZE, the building of the model will hang.

* Install libgdbm-dev with Synaptic or apt-get. Infomap needs a header file and without it, Infomap will not compile (not pass ./configure).

* Not finding ndbm.h: this all happens in /usr/include. Either create a symlink:

ln -s gdbm-ndbm.h ndbm.h

or just copy gdbm-ndbm.h to /usr/include/ndbm.h. Infomap will not compile (will not pass ./configure) without this.

Then it should go through configure, make, and make install well.

This is the code for CompareTerms, tidied into a runnable form (it assumes associate prints the term's vector as whitespace-separated numbers on stdout):

import subprocess
import numpy

def compare_terms(term1, term2):
    # Ask Infomap's 'associate' for each term's vector. The command is
    # passed as a single string (shell=True) so that multi-word queries
    # are treated as an AND search rather than a quote search.
    out1 = subprocess.Popen("associate -q " + term1, shell=True,
                            stdout=subprocess.PIPE).communicate()[0]
    out2 = subprocess.Popen("associate -q " + term2, shell=True,
                            stdout=subprocess.PIPE).communicate()[0]
    vec1 = numpy.array([float(x) for x in out1.split()])
    vec2 = numpy.array([float(x) for x in out2.split()])
    # The dot product of the two vectors is the association score
    product = numpy.sum(vec1 * vec2)
    return product

This produces an association between two terms.

When calling this, the arguments for associate must be formatted as a single string rather than split into a list by Popen. This is important when sending more than one term: otherwise, associate will treat the terms as a quote search rather than an AND search.

Long time no post! I've been very busy with family and work and not had much time to do stuff. If there are no objections, I was thinking of reposting some of my UX stuff here. It's not commercial but informational and might be of use.

As for open source, I've been working a lot on Infomap lately for natural language processing. I had some failures using Semantic Vectors, namely the speed at which it does comparisons between terms. I had an idea for an automated information architecture creator but the speed was too slow. Infomap is much faster so I will try to use that - even though I know it's been superseded by Semantic Vectors.

Plus, Infomap being written in C means that it is accessible from Python, whereas Semantic Vectors being in Java means going through Jython (and learning lots of new things which I don't have time for) or going through a very awkward translation process.

With my first run using SV, I generated an information map much like that resulting from a card sort. The card sort took weeks to prepare, perform and analyse - and a lot of staff time. Mine ran in a few hours and got results that weren't entirely dissimilar to the human version. There were some odd surprises but that was because of the corpus (Wikipedia was what I used at the time) which by nature has a focus on particular topics as opposed to general language. This meant that the results were generally quite good but with one or two startling exceptions.

But the difficulty in integrating it with a Python backend is too hard, so back to Infomap. I just need to figure out how to do semantic comparisons of terms in Infomap.

It was a job to get going. The first problem was not having the appropriate symlink to a DB library and a header file. Once rectified, I had to ensure the BLOCKSIZE constant was set to a figure larger than the word count of the longest document. It defaults to 1 million but the longest document in the corpus was 1.25 million words. Without this, there was no warning, and I left the program building its model for over a week before finding the problem. Once done, the model was analysed and built in under 2 hours on an Asus 701 netbook!

I remember when LSA used to take days...

So in the spirit of openness and the basis of this endeavour being in open source software, I will publish results here to ensure everyone is totally bored.

20 Apr 2009 (updated 20 Apr 2009 at 08:30 UTC) »

I have a LinkedIn profile here. Advogatoans are welcome to add me to their network.

Edit: This entry was already turning up in Google's search results less than 2 hours after writing it. I think it was spidered 15 minutes ago.

19 Apr 2009 (updated 19 Apr 2009 at 07:03 UTC) »

Life is going well in NZ. My job is enjoyable - thoroughly so - and I'm learning lots every day. Very little open source work done lately as I need to check the T&Cs of my contract to see if I'm okay. I'm sure there is no problem but I need to check first.

Our application for permanent residence here is going well. I submitted our expression of interest back on 21 March and we were successful on 6th April which is quite quick really. I was expecting it to take a few months. I'm still waiting for the ITA form to come through by post which seems to be taking some time. I'm guessing that receiving it is really the long part of the process.

I hope it comes through quickly as my wife and daughter are still in the Philippines and I'm missing them so much. We could apply for a visitor's visa for her, but we have other obligations which need to be met in the immediate future (too much detail to go into here). Still, we chat every day by email and video chat. I've even managed to play games with Louise by webcam which ranks as a good achievement. It's not the same as being with her but it's the best I can do right now.

