Older blog entries for Stevey (starting at number 580)

I should bite my tongue.

Too often requests of the form "I'm looking for an open source solution to ..." mean "I'm looking to spend zero money, contribute nothing, and probably not even read your excellent documentation".

Syndicated 2012-09-29 22:08:51 from Steve Kemp's Blog

So about that off-site encrypted backup idea ..

I'm just back from having spent a week in Helsinki. Despite some minor irritations (the light-switches were always too damn low) it was a lovely trip.

There is a lot to be said for a place, and a culture, where shrugging and grunting counts as communication.

Now I'm back, catching up on things, and mostly plotting and planning how to handle my backups going forward.

Filesystem backups I generally take using backup2l, creating local incremental backup archives then shipping them offsite using rsync. For my personal stuff I have a bunch of space on a number of hosts and I just use rsync to literally copy my ~/Images, ~/Videos, etc..

In the near future I'm going to have access to a backup server which will run rsync, and pretty much nothing else. I want to decide how to archive my content to that - securely.

The biggest issue is that my images (.CR2 + .JPG) will want to be encrypted remotely, but not locally. So I guess if I re-encrypt transient copies and rsync them I'll end up having to send "full" changes each time I rsync. Clearly that will waste bandwidth.

So my alternative is to use incrementals, as I do elsewhere, then GPG-encrypt the tar files that are produced - simple to do with backup2l - and use rsync. That seems like the best plan, but requires that I have more space available locally since:

  • I need the local .tar files.
  • I then need the .tar.gz.asc/.tar.gz.gpg files too.

I guess I will ponder. It isn't horrific to require local duplication, but it strikes me as something I'd rather avoid - especially given that we're talking about rsync over a home-broadband connection, which will take weeks at best for the initial copy.
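That plan could be sketched roughly like so (the paths, recipient, and remote host here are illustrative assumptions, not my real setup - the commands are built, not executed):

```python
# Sketch: GPG-encrypt each archive backup2l produces, then rsync only
# the encrypted copies off-site.  Every path/name here is a stand-in.
import glob
import shlex

def backup_commands(archive_dir="/var/backup2l",
                    recipient="steve@example.com",
                    remote="backup@remote:/backups/"):
    cmds = []
    for f in sorted(glob.glob(archive_dir + "/*.tar.gz")):
        q = shlex.quote(f)
        cmds.append(f"gpg --encrypt --recipient {recipient} "
                    f"--output {q}.gpg {q}")
    # Only the .gpg twins leave the house; plaintext archives stay local.
    cmds.append(f"rsync -av --include='*.gpg' --exclude='*' "
                f"{archive_dir}/ {remote}")
    return cmds
```

The cost is exactly the duplication noted above: both the .tar.gz and its .gpg twin must exist locally before the rsync runs.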

Syndicated 2012-09-28 16:31:03 from Steve Kemp's Blog

Security changes have unintended effects.

A couple of months ago I was experimenting with adding no-new-privileges to various systems I run. Unfortunately I was surprised a few weeks later by unintended breakage.

My personal server has several "real users", and several "webserver users". Each webserver user runs a single copy of thttpd under its own UID, listening on port xxxx, where xxxx is the userid:

steve@steve:~$ id -u s-steve
1019

steve@steve:~$ sudo lsof -i :1019
thttpd  9993 s-steve    0u  IPv4 7183548      0t0  TCP localhost:1019 (LISTEN)
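The scheme can be sketched in a few lines (the thttpd flags and docroot path are assumptions on my part; check thttpd(8) before trusting them):

```python
# Sketch: each per-user thttpd listens on a local port equal to the
# account's UID, so the proxy can derive the backend port from the user.
import pwd

def thttpd_command(user):
    uid = pwd.getpwnam(user).pw_uid          # the UID doubles as the port
    return ["thttpd",
            "-h", "127.0.0.1",               # only the local proxy connects
            "-p", str(uid),
            "-u", user,
            "-d", f"/home/{user}/public_html"]
```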

Facing the world I have an IPv4 & IPv6 proxy server that routes incoming connections to these local thttpd instances.

Wouldn't it be wonderful to restrict these instances, and prevent them from acquiring new privileges? Yes, I thought. Unfortunately I stumbled across a down-side: Some of the servers send email, and they do that by shelling out to /usr/sbin/sendmail which is setuid (and thus fails). D'oh!

The end result was choosing between:

  • Leaving "no-new-privileges" in place, and rewriting all my mail-sending CGI scripts.
  • Removing the protection such that setuid files can be executed.

I went with the latter for now, but will probably revisit this in the future.
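For reference, the flag itself is a single prctl(2) call. A minimal sketch (not my production code) of setting it and reading it back, via ctypes:

```python
# PR_SET_NO_NEW_PRIVS / PR_GET_NO_NEW_PRIVS constants from linux/prctl.h.
import ctypes

PR_SET_NO_NEW_PRIVS = 38
PR_GET_NO_NEW_PRIVS = 39

libc = ctypes.CDLL(None, use_errno=True)
if libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
    raise OSError(ctypes.get_errno(), "prctl(PR_SET_NO_NEW_PRIVS) failed")

# From here on, execve() of a setuid binary such as /usr/sbin/sendmail
# still runs it, but without the privilege elevation -- which is exactly
# what broke the mail-sending CGI scripts.
print("no_new_privs:", libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0))
```

Note the flag is one-way: once set it cannot be cleared, and it is inherited by all children.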

In more interesting news, I recently tried to recreate the feel of a painting as an image, which was successful. I think.

I've been doing a lot more shooting recently, even outdoors, which has been fun.

ObQuote: "You know, all the cheerleaders in the world wouldn't help our football team." - Bring it On

Syndicated 2012-09-07 14:35:52 from Steve Kemp's Blog

Failing to debug a crash with epiphany-browser and webkit

Today I'm in bed, because I have le sniffles, and a painful headache. I'm using epiphany to write this post, via VNC to my main desktop, but I'm hating it as I've somehow evolved into a state where the following crashes my browser:

  • Open browser.
  • Navigate to gmail.com
  • Login.
  • Wait for page to complete loading, showing my empty inbox.
  • Click "signout".

Running under GDB shows nothing terribly helpful:

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff51a0a46 in ?? () from /usr/lib/libwebkit-1.0.so.2
#2  0x00007ffff3d8f79d in ?? () from /usr/lib/libsoup-2.4.so.1
#3  0x00007ffff2a4947e in g_closure_invoke () from /usr/lib/libgobject-2.0.so.0
#4  0x00007ffff2a5f7f4 in ?? () from /usr/lib/libgobject-2.0.so.0

To get more detail I ran "apt-get install epiphany-browser-dbg" - this narrows down the crash, but not in a useful way:

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff51a0a46 in finishedCallback (session=<value optimized out>, msg=0x7fffd801d9c0, data=) at ../WebCore/platform/network/soup/ResourceHandleSoup.cpp:329
#2  0x00007ffff3d8f79d in ?? () from /usr/lib/libsoup-2.4.so.1
#3  0x00007ffff2a4947e in g_closure_invoke () from /usr/lib/libgobject-2.0.so.0
#4  0x00007ffff2a5f7f4 in ?? () from /usr/lib/libgobject-2.0.so.0

So this crash happens in ResourceHandleSoup.cpp. Slowly I realized that this file came from the webkit package, not epiphany.

The last call we can name is the function at ResourceHandleSoup.cpp:329, which puts us at the last line of this function:

// Called at the end of the message, with all the necessary about the last informations.
// Doesn't get called for redirects.
static void finishedCallback(SoupSession *session, SoupMessage* msg, gpointer data)
{
    RefPtr<ResourceHandle> handle = adoptRef(static_cast<ResourceHandle*>(data));

    // TODO: maybe we should run this code even if there's no client?
    if (!handle)
        return;

    ResourceHandleInternal* d = handle->getInternal();

    ResourceHandleClient* client = handle->client();
    if (!client)
        return;

    // ... (error handling elided) ...

    client->didFinishLoading(handle.get());
}

So we see there is some validation that happens, then a call to "didFinishLoading" and somewhere shortly after that it dies. didFinishLoading looks trivial:

void WebCoreSynchronousLoader::didFinishLoading(ResourceHandle*)
{
    m_finished = true;
}

So my mental-debugging is stymied. I blame my headache. It looks like there is no obvious NULL-pointer dereference, if we pretend client cannot be NULL. So the next step is to get the source and the build-dependencies, then build a debug version of webkit. I ran "apt-get source webkit", then edited the file ./debian/rules to add --enable-debug and rebuilt it:

skx@precious:~/Debian/epiphany/webkit-1.2.7$ DEB_BUILD_OPTIONS="nostrip noopt" debuild -sa

*time passes*

The build fails:

  CXX    WebCore/svg/libwebkit_1_0_la-SVGUseElement.lo
../WebCore/svg/SVGUseElement.cpp: In member function ‘virtual void WebCore::SVGUseElement::insertedIntoDocument()’:
../WebCore/svg/SVGUseElement.cpp:125: error: ‘class WebCore::Document’ has no member named ‘isXHTMLDocument’
../WebCore/svg/SVGUseElement.cpp:125: error: ‘class WebCore::Document’ has no member named ‘parser’
make[2]: *** [WebCore/svg/libwebkit_1_0_la-SVGUseElement.lo] Error 1

Ugh. So I guess we disable that "--enable-debug", and hope that "nostrip noopt" helps instead.

*Thorin sits down and starts singing about gold*

Finally the debugging build has finished and I've woken up again. Let us do this thing. I'd looked over the webkit tracker and the crashing bugs list in the meantime, but nothing jumped out at me as being similar to my issue.

Anyway, without the --enable-debug flag present in the call to ../configure, the Debian webkit packages were eventually built and installed:

skx@precious:~/Debian/epiphany$ mv libwebkit-dev_1.2.7-0+squeeze2_amd64.deb x.deb
skx@precious:~/Debian/epiphany$ sudo dpkg --install libweb*deb
[sudo] password for skx:
(Reading database ... 173767 files and directories currently installed.)
Preparing to replace libwebkit-1.0-2 1.2.7-0+squeeze2 (using libwebkit-1.0-2_1.2.7-0+squeeze2_amd64.deb) ...
Unpacking replacement libwebkit-1.0-2 ...
Preparing to replace libwebkit-1.0-2-dbg 1.2.7-0+squeeze2 (using libwebkit-1.0-2-dbg_1.2.7-0+squeeze2_amd64.deb) ...
Unpacking replacement libwebkit-1.0-2-dbg ...
Preparing to replace libwebkit-1.0-common 1.2.7-0+squeeze2 (using libwebkit-1.0-common_1.2.7-0+squeeze2_all.deb) ...
Unpacking replacement libwebkit-1.0-common ...
Setting up libwebkit-1.0-common (1.2.7-0+squeeze2) ...
Setting up libwebkit-1.0-2 (1.2.7-0+squeeze2) ...
Setting up libwebkit-1.0-2-dbg (1.2.7-0+squeeze2) ...

Good news everybody: The crash still happens!

Firing up GDB would hopefully reveal more details - but sadly it didn't:

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff51a0a46 in finishedCallback (session=, msg=0xb03420, data=) at ../WebCore/platform/network/soup/ResourceHandleSoup.cpp:329
(gdb) up
#1  0x00007ffff51a0a46 in finishedCallback (session=, msg=0xb03420, data=) at ../WebCore/platform/network/soup/ResourceHandleSoup.cpp:329
329     client->didFinishLoading(handle.get());
(gdb) p client
$1 = <value optimized out>

At this point my head hurts too much, and I'm stuck. No quote today, I cannot be bothered.

Syndicated 2012-08-20 11:43:24 from Steve Kemp's Blog

minidlna is now packaged

So in my previous entry I talked about streaming audio media to my new tablet. Despite it working for others I couldn't get MPDroid to stream to my tablet from my MPD server successfully.

After looking around for alternatives I used MediaTomb to stream my content to the Android "2player" application. After a while I switched from MediaTomb to minidlna, the latter built from source.

To save myself effort, and to be useful, I've packaged that for Squeeze here:

There's a configuration file in /etc/minidlna and a trivial init-script. Works for me.

In more fun news, yesterday I endured an epic 8-hour bus trip and travelled from Edinburgh to Loch Ness.

Why go to Loch Ness? Well, I fancied a swim, and wanted to capture shots of hairy cows. In addition to that I saw several interesting birds and a lot of Scottish scenery.

All in all a good day out.

ObQuote: "It's not you. It's me... I'm completely fucked up."- Cruel Intentions

Syndicated 2012-08-05 16:00:51 from Steve Kemp's Blog

So I joined the bandwagon and bought a tablet

So this week I bought a cheap Android tablet, specifically to allow me to read books upon the move and listen to music at home.

Currently my desktop PC has a bunch of music on it, and I listen to it locally only. In my living room I have an iPod in a dock which has identical contents.

In my bedroom? No music. This was something I wished to change.

The music on my desktop is played via mpd, and I use the sonata client to play it. I figured I could use this for my tablet, as MPD is all about remote control and similar stuff.

Unfortunately things didn't prove so easy. I found several android applications that would let me connect to my MPD server and control it, but despite several of them claiming to support streaming I couldn't make music appear upon my local tablet. (I rebuilt MPD with the lame encoder available, and used that too, to no avail).

Still, if you have music in a central location and you wish to control it then the setup is trivial. (Though there is one caveat with MPD and streaming: my understanding is that you can only stream what you're playing. You cannot configure MPD to stream music and not also play it locally.)

So although it was neat to be able to control the music on my desktop host it wasn't what I wanted. Instead I had to install mediatomb to my desktop to serve the media, and use the 2player application to browse and play it.

Once I'd configured mediatomb all my music was available to my tablet. Result. Unfortunately 2player didn't play my movies; for that purpose I needed to use bubbleupnp. But that was a trivial install too.

So? End result: I have a toy tablet for <£100 which will stream my music to the bedroom.

ObQuote: "How about this: I work for you; in exchange, you teach me how to clean. " - Léon

Syndicated 2012-08-01 08:58:55 from Steve Kemp's Blog

I will be awesome, eventually.

Earlier this year, in March, I switched to the bluetile tiling window manager, and it has been a great success.

So I was interested to see Vincent Bernat's post about switching to Awesome. I intend to do that myself "eventually", now that I've successfully dipped my toes into tiling-land via bluetile's simple config and gnome-friendly setup.

One thing that puts me off is the length of my sessions:

skx@precious:~/hg/blog$ ps -ef | grep skx
skx       2237     1  0 Mar12 ?        00:00:01 /usr/bin/gnome-keyring-daemon ...

As you can see I've been logged into my desktop session for four months. I lock the screen when I wander away, but generally login once when the computer boots and never again for half a year or so. FWIW:

skx@precious:~/hg/blog$ uptime
 23:01:19 up 138 days,  4:44,  4 users,  load average: 0.02, 0.06, 0.02

ObQuote: "I'm 30 years old. I'm almost a grown man. " - Rocketman

Syndicated 2012-07-28 22:04:25 from Steve Kemp's Blog

Another day, another upgrade

Tonight I upgraded my personal machine to run the recently released 3.5[.0] kernel.

On my personal machine(s) I'm usually loath to change a running kernel, but this one was a good step forward because it allows me to experiment with seccomp filters.

I've tested the trivial "no new privileges" prctl, and I followed along with the nice seccomp tutorial which gave me simple working code that I married to my javascript interpreter.

On top of that I upgraded node.js, which meant I had to clean up a little deprecated code in my node reverse proxy - which is the public face of the websites I run upon my box. (The proxy tunnels to about 10 different thttpd instances, each running upon its own local port.)

Happily however my weekend was not full of code, it was brightened by the opportunity to take pictures of Aurora and her long hair - more to come as I've still got about 350 images to wade through..

ObQuote: "Don't you think I make a remarkable queen? " - St. Trinian's (2007)

Syndicated 2012-07-22 21:53:15 from Steve Kemp's Blog

Misc update.

I got a few emails about the status panel I'd both toyed with and posted. The end result is that the live load graphs now have documentation, look prettier, and contain a link to the source code.

Apart from that this week has mostly involved photographing cute cats, hairy dogs, and women in corsets.

In Debian-related news njam: Insecure usage of environmental variable was closed after about 7 months, and I reported a failure of omega-rpg to drop group(games) privileges prior to saving game-state. That leads to things like this:

skx@precious:~$ ls -l | grep games
-rw-r--r--   1 skx games   14506 Jul  8 15:20 Omega1000

Not the end of the world, but it does mean you can write to directories owned by root.games, and potentially over-write level/high-score files in other packages, leading to compromises.
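The fix the game needs is to drop the effective group before writing any save file; in Python the idea looks like this (a sketch only - the real fix belongs in the game's C source):

```python
# Discard the setgid "games" group by setting both the real and the
# effective GID to the real GID, so save files get the player's own group.
import os

def drop_privileged_group():
    real_gid = os.getgid()
    os.setregid(real_gid, real_gid)
```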

ObQuote: "Your suffering will be legendary, even in hell! " - Hellraiser II (Did you know there were eight HellRaiser sequels?)

Syndicated 2012-07-08 14:23:21 from Steve Kemp's Blog

Writing a status panel the modern way

So status displays are cool. Seeing what is happening in real time is cool.

As a proof of concept I put together a trivial load-graph:

This is broken down into three parts:

Load Client

The load client is a trivial script which reads /proc/loadavg, and sends the 1-minute entry to a remote server, via a single UDP packet.
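Such a client needs only a few lines; in this sketch the collector's address and port are assumptions:

```python
# Read the 1-minute load average and fire it at the collector
# in a single UDP datagram.
import socket

def send_load(server="127.0.0.1", port=4433):
    with open("/proc/loadavg") as fh:
        one_minute = fh.read().split()[0]
    msg = f"load:{one_minute}".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, (server, port))
    sock.close()
    return msg

send_load()
```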

Load Server

The load-server is a service which listens for UDP traffic, and when it receives a new value records that in a Redis data-store.
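The receiving side can be sketched like so, with a plain dict standing in for Redis (in the real service each append would be an RPUSH against a per-host key):

```python
# Listen for "load:N" datagrams and record each sample against the
# sender's IP address.  A dict stands in for the Redis store here.
import socket

store = {}   # sender IP -> list of samples (a Redis list in the real thing)

def handle_packet(data, addr):
    metric, value = data.decode().split(":", 1)   # e.g. "load", "1.2"
    store.setdefault(addr[0], []).append(value)

def serve(port=4433, packets=1):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    for _ in range(packets):
        handle_packet(*sock.recvfrom(1024))
    sock.close()
```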

Load Display

This is an HTML page which pulls the values from the store and plots them using JavaScript.

So with each packet the UDP-server receives two things:

  • load:N - The load figure. The text "load:" is literal, and present in case I decide to extend the stats..
  • x.x.x.x - The IP address from which it received the message.

This is inserted into a Redis database as an array. This array could then be fetched via an AJAX script to update the HTML display in real time, but at the moment I just have a shell script which updates it in near-real time.

The idea of having a UDP-server receive values from remote clients is interesting. We just need to define a mapping to redis. For me I've just done this:

  • receive a UDP packet with value "load:1.2" from source x.x.x.x.
  • append "1.2" to the key for host x.x.x.x.
  • append "x.x.x.x" to the global "known_hosts" set.

The values received can be truncated (i.e. keep only the most recent 60 entries) with ease, due to the available Redis primitives, and we can easily graph them using the jqplot library.

Adding more metrics just means updating the clients to send "memfree:400m", "disk-free:50%", "users:2", "uptime:12345s", or similar. The storage is wonderfully abstract - all you need to do is tell the graph-drawing code a) which source to display, and b) which metric.

For example, if we did extend the client to send that data I could draw a graph of the memory on host foo.example.com just by selecting "memfree" against the origin "foo.example.com".

ObQuote: "Come here, damn you, I want to touch you. " - Hellraiser

Syndicated 2012-06-19 18:38:12 from Steve Kemp's Blog
