Older blog entries for Stevey (starting at number 584)

slaughter 2.x is getting closer

Work on slaughter 2.x is going rather well.

The scripting hasn't changed, and no primitives have been altered to break backward compatibility, but it is probably best to release this as "slaughter2" - because the way to specify the source from which to pull scripts has changed.

Previously we'd specify two arguments (or have them in a configuration file):

  • --server=example.com
  • --prefix=/slaughter/

That would result in policies being downloaded from:

  http://example.com/slaughter/

Now that the rework is complete we use "transports" and "prefixes". The new way to specify the old default is to run with:

--transport=http --prefix=http://example.com/slaughter/

I've implemented four transports thus far:

  • git
  • HTTP
  • Mercurial
  • rsync
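
For illustration, assuming the transport names match the list above (the exact flags may still change before release), fetching policies from each kind of source would look something like this - all of the hostnames and paths are made-up placeholders:

  --transport=git       --prefix=git://example.com/slaughter-policies.git
  --transport=mercurial --prefix=http://hg.example.com/slaughter
  --transport=rsync     --prefix=rsync.example.com::slaughter/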

The code has been made considerably neater, the test-cases are complete, and the POD/inline documentation is almost 100% complete.

Adding additional revision-controlled transports would be trivial at this point - but I suspect I'd be wasting my time if I were to add CVS support!
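
To give a flavour of why: a transport really only has to mirror the remote prefix into the local cache directory. The following is purely an illustrative sketch - not the real slaughter transport API - of what a hypothetical CVS fetch could boil down to:

  # Illustrative only; the real transport interface may differ.
  sub fetchPolicies
  {
      my ( $prefix, $cache ) = @_;    # e.g. ":pserver:anonymous@cvs.example.com:/repo" and "/var/cache/slaughter"

      # Check out (or refresh) a copy of the policies beneath the cache directory.
      chdir( $cache ) or die "Failed to chdir to $cache: $!";
      system( "cvs", "-d", $prefix, "checkout", "policies" ) == 0
          or die "cvs checkout failed: $?";
  }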

Life is good. Though I've still got a fair bit more documentation, prettification and updates to make before I'm ready to release it.

Play along at home if you wish: via the repository.

Syndicated 2012-10-26 19:35:25 from Steve Kemp's Blog

So slaughter is definitely getting overhauled

There have been a few interesting discussions going on in parallel about my slaughter sysadmin tool.

I've now decided there will be a 2.0 release, and that will change things for the better. At the moment there are two main parts to the system:

Downloading policies

These are instructions/Perl code that are applied to the local host.

Downloading files

Policies are allowed to download files, e.g. /etc/ssh/sshd_config templates, etc.

Both of these happen via HTTP fetches (SSL may be used), and there is a different root for each of the two trees. For example you can see the two public examples I have here.

A fetch of the policy "foo.policy" uses the first prefix, and a fetch of the file "bar" uses the latter prefix. (In actual live usage I use a restricted location because I figured I might end up storing sensitive things, though I suspect I don't.)
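
Purely as an illustration (these are placeholder URLs, not my real ones) the two roots might look like this, with foo.policy coming from the first tree and bar from the second:

  policies:  http://example.com/slaughter/policies/  ->  http://example.com/slaughter/policies/foo.policy
  files:     http://example.com/slaughter/files/     ->  http://example.com/slaughter/files/bar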

The plan is to update the configuration file to read something like this:

transport = http

#
# Valid options will be
#    rsync | http | git | mercurial | ftp
#

#
# each transport will have a different prefix
#
prefix = http://static.steve.org.uk/private

# for rsync:
#  prefix=rsync.example.com::module/
#
# for ftp:
#  prefix=ftp://ftp.example.com/pub/
#
#  for git:
#  prefix=git://github.com/user/repo.git
#
#  for mercurial
#  prefix=http://repo.example.com/path/to/repo
#

I anticipate that the HTTP transport will continue to work the way it currently does. The other transports will clone/fetch the appropriate resource recursively to a local directory - say /var/cache/slaughter. So the complete archive of files/policies will be available locally.

The HTTP transport will continue to work the same way with regard to file fetching, i.e. fetching them remotely on-demand. For all other transports the "remote" file being copied will be pulled from the local cache.

So assuming this:

transport = rsync
prefix    = rsync.company.com::module/

Then the following policy will result in the expected action:

if ( UserExists( User => "skx" ) )
{
    # copy
    FetchFile(
            Source => "/global-keys",
              Dest => "/home/skx/.ssh/authorized_keys2",
             Owner => "skx",
             Group => "skx",
              Mode => "600" );
}

The file "/global-keys" will refer to /var/cache/slaughter/global-keys which will have been already downloaded.

I see zero downside to this approach; it allows the HTTP stuff to continue to work as it did before, and it allows more flexibility. We can benefit from knowing that the remote policies haven't been tampered with, via the integrity checking built into git/mercurial for example, and from the speed gains of rsync.

There will also be an optional verification stage. So the code will roughly go like this:

  • 1. Fetch the policy using the specified transport.
  • 2. (Optionally) run some local command to verify the local policies.
  • 3. Execute policies.
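
For example the configuration file might gain an option along these lines - both the name and the syntax here are invented for illustration, and check-policies is a made-up script:

#
# Optional verification hook, run after fetching and before executing.
# A non-zero exit status would abort the run.
#
verify = /usr/local/bin/check-policies /var/cache/slaughter/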

I'm not anticipating additional changes, but I'm open to persuasion.

Syndicated 2012-10-24 07:28:56 from Steve Kemp's Blog

Software and hardware..

Software

I've been using redis for a while now. It is a fast in-memory storage system which offers persistence (unlike memcached), as well as several primitive data-types such as lists & hashes.

Anyway it crossed my mind that I don't have a backup of the data it contains, so I knocked up a simple script to dump the contents in plain-text.
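
In outline - this is a sketch of the idea rather than the exact script, and it assumes the Redis CPAN module is available - it goes something like this:

#!/usr/bin/perl
#
#  Dump the contents of a local redis instance as plain text.
#
use strict;
use warnings;
use Redis;

my $redis = Redis->new( server => '127.0.0.1:6379' );

foreach my $key ( sort $redis->keys('*') )
{
    my $type = $redis->type($key);

    if ( $type eq 'string' )
    {
        print "$key: ", $redis->get($key), "\n";
    }
    elsif ( $type eq 'list' )
    {
        print "$key: [", join( ", ", $redis->lrange( $key, 0, -1 ) ), "]\n";
    }
    elsif ( $type eq 'hash' )
    {
        my %h = $redis->hgetall($key);
        print "$key: {", join( ", ", map {"$_=$h{$_}"} sort keys %h ), "}\n";
    }
    else
    {
        print "$key: (type '$type' not handled)\n";
    }
}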

In other software-news I've had some interesting and useful feedback and made two new releases of my slaughter sysadmin tool - it now contains a wee test suite and more robustness.

Hardware

I received an email last night to say that my Raspberry PI has shipped. Ordered 24/05/2012, and dispatched 12/10/2012 - I'd almost forgotten about it.

My plan is to make it a media-serving machine, SNES emulator, or similar. Not 100% decided yet.

Finally I've taken the time to repaint my office. When I last wrote about working from home I didn't include pictures - I just described the process of using a "work computer" and a "personal computer".

So this is what my office used to look like. As you can see there are two machines and a huge desk.

With a few changes I now have an office which looks like this - the two machines are glued-together with a KVM, and I have much more room behind it for another desk, more books, and similar toys. Additionally my dedication is now enforced - I simply cannot play with both computers at the same time.

The chair was used to mount the picture - usually I sit on a kneeling chair, which is almost visible.

What inspired the painting? Partly the need for more space, but mostly water damage. I had a leaking ceiling. (Local people will know all about my horrible leaking roof situation).

The end?

Syndicated 2012-10-13 07:55:57 from Steve Kemp's Blog

Artificial inflation of facebook-likes

Facebook Like-Inflation

If you have a website, with a "Facebook Like" box on it, it probably shows something like this:

  • 400 People Like this

Did you know that number is not just the total number of people who clicked "Like" on your page? Did you know you can artificially inflate that number?

Interesting stuff.

Send a message to yourself with the URL in the body, such that it becomes an "attachment". Watch as the like-counter increases by 1 or even 2. Lather. Rinse. Repeat.

Sending messages to other people probably does the same thing. But sending to yourself is sufficient.

Syndicated 2012-10-04 21:06:32 from Steve Kemp's Blog

I should bite my tongue.

Too often requests of the form "I'm looking for an open source solution to ..." mean "I'm looking to spend zero money, contribute nothing, and probably not even read your excellent documentation".

Syndicated 2012-09-29 22:08:51 from Steve Kemp's Blog

So about that off-site encrypted backup idea ..

I'm just back from having spent a week in Helsinki. Despite some minor irritations (the light-switches were always too damn low) it was a lovely trip.

There is a lot to be said for a place, and a culture, where shrugging and grunting counts as communication.

Now I'm back, catching up on things, and mostly plotting and planning how to handle my backups going forward.

Filesystem backups I generally take using backup2l, creating local incremental backup archives then shipping them offsite using rsync. For my personal stuff I have a bunch of space on a number of hosts and I just use rsync to literally copy my ~/Images, ~/Videos, etc..

In the near future I'm going to have access to a backup server which will run rsync, and pretty much nothing else. I want to decide how to archive my content to that - securely.

The biggest issue is that my images (.CR2 + .JPG) will want to be encrypted remotely, but not locally. So I guess if I re-encrypt transient copies and rsync them I'll end up having to send "full" changes each time I rsync. Clearly that will waste bandwidth.

So my alternative is to use incrementals, as I do elsewhere, then GPG-encrypt the tar files that are produced - simple to do with backup2l - and use rsync. That seems like the best plan, but it requires that I have more space available locally since:

  • I need the local .tar files.
  • I then need the .tar.gz.asc/.tar.gz.gpg files too.
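
A rough sketch of that plan in Perl - the paths, the GPG recipient, and the remote target are all placeholders rather than my real setup:

#!/usr/bin/perl
#
#  Encrypt freshly-created backup archives, then ship only the
#  encrypted copies off-site.
#
use strict;
use warnings;

my $archives = "/var/archives";                   # where backup2l writes its tarballs
my $remote   = "backup\@example.com:backups/";    # the rsync-only server

foreach my $tar ( glob("$archives/*.tar.gz") )
{
    my $enc = "$tar.gpg";
    next if ( -e $enc );                          # already encrypted on a previous run

    system( "gpg", "--encrypt", "--recipient", "backups\@example.com",
            "--output", $enc, $tar ) == 0
      or die "gpg failed for $tar: $?";
}

# Only the *.gpg files leave the machine.
system( "rsync", "-av", "--include=*.gpg", "--exclude=*", "$archives/", $remote ) == 0
  or die "rsync failed: $?";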

I guess I will ponder. It isn't horrific to require local duplication, but it strikes me as something I'd rather avoid - especially given that we're talking about rsync over a home-broadband connection, which will take weeks at best for the initial copy.

Syndicated 2012-09-28 16:31:03 from Steve Kemp's Blog

Security changes have unintended effects.

A couple of months ago I was experimenting with adding no-new-privileges to various systems I run. Unfortunately I was surprised a few weeks later by unintended breakage.

My personal server has several "real users", and several "webserver users". Each webserver user runs a single copy of thttpd under its own UID, listening on 127.0.0.1:xxxx, where xxxx is the userid:

steve@steve:~$ id -u s-steve
1019

steve@steve:~$ sudo lsof -i :1019
COMMAND  PID    USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
thttpd  9993 s-steve    0u  IPv4 7183548      0t0  TCP localhost:1019 (LISTEN)

Facing the world I have an IPv4 & IPv6 proxy server that routes incoming connections to these local thttpd instances.

Wouldn't it be wonderful to restrict these instances, and prevent them from acquiring new privileges? Yes, I thought. Unfortunately I stumbled across a down-side: Some of the servers send email, and they do that by shelling out to /usr/sbin/sendmail which is setuid (and thus fails). D'oh!
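
The scripts themselves do nothing unusual; the mail-sending part is essentially the standard pipe-to-sendmail idiom, something like this (the addresses are invented):

# Typical CGI mail-sending code (illustrative).
open( my $sendmail, "|-", "/usr/sbin/sendmail", "-t" )
  or die "Failed to run sendmail: $!";
print $sendmail "To: steve\@example.com\n";
print $sendmail "Subject: Contact form\n";
print $sendmail "\n";
print $sendmail "Hello, world\n";
close($sendmail);

# With no-new-privileges in force the setuid bit on /usr/sbin/sendmail
# is ignored, so it runs without the privileges it expects and the
# delivery fails.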

The end result was choosing between:

  • Leaving "no-new-privileges" in place, and rewriting all my mail-sending CGI scripts.
  • Removing the protection such that setuid files can be executed.

I went with the latter for now, but will probably revisit this in the future.

In more interesting news, I recently tried to recreate the feel of a painting as an image, which was successful. I think.

I've been doing a lot more shooting recently, even outdoors, which has been fun.

ObQuote: "You know, all the cheerleaders in the world wouldn't help our football team." - Bring it On

Syndicated 2012-09-07 14:35:52 from Steve Kemp's Blog

Failing to debug a crash with epiphany-browser and webkit

Today I'm in bed, because I have le sniffles, and a painful headache. I'm using epiphany to write this post, via VNC to my main desktop, but I'm hating it as I've somehow evolved into a state where the following crashes my browser:

  • Open browser.
  • Navigate to gmail.com
  • Login.
  • Wait for page to complete loading, showing my empty inbox.
  • Click "signout".

Running under GDB shows nothing terribly helpful:

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff51a0a46 in ?? () from /usr/lib/libwebkit-1.0.so.2
#2  0x00007ffff3d8f79d in ?? () from /usr/lib/libsoup-2.4.so.1
#3  0x00007ffff2a4947e in g_closure_invoke () from /usr/lib/libgobject-2.0.so.0
#4  0x00007ffff2a5f7f4 in ?? () from /usr/lib/libgobject-2.0.so.0
...

To get more detail I ran "apt-get install epiphany-browser-dbg" - this narrows down the crash, but not in a useful way:

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff51a0a46 in finishedCallback (session=<value optimized out>, msg=0x7fffd801d9c0, data=<value optimized out>) at ../WebCore/platform/network/soup/ResourceHandleSoup.cpp:329
#2  0x00007ffff3d8f79d in ?? () from /usr/lib/libsoup-2.4.so.1
#3  0x00007ffff2a4947e in g_closure_invoke () from /usr/lib/libgobject-2.0.so.0
#4  0x00007ffff2a5f7f4 in ?? () from /usr/lib/libgobject-2.0.so.0
..

So this crash happens in ResourceHandleSoup.cpp. Slowly I realized that this came from the webkit package.

We see that the last named call is to the function at ResourceHandleSoup.cpp:329, which puts us at the last line of this function:

// Called at the end of the message, with all the necessary about the last informations.
// Doesn't get called for redirects.
static void finishedCallback(SoupSession *session, SoupMessage* msg, gpointer data)
{
    RefPtr<ResourceHandle> handle = adoptRef(static_cast<ResourceHandle*>(data));

    // TODO: maybe we should run this code even if there's no client?
    if (!handle)
        return;

    ResourceHandleInternal* d = handle->getInternal();

    ResourceHandleClient* client = handle->client();
    if (!client)
       return;

..
..
    client->didFinishLoading(handle.get());
}

So we see there is some validation that happens, then a call to "didFinishLoading" and somewhere shortly after that it dies. didFinishLoading looks trivial:

void WebCoreSynchronousLoader::didFinishLoading(ResourceHandle*)
{
      g_main_loop_quit(m_mainLoop);
      m_finished = true;
}

So my mental-debugging is stymied. I blame my headache. It looks like there is no obvious NULL-pointer dereference, if we pretend client cannot be NULL. So the next step is to get the source and the build-dependencies, and then build a debug version of webkit. I ran "apt-get source webkit", then edited the file ./debian/rules to add --enable-debug and rebuilt it:

skx@precious:~/Debian/epiphany/webkit-1.2.7$ DEB_BUILD_OPTIONS="nostrip noopt" debuild -sa

*time passes*

The build fails:

  CXX    WebCore/svg/libwebkit_1_0_la-SVGUseElement.lo
../WebCore/svg/SVGUseElement.cpp: In member function ‘virtual void WebCore::SVGUseElement::insertedIntoDocument()’:
../WebCore/svg/SVGUseElement.cpp:125: error: ‘class WebCore::Document’ has no member named ‘isXHTMLDocument’
../WebCore/svg/SVGUseElement.cpp:125: error: ‘class WebCore::Document’ has no member named ‘parser’
make[2]: *** [WebCore/svg/libwebkit_1_0_la-SVGUseElement.lo] Error 1

Ugh. So I guess we disable that "--enable-debug", and hope that "nostrip noopt" helps instead.

*Thorin sits down and starts singing about gold*

Finally the debugging build has finished and I've woken up again. Let us do this thing. I'd looked over the webkit tracker and the crashing bugs list in the meantime, but nothing jumped out at me as being similar to my issue.

Anyway, without the --enable-debug flag present in the call to ../configure, the Debian webkit packages were built, eventually, and installed:

skx@precious:~/Debian/epiphany$ mv libwebkit-dev_1.2.7-0+squeeze2_amd64.deb x.deb
skx@precious:~/Debian/epiphany$ sudo dpkg --install libweb*deb
[sudo] password for skx:
(Reading database ... 173767 files and directories currently installed.)
Preparing to replace libwebkit-1.0-2 1.2.7-0+squeeze2 (using libwebkit-1.0-2_1.2.7-0+squeeze2_amd64.deb) ...
Unpacking replacement libwebkit-1.0-2 ...
Preparing to replace libwebkit-1.0-2-dbg 1.2.7-0+squeeze2 (using libwebkit-1.0-2-dbg_1.2.7-0+squeeze2_amd64.deb) ...
Unpacking replacement libwebkit-1.0-2-dbg ...
Preparing to replace libwebkit-1.0-common 1.2.7-0+squeeze2 (using libwebkit-1.0-common_1.2.7-0+squeeze2_all.deb) ...
Unpacking replacement libwebkit-1.0-common ...
Setting up libwebkit-1.0-common (1.2.7-0+squeeze2) ...
Setting up libwebkit-1.0-2 (1.2.7-0+squeeze2) ...
Setting up libwebkit-1.0-2-dbg (1.2.7-0+squeeze2) ...
skx@precious:~/Debian/epiphany$

Good news everybody: The crash still happens!

Firing up GDB should hopefully have revealed more details - but sadly it didn't.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff51a0a46 in finishedCallback (session=<value optimized out>, msg=0xb03420, data=<value optimized out>) at ../WebCore/platform/network/soup/ResourceHandleSoup.cpp:329
..
(gdb) up
#1  0x00007ffff51a0a46 in finishedCallback (session=<value optimized out>, msg=0xb03420, data=<value optimized out>) at ../WebCore/platform/network/soup/ResourceHandleSoup.cpp:329
329     client->didFinishLoading(handle.get());
(gdb) p client
$1 = <value optimized out>

At this point my head hurts too much, and I'm stuck. No quote today, I cannot be bothered.

Syndicated 2012-08-20 11:43:24 from Steve Kemp's Blog

minidlna is now packaged

So in my previous entry I talked about streaming audio media to my new tablet. Despite it working for others I couldn't get MPDroid to stream to my tablet from my MPD server successfully.

After looking around for alternatives I used MediaTomb to stream my content to the Android "2player" application. After a while I switched from MediaTomb to minidlna instead, building that application from source.

To save myself effort, and to be useful, I've packaged that for Squeeze here.

There's a configuration file in /etc/minidlna and a trivial init-script. Works for me.
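
For reference, the configuration largely amounts to pointing it at your media directories; something like this (the paths and name are examples, not my real setup):

# /etc/minidlna/minidlna.conf
media_dir=A,/home/steve/Music
media_dir=V,/home/steve/Videos
friendly_name=Squeeze media server
port=8200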

In more fun news, yesterday I endured an epic 8-hour bus trip and travelled from Edinburgh to Loch Ness.

Why go to Loch Ness? Well I fancied a swim, and wanted to capture shots of hairy cows. In addition to that I saw several interesting birds and a lot of Scottish scenery.

All in all a good day out.

ObQuote: "It's not you. It's me... I'm completely fucked up."- Cruel Intentions

Syndicated 2012-08-05 16:00:51 from Steve Kemp's Blog

So I joined the bandwagon and bought a tablet

So this week I bought a cheap Android tablet, specifically to allow me to read books on the move and listen to music at home.

Currently my desktop PC has a bunch of music on it, and I listen to it locally only. In my living room I have an iPod in a dock which has identical contents.

In my bedroom? No music. This was something I wished to change.

The music on my desktop is played via mpd, and I use the sonata client to play it. I figured I could use this for my tablet, as MPD is all about remote control and similar stuff.

Unfortunately things didn't prove so easy. I found several android applications that would let me connect to my MPD server and control it, but despite several of them claiming to support streaming I couldn't make music appear upon my local tablet. (I rebuilt MPD with the lame encoder available, and used that too, to no avail).

Still if you have music in a central location and you wish to control it then the setup is trivial. (Though there is one caveat with MPD and streaming: my understanding is that you can only stream what you're playing. You cannot configure MPD to stream music and not also play it locally.)
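
For anyone wanting to try the same thing, the streaming side of /etc/mpd.conf looks roughly like this - the name, port, and bitrate are arbitrary choices:

audio_output {
    type      "httpd"
    name      "MPD stream"
    encoder   "lame"          # or "vorbis"
    port      "8000"
    bitrate   "128"
    format    "44100:16:2"
}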

So although it was neat to be able to control the music on my desktop host it wasn't what I wanted. Instead I had to install mediatomb on my desktop to serve the media, and use the 2player application to browse and play it.

Once I'd configured mediatomb all my music was available to my tablet. Result. Unfortunately 2player didn't play my movies; for that purpose I needed to use bubbleupnp. But that was a trivial install too.

So? End result. I have a toy tablet for <£100 which will stream my music to the bedroom.

ObQuote: "How about this: I work for you; in exchange, you teach me how to clean. " - Léon

Syndicated 2012-08-01 08:58:55 from Steve Kemp's Blog

