Older blog entries for fraggle (starting at number 65)

Most absurd CMOS battery ever?

I have a Performa 6400 PowerMac, which I bought off a friend several years ago. It runs Linux and I use it for portability testing (as it's a big-endian machine). Considering its age, it's not surprising that the battery for the onboard clock ran out years ago. It's not a huge problem, but it's certainly annoying when make complains about modification dates in the future, and to be confronted with "filesystem has not been checked for 19370 days" on boot-up. So I decided to replace the battery.

This is the battery that is supposed to go in it. Almost £15 for a battery! This seemed far too expensive to me, so I set out to find an alternative. Unfortunately 4.5 volt batteries are rather uncommon. However, I managed to find this, which was listed as a "lantern battery" used for torches, bike lamps and doorbells. I suspect that there are probably 3 AA batteries inside:

- Greencell 312G 4.5V lantern battery alongside the original (depleted) CMOS battery.

I did a quick sanity check of the old battery to make sure I had the polarity right. The voltage is down to 1V:

All that needs to be done now is to connect the connector from the old battery to the new one. I've always been hopeless at soldering, but fortunately this is a job that is simple enough for sellotape.

This is the logic board from the Mac. I've maxed out the RAM and added PCI Ethernet and USB cards. The black square at the bottom left is where the battery is supposed to be attached (it has a velcro strip at the back).

Obviously this is far too small for the new battery, and the card slides into the back of the machine, mounted vertically. It has to be attached to the board somehow. Where can it go?

Fortunately, the engineers at Apple were apparently smart enough to anticipate this very problem, and designed a convenient space on the riser card to put the battery in.

Syndicated 2010-09-06 11:16:19 from fragglet

Interview with me

fragorama.se is a new website about Classic Doom, and they have just published an interview with me!

Syndicated 2010-07-29 21:01:22 from fragglet

Southampton Test Hustings

I just attended the Hustings for Southampton Test. These are my thoughts on some of the candidates.

Alan Whitehead (incumbent Labour MP): Seemed rather nervous at the start but gained confidence later in the debate. To his credit he made some good points; I was impressed that he was bold enough to state that the law of the land should trump religious beliefs. However I also got the impression that he was less up to speed than the other candidates on local issues and perhaps hadn't been paying proper attention to his constituency. Surprisingly enough he opposes Trident in favour of a cruise missile system like the LibDems advocate.

Jeremy Moulton (Conservatives): Made some good points and seemed a confident speaker. Some of his answers seemed slightly evasive/misleading when audience members responded to them. Attacked Alan Whitehead for using the communications allowance and supposedly putting the Labour party name on it (?)

Dave Callaghan (Liberal Democrats): Seemed the most honest of the lot. He highlighted some of the local issues that he's been campaigning for, like the closure of the Millbrook library, which he attacked Jeremy on (Jeremy is responsible for finances on the local council?).

Pearline Hingston (UKIP): A UKIP candidate who is an immigrant (how's that for a brain-breaker?). She came across as completely clueless and in general contributed very little of note. The one time she really attempted to express an opinion on something (cyclists riding on the pavement) she got smacked down by a member of the audience for not having a clue what she was talking about.

Chris Bluemel (Green): Surprisingly clueful and well-spoken. He spoke out in favour of nuclear disarmament and did it well, even though I don't agree with his views.

During the debates I sat next to an older gentleman who spent the time scribbling down notes on the back of an envelope. When he asked a question to the panel, he made some strange comments about Halliburton and BP. He seemed to think that there were plans to site nuclear submarines in Southampton docks, and was worried they might blow up and destroy the city. Very odd.

There was an obvious large Christian presence in the audience, and I suspect that siting the debate in a church probably didn't help. The candidates were asked at one point why they had all refused to sign a petition (I forget the name of it) declaring their support for Christian beliefs, although it was then revealed that none of the candidates had even heard of it. Several questions were asked about Christian rights that were obviously homophobic (eg. anti-gay marriage), though the people posing the questions tried to veil this by speaking in vague terms that made it less obvious what they were talking about.

Syndicated 2010-05-02 18:31:39 from fragglet


I now have a Twitter account. You will find regular postings there about completely irrelevant things. I'm tagging Chocolate Doom-related postings with the #chocdoom tag.

Syndicated 2010-04-29 11:53:20 from fragglet

Chocolate Doom on OS X, and GNUstep

Chocolate Doom runs on Mac OS X and has done for several years; however, until now, getting it running has been overly complicated and required compiling the source code from scratch. Obviously this isn't really appropriate for a Mac; it certainly doesn't fit in with the Apple way of doing things. I recently set about trying to improve the situation.

I first investigated how things are installed on OS X. Generally speaking there are two ways that things are installed: the installer (.pkg files), and Application Bundles, typically contained inside a .dmg archive. The installer simply installs a bunch of files to your machine, while Application Bundles are a lot more fluid; to install, you simply drag an icon into the Applications folder.

Application Bundles seem obviously preferable, but there's the problem of how one should be structured. Chocolate Doom needs a Doom IWAD file that contains the data used by the game, so it's not sufficient to simply package the normal binary as a bundle. Then there's the setup tool as well - should that be in a separate bundle? Finally, people often like to load PWAD files containing extra levels and mods. How do you do that with a bundle?

In the end, I decided to write a minimalist launcher program. Everything is in a single bundle file which, when launched, opens a launcher window. The launcher allows the locations of the IWAD files to be configured and extra command line parameters entered. There's also a button to open the setup tool.

The launcher also sets up file associations when installed, so that it is possible to double-click a WAD file in the Finder, and an appropriate command line is constructed to load it. The interface is not as fully-featured as other "launcher" programs are, but it's simple and I think fits with the philosophy of the project.

Developing with GNUstep

The interesting part is how I developed the launcher. I only have occasional use of a Mac, so I developed it on GNUstep. This is an earlier version of the launcher interface while it was under development:

GNUstep provides an implementation of the same Objective-C API that OS X's Cocoa provides, albeit with a rather crufty-looking NeXTStep appearance. It also has Gorm, which works in a very similar way to OS X's Interface Builder application. Using GNUstep, I was able to mock up a working program relatively easily. Constructing interfaces is very straightforward: the controls are simply dragged-and-dropped onto a window. I was able to get the underlying code into a working state before porting to OS X.

Porting to OS X

Porting to OS X had some hassles. Firstly, Gorm/GNUstep uses its own native format for interface files, which are different to the .nibs used on OS X. Recent versions of Gorm can save .nibs, but I found that the program crashed when I tried to do this. I eventually just reconstructed the whole interface from scratch in Interface Builder. GNUstep can use .nibs, so I just threw the older Gorm interface away.

The other main annoyance was that the format for property lists is different on OS X. It seems that GNUstep uses the older NeXT format, which Apple have since replaced with a newer XML-based format. Finally, icon files on OS X are in a proprietary .icns format, while GNUstep simply uses PNGs.

Both OS X and GNUstep try to force you to use their build tools (Xcode, ProjectCenter), which seem to generate a whole load of junk. I wrote a Makefile instead. There are some conditional parts to handle the two systems - OS X and GNUstep application bundles have different internal structures, for example. On OS X, the Makefile will do the complete process of compiling the code, constructing the application bundle and generating a .dmg archive.

One thing I did find interesting is how OS X handles libraries. The full paths to any .dylib libraries (which are like Linux .so files or Windows DLLs) are stored inside a program when it is compiled. In my case, my application bundle needs to include the SDL libraries that Chocolate Doom depends upon. There's a convenient program called install_name_tool that can be used to change these paths after the program has been compiled. A special macro called @executable_path can be used to mean "the path where this binary is". I wrote a script to copy a program along with any libraries it depends on, changing its library search paths appropriately.

Thoughts on GNUstep

GNUstep was certainly incredibly useful in this activity; the ability to develop the program on my usual (Linux) laptop was very convenient. From a technical perspective, GNUstep seems to be a very impressive project. There is great usefulness in having a Linux implementation of the OPENSTEP (ie. Cocoa) API, which is what GNUstep is. However, the NeXT-style interface clashes horribly with almost any desktop environment that you might want to run under Linux (Gnome/KDE/etc), which is a huge turn-off.

The main problems are (1) the mini-window icons (which represent the running application) and (2) the menus, which appear in a separate window to the other application windows. I expect these are things that I could get used to if I was running a full GNUstep desktop where everything was like this; however, I'm not, and it wouldn't really be practical for me to do so. It is possible to theme GNUstep to look nicer than its default "ugly grey square" appearance, but these problems remain.

GNUstep is a frustrating project in this respect. I can't help wondering if the full potential of the project is limited by the short-sightedness of its developers. It seems like they're too hung up on their goal of recreating NeXTstep, when I doubt there are many people who would even want to use such a system nowadays. This entry from the developer FAQ gives a good example of what I'm talking about:
How about implementing parts of the Application Kit with GTK?
Yes and No - The GNUstep architecture provides a single, platform-independent, API for handling all aspects of GUI interaction (implemented in the gstep-gui library), with a backend architecture that permits you to have different display models (display postscript, X-windows, win32, berlin ...) while letting you use the same code for printing as for displaying. Use of GTK in the frontend gui library would remove some of those advantages without adding any.

"Without adding any [advantages]" - except, of course, the ability to give GNUstep applications an appearance that is consistent with 99% of Linux desktops! If it was possible to use GNUstep to make applications that looked like Gtk+ apps, I bet it would be a lot more attractive to developers. The practical advantages of such a decision are dismissed completely in the face of architectural/technical advantages that probably have little practical use.

Syndicated 2010-02-05 13:19:11 from fragglet

How to make a program just run

Starting with Windows Vista, Windows limits the privileges given to normal users, running programs as the Administrator user only when necessary. To smooth over the fact that installers for most software need to run as Administrator, it uses heuristics to detect whether a program is an installer. One of these is to look at the file name: if it contains "setup" (among other keywords), the program is treated as an installer.

This is a problem if you develop a program that is not an installer but has "setup" in the name, because Windows treats it as though it is an installer and prompts you for administrator privileges.

User Account Control

The first problem is that it prompts the user for administrator privileges. This is part of the User Account Control system. Fortunately, there's a way around this - it's possible to embed a special "manifest" XML file inside the EXE that tells Windows that Administrator privileges aren't necessary.

Here's the magic manifest file to do this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <!-- The "name" field in this tag should be the same as the executable's
       name -->
  <assemblyIdentity version="" processorArchitecture="X86"
                    name="chocolate-setup.exe" type="win32"/>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="asInvoker" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
The important part here is the "requestedExecutionLevel" statement, which specifies that the program should run as the invoker. I think the "uiAccess" attribute is necessary as well. I'm not entirely sure what it controls, and some people say it should be set to true. However, it seems that if it is set to true, the executable has to be digitally signed with a certificate, which looks like a massive hassle, so I've just left it turned off.

The "assemblyIdentity" tag here matches the executable name, but I'm not sure it's actually necessary. The version number is a dummy value.

Embedding it inside an executable is a matter of writing a resource file containing a statement to include the manifest file. Here's the magic statement for that:
1 24 MOVEABLE PURE "setup-manifest.xml"

The resource file is then compiled to a .o (using windres) and incorporated into the build.

Compatibility Assistant

So far, so good. If the above is done properly, Windows won't prompt to run the program with administrator privileges any more. However, that's not the end of the story. Windows still thinks the program is an installer, just an installer that doesn't need administrator privileges. The next problem is the "Program Compatibility Assistant".

If your program exits without writing any files to disk (in Chocolate Setup, it's possible to quit without saving configuration file changes, for example), the compatibility assistant appears. Because Windows thinks the program is an installer, and it hasn't written any files to disk, it assumes that something must have gone wrong with installation, and it might be a compatibility problem with a program designed for an older version of Windows. The assistant is supposed to help you resolve the problems you've encountered.

Working around this requires an addition to the manifest file stating that Vista (and Windows 7) are supported OSes; writing no files is then no longer treated as a problem. Here's the new version of the manifest:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <!-- The "name" field in this tag should be the same as the executable's
       name -->
  <assemblyIdentity version="" processorArchitecture="X86"
                    name="chocolate-setup.exe" type="win32"/>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="asInvoker" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>

  <!-- Stop the Program Compatibility Assistant appearing: -->
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/> <!-- 7 -->
      <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/> <!-- Vista -->
    </application>
  </compatibility>
</assembly>

Syndicated 2009-12-10 13:05:58 from fragglet

Python's braindamaged scoping rules

Python distinguishes between local and global variables based on assignment statements. If a variable is assigned within a function, that variable is treated as a local variable. This means that you cannot do this:

my_var = None

def set_my_var():
    my_var = "hello world"

set_my_var()
print my_var

As my_var is assigned within the function, it is treated as a local variable that is separate from the global variable of the same name. Instead, you have to explicitly tell the compiler that you want to assign to the global variable, like this:
my_var = None

def set_my_var():
    global my_var
    my_var = "hello world"

set_my_var()
print my_var

This all strikes me as rather brain-damaged. If assignments are used to detect the declaration of a variable, is it really so difficult to just examine the surrounding context to see if there is already a variable with the same name?

Syndicated 2009-05-07 11:37:54 from fragglet

Creative defacement

Something funny I saw attached to a sign on the car park down the road from my flat:

Syndicated 2009-04-30 22:38:23 from fragglet


IPv6 is something that I've been interested in for a while; I was even employed to do some v6 porting work a few years ago. Unfortunately, even though it's been several years and address exhaustion is rapidly approaching, uptake remains slow.

As I see it there are several problems with IPv6 adoption:
  1. Software doesn't support it
  2. Hardware doesn't support it
  3. ISPs don't provide it

As these go, (1) isn't actually that big a problem now. A lot of the most important software already supports v6. Ubuntu/Debian seems to just work with IPv6 (and presumably other Linux distributions as well), and even Windows supports it as of Vista. Software packages like Firefox work out of the box.

(2) is still a big issue, though I suspect a lot of hardware now supports it but ships with it turned off (routers, etc). (3) is simply a fact; I haven't heard of any ISPs supporting v6, and I suspect a lot of that is dependent on (2).


6to4 (not to be confused with 6in4 or 6over4 - thanks for the clear naming, guys) is, in my opinion, an excellent piece of engineering and exactly what is needed to fuel IPv6 adoption. It solves the hardware/ISP problems by tunneling v6 traffic over v4; however, the clever part is that it does this without the need to register an account with a tunnel provider or explicitly configure it. I first became aware of 6to4 when I heard that the Apple AirPort Extreme base station has it enabled by default, which I think demonstrates its potential; it's possible to circumvent the remaining hardware/ISP problems with IPv6 just by getting manufacturers of broadband routers to adopt 6to4.

With 6to4, tunnels are made opportunistically between v4 addresses, which means that if you have two machines using 6to4, they can communicate directly, without the overhead that routing through a third party would cause. (If this sounds a bit pointless, consider that it means two machines both behind NAT gateways in the v4 world can have end-to-end connectivity in the v6 world.) Any other v6 data is sent to a magic anycast address that automatically routes v6 data to the closest v6 gateway.

With 6to4, a machine has an IPv6 address range that is derived from its public IPv4 address. For example, if your IPv4 address is 1.2.3.4, your IPv6 subnet range is 2002:0102:0304::/48. IPv6 traffic for that range automatically gets sent to that IPv4 address. What really happens is that your 6to4-enabled broadband router assigns addresses from this range to machines on your home LAN.
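The mapping from IPv4 address to 6to4 prefix is entirely mechanical: the 32 bits of the IPv4 address become the next two 16-bit groups after 2002. A quick sketch in Python (the helper name is my own):

```python
def six_to_four_prefix(ipv4):
    """Map a public IPv4 address to its 6to4 /48 prefix (2002::/16)."""
    octets = [int(x) for x in ipv4.split(".")]
    # Each pair of octets forms one 16-bit group of the prefix.
    return "2002:%02x%02x:%02x%02x::/48" % tuple(octets)

print(six_to_four_prefix("1.2.3.4"))  # 2002:0102:0304::/48
```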

Setting up 6to4

My DSL router doesn't support 6to4; however, I managed to work around this. My router does support port forwarding (actually, protocol forwarding in this case), and I have a Linux machine in my lounge that I use as a media centre/server.

The first step was to set up a rule on the router to forward 6to4 data to the server machine. I have a BT router which is helpfully quite flexible in this respect. 6to4 data is IP traffic with a protocol number of 41. From the router's command line interface, this did the job:

create nat rule entry ruleid 41416 rdr prot num 41 lcladdrfrom lcladdrto

It was then a case of configuring the server to do 6to4. As it is running Ubuntu, I added this to /etc/network/interfaces:
iface tun6to4 inet6 v4tunnel
	address 2002:0102:0304::1
	netmask 16
	endpoint any
	ttl 255
	post-up ip -6 route add 2000::/3 via :: dev tun6to4
	post-down ip -6 route flush dev tun6to4

auto tun6to4

A simple "sudo ifup tun6to4" and the tunnel device should come up. It should then be possible to ping IPv6 addresses:
$ ping6 ipv6.google.com
PING ipv6.google.com(2001:4860:a003::68) 56 data bytes
64 bytes from 2001:4860:a003::68: icmp_seq=1 ttl=61 time=53.8 ms
64 bytes from 2001:4860:a003::68: icmp_seq=2 ttl=61 time=52.5 ms
64 bytes from 2001:4860:a003::68: icmp_seq=3 ttl=61 time=45.5 ms
64 bytes from 2001:4860:a003::68: icmp_seq=4 ttl=61 time=51.5 ms


At this point, the server has IPv6 connectivity, but what I really want is every machine on the network to have it. So the next step is to set up the server as an IPv6 router.

To do this, other machines need to know that the server is a router and acquire IPv6 addresses. In IPv4, this is usually done with a DHCP server handing out addresses from a pool. Instead, with IPv6, routers advertise their address ranges, and the clients automatically construct an address. This is possible because of the vast address range in IPv6.
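The address a client constructs combines the advertised /64 prefix with an interface identifier derived from its MAC address - the EUI-64 scheme: flip the universal/local bit of the first byte and insert ff:fe between the two halves. A sketch (the helper name is hypothetical):

```python
def eui64_interface_id(mac):
    """Derive the EUI-64 interface identifier that stateless
    autoconfiguration appends to the advertised /64 prefix."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                        # flip the universal/local bit
    eui = b[:3] + [0xff, 0xfe] + b[3:]  # insert ff:fe between the halves
    # Format as four 16-bit groups, dropping leading zeros per group.
    return ":".join("%x" % (eui[i] << 8 | eui[i + 1]) for i in range(0, 8, 2))

print(eui64_interface_id("00:1c:10:63:63:d0"))  # 21c:10ff:fe63:63d0
```

This is exactly how the wlan0 address 2002:0102:0304:face:21c:10ff:fe63:63d0 in the ifconfig output below was formed from its hardware address.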

A package called radvd (router advertisement daemon) sends router advertisements. It's in the Debian package repository and very easy to configure. This is my /etc/radvd.conf file:
interface eth0
{
	AdvSendAdvert on;
	prefix 2002:0102:0304:face::/64
	{
		AdvOnLink on;
		AdvAutonomous on;
		AdvRouterAddr on;
	};
};
Notice that I've defined a subnet range for clients. The address range given by 6to4 is 2002:0102:0304::/48, while radvd assigns addresses in the 2002:0102:0304:face::/64 range. Next, I statically assign an address in this range in /etc/network/interfaces by adding this:
iface eth0 inet6 static
	address 2002:0102:0304:face::1
	netmask 64

Now the router advertisements are handing out v6 addresses to other machines on the network, and the server has an address within the subnet range to communicate with them. It's then just a matter of turning on routing. Add this to /etc/sysctl.conf:
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1

Or to make it take effect immediately:
sudo sysctl net.ipv6.conf.all.forwarding=1
sudo sysctl net.ipv6.conf.default.forwarding=1

That's it! Here's the output from ifconfig on another machine on my network:
wlan0     Link encap:Ethernet  HWaddr 00:1c:10:63:63:d0
          inet addr:  Bcast:  Mask:
          inet6 addr: 2002:0102:0304:face:21c:10ff:fe63:63d0/64 Scope:Global
          inet6 addr: fe80::21c:10ff:fe63:63d0/64 Scope:Link
          RX packets:7658 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7228 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4073660 (4.0 MB)  TX bytes:903010 (903.0 KB)

And here's Google IPv6:

Note that in the examples above, I've obscured my 6to4 address range to 2002:0102:0304::, to hide my IPv4 address, for privacy. If you want to follow my instructions, this needs to be replaced with your own public IPv4 address.

Syndicated 2009-03-20 22:03:27 from fragglet

Stock photos

BBC News' obsession with filling their articles with stock photos that contain no relevant information is reaching absurd extremes.

Syndicated 2009-02-13 13:07:45 from fragglet

