Recent blog entries

28 Nov 2015 jas   » (Master)

Automatic Replicant Backup over USB using rsync

I have been using Replicant on the Samsung SIII I9300 for over two years. I have written before on taking a backup of the phone using rsync but recently I automated my setup as described below. This work was prompted by a screen accident with my phone that caused it to die, and I noticed that I hadn’t taken regular backups. I did not lose any data this time, since typically all content I create on the device is immediately synchronized to my clouds. Photos are uploaded by the ownCloud app, SMS Backup+ saves SMS and call logs to my IMAP server, and I use DAVDroid for synchronizing contacts, calendar and task lists with my instance of ownCloud. Still, I strongly believe in regular backups of everything, so it was time to automate this.

For my use-case, taking backups of the phone whenever I connect it to one of my laptops is sufficient. I typically connect it to my laptops for charging at least every other day. My laptops all run Debian, but this should be applicable to most modern GNU/Linux systems. This is not Replicant-specific, although you need a rooted phone. I thought that automating this would be simple, but I got to learn the ins and outs of systemd and udev in the process, and this ended up taking the better part of an evening.

I started out adding a udev rule and a small script, thinking I could invoke the backup process from the udev rule. However, rsync would magically die after running for a few seconds. After an embarrassingly long debugging session, I finally found someone with a similar problem, which led me to a nice writeup on the topic of running long-running services on udev events. I created a file /etc/udev/rules.d/99-android-backup.rules with the following content:

ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="323048a5ae82918b", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="4df9e09c25e75f63", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"

The serial numbers correspond to the device serial numbers of the two devices I wish to back up. The adb devices command will print them for you; you need to replace my values with the values from your phones. Next I created a systemd unit describing a oneshot service. The file /etc/systemd/system/android-backup@.service has the following content:

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/android-backup %I

The at-sign (“@”) in the service filename signals that this is a service that takes a parameter. I’m not enough of a udev/systemd person to explain these two files using the proper terminology, but at least you can pattern-match and follow the basic idea: the udev rule matches the devices that I’m interested in (I don’t want this to happen for all random Android devices I attach, hence matching against known serial numbers), and it causes a systemd service with a parameter to be started. The systemd service file describes the script to run, and passes on the parameter.

Now for the juicy part, the script. I have /usr/local/sbin/android-backup with the following content.


#!/bin/bash

export ANDROID_SERIAL="$1"

# Log everything (stdout and stderr) to syslog.
exec > >(logger -t android-backup) 2>&1

# Backups go under e.g. /var/backups/android-323048a5ae82918b/.
DIRBASE=/var/backups/android

if ! test -d "$DIRBASE-$ANDROID_SERIAL"; then
    echo "could not find directory: $DIRBASE-$ANDROID_SERIAL"
    exit 1
fi

set -x

adb wait-for-device
adb root
adb wait-for-device
adb shell printf "address = 127.0.0.1\nuid = root\ngid = root\n[root]\n\tpath = /\n" \> /mnt/secure/rsyncd.conf
adb shell rsync --daemon --no-detach --config=/mnt/secure/rsyncd.conf &
adb forward tcp:6010 tcp:873
sleep 2
rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ "$DIRBASE-$ANDROID_SERIAL/"
: rc $?
adb forward --remove tcp:6010
adb shell rm -f /mnt/secure/rsyncd.conf

This script warrants more detailed explanation. Backups are placed under, e.g., /var/backups/android-323048a5ae82918b/ for later off-site backup (you do back up your laptop, right?). You have to create this directory manually, as a safety catch against wildly rsyncing data into non-existing directories. The script logs everything using syslog, so run a tail -F /var/log/syslog & when setting this up. You may want to reduce the verbosity of rsync if you prefer (replace rsync -av with rsync -a).

The script runs adb wait-for-device, which, as you rightly guessed, waits for the device to settle. Next adb root is invoked to get root on the device (reading all files from the system naturally requires root). It takes some time to switch, so another wait-for-device call is needed. Next a small rsyncd configuration file is created in /mnt/secure/rsyncd.conf on the phone. The file tells rsync to listen on localhost, run as root, and use / as the path. By default, rsyncd is read-only, so the host will not be able to upload any data over rsync, only read data out. Then rsync is started on the phone. The adb forward command forwards port 6010 on the laptop to port 873 on the phone (873 is the default rsyncd port). Unfortunately, setting up the TCP forward appears to take some time, and adb wait-for-device will not wait for that to complete, hence an ugly sleep 2 at this point.

Next is the rsync invocation itself, which just pulls in everything from the phone to the laptop, excluding some usual suspects. The somewhat cryptic : rc $? merely logs the exit code of the rsync process to syslog. Finally we clean up the TCP forward and remove the rsyncd.conf file that was temporarily created.
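For reference, the printf in the script should leave something like this in /mnt/secure/rsyncd.conf on the phone (the address line is what keeps rsyncd listening on localhost only):

```
address = 127.0.0.1
uid = root
gid = root
[root]
	path = /
```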

This setup appears stable to me. I can plug in a phone and a backup will be taken. I can even plug in both my devices at the same time, and both backups will run in parallel. If I unplug a device, the script or rsync will error out and systemd cleans up.

If anyone has ideas on how to avoid the ugly temporary rsyncd.conf file or the ugly sleep 2, I’m interested. It would also be nice to not have to do the ‘adb root’ dance, and instead have the phone start the rsync daemon when connecting to my laptop somehow. TCP forwarding might be troublesome on a multi-user system, but my laptops aren’t. Killing rsync on the phone is probably a good idea too. If you have ideas on how to fix any of this, other feedback, or questions, please let me know!
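One idea for replacing the fixed sleep 2: poll the forwarded port until rsyncd answers. This is only a sketch, untested against my setup, and it relies on bash's /dev/tcp rather than any extra tools:

```shell
#!/bin/bash
# Sketch: wait for a local TCP port to accept connections instead of
# sleeping a fixed amount of time.
wait_for_port() {
    port="$1"
    tries="$2"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # The connect attempt happens in a subshell so a failed
        # attempt does not affect the main script.
        if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# The "sleep 2" in the backup script could then become:
# wait_for_port 6010 50 || exit 1
```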

Syndicated 2015-11-27 23:33:02 from Simon Josefsson's blog

26 Nov 2015 Stevey   » (Master)

A transient home-directory?

For the past few years all my important work has been stored in git repositories. Thanks to the mr tool I have a single configuration file that allows me to pull/maintain a bunch of repositories with ease.
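For anyone who hasn't used mr: it reads an ini-style ~/.mrconfig where each section names a repository path and how to check it out. A minimal, hypothetical example (the URL is made up):

```
[dotfiles]
checkout = git clone https://example.com/dotfiles.git dotfiles
```

With that in place, a single mr checkout clones anything that is missing, and mr update pulls every listed repository.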

Having recently wiped & reinstalled a pair of desktop systems I'm now wondering if I can switch to using a totally transient home-directory.

The basic intention is that:

  • Every time I login "rm -rf $HOME/*" will be executed.
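That rm -rf is brutal if anything precious ever lands in $HOME. A hypothetical, slightly gentler sketch (the function name and whitelist are made up) that wipes everything except a few named entries:

```shell
# Hypothetical login hook: remove everything in a directory except a
# short whitelist of names -- a gentler variant of "rm -rf $HOME/*".
wipe_home() {
    dir="$1"
    shift
    # The three globs cover normal and hidden entries.
    for entry in "$dir"/* "$dir"/.[!.]* "$dir"/..?*; do
        [ -e "$entry" ] || [ -L "$entry" ] || continue
        name=$(basename "$entry")
        keep_it=0
        for keep in "$@"; do
            [ "$name" = "$keep" ] && keep_it=1
        done
        [ "$keep_it" -eq 1 ] || rm -rf "$entry"
    done
}
```

Called as wipe_home "$HOME" .ssh dotfiles it would leave only ~/.ssh and ~/dotfiles behind.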

I see only three problems with this:

  • Every time I login I'll have to reclone my "dotfiles", passwords, bookmarks, etc.
  • Some programs will need their configuration updated, post-login.
  • SSH key management will be a pain.

My dotfiles contain my bookmarks, passwords, etc. But they don't contain setup for GNOME, etc.

So there might be some configuration that will become annoying - for example, I like "Ctrl-Alt-t" to open a new gnome-terminal. That's something I configure the first time I login to each new system.

My images/videos/books are all stored beneath /srv and not in my home directory - so the only thing I'll be losing is program configuration, caches, and similar.

Ideally I'd be using a smartcard for my SSH keys - but I don't have one - so for the moment I might just have to rsync them into place, but that's grossly bad.

It'll be interesting to see how well this works out, but I see a potential gain in portability and discipline at the very least.

Syndicated 2015-11-25 14:00:09 from Steve Kemp's Blog

26 Nov 2015 hypatia   » (Journeyer)

Thursday 26 November 2015

My main goal in being unemployed right now is to not launch entire new projects or businesses, and so far I’m being very successful in restricting myself to a zine and maybe an eventually forthcoming short series of podcasts. But the zine — a very small run for a group of friends — was fun and not very hard. I like this trend of fun and not very hard. Next in fun and not very hard is my Christmas cards.

Zero businesses launched and not counting!

We’re building up to Australia’s all-in summer, compressing what the US, say, has to spend three periods (Thanksgiving, Christmas/late December, and their summer) on into six weeks beginning mid-December. We finally made it to Wet’n’Wild for the first time this summer. We picked a grey mild day for it, which was a good decision in most respects but it turns out there’s a downside to smaller crowds. Andrew took A home after a few hours to nap, and I discovered that no queuing means riding waterslides over and over and over and over, which means getting motion sick. Especially since Wet’n’Wild, in the parent-child scenario, makes the parent ride the raft facing backwards. But once I convinced a sceptical V to give me a break on the relentless stair climbing, raft-hauling, and being ill on slides, I had more fun. Wet’n’Wild is a two parent experience for sure. Liking speed and getting motion sick is my curse.

For better or for worse I’ve reached the age where my expat friends don’t come home for summer any more. So in the next few weeks we merely have a trip to my family, an extended family gathering, a friend’s annual houseparty, birthday drinks, the Google party for children, and carols. Also, hoping to squeeze a few beach trips in there. We are also rushing up on V’s last weeks at his current school, with three weeks to go yesterday. He is fortunately fairly excited if anything to go to a new, larger, school with children whom he knows from the neighbourhood. I still feel bad that he also won’t get the experience I longed for, of going to the same damn school for the whole primary years. A big part of my attraction to buying a house — in Sydney! — was to have access to that for them, so fingers crossed from here on in.

Syndicated 2015-11-26 01:32:43 from

25 Nov 2015 gary   » (Master)

Infinity status

I’m winding down for a month away from Infinity. The current status is that the language and note format changes for 0.0.2 are all done. You can get them with:

git clone

There’s also the beginnings of an Emacs major mode for i8 in there too. My glibc tree now has notes for td_ta_thr_iter as well as td_ta_map_lwp2thr. That’s two of the three hard ones done. Get them with:

git clone -b infinity2

FWIW td_thr_get_info is just legwork and td_thr_tls_get_addr is just a wrapper for td_thr_tlsbase; td_thr_tlsbase is the other hard note.

All notes have testcases with 100% bytecode coverage. I may add a flag for I8X to make not having 100% coverage a failure, and make glibc use it so nobody can commit notes with untested code.

The total note size so far is 720 bytes so I may still manage to get all five libpthread notes implemented in less than 1k:

Displaying notes found at file offset 0x00018f54 with length 0x000002d0:
  Owner                 Data size	Description
  GNU                  0x00000063	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::__lookup_th_unique(i)ip
  GNU                  0x00000088	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::map_lwp2thr(i)ip
  GNU                  0x000000cd	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::__iterate_thread_list(Fi(po)oipii)ii
  GNU                  0x000000d2	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::thr_iter(Fi(po)oiipi)i

Syndicated 2015-11-25 10:33:07 from

23 Nov 2015 bagder   » (Master)

copy as curl

Getting curl to perform an operation that a user just managed to do with his or her browser is one of the more common requests, and one of the areas people most often ask for help with.

How do you get a curl command line to get a resource, just like the browser would get it, nice and easy? Both Chrome and Firefox have provided this feature for quite some time already!

From Firefox

Bring up the site with Firefox’s “Web Developer->Network” tool open so you can see the HTTP traffic. Then right-click on the specific request you want to repeat and, in the menu that appears, select “Copy as cURL”, like the screenshot below shows. This generates a curl command line on your clipboard, which you can then paste into your favorite shell window. This feature is available by default in all Firefox installations.


From Chrome

When you pop up More tools->Developer tools in Chrome and select the Network tab, you see the HTTP traffic used to get the resources of the site. On the line of the specific resource you’re interested in, right-click with the mouse and select “Copy as cURL”, and it’ll generate a command line for you in your clipboard. Paste that into a shell to get a curl command line that makes the transfer. This feature is available by default in all Chrome and Chromium installations.


On Firefox, without using the devtools

If this is something you’d like to do more often, you probably find it inconvenient and cumbersome to pop up the developer tools just to get a command line copied. Then cliget is the perfect add-on for you, as it adds a new option to the right-click menu so you can get a command line generated really quickly - like this example, when I right-click an image in Firefox:


Syndicated 2015-11-23 07:46:25 from

22 Nov 2015 fzort   » (Journeyer)

Xperia E1 kernel tweaking (5)

Aaaand... another bug!

<4>[ 0.217984] WARNING: at /home/fzort/projects/xperia/kernel/fs/sysfs/dir.c:508 sysfs_add_one+0x88/0xb8()
<4>[ 0.218192] sysfs: cannot create duplicate filename '/class/sensors'

The device class sensors is registered twice, once by drivers/input/sensor_mgr.c, and then by (mainline kernel) drivers/sensors/sensors_class.c, which fails.

sensor_mgr.c seems to run first (is this deterministic?) so it's probably safe to remove sensors_class.c from the build. By the way, sensor_mgr.c also registers a device attribute called deviceInfor (yuck, camel case). Trying to read it on sysfs causes a kernel crash!

This bug is also present in stock software.

Edit: easy fix for the double /sys/class/sensors registration.

Edit 2: pretty awful kernel crash when doing a cat /sys/class/sensors/light/deviceInfor (or anything under /sys/class/sensors):

<1>[ 2909.524014] Unable to handle kernel NULL pointer dereference at virtual address 00000010
<1>[ 2909.524347] pgd = d4760000
<1>[ 2909.524610] [00000010] *pgd=00000000
<0>[ 2909.525297] Internal error: Oops: 805 [#1] PREEMPT SMP ARM

Edit 3: fixed

22 Nov 2015 fzort   » (Journeyer)

Xperia E1 kernel tweaking (4)

Looks like the msm-thermal driver (responsible for throttling the CPU frequency when things get hot) was not loading:

<4>[ 0.760658] msm_thermal: Wrong number of cpu

Relevant code in drivers/thermal/msm_thermal.c:

key = "qcom,cpu-sensors";
cpu_cnt = of_property_count_strings(node, key);
if (cpu_cnt != num_possible_cpus()) {
        pr_err("%s: Wrong number of cpu\n", KBUILD_MODNAME);
        ret = -EINVAL;
        goto hotplug_node_fail;
}

Looks like a mistake in the device tree (someone must have copy-pasted the qcom,msm-thermal section from a chipset with 4 cores, but this one only has 2). The fix was easy enough.

This also seems to happen with stock Sony software. If this were one of the flagship Xperia devices, I suppose I could submit a pull request, since the code for those phones is on github. But this is the lowest-end Xperia, only released on third-world markets - probably Sony won't care.

22 Nov 2015 olea   » (Master)

De visita en Madrid

Con la excusa de participar en el Taller de nanotecnología casera organizado por MediaLab Prado, de la última visita en Madrid me traigo entre otras varias cosas: 

  • conocer los trabajos de En-Te Hwu que han llevado a la creación de un microscopio de fuerza atómica de bajo coste y una alternativa OSS en desarrollo: OpenAFM;
  • descubrir el alucinante proyecto de microscopio de barrido láser opensource de Raquel López, del cual espero ansioso novedades de las versiones más avanzadas;
  • participar en la fundación del grupo de trabajo de Microscopía DIY creado en MediaLab Prado a consecuencia y por los participantes del taller impartido por En-Te;
  • visitar por primera vez a los amigos del Makespace en Madrid, que es otro punto de encuentro de potencial BESTIAL;
  • descubrir a la gente del BivosLab/Biocore, que están haciendo cosas que tal vez podamos aplicar también en el Club de Cacharreo;
  • el agradable reencuentro con los amigos de MediaLab Prado, de los cuales cada vez soy más admirador y que utilizo como inspiración para construir el HackLab Almería;
  • más equipos para la colección del museo de retroinformática (gracias Kix);
  • tratos con la realeza (sí, realmente estoy en esa foto, con mi camiseta celeste) ZOMG;
  • y hasta la petición de Jesús Cea de escribir mis experiencias «pastoreando los procomunes».

Un viaje preñado de... TODO.

Syndicated 2015-11-22 03:19:00 from Ismael Olea

21 Nov 2015 fzort   » (Journeyer)

Xperia E1 kernel tweaking (3)

I was able to get rid of the error when trying to load the wlan.ko module by adding the option CONFIG_MODVERSIONS=n. But now I get another error:

<3>[ 21.083079] wlan: version magic '3.4.0-perf SMP preempt mod_unload modversions ARMv7 ' should be '3.4.0-perf SMP preempt mod_unload ARMv7 '

Trying to disable the checking in the kernel code made everything crash horribly. On a real Linux, I'd be able to try modprobe --force, but Android doesn't have modprobe. Sigh.

Edit: looks like the wlan.ko source code is on a separate repo after all. Not much luck with it yet, though:
<6>[ 1601.775384] wlan: loading driver v3.2.3.185
<3>[ 1601.854994] wlan: driver load failure


Edit 2: after looking at the error messages displayed when the module was compiled with BUILD_DEBUG_VERSION, and peeking around the code, I could sort of guess that the configuration was incorrect, and replaced the config files under /system/etc/firmware/wlan with the ones generated during the build. Now the module loads correctly with insmod, but for some reason is not loaded during boot. Still no wifi.

Edit 3: it works.

21 Nov 2015 fzort   » (Journeyer)

Xperia E1 kernel tweaking (2)

So I managed to get the kernel on the phone (gory details). But I don't get wifi - apparently because it's failing to load a module:

<6>[ 20.643671] wlan: disagrees about version of symbol module_layout

The module is not built with make modules - maybe it's closed-source. Sheesh.

20 Nov 2015 caolan   » (Master)

Better polygon rendering in LibreOffice's Gtk3 Support

Above is how LibreOffice's "svp" backend rendered rotated text outlines in a chart, where the text is represented by polygon paths. Because the gtk3 backend is based on that svp backend, that's what you got with the gtk3 support enabled.

After today's work, above is how the svp backend now renders those paths when rendering to a cairo-compatible surface such as the gtk3 support provides.

If we mandate that "svp" only operates on cairo compatible surfaces, then we can get this niceness into android and online too, and can ditch our non-cairo text rendering code paths.

Syndicated 2015-11-20 13:17:00 (Updated 2015-11-20 13:18:21) from Caolán McNamara

20 Nov 2015 bagder   » (Master)

This post was not bought

At times I post blog articles whose view counter goes up to and beyond 50,000 views. This puts me in a position where I get offers from companies to mention them or to “cooperate” on further blog posts that would somehow push their agenda or businesses.

I also get the more simple offers of adding random ads or “text only information” on specific individual pages on my sites that some SEO person out there figured out could potentially attract audience that search for specific terms.

I’ve even gotten offers from a company to sell off my server logs. Allegedly to help them work on anti-fraud so possibly for a good cause, but still…

This is by no means a “big” blog or site, yet I get a steady stream of individuals and companies offering me money to give up a piece of my soul. I can only imagine what more popular sites get and it is clear that someone with a less strict standpoint than mine could easily make an extra income that way.

I turn down all those examples of “easy money”.

I want to be able to look you, my dear readers, straight in the eyes when I say that what’s written here are my own words and the opinions revealed are my own – even if of course you may not agree with me and I may make mistakes and be completely wrong at times or even many times. You can rest assured that I made the mistakes on my own and I was not paid by anyone to make them.

I’ve also removed ads from most of my sites and I don’t run external analytics scripts, minimizing the privacy intrusions and optimizing the contents: the stuff downloaded from my sites is what your browser needs to render the pages. No heaps of useless crap to show ads or to help anyone track you (in order to show more targeted ads).

I don’t judge others’ actions based on how I decide to run my blog. I’m in a fortunate position to take this stand, I realize that.

Still biased of course

This all said, I’m still employed by a company (Mozilla) that pays my salary and I work on several projects that are dear to me so of course I will show bias to some subjects. I don’t claim to have an objective view on things and I don’t even try to have that. When I write posts here, they come colored by my background and by what I am.

Syndicated 2015-11-20 08:28:26 from

20 Nov 2015 pixelbeat   » (Journeyer)

Building multiple conflicting RPMs

Producing alternate packages from an RPM build

Syndicated 2015-11-20 00:47:48 from

19 Nov 2015 mjg59   » (Master)

If it's not practical to redistribute free software, it's not free software in practice

I've previously written about Canonical's obnoxious IP policy and how Mark Shuttleworth admits it's deliberately vague. After spending some time discussing specific examples with Canonical, I've been explicitly told that while Canonical will gladly give me a cost-free trademark license permitting me to redistribute unmodified Ubuntu binaries, they will not tell me what "Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries" actually means.

Why does this matter? The free software definition requires that you be able to redistribute software to other people in either unmodified or modified form without needing to ask for permission first. This makes it clear that Ubuntu itself isn't free software - distributing the individual binary packages without permission is forbidden, even if they wouldn't contain any infringing trademarks[1]. This is obnoxious, but not inherently toxic. The source packages for Ubuntu could still be free software, making it fairly straightforward to build a free software equivalent.

Unfortunately, while true in theory, this isn't true in practice. The issue here is the apparently simple phrase you must remove and replace the Trademarks and will need to recompile the source code. "Trademarks" is defined later as being the words "Ubuntu", "Kubuntu", "Juju", "Landscape", "Edubuntu" and "Xubuntu" in either textual or logo form. The naive interpretation of this is that you have to remove trademarks where they'd be infringing - for instance, shipping the Ubuntu bootsplash as part of a modified product would almost certainly be clear trademark infringement, so you shouldn't do that. But that's not what the policy actually says. It insists that all trademarks be removed, whether they would embody an infringement or not. If a README says "To build this software under Ubuntu, install the following packages", a literal reading of Canonical's policy would require you to remove or replace the word "Ubuntu" even though failing to do so wouldn't be a trademark infringement. If an email address is present in a changelog, you'd have to change it. You wouldn't be able to ship the juju-core package without renaming it and the application within. If this is what the policy means, it's so impractical to be able to rebuild Ubuntu that it's not free software in any meaningful way.

This seems like a pretty ludicrous interpretation, but it's one that Canonical refuse to explicitly rule out. Compare this to Red Hat's requirements around Fedora - if you replace the fedora-logos, fedora-release and fedora-release-notes packages with your own content, you're good. A policy like this satisfies the concerns that Dustin raised over people misrepresenting their products, but still makes it easy for users to distribute modified code to other users. There's nothing whatsoever stopping Canonical from adopting a similarly unambiguous policy.

Mark has repeatedly asserted that attempts to raise this issue are mere FUD, but he won't answer you if you ask him direct questions about this policy and will insist that it's necessary to protect Ubuntu's brand. The reality is that if Debian had had an identical policy in 2004, Ubuntu wouldn't exist. The effort required to strip all Debian trademarks from the source packages would have been immense[2], and this would have had to be repeated for every release. While this policy is in place, nobody's going to be able to take Ubuntu and build something better. It's grotesquely hypocritical, especially when the Ubuntu website still talks about their belief that people should be able to distribute modifications without licensing fees.

All that's required for Canonical to deal with this problem is to follow Fedora's lead and isolate their trademarks in a small set of packages, then tell users that those packages must be replaced if distributing a modified version of Ubuntu. If they're serious about this being a branding issue, they'll do it. And if I'm right that the policy is deliberately obfuscated so Canonical can encourage people to buy licenses, they won't. It's easy for them to prove me wrong, and I'll be delighted if they do. Let's see what happens.

[1] The policy is quite clear on this. If you want to distribute something other than an unmodified Ubuntu image, you have two choices:

  1. Gain approval or certification from Canonical
  2. Remove all trademarks and recompile the source code
Note that option 2 requires you to rebuild even if there are no trademarks to remove.

[2] Especially when every source package contains a directory called "debian"…

comment count unavailable comments

Syndicated 2015-11-19 22:16:30 from Matthew Garrett

19 Nov 2015 fzort   » (Journeyer)

So I tried to build a kernel for my lowly Xperia E1 smartphone today. Long story short, the kernel in the tarball I got from Sony's site didn't build out of the box, probably because I used gcc 4.8, which has the super nifty -Wsizeof-pointer-memaccess warning option.

It caught a bunch of facepalm-worthy bugs such as this one, in the akm8963 compass driver (drivers/misc/akm8963.c):

static ssize_t akm8963_sysfs_delay_show(
        struct akm8963_data *akm, char *buf, int pos)
{
        int64_t val;

        val = akm->delay[pos];

        /* BUG: sizeof(buf) is the size of a pointer, not of the buffer */
        return snprintf(buf, sizeof(buf), "%lld\n", val);
}

This one, in the BlueZ Bluetooth protocol stack (!) (net/bluetooth/hci_conn.c), reminded me of a recent rant by Linus Torvalds:

void hci_le_ltk_reply(struct hci_conn *conn, u8 ltk[16])
{
        struct hci_dev *hdev = conn->hdev;
        struct hci_cp_le_ltk_reply cp;

        BT_DBG("%p", conn);

        memset(&cp, 0, sizeof(cp));
        cp.handle = cpu_to_le16(conn->handle);
        /* BUG: the array parameter decays to a pointer, so sizeof(ltk)
         * is the size of a pointer, not 16 */
        memcpy(cp.ltk, ltk, sizeof(ltk));

        hci_send_cmd(hdev, HCI_OP_LE_LTK_REPLY, sizeof(cp), &cp);
}

I'm surprised this even works. Weirdly a similar memcpy is correct in the function immediately above this one.

After fixing these I eventually built the kernel, but couldn't get it to run on the phone yet (I think it should run with fastboot boot zImage-dtb). Ah well, I'll figure it out eventually.

By the way, it's very very nice of Sony to provide an official way to unlock the bootloader (that is, disable the kernel/ramdisk image checking in the bootloader), to provide instructions on how to build the kernel on their official blog, and to put the kernels for (most of) their smartphones on github. <3 you, Sony. Very different from a certain Korean smartphone manufacturer (won't name any names, but has a two-letter name).

18 Nov 2015 philiph   » (Journeyer)

Help / Wiki Blog Plugin / Blog / 2012-10-17 / 03:16:03-07:00


Syndicated 2015-11-18 21:42:15 from HollenbackDotNet

18 Nov 2015 marnanel   » (Journeyer)


Conversation today:

"This box of firelighters has a picture of fire on it. It's not a box of fire."
"Unless it was flatpack fire. You know, like Ikea FJIRE."
"Oh... that explains why there's a sign outside saying FIRE ASSEMBLY POINT."

This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-11-17 23:52:59 from Monument

16 Nov 2015 berend   » (Journeyer)

Weirdest error today with printing. Upgraded my FreeBSD (recompiled from source to 10.2-p6), and after that I wanted to print from my recently upgraded Ubuntu 15.10 desktop. It didn't want to print; I got "Filter failed" in the cups jobs screen. First I thought it was due to the FreeBSD upgrade, but after a while I figured out that an Ubuntu 12.04 laptop still printed fine, as well as a PC-BSD 10.2 laptop.

Error messages on FreeBSD cups server were not very helpful, with things like:

(/usr/local/libexec/cups/filter/rastertopdf) stopped with status 1.
prnt/hpcups/HPCupsFilter.cpp 530: cupsRasterOpen failed, fd = 0

Anyway, after I realized it must be my recently upgraded Ubuntu 15.10 (I probably hadn't tried to print since that upgrade), I tried removing and reinstalling the printer. That didn't help. The final magic was a suggestion:

lpadmin -P HP-LaserJet-cm1415fnw -m raw

After that things worked. Weird, weird, weird.

16 Nov 2015 Stevey   » (Master)

lumail2 nears another release

I'm pleased with the way that Lumail2 development is proceeding, and it is reaching a point where there will be a second source-release.

I've made a lot of changes to the repository recently, and most of them boil down to moving code from the C++ side of the application, over to the Lua side.

This morning, for example, I updated the handling of index.limit to be entirely Lua based.

When you open a Maildir folder you see the list of messages it contains, as you would expect.

The notion of the index.limit is that you can limit the messages displayed, for example:

  • See all messages: Config:set( "index.limit", "all")
  • See only new/unread messages: Config:set( "index.limit", "new")
  • See only messages which arrived today: Config:set( "index.limit", "today")
  • See only messages which contain "Steve" in their formatted version: Config:set( "index.limit", "steve")

These are just examples that are present as defaults, but they give an idea of how things can work. I guess it isn't so different to Mutt's "limit" facilities - but thanks to the dynamic Lua nature of the application you can add your own with relative ease.

One of the biggest changes, recently, was the ability to display coloured text! That was always possible before, but a single line could only be one colour. Now colours can be mixed within a line, so this works as you might imagine:

Panel:append( "$[RED]This is red, $[GREEN]green, $[WHITE]white, and $[CYAN]cyan!" )

Other changes include a persistent cache of "stuff", which is Lua-based, the inclusion of at least one luarocks library to parse Date: headers, and a simple API for all our objects.

All good stuff. Perhaps time for a break in the next few weeks, but right now I think I'm making useful updates every other evening or so.

Syndicated 2015-11-16 22:04:44 from Steve Kemp's Blog

16 Nov 2015 marnanel   » (Journeyer)

let them buy houses

"The peasants have no bread."
"Let them eat cake!" (brioche)

Marie Antoinette didn't actually say that. The story spread because people were so worried about bread, which was the staple food. You might well spend 50% of your income on buying bread.

We were talking about this, and Kit said that the modern equivalent would be:

"Minister, the people say rents are too high."
"Well, they should just buy houses!"

This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-11-16 12:23:53 from Monument

16 Nov 2015 bagder   » (Master)

The most popular curl download – by a malware

During October 2015 the curl web site sent out 1127 gigabytes of data. This was the first time we crossed the terabyte limit within a single month.

Looking at the stats a little closer, I noticed that in July 2015 a particular single package started to get very popular. The exact URL was

Curious. In October it alone was downloaded more than 300,000 times, accounting for over 70% of the site’s bandwidth. Why?

The downloads came from what appears to be many different locations. They sent no HTTP referer headers and used a variety of User-agent headers. I couldn’t really see a search bot gone haywire or a malicious robot stuck in a crazy mode.
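The kind of per-URL tally behind these numbers can be sketched in a few lines of Python. Note this is a hypothetical sketch: the Apache-style combined log format assumed here is my own assumption, not a description of how the curl site actually gathers its stats.

```python
# Hypothetical sketch: count downloads per request path from an
# Apache-style access log, to spot a single runaway URL.
# The log format is an assumption, not the curl site's actual setup.
import re
from collections import Counter

REQUEST = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

def top_paths(log_lines, n=5):
    """Return the n most-requested paths with their hit counts."""
    counts = Counter()
    for line in log_lines:
        match = REQUEST.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts.most_common(n)
```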

After I shared some of this data in our IRC channel (#curl on freenode), Björn Stenberg stumbled over an AVG slide set describing how a particular piece of malware works when it infects a computer. Downloading that particular file is a step in its procedure for creating a trojan that runs on the host system – see slide 11 for the curl details. The slides also mention that an updated version of the malware comes bundled with the curl library already, so I guess the hits we see on the curl site come from the older versions still being run.

Of course, we can’t be completely sure this is the source for the increased download of this particular file but it seems highly likely.

I renamed the file just now to see what happens.

Evil use of good code

We can of course not prevent evil uses of our code. We provide source code and we even host some binaries of curl and libcurl and both good and bad actors are able to take advantage of our offers.

This rename won’t prevent a dedicated hacker, but hopefully it can prevent a few new victims from getting this malware running on their machines.

Syndicated 2015-11-16 11:43:18 from

15 Nov 2015 mikal   » (Journeyer)

Mount Stranger one last time

This is the last walk in this series, which was just a pass through now that the rain has stopped to make sure that we hadn't left any markers or trash lying around after the Scout orienteering a week ago. This area has really grown on me -- I think most people stick to the path down by the river, whereas this whole area has nice terrain, plenty of gates through fences and is just fun to explore. I'm so lucky to have this so close to home.

Interactive map for this route.

Tags for this post: blog canberra bushwalk


Syndicated 2015-11-15 12:20:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Nov 2015 marnanel   » (Journeyer)

I'm wearing my white poppy

White poppy

I'm wearing my white poppy again. There's rarely a better day than today to call for peace.

This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-11-14 16:25:38 from Monument

13 Nov 2015 marnanel   » (Journeyer)

the CofE makes more money than McDonald's

The Daily Mail is running a story saying that the Church of England makes more money than Starbucks or McDonald's. Even beyond the obvious point that Starbucks and McD's are run for the profit of shareholders, this is pretty silly.

If you don't think churches should exist at all, obviously you're going to think the CofE is handling too much money. Apart from that, though, it's pretty obvious that a large organisation with a lot of expenditure is also going to need a lot of income. The CofE is huge, and puts a lot of money into a lot of things.

It's fair enough to say that this or that expenditure is too high-- the accounts are all public, so this isn't difficult to do. But saying "aha, the CofE claims to be a Christian organisation but has more income than McDonald's" is inane.

Not linking to the article, because the Daily Mail.

This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-11-13 13:40:50 from Monument

13 Nov 2015 caolan   » (Master)

Insert Special Character in Spelling Dialog

The LibreOffice 5.1 spelling dialog now has a little toolbar for inserting special characters into the spelling editing widget. I also added paste, so the insert icon isn't lonely.

Syndicated 2015-11-13 10:57:00 (Updated 2015-11-13 10:57:38) from Caolán McNamara

12 Nov 2015 glyph   » (Master)

Your Text Editor Is Malware

Are you a programmer? Do you use a text editor? Do you install any 3rd-party functionality into that text editor?

If you use Vim, you’ve probably installed a few vimballs from, a website only available over HTTP. Vimballs are fairly opaque; if you’ve installed one, chances are you didn’t audit the code.

If you use Emacs, you’ve probably installed some packages from ELPA or MELPA using package.el; in Emacs’s default configuration, ELPA is accessed over HTTP, and until recently MELPA’s documentation recommended HTTP as well.

When you install un-signed code into your editor that you downloaded over an unencrypted, unauthenticated transport like HTTP, you might as well be installing malware. This is not a joke or exaggeration: you really might be.1 You have no assurance that you’re not being exploited by someone on your local network, by someone on your ISP’s network, the NSA, the CIA, or whoever else.

The solution for Vim is relatively simple: use vim-plug, which fetches stuff from GitHub exclusively via HTTPS. I haven’t audited it conclusively but its relatively small codebase includes lots of https:// and no http:// or git://2 that I could see.

I’m relatively proud of my track record of being a staunch advocate for improved security in text editor package installation. I’d like to think I contributed a little to the fact that MELPA is now available over HTTPS and instructs you to use HTTPS URLs.

But the situation still isn’t very good in Emacs-land. Even if you manage to get your package sources from an authenticated source over HTTPS, it doesn’t matter, because Emacs won’t verify TLS.

Although package signing is implemented, practically speaking, none of the packages are signed.3 Therefore, you absolutely cannot trust package signing to save you. Plus, even if the packages were signed, why is it the NSA’s business which packages you’re installing, anyway? TLS is shorthand for The Least Security (that is acceptable); whatever other security mechanisms, like package signing, are employed, you should always at least have HTTPS.

With that, here’s my unfortunately surprise-filled step-by-step guide to actually securing Emacs downloads, on Windows, Mac, and Linux.

Step 1: Make Sure Your Package Sources Are HTTPS Only

By default, Emacs ships with its package-archives list as '(("gnu" . "")), which is obviously no good. You will want to both add MELPA (which you surely have done anyway, since it’s where all the actually useful packages are) and change the ELPA URL itself to be HTTPS. Use M-x customize-variable to change package-archives to:

`(("gnu" . "")
  ("melpa" . ""))

Step 2: Turn On TLS Trust Checking

There’s another custom variable in Emacs, tls-checktrust, which checks trust on TLS connections. Go ahead and turn that on, again, via M-x customize-variable tls-checktrust.

Step 3: Set Your Trust Roots

Now that you’ve told Emacs to check that the peer’s certificate is valid, Emacs can’t successfully fetch HTTPS URLs any more, because Emacs does not distribute trust root certificates. Although the set of cabforum certificates are already probably on your computer in various forms, you still have to acquire them in a format usable by Emacs somehow. There are a variety of ways, but in the interests of brevity and cross-platform compatibility, my preferred mechanism is to get the certifi package from PyPI, with python -m pip install --user certifi or similar. (A tutorial on installing Python packages is a little out of scope for this post, but hopefully my little website about this will help you get started.)
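Since certifi is just a package that knows where its CA bundle file lives, you can also ask for that path from Python directly; this is the same path that python -m certifi prints. (Assumes the certifi package is installed, as above.)

```python
# Locate certifi's CA bundle from Python; equivalent to the path
# printed by "python -m certifi" (requires: pip install certifi).
import certifi

cacert = certifi.where()  # absolute path to a PEM-format CA bundle
print(cacert)
```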

At this point, M-x customize-variable fails us, and we need to start just writing elisp code; we need to set tls-program to a string computed from the output of running a program, and if we want this to work on Windows we can’t use Bourne shell escapes. Instead, do something like this in your .emacs or wherever you like to put your start-up elisp:4

(let ((trustfile
       (replace-regexp-in-string
        "\\\\" "/"
        (replace-regexp-in-string
         "\n" ""
         (shell-command-to-string "python -m certifi")))))
  (setq tls-program
        (list
         (format "gnutls-cli%s --x509cafile %s -p %%p %%h"
                 (if (eq window-system 'w32) ".exe" "") trustfile))))

This will run gnutls-cli on UNIX, and gnutls-cli.exe on Windows.

You’ll need to install the gnutls-cli command line tool, which of course varies per platform:

  • On OS X, of course, Homebrew is the best way to go about this: brew install gnutls will install it.
  • On Windows, the only way I know of to get GnuTLS itself over TLS is to go directly to this mirror. Download one of these binaries and unzip it next to Emacs in its bin directory.
  • On Debian (or derivatives), apt-get install gnutls-bin
  • On Fedora (or derivatives), yum install gnutls-utils

Great! Now we’ve got all the pieces we need: a tool to make TLS connections, certificates to verify against, and Emacs configuration to make it do those things. We’re done, right?

Step 4: Test It

It turns out there are two ways to tell Emacs to really actually really secure the connection (really), but before I tell you the second one or why you need it, let’s first construct a little test to see if the connection is being properly secured. If we make a bad connection, we want it to fail. Let’s make sure it does.

This little snippet of elisp will use the helpful site to give you some known-bad and known-good certificates (assuming nobody’s snooping on your connection):

(if (condition-case e
        (progn
          (url-retrieve ""
                        (lambda (retrieved) t))
          (url-retrieve ""
                        (lambda (retrieved) t))
          t)
      ('error nil))
    (error "tls misconfigured")
  (url-retrieve ""
                (lambda (retrieved) t)))

If you evaluate it and you get an error, either your trust roots aren’t set up right and you can’t connect to a valid site, or Emacs is still blithely trusting bad certificates. Why might it do that?

Step 5: Configure the Other TLS Verifier

One of Emacs’s compile-time options is whether to link in GnuTLS or not. If GnuTLS is not linked in, it will use whatever TLS program you give it (which might be gnutls-cli or openssl s_client, but since only the most recent version of openssl s_client can even attempt to verify certificates, I’d recommend against it). That is what’s configured via tls-checktrust and tls-program above.

However, if GnuTLS is compiled in, it will totally ignore those custom variables, and honor a different set: gnutls-verify-error and gnutls-trustfiles. To make matters worse, installing the packages which supply the gnutls-cli program also install the packages which might satisfy Emacs’s dynamic linking against the GnuTLS library, which means this code path could get silently turned on because you tried to activate the other one.

To give these variables the correct values as well, we can re-visit the previous trust setup:

(let ((trustfile
       (replace-regexp-in-string
        "\\\\" "/"
        (replace-regexp-in-string
         "\n" ""
         (shell-command-to-string "python -m certifi")))))
  (setq tls-program
        (list
         (format "gnutls-cli%s --x509cafile %s -p %%p %%h"
                 (if (eq window-system 'w32) ".exe" "") trustfile)))
  (setq gnutls-verify-error t)
  (setq gnutls-trustfiles (list trustfile)))

Now it ought to be set up properly. Try the example again from Step 4 and it ought to work. It probably will. Except, um...

Appendix A: Windows is Weird

Presently, the official Windows builds of Emacs seem to be linked against version 3.3 of GnuTLS rather than the latest 3.4. You might need to download the latest micro-version of 3.3 instead. As far as I can tell, it’s supposed to work with the command-line tools (and maybe it will for you) but for me, for some reason, Emacs could not parse gnutls-cli.exe’s output no matter what I did. This does not appear to be a universal experience, others have reported success; your mileage may vary.


We nerds sometimes mock the “normals” for not being as security-savvy as we are. Even if we’re considerate enough not to voice these reactions, when we hear someone got malware on their Windows machine, we think “should have used a UNIX, not Windows”. Or “should have been up to date on your patches”, or something along those lines.

Yet, nerdy tools that download and execute code - Emacs in particular - are shockingly careless about running arbitrary unverified code from the Internet. And we are often equally shockingly careless to use them, when we should know better.

If you’re an Emacs user and you didn’t fully understand this post, or you couldn’t get parts of it to work, stop using package.el until you can get the hang of it. Get a friend to help you get your environment configured properly. Since a disproportionate number of Emacs users are programmers or sysadmins, you are a high-value target, and you are risking not only your own safety but that of your users if you don’t double-check that your editor packages are coming from at least cursorily authenticated sources.

If you use another programmer’s text editor or nerdy development tool that routinely installs software onto your system, make sure that it’s at least securing those installations with properly verified TLS.

  1. Technically speaking, of course, you might always be installing malware; no defense is perfect. And HTTPS is a fairly weak one at that. But it is significantly stronger than “no defense at all”. 

  2. Never, ever, clone a repository using git:// URLs. As explained in the documentation: “The native transport (i.e. git:// URL) does no authentication and should be used with caution on unsecured networks.”. You might have heard that git uses a “cryptographic hash function” and thought that had something to do with security: it doesn’t. If you want security you need signed commits, and even then you can never really be sure

  3. Plus, MELPA accepts packages on the (plain-text-only) Wiki, which may be edited by anyone, and from CVS servers, although they’d like to stop that. You should probably be less worried about this, because that’s a link between two datacenters, than about the link between you and MELPA, which is residential or business internet at best, and coffee-shop WiFi at worst. But still maybe be a bit worried about it and go comment on that bug. 

  4. Yes, that let is a hint that this is about to get more interesting... 

Syndicated 2015-11-12 08:51:00 from Deciphering Glyph

10 Nov 2015 mikal   » (Journeyer)

A walk in the Orroral Valley

Last weekend was a walk in the Orroral Valley with a group of scout leaders. Embarrassingly, I'd never been in this area before, and it's lovely -- especially at the moment after all the rain we've had. Easy terrain, and a well marked path for this walk. The only catch is that there's either a car shuffle involved, or you need to do a 12km return walk.


Interactive map for this route.

Tags for this post: blog pictures 20151107 photo canberra bushwalk


Syndicated 2015-11-09 19:13:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

10 Nov 2015 benad   » (Apprentice)

KeePass: Password Management Apps

Like many others, I'm a bit worried about the LogMeIn acquisition of LastPass. While they haven't drastically increased the pricing of LastPass (yet), it would be a good idea to look at other options.

A recommended option for open-source password management that keeps being mentioned is KeePass, a .NET application that manages encrypted passwords and secure notes. While it's mostly made for Windows, it does work, though clumsily, on Mac using Mono. Even when using the stable version of Mono, the experience is clunky: most keyboard shortcuts don't work, double-clicking on an item crashes the software half the time, and it generally looks horrible. Still, once you learn to avoid those Mono bugs, or you simply use that Windows virtual machine you have hanging around in VirtualBox, KeePass is a great tool.

There is a more "native" port of KeePass called KeePassX (as in, made for ). This one works much better on Macs, but has far fewer features than the .NET version.

As for portable versions, there are of course a dozen or so different options for Android, so I haven't explored that yet. For iOS, the best free option seems to be limited to MiniKeePass. It doesn't sync automatically to any online storage, but transferring password database files in and out is simple enough that it should be acceptable if you only sparingly create new secure items on iOS.

Speaking of syncing, KeePass is server-less, as it only deals with database files. What can be done though with the desktop KeePass is synchronize two password database files with each other easily. The databases do keep track of the history of changes for each item, so that offline file synchronization is quite safe.
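The newest-revision-wins idea behind that kind of merge can be sketched as follows. This is an illustrative sketch only: the dictionary layout and field names are made up for the example, not the real KDB/KDBX file format.

```python
# Hedged sketch of a KeePass-style offline merge: for each entry
# (keyed by UUID), keep whichever revision was modified most recently.
# The data layout is illustrative, not the real KDB/KDBX format.
def merge_databases(db_a, db_b):
    merged = {}
    for db in (db_a, db_b):
        for uuid, entry in db.items():
            current = merged.get(uuid)
            if current is None or entry["mtime"] > current["mtime"]:
                merged[uuid] = entry
    return merged
```

Because each entry carries its own modification history, merging in either direction yields the same result, which is what makes the server-less, file-based synchronization safe.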

Scripting options seem to be limited. I found a Perl module, File::KeePass, but it has a quite large bug that needs to be patched with a proper implementation of Salsa20.

There is also a 20-day-old KeePass-compatible app done entirely in pure HTML and JavaScript, called KeeWeb. It can be served up as a single static HTML page on any HTTPS server, and no server-side code is needed. It can also work as a standalone desktop application. It is too new for me to recommend it (a new release was done as I was typing this), but in my limited tests, it worked amazingly well. For example, I was able to load and save from OneDrive my test KeePass file using Safari on my iPhone 6. Once it matures, it may even replace MiniKeePass as my recommended iOS KeePass app.

The fact that the original KeePass code was clean and documented enough to allow for so many different implementations means that using KeePass is pretty much "future proof", unlike any online password service. Sure, browser plugin options are limited and there's no automatic synchronization, but I would fully trust it.

Syndicated 2015-11-10 01:01:10 from Benad's Blog

9 Nov 2015 wingo   » (Master)

embracing conway's law

Most of you have heard of "Conway's Law", the pithy observation that the structure of things that people build reflects the social structure of the people that build them. The extent to which there is coordination or cohesion in a system as a whole reflects the extent to which there is coordination or cohesion among the people that make the system. Interfaces between components made by different groups of people are the most fragile pieces. This division goes down to the inner life of programs, too; inside it's all just code, but when a program starts to interface with the outside world we start to see contracts, guarantees, types, documentation, fixed programming or binary interfaces, and indeed faults as well: how many bug reports end up in an accusation that team A was not using team B's API properly?

If you haven't heard of Conway's law before, well, welcome to the club. Inneresting, innit? And so thought I until now; a neat observation with explanatory power. But as aspiring engineers we should look at ways of using these laws to build systems that take advantage of their properties.

in praise of bundling

Most software projects depend on other projects. Using Conway's law, we can restate this to say that most people depend on things built by other people. The Chromium project, for example, depends on many different libraries produced by many different groups of people. But instead of requiring the user to install each of these dependencies, or even requiring the developer that works on Chrome to have them available when building Chrome, Chromium goes a step further and just includes its dependencies in its source repository. (The mechanism by which it does this isn't a direct inclusion, but since it specifies the version of all dependencies and hosts all code on Google-controlled servers, it might as well be.)

Downstream packagers like Fedora bemoan bundling, but they ignore the ways in which it can produce better software at lower cost.

One way bundling can improve software quality is by reducing the algorithmic complexity of product configurations, when expressed as a function of its code and of its dependencies. In Chromium, a project that bundles dependencies, the end product is guaranteed to work at all points in the development cycle because its dependency set is developed as a whole and thus uniquely specified. Any change to a dependency can be directly tested against the end product, and reverted if it causes regressions. This is only possible because dependencies have been pulled into the umbrella of "things the Chromium group is responsible for".

Some dependencies are automatically pulled into Chrome from their upstreams, like V8, and some aren't, like zlib. The difference is essentially social, not technical: the same organization controls V8 and Chrome and so can set the appropriate social expectations and even revert changes to upstream V8 as needed. Of course the goal of the project as a whole has technical components and technical considerations, but they can only be acted on to the extent they are socially reified: without a social organization of the zlib developers into the Chromium development team, Chromium has no business automatically importing zlib code, because the zlib developers aren't testing against Chromium when they make a release. Bundling zlib into Chromium lets the Chromium project buffer the technical artifacts of the zlib developers through the Chromium developers, thus transferring responsibility to Chromium developers as well.

Conway's law predicts that the interfaces between projects made by different groups of people are the gnarliest bits, and anyone that has ever had to maintain compatibility with a wide range of versions of upstream software has the scar tissue to prove it. The extent to which this pain is still present in Chromium is the extent to which Chromium, its dependencies, and the people that make them are not bound tightly enough. For example, making a change to V8 which results in a change to Blink unit tests is a three-step dance: first you commit a change to Blink giving Chromium a heads-up about new results being expected for the particular unit tests, then you commit your V8 change, then you commit a change to Blink marking the new test result as being the expected one. This process takes at least an hour of human interaction time, and about 4 hours of wall-clock time. This pain would go away if V8 were bundled directly into Chromium, as you could make the whole change at once.

forking considered fantastic

"Forking" sometimes gets a bad rap. Let's take the Chromium example again. Blink forked from WebKit a couple years ago, and things have been great in both projects since then. Before the split, the worst parts in WebKit were the abstraction layers that allowed Google and Apple to use the dependencies they wanted (V8 vs JSC, different process models, some other things). These abstraction layers were the reified software artifacts of the social boundaries between Google and Apple engineers. Now that the social division is gone, the gnarly abstractions are gone too. Neither group of people has to consider whether the other will be OK with any particular change. This eliminates a heavy cognitive burden and allows both projects to move faster.

As a pedestrian counter-example, Guile uses the libltdl library to abstract over the dynamic loaders of different operating systems. (Already you are probably detecting the Conway's law keywords: uses, library, abstract, different.) For years this library has done the wrong thing while trying to do the right thing, ignoring .dylib's but loading .so's on Mac (or vice versa, I can't remember), not being able to specify soversions for dependencies, throwing a stat party every time you load a library because it grovels around for completely vestigial .la files, et cetera. We sent some patches some time ago but the upstream project is completely unmaintained; the patches haven't been accepted, users build with whatever they have on their systems, and though we could try to take over upstream it's a huge asynchronous burden for something that should be simple. There is a whole zoo of concepts we don't need here and Guile would have done better to include libltdl into its source tree, or even to have forgone libltdl and just written our own thing.

Though there are costs to maintaining your own copy of what started as someone else's work, people who yammer on against forks usually fail to recognize their benefits. I think they don't realize that for a project to be technically cohesive, it needs to be socially cohesive as well; anything else is magical thinking.

not-invented-here-syndrome considered swell

Likewise there is an undercurrent of smarmy holier-than-thou moralism in some parts of the programming world. These armchair hackers want you to believe that you are a bad person if you write something new instead of building on what has already been written by someone else. This too is magical thinking that comes from believing in the fictional existence of a first-person plural, that there is one "we" of "humanity" that is making linear progress towards the singularity. Garbage. Conway's law tells you that things made by different people will have different paces, goals, constraints, and idiosyncrasies, and the impedance mismatch between you and them can be a real cost.

Sometimes these same armchair hackers will shake their heads and say "yeah, project Y had so much hubris and ignorance that they didn't want to bother understanding what X project does, and they went and implemented their own thing and made all their own mistakes." To which I say, so what? First of all, who are you to judge how other people spend their time? You're not in their shoes and it doesn't affect you, at least not in the way it affects them. An armchair hacker rarely understands the nature of value in an organization (commercial or no). People learn more when they write code than when they use it or even when they read it. When your product has a problem, where will you find the ability to fix it? Will you file a helpless bug report or will you be able to fix it directly? Assuming your software dependencies model some part of your domain, are you sure that their models are adequate for your purpose, with the minimum of useless abstraction? If the answer is "well, I'm sure they know what they're doing" then if your organization survives a few years you are certain to run into difficulties here.

One example. Some old-school Mozilla folks still gripe at Google having gone and created an entirely new JavaScript engine, back in 2008. This is incredibly naïve! Google derives immense value from having JS engine expertise in-house and not having to coordinate with anyone else. This control also gives them power to affect the kinds of JavaScript that gets written and what goes into the standard. They would not have this control if they decided to build on SpiderMonkey, and if they had built on SM, they would have forked by now.

As a much more minor, insignificant, first-person example, I am an OK compiler hacker now. I don't consider myself an expert but I do all right. I got here by making a bunch of mistakes in Guile's compiler. Of course it helps if you get up to speed using other projects like V8 or what-not, but building an organization's value via implementation shouldn't be discounted out-of-hand.

Another point is that when you build on someone else's work, especially if you plan on continuing to have a relationship with them, you are agreeing up-front to a communications tax. For programmers this cost is magnified by the degree to which asynchronous communication disrupts flow. This isn't to say that programmers can't or shouldn't communicate, of course, but it's a cost even in the best case, and a cost that can be avoided by building your own.

When you depend on a project made by a distinct group of people, you will also experience churn or lag drag, depending on whether the dependency changes faster or slower than your project. Depending on LLVM, for example, means devoting part of your team's resources to keeping up with the pace of LLVM development. On the other hand, depending on something more slow-moving can make it more difficult to work with upstream to ensure that the dependency actually suits your use case. Again, both of these drag costs are magnified by the asynchrony of communicating with people that probably don't share your goals.

Finally, for projects that aim to ship to end users, depending on people outside your organization exposes you to risk. When a security-sensitive bug is reported on some library that you use deep in your web stack, who is responsible for fixing it? If you are responsible for the security of a user-facing project, there are definite advantages for knowing who is on the hook for fixing your bug, and knowing that their priorities are your priorities. Though many free software people consider security to be an argument against bundling, I think the track record of consumer browsers like Chrome and Firefox is an argument in favor of giving power to the team that ships the product. (Of course browsers are terrifying security-sensitive piles of steaming C++! But that choice was made already. What I assert here is that they do well at getting security fixes out to users in a timely fashion.)

to use a thing, join its people

I'm not arguing that you as a software developer should never use code written by other people. That is silly and I would appreciate if commenters would refrain from this argument :)

Let's say you have looked at the costs and the benefits and you have decided to, say, build a browser on Chromium. Or re-use pieces of Chromium for your own ends. There are real costs to doing this, but those costs depend on your relationship with the people involved. To minimize your costs, you must somehow join the community of people that make your dependency. By joining yourself to the people that make your dependency, Conway's law predicts that the quality of your product as a whole will improve: there will be fewer abstraction layers as your needs are taken into account to a greater degree, your pace will align with the dependency's pace, and colleagues at Google will review for you because you are reviewing for them. In the case of Opera, for example, I know that they are deeply involved in Blink development, contributing significantly to important areas of the browser that are also used by Chromium. We at Igalia do this too; our most successful customers are those who are able to work the most closely with upstream.

On the other hand, if you don't become part of the community of people that makes something you depend on, don't be surprised when things break and you are left holding both pieces. How many times have you heard someone complain the "project A removed an API I was using"? Maybe upstream didn't know you were using it. Maybe they knew about it, but you were not a user group they cared about; to them, you had no skin in the game.

Foundations that govern software projects are an anti-pattern in many ways, but they are sometimes necessary, born from the need for mutually competing organizations to collaborate on a single project. Sometimes the answer for how to be able to depend on technical work from others is to codify your social relationship.

hi haters

One note before opening the comment flood: I know. You can't control everything. You can't be responsible for everything. One way out of the mess is just to give up, cross your fingers, and hope for the best. Sure. Fine. But know that there is no magical first-person-plural; Conway's law will apply to you and the things you build. Know what you're actually getting when you depend on other peoples' work, and know what you are paying for it. One way or another, pay for it you must.

Syndicated 2015-11-09 13:48:51 from wingolog

9 Nov 2015 mikal   » (Journeyer)

Scout activity: orienteering at Mount Stranger

I've run scout activities before, but it's always been relatively trivial things like arranging attendance at a Branch level event such as an astronomy night or an environment camp. They've involved consent forms and budgeting and so forth, but never the end-to-end creation of a thing from scratch. So, I was quite excited to be presented with an opportunity to take the scouts orienteering in an unfamiliar environment.

I chose the area of nature reserve between Mount Stranger and the Murrumbidgee River because it's nice terrain (no tea tree!), but big enough for us to be able to do some long distance bearing navigation, which is a badge requirement some of the scouts are working on at the moment.

The first step was to scout out (pun intended) the area, and see what sort of options there are for controls and so forth. I'd walked through this area a bit before, as it's close to my house, but I'd never bush bashed from the river to the trig before. The first attempt was a simple marking off of the gates along the bicentennial horse trail -- I knew we'd want to cross this somewhere for the long distance leg. That route looked like this:

Interactive map for this route.

The next recce was a wander along a candidate route with some geocaching thrown in for good luck. The geocaching turned out to be quite useful, because on the actual night with the scouts it meant I had a better handle on what was in the area, so when a couple of girls started losing interest I could say stuff like "Did I forget to mention there's an awesome tree house just over there?".

Interactive map for this route.

With that in mind, I then just started slogging out a route -- the long distance leg turned out to be the hardest part here. I wanted to avoid fence crossings as much as possible, and this whole area is littered with barbed wire fences. I think I redid that leg four times before I found a route that I was happy with, which was ironically the first one I'd tried.

Interactive map for this route.

Job done! Now I only needed to walk this route three more times! The first walk was to lay out the orienteering markers before the scouts attacked the course:

Interactive map for this route.

...and then actually doing the course with some scouts...

Interactive map for this route.

Comparing the two maps, I don't think they did too badly, to be honest. There's definitely potential here for more navigation practice, but I think the key there is that practice makes perfect. There shall be more hiking and orienteering in our future! The final walk was just collecting the markers after the event, which I will skip here.

I put a fair bit of effort into this course, so I'd like to see it used more than once. To that end, I am going to put the documentation online for others to see and use. If you'd like help running this course, drop me a line at and I'd be happy to help.

Tags for this post: scouts orienteering navex


Syndicated 2015-11-08 15:40:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

18 Nov 2015 philiph   » (Journeyer)

Help / Wiki Blog Plugin / Blog / 2012-10-17 / 03:14:02-07:00


Syndicated 2015-11-18 21:42:11 from HollenbackDotNet

8 Nov 2015 bagder   » (Master)

TCP tuning for HTTP

I’m the author of a brand new internet-draft that I submitted just the other day. The title is TCP Tuning for HTTP, and the intent is to gather a set of current best practices for HTTP implementers (clients, servers and intermediaries, for HTTP/1.1 as well as HTTP/2), and to share and distribute knowledge we’ve gathered over the years.

I’m now awaiting, expecting and looking forward to feedback, criticisms and additional content for this document so that it can become the resource I’d like it to be.

How to contribute to this?

  1. ideally, send your feedback to the HTTPbis mailing list,
  2. or submit an issue or pull-request on github for the
  3. or simply email me your comments: daniel <at>

I’ve been participating first passively and more and more actively over the years within the IETF, mostly in the HTTPbis working group. I think open protocols and open standards are important and I like being part of making them reality. I have the utmost respect and admiration for those who are involved in putting the RFCs together and thus improve the world we live in, step by step.

For a long while I’ve been wanting to step up and “pull my weight” too, to become a better participant in this area, and I’m happy to now finally take this step. Hopefully this is just the first step of many more to come.

(Psssst: While gathering feedback and updating the git version, the current work in progress version of the draft is always visible here.)

Syndicated 2015-11-07 23:17:07 from

6 Nov 2015 marnanel   » (Journeyer)

Jack by the hedge

"Want to come up to the Wood? We could play Star Wars."

Martin considered. The Wood was the thin strip of uncultivated land at the top of the school field. The grassed and mowed part petered out in a mild incline before the trees began. It was perhaps a hundred feet long and fifteen feet wide before it met the wire fence that separated it from the gravel footpath, yet to the boys the space was a jungle, the wildest part of their suburban lives. The trees, mostly oaks and birches, alternately towered and stood invitingly climbable; the undergrowth provided hiding places; the worn earth tracks, so adaptable for games, ran the length of the Wood. There was an itching-berry tree, a holly bush whose hollow centre could shelter those brave enough to risk its scratches, and the Dragon, a great fallen log, by turns fortress, stage and spaceship.

"I don't want to," he said after some thought. He'd had the dreams again last night.

"Why not?"

"There's toadstools up there. I hate toadstools." The lie slipped out of him unexpectedly. He weighed it mentally, admiring its lines. "Let's stay here, play tag or something."

His brother shrugged. "I could kick 'em down with my boots. Come on."

Martin followed him: the events had played out like a familiar story. Richard was his younger brother, but Martin always found himself tagging along like a four-year-old. Sometimes at night Martin would keep himself awake pondering the difficult riddles of life; the question of why his brother always took the lead was prominent among these. Even now that he had agreed to play, he could tell before it was ever discussed who would be playing the good guys.

Recently, things had got worse. In the last few months Richard had got himself involved with a particular bunch of kids, too loosely organised to have a name, though Martin thought of them as "Paul's lot". Richard spent much of his free time playing with them, now, and less time with Martin. Martin might have been glad not to be bossed around so much, but in fact nothing appeared to fill the vacuum that Richard had left. Martin spent his breaktimes wandering alone around the school field, yearning for the bell. When Richard was around, things were no better: he seemed to have learned new and still more uncomfortable management techniques during his social climbing.

"We could go to the dragon," said Richard. "We could use it for the Death Star."

"Yeah, we could do that..."

The sunlight flecked the earth before them, green under the trees. The birds sang on, unaware of plans to destroy planets. Martin stuck his hands into his pockets and tried not to look at the undergrowth. White blossoms caught the corner of his eye. His nightmares flowed back.

Suddenly, his brother asked, "What are Nastiers?"

"Um." The weight of his dream held onto his mind. "Why'd you ask?"

"Heard you talking about them in your sleep last night."

Richard picked up a stick and began slashing at nettles. Martin watched with mild dread. "Did I say much?"

"Just kept saying it, over and over again. 'The Nastiers... the Nastiers...' and something about the Wood."

Martin shuddered. The Nastiers had first started to grow in his imagination in the spring, when the small heart-shaped leaves appeared under the hedges. Gradually they filled his dreams with their menace, popping up underfoot, filling the rooms, choking the ground, daring him to touch them. By day he had given them wide berths, sometimes even crossing the road. However hard he tried to avoid them, still they filled his imagination.

One day in early summer he had been tortured by the thought of himself lying down to sleep, and waking up as a single great Nastier, four feet across its sickly shining leaf, nodding gently in the air current. He had run out into his garden the next morning, and the plants had flowered, tall spires of tiny white petals topping their towers of leaves, staring him down, glorying in their plantish treason.

"It's just a plant, a kind of plant. I don't like them much," he said. "Those ones."

"You were having nightmares about a plant?" Richard went over and kicked at the nearby patch of Nastiers. He looked back quickly enough to catch Martin wincing. The plants shook and were still.

"It's nothing," said Martin. "Let's go to the dragon."

Soon after they entered the Wood, Martin cursed under his breath: Paul's lot were already there. A few seconds passed before Richard saw them too. He called out to them, and ran off to join them. Martin was alone. He sighed, and walked on towards the seclusion of the dragon.

He sat astride the fallen log, looking out over the school field. With his hands he gripped the bark, tracing patterns in the cracks while his thoughts flowed over him. The voices of Paul's lot were too far away to pick out words. They were as much a part of his peace as the song of the blackbirds. Both reminded him that it wasn't so bad being alone. Sometimes. Maybe. At least Richard wouldn't drag out old arguments with him now, and at least he had space to think.


He looked around for the voice, to both sides, and finally behind himself: Paul was standing at one end of the log, with a grin on his face. Like a long-stemmed rose given to a lover, he held a single Nastier in his hand.

Martin's stomach jumped and twisted. Chills passed over his body. Richard had betrayed him. He scrambled half to his feet and backed away.

The other end of the log lay in a mass of nettles, beyond the edge of the Wood proper. Paul climbed onto the far end and began walking slowly towards him. Martin was trapped: Paul in front, and the nettles behind. Paul's friends appeared one by one, with quiet giggling, then open laughter. They clustered around the far end of the log. A few climbed up behind Paul. Most were carrying Nastiers.

A few weeks earlier, a kid in Martin's class had come in from break with nettle rash over most of his body. Martin's teacher had asked why, and the kid said that Paul told him to jump off the log into the nettles. The teacher asked whether Paul could have told him to jump off a cliff. Martin had been in the Wood that morning. He'd seen it all. The teacher never heard about the pointed sticks.

History seemed about to repeat itself. Martin took a step backwards, almost losing his footing. He caught his breath: Paul's eyes, the leaves of the plant, the plant's white flowers, were all picked out in feverish detail. He's got me, thought Martin. He's got me and I can't get away.

Then with the same strange dream-like clarity, it came to him. His fear was not Paul, but the unnamable terror of the Nastier. If Paul had trapped him, it was only in a prison of himself.

Martin bit the inside of his cheeks to give himself strength. He grabbed the plant from Paul's hand and crushed it. It smelled of herbs, and garlic. Paul took a step backwards in surprise, and slipped. Martin leapt forwards and to the right, landing on the grass ahead of the nettles, and ran as hard as he could towards the school. A few of Paul's lot gave chase in a disinterested sort of way, but soon gave up and returned to their leader.

Martin didn't stop running until he was inside the school, and didn't start crying until he was safely in the cloakroom, washing his hands over, and over, and over again.

This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-11-06 20:19:31 (Updated 2015-11-06 20:30:17) from Monument

6 Nov 2015 mjg59   » (Master)

Why improving kernel security is important

The Washington Post published an article today which describes the ongoing tension between the security community and Linux kernel developers. This has been roundly denounced as FUD, with Rob Graham going so far as to claim that nobody ever attacks the kernel.

Unfortunately he's entirely and demonstrably wrong: it's not FUD, and the state of security in the kernel is currently far short of where it should be.

An example. Recent versions of Android use SELinux to confine applications. Even if you have full control over an application running on Android, the SELinux rules make it very difficult to do anything especially user-hostile. Hacking Team, the GPL-violating Italian company that sells surveillance software to human rights abusers, found that this impeded their ability to drop their spyware onto targets' devices. So they took advantage of the fact that many Android devices shipped a kernel with a flawed copy_from_user() implementation that allowed them to copy arbitrary userspace data over arbitrary kernel code, thus allowing them to disable SELinux.

If we could trust userspace applications, we wouldn't need SELinux. But we assume that userspace code may be buggy, misconfigured or actively hostile, and we use technologies such as SELinux or AppArmor to restrict its behaviour. There's simply too much userspace code for us to guarantee that it's all correct, so we do our best to prevent it from doing harm anyway.

This is significantly less true in the kernel. The model up until now has largely been "Fix security bugs as we find them", an approach that fails on two levels:

1) Once we find them and fix them, there's still a window between the fixed version being available and it actually being deployed
2) The forces of good may not be the first ones to find them

This reactive approach is fine for a world where it's possible to push out software updates without having to perform extensive testing first, a world where the only people hunting for interesting kernel vulnerabilities are nice people. This isn't that world, and this approach isn't fine.

Just as features like SELinux allow us to reduce the harm that can occur if a new userspace vulnerability is found, we can add features to the kernel that make it more difficult (or impossible) for attackers to turn a kernel bug into an exploitable vulnerability. The number of people using Linux systems is increasing every day, and many of these users depend on the security of these systems in critical ways. It's vital that we do what we can to avoid their trust being misplaced.

Many useful mitigation features already exist in the Grsecurity patchset, but a combination of technical disagreements around certain features, personality conflicts and an apparent lack of enthusiasm on the side of upstream kernel developers has resulted in almost none of it landing in the kernels that most people use. Kees Cook has proposed a new project to start making a more concerted effort to migrate components of Grsecurity to upstream. If you rely on the kernel being a secure component, either because you ship a product based on it or because you use it yourself, you should probably be doing what you can to support this.

Microsoft received entirely justifiable criticism for the terrible state of security on their platform. They responded by introducing cutting-edge security features across the OS, including the kernel. Accusing anyone who says we need to do the same of spreading FUD risks free software being sidelined in favour of proprietary software providing more real-world security. That doesn't seem like a good outcome.


Syndicated 2015-11-06 09:19:07 from Matthew Garrett

5 Nov 2015 Stevey   » (Master)

lumail2 approaches readiness

So the work on lumail2 is going well, and already I can see that it is a good idea. The main reason for (re)writing it is to unify a lot of the previous ad-hoc primitives (i.e. lua functions) and to try and push as much of the code into Lua, and out of C++, as possible. This work is already paying off with the introduction of new display-modes and a simpler implementation.

View modes are an important part of lumail, because it is a modal mail-client. You're always in one mode:

  • maildir-mode
    • Shows you lists of Maildir-folders.
  • index-mode
    • Shows you lists of messages inside the maildir you selected.
  • message-mode
    • Shows you a single message.

This is nothing new, but there are two new modes:

  • attachment-mode
    • Shows you the attachments associated with the current message.
  • lua-mode
    • Shows you your configuration-settings and trivia.

Each of these modes draws lines of text on the screen, and those lines consist of things that Lua generated. So there is a direct mapping:

  • maildir-mode: maildir_view()
  • index-mode: index_view()
  • message-mode: message_view()
  • lua-mode: lua_view()

With that in mind it is possible to write a function to scroll to the next line containing a pattern like so:

function find()
   local pattern = Screen:get_line( "Search for:" )

   -- Get the global mode.
   local mode = Config:get("global.mode")

   -- Use that to get the lines we're currently displaying.
   loadstring( "out = " .. mode .. "_view()" )()

   -- At this point "out" is a table containing the lines that
   -- the current mode wishes to display.

   -- .. do searching here.
end
Thus the whole thing is dynamic and mode-agnostic.

The other big change is pushing things to Lua. Replying to an email, which involves populating the new message and appending your ~/.signature, is handled by Lua, as is forwarding a message or composing a new mail.

The downside is that the configuration-file is now almost 1000 lines long, thanks to the many little function definitions and key-binding setup.
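For a flavour of what that key-binding setup looks like, here is a hedged sketch of such a configuration fragment. The keymap table layout and the bound action names are assumptions for illustration, not necessarily the real lumail2 API.

```lua
-- Sketch of a lumail2-style key-binding table.  The 'keymap' structure
-- and the bound function names are illustrative assumptions.
keymap = {}

-- Bindings that apply in every mode.
keymap['global'] = {
   ['/'] = "find()",       -- invoke the mode-agnostic search
   ['q'] = "os.exit(0)",   -- quit the client
}

-- Bindings that only apply in index-mode.
keymap['index'] = {
   ['RET'] = "message_view()",
}
```

Because the bound values are strings of Lua code, adding or changing a binding means editing configuration, not recompiling C++, which is the whole point of the rewrite.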

At this rate the first test-release will be out at the weekend, but the API documentation and sample configuration file might make interesting reading until then.

Syndicated 2015-11-05 21:52:02 from Steve Kemp's Blog
