Recent blog entries

30 Aug 2014 marnanel   » (Journeyer)

Gentle Readers: harmless phantoms

Gentle Readers
a newsletter made for sharing
volume 1, number 20
25th August 2014: harmless phantoms
What I’ve been up to

It's been three months! This is the last issue of volume 1, and next week volume 2 begins: it'll be more of the same, except that I'm adding reviews of some of the children's books I've loved in my life. I'll be collecting the twenty issues of volume 1 together in a printed book, which I'll be emailing you about when it's ready.

This week has been busy but uneventful, which I wish was a less common mixture, but it was good to drop into Manchester during the Pride festival. I apologise for this issue being late: I had it all prepared, and then there was a server problem, and then I found I'd lost one of the sections completely, so it had to be rewritten. Never mind: you have it now!

A poem of mine

ON FIRST LOOKING INTO AN A TO Z (T13)

My talent (or my curse) is getting lost:
my routes are recondite and esoteric.
Perverted turns on every road I crossed
have dogged my feet from Dover up to Berwick.
My move to London only served to show
what fearful feast of foolishness was mine:
I lost my way from Tower Hill to Bow,
and rode the wrong way round the Circle Line.
In nameless London lanes I wandered then
whose tales belied my tattered A to Z,
and even now, in memory again
I plod despairing, Barking in my head,
still losing track of who and where I am,
silent, upon a street in Dagenham.

(Notes: the title is a reference to Keats's sonnet On First Looking into Chapman's Homer. "A to Z" is a standard book of London street maps.)

 

A picture

http://thomasthurman.org/pics/on-sweet-bathroom
On-sweet bathroom

Something wonderful

In the poem above, I mentioned Berwick-upon-Tweed, or Berwick for short, which rhymes with Derek. Berwick is the most northerly town in England, two miles from the Scottish border. It stands at the mouth of the river Tweed, which divides Scotland from England in those parts, but Berwick is on the Scottish bank: for quite a bit of its history it was a very southerly town in Scotland instead. The town's football team still plays in the Scottish leagues instead of the English. Berwick has been in English hands since 1482, though given next month's referendum I'm not going to guess how long that will last.

http://gentlereaders.uk/pics/berwick-map

As befits such a frontier town, it's impressively fortified, and the castle and ramparts are well worth seeing. But today I particularly wanted to tell you about the story of its war with Russia.

http://gentlereaders.uk/pics/berwick-miller
Fans of Jasper Fforde's Thursday Next series, and anyone who had to learn The Charge of the Light Brigade at school, will remember the Crimean War, a conflict which remained an infamous example of pointless waste of life until at least 1914. Now, because Berwick had changed hands between England and Scotland several times, it was once the rule that legal documents would mention both countries as "England, Scotland, and Berwick-upon-Tweed" to be on the safe side. And the story goes that when Britain declared war on Russia in 1854, it was in the name of England, Scotland, and Berwick-upon-Tweed, but the peace treaty in 1856 forgot to include Berwick, so this small town remained technically at war with Russia for over a century.

In fact, the tale is untrue: Berwick wasn't mentioned in the declaration of war, as far as I know, though I admit I haven't been able to trace a copy -- can any of you do any better? But such is the power of story that in 1966, with the Cold War becoming ever more tense, the town council decided that something had to be done about the problem. So the London correspondent of Pravda, one Oleg Orestov, travelled the 350 miles up to Berwick for peace talks, so that everyone could be sure that Berwick was not at war with the USSR. The mayor told Mr Orestov, "Please tell the Russian people through your newspaper that they can sleep peacefully in their beds."

Something from someone else

from HAUNTED HOUSES
by Henry Wadsworth Longfellow (1807-1882)

All houses wherein men have lived and died
Are haunted houses. Through the open doors
The harmless phantoms on their errands glide,
With feet that make no sound upon the floors.

We meet them at the doorway, on the stair,
Along the passages they come and go,
Impalpable impressions on the air,
A sense of something moving to and fro.

There are more guests at table than the hosts
Invited; the illuminated hall
Is thronged with quiet, inoffensive ghosts,
As silent as the pictures on the wall.

Colophon
Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://gentlereaders.uk/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.
This entry was originally posted at http://marnanel.dreamwidth.org/310953.html. Please comment there using OpenID.

Syndicated 2014-08-30 13:44:45 from Monument

29 Aug 2014 dmarti   » (Master)

Don't punch the monkey. Embrace the Badger.

One of the main reactions I get to Targeted Advertising Considered Harmful is: why are you always on about saving advertising? Advertising? Really? Wouldn't it be better to have a world where you don't need advertising?

Even when I do point out how non-targeted ads are good for publishers and advertisers, the obvious question is, why should I care? As a member of the audience, or a regular citizen, why does advertising matter? And what's all this about the thankless task of saving online advertising from itself? I didn't sign up for that.

The answer is: Because externalities.

Some advertising has positive externalities.

The biggest positive externality is ad-supported content that later becomes available for other uses. For example, short story readers today are benefiting from magazine ad budgets of the 19th-20th centuries.

Every time you binge-watch an old TV show, you're a positive externality winner, using a cultural good originally funded by advertising.

I agree with the people who want ad-supported content for free, or at a subsidized price. I'm not going to condemn all advertising as The Internet's Original Sin. I just think that we need to fix the bugs that make Internet advertising less valuable than ads in older media.

Some advertising has negative externalities.

On the negative side, the biggest externality is the identity theft risks inherent in large databases of PII. (And it's all PII. Anonymization is bogus.) The costs of identity theft fall on the people whose information is compromised, not on the companies that chose to collect it.

In 20 years, people will look back at John Battelle's surveillance marketing fandom the way we now watch those 1950s industrial films that praise PCBs, or asbestos, or some other God-awful substance that we're still spending billions to clean up. PII is informational hazmat.

The French Task Force on Taxation of the Digital Economy suggests a unit charge per user monitored to address the dangers that uncontrolled practices regarding the use of these data are likely to raise for the protection of public freedoms. But although that kind of thing might fly in Europe, in the USA we have to use technology. And that's where regular people come in.

What you can do

Your choice to protect your privacy by blocking those creepy targeted ads that everyone hates is not a selfish one. You're helping to re-shape the economy. You're helping to move ad spending away from ads that target you, and have negative externalities, and towards ads that are tied to content, and have positive externalities. It's unlikely that Internet ads will ever be all positive, or all negative, but privacy-enabled users can shift the balance in a good way.

Don't punch the monkey. Embrace the Badger.

Syndicated 2014-08-29 13:16:08 from Don Marti

29 Aug 2014 Stevey   » (Master)

Migration of services and hosts

Yesterday I carried out the upgrade of a Debian host from Squeeze to Wheezy for a friend. I like doing odd-jobs like this as they're generally painless, and when there are problems it is a fun learning experience.

I accidentally forgot to check on the status of the MySQL server on that particular host, which was a little embarrassing, but later put together a reasonably thorough serverspec recipe to describe how the machine should be set up, which will avoid that problem in the future - Introduction/tutorial here.

The more I use serverspec the more I like it. My own personal servers have good rules now:

shelob ~/Repos/git.steve.org.uk/server/testing $ make
..
Finished in 1 minute 6.53 seconds
362 examples, 0 failures

Slow, but comprehensive.

In other news I've now migrated every single one of my personal mercurial repositories over to git. I didn't have a particular reason for doing that, but I've started using git more and more for collaboration with others and using two systems felt like an annoyance.

That means I no longer have to host two different kinds of repositories, and I can use the excellent gitbucket software on my git repository host.

Needless to say I wrote a policy for this host too:

#
#  The host should be wheezy.
#
describe command("lsb_release -d") do
  its(:stdout) { should match /wheezy/ }
end


#
# Our gitbucket instance should be running, under runit.
#
describe supervise('gitbucket') do
  its(:status) { should eq 'run' }
end

#
# nginx will proxy to our back-end
#
describe service('nginx') do
  it { should be_enabled   }
  it { should be_running   }
end
describe port(80) do
  it { should be_listening }
end

#
#  Host should resolve
#
describe host("git.steve.org.uk" ) do
  it { should be_resolvable.by('dns') }
end

Simple stuff, but being able to trigger all these kinds of tests, on all my hosts, with one command, is very reassuring.
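For readers wondering how a single command fans out across hosts: serverspec's conventional layout keeps one directory of specs per host and generates a rake task for each. Here's a minimal sketch along the lines of what serverspec-init generates; the layout and the TARGET_HOST convention are serverspec defaults, not Steve's actual setup:

require 'rake'
require 'rspec/core/rake_task'

task :spec    => 'spec:all'
task :default => :spec

namespace :spec do
  # One directory of specs per host, e.g. spec/git.steve.org.uk/*_spec.rb
  targets = []
  Dir.glob('spec/*').each do |dir|
    targets << File.basename(dir) if File.directory?(dir)
  end

  task :all => targets

  targets.each do |target|
    desc "Run serverspec tests against #{target}"
    RSpec::Core::RakeTask.new(target.to_sym) do |t|
      # spec_helper reads TARGET_HOST to point the SSH backend at the host.
      ENV['TARGET_HOST'] = target
      t.pattern = "spec/#{target}/*_spec.rb"
    end
  end
end

With that in place, running "rake spec" (or a make wrapper around it) walks every host in turn.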

Syndicated 2014-08-29 13:28:28 from Steve Kemp's Blog

29 Aug 2014 badvogato   » (Master)

back from a weekend of camping and hiking, rafting fun...
http://www.nps.gov/dewa/planyourvisit/trail-maps-nj.htm

Finished the book 'The Music Lesson' by Victor Wooten, an incredible story about the spirit of music and life.

Reflecting on American life through the TV series 'Mad Men'. That's all for now.

29 Aug 2014 bagder   » (Master)

Firefox OS Flatfish Bluedroid fix

Hey, when I just built my own Firefox OS (b2g) image for my Firefox OS Tablet (flatfish) I ran into this (known) problem:

Can't find necessary file(s) of Bluedroid in the backup-flatfish folder.
Please update the system image for supporting Bluedroid (Bug-986314),
so that the needed binary files can be extracted from your flatfish device.

So, as I struggled to figure out the exact instructions on how to proceed from this, I figured I should jot down what I did in the hopes that it perhaps will help a fellow hacker at some point:

  1. Download the 3 *.img files from the dropbox site that is referenced from bug 986314.
  2. Download the flash-flatfish.sh script from the same dropbox place
  3. Make sure you have ‘fastboot’ installed (I’m mentioning this here because it turned out I didn’t and yet I have already built and flashed my Flame phone successfully without having it). “apt-get install android-tools-fastboot” solved it for me. Note that if it isn’t installed, the flash-flatfish.sh script will claim that the device is not in fastboot mode and stop with an error message saying so.
  4. Finally: run the script “./flash-flatfish.sh [dir with the 3 .img files]”
  5. Once it had succeeded, the tablet reboots
  6. Remove the backup-flatfish directory in the build dir.
  7. Restart the flatfish build again and now it should get past that Bluedroid nit (the steps condense to the sketch below)
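Condensed, steps 4-7 look something like this; the image directory is my own placeholder, and the final ./build.sh invocation assumes a stock B2G checkout:

./flash-flatfish.sh ~/flatfish-imgs   # dir holding the 3 .img files
# wait for the tablet to reboot, then, in the B2G build directory:
rm -rf backup-flatfish
./build.sh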

Enjoy!

Syndicated 2014-08-29 12:11:30 from daniel.haxx.se

29 Aug 2014 robertc   » (Master)

Test processes as servers

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there's no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that's sometimes quite expensive, particularly in test suites with tens of thousands of tests. Now, for use in the development edit-execute loop this is arguably OK, because one needs to load the new tests into memory anyway; but wouldn't it be nice if tools like testr that run tests for you didn't have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then hand over progressively larger units of work, without forcing a new process (and thus new discovery, directory walking and importing)?

Secondly, testr has an inconsistent interface – if testr is chaining through to child workers and letting a user debug into them, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex.

Lastly, for some languages at least, it's possible to pick up new code dynamically at runtime – so with a simple inotify loop we could avoid new processes (and, more importantly, complete enumeration) *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (lets say stdin)
  3. On startup it might eager load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so lets stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be std-in, a command providing a packet of stdin – used for interacting with debuggers (a rough sketch of this command loop follows below)
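To make the shape concrete, here is a rough sketch of such a command loop in Python. The three commands mirror the list above, but the framing (one JSON object per input line) and every name in it are mine, invented for illustration; this is not a proposed wire format:

import json
import sys
import unittest

def enumerate_tests():
    # Eagerly load and flatten everything once at startup (point 3).
    tests = {}
    def flatten(suite):
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                flatten(item)
            else:
                tests[item.id()] = item
    flatten(unittest.TestLoader().discover('.'))
    return tests

def serve(inp=sys.stdin, out=sys.stdout):
    tests = enumerate_tests()
    for line in inp:
        command = json.loads(line)
        if command['op'] == 'list-tests':
            for test_id in sorted(tests):
                out.write(test_id + '\n')
        elif command['op'] == 'run-tests':
            # Filter-and-run just the requested ids; a real server would
            # emit subunit here rather than a plain-text report.
            suite = unittest.TestSuite(
                tests[i] for i in command['ids'] if i in tests)
            unittest.TextTestRunner(stream=out).run(suite)
        elif command['op'] == 'stdin':
            pass  # route command['data'] through to a debugger under test
        out.flush()

if __name__ == '__main__':
    serve()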

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…


Syndicated 2014-08-29 03:48:18 from Code happens

28 Aug 2014 Rich   » (Master)

Apache httpd at ApacheCon Budapest

tl;dr - There will be a full day of Apache httpd content at ApacheCon Europe, in Budapest, November 17th - apacheconeu2014.sched.org/type/httpd

Links:

* ApacheCon website - http://apachecon.eu
* ApacheCon Schedule - http://apacheconeu2014.sched.org/
* Register - http://events.linuxfoundation.org//events/apachecon-europe/attend/register
* Apache httpd - http://httpd.apache.org/

I'll be giving two talks about the Apache http server at ApacheCon.eu in a little over 2 months.

On Monday morning (November 17th) I'll be speaking about Configurable Configuration in httpd. New in Apache httpd 2.4 is the ability to put conditional statements in your configuration file which are evaluated at request time rather than at server startup time. This means that you can have the configuration adapt to the specifics of the request - like, where in the world it came from, what time of day it is, what browser they're using, and so on. With the new If/ElseIf/Else syntax, you can embed this logic directly in your configuration.
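As a taste of the syntax (a toy example of my own, not one from the talk; it assumes mod_headers is loaded), a 2.4 configuration can branch on time of day and user agent:

<If "%{TIME_HOUR} -lt 9 || %{TIME_HOUR} -gt 17">
    Header set X-Support-Status "closed"
</If>
<ElseIf "%{HTTP_USER_AGENT} =~ /Lynx/">
    Header set X-Greeting "hello, text-mode friend"
</ElseIf>
<Else>
    Header set X-Greeting "welcome"
</Else>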

2.4 also includes mod_macro, and a new expression evaluation engine, which further enhance httpd's ability to have a truly flexible configuration language.

Later in the day, I'll be speaking about mod_rewrite, the module that lets you manipulate requests using regular expressions and other logic, also at request time. Most people who have some kind of website are forced to use mod_rewrite now and then, and there's a lot of terrible advice online about ways to use it. In this session, you'll learn the basics of regular expression syntax, and how to correctly craft rewrite expressions.
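For a tiny (invented) taste of what the session covers, here's a rule in a virtual host that uses a capture group to carry part of an old URL into a new one:

RewriteEngine On
# Permanently redirect legacy /blog/<number> URLs to the new scheme;
# $1 expands to whatever the parenthesised group matched.
RewriteRule "^/blog/([0-9]+)$" "/articles/$1" [R=301,L]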

There's other httpd content throughout the day, and the people who created this technology will be on hand to answer your questions, and teach you all of the details of using the server. We'll also have a hackathon running the entire length of the conference, where people will be working on various aspects of the server. In particular, I'll be working on the documentation. If you're interested in participating in the httpd docs, this is a great time to learn how to do that, and dive into submitting your first patch.

See you there!

Syndicated 2014-08-28 14:04:37 (Updated 2014-08-28 14:18:45) from Notes In The Margin

28 Aug 2014 idcmp   » (Journeyer)

Backing up OS X onto Linux. Finally.

I've tried all sorts of black voodoo magic to make this work, but finally I have something repeatable and reliable. Also, please excuse the horrible formatting of this post.

History

I started with Time Machine talking to my Linux box with netatalk. This worked fine until one day Time Machine told me it needed to back up everything all over again.

Then it started to tell me this almost weekly.  Apparently when this happens, the existing backup is corrupt and all that precious data is at worst irretrievable or at best tedious to retrieve. These are not attributes I want associated with my backup solution.

Then I did an rsync over ssh to my Linux box. This is fine except that it lacks all the special permissions, resource forks, acls, etc, etc that are hidden in a Mac filesystem.

Then I tried SuperDuper! backing up to a directory served up via netatalk mounted via afp:// on OS X. This worked, but was mind-numbingly slow. Also, it would mean I'd have to pay for a tool if I wanted to do incremental backups. This gets expensive as I also back up a few friends' OS X laptops on my Linux file server.

I tried SuperDuper! backing up over Samba, but hdiutil create apparently doesn't work over Samba. Workarounds all needed the purchased version of SuperDuper!.

There's *another* workaround for SuperDuper! where I can use MacFUSE and sshfs, but the MacFUSE author has abandoned the project and recommends that people not use it.

Sheesh.

The Solution

Ultimately, the goal is to make a sparsebundle HFS+ disk image, put it on a Samba mounted share and rsync my data over to it. You'd be surprised how many niggly bits there are for this.

Install Rsync


First, I grabbed the 3.1.x version of rsync from Homebrew - install Homebrew as per the directions there, then run:

brew install https://raw.github.com/Homebrew/homebrew-dupes/master/rsync.rb

If you've been digging through voodoo magic, then you'll be happy to hear this version of rsync has all the rsync patches you'll read about (like --protect-decmpfs).

Samba

Nobody needs another out of date blog entry explaining how to setup Samba. Follow some other guide, make sure Samba starts automatically and use smbpasswd to create an account.  

I recommend using the name of the machine being backed up as the account name. I'm calling that machinename for the rest of this post.

Make sure you can mount this share on OS X via smb:// (Finder > Go > Connect to Server...). Make sure you can 1) create a file, 2) edit and save the file, 3) delete the file. I'm going to assume you've mounted this share at /Volumes/Machinename

Backup Into

Now let's make something for us to back up into. Figure out how big the disk is on the source machine (we'll assume 100g) then run:

hdiutil create /tmp/backup.sparsebundle -size 100g -type SPARSEBUNDLE -nospotlight -volname 'Machinename-Backup' -verbose -fs 'Case-sensitive Journaled HFS+'

Yes, you're creating it in /tmp, this is to work around hdiutil create not liking Samba.

Next you'll want to copy this sparse bundle onto your Samba share:

 cp -rvp /tmp/backup.sparsebundle /Volumes/machinename


This will copy a bunch of files and should succeed without any warnings. Now let's mount this sparse bundle:

hdiutil attach /Volumes/machinename/backup.sparsebundle -mount required -verbose

You should now have /Volumes/Machinename-Backup mounted on your system. Fun story: OS X recognizes that this disk image is hosted off the machine, so it mounts it with "noowners" (see the mount man page). That's going to be a problem for our backup, so we need to tell OS X it's okay to use userids normally:

sudo diskutil enableOwnership /Volumes/Machinename-Backup

Preparing Rsync

There are a handful of files and directories which people recommend excluding:


.DocumentRevisions-*/
.Spotlight-*/
/.fseventsd
/.hotfiles.btree
.Trashes
/afs/*
/automount/*
/cores/*
/private/var/db/dyld/dyld_*
/System/Library/Caches/com.apple.bootstamps/*
/System/Library/Caches/com.apple.corestorage/*
/System/Library/Caches/com.apple.kext.caches/*
/dev/*
/automount
/.vol/*
/net
/private/tmp/*
/private/var/run/*
/private/var/spool/postfix/*
/private/var/vm/*
/private/var/folders/*
/Previous Systems.localized
/tmp/*
/Volumes/*
*/.Trash
/Backups.backupdb
/.MobileBackups
/.bzvol
/PGPWDE01
/PGPWDE02

Store this in a file somewhere. I stored mine as exclude-from.txt in /Volumes/Machinename

Okay, now we're ready to run rsync. I think the correct arguments to rsync are: -aNHAXx --hfs-compression --protect-decmpfs --itemize-changes --super --fileflags --force-change --crtimes

So, we run:

rsync -aNHAXx --hfs-compression --protect-decmpfs --itemize-changes --super --fileflags --force-change --crtimes --exclude-from=/Volumes/Machinename/exclude-from.txt / /Volumes/Machinename-Backup/

When The Backup Is Done

This will take a little while. When it's done, you can bless your backup so it can be booted:

sudo bless -folder /Volumes/Machinename-Backup/System/Library/CoreServices

Then you can umount your backup:

hdiutil detach /Volumes/Machinename-Backup

Periodically, and after your first run, you should compact down your sparsebundle disk image:

hdiutil compact /Volumes/Machinename/backup.sparsebundle -batteryallowed

You can now log into your Linux server and tar up the backup (apparently XZ > bzip2 for compression size).

tar Jcvf machinename-backup.tar.xz backup.sparsebundle

Depending on the size of that tarball, you could upload it to Google Drive, Drop Box, etc. Before you do, you'll probably want to encrypt it. I used OpenSSL:

openssl aes-256-cbc -a -salt -in machinename-backup.tar.xz -out machinename-backup.tar.xz.aes-256-cbc
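For the record, the matching decryption step is the same cipher with -d added:

openssl aes-256-cbc -d -a -in machinename-backup.tar.xz.aes-256-cbc -out machinename-backup.tar.xz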

Many of these steps will take *hours* if you have a lot of data, so you may consider just backing up parts of your system more frequently and doing your whole system once every so often.
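If you do settle into a routine, the whole cycle stitches together into a short script. This is just a sketch assembled from the commands above, using the placeholder paths from this post:

#!/bin/sh
# Sketch only: mount the image, sync, bless, unmount, compact.
SHARE=/Volumes/Machinename
BACKUP=/Volumes/Machinename-Backup

hdiutil attach "$SHARE/backup.sparsebundle" -mount required || exit 1
sudo diskutil enableOwnership "$BACKUP"

rsync -aNHAXx --hfs-compression --protect-decmpfs --itemize-changes \
      --super --fileflags --force-change --crtimes \
      --exclude-from="$SHARE/exclude-from.txt" / "$BACKUP/"

sudo bless -folder "$BACKUP/System/Library/CoreServices"
hdiutil detach "$BACKUP"
hdiutil compact "$SHARE/backup.sparsebundle" -batteryallowed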

Why This Is Nice

One thing I really love about this setup is that each piece of the puzzle does one thing and does it well. If Samba is too slow, I could go back to netatalk. The sparsebundle disk image hosts HFS+ properly with all its OS X-specific voodoo and rsync's job is to copy files. If there's a better file copier, I could drop that in.

Conclusion

I left a lot out. I know. I'm kind of expecting you to have a rough idea of how to get around OS X and Linux, figure out how to put most of the above in a shell script, decide when to do backups, how to store those tarballs, etc. Hopefully, though, this will help someone who just needs some of the key ingredients to make it work.



Syndicated 2014-08-27 23:49:00 (Updated 2014-08-29 17:52:35) from Idcmp

27 Aug 2014 wingo   » (Master)

a wingolog user's manual

Greetings, dear readers!

Welcome to my little corner of the internet. This is my place to share and write about things that are important to me. I'm delighted that you stopped by.

Unlike a number of other personal sites on the tubes, I have comments enabled on most of these blog posts. It's gratifying to me to hear when people enjoy an article. I also really appreciate it when people bring new information or links or things I hadn't thought of.

Of course, this isn't like some professional peer-reviewed journal; it's above all a place for me to write about my wanderings and explorations. Most of the things I find on my way have already been found by others, but they are no less new to me. As Goethe said, quoted in the introduction to The Joy of Cooking: "That which thy forbears have bequeathed to thee, earn it anew if thou wouldst possess it."

In that spirit I would enjoin my more knowledgeable correspondents to offer their insights with the joy of earning-anew, and particularly to recognize and banish the spectre of that moldy, soul-killing "well-actually" response that is present on so many other parts of the internet.

I've had a good experience with comments on this site, and I'm a bit lazy, so I take an optimistic approach to moderation. By default, comments are posted immediately. Every so often -- more often after a recent post, less often in between -- I unpublish comments that I don't feel contribute to the piece, or which I don't like for whatever reason. It's somewhat arbitrary, but hey, welcome to my corner of the internet.

This has the disadvantage that some unwanted comments end up published, then they go away. If you notice this happening to someone else's post, it's best to just ignore it, and in particular to not "go meta" and ask in the comments why a previous comment isn't there any more. If it happens to you, I'd ask you to re-read this post and refrain from unwelcome comments in the future. If you think I made an error -- it can happen -- let me know privately.

Finally, and it really shouldn't have to be said, but racism, sexism, homophobia, transphobia, and ableism are not welcome here. If you see such a comment that I should delete and have missed, let me know privately. However even among well-meaning people, and that includes me, there are ways of behaving that reinforce subtle bias. Please do point out such instances in articles or comments, either publicly or privately. Working on ableist language is a particular challenge of mine.

You can contact me via comments (anonymous or not), via email (wingo@pobox.com), twitter (@andywingo), or IRC (wingo on freenode). Thanks for reading, and happy hacking :)

Syndicated 2014-08-27 08:37:17 from wingolog

27 Aug 2014 bagder   » (Master)

Going to FOSDEM 2015

Yeps,

I’m going there and I know several friends are going too, so this is just my way of pointing it out to those of you who still haven’t made up your minds! There’s still plenty of time left, as the event takes place in late January next year.

I intend to try to get a talk accepted this time, and I would love to meet up with more curl contributors and fans.


Syndicated 2014-08-27 09:01:10 from daniel.haxx.se

27 Aug 2014 marnanel   » (Journeyer)

Not the ice bucket challenge

I spent like two hours making this. I'm sure there was some good reason for that.



This entry was originally posted at http://marnanel.dreamwidth.org/310657.html. Please comment there using OpenID.

Syndicated 2014-08-27 00:24:13 from Monument

26 Aug 2014 Rich   » (Master)

LinuxCon NA 2014

Last week I attended LinuxCon North America in Chicago.