Recent blog entries

3 Sep 2014 marnanel   » (Journeyer)

Gentle Readers in video form


As promised, here's Gentle Readers in video form. Please let me know what you think-- and share widely, because I'd like lots of feedback!

This entry was originally posted at http://marnanel.dreamwidth.org/311377.html. Please comment there using OpenID.

Syndicated 2014-09-03 00:04:28 from Monument

2 Sep 2014 Hobart   » (Journeyer)

Windows technique to print timestamps before & after from the command line

On Unix, a quick way to output timestamps is:

$ date ; slowcommand ; date
Tue Sep  2 12:12:18 MDT 2014
Tue Sep  2 12:12:34 MDT 2014
$ 
But if you try a similar approach at the Windows command prompt, there are a few problems.
  • The command TIME /T outputs the time, but only in HH:MM format.
     
  • The command prompt's builtin magic variable %TIME% outputs HH:MM:SS.ss, but if you try it, the results are unexpected:
    C:\>echo %TIME% && SLOWCOMMAND && echo %TIME%
    13:42:05.10
    13:42:05.10

    C:\>
    The timestamps come out the same, because the command prompt does all variable substitution in a line at once, before executing the first command.

    In batch files, this can be mitigated with SETLOCAL ENABLEDELAYEDEXPANSION and by referring to variables as !LIKETHIS! instead of %LIKETHIS% (a sketch follows below). But that won't work at the command prompt.
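
    A minimal batch-file sketch of that mitigation (with SLOWCOMMAND standing in, as above, for whatever long-running command you're timing):

    @echo off
    setlocal ENABLEDELAYEDEXPANSION
    rem On a single line, !TIME! is expanded as each command runs,
    rem whereas %TIME% would be expanded only once for the whole line.
    echo !TIME! && SLOWCOMMAND && echo !TIME!
    endlocal
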
The solution I used was to run the command explicitly afterwards with CMD /C, using ^ to escape the % character:
C:\>echo %TIME% && SLOWCOMMAND && cmd /c echo %TIME^%
13:51:27.58
13:51:46.66

C:\>
Other solutions welcome.

Syndicated 2014-09-02 20:18:46 from jon's blog

2 Sep 2014 louie   » (Master)

Wikimania 2014 Notes – very miscellaneous

A collection of semi-random notes from Wikimania London, published very late:

Gruppenfoto Wikimania 2014 London, by Ralf Roletschek, under CC BY-SA 3.0 Austria

The conference generally

  • Tone: Overall tone of the conference was very positive. It is possibly just a small sample size—any one person can only talk to a small number of the few thousand at the conference—but it seemed more upbeat/positive than last year.
  • Tone, 2: The one recurring negative theme was concern about community tone, from many angles, including Jimmy. I’m very curious to see how that plays out. I agree, of course, and will do my part, both at WMF and when I’m editing. But that sort of social/cultural change is very hard.
  • Speaker diversity: Heard a few complaints about gender balance and other diversity issues in the speaker lineup, and saw a lot of the same (wonderful!) faces as last year. I’m wondering if procedural changes (like blind submissions, or other things from this list) might bring some new blood and improve diversity.
  • “Outsiders”: The conference seemed to have better representation than last year from “outside” our core community. In particular, it was great for me to see huge swathes of the open content/open access movements represented, as well as other free software projects like Mozilla. We should be a movement that works well with others, and Wikimania can/should be a key part of that, so this was a big plus for me.
  • Types of talks: It would be interesting to see what the balance was of talks (and submissions) between “us learning about the world” (e.g., me talking about CC), “us learning about ourselves” (e.g., the self-research tracks), and “the world learning about us” (e.g., aimed at outsiders). Not sure there is any particular balance we should have between the three of them, but it might be revealing to see what the current balance is.
  • Less speaking, more conversing: Next year I will probably propose mostly (only?) panels and workshops, and I wonder if I can convince others to do the same. I can do a talk+slides and stream it at any time; what I can only do in person is have deeper, higher-bandwidth conversations.
  • Physical space and production values: The hackathon space was amazingly fun for me, though I got the sense not everyone agreed. The production values (and the rest of the space) for the conference were very good. I’m torn on whether or not the high production values are a plus for us, honestly. They raise the bar for participation (bad); make the whole event feel somewhat… un-community-ish(?); but they also make us much more accessible to people who aren’t yet ready for the full-on, super-intense Wikimedian Experience.

The conference for projects I work on

  • LCA: Legal/Community Affairs was pretty awesome on many fronts—our talks, our work behind the scenes, our dealing with both the expected and unexpected, etc. Deeply proud to be part of this dedicated, creative team. Also very appreciative for everyone who thanked us—it means a lot when we hear from people we’ve helped.
  • Maps: Great seeing so much interest in Open Street Map. Had a really enjoyable time at their 10th birthday meetup; was too bad I had to leave early. Now have a better understanding of some of the technical issues after a chat with Kolossos and Katie. Also had just plain fun geeking out about “hard choices” like map boundaries—I find how communities make decisions about problems like that fascinating.
  • Software licensing: My licensing talk with Stephen went well, but probably should have been structured as part of the hackathon rather than for more general audiences. Ultimately this will only work out if engineering (WMF and volunteer) is on board, and will work best if engineering leads. (The question asked by Mako afterwards has already led to patches, which is cool.)
  • Creative Commons: My CC talk with Kat went well, and got some good questions. Ultimately the rubber will meet the road when the translations are out and we start the discussion with the full community. Also great meeting User:Multichill; looking forward to working on license templates with him and May from design.
  • Metadata: The multimedia metadata+licensing work is going to be really challenging, but very interesting and ultimately very empowering for everyone who wants to work with the material on commons. Look forward to working with a large/growing number of people on this project.
  • Advocacy: Advocacy panel was challenging, in a good way. A variety of good, useful suggestions; but more than anything else, I took away that we should probably talk about how we talk when subjects are hard, and consensus may be difficult to reach. Examples would include when there is a short timeline for a letter, or when topics are deeply controversial for good, honest reasons.

The conference for me

  • Lesson (1): Learned a lesson: never schedule a meeting for the day after Wikimania. Odds of being productive are basically zero, though we did get at least some things done.
  • Lesson (2): I badly overbooked myself; it hurt my ability to enjoy the conference and meet everyone I wanted to meet. Next year I’ll try to be more focused in my commitments so I can benefit more from spontaneity, and get to see some slightly less day-job-related (but enjoyable or inspirational) talks/presentations.
  • Research: Love that there is so much good/interesting research going on, and do deeply think that it is important to understand it so that I can apply it to my work. Did not get to see very much of it, though :/
  • Arguing with love: As tweeted about by Phoebe, one of the highlights was a vigorous discussion (violent agreement :) with Mako over dinner about the four freedoms and how they relate to just/empowering software more broadly. Also started a good, vigorous discussion with SJ about communication and product quality, but we sadly never got to finish that.
  • Recharging: Just like GUADEC in my previous life, I find these exhausting but also ultimately exhilarating and recharging. Can’t wait to get to Mexico City!

Misc.

  • London: I really enjoy London—the mix of history and modernity is amazing. Bonus: I think the beer scene has really improved since the last time I was there.
  • Movies: I hardly ever watch movies anymore, even though I love them. Knocked out 10 movies in the 22 hours in flight. On the way to London:
    • The Grand Budapest Hotel (the same movie as every other Wes Anderson movie, which is enjoyable)
    • Jodorowsky’s Dune (awesome if you’re into scifi)
    • Anchorman (finally)
    • Stranger than Fiction (enjoyed it, but Adaptation was better)
    • Captain America, Winter Soldier (not bad?)
  • On the way back:
    • All About Eve (finally – completely compelling)
    • Appleseed: Alpha (weird; the awful dialogue and wooden “faces” of computer-animated actors clashed particularly badly with the classically great dialogue and acting of All About Eve)
    • Mary Poppins (having just seen London; may explain my love of magico-realism?)
    • The Philadelphia Story (great cast, didn’t engage me otherwise)
    • Her (very good)

Syndicated 2014-09-02 20:24:03 from Luis Villa » Blog

2 Sep 2014 lloydwood   » (Journeyer)

SaVi on Raspberry Pi.

Discovered that my SaVi satellite constellation visualization software could be installed easily on a Raspberry Pi, which is far more powerful than the machines I began working on SaVi with, so many years ago.

And that SaVi is now available for a very wide range of architectures.

My PhD work is running on the big screen in the living room. Looks like I've finally gotten satellite TV.

2 Sep 2014 dyork   » (Master)

Can You Please Help The Ottawa Linux Symposium?

If you have ever used the Linux operating system, could you please help out the Ottawa Linux Symposium (OLS)? For many years OLS has been one of the key events that has helped bring together people from all across the Linux community, and the connections made at OLS have helped to make the Linux operating system that much more powerful and useful. But… as organizer Andrew Hutton recounts on the OLS Indiegogo page, the event has fallen into a bit of a financial crunch and it is now not clear whether there will be an OLS in 2015… or ever again.

Could you spare $10? $25? or even $50 or $100? (Or more?)

If so, please help fund OLS on the Indiegogo page!

I first attended OLS back in the early 2000s when I was living right there in Ottawa, working first for a startup called e-smith and then for Mitel Networks. In looking at my list of presentations, I can see that I spoke there several times… and the topics I covered take me back to a much different time:

  • 2004 OLS – Tutorial: Introduction to OpenPGP, GnuPG and the Web of Trust
  • 2002 OLS – Tutorial – Single Source Publishing Using DocBook XML
  • 2001 OLS – Maximizing Your Use of CVS

I still remember OLS as the incredibly passionate place where people connected… and where I made so many connections and learned an amazing amount about Linux.

If OLS was ever important to you… or if Linux has been important to you… please consider donating to help the OLS organization get out of its financial hole and get moving ahead in future years. Organizer Andrew Hutton has poured his heart and soul – and personal money – into making OLS the incredible event it has been… now it would be great if we all can help him! Please consider donating!

Here are a few other viewpoints on the importance of OLS:

Please do donate if you can! THANK YOU!


Syndicated 2014-09-02 11:49:32 from Code.DanYork.Com

2 Sep 2014 wingo   » (Master)

high-performance packet filtering with pflua

Greets! I'm delighted to be able to announce the release of Pflua, a high-performance packet filtering toolkit written in Lua.

Pflua implements the well-known libpcap packet filtering language, which we call pflang for short.

Unlike other packet filtering toolkits, which tend to use the libpcap library to compile pflang expressions to bytecode to be run by the kernel, Pflua is a completely new implementation of pflang.

why lua?

At this point, regular readers are asking themselves why this Schemer is hacking on a Lua project. The truth is that I've always been looking for an excuse to play with the LuaJIT high-performance Lua implementation.

LuaJIT is a tracing compiler, which is different from other JIT systems I have worked on in the past. Among other characteristics, tracing compilers only emit machine code for branches that are taken at run-time. Tracing seems a particularly appropriate strategy for the packet filtering use case, as you end up with linear machine code that reflects the shape of actual network traffic. This has the potential to be much faster than anything static compilation techniques can produce.

The other reason for using Lua was because it was an excuse to hack with Luke Gorrie, who for the past couple years has been building the Snabb Switch network appliance toolkit, also written in Lua. A common deployment environment for Snabb is within the host virtual machine of a virtualized server, with Snabb having CPU affinity and complete control over a high-performance 10Gbit NIC, which it then routes to guest VMs. The administrator of such an environment might want to apply filters on the kinds of traffic passing into and out of the guests. To this end, we plan on integrating Pflua into Snabb so as to provide a pleasant, expressive, high-performance filtering facility.

Given its high performance, it is also reasonable to deploy Pflua on gateway routers and load-balancers, within virtualized networking appliances.

implementation

Pflua compiles pflang expressions to Lua source code, which is then optimized at run-time to native machine code.

There are actually two compilation pipelines in Pflua. The main one is fairly traditional. First, a custom parser produces a high-level AST of a pflang filter expression. This AST is lowered to a primitive AST, with a limited set of operators and ways in which they can be combined. This representation is then exhaustively optimized, folding constants and tests, inferring ranges of expressions and packet offset values, hoisting assertions that post-dominate success continuations, etc. Finally, we residualize Lua source code, performing common subexpression elimination as we go.

For example, if we compile the simple pflang expression ip or ip6 with the default compilation pipeline, we get the following Lua source code:

return function(P,length)
   if not (length >= 14) then return false end
   do
      local v1 = ffi.cast("uint16_t*", P+12)[0]
      if v1 == 8 then return true end
      do
         do return v1 == 56710 end
      end
   end
end
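
(A note on the constants: they are the IPv4 and IPv6 ethertypes, 0x0800 and 0x86DD. Loaded as a uint16_t in network byte order on a little-endian machine, without byte-swapping, they come out as 8 and 56710.)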

The other compilation pipeline starts with bytecode for the Berkeley packet filter VM. Pflua can load up the libpcap library and use it to compile a pflang expression to BPF. In any case, whether you start from raw BPF or from a pflang expression, the BPF is compiled directly to Lua source code, which LuaJIT can gnaw on as it pleases. Compiling ip or ip6 with this pipeline results in the following Lua code:

return function (P, length)
   local A = 0
   if 14 > length then return 0 end
   A = bit.bor(bit.lshift(P[12], 8), P[12+1])
   if (A==2048) then goto L2 end
   if not (A==34525) then goto L3 end
   ::L2::
   do return 65535 end
   ::L3::
   do return 0 end
   error("end of bpf")
end

We like the independence and optimization capabilities afforded by the native pflang pipeline. Pflua can hoist and eliminate bounds checks, whereas BPF is obligated to check that every packet access is valid. Also, Pflua can work on data in network byte order, whereas BPF must convert to host byte order. Both of these restrictions apply not only to Pflua's BPF pipeline, but also to all other implementations that use BPF (for example the interpreter in libpcap, as well as the JIT compilers in the BSD and Linux kernels).

However, though Pflua does a good job in implementing pflang, it is inevitable that there may be bugs or differences of implementation relative to what libpcap does. For that reason, the libpcap-to-bytecode pipeline can be a useful alternative in some cases.
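
For a sense of how this is used, here is a minimal sketch; the module and function names follow my reading of the pflua repository, so treat them as assumptions rather than gospel:

local ffi = require("ffi")
local pf = require("pf")

-- Compile a pflang expression to a Lua predicate; LuaJIT then traces
-- and compiles the hot path to machine code.
local filter = pf.compile_filter("ip or ip6")

-- The predicate takes a pointer to the packet bytes and a length.
local pkt = ffi.new("uint8_t[64]")
pkt[12], pkt[13] = 0x08, 0x00   -- ethertype 0x0800: IPv4
print(filter(pkt, 64))          --> true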

performance

When Pflua hits the sweet spots of the LuaJIT compiler, performance screams.


[Graph: ping-flood benchmark, millions of packets per second for each implementation] (full image, analysis)

This synthetic benchmark runs over a packet capture of a ping flood between two machines and compares the following pflang implementations:

  1. libpcap: The user-space BPF interpreter from libpcap

  2. linux-bpf: The old Linux kernel-space BPF compiler from 2011. We have adapted this library to work as a loadable user-space module (source)

  3. linux-ebpf: The new Linux kernel-space BPF compiler from 2014, also adapted to user-space (source)

  4. bpf-lua: BPF bytecodes, cross-compiled to Lua by Pflua.

  5. pflua: Pflang compiled directly to Lua by Pflua.

To benchmark a pflang implementation, we use the implementation to run a set of pflang expressions over saved packet captures. The result is a corresponding set of benchmark scores measured in millions of packets per second (MPPS). The first set of results is thrown away as a warmup. After warmup, the run is repeated 50 times within the same process to get multiple result sets. Each run checks that the filter matches the expected number of packets, both to verify that each implementation does the same thing and to ensure that the loop is not dead.

In all cases the same Lua program is used to drive the benchmark. We have tested a native C loop when driving libpcap and gotten similar results, so we consider that the LuaJIT interface to C is not a performance bottleneck. See the pflua-bench project for more on the benchmarking procedure and a more detailed analysis.
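
For concreteness, here is a minimal sketch of that measurement loop in Lua; the packets table and the filter predicate are hypothetical stand-ins for the real pflua-bench harness:

-- Measure one run: apply the filter to every packet, check the match
-- count (which both validates the implementation and keeps the loop
-- from being optimized away as dead code), and return MPPS.
local function measure(filter, packets, expected)
   local matched = 0
   local start = os.clock()
   for i = 1, #packets do
      local p = packets[i]
      if filter(p.ptr, p.len) then matched = matched + 1 end
   end
   local elapsed = os.clock() - start
   assert(matched == expected, "implementations disagree on match count")
   return (#packets / elapsed) / 1e6
end

-- Throw away a warmup run, then repeat 50 times in-process.
local function bench(filter, packets, expected)
   measure(filter, packets, expected)
   local results = {}
   for i = 1, 50 do results[i] = measure(filter, packets, expected) end
   return results
end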

The graph above shows that Pflua can stream in packets from memory and run some simple pflang filters over them at close to the memory bandwidth of this machine (100 Gbit/s). Because all of the filters are actually faster than the accept-all case, probably due to work causing prefetching, we actually don't know how fast the filters themselves can run. In any case, in this ideal situation, we're running at a handful of nanoseconds per packet. Good times!


[Graph: benchmark over a wingolog.org packet capture] (full image, analysis)

It's impossible to make real-world tests right now, especially since we're running over packet captures and not within a network switch. However, we can get more realistic. In the above test, we run a few filters over a packet capture from wingolog.org, which mostly operates as a web server. Here we see again that Pflua beats all of the competition. Oddly, the new Linux JIT appears to fare marginally worse than the old one. I don't know why that would be.

Sadly, though, the last tests aren't running at that amazing flat-out speed we were seeing before. I spent days figuring out why that is, and that's part of the subject of my last section here.

on lua, on luajit

I implement programming languages for a living. That doesn't mean I know everything there is to know about everything, or that everything I think I know is actually true -- in particular, I was quite ignorant about trace compilers, as I had never worked with one, and I hardly knew anything about Lua at all. With all of those caveats, here are some ignorant first impressions of Lua and LuaJIT.

LuaJIT has a ridiculously fast startup time. It also compiles really quickly: under a minute. Neither of these should be important but they feel important. Of course, LuaJIT is not written in Lua, so it doesn't have the bootstrap challenges that Guile has; but still, a fast compilation is refreshing.

LuaJIT's FFI is great. Five stars, would program again.

As a compilation target, Lua is OK. On the plus side, it has goto and efficient bit operations over 32-bit numbers. However, and this is a huge downer, the result range of bit operations is the signed int32 range, not the unsigned range. This means that bit.band(0xffffffff, x) might be negative. No one in the history of programming has ever wanted this. There are sensible meanings for negative results to bit operations, but only if an argument was negative. Grr. Otherwise, Lua shares the same concerns as other languages whose numbers are defined as 64-bit doubles.
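
To see the complaint concretely, here is a quick LuaJIT snippet (mine, not from the original post):

local bit = require("bit")
-- Results of bit operations land in the signed int32 range:
print(bit.band(0xffffffff, 0xffffffff))        --> -1, not 4294967295
print(bit.lshift(1, 31))                       --> -2147483648
-- Getting back to the unsigned range is manual:
print(bit.band(0xffffffff, 0xffffffff) % 2^32) --> 4294967295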

Sometimes people get upset that Lua starts its indexes (in "arrays" or strings) with 1 instead of 0. It's foreign to me, so it's sometimes a challenge, but it can work as well as anything else. The problem comes in when working with the LuaJIT FFI, which starts indexes with 0, leading me to make errors as I forget which kind of object I am working on.
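
A tiny demonstration of the mismatch:

local ffi = require("ffi")
local s = "abc"
print(s:byte(1))                              --> 97: Lua strings index from 1
local buf = ffi.new("uint8_t[3]", 97, 98, 99)
print(buf[0])                                 --> 97: FFI arrays index from 0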

As a language to implement compilers, Lua desperately misses a pattern matching facility. Otherwise, a number of small gripes but no big ones; tables and closures abound, which leads to relatively terse code.

Finally, how well does trace compilation work for this task? I offer the following graph.


[Graph: paired match-most/match-nothing filter benchmarks] (full image, analysis)

Here the tests are paired. The first test of a pair, for example the leftmost portrange 0-6000, will match most packets. The second test of a pair, for example the second-from-the-left portrange 0-5, will reject all packets. The generated Lua code will be very similar, except for some constants being different. See portrange-0-6000.md for an example.

The Pflua performance of these filters is very different: the one that matches is slower than the one that doesn't, even though in most cases the non-matching filter will have to do more work. For example, a non-matching filter probably checks both src and dst ports, whereas a successful one might not need to check the dst.

It hurts to see Pflua's performance be less than the Linux JIT compilers, and even less than libpcap at times. I scratched my head for a long time about this. The Lua code is fine, and actually looks much like the BPF code. I had taken a look at the generated assembly code for previous traces and it looked fine -- some things that were not as good as they should be (e.g. a fair bit of conversions between integers and doubles, where these traces have no doubles), but things were OK. What changed?

Well. I captured the traces for portrange 0-6000 to a file, and dove in. Trace 66 contains the inner loop. It's interesting to see that there are a lot of dynamic checks at the beginning of the trace; the loop itself is not bad (scroll down to the word LOOP:), though it has the double conversions I mentioned before.

It seems that trace 66 was captured for a packet whose src port was within range. Later, we end up compiling a second trace if the src port check fails: trace 67. The trace starts off with an absurd amount of loads and dynamic checks -- to a similar degree as trace 66, even though trace 66 dominates trace 67. It seems that there is a big penalty for transferring from one trace to another, even though they are both compiled.

Finally, once trace 67 is done -- and recall that all it has to do is check the destination port, and then update the counters from the inner loop -- it jumps back to the top of trace 66 instead of the top of the loop, repeating all of the dynamic checks in trace 66! I can only think this is a current deficiency of LuaJIT, and not with trace compilation in general, although the amount of state transfer points to a lack of global analysis that you would get in a method JIT. I'm sure that values are being transferred that are actually dead.

This explains the good performance for the match-nothing cases: the first trace that gets compiled residualizes the loop expecting that all tests fail, and so only matching cases or variations incur the trace transfer-and-re-loop cost.

It could be that the Lua code that Pflua residualizes is in some way not idiomatic or not performant; tips in that regard are appreciated.

conclusion

I was going to pass some possible slogans by our marketing department, but we don't really have one, so I pass them on to you and you can tell me what you think:

  • "Pflua: A Totally Adequate Pflang Implementation"

  • "Pflua: Sometimes Amazing Performance!!!!1!!"

  • "Pflua: Organic Artisanal Network Packet Filtering"

Pflua was written by Igalians Diego Pino, Javier Muñoz, and myself for Snabb GmbH, fine purveyors of high-performance networking solutions. If you are interested in getting Pflua in a Snabb context, we'd be happy to talk; drop a note to the snabb-devel forum. For Pflua in other contexts, file an issue or drop me a mail at wingo@igalia.com. Happy hacking with Pflua, the totally adequate pflang implementation!

Syndicated 2014-09-02 10:15:49 from wingolog

2 Sep 2014 bagder   » (Master)

HTTP/2 interop pains

At around 06:49 CEST on the morning of August 27 2014, Google deployed HTTP/2 draft-14 support on the front-end servers that handle logins to Google accounts (and possibly others). Those at least take care of all the various login stuff you do with Google, G+, Gmail, etc.

The little problem with that was just that their implementation of HTTP/2 is in disagreement with all existing client implementations of that same protocol at that draft level. Someone immediately noticed the problem and filed a bug against Firefox.

The Firefox Nightly and beta versions have HTTP/2 enabled by default, and so users quickly started to notice this, and a range of duplicate bug reports have been filed. They keep being filed as more users run into this problem. As far as I know, Chrome does not have this enabled by default, so no Chrome users get this ugly surprise.

The Google implementation has broken cookie handling (remnants of draft-13, it looks like, judging by how they do it). As I write this, we're on the 7th day of this brokenness. We advise bleeding-edge users of Firefox to switch off HTTP/2 support in the meantime, until Google wakes up and acts.
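
If you're hunting for that switch, it lives in about:config; assuming the pref name Nightly used for draft HTTP/2 at the time (my recollection, not something from this post), it is along the lines of:

network.http.spdy.enabled.http2draft = false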

You can actually switch HTTP/2 support back on once you've logged in, and it then continues to work fine. Below you can see what a lovely (wildly misleading) error message you get if you try HTTP/2 against Google right now with Firefox:

[Screenshot: google-http2-draft14-cookies error message]

Syndicated 2014-09-02 07:47:46 from daniel.haxx.se

2 Sep 2014 marnanel   » (Journeyer)

Gentle Readers: like an apple tree

Gentle Readers
a newsletter made for sharing
volume 2, number 1
1st September 2014: like an apple tree
What I’ve been up to

I've been up to surprisingly little in the last few days. I'm trying to be peaceful and spend time reading and taking things in, instead of always being on the go and trying to make things, otherwise I'll wear myself out. That may be crashingly obvious, but I've managed to avoid noticing it for years.

A poem of mine

TRANSPLANTED (T120)

Let an apple tree be planted
close beside a ditch of mud,
let its roots be parched and aching,
ever waiting for the flood;
so its small and bitter apples
overhang the streambed dry,
cursed to live and never flourish,
painful grow, and painful die.

Yet, this tree shall be transplanted
to a meadow by a stream;
clouds shall shower down their mercies,
sunlight throw its kindest beam;
roots recall the feel of fullness,
by the river, in the rain,
branches shall be pruned and ready,
hope and apples grow again.

A picture

http://gentlereaders.uk/pics/happy-birthday-eve
Adam: "Happy birthday, Eve!"
Eve: "It's today, not tomorrow."

Something wonderful

Mitochondria are tiny living things, rather like bacteria. They live inside the cells of almost all animals, plants, and fungi, where their job is to process glucose in order to provide a source of power for the rest of the cell. Without their help, we wouldn't be here.

http://gentlereaders.uk/pics/mitochondria
Two cheerful little mitochondria from a lung cell. Each is about 0.00025 millimetres across.
Photo by Louisa Howard, public domain.

What fascinates me particularly about mitochondria is that they have their own DNA, which is not at all like human DNA and much more like the DNA of bacteria. They're essentially a different creature. And because you inherit all your mitochondria only from your mother, mitochondrial DNA is very useful in tracing your ancestry.

http://gentlereaders.uk/pics/mito-inherit
So how did we come to have these creatures living inside our cells? The most commonly-accepted explanation is that two billion years ago, when complex cells were just starting out, the mitochondria discovered that the cells were a good place to live inside, with lots of glucose to feed on. It was just as useful for the cell, which needed the glucose processed. Symbiosis! The mitochondria hitched a lift, and they've been with us ever since. So even when you think you're alone, remember you're also a sort of walking mitochondrial city.

Something from someone else

A BIRTHDAY
by Christina Rossetti (1830-1894)

My heart is like a singing bird
Whose nest is in a water'd shoot;
My heart is like an apple-tree
Whose boughs are bent with thickset fruit;
My heart is like a rainbow shell
That paddles in a halcyon sea;
My heart is gladder than all these
Because my love is come to me.

Raise me a dais of silk and down;
Hang it with vair and purple dyes;
Carve it in doves and pomegranates,
And peacocks with a hundred eyes;
Work it in gold and silver grapes,
In leaves and silver fleurs-de-lys;
Because the birthday of my life
Is come, my love is come to me.

Colophon

Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://gentlereaders.uk/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.
 
 
This entry was originally posted at http://marnanel.dreamwidth.org/311201.html. Please comment there using OpenID.

Syndicated 2014-09-01 23:16:53 from Monument

31 Aug 2014 danstowell   » (Journeyer)

ArcTanGent 2014 festival

I'll admit it: I wasn't sure I could tolerate 48 hours of nothing but post-rock. Lots of great stuff in that scene - but all at once? Wouldn't it wear a bit thin? Well no, ArcTanGent festival was chuffing fab. My top three awesome stickers are awarded to:

  • Bear Makes Ninja - wow like math-rock with great indie-rock vocals and harmonies, and some blinding drumming which isn't obvious in that video I linked but you should really see.

  • AK/DK - a twopiece, and both of them play synths and effects and vocals and drums, shifting roles as they go to make great electro stuff totally live. Fun and danceable as hell.

  • Cleft - another twopiece, drums and guitar, using a loopstation to fill it out and make mathy tuneful stuff. Oh and great crowd interaction - this might violate postrock ethics but I do like a band that talks to the crowd. This crowd was pretty dedicated, they were actually singing along with the zany time-signature riffs.

Unfortunately we missed Rumour Cubes while putting our tent up in the rain, so I'll never know if they would have earnt a top awesome sticker. But loads of other stuff was also great: Jamie Lenman (from heavy to tuneful, like early Nirvana), Sleep Beggar (heavy angry hip-hop and chuffing rocking), Luo (ensemble postrock with some delicious intricate drum breaks), Year Of No Light (dark slow heavy doomy, like a black hole), Alarmist (another dose of good ensemble postrock), and Human Pyramids (sort of like a school orchestra playing postrock compositions... in a good way).

Almost all of these things I've mentioned were non-headline acts, and most of them were amazed to be in a tent with so many people digging their shit, since they were used to being the niche odd-time-signature weirdos at normal festivals :)

By way of contrast, a couple of the big names I found a bit boring to be honest, but I'll spare you that since overall the weekend was great with so much great stuff. Mono was a nice headliner to end with, enveloping, orchestral and often low-key - we were actually not "at" the main stage but sitting on a bench 50m or so up the slope. Lots of people were doing as we did, letting the sound wash its way up the hill as we took in the night.

I didn't join in the silent disco in the middle of the night, but it had a lovely effect: hundreds of people with headphones sang along to some indie rock classics, and from afar you could hear nothing except their perfectly-timed amateur indie choir. It sounded great.

Syndicated 2014-08-31 11:57:25 (Updated 2014-08-31 17:09:21) from Dan Stowell

31 Aug 2014 etbe   » (Master)

Links August 2014

Matt Palmer wrote a good overview of DNSSEC [1].

Sociological Images has an interesting article making the case for phasing out the US $0.01 coin [2]. The Australian $0.01 and $0.02 coins were worth much more when they were phased out.

Multiplicity is a board game that’s designed to address some of the failings of SimCity type games [3]. I haven’t played it yet but the page describing it is interesting.

Carlos Bueno’s article about the Mirrortocracy has some interesting insights into the flawed hiring culture of Silicon Valley [4].

Adam Bryant wrote an interesting article for NY Times about Google’s experiments with big data and hiring [5]. Among other things it seems that grades and test results have no correlation with job performance.

Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools [6]. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don’t end up earning more.

Kiwix is an offline Wikipedia reader for Android; it needs 9.5G of storage space for the database [7].

Melanie Poole wrote an informative article for Mamamia about the evil World Congress of Families and their connections to the Australian government [8].

The BBC has a great interactive web site about how big space is [9].

The Raspberry Pi Spy has an interesting article about automating Minecraft with Python [10].

Wired has an interesting article about the Bittorrent Sync platform for distributing encrypted data [11]. It’s apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things.

ABC news has an interesting article about the failure to diagnose girls with Autism [12].

The AbbottsLies.com.au site catalogs the lies of Tony Abbott [13]. There’s a lot of work in keeping up with that.

Racialicious.com has an interesting article about “Moff’s Law” about discussion of media in which someone says “why do you have to analyze it” [14].

Paul Rosenberg wrote an insightful article about conservative racism in the US, it’s a must-read [15].

Salon has an interesting and amusing article about a photography project where 100 people were tased by their loved ones [16]. Watch the videos.


Syndicated 2014-08-31 13:55:49 from etbe - Russell Coker

31 Aug 2014 shlomif   » (Master)

Finishing Off The Open Content / Web 2.0 Revolution: (#SummerNSA)

Headers
  • Subject: Finishing off the Open content / Web 2.0 revolution

  • From: Shlomi Fish (a.k.a “Rindolf”), the hacker king of the Open content/Web 2.0 revolution (~2000-2014)

  • To:

    1. Summer Glau, Hollywood actress, known for her roles in Firefly, the Sarah Connor Chronicles, xkcd and Summerschool at the NSA, and who I suspect wishes to become the new hacker queen.

    2. Sarah Michelle Gellar, Hollywood actress and producer, known for playing the fictional Buffy Summers, who was the hacker queen of the Web 1.0 revolution (~1997-2000), in the show Buffy the Vampire Slayer (BtVS). Ms. Gellar was also the frontwoman of the show, and Hollywood’s Alpha female for that period.

    3. Chuck Norris, martial artist, actor, and filmmaker, the inspiration for and subject of many satirical “facts” about him, which have become a very powerful weapon in their own right, and which also inspired my NSA “facts” and later and earlier humorous collections of facts. Furthermore, Mr. Norris appears to be the current Alpha male of Hollywood.

    4. Megan Fox, Hollywood actress, a very inspiring person, and someone whom I suspect wishes to become the next Hollywood Alpha female.

    5. Jennifer Lawrence, Hollywood actress, a very inspiring person, and the present Alpha female of Hollywood. Also provided a lot of inspiration for the multi-sectioned essay “Putting All The Cards On The Table (2013)”.

    6. General Keith B. Alexander, retired United States Army general, and former director of the National Security Agency (NSA).

    7. Joss Whedon, Hollywood filmmaker (writer, director, etc.) who is notable here as the creator of BtVS, and Firefly, and a potential back-up director for Summerschool at the NSA.

  • CC:

    1. Larry Wall, creator of the original “patch” program, the Usenet newsreader “rn”, the Perl programming language versions 1-to-6 and their “perl” implementations and the hacker king of the Open source/Usenet revolution (~1984-1997).

    2. Edward Snowden, former contractor for the CIA and the NSA, who is notable for foolishly, but gallantly, revealing a lot of internal claims of the NSA (= NSA “intel”) and becoming a media hero and an outcast. Mr. Snowden thought he was under constant threat for revealing what was likely mostly a product of delusional minds inside the NSA (or false/out-of-date data, as intelligence data probably often is).

    3. Randall Munroe, creator of the xkcd web-comics, which introduced me to Summer Glau, and provided a lot of inspiration and fodder both for “Summerschool at the NSA” and for other aspects of hackerism. He may wish to further collaborate with Glau on her journey as the new hacker monarch.

    4. All action heroes / hackers and geeks / amateurs of the world.

Producing “Summerschool at the NSA” (#SummerNSA)

Your assistance is required in producing the feature film “Summerschool at the NSA” based on my original screenplay. The screenplay is made available under the Creative Commons Attribution License (CC-by) and its text is original. I have designated my recommendations for some of the cast, and the film should be capable of being filmed mostly or entirely in a film studio and at a relatively low cost.

Note however that I encourage any productions whatsoever of the screenplay, including by enthusiastic independent film makers, on YouTube, and including producing them as voiced animations.

I also neither mind nor discourage any hacks of and deviations from the original screenplay, up to and including featuring Arnold Schwarzenegger sending Rihanna to kick the NSA’s ass, or Kermit the Frog doing the same with Fluttershy. (And you may consider both mutations as artistic challenges.)

What I do not want is for nothing to get done, and for there to be no film - by anyone - in the foreseeable future. So please get to it as soon as you can. I need you, and the world needs you.

If you create something, please mark it with the hashtag of #SummerNSA.

I have written about some profitable business models for creators of online culture, which do not involve ad revenue, which is small and has proven to be ineffective. Absent from the post is selling merchandise (see the famous “Merchandising” excerpt from Spaceballs), and placing selected Project Wonderful-like ads, which the web site owner has pre-approved and which are non-intrusive. Film makers and artists may wish to deploy these either with “Summerschool at the NSA” or with different cultural works.

Implications Of The “Summerschool At The NSA” Films

The open content / Web 2.0 revolution has proven to be a blazing success and a source of decent or better income, esteem, and publicity for many individuals, small companies, large companies, and other organisations. While during the Web 1.0 era information was hard to find, not reliable, and often hard to contribute to (what Prof. Lawrence Lessig calls "read only" vs. "read/write" in his book Remix, which I read and loved), in the Web 2.0 era one can build upon information, which is often cited and reliable, change it, enhance it, and perform many remixes as well as crossovers and “mashups”.

It is my belief that it was I, Shlomi Fish (“Rindolf”), who was the “Hacker King” (a.k.a “Warrior King/Queen/Monarch” / The best-of-the-best-of-the-best / The Saladin / The Qoheleth / The John Galt / etc.) of that revolution. To quote what Quark from the Star Trek franchise said about the Grand Nagus, he/she «has the greatest business mind… always thinking ten, sometimes twenty steps ahead of everyone else», and is the kind of person who has the same ideas as everyone else, only five years earlier, and thus is named a lunatic.

Nevertheless, my reign as Hacker Monarch reached its end with the writing of the screenplay, Summerschool at the NSA. The latter mixed and matched Judaism (Tanakh, Talmud, and Israelism), Buffy the Vampire Slayer, the xkcd web comic, the “99 problems” meme, the old “Publish or Perish” adage, the deeds and words of Saladin, open source and open content, some modern but not unthinkable technology, love/romance/sex/relationships, pop culture, humour, and more into what was essentially a realistic, Real Person Fiction story. Furthermore, it featured fictionalised versions of Sarah Michelle Gellar, Summer Glau, and General Keith Alexander, who was the director of the NSA at the time. I also ended up seeing it as my modernisation of Ayn Rand’s Atlas Shrugged novel, which was her magnum opus as Hacker Monarch (while still building upon it, referencing it, and going against some of its original premises).

Like Atlas Shrugged, “Summerschool at the NSA” was eventually understood to be my magnum opus, and I passed the baton to someone else, Summer Glau, and thus mostly concluded the open content/Web 2.0 revolution with a mostly happy ending and a blazing success. Sic transit gloria mundi (STGM).

The formalities for concluding all that are:

  1. Directing/producing the feature film or films of “Summerschool at the NSA” in whatever format they shall be done.

  2. Ms. Glau and I meeting somewhere and me asking her these questions, which I'll give along with the answers I expect:

    1. Question: Are you afraid to die?

      Answer: There is no correct answer.

    2. Question: Are you afraid to live?

      Answer: Maybe I have in the past, but I no longer am. I will do and say what seems right and good, whether people like it or not (while still being careful and avoiding being arrogant).

    3. Question: Do you wish to become the Hacker Monarch, while being fully aware of the implications of this role, and taking full responsibility for it?

      Answer: Yes, I do.

  3. I will give her my blessed/cursed amulet of power, a plain brown ten-sided die, that was given to me as a present by my friends at the time, and ask her to determine what to do with it next. I urge her not to throw it away or destroy it (by seeing if it blends or whatever), since despite its low cost and mundaneness, it is a fine piece of engineering.

  4. I will declare Summer Glau as the presiding hacker queen, and step down from my role, and become a hacker king emeritus.

One implication of all this is that we shall finally and almost completely unite these worlds:

  1. The Academia.

  2. The software industry / open source / Internet / World-wide-web workers.

  3. The “content”/culture creators, both the content industry (e.g: Hollywood, the MPAA, the RIAA, and many smaller local franchises around the world) and many hobbyists, amateur, independent and/or unsigned artists and content producers.

This is despite the fact that some of these worlds appeared to be in constant dispute with each other. This merging of worlds is similar to the merger of the AT&T UNIX/BSD world and that of the early PDP-10-based ARPANET and NSFNET hackers, which happened in the early 80s and in turn led to the open source / Usenet revolution.

There will likely be a lot of time to reflect upon my history and achievements during my reign as hacker monarch of the world, but I think the future is more important than the past. I have a lot of potential advice we can use to continue to battle the remaining man-made problems (e.g: bloodshed, suicides, deaths due to arrogance and carelessness, possible present and future environmental problems, unnecessary red tape and regulations, unnecessary hatred, antagonism and distrust, and vandalism) and some questions for further inquiry. However, it's now also up to the new generation of activists of the upcoming post-open content / post-Web-2.0 revolution (whose nature is yet to be discerned) to build upon the work of the activists and action heroes of the open content revolution, and take the world even more forward.

So let me just give some pieces of advice to Ms. (Summer) Glau, which are mostly relevant to other people as well.

Advice To The Upcoming Hacker Monarch
  1. Don’t be too arrogant and/or careless - I don’t want you to get killed prematurely, and it seems God punishes more people for that than for being bad.

    Note that Hubris in moderation is still very important, as almost all ancient and modern technology (from fire, through Aristotle’s Logic and science, through the lever, through modern architecture, through automobiles and air and space travel, to computers and computer networks, to this very essay and this very word) is a product of mankind wishing to defy "gravity" and show its environment that it is not bound by its rules.

    Furthermore, courage and spite are required as a way to avoid the “fear of living”: never fear what some other people think or do not think about you. Furthermore, accept criticism and even encourage people to prove you wrong or even offend you. Like the mightiest Klingon warriors say when they are proven wrong: “What a great day it was for me to die! Thank you for this excellent battle.”

  2. Don’t feel superior. Even if you are the hacker monarch, everyone else is or can be the most powerful man on Earth, and the Messiah. Even the smallest and most fragile inanimate object serves an important purpose in God’s (= The King of the Kings of the Kings) world.

    You can never travel the path or survive alone. You need each and everyone and everything out there.

  3. Be Yourself: remember that whatever you do or whoever you are, some people will always complain. Please all→Please none. Aim for perfection in imperfection. Remember that you’re awesome.

  4. Take good care of yourself. Have a lot of “Wine, Women & Song”: good food and drinks (that taste good and are what you desire at the moment); good company - of any sex; and clean, creative, enlightening fun - however amateurish or of apparently low quality.

    While it’s OK to be busy for short periods of time, don’t become a wage slave who doesn’t eat and drink well, doesn’t socialise, and doesn’t have time to enjoy themselves.

  5. Seize the day! (= “Carpe diem”) Don’t wait for a special occasion to enjoy yourself or contribute to the world - or usually both. Every day is the unbirthday of your friends, your fans, everybody, and of you, and it’s a good day to remind them that you love them.

    Every day can be the best day of your life so far.

  6. Don’t be pseudo-Utilitarian: if you made one person a little happier, then what you did was a blazing success. “He who saved one man, has essentially saved the world entire.”.

  7. Never deny that you are the hacker queen: proudly admit it. Too many hacker monarchs did not acknowledge their own self-worth. Don’t repeat the mistake that Larry Wall and I made by playing “The Invisible”, which is arguably the worst kind of hacker monarch.

  8. Be a hacker / action hero: bend the rules, violate them, surprise people, defy social norms, dogma, inertia, prejudice, entropy and gravity, all in order to earn your victory - and be happy and proud doing that.

  9. Be an alpha female (= see “Wesley Snipes” in this essay by Eric Sink) or a beta female (= “Denzel Washington”) or a little bit of both, but don’t be a Gamma↔Omega female.

  10. Get an active online presence. See my plan for that and the comments I got. One further note is to avoid Shaike Ophir’s “The English Teacher” definition of a monologue as “one person talking to himself”, which I have noticed many celebrities succumb to. The more you reply and interact with the people who respond to your online posts and comments, and collaborate with them (engaging in a true dialogue), the fewer redundant answers will be given, and the better the quality of the discussion will be.

  11. Be honest and enlightened, and constantly stay honest and enlightened. Honesty and enlightenment are processes, and one must constantly be committed to becoming more and more honest and enlightened, or else one immediately becomes dishonest, cynical, and stagnant.

  12. Practice the basics of the philosophy of Saladin, a very noble man, a strategic genius, and one of the most notable hacker kings in history.

  13. If you’re in a dilemma or run into some trouble, remember that I and possibly other hacker monarchs emeriti (such as Larry Wall) and your other friends who are hackers and geeks, are always there for you and can give you a fresh perspective on the situation.

    ( An independent person is not someone who does everything on their own, which is feudalism taken to an extreme. As long as he or she takes full responsibility for the outcome of their actions, they can ask or even pay for help or advice. )

  14. Finally, remember - Sic transit gloria mundi! You will most probably not be the hacker monarch forever, because one day you too will create your magnum opus, and a younger (at least in spirit), more awesome action hero will displace you as hacker monarch, because they want it more badly than you do. And they can be a man or a woman or a group, fact or fiction, animal, vegetable, or mineral, etc. And then you too will become the hacker monarch emerita and actually feel relieved about all that.

Have a lot of fun, stay smashing and awesome, and hack on!

Hail, Saladin! Hackers of the world, unite!

References and Further Reading
  1. “The Eternal Jew” - an early attempt at codifying “Rindolfism”, which is my personal, one man, dynamic philosophy.

  2. “Putting all the Cards on the Table (2013)” - a multi-sectioned essay, written in March 2013, a short time before I wrote Summerschool at the NSA, and inspired a lot by Silver Linings Playbook, Jennifer Lawrence, and the fact that she won the Academy Award for it (at age 22).

  3. Summary for “Putting more cards on the table (2014)” - points for an essay that is an update/amendment to the previous one.

  4. “Saladin Style” - a short, irresponsible essay about Saladin’s innovative and inspiring strategy and philosophy, that still has direct implications today.

  5. My works of fiction, humour and action heroism and my essays

  6. About “Rindolf” and “Rindolfism” - a page about my nickname and personal philosophy and my hopes and expectations for the future of me, Summer Glau, and everyone else.

  7. My Twitter feed, where I posted many thoughts and insights about “#SummerNSA” and other things. I have some other presence on social media sites.

Licence

This work is copyright by Shlomi Fish and licensed under the Creative Commons Attribution licence version 3.0 (or any later version). See my interpretation of it.

Syndicated 2014-08-31 12:13:12 from shlomif

31 Aug 2014 Stevey   » (Master)

A diversion - The National Health Service

Today we have a little diversion to talk about the National Health Service. The NHS is the publicly funded healthcare system in the UK.

Actually there are four such services in the UK, only one of which has this name:

  • The national health service (England)
  • Health and Social Care in Northern Ireland.
  • NHS Scotland.
  • NHS Wales.

In theory this doesn't matter: if you're in the UK and you break your leg, you get carried to a hospital and you get treated. There are differences in policies because different rules apply, but the basic stuff - "free health care" - applies to all locations.

(Differences? In Scotland you get eye-tests for free, in England you pay.)

My wife works as an accident & emergency doctor, and has recently changed jobs. Hearing her talk about her work is fascinating.

The hospitals she's worked in (Dundee, Perth, Kirkcaldy, Edinburgh, Livingstone) are interesting places. During the week things are usually reasonably quiet, and during the weekend things get significantly more busy. (This might mean there are 20 doctors to hand, versus three at quieter times.)

Weekends are busy largely because people fall down hills, get drunk and fight, and are at home rather than at work - where 90% of accidents occur.

Of course even a "quiet" week can be busy, because folk will have heart-attacks round the clock, and somebody somewhere will always be playing with a power tool, a ladder, or both!

So what was the point of this post? Well, she's recently transferred to working for a children's hospital (still in A&E) and the patients are so very different.

I expected the injuries/patients she'd see to differ. Few 10-year-olds will arrive drunk (though it does happen), and few adults fall out of trees or eat washing machine detergent, but talking to her about her day when she returns home, it's fascinating how many things are completely different from what I expected.

Adults come to hospital mostly because they're sick, injured, or drunk.

Children come to hospital mostly because their parents are paranoid.

A child has a rash? Doctors are closed? Let's go to the emergency ward!

A child has fallen out of a tree and has a bruise, a lump, or complains of pain? Doctors are closed? Let's go to the emergency ward!

I've not kept statistics, though I wish I could, but it seems that she can go 3-5 days between seeing an actually injured or chronically sick child. It's the first-time parents who bring kids in when they don't need to.

Understandable, completely understandable, but at the same time I'm sure it is more than a little frustrating for all involved.

Finally one thing I've learned, which seems completely stupid, is the NHS-Scotland approach to recruitment. You apply for a role, such as "A&E doctor" and after an interview, etc, you get told "You've been accepted - you will now work in Glasgow".

In short you apply for a post, and then get told where it will be based afterwards. There's no ability to say "I'd like to be a doctor in city X, where I live"; you apply, and get told where it is post-acceptance. If it is 100+ miles away you either choose to commute, or decline and go through the process again.

This has led to Kirsi working in hospitals within a radius of about 100km of the city we live in, and has meant she's had to turn down several posts.

And that is all I have to say about the NHS for the moment, except for the implicit pity for people who have to pay (inflated and life-changing) prices for things in other countries.

Syndicated 2014-08-31 11:51:46 from Steve Kemp's Blog

30 Aug 2014 marnanel   » (Journeyer)

Gentle Readers: harmless phantoms

Gentle Readers
a newsletter made for sharing
volume 1, number 20
25th August 2014: harmless phantoms
What I’ve been up to

It's been three months! This is the last issue of volume 1, and next week volume 2 begins: it'll be more of the same, except that I'm adding reviews of some of the children's books I've loved in my life. I'll be collecting the twenty issues of volume 1 together in a printed book, which I'll be emailing you about when it's ready.

This week has been busy but uneventful, which I wish was a less common mixture, but it was good to drop into Manchester during the Pride festival. I apologise for this issue being late: I had it all prepared, and then there was a server problem, and then I found I'd lost one of the sections completely, so it had to be rewritten. Never mind: you have it now!

A poem of mine

ON FIRST LOOKING INTO AN A TO Z (T13)

My talent (or my curse) is getting lost:
my routes are recondite and esoteric.
Perverted turns on every road I crossed
have dogged my feet from Dover up to Berwick.
My move to London only served to show
what fearful feast of foolishness was mine:
I lost my way from Tower Hill to Bow,
and rode the wrong way round the Circle Line.
In nameless London lanes I wandered then
whose tales belied my tattered A to Z,
and even now, in memory again
I plod despairing, Barking in my head,
still losing track of who and where I am,
silent, upon a street in Dagenham.

(Notes: the title is a reference to Keats's sonnet On First Looking into Chapman's Homer. "A to Z" is a standard book of London streetmaps.)

 

A picture

http://thomasthurman.org/pics/on-sweet-bathroom
On-sweet bathroom

Something wonderful

In the poem above, I mentioned Berwick-upon-Tweed, or Berwick for short, which rhymes with Derek. Berwick is the most northerly town in England, two miles from the Scottish border. It stands at the mouth of the river Tweed, which divides Scotland from England in those parts, but Berwick is on the Scottish bank: for quite a bit of its history it was a very southerly town in Scotland instead. The town's football team still plays in the Scottish leagues instead of the English. Berwick has been in English hands since 1482, though given next month's referendum I'm not going to guess how long that will last.

http://gentlereaders.uk/pics/berwick-map

As befits such a frontier town, it's impressively fortified, and the castle and ramparts are well worth seeing. But today I particularly wanted to tell you about the story of its war with Russia.
 

http://gentlereaders.uk/pics/berwick-miller


Fans of Jasper Fforde's Thursday Next series, and anyone who had to learn The Charge of the Light Brigade at school, will remember the Crimean War, a conflict which remained an infamous example of pointless waste of life until at least 1914. Now, because Berwick had changed hands between England and Scotland several times, it was once the rule that legal documents would mention both countries as "England, Scotland, and Berwick-upon-Tweed" to be on the safe side. And the story goes that when Britain declared war on Russia in 1853, it was in the name of England, Scotland, and Berwick-upon-Tweed, but the peace treaty in 1856 forgot to include Berwick, so this small town remained technically at war with Russia for over a century.

In fact, the tale is untrue: Berwick wasn't mentioned in the declaration of war, as far as I know, though I admit I haven't been able to trace a copy-- can any of you do any better? But such is the power of story that in 1966, with the Cold War becoming ever more tense, the town council decided that something had to be done about the problem. So the London correspondent of Pravda, one Oleg Orestov, travelled the 350 miles up to Berwick for peace talks, so that everyone could be sure that Berwick was not at war with the USSR. The mayor told Mr Orestov, "Please tell the Russian people through your newspaper that they can sleep peacefully in their beds."

Something from someone else

from HAUNTED HOUSES
by Henry Wadsworth Longfellow (1807-1882)

All houses wherein men have lived and died
Are haunted houses. Through the open doors
The harmless phantoms on their errands glide,
With feet that make no sound upon the floors.

We meet them at the doorway, on the stair,
Along the passages they come and go,
Impalpable impressions on the air,
A sense of something moving to and fro.

There are more guests at table than the hosts
Invited; the illuminated hall
Is thronged with quiet, inoffensive ghosts,
As silent as the pictures on the wall.

Colophon
Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://gentlereaders.uk/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.
This entry was originally posted at http://marnanel.dreamwidth.org/310953.html. Please comment there using OpenID.

Syndicated 2014-08-30 13:44:45 from Monument

29 Aug 2014 dmarti   » (Master)

Don't punch the monkey. Embrace the Badger.

One of the main reactions I get to Targeted Advertising Considered Harmful is: why are you always on about saving advertising? Advertising? Really? Wouldn't it be better to have a world where you don't need advertising?

Even when I do point out how non-targeted ads are good for publishers and advertisers, the obvious question is, why should I care? As a member of the audience, or a regular citizen, why does advertising matter? And what's all this about the thankless task of saving online advertising from itself? I didn't sign up for that.

The answer is: Because externalities.

Some advertising has positive externalities.

The biggest positive externality is ad-supported content that later becomes available for other uses. For example, short story readers today are benefitting from magazine ad budgets of the 19th-20th centuries.

Every time you binge-watch an old TV show, you're a positive externality winner, using a cultural good originally funded by advertising.

I agree with the people who want ad-supported content for free, or at a subsidized price. I'm not going to condemn all advertising as The Internet's Original Sin. I just think that we need to fix the bugs that make Internet advertising less valuable than ads in older media.

Some advertising has negative externalities.

On the negative side, the biggest externality is the identity theft risks inherent in large databases of PII. (And it's all PII. Anonymization is bogus.) The costs of identity theft fall on the people whose information is compromised, not on the companies that chose to collect it.

In 20 years, people will look back at John Battelle's surveillance marketing fandom the way we now watch those 1950s industrial films that praise PCBs, or asbestos, or some other God-awful substance that we're still spending billions to clean up. PII is informational hazmat.

The French Task Force on Taxation of the Digital Economy suggests a unit charge per user monitored to address the dangers that uncontrolled practices regarding the use of these data are likely to raise for the protection of public freedoms. But although that kind of thing might fly in Europe, in the USA we have to use technology. And that's where regular people come in.

What you can do

Your choice to protect your privacy by blocking those creepy targeted ads that everyone hates is not a selfish one. You're helping to re-shape the economy. You're helping to move ad spending away from ads that target you, and have negative externalities, and towards ads that are tied to content, and have positive externalities. It's unlikely that Internet ads will ever be all positive, or all negative, but privacy-enabled users can shift the balance in a good way.

Don't punch the monkey. Embrace the Badger.

Syndicated 2014-08-29 13:16:08 from Don Marti

29 Aug 2014 Stevey   » (Master)

Migration of services and hosts

Yesterday I carried out the upgrade of a Debian host from Squeeze to Wheezy for a friend. I like doing odd-jobs like this as they're generally painless, and when there are problems it is a fun learning experience.

I accidentally forgot to check on the status of the MySQL server on that particular host, which was a little embarrassing, but I later put together a reasonably thorough serverspec recipe to describe how the machine should be set up, which will avoid that problem in the future - Introduction/tutorial here.
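
For next time, a couple of extra rules would have caught that - a minimal sketch using the stock serverspec matchers (the service name will vary by distribution):

describe service('mysql') do
  # The database should come back up after the dist-upgrade.
  it { should be_enabled }
  it { should be_running }
end

describe port(3306) do
  it { should be_listening }
end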

The more I use serverspec the more I like it. My own personal servers have good rules now:

shelob ~/Repos/git.steve.org.uk/server/testing $ make
..
Finished in 1 minute 6.53 seconds
362 examples, 0 failures

Slow, but comprehensive.

In other news I've now migrated every single one of my personal mercurial repositories over to git. I didn't have a particular reason for doing that, but I've started using git more and more for collaboration with others and using two systems felt like an annoyance.

That means I no longer have to host two different kinds of repositories, and I can use the excellent gitbucket software on my git repository host.

Needless to say I wrote a policy for this host too:

#
#  The host should be wheezy.
#
describe command("lsb_release -d") do
  its(:stdout) { should match /wheezy/ }
end


#
# Our gitbucket instance should be running, under runit.
#
describe supervise('gitbucket') do
  its(:status) { should eq 'run' }
end

#
# nginx will proxy to our back-end
#
describe service('nginx') do
  it { should be_enabled   }
  it { should be_running   }
end
describe port(80) do
  it { should be_listening }
end

#
#  Host should resolve
#
describe host("git.steve.org.uk") do
  it { should be_resolvable.by('dns') }
end

Simple stuff, but being able to trigger all these kind of tests, on all my hosts, with one command, is very reassuring.

Syndicated 2014-08-29 13:28:28 from Steve Kemp's Blog

29 Aug 2014 badvogato   » (Master)

back from a weekend of camping and hiking, rafting fun...
http://www.nps.gov/dewa/planyourvisit/trail-maps-nj.htm

finished the book 'The Music Lesson' by Victor Wooten, an incredible story about the spirit of music and life.

reflecting on American life through the TV series 'Mad Men'. That's all for now.

29 Aug 2014 bagder   » (Master)

Firefox OS Flatfish Bluedroid fix

Hey, when I just built my own Firefox OS (b2g) image for my Firefox OS Tablet (flatfish) I ran into this (known) problem:

Can't find necessary file(s) of Bluedroid in the backup-flatfish folder.
Please update the system image for supporting Bluedroid (Bug-986314),
so that the needed binary files can be extracted from your flatfish device.

So, as I struggled to figure out the exact instructions on how to proceed from this, I figured I should jot down what I did in the hopes that it perhaps will help a fellow hacker at some point:

  1. Download the 3 *.img files from the dropbox site that is referenced from bug 986314.
  2. Download the flash-flatfish.sh script from the same dropbox place
  3. Make sure you have ‘fastboot’ installed (I’m mentioning this here because it turned out I didn’t, even though I had already built and flashed my Flame phone successfully without it). “apt-get install android-tools-fastboot” solved it for me. Note that if it isn’t installed, the flash-flatfish.sh script will claim that the device is not in fastboot mode and stop with an error message saying so.
  4. Finally: run the script “./flash-flatfish.sh [dir with the 3 .img files]”
  5. Once it has succeeded, the tablet reboots
  6. Remove the backup-flatfish directory in the build dir.
  7. Restart the flatfish build and it should now get past that Bluedroid nit (the full sequence is sketched below)
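
In shell terms, the whole dance looks roughly like this (assuming the three .img files were saved to ~/flatfish-imgs and that your tree uses the stock B2G ./build.sh - both are my assumptions, so adjust to taste):

sudo apt-get install android-tools-fastboot   # step 3
./flash-flatfish.sh ~/flatfish-imgs           # steps 4 and 5; the tablet reboots
rm -rf backup-flatfish                        # step 6, from the build dir
./build.sh                                    # step 7: restart the flatfish build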

Enjoy!

Syndicated 2014-08-29 12:11:30 from daniel.haxx.se

29 Aug 2014 robertc   » (Master)

Test processes as servers

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the upside, you have a one-way pipeline – there's no interactivity needed, which makes it very, very easy to write a subunit backend that e.g. testr can use.

On the downside, there's no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that's sometimes quite expensive, particularly in test suites with tens of thousands of tests.

Now, for use in the development edit-execute loop this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn't it be nice if tools like testr that run tests for you didn't have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery, directory walking and importing)? Secondly, testr has an inconsistent interface – if testr is letting a user debug through a chain of child workers, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it's possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid the new-process (and more importantly complete-enumeration) cost *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (let's say stdin)
  3. On startup it might eagerly load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so let's stay with that for now)
  5. Another would be run-tests, which would take a set of test ids and then filter-and-run just those ids from the available tests, with output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be std-in, a command providing a packet of stdin – used for interacting with debuggers (a minimal sketch of this loop follows the list)
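
To make the shape concrete, here is a minimal sketch of that command loop in Python. This is illustration only, not the real subunit.run interface; the one-JSON-command-per-line framing and the handler signatures are assumptions made up for the sketch:

import json
import sys

def serve(tests, run_test, debugger_stdin):
    # tests: dict mapping test id -> test case.
    # run_test(test_id): runs one test, emitting results on stdout.
    # debugger_stdin(data): forwards bytes to whatever is being debugged.
    for line in sys.stdin:
        cmd = json.loads(line)
        if cmd["op"] == "list-tests":
            # Enumerate every available test id on the output channel.
            for test_id in sorted(tests):
                print(test_id)
        elif cmd["op"] == "run-tests":
            # Filter-and-run just the requested ids; large batches let
            # fixture optimisations amortise across many tests.
            for test_id in cmd["ids"]:
                if test_id in tests:
                    run_test(test_id)
        elif cmd["op"] == "std-in":
            # Hand a packet of input through to the thing being debugged.
            debugger_stdin(cmd["data"])
        sys.stdout.flush()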

So that seems pretty approachable to me – we don't even need an async loop in there, as long as we're willing to patch select etc (for the stdin handling in some environments like Twisted). If we don't want to monkey patch like that, we'll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…


Syndicated 2014-08-29 03:48:18 from Code happens

28 Aug 2014 Rich   » (Master)

Apache httpd at ApacheCon Budapest

tl;dr - There will be a full day of Apache httpd content at ApacheCon Europe, in Budapest, November 17th - apacheconeu2014.sched.org/type/httpd

Links:

* ApacheCon website - http://apachecon.eu
* ApacheCon Schedule - http://apacheconeu2014.sched.org/
* Register - http://events.linuxfoundation.org//events/apachecon-europe/attend/register
* Apache httpd - http://httpd.apache.org/

I'll be giving two talks about the Apache http server at ApacheCon.eu in a little over 2 months.

On Monday morning (November 17th) I'll be speaking about Configurable Configuration in httpd. New in Apache httpd 2.4 is the ability to put conditional statements in your configuration file which are evaluated at request time rather than at server startup time. This means that you can have the configuration adapt to the specifics of the request - like, where in the world it came from, what time of day it is, what browser they're using, and so on. With the new If/ElseIf/Else syntax, you can embed this logic directly in your configuration.
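
For example, something like this (a minimal sketch - the hostname is a placeholder, and the Header line assumes mod_headers is loaded):

<If "%{HTTP_HOST} != 'www.example.com'">
    # Evaluated at request time: send visitors to the canonical name
    Redirect "/" "http://www.example.com/"
</If>
<Else>
    Header set X-Canonical "yes"
</Else>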

2.4 also includes mod_macro, and a new expression evaluation engine, which further enhance httpd's ability to have a truly flexible configuration language.
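
As a taste of mod_macro (a sketch only - the macro name and paths are invented):

<Macro VHost $host>
  <VirtualHost *:80>
    ServerName $host
    DocumentRoot "/var/www/$host"
  </VirtualHost>
</Macro>

Use VHost example.org
Use VHost example.net
UndefMacro VHost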

Later in the day, I'll be speaking about mod_rewrite, the module that lets you manipulate requests using regular expressions and other logic, also at request time. Most people who have some kind of website are forced to use mod_rewrite now and then, and there's a lot of terrible advice online about ways to use it. In this session, you'll learn the basics of regular expression syntax, and how to correctly craft rewrite expressions.
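
Here's a hypothetical flavour of what that looks like (this rule is invented for illustration, not taken from the talk):

RewriteEngine on
# Redirect old date-based blog URLs to a flat layout, keeping the slug
RewriteRule "^/blog/\d+/(.+)$" "/posts/$1" [R=301,L]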

There's other httpd content throughout the day, and the people who created this technology will be on hand to answer your questions, and teach you all of the details of using the server. We'll also have a hackathon running the entire length of the conference, where people will be working on various aspects of the server. In particular, I'll be working on the documentation. If you're interested in participating in the httpd docs, this is a great time to learn how to do that, and dive into submitting your first patch.

See you there!

Syndicated 2014-08-28 14:04:37 (Updated 2014-08-28 14:18:45) from Notes In The Margin

28 Aug 2014 idcmp   » (Journeyer)

Backing up OS X onto Linux. Finally.

I've tried all sorts of black voodoo magic to make this work, but finally I have something repeatable and reliable. Also, please excuse the horrible formatting of this post.

History

I started with Time Machine talking to my Linux box with netatalk. This worked fine until one day Time Machine told me it needed to back up everything all over again.

Then it started to tell me this almost weekly.  Apparently when this happens, the existing backup is corrupt and all that precious data is at worst irretrievable or at best tedious to retrieve. These are not attributes I want associated with my backup solution.

Then I did an rsync over ssh to my Linux box. This is fine except that it lacks all the special permissions, resource forks, ACLs, etc, etc, that are hidden in a Mac filesystem.

Then I tried SuperDuper! backing up to a directory served up via netatalk, mounted via afp:// on OS X. This worked, but was mind-numbingly slow. Also, it would mean I'd have to pay for a tool if I wanted to do incremental backups. This gets expensive, as I also back up a few friends' OS X laptops on my Linux file server.

I tried SuperDuper! backing up over Samba, but hdiutil create apparently doesn't work over Samba. Workarounds all needed the purchased version of SuperDuper!.

There's *another* workaround for SuperDuper! where I can use MacFUSE and sshfs, but the MacFUSE author has abandoned the project and recommends that people not use it.

Sheesh.

The Solution

Ultimately, the goal is to make a sparsebundle HFS+ disk image, put it on a Samba mounted share and rsync my data over to it. You'd be surprised how many niggly bits there are for this.

Install Rsync


First, I grabbed the 3.1.x version of rsync from Homebrew - install Homebrew as per the directions there, then run:

brew install https://raw.github.com/Homebrew/homebrew-dupes/master/rsync.rb

If you've been digging through voodoo magic, then you'll be happy to hear this version of rsync has all the rsync patches you'll read about (like --protect-decmpfs).

Samba

Nobody needs another out of date blog entry explaining how to setup Samba. Follow some other guide, make sure Samba starts automatically and use smbpasswd to create an account.  

I recommend using the name of the machine being backed up as the account name. I'm calling that machinename for the rest of this post.

Make sure you can mount this share on OS X via smb:// ( Finder > Go > Connect to Server... ).  Make sure you can 1) create a file, 2) edit and save the file, 3) delete the file.  I'm going to assume you've mounted this share at /Volumes/Machinename

Backup Into

Now let's make something for us to back up into. Figure out how big the disk is on the source machine (we'll assume 100g), then run:

hdiutil create /tmp/backup.sparsebundle -size 100g -type SPARSEBUNDLE -nospotlight -volname 'Machinename-Backup' -verbose -fs 'Case-sensitive Journaled HFS+'

Yes, you're creating it in /tmp; this is to work around hdiutil create not liking Samba.

Next you'll want to copy this sparse bundle onto your Samba share:

 cp -rvp /tmp/backup.sparsebundle /Volumes/machinename


This will copy a bunch of files and should succeed without any warnings. Now let's mount this sparse bundle:

hdiutil attach /Volumes/machinename/backup.sparsebundle -mount required -verbose

You should now have /Volumes/Machinename-Backup mounted on your system. Fun story, OS X recognizes that this disk image is hosted off the machine, so it mounts this disk image with "noowners" (see mount man page). That's going to be a problem for our backup, so we need to tell OS X it's okay to use userids normally:

sudo diskutil enableOwnership /Volumes/Machinename-Backup

Preparing Rsync

There are a handful of files which it's recommended to exclude:


.DocumentRevisions-*/
.Spotlight-*/
/.fseventsd
/.hotfiles.btree
.Trashes
/afs/*
/automount/*
/cores/*
/private/var/db/dyld/dyld_*
/System/Library/Caches/com.apple.bootstamps/*
/System/Library/Caches/com.apple.corestorage/*
/System/Library/Caches/com.apple.kext.caches/*
/dev/*
/automount
/.vol/*
/net
/private/tmp/*
/private/var/run/*
/private/var/spool/postfix/*
/private/var/vm/*
/private/var/folders/*
/Previous Systems.localized
/tmp/*
/Volumes/*
*/.Trash
/Backups.backupdb
/.MobileBackups
/.bzvol
/PGPWDE01
/PGPWDE02

Store this in a file somewhere. I stored mine as exclude-from.txt in /Volumes/Machinename

Okay, now we're ready to run rsync. I think the correct arguments to rsync are: -aNHAXx --hfs-compression --protect-decmpfs --itemize-changes --super --fileflags --force-change --crtimes

So, we run:

rsync -aNHAXx --hfs-compression --protect-decmpfs --itemize-changes --super --fileflags --force-change --crtimes --exclude-from=/Volumes/Machinename/exclude-from.txt / /Volumes/Machinename-Backup/

When The Backup Is Done

This will take a little while. When it's done, you can bless your backup so that it can be booted:

sudo bless -folder /Volumes/Machinename-Backup/System/Library/CoreServices

Then you can unmount your backup:

hdiutil detach /Volumes/Machinename-Backup

Periodically, and after your first run, you should compact down your sparsebundle disk image:

hdiutil compact /Volumes/Machinename/backup.sparsebundle -batteryallowed

You can now log into your Linux server and tar up the backup (apparently XZ > bzip2 for compression size).

 tar Jcvf machinename-backup.tar.xz backup.sparsebundle

Depending on the size of that tarball, you could upload it to Google Drive, Drop Box, etc. Before you do, you'll probably want to encrypt it. I used OpenSSL:

openssl aes-256-cbc -a -salt -in machinename-backup.tar.xz -out machinename-backup.tar.xz.aes-256-cbc

Many of these steps will take *hours* if you have a lot of data, so you might consider backing up parts of your system more frequently and doing your whole system once every so often.

Why This Is Nice

One thing I really love about this setup is that each piece of the puzzle does one thing and does it well. If Samba is too slow, I could go back to netatalk. The sparsebundle disk image hosts HFS+ properly with all its OS X-specific voodoo and rsync's job is to copy files. If there's a better file copier, I could drop that in.

Conclusion

I left a lot out. I know. I'm kind of expecting you to have a rough idea of how to get around OS X and Linux, figure out how to put most of the above in a shell script (a rough sketch follows), decide when to do backups, how to store those tarballs, etc. Hopefully, though, this will help someone who just needs some of the key ingredients to make it work.
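
As a starting point, here's a rough sketch of such a script, built purely from the commands above (the machine name and mount points are the assumptions from this post):

#!/bin/sh
set -e
NAME=Machinename
SHARE=/Volumes/$NAME
BACKUP=/Volumes/$NAME-Backup

# Mount the sparse bundle and allow normal ownership handling
hdiutil attach "$SHARE/backup.sparsebundle" -mount required
sudo diskutil enableOwnership "$BACKUP"

# Copy everything across, honouring the exclude list
rsync -aNHAXx --hfs-compression --protect-decmpfs --itemize-changes \
      --super --fileflags --force-change --crtimes \
      --exclude-from="$SHARE/exclude-from.txt" / "$BACKUP/"

# Make the backup bootable, unmount it, and reclaim unused space
sudo bless -folder "$BACKUP/System/Library/CoreServices"
hdiutil detach "$BACKUP"
hdiutil compact "$SHARE/backup.sparsebundle" -batteryallowed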



Syndicated 2014-08-27 23:49:00 (Updated 2014-08-29 17:52:35) from Idcmp

27 Aug 2014 wingo   » (Master)

a wingolog user's manual

Greetings, dear readers!

Welcome to my little corner of the internet. This is my place to share and write about things that are important to me. I'm delighted that you stopped by.

Unlike a number of other personal sites on the tubes, I have comments enabled on most of these blog posts. It's gratifying to me to hear when people enjoy an article. I also really appreciate it when people bring new information or links or things I hadn't thought of.

Of course, this isn't like some professional peer-reviewed journal; it's above all a place for me to write about my wanderings and explorations. Most of the things I find on my way have already been found by others, but they are no less new to me. As Goethe said, quoted in the introduction to The Joy of Cooking: "That which thy forbears have bequeathed to thee, earn it anew if thou wouldst possess it."

In that spirit I would enjoin my more knowledgeable correspondents to offer their insights with the joy of earning-anew, and particularly to recognize and banish the spectre of that moldy, soul-killing "well-actually" response that is present on so many other parts of the internet.

I've had a good experience with comments on this site, and I'm a bit lazy, so I take an optimistic approach to moderation. By default, comments are posted immediately. Every so often -- more often after a recent post, less often in between -- I unpublish comments that I don't feel contribute to the piece, or which I don't like for whatever reason. It's somewhat arbitrary, but hey, welcome to my corner of the internet.

This has the disadvantage that some unwanted comments end up published, then they go away. If you notice this happening to someone else's post, it's best to just ignore it, and in particular to not "go meta" and ask in the comments why a previous comment isn't there any more. If it happens to you, I'd ask you to re-read this post and refrain from unwelcome comments in the future. If you think I made an error -- it can happen -- let me know privately.

Finally, and it really shouldn't have to be said, but racism, sexism, homophobia, transphobia, and ableism are not welcome here. If you see such a comment that I should delete and have missed, let me know privately. However even among well-meaning people, and that includes me, there are ways of behaving that reinforce subtle bias. Please do point out such instances in articles or comments, either publicly or privately. Working on ableist language is a particular challenge of mine.

You can contact me via comments (anonymous or not), via email (wingo@pobox.com), twitter (@andywingo), or IRC (wingo on freenode). Thanks for reading, and happy hacking :)

Syndicated 2014-08-27 08:37:17 from wingolog

27 Aug 2014 bagder   » (Master)

Going to FOSDEM 2015

Yeps,

I’m going there and I know several friends are going too, so this is just my way of pointing this out to the ones of you who still haven’t made up your mind! There’s still a lot of time left as this event is taking place late January next year.

I intend to try to get a talk accepted this time, and I would love to meet up with more curl contributors and fans.

fosdem

Syndicated 2014-08-27 09:01:10 from daniel.haxx.se

27 Aug 2014 marnanel   » (Journeyer)

Not the ice bucket challenge

I spent like two hours making this. I'm sure there was some good reason for that.



This entry was originally posted at http://marnanel.dreamwidth.org/310657.html. Please comment there using OpenID.

Syndicated 2014-08-27 00:24:13 from Monument

26 Aug 2014 Rich   » (Master)

LinuxCon NA 2014

Last week I attended LinuxCon North America in Chicago.