Recent blog entries

16 Aug 2014 Skud   » (Master)

Dinner, aka, too impatient to wait for a real loaf to rise


Syndicated 2014-08-16 12:28:46 from Infotropism

16 Aug 2014 Skud   » (Master)

Testing Instagram/ifttt/wordpress/DW integration

15 Aug 2014 caolan   » (Master)

dialog conversion status, 4 to go

Converting LibreOffice dialogs to .ui format, 4 left

I should go on vacation more often. On my return I find that Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, have now converted all but 4 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format.

Here's the list of the last four: one (a monster) whose conversion is in progress, one that should ideally be removed in favour of a duplicate dialog, and two that have no known route to display them. Hacking the code temporarily to force those two to appear is probably no biggy.


Current conversion stats are:
820 .ui files currently exist
There are 3 unconverted dialogs
There is 1 unconverted tabpage
An estimated additional 4 .ui are required
We are 99% of the way through.

What's next? Well, *cough*, the above covers all the dialogs and tabpages in the classic .src format. There is, however, also a host of ErrorBox, InfoBox and QueryBox elements that still exist in the .src format.

These take just two pieces of information, a string to display and some bits that set what buttons to show, e.g. cancel, close, ok + cancel, etc. We want to remove them in favour of the Gtk-alike MessageDialog, but we don't want to actually convert them to .ui format, because they are so simple it makes more sense to just reduce them to strings like this sample commit demonstrates. This might even be possible to at least somewhat automate.

I've now updated count-todo-dialogs to display the count of those *Box elements that exist in .src file format, but I'll elide the count of them until the last 4 true dialogs+tabpages are gone.
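If you're curious in the meantime, a rough count can be had straight from the source tree. This is only a sketch with git grep, not what count-todo-dialogs actually does, and it assumes the definitions start at the beginning of a line in the .src files:

# which .src files still define ErrorBox/InfoBox/QueryBox elements
git grep -El "^(ErrorBox|InfoBox|QueryBox)" -- "*.src"

# and a rough per-type count of the definitions themselves
git grep -Eh "^(ErrorBox|InfoBox|QueryBox)" -- "*.src" | awk '{print $1}' | sort | uniq -c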

Syndicated 2014-08-15 15:54:00 (Updated 2014-08-15 15:54:42) from Caolán McNamara

15 Aug 2014 Stevey   » (Master)

A tale of two products

This is a random post inspired by recent purchases. Some things we buy are practical, others are a little arbitrary.

I tend to avoid buying things for the sake of it, and have explicitly started decluttering our house over the past few years. That said sometimes things just seem sufficiently "cool" that they get bought without too much thought.

This entry is about two things.

A couple of years ago my bathroom was ripped apart and refitted. Gone was the old and nasty room, and in its place was a glorious space. There was only one downside to the new bathroom - you turn on the light and the fan comes on too.

When your wife works funny shifts at the hospital you can find that the (quiet) fan sounds very loud in the middle of the night and wakes you up.

So I figured we could buy a couple of LED lights and scatter them around the place - when it is dark the movement sensors turn on the lights.

These things are amazing. We have one sat on a shelf, one velcroed to the bottom of the sink, and one on the floor, just hidden underneath the toilet.

Due to the shiny-white walls of the room they're all you need in the dark.

By contrast my second purchase was a mistake - the Logitech Harmony 650 Universal Remote Control should be great. It clearly has the features I want - it is able to power:

  • Our TV.
  • Our Sky-box.
  • Our DVD player.

The problem is solely due to the horrific software. You program the device via an application/website which works only under Windows.

I had to resort to installing Windows in a virtual machine to make it run:

# Get the Bus/Device number for the USB device (strip only the leading
# zeros; "tr -d 0" would also mangle numbers that merely contain a zero)
bus=$(lsusb | grep -i Harmony | awk '{print $2}' | sed 's/^0*//')
id=$(lsusb | grep -i Harmony | awk '{print $4}' | tr -d : | sed 's/^0*//')

# pass to kvm
kvm -localtime ..  -usb -device usb-host,hostbus=$bus,hostaddr=$id ..

That allows the device to be passed through to Windows, though you'll later have to jump onto the Qemu console to re-add the device as the software disconnects and reconnects it at random times, and the bus changes. Sigh.
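For reference, if the device is given an id in that command line (say id=harmony tacked onto the -device option - the name is arbitrary), re-adding it later from the QEMU monitor looks roughly like this, with the bus/address values taken from a fresh lsusb:

(qemu) info usb
(qemu) device_del harmony
(qemu) device_add usb-host,hostbus=3,hostaddr=7,id=harmony

device_del throws away the stale entry and device_add attaches the device again at its new address.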

I guess I can pretend it works, and it has cut down on the number of remotes sat on our table, but the overwhelmingly negative setup and configuration process has really soured me on it.

There is a linux application which will take a configuration file and squirt it onto the device, when attached via a USB cable. This software, which I found during research prior to buying it, is useful but not as much as I'd expected. Why? Well the software lets you upload the config file, but to get a config file you must fully complete the setup on Windows. It is impossible to configure/use this device solely using GNU/Linux.

(Apparently there is MacOS software too, I don't use macs. *shrugs*)

In conclusion - Motion-activated LED lights, more useful than expected, but Harmony causes Discord.

Syndicated 2014-08-15 12:14:46 from Steve Kemp's Blog

15 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: cells

This is the next post summarizing the Juno Nova mid-cycle meetup. This post covers the cells functionality used by some deployments to scale Nova.

For those unfamiliar with cells, it's a way of combining smaller Nova installations into a thing which feels like a single large Nova install. So for example, Rackspace deploys Nova in cells of hundreds of machines, and these cells form a Nova availability zone which might contain thousands of machines. The cells in one of these deployments form a tree: users talk to the top level of the tree, which might only contain API services. That cell then routes requests to child cells which can actually perform the operation requested.

There are a few reasons why Rackspace does this. Firstly, it keeps the MySQL databases smaller, which can improve the performance of database operations and backups. Additionally, cells can contain different types of hardware, which are then partitioned logically. For example, OnMetal (Rackspace's Ironic-based baremetal product) instances come from a cell which contains OnMetal machines and only publishes OnMetal flavors to the parent cell.

Cells was originally written by Rackspace to meet its deployment needs, but is now used by other sites as well. However, I think it would be a stretch to say that cells is commonly used, and it is certainly not the deployment default. In fact, most deployments don't run any of the cells code, so you can't really even call them a "single cell install". One of the reasons cells isn't more widely deployed is that it doesn't implement the entire Nova API, which means some features are missing. As a simple example, you can't live-migrate an instance between two child cells.

At the meetup, the first thing we discussed regarding cells was a general desire to see cells finished and become the default deployment method for Nova. Perhaps most people end up running a single cell, but in that case at least the cells code paths are well used. The first step to get there is improving the Tempest coverage for cells. There was a recent openstack-dev mailing list thread on this topic, which was discussed at the meetup. There was commitment from several Nova developers to work on this, and notably not all of them are from Rackspace.

It's important that we improve the Tempest coverage for cells, because it positions us for the next step in the process, which is bringing feature parity to cells compared with a non-cells deployment. There is some level of frustration that the work on cells hasn't really progressed in Juno, and that it is currently incomplete. At the meetup, we made a commitment to bringing a well-researched plan to the Kilo summit for implementing feature parity for a single cell deployment compared with a current default deployment. We also made a commitment to make cells the default deployment model when this work is complete. If this doesn't happen in time for Kilo, then we will be forced to seriously consider removing cells from Nova. A half-done cells deployment has so far stopped other development teams from trying to solve the problems that cells addresses, so we either need to finish cells, or get out of the way so that someone else can have a go. I am confident that the cells team will take this feedback on board and come to the summit with a good plan. Once we have a plan we can ask the whole community to rally around and help finish this effort, which I think will benefit all of us.

In the next blog post I will cover something we've been struggling with for the last few releases: how we get our bug count down to a reasonable level.

Tags for this post: openstack juno nova mid-cycle summary cells
Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy

Comment

Syndicated 2014-08-14 21:20:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

15 Aug 2014 benad   » (Apprentice)

Client-Side JavaScript Modules

I dislike the JavaScript programming language. I despise Node.js for server-side programming, and people with far more experience than me with both also agree, for example Ted Dziuba in 2011 and more recently Eric Jiang. And while I can easily avoid using Node as a server-side solution, the same cannot be said about avoiding JavaScript altogether.

I recently discovered Atom, a text editor based on a custom build of WebKit, essentially running on HTML, CSS and JavaScript. Though it is far from the fastest text editor out there, it feels like a spiritual successor to my favourite editor, jEdit, but based on modern web technologies rather than Java. The net effect is that Atom seems like the fastest growing text editor, and with its deep integration with Git (it was made by Github), it makes it a breeze to change code.

I noticed a few interesting things that were used to make JavaScript more tolerable in Atom. First, it supports CoffeeScript. Second, it uses Node-like modules.

CoffeeScript was a huge discovery for me. It is essentially a programming language that compiles into JavaScript, and it makes JavaScript development more bearable. The syntax difference reminds me a bit of the difference between Java and Groovy. There's also the very interesting JavaScript to CoffeeScript converter called js2coffee. I used js2coffee on one of my JavaScript modules, and the result was far more readable and manageable.

The problem with CoffeeScript is that you need to integrate its compilation to JavaScript somewhere. It just so happens that its compiler is a command-line JavaScript tool made for Node. A JavaScript equivalent to Makefiles (actually, more like Maven) is called Grunt, and from it you can call the CoffeeScript compiler directly, on top of UglifyJS to make the generated output smaller. All of these tools exist under node_modules/.bin when installed locally using npm, the Node Package Manager.
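In practice, setting that up looks something like this (a sketch only; the grunt-contrib plugin names are the usual ones, and a Gruntfile is still needed to wire the tasks together):

# Install the build toolchain locally, into node_modules/
npm install --save-dev grunt grunt-cli grunt-contrib-coffee grunt-contrib-uglify

# The binaries land in node_modules/.bin, so no global install is required
./node_modules/.bin/grunt --help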

Also, by writing my module as a Node module (actually, CommonJS), I could use some dependency management and still deploy it for a web browser's environment using Browserify. I could even go further and integrate it with Jasmine for unit tests, and run them in a GUI-less full-stack browser like PhantomJS, but that's going too far for now, and you're better off reading the Browserify article by Bastian Krol for more information.

It remains that Browserify is kind of a hack that isn't ideal for running JavaScript modules in a browser, as it has to include browser-equivalent functionality that is unique to Node and isn't optimized for high-latency asynchronous loading. A better solution for browser-side JavaScript modules is RequireJS, using the module format AMD. While not all Node modules have an AMD equivalent, the major ones are easily accessible with bower. Interestingly, you can create a module that can load as AMD, as a Node module, or natively in a browser using the templates called UMD (as in "Universal Module Definition"). Also, RequireJS can support Node modules (that don't use Node-specific functionality) and any other JavaScript library made for browsers, so that you can gain asynchronous loading.
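Getting going with bower is similarly lightweight (a sketch; requirejs is the only package named here, add whatever else you need):

# bower itself is an npm package
npm install -g bower

# create bower.json interactively, then pull in RequireJS
bower init
bower install --save requirejs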

It should be noted that bower, grunt and many command-line JavaScript tools are made for Node and installed locally using npm. So, even if "Node as a JavaScript web server" fails (and it should), using Node as an environment for local JavaScript command-line tools works quite well and could have a great future.

After all is said and done, I now have something that is kind of like Maven, but for JavaScript using Grunt, RequireJS, bower and Jasmine, to download, compile (CoffeeScript), inject and optimize JavaScript modules for deployment. Or you can use something like CodeKit if you prefer a nice GUI. Either way, JavaScript development, for client-side software like Atom, command-line scripts or for the browser, is finally starting to feel reasonable.

Syndicated 2014-08-15 03:14:41 from Benad's Blog

15 Aug 2014 mikal   » (Journeyer)

More bowls and pens

The pens are quite hard to make by the way -- the wood is only a millimeter or so thick, so it tends to split very easily.


Tags for this post: wood turning 20140805-woodturning photo

Comment

Syndicated 2014-08-14 19:35:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

15 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: DB2 support

This post is one part of a series discussing the OpenStack Nova Juno mid-cycle meetup. It's a bit shorter than most of the others, because the next thing on my list to talk about is DB2, and that's relatively contained.

IBM is interested in adding DB2 support as a SQL database for Nova. Theoretically, this is a relatively simple thing to do because we use SQLAlchemy to abstract away the specifics of the SQL engine. However, in reality, the abstraction is leaky. The obvious example in this case is that DB2 has different rules for foreign keys than other SQL engines we've used. So, in order to be able to make this change, we need to tighten up our schema for the database.

The change that was discussed is the requirement that the UUID column on the instances table be not null. This seems like a relatively obvious thing to require, given that UUID is the official way to identify an instance, and has been for a really long time. However, there are a few things which make this complicated: we need to understand the state of databases that might have been through a long chain of upgrades from previous Nova releases, and we need to ensure that the schema alterations don't cause significant performance problems for existing large deployments.

As an aside, people sometimes complain that Nova development is too slow these days, and they're probably right, because things like this slow us down. A relatively simple change to our database schema requires a whole bunch of performance testing and negotiation with operators to ensure that it's not going to be a problem for people. It's good that we do these things, but sometimes it's hard to explain to people why forward progress is slow in these situations.

Matt Riedemann from IBM has been doing a good job of handling this change. He's written a tool that operators can run before the change lands in Juno that checks if they have instance rows with null UUIDs. Additionally, the upgrade process has been well planned, and is documented in the specification available on the fancy pants new specs website.
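The check itself is conceptually very simple - something along these lines (a sketch only, not Matt's actual tool; adjust the credentials and database name to match your deployment):

# Any instance rows with a NULL uuid will need fixing up before the migration
mysql -u nova -p nova -e "SELECT COUNT(*) FROM instances WHERE uuid IS NULL;"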

We had a long discussion about this change at the meetup, and how it would impact on large deployments. Both Rackspace and HP were asked if they could run performance tests to see if the schema change would be a problem for them. Unfortunately HP's testing hardware was tied up with another project, so we only got numbers from Rackspace. For them, the schema change took 42 minutes for a large database. Almost all of that was altering the column to be non-nullable; creating the new index was only 29 seconds of runtime. However, the Rackspace database is large because they don't currently purge deleted rows; if they can get that done before running this schema upgrade, the impact will be much smaller.

So the recommendation here for operators is that it is best practice to purge deleted rows from your databases before an upgrade, especially when schema migrations need to occur at the same time. There are some other takeaways for operators as well: if we know that operators have a large deployment, then we can ask if an upgrade will be a problem. This is why being active on the openstack-operators mailing list is important. Additionally, if operators are willing to donate a dataset to Turbo-Hipster for DB CI testing, then we can use that in our automation to try and make sure these upgrades don't cause you pain in the future.

In the next post in this series I'll talk about the future of cells, and the work that needs to be done there to make it a first class citizen.

Tags for this post: openstack juno nova mid-cycle summary sql database sqlalchemy db2
Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Michael's surprisingly unreliable predictions for the Havana Nova release; Exploring a single database migration; Time to document my PDF testing database

Comment

Syndicated 2014-08-14 19:20:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Aug 2014 danstowell   » (Journeyer)

Jabberwocky, ATP, and London

Wow. The Jabberwocky festival, organised by the people who did many amazing All Tomorrow's Parties festivals, collapsed three days before it was due to happen, this weekend. The 405 has a great article about the whole sorry mess.

We've been to loads of ATPs and I was thinking about going to Jabberwocky. Really tempted by the great lineup and handily in London (where I live). But the venue? The Excel Centre? A convention-centre box? I couldn't picture it being fun. The promoters tried to insist that it was a great idea for a venue, but it seems I was probably like a lot of people thinking "nah". (Look at the reasons they give, crap reasons. No-one ever complained at ATP about the bar queues or the wifi coverage. The only thing I complained about was that the go-karting track was shut!) I've seen a lot of those bands before, too, it's classic ATP roster, so if the place isn't a place I want to go to then there's just not enough draw.

That 405 article mentions an early "leak" of plans that they were aiming to hold it in the Olympic Park. Now that would have been a place to hold it. Apparently the Olympic Park claimed ignorance, saying they never received a booking, but that sounds like PR-speak pinpointing that they were in initial discussions but didn't take it further. I would imagine that the Olympic Park demanded a much higher price than Excel since they have quite a lot of prestige and political muscle - or maybe it was just an issue of technical requirements or the like. But the Jabberwocky organisers clearly decided that they'd got the other things in place (lineup etc) so they'd press ahead with London in some other mega-venue, and hoped that the magic they once weaved on Pontins or Butlins would happen in the Excel.

This weekend there will be lots of great Jabberwocky fall-out gigs across London. That's totally weird. And I'm sorry I won't be in London to catch any of them! But it's very very weird because it's going to be about 75% of the festival, but converted from a monolithic one into one of those urban multi-venue festivals. The sickening thing about that is that even though the organisers clearly cocked some stuff up royally, I still feel terrible for them having to go bust and get no benefit from the neat little urban fallout festival they've accidentally organised. Now if ATP had decided to run it that way, I would very likely have signed up for it, and dragged my mates down to London!

Syndicated 2014-08-14 04:50:27 from Dan Stowell

14 Aug 2014 mikal   » (Journeyer)

Review priorities as we approach juno-3

I just sent this email out to openstack-dev, but I am posting it here in case it makes it more discoverable to people drowning in email:

To: openstack-dev
Subject: [nova] Review priorities as we approach juno-3

Hi.

We're rapidly approaching j-3, so I want to remind people of the
current reviews that are high priority. The definition of high
priority I am using here is blueprints that are marked high priority
in launchpad that have outstanding code for review -- I am sure there
are other reviews that are important as well, but I want us to try to
land more blueprints than we have so far. These are listed in the
order they appear in launchpad.

== Compute Manager uses Objects (Juno Work) ==

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/compute-manager-objects-juno,n,z

This is ongoing work, but if you're after some quick code review
points they're very easy to review and help push the project forward
in an important manner.

== Move Virt Drivers to use Objects (Juno Work) ==

I couldn't actually find any code out for review for this one apart
from https://review.openstack.org/#/c/94477/, is there more out there?

== Add a virt driver for Ironic ==

This one is in progress, but we need to keep going at it or we won't
get it merged in time.

* https://review.openstack.org/#/c/111223/ was approved, but a rebase
ate it. Should be quick to re-approve.
* https://review.openstack.org/#/c/111423/
* https://review.openstack.org/#/c/111425/
* ...there are more reviews in this series, but I'd be super happy to
see even a few reviewed

== Create Scheduler Python Library ==

* https://review.openstack.org/#/c/82778/
* https://review.openstack.org/#/c/104556/

(There are a few abandoned patches in this series, I think those two
are the active ones but please correct me if I am wrong).

== VMware: spawn refactor ==

* https://review.openstack.org/#/c/104145/
* https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one
seems procedural to me)
* https://review.openstack.org/#/c/105738/
* ...another chain with many more patches to review

Thanks,
Michael


The actual email thread is at http://lists.openstack.org/pipermail/openstack-dev/2014-August/043098.html.

Tags for this post: openstack juno review nova ptl
Related posts: Juno Nova PTL Candidacy; Thoughts from the PTL; Havana Nova PTL elections; Expectations of core reviewers; Juno nova mid-cycle meetup summary: social issues; More reviews

Comment

Syndicated 2014-08-14 13:01:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: ironic

Welcome to the third in my set of posts covering discussion topics at the nova juno mid-cycle meetup. The series might never end to be honest.

This post will cover the progress of the ironic nova driver. This driver is interesting as an example of a large contribution to the nova code base for a couple of reasons -- it's an official OpenStack project instead of a vendor driver, which means we should already have well aligned goals. The driver has been written entirely using our development process, so it's already been reviewed to OpenStack standards, instead of being a large code dump from a separate development process. Finally, it's forced us to think through what merging a non-trivial code contribution should look like, and I think that formula will be useful for later similar efforts, the Docker driver for example.

One of the sticking points with getting the ironic driver landed is exactly how upgrades for baremetal driver users will work. The nova team has been unwilling to just remove the baremetal driver, as we know that it has been deployed by at least a few OpenStack users -- the largest deployment I am aware of is over 1,000 machines. Now, this is unfortunate because the baremetal driver was always intended to be experimental. I think what we've learnt from this is that any driver which merges into the nova code base has to be supported for a reasonable period of time -- nova isn't the right place for experiments. Now that we have the stackforge driver model I don't think that's too terrible, because people can iterate quickly in stackforge, and when they have something stable and supportable they can merge it into nova. This gives us the best of both worlds, while providing a strong signal to deployers about what the nova team is willing to support for long periods of time.

The solution we came up with for upgrades from baremetal to ironic is that the deployer will upgrade to juno, and then run a script which converts their baremetal nodes to ironic nodes. This script is "off line" in the sense that we do not expect new baremetal nodes to be launchable during this process, nor after it is completed. All further launches would be via the ironic driver.

These nodes that are upgraded to ironic will exist in a degraded state. We are not requiring ironic to support their full set of functionality on these nodes, just the bare minimum that baremetal did, which is listing instances, rebooting them, and deleting them. Launch is excluded for the reasoning described above.

We have also asked the ironic team to help us provide a baremetal API extension which knows how to talk to ironic, but this was identified as a need fairly late in the cycle and I expect it to be a request for a feature freeze exception when the time comes.

The current plan is to remove the baremetal driver in the Kilo release.

Previously in this post I alluded to the review mechanism we're using for the ironic driver. What does that actually look like? Well, what we've done is ask the ironic team to propose the driver as a series of smallish (500 line) changes. These changes are broken up by functionality, for example the code to boot an instance might be in one of these changes. However, because of the complexity of splitting existing code up, we're not requiring a tempest pass on each step in the chain of reviews. We're instead only requiring this for the final member in the chain. This means that we're not compromising our CI requirements, while maximizing the readability of what would otherwise be a very large review. To stop the reviews from merging before we're comfortable with them, there's a marker review at the beginning of the chain which is currently -2'ed. When all the code is ready to go, I remove the -2 and approve that first review and they should all merge together.

In the next post I'll cover the state of adding DB2 support to nova.

Tags for this post: openstack juno nova mid-cycle summary ironic
Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy; Thoughts from the PTL; Merged in Havana: fixed ip listing for single hosts

Comment

Syndicated 2014-08-14 01:49:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

13 Aug 2014 bagder   » (Master)

I’m with Firefox OS!

Tablet

I have received a Firefox OS tablet as part of a development program. My plan is to use this device to try out stuff I work on and see how it behaves on Firefox OS “for real” instead of just in emulators or on other systems. While Firefox OS is a product of my employer Mozilla, I personally don’t work particularly much with Firefox OS specifically. I work on networking in general for Firefox, and large chunks of the networking stack is used in both the ordinary Firefox browser like on desktops as well as in Firefox OS. I hope to polish and improve networking on Firefox OS too over time.

Firefox OS tablet

Phone

The primary development device for Firefox OS is right now apparently the Flame phone, and I have one of these too now in my possession. I took a few photos when I unpacked it and crammed them into the same image, click it for higher res:

Flame - Firefox OS phone

A brief explanation of Firefox OS

Firefox OS is an Android kernel (including drivers etc) and a bionic libc – simply the libc that Android uses. Linux-wise and slightly simplified, it runs a single application full-screen: Firefox, which then can run individual Firefox-apps that appear as apps on the phone. This means that the underlying fundamentals are shared with Android, while the layers over that are Firefox and then a world of HTML and JavaScript. Thus most of the network stack used for Firefox – that I work with – the http, ftp, dns, cookies and so forth is shared between Firefox for desktop and Firefox for Android and Firefox OS.

Firefox OS is made to use a small footprint to allow cheaper smartphones than Android itself can. Hence it is targeted to developing nations and continents.

Both my devices came with Firefox OS version 1.3 pre-installed.

The phone

The specs: Qualcomm Snapdragon 1.2GHz dual-core processor, 4.5-inch 854×480 pixel screen, five-megapixel rear camera with auto-focus and flash, two-megapixel front-facing camera. Dual-SIM 3G, 8GB of onboard memory with a microSD slot, and a 1800 mAh capacity battery.

The Flame phone should be snappy enough although at times it seems to take a moment too long to populate a newly shown screen with icons etc. The screen surface is somehow not as smooth as my Nexus devices (we have the 4,5,7,10 nexuses in the house), leaving me with a constant feeling the screen isn’t cleaned.

Its dual-sim support is something that seems ideal for traveling etc to be able to use my home sim for incoming calls but use a local sim for data and outgoing calls… I’ve never had a phone featuring that before. I’ve purchased a prepaid SIM-card to use with this phone as my secondary device.

Some Good

I like the feel of the tablet. It feels like a solid and sturdy 10″ tablet, just like it should. I think the design language of Firefox OS for a newbie such as myself is pleasing and good-looking. The quad-core 1GHz thing is certainly fast enough CPU-wise to eat most of what you can throw at it.

These are really good devices to do web browsing on as the browser is a highly capable and fast browser.

Mapping: while of course there's no Google maps app, using the openstreetmap map is great on the device and Google maps in the browser is also a perfectly decent way to view maps. Using openstreetmap also of course has the added bonus that it feels great to see your own edits in your own neck of the woods!

I really appreciate that Mozilla pushes for new, more and better standardized APIs to enable all of this to get done in web applications. To me, this is one of the major benefits with Firefox OS. It benefits all of us who use the web.

Some Bad

Firefox OS feels highly US-centric (which greatly surprised me, seeing the primary markets for Firefox OS are certainly not in the US). As a Swede, I of course want my calendar to show Monday as the first day of the week. No can do. I want my digital clock to show me the time using 24 hour format (the am/pm scheme only confuses me). No can do. Tiny teeny details in the grand scheme of things, yes, but annoying. Possibly I’m just stupid and didn’t find how to switch these settings, but I did look for them on both my devices.

The actual Firefox OS system feels like a scaled-down Android where all apps are simpler and less fancy than Android. There’s a Facebook “app” for it that shows Facebook looking much crappier than it usually does in a browser or in the Android app – although on the phone it looked much better than on the tablet for some reason that I don’t understand.

I managed to get the device to sync my contacts from Google (even with my Google 2-factor auth activated), but trying to sync my Facebook contacts just gave me a very strange error window in spite of repeated attempts - although again, that worked on my phone!

I really miss a proper back button! Without it, we end up in this handicapped iphone-like world where each app has to provide a back button in its own UI or I have to hit the home button – which doesn’t just go back one step.

The tablet supports a gesture, pulling up from the bottom of the screen, to get to the home screen, while the phone doesn't support that but instead has a dedicated home button which, if pressed for a long time, shows cards with all currently running apps. I'm not even sure how to do that latter operation on the tablet as it doesn't have a home button.

The gmail web interface and experience is not very good on either of the devices.

Building Firefox OS

I’ve only just started this venture and dipped my toes in that water. All code is there in the open and you build it all with open tools. I might get back on this topic later if I get the urge to ventilate something from it… :-) I didn’t find any proper device specific setup for the tablet, but maybe I just don’t know its proper code word and I’ve only given it a quick glance so far. I’ll do my first builds and installs for the phone. Any day now!
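For the phone the standard recipe is roughly the following (approximate steps only, from memory of the build documentation - and the tablet will presumably need a different config target):

git clone https://github.com/mozilla-b2g/B2G.git
cd B2G
./config.sh flame   # pick the device target and fetch the sources
./build.sh          # build gecko, gaia and the Android underpinnings
./flash.sh          # flash the connected, unlocked device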

More

My seven year old son immediately found at least one game on my dev phone (he actually found the market and downloaded it all by himself the first time he tried the device) that he really likes and now he wants to borrow this from time to time to play that game – in competition with the android phones and tablets we have here already. A pretty good sign I’d say.

Firefox OS is already a complete and competent phone operating system and app ecosystem. If you're not coming from Android or iPhone it is a step up from everything else. If you do come from Android or iPhone I think you have to accept that this is meant for the lower end of the smart-phone spectrum.

I think the smart-phone world can use more competition and Firefox OS brings exactly that.

Firefox OS boot screen

Syndicated 2014-08-13 21:53:52 from daniel.haxx.se

13 Aug 2014 mbrubeck   » (Journeyer)

Let's build a browser engine! Part 3: CSS

This is the third in a series of articles on building a toy browser rendering engine. Want to build your own? Start at the beginning to learn more:

This article introduces code for reading Cascading Style Sheets (CSS). As usual, I won’t try to cover everything in the spec. Instead, I tried to implement just enough to illustrate some concepts and produce input for later stages in the rendering pipeline.

Anatomy of a Stylesheet

Here’s an example of CSS source code:

h1, h2, h3 { margin: auto; color: #cc0000; }
div.note { margin-bottom: 20px; padding: 10px; }
#answer { display: none; }

Now I’ll walk through some of the CSS code from my toy browser engine, robinson.

A CSS stylesheet is a series of rules. (In the example stylesheet above, each line contains one rule.)

struct Stylesheet {
    rules: Vec<Rule>,
}

A rule includes one or more selectors separated by commas, followed by a list of declarations enclosed in braces.

struct Rule {
    selectors: Vec<Selector>,
    declarations: Vec<Declaration>,
}

A selector can be a simple selector, or it can be a chain of selectors joined by combinators. Robinson supports only simple selectors for now.

Note: Confusingly, the newer Selectors Level 3 standard uses the same terms to mean slightly different things. In this article I’ll mostly refer to CSS2.1. Although outdated, it’s a useful starting point because it’s smaller and more self-contained than CSS3 (which is split into myriad specs that reference both each other and CSS2.1).

In robinson, a simple selector can include a tag name, an ID prefixed by '#', any number of class names prefixed by '.', or some combination of the above. If the tag name is empty or '*' then it is a “universal selector” that can match any tag.

There are many other types of selector (especially in CSS3), but this will do for now.

enum Selector {
    Simple(SimpleSelector),
}

struct SimpleSelector {
    tag_name: Option<String>,
    id: Option<String>,
    class: Vec<String>,
}

A declaration is just a name/value pair, separated by a colon and ending with a semicolon. For example, "margin: auto;" is a declaration.

struct Declaration {
    name: String,
    value: Value,
}

My toy engine supports only a handful of CSS’s many value types.

enum Value {
    Keyword(String),
    Color(u8, u8, u8, u8), // RGBA
    Length(f32, Unit),
    // insert more values here
}

enum Unit { Px, /* insert more units here */ }

All other CSS syntax is unsupported, including at-rules, comments, and any selectors/values/units not mentioned above.

Parsing

CSS has a regular grammar, making it easier to parse correctly than its quirky cousin HTML. When a standards-compliant CSS parser encounters a parse error, it discards the unrecognized part of the stylesheet but still processes the remaining portions. This is useful because it allows stylesheets to include new syntax but still produce well-defined output in older browsers.

Robinson uses a very simplistic (and totally not standards-compliant) parser, built the same way as the HTML parser from Part 2. Rather than go through the whole thing line-by-line again, I’ll just paste in a few snippets. For example, here is the code for parsing a single selector:

    /// Parse one simple selector, e.g.: `type#id.class1.class2.class3`
    fn parse_simple_selector(&mut self) -> SimpleSelector {
        let mut result = SimpleSelector { tag_name: None, id: None, class: Vec::new() };
        while !self.eof() {
            match self.next_char() {
                '#' => {
                    self.consume_char();
                    result.id = Some(self.parse_identifier());
                }
                '.' => {
                    self.consume_char();
                    result.class.push(self.parse_identifier());
                }
                '*' => {
                    // universal selector
                    self.consume_char();
                }
                c if valid_identifier_char(c) => {
                    result.tag_name = Some(self.parse_identifier());
                }
                _ => break
            }
        }
        result
    }

Note the lack of error checking. Some malformed input like ### or *foo* will parse successfully and produce weird results. A real CSS parser would discard these invalid selectors.

Specificity

Specificity is one of the ways a rendering engine decides which style overrides the other in a conflict. If a stylesheet contains two rules that match an element, the rule with the matching selector of higher specificity can override values from the one with lower specificity.

The specificity of a selector is based on its components. An ID selector is more specific than a class selector, which is more specific than a tag selector. Within each of these “levels,” more selectors beats fewer.

pub type Specificity = (uint, uint, uint);

impl Selector {
    pub fn specificity(&self) -> Specificity {
        // http://www.w3.org/TR/selectors/#specificity
        let Simple(ref simple) = *self;
        let a = simple.id.iter().len();
        let b = simple.class.len();
        let c = simple.tag_name.iter().len();
        (a, b, c)
    }
}

[If we supported chained selectors, we could calculate the specificity of a chain just by adding up the specificities of its parts.]

The selectors for each rule are stored in a sorted vector, most-specific first. This will be important in matching, which I’ll cover in the next article.

    /// Parse a rule set: `<selectors> { <declarations> }`.
    fn parse_rule(&mut self) -> Rule {
        Rule {
            selectors: self.parse_selectors(),
            declarations: self.parse_declarations()
        }
    }

    /// Parse a comma-separated list of selectors.
    fn parse_selectors(&mut self) -> Vec<Selector> {
        let mut selectors = Vec::new();
        loop {
            selectors.push(Simple(self.parse_simple_selector()));
            self.consume_whitespace();
            match self.next_char() {
                ',' => { self.consume_char(); }
                '{' => break, // start of declarations
                c   => fail!("Unexpected character {} in selector list", c)
            }
        }
        // Return selectors with highest specificity first, for use in matching.
        selectors.sort_by(|a,b| b.specificity().cmp(&a.specificity()));
        selectors
    }

The rest of the CSS parser is fairly straightforward. You can read the whole thing on GitHub. And if you didn’t already do it for Part 2, this would be a great time to try out a parser generator. My hand-rolled parser gets the job done for simple example files, but it has a lot of hacky bits and will fail badly if you violate its assumptions. Eventually I hope to replace it with a “real” parser built on something like rust-peg.

Exercises

As before, you should decide which of these exercises you want to do, and skip the rest:

  1. Implement your own simplified CSS parser and specificity calculation.

  2. Extend robinson’s CSS parser to support more values, or one or more selector combinators.

  3. Extend the CSS parser to discard any declaration that contains a parse error, and follow the error handling rules to resume parsing after the end of the declaration.

  4. Make the HTML parser pass the contents of any <style> nodes to the CSS parser, and return a Document object that includes a list of Stylesheets in addition to the DOM tree.

Shortcuts

Just like in Part 2, you can skip parsing by hard-coding CSS data structures directly into your program, or by writing them in an alternate format like JSON that you already have a parser for.

To be continued…

The next article will introduce the style module. This is where everything starts to come together, with selector matching to apply CSS styles to DOM nodes.

The pace of this series might slow down soon, since I’ll be busy later this month and I haven’t even written the code for some upcoming articles. I’ll keep them coming as fast as I can!

Syndicated 2014-08-13 19:30:00 from Matt Brubeck

13 Aug 2014 pipeman   » (Journeyer)

Configuring smart card login on OS X 10.9

Earlier I documented how to use a Finnish government issued ID card (FINeID) for SSH authentication. As my vacation ended and I had to dig the smart card reader out to SSH to a machine, I remembered that I never quite figured out how to get login authentication to work with the same card. It took a bit of detective work but it turns out the basic steps are not that complicated. I will only cover the most basic set-up, where you pair one specific smart card with a local account on your computer using the card's public key. It's possible to have more sophisticated setup for larger organisations.

First, check my previous post and follow the instructions for how to set up OpenSC and verify using pkcs15-tool -k that your card reader and card is working properly.

Then, in case you have Apple IDs associated with your user account, you need to work around a bug in authorizationhost: in System Preferences, go to Users & Groups and select the user you're setting up for smart card login. Remove all associated Apple ID accounts by clicking on the "Change…" button next to "Apple ID:" and deleting any entries from the list (if any). Failure to do so may make it impossible to unlock the screen and unlock System Preferences panes. You can also manually do this with Directory Utility by removing all entries except the one containing the username from the user's RecordName property in the Users directory.


Once that is done, run the following to enable smart card support for logins:

sudo security authorizationdb smartcard enable

Make sure the card is inserted, and list the public key hashes using the OS X built-in command sc_auth:
sc_auth hash

It should output a list similar to this, but with slightly more random hashes:

01DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF todentamis- ja salausavain
02DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF allekirjoitusavain
03DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.systemdefault
04DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.kerberos.kdc
05DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.systemdefault
06DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.kerberos.kdc
07DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF Imported Private Key


Again, it's the todentamis- ja salausavain (the authentication and encryption key) we're interested in. Now use sc_auth to associate that public key with a user account:

sudo sc_auth accept -u USERNAME -h 01DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF


This should be it - when the smart card is initialised, the corresponding user will automatically be selected in the login screen, and instead of prompting for a password it will prompt you for the card's PIN. Note that typically the card PIN defaults to a 4-digit number but it can be changed to (in the case of a FINeID card) any 4-8 character alphanumeric string using e.g. pkcs15-tool --change-pin. For other cards you can inspect the PIN code constraints using pkcs15-tool --list-pins.
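In practice that looks something like this (the auth ID below is only an example - take the right one from the --list-pins output):

# show the PINs on the card, including their length constraints
pkcs15-tool --list-pins

# change the PIN identified by that auth ID
pkcs15-tool --change-pin --auth-id 01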

When logging in using a smart card rather than a password, OS X will not be able to unlock your login keychain, as it by default is encrypted using your login password. You can choose to either manually unlock the keychain or change the keychain to use your smart card for unlocking rather than a password. If you do that, it means that your keychain is effectively encrypted with your smart card, so if you lose your smart card, you will lose access to your login keychain. It seems that Keychain migration uses your smartcard PIN as your new keychain password, so beware that you may actually lower the keychain encryption key entropy if your smartcard PIN is simpler than your regular password.

If you have FileVault full disk encryption enabled (and you should) OS X will automatically log you in using the password supplied at the FileVault login screen. If you have followed the instructions above, your account will still have a valid password (it's possible to disable password login entirely by deleting the "ShadowHash" entry in the AuthenticationAuthority record of your user account using Directory Utility - note that this will also effectively disable sudo for that user) and you will be automatically logged in, but the system will not be able to unlock your keychain with that password. To prevent automatic login with FileVault, you can run:


sudo defaults write /Library/Preferences/com.apple.loginwindow DisableFDEAutoLogin -bool YES


More information in HT5989.

If you know French, this blog post contains some more details on configuring smart card authentication on Mavericks.

13 Aug 2014 hypatia   » (Journeyer)

USA, June 2014

Before I left for the US in June, Val asked me what other people were saying to me about my plan to go on an intercontinental business trip and bring a baby, and I said that I gathered that people thought both that it was a terrible idea and that it was fairly typical of me to attempt it.

It was touch and go committing to it. Just when I started to get excited about it, A went through a non-sleeping patch over Easter that nearly saw me walk away from the whole thing. So after that I mostly dealt with it by ignoring it as much as possible until the time was nearly upon me, much as I deal with the entire idea of long haul travel generally.

In fact the trip over started quite promisingly, sitting in Air New Zealand’s nearly deserted business lounge looking out onto the tarmac and feeling a kind of peace and happiness I very rarely feel. (So rarely that I can remember most other cases of it. The afternoon after I finished my final high school exams. Flying back from Honolulu last year finishing up my PhD revisions. I usually need to be alone, and finishing something very big, neither of which was true in this case.)

I like Air New Zealand’s schedule to the States compared to Qantas’s. To fly Qantas to the Bay Area, you fly to LA, which takes about 15 hours, and get off the plane at some point between midnight and about 3am Sydney time, ie, just when your body was finally about to fall asleep. Instead of sleeping, you must navigate LAX. I’ve had nightmares that are more fun than that, even though LAX has usually been rather kind to me if anything. However kind, last year I arrived in San Francisco without a moment of sleep (and pregnant, and ill). On Air NZ, the long flight is the second flight: Auckland to San Francisco, so it more nearly corresponds with my sleeping time.

The question was always whether the baby would sleep at all during the flight, and actually she did surprisingly well considering how ill-designed her location was. They had her staring straight up into a light! Nightie night! I did OK too, although the trip’s high point in the lounge was quickly followed by its low point when I subluxated my shoulder in the middle of the night shutting a window shade (yes really, I attempted it from a terrible angle, but yikes) while located something like 2000km from the nearest hospital (and 10km in the air). But I only had to spend a couple of moments imagining the horror of finding some doctor on the plane to attempt to reset it before it reset itself. The whole thing gave me a new appreciation of fear of flying, as the plane bumped along held up by thin, cold air with me stuck inside it with a busted shoulder. I don’t experience fear of flying, but I increasingly think I probably ought to.

The border official at San Francisco looked a bit skeptical that I was bringing the baby on a business trip, but duly admitted me for business and her as a tourist. And then it was déjà vu all the way out through the ceiling-height metal arrival doors and into and through the waiting groups. I’ve flown into San Francisco internationally only once in the past — my first big overseas trip in 2004 — and so I quite vividly remembered the entire experience. Luckily this time I didn’t have to head out to BART and try and work out SF’s bus system without any sleep (in 2004 I had never been in the northern hemisphere before and didn’t know that I would constantly confuse north and south, thus catching a bus for half an hour in the wrong direction). This time, too, I had a baby with me. Quite a change. I went outside and Suki met me with her car and we loaded A into the car seat and we were away.

It’s always summer in SF when I go there, and for once it really felt like it. Our first night, I went to a long dinner at Amelia’s house. Everyone was pleasingly impressed with my ability to stay awake, but I was playing on easy mode: it was only about 2pm in Sydney. The next day I had lunch at Sanraku at the Metreon because somehow my SF experiences seem to always involve the Metreon, visited Double Union, had coffee with K nearby and dinner with James at Mission Beach Cafe. All with A strapped to my front. (Actually, not strictly true, I put her on the floor at Double Union!) Too many appointments; I should never visit SF just for two nights, it needs to be a week or not at all.

The idea of getting back on a plane the next day was abhorrent, but I just gritted my teeth and did it. In any case, it was only to Portland. I am too used to thinking of Australia as a uniquely large country and therefore had been surprised that we weren’t driving to Portland. Aren’t all foreign cities an hour’s drive apart at most? No. Portland is about 9 hours, it seems, from SF, so much like Sydney and Melbourne or Brisbane. I was also disappointed that it was still about another 5 hours north to Canada, or I would have gone for a day trip.

I was in Portland for eight nights. It was good to settle into a routine there. A adapted really well to the new time and slept much better than she had been doing in Sydney, or has done since. I think it was due to the solstice, which occurred while we were there. Sleeping through the night is much more likely when someone lops four or more hours off the night for you. She sleeps from 6pm here, but in Portland she was staying up past 9.

I hadn’t remembered about Powell’s until Chally reminded me before I left, and in any event I didn’t really appreciate what Powell’s is. It’s a bookstore. A bookstore that occupies a couple of city blocks. It is a good thing that my 16 year old self never got anywhere near it or I might still be living in there. Sadly, it is not quite as magical with a grumpy 8kg human heater strapped to my chest, so I only mounted a couple of special purpose expeditions in, after books I’d been meaning to get for a while. A shame, considering I was only staying a couple of blocks away.

The trip was mostly work. I hope some time I can justify spending some time in the USA that isn’t work-related. (Right now, because V hates it when I travel, I don’t really feel good about travelling for leisure without him.) We arrived in Portland on Thursday, had the AdaCamp reception Friday, the Camp itself Saturday and Sunday, Open Source Bridge Tuesday to Thursday, and then I left Portland Friday for Sydney.

I decided to keep things simple while I was there by not having A eating any food, or taking any bottles or pumping supplies, which did mean I was at her beck and call during AdaCamp (which she spent with a child carer) and otherwise I always had her with me. But she was in an exceptionally good mood for essentially the entire trip. Val pointed out that she has a particular trick for interacting with people, which is that she blankly stares at people before smiling at them, giving the impression that she chose to smile especially for them. She made lots and lots of friends. She seems quite outgoing, like her brother. I was sad she couldn’t stay at Open Source Bridge forever, but she couldn’t, what with it only going for a week. (And honestly, I had trouble with just that. I was very tired by that point.)

I liked Portland, but I didn’t feel I got to grips with it. Perhaps the closest was the bus ride out to Selena’s place and back in, looking at the big wooden houses and the massive bright green leafy trees. It’s not a very large city: suburbs full of detached houses can be found within 15 minutes bus ride of downtown. I’m sure they were all ludicrously expensive, but all the same, it had something of a distinct feel to it, so I felt I knew the city a little bit. Another moment of note was that on the bus back, which was exceptionally crowded, the bus driver insisted that someone give me a seat (because A was strapped to me) and didn’t move the bus until they did so. It didn’t at all remind me of SF’s Muni, nor Sydney Buses for that matter.

Val told me that this is the deceptive time of year in Portland, the time when it seems very very liveable. I can believe it, on the 45th parallel. Summertime is long dusks and companionship. Winter is… I’m not sure. I’ve never lived that far from the equator.

A’s one bad time of the trip was on the flight from Portland to SF. She screamed continuously for much of the flight. The man across the aisle from me stuffed his fingers in his ears. I think they may have even messed with the oxygen levels, because everyone around me went to sleep and I had tears pouring down my face from yawning. A did sleep, but it took a while. The wait in SF airport was also no fun — other than a very interesting exhibit of lace in the museum area — most things were closed, and I stabbed my finger hard on a safety pin (not safe enough, it seems). But A was a perfect angel from SF to Auckland; the crew came by to coo over the soundless baby several times. And at Sydney V was very excited to see us and begin the whole fortnight he was to have… before Andrew’s work trip to the US.

Syndicated 2014-08-13 11:24:23 from puzzling.org

14 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: containers

This is the second in my set of posts discussing the outcomes from the OpenStack nova juno mid-cycle meetup. I want to focus in this post on things related to container technologies.

Nova has had container support for a while in the form of libvirt LXC. While it can be argued that this support isn't feature complete and needs more testing, it's certainly been around for a while. There is renewed interest in testing libvirt LXC in the gate, and a team at Rackspace appears to be working on this as I write this. We have already seen patches from this team as they fix issues they find on the way. There are no plans to remove libvirt LXC from nova at this time.

The plan going forward for LXC tempest testing is to add it as an experimental job, so that people reviewing libvirt changes can request the CI system to test LXC by using "check experimental". This hasn't been implemented yet, but will be advertised when it is ready. Once we've seen good stable results from this experimental check we will talk about promoting it to be a full blown check job in our CI system.

We have also had prototype support for Docker for some time, and by all reports Eric Windisch has been doing good work at getting this driver into a good place since it moved to stackforge. We haven't started talking about specifics for when this driver will return to the nova code base, but I think at this stage we're talking about Kilo at the earliest. The driver has CI now (although it's still working through stability issues, to my understanding) and is progressing well. I expect there to be a session at the Kilo summit in the nova track on the current state of this driver, and we'll decide whether to merge it back into nova then.

There was also representation from the containers sub-team at the meetup, and they spent most of their time in a break out room coming up with a concrete proposal for what container support should look like going forward. The plan looks a bit like this:

Nova will continue to support "lowest common denominator containers": by this I mean that things like the libvirt LXC and Docker drivers will be allowed to exist, and will expose the parts of containers that can be made to look like virtual machines. That is, a caller to the nova API should not need to know if they are interacting with a virtual machine or a container; this should be as opaque to them as possible. There is some ongoing discussion about the minimum functionality we should expect from a hypervisor driver, so we can expect this minimum level of functionality to move over time.

The containers sub-team will also write a separate service which exposes a more full-featured container experience. This service will work by taking a nova instance UUID, and interacting with an agent within that instance to create containers and manage them. This is interesting because it is the first time that a compute project will have an agent running inside the instance's operating system, although other projects have had these for a while. There was also talk about the service being able to start an instance if the user didn't already have one, or being able to declare an existing instance to be "full" and then create a new one for the next incremental container. These are interesting design issues, and I'd like to see them explored more in a specification.

This plan met with general approval within the room at the meetup, with the suggestion being that it move forward as a stackforge project as part of the compute program. I don't think much code has been implemented yet, but I hope to see something come of these plans soon. The first step here is to create some specifications for the containers service, which we will presumably create in the nova-specs repository for want of a better place.

Thanks for reading my second post in this series. In the next post I will cover progress with the Ironic nova driver.

Tags for this post: openstack juno nova mid-cycle summary containers docker lxc
Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: social issues; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy; Thoughts from the PTL; Merged in Havana: fixed ip listing for single hosts


Syndicated 2014-08-14 01:17:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

13 Aug 2014 proclus   » (Master)

Cheap and real worldwide fans

Loading Full Details

Buy GUARANTEED FB Likes/Fans,
at the cheapest prices and become more popular!

- Safely delivered, absolutely no risk!
- Order processed within 24 hours or less!
- Manually Promoted by experts not bots/software involved
- High-Quality service.
- No Turkish likes, only worldwide

We offer different packages at different prices.
Ex: 2000 fans for only 24.99 usd


For Full Details please read the attached .html file
Unsubscribe option available on the footer of our website

Syndicated 2014-08-13 19:08:00 (Updated 2014-08-13 02:54:38) from proclus

13 Aug 2014 bradfitz   » (Master)

hi

Posting from the iPhone app.

Maybe I'm unblocked now.

Syndicated 2014-08-13 01:57:19 from Brad Fitzpatrick

14 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: social issues

Summarizing three days of the Nova Juno mid-cycle meetup is a pretty hard thing to do - I'm going to give it a go, but just in case I miss things, there is an etherpad with notes from the meetup at https://etherpad.openstack.org/p/juno-nova-mid-cycle-meetup. I'm also going to do it in the form of a series of posts, so as to not hold up any content at all in the wait for perfection. This post covers the mechanics of each day at the meetup, reviewer burnout, and the Juno release.

First off, some words about the mechanics of the meetup. The meetup was held in Beaverton, Oregon at an Intel campus. Many thanks to Intel for hosting the event -- it is much appreciated. We discussed possible locations and attendance for future mid-cycle meetups, and the consensus is that these events should "always" be in the US because that's where the vast majority of our developers are. We will consider other host countries when the mix of Nova developers changes. Additionally, we talked about the expectations of attendance at these events. The Icehouse mid-cycle was an experiment, but now that we've run two of these I think they're clearly useful events. I want to be clear that we expect nova-drivers members to attend these events if at all possible, and strongly prefer to have all nova-cores at the event.

I understand that sometimes life gets in the way, but that's the general expectation. To assist with this, I am going to work on advertising these events much earlier than we have in the past to give time for people to get travel approval. If any core needs me to go to the Foundation and ask for travel assistance, please let me know.

I think that co-locating the event with the Ironic and Containers teams helped us a lot this cycle too. We can't co-locate with every other team working on OpenStack, but I'd like to see us pick a couple of teams -- who we might be blocking -- each cycle and invite them to co-locate with us. It's easy at this point for Nova to become a blocker for other projects, and we need to be careful not to get in the way unless we absolutely need to.

The process for each of the three days: we met at Intel at 9am, and started each day by trying to cherry pick the most important topics from our grab bag of items at the top of the etherpad. I feel this worked really well for us.

Reviewer burnout

We started off talking about core reviewer burnout, and what we expect from core. We've previously been clear that we expect a minimum level of reviews from cores, but we are increasingly concerned about keeping cores "on the same page". The consensus is that, at least, cores should be expected to attend summits. There is a strong preference for cores making it to the mid-cycle if at all possible. It was agreed that I will approach the OpenStack Foundation and request funding for cores who are experiencing budget constraints if needed. I was asked to communicate these thoughts on the openstack-dev mailing list. This openstack-dev mailing list thread is me completing that action item.

The conversation also covered whether it was reasonable to make trivial updates to a patch that was close to being acceptable. For example, consider a patch which is ready to merge apart from its commit message needing a trivial tweak. It was agreed that it is reasonable for the second core reviewer to fix the commit message, upload a new version of the patch, and then approve that for merge. It is a good idea to leave a note in the review history about this when these cases occur.

We expect cores to use their judgement about what is a trivial change.

I have an action item to remind cores that this is acceptable behavior. I'm going to hold off on sending that email for a little bit because there are a couple of big conversations happening about Nova on openstack-dev. I don't want to drown people in email all at once.

Juno release

We also took a look at the Juno release, with j-3 rapidly approaching. One outcome was to try to find a way to focus reviewers on landing code that is a project priority. At the moment we signal priority with the priority field in the launchpad blueprint, which can be seen in action for j-3 here. However, high-priority code often slips away because we currently let reviewers review whatever seems important to them.

There was talk about picking project sponsored "themes" for each release -- with the obvious examples being "stability" and "features". One problem here is that we haven't had a lot of luck convincing developers and reviewers to actually work on things we've specified as project goals for a release. The focus needs to move past specific features important to reviewers. Contributors and reviewers need to spend time fixing bugs and reviewing priority code. The harsh reality is that this hasn't been a glowing success.

One solution we're going to try is using more of the Nova weekly meeting to discuss the status of important blueprints. The meeting discussion should then be turned into a reminder on openstack-dev of the current important blueprints in need of review. The side effect of rearranging the weekly meeting is that we'll have less time for the current sub-team updates, but people seem ok with that.

A few people have also suggested various interpretations of a "review day". One interpretation is a rotation through nova-core of reviewers who spend a week of their time reviewing blueprint work. I think these ideas have merit. I have an action item to call for volunteers to sign up for blueprint-focused reviewing.

Conclusion

As I mentioned earlier, this is the first in a series of posts. In this post I've tried to cover social aspects of nova -- the mechanics of the Nova Juno mid-cycle meetup, and reviewer burnout -- and our current position in the Juno release cycle. There was also discussion of how to manage our workload in Kilo, but I'll leave that for another post. It's already been alluded to on the openstack-dev mailing list in this post and the subsequent proposal in gerrit. If you're dying to know more about what we talked about, don't forget the relatively comprehensive notes in our etherpad.

Tags for this post: openstack juno nova mid-cycle summary core review social
Related posts: Michael's surprisingly unreliable predictions for the Havana Nova release; More reviews; Book reviews; Juno Nova PTL Candidacy; What US address should I give?; Working on review comments for Chapters 2, 3 and 4 tonight


Syndicated 2014-08-13 23:57:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

12 Aug 2014 proclus   » (Master)

uuidgen

I really like uuidgen. Highly recommended.
This is what it makes: f474ea5a-17eb-408a-abe3-d6680a4f6fba


Regards,
proclus
http://www.gnu-darwin.org/

Syndicated 2014-08-12 21:27:00 (Updated 2014-08-12 21:23:03) from proclus

12 Aug 2014 proclus   » (Master)

GDAG

GNU-Darwin Action Group is operated by The GNU-Darwin Distribution.
All suggestions welcome. Free is a verb!

Regards,
proclus
http://www.gnu-darwin.org/

Syndicated 2014-08-12 21:23:00 (Updated 2014-08-12 21:19:52) from proclus

12 Aug 2014 proclus   » (Master)

GNU-Darwin Action Group notifications

For Action Group notifications, stay tuned to this channel.
Status updates are available from the following sources.

https://twitter.com/gnudarwin
https://hotpump.net/gnudarwin
https://identi.ca/proclus

Regards,
Michael L. Love (proclus)

Syndicated 2014-08-12 19:55:00 (Updated 2014-08-12 19:51:07) from proclus

12 Aug 2014 Rich   » (Master)

O Captain, my Captain

O Captain, my Captain is the poem that got me started reading Walt Whitman - one of many works mentioned in Dead Poets Society that got me reading particular authors. Not exactly Whitman's most cheerful work.

Mom used to tell stories of her grandma Nace (my great-grandmother) throwing apples at crazy old Walt Whitman as he went for his daily walk near his home in Camden. The kids of the town thought that he was a crazy old man. But he was a man who took his personal tragedies - mostly having to do with his brothers - and turned them into beauty, poetry, and a lifetime of service to the wounded of the Civil War.

And now, when so many people are quoting "O Captain, My Captain" in reference to Robin Williams, I have to wonder if they've read past the first line - it is a deeply tragic poem about the death of President Lincoln, in which he is imagined as a ship captain who doesn't quite make it into harbor after his great victories. Chillingly apropos of yesterday's tragic end to the brilliant career of Robin Williams.

Exult O shores, and ring O bells!
But I with mournful tread,
Walk the deck my Captain lies,
Fallen cold and dead.

Syndicated 2014-08-12 12:45:30 from Notes In The Margin

12 Aug 2014 Skud   » (Master)

The Pathway to Inclusion

Lately I’ve been working on how to make groups, events, and projects more inclusive. This goes beyond diversity — having a demographic mix of participants — and gets to the heart of how and why people get involved, or don’t get involved, with things.

As I see it, there are six steps everyone needs to pass through, to get from never having heard of a thing to being deeply involved in it.

pathway to inclusion - see below for transcript and more details

These six steps happen in chronological order, starting from someone who knows nothing about your thing.

Awareness

“I’ve heard of this thing.” Perhaps I’ve seen mention of it on social media, or heard a friend talking about it. This is the first step to becoming involved: I have to be aware of your thing to move on to the following stages.

Understanding

“I understand what this is about.” The next step is for me to understand what your thing is, and what it might be like for me to be involved. Here’s where you get to be descriptive. Anything from your thing’s name, to the information on the website, to the language and visuals you use in your promotional materials can help me understand.

Identification

“I can see myself doing this.” Once I understand what your thing is, I’ll make a decision about whether or not it’s for me. If you want to be inclusive, your job here is to make sure that I can imagine myself as part of your group/event/project, by showing how I could use or benefit from what it offers, or by showing me other people like me who are already involved.

Access

“I can physically, logistically, and financially do this.” Here we’re looking at where and when your thing occurs, how much it costs, how much advance notice is given, physical accessibility (for people with disabilities or other such needs), childcare, transportation, how I would actually sign up for the thing, and how all of these interact with my own needs, schedule, finances, and so on.

Belonging

“I feel like I fit in here.” Assuming I get to this stage and join your thing, will I feel like I belong and am part of it? This is distinct from “identification” because identification is about imagining the future, while belonging is about my experience of the present. Are the organisers and other participants welcoming? Is the space safe? Are activities and facilities designed to support all participants? Am I feeling comfortable and having a good time?

Ownership

“I care enough to take responsibility for this.” If I belong, and have been involved for a while, I may begin to take ownership or responsibility. For instance, I might volunteer my time or skills, serve on the leadership team, or offer to run an activity. People in ownership roles are well placed to make sure that others make it through the inclusion pathway, to belonging and ownership.


If you’re interested in participating in an inclusivity workshop or would like to hire me to help your group, project, or event be more inclusive, get in touch.

Syndicated 2014-08-12 00:42:32 from Infotropism

11 Aug 2014 marnanel   » (Journeyer)

"miracle in the alcohol aisle" etc

...whether it's funny is orthogonal to the problem with Takei's "miracle in the alcohol aisle" joke-- that it reinforces a false idea people believe uncritically, which causes them to hurt vulnerable people.

Here's a parallel: I have a large number of books of jokes going back well over a hundred years. Some of them contain, for example, Jewish jokes (iirc there's a section called "Told Against Our Friend The Jew", which shows they already knew about the problem and printed them anyway). Some of these jokes may well be pants-wettingly hilarious for all I care; antisemitism still flourishes, and I'm still not using that material. It's the punching up vs punching down distinction.

[a comment I have left in more than one discussion today; I am starting to think of it as "Told Against Our Friend The Jew" for short.]

This entry was originally posted at http://marnanel.dreamwidth.org/308748.html. Please comment there using OpenID.

Syndicated 2014-08-11 17:56:19 (Updated 2014-08-11 18:17:13) from Monument

11 Aug 2014 mbrubeck   » (Journeyer)

Let's build a browser engine!

This is the second in a series of articles on building a toy browser rendering engine.

This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

A Simple HTML Dialect

I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

<html>
    <body>
        <h1>Title</h1>
        <div id="main" class="test">
            <p>Hello <em>world</em>!</p>
        </div>
    </body>
</html>

  

The following syntax is allowed:

  • Balanced tags: <p>...</p>
  • Attributes with quoted values: id="main"
  • Text nodes: <em>world</em>

Everything else is unsupported, including:

  • Namespaces: <html:body>
  • Self-closing tags: <br/> or <br> with no closing tag
  • Character encoding detection.
  • Escaped characters (like &amp;) and CDATA blocks.
  • Comments, processing instructions, and doctype declarations.
  • Error handling (e.g. unbalanced or improperly nested tags).

At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

Example Code

Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

struct Parser {
    pos: uint,
    input: String,
}

  

We can use this to implement some simple methods for peeking at the next characters in the input:

impl Parser {
    /// Read the next character without consuming it.
    fn next_char(&self) -> char {
        self.input.as_slice().char_at(self.pos)
    }

    /// Do the next characters start with the given string?
    fn starts_with(&self, s: &str) -> bool {
        self.input.as_slice().slice_from(self.pos).starts_with(s)
    }

    /// Return true if all input is consumed.
    fn eof(&self) -> bool {
        self.pos >= self.input.len()
    }

    // ...
}

  

Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

    /// Return the current character, and advance to the next character.
    fn consume_char(&mut self) -> char {
        let range = self.input.as_slice().char_range_at(self.pos);
        self.pos = range.next;
        range.ch
    }

  

Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

    /// Consume characters until `test` returns false.
    fn consume_while(&mut self, test: |char| -> bool) -> String {
        let mut result = String::new();
        while !self.eof() && test(self.next_char()) {
            result.push_char(self.consume_char());
        }
        result
    }

  

We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

    /// Consume and discard zero or more whitespace characters.
    fn consume_whitespace(&mut self) {
        self.consume_while(|c| c.is_whitespace());
    }

    /// Parse a tag or attribute name.
    fn parse_tag_name(&mut self) -> String {
        self.consume_while(|c| match c {
            'a'..'z' | 'A'..'Z' | '0'..'9' => true,
            _ => false
        })
    }

  

Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

    /// Parse a single node.
    fn parse_node(&mut self) -> dom::Node {
        match self.next_char() {
            '<' => self.parse_element(),
            _   => self.parse_text()
        }
    }

    /// Parse a text node.
    fn parse_text(&mut self) -> dom::Node {
        dom::text(self.consume_while(|c| c != '<'))
    }

  

An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

    /// Parse a single element, including its open tag, contents, and closing tag.
    fn parse_element(&mut self) -> dom::Node {
        // Opening tag.
        assert!(self.consume_char() == '<');
        let tag_name = self.parse_tag_name();
        let attrs = self.parse_attributes();
        assert!(self.consume_char() == '>');

        // Contents.
        let children = self.parse_nodes();

        // Closing tag.
        assert!(self.consume_char() == '<');
        assert!(self.consume_char() == '/');
        assert!(self.parse_tag_name() == tag_name);
        assert!(self.consume_char() == '>');

        dom::elem(tag_name, attrs, children)
    }

  

Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

    /// Parse a single name="value" pair.
    fn parse_attr(&mut self) -> (String, String) {
        let name = self.parse_tag_name();
        assert!(self.consume_char() == '=');
        let value = self.parse_attr_value();
        (name, value)
    }

    /// Parse a quoted value.
    fn parse_attr_value(&mut self) -> String {
        let open_quote = self.consume_char();
        assert!(open_quote == '"' || open_quote == '\'');
        let value = self.consume_while(|c| c != open_quote);
        assert!(self.consume_char() == open_quote);
        value
    }

    /// Parse a list of name="value" pairs, separated by whitespace.
    fn parse_attributes(&mut self) -> dom::AttrMap {
        let mut attributes = HashMap::new();
        loop {
            self.consume_whitespace();
            if self.next_char() == '>' {
                break;
            }
            let (name, value) = self.parse_attr();
            attributes.insert(name, value);
        }
        attributes
    }

  

To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

    /// Parse a sequence of sibling nodes.
    fn parse_nodes(&mut self) -> Vec<dom::Node> {
        let mut nodes = vec!();
        loop {
            self.consume_whitespace();
            if self.eof() || self.starts_with("</") {
                break;
            }
            nodes.push(self.parse_node());
        }
        nodes
    }

  

Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

/// Parse an HTML document and return the root element.
pub fn parse(source: String) -> dom::Node {
    let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();

    // If the document contains a root element, just return it. Otherwise, create one.
    if nodes.len() == 1 {
        nodes.swap_remove(0).unwrap()
    } else {
        dom::elem("html".to_string(), HashMap::new(), nodes)
    }
}

  

That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.

Exercises

Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

  1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

  2. Modify robinson’s HTML parser to add some missing features, like comments (one way to skip them is sketched just after this list). Or replace it with a better parser, perhaps built with a library or generator.

  3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.
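
If you fancy the comment part of exercise 2, here is one way to skip <!-- ... --> comments. This is a standalone sketch that works on a plain string slice and a byte offset rather than on the Parser struct above, and the function name is mine rather than anything from robinson.

// Skip a leading "<!-- ... -->" comment, returning the new position.
fn skip_comment(input: &str, pos: usize) -> usize {
    let rest = &input[pos..];
    if let Some(body) = rest.strip_prefix("<!--") {
        match body.find("-->") {
            // Jump past "<!--", the comment body, and the closing "-->".
            Some(end) => pos + 4 + end + 3,
            // Unterminated comment: treat the rest of the input as the comment.
            None => input.len(),
        }
    } else {
        pos
    }
}

fn main() {
    let html = "<!-- a comment --><p>Hello</p>";
    let pos = skip_comment(html, 0);
    assert_eq!(&html[pos..], "<p>Hello</p>");
    println!("remaining input: {}", &html[pos..]);
}

One natural place to hook something like this into the parser is parse_nodes, just after the whitespace is consumed and before each parse_node call.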

Shortcuts

If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

// <html><body>Hello, world!</body></html>
let root = element("html");
let body = element("body");
root.children.push(body);
body.children.push(text("Hello, world!"));

  

Or you can find an existing HTML parser and incorporate it into your program.

The next article in this series will cover CSS data structures and parsing.

Syndicated 2014-08-11 15:00:00 from Matt Brubeck

13 Aug 2014 mikal   » (Journeyer)

More turning

Some more pens, and then I went back to bowls for a bit.

The attraction of pens is that I can churn out a pen in about 30 minutes, whereas a bowl can take twice that. Therefore when I have a small chance to play in the garage I'll do a pen, whereas when I have more time I might do a bowl.

           

Tags for this post: wood turning 20140718-woodturning photo


Syndicated 2014-08-12 22:24:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

10 Aug 2014 jas   » (Master)

Wifi on S3 with Replicant

I’m using Replicant on my main phone. As I’ve written before, I didn’t get Wifi to work. The other day leth in #replicant pointed me towards a CyanogenMod discussion about a similar issue. The fix does indeed work, and allowed me to connect to wifi networks and to set up my phone for Internet sharing. Digging deeper, I found a CM Jira issue about it, and ultimately a code commit. It seems the issue is that more recent S3s come with a Murata Wifi chipset that uses MAC addresses not known back in the Android 4.2 (CM-10.1.3 and Replicant-4.2) days. Pulling in the latest fixes for macloader.cpp solves this problem for me, although I still need to load the non-free firmware images that I get from CM-10.1.3. I’ve created a pull request fixing macloader.cpp for Replicant 4.2 if someone else is curious about the details. You have to rebuild your OS with the patch for things to work (if you don’t want to, the workaround using /data/.cid.info works fine), and install some firmware blobs as below.

adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/

flattr this!

Syndicated 2014-08-10 18:02:37 from Simon Josefsson's blog

9 Aug 2014 vicious   » (Master)

Numbers again

I like to get fake enraged when people get caught up in very silly misunderstanding of numbers. And often this misunderstanding is used by politicians, extremists, and others to manipulate those people.

The most recent and prominent example is the Israel-Gaza conflict. The story is that Hamas rockets are endangering Israeli lives. OK, yes they do, but you have to always look at the scale. Since 2001 (so in the last 13 years), there have been 40 deaths in Israel from Gaza cross border rocket and mortar fire [1]. 13 of those are Soldiers, and one could make the case that those were military targets, but let’s ignore that and count 40. Of those fatalities, 23 happened during one of the operations designed to remove the rocket threat. One could argue (though I won’t, there won’t be a need) that without those operations, only 17 Israelis would have died.

OK, so 40 people in 13 years. That’s approximately 3 deaths per year. When I recently read an article about toxicity of mushrooms [2], the person being interviewed (the toxicologist) made an argument that mushroom picking in Czech is not dangerous, only 2-3 deaths per year (and the vast majority of Czechs will go mushroom picking every year, even we do it when we’re over there; it’s a Czech thing). Clearly it is not enough to worry the toxicologist. And Czech is about the same size as Israel in terms of population.

When my granddad (a nuclear physicist) with his team computed the number of new cancers due to Chernobyl in Czech after the disaster, they predicted 200 a year. Trouble was they couldn’t verify it, because 200 a year gets hidden in the noise. In a country of 10 million, if we take an 80-year lifespan, 125,000 people will die every year of various causes, a large proportion of them to cancer (many of them preventable cancer that the state does not try to prevent since it would mean unpopular policies). So 3 deaths a year is completely lost in the noise. So if Israel took the money it spends on fighting Hamas (even if we buy into the propaganda that this will defeat Hamas) and spent it on an anti-smoking campaign instead, it would likely save far more Israeli lives every year.

Another statistic to look at is car accidents: In Israel approximately 263 people died on the road in one year. So your chances of dying in a car accident are almost 100 times larger. In fact, deaths by lightning in the US average 51 per year [4]. Rescaling to Israel (I could not find Israel numbers) you get about 1.4 a year. That’s only slightly less than what Hamas kills in a year. In fact it’s about how many civilians they kill. So your chances as an Israeli civilian of being killed by a rocket or mortar are about the same as being killed by lightning. Not being struck by lightning by the way, because only 1 in 5 (approximately) of those struck die. So your chances of being struck by lightning are bigger, let’s say approx 6-7 people in Israel will get struck by lightning every year.

An argument could be made that most Israelis killed were in Sderot (I count 8), so your chances of being killed there are bigger (and correspondingly, your chances of being killed by a Hamas rocket based on current data if you are in say Tel Aviv are zero). Anyway, 8 in 13 years is about 0.6 a year in Sderot. Rescaling (based on population) the car deaths to estimate the number of deaths per year in Sderot we get approximately 0.8 deaths per year in Sderot. So your chances of dying in a car crash are higher, even if you live in Sderot (your chances of dying from cancer are much much higher).

The politicians supporting Israel’s actions often don’t worry about logical contradictions stemming from the above facts. When Bloomberg visited Israel he lambasted the fact that some airlines stopped flying to Tel-Aviv on rocket fears. He said that he never felt safer there. Well, if he never felt safer, why is Gaza being bombed?

None of this in any way is an apology for Hamas. Hamas is a terrorist organization that aims to kill civilians. But Hamas is almost comically incompetent at doing so. If it weren’t a sad thing, we ought to laugh at Hamas. Anyway, clearly an incompetent murderer is still a murderer if he manages to kill even one person. The question is what lengths you should go to to apprehend him.

Now the costs. Gaza has 1.8 million people. One should expect about 20-30 thousand natural deaths per year. This year, Israel kills 2000 Gazans. So very approximately about 1 in 10 to 1 in 15 Gazans who dies this year will die at Israel’s hands. Now think what that does to the ability of Hamas, or even far more radical groups, to recruit. My grandmother grew up in the post-WWII years, so she has not lived through WWII as an adult, yet she harbored deep hatred of all Germans. This was common in Europe. People used to hate each other, and then every once in a while they would attempt to kill each other. If you are on the receiving side of the killing (and only your relatives get killed), you are very likely to have this deep hatred that could very well be used by extremists (think back to the Balkans and especially Bosnia). It takes generations to get rid of it. I don’t resent the Poles or the Ukrainians even though I had Galician Jewish ancestors who probably had no love for either. But 70 years ago, these three groups managed to literally destroy the whole region by killing each other (well, the Germans and Russians helped greatly in the endeavor). There are probably very few people in the region whose family actually comes from there.

The point is that the cost of the operation, besides the moral outrage of killing thousands of people, is pushing back the date when Israel can live in peace with its neighbors.

Another cost to Israel is the rise of anti-semitism. Just like violent actions by Muslim extremists created a wave of anti-Muslim sentiment in the West, violent acts by Israel will only strengthen anti-semitic forces. You are giving them perfect recruiting stories. If they had any doubts, they don’t have them now. Maybe as a positive aspect, it will justify the Israeli contention that hatred of Israel is based on anti-semitism, since by creating anti-semitic sentiment, yes, there will be more of it. The fact that the deputy speaker of parliament in Israel calls for the conquest of Gaza, forcefully removing the Gazans into “tent camps” and then out of Israel [5], does not help. Maybe he should have called it a “final solution” to the problem. Surely there would be no problem with that phrase.

Another thing about numbers is that Israel depends on America giving it cover (and weapons). Israel does not realize that American opinion is shifting (in part due to its own actions, in part since these things always shift in time). See [6]. Basically, once the young of today are the older folks of tomorrow, Israel won’t be seen the way it is now. It might be that the US will also shift towards some other group as being important in American politics. Given the growth in the Latino population, it should be clear that Israel will at some point stop being the priority for many politicians. Given that we will also approach the world oil peak, and that US oil will once again start running out once we’ve fracked out what we could frack out, oil will again become more important. And Israel does not have oil. Israel is religiously important, but recall that Americans are mostly Protestant Christians, not Jews, so it’s not clear where that will go (looking at the history of that relationship is not very encouraging). A large percentage of Americans thinks the world is only a few thousand years old and the end of times will come within their lifetime and the battle of Armageddon will come, and blah blah blah. Who knows what that does to long-term foreign policy.

Remember I am talking about decades not years. One should worry about what happens in 20, 30, 40 years. You know, many of us will still be around then, so even from a very selfish perspective, one should plan 40 years ahead. Let alone if one is not a selfish bastard.

Then finally there is this moral thing about killing others … Let’s not get into morals of the situation, that’s seriously f@#ked up.

[1] http://mondoweiss.net/2014/07/rocket-deaths-israel.html
[2] http://www.lidovky.cz/muchomurku-cervenou-fasovali-vojaci-v-bitvach-misto-alkoholu-p63-/media.aspx?c=A140807_154055_ln-media_sho
[3] http://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate
[4] http://www.lightningsafety.noaa.gov/fatalities.htm
[5] http://www.dailymail.co.uk/news/article-2715466/Israeli-official-calls-concentration-camps-Gaza-conquest-entire-Gaza-Strip-annihilation-fighting-forces-supporters.html
[6] http://www.mintpressnews.com/latest-gallup-poll-shows-young-americans-overwhelmingly-support-palestine/194856/


Syndicated 2014-08-09 16:43:30 from The Spectre of Math

9 Aug 2014 etbe   » (Master)

Being Obviously Wrong About Autism

I’m watching a Louis Theroux documentary about Autism (here’s the link to the BBC web site [1]). The main thing that strikes me so far (after watching 7.5 minutes of it) is the bad design of the DLC-Warren school for Autistic kids in New Jersey [2].

A significant portion of people on the Autism Spectrum have problems with noisy environments; whether most Autistic people have problems with noise depends on what degree of discomfort is considered a problem. But I think it’s safest to assume that the majority of kids on the Autism Spectrum will behave better in a quiet environment. So any environment that is noisy will cause more difficult behavior in most Autistic kids, and the kids who don’t have problems with the noise will have problems with the way the other kids act. Any environment that is more prone to noise pollution than is strictly necessary is hostile to most people on the Autism Spectrum and all groups of Autistic people.

The school that is featured in the start of the documentary is obviously wrong in this regard. For starters I haven’t seen any carpet anywhere. Carpeted floors are slightly more expensive than lino but the cost isn’t significant in terms of the cost of running a special school (such schools are expensive by private-school standards). But carpet makes a significant difference to ambient noise.

Most of the footage from that school included obvious echoes even though they had an opportunity to film when there was the least disruption – presumably noise pollution would be a lot worse when a class finished.

It’s not difficult to install carpet in all indoor areas in a school. It’s also not difficult to install rubber floors in all outdoor areas in a school (it seems that most schools are doing this already in play areas for safety reasons). For a small amount of money spent on installing and maintaining noise absorbing floor surfaces the school could achieve better educational results. The next step would be to install noise absorbing ceiling tiles and wallpaper, that might be a little more expensive to install but it would be cheap to maintain.

I think that the hallways in a school for Autistic kids should be as quiet as the lobby of a 5 star hotel. I don’t believe that there is any technical difficulty in achieving that goal, making a school look as good as an expensive hotel would be expensive but giving it the same acoustic properties wouldn’t be difficult or expensive.

How do people even manage to be so wrong about such things? Do they never seek any advice from any adult on the Autism Spectrum about how to run their school? Do they avoid doing any of the most basic Google searches for how to create a good environment for Autistic people? Do they just not care at all and create an environment that looks good to NTs? If they are just trying to impress NTs then why don’t they have enough pride to care that people like me will know how bad they are? These aren’t just rhetorical questions, I’d like to know what’s wrong with those people that makes them do their jobs in such an amazingly bad way.

Related posts:

  1. Autism, Food, etc James Purser wrote “Stop Using Autism to Push Your Own...
  2. Autism and a Child Beauty Contest Fenella Wagener wrote an article for the Herald Sun about...
  3. Autism Awareness and the Free Software Community It’s Autism Awareness Month April is Autism Awareness month, there...

Syndicated 2014-08-09 16:01:46 from etbe - Russell Coker

9 Aug 2014 Stevey   » (Master)

Rebooting the CMS

I run a cluster for the Debian Administration website, and the code is starting to show its age. Unfortunately the code is not so modern, and has evolved a lot of baggage.

Given the relatively clean separation between the logical components I'm interested in trying something new. In brief the current codebase allows:

  • Posting of articles, blog-entries, and polls.
  • The manipulation of the same.
  • User-account management.

It crossed my mind the other night that it might make sense to break this code down into a number of mini-servers - a server to handle all article-related things, a server to handle all poll-related things, etc.

If we have a JSON endpoint that will allow:

  • GET /article/32
  • POST /article/ [create]
  • GET /articles/offset/number [get the most recent]

Then we could have a very thin shim/server on top of that which would present the public API. Of course the internal HTTP overhead might make this unworkable, but it is an interesting approach to the problem, and would allow the backend storage to be migrated in the future without too much difficulty.
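
To make that concrete, here is a minimal sketch of what one such mini-server could look like, using nothing but the Rust standard library. It is not the code from the snooze repository; the port, the JSON shape, and the handle/main structure are all illustrative assumptions.

use std::io::{BufRead, BufReader, Write};
use std::net::{TcpListener, TcpStream};

// Answer "GET /article/<id>" with a canned JSON document; anything else gets a 404.
// This handles one request per connection and ignores the request headers entirely.
fn handle(stream: TcpStream) -> std::io::Result<()> {
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut request_line = String::new();
    reader.read_line(&mut request_line)?;

    // The request line looks like "GET /article/32 HTTP/1.1".
    let path = request_line.split_whitespace().nth(1).unwrap_or("/");

    let (status, body) = if let Some(id) = path.strip_prefix("/article/") {
        ("200 OK",
         format!("{{\"id\": \"{}\", \"title\": \"placeholder article\"}}", id))
    } else {
        ("404 Not Found", "{\"error\": \"unknown endpoint\"}".to_string())
    };

    let mut stream = stream;
    write!(stream,
           "HTTP/1.1 {}\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
           status, body.len(), body)
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8181")?;
    for stream in listener.incoming() {
        handle(stream?)?;
    }
    Ok(())
}

A request such as GET /article/32 would get back the canned JSON; a real version would look the article up in whatever backend storage sits behind it.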

At the moment I've coded up two trivial servers, one for getting user-data (to allow login requests to succeed), and one for getting article data.

There is a tiny presentation server written to use those back-end servers and it seems like an approach that might work. Of course deployment might be a pain..

It is still an experiment rather than a plan, but it could work out: http://github.com/skx/snooze/.

Syndicated 2014-08-09 08:59:51 from Steve Kemp's Blog

8 Aug 2014 mbrubeck   » (Journeyer)

Let's build a browser engine!

I’m building a toy HTML rendering engine, and I think you should too. This is the first in a series of articles describing my project and how you can make your own. But first, let me explain why.

You’re building a what?

Let’s talk terminology. A browser engine is the portion of a web browser that works “under the hood” to fetch a web page from the internet, and translate its contents into forms you can read, watch, hear, etc. Blink, Gecko, WebKit, and Trident are browser engines. In contrast, the browser’s own UI—tabs, toolbar, menu and such—is called the chrome. Firefox and SeaMonkey are two browsers with different chrome but the same Gecko engine.

A browser engine includes many sub-components: an HTTP client, an HTML parser, a CSS parser, a JavaScript engine (itself composed of parsers, interpreters, and compilers), and much more. The many components involved in parsing web formats like HTML and CSS and translating them into what you see on-screen are sometimes called the layout engine or rendering engine.

Why a “toy” rendering engine?

A full-featured browser engine is hugely complex. Blink, Gecko, WebKit—these are millions of lines of code each. Even younger, simpler rendering engines like Servo and WeasyPrint are each tens of thousands of lines. Not the easiest thing for a newcomer to comprehend!

Speaking of hugely complex software: If you take a class on compilers or operating systems, at some point you will probably create or modify a “toy” compiler or kernel. This is a simple model designed for learning; it may never be run by anyone besides the person who wrote it. But making a toy system is a useful tool for learning how the real thing works. Even if you never build a real-world compiler or kernel, understanding how they work can help you make better use of them when writing your own programs.

So, if you want to become a browser developer, or just to understand what happens inside a browser engine, why not build a toy one? Like a toy compiler that implements a subset of a “real” programming language, a toy rendering engine could implement a small subset of HTML and CSS. It won’t replace the engine in your everyday browser, but should nonetheless illustrate the basic steps needed for rendering a simple HTML document.

Try this at home.

I hope I’ve convinced you to give it a try. This series will be easiest to follow if you already have some solid programming experience and know some high-level HTML and CSS concepts. However, if you’re just getting started with this stuff, or run into things you don’t understand, feel free to ask questions and I’ll try to make it clearer.

Before you start, a few remarks on some choices you can make:

On Programming Languages

You can build a toy layout engine in any programming language. Really! Go ahead and use a language you know and love. Or use this as an excuse to learn a new language if that sounds like fun.

If you want to start contributing to major browser engines like Gecko or WebKit, you might want to work in C++ because it’s the main language used in those engines, and using it will make it easier to compare your code to theirs. My own toy project, robinson, is written in Rust. I’m part of the Servo team at Mozilla, so I’ve become very fond of Rust programming. Plus, one of my goals with this project is to understand more of Servo’s implementation. (I’ve written a lot of browser chrome code, and a few small patches for Gecko, but before joining the Servo project I knew nothing about many areas of the browser engine.) Robinson sometimes uses simplified versions of Servo’s data structures and code. If you too want to start contributing to Servo, try some of the exercises in Rust!

On Libraries and Shortcuts

In a learning exercise like this, you have to decide whether it’s “cheating” to use someone else’s code instead of writing your own from scratch. My advice is to write your own code for the parts that you really want to understand, but don’t be shy about using libraries for everything else. Learning how to use a particular library can be a worthwhile exercise in itself.

I’m writing robinson not just for myself, but also to serve as example code for these articles and exercises. For this and other reasons, I want it to be as tiny and self-contained as possible. So far I’ve used no external code except for the Rust standard library. (This also side-steps the minor hassle of getting multiple dependencies to build with the same version of Rust while the language is still in development.) This rule isn’t set in stone, though. For example, I may decide later to use a graphics library rather than write my own low-level drawing code.

Another way to avoid writing code is to just leave things out. For example, robinson has no networking code yet; it can only read local files. In a toy program, it’s fine to just skip things if you feel like it. I’ll point out potential shortcuts like this as I go along, so you can bypass steps that don’t interest you and jump straight to the good stuff. You can always fill in the gaps later if you change your mind.

First Step: The DOM

Are you ready to write some code? We’ll start with something small: data structures for the DOM. Let’s look at robinson’s dom module.

The DOM is a tree of nodes. A node has zero or more children. (It also has various other attributes and methods, but we can ignore most of those for now.)

struct Node {
    // data common to all nodes:
    children: Vec<Node>,

    // data specific to each node type:
    node_type: NodeType,
}

  

There are several node types, but for now we will ignore most of them and say that a node is either an Element or a Text node. In a language with inheritance these would be subtypes of Node. In Rust they can be an enum (Rust’s keyword for a “tagged union” or “sum type”):

enum NodeType {
    Text(String),
    Element(ElementData),
}

  

An element includes a tag name and any number of attributes, which can be stored as a map from names to values. Robinson doesn’t support namespaces, so it just stores tag and attribute names as simple strings.

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

type AttrMap = HashMap<String, String>;

  

Finally, some constructor functions to make it easy to create new nodes:

impl Node {
    fn new(children: Vec<Node>, node_type: NodeType) -> Node {
        Node { children: children, node_type: node_type }
    }
}

fn text(data: String) -> Node {
    Node::new(vec!(), Text(data))
}

fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
    Node::new(children, Element(ElementData {
        tag_name: name,
        attributes: attrs,
    }))
}

  

And that’s it! A full-blown DOM implementation would include a lot more data and dozens of methods, but this is all we need to get started. In the next article, we’ll add a parser that turns HTML source code into a tree of these DOM nodes.

Exercises

These are just a few suggested ways to follow along at home. Do the exercises that interest you and skip any that don’t.

  1. Start a new program in the language of your choice, and write code to represent a tree of DOM text nodes and elements.

  2. Install the latest version of Rust, then download and build robinson. Open up dom.rs and extend NodeType to include additional types like comment nodes.

  3. Write code to pretty-print a tree of DOM nodes (a minimal sketch of one approach follows just below).
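
For exercise 3, here is a minimal sketch of a pretty-printer. It reuses the Node, NodeType and ElementData shapes from this article, but it is written in current Rust syntax rather than the 2014-era syntax of the snippets above, so treat it as a starting point rather than drop-in robinson code.

use std::collections::HashMap;

type AttrMap = HashMap<String, String>;

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

enum NodeType {
    Text(String),
    Element(ElementData),
}

struct Node {
    children: Vec<Node>,
    node_type: NodeType,
}

/// Print one node per line, indented two spaces per level of depth.
fn pretty_print(node: &Node, depth: usize) {
    let indent = "  ".repeat(depth);
    match &node.node_type {
        NodeType::Text(text) => println!("{}#text: {:?}", indent, text),
        NodeType::Element(elem) => println!("{}<{}> {:?}", indent, elem.tag_name, elem.attributes),
    }
    for child in &node.children {
        pretty_print(child, depth + 1);
    }
}

fn main() {
    // Build <html><body>Hello, world!</body></html> by hand and print it.
    let hello = Node { children: vec![], node_type: NodeType::Text("Hello, world!".to_string()) };
    let body = Node {
        children: vec![hello],
        node_type: NodeType::Element(ElementData {
            tag_name: "body".to_string(),
            attributes: AttrMap::new(),
        }),
    };
    let root = Node {
        children: vec![body],
        node_type: NodeType::Element(ElementData {
            tag_name: "html".to_string(),
            attributes: AttrMap::new(),
        }),
    };
    pretty_print(&root, 0);
}

Running it prints one line per node, indented by depth, which is usually enough to check by eye that a parser built the tree you expected.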

References

Here’s a short list of “small” open source web rendering engines. Most of them are many times bigger than robinson, but still way smaller than Gecko or WebKit. WebWhirr, at 2000 lines of code, is the only other one I would call a “toy” engine.

You may find these useful for inspiration or reference. If you know of any other similar projects—or if you start your own—please let me know!

Syndicated 2014-08-08 16:40:00 from Matt Brubeck

8 Aug 2014 marnanel   » (Journeyer)

Gentle Readers: fought and feared and felt

Gentle Readers
a newsletter made for sharing
volume 1, number 17
7th August 2014: we fought and feared and felt
What I’ve been up to

We are, more or less, properly moved to Salford now. There's a vanful of our stuff still in Oldham, and another vanful in Staines, due to assorted mishaps along the way, but at least Kit and I and Yantantessera are safely moved in. Sooner or later we'll go and pick the other stuff up, when times are more vannish-- and after all, what else does time do?

I apologise for another GR hiatus earlier this week: I was hit by a car while crossing the road, which caused a break in service, but fortunately no break in bones. My leg is quite impressively bruised, though.

A poem of mine

RETWEETED (T103)

Jill retweeted what I wrote,
forwarding to all her friends.
Time, you thief, who loves to gloat
over hopes and bitter ends,
say my loves and lines are bad,
say that life itself defeated me,
say I'm growing old, but add:
Jill retweeted me.

(After "Jenny kissed me" by James Leigh Hunt.)

A picture

http://gentlereaders.uk/pics/fb-teletext-100

http://gentlereaders.uk/pics/fb-teletext-220
 

 

 
Those who weren't around in the 1980s in the UK may need to know that this is a parodic representation of Facebook as if it had been around at the time of the BBC's much-loved CEEFAX service. Gentle reader Dan Sheppard sent me a link to a recording of CEEFAX On View for those who never saw it and those who'd like to refresh their memories.

Something from someone else

Some people will tell you that Rudyard Kipling was a cultural imperialist and a racist; these people have often not looked very hard into his work. The last line of this poem, a plea for cultural diversity, is quoted fairly often; I think the rest of the poem is worth reading too, and I'm afraid I habitually quote the last two stanzas at people far too often.

"Certified by Traill" is a sarcastic reference: when Tennyson died in 1892, there was some discussion as to who should be the new poet laureate, and a man named H. D. Traill wrote an article listing fifty possible contenders. He added Kipling's name as the fifty-first, as an afterthought.

IN THE NEOLITHIC AGE
by Rudyard Kipling

In the Neolithic Age, savage warfare did I wage
For food and fame and woolly horses' pelt.
I was singer to my clan in that dim red Dawn of Man,
And I sang of all we fought and feared and felt.

Yea, I sang as now I sing, when the Prehistoric spring
Made the piled Biscayan ice-pack split and shove;
And the troll and gnome and dwerg, and the Gods of Cliff and Berg
Were about me and beneath me and above.

But a rival, of Solutré, told the tribe my style was outré—
'Neath a tomahawk, of diorite, he fell.
And I left my views on Art, barbed and tanged, below the heart
Of a mammothistic etcher at Grenelle.

Then I stripped them, scalp from skull, and my hunting-dogs fed full,
And their teeth I threaded neatly on a thong;
And I wiped my mouth and said, "It is well that they are dead,
For I know my work is right and theirs was wrong."

But my Totem saw the shame; from his ridgepole-shrine he came,
And he told me in a vision of the night: —
"There are nine and sixty ways of constructing tribal lays,
And every single one of them is right!"

* * * *

Then the silence closed upon me till They put new clothing on me
Of whiter, weaker flesh and bone more frail;
And I stepped beneath Time's finger, once again a tribal singer,
And a minor poet certified by Traill!

Still they skirmish to and fro, men my messmates on the snow
When we headed off the aurochs turn for turn;
When the rich Allobrogenses never kept amanuenses,
And our only plots were piled in lakes at Berne.

Still a cultured Christian age sees us scuffle, squeak, and rage,
Still we pinch and slap and jabber, scratch and dirk;
Still we let our business slide— as we dropped the half-dressed hide—
To show a fellow-savage how to work.

Still the world is wondrous large— seven seas from marge to marge—
And it holds a vast of various kinds of man;
And the wildest dreams of Kew are the facts of Khatmandhu,
And the crimes of Clapham chaste in Martaban.

Here's my wisdom for your use, as I learned it when the moose
And the reindeer roamed where Paris roars to-night:—
"There are nine and sixty ways of constructing tribal lays,
And— every— single— one— of— them— is— right!"

Postscript from me: Though you know there came a day when they found another way, but rejected it— for "seventy" won't scan.

Colophon

Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://thomasthurman.org/gentle/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.

This entry was originally posted at http://marnanel.dreamwidth.org/308692.html. Please comment there using OpenID.

Syndicated 2014-08-07 23:28:42 from Monument

7 Aug 2014 Jordi   » (Master)

A pile of reasons why GNOME should be Debian jessie’s default desktop environment

GNOME has, for one reason or another, always been the default desktop environment in Debian, ever since the installer became able to install a full desktop environment by default. Release after release, Debian has shipped different versions of GNOME, first based on the venerable 1.2/1.4 series, then moving to the time-based GNOME 2.x series, and finally to the newly designed 3.4 series for the last stable release, Debian 7 ‘wheezy’.

During the final stages of wheezy’s development, it was pointed out that the first install CD image would no longer hold all of the required packages to install a full GNOME desktop environment. There was lots of discussion surrounding this bug or fact, and there were two major reactions to it. The Debian GNOME team rebuilt some key packages so they would be compressed using xz instead of gzip, saving the few megabytes that were needed to squeeze everything onto the first CD. In parallel, the tasksel maintainer decided that switching to Xfce as the default desktop was another obvious fix. This change, unannounced and made two days before the freeze, was very contested and spurred the usual massive debian-devel threads. In the end, and after a few default desktop flip-flops, it was agreed that GNOME would remain the default for the already frozen wheezy release, and that this issue would be revisited later on during jessie’s development.

And indeed, some months ago, Xfce was again reinstated as Debian’s default desktop for jessie as announced:

Change default desktop to xfce.

This will be re-evaluated before jessie is frozen. The evaluation will
start around the point of DebConf (August 2014). If at that point gnome
looks like a better choice, it’ll go back as the default.

Some criteria for that choice will include:

* Popcon numbers for gnome on jessie. If gnome installations continue to
  rise fast enough despite xfce being the default (compared with, say
  kde installations), then we’ll know that users prefer gnome.
  Currently we have no data about how many users would choose gnome when
  it’s not the default. Part of the reason for switching to xfce now
  is to get such data.

* The state of accessability support, particularly for the blind.

* How well the UI works for both new and existing users. Gnome 3
  seems to be adding back many gnome 2 features that existing users
  expect, as well as making some available via addons. If it feels
  comfortable to gnome 2 (and xfce) users, that would go a long way
  toward switching back to it as the default. Meanwhile, Gnome 3 is also
  breaking new ground in its interface; if the interface seems more
  welcoming to new users, or works better on mobile devices, etc, that
  would again point toward switching back.

* Whatever size constraints exist for CD or other images at the time.

--

Hello to all the tech journalists out there. This is pretty boring.
Why don’t you write a story about monads instead?

― Joey Hess in dfca406eb694e0ac00ea04b12fc912237e01c9b5.

Suffice it to say that the Debian GNOME team participants have never been thrilled about how the whole issue is being handled, and we’ve been wondering whether we should do anything about it, or just move along and enjoy the smaller number of bug reports against GNOME packages that this change would bring us, if it finally made it through to the final release. During our real-life meet-ups at FOSDEM and the systemd+GNOME sprint in Antwerp, most members of the team did feel Debian would not be delivering a graphical environment with the polish we think our users deserve, and decided we should at least try to convince the rest of the Debian project and our users that Debian will be best served by shipping GNOME 3.12 by default. Power users, of course, can and know how to get around this default and install KDE, Xfce, Cinnamon, MATE or whatever other choice they have. For the average user, though, we think we should be shipping GNOME by default, and tasksel should revert the above commit again. Some of our reasons are:

  • Accessibility: GNOME continues to be the only free desktop environment that provides full accessibility coverage, right from the login screen. While it’s true GNOME 3.0 was lacking in many areas, and GNOME 3.4 (which we shipped in wheezy) was just barely acceptable thanks to some last-minute GDM fixes, GNOME 3.12 should have ironed out all of those issues, and our non-expert understanding is that a11y support is now on par with what GNOME 2.30 from squeeze offered.
  • Downstream health: The number of active members in the team taking care of GNOME in Debian is around 5-10 persons, while it is 1-2 in the case of Xfce. Being the default desktop draws a lot of attention (and bug reports) that only a bigger team might have the resources to handle.
  • Upstream health: While GNOME is still committed to its time-based release schedule and ships new versions every 6 months, Xfce upstream is, unfortunately, struggling a bit more to keep up with new plumbing technology. Only very recently has it regained support for suspend/hibernate via logind, or for BlueZ 5.x, for example.
  • Community: GNOME is one of the biggest free software projects, and is lucky to have created an ecosystem of developers, documenters, translators and users that interact regularly in a live social community. Users and developers gather in hackfests and big, annual conferences like GUADEC, the Boston Summit, or GNOME.Asia. Only KDE has a comparable community, the rest of the free desktop projects don’t have the userbase or manpower to sustain communities like this.
  • Localization: Localization is more extensive and complete in GNOME. Xfce has 18 languages above 95% of coverage, and 2 at 100% (excluding English). GNOME has 28 languages above 95%, 9 of them being complete (excluding English).
  • Documentation: Documentation coverage is extensive in GNOME, with most of the core applications providing localized, up to date and complete manuals, available in an accessible format via the Help reader.
  • Integration: The level of integration between components is very high in GNOME. For example, instant messaging, agenda and accessibility components are an integral part of the desktop. GNOME is closely integrated to NetworkManager, PulseAudio, udisks and upower so that the user has access to all the plumbing in a single place. GNOME also integrates easily with online accounts and services (ownCloud, Google, MS Exchange…).
  • Hardware: GNOME 3.12 will be one of the few desktop environments to support HiDPI displays, now very common on some laptop models. Lack of support for HiDPI means non-technical users will get an unreadable desktop by default, and no hints on how to fix that.
  • Security: GNOME is more secure. No processes are launched with root permissions in the user’s session. All everyday operations (package management, disk partitioning and formatting, date/time configuration…) are accomplished through PolicyKit wrappers.
  • Privacy: One of the latest focuses of GNOME development is improving privacy, and work is being done to make it easy to run GNOME applications in isolated containers, to integrate Tor seamlessly into the desktop experience, to improve disk encryption support, and to add other features that should make GNOME a more secure desktop environment for end users.
  • Popularity: One of the metrics discussed by the tasksel change proponents was popcon numbers. 8 months after the desktop change, Xfce does not seem to have made a dent in install numbers. The Debian GNOME team doesn’t feel popcon’s data is any better than a random online poll though, as it’s an opt-in service which the vast majority of users don’t enable.
  • systemd embracing: One of the reasons to switch to Xfce was that it didn’t depend on systemd. But now that systemd is the default, that shouldn’t be a problem. Also given ConsoleKit is deprecated and dead upstream, KDE and Xfce are switching or are planning to switch to systemd/logind.
  • Adaptation: Debian forced a big desktop change with the wheezy release (switching from the traditional GNOME 2.x to the new GNOME Shell environment). Switching again would mean more adaptation for users when they’ve had two years to get used to GNOME 3.4. Furthermore, GNOME 3.12 means two years of improvements and polishing over GNOME 3.4, which should help with some of the rough edges found in the GNOME release shipped with wheezy.
  • Administration: GNOME is easy to administer. All the default settings can be defined by administrators, and mandatory settings can be forced on users, which is required in some companies and administrations; Xfce cannot do that. The close integration with freedesktop components (systemd, NM, PulseAudio…) also gives access to specific and useful administration tools.

In short, we think defaulting to GNOME is the best option for the Debian release; in contrast, shipping Xfce as the default desktop could mean delivering a desktop experience that has some incomplete or rough edges and is not on par with Debian quality standards for a stable release. We believe tasksel should again revert the change and be uploaded as soon as possible, in order to get people testing images with GNOME sooner rather than later, with the freeze only two months away.

We would also like changes of this nature not to be announced in a git commit log in the future, but widely discussed in debian-project and the other usual development/decision channels, as happened recently with the change of init system. Whichever the final decision is, we will continue to package GNOME with great care to ensure our users get the best possible desktop experience Debian can offer.

Syndicated 2014-08-07 23:58:00 from I still don't have a title

6 Aug 2014 etbe   » (Master)

Booting GPT

I’m installing new 4TB disks in an older Dell server. It’s a PowerEdge T110 with a G6950 CPU, so it’s not really old, but it is a couple of generations behind the latest Dell servers.

I tried to enable UEFI booting, but when I turned that option on, the system locked up during the BIOS process (it wouldn’t boot from the CD or take keyboard input). So I had to make it boot with a BIOS-compatible MBR and a GPT partition table.

Number  Start (sector)    End (sector)  Size      Code  Name
  1            2048            4095  1024.0 KiB  EF02  BIOS boot partition
  2            4096        25169919  12.0 GiB    FD00  Linux RAID
  3        25169920      7814037134  3.6 TiB    8300  Linux filesystem

After spending way too much time reading various web pages I discovered that the above partition table works. The 1MB partition is for GRUB code and needs to be enabled by a parted command such as the following:

parted /dev/sda set 1 bios_grub on

/dev/sda2 is a RAID-1 array used for the root filesystem. If I was installing a non-RAID system I’d use the same partition table but with a type of 8300 instead of FD00. I have a RAID-1 array over sda2 and sdb2 for the root filesystem, and sda3, sdb3, sdc3, sdd3, and sde3 are used for a RAID-Z array. I’m reserving space for the root filesystem on all 5 disks because it seems like a good idea to use the same partition table, and the 12G per disk that is unused on sdc, sdd, and sde isn’t worth worrying about when dealing with 4TB disks.
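
As a sanity check of the table above, here’s a quick Python sketch (just arithmetic, assuming the usual 512-byte logical sectors) that converts the start/end sectors into sizes:

# Convert the gdisk start/end sectors above into human-readable sizes,
# assuming 512-byte logical sectors.
SECTOR = 512

partitions = [
    ("BIOS boot partition", 2048, 4095),
    ("Linux RAID", 4096, 25169919),
    ("Linux filesystem", 25169920, 7814037134),
]

def human(nbytes):
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if nbytes < 1024 or unit == "TiB":
            return "%.1f %s" % (nbytes, unit)
        nbytes /= 1024.0

for name, start, end in partitions:
    print("%-20s %s" % (name, human((end - start + 1) * SECTOR)))

That prints 1.0 MiB, 12.0 GiB and 3.6 TiB respectively, matching the sizes in the table.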

Related posts:

  1. booting from USB for security Sune Vuorela asks about how to secure important data such...
  2. How I Partition Disks Having had a number of hard drives fail over the...
  3. Resizing the Root Filesystem Uwe Hermann has described how to resize a root filesystem...

Syndicated 2014-08-06 06:53:34 from etbe - Russell Coker

5 Aug 2014 joey   » (Master)

abram's 2014

pics from trip to Abram's Falls

The trail to Abram's Falls seems more treacherous as we get older, but the sights and magic of the place are unchanged on our first visit in years.

Syndicated 2014-08-05 19:37:37 from see shy jo

5 Aug 2014 marnanel   » (Journeyer)

Salford Royal is not a cheese shop

I had to pick something up at Salford Royal's main reception desk. I walk for quite a way following signs. I reach a desk.

"Is this the main reception?"
"No, you want to go that way."

I go that way, and find a sign saying "Main Reception" pointing back the way I'd come. So I go back, and go to WHSmith's and ask for directions to the main reception, and a description of it. It is in fact the desk I found first. I return.

"Sorry," I say, "I mean I get confused easily, but people tell me this is the main reception."
"Oh no, this is car parking."
"So.. that sign behind you saying RECEPTION, that's not true?"
"Look, I told you, go that way and then down the stairs."
"Isn't that the way to Outpatients?"
"That's what you want, isn't it?
"No, I want the main reception."

But he sends me to Outpatients. Outpatients say, "Oh no, we're not the main reception, we're Outpatients."

"But the man on the main reception desk, who claimed it wasn't the main reception desk, said it was you instead."

"Oh, he's always doing that."

MAYBE HE USED TO RUN A CHEESE SHOP

This entry was originally posted at http://marnanel.dreamwidth.org/308412.html. Please comment there using OpenID.

Syndicated 2014-08-05 17:29:42 from Monument

5 Aug 2014 jas   » (Master)

Replicant 4.2 0002 and NFC on I9300

During my vacation the Replicant project released version 4.2-0002 as a minor update to their initial 4.2 release. I didn’t anticipate any significant differences, so I followed the installation instructions but instead of “wipe data/factory reset” I chose “wipe cache partition” and rebooted. Everything appeared to work fine, but I soon discovered that NFC was not working. Using adb logcat I could get some error messages:

E/NFC-HCI ( 7022): HCI Timeout - Exception raised - Force restart of NFC service
F/libc    ( 7022): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 7046 (message)
I/DEBUG   ( 1900): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
I/DEBUG   ( 1900): Build fingerprint: 'samsung/m0xx/m0:4.1.1/JRO03C/I9300XXDLIB:user/release-keys'
I/DEBUG   ( 1900): Revision: '12'
I/DEBUG   ( 1900): pid: 7022, tid: 7046, name: message  >>> com.android.nfc 

The phone would loop trying to start NFC, with the NFC sub-system dying over and over. Talking on the #replicant channel, paulk quickly realized and fixed the bug. I had to rebuild the images to get things to work, so I took the time to create a new virtual machine based on Debian 7.5 for building Replicant on. As a side note, the only thing not covered by the Replicant build dependency documentation was that I needed the Debian xmllint package to avoid a build failure and the Debian xsltproc package to avoid an error message being printed at the beginning of every build. Soon I had my own fresh images and installed them, and NFC was working again, after installing the non-free libpn544_fw.so file.

During this, I noticed that there are multiple libpn544_fw.so files floating around. I have the following files:

version string              source
libpn544_fw_C3_1_26_SP.so   internet
libpn544_fw_C3_1_34_SP.so   stock ROM on S3 bought in Sweden during 2013 and 2014 (two phones)
libpn544_fw_C3_1_39_SP.so   internet

(For reference, the md5sums of these files are 682e50666effa919d557688c276edc48, b9364ba59de1947d4588f588229bae20 and 18b4e634d357849edbe139b04c939593 respectively.)
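
To check which of these variants a given copy of libpn544_fw.so actually is, a small Python sketch like this works (nothing Replicant-specific, just hashlib from the standard library; pass the path of the file you want to identify):

# Identify a libpn544_fw.so variant by its md5sum, using the sums listed above.
import hashlib
import sys

KNOWN = {
    "682e50666effa919d557688c276edc48": "libpn544_fw_C3_1_26_SP.so",
    "b9364ba59de1947d4588f588229bae20": "libpn544_fw_C3_1_34_SP.so",
    "18b4e634d357849edbe139b04c939593": "libpn544_fw_C3_1_39_SP.so",
}

path = sys.argv[1] if len(sys.argv) > 1 else "libpn544_fw.so"
with open(path, "rb") as f:
    digest = hashlib.md5(f.read()).hexdigest()
print(KNOWN.get(digest, "unknown firmware (md5 %s)" % digest))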

If you do not have any of these files available as /vendor/firmware/libpn544_fw.so you will get the following error message:

I/NfcService( 2488): Enabling NFC
D/NFCJNI  ( 2488): Start Initialization
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
E/NFCJNI  ( 2488): phLibNfc_Mgt_Initialize() returned 0x00ff[NFCSTATUS_FAILED]
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFCJNI  ( 2488): Firmware update FAILED
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFCJNI  ( 2488): Firmware update FAILED
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFCJNI  ( 2488): Firmware update FAILED
E/NFCJNI  ( 2488): Unable to update firmware, giving up
D/NFCJNI  ( 2488): phLibNfc_Mgt_UnConfigureDriver() returned 0x0000[NFCSTATUS_SUCCESS]
D/NFCJNI  ( 2488): Terminating client thread...
W/NfcService( 2488): Error enabling NFC

Using the first (26) file or the last (39) file does not appear to work on my phone; I get the following error messages. Note that the line starting with 'NFC capabilities' has 'Rev = 34' in it, possibly indicating that I need the version 34 file.

I/NfcService( 5735): Enabling NFC
D/NFCJNI  ( 5735): Start Initialization
D/NFCJNI  ( 5735): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 8
D/NFCJNI  ( 5735): Download new Firmware
W/NFCJNI  ( 5735): Firmware update FAILED
D/NFCJNI  ( 5735): Download new Firmware
W/NFCJNI  ( 5735): Firmware update FAILED
D/NFCJNI  ( 5735): Download new Firmware
W/NFCJNI  ( 5735): Firmware update FAILED
E/NFCJNI  ( 5735): Unable to update firmware, giving up
D/NFCJNI  ( 5735): phLibNfc_Mgt_UnConfigureDriver() returned 0x0000[NFCSTATUS_SUCCESS]
D/NFCJNI  ( 5735): Terminating client thread...
W/NfcService( 5735): Error enabling NFC

Loading the 34 works fine.

I/NfcService( 2501): Enabling NFC
D/NFCJNI  ( 2501): Start Initialization
D/NFCJNI  ( 2501): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 0
D/NFCJNI  ( 2501): phLibNfc_SE_GetSecureElementList()
D/NFCJNI  ( 2501): 
D/NFCJNI  ( 2501): > Number of Secure Element(s) : 1
D/NFCJNI  ( 2501): phLibNfc_SE_GetSecureElementList(): SMX detected, handle=0xabcdef
D/NFCJNI  ( 2501): phLibNfc_SE_SetMode() returned 0x000d[NFCSTATUS_PENDING]
I/NFCJNI  ( 2501): NFC Initialized
D/NdefPushServer( 2501): start, thread = null
D/NdefPushServer( 2501): starting new server thread
D/NdefPushServer( 2501): about create LLCP service socket
D/NdefPushServer( 2501): created LLCP service socket
D/NdefPushServer( 2501): about to accept
D/NfcService( 2501): NFC-EE OFF
D/NfcService( 2501): NFC-C ON

What is interesting is that my other S3, running CyanogenMod, does not have the libpn544_fw.so file but NFC still works. The messages are:

I/NfcService( 2619): Enabling NFC
D/NFCJNI  ( 2619): Start Initialization
E/NFC-HCI ( 2619): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFC     ( 2619): Firmware image not available: this device might be running old NFC firmware!
D/NFCJNI  ( 2619): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 0
D/NFCJNI  ( 2619): phLibNfc_SE_GetSecureElementList()
D/NFCJNI  ( 2619): 
D/NFCJNI  ( 2619): > Number of Secure Element(s) : 1
D/NFCJNI  ( 2619): phLibNfc_SE_GetSecureElementList(): SMX detected, handle=0xabcdef
D/NFCJNI  ( 2619): phLibNfc_SE_SetMode() returned 0x000d[NFCSTATUS_PENDING]
I/NFCJNI  ( 2619): NFC Initialized
D/NdefPushServer( 2619): start, thread = null
D/NdefPushServer( 2619): starting new server thread
D/NdefPushServer( 2619): about create LLCP service socket
D/NdefPushServer( 2619): created LLCP service socket
D/NdefPushServer( 2619): about to accept
D/NfcService( 2619): NFC-EE OFF
D/NfcService( 2619): NFC-C ON

Diffing the two NFC-relevant repositories between Replicant (external_libnfc-nxp and packages_apps_nfc) and CyanogenMod (android_external_libnfc-nxp and android_packages_apps_Nfc) I found a commit in Replicant that changes a soft-fail on missing firmware to a hard-fail. I manually reverted that patch in my build tree, and rebuilt and booted a new image. Enabling NFC now prints this on my Replicant phone:

I/NfcService( 2508): Enabling NFC
D/NFCJNI  ( 2508): Start Initialization
E/NFC-HCI ( 2508): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFC     ( 2508): Firmware image not available: this device might be running old NFC firmware!
D/NFCJNI  ( 2508): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 0
D/NFCJNI  ( 2508): phLibNfc_SE_GetSecureElementList()
D/NFCJNI  ( 2508): 
D/NFCJNI  ( 2508): > Number of Secure Element(s) : 1
D/NFCJNI  ( 2508): phLibNfc_SE_GetSecureElementList(): SMX detected, handle=0xabcdef
D/NFCJNI  ( 2508): phLibNfc_SE_SetMode() returned 0x000d[NFCSTATUS_PENDING]
I/NFCJNI  ( 2508): NFC Initialized
D/NdefPushServer( 2508): start, thread = null
D/NdefPushServer( 2508): starting new server thread
D/NdefPushServer( 2508): about create LLCP service socket
D/NdefPushServer( 2508): created LLCP service socket
D/NdefPushServer( 2508): about to accept
D/NfcService( 2508): NFC-EE OFF
D/NfcService( 2508): NFC-C ON

And NFC works! One less non-free blob on my phone.

I have double-checked that power-cycling the phone (even removing the battery for a while) does not affect anything, so it seems the NFC chip has firmware loaded from the factory.

The question remains why that commit was added. Is it necessary on some other phone? I have no idea, other than that if the patch is reverted, S3 owners will have NFC working with Replicant without non-free software added. Alternatively, the patch could be made to apply only on the platform where it is needed, or even to all non-S3 builds.

Syndicated 2014-08-05 12:20:18 from Simon Josefsson's blog

5 Aug 2014 Stevey   » (Master)

Free (orange) SMS alerts

In the past I used to pay for an email->SMS gateway, which was used to alert me about some urgent things. That was nice because it was bi-directional, and at one point I could restart particular services via sending SMS messages.

These days I get it for free, and for my own reference here is how you get to receive free SMS alerts via Orange, which is my mobile phone company. If you don't use Orange/EE this will probably not help you.

The first step is to register an Orange email account, which can be done here:

Once you've done that you'll have an email address of the form example@orange.net, which is kinda-sorta linked to your mobile number. You'll sign in and be shown something that looks like webmail from the early 90s.

The thing that makes this interesting is that you can look in the left-hand menu and see a link called "SMS Alerts". Visit it. That will let you do things like set the number of SMSs you wish to receive a month (I chose "1000"), and the hours during which delivery will be made (I chose "All the time").

Anyway if you go through this dance you'll end up with an email address example@orange.net, and when an email arrives at that destination an SMS will be sent to your phone.

The content of the SMS will be the subject of the mail, truncated if necessary, so you can send a hello message to yourself like this:

echo "nop" | mail -s "Hello, urgent message is present" username@orange.net

Delivery seems pretty reliable, and I've scheduled the mailbox to be purged every week, to avoid it getting full:

Hostname: pop.orange.net
Username: Your mobile number
Password: Your password
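
If you'd rather script that purge yourself (say, from a weekly cron job), something along these lines should do it. This is a rough sketch using Python's standard poplib, assuming Orange speaks plain POP3 on the default port; adjust it if they require SSL:

# Delete everything in the Orange POP mailbox so it never fills up.
import poplib

conn = poplib.POP3("pop.orange.net")
conn.user("your mobile number")
conn.pass_("your password")

count = len(conn.list()[1])      # number of messages currently waiting
for i in range(1, count + 1):
    conn.dele(i)                 # mark each message for deletion
conn.quit()                      # deletions actually happen on QUIT
print("deleted %d messages" % count)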

If you wish to send mail from this account you can use smtp.orange.net, but I pity the fool who uses their mobile phone company for their primary email address.

Syndicated 2014-08-05 09:56:48 from Steve Kemp's Blog

5 Aug 2014 bagder   » (Master)

I’m eight months in on my Mozilla adventure

I started working for Mozilla in January 2014. Here’s some reflections from my first time as Mozilla employee.

Working from home

I’ve worked completely from home during some short periods before in my life, so I had an idea of what it would be like. So far, it has been even better than I had anticipated. It suits me so well it is almost scary! No commutes. No delays due to traffic. No problems ever with over-crowded trains or buses. No time wasted going to work and home again. And I’m around when my kids get home from school, and it’s easy to receive deliveries any day of the week. I don’t think I ever want to work elsewhere again… :-)

Another effect of my workplace is that I have probably become somewhat more active on social networks and IRC. If I don’t use those means, I may spend whole days without talking to any humans.

Also, I’m the only Mozilla developer in Sweden – although we have a few more employees in Sweden.

Daniel's home office

The freedom

I have freedom at work. I control and decide a lot of what I do, and I get to do a lot of what I want at work. I can work during the hours I want. As long as I deliver, my employer doesn’t mind. The freedom isn’t just about working hours; I also have a lot of control and say over what I want to work on and what I think we as a team should work on going forward.

The not counting hours

For the last 16 years I’ve been a consultant where my customers almost always have paid for my time. Paid by the hour I spent working for them. For the last 16 years I’ve counted every single hour I’ve worked and made sure to keep detailed logs and tracking of whatever I do so that I can present that to the customer and use that to send invoices. Counting hours has been tightly integrated in my work life for 16 years. No more. I don’t count my work time. I start work in the morning, I stop work in the evening. Unless I work longer, and sometimes I start later. And sometimes I work on the weekend or late at night. And I do meetings after regular “office hours” many times. But I don’t keep track – because I don’t have to and it would serve no purpose!

The big code base

I work with Firefox, in the networking team. Firefox has about 10 million lines of C and C++ code alone. Add to that everything else in other languages: glue logic, build files, tests, and lots and lots of JavaScript.

It takes time to get acquainted with such a large and old code base, and much of the architecture, or traces of the original architecture, was designed almost 20 years ago in ways that not many people would still call good or preferable.

Mozilla is using Mercurial as the primary revision control tool, and I started out convinced I should too and really get to learn it. But darn it, it is really too similar to git, and yet lots of words are intermixed and used as commands but don’t do the same as in git, so it turns out really confusing, and yeah, I felt handicapped a little bit too often. I’ve switched over to using the git mirror and I’m now a much happier person. A couple of months in, I’ve not once been forced to switch away from using git, mostly thanks to fancy scripts and helpers from fellow colleagues who made this jump before me and already paved the road.

C++ and code standards

I’m a C guy (note the absence of “++”). I’ve primarily developed in C for the whole of my professional developer life – which is approaching 25 years. Firefox is a C++ fortress. I know my way around most C++ stuff, but I’m not “at home” with C++ in any way just yet (I never was), so sometimes it takes me a little time and reading up to get all the C++-ishness correct. Templates, casting, different code styles, subtleties that aren’t in C, and more. I’m slowly adapting, but some things and habits are hard to “unlearn”…

The publicness and Bugzilla

I love working full time for an open source project. Everything I do during my work days is public knowledge. We work a lot with Bugzilla, where all bugs (well, except the security-sensitive ones) are open and public. My comments, my reviews, my flaws and my patches can all be reviewed, ridiculed or improved by anyone out there who feels like doing it.

Development speed

There are several hundred developers involved in basically the same project and products. The commit frequency and the speed at which changes are being crammed into the source repository are mind-boggling. Several hundred commits daily. Many hundreds, and sometimes up to a thousand, new bug reports are filed – daily.

yet slowness of moving some bugs forward

Moving a particular bug forward into actually getting it landed and included in pending releases can be a lot of work, and it can be tedious. It is a large project with lots of legacy, traditions and people with opinions on how things should be done. Getting something to change from an old behavior can take a whole lot of time, massaging and discussion before it can get through. Don’t get me wrong, it is a good thing; it just stands in direct conflict with my previous paragraph about the development speed.

In the public eye

I knew about Mozilla before I started here. I knew Firefox. Just about every person I’ve ever mentioned those two brands to has known about at least Firefox. This is different from what I’m used to. Of course hardly anyone fully grasps what I’m actually doing on a day-to-day basis, but I’ve long given up on even trying to explain that to family and friends. Unless they really insist.

Vitriol and expectations of high standards

I must say that being in the Mozilla camp when changes are made or announced has given me a less favorable view of the human race. Almost any change is received by a certain number of users who are very aggressively against it. All changes, really. “If you’ll do that I’ll be forced to switch to Chrome” is a very common “threat” – as if that would A) work or B) be a browser that would care more about such “conservative loonies” (you should consider that my personal term for such people). I can only assume that the Chrome team also gets a fair share of that sort of threat in the other direction…

Still, a lot of people out there, and perhaps especially in the Free Software world, seem to hold Mozilla to very high standards. This is both good and bad. This expectation of being very good also comes from people who aren’t even Firefox users – we must remain the bright light in a world that grows darker. In my (biased) view that tends to lead to unfair criticism. The other browsers can make some of those changes without anyone raising an eyebrow, but when Mozilla does something similar in Firefox, a shitstorm breaks out. Lots of the people criticizing us for making change NN already use browser Y, which has been doing NN for a good while already…

Or maybe I’m just not seeing these things with clear enough eyes.

How does Mozilla make money?

Yeps. This is by far the most common question I’ve gotten from friends when I mention who I work for. In fact, that’s just about the only question I get from a lot of people… (possibly because after that we get into complicated questions such as what exactly do I do there?)

curl and IETF

I’m grateful that Mozilla allows me to spend part of my work time working on curl.

I’m also happy to now work for a company that allows me to attend IETF/httpbis and related activities far more than I’ve ever had the opportunity to in the past. Previously I pretty much had to spend spare time and my own money, which limited my participation a great deal. The support from Mozilla has allowed me to attend two meetings so far this year, in London and in NYC, and I suspect there will be more chances in the future.

Future

I only just started. I hope to grab on to more and bigger challenges and tasks as I get warmer and more into everything. I want to make a difference. See you in bugzilla.

Syndicated 2014-08-05 08:56:08 from daniel.haxx.se

5 Aug 2014 bagder   » (Master)

libressl vs boringssl for curl

I tried to use two OpenSSL forks with curl yesterday. I built both from source first (of course, as I wanted the latest and greatest), an interesting exercise in itself since both projects have modified the original build system, so there are now three different ways to build.

libressl 2.0.0 installed and built flawlessly with curl and I’ve pushed a change that shows LibreSSL instead of OpenSSL when doing curl -V etc.

boringssl didn’t compile from git until I had manually fixed a minor nit, and then it has no “make install” target at all, so I had to manually copy the libs and header files to a place where curl’s configure could detect them. Then the curl build failed because boringssl isn’t API compatible with some of the really old DES stuff – code we use for NTLM. I asked Adam Langley about it and he told me that calling code using DES “needs a tweak” – but I haven’t yet walked down that road so I don’t know how much of a nuisance that actually is or isn’t.

Summary: as an openssl replacement, libressl wins this round over boringssl with 3 – 0.

Syndicated 2014-08-05 06:50:38 from daniel.haxx.se

5 Aug 2014 Skud   » (Master)

Grace Hopper prints now available

I’ve been making linocuts.

Meet Grace Hopper. She’s a complete badass.

Grace Hopper print by Alex Skud Bayley 2014

(click image for a larger view)

She was 37 years old and working as a mathematics professor when Pearl Harbour happened. She joined the Navy and was set to work on the first ever general-purpose electro-mechanical computer, the Harvard Mark I. She invented the compiler (used to translate computer programs written by humans into ones and zeroes that the computer can understand), created one of the most widely used programming languages of the 20th century, and popularized the term “bug” for computer errors, after a literal bug was caught in the relays of the machine she was working on.

After WW2 she left the Navy and worked for various tech companies, but kept serving in the Naval Reserve. As was usual, she retired from the Reserves at 60, but she was recalled to active duty by special executive order, and eventually rose to the rank of Rear Admiral. When she retired (again) she kept working as a consultant until the age of 85. She also did this great Letterman interview at the age of 80.

Don’t ever let anyone tell you women can’t computer, or that you’re too old to computer. Grace knows better.

Buy a print

I’m selling these prints as a fundraiser over on Indiegogo, in part to offset this Gittip bullshit and the costs associated with attending a bunch of tech/feminist conferences in the US just recently.

The basic print (black on white) is $40 including international shipping, and there are other options available. If you’d like one you’d better get in quick — there’s only 10 standard prints left (though the other options are still wide open).

Syndicated 2014-08-05 02:59:52 from Infotropism

5 Aug 2014 marnanel   » (Journeyer)

Who better to quote for the centennial?

SUICIDE IN THE TRENCHES
by Siegfried Sassoon

I knew a simple soldier boy
Who grinned at life in empty joy,
Slept soundly through the lonesome dark,
And whistled early with the lark.

In winter trenches, cowed and glum,
With crumps and lice and lack of rum,
He put a bullet through his brain.
No one spoke of him again.

You smug-faced crowds with kindling eye
who cheer when soldier lads march by,
Sneak home and pray you’ll never know
The hell where youth and laughter go.

This entry was originally posted at http://marnanel.dreamwidth.org/308076.html. Please comment there using OpenID.

Syndicated 2014-08-05 01:14:12 from Monument

3 Aug 2014 dmarti   » (Master)

Point of order: social buttons

This is a quick privacy check.

(If you're reading this on the full-text RSS feed, or a site that consumes it, please click through. It won't take long. If you're looking at this on the blog homepage, please click the title to look at the individual post. The buttons are only on the individual post pages.)

Do you see the "social sharing" buttons at the bottom of this post, at the end of the text but above the miscellaneous links and blogroll? I just got an automated report that people are actually clicking them.

If your privacy tools are up to date, you shouldn't be seeing any big web site logos here. The sinister buttons should be blocked by any halfway-decent privacy tool.

If you do see the buttons, please get Disconnect or Privacy Badger.

If you don't see the buttons, you're already doing something that's making a difference. Carry on.

If you have a privacy tool installed and think you should be protected, but are seeing the buttons anyway, please let me know and I'll help you troubleshoot it.

Syndicated 2014-08-03 15:00:28 from Don Marti

3 Aug 2014 aicra   » (Journeyer)

Ubuntu Misbehavior.

As of late, I have been in open discussion regarding the misbehavior by Ubuntu.
There is a long list.

More recently, I noticed a seemingly strong-arm effect happening at the Linux User Group, where Ubuntu Loco is taking over. This is a great concern to me and I will tell you why in the next few words.

Community, Responsibility, Freedom.


How can I teach my children about freedom over convenience when the groupthink at the LUG is ... it's ok, even if Ubuntu is spying, trying to maintain their lame trademark rights, and are basically bullying others.

This is my opinion based on fact.

1. These Ubuntu groups are popping up all over the place. The members must do multiple tasks including one important ritual - Drinking the Canonical Kool Aid.

2. The Ubuntu groups are supposed to work with other distros but really have a way of pushing out others and forcing Ubuntu during installfests.

3. The Ubuntu groups are a smoke and mirrors tactic to make people believe these people are larger in number and more important than they are. By strategically hiring less than 10 people per state as low level tech support (but making these people believe they are important), Canonical gives an illusion of being a strong and large corporation.

4. Ubuntu is in bed with Microsoft.
The Nokia Here phone GPS will give data to Canonical unless a user is aware of and opts out of the data and advertisement scheme.

As a free software advocate, I am absolutely disgusted with Canonical. The actions are far from community minded. Their actions are tantamount to manipulation, maneuvering and the evil empire.

I would like to advocate the boycott and banning of Ubuntu and Ubuntu products including the phone.

If you value your freedom. If you value privacy, If you have any values at all, avoid Ubuntu at all costs. There are still proprietary companies that are going to be involved with Ubuntu, sure.

I might be fine with Ubuntu had I not noticed a few key things...

1. At installfests, Ubuntu Loco tried to take over what was initially a Phoenix LUG event. I know because I started the installfest, and it was kept going in my absence. However, it was NEVER an Ubuntu event.

In my absence, I suppose they were used to installing Ubuntu only. Several people have been installing other distros (thank you Mike and Sergio).

In the past 3 installfests, I have attended, Ubuntu has been shoved down the throats of attendees. I even had one lady who wanted MINT. She WANTED Mint.

Why you ask? There was a video error on her monitor with Ubuntu. Funny... she did not have the error with Mint...

2. Then, a woman wanted her printer working. A Canonical "employee" (and I use the term loosely because of the smoke and mirrors) was "helping". Of course, I use the term "helping" loosely also.

Now, here's the thing: the Canonical guy, Mark Thomas, decided that because her internal wireless card was no longer working, he would install 14.04... over 14.04.
He didn't bother to say... find out what module was needed.
He didn't bother to say... install the module for the nic...

No... he reinstalled 14.04. This is a guy who works at Canonical for a living. This begs the question, "Why is a Canonical employee at an installfest?" To possibly push SpyBuntu? The reinstall threw errors. Mark Thomas LEFT! Then, the woman said she would NEVER return!

How horrible is that for our community?!

Canonical and their flagship product is sinking fast. They are using our community minded people to push their fallen product.

There are so many flaws with Ubuntu.
Let's think about the virtual memory issue.
They don't know how to use swap.

Ever have the system freeze up on you?
Well, guess what!

3. Ubuntu "reps" will not argue. They just blindly follow and believe that Ubuntu is correct all the time, even when it is obvious to everyone else how corrupt and broken Ubuntu is.

4. These Ubuntu people are not exactly the most socially acceptable people in the world. Many of them are rude.

The fact is that one guy who even wanted to be a loco member just found out today about the cookies and data being sent to Amazon. If you can imagine the shock of hearing such news after installing Ubuntu for years at installfests.

The truth comes out, SPYBUNTU! As Alex states - Until you change your spying ways... we will boycott and ban you.

http://youtu.be/X60FYdkqGpE


2 Aug 2014 badvogato   » (Master)

岳飞 《满江红》

"However, James T.C. Liu, a history professor from Princeton University, states that Yue Fei's version was actually written by a different person in the early 16th century.[1] The poem was not included in the collected works of Yue Fei compiled by Yue's grandson, Yue Ke (岳柯; 1183–post 1234), and neither was it mentioned in any major works written before the Ming Dynasty. The section that states the author's wish "to stamp down Helan Pass" is what led scholars to this conclusion. Helan Pass was in Western Xia, which was not a military target of Yue Fei's armies. Liu suggests the "real author of the poem was probably Chao K'uan who engraved it on a tablet at Yueh Fei's tomb in 1502, in order to express the patriotic sentiments which were running high at that time, about four years after General Wang Yueh had scored a victory over the Oirats near the Ho-lan Pass in Inner Mongolia."[1]"

1 Aug 2014 marnanel   » (Journeyer)

date format

A Gregorian date encoding I used in a personal system: I think it nicely balances human readability with brevity. It is only well-defined between 2010 and 2039.

Consider the date as a triple (y,m,d) where:
y is the year number AD minus 2010
m is the month number, 1-based
d is the day of the month, 1-based

So today, 1st August 2014, is (4,8,1).

Then define a partial mapping from integers to characters thus:
x=0 to x=9 are represented by the digits 0 to 9
x=10 to x=31 are represented by the lowercase letters a to v

Translate the date triple and concatenate.

Thus today is written 481.

Years outside the given range are written in full, e.g. 1975-01-30 -> 19751u.
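
A minimal Python sketch of the encoding, in case it helps (the constant and function names are just illustrative):

# Encode a (y,m,d) triple in the compact format described above.
DIGITS = "0123456789abcdefghijklmnopqrstuv"   # 0-9, then a=10 ... v=31

def encode_date(year, month, day):
    if 2010 <= year <= 2039:
        return DIGITS[year - 2010] + DIGITS[month] + DIGITS[day]
    # Outside the well-defined range, write the year in full.
    return "%04d%s%s" % (year, DIGITS[month], DIGITS[day])

print(encode_date(2014, 8, 1))    # "481"
print(encode_date(1975, 1, 30))   # "19751u"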

Thoughts?

This entry was originally posted at http://marnanel.dreamwidth.org/307780.html. Please comment there using OpenID.

Syndicated 2014-08-01 15:00:22 from Monument

1 Aug 2014 marnanel   » (Journeyer)

Gentle Readers: blissful quires

Gentle Readers
a newsletter made for sharing
volume 1, number 16
31st July 2014: blissful quires
What I’ve been up to

Still moving house to Salford (see GR passim), but thank heavens we're mostly moved in now! Gentle Reader Katie and her father lent us their time and their van to move some of our belongings from the Oldham garage where they arrived, and Kit's brother Adam went back down to Surrey with us yesterday to move some of the books and furniture we left in Staines.

http://gentlereaders.uk/pics/too-many-books

I am coming to realise that if everything is a crisis, anything seems reasonable. In the last few weeks, for example, I've been eating large amounts of chocolate and getting small amounts of sleep, and justifying both to myself by saying that I need the sustenance and time because of an ongoing crisis. Then, because everything that comes along looks like a crisis, I end up over-sugared and under-slept for months. This isn't just about chocolate or sleep, either: it seems to be a pattern throughout my life as a whole.

A poem of mine

I ALWAYS TRIED TO WRITE ABOUT THE LIGHT (T32)

I always tried to write about the light
that inks these eyes in instant tint and hue,
that chances glances, sparkles through the night,
fresh as the morning, bloody as the dew;
the light that leaves your image in my mind,
that shining silver, shared for everyone,
that banishes the darkness from the blind,
the circle of the surface of the sun.
And when your light is shining far from mine,
when scores of stars are standing at their stations,
we'll weave our fingers round them as they shine,
and write each others' name on constellations;
and so we'll stand, and still, however far,
lock eyes and wish upon a single star.

A picture

http://gentlereaders.uk/pics/looked-up-chimney
"He then stooped down and looked up the chimney"

 

Something wonderful

William Gladstone (1809-1898) was Prime Minister of the United Kingdom four times. He grew up in Liverpool; no doubt his youth surrounded by poverty spurred him to fight for voting not to be restricted by income, and no doubt his youth surrounded by the Irish diaspora remained on his mind as he worked towards Irish independence. He lived a careful life, closely examining and recording all his actions, and since he recorded in his diary every book he read, we know that he read on average a book a day for most of his life.

When he was an old man, he decided to found a library: the stock was already to hand, since he had kept thousands of the books he had read. The library was duly set up in a temporary building at Hawarden in Flintshire, and (it is said) the 85-year-old Gladstone delivered most of the books personally, trundling them from his house in a wheelbarrow.
 

http://gentlereaders.uk/pics/gladstones-library


After Gladstone's death, the library was rebuilt in beautiful sham Gothic stone. It's still there, now with a quarter of a million volumes, and I encourage you to visit it if you can: it's one of the few libraries where you can board for days or weeks as well as study. There are regular events and workshops, but it's also especially popular with authors trying to finish manuscripts: the chance to work uninterrupted in a peaceful atmosphere of study can work wonders.

Something from someone else

Robert Southwell, SJ (1561-1595), who was one of the great poets of his generation, met an early and unpleasant death at the hands of Elizabeth I’s inquisitors. (Don't confuse him with Robert Southey, who lived 300 years later.)

Before we begin, note that "quires" here doesn't mean groups of singers, but books, especially books made by folding large sheets of paper. And "imparadised", put into paradise, is a tremendous word which should be more often used. (Milton also uses it, to describe sex in the Garden of Eden.)

from "ST PETER’S COMPLAINT"
by Robert Southwell

Sweet volumes, stored with learning fit for saints,
Where blissful quires imparadise their minds;
Wherein eternal study never faints,
Still finding all, yet seeking all it finds:
How endless is your labyrinth of bliss,
Where to be lost the sweetest finding is!

This stanza is part of a long poem about St Peter looking back over his life. It’s about the moment Peter, having just denied he ever knew Jesus, looks across the courtyard to where Jesus is handcuffed, and catches his eye. Southwell describes Jesus’s eyes in that moment as though they were libraries: a metaphor to take your breath away, even as you remember similar experiences yourself. It's a comparison that shows not only Southwell's devotion to God, and his skill as a poet, but also how great his love of libraries was, that he would compare spending time in them to catching the eye of Jesus.

Colophon

Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://thomasthurman.org/gentle/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.

This entry was originally posted at http://marnanel.dreamwidth.org/307486.html. Please comment there using OpenID.

Syndicated 2014-08-01 13:31:36 from Monument

1 Aug 2014 etbe   » (Master)

More BTRFS Fun

I wrote a BTRFS status report yesterday commenting on the uneventful use of BTRFS recently [1].

Early this morning the server that stores my email (which had 93 days of uptime) had a filesystem-related problem. The root filesystem became read-only and then the kernel message log filled with unrelated messages, so there was no record of the problem. I’m now considering setting up rsyslogd to log the kernel messages to a tmpfs filesystem to cover such problems in future. As RAM is so cheap it wouldn’t matter if a few megs of RAM were wasted by that in normal operation if it allowed me to extract useful data when something goes really wrong. It’s really annoying to have a system in a state where I can log in as root but not find out what went wrong.

After that I tried 2 kernels in the 3.14 series, both of which had kernel BUG assertions related to Xen networking and failed to network correctly; I filed Debian Bug #756714. Fortunately they at least had enough uptime for me to run a filesystem scrub, which reported no errors.

Then I reverted to kernel 3.13.10, but the reboot to apply that kernel change failed. Systemd was unable to umount the root filesystem (maybe because of a problem with Xen) and then hung the system instead of rebooting; I filed Debian Bug #756725. I believe that when asked to reboot a system there is no benefit in hanging it with no user space processes accessible. Here are some useful things that systemd could have done:

  1. Just reboot without umounting (like “reboot -nf” does).
  2. Pause for some reasonable amount of time to give the sysadmin a possibility of seeing the error and then rebooting.
  3. Go back to a regular runlevel, starting daemons like sshd.
  4. Offer a login prompt to allow the sysadmin to login as root and diagnose the problem.

Options 1, 2, and 3 would have saved me a bit of driving. Option 4 would have allowed me to at least diagnose the problem (which might be worth the drive).

Having a system on the other side of the city which has no remote console access just hang after a reboot command is not useful, it would be near the top of the list of things I don’t want to happen in that situation. The best thing I can say about systemd’s operation in this regard is that it didn’t make the server catch fire.

Now all I really know is that 3.14 kernels won’t work for my server, 3.13 will cause problems that no-one can diagnose due to lack of data, and I’m now going to wait for it to fail again. As an aside, the server has ECC RAM and its hardware is known to be good, so I’m sure that BTRFS is at fault.

Related posts:

  1. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  2. BTRFS Status April 2014 Since my blog post about BTRFS in March [1] not...
  3. BTRFS vs LVM For some years LVM (the Linux Logical Volume Manager) has...

Syndicated 2014-08-01 10:41:30 from etbe - Russell Coker

1 Aug 2014 rlougher   » (Master)

JamVM 2.0.0 Released

I'm pleased to announce a new release of JamVM.  JamVM 2.0.0 is the first release of JamVM with support for OpenJDK (in addition to GNU Classpath). Although IcedTea already includes JamVM with OpenJDK support, this has been based on periodic snapshots of the development tree.

JamVM 2.0.0 supports OpenJDK 6, 7 and 8 (the latest). With OpenJDK 7 and 8 this includes full support for JSR 292 (invokedynamic). JamVM 2.0.0 with OpenJDK 8 also includes full support for Lambda expressions (JSR 335), type annotations (JSR 308) and method parameter reflection.

In addition to OpenJDK support, JamVM 2.0.0 also includes many bug-fixes, performance improvements and improved compatibility (from running the OpenJDK jtreg tests).

The full release notes can be found here (changes are categorised into those affecting OpenJDK, GNU Classpath and both), and the release package can be downloaded from the file area.

Syndicated 2014-08-01 00:46:00 (Updated 2014-08-01 00:46:05) from Robert Lougher

1 Aug 2014 apenwarr   » (Master)

Wifi: "beamforming" only begins to describe it

[Note to the impatient: to try out my beamforming simulation, which produced the above image, visit my beamlab test page - ideally in a browser with very fast javascript, like Chrome. You can also view the source.]

I promised you some cheating of Shannon's Law. Of course, as with most things in physics, even the cheating isn't really cheating; you just adjust your model until the cheating falls within the rules.

The types of "cheating" that occur in wifi can be briefly categorized as antenna directionality, beamforming, and MIMO. (People usually think about MIMO before beamforming, but I find the latter to be easier to understand, mathematically, so I switched the order.)

Antenna Directionality and the Signal to Noise Ratio

Previously, we discussed the signal-to-noise ratio (SNR) in some detail, and how Shannon's Law can tell you how fast you can transfer data through a channel with a given SNR.
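
As a quick refresher (nothing wifi-specific here), Shannon's limit is capacity = bandwidth * log2(1 + SNR), with the SNR expressed as a plain ratio rather than in dB. A tiny Python sketch, where the 20 MHz width and 25 dB figure are just made-up example numbers:

# Shannon capacity for a channel of a given bandwidth (Hz) and SNR (in dB).
import math

def shannon_capacity(bandwidth_hz, snr_db):
    snr = 10 ** (snr_db / 10.0)            # dB -> linear ratio
    return bandwidth_hz * math.log(1 + snr, 2)

# Example: a 20 MHz channel at 25 dB SNR comes out around 166 Mbit/s.
print("%.1f Mbit/s" % (shannon_capacity(20e6, 25) / 1e6))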

The thing to notice about SNR is that you can increase it by increasing amplification at the sender (where the background noise is fixed but you have a clear copy of the signal) but not at the receiver. Once a receiver has a copy of the signal, it already has noise in it, so when you amplify the received signal, you just amplify the noise by the same amount, and the SNR stays constant.

(By the way, that's why those "amplified" routers you can buy don't do much good. They amplify the transmitted signal, but amplifying the received signal doesn't help in the other direction. The maximum range is still limited by the transmit power on your puny unamplified phone or laptop.)

On the other hand, one thing that *does* help is making your antenna more "directional." The technical term for this is "antenna gain," but I don't like that name, because it makes it sound like your antenna amplifies the signal somehow for free. That's not the case. Antenna gain doesn't so much amplify the signal as ignore some of the noise. Which has the same net effect on the SNR, but the mechanics of it are kind of important.

You can think of an antenna as a "scoop" that picks up both the signal and the noise from a defined volume of space surrounding the antenna. The shape of that volume is important. An ideal "isotropic" antenna (my favourite kind, although unfortunately it doesn't exist) picks up the signal equally in all directions, which means the region it "scoops" is spherical.

In general we assume that background noise is distributed evenly through space, which is not exactly true, but is close enough for most purposes. Thus, the bigger the volume of your scoop, the more noise you scoop up along with it. To stretch our non-mathematical metaphor well beyond its breaking point, a "bigger sphere" will contain more signal as well as more noise, so just expanding the size of your region doesn't affect the SNR. That's why, and I'm very sorry about this, a bigger antenna actually doesn't improve your reception at all.

(There's another concept called "antenna efficiency" which basically says you can adjust your scoop to resonate at a particular frequency, rejecting noise outside that frequency. That definitely works - but all antennas are already designed for this. That's why you get different antennas for different frequency ranges. Nowadays, the only thing you can do by changing your antenna size is to screw up the efficiency. You won't be improving it any further. So let's ignore antenna efficiency. You need a good quality antenna, but there is not really such a thing as a "better" quality antenna these days, at least for wifi.)

So ok, a bigger scoop doesn't help. But what can help is changing the shape of the scoop. Imagine if, instead of a sphere, we scoop up only the signal from a half-sphere.

If that half-sphere is in the direction of the transmitter - which is important! - then you'll still receive all the same signal you did before. But, intuitively, you'll only get half the noise, because you're ignoring the noise coming in from the other direction. On the other hand, if the half-sphere is pointed away from the incoming signal, you won't hear any of the signal at all, and you're out of luck. Such a half-sphere would have 2x the signal to noise ratio of a sphere, and in decibels, 2x is about 3dB. So this kind of (also not really existing) antenna is called 3dBi, where dBi is "decibels better than isotropic" so an isotropic (spherical) receiver is defined as 0dBi.

Taking it a step further, you could take a quarter of a sphere, ie. a region 90 degrees wide in any direction, and point it at your transmitter. That would double the SNR again, thus making a 6dBi antenna.
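
If you want to sanity-check those numbers, the conversion is just 10·log10(ratio). A tiny Python snippet (nothing antenna-specific about it) confirms the half-sphere and quarter-sphere figures:

  import math

  def to_db(power_ratio):
      """Convert a power (or SNR) ratio to decibels."""
      return 10 * math.log10(power_ratio)

  print(to_db(2))   # half-sphere, 2x the SNR: ~3.01 dB
  print(to_db(4))   # quarter-sphere, 4x the SNR: ~6.02 dB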

Real antennas don't pick up signals in a perfectly spherical shape; math makes that hard. So real ones tend to produce a kind of weirdly-shaped scoop with little roundy bits sticking out all over, and one roundy bit larger than the others, in the direction of interest. Essentially, the size of the biggest roundy bit defines the antenna gain (dBi). For regulatory purposes, the FCC mostly assumes you will use a 6dBi antenna, although of course the 6dBi will not be in the shape of a perfect quarter sphere.

Now is that the most hand-wavy explanation of antenna gain you have ever seen? Good.

Anyway, the lesson from all this is that if you use a directional antenna, you can get improved SNR. A typical improvement, with a 6dBi antenna, is around 6dB, which is pretty good. But the downside of a directional antenna is that you have to aim it. With a wifi router, that can be bad news. It's great for outdoors if you're going to set up a long-distance link and aim very carefully; you can get really long range with a very highly directional antenna. But indoors, where distances are short and people move around a lot, it can be trouble.

One simple thing that does work indoors is hanging your wifi router from the ceiling. Then, if you picture eg. a quarter-sphere pointing downwards, you can imagine covering the whole room without really sacrificing anything (other than upstairs coverage, which you don't care about if you put a router up there too). Basically, that's as good as you can do, which is why most "enterprise" wifi deployments hang their routers from the ceiling. If you did that at home too - and had the right kind of antennas with the right gain in the right direction - you could get up to 6dB of improvement on your wifi signal, which is pretty great.

(Another trick some routers do is to have multiple antennas, each one pointing in a different direction, and then switch them on and off to pick the one(s) with the highest SNR for each client. This works okay but it interferes with MIMO - where you want to actively use as many antennas as possible - so it's less common nowadays. It was a big deal in the days of 802.11g, where that was the main reason to have multiple antennas at all. Let's talk about MIMO later, since MIMO is its own brand of fun.)

Beamforming

So okay, that was antenna directionality (gain). To summarize all that blathering: you point your antenna in a particular direction, and you get a better SNR in that particular direction.

But the problem is that wifi clients move around, so antennas permanently pointed in a particular direction are going to make things worse about as often as they help (except for simple cases like hanging from the ceiling, and even that leaves out the people upstairs).

But wouldn't it be cool if, using software plus magic, you could use multiple antennas to create "virtual directionality" and re-aim a "beam" automatically as the client device moves around?

Yes, that would be cool.

Unfortunately, that's not what beamforming actually is.

Calling it "beamforming" is not a *terrible* analogy, but the reality of the signal shape is vastly more complex and calling it a "beam" is pretty misleading.

This is where we finally talk about what I mentioned last time, where two destructively-interfering signals result in zero signal. Where does the power go?

As an overview, let's say you have two unrelated transmitters sending out a signal at the same frequency from different locations. It takes a fixed amount of power for each transmitter to send its signal. At some points in space, the signals interfere destructively, so there's no signal at that point. Where does it go? There's a conservation of energy problem after all; the transmitted power has to equal the power delivered, via electromagnetic waves, out there in the wild. Does it mean the transmitter is suddenly unable to deliver that much power in the first place? Is it like friction, where the energy gets converted into heat?

Well, it's not heat, because heat is vibration, ie, the motion of physical particles with mass. The electromagnetic waves we're talking about don't necessarily have any relationship with mass; they might be traveling through a vacuum where there is no mass, but destructive interference can still happen.

Okay, maybe the energy is re-emitted as radiation? Well, no. The waves in the first place were radiation. If we re-emitted them as radiation, then by definition, they weren't cancelled out. But we know they were cancelled out; you can measure it and see.

The short and not-very-satisfying answer is that in terms of conservation of energy, things work out okay. There are always areas where the waves interfere constructively that exactly cancel out the areas where they interfere destructively.

The reason I find that answer unsatisfying is that the different regions don't really interact. It's not like energy is being pushed, somehow, between the destructive areas and the constructive areas. It adds up in the end, because it has to, but that doesn't explain *how* it happens.

The best explanation I've found relates to quantum mechanics, in a lecture I read by Richard Feynman at some point. The idea is that light (and all electromagnetic waves, which is what we're talking about) actually does not really travel in straight lines. The idea that light travels in a straight line is just an illusion caused by large-scale constructive and destructive interference. Basically, you can think of light as travelling along all the possible paths - even silly paths that involve backtracking and spirals - from point A to point B. The thing is, however, that for almost every path, there is an equal and opposite path that cancels it out. The only exception is the shortest path - a straight line - of which there is only one. Since there's only one, there can't be an equal but opposite version. So as far as we're concerned, light travels in a straight line.

(I offer my apologies to every physicist everywhere for the poor quality of that explanation.)

But there are a few weird experiments you can do (look up the "double slit experiment" for example) to prove that in fact, the "straight line" model is the wrong one, and the "it takes all the possible paths" model is actually more like what's really going on.

So that's what happens here too. When we create patterns of constructive and destructive radio interference, we are simply disrupting the rule of thumb that light travels in a straight line.

Oh, is that all? Okay. Let's call it... beam-un-forming.

There's one last detail we have to note in order to make it all work out. The FCC says that if we transmit from two antennas, we have to cut the power from each antenna in half, so the total output is unchanged. If we do that, naively it might seem like the constructive interference effect is useless. When the waves destructively interfere, you still get zero, but when they constructively interfere, you get 2*½*cos(ωt), which is just the original signal. Might as well just use one antenna with the original transmit power, right?

Not exactly. Until now, I have skipped over talking about signal power vs amplitude, since it hasn't been that important so far. The FCC regulates *power*, not amplitude. The power of A*cos(ωt) turns out to be ½A². I won't go over all the math, but the energy of f(x) during a given period is defined as ∫ f²(x) dx over that period. Power is energy divided by time. It turns out (via trig identities again) the power of cos(x) is 0.5, and the rest flows from there.

Anyway, the FCC limit requires a *power* reduction of ½. So if the original wave was cos(ωt), then the original power was 0.5. We need the new transmit power (for each antenna) to be 0.25, which is ½A² = ½(0.5). Thus A = sqrt(0.5) = 1/sqrt(2) = 0.7 or so.

So the new transmit wave is 0.7 cos(ωt). Two of those, interfering constructively, gives about 1.4 cos(ωt). The resulting power is thus around ½(1.4)² = 1, or double the original (non-reduced, with only one antenna) transmit power.

Ta da! Some areas have twice the power - a 3dB "antenna array gain" or "tx beamforming gain" - while others have zero power. It all adds up. No additional transmit power is required, but a receiver, if it's in one of the areas of constructive interference, now sees 3dB more signal power and thus 3dB more SNR.
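
If you'd rather let a computer do the ½A² bookkeeping, here's a small numerical check of the same calculation (it just averages the squared signal over one period instead of using the trig identities):

  import numpy as np

  t = np.linspace(0, 2 * np.pi, 100000)      # one period of ωt

  def power(wave):
      """Average power of a sampled signal."""
      return np.mean(wave ** 2)

  print(power(np.cos(t)))                    # one antenna at full power: ~0.5

  a = 1 / np.sqrt(2)                         # per-antenna amplitude after halving the power
  print(power(a * np.cos(t)))                # each antenna now transmits ~0.25

  print(power(2 * a * np.cos(t)))            # perfect constructive interference: ~1.0, i.e. 3dB more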

We're left with the simple (ha ha) matter of making sure that the receiver is in an area of maximum constructive interference at all times. To make a long story short, we do this by adjusting the phase between the otherwise-identical signals coming from the different antennas.

I don't really know exactly how wifi arranges for the phase adjustment to happen; it's complicated. But we can imagine a very simple version: just send from each antenna, one at a time, and have the receiver tell you the phase difference right now between each variant. Then, on the transmitter, adjust the transmit phase on each antenna by an opposite amount. I'm sure what actually happens is more complicated than that, but that's the underlying concept, and it's called "explicit beamforming feedback." Apparently the 802.11ac standard made progress toward getting everyone to agree on a good way of providing beamforming feedback, which is important for making this work well.
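
To make that idea a bit more concrete, here's a toy version of the feedback loop. This is not what 802.11 actually puts on the air; the wavelength and path lengths are made-up numbers, and it only shows the arithmetic of measuring each path's arrival phase and pre-rotating each transmitter by the opposite amount:

  import numpy as np

  wavelength = 0.125                            # roughly 2.4GHz, just for illustration
  k = 2 * np.pi / wavelength
  distances = np.array([3.00, 3.07, 3.21])      # hypothetical antenna-to-client path lengths (metres)

  arrival_phase = (k * distances) % (2 * np.pi) # what the receiver's feedback would report
  precompensation = -arrival_phase              # transmit each copy with the opposite phase offset

  t = np.linspace(0, 2 * np.pi, 10000)
  uncorrected = sum(np.cos(t + k * d) for d in distances)
  beamformed = sum(np.cos(t + k * d + p) for d, p in zip(distances, precompensation))

  print(np.mean(uncorrected ** 2), np.mean(beamformed ** 2))  # the beamformed sum has much more power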

Even more weirdly, the same idea works in reverse. If you know the phase difference between the client's antenna (we're assuming for now that he has only one, so we don't go insane) and each of your router's antennas, then when the client sends a signal *back* to you, you can extract the signal from the different antennas in a particular way that gets you the same amount of gain as in the transmit direction, and we call that rx beamforming. At least, I think you can. I haven't done the math for that yet, so I don't know for sure how well it can work.

Relatedly, even if there is no *explicit* beamforming feedback, in theory you can calculate the phase differences by listening to the signals from the remote end on each of your router's antennas. Because the signals should be following exactly the same path in both directions, you can guess what phase difference your signal arrived with by seeing which difference *his* signal came back with, and compensate accordingly. This is called "implicit beamforming feedback." Of course, if both ends try this trick at once, hilarity ensues.

And finally, I just want to point out how little the result of "beamforming" is like a beam. Although conceptually we'd like to think of it that way - we have a couple of transmitters tuning their signal to point directly at the receiver - mathematically it's not really like that. "Beamforming" creates a kind of on-off "warped checkerboard" sort of pattern that extends in all directions. To the extent that your antenna array is symmetrical, the checkerboard pattern is also symmetrical.

Beamforming Simulation

Of course, a checkerboard is also a flawed analogy. Once you start looking for a checkerboard, you start to see that in fact, the warping is kind of directional, and sort of looks like a beam, and you can imagine that with a hundred antennas, maybe it really would be "beam" shaped.

After doing all the math, I really wanted to know what beamforming looked like, so I wrote a little simulation of it, and the image at the top of this article is the result. (That particular one came from a 9-antenna beamforming array.)

You can also try out the simulation yourself, moving around up to 9 antennas to create different interference patterns. I find it kind of fun and mesmerizing, especially to think that these signals are all around us and if you could see them, they'd look like *that*. On my computer with Chrome, I get about 20 frames per second; with Safari, I get about 0.5 frames per second, which is not as fun. So use a browser with a good javascript engine.

Note that while the image looks like it has contours and "shadows," the shadows are entirely the effect of the constructive/destructive interference patterns causing bright and dark areas. Nevertheless, you can kind of visually see how the algorithm builds one level of constructive interference on top of another, with the peak of the humpiest bump being at the exact location of the receiver. It really works!

Some notes about the simulation:

  • It's 2-dimensional. Real life has at least 3 dimensions. It works pretty much the same though.
  • The intensity (brightness) of the colour indicates the signal strength at that point. Black means almost no signal.
  • "Blue" means cos(ωt) is positive at that point, and red means it's negative.
  • Because of the way phasors work, "blue plus red" is not the only kind of destructive interference, so it's a bit confusing.
  • Click on the visualization to move around the currently-selected transmitter or receiver.
  • When you move the receiver around, it auto-runs the beamforming optimization so you can see the "beam" move around.
  • The anti-optimize button is not very smart; a smarter algorithm could achieve an even less optimal result. But most of the time it does an okay job, and it does show how you can also use beamforming to make a receiver *not* hear your signal. That's the basis of MU-MIMO.
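
For a rough idea of what a simulation like this computes under the hood, here's a stripped-down sketch of the same kind of field calculation. The antenna positions, grid size and clamped 1/distance falloff are all made-up illustrative choices, and it only renders a snapshot at t=0 rather than an animation:

  import numpy as np

  wavelength = 1.0
  k = 2 * np.pi / wavelength

  transmitters = np.array([[0.0, -1.0], [0.0, 0.0], [0.0, 1.0]])  # hypothetical 3-antenna array
  receiver = np.array([10.0, 2.0])                                # hypothetical client position

  # "Feedback": delay each transmitter so all paths arrive in phase at the receiver.
  path_lengths = np.linalg.norm(transmitters - receiver, axis=1)
  phases = -k * path_lengths

  # Sum the contributions over a 2D grid to get the interference pattern.
  xs, ys = np.meshgrid(np.linspace(-2, 14, 400), np.linspace(-8, 8, 400))
  field = np.zeros_like(xs)
  for (tx, ty), p in zip(transmitters, phases):
      d = np.sqrt((xs - tx) ** 2 + (ys - ty) ** 2)
      field += np.cos(k * d + p) / np.maximum(d, 0.1)   # clamp the falloff near each antenna

  # 'field' now holds the warped-checkerboard pattern; the constructive bright spot
  # sits on the receiver.  Feed it to e.g. matplotlib's imshow() to look at it.
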
MIMO

The last and perhaps most exciting way to cheat Shannon's Law is MIMO. I'll try to explain that later, but I'm still working out the math :)

Syndicated 2014-07-29 06:41:59 from apenwarr

31 Jul 2014 danstowell   » (Journeyer)

Background reading on Israel and Palestine

I'm going to try and avoid ranting about Israel and Palestine because there's much more heat than light right now. But I want to recommend some background reading that seems useful, and it's historical/background stuff rather than partisan:

I also want to point to a more "one-sided" piece (in the sense that it criticises one "side" specifically - I've no idea about the author's actual motivations): Five Israeli Talking Points on Gaza - Debunked. I recommend it because it raises some interesting points about international law and the like, and we in the UK don't seem to hear these issues filled out on the radio.

Also this interview with Ex-Israeli Security Chief Diskin. Again I don't know Diskin's backstory - clearly he's opposed to the current Israeli Prime Minister (Netanyahu), but the interview has some detail.

As usual, please don't assume anyone is purely pro-Palestine or pro-Israel, and don't confuse criticism of Israel/Hamas with criticism of Judaism/Islam. The topic is hard to talk about (especially on the internet) without the conversation spiralling into extremes.

Syndicated 2014-07-31 18:09:27 (Updated 2014-08-14 07:36:33) from Dan Stowell

31 Jul 2014 crhodes   » (Master)

london employment visualization part 2

Previously, I did all the hard work to obtain and transform some data related to London, including borough and MSOA shapes, population counts, and employment figures, and used them to generate some subjectively pretty pictures. I promised a followup on the gridSVG approach to generating visualizations with more potential for interactivity than a simple picture; this is the beginning of that.

Having done all the heavy lifting in the last post, including being able to generate ggplot objects (whose printing results in the pictures), it is relatively simple to wrap output to SVG instead of output to PNG around it all. In fact it is extremely simple to output to SVG; simply use an SVG output device

  svg("/tmp/london.svg", width=16, height=10)

rather than a PNG one

  png("/tmp/london.png", width=1536, height=960)

(which brings back for me memories of McCLIM, and my implementation of an SVG backend, about a decade ago). So what does that look like? Well, if you’ve entered those forms at the R repl, close the png device

  dev.off()

and then (the currently active device being the SVG one)

  print(ggplot.london(fulltime/(allages-younger-older)))
  dev.off()

default (cairo) SVG device

That produces an SVG file, and if SVG in and of itself is the goal, that’s great. But I would expect that the main reason for producing SVG isn’t so much for the format itself (though it is nice that it is a vector image format rather than rasterized, so that zooming in principle doesn’t cause artifacts) but for the ability to add scripting to it: and since the output SVG doesn’t retain any information about the underlying data that was used to generate it, it is very difficult to do anything meaningful with it.

I write “very difficult” rather than “impossible”, because in fact the SVGAnnotation package aimed to do just that: specifically, read the SVG output produced by the R SVG output device, and (with a bit of user assistance and a liberal sprinkling of heuristics) attempt to identify the regions of the plot corresponding to particular slices of datasets. Then, using a standard XML library, the user could decorate the SVG with extra information, add links or scripts, and essentially do whatever they needed to do; this was all wrapped up in an svgPlot function. The problem with this approach is that it is fragile: for example, one heuristic used to identify a lattice plot area was that there should be no text in it, which fails for custom panel functions with labelled guidelines. It is possible to override the default heuristic, but it’s difficult to build a robust system this way (and in fact when I tried to run some two-year-old analysis routines recently, the custom SVG annotation that I wrote broke into multiple pieces given new data).

gridSVG’s approach is a little bit different. Instead of writing SVG out and reading it back in, it relies on the grid graphics engine (so does not work with so-called base graphics, the default graphics system in R), and on manipulating the grid object which represents the current scene. The gridsvg pseudo-graphics-device does the behind-the-scenes rendering for us, with some cost related to yet more wacky interactions with R’s argument evaluation semantics which we will pay later.

  gridsvg("/tmp/gridsvg-london.svg", width=16, height=10)
print(ggplot.london(fulltime/(allages-younger-older)))
dev.off()

Because ggplot uses grid graphics, this just works, and generates a much more structured svg file, which should render identically to the previous one:

SVG from gridSVG device

If it renders identically, why bother? Well, because now we have something that writes out the current grid scene, we can alter that scene before writing out the document (at dev.off() time). For example, we might want to add tooltips to the MSOAs so that their name and the quantity value can be read off by a human. Wrapping it all up into a function, we get

  gridsvg.london <- function(expr, subsetexpr=TRUE, filename="/tmp/london.svg") {

We need to compute the subset in this function, even though we’re going to be using the full dataset in ggplot.london when we call it, in order to get the values and zone labels.

      london.data <- droplevels(do.call(subset, list(london$msoa.fortified, substitute(subsetexpr))))

Then we need to map (pun mostly intended) the values in the fortified data frame to the polygons drawn; without delving into the format, my intuition is that the fortified data frame contains vertex information, whereas the grid (and hence SVG) data is organized by polygons, and there may be more than one polygon for a region (for example if there are islands in the Thames). Here we simply generate an index from a group identifier to the first row in the dataframe in that group, and use it to pull out the appropriate value and label.

    is <- match(levels(london.data$group), london.data$group)
    vals <- eval(substitute(expr), london.data)[is]
    labels <- levels(london.data$zonelabel)[london.data$zonelabel[is]]

Then we pay the cost of the argument evaluation semantics. My first try at this line was gridsvg(filename, width=16, height=10), which I would have (perhaps naïvely) expected to work, but which in fact gave me an odd error suggesting that the environment filename was being evaluated in was the wrong one. Calling gridsvg like this forces evaluation of filename before the call, so there should be less that can go wrong.

      do.call(gridsvg, list(filename, width=16, height=10))

And, as before, we have to do substitutions rather than evaluations to get the argument expressions evaluated in the right place:

      print(do.call(ggplot.london, list(substitute(expr), substitute(subsetexpr))))

Now comes the payoff. At this point, we have a grid scene, which we can investigate using grid.ls(). Doing so suggests that the map data is in a grid object named like GRID.polygon followed by an integer, presumably in an attempt to make names unique. We can “garnish” that object with attributes that we want: some javascript callbacks, and the values and labels that we previously calculated.

      grid.garnish("GRID.polygon.*",
                 onmouseover=rep("showTooltip(evt)", length(is)),
                 onmouseout=rep("hideTooltip()", length(is)),
                 zonelabel=labels, value=vals,
                 group=FALSE, grep=TRUE)

We need also to provide implementations of those callbacks. It is possible to do that inline, but for simplicity here we simply link to an external resource.

      grid.script(filename="tooltip.js")

Then close the gridsvg device, and we’re done!

      dev.off()
}

Then gridsvg.london(fulltime/(allages-younger-older)) produces:

proportion employed full-time

which is some kind of improvement over a static image for data of this complexity.

And yet... the perfectionist in me is not quite satisfied. At issue is a minor graphical glitch, but it’s enough to make me not quite content; the border of each MSOA is stroked in a slightly lighter colour than the fill colour, but that stroke extends beyond the border of the MSOA region (the stroke’s centre is along the polygon edge). This means that the strokes from adjacent MSOAs overlie each other, so that the most recently drawn obliterates any drawn previously. This also causes some odd artifacts around the edges of London (and into the Thames, and pretty much obscures the river Lea).

This can be fixed by clipping; I think the trick to clip a path to itself counts as well-known. But clipping in SVG is slightly hard, and the gridSVG facilities for doing it work on a grob-by-grob basis, while the map is all one big polygon grid object. So to get the output I want, I am going to have to perform surgery on the SVG document itself after all; we are still in a better position than before, because we will start with a sensible hierarchical arrangement of graphical objects in the SVG XML structure, and gridSVG furthermore provides some introspective capabilities to give XML ids or XPath query strings for particular grobs.

grid.export exports the current grid scene to SVG, returning a list with the SVG XML itself along with this mapping information. We have in the SVG output an arbitrary number of polygon objects; our task is to arrange such that each of those polygons has a clip mask which is itself. In order to do that, we need for each polygon a clipPath entry with a unique id in a defs section somewhere, where each clipPath contains a use pointing to the original polygon’s ID; then each polygon needs to have a clip-path style property pointing to the corresponding clipPath object. Clear?

  addClipPaths <- function(gridsvg, id) {

given the return value of grid.export and the identifier of the map grob, we want to get the set of XML nodes corresponding to the polygons within that grob.

      ns <- getNodeSet(gridsvg$svg, sprintf("%s/*", gridsvg$mappings$grobs[[id]]$xpath))

Then for each of those nodes, we want to set a clip path.

    for (i in 1:length(ns)) {
        addAttributes(ns[[i]], style=sprintf("clip-path: url(#clipPath%s)", i))
    }

For each of those nodes, we also need to define a clip path

      clippaths <- list()
    for (i in 1:length(ns)) {
        clippaths[[i]] <- newXMLNode("clipPath", attrs=c(id=sprintf("clipPath%s", i)))
        use <- newXMLNode("use", attrs = c("xlink:href"=sprintf("#%s", xmlAttrs(ns[[i]])[["id"]])))
        addChildren(clippaths[[i]], kids=list(use))
    }

And hook it into the existing XML

      defs <- newXMLNode("defs")
    addChildren(defs, kids=clippaths)
    top <- getNodeSet(gridsvg$svg, "//*[@id='gridSVG']")[[1]]
    addChildren(top, kids=list(defs))
}

Then our driver function needs some slight modifications:

  gridsvg.london2 <- function(expr, subsetexpr=TRUE, filename="/tmp/london.svg") {
    london.data <- droplevels(do.call(subset, list(london$msoa.fortified, substitute(subsetexpr))))
    is <- match(levels(london.data$group), london.data$group)
    vals <- eval(substitute(expr), london.data)[is]
    labels <- levels(london.data$zonelabel)[london.data$zonelabel[is]]

Until here, everything is the same, but we can’t use the gridsvg pseudo-graphics device any more, so we need to do graphics device handling ourselves:

      pdf(width=16, height=10)
    print(do.call(ggplot.london, list(substitute(expr), substitute(subsetexpr))))
    grid.garnish("GRID.polygon.*",
                 onmouseover=rep("showTooltip(evt)", length(is)),
                 onmouseout=rep("hideTooltip()", length(is)),
                 zonelabel=labels, value=vals,
                 group=FALSE, grep=TRUE)
    grid.script(filename="tooltip.js")

Now we export the scene to SVG,

      gridsvg <- grid.export()

find the grob containing all the map polygons,

      grobnames <- grid.ls(flatten=TRUE, print=FALSE)$name
    grobid <- grobnames[[grep("GRID.polygon", grobnames)[1]]]

add the clip paths,

      addClipPaths(gridsvg, grobid)
    saveXML(gridsvg$svg, file=filename)

and we’re done!

      dev.off()
}

Then gridsvg.london2(fulltime/(allages-younger-older)) produces:

proportion employed full-time (with polygon clipping)

and I leave whether the graphical output is worth the effort to the beholder’s judgment.

As before, these images contain National Statistics and Ordnance Survey data © Crown copyright and database right 2012.

Syndicated 2014-07-31 17:07:34 (Updated 2014-07-31 17:14:34) from notes

31 Jul 2014 etbe   » (Master)

Links July 2014

Dave Johnson wrote an interesting article for Salon about companies ripping off the tax system by claiming that all their income is produced in low tax countries [1].

Seb Lee-Delisle wrote an insightful article about how to ask to get paid to speak [2]. I should do that.

Daniel Pocock wrote an informative article about the reConServer simple SIP conferencing server [3]. I should try it out, currently most people I want to conference with are using Google Hangouts, but getting away from Google is a good thing.

François Marier wrote an informative post about hardening ssh servers [4].

S. E. Smith wrote an interesting article “I Am Tired of Hearing Programmers Defend Gender Essentialism” [5].

Bert Archer wrote an insightful article about lazy tourism [6]. His initial example of “love locks” breaking bridges was a bit silly (it’s not difficult to cut locks off a bridge) but his general point about lazy/stupid tourism is good.

Daniel Pocock wrote an insightful post about new developments in taxis, the London Taxi protest against Uber, and related changes [7]. His post convinced me that Uber is a good thing and should be supported. I checked the prices and unfortunately Uber is more expensive than normal taxis for my most common journey.

Cory Doctorow wrote an insightful article for The Guardian about the moral issues related to government spying [8].

The Verge has an interesting review of the latest Lytro Lightbox camera [9]. Not nearly ready for me to use, but interesting technology.

Prospect has an informative article by Kathryn Joyce about the Protestant child sex abuse scandal in the US [10]. Billy Graham’s grandson is leading the work to reform churches so that they protect children instead of pedophiles. Prospect also has an article by Kathryn Joyce about Christians home-schooling kids to try and program them to be zealots and how that hurts kids [11].

The Daily Beast has an interesting article about the way that the extreme right wing in the US are trying to kill people, it’s the right wing death panel [12].

Jay Michaelson wrote an informative article for The Daily Beast about right-wing hate groups in the US who promote the extreme homophobic legislation in Russia and other countries [13]. It also connects to the Koch brothers who seem to be associated with most evil. Elias Isquith wrote an insightful article for Salon about the current right-wing obsession with making homophobic discrimination an issue of “religious liberty” will hurt religious people [14]. He also describes how stupid the right-wing extremists are in relation to other issues too.

EconomixComix.com has a really great comic explaining the economics of Social Security in the US [15]. They also have a comic explaining the TPP which is really good [16]. They sell a comic book about economics which I’m sure is worth buying. We need to have comics explaining all technical topics; it’s a good way of conveying concepts. When I was in primary school my parents gave me comic books covering nuclear physics and other science topics which were really good.

Mia McKenzie wrote an insightful article for BlackGirlDangerous.com about dealing with racist white teachers [17]. I think that it would be ideal to have a school dedicated to each minority group with teachers from that group.

Related posts:

  1. Links July 2013 Wayne Mcgregor gave an interesting TED talk about the creative...
  2. Links May 2014 Charmian Gooch gave an interesting TED talk about her efforts...
  3. Links June 2014 Russ Albery wrote an insightful blog post about trust, computer...

Syndicated 2014-07-31 13:38:53 from etbe - Russell Coker

31 Jul 2014 Stevey   » (Master)

Draft message - 31 July 2014

Yesterday I spent a while looking at the Debian code search site, an enormously useful service allowing you to search the code contained in the Debian archives.

The end result was three trivial bug reports:

#756565 - lives

Insecure usage of temporary files.

A CVE-identifier should be requested.

#756566 - libxml-dt-perl

Insecure usage of temporary files.

A CVE-identifier has been requested by Salvatore Bonaccorso, and will be added to my security log once allocated.

#756600 - xcfa

Insecure usage of temporary files.

A CVE-identifier should be requested.

Finding these bugs was a simple matter of using the code-search to look for patterns like "system.*>.*%2Ftmp".

Perhaps tomorrow somebody else would like to have a go at looking for backtick-related operations ("`"), or the usage of popen.
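
If you fancy doing the same kind of hunt on a locally unpacked source tree instead of via the code-search site, a rough sketch (not the exact query syntax the site uses; the patterns here are just approximations of the above ideas) might look like:

  import os, re

  # Decoded version of the code-search pattern, plus the backtick and popen ideas.
  patterns = [re.compile(p) for p in (r'system.*>.*/tmp', r'`[^`]*/tmp[^`]*`', r'popen\s*\(.*?/tmp')]

  for root, _, files in os.walk('.'):
      for name in files:
          path = os.path.join(root, name)
          try:
              with open(path, errors='ignore') as source:
                  for lineno, line in enumerate(source, 1):
                      if any(p.search(line) for p in patterns):
                          print('%s:%d: %s' % (path, lineno, line.rstrip()))
          except OSError:
              pass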

Tomorrow I will personally be swimming in a loch, which is more fun than wading in code..

Syndicated 2014-07-31 12:54:16 from Steve Kemp's Blog

31 Jul 2014 lucasr   » (Master)

The new TwoWayView

What if writing custom view recycling layouts was a lot simpler? This question stuck in my mind since I started writing Android apps a few years ago.

The lack of proper extension hooks in the AbsListView API has been one of my biggest pain points on Android. The community has come up with different layout implementations that were largely based on AbsListView’s code but none of them really solved the framework problem.

So a few months ago, I finally set to work on a new API for TwoWayView that would provide a framework for custom view recycling layouts. I had made some good progress but then Google announced RecyclerView at I/O and everything changed.

At first sight, RecyclerView seemed to be an exact overlap with the new TwoWayView API. After some digging though, it became clear that RecyclerView was a superset of what I was working on. So I decided to embrace RecyclerView and rebuild TwoWayView on top of it.

The new TwoWayView is functional enough now. Time to get some early feedback. This post covers the upcoming API and the general-purpose layout managers that will ship with it.

Creating your own layouts

RecyclerView itself doesn’t actually do much. It implements the fundamental state handling around child views, touch events and adapter changes, then delegates the actual behaviour to separate components—LayoutManager, ItemDecoration, ItemAnimator, etc. This means that you still have to write some non-trivial code to create your own layouts.

LayoutManager is a low-level API. It simply gives you extension points to handle scrolling and layout. For most layouts, the general structure of a LayoutManager implementation is going to be very similar—recycle views out of parent bounds, add new views as the user scrolls, layout scrap list items, etc.

Wouldn’t it be nice if you could implement LayoutManagers with a higher-level API that was more focused on the layout itself? Enter the new TwoWayView API.

TWAbsLayoutManager is a simple API on top of LayoutManager that does all the laborious work for you so that you can focus on how the child views are measured, placed, and detached from the RecyclerView.

To get a better idea of what the API looks like, have a look at these sample layouts: SimpleListLayout is a list layout and GridAndListLayout is a more complex example where the first N items are laid out as a grid and the remaining ones behave like a list. As you can see you only need to override a couple of simple methods to create your own layouts.

Built-in layouts

The new API is pretty nice but I also wanted to create a space for collaboration around general-purpose layout managers. So far, Google has only provided LinearLayoutManager. They might end up releasing a few more layouts later this year but, for now, that is all we got.

layouts

The new TwoWayView ships with a collection of four built-in layouts: List, Grid, Staggered Grid, and Spannable Grid.

These layouts support all RecyclerView features: item animations, decorations, scroll to position, smooth scroll to position, view state saving, etc. They can all be scrolled vertically and horizontally—this is the TwoWayView project after all ;-)

You probably know how the List and Grid layouts work. Staggered Grid arranges items with variable heights or widths into different columns or rows according to its orientation.

Spannable Grid is a grid layout with fixed-size cells that allows items to span multiple columns and rows. You can define the column and row spans as attributes in the child views as shown below.

<FrameLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:colSpan="2"
    app:rowSpan="3">
    ...

Utilities

The new TwoWayView API will ship with a convenience view (TWView) that can take a layoutManager XML attribute that points to a layout manager class.

<org.lucasr.twowayview.TWView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layoutManager="TWListLayoutManager"/>

This way you can leverage the resource system to set the layout manager depending on device features and configuration via styles.

You can also use TWItemClickListener to get ListView-style item (long) click listeners. You can easily plug in support for those in any RecyclerView (see sample).

I’m also planning to create pluggable item decorations for dividers, item spacing, list selectors, and more.


That’s all for now! The API is still in flux and will probably go through a few more iterations. The built-in layouts definitely need more testing.

You can help by filing (and fixing) bugs and giving feedback on the API. Maybe try using the built-in layouts in your apps and see what happens?

I hope TwoWayView becomes a productive collaboration space for RecyclerView extensions and layouts. Contributions are very welcome!

Syndicated 2014-07-31 11:33:17 from Lucas Rocha

31 Jul 2014 etbe   » (Master)

BTRFS Status July 2014

My last BTRFS status report was in April [1], it wasn’t the most positive report with data corruption and system hangs. Hacker News has a brief discussion of BTRFS which includes the statement “Russell Coker’s reports of his experiences with BTRFS give me the screaming heebie-jeebies, no matter how up-beat and positive he stays about it” [2] (that’s one of my favorite comments about my blog).

Since April things have worked better. Linux kernel 3.14 solves the worst problems I had with 3.13 and it’s generally doing everything I want it to do. I now have cron jobs making snapshots as often as I wish (as frequently as every 15 minutes on some systems), automatically removing snapshots (removing 500+ snapshots at once doesn’t hang the system), balancing, and scrubbing. The fact that I can now run a filesystem balance (which is a type of defragment operation for BTRFS that frees some “chunks”) from a cron job and expect the system not to hang means that I haven’t run out of metadata chunk space. I expect that running out of metadata space can still cause filesystem deadlocks given a lack of reports on the BTRFS mailing list of fixes in that regard, but as long as balance works well we can work around that.

My main workstation now has 35 days of uptime and my home server has 90 days of uptime. Also the server that stores my email now has 93 days uptime even though it’s running Linux kernel 3.13.10. I am rather nervous about the server running 3.13.10 because in my experience every kernel before 3.14.1 had BTRFS problems that would cause system hangs. I don’t want a server that’s an hour’s drive away to hang…

The server that runs my email is using kernel 3.13.10 because when I briefly tried a 3.14 kernel it didn’t work reliably with the Xen kernel 4.1 from Debian/Wheezy and I had a choice of using the Xen kernel 4.3 from Debian/Unstable to match the Linux kernel or use an earlier Linux kernel. I have a couple of Xen servers running Debian/Unstable for test purposes which are working well so I may upgrade my mail server to the latest Xen and Linux kernels from Unstable in the near future. But for the moment I’m just not doing many snapshots and never running a filesystem scrub on that server.

Scrubbing

In kernel 3.14 scrub is working reliably for me and I have cron jobs to scrub filesystems on every system running that kernel. So far I’ve never seen it report an error on a system that matters to me but I expect that it will happen eventually.

The paper “An Analysis of Data Corruption in the Storage Stack” from the University of Wisconsin (based on NetApp data) [3] shows that “nearline” disks (IE any disks I can afford) have an incidence of checksum errors (occasions when the disk returns bad data but claims it to be good) of about 0.42%. There are 18 disks running in systems I personally care about (as opposed to systems where I am paid to care) so with a 0.42% probability of a disk experiencing data corruption per year that would give a 7.3% probability of having such corruption on one disk in any year and a greater than 50% chance that it’s already happened over the last 10 years. Of the 18 disks in question 15 are currently running BTRFS. Of the 15 running BTRFS 10 are scrubbed regularly (the other 5 are systems that don’t run 24*7 and the system running kernel 3.13.10).
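
For the curious, the arithmetic behind those percentages is just the usual at-least-one-failure calculation, assuming the corruption events are independent:

  p_disk = 0.0042                              # silent corruption per disk per year (from the paper)
  disks = 18

  p_year = 1 - (1 - p_disk) ** disks           # at least one affected disk in a given year
  p_decade = 1 - (1 - p_year) ** 10            # at least one such year in a decade

  print(round(p_year, 3), round(p_decade, 2))  # ~0.073 and ~0.53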

Newer Kernels

The discussion on the BTRFS mailing list about kernel 3.15 is mostly about hangs. This is correlated with some changes to improve performance so I presume that it has exposed race conditions. Based on those discussions I haven’t felt inclined to run a 3.15 kernel. As the developers already have some good bug reports I don’t think that I could provide any benefit by doing more testing at this time. I think that there would be no benefit to me personally or the Linux community in testing 3.15.

I don’t have a personal interest in RAID-5 or RAID-6. The only systems I run that have more data than will fit on a RAID-1 array of cheap SATA disks are ones that I am paid to run – and they are running ZFS. So the ongoing development of RAID-5 and RAID-6 code isn’t an incentive for me to run newer kernels. Eventually I’ll test out RAID-6 code, but at the moment I don’t think they need more bug reports in this area.

I don’t have a great personal interest in filesystem performance at this time. There are some serious BTRFS performance issues. One problem is that a filesystem balance and subtree removal seem to take excessive amounts of CPU time. Another is that there isn’t much support for balancing IO to multiple devices (in RAID-1 every process has all its read requests sent to one device). For large-scale use of a filesystem these are significant problems. But when you have basic requirements (such as a mail server for dozens of users or a personal workstation with a quad-core CPU and fast SSD storage) it doesn’t make much difference. Currently all of my systems which use BTRFS have storage hardware that exceeds the system performance requirements by such a large margin that nothing other than installing Debian packages can slow the system down. So while there are performance improvements in newer versions of the BTRFS kernel code, that isn’t an incentive for me to upgrade.

It’s just been announced that Debian/Jessie will use Linux 3.16, so I guess I’ll have to test that a bit for the benefit of Debian users. I am concerned that 3.16 won’t be stable enough for typical users at the time that Jessie is released.

Related posts:

  1. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  2. BTRFS Status April 2014 Since my blog post about BTRFS in March [1] not...
  3. Starting with BTRFS Based on my investigation of RAID reliability [1] I have...

Syndicated 2014-07-31 10:45:10 from etbe - Russell Coker

31 Jul 2014 bagder   » (Master)

Me in numbers, today

Number of followers on twitter: 1,302

Number of commits during the last 365 days at github: 686

Number of publicly visible open source commits counted by openhub: 36,769

Number of questions I’ve answered on stackoverflow: 403

Number of connections on LinkedIn: 608

Number of days I’ve committed something in the curl project: 2,869

Number of commits by me, merged into Mozilla Firefox: 9

Number of blog posts on daniel.haxx.se, including this: 734

Number of friends on Facebook: 150

Number of open source projects I’ve contributed to, openhub again: 35

Number of followers on Google+: 557

Number of tweets: 5,491

Number of mails sent to curl mailing lists: 21,989

TOTAL life achievement: 71,602
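
(And yes, the total adds up:)

  print(sum([1302, 686, 36769, 403, 608, 2869, 9, 734, 150, 35, 557, 5491, 21989]))  # 71602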

Syndicated 2014-07-31 09:39:54 from daniel.haxx.se

31 Jul 2014 benad   » (Apprentice)

The Case for Complexity

Like clockwork, there is a point in a programmer's career where one realizes that most programming tools suck: not only do they hinder the programmer's productivity, but, worse, they may hurt the quality of the product for end users. And so there are cries about the absurdity of it all: some posit that complex software development tools must exist because some programmers like complexity more than productivity, while others long for the days when programming was easier.

I find these reactions amusing. They're kind of a mid-life crisis for programmers. Trying to rationalize their careers, most just end up admitting defeat and settling for a professional life of mediocrity, using dumber tools and hoping to avoid the main reason programming can be challenging. I went into that "programmer's existential crisis" in my third year as a programmer, just before deciding to make it a career, but I came out of it with a conclusion that seems seldom shared by my fellow programmers. To some extent this is why I don't really consider myself a programmer but rather a software designer.

The fundamental issue isn't the fact that software is (seemingly) unnecessarily complex, but rather understanding the source of that complexity. Too many programmers assume that programming is based on applied mathematics. Well, it ought to be, but programming as practiced in the industry is quite far from its computer science roots. That deviation isn't due only to programming mistakes, but also to more irrational external constraints and requirements. Even existing bugs become part of the external constraints if they are in things you cannot fix but must "work around".

Those absurdities can come from two directions: Top-down, based on human need and mental models, or Bottom-up, based on faulty mathematical or software design models. Productive and efficient software development tools, by themselves, bring complexity on top of the programming language. Absurd business requirements, including cost-saving measures and dealing with buggy legacy systems, not only bring complexity, but the workarounds they require bring even more absurd code.

Now, you may argue that abstractions make things simpler, and to some extent, they are. But abstractions only tend to mask complexity, and when things break or don't work as expected, that complexity re-surfaces. From the point of view of a typical user, if it's broken, you ask somebody else to fix it or replace it. But being a programmer is being that "somebody else" that takes responsibility into understanding, to some extent, that complexity.

You could argue that software should always be more usable first. And yet, usable software can be far more difficult to implement than software that is more "native" to its computing environment. All those manual pages, the flexible command-line parameters, those adaptive GUIs, pseudo-AIs, Clippy, and so on, bring enormous challenges to the implementation of any software because humans don't think like machines, and vice-versa. As long as users are involved, software cannot be fully "intuitive" for both users and computers at the same time. Computers are not "computing machines", but more sophisticated state machines made to run useful software for users. Gone are the days where room-sized computers just do "math stuff" for banks, where user interaction was limited to numbers and programmers. The moment there were personal computers, people didn't write "math-based software", but rather text-based games with code of dubious quality.

Complexity of software will always increase, because it can. Higher-level programming languages become more and more removed from the hardware execution model. Users keep asking for more features that don't necessarily "fit well", so either you add more buttons to that toolbar, or you create a brand new piece of software with its own interfaces. Even if by some reason computers stopped getting so much faster over time, it wouldn't stop users from asking for "more", and programmers from asking for "productivity".

My realization was that there has to be a balance between always increasing complexity and our ability to understand it. Sure, fifty years ago it would be reasonable to have a single person spend a few years to fully understand a complete computer system, but nowadays we just have to become specialized. Still, specialization is possible because we can understand a higher-level conceptual design of the other components rather than just an inconsistent mash up of absurdity. Design is the solution. Yes, things in software will always get bigger, but we can make it more reasonable to attempt to understand it all if, from afar, it was designed soundly rather than just accidentally "became". With design, complexity becomes a bit smaller and manageable, and even though only the programmers will have to deal with most of that complexity, good design produce qualities that become visible up to the end users. Good design makes tighter "vertical integration" easier since making sense of the whole system is easier.

Ultimately, making a better software product for the end users requires the programmer to take responsibility for the complexity of not only the software's code, but also of its environment. That means using sound design for any new code introduced, and accepting the potential absurdity of the rest. If you can't do that, then you'll never be more than a "code monkey".

Notes

  1. Many programmers tend to assume that their code is logically sound, and that their errors are mostly due to menial mistakes. In my experience, it's the other way around: The buggiest code is produced when code isn't logically sound, and this is what happens most of the time, especially in scripting languages that have weak or implicit typing.
  2. I use the term "complexity" more as the number of module connections than the average of module coupling. I find "complexity as a sum" more intuitive from the point of view of somebody that has to be aware of the complete system: Adding an abstraction layer still adds a new integration point between the old and new code, adding more things that could break. This is why I normally consider programming tools added complexity, even though their code completion and generation can make the programmers more productive.

Syndicated 2014-07-31 02:14:53 from Benad's Blog
